Browser security

2007/03/06

I have just finished listening to back-episode #38 of Security Now!, where Steve Gibson describes his approach to browsing the Web securely, without antivirus and with Internet Explorer. The idea in a nutshell is: use properly locked-down IE zones. Steve has tightened the security settings of the default Internet zone to the maximum: no scripting, no cookies, etc. This of course makes many sites unusable, because an increasing number of sites requires JavaScript to be enabled – or else the game is over.

For the sites that do need scripting, Steve recommends adding them to the list of trusted sites EXPLICITLY, one by one, site by site. This way, only the sites you actually use and are interested in get any chance of running code within your browser.
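As a side note, the per-site trust is stored as plain registry entries under IE’s ZoneMap key, so the tedium could in principle be scripted. Here is a minimal, untested Python sketch of the idea – the domain is a made-up example, and zone number 2 is the Trusted sites zone:

# Sketch: add a domain to IE's Trusted sites zone (zone 2) by writing the
# per-user ZoneMap registry entries. The domain below is a made-up example.
import winreg

def trust_site(domain: str, scheme: str = "https") -> None:
    path = (r"Software\Microsoft\Windows\CurrentVersion"
            r"\Internet Settings\ZoneMap\Domains" + "\\" + domain)
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, path) as key:
        # Value name is the URL scheme; DWORD 2 maps it to the Trusted sites zone.
        winreg.SetValueEx(key, scheme, 0, winreg.REG_DWORD, 2)

trust_site("example.com")  # hypothetical site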

Steve’s approach is a very good idea, but it has two weak points. The first is that it is an Internet Explorer and Windows only technique. True enough – the combination of Windows and IE defines the most virus/malware-prone group of the Net population – but many exploits impact Firefox users as well, and in Firefox the zone technique does not work. The second problem is that your list of trusted sites is machine specific. If you are using multiple computers, you will have to repeat the process of granting trust to your sites on each of them. I am afraid that few users will have the stamina to do it … Even with a single computer, it requires the patience of a saint.

As many times before: when there is a trade-off between security and convenience, guess which one wins?


Nice resource on TCP/IP and Networking

2007/02/03

As of today, I am at podcast #26 of Security Now!, and Steve and Leo have started their “foundation series” on how the Internet works. It is a really great topic, nicely and very enjoyably presented. Given the limitations of the delivery format (voice, with no possibility of adding any drawings), there is of course a limit to the amount of detail that can be covered. Being a curious creature, I wanted to dig a little bit deeper into the “excruciating details” of TCP/IP and started searching for a good book.

Today at Chapters, I scanned a few books and here is the result: what I thought would be the best authoritative guide – TCP/IP Illustrated – did not work for me. It is supposedly “the” book on TCP/IP, authored by W. Richard Stevens of Advanced Programming in the UNIX Environment fame (which is a great book), but the writing style is fairly dry and it often feels like you cannot see the forest for the trees.

Fortunately the second book I picked – The TCP/IP Guide – was IMHO much better written. And even better, the book’s content is available on-line as well: you can read it for free on-line or purchase the PDF.

Hooray, into the alphabet soup: ICMP, ARP, RARP, IPSec, CIDR, RIP, OSPF, BGP, OMG … oops, that last one is not a protocol :-)


The joys of open source NAS

2007/01/21

Well, as it turns out, I was prematurely ecstatic about the new open source based NAS server. While the hardware works great, I have not had much luck so far with setting up the software on top of it.

My original idea was to use FreeNAS, let it boot from and install onto a USB drive, and use all 4 SATA disks in a RAID 5 set up by the BIOS. It did not work. FreeNAS does not see the RAID 5 volume created by the BIOS and keeps referring to 4 separate SATA drives. Neither do Openfiler and a few other distributions. And after the USB key was initialized, the system did not boot at all, stopping with the error message ‘No Ufs’.

Some research later, I found out that drivers for the chipset need to be installed in order for the hardware RAID to be recognized. In the process of searching I’ve learned more about RAIDs than I ever wanted to know :-) and found out that the hardware RAID I have is in reality a half-software solution that will not work without loaded drivers and help from the OS. Not much of a surprise – one cannot expect a $110 mainboard to be everything for everybody.
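To illustrate what “half software” means: under Linux the BIOS RAID metadata has to be picked up by a tool such as dmraid before the array shows up as a single device. A rough, untested sketch of the probing involved (Python only as a thin wrapper around the dmraid commands; dmraid must be installed and run as root):

# Sketch: probe for BIOS ("fake") RAID metadata with dmraid.
import subprocess

# List the RAID sets found in the on-disk BIOS metadata.
subprocess.run(["dmraid", "-r"], check=True)

# Activate all discovered sets so they appear under /dev/mapper/.
subprocess.run(["dmraid", "-ay"], check=True)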

I did a few experiments using the 4 SATA disks as separate volumes and set up software RAID 5 in FreeNAS. It worked OK, so as long as the booting problem is resolved, this would almost be a workable solution. For now, while I am experimenting with the system, it boots from an additional 40 GB IDE drive. The “almost” part is a bad surprise in FreeNAS’s capabilities. It allows you to create users and even groups (dunno why), but the access control is all or nothing. For a volume, you can set the level of authentication required – anonymous, local user or domain – but you cannot define any restrictions on access. For example, you cannot have read-only access. This makes FreeNAS completely unsuitable for what I need – I must be able to export read-only shares. To do that, I will very likely have to use a normal Linux distribution (preferably one with a Web-based admin interface) and properly configure the servers and security. It should not be terribly hard; the trouble is that I know too little about all that Linux hard-disk stuff. On the other hand, it is a great learning opportunity.
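For the record, the read-only part itself should not be hard once a normal distribution is in place – it is essentially one flag per share in the NFS or Samba configuration. A sketch of what I think the exports would look like (share name, path and subnet are made-up examples; written as Python that just prints the snippets rather than touching real config files):

# Sketch: read-only share definitions for NFS and Samba.
# Paths, share name and subnet are made-up examples.

# NFS: one line in /etc/exports; "ro" makes the export read-only.
nfs_export = "/srv/archive  192.168.1.0/24(ro,sync)"

# Samba: a share section in smb.conf; "read only = yes" does the same job.
smb_share = """
[archive]
   path = /srv/archive
   read only = yes
   guest ok = no
"""

print(nfs_export)
print(smb_share)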

As for RAID, there are two possible ways ahead. Option one is to get the BIOS RAID working. This would require finding the proper drivers for the Linux kernel version I will be using and learning how to add a driver during Linux installation. The other is to use the software RAID provided by several distributions – e.g. by Openfiler. It may not be as bad as it sounds, because using software RAID inside a Linux distribution is exactly what the cheaper NAS devices do. It does not even have to mean much worse performance: the main reason these lower-end NAS devices are slow in RAID configurations is a lack of CPU power and memory – typically they have some ARM processor and 256 MB RAM. My box has a full Athlon 64 and 1 GB RAM, which is way more powerful.
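If I go the software RAID route, the array creation itself looks, as far as I can tell, like a single mdadm command. A minimal, untested sketch (Python only as a thin wrapper; device names are examples, and I am assuming mdadm, the standard Linux software RAID tool, is installed):

# Sketch: build a 4-member software RAID 5 array with mdadm.
# Device names are examples; must be run as root.
import subprocess

sata_partitions = ["/dev/sda1", "/dev/sdb1", "/dev/sdc1", "/dev/sdd1"]

# Create /dev/md0 as a RAID 5 set over the four SATA partitions.
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=5",
     "--raid-devices=4", *sata_partitions],
    check=True,
)

# Check the initial sync progress.
subprocess.run(["cat", "/proc/mdstat"], check=True)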

The tricky part is how to divide the 4 disks into partitions so that I can place /boot and swap somewhere and keep the root partition on a RAID-ed disk. It can be tricky, because the partitions that participate in the RAID should all have the same size, and you still need to place /boot, swap and the root partition somewhere. Because they are so important, it would be great if they could sit on the RAID, but of course that is a chicken-and-egg problem, because the RAID is created only after Linux boots.

I see two options (and you guys who actually do understand this stuff, feel free to correct me if I am completely wrong):

a) Keep the IDE drive (which will hold the MBR, /boot, swap and the root file system) for booting and the Linux installation, and create one partition per SATA disk, all combined into one large RAID 5. This way, all the space on the SATA drives is utilized. The IDE drive is a single point of failure, but if it fails, it should be quite easy to boot some LiveCD and reconfigure access to the data, because software RAID support is built into newer kernels and should work the same regardless of distribution. Most of the IDE disk space will also be available – the Linux distribution will comfortably fit into 2-4 GB, and the rest of the 80 GB (the smallest disk you can buy) can be exported as quick, non-RAID working space (staging area or temp).

b) Partition the SATA disks so that the boot, swap and system partitions are on the first disk. The size of the rest of that disk determines the primary RAID volume size. On the other disks, partitions equal in size to the combined swap and system space are joined into a second RAID 5 volume. For example:

hda1 – 100 MB = boot, hda2 – 2 GB = swap, hda3 – 4 GB = system, hda4 – 3xx GB = space for RAID 5
hdb1 – 100 MB = (copy of boot), hdb2 – 6 GB = space for vol2 RAID, hdb3 – 3xx GB = space for RAID 5
hdc1 – 100 MB = (copy of boot), hdc2 – 6 GB = space for vol2 RAID, hdc3 – 3xx GB = space for RAID 5
hdd1 – 100 MB = (copy of boot), hdd2 – 6 GB = space for vol2 RAID, hdd3 – 3xx GB = space for RAID 5

After that, there will be two RAID 5 volumes: one created from hda4, hdb3, hdc3 and hdd3 – which all have the same capacity – and one created from hdb2, hdc2 and hdd2. The usable capacities will be 3 x 3xx GB (about 900 GB) for the big one and about 12 GB for the smaller one. If any of the disks hdb, hdc or hdd fails, nothing happens, and after a replacement the data will be rebuilt. If disk hda fails, then in order to recover, the system must be started from a LiveCD and reinstalled on hda (with the exact same partitions and RAID table); after booting, the data will be restored. Kind of complicated, but maybe doable.
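To double-check the arithmetic: RAID 5 keeps one member’s worth of space for parity, so the usable capacity is (members - 1) x member size. A quick sanity check in Python (300 GB is just a placeholder for the real “3xx GB” partitions):

# Usable capacity of the two RAID 5 volumes in option b).
# 300 GB stands in for the actual "3xx GB" partition size.

def raid5_usable_gb(members: int, member_gb: int) -> int:
    # RAID 5 usable space: one member's worth goes to parity.
    return (members - 1) * member_gb

big = raid5_usable_gb(4, 300)   # hda4 + hdb3 + hdc3 + hdd3
small = raid5_usable_gb(3, 6)   # hdb2 + hdc2 + hdd2

print(f"big volume:   ~{big} GB")    # ~900 GB
print(f"small volume: ~{small} GB")  # 12 GB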

There is always plan B, of course: stay with the BIOS RAID and use Windows 2003. It would have exactly the same driver issues as Linux had, but I know how to install this one (I have done it when we were setting up the development lab), and the machine is powerful enough to run it. What would be nice is that both the OS and the data would sit on a RAID-ed volume. What I do not like about the Windows idea is the necessity of using a GUI to do anything – Remote Desktop is pretty much the only practical way to administer the system. And I would not learn much new either, I think …

Yep, more thinking and planning required. I will shelve the RAID project until next weekend. I have mixed feelings about all this. On one hand, it is great to be discovering new things and learning, but it takes so much time: after Yan built the box, we kept trying to get it working until 3 AM … Why can’t things “just work” as in the Mac world? If budget were not a problem, here is the perfect RAID solution :-).