Vista security and starting from scratch …


I have read somewhere that quoting someone is a safe way to say something potentially dangerous or controversial without being legally exposed :-). So today I am going to quote an authority on computer security – Steve Gibson – on some very interesting Vista-related security information.

Steve does an excellent weekly podcast, “Security Now!”, together with Leo Laporte. It is so good that I decided to download as many back episodes as I could find on their site and started listening to them in chronological order, back from June 2006. It was worth every minute. I am now somewhere in August 2006, and Steve Gibson (== SG) has covered interesting tricks with your hosts file, very nicely explained the usefulness of the netstat program, and started a multi-episode sequence on virtualization and virtual machines. Thanks to many years of experience, Steve always gives a historical perspective on every topic he covers.
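The classic hosts-file trick in this spirit (the specific entries below are my own made-up examples, not from the episode) is blocking unwanted servers by pointing their names at your own machine:

```
# /etc/hosts on Unix; %SystemRoot%\System32\drivers\etc\hosts on Windows.
# Any hostname mapped to resolves to the local machine,
# so requests for it never leave your computer.       localhost   ads.example.com       # hypothetical ad server, blocked   tracker.example.net   # hypothetical tracker, blocked
```

The browser consults the hosts file before DNS, so the blocked names simply stop resolving to the outside world.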

What I want to mention (and quote) is Vista security and Microsoft’s claims that

a) Vista is the most secure Windows system ever written and much more secure than previous versions of Windows

b) the Vista code was written from scratch, which is supposed to support and contribute to a)

As SG said, security is something that is earned, not claimed. No company can say that system XYZ is the most secure – it can just release it and let reality – how the system stands up in the hard battles of attacks – decide how secure or insecure the system is. For that reason, according to SG, saying anything about Vista’s security is at minimum premature – and nothing really new, because similar claims were made with the release of Windows 2000 and Windows XP. Reality did not quite support those claims – XP was, at least until SP2, a “land of worms”.

Microsoft claims to have a fresh new TCP/IP stack in Vista, written from scratch. Unfortunately – according to SG – this time it seems to be true. The reason for “unfortunately” is the security implication of having a new, untested “virgin stack”. SG offers a nice historical perspective on issues in the Windows networking implementation – he mentions famous problems like the machine freeze caused by a packet whose spoofed source address equals the destination address, SYN flood attacks, the “ping of death”, and how all of these were found and eventually fixed. Now, with the fresh new stack, at least one of these problems – one fixed long ago back in the times of Windows 95 – has re-appeared in Vista.

The episode also gives a very interesting peek into the history of the network stack in Windows. As SG points out, in Windows 2000 Microsoft suddenly got a very good, solid and mature networking implementation – a huge improvement in stability and performance over previous versions of Windows.

A TCP/IP stack is a very complex piece of software, and traditionally the most solid, most performant and certainly most secure implementations were found in open source Unix variants like FreeBSD and OpenBSD. Network experts use special tools to “fingerprint” an implementation – by sending specially crafted packets and analyzing the responses, they can tell one implementation apart from another without having access to the stack’s source code. And strangely enough – according to SG – the greatly improved Windows 2000 stack showed amazing similarity in responses, quirks and “fingerprint” to the BSD implementation :-). Draw your own conclusion.
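The fingerprinting idea can be sketched in a few lines: each stack answers probes with characteristic values (initial TTL, default window size, option ordering, …), and matching observed quirks against a signature table identifies the implementation. The signatures below are simplified illustrations of my own, not real nmap/p0f data:

```python
# Toy TCP/IP stack fingerprinting: match observed probe replies
# against a table of known per-implementation quirks.
SIGNATURES = {
    "FreeBSD-like": {"initial_ttl": 64, "window": 65535, "df_bit": True},
    "Win9x-like":   {"initial_ttl": 32, "window": 8192,  "df_bit": False},
}

def fingerprint(observed):
    """Return the signature whose quirks best match the observed replies."""
    best, best_score = "unknown", 0
    for name, sig in SIGNATURES.items():
        score = sum(observed.get(k) == v for k, v in sig.items())
        if score > best_score:
            best, best_score = name, score
    return best

reply = {"initial_ttl": 64, "window": 65535, "df_bit": True}
print(fingerprint(reply))  # FreeBSD-like
```

If two supposedly independent stacks keep matching the same signature, quirk for quirk, that tells you something.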

I am pretty sure that the decision to rewrite from scratch was carefully considered at Microsoft. There certainly were many reasons for it: new features, support for IPv6, and so on. Starting anew often makes sense. What is wrong is the presentation and the marketing message that “it is much better because it is new”.

As this article on Joel on Software puts it:

“All new source code! As if source code rusted. The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive. Au contraire, baby! Is software supposed to be like an old Dodge Dart, that rusts just sitting in the garage? Is software like a teddy bear that’s kind of gross if it’s not made out of all new material?”

We will probably never know the real reason for the TCP/IP stack rewrite, nor how to interpret the “amazingly similar” behaviour of the Windows 2000 stack and the BSD implementation. The recent complicated intellectual property deals between Novell and Microsoft, and the rumors of possible legal battles over IP with other Linux vendors, do not make it any clearer either.

What is almost sure is that, despite the claims, we can expect old new bugs appearing in Vista networking, and it will take quite some time until security experts such as SG can test Vista enough to be comfortable with the degree of security it really offers. I am in no hurry to upgrade – running Vista in a virtual machine sounds like the best approach right now, at least until Service Pack 2 :-).

FEOTD: Linkify


Sometimes you see a Web page whose creator was lazy and did not wrap a displayed URL into <A href=”…”></A> tags. In short, you see the URL as plain text rather than as a clickable link (btw, if you have Linkify installed, you will see it as a link anyway). As a result, you can see the link on the page, but you cannot really click on it. In order to follow the link, you must select the link text, copy and paste it into the address bar and press Enter. With Linkify installed, you will always see all textual links as real links and can just click on a URL as if the page contained the proper <A> tags.

Linkification shows its presence by a small icon with the letter L in the status bar – the tooltip on this icon shows how many links were converted. A right click on the icon lets you set options – for example, which protocols should be recognized.
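What the extension does can be sketched in a few lines: scan the text for URL-shaped strings and wrap each in an anchor tag. This is a simplified illustration of the idea, not the extension’s actual code – the real thing handles many more protocols and edge cases:

```python
import re

# Very rough URL shape: http(s) scheme followed by non-whitespace.
URL_PATTERN = re.compile(r'\bhttps?://[^\s<>"]+')

def linkify(text):
    """Wrap every bare http(s) URL in an <a href="..."> tag."""
    return URL_PATTERN.sub(
        lambda m: '<a href="{0}">{0}</a>'.format(m.group(0)), text)

print(linkify("See http://example.com for details"))
# See <a href="http://example.com">http://example.com</a> for details
```

The hard part in practice is deciding where a URL ends (trailing punctuation, parentheses, quotes), which is exactly what the options dialog lets you tune.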

Podcast du jour


This is for Gabo, Derek, Steve and others who talked to me about podcasts and about which podcasts are worth listening to.

Since I bought the Nano2, I spend lots of time listening to podcasts. Pretty much all driving time, walking time and time in the gym is spent listening to a wide selection of topics and to the people producing them.

Here is my current list plus some additional recommendations:

1) TWIT and all around Leo Laporte.

This is a classic. Go to the TWiT site and choose from over 10 various podcasts. I am listening to This Week in Tech (Leo with John C. Dvorak and Wil Harris is great), Windows Weekly with Paul Thurrott, and definitely Security Now with Steve Gibson. Steve has an amazing talent for explaining complicated things in a very understandable way. When I listen to this podcast, there are only two options: I either learn something new I did not know before, or I find a better way to explain something I knew before to somebody who does not.

For Mac users, you may want to try iLifeZone and the videocast MacBreak. This Week in Media is not bad either, but I find the commercial breaks in this podcast pretty annoying. To subscribe to any of them, click on the iTunes tab on the website.
2) NOVA Science Now

Less “geekish” than TWIT, but very interesting, targeting science in a wider range. I highly recommend the Twin Prime Conjecture episode.

3) BBC Podcasts

I always liked the BBC for two reasons: their balanced, mostly objective news presentation (meaning both a balanced point of view as well as an understanding that North American affairs do not equal 90% of the newsworthy world news) and the really cool British accent. The web newscast offers you the first benefit, but to enjoy the second, you need the podcast. From the BBC website you can subscribe to a variety of podcasts – or listen in the browser.

These three sources give me much more content than I have time available. But if you are looking for more variety, here are a few more tips: try the 43 Folders podcast, the Steve Pavlina podcast for personal development, or Zencast if you are curious about meditation and Zen Buddhism. Or just open iTunes, click on the iTunes Store (requires an account at the iTunes Store), click on Podcasts and select from the hundreds of free podcasts available. Happy listening.

FEOTD: Session Saver


Today’s extension is about session management. A great thing about Firefox is its ability to use tabs and have many webpages open at the same time. This changes people’s browsing habits – rather than following one path of links and diving deep inside a Website, you explore many paths concurrently, jumping from one to another, adding more and more tabs at the virtual crossroads. As in real life, bad things can happen when you travel – in this case, your browser may crash, leaving you in the middle of nowhere with no trace of how you got there and what you have already visited. If you were using the Session Saver extension, this would be prevented – after a Firefox restart you would end up with the exact collection of open tabs you had before the accident.

Often you may want to close and reopen the browser deliberately: as you surf, memory consumption goes up and up, because the browser caches the visited pages to make back arrow operations faster. When your memory consumption starts approaching the fiscal deficit of a small European country, you may consider shutting the browser down and restarting it to get rid of some cache. With Session Saver on, you can do that without losing context.

Using Session Saver you can simply close the session and continue browsing next time at the very same position where you stopped. You can even have multiple named sessions saved and transfer a session (the saved session file) between machines, using it as a very complicated mass-bookmarking tool.

Do not get discouraged by the very low version number: 0.2. Despite it, it works very well – much better than many software packages with version numbers 7 and higher…

Weekend nuggets


Technically, it was Sunday, so rather than working or reading some serious stuff, I decided to take a long walk (to catch up on the lots of new podcasts downloaded yesterday) and to surf the net just for fun. Here are some of the interesting discoveries:

There was a rumor on the net that an Indian student in Kerala discovered a way to store over 450 GB of data on a sheet of paper. They say

instead of using zeroes and ones for computing, he used geometric shapes such as circles, squares and triangles for computing which combine with various colors and preserve the data in images

This discovery would offer fantastic possibilities and could mean that CDs, DVDs and external hard disks will soon be obsolete. The only problem is that it is completely fake. Let’s ignore for a moment the quote above – which, btw, abandons a basic principle of digital information storage: using just the two values 0 and 1. Anybody with some understanding of electronics can explain why using more than two levels per element is a bad idea and does not help…

The information capacity of a sheet of paper depends on its size and print density. Let’s do a ballpark estimate of what this capacity can be: a sheet of paper of size 8.5 x 11 inches has an area of 93.5 square inches, or roughly 100 square inches. If the technology used to put information on the paper is printing, with normal paper and normal printers we can assume a realistic resolution of about 1200 dpi (dots per inch). At one bit per dot, that is 1200 x 1200 = about 1.44 megabits per square inch – or about 144 megabits for the whole sheet. That is less than 20 megabytes – a number several orders of magnitude smaller than the claimed 450 GB.

All this is without any consideration of control information, division into manageable units, error correction codes etc., and assumes that we would be able to read back exactly what was printed – with some sort of perfect scanner that does not miss a bit.

In order to achieve a capacity of 450 GB on a sheet of paper, we would have to store 4.5 GB per square inch. At one bit per dot, that would require a resolution of about 190,000 dpi – seriously above the capability of any printer or scanner :-). And if we instead kept the realistic resolution of 1200 dpi and stored more than plain zeroes and ones in each dot, every single dot would have to carry about 25,000 bits of information – that is, we would have to reliably print and read back 2^25000 distinguishable values in a single dot…
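The back-of-the-envelope arithmetic above is easy to reproduce:

```python
# Ballpark check of the "450 GB on a sheet of paper" claim.
dpi = 1200                          # realistic print resolution
bits_per_sq_in = dpi * dpi          # 1.44 million dots, 1 bit each
sheet_sq_in = 100                   # 8.5 x 11 in = 93.5, rounded up

realistic_bytes = sheet_sq_in * bits_per_sq_in / 8
print(realistic_bytes / 1e6)        # 18.0 -> about 18 MB per sheet

claimed_bits = 450e9 * 8            # the claimed 450 GB, in bits
bits_per_dot = claimed_bits / (sheet_sq_in * bits_per_sq_in)
print(bits_per_dot)                 # 25000.0 bits in every single dot
```

So the claim is off by a factor of about 25,000 – roughly four and a half orders of magnitude.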

I really do not understand why large newspapers do not either avoid publishing material from areas they have no clue about, or at least put some minimal review process in place. Many people came to the same conclusion as me – e.g. this blog – and one of my favorite internet celebrities, John C. Dvorak, did not miss the chance to bust the scam either.

But enough about the fakes. Let’s look at more real content: this Google video takes a very interesting sci-fi idea – going back in time – into software development and describes a debugger implementation that does exactly that.

I also visited a few pages that have really nothing to do with computers or technology – and found out that the type of alcohol you drink does make a difference on the second day. That was not a good discovery, though, because my favorite – red wine – ended up second worst. Ouch. If you ever wanted to create a human face from components, see this web page (flash required). And finally, there is a site to visit if you feel you need a fresh dose of (de)motivation :-).

Are blogs a threat to professional journalists?


Thanks to the 6-hour time difference, I usually read tomorrow’s edition of Slovak and Czech newspapers on the Net one day ahead. Most of them switch to the new edition at midnight, which is 6 PM our time. I have been doing this for several years already. What is different between now and 2-3 years ago is what I read. While before it was mostly articles written by journalists and news industry professionals, now I read mostly blogs. On the SME blog there are a few hundred, maybe a few thousand bloggers, and the content they produce is usually much more interesting and readable than the content of the official newspaper.

Interestingly enough, it is not only newspapers. If I sum up my information sources from the Net, maybe 70% of them are “user created content” – sites like Wikipedia, social bookmarking sites and aggregators, programmers’ blogs and sites like SourceForge, podcasts – all of them share one common feature: the content is not created by professionals who are paid to create it.

How far will this continue? Does this trend present a threat to professional journalists? How far can these sites grow? Will they be able to keep their quality and focus? Is this just a new fad that will disappear – or will it change the landscape of the news industry the same way the Internet changed software distribution or the music industry? What will motivate people to buy the paper version of a newspaper when you can get much more than you can possibly consume for free online? How many sites can survive on ad income alone – without charging a subscription fee for access? Only time will tell…

FEOTD: DownThemAll


In previous FEOTD entries I have mentioned excellent tools that help you maintain the URLs of the pages you want to remember and, in case the URL is not enough, save the content of the Web page for offline access or archival. But what if the web page itself is not really what you are interested in – you just want to download all the files attached to the page, and there are a *lot* of them?

If this is the problem, do not search any further – this plugin is exactly what you need. On activation it presents the links from the current page, lets you select which types of files or even which individual links to download, and creates a download queue. You can specify the download directory and a template for renaming files (this is very handy when several linked files have the same name – e.g. images linked from different URLs).

The downloader is a good netizen – it inserts delays between downloads to give the web server on the other side time to breathe and serve other requests.
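The polite-delay behaviour is simple to picture; here is a sketch of the idea (my own illustration, not DownThemAll’s actual code):

```python
import time

def download_all(urls, fetch, delay_seconds=1.0):
    """Fetch each URL in turn, pausing between requests so the
    server gets time to breathe and serve other clients."""
    results = []
    for i, url in enumerate(urls):
        if i > 0:
            time.sleep(delay_seconds)   # be a good netizen
        results.append(fetch(url))
    return results

# Example with a dummy fetcher standing in for an HTTP request:
pages = download_all(["a", "b", "c"], fetch=lambda u: u.upper(),
                     delay_seconds=0.01)
print(pages)  # ['A', 'B', 'C']
```

Real download managers refine this with per-server queues and a cap on concurrent connections, but the core courtesy is just a pause between requests.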

The URL is

Happy downloading !