.NET on Linux faster than on Windows? Hmm

2007/06/27

An interesting article on JavaLobby caught my eye today: Do .NET Applications Run Better on Java?

Normally, knowing the not exactly impartial focus of Java-centric sites such as JavaLobby or theserverside.com, one should be careful when reading about how much Java outperforms .NET. The bias works the other way too – just look at Theserverside.net or another .NET-centric site to see how superior C# is :-). With that in mind, I looked at the technical report.

The report was produced by Mainsoft, the company behind the cross-compiler product Mainsoft for Java EE. The cross-compilation means that the C# or VB.NET code is first compiled into CLR bytecode using the standard Microsoft tools and then transformed into Java bytecode, using the CLR bytecode as input. The study was based on a fairly large project – 260'000 lines of code. The published results show that the translated code, running on a Java VM and the Websphere platform, outperformed the .NET stack on both the Windows and the Linux platform.
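
To make the pipeline concrete, here is a hedged sketch. The first step uses the standard Microsoft C# compiler; the second command name is purely made up, standing in for Mainsoft's converter, whose actual command line I do not know:

# step 1: compile C# into a CLR assembly with the standard Microsoft compiler
csc /target:library /out:OrderService.dll OrderService.cs
# step 2: translate the CLR bytecode in the assembly into Java bytecode
# ("clr2java" is a hypothetical name standing in for Mainsoft's tool)
clr2java --input OrderService.dll --output OrderService.jar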

So far so good. I have no plan to question the results of the test. One could argue that because the evaluation was done not by an independent third party but by the authors of the cross-compiler, the result had to come out the way it did – simply because if the measurements had shown .NET performing better, no report would have been published 🙂

First of all, "faster" does not really mean much faster. The measured speed increase is 8% in throughput and very much the same in requests per second. An 8% increase is much too small to justify doing anything major with the application, certainly not re-platforming …

Second, a comparison based on a single port of one application proves absolutely nothing about the results of repeating the same process on another application, let alone on every application. It can just as easily be an exception as an indication of reality. I am pretty sure that, given the chance to respond, Microsoft or some other party interested in the opposite outcome could find a C# application that would perform worse after conversion.

The more interesting question is why you would want to do this at all – replace Win2003 + the .NET CLR with some other operating system (Windows/Linux/something else) plus Java plus Websphere. Clearly, performance cannot be the reason – at least not based on these results.

Price is not a good reason either. From a cost-saving perspective, cross-compiling a .NET application to run on Java under Windows makes no sense, because the .NET runtime is part of the Win2003 license and the cost of that license is there in both cases. This leaves using Linux (or some other free alternative) as the platform. True, Linux is free – but support is not, and neither is labor. In real life, the initial license costs are small compared to the accumulated cost of supporting an application in production – and professional support for Windows or Linux is comparably priced. Besides, I bet that the savings gained from not paying for a Windows license will not cover the cost of a Websphere license plus the Mainsoft for Java EE Enterprise Edition license. True, you could use a free J2EE server such as Tomcat, Glassfish or JBoss with the free Grasshopper version of the cross-compiler – but you may not get the same performance numbers. I like Tomcat and use it all the time: it is nice, flexible and easy to configure – but not the fastest servlet container out there (there must be a reason why Mainsoft picked Websphere together with their own enterprise version, after all) …

What is the conclusion? The article above and the approach it describes can be a lifesaver if you have lots of .NET code and *must*, for some real reason, switch platforms. The reason may be technical – or not – just consider the magic Google did with Linux. The report does hint at one possibly good reason: moving your application from PCs to a really big machine – a multiprocessor (multi meaning more than 16 these days, when desktop machines are starting to get quad-cores ;-)) running Unix, or a mainframe. The report shows that an AIX system based on Power5+ with 4 CPUs did ~3400 requests per second whereas the PC-based one did 2335 – roughly 45% more. This would be interesting if the comparison were fair – but it was not: the AIX machine had 32 GB of RAM whereas the PC (with Linux or Windows) had 2 GB, and you can imagine the price difference in the hardware.

But if there is no really compelling business reason for switching platforms, sticking with Windows when you want to run a .NET application may save you a lot of work – and most likely some money as well.


Avalon – reloaded …

2007/03/20

Now this is something really interesting: as found on the Adobe Labs site, their technology codenamed Apollo is approaching Alpha status. What Apollo is – in a nutshell – is another virtual machine, similar to the Java runtime or the .Net framework, with a few twists: it is multiplatform (like Java) and multi-language (like .Net) at the same time. Before the flame wars start – I am aware that the JVM is (more or less) capable of supporting languages beyond Java, and also that .Net is (more or less) capable of running on non-Windows platforms (e.g. the Mono project), but that is not the point. The point is what is different about Apollo compared to the JVM or the CLR.

The first difference is the developer skill-set. Adobe is trying to leverage the experience of Web application developers and allow the use of traditionally Web-oriented technologies – HTML, Flash, Javascript and PDF – in the context of desktop applications. The other is that Apollo is designed around the notion of being "occasionally connected", in other words online/offline applications. It supports well the regime where you work offline with a local copy of the data and reconnect/synchronize with the master copy online, providing both access to local resources (like a traditional desktop application) and a rich, asynchronous, XML-capable communication library (like a Web 2.0 application running in a browser on the client).

Using Javascript/HTML for desktop-ish apps is not really an innovation. If you look at how Firefox extensions are created, or at the Widgets/Gadgets in Vista or OS X, you will see something very similar. The same idea was also implemented in Avalon – since renamed to Windows Presentation Foundation – which uses XML to define the user interface and "scripting" to determine the logic. In WPF, you use the .Net languages for the "scripting" (Javascript being one of them) and you need a .Net 3.0 capable platform to run it (currently Windows XP SP2, Windows 2003 and Vista, unless I am mistaken). Even with a similar language (Javascript), programming WPF is quite different from Web application programming and requires different skills. Allowing the use of Web app development skills and offering a variety of Flash/Html/Javascript/Pdf combinations may be very appealing to somebody who needs to create a desktop-like application without learning WPF. The platform independence is an added bonus, and could finally be a solution to a problem that Java never really addressed well: it has been possible to create rich-client, Web-startable applications for several years, and yet they have not become mainstream. Possibly because of the complexity of creating Swing-UI applications in the first place?

Compared to Firefox, the important point is that Apollo departs from the browser while keeping the Web capabilities – such as rendering Web pages or creating mixed apps. Eliminating the browser is important from a security point of view. An installed runtime can give an Apollo application access to local machine resources, such as local files, without compromising security – as would be the case with browser-based applications. Access to local resources, together with a modern approach to remote connectivity, is very interesting. The browsers are very much Web 1.0, with a request/response-shaped vision of the world, and adding asynchronous capability in AJAX was one grandiose hack … Another good reason for getting rid of the browser is the simplicity of supporting one version of a runtime, versus making sure that your great new Web 2.0 app works with the wildly different Javascript/DOM capabilities of Internet Explorer 5, 6 and 7, Firefox 1.0, Safari, Opera, and so on …

The demonstration videos on Lynda.com show a few interesting capabilities of the new application types – follow the 'Sample Apollo applications' link, and also here.

It is still Alpha, so it is too early to get excited: we have no data about performance, resource requirements or real-world application development experience. On the positive side, both the runtime and the SDK should be free. And it is always good to have more options available 🙂


NAS Odyssey: Up and running

2007/02/12

After lots of attempts (see here, here and here), I have finally resolved my disk space problems and the RAID-5 NAS server has been up and running for about two weeks now. I stayed with Fedora 6, successfully installed Samba, configured the shares and copied all the family JPEGs, videos, MP3s etc. onto the huge 907 GB RAID5 share. The old NSLU2 is still dead – I have not had any time to try to re-flash it, but all the content of its disk was readable without any issues by the new server (the NSLU2 was using an ext3 filesystem), so I did not even have to try to use the backups.
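
For anyone in the same situation, getting at the data amounted to plugging the disk into the new server and mounting it – a sketch, where the device name is an assumption and will differ per machine:

# mount the old NSLU2 data disk (ext3) read-only, just to be safe
mkdir -p /mnt/nslu2
mount -t ext3 -o ro /dev/sdb1 /mnt/nslu2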

Most things in Fedora can be configured graphically. The only case where I had to go to the shell was for the final configuration changes in Samba. As it turned out, the default umask allowed search on directories (rwxr-xr-x), but no read at all on files (rwx------). This led to strange behaviour where the images were only half-visible from the Windows client – the user could see the file names in the directory, but no viewer could display them. The fix was trivial:

find . -name "*.jpg" -exec chmod +r '{}' \;
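
To keep the problem from recurring for newly created files, the share itself can enforce sane permissions. A minimal sketch of the relevant smb.conf share section – the share name and path are made up for illustration, but create mask and directory mask are standard Samba parameters:

---- smb.conf ----
[media]
   path = /data/media          ; hypothetical share path
   read only = no
   create mask = 0644          ; new files become rw-r--r--
   directory mask = 0755       ; new directories become rwxr-xr-x
------------------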

It’s kind of funny how old skills reappear when you need them. Unix is like riding a bike: you get uncomfortable and out of form, but you never lose it completely. I even remembered quite a few vi and emacs key combinations (but happily switched to using gedit or JEdit).

One thing that did not work out of the box was the VNC server (I am getting ready to disconnect the monitor and keyboard, move the box under the desk and forget about it for the next year or so :-)). When I attempt to start the VNC server using the GUI, it does not report any error, but it does not start either. When trying from the command line (using /sbin/service), the error pointed to a problem with the xstartup script. I found this useful info and will try it out.
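
For reference, a hedged sketch of what a minimal ~/.vnc/xstartup could look like – launching gnome-session is my assumption, not the verified fix from the link above:

---- ~/.vnc/xstartup ----
#!/bin/sh
# minimal sketch: skip the stock startup logic and launch a full desktop
xsetroot -solid grey
exec gnome-session
-------------------------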

So far I am very happy with the performance of the system (I have copied about 150 GB of content onto the NAS).

Lessons learned:
– the "NAS-like" distributions (FreeNAS, NASLite+) have a variety of limitations (such as lack of read-only/read-write access support and of user access rights) and are basically all-or-nothing solutions
– OpenFiler is not a good solution for a home network either. It goes to the other extreme – unless you have a domain controller you are out of luck, as it does not support local users
– of the full distributions, Fedora provides everything you need for a NAS as well as a user-friendly environment for administration. To be fair, I did not try OpenSUSE or Mandriva. Ubuntu – despite being known as the "most user friendly" distro – did not work for me: the desktop edition (which indeed was a great user experience) does not contain the server components, and the server edition requires a really experienced Linux admin, as it has no GUI at all.

All summed up: lots of fun. Worth every penny.


NAS Odyssey: Almost there

2007/02/02

As the title says, Fedora Core 6 is it. Everything is installed, everything works. The box is very fast and the Samba network server is visible on all computers – I have not had time yet to do any tests of read and write speed. I have 930 GB of shared space on RAID 5, which is in the process of being subdivided into separate shares for video, images, music, and so on.

The installation was really nice, and I was very pleasantly surprised by how much progress Linux has made since 2000, when I last really spent time on it (I was always using Linux as a file server and Web server, but since 2000 I never actually worked in a Linux environment). The only minor issue during configuration was that after installing the suggested updates (one almost feels like on Windows when the Linux box tells you that newer versions of 200-something packages are available), the mouse pointer disappeared. It was there – I could drag and drop, click – but it was invisible. It took less than a minute of googling to find the answer. As usual with Linux, once you find what is wrong, fixing it is very easy. All I had to do was edit the /etc/X11/xorg.conf file (which is of course a text file) and add Option "HWCursor" "off" to the Videocard device section:

---- xorg.conf ----
Section "Device"
    Identifier "Videocard0"
    Driver "nv"                  # the open-source NVIDIA driver
    Option "HWCursor" "off"      # render the mouse cursor in software
EndSection
-------------------

and the cursor automagically reappeared. Hint: do not forget the quotes around "off" – otherwise X Windows does not start :-).

So now I have my NAS, and the long part of the work starts. I will have to define the shares, assign access rights and then start moving the content onto the NAS, cleaning it up in the process. I expect to learn and re-discover a lot along the way – more than I have learned until now. Cannot wait …


NAS Odyssey: Fedora Core

2007/01/31

As the title says, Ubuntu is out. Not really because of some technical flaw or missing feature (as with FreeNAS, NASLite+ and OpenFiler before), but because of a lack of experience and patience on my side. Unlike other distributions, which come pre-packed on a DVD, Ubuntu installs from a single CD. There are 3 CD variants available: Desktop, which most people use and which is likely responsible for Ubuntu's fame, Server, and Alternate. I did not want to go with the desktop – for obvious reasons – so I started to install the Server edition. It installed OK, except that it did not offer any option to create a RAID. Technically, I still had the RAID created during the experiments with OpenFiler, but as I was not sure in which state I had left it, I wanted to recreate it.

The server installation unfortunately does not install any GUI, and all operations are performed from the command line. I dare to administer Apache and Samba via the command line, but meddling with filesystem operations made me feel really uncomfortable. I tried to install webmin (using apt-get), but it did not succeed on the first attempt and I gave up afterwards. Ubuntu seems to cover two ends of the Linux experience scale well: beginners (ex-Windows users) and experienced Linux guys. For somebody who needs more than a simple desktop but does not really want to dig into the deep internals, it does not provide very much. Maybe the Alternate install would do – but I gave up; time to move ahead.

I decided to try Fedora first. There seems to be a lot of information available (such as books on Safari Online) and I know a few people who use Fedora and could be a good source of information if I get stuck.

I encountered a few issues during the Fedora setup. First, there seems to be a bug in the Anaconda installer: as soon as you try to select additional RPM repositories it crashes, and you can start from the beginning, as nothing was saved yet. Lesson learned: do not do that. The other problem I saw had nothing to do with Linux: my setup kept freezing and crashing at first. I figured out that there was likely an IRQ conflict between USB and the onboard RAID – after I disabled RAID in the BIOS, it disappeared. Right now, I have selected the packages and the installer is running …

Put all together: it has been a great learning experience so far, and much more complicated than I expected, but quite fun. And this is only the beginning – I am pretty sure I'll learn much more cool stuff in the process of managing my new (still non-existent) NAS. I have to hurry up, because the old NSLU2 died and the family archive of images and videos is inaccessible on a USB drive formatted with an ext3 filesystem …


NAS Odyssey: OpenFiler is out, let’s try Ubuntu

2007/01/30

My love affair with OpenFiler did not last very long. After the installation finished, I started a browser, pointed it at the URL the installer gave me and started to configure. The Web admin was OK – nothing really exciting, but functional. The bad surprise came when I tried to define users, in order to define volumes later. And up came a screen saying:

Please note that Openfiler needs a central directory service
on the network to function, which it and the client machines can see and use.
You cannot use local users and groups with Openfiler.
Otherwise there is no means to implement authorisation
as one machine’s information about users and groups can differ from another’s.
You can configure the directory service below.

It offered me either to use an LDAP server or to point to a Windows domain controller. Well – too bad. I certainly have no intention of installing and running LDAP for the four users I have at home, and if I ever have a Windows server, it will not be an add-on to a Linux NAS but a replacement for a failed attempt to install a Linux NAS (my Plan B). And it certainly will be neither a domain controller nor an Active Directory server – I am not a big fan of either approach. Anyway – I cannot use OpenFiler; the next step is to try out real Linux distributions, in order: Ubuntu, Fedora and OpenSuse.

So I started getting the installation images ready. I downloaded the DVD ISOs for Fedora and OpenSuse and the CD ISO for Ubuntu. While doing that, I tried out Bittorrent practically for the first time. Stop laughing – I really was not using torrents until now. Somehow the fashion of downloading MP3s and movies from peer-to-peer networks had passed me by. The client I used was Azureus – a beautiful piece of open-source software written in Java. Maybe, with the current super-fast dual-core machines and 1-2 GB of RAM as the minimum, the time has come to re-evaluate the idea of a thick client written in Java?

But back to the installs. The first series of downloads did not work – none of them – probably because I had downloaded the x86_64 versions. Each of the systems crashed at various stages of loading the Linux kernel, so I assume there must be some hardware compatibility (HCL) issue with my configuration. I read somewhere on the Net a few weeks ago that "… trying to go with 64 bits is begging for trouble …" – but did not know how true it was.

So I am back to running torrents – the i386 version of the Ubuntu Server 6.10 CD is already downloaded and the other two DVDs are coming. It will take a few hours. We will see if the problems were indeed caused by the x86_64 versions.

Let’s start with Ubuntu ….

To be continued


Joys of the Opensource NAS – Part 2: OpenFiler

2007/01/29

After spending most of Saturday and Sunday fixing, coding and logging bugs, I finally got to move ahead. It is very likely about time, because the NSLU2 keeps producing strange sounds.

The box now has an additional 160 GB IDE disk, which will hold the Linux installation as well as a "temp" share that will not be RAID-ed – sort of a staging area for stuff in flux.

After contemplating for a while which distribution to use – and more generally, whether to build a generic Linux server with Samba or a more appliance-type box – I went with trying OpenFiler first. The truth is that my old Linux box spent 99% of its service as a Samba file server and very little as a Web server / Java application server. Most of the fancy GUI applications were barely used – including the pretty loaded Gnome and KDE installations (one can never decide which one is actually better). So why repeat the same mistake twice?

Nevertheless, I created a shortlist of distributions and prepared the media. Here it is:

a) Ubuntu
b) Fedora Core
c) Open Suse

I have no real rational reason for exactly these 3 distros or this order – the decision was based partially on reviews from the Net and partially on personal recommendations.

The installation of OpenFiler (still running) was very straightforward. Of course, you have to select manual partitioning, not automatic, if you want to create a RAID. I created 4 partitions on /dev/hda – /boot, /, swap and /var (occupying the rest of the disk) – as ext3. Each SATA disk got one large partition, formatted as Software RAID. The actual RAID – /dev/md0 – was created right in the disk partitioning tool. The only uncertainty was the number after 'Number of spares'. I was not sure what a "spare" meant, but then decided that it should be the number of spare partitions not actually participating in the RAID, just standing by in case something fails. I left the default value of 0 and moved on.
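
For the curious: what the installer does behind the scenes corresponds roughly to an mdadm invocation like the sketch below – the partition names are assumptions and will differ per machine:

# create a 4-disk software RAID5 array with no hot spares
mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=0 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# a partition listed via --spare-devices would sit idle ("spare") until
# one of the active members fails, and then be rebuilt onto automatically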

The created RAID was, as expected, over 915 GB large (4×305 GB disks in RAID5 = 3×305 GB of usable space + one disk's worth of parity). After that, the installer asks you to set the time zone, IP address and hostname, and then starts formatting the partitions, which takes a long, long, long time – it has been running for over an hour now …

A few good links on the topic: the Linux SATA RAID FAQ, and an explanation of why software RAID.
To be continued