VMware and slow clocks – take 2

2009/01/09

The CentOS 4.x we are using as the platform for the VMware VMs (yes, I know there is version 5, but we need to be compliant with RHEL 4.x because of the ATG requirements) installs the NTP client by default.

Here is the sequence of steps needed to enable automatic synchronization of the clocks.
1) make sure the DNS works
cat /etc/resolv.conf

sudo vi /etc/resolv.conf

– check that the nameservers point to something meaningful, e.g.
nameserver 208.67.222.222
nameserver 208.67.220.220
nameserver 192.168.16.1

2) check that you have ntpd installed

cat /etc/ntp.conf

– inspect the content of the file, it is quite well documented
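The part that matters is the list of `server` directives. A minimal configuration could look like the sketch below – the pool.ntp.org names are just an example, pick servers appropriate for your network:

```
# /etc/ntp.conf – minimal example
server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org

# fall back to the local clock if the network is down
server 127.127.1.0
fudge  127.127.1.0 stratum 10

driftfile /var/lib/ntp/drift
```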

3) make it work in runlevels 2345

/sbin/chkconfig --list | grep ntpd

– this will most likely show “off” for all runlevels

sudo /sbin/chkconfig --level 2345 ntpd on

4) Initial sync

sudo /usr/sbin/ntpdate pool.ntp.org

– run this twice; the second run should report a very small offset

5) Start service / restart service

sudo /sbin/service ntpd start

Making this a blog post so that next time I need to do it I can easily find it :-). Maybe somebody else will find it useful as well.


Phaser 6120N rocks!

2007/11/23

Today I picked up the printer I mentioned yesterday – the Xerox Phaser 6120N. It is somewhat bulkier than the ML-4500, but boy, does it work!! Of all the available connection options (parallel port, USB, Ethernet) I have of course chosen Ethernet. After switch-on, the printer spent about 5 minutes making strange and weird noises, initializing itself, and then spat out a page with its DHCP-acquired network address.

Assigning a different IP address (so that it does not change) was very easy: go to http://192.168.x.x/ – whatever is printed on the status page – and follow the Web instructions under ‘Network’. Set the new IP, disable the automatic IP settings, wait for the printer to reset (and print a new status page) and you can go on to set up the clients.

Windows installation is very smooth – the drivers are on the CD – and does not even need a reboot. As a nice surprise, the CD also contained PPD files, so now I have the printer accessible from CUPS on Linux as well. And as usual, the Mac installation was pretty much non-existent. The printer does not support Bonjour, but somehow both Macs found it using AppleTalk – yet another thing I do not know enough about and need to look up. So finally, all the computers around my house can print reliably, in color, without jumping through hoops. Hooray!

The most amazing surprise came when I tried to print a photo. I have done it before and generally a laser printer comes nowhere close to what you can get out of a photo inkjet – especially with proper paper. This time I could not believe how good the result was: on plain paper, without any special calibration or color magic, the picture looked very close to inkjet output – only waterproof.

This source is certainly not impartial, but it still shows the difference in fine details of the output. I am no expert, but the printouts certainly look very good to me!

Disadvantages? Besides the size, the printer is a bit noisier than the ML-4500 when printing. Other than that, no other issues found yet.


Little pains of switching

2007/11/22

I have been running Unix-only workstations for over a week now. During this week I have managed to clean up and convert two Windows boxes to VMware virtual machines, but actually had to resort to using the VMs in only two cases. Everything else worked just fine.

The first case was a requirement to review a non-trivial Excel file. Normally, Numbers does a very decent job when it comes to loading Excel files, and for most of the remaining cases OpenOffice (in its more polished Cocoa version, NeoOffice) works just fine. Well – as long as you do not open a spreadsheet with pivot tables. In my case, the spreadsheet contained pivot tables and charts based on those tables. Numbers warned during import that this feature is not supported, and the converted spreadsheet was very good looking – nicer than in Excel, but not quite as functional :-). Strangely enough, the charts were imported OK but lost their drill-down capabilities. NeoOffice's import retained some pivot capabilities of the original, but lost the charts completely. The only resort was falling back to Office 2003 in a virtual machine and using Excel. Fortunately, one of the two VMs created from the old hardware had Office 2003 installed, so that worked out OK.

The other case of “works only with Windows” was the ActiveX control used by our VPN to get access to the internal network. There is no alternative, so to reach our internal Wiki, JIRA or timesheet system, I have to start Windows in a VM.

The last issue encountered does not really have anything to do with the platform change, but it was triggered by it, so here it goes. Back in 2002 I purchased a very small and handy laser printer, the Samsung ML-4500. It worked well all those years, happily attached to my Windows desktop machine, which (being always on) shared it with everybody on the home network. Up to the point, of course, when that desktop machine was decommissioned. Because the ML-4500 is a parallel-port-only printer (no USB), the first issue was to find a machine that actually still has a parallel port (most new machines do not). By lucky coincidence, my NAS Linux system had one – so I dived into the new adventure of configuring the CUPS printing system under Linux and sharing the printer via Samba. Which was much easier than it may seem – hats off to the Linux hackers. The ML-4500 was part of the CUPS printer database and was recognized right away. The only small hiccup was a disabled parallel port in the BIOS – but after this got corrected, things worked OK (thanks to Peter for the hint).
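For the record, the Samba side of the sharing boils down to a few lines in smb.conf. This is a simplified sketch, not my exact config – the paths and options are just the common defaults:

```
# /etc/samba/smb.conf – simplified printer-sharing sketch
[global]
   printing = cups
   printcap name = cups

[printers]
   comment = All printers
   path = /var/spool/samba
   printable = yes
   guest ok = yes
```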

So now I can print from Linux and can even share the printer, but still cannot print from Windows or Mac. Why? If you try to attach the printer from a Windows machine, Windows wants to install a driver. The ML-4500 is not in the default driver database. I still have the original CD from Samsung, but the attempt to install the driver fails with a completely useless error message. An updated driver is nowhere to be found. I do not remember how I installed the driver back in 2002, but that machine is gone anyway. Even if I do have the driver inside a VM, I do not have VMware installed on Linux (and do not plan to install it). So for now, the only way to print from the Windows notebooks (or Macs) is to create a PDF file on the client, save it to the server and then use CUPS to print it out from Linux. What a pain …

The way I see it, the days of the ML-4500 are numbered … especially when my favorite hardware supplier has a very nice color laser network printer for a great price :-).


Ragioni per fare il grande salto

2007/11/09

During the vacation we stopped in a small bookstore in Verona which was selling computer software as well. At the entrance there was a poster showing a smiling Finder face and the Apple logo, announcing reasons why you would want to make the big jump – il grande salto – to the Mac platform. That explains the title 🙂

Grande salto is exactly what I did. Without too much preparation or notice. Since Tuesday evening I am the happy owner of a beautiful piece of hardware (17-inch 2.4 GHz Core 2 Duo MacBook Pro with a 250 GB HDD and 4 GB of RAM) that is greatly complemented by the amazing software collection (OS X 10.5, iLife 08, iWork 08 and many others) running on it. I spent Wednesday playing with it and between ‘Wows’ and ‘Ahhs’ managed to set up the core applications I need for work: several configurations of Eclipse (with and without MyEclipseIDE), NetBeans with Ruby, a couple of gems and of course THE editor. Then I transferred the sources and installed Ant, Spring, Tomcat and a couple of open source packages (e.g. Jakarta Commons). And on Thursday I jumped right into the field, to the client, continuing development of my current Java project where I had left it on Monday on a Fujitsu N5100 running XP Pro SP2. As a plan B, I also had my old notebook with me – but did not have to resort to using it.

The switch was not easy, and for a few hours I had to fight old habits and muscle memory. As a many-years TotalCommander addict, I had created really fast and efficient workflows and habits for moving around. None of them, of course, was applicable. It is really hard to overcome these unconscious finger movements burned deep into your brain: F3 is File View, F4 is File Edit, F7 is create directory – when you have been using them for over 20 years, since MS-DOS and Norton Commander. But eventually I learned to appreciate and enjoy the New Ways. The new Finder is really good and, in combination with Spaces and Quicksilver, allows at least as efficient – sometimes even better – ways to accomplish things.

As I was expecting, using bash instead of the pretty lame Windows command shell is a big relief. I have forgotten 90% of my old shell scripting and command-line editing keystrokes, but it is still so much better. The file system operations are considerably faster and common tasks can be automated with minimal effort just by using symbolic links, simple scripts and shell variables. What is much better under Leopard is network connectivity – the way it browses and connects to Windows machines …
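A trivial example of the kind of automation I mean – the paths here are made up, but the pattern (a shell variable plus a symlink as a stable entry point into a moving project tree) is the whole trick:

```shell
# Hypothetical layout: keep a stable shortcut to whatever project is current
PROJ="$(mktemp -d)/projects/client-x"   # stands in for ~/projects/client-x
mkdir -p "$PROJ/src"

# One symlink as the entry point – scripts and habits point at "current",
# and only the link moves when the current project changes
ln -s "$PROJ" "${PROJ%/*}/current"

cd "${PROJ%/*}/current/src" && pwd
```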

Working in Eclipse is the same as under Windows – only it looks better and is faster – but (to make the comparison fair) it MUST run faster on a 2.4 GHz Core 2 Duo with 4 GB than on a 3.4 GHz P-IV with 1 GB. The ability to use two-finger scrolling, quick desktop switches and the great screen resolution (1920×1200) make the MacBook Pro a close-to-perfect developer workstation.

I have still not found replacements for all the features and tools I was using on Windows – but I am working on it. Most of them come from 3rd party software, not from the OS. Right now, what I miss (from TotalCommander) is:

– convenient directory comparison and synchronization with a good GUI (with embedded on-demand file diff)

– transparent processing of packed archives – TC makes them look and behave like directory sub-trees

– opening a shell in a directory / creating a file in a directory – aka “I am in the Project/demo/src directory now, please create an empty README.txt file here”. The Mac works the other way around – open the editor with a new file and save the file to Project/demo/src/README.txt. Which is not necessarily worse, but just (still) against my instincts. I was used to getting to the directory first and pressing Shift-F4 (bound to launch Notepad++, prompting for a name), or clicking the TotalCommander toolbar icon “Open command prompt in current directory”.
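In shell terms, the workflow my fingers keep looking for is simply this (the directory names are just an example):

```shell
# "I am here, create the file here" – the Norton Commander way of thinking
mkdir -p /tmp/Project/demo/src
cd /tmp/Project/demo/src
touch README.txt          # the empty file appears in the current directory
ls README.txt
```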

After working two full days I have not found any real issues and was extremely pleased with the Leopard user experience. It is hard to explain – the differences are subtle, but as a whole it feels so much better than any other OS. I have been using OS X for over a year now, but only at home and mostly for hobby projects or after-hours hacks. This is an attempt to make the Mac the foundation for the work environment as well, and to use Windows for .NET development only, running it inside a virtual machine. And it is a very different experience – in two days I have learned a lot and found out about many more things to discover.

To make the grande salto even more complete (and potentially devastating in case of a bad landing ;-)), I have also during the last week wiped out my desktop at Thinknostic and installed Fedora Linux with a KDE desktop instead of XP Pro. With the Mac available and working mostly out of the office I do not get that much time to spend on the desktop – but it is good to have the same platform in both places. I have already configured the first few VMware virtual machines – with both Linux and Windows as guest OS. The VMware Player works great under Linux; I cannot wait until the final version of Fusion 1.1 is out to test it on the Mac.


Installing ATG on Fedora Core

2007/11/08

Trying to install ATG on the Linux platform turned out to be not completely hiccup-free. The default installer died with the following message:

$ ./ATG2007.1.bin
Preparing to install...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
awk: error while loading shared libraries: libdl.so.2: cannot open shared object file: No such file or directory
dirname: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
/bin/ls: error while loading shared libraries: librt.so.1: cannot open shared object file: No such file or directory
basename: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
dirname: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
basename: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory

Fortunately, the fix was easy once I found out what went wrong: an incorrect assumption about the kernel version (the installer exports LD_ASSUME_KERNEL, which makes the dynamic linker look for old library versions that no longer exist on current glibc systems – hence the “cannot open shared object file” errors above). To make it work, this “hack” helped:

$ mv ATG2007.1.bin ATG2007.1.bin.orig
$ sed "s/export LD_ASSUME_KERNEL/#export LD_ASSUME_KERNEL/" ATG2007.1.bin.orig > ATG2007.1.bin
$ chmod +x ATG2007.1.bin

None of this of course happens on Windows – oh, yes – the joys of not swimming with the crowd :-). You will encounter issues otherwise unknown – but also learn something in the process.


Going (mostly) virtual

2007/11/01

With my new MacBook Pro on its way from China, I have started a big consolidation of hardware and software platforms. The goal is to improve change management of the software development environments, and to increase their mobility and stability.

The way I am going to do that is by moving all “fragile” environments into virtual machines. My most fragile environment is Windows XP (I am still resisting Vista). What I mean by “fragile” is the well-known “software rot” of a developer's machine, caused by frequent installs and uninstalls (or upgrades) of components, tools, etc. After some time (which varies with the kind and frequency of installs), the machine becomes more and more sluggish, the registry bloated and the occurrence of weird things more frequent – up to the point where the developer decides to blow away the Windows install and start from scratch.

This operating system “cancer” can be slowed down a lot in a virtual machine by using several baselines: clean install, clean install with Visual Studio, etc. Whenever you need to roll back an install (or test a new beta version of Visual Studio, many of which were notoriously known to be uninstallable), you just throw away the latest VM state. It takes some discipline to keep things documented and to keep the images up to date with all the patches and updates – which may be a challenge ;-).

As the host platform I will be using OS X (for the notebook) and Linux (for the desktop). I could use Windows as the host on the desktop, but the latest series of BSODs I have experienced on my barely one-year-old desktop machine made me rethink that. After a few years, it is time to give Linux another try – for three main reasons. The first is to find out whether there is indeed some hardware problem with the machine (the memory tests showed no issues) or whether it is (as usual) some bug in a third-party driver. It is so annoying that any sloppy programming job by some never-heard-of company can shut down XP completely – thanks to Microsoft's decision to put pretty much everything into kernel space :-(. If Linux works OK, the point is proven – it WAS a software problem.

The second reason for Linux is that several of our clients are using Linux and virtualization, and we want to recreate a comparable environment in the lab for evaluation and testing purposes. We had a lot of fun recently finding out why Oracle Advanced Queuing stopped communicating with the ATG Dynamo service components, and I am sure this is not the only scenario where moving a physical server into a virtualized one subtly changes the environment to the point of breaking it 🙂

And the third reason for Linux as a host is the convenience of having the same kind of OS on both the notebook and the workstation, and the power of that OS. The shared availability of tools such as bash and the Unix file utilities can make administration and automation much simpler.

I have not made a final decision whether to stay with Parallels (which is available for all three platforms) or switch to VMware. Right now I am leaning towards VMware – I have tested it on Linux and Windows and found it working very well, at least the free Player version. The VMware Server UI performance on a Windows host was not very impressive – but I guess that is not what it was written for. The performance reviews I have read favor VMware on the Mac, and a very important factor is the availability of ready-made, pre-built images – software appliances from the marketplace. Neither have I selected the Linux distro – albeit it will be one of the major ones.

As the guest OS I will be using XP – the only way to do .NET development. As well as Linux, which can be useful to overcome the lack of Java 6 on the Mac (until resolved), and also to contain the various database environments in their own appliances (MySQL, Oracle, PostgreSQL). I also want to compare the user experience of doing Java development on Linux and the Mac with that on Windows – in recent discussions on JavaLobby so many people were claiming the superiority of Linux for a developer's workstation that I cannot resist the temptation :-).

I am really curious how this will work when implemented. The only scenario where the setup described above would not work is the kind of project we did last year: one depending on hardware devices physically attached to the PC by FireWire or even a custom board (such as a biometric camera with face recognition software, or a secure document scanner). Right now we have the luxury of not needing anything like this – which makes it a great time to try it out …


Like Netmeeting – only much better

2007/10/03

My current client is a multi-national company headquartered in Ottawa, with offices in the USA and Europe. With teams all over the globe, conference calls and network meetings are part of the game. This usually means Microsoft NetMeeting to share the screen and a teleconference bridge for voice. NetMeeting usually works quite OK – except when it does not – for no obvious reason :-).

Thanks to Yugma, there is now a better alternative. Yugma is a Sanskrit word meaning “the state of being in unified collaboration” and also the name of a startup offering a teleconferencing product that aspires to unify collaboration across platforms. And they mean it, because their product runs on Windows, Linux and Mac, and integrates with Skype to take care of the voice portion.

The free version of the product offers teleconferencing for up to 10 people (albeit it is unclear how that works with Skype, which allows conference calls of only up to 5 people), desktop sharing, desktop annotations and changing the presenter during the session. The paid version, in addition to accommodating larger teams (30, 50, 100 or 500 people), also allows keyboard and mouse remote access, session recording and replay, webcasts, file sharing and webinars (for the 500-seat version). The price of the premium service is very reasonable – the 30-user package costs about $30/month.

With products like this, I have fewer and fewer reasons every day not to switch to OS X as my main and only platform – running Windows XP and Linux in virtual machines to cover the .NET and LAMP/NAMP stacks. It is coming 🙂


.NET on Linux faster than on Windows? Hmm

2007/06/27

An interesting article on JavaLobby caught my eye today: Do .NET Applications Run Better on Java?

Normally, knowing the not exactly impartial focus of Java-centric sites such as JavaLobby or theserverside.com, one should be careful when reading how much Java outperforms .NET. The bias works the other way too – just look at Theserverside.net or other .NET-centric sites to see how superior C# is :-). With that in mind, I looked at the technical report.

The report was produced by Mainsoft, the company behind the cross-compiler product Mainsoft for Java EE. Cross-compilation means that the C# or VB.NET code is first compiled into CLR bytecode using the standard Microsoft tools and then transformed into Java bytecode, using the CLR bytecode as input. The study was based on a fairly large project – 260,000 lines of code. The published results show that the translated code, running on a Java VM and the WebSphere platform, outperformed the .NET stack on both the Windows and the Linux platform.

So far so good. I have no plan to question the results of the test. One could argue that because the evaluation was done not by an independent third party but by the authors of the cross-compiler, the result had to come out the way it did – simply because if the measurements had shown that .NET performs better, no report would have been published 🙂

First of all, “faster” does not really mean much faster. The measured speed increase is 8% in throughput, and very much the same for requests per second. An 8% increase is much too small to justify doing anything major with the application, certainly not re-platforming …

Second, a comparison based on a single port of one application proves absolutely nothing about the results of repeating the same process for another application, let alone every application. It can be an indication of reality just as easily as an exception. I am pretty sure that, given the chance to respond, Microsoft or some other party interested in the opposite outcome could find a C# application that would perform worse after conversion.

A more interesting question is why you would want to do this – replace Win2003 + the .NET CLR with some other operating system (Windows/Linux/something else) plus Java plus WebSphere. Clearly, performance cannot be the reason – at least not based on these results.

Price is not a good reason either. From a cost-saving perspective, cross-compiling a .NET application to run in Java under Windows makes no sense, because the .NET runtime is part of the Win2003 license and the cost of that license is there in both cases. This leaves using Linux as the platform (or some other free alternative). True, Linux is free – but support is not, and neither is labor. In real life, the initial license costs are small compared to the accumulated costs of supporting an application in production – and professional support for Windows or Linux is comparably priced. Besides, I bet that the savings gained from not paying for a Windows license will not cover the cost of a WebSphere license plus the Mainsoft for Java EE Enterprise Edition license. True, you could use a free Java EE server such as Tomcat, GlassFish or JBoss with the free Grasshopper version of the cross-compiler – but you may not get the same performance numbers. I like Tomcat and use it all the time: it is nice, flexible and easy to configure – but not the fastest servlet container out there (there must be a reason why Mainsoft picked WebSphere for their own enterprise version after all) …

What is the conclusion? The article above, and the approach it describes, can be a lifesaver if you have lots of .NET code and *must* for some real reason switch platforms. The reason may be technical – or not – just consider the magic Google did with Linux. The report does hint at one possible good reason – moving your application from PCs to a really big machine: multiprocessors (multi meaning more than 16 these days, when desktop machines are starting to get quad-cores ;-)) running Unix, or a mainframe. The report shows that an AIX Power5+ based system with 4 CPUs did ~3400 requests per second whereas the PC-based one did 2335. This would be interesting if the comparison were fair – but it was not: the AIX machine had 32 GB of RAM whereas the PC (with Linux or Windows) had 2 GB, and you can imagine the price difference in the hardware.

But if there is no really compelling business reason for switching platforms, sticking with Windows when you want to run a .NET application may save you a lot of work – and most likely some money as well.


Avalon – reloaded …

2007/03/20

Now this is something really interesting: as found on the Adobe Labs site, their technology codenamed Apollo is approaching Alpha status. What Apollo is – in a nutshell – is another virtual machine, similar to the Java runtime or the .NET Framework, with a few twists: it is multi-platform (like Java) and multi-language (like .NET) at the same time. Before the flame wars start – I am aware that the JVM is capable (more or less) of supporting multiple languages beyond Java, and also that .NET is (more or less) capable of running on non-Windows platforms (e.g. the Mono project), but that is not the point. The point is what is different about Apollo compared to the JVM or the CLR.

The first difference is the developer skill-set. Adobe is trying to leverage the experience of Web application developers and allow traditionally Web-oriented technologies – HTML, Flash, JavaScript and PDF – to be used in the context of desktop applications. The other is that Apollo is designed with the notion of being “occasionally connected”, or in other words of online/offline applications. It supports well the regime where you work offline with a local copy of the data and reconnect / synchronize with the master copy online, providing both access to local resources (like a traditional desktop application) and a rich, asynchronous, XML-capable communication library (like a Web 2.0 application running in the browser on the client).

Using JavaScript/HTML for desktop-ish apps is not really an innovation. If you look at how Firefox extensions are created, or at the Widgets/Gadgets in Vista or OS X, you will see something very similar. The same idea was also implemented in Avalon – renamed to Windows Presentation Foundation – which uses XML to define the user interface and “scripting” to determine the logic. In WPF you use the .NET languages for the “scripting” (JavaScript being one of them) and you need a .NET 3.0 capable platform to run it (currently Windows XP SP2, Windows 2003 and Vista, unless I am mistaken). Even with a similar language (JavaScript), programming WPF is quite different from Web application programming and requires different skills. Allowing the use of Web app development skills and a variety of Flash/HTML/JavaScript/PDF combinations may be very appealing for somebody who needs to create a desktop-like application without learning WPF. Plus, being platform-independent is an added bonus and could finally be a solution for a problem that Java never really addressed well. It has been possible to create rich-client, Web-startable applications for several years and yet it has not become mainstream. Possibly because of the complexity of creating Swing-UI applications in the first place?

Compared to Firefox, the important point is that Apollo departs from the browser while keeping the Web capabilities – such as rendering Web pages or creating mixed apps. Eliminating the browser is important from a security point of view. An installed runtime can give an Apollo application access to local machine resources such as local files without compromising security – as would be the case with browser-based applications. Access to local resources, together with a modern approach to remote connectivity, is very interesting. The browsers are very much Web 1.0, with a request/response-shaped vision of the world, and adding asynchronous capability via AJAX was one grandiose hack … Another good reason for getting rid of the browser is the simplicity of supporting one version of a runtime, versus making sure that your great new Web 2.0 app works with the wildly different JavaScript/DOM capabilities of Internet Explorer 5, 6 and 7, Firefox 1.0, Safari, Opera, and so on …

The demonstration videos on Lynda.com show a few interesting capabilities of the new application types – follow the ‘Sample Apollo applications’ link and also here.

It is still Alpha, so it is too early to get excited; we have no data about performance, resource requirements or real-world application development experience. On the positive side, both the runtime and the SDK should be free. And it is always good to have more options available 🙂


NAS Odyssey: Up and running

2007/02/12

After lots of attempts (see here, here and here), I have finally resolved my disk space problems and the RAID-5 NAS server has been up and running for about two weeks now. I stayed with Fedora 6, successfully installed Samba, configured the shares and copied all the family JPEGs, videos, MP3s etc. onto the huge 907 GB RAID5 share. The old NSLU2 is still dead – I did not have time to try to re-flash it, but all the content of its disk was readable by the new server without any issues (the NSLU2 was using the ext3 filesystem), so I did not even have to resort to backups.

Most things in Fedora can be configured graphically. The only case where I had to go to the shell was for the final configuration changes in Samba. As it turned out, the default umask allowed search on directories (rwxr-xr-x) but gave no access to files (rwx------). This led to strange behaviour where the images were visible from a Windows client – the user could see the file names in the directory, but no viewer could display them. The fix was trivial:

find . -name "*.jpg" -exec chmod +r '{}' \;
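To see why the default umask caused this in the first place, here is a small throwaway demonstration (temporary paths, nothing to do with the actual server config):

```shell
# New files inherit their permissions from the current umask
d=$(mktemp -d)

umask 077                 # like the problem case: files readable by owner only
touch "$d/photo1.jpg"     # created as rw------- (600)

umask 022                 # what Samba clients need: world-readable files
touch "$d/photo2.jpg"     # created as rw-r--r-- (644)

ls -l "$d"
```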

It’s kind of funny how old skills reappear when you need them. Unix is like riding a bike: you get uncomfortable and out of form, but you never lose it completely. I even remembered quite a few vi and emacs key combinations (but happily switched to gedit or JEdit).

One thing that did not work out of the box was the VNC server (I am getting ready to disconnect the monitor and keyboard, move the box under the desk and forget about it for the next year or so :-)). When attempting to start the VNC server from the GUI, it does not report any error, but it does not start either. When trying from the command line (using /sbin/service), the error explained the problem with xstartup. I found this useful info and will try it out.

So far I am very happy with the performance of the system (I have copied about 150 GB of content onto the NAS).

Lessons learned:
– the “NAS”-like distributions (FreeNAS, NASLite+) have a variety of limitations (such as lack of read-only/read-write access support or user access rights) and are basically all-or-nothing solutions
– OpenFiler is not a good solution for a home network either. It goes to the other extreme – unless you have a domain controller you are out of luck, as it does not support local users
– of the full distributions, Fedora provides everything you need for a NAS, as well as a user-friendly environment for administration. To be fair, I did not try OpenSUSE or Mandriva. Ubuntu – despite being known as the “most user-friendly” distro – did not work for me: the desktop edition (which was indeed a great user experience) does not contain the server components, and the server edition requires a really experienced Linux admin, as it has no GUI at all.

All summed up: lots of fun. Worth every penny.