Evernote Firefox plugin version – do NOT update


Over the past few months I have slowly switched all my online notebooks to Evernote. In case you are not familiar with it, it is a great service. You can clip pieces of Web pages, type in notes, attach documents and multimedia files (pictures, sounds) and store them online, always available through a nicely done Ajax Web interface. The notes can be tagged, organized into notebooks and full-text searched. One of the killer features is that for uploaded pictures, Evernote will do OCR and include the photographed text in the full-text search.

For each platform there is a desktop application that seamlessly synchronizes the online notes with a local database and also lets you create notes using a thick-client interface with all the desktop goodies. If you are an iPhone user, make sure you get the free app, which lets you both access your notes and capture new ones – textual, picture or voice notes, with GPS information.

The basic service is free and limits you not by the storage taken on the servers, but by a transfer quota of 40 MB a month, which is a lot unless you do picture notes. The very reasonably priced premium service increases the monthly transfer quota to 500 MB and adds a few more goodies – like PDF and DOC attachments to the notes. I am a fairly frequent user and have never used more than half of that amount. To make clipping from the browser easier, Evernote offers a Firefox plugin as well as a Safari plugin.

The Firefox plugin is equipped with a self-update capability, and that is why I am posting this: if you have not updated yet, stick to version .128 and do NOT upgrade to the latest but far from greatest release. The latest version of the plugin (.45382 as of today) is definitely a step back. Unlike .128, which clips the selected text nicely into the Web account, .45382 always opens the desktop application and creates the note there. This is not only annoying, but there is also no obvious way to prevent the desktop client from starting.

The real problem is that the captured note contains an HTML attachment named ‘Firefox clipping.html’ and does not show the clipped text in the Evernote desktop client. To see what you clipped, you need to open the clipping in the browser. To add insult to injury, the quality of the clipping is dramatically worse than in .128. The layout is all over the place, and the result looks much worse. It is hard to tell whether this is caused by the clipping itself or by a different way of displaying the result.

If you were unpleasantly surprised after updating, here is how to bring back version .128:

– uninstall extension

– restart Firefox

– download the previous version of the extension (the .xpi file) from https://addons.mozilla.org/en-US/firefox/downloads/file/47839/evernote_web_clipper- and save it to your local disk

– in Firefox, open the downloaded file and confirm installation

– restart Firefox

I hope that somebody at Evernote will recognize that version .45382 was a bad idea, bring back the clip-to-web capability (or at least make it configurable), stop creating the useless HTML attachments and fix the clipping engine. Until that happens, I will stick with .128.


Bye bye Ruby, hello Groovy


I first discovered Ruby back in 2006 (yes, I know, I was late to the game) and immediately fell in love with it. The dynamic nature of the language, the consistency, the pure aesthetics and practicality certainly changed the way I saw software development and programming languages.

Since then, I have made several attempts to integrate Ruby into my professional life and make it part of the toolbox. It was cool to play with Ruby in my spare time, but I wanted to use it on projects whose main development language/platform was Java. Use it as a scripting and glue language. Use it as a toolkit language to, for example, generate test data, access databases, convert files, build projects and maybe even build small Web applications (admin apps, for example).

It never worked. The main problem was the availability of the Ruby platform across environments. While the JVM was there by default, Ruby had to be installed and sometimes compiled for the more exotic platforms. And that can be a big deal if you do not have full control over the environment – a scenario pretty much guaranteed in the enterprise. It is hard to argue with the sysadmin saying "You want to install that in production just to run scripts? Why don't you use Perl or Bash or Java, which are already there?"

For a little while I thought that JRuby might be the way. After all, all you need is the JVM, and JRuby is just another JAR, right? As Goethe said, grey is all theory and green is the tree of life :-). A language is only as good and useful as the components and libraries available for it; one certainly does not want to write everything from scratch. Libraries in Ruby are Gems, and Ruby provides a very nice, mature component-management system that is IMHO superior to Java JARs, because it handles different versions of the same Gem very well (maybe some day there will be Gem hell, after DLL hell and JAR hell 😉 ). Unfortunately, some Gems (by Murphy's law, most of the really interesting ones) are, for performance reasons, built as a thin Ruby layer around a native library written in C. And JRuby does not support that, making many of the Gems unavailable.

Even if JRuby had all the Gems available, there would still be the problem that the Gem system and the JAR system are different and do not quite fit together. Also, from the language point of view, you certainly can use Java objects in JRuby and vice versa, but doing so makes you feel slightly schizophrenic – what reality am I in? Is this a Java Java class or a Ruby Java class?

The third problem, which I encountered after building a Web app in Rails, is that the deployment model is very different from the Java deployment model, which I and the people in the organizations we work with understand really well. We know how to deploy a Java enterprise app so that it scales; we know how to monitor and maintain it. But not a Rails app with all those Mongrels, lighttpd's and other creatures :-). This leaves many open questions, like "How do we size the hardware for the expected load?", for which I do not have answers – and judging by the well-publicized issues with Rails apps' scalability, even the best and brightest in the Ruby world may not have them either, or at least some people say so.

At about the same time I discovered Ruby, I also became aware of a strange Java dialect called Groovy. It sort of tried to do the same thing I hoped to use Ruby for, only from firmly within the Java environment. The original reason I did not want to look deeper at Groovy was that, compared to the straight elegance of Ruby, it looked kind of ugly. The Java skeleton was sticking out in the wrong places, and altogether it just did not feel as good as Ruby.

I have to publicly admit I was wrong.

Being a Mac user, I have a license for going after good looks and shiny white objects, but when it comes to programming languages, good looks may just not be enough. Reality is the proof.

During the last 12 months, we have quietly and very successfully used Groovy components and pieces on three large projects. It fit in perfectly, and we never ran into any of the issues above.

Through these projects, I learned to appreciate the Groovy way; my sense of aesthetics stopped being offended by certain Groovy syntax constructs, and I even started to like them better than their Ruby counterparts. For example, I am now convinced that Groovy categories, which explicitly alert the programmer that a class extension is in use, are a safer and better approach than re-opening any class the way Ruby does (which is still possible in Groovy by assigning a closure to a member of the metaclass). Imagine how confusing it can be for software maintenance when the reopening and its use happen far apart in the source code.

But most importantly, the painful realization "how the heck do I do the XYZ thing in this language? If only I were coding in Java, it would be so much simpler" is history with Groovy. Everything I have been used to using over the last 12+ years in Java is still there – all the goodies of Jakarta Commons and way more.

The Groovy community seems to be less opinionated and less self-righteous than the Rails/Ruby community, and more understanding of the weird requirements and idiosyncrasies of enterprise environments. Rather than being told "you should not want to do this" or "DHH thinks it is wrong", you may actually get a helpful pointer to a useful website or blog post on how to do that stupid thing in Groovy or Java, or a combination of both. Because, you know, when one needs to accomplish something that seems wrong and illogical, being told that it is wrong and that you had better forget about it does not really help. People who have worked on real enterprise systems integration understand that the cost of touching or changing certain systems is so prohibitive that doing the technically wrong thing may be the right (and only) option for a given situation and customer.

Therefore – bye bye Ruby, hello Groovy. The next thing to embrace and embed will be Grails.

The aftermath of a mainboard change


In theory, exchanging the mainboard on a MacBook has no impact, because all your data is stored on your hard disk, which is untouched. In real life, there are a few minor surprises.

First, the MAC address of your network card has changed. This is something you will not notice unless Murphy's law plays a funny game with you, as it did with me. When I arrived back at the office, I was able to connect to the WiFi network, but on Ethernet the machine was stubbornly getting the "internal IP" – a 169.254.x.x address, which is pretty much useless from a connectivity point of view. We run two separate networks – both NAT-ed, one on WiFi (mainly for guests), the other internal. Not getting an IP address did not make any sense: the cable was OK, because another machine worked just fine with the same drop/cable. The Ethernet connection was OK, because it worked when an IP address was assigned manually.

The problem was the new MAC address. During the last week or so, while playing with virtual machines, I must have allocated all the available IP addresses in the DHCP server's pool. The leases are fairly long-lived, and all the slots were taken either by computers around the office or by VMs – both running ones and now-defunct ones. One lease was still reserved for the MAC address of my old, gone MBP. Nobody would notice the problem unless they tried to attach a new DHCP-based VM or a new computer. Lesson learned: if a problem with an internal IP address does not want to go away, check the DHCP server. In a home environment, resetting the router mostly helps.
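If you suspect a stale lease, you can look for the old machine's MAC address directly in the DHCP server's lease database. A minimal sketch, assuming an ISC dhcpd-style leases file (the default path and the helper function name are my own, not from the setup described above):

```shell
# Find which IP a given MAC address still holds a lease for.
# Assumes the ISC dhcpd lease file format: blocks starting with
# "lease <ip> {" that contain a "hardware ethernet <mac>;" line.
find_lease_for_mac() {
  mac="$1"
  leases="${2:-/var/lib/dhcpd/dhcpd.leases}"
  awk -v mac="$mac" '
    /^lease /                         { ip = $2 }
    $1 == "hardware" && $3 == mac ";" { print ip }
  ' "$leases"
}
```

If the old notebook's MAC still shows up, deleting the stale entry (and restarting dhcpd) or shortening the lease time frees the slot.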

The second effect of the changed MAC address was that all the VMs in Fusion started asking whether they were moved or copied. Always answer "moved", otherwise your VM will get a new virtual MAC address generated – which can have an impact on your DHCP space (if in bridged mode).

The third, quite unexpected effect was that Time Machine stopped working, with a ‘volume cannot be found’ message. See e.g. http://www.macfixitforums.com/ubbthreads.php/topics/454151/Time_Machine_can_t_find_backup for more details. It looks like Time Machine uses the MAC address to match the backup volume with the computer it belongs to. The clean solution is to erase the backup and start from scratch – or use a different disk. The partial solution is a hack – in /Volumes/Time Machine Backup/, locate the file that has the MAC address encoded in its name, e.g.:


Get the new MAC address (from ifconfig) and copy the old cookie file to a new cookie file, using the new MAC as the name. This made the volume accessible again – with the unfortunate effect that pretty much ALL content was considered not backed up, and the first backup took away over 90 GB of disk space. But at least the old data was still there, should the need ever occur.
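A minimal sketch of the hack, assuming the cookie is a hidden file named after the MAC address with the colons stripped (the helper name is mine; double-check the actual file name on your backup volume before copying anything):

```shell
# Turn a MAC address into the assumed Time Machine cookie-file name,
# e.g. "00:1b:63:aa:bb:cc" -> ".001b63aabbcc".
mac_to_cookie_name() {
  printf '.%s\n' "$(printf '%s' "$1" | tr -d ':')"
}

# On the Mac itself you would then do something like (not run here):
#   NEW_MAC=$(ifconfig en0 | awk '/ether /{print $2}')
#   cd "/Volumes/Time Machine Backup"
#   sudo cp .<old-mac-cookie> "$(mac_to_cookie_name "$NEW_MAC")"
```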

Maybe this experience will be useful for somebody else – it was quite a good learning experience for me.

The myth of premium hardware or why Ottawa needs Apple Store


Back in November 2007, when I was buying my Macbook Pro, I ordered the AppleCare option, which added several hundred dollars to the already pretty expensive notebook price. For a moment I was tempted to go without it – after all, Apple makes top-grade, high-quality hardware, and considering the pretty low failure rate I had seen with my Windows-based notebooks between 1998 and 2006, why pay more? Little did I know how much mileage I would get from the extra cost :-(.

It is sad to admit, but during the last 12 months I have had three major hardware issues with my Macbook Pro. Maybe it was just bad luck and I got a lemon, but the frequency and seriousness of the hardware failures make this MBP the least reliable notebook I have ever owned.

In June 2008, the 250 GB hard drive failed and had to be replaced. Fortunately, I had a disk clone, and Time Machine brought back everything, so no data was lost. One month ago, one of the two 2 GB DIMMs died and had to be replaced (to be fair, this one was not original Apple RAM, but a cheaper version purchased and installed by an authorized service centre). And the third issue was the failure of the graphics card last week – which will likely mean a mainboard replacement. The notebook is still in service, and for the 8th day I am computationally impaired, locked to a spare Mac Mini or my home iMac/Macbook, stubbornly refusing to use a Windows-based notebook …

So how would an Apple Store help the situation – other than walking in and buying a replacement? From discussions with other Mac users, it seems that Apple Stores may be the only type of service provider that actually can (and does) keep a stock of replacement parts. All other service centres have to take your computer in, diagnose the cause, order the replacement part and wait until it is shipped to them from Apple before they can finish the repair. How quickly the part arrives is completely out of their – as well as your – control. It can add quite a few days to your repair time.

Another consequence of the no-stock rule is that in this economy, to optimize shipping costs, parts are most likely sent in batches. Which means more delay …

Yes, I know – it could be much worse: one could have to mail the computer somewhere in the USA or overseas, rather than dropping it into the friendly hands of a local Apple-certified service depot, and have it mailed back. That would add even more time, more cost and a much bigger chance of additional damage in transport. But on the other hand, if we had an Apple Store stocked with replacement parts in the city, with a bit of luck one could walk in, drop off the machine and pick it up fixed the next day. Would that not be cool?

Let’s hope that Ottawa’s Apple Store is more than just a rumor …

It’s alive !


I have been aching to blog about this since December 18th, when our system quietly and gently slipped into public visibility. Marked as Beta (thanks, Google, for making this a legitimate way to go live), it comfortably made it through the Christmas shopping season into 2009.

Now that the site has been announced and mentioned a few times in the media, I guess it is OK to mention it here too.

What is "it"? A new, fresh eCommerce site selling music. A lot of really good music. Without DRM or any other nonsense – as plain, good, high-quality (mostly 320 kbps) MP3s. The selection is actually very good, growing from several hundred thousand songs in December to a few million once the full catalog is loaded. More great music is being added every week.

The design of the site is pretty and modern, leveraging a lot of jQuery and Flash magic. On the backend it is powered by probably the most powerful eCommerce platform out there – the ATG eCommerce Suite.

We go back a long time with ATG, starting in 2001 when we (Montage, at that time) decided to bid an ATG-based solution for two major federal government RFPs and won them both. Over the following years, we dug deeper into this very rich and powerful platform, built more functionality and added a few more customers. This project was our first full eCommerce implementation based on ATG 2007.1, but definitely not the last one – it looks like, despite the economy's maladies, demand for eCommerce, and especially ATG-based eCommerce solutions, is surprisingly large, and an interesting amount of work is coming our way …

I am tremendously proud of what our team was able to deliver. It took lots of dedication and sweat: we had a pretty aggressive deadline (the full store was implemented in under two months) and the complexities of the environment to deal with. What makes me even more proud is that the system is running in a production environment architected and developed by our team. It is not often that a development team has the opportunity to be involved in the complexities of deploying a large enterprise system and putting it into production.

Ah, I almost forgot – the URL is http://www.zik.ca/. See for yourself. Right now it is only in French (the primary target audience is the French-speaking Canadian population); English is coming later.


I would like to thank everybody who helped make this an amazing project experience. We were lucky to build a great relationship with both our customer and the end customer, as well as with our development partners in Montreal working on different stores in the Virtual Shopping Mall.

Thanks to ATG for such a rich and powerful product suite. It is like a great sports car: very powerful and requiring skill to master, but once you get it, you can do amazing things with it.

And last but not least, big thanks to everybody from our delivery team that made this possible. You guys rock.

Btw, if you would like to work with people who can build things like this, and you have either ATG experience or at least solid J2EE, Spring, Hibernate and JSP/JSTL skills, send your resume to careers at thinknostic dot com. We are hiring again :-).

No telecommuters, please – you must be able to live and work in either Ottawa or Toronto. Speaking French is not required, but it is definitely a plus.

How to un-stick an unsuccessful OS-X upgrade


Here is the context: in order to upgrade iLife ’08 to iLife ’09 (which is very nice, btw), I had to install the 10.5.6 upgrade. And according to Murphy's law, one of the 2 GB DIMMs in my MacBook Pro went bad exactly during the OS-X upgrade process.

It had two rather unpleasant consequences:

  • some of the patch files got downloaded and saved in a corrupted state
  • the machine did not boot back after the restart

The second problem was fixed by replacing the bad DIMM, but the first made the upgrade to 10.5.6 impossible: the files were downloaded, verification failed, and after a restart I was back to square one. There was no obvious way to "undownload" the files.

The Apple Support representative recommended downloading the update as a DMG from Apple Downloads and running the installer. The DMG of 10.5.6 was over 300 MB, while the patch file was barely 190 MB, so I wondered whether there was a better way. As it turned out, there is, and it is very easy.

The location of the downloaded files is /Library/Updates, which is normally almost empty:


During the update process, this is the location where OS-X keeps the downloaded files, as shown here:


All you need to do is delete these downloads (keep the plist, of course) and try Software Update again. The updater will re-download the files and everything will work as expected:
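The cleanup can be sketched as a small shell function, so it can be pointed at any directory; keeping anything named *.plist is my conservative reading of "keep the plist", so verify the directory contents before deleting:

```shell
# Delete pending Software Update downloads, keeping the plist index.
# The default path (/Library/Updates) is the location described above.
clear_pending_updates() {
  dir="${1:-/Library/Updates}"
  # show what is about to be removed
  ls -la "$dir"
  # remove everything in the directory except *.plist files
  find "$dir" -mindepth 1 -maxdepth 1 ! -name '*.plist' -exec rm -rf {} +
}
```

Run it as root (the directory is not user-writable), then launch Software Update again.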


VMWare and slow clocks – take 2


The CentOS 4.x we are using as the platform for our VMware VMs (yes, I know there is version 5, but we need to be compliant with RHEL 4.x because of the ATG requirements) installs the NTP client by default.

Here is the sequence of steps that needs to be performed to enable automatic synchronization of the clocks.
1) Make sure the DNS works
cat /etc/resolv.conf

sudo vi /etc/resolv.conf

– check that the nameservers point to something meaningful, e.g.

2) Check that you have ntpd installed

cat /etc/ntp.conf

– inspect the content of the file; it is quite well documented

3) Make it run in runlevels 2345

/sbin/chkconfig --list | grep ntpd

– this will most likely show “off” for all runlevels

sudo /sbin/chkconfig --level 2345 ntpd on

4) Initial sync

sudo /usr/sbin/ntpdate pool.ntp.org

– do this twice; on the second run you should get a very small difference

5) Start service / restart service

sudo /sbin/service ntpd start
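For convenience, the steps above can be collapsed into a single script – a sketch assuming the stock CentOS 4.x paths used in the commands above (run it as root, and check /etc/resolv.conf and /etc/ntp.conf by hand first):

```shell
#!/bin/sh
# One-shot version of steps 1-5 above (CentOS 4.x paths, run as root).
set -e
# 1) sanity check: DNS must work for pool.ntp.org to resolve
cat /etc/resolv.conf
# 2) the ntpd config should already be installed
cat /etc/ntp.conf > /dev/null
# 3) enable ntpd in runlevels 2, 3, 4 and 5
/sbin/chkconfig --level 2345 ntpd on
# 4) initial sync, twice -- the second run should report a tiny offset
/usr/sbin/ntpdate pool.ntp.org
/usr/sbin/ntpdate pool.ntp.org
# 5) start the service
/sbin/service ntpd start
```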

I am making this a blog post so that the next time I need to do it, I can find it easily :-). Maybe somebody else will find it useful as well.