Calendar synchronization between iPhone, Google calendars and iCal

2008/09/09

Here is my situation: I am using three different Macs (a MacBook Pro and two iMacs – one at home and occasionally a second one at work) and need to keep the contacts and calendars synchronized at least between two of them (the MBP and the home iMac). I use an iPhone, which also has a calendar and contacts. And in the cloud, there is Google Calendar. Actually, more than one: my own calendar and the shared company calendar of my colleagues from Thinknostic. Quite a challenge to keep it all in sync.

Here is the setup that works for me:

To synchronize iCals between different Macs, as well as between Mac and iPhone, the most seamless and completely painless way seems to be MobileMe. I registered for MobileMe back when it started – just to grab the name (miro at me dot com) – and did not use it much at first. Thanks to the 30+60 day extension, I still have until mid-November to decide whether I will keep it or not. The sync feature alone seems to be worth at least half of the subscription price. The other half is push email and a really easy way to synchronize files between Macs … Having online access to your contacts when you are not on a Mac can also be useful at times. But back to the workflow:

Theoretically, both iCal and Address Book get synchronized every time you sync the iPhone with the Mac. This is not as useful as it may sound, because I do the sync only about once a week – to download images from the camera, upload new podcasts and, of course, update the apps. I would not do it more often, for a simple reason: the synchronization takes forever. Ten minutes at least. With MobileMe, you can sync as often as you want (every minute, if you really want) – and also access the information online via a browser.

Synchronizing with Google Calendar used to require helper applications like GCalDaemon (free, open source) or Spanning Sync. Not any more. Since Google opened up the API this summer, it can be done out of the box: read more here.
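
For the record, the account settings look roughly like this (based on my reading of Google's CalDAV documentation at the time of writing – substitute your own address):

    Account type : CalDAV
    User name    : your_address@gmail.com
    Password     : your Google password
    Account URL  : https://www.google.com/calendar/dav/your_address@gmail.com/user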

This solution is almost perfect; the only relatively minor problem is that the sync between iCal and Google does not include the iPhone: the calendars you have added to iCal from GCal are left out, because MobileMe synchronization does not include calendars that were not created in or are not owned by the iCal application. This is not as big a deal as it may seem, because from the iPhone you can always access Google Calendar through its very good mobile Web UI.


iPod Touch: the best PDA I ever had

2007/11/30

Why did I arrive at this conclusion so suddenly? After all, I have had the iPod for over a month and a half now and never really used it for anything beyond podcasts/music/videocasts.

The answer is simple: 1.1.2. The firmware update that appeared during the last sync upgraded the iPod to the latest version, which (in addition to the obligatory bug fixes) brought the long-needed ability to add or modify an event in the calendar and to add or modify an address card.

With this in place, I can finally use the iPod as a PDA – because you will most likely learn about a new address or a phone number change when you are not at the computer or online, and without this feature there would be no way to capture it. I tried pen and paper – but it is so easy to lose the paper or forget to enter it …

Because all of my calendar data lives in the Google cloud, I needed a way to sync iCal data from my Mac with Google. There are at least four different ways to do that:

1) SpanningSync – a subscription-based product

2) gSync – a very affordable program from Macness.com

3) the open-source, Java-based GCalDaemon – very interesting, but with a fairly lengthy setup

4) using the free Plaxo plugin for Address Book

I looked into SpanningSync before and even evaluated it, but I do not think the service is worth $25 a year. I would not mind buying the application for that price, but as a yearly fee it is too much. Besides, SpanningSync routes your data through a third-party server, which is another issue – you pay twice: on top of the dollars, you give up control over your data.

gSync offers a much better price/performance ratio – for a one-time fee of 10 pounds you can use it as long as you wish, or at least as long as Google does not modify their API. I cannot comment on its functionality, because I ended up using solution #4.

I have been using Plaxo for over a year already to keep address books in sync between different platforms. It was then the only practical way to synchronize Thunderbird contacts, Address Book on the Mac, Outlook contacts and Yahoo. Now, in addition to synchronizing address books, it can also synchronize calendars and has added Google as a data source. The contacts synchronization is one-way only (you can load your contacts from GMail into Plaxo), but calendars are synchronized both ways without any issues. The basic Plaxo service is still free; if you go premium, you get a de-duplication option and synchronization with LinkedIn. Unfortunately – same as with SpanningSync – the value of the service for me is not as high as their price. YMMV.

I downloaded the addresses from Plaxo, synced them to the iPod and verified that you really can edit or add an appointment on the iPod, and that with the next sync it makes it up to the cloud and shows up in your Google Calendar. And vice versa. If you are online, updates in Address Book or iCal are synced almost immediately to Plaxo and, with a short delay, to Google.

A few recommendations: after adding the Google sync point and setting up the link, do one synchronization and then switch the contact synchronization off (keeping only the calendar sync). Why? Because your Google contacts most likely contain just names and emails, or even emails only, as Google extracts them from incoming messages. After the sync, you will want to clean up and consolidate your data by assigning the newly collected emails to people in your address book and deleting the incomplete records. Without switching the contact synchronization off, you will get them all back and can start the deleting again 🙂 – as I learned the hard way.

It is also important to understand which calendar the appointments newly created on the iPod are added to – especially when your settings in GCal and iCal do not match 100%.

Now back to the subject: why is this the best PDA? I have used several Palm-based devices as well as Windows Mobile PDAs (one of them – the Toshiba e830 – is still in use as an eBook reader), and every one of them could synchronize appointments and addresses. The biggest difference is the Touch user interface – you have to try it to understand. No stylus-based Palm or PocketPC ever came close to the simplicity and functionality of the Touch. Not to mention the beauty of the UI, of course.

The second reason it is the best is the toolchain behind the iPod. Syncing via iTunes with Mac / iCal / Address Book is an order of magnitude better user experience than using ActiveSync and Outlook. True, using Plaxo you could move the data from an iPaq to Google the same way – or maybe the new Outlook is even Google-aware – I have no idea, because I stopped using Outlook in 2005.

And the last but not least reason is that NONE of my previous PDAs could serve as an iPod, show images or video, or even surf the Web via WiFi. I stopped using WiFi on the Toshiba PocketPC once it was obvious that for 99% of sites the pocket version of Internet Explorer is simply unusable. I stopped trying to use it as a music player because playing a song required the stylus and both hands. And the music stopped when you switched the device off, so instead of enjoying a podcast I was fiddling with the screen backlight settings to get at least a few hours out of the batteries.

Now if the iPod had at least a small (1-2 megapixel) camera, it would be almost perfect … but for that, I guess, you need to get an iPhone.


Web-based task manager with offline mode – finally!

2007/07/29

Yes, such a nice tool really exists, and its name is Remember the Milk.

At first sight, RTM is not extremely visually appealing, but it is a very nice and very usable Ajax application. After some time using it, you may come to appreciate the “less is more” design. The offline access is thanks to Google Gears magic: just flip the small green arrow (which downloads your data to a local cache) and voilà – you can go offline and keep editing or adding tasks; your changes will be remembered. Later, after going online, the changes are synced back.
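
Under the hood, Gears gives the page a local SQLite database (plus a cache for the UI files) that keeps working without connectivity. Here is a rough sketch of the pattern – not RTM's actual code, just an illustration of the google.gears.factory API:

    // Sketch of the offline pattern Gears enables (illustrative only).
    // `google.gears` is provided by the Gears browser plugin via gears_init.js.
    declare const google: any;

    // Open a local SQLite database; it works with or without a network.
    const db = google.gears.factory.create('beta.database');
    db.open('tasks-offline');
    db.execute('CREATE TABLE IF NOT EXISTS tasks (id INTEGER, name TEXT, dirty INTEGER)');

    // While offline, edits are recorded locally and flagged as unsynced.
    function addTask(id: number, name: string): void {
      db.execute('INSERT INTO tasks VALUES (?, ?, 1)', [id, name]);
    }

    // Once back online, the application reads the flagged rows, pushes them
    // to its server and clears the flag.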

RTM has another killer feature – it integrates very well with Google Calendar. Your tasks with due dates (which all meaningful tasks should have) appear in your calendar without you needing to go to the RTM site. You can even edit them from there. For details, see the help.

RTM has pretty much everything most people may need: projects, lists, tagging, locations (with map integration), contacts with task delegation and, of course, an API. Give it a look …


Reading blogs and feeds offline

2007/05/30

It happened! After starting my preferred feed aggregator – Google Reader – I noticed a new feature that sounded so good I almost could not believe it: offline reading. Using a new plugin – Google Gears – the reader can cache 2000 items locally, allowing you to go offline and read the news, blogs or whatever you have subscribed to absolutely anywhere.

The installation starts with this screen:

screen-10.png

and goes on with this:

screen-11.png

After a browser restart, all the magic hides under a small greenish button next to Settings:

screen-12.png

Pressing the button downloads the 2000 items. The download is very fast; on a cable modem it takes about 10-15 seconds.

screen-13.png

After the download, you can disconnect the cable, switch off the wireless – or whatever you do – and take your latest and greatest blogs and news wherever you want. Here is the link to the “official” blog announcement.
Thanks, Google – this is a great feature!


Time to leave CVS?

2007/04/14

A friend sent me a link two days ago making me aware that Mozilla has decided to change the source code control system (SCCS) they use. When a project as large and important as Mozilla moves ahead and changes something as basic and fundamental to development as the SCCS, there must be a good reason behind it. What was even more interesting was the system selected: no, it was not Subversion, but something much less established – Mercurial. Actually, until they selected it I was barely aware of its existence.

For many years, the synonym for version control was CVS. At least in the open source world and in really large projects, CVS was the SCCS. Then things started to change, and today, if you look around large open source projects, you will see that CVS and Subversion probably still lead, but several really large projects are using very different tools – e.g. the Linux kernel is using Git.

So – is it time to review our technology toolkit in a tool as basic as source code control? The writing is on the wall: one of the few remaining computer magazines in Chapters that I glanced over today (I forgot the name, something Linux related) carried a review of the current free version control systems. The article compared and evaluated RCS, CVS, Subversion, Git, Bazaar and Monotone. In their evaluation, the best marks went to Subversion – it got 9/10 points, mainly because of its ecosystem, support, documentation, user base and add-on/tool support. The runner-up was one of the new kids on the block – a distributed version control system (DVCS) named Bazaar. Good old CVS ended up with 6 points and granddaddy RCS with 3/10.

Among the new tools, two important trends are visible: new approaches to workflow, using distributed and decentralized VCS (DVCS) rather than a central server; and a shift of the implementation platform from traditional very low-level languages (C) and static languages (C++, Java) to dynamic, interpreted scripting languages.

Using dynamic, high-level languages for an SCCS is a natural result of the increasing computing power of hardware – which makes speed and memory limitations disappear – as well as of new, more complex usage scenarios that need more sophisticated software to address them. Dynamic languages such as Python or Ruby also have excellent support for networking, handle most communication protocols right out of the box and are inherently portable across all supported platforms – something that was often an issue for older systems written in C/C++. Bazaar, for example, is written in Python and can therefore run on almost any platform.

The shift towards decentralized and distributed systems is a logical continuation of the trend that started with abandoning the explicit (reserved) check-out mode of work. Does anybody remember the joys of Visual SourceSafe, and the problems when a colleague left for two weeks of vacation with a few critical files checked out? In systems using reserved checkouts, every programmer had a read-only copy of the code and could edit only the files explicitly checked out – and a checkout was limited to at most one person at any given time. The big change CVS brought was letting everybody have all the code writable (in his or her own sandbox) and allowing parallel changes to the same file. Simultaneous changes from different people were merged using a two-step process: update (transfer the changes from the repository to the local sandbox) and commit (upload the local changes to the single, central repository). Possible conflicts were resolved between the update and the commit.
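
For illustration, a typical CVS editing session looks like this (the project and file names are made up):

    cvs checkout myproject          # create a writable local sandbox
    vi src/parser.c                 # edit freely; no reservation needed
    cvs update                      # step 1: merge others' commits into the sandbox
                                    #         (resolve any conflicts now)
    cvs commit -m "fix the parser"  # step 2: upload changes to the central repository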

What is the fundamental difference between centralized and decentralized VCS? In my limited understanding of how DVCS work (I have not used any of them on a real project yet): unlike CVS, where there is one central repository and every developer has a local copy of a single state only (the sandbox), with a DVCS every developer has his own copy of the whole repository and therefore access to all versions of all files, without depending on a centralized server. Every node is a repository, and every commit is (or can be) local. Rather than synchronizing a local sandbox with the central repository, the independent repositories are synchronized with each other. Nodes can synchronize directly, as long as the changes are made available to the other nodes by “publishing” them – usually over HTTP or SSH/FTP. The content of your repository thus depends on the number of nodes you have synchronized with – and on their own synchronization history.
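
A sketch of the same session in Mercurial – one of the DVCS tools mentioned above – shows the difference (the URLs are made up):

    hg clone http://hg.example.com/myproject   # copies the WHOLE repository, with history
    vi src/parser.c
    hg commit -m "fix the parser"              # the commit is local; works offline
    hg pull http://hg.example.com/myproject    # fetch changesets published by another node
    hg merge                                   # merge them with the local line of work
    hg commit -m "merge"
    hg push ssh://colleague.example.com/myproject   # publish to yet another node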

New models of interaction are just one of the emerging features. A DVCS can easily simulate the workflows and processes of a traditional VCS, but it also allows several workflows that are impossible with a centralized VCS – like committing changes and getting diffs without any connectivity to a remote repository. (To be completely fair, you can do that to a certain extent with Subversion: because the latest state fetched from the repository is cached locally, you can always diff to see the local changes made – or undo them – but only against that one revision.) A DVCS is also very flexible – it allows very dynamic creation of groups and branches and has no single point of failure.
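
That Subversion caveat in practice – the commands below all work without any network access, but only against the one locally cached revision:

    svn status              # list local modifications
    svn diff src/parser.c   # diff against the cached BASE revision
    svn revert src/parser.c # undo local edits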

So what is the answer to the question in the title? Should we throw away the traditional VCS and start using the new distributed, decentralized tools? I am not sure. With flexibility and freedom come overhead and responsibility. Synchronizing large repositories can be expensive – both in time and in network bandwidth. More importantly, it may require changes to the project management style and additional effort invested into keeping the codebase under control – see the development methodologies in the Bazaar manual. When no repository is central, it is much harder to say where the latest, most complete version of the system's code is. And – unless you address it in your process – it may be hard to tell whether it is even currently available. A DVCS seems to make sense mostly when the team is very large and geographically distributed. I do not see many advantages for a smaller development team working in one location – other than fixing several CVS/Subversion annoyances and possibly providing better and easier merging. What is also important is tool availability and IDE integration. All this needs to be better understood. Right now, my answer to whether to switch is: no. But to start looking into DVCS and evaluating the pros and cons – absolutely yes.

A nice comparison of the features of the various SCCSs is available here.


Avalon – reloaded …

2007/03/20

Now this is something really interesting: as found on the Adobe Labs site, their technology codenamed Apollo is approaching Alpha status. What Apollo is – in a nutshell – is another virtual machine, similar to the Java runtime or the .Net framework, with a few twists: it is multiplatform (like Java) as well as multi-language (like .Net) at the same time. Before the flame wars start – I am aware that the JVM is capable (more or less) of supporting multiple languages beyond Java, and also that .Net is (more or less) capable of running on non-Windows platforms (e.g. the Mono project), but that is not the point. The point is what is different about Apollo compared to the JVM or the CLR.

The first thing that is different is the developer skill set. Adobe is trying to leverage the experience of Web application developers and allow traditionally Web-oriented technologies – HTML, Flash, Javascript and PDF – to be used in the context of desktop applications. The other is that Apollo is designed around the notion of being “occasionally connected” – in other words, online/offline applications. It supports well the regime where you work offline with a local copy of the data and reconnect / synchronize with the master copy online, providing both access to local resources (like a traditional desktop application) and a rich, asynchronous, XML-capable communication library (like a Web 2.0 application running in the browser on the client).

Using Javascript/HTML for desktop-ish apps is not really an innovation. If you look at how Firefox extensions are created, or at Widgets/Gadgets in Vista or OS X, you will see something very similar. The same idea was also implemented in Avalon – renamed Windows Presentation Foundation – which uses XML to define the user interface and “scripting” to determine the logic. In WPF, you use the .Net languages for the “scripting” (Javascript being one of them), and you need a .Net 3.0-capable platform to run it (currently Windows XP SP2, Windows 2003 and Vista, unless I am mistaken). Even with a similar language (Javascript), programming WPF is quite different from Web application programming and requires different skills. Allowing the use of Web app development skills, plus a variety of Flash/HTML/Javascript/PDF combinations, may be very appealing to somebody who needs to create a desktop-like application without learning WPF. And platform independence is an added bonus that could finally solve a problem Java never really addressed well: it has been possible to create rich-client, Web-startable applications for several years, and yet they never became mainstream. Possibly because of the complexity of creating Swing-UI applications in the first place?

Compared to Firefox, the important point is that Apollo departs from the browser while keeping the Web capabilities – such as rendering Web pages or creating mixed apps. Eliminating the browser matters from a security point of view: an installed runtime can give an Apollo application access to local machine resources, such as local files, without compromising security – as it would in the case of browser-based applications. Access to local resources together with a modern approach to remote connectivity is very interesting. Browsers are very much Web 1.0, with a request/response-shaped vision of the world, and adding asynchronous capability via AJAX was one grandiose hack … Another good reason for getting rid of the browser is the simplicity of supporting one version of a runtime, versus making sure that your great new Web 2.0 app works with the wildly different Javascript/DOM capabilities of Internet Explorer 5, 6 and 7, Firefox 1.0, Safari, Opera, and so on …

The demonstration videos on Lynda.com show a few interesting capabilities of the new application types – follow the ‘Sample Apollo applications’ link, and also here.

It is still Alpha, so it is too early to get excited: we have no data about performance, resource requirements or real-world application development experience. On the positive side, both the runtime and the SDK should be free. And it is always good to have more options available 🙂


Offline access to GMail and Google Docs coming?

2007/02/06

However much I like Web applications, the fact is that as soon as you do not have connectivity, they are of very little use. Well – maybe not for long. The Dojo Offline Toolkit makes a promise that sounds almost too good to be true:

Imagine a version of GMail with a “Work Offline” button on the left-hand side of the screen. When pressed, GMail downloads 100 of your most recent emails into Dojo Offline, including pieces of it’s user-interface. A user can now close their browser and leave the network, stepping on an airplane for example. Once in the air, the user can then simply open their laptop and browser and type in mail.google.com. The GMail UI magically appears, along with their 100 most recent emails. A user can read these mails, compose new ones, or reply to existing ones. A flight attendant announces that the plane will land soon; the user closes their browser and laptop. Later, when they are back on the network, they can click the “Work Online” button, which will send all of their locally written emails to the GMail server.

or even for Google Docs:

simply select which documents you want to have locally. Later, open your browser and navigate to docs.google.com, working from anywhere you want, even without a network. When you are done, press the “Sync” button to send it back to the server with your changes when the network reappears.

The technology behind it is a local proxy with Javascript autoconfiguration and a local file cache. It is a project in progress, so unfortunately you cannot try it out right away 😦 – but it seems very promising. Best of all, it should be multi-platform (Windows, Mac, Linux) and open source (BSD license).

Cannot wait!