Bye bye Ruby, hello Groovy

2009/03/17

I first discovered Ruby back in 2006 (yes, I know, I was late to the game) and immediately fell in love with it. The dynamic nature of the language, its consistency, pure aesthetics and practicality certainly changed the way I saw software development and programming languages.

Since that time, I have made several attempts to integrate Ruby into my professional life and make it part of the toolbox. It was cool to play with Ruby in my spare time, but I wanted to use it on projects whose main development language/platform was Java. Use it as a scripting and glue language. Use it as a toolkit language to e.g. generate test data, access databases, convert files, build projects and maybe even build small pieces of Web applications (admin apps, for example).

It never worked. The main problem was the availability of the Ruby platform in all environments. While the JVM was there by default, Ruby had to be installed and sometimes compiled for the more exotic platforms. And that can be a big deal if you do not have full control over the environment – a scenario that is pretty much guaranteed in an enterprise setting. It is hard to argue with a sysadmin saying “You want to install that in production just to run scripts? Why don't you use Perl or Bash or Java, which are already there?”

For a little while I thought that JRuby might be the way. After all, all you need is a JVM, and JRuby is just another JAR, right? As Goethe said, all theory is grey and green is the tree of life :-). A language is only as good and useful as the components and libraries available for it – one certainly does not want to write everything from scratch. Libraries in Ruby are Gems, and Ruby provides a very nice, mature and IMHO superior component management system compared to Java's JARs, because it handles different versions of the same Gem very well (maybe some day there will be Gem hell, after DLL hell and JAR hell 😉 ). Unfortunately, some Gems (by Murphy's law, most of the really interesting ones) are for performance reasons built as a thin Ruby layer around a native library written in C. JRuby does not support that, making many of the Gems unavailable.

Even if JRuby had all the Gems available, there would still be the problem that the Gem system and the JAR system are different and do not quite fit together. Also, from a language point of view you certainly can use Java objects in JRuby and vice versa, but doing so makes you feel slightly schizophrenic – what reality am I in? Is this a Java Java class or a Ruby Java class?

The third problem, which I encountered after building a Web app in Rails, is that the deployment model is very different from the Java deployment model, which I and the people in the organizations we work with understand really well. We know how to deploy a Java enterprise app so that it scales, and how to monitor and maintain it. But not a Rails app with all those Mongrels, lighttpd's and other creatures :-). This leaves many open questions like “How do we size hardware for the expected load?” for which I do not have answers – and judging by the well-publicized issues with Rails apps' scalability, even the best and brightest in the Ruby world may not have them either, or at least some people say so.

At about the same time I discovered Ruby, I also became aware of a strange Java dialect called Groovy. It sort of tried to do the same thing I hoped to use Ruby for, only from firmly within the Java environment. The original reason I did not want to look deeper at Groovy was that, compared to the straight elegance of Ruby, it looked kind of ugly. The Java skeleton was sticking out in the wrong places, and altogether it just did not feel as good as Ruby.

I have to publicly admit I was wrong.

Being a Mac user, I have a license for going after good looks and white shiny objects, but when it comes to programming languages, good looks may just not be enough. Reality is the proof.

During the last 12 months, we have quietly and very successfully used Groovy components and pieces on three large projects. It fit perfectly, and we never ran into any of the issues above.

Through these projects, I learned to appreciate the Groovy way; my sense of aesthetics stopped being offended by certain syntax constructs in Groovy, and I even started to like some of them better than their Ruby counterparts. For example, I am now convinced that Groovy categories – which explicitly alert the programmer that a class extension is being used – are a safer and better approach than re-opening any class, Ruby style (which is still possible in Groovy by assigning a closure to a member of the metaclass). Imagine how confusing it can be for software maintenance when the reopening and the usage happen far apart in the source code.
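To make the difference concrete, here is a minimal sketch of the two styles (class and method names are mine, purely illustrative):

```groovy
// A category: the extension exists only inside the use(...) block, so the
// reader is explicitly alerted that String has been extended right here.
class ShoutCategory {
    static String shout(String self) { self.toUpperCase() + '!' }
}

use (ShoutCategory) {
    assert 'hello'.shout() == 'HELLO!'
}
// Outside the block, 'hello'.shout() would throw MissingMethodException.

// The Ruby-style reopening is still possible via the metaclass,
// but it changes String everywhere from this point on:
String.metaClass.shout = { -> delegate.toUpperCase() + '!' }
assert 'hello'.shout() == 'HELLO!'
```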

But most importantly, the painful realization “how the heck do I do the XYZ thing in this language? If only I were coding in Java, it would be so much simpler” is history with Groovy. Everything I have been using for the last 12+ years in Java is still there – all the goodies of Jakarta Commons and way more.

The Groovy community seems to be less opinionated and less self-righteous than the Rails/Ruby community, and more understanding of the weird requirements and idiosyncrasies of enterprise environments. Rather than being told “you should not want to do this” or “DHH thinks it is wrong”, you may actually get a helpful pointer to a useful website or blog explaining how to do that stupid thing in Groovy or Java or a combination of both. Because, you know, when one needs to accomplish something that seems wrong and illogical, being told that it is wrong and you had better forget about it does not really help. People who have worked on real enterprise systems integration understand that the cost of touching or changing certain systems can be so prohibitive that it is out of the question, and doing the technically wrong thing may be the right (and only) option for a given situation and customer.

Therefore – bye bye Ruby, hello Groovy. The next thing to embrace and embed will be Grails.


It’s alive!

2009/02/18

I have been aching to blog about this since December 18th, when our system quietly and gently slipped into public visibility. Marked as Beta (thanks, Google, for making this a legitimate way to go live), it comfortably made it through the Christmas shopping season into 2009.

Now that the site has been announced and mentioned a few times in the media, I guess it's OK to mention it here too.

What is “it”? A new, fresh eCommerce site selling music. Lots of really good music. Without DRM or any other nonsense – just plain good, high-quality (mostly 320 kbps) MP3s. The selection is actually very good, starting with several hundred thousand songs in December and growing to several million once the full catalog is loaded. More great music is being added every week.

The design of the site is pretty and modern, leveraging a lot of jQuery and Flash magic. The backend is powered by probably the most powerful eCommerce platform there is: the ATG eCommerce Suite.

We go back a long time with ATG, starting in 2001 when we (meaning Montage at that time) decided to bid an ATG-based solution for two major RFPs in the federal government – and won them both. During the following years, we dug deeper into this very rich and powerful platform, built more functionality and added a few more customers. This project was our first full eCommerce implementation based on ATG 2007.1. But definitely not the last one: it looks like, despite the economy's maladies, demand for eCommerce – and especially for ATG-based eCommerce solutions – is surprisingly large, and an interesting amount of work is coming our way …

I am tremendously proud of what our team was able to deliver. It took lots of dedication and sweat: we had a pretty aggressive deadline (the full store was implemented in under two months) and a complex environment. What makes me even more proud is that the system is running in a production environment architected and deployed by our team. It is not often that a development team has the opportunity to be involved in the complexities of deploying a large enterprise system and putting it into production.

Ah, I almost forgot – the URL is http://www.zik.ca/. See for yourself. Right now it is only in French (the primary target audience is the French-speaking Canadian population); English is coming later.


I would like to thank everybody who helped to make this an amazing project experience. We were lucky to build great relationships with both our customer and the end customer, as well as with our development partners in Montreal working on the other stores in the Virtual Shopping Mall.

Thanks to ATG for such a rich and powerful product suite. It is like a great sports car: very powerful and demanding skill to master, but once you get it, you can do amazing things with it.

And last but not least, big thanks to everybody from our delivery team that made this possible. You guys rock.

Btw, if you would like to work with people who can build things like this, and have either ATG experience or at least solid J2EE, Spring, Hibernate and JSP/JSTL skills, send your resume to careers at thinknostic dot com.
We are hiring again :-).

No telecommuters please – you must be able to live and work in either Ottawa or Toronto. Speaking French is not required but is definitely a plus.


After the Spring Storm

2008/09/25

A few days ago, I received an email inviting me to attend the SpringSource conference and offering a $300 discount if I decided to buy before the end of the month. The timing was pretty ironic, considering the recent controversy about the SpringSource Enterprise Maintenance policy change. There was a heated discussion on TheServerSide.com, and Rod Johnson even compiled an FAQ to explain the whole thing.

So what is the issue? SpringSource, the commercial entity behind the community-driven Spring Framework, decided to capitalize on its investment and push the community toward paid support contracts. Whether or not this was related to a funding round and VCs entering the game is not important. Unfortunately, the terms were not well explained, and SpringSource initially did a very poor job communicating the actual change. The community overreacted, and a lot of people started to call for a code fork (a project “Summer” or similar) – or for switching to a different framework.

What the change really means is that maintenance releases will not be available (as binary, official downloads) after three months unless you pay for support. Under the new policy, after (say) release 2.0, everybody could download binary distributions of 2.0.1, 2.0.2 etc. for only three months. If at the end of month three you needed fixes in the 2.0 branch, you would be on your own. The source code repository with the 2.0 branch would still be available, with fixes committed (see below) but without tags (or labels), and certainly without pre-packaged downloads of 2.0.4 or 2.0.5. You could compile the current head of the 2.0 branch yourself, but it would not be clear which version you were actually running.

This is actually not as terrible a limitation as it may seem. Maintaining an old code base and backporting changes from trunk into branch fixes is a lot of work, requires testing and so on. SpringSource has invested a lot in the codebase and, as a commercial entity, provides a service to its users. Release management and software maintenance are a slightly different type of work than developing a great new release, and I can imagine there are even fewer contributors and volunteers for this (seldom appreciated) kind of work. Multiply that by many branches and you will see the magnitude of the problem.

What SpringSource did wrong, in my opinion, is take away something that had been available to the community for free, and that created a lot of negative sentiment. There are two ways to get people to pay for a free service – and cutting it off unless you pay is the wrong one. The right way is to offer more on top of what is available for free, something that has value specifically for enterprise clients. There are several areas of need: more and much better samples, template applications, guides and tutorials. Their availability has huge value for customers, saving development time and improving the quality of the products built on the Spring platform.

There are several technical subtleties that remain open: the FAQ says that the changes and fixes from trunk will be available in the maintenance branches, but does not say when. Delaying changes can be a powerful argument … The lack of tags in the public repository also implies the existence of a private repository with those tags – how else could SpringSource build the “supported releases”? The existence of two repositories in a project (unless a distributed VCS is used, of course) is often the starting point of splitting the project into payware and a limited “community edition”. I hope this will not happen to Spring.

The most unfortunate impact of the maintenance policy decision may be on Spring-based open-source projects, which helped a lot to establish acceptance of the platform over the last few years. The dilemma of facing either cascading changes from permanent upgrades just to keep in sync with the latest trunk version, or permanently monitoring the branch to see which changes are important – in the absence of an “official” maintenance release with defined bug fixes – can be an issue for many projects. I hope SpringSource will find workarounds and solutions for these.

Despite all that, forking Spring would be a very bad idea. The very last thing enterprise Java needs is another framework that is essentially the same as, or very similar to, an existing one. SpringSource employs brilliant engineers, and the Spring community has great contributors – what is the point of splitting forces when there is no architectural disagreement and the issue is “just money”?

Forking also does not solve the core problem: who will provide the community service of maintaining and building the old releases? The maintainers of the Summer codebase (or whatever the forked Spring successor would be named) would face the same dilemma: work for free forever, or find a way to employ enough manpower to keep the ball rolling.

I would recommend that whoever feels like forking the Spring codebase instead try to fill the gap and provide “unofficial official” builds – “community maintenance releases” as a counterpart and alternative to the official SpringSource maintenance releases for enterprise (i.e. paying) customers. This would help the project, and all the projects that depend on it, much more than asking them to switch.

It would also not hurt SpringSource – I am convinced they would lose very little revenue (if any) because of it. It has been done before (CentOS vs. RHEL). If my project were based on the Spring platform rather than a commercial Java server (which it is not), and the support cost were reasonable (which IMHO it is), I would happily pay for the privilege of having access to people of Juergen Hoeller's caliber (and many others).

Let's face it, folks: when it comes to support, software maintenance and similar less attractive areas of our profession, there is definitely no such thing as a free lunch …


Trying out Groovy with Oracle database

2008/04/07

There is a saying that necessity is the mother of invention. Such a necessity arose last week and forced me to try out the Groovy language.

The trigger was the need to create a good data set for testing changes in a full-text search. I had to locate a few hundred obsolete technical documents to be used as test data, dump the metadata as well as the BLOB data to disk, create INSERT SQL scripts, and write a loader that would pre-load the database with the test set from Ant and insert the BLOBs. It is a fairly simple task; the issue was the boundary conditions:

The database in question is Oracle. I have no OCI8 installed on my MBP and refuse to install Oracle on OS X – it was little fun getting it up and running even on Linux. Usually I run Oracle in a Windows or Linux VM, and lacking OCI8, the only way to communicate with it is a type 4 JDBC driver.

Because of the missing OCI8, Ruby was out of the question (the gems for Oracle access need OCI). So was PL/SQL, for lack of administrator privileges. I am not exactly a PL/SQL programmer, but I know enough to write a stored procedure that loads or unloads a BLOB – assuming the UTL_FILE and DBMS_LOB packages are available and accessible. Which was not the case.

Before falling back to the default and writing it all in Java (which would work, but would certainly take longer than I was willing to invest), I decided to try out this Groovy thing :-). My exposure to the language was minimal – I had never programmed in it (not counting two hours spent browsing Groovy in Action) – but I hoped that its similarity to Java and almighty Google would help me out …

Connecting to a database in Groovy is simple and elegant:


import groovy.sql.Sql

// thin (type 4) JDBC driver -- no Oracle client installation needed
db = Sql.newInstance(
        'jdbc:oracle:thin:@myoraclehost:1521:XE',
        'USER', 'PWD', 'oracle.jdbc.driver.OracleDriver')

def sql = """
        select d1.document_id, d1.version, d1.filename from document d1
        where d1.document_id in ...

        .... REST OF SQL DELETED ....
"""
list = db.rows(sql)

The result is a list of hash-like rows that can be used both to drive the generation of the DELETE and INSERT SQL statements into a text file, and to retrieve the BLOBs and save them to disk:


def writeBlobToFile(db, docid, version, filename, reallyWrite = false) {
    def sqlString = "SELECT d.DOCUMENT_BODY FROM document d WHERE d.document_id = '$docid' AND version = $version"
    def row = db.firstRow(sqlString)
    def blob = (oracle.sql.BLOB) row[0]
    def byte_stream = blob.getBinaryStream()
    if (byte_stream == null) {
        println "Error for ${docid} : ${version}"
        return 0                          // nothing to write, bail out early
    }
    int total = blob.length()

    // write the BLOB content to a file
    if (reallyWrite) {
        byte[] byte_array = new byte[total]
        int bytes_read = byte_stream.read(byte_array)
        def fullname = "/Users/miro/tmp/BLOBS/$filename"
        def fos = new FileOutputStream(fullname)
        fos.write(byte_array)
        fos.close()
    }
    println "Document $docid:$version, file: $filename, size $total"
    return total
}
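The other half – generating the INSERT statements from the rows list – was deleted from the post. A minimal sketch of how it could look (the column list follows the query above; the empty_blob() placeholder, which the loader later fills, is my assumption):

```groovy
// Build one INSERT per row returned by db.rows(sql); the BLOB column is
// initialized with empty_blob() and filled in later by the loader script.
def toInsertSql(rows) {
    rows.collect { row ->
        "insert into DOCUMENT (DOCUMENT_ID, VERSION, FILENAME, DOCUMENT_BODY) " +
        "values ('${row.DOCUMENT_ID}', ${row.VERSION}, '${row.FILENAME}', empty_blob());"
    }.join('\n')
}

// db.rows() returns map-like GroovyRowResult objects, so a plain map works for a dry run:
println toInsertSql([[DOCUMENT_ID: 'D001', VERSION: 1, FILENAME: 'doc1.pdf']])
```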

Loading the documents and metadata from disk into the database is done by an Ant script:

<path id="groovy.classpath">
  <fileset dir="${lib.dir}">
    <include name="groovy-all-1.5.4.jar"/>
  </fileset>
</path>

<taskdef name="groovy"  classname="org.codehaus.groovy.ant.Groovy">
    <classpath>
        <path refid="groovy.classpath" />
        <path refid="database.classpath" />
        
    </classpath>
</taskdef>

<target name="reload-document-fixtures" description="Prepare the database for testing">
    <sql
        driver="oracle.jdbc.driver.OracleDriver"
        url="${database.url}"
        userid="${database.user}"
        password="${database.password}"
        print="yes"
        output="database_load-${DSTAMP}.txt"
        autocommit="true"
        onerror="continue"
        >
        <classpath refid="database.classpath"/>
        <transaction src="src/sql/Fixture/DELETE_DOCUMENTS.sql" />
        <transaction src="src/sql/Fixture/INSERT_DOCUMENTS.sql" />

    </sql>
    
    <groovy src="src/groovy/com/company/app/utils/LoadDocuments.groovy">
        <arg value="src/sql/Fixture/BLOBS"/>
        <arg value="src/sql/Fixture/METADATA.txt"/>
    </groovy>
</target>    

The groovy task must be defined (with the proper classpath) before it is used. The
task above shows passing arguments and executing a script. The most important part of
the script (loading a binary file and inserting the BLOB) is here:


import groovy.sql.Sql

// each line in METADATA.txt matches this pattern,
// e.g.: document_id: 'D001', version: 1, filename: 'doc1.pdf', size: 1024
reg = /document_id: \'(\w+)\', version: (\d+), filename: \'(.+)\', size: (\d+)/

db = Sql.newInstance(
        'jdbc:oracle:thin:@myoraclehost:1521:XE',
        'USER', 'PWD', 'oracle.jdbc.driver.OracleDriver')
counter = 0
new File(args[1]).eachLine { line ->
    line.eachMatch(reg) { match ->
        try {
            def str = "${args[0]}/${match[3]}"
            println "Opening file: ${str}"
            FileInputStream fis = new FileInputStream(str)
            int size = fis.available()
            byte[] data = new byte[size]
            fis.read(data)

            // select the BLOB locator for update and write the file content into it
            def row = db.firstRow(
                    "select document_body from document where document_id = ? and version = ? for update",
                    [match[1], match[2]])
            def my_blob = (oracle.sql.BLOB) row[0]
            if (my_blob == null) println "my_blob is null!"
            def outstream = my_blob.getBinaryOutputStream()
            outstream.write(data)
            outstream.close()
            fis.close()
            counter++
        } catch (Exception e) {
            println "Exception: ${e}"
        }
    }
}
println "Processed files: ${counter}"

The loader is controlled by the file METADATA.TXT, which contains the information linking the document metadata in the database (inserted by the SQL statements) with the files on disk. This indirection makes it easy to inject document bodies with the required search phrases, reload the database and run the tests.

The whole experience was quite pleasant, and considering how little I knew about Groovy, it took very little time to create something useful. I wish I could have done the same thing in Ruby (which I still like better) – but I was amazed how powerful the combination of Groovy + Ant can be.

The most valuable feature is that there is no need to add anything really new or exotic to the Java environment – all you need is a JAR file and an Eclipse plugin. There is no new or different library packaging scheme: JARs are still JARs, not Gems.

The only minor hiccup and weird thing I found was passing arguments between Ant and the Groovy script. It looks like the Groovy file should be a plain script, not a class with a static main method (which would be the natural instinct for a Java developer). Thanks to Christopher Judd for the hint.


Showstopper issue with JRuby ?

2008/03/16

After a break, I dusted off Ruby to try out an interesting idea Peter presented yesterday that literally asks for Rails. So I grabbed the latest and greatest RubyNetBeans from the Ruby Hudson server. For some reason updates stopped on January 26th, so the latest version I got was build 6327. It is bundled with JRuby 1.1RC1, and the Rails version that comes preinstalled is 1.2.6. Using the menu Tools -> Ruby Gems, it can easily be upgraded to the latest and greatest 2.0.2.

The trouble begins when you want to install a database connectivity gem such as sqlite3-ruby. The installer fails with this message:



INFO:  `gem install -y` is now default and will be removed
INFO:  use --ignore-dependencies to install only the gems you list
Building native extensions.  This could take a while...
extconf.rb:1: no such file to load -- mkmf (LoadError)
ERROR:  Error installing sqlite3-ruby:
    ERROR: Failed to build gem native extension.

/Users/miro/Applications/RubyNetBeans.app/Contents/Resources/nbrubyide/ruby1/jruby-1.1RC1/bin/jruby extconf.rb install sqlite3-ruby --no-rdoc --no-ri --include-dependencies --version > 0

Gem files will remain installed in /Users/miro/Applications/RubyNetBeans.app/Contents/Resources/nbrubyide/ruby1/jruby-1.1RC1/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.1 for inspection.
Results logged to /Users/miro/Applications/RubyNetBeans.app/Contents/Resources/nbrubyide/ruby1/jruby-1.1RC1/lib/ruby/gems/1.8/gems/sqlite3-ruby-1.2.1/ext/sqlite3_api/gem_make.out

The file mkmf.rb is indeed missing from the JRuby distribution. This is entered as bug 1306 in JIRA, with resolution ‘Won't Fix’. Tough luck.

I understand the reasons and motivation for this decision – the JRuby team decided not to support native extensions in Gems, to keep the platform Java-only. I also understand that in this particular case there are workarounds: using the ActiveRecord-JDBC gem and the JDBC driver for the database will most likely work. Unfortunately, this decision makes the choice of JRuby as a platform very questionable.

I really liked JRuby for the comfort of having a platform that is portable and safely wrapped within the boundaries of the good old trusted JVM. I feel much more comfortable maintaining several different versions of JRuby and the corresponding Gem sets than maintaining the same several configurations at the OS level and sudo-ing just to install Gems. I was more than happy to trade the lack of speed for this security.

The two main attractions of Ruby (from my point of view) are the elegant, powerful language with beautiful syntax, and the sheer amount of code available as Gems for reuse. With bug 1306, much of this code may not be available for JRuby – unless Gem authors make specific provisions for a Java version of the Gem. I cannot see how this is a good idea, and it is certainly not good news for the future of the language.

One way out, of course, is to use the native Ruby interpreter and make sure you do not mess up your installation by trying out new things. But this does not allow the easy way into the enterprise that JRuby was promising – being basically ‘just another JAR’ and running on Tomcat.

The other way out is to reconsider Groovy. I still do not enjoy the syntax anywhere near as much as Ruby's, but every Groovy class is a Java class; there is no need for artificial bridges. It has its own clone of Rails – Grails – which seems to provide lots of the Rails magic and goodies and is based on Spring, which I am very familiar and quite happy with. I still do not know whether the number of “gems” in the Groovy world is in the same league as Ruby's (which is itself limited compared to Python, or even the CPAN Perl bonanza) – but as long as I can find what I need, it may be just enough.

For now, I will revert to native Ruby (no time to start learning Groovy + Grails), but I will definitely look at it later.


Versioning with Spring and Ant

2008/03/10

This post is NOT about version control in the sense of source code control. It addresses how to easily tell which version of a Web application is running, and how to make sure the same version is reflected in the snapshots produced by Ant. We have used this approach on several applications and found it quite useful.

The version I talk about is not an automatically generated version from the VCS (such as an SVN revision). The version number is set manually.

In every project, we include a property file named application.properties, placed at the root of the classpath (next to the ‘com’ directory in the source tree). The content can look like this:


app.name = MyApp
app.version = 0.5.4
app.info = User authentication implemented

This property file is available to the Java code because of its location on the classpath. Our build.xml file usually starts with a few includes like this:


<property environment="env" />
<property file="${env.COMPUTERNAME}.properties" />
...
<property file="${src.dir}/application.properties" />

We first try to load a property file named after the computer. This way we can easily address differences in e.g. Tomcat locations, paths to libraries etc. This is very useful when not all team members have identical development environments – which is almost always the case, as I am on a Mac and the others are mostly on Windows :-). The environment variable COMPUTERNAME is set automatically on Windows; on Mac/Linux, all you need is an export in your .bashrc file.
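For example (assuming bash; using the short hostname as the value is my choice):

```shell
# ~/.bashrc on Mac/Linux: define the variable that Windows sets automatically,
# so the same build.xml picks up <hostname>.properties on every machine
export COMPUTERNAME=$(hostname -s)
```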

The second include loads the application properties and makes ‘app.version’ and ‘app.name’ available to Ant tasks. Externalizing app.name allows reusing the same Ant script for multiple projects. Here is an example of a task that creates an archive of the current source code snapshot using this information:


<target name="dist" depends="prepare" description="Package the source distribution">
    <zip destfile="${dist.dir}/${app.name}-${app.version}-${DSTAMP}-${TSTAMP}@${env.COMPUTERNAME}-src.zip"
         basedir=".">
        <exclude name="${build.dir}/" />
        <exclude name="lib-src/" />
        <exclude name="build-ecl/" />
        <exclude name="${web.dir}/WEB-INF/classes" />
        <exclude name="dist/" />
        <exclude name="misc/**" />
        <exclude name="**/*.bak" />
        <exclude name="**/CVS/*.*" />
        <exclude name="**/.svn/*.*" />
    </zip>
</target>

The DSTAMP and TSTAMP properties are set in the prepare task:


<target name="prepare">
    <tstamp>
        <format property="TIMESTAMP" pattern="d-MMMM-yyyy_hh:mm:ss" locale="en"/>
    </tstamp>
    <mkdir dir="${build.dir}" />
</target>

To access the information from application.properties in the application, we simply add it to the basenames of the message source:

    <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource">
        <property name="basenames">
            <list>
                <value>messages</value>
                <value>application</value>
            </list>
        </property>
    </bean>

This makes it easy to display the version number and info as part of e.g. a JSP page:


<spring:message code="app.version"></spring:message> -
            <spring:message code="app.info"></spring:message>

For production, I usually still leave the version in the JSP but include it only in an HTML comment, so that it does not interfere with the UI yet is still accessible by viewing the page source.
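For example, the same messages wrapped in an HTML comment (a sketch):

```jsp
<!-- build: <spring:message code="app.version"/> - <spring:message code="app.info"/> -->
```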

I usually set the version number and info string manually, but it is possible to automatically record e.g. the build date and time of the application and make it part of the version info. To do that, use this Ant task:


<target name="set_version" depends="prepare">
    <propertyfile file="${src.dir}/application.properties" comment="setting timestamp">
        <entry key="app.stamp" type="string" value="${TIMESTAMP}" />
    </propertyfile>
    <echo>File application.properties updated at ${DSTAMP} ${TSTAMP}</echo>
</target>

In this case, the value of the app.stamp property is overwritten every time the ‘set_version’ task executes.


iBatis, Date null values and Oracle

2008/03/07

I had better blog this before I forget what the issue was :-).

During a daily run of the unit tests, I started to receive this (very well explained) exception:

org.springframework.jdbc.UncategorizedSQLException: SqlMapClient operation; uncategorized SQLException for SQL []; SQL state [null]; error code [17004];
--- The error occurred while applying a parameter map.
--- Check the APPROVAL_TASK.insert-InlineParameterMap.
--- Check the parameter mapping for the 'scheduledDate' property.
--- Cause: java.sql.SQLException: Invalid column type; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred while applying a parameter map.
--- Check the APPROVAL_TASK.insert-InlineParameterMap.
--- Check the parameter mapping for the 'scheduledDate' property.
--- Cause: java.sql.SQLException: Invalid column type
	at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.translate(SQLStateSQLExceptionTranslator.java:121)
	at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.translate(SQLErrorCodeSQLExceptionTranslator.java:322)
	at org.springframework.orm.ibatis.SqlMapClientTemplate.execute(SqlMapClientTemplate.java:212)
	at org.springframework.orm.ibatis.SqlMapClientTemplate.insert(SqlMapClientTemplate.java:397)
... 

Now, the problem was that scheduledDate was null. Here is the relevant part of the SQL map:

<insert id="insert" parameterClass="com.zarlink.cdca2.domain.ApprovalTask" >
    insert into APPROVAL_TASK (APPR_ID, STEP, TASK, EMAIL_TO, CREATE_DATE)
    values (#apprId:DECIMAL#, #step:DECIMAL#, #task:DECIMAL#, #emailTo:VARCHAR#, #createDate:DATETIME#)
</insert> 

The createDate jdbcType DATETIME was actually one of my changes. What Abator generated originally was this:

<insert id="insert" parameterClass="com.zarlink.cdca2.domain.ApprovalTask" >
    insert into APPROVAL_TASK (APPR_ID, STEP, TASK, EMAIL_TO, CREATE_DATE)
    values (#apprId:DECIMAL#, #step:DECIMAL#, #task:DECIMAL#, #emailTo:VARCHAR#, #createDate:DATE#)
</insert>

This map does not suffer from the null value problem, but unfortunately it does not store the time portion of the date – which was the main reason I used DATETIME, unaware of its null sensitivity.

There are three ways to fix this. The first is obvious – keep DATETIME and make sure that the field always has a value. This may be good enough as long as you never need to save null values.

The second solution is to keep DATETIME and use iBatis conditionals in the map definition:

<insert id="insert" parameterClass="com.zarlink.cdca2.domain.ApprovalTask" >
    insert into APPROVAL_TASK (APPR_ID, STEP, TASK, EMAIL_TO, CREATE_DATE)
    values (#apprId:DECIMAL#, #step:DECIMAL#, #task:DECIMAL#, #emailTo:VARCHAR#,
        <isNull property="createDate">
            null
        </isNull>
        <isNotNull property="createDate">
            #createDate:DATETIME#
        </isNotNull> )
</insert>

This deals with the null value explicitly and avoids “guessing” the column type, which is what caused the problem.

The third solution may not work on other databases, but on Oracle it works perfectly: the jdbcType TIMESTAMP both stores the time portion and handles null values without any problems. This is what I used in the end.
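I have not dug into the iBatis internals, but a plausible part of the story is that DATETIME is simply not a JDBC type: java.sql.Types defines TIMESTAMP but has no DATETIME constant, so the driver has nothing reliable to use when setting a null. A quick stdlib check (this lookup is only a sketch, not what iBatis actually does):

```java
import java.sql.Types;

public class JdbcTypeDemo {
    // Look up a jdbcType name among the java.sql.Types constants.
    static Integer lookup(String name) {
        try {
            return Types.class.getField(name).getInt(null);
        } catch (NoSuchFieldException | IllegalAccessException e) {
            return null; // unknown type name - nothing to pass to the driver
        }
    }

    public static void main(String[] args) {
        System.out.println(lookup("TIMESTAMP")); // a real JDBC type constant
        System.out.println(lookup("DATETIME"));  // not defined in java.sql.Types
    }
}
```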

Final map:

<insert id="insert" parameterClass="com.zarlink.cdca2.domain.ApprovalTask" >
    insert into APPROVAL_TASK (APPR_ID, STEP, TASK, EMAIL_TO, CREATE_DATE)
    values (#apprId:DECIMAL#, #step:DECIMAL#, #task:DECIMAL#, #emailTo:VARCHAR#, #createDate:TIMESTAMP#)
</insert>

How to generate and send emails using Spring and Velocity

2008/03/05

Testing applications that communicate using email is more challenging than e.g. testing database access. Thanks to GMail and Spring, it can be done pretty easily.

Here is the scoop: I need to generate emails notifying users about business events and, after sending each email, store its content in the database. In a real environment you will use the corporate SMTP server and real email addresses of real people.
For development, we can avoid bothering and spamming our customers with a few simple tricks.

The first step is to get a new GMail account. Name it e.g. Company.Notifier or something easily distinguishable. In Spring, configure the sender bean:

<bean id="mailSenderGMail" class="org.springframework.mail.javamail.JavaMailSenderImpl">
    <property name="host">
        <value>smtp.gmail.com</value>
    </property>
    <property name="javaMailProperties">
        <props>
            <prop key="mail.smtp.auth">true</prop>
            <!-- this is important, otherwise you will get the
                 exception: 530 5.7.0 Must issue a STARTTLS command -->
            <prop key="mail.smtp.starttls.enable">true</prop>
            <prop key="mail.smtp.timeout">25000</prop>
        </props>
    </property>
    <property name="username">
        <value>company.notifier</value>
    </property>
    <property name="password">
        <value>secret</value>
    </property>
    <!-- port 25 is the default and need not be set; GMail SMTP uses 587 -->
    <property name="port" value="587"/>
</bean>

The port can have the default value 25 – but if you are using an ISP such as Rogers, chances are the default port is blocked for outgoing connections; you can use port 587 instead.

The second piece is the component that actually uses the mailSender: notificationService.

<bean id="notificationService" class="com.thinknostic.APP.service.NotificationServiceImpl">
    <property name="mailSender" ref="mailSenderGMail"/>
    <property name="velocityEngine" ref="velocityEngine"/>
    <property name="emailsToFile" value="true"/>
    <property name="alwaysCCList">
        <list>
            <value>miro.adamy+alwaysCC@thinknostic.com</value>
        </list>
    </property>

    .... deleted ...

</bean>

Note the velocityEngine bean that is used to generate the body of the email from a template. The alwaysCCList property uses a nice little trick available with GMail: if you send email to YOURNAME+anything@gmail.com, the '+anything' part is ignored for delivery but kept with the email, so it arrives as if the address were just YOURNAME@gmail.com. You can use the suffix to find or auto-tag the emails.

The code that sends the email is actually pretty simple (a method of the notificationService):

public EmailContent sendNotificationEmail(EmailParameters params, EmailContent _content, boolean isDryRun) {

    final EmailContent c = mergeTemplate(params, _content);

    MimeMessagePreparator preparator = new MimeMessagePreparator() {
         public void prepare(MimeMessage mimeMessage) throws Exception {
            MimeMessageHelper message = new MimeMessageHelper(mimeMessage);
            message.setTo(c.getEmailTo());
            message.setFrom("DO-NOT-REPLY@company.com"); // could be parameterized...

            message.setText(c.getEmailBody(), true);
            message.setSubject(c.getEmailSubject());

            if (alwaysCCList != null && alwaysCCList.size() > 0) {
                message.setCc(alwaysCCList.toArray(new String[0]));
                c.setEmailCcFromList(alwaysCCList);
            }
            ...
         }
      };

      if (isDryRun || emailsToFile)
      {
              // save to file
            ...
      }

      if (!isDryRun)
          this.mailSender.send(preparator);
      return c;
}
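A side note on the alwaysCCList addresses above: the '+anything' suffix can also be stripped in code, e.g. if you later need to match a CC'ed copy back to its base mailbox. A small sketch (the canonical() helper is hypothetical, not part of the service):

```java
public class GmailPlusDemo {
    // Google-hosted mail delivers user+tag@domain to user@domain;
    // normalize such an address back to the base mailbox.
    static String canonical(String address) {
        int at = address.indexOf('@');
        int plus = address.indexOf('+');
        if (plus > 0 && plus < at) {
            return address.substring(0, plus) + address.substring(at);
        }
        return address;
    }

    public static void main(String[] args) {
        System.out.println(canonical("miro.adamy+alwaysCC@thinknostic.com"));
    }
}
```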

The class EmailContent is a container for the email address, subject, body and CC list. It is created as an empty object with only the recipient name and email address passed from the business method, as well as the name of the Velocity template used to render the email body. The method mergeTemplate loads the Velocity template and renders the actual email body, using the parameters (which are more or less a hash map) containing e.g. URLs or other information that needs to be inserted into the email. The generated content is stored back into EmailContent, which is written to the database after successful delivery, for audit and archiving purposes.

If you are a JavaScript or Ruby programmer, you will immediately recognize the 'functor' pattern: preparator is set to an instance of an anonymous inner class, and the mailSender is then invoked with the configured preparator.

The actual rendering of the content using Velocity can be done like this:


private EmailContent mergeTemplate( EmailParameters params, EmailContent content) {
    Map<String, Object> model = new HashMap<String, Object>();
    model.put("param", params);
    model.put("content", content);

    String text = "MISSING TEXT";
    String subject = "Notification email";
    String template = content.getTemplateId();
    try {

        // get subject line

        if (template_names.containsKey(template)) {
            subject = template_names.get(template);
            VelocityContext velocityContext = new VelocityContext(model);
             StringWriter writer = new StringWriter();
             PrintWriter out = new PrintWriter(writer);

             Velocity.evaluate(velocityContext, out, "subject", new StringReader(subject));
             subject = writer.toString();

             model.put("subjectLine", subject);
            // now the body
             text = VelocityEngineUtils.mergeTemplateIntoString(velocityEngine,
                    "com/thinknostic/APP/service/email/"+template, model);

        }
        else {
            // TODO: log configuration error - template not found
        }

        content.setEmailBody(text);
        content.setEmailSubject(subject);
        return content;

    } catch (VelocityException e) {
        // back to untranslated
        // TODO: error report
        // subject = params.subject;
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }

    return null;
}

In the above, the template id is the actual file name of the HTML email body, and the hash map template_names (configured in the Spring XML) maps the id to another string, which is then used as the subject line. Both body and subject can contain macros.
Note: if you get a conflict on the ${} syntax, see this entry.
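Velocity is not needed to see what the subject-line pass does: it just expands ${name} macros in the subject string from the model map. A stdlib stand-in (not the actual Velocity API; the template and model key are examples):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubjectTemplateDemo {
    // Expand ${name} macros in a template from a model map;
    // unknown macros are left in place.
    static String render(String template, Map<String, String> model) {
        Matcher m = Pattern.compile("\\$\\{(\\w+)\\}").matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement(
                model.getOrDefault(m.group(1), m.group(0))));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> model = new HashMap<>();
        model.put("docName", "Q1-report.pdf");
        System.out.println(render("Approval cancelled for ${docName}", model));
    }
}
```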

Now, finally – how do we test this? It is quite easy, thanks to the JUnit support in Spring.


@ContextConfiguration(locations={"/com/thinknostic/APP/test/test-context.xml",
"/com/thinknostic/APP/test/service/NotificationTests-context.xml"})
public class NotificationTests extends AbstractTransactionalJUnit4SpringContextTests {

    @Autowired
    NotificationServiceImpl notificationService;
    public NotificationServiceImpl getNotificationService() {
        return notificationService;
    }
    public void setNotificationService(NotificationServiceImpl notificationService) {
        this.notificationService = notificationService;
    }

    @Autowired
    ApprovalDAO        approvalDao;
    public ApprovalDAO getApprovalDao() {
        return approvalDao;
    }
    public void setApprovalDao(ApprovalDAO approvalDao) {
        this.approvalDao = approvalDao;
    }

      Document doc;
      EmailContent cont;
      Approval app;
      ApprovalStep step1;
      ApprovalStep step2;

      @Before
      public void createMockApproval() {
          this.logger.info("Before");
          // controls whether emails are really sent
          notificationService.setReallySend(true);

          doc = (Document)DocumentInfo.createMockDI("103", 1);

          // create the approval and insert it into the database
          app = Approval.createMockApproval(1L, doc);
          approvalDao.insert(app);

          step1 = ApprovalStep.createMockStep(app.getApprId(), 1);
          approvalDao.insert(step1);
          step2 = ApprovalStep.createMockStep(app.getApprId(), 2);
          approvalDao.insert(step2);

          cont = EmailContent.createMock(app.getApprId(), step1.getStep(), "miro adamy", "miro_adamy@rogers.com");

      }

      @After
      public void cleanup() {
          this.logger.info("After");
          // reset
          notificationService.setReallySend(false);
          doc = null;
          cont = null;
          app = null;
          step1 = null;
          step2 = null;
      }

      private void assertEverything()
      {
          assertTrue("The email service is empty", notificationService != null);
          assertTrue("The email parameters are not available", emailParameters != null);
          assertTrue("The document does not exist", doc != null);
      }

    @Test
    public void approval_all_cancelled_test() {
        doc.setFilename("approval_all_cancelled");

        notificationService.notifyApprovalCancelledAll(doc, cont);
    }

    @Test
    public void approval_cancelled_test() {
        assertEverything();        

        doc.setFilename("approval_cancelled");
        EmailContent ui = EmailContent.createMock(app.getApprId(), step1.getStep(), "miro adamy", "Miro.Adamy@gmail.com");

        notificationService.notifyApprovalCancelled(doc, ui);
    }

    // and so on - many more tests ...
}    

Assuming that the Spring context configuration files named in the annotations are OK, all you have to do is run the suite. The XML defines beans with names corresponding to the test properties, and autowiring will take care of the rest.

Also notice that the @Before method (which runs before each test) actually DOES modify the database and inserts records with the same primary key over and over. This works thanks to the transaction magic of Spring, which rolls back the inserts at the end of each test.

The last thing to mention is the use of the 'mockXXX' static methods to generate object instances. It is not really a mock object we are creating here, but a well defined instance of a business object with some properties parametrized. You could even define it as a bean in the Spring context XML, but that is IMHO overkill – keeping everything in the test Java class makes things easier to understand.

The createMock method arguably belongs to the tests, but it is very convenient to keep it with the domain object. I usually create the createMock method right after I add a new domain object.
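As a sketch of the pattern (the fields here are hypothetical – the real Approval carries more state), a createMock factory kept on the domain class looks like this:

```java
public class Approval {
    private Long apprId;
    private String status;
    // ... real fields and accessors elided ...

    // Factory for test fixtures, kept on the domain class itself:
    // a well defined instance, not a mock in the mocking-framework sense.
    public static Approval createMockApproval(Long apprId) {
        Approval a = new Approval();
        a.apprId = apprId;
        a.status = "PENDING";
        return a;
    }

    public Long getApprId() { return apprId; }
    public String getStatus() { return status; }

    public static void main(String[] args) {
        Approval a = Approval.createMockApproval(1L);
        System.out.println(a.getApprId() + " " + a.getStatus());
    }
}
```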


Spring PropertyPlaceholderConfigurer with Velocity

2008/02/25

Part of the Spring configuration magic is (one of several) post-processors that lets you load one or more property files and refer to their values inside the XML configuration files. The rationale is that property files are much easier to edit than a possibly huge XML config file, and using property values prevents repeating the same value over and over again. With this approach, the following configuration:

 	<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
		<property name="username" value="some_user"/>
		<property name="password" value="secret"/>
	</bean>

can be written as

 	<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
		<property name="locations">
			<list>
				<value>WEB-INF/mail.properties</value>
				<value>WEB-INF/jdbc.properties</value>
			</list>
		</property>
	</bean>

	
 	<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
		<property name="username" value="${jdbc.username}"/>
		<property name="password" value="${jdbc.password}"/>
	</bean>

 

and (in jdbc.properties)

jdbc.username=some_user
jdbc.password=secret

If you are using Velocity in the same project and want to use VTL macros inside the configured values, the approach above will not work, because PropertyPlaceholderConfigurer will try to replace ALL macros of the form ${something} with a value from the property files – and throw an exception if one is not found.

The solution is easy: you need to switch the prefix of the PropertyPlaceholderConfigurer to something other than the default ${ and avoid the conflict with Velocity. The modified configuration could look like this:

 
	
	<bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
		<property name="placeholderPrefix" value="#{"/>
		<property name="locations">
			<list>
				<value>WEB-INF/mail.properties</value>
				<value>WEB-INF/jdbc.properties</value>
			</list>
		</property>
	</bean>
 	
	<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
		<property name="username" value="#{jdbc.username}"/>
		<property name="password" value="#{jdbc.password}"/>
	</bean>
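The prefix switch is easy to picture with a toy resolver. This is not Spring's implementation, just a sketch of the behaviour: unresolvable placeholders make the configurer fail, while with the #{ prefix Velocity's ${...} macros are simply left alone:

```java
import java.util.HashMap;
import java.util.Map;

public class PlaceholderPrefixDemo {
    // Toy version of PropertyPlaceholderConfigurer: expand one
    // prefix...} token from a property map; like Spring, fail on unknown keys.
    static String resolve(String value, String prefix, Map<String, String> props) {
        int start = value.indexOf(prefix);
        if (start < 0) return value;              // nothing matches the prefix
        int end = value.indexOf('}', start);
        String key = value.substring(start + prefix.length(), end);
        String replacement = props.get(key);
        if (replacement == null)
            throw new IllegalStateException("Could not resolve placeholder '" + key + "'");
        return value.substring(0, start) + replacement + value.substring(end + 1);
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("jdbc.username", "some_user");
        // Velocity macro survives because it does not match the #{ prefix:
        System.out.println(resolve("Hello ${subjectLine}", "#{", props));
        System.out.println(resolve("#{jdbc.username}", "#{", props));
    }
}
```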

 

Oracle SQLDeveloper and mystery of the missing host

2008/02/23

If you work with an Oracle database, a very good tool you should not ignore is Oracle SQL Developer. It gives you much more power when it comes to manipulating data or managing the database. Most of the time I use the MyEclipse IDE plugins, but when I need to e.g. export data, SQL Developer is the way to go.

Today I tried it for the first time on the Mac. I started to configure a database connection and was completely confused by the error message “Port must contain numerical value”. Here was the dialog box:

[screenshot: picture-6.png]

I typed hostname:1521 into the host field but the error message kept coming back. There was no obvious way to put the 3 values – host, port and SID – into two fields. After checking the documentation, I remembered that the Windows version had 3 text fields instead of two.

What was the problem? The Java layout manager had unfortunately completely hidden one of the 3 fields. After resizing the dialog, all fields became visible:

[screenshot: picture-4.png]

After this, everything worked OK.

Obviously, the Mac version of SQL Developer could use some minor improvements. For example, the menu looks like anything but a finished Mac application:

[screenshot: picture-5.png]