Wednesday, December 17, 2008

Impala 1.0M4 released

I am pleased to announce the release of Impala 1.0M4, with preliminary support for OSGi.

Impala 1.0M4 now includes the ability to arrange Impala modules and their corresponding class loaders in a graph, instead of in a hierarchy as in previous releases. This feature offers application developers much more flexibility in choosing the appropriate module structures for applications.

1.0M4 is the first release of the project to include OSGi support. All Impala jars are now OSGi-compliant bundles. It is now possible to run Impala and its application modules in an OSGi container (currently Eclipse Equinox). Further improvements to Impala's support for OSGi are expected in subsequent releases.

At this stage Impala's support for OSGi is still experimental, and it is not recommended for use in a production environment.

Impala promises to become the first and only project to offer a seamless transition to (and from) OSGi. There is no intention to make OSGi the default deployment model for Impala. However, OSGi is likely to become a good choice in the future for projects which will benefit from more rigorous management of third-party libraries than is possible using the traditional model.

More information on Impala's position with respect to OSGi can be found in this blog entry: http://impalablog.blogspot.com/2008/12/announcing-osgi-support-in-impala.html

For more information on this release see: http://code.google.com/p/impala/wiki/Release1_0M4Announcement

Phil Zoio

Impala Home: http://impala.googlecode.com

Friday, December 5, 2008

Announcing OSGi support in Impala

The soon-to-be-released version of Impala will contain OSGi support. Does this mean that I've abandoned my mission to make Impala the simplest, most lightweight, test-friendly Java dynamic module framework? Absolutely not. The idea is simply to translate the benefits of Impala into an OSGi environment.

In the future, you will no longer need to choose between OSGi and Impala. You can have the best of both worlds. As long as you have some appreciation of the value and benefits of dynamic modules in Java, Impala can work for you, no matter what your stance on OSGi is.

If you have not looked at OSGi and have no intention of doing so, then you're already well covered by Impala. If you are likely to look at OSGi in the future, then you can take advantage now of a framework which already does most of what you'd want, and move towards OSGi when you're ready. If you're already an OSGi enthusiast, I'd encourage you to take a look now at the Impala OSGi sample. Just point your Eclipse SVN repository explorer to http://impala.googlecode.com/svn/trunk/osgi-sample, check out the contained projects, and run any of the unit tests. You'll see something like the following, for MessageServiceTest.

[Screenshot: MessageServiceTest run output in Eclipse]

Note that the unit test is actually run on the Equinox OSGi container using the Spring Dynamic Modules test infrastructure. If this seems very easy, it's because it is - you won't even need to run any build scripts to get the example working!

If you're familiar with Impala, you will notice that your OSGi-based application is configured in an identical manner to regular Impala applications, allowing a seamless transition from a plain Impala runtime to an OSGi-based runtime (and back again). Impala's abstraction for representing relationships between modules provides a convenient mechanism for expressing and reloading module subgraphs, something you need to do continually during development.

Impala's OSGi support leverages much of the ground work covered by the Spring DM project. Impala also automatically generates OSGi compliant jars using Peter Kriens' BND tool when using Impala's build support, and provides a convenience target to generate an OSGi manifest so that you don't need to continually create a new set of OSGi jars each time you modify your application.

So is Impala's OSGi support production ready? Not at this point. Apart from the lack of any serious road testing, the big blocker is that there is no explicit web support (yet). Additionally, Impala applications should have the option to use OSGi's service registry. I'd also like to improve test framework integration so that Impala's interactive test runner and integration test suites can work equally well in an OSGi environment.

There are still some interesting technical challenges to overcome, and plenty of work to do to bring this project to its full potential. So if there is anyone out there with an interest in dynamic modules, Spring, OSGi, etc. who would like to help out, I'd love to hear from you.

So why add OSGi support to Impala?

The sweet spots in Impala are the ease of writing and running integration tests, the ability to reload application modules, great web support, and a really simple, no-nonsense development environment. The sweet spot with OSGi is the way in which it gives you control over the loading of third party libraries. OSGi allows you to dynamically reload third party libraries as well as concurrently run different versions of the same third party library classes, two things that you cannot do with Impala on its own.

I might stick my neck out a bit by saying that the features described above, which are available in Impala on its own, are much more valuable for the typical Java application than the extra features provided by OSGi. It is for this reason that Impala is built from the ground up to work in the simplest, most developer-friendly way possible, without any reliance on third party runtime environments, tools or build systems.

That being said, there are clearly valid and potentially important use cases for the unique OSGi features. This, together with the growing industry momentum behind the technology, make OSGi support pretty much mandatory for any Java dynamic module system which demands to be taken seriously.

As I made clear at a talk in London last week, OSGi is the most powerful and sophisticated Java dynamic module system, particularly in its use of class loaders. But for many projects a simpler technology would suffice. Impala is the only dynamic module framework that offers the flexibility to choose easily between alternatives.

It is worth noting that in the upcoming version 1.0M4, Impala introduces support for expressing module dependencies via a graph rather than simply as a hierarchy. This allows individual modules to reuse functionality from multiple dependencies, rather than from a single parent. This feature is particularly important because it narrows the gap between Impala's built-in module support and that provided by OSGi, allowing for a more seamless and less constrained transition between the technologies.
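
To make the distinction concrete, here is a generic sketch of the difference in structure - this is purely illustrative Java, not Impala's actual module definition API. In a hierarchy each module has a single parent, while in a graph a module can draw directly on several dependencies.

import java.util.Arrays;
import java.util.List;

// A generic illustration of the difference between a hierarchy and a graph of
// modules; this is not Impala's actual module definition API.
public class ModuleStructureExample {

    static class Module {
        final String name;
        final List<Module> dependencies;

        Module(String name, Module... dependencies) {
            this.name = name;
            this.dependencies = Arrays.asList(dependencies);
        }
    }

    public static void main(String[] args) {
        Module root = new Module("root");

        // In a hierarchy, each module has exactly one parent
        Module dao = new Module("dao", root);
        Module service = new Module("service", dao);

        // In a graph, a module can depend on several modules directly:
        // here the web module reuses both the service and reporting modules
        Module reporting = new Module("reporting", root);
        Module web = new Module("web", service, reporting);

        System.out.println(web.name + " depends directly on "
                + web.dependencies.size() + " modules");
    }
}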

Monday, October 13, 2008

Impala 1.0 M3 released with major web enhancements

I am pleased to announce the release of Impala 1.0M3, with major web enhancements.

Impala is a dynamic module framework for Java enterprise application development, based on the Spring framework. With a focus on simplicity and productivity, Impala radically transforms application development using Spring and related technologies.

The Impala 1.0M3 release includes a range of improvements aimed at more robust and powerful support for web applications consisting of multiple dynamic modules. This release provides the underlying capabilities required for building dynamically reloadable web applications using frameworks other than Spring MVC, such as Struts, Wicket, Tapestry and others.

Impala 1.0M3 also includes a dynamic properties framework, as well as various other feature improvements and bug fixes. A more detailed release announcement is here: http://code.google.com/p/impala/wiki/Release1_0M3Announcement

With Impala, you can take your Spring application development to the next level without having to grapple with unfamiliar technologies or tool sets. It requires no additional third party libraries beyond those required for vanilla Spring applications. It works within existing Java deployment environments, and requires no complex runtime infrastructure or tool support.

Impala Home: http://impala.googlecode.com

Friday, October 3, 2008

Using the Thread's context class loader in a multi-module environment

When you're working in a multi-module environment, a problem you bump up against is how to handle the thread's context class loader.

In case you haven't seen this before, you can access the thread's context class loader using the code

Thread.currentThread().getContextClassLoader();

and you can set it using

Thread.currentThread().setContextClassLoader(classLoader);

Why it exists

So why does the context class loader exist, and how is it used? The thread context class loader was introduced in Java 1.2 to enable frameworks running in application servers to access the "right" application class loader for loading application classes.

Let's take a framework like Struts, which uses the thread's context class loader (as do many other similar frameworks). Struts allows you to declare action classes in an XML configuration file. Once the framework has read the file and figured out what class to load (at this point identified only by name), the framework needs a class loader to load the class.

One naive approach to this problem would be to use the code

this.getClass().getClassLoader()

This would return the class loader responsible for loading the framework class. There is an obvious limitation here. The framework jar would need to be packaged in a location visible only to the application class loader, and not to the system class loader or to some shared or server class loader. Again, coming back to the example of Struts, say running in Tomcat, the framework jar would need to be placed in WEB-INF/lib and not on the system class path or in a server class path such as TOMCAT_HOME/common/lib. This restriction might hold in many cases, but is not one which can be relied upon.

To solve this problem, the context class loader was invented, to give framework code a mechanism to find the "correct" class loader to load application classes. In the case of a web application, the server typically sets the web application class loader as the context class loader.
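
To make this concrete, here's a rough sketch of what such framework code looks like. The class and method names are made up for the example (this is not Struts code), but the pattern of preferring the context class loader and falling back to the framework's own loader is the important bit:

// Illustrative only: roughly how a framework might load an application class
// declared by name in an XML configuration file. The class and method names
// here are invented for the example and are not taken from Struts itself.
public class FrameworkActionFactory {

    public Object createAction(String actionClassName) throws Exception {
        // Prefer the class loader which the container has associated with the
        // current web application
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        if (loader == null) {
            // Fall back to the loader which loaded the framework classes
            loader = getClass().getClassLoader();
        }
        Class<?> actionClass = Class.forName(actionClassName, true, loader);
        return actionClass.newInstance();
    }
}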

So what's the problem?

This mechanism has worked well enough until now, but is a bit of a hack, something which is certainly exposed by multi-module frameworks such as OSGi.

There is an interesting article on the Equinox wiki page on this problem and how Equinox addresses it. To save some typing (and a bit of thinking), I've picked out a quote:

Many existing java libraries are designed to run inside a container (J2EE container, Applet container etc). Such containers explicitly define execution boundaries between the various components running within the container. The container controls the execution boundaries and knows when a boundary is being crossed from one component to the next.

This level of boundary control allows a container to switch the context of a thread when a component boundary is crossed. Typically when a container detects a context switch it will set the context class loader on the thread to a class loader associated with the component which is being entered. When the component is exited then the container will switch the context class loader back to the previous context class loader.

The OSGi Framework specification does not define what the context class loader should be set to and does not define when it should be switched. Part of the problem is the Framework is not always aware of when a component boundary is crossed.


Some Solutions

Eventually, the OSGi specification will presumably come up with a standard way of addressing this problem. Until then, it is up to vendors to implement their own solutions. Equinox does so through Buddy Class Loader and Context Finder mechanisms, which you can also read about here.

Rob Harrop describes his solution here for the Spring Application Platform:
Each bundle in OSGi has it's own ClassLoader, so therefore, only one bundle can be exposed as the thread context ClassLoader at any time. This means that if a third-party library needs to see types that are distributed across multiple bundles, it isn't going to work as expected.

The Platform fixes this by creating a ClassLoader that imports all the exported packages of every module in your application. This ClassLoader is then exposed as the thread context ClassLoader, enabling third-party libraries to see all the exported types in your application.


So this will work as long as you export the types that the third party libraries need to load.

Impala's Solution

Impala uses proxies to call across module boundaries. The proxy interceptor has access to a service reference for the target object being invoked. This service reference, as well as containing a reference to the target instance, also has a reference to the class loader responsible for loading the module which exported the target bean. Before invoking the instance, it sets the thread context class loader to this class loader.

That's quite a mouthful - if code makes more sense than this English, then see this source.
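
For those who prefer code to English, here is a minimal sketch of the pattern just described. It is not Impala's actual interceptor (see the linked source for that); the class and method names are purely illustrative.

import java.lang.reflect.Method;

// Illustrative sketch of the pattern: switch the thread context class loader to
// the class loader of the module which exported the target bean, invoke, then
// restore the previous loader. This is not Impala's actual interceptor code.
public class ContextClassLoaderSwitchingInvoker {

    public Object invoke(Object target, Method method, Object[] args,
            ClassLoader moduleClassLoader) throws Exception {
        Thread currentThread = Thread.currentThread();
        ClassLoader existingLoader = currentThread.getContextClassLoader();
        try {
            currentThread.setContextClassLoader(moduleClassLoader);
            return method.invoke(target, args);
        } finally {
            // Always restore the previous context class loader
            currentThread.setContextClassLoader(existingLoader);
        }
    }
}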

Additionally, Impala Servlets - either extensions of the Spring DispatcherServlet or those responsible for integrating with other servlets - perform a similar operation.

There are one or two gotchas. It is possible to call beans defined in parent modules without going through a proxy. In this case, the described mechanism will not be applied. I'm pretty confident though that this particular scenario won't cause any serious problems.

I'd certainly be interested in finding out over the course of time whether there are other strategies that Impala's solution won't cover.

Monday, September 22, 2008

Is the case for forking Spring building?

SpringSource recently announced a new maintenance policy which I, like many in the developer community, find quite disturbing. The basic idea is that SpringSource will only publish releases to non-paying customers of SpringSource for three months after the initial release. After that, it will be up to the rest of the community to do its own releases, with community releases only coming out every so often.

Reactions to this on The Server Side and other places have been understandably very negative. Personally, I have been less than enthusiastic about some of the developments that have been coming from SpringSource since it got VC funding. See a previous entry.

This time, it has gone too far. By biting the hand that feeds it, it has risked alienating its most loyal users and putting many people off the project altogether. It also really undermines the increasing number of other community projects which are built around Spring, projects like Impala.

As the author of a number of open source projects which despite obvious (to me) technical merits never attracted a very large number of users, I know only too well that users are by far the most precious asset of an open source project, to be valued above all else.

SpringSource has not only begun to ignore its users, it has shown a willingness to outrage them. This shows an unbelievable arrogance and complacency.

All of this is quite apart from reservations I increasingly have about the direction that SpringSource is taking Spring.
  • The project is no longer open
    If you look at the history of SVN commits in Spring, you will notice that the project has become much less open. Around early 2005 there were plenty of commits going into Spring from individuals who were not SpringSource employees. Now commits to Spring core (the part that most people use) are only being made by SpringSource employees.

    Also, strategically important projects are increasingly developed outside of public view - including Spring 3.0, which will be the next major release of the core framework.

  • We should be nervous about future licencing moves
    So far the official line is that none of the licences will be changed for any existing project. It would probably be impossible to do this anyway. However, many new projects from SpringSource are no longer coming out with friendly open source licences (specifically ASF 2). How do we know, for example, that Spring 3.0 won't be released as a "new" project with a less friendly licence?

  • Obsession with OSGi
    SpringSource has become obsessed with dragging the entire Java community into using OSGi. Don't get me wrong, I do like OSGi, and hope to support it with Impala, and at some point I am sure that I will be using it in real applications. But it shouldn't be to the exclusion of other things which could be done to make Spring more usable without OSGi. It's exactly for this reason that I started Impala. I wanted modules, reloadability, isolation, etc. but not necessarily with the baggage of OSGi at this point.

  • Key man dependencies
    Every project has its major contributors, the individuals who do most of the heavy lifting while others go around doing the talking. Spring is no different - as the commit graph shows, this is Juergen Hoeller, and it is very much down to his excellence that Spring is so well regarded. If Juergen decided to go off to pastures new, Spring core would be in a very bad place. For a project which is relied upon by so many around the world, this is not a good thing.
In short, it's hard to like the way that Spring is going. But it's not too late to change. One of two things needs to happen:
  • SpringSource needs to take Spring back to where it came from. This doesn't mean winding up the company and going back to working for free. But it needs to reaffirm and re-demonstrate its commitment to the spirit of open source - openness, transparency and a desire to serve its users. This probably means lowering its financial objectives. There's still plenty of money to be made from consulting, training, and from providing value added products and tools. But not when it contravenes the essence of what made Spring popular in the first place.
  • If SpringSource cannot get its act together, the project should be forked. Because Spring runs on an ASF licence, this is quite feasible. But there are lots of questions to answer. Who would be willing to do this? Would they be the right people? And where would they take the project?

    Forking a project could be a dangerous move for the community, but history shows that it won't necessarily fail. Perhaps it's better for the project to take the pain now than to have death by a thousand cuts.
Judging by the strength of ill-feeling regarding SpringSource's latest announcement, I'm sure there must be other people in the wider Spring community thinking the same thing. I wonder if there are people working within SpringSource who feel the same way, or is the gravy train now just moving too fast for them to hop off?

Friday, September 12, 2008

Generic support for dynamically reloadable web frameworks

One little feature that I think we're close to pulling off with Impala is the ability to embed any Servlet-based web application and have it dynamically reloadable as a module. I'm thinking of the likes of Tapestry, Struts, etc.

Up to version 1.0M2, this capability has only really existed for Spring MVC. However, to make Impala really attractive to users of other frameworks, you'd want to support these as well. After all, not every Spring user is also using Spring MVC as the web framework.

More recent changes now allow you to embed any servlet into the application context XML belonging to an Impala web module, using code such as:

<bean id="delegateServlet" class="org.impalaframework.web.integration.ServletFactoryBean">
    <property name="servletName" value="delegateServlet"/>
    <property name="servletClass" value="servlet.SomeFrameworkServlet"/>
    <property name="initParameters">
        <map>
            <entry key="controllerClassName" value="servlet.ServletControllerDelegate"/>
        </map>
    </property>
</bean>

Note how the init parameters, the servlet name and the servlet class have been specified, as they would be for entries in web.xml.

The InternalFrameworkIntegrationServlet, whose definition is shown below, is what allows the servlet to "live" within an Impala module. It contains the glue code that ties an invocation from outside of a module (within the context of a servlet container) to the servlet within the module.

<bean class="org.impalaframework.web.integration.InternalFrameworkIntegrationServletFactoryBean">
    <property name="servletName" value="myservlet"/>
    <property name="servletClass"
        value="org.impalaframework.web.integration.InternalFrameworkIntegrationServlet"/>
    <property name="delegateServlet" ref="delegateServlet"/>
</bean>

Finally, to communicate with the module itself, there is the ModuleRedirectingServlet, which simply delegates to a servlet registered in the ServletContext under a name which corresponds to the name of the module (and happens to correspond to the first part of the URL's servlet path).

Here's an example configuration in web.xml:

<servlet>
    <servlet-name>module-redirector</servlet-name>
    <servlet-class>org.impalaframework.web.integration.ModuleRedirectingServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>module-redirector</servlet-name>
    <url-pattern>*.htm</url-pattern>
</servlet-mapping>

Together, these offer the capability of dynamically embedding applications using arbitrary web frameworks. The idea is that you can register and load a Struts, Webwork, Tapestry or other module after the JVM has started, without impacting any existing code, and without having to restart the JVM or reload the entire web application within the application server. A powerful concept.

There are definitely some gotchas to take care of. For one, it relies on object instances being loaded using the class loader obtained via Thread.currentThread().getContextClassLoader(). Also, how can we make sure that objects saved to sessions don't cause ClassCastExceptions when the module is reloaded? And how can we make sure that the same result does not occur because of one module attempting to access objects saved to the session by another module? And of course, we'd need to prove it working with real applications based on these frameworks, not example code assuming noddy pseudo frameworks. These kinds of problems will need to be taken care of before the feature can be considered fully solved. It should be fun figuring it out, though.
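
The session gotcha boils down to a basic property of the JVM: the same class loaded by two different class loaders is treated as two distinct types. The following snippet is purely illustrative and not Impala code; it assumes a 2008-era JVM where the application class loader is a URLClassLoader.

import java.net.URL;
import java.net.URLClassLoader;

// Illustrative only: a class loaded by two different class loaders yields two
// distinct types, which is why an object stored in the session by an old module
// class loader cannot be cast to the "same" class loaded by the new one after a
// reload.
public class ClassLoaderIdentityDemo {

    public static void main(String[] args) throws Exception {
        URL[] classpath =
                ((URLClassLoader) ClassLoaderIdentityDemo.class.getClassLoader()).getURLs();

        // Two independent loaders over the same classpath, neither delegating to the other
        ClassLoader first = new URLClassLoader(classpath, null);
        ClassLoader second = new URLClassLoader(classpath, null);

        Class<?> a = first.loadClass("ClassLoaderIdentityDemo");
        Class<?> b = second.loadClass("ClassLoaderIdentityDemo");

        System.out.println(a.getName().equals(b.getName())); // true: same class name
        System.out.println(a == b);                          // false: different runtime types
    }
}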

Impala talk at Spring User Group UK

After quite a few weeks of being very busy at work, followed by a few weeks of taking it pretty easy and enjoying the summer, I'm now back to work, and trying to ramp things up a bit with Impala. On Wednesday night I did a talk at the Spring User Group UK on Impala. You can view the talk here - it's a bit grainy, but you can still hear everything. You can view the slides here.

I was given half an hour for the presentation, and I managed to squeeze it into 27 minutes, including a demo. I was pretty happy with the talk - my little example demonstrating creating a new Hibernate entity within a Spring application went without a hitch.

Hope to do a few more talks in the coming months.

Sunday, July 13, 2008

Impala 1.0 M2 released

Today I dropped a release of Impala 1.0M2.

It is mostly an incremental release, with a few relatively minor enhancements and bug fixes, but there is one significant feature: a mechanism for allowing module metadata to be contained within the module, instead of being externally specified, as in previous releases. I will post a bit more detail on this feature, as it does make specifying modules more intuitive and concise, as soon as time constraints ease. This change is covered in this issue report, and does involve minor backward incompatible changes.

A total of 20 issues are covered in this release; the full set is listed in this search. Enjoy.

Sunday, May 25, 2008

How lightweight is Impala?

In my last blog, I described a set of criteria which can be used to determine how lightweight a framework or technology is. Let's consider Impala against each of those criteria:

1. How much system resource does the technology use?
Impala adds an almost negligible overhead to a plain Spring application.

2. Do applications need special runtime support to run?
Impala simply runs on the JVM. Its only runtime dependencies are Spring and logging libraries.

3. Are any special build steps required for the application?
No, it can be run from within Eclipse without any build steps.

4. How difficult is it to test applications which use the technology?
The interactive test runner and integration test support make Impala applications easy to test in a very productive way.

5. Can applications be updated dynamically, or do updates require a restart?
Yes.

6. How easy is it to modularise applications which use the technology?
Modularity is built in from the ground up.

7. Do applications require any code generation, either source code or byte code?
None, other than that which is used in the equivalent Spring application.

8. How quick are applications to start?
Not perceptibly slower to start initially than the equivalent plain Spring application, but requiring dramatically fewer restarts.

9. How complex is the technology to understand?
Impala is simple to understand. The developer only needs to have an understanding of the basic principles, architecture and project conventions to get going. Only in very few places is interaction with Impala APIs required.

10. Is any special tool support required to be productive with the technology?

Impala works in a vanilla Eclipse installation without any additional required plugins.

Overall, I'd argue that Impala passes this lightweightness test with flying colours. I'm also pretty convinced that an equivalent comparison with alternative technologies would be less favourable.

See more details on these definitions for lightweightness.

What makes a technology lightweight?

What makes a technology lightweight? Being lightweight is considered a good thing. Lightweight technologies are popular with developers because they are simpler and less cumbersome to work with. The downside, of course, is that lightweight technologies can be less feature rich than their more "heavyweight" alternatives. Whether this matters depends on whether the missing features are actually required, that is, whether the value that these features provide outweighs the overhead that is added.

Plenty of technologies are "sold" on the basis of being lightweight. While most of us have an intuitive idea of what lightweight means, it's quite useful to think of this concept in terms of a more concrete set of questions or criteria. This can be helpful in making comparisons between frameworks and technologies, and indeed for assessing claims made on behalf of particular technologies.

1. How much system resource does the technology use?
This is the most obvious measure of how heavyweight a technology is. What is the memory footprint of an application? A more narrow definition of lightweight vs heavyweight technology is most likely to focus on this aspect of the definition.


2. Do applications need special runtime support to run?
If the technology requires special runtime support, then it automatically becomes more heavyweight. A good example is EJB 2, which requires applications to be run and tested in a container. Even if the container is embeddable in a standard JVM (for example within the IDE), this can make it more difficult to test applications, particularly if the tests themselves need to run within the container, interacting with the code under test.

3. Are any special build steps required for the application?
If the technology requires artifacts to be packaged in a certain way, then this adds an overhead to the build cycle. A related question is, can applications be run directly in an IDE without an explicit build? The most lightweight technologies by this definition require no build steps at all - you simply fire up the application in your IDE.

4. How difficult is it to test applications which use the technology?
If a technology requires special runtime support (see 2.) or it is complex to set up dependencies for running tests, the barrier to writing effective integration tests is raised, resulting in a reduction in productivity and/or quality.

5. Can applications be updated dynamically, or do updates require a restart?
If every change made requires an application restart, this can make the technology feel more heavyweight, especially if restarts are time consuming.

6. How easy is it to modularise applications which use the technology?
Technologies which allow applications to be packaged and run in a modular way will feel more lightweight than those which require everything to be run up as an amorphous "blob", especially as the application grows in size and the need to be more fine-tuned about what is deployed becomes more obvious.

7. Do applications require any code generation, either source code or byte code?
Applications which require code generation generally involve extra build steps which can complicate the build process and/or slow down the build/test cycle. Byte code generation can hide much of this pain, but does introduce a measure of complexity which can make a technology feel heavyweight.

8. How quick are applications to start?
Technologies that are slow to start are clearly more heavyweight - they are doing more, have more going on behind the scenes, and are using more memory. They also are more likely to leave behind the perception that they are inefficient.

9. How complex is the technology to understand?
The more complex a technology is, the more heavyweight it will feel. One way of thinking of it is this: the more complex the technology, the more time you will need to spend, for each step you take, thinking about what you are doing relative to simply doing it.

10. Is any special tool support required to be productive with the technology?

If special tool support is required to make you productive with a technology, then this technology will feel more heavyweight than one which imposes no such requirements. Now, not only do you need to worry about dependencies in your runtime environment, you need to worry about those in your tooling environment. Also, it's slower to get going with the technology to start with.

You can take this methodology further by ranking frameworks and technologies on a scale of 1 to 10 on each of these points. Some frameworks and technologies which purport to be lightweight are actually considerably less so when evaluated against these criteria.

In my next blog, I will evaluate Impala's lightweightness against these criteria.

Wednesday, May 21, 2008

JavaWUG BOF 37: talk on Impala Framework

Just a quick update on my talk at the Java Web User Group in London last night. Thanks to those who made it - as always it was an enjoyable evening.

Here's a copy of the presentation.

A main focus of my presentation was a demo based on an Impala version of the Spring Petclinic application - one of the main reference applications bundled with the Spring Framework.

During the demo I introduced a new feature to the application, involving a new database table, some new Hibernate mappings, persistence code, as well as some enhancements to the web presentation layer. The changes were made without having to restart the application during the demo; all updates were made on the fly.

The source code for the demo is available from the Impala subversion repository. The changes made during the demo are in this file.

Friday, May 16, 2008

Talk next Tuesday on Impala at JavaWUG

Next Tuesday evening I am doing a talk on Impala in London. Here are the details:

http://www.jroller.com/javawug/entry/javawug_bof_37_impala_framework

You can register for the event here:
http://skillsmatter.com/event/java-jee/javawug-bof-37

The talk is at 6:30pm at Skills Matter in London. I'll be explaining why I created Impala, how it can dramatically accelerate Spring-based development and allow you to create more modular, maintainable applications. The format will be a mixture of slides and demos.

Saturday, May 10, 2008

Impala 1.0M1 released - first public release

I am pleased to announce the first public release of Impala, a dynamic module framework for Java enterprise application development.

Impala 1.0 M1 can be downloaded from http://code.google.com/p/impala/downloads/list

Impala builds on the Spring Framework to provide a genuinely modular, highly productive environment for web-based applications. It allows you to divide Spring-based applications into a hierarchy of modules which can be dynamically added to, updated or removed from a running application.

Impala's modularity features make it possible to write applications which are much easier to maintain than plain Spring applications. Impala enables applications which can grow very large without exploding in complexity. Impala also enables genuine productivity enhancements over plain Spring development, through the dynamic module loading capability, seamless integration with Eclipse, and the efficient test management features. Impala also features basic built-in build support, based on ANT, and dependency management capabilities.

With Impala, you can take your Spring application development to the next level without having to grapple with any unfamiliar underlying technologies or tool sets. It requires no additional third party libraries beyond those required for plain Spring applications. It works within existing Java deployment environments, and requires no complex runtime infrastructure or tool support.

For more information on getting started with Impala, see http://code.google.com/p/impala/wiki/GettingStarted.

Monday, May 5, 2008

Spring Application Platform validates Impala's existence

A couple of days ago I wrote a blog in which I was somewhat negative about the Spring Application Platform. My issue is partly with the problems SpringSource is trying to solve, and also the way that they are trying to solve them.

The problems being targeted are twofold: first, the ability to get more precise control over which versions of which class are loaded by which libraries in the JVM, and secondly, the need for modular applications.

What I've been saying all along is that while the ability to get precise control over class versioning is worthwhile, it also involves quite a lot of pain, and for most developers, this pain will not be worthwhile.

What is really needed is modularity. SpringSource have gone straight for OSGi to achieve both modularity and class versioning. The problem is that this solution also brings a lot of complexity. So in order to manage this complexity, an extra solution is needed to hide it. Enter the Spring Application Platform.

The problem with all of this is that there seems to be an implicit assumption that effective, dynamic modularity is not possible without OSGi. How about another approach? Why not try to see how far you can get with just the standard Java class loading mechanisms? Spare yourself all the hassle where it's not needed, and go straight for the sweet spot. That's the Impala approach. OSGi may come to Impala too, but it won't be forced on users.

Surprisingly, you can get pretty far without OSGi. For one, you can have multiple versions of your own applications loaded simultaneously in different modules. You can do on the fly updates. You can also get tremendous productivity advantages from simple and lightweight mechanisms for managing integration tests, with fewer restrictions on your runtime environment than you will have with OSGi.

What you don't get without OSGi is the ability to have multiple versions of third party libraries loaded concurrently within the same JVM. The question you have to ask yourself is this: how important is this feature to you? How much pain are you prepared to go through to achieve it? After all, hundreds and thousands of Java applications have been written that don't rely on this feature. In my time as a Java developer, I can only think of at most a handful of occasions where class versioning has presented a problem. It does not seem like the kind of problem that deserves a solution with a far reaching impact on your technology choice, runtime environment and tool set.

In short, I am apparently in agreement with SpringSource that modularity and dynamic reloading are two features whose absence has been a real limitation to development using the Spring Framework. This certainly convinces me that the last year or so that I have spent working on Impala has not been a waste of time. Indeed, the benefits experienced by projects I have worked on using Impala suggest the contrary.

Friday, May 2, 2008

Spring Application Platform - is it a big mistake?

It was with some consternation that I read the Server Side thread on Spring's announcement of a new application platform.

Firstly, the Spring Application Platform, as it's called, is essentially a Spring OSGi integration on steroids. I've voiced concerns on how well OSGi is going to be received by ordinary developers, as against application server vendors. I'm yet to be persuaded that regular Java developers will be particularly enthusiastic about dealing with the idiosyncrasies of OSGi. There is more than a hint of EJB history about it. SpringSource (and application server vendors?) have identified OSGi as the way that we ordinary mortals must now follow.

Spring Application Platform (and Spring Dynamic Modules) are great if you've already bought into OSGi. As a general platform for enterprise application development - at this point I'm yet to be convinced.

Secondly, with its Application Platform, SpringSource has moved squarely into the application server market. This is something which I didn't expect them to do, and I don't think it's a very good idea. It's too invasive, too big a step in one particular direction for me to feel comfortable with. I think of the Spring Framework as a technology which works seamlessly with a broad variety of platforms and application servers. With the Application Platform, they're pinning their colours to a particular mast - a technology stack based on Equinox and Tomcat, and setting themselves in competition with other application servers.

Thirdly, there is the question of the license. For the first time, SpringSource is introducing a major product which is not based on the Apache V2.0 license, instead going for a GPL license. That's a major change that reflects a real difference in the way that the company is positioning itself in the marketplace.

Finally, the question in my mind is where the Spring Framework itself fits into all of this. I've always thought of the Spring Framework as the flagship product coming from the Rod Johnson crew. It didn't matter if SpringSource came up with one or two turkeys in its portfolio, because at least the core framework is solid. But with the Spring Application Platform, they're betting big. They've clearly had some of their best brains working on the project for some time. A failure won't be quite as easy for the development community to brush off.

So what will happen now? Will the Spring Framework become the "poor relation" of the Application Platform? Will new features and improvements go into the Spring Framework, or will this suffer in the future at the expense of the Application Platform? The waters have definitely been muddied.

The Spring Application Platform is the biggest announcement to come out of the Spring team for some time. It also looks like it could be a big mistake. Spring became popular in the first place as a practical, community driven solution to the real problems with Java enterprise applications, with a focus on simplicity. The latest offering seems to be moving in a rather different direction.

Wednesday, April 30, 2008

Thoughts on integration testing

Everybody (pretty much) knows what they mean by unit testing, and on the whole I would expect most people to have a broadly common understanding of the term. Unit testing of course involves testing the behaviour of a particular unit of code (e.g. a Java class) in isolation.

When it comes to integration testing, it's probably not quite as clear cut. For example, does a test need to cover an entire use case to be considered an integration test? What about a database test which only covers a single JDBC-based method? Is this an integration test?

For me, the key point of distinction is the realness of the test. An integration test is such because it embodies the real behaviour of the system. The more a test relies on simulating aspects of a system's behaviour, the less real it is, and the further it moves away from being an integration test.

This brings me on to the question of what is an integration test in a Spring/Impala environment. Here, an integration test will typically involve using a real Spring application context rather than constructing the fixtures programmatically. It will involve connecting to a real database, rather than an in-memory test database (e.g. HSQLDB). It will make limited if any use of mock objects. All in all, integration tests in a Spring environment are closer in behaviour to that exhibited by the real system.
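
To make the distinction concrete, here is roughly what such an integration test looks like when written against a real application context. The context file, bean name and OrderService interface are hypothetical, invented purely for the example.

import junit.framework.TestCase;
import org.springframework.context.support.ClassPathXmlApplicationContext;

// Illustrative only: a JUnit 3-style integration test which wires up a real
// Spring application context and exercises a real bean, rather than mocks.
public class OrderServiceIntegrationTest extends TestCase {

    // Hypothetical application interface, declared here only to make the example self-contained
    public interface OrderService {
        String placeOrder(String productCode, int quantity);
    }

    public void testPlaceOrder() {
        ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("order-service-context.xml");
        try {
            OrderService orderService = (OrderService) context.getBean("orderService");
            // Goes through the real Spring wiring, and typically a real database
            assertNotNull(orderService.placeOrder("widget", 2));
        } finally {
            context.close();
        }
    }
}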

Another question worth addressing is this: what is the relative value of integration tests versus unit tests? A working set of integration tests gives me much more confidence than a set of unit tests which are not backed by integration tests.

One difficulty with integration tests is that they are much less direct than unit tests. It's harder to establish the precise link between the tests and what is actually getting tested. On the plus side, you can get a lot of test coverage with very little code through integration tests.

The other problem with integration tests is, of course, that they can be painful to write. Specifying dependencies required for integration tests can be fiddly and time consuming. Also, integration tests can be slow to run. For these reasons, some developers shy away from integration tests, preferring to rely on unit tests plus manual testing of use cases using the deployed application. The big cost is that the quality of the testing regime in these circumstances is much diminished, making it much easier for bugs to slip through.

With Impala, integration tests are so easy to write and quick to run, that there really are no excuses. Integration tests are easy to write because dependencies are expressed at a high level through composition of modules. They are quick to run because of the dynamic module loading support used by the interactive test runner, and the efficient way that suites of unit tests are managed within Eclipse. See this wiki page for more details.

Saturday, April 12, 2008

Impala and Strecks

A few of you may know about my involvement in another open source project, Strecks, which was a set of Java 5-specific extensions to Struts. Yes, remember Struts, the one that used to be so popular a few years ago. There are a few similarities between Impala and Strecks, but also a few differences, which I would like to comment on.

First, the similarities. Just like Strecks was an extension to Struts, Impala is an extension to Spring. I wrote Strecks to address what I perceived to be limitations of Struts. These limitations of Struts are now fairly well understood. In a similar way, I wrote Impala to address shortcomings of Spring - in particular, the lack of first class support for modules.

Strecks allowed me to write web applications with Struts the way I believed they should be written. The changes I made with Impala have allowed me to write Spring applications in the way that I would like to write them. Some of the real headaches that came with attempting to scale Spring applications and build in sophisticated configuration options have just gone away.

Now to the differences. The timing of Strecks was unfortunate. Just as I was in the reasonably advanced early stages of development, an announcement was made that Struts was to merge with Webwork, with future Struts 2 development using the Webwork code base. That was the death knell to Struts as we knew it. I believed at the time that Strecks was a genuine way forward for Struts, without having to throw away the code base and stick a finger at the huge Struts user community. However, once the decision had been made, any project based on old Struts had little chance of gaining any long term traction. I still took the project to a 1.0 final release, but there seemed little point in actively developing the project from then onwards, unless it served my own needs directly. Since then, I haven't been working on Struts projects more than intermittently, so such a need hasn't arisen in any real sense. That being said, it's still getting 100 to 200 downloads a month, so it hasn't disappeared completely off the map. Oddly, I am now spending a bit more time on a Struts-based project, so it might be a good opportunity to revive Strecks and put in a couple of features I had wanted to add. Also, I might focus on Strecks as a target for web framework integration with Impala.

In contrast to Struts, I don't think there is any chance that the Spring code base is going to be deprecated any time soon. It's an extremely healthy project, arguably more so than any other in the Java community. Secondly, internally, Spring is well architected. It has all the extension points I have needed for Impala to fit naturally into the existing design paradigm. I have never felt the need to correct Spring's faults. Struts, on the other hand, had flaws in its architecture that were not always easy to get around when implementing features in Strecks.

My hope is that the Spring community will share my view that Impala elegantly solves some of the real practical problems which come when developing large, complex Spring-based applications, delivering substantial productivity benefits at the same time. It has been a fascinating and enjoyable project to work on. It would be nice to see some of these benefits being shared more widely. But first I need to get the public release out!

Friday, April 4, 2008

More about Impala scaffolding

It is important to appreciate that developers are busy people - if they are going to spend an hour looking at your project, then this has to be an hour well spent. For this reason, there is a real focus in Impala on making the developer experience as seamless as possible. Everything should work out of the box. It's only when you start doing interesting stuff, stuff specific to your problem domain, that you should have to do any real work.

In my previous post I described the simple scaffolding system that is available for Impala. This is not supposed to compete with the scaffolding provided by Ruby on Rails, Grails and such projects. There is a fundamental difference in approach. Impala's scaffolding is only supposed to take you to the point where you have a clean slate to work on. It is not a code generation framework, and I have no intention of moving into that space with it. However, getting to a clean slate position - where you can actually start working on your application - is not a zero work task for many projects. For a project such as Impala which is designed to support very complex applications, you want to make this as simple as possible. I don't want developers who take time out to try Impala to spend their first hour creating directory structures, setting up class paths, and downloading third party libraries from various sources.

The scaffolding I've created with Impala is designed to help you transition to square one as quickly as possible. Once you get there you have a working Impala Hello World application, with working module definitions, a working web application, interactive integration tests and a JUnit suite. With a decent internet connection, running through the scaffolding steps should only take a few minutes - allowing time to play, try things out and do some fun stuff.

Tuesday, April 1, 2008

Scaffolding for Impala

It's been a little while since my last post, and since then, Impala has taken a few steps forward in the background. One of these is a simple scaffolding mechanism, which allows you to go from an empty Eclipse workspace to a working application in just a few simple steps.

First, start by downloading the latest snapshot distribution of Impala from the subversion repository.

http://impala.googlecode.com/svn/trunk/impala/impala/dist/impala-SNAPSHOT.zip

You can do this via a web browser. On Linux or Mac OS X you may find it more convenient to use curl.

cd ~
curl -o impala-SNAPSHOT.zip http://impala.googlecode.com/svn/trunk/impala/impala/dist/impala-SNAPSHOT.zip

You can then unzip the Impala distribution:

unzip impala-SNAPSHOT.zip

Once you've unzipped Impala, set the IMPALA_HOME environment variable.

IMPALA_HOME=~/impala-SNAPSHOT
export IMPALA_HOME

On Windows, you will probably want to use the GUI to do the same.

Now, change to IMPALA_HOME, and run the following command:

ant -f scaffold-build.xml scaffold:create -Dimpala.home=./

You will then be guided through an interactive process where you will need to specify the following information:
  • the name of the project containing the root Impala module. This project is a kind of a master project, and will also be the project from which you will typically run ANT build scripts when this is necessary.
  • the name of the project containing a non-root module. In a real world application, non-root modules would contain implementations of DAOs, service methods - anything really. In a substantial real world application there will be several if not many non-root modules, together forming a hierarchy of modules.
  • the name of a project containing a web module. Although it is possible to have multiple web modules in a single application, the simple scaffolding starts with just one web project.
  • the name of the repository project. This contains the third party jars used by the different application modules.
  • finally, the name of the test project. The test project is really a convenience from which you can easily run suites of tests for the entire application.
The last time I ran this command, the following output was produced:

ant -f scaffold-build.xml scaffold:create -Dimpala.home=./
Buildfile: scaffold-build.xml

scaffold:input-workspace-root:
[input] Please enter name of workspace root directory:
/Users/philzoio/workspaces/scaffold

scaffold:input-main-project:
[input] Please enter main project name, to be used for root module:
main

scaffold:input-module-project:
[input] Please enter name of first non-root module:
module

scaffold:input-web-project:
[input] Please enter name of web module:
web

scaffold:input-test-project:
[input] Please enter name of tests project:
test

scaffold:input-repository-project:
[input] Please enter name of repository project:
repository

scaffold:create-confirm:
[echo] Workspace root location: /Users/philzoio/workspaces/scaffold
[echo] Main (root) project name: main
[echo] First non-root module project name: module
[echo] Web project name: web
[echo] Tests project name: test
[echo] Repository project name: repository
[input] Press return key to continue, or CTRL + C to quit ...


scaffold:create:
[mkdir] Created dir: /Users/philzoio/workspaces/scaffold
[copy] Copying 6 files to /Users/philzoio/workspaces/scaffold/main
[copy] Copied 6 empty directories to 5 empty directories under /Users/philzoio/workspaces/scaffold/main
[copy] Copying 3 files to /Users/philzoio/workspaces/scaffold/main
[copy] Copying 1 file to /Users/philzoio/workspaces/scaffold/main/spring
[copy] Copying 4 files to /Users/philzoio/workspaces/scaffold/module
[copy] Copied 5 empty directories to 4 empty directories under /Users/philzoio/workspaces/scaffold/module
[copy] Copying 2 files to /Users/philzoio/workspaces/scaffold/module
[copy] Copying 1 file to /Users/philzoio/workspaces/scaffold/module/spring
[copy] Copying 11 files to /Users/philzoio/workspaces/scaffold/web
[copy] Copied 8 empty directories to 4 empty directories under /Users/philzoio/workspaces/scaffold/web
[copy] Copying 2 files to /Users/philzoio/workspaces/scaffold/web
[copy] Copying 1 file to /Users/philzoio/workspaces/scaffold/web/spring
[copy] Copying 2 files to /Users/philzoio/workspaces/scaffold/test
[copy] Copying 1 file to /Users/philzoio/workspaces/scaffold/test
[copy] Copying 1 file to /Users/philzoio/workspaces/scaffold/repository
[copy] Copied 2 empty directories to 1 empty directory under /Users/philzoio/workspaces/scaffold/repository

BUILD SUCCESSFUL

Before we import the Eclipse projects, there are just two more steps to follow:

First, go to the newly created main project, and run the following two commands:

cd /Users/philzoio/workspaces/scaffold/main
ant fetch
ant get

The fetch command will copy the Impala libraries into the repository project of the new workspace, as shown by the following output.

ant fetch
Buildfile: build.xml
[echo] Project using workspace.root: /Users/philzoio/workspaces/scaffold
[echo] Project using impala home: /Users/philzoio/impala-SNAPSHOT

repository:fetch-impala-from-lib:
[copy] Copying 12 files to /Users/philzoio/workspaces/scaffold/repository/main

repository:fetch-impala-from-repository:

repository:fetch-impala:

fetch:

BUILD SUCCESSFUL

The get command downloads the necessary third party libraries, as defined using a simple format in dependencies.txt files.

ant get
Buildfile: build.xml
[echo] Project using workspace.root: /Users/philzoio/workspaces/scaffold
[echo] Project using impala home: /Users/philzoio/impala-SNAPSHOT

shared:get:
[echo] Project using workspace.root: /Users/philzoio/workspaces/scaffold
[echo] Project using impala home: /Users/philzoio/impala-SNAPSHOT

download:get:
[mkdir] Created dir: /Users/philzoio/workspaces/scaffold/repository/build
[mkdir] Created dir: /Users/philzoio/workspaces/scaffold/repository/test
[get] Getting: http://ibiblio.org/pub/packages/maven2/commons-logging/commons-logging/1.1/commons-logging-1.1.jar
[get] To: /Users/philzoio/workspaces/scaffold/repository/main/commons-logging-1.1.jar
[get] Getting: http://ibiblio.org/pub/packages/maven2/commons-logging/commons-logging/1.1/commons-logging-1.1-sources.jar
[get] To: /Users/philzoio/workspaces/scaffold/repository/main/commons-logging-1.1-sources.jar

...

[download] ******************************************************
[download]
[download] RESULTS OF DOWNLOAD OPERATION
[download]
[download] org/springframework/spring-webmvc/2.5.2/spring-webmvc-2.5.2.jar resolved from
http://ibiblio.org/pub/packages/maven2/org/springframework/spring-webmvc/2.5.2/spring-webmvc-2.5.2.jar
[download] org/springframework/spring-webmvc/2.5.2/spring-webmvc-2.5.2-sources.jar resolved from
http://ibiblio.org/pub/packages/maven2/org/springframework/spring-webmvc/2.5.2/spring-webmvc-2.5.2-sources.jar
[download]
[download] ******************************************************

get:

BUILD SUCCESSFUL
Total time: 3 minutes 6 seconds

We're now ready to import our projects into Eclipse. Start by opening Eclipse in the newly created workspace.

Use the menus File -> Import ... -> General -> Existing Projects Into Workspace. When prompted, set the import base directory to the workspace root directory. This should bring up a dialog box as shown below.

Select all of the projects and import them.

If you reach this point and no errors are showing in your workspace, then congratulations! You have just set up a new Impala workspace.

Let's test it out:

Running up the web application

Using Ctrl+Shift+T, find the class StartServer. Right-click, then select Run As ... Java Application.

This will start up a Jetty server on port 8080.

The text shown on the console view of Eclipse will look something like this:

2008-04-01 20:56:10.408::INFO: Logging to STDERR via org.mortbay.log.StdErrLog
2008-04-01 20:56:10.479::INFO: jetty-6.1.1
2008-04-01 20:56:10.963:/web:INFO: Initializing Spring root WebApplicationContext
INFO : BaseImpalaContextLoader - Loading bootstrap context from locations [META-INF/impala-bootstrap.xml, META-INF/impala-web-bootstrap.xml, META-INF/impala-jmx-bootstrap.xml, META-INF/impala-web-listener-bootstrap.xml]
INFO : ScheduledModuleChangeMonitor - Starting org.impalaframework.module.monitor.ScheduledModuleChangeMonitorBean with fixed delay of 2 and interval of 10
INFO : LoadTransitionProcessor - Loading definition root-module
INFO : ScheduledModuleChangeMonitor - Monitoring for changes in module root-module: [file [/Users/philzoio/workspaces/scaffold/main/bin]]
INFO : LoadTransitionProcessor - Loading definition module
INFO : ModuleContributionPostProcessor - Contributing bean messageService from module module
INFO : ScheduledModuleChangeMonitor - Monitoring for changes in module module: [file [/Users/philzoio/workspaces/scaffold/module/bin]]
INFO : LoadTransitionProcessor - Loading definition web
INFO : ScheduledModuleChangeMonitor - Monitoring for changes in module web: [file [/Users/philzoio/workspaces/scaffold/web/bin]]
2008-04-01 20:56:12.133:/web:INFO: Initializing Spring FrameworkServlet 'web'
INFO : ExternalLoadingImpalaServlet - FrameworkServlet 'web': initialization started
INFO : ExternalLoadingImpalaServlet - FrameworkServlet 'web': initialization completed in 21 ms
2008-04-01 20:56:12.168::INFO: Started SelectChannelConnector @ 0.0.0.0:8080
DEBUG : ScheduledModuleChangeMonitor - Completed check for modified modules. No modified module contents found
DEBUG : ScheduledModuleChangeMonitor - Completed check for modified modules. No modified module contents found
You can connect to the server using the URL:

http://localhost:8080/web/message.htm



Note that Impala is set up to automatically detect changes in your modules and to reload modules in response to these changes. You can try out this mechanism by making changes to classes such as MessageController (in the web project) and MessageServiceImpl (in the module project).
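To give an idea of what is actually being reloaded, the service interface and implementation will look roughly like the sketch below. Only the class names appear in the walkthrough; the method name and module layout are assumptions based on the "Hello World!" output:

// the interface would typically live in the shared root module
public interface MessageService {
    String getMessage();
}

// the implementation lives in the module project - this is the class to edit
public class MessageServiceImpl implements MessageService {
    public String getMessage() {
        // change this string while the server is running and watch the module reload
        return "Hello World!";
    }
}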

Running the standalone interactive client

Use Eclipse to find the JUnit test class MessageIntegrationTest. Again, run this as a Java application, and execute tests, reload modules, etc., using the interactive test runner. Here's some example output:

log4j:WARN No appenders could be found for logger (org.springframework.context.support.ClassPathXmlApplicationContext).
log4j:WARN Please initialize the log4j system properly.
Test class set to test.MessageIntegrationTest
Unable to load module corresponding with directory name [not set]
Starting inactivity checker with maximum inactivity of 600 seconds
--------------------

Please enter your command text
>test
No module loaded for current directory: main
Running test testIntegration
.Hello World!

Time: 0.055

OK (1 test)


Please enter your command text
>reload
Module 'root-module' loaded in 0.096 seconds
Used memory: 1.5MB
Max available memory: 63.6MB


Please enter your command text
>module module
Module 'module' loaded in 0.035 seconds
Used memory: 2.1MB
Max available memory: 63.6MB


Please enter your command text
>t
No module loaded for current directory: main
Running test testIntegration
.Hello World!

Time: 0.012

OK (1 test)

Run the suite of tests

Any of the JUnit integration tests can be run as a regular unit test in Eclipse, with green bar and all. From the tests project, find the class AllTests. This contains a suite of tests covering all the tests in the project. Run this as a regular unit test, and you will see the following:



There's plenty more to get your teeth stuck into, but having a working web application, a working suite of tests, and interactive tests which can be run out of the box is a helpful start.
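Incidentally, the AllTests class is just an ordinary JUnit 3-style suite. A minimal sketch, assuming only the MessageIntegrationTest class mentioned above (the real suite may aggregate more tests), might look like this:

import junit.framework.Test;
import junit.framework.TestSuite;

// minimal sketch of a JUnit 3-style test suite
public class AllTests {

    public static Test suite() {
        TestSuite suite = new TestSuite("Scaffold tests");
        suite.addTestSuite(MessageIntegrationTest.class);
        return suite;
    }
}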

Tuesday, February 5, 2008

Impala working on Tomcat as war

Today I reached a significant milestone with Impala. I successfully deployed and ran Impala as a war file with reloadable modules, in Tomcat. Modules are placed as jars in the folder /WEB-INF/modules.

This is a big step forward. Until now, Impala has been developed and deployed in Eclipse or using Ant and an embedded Jetty runner. That's great for development, but for production a war file is much more convenient: having a single file which can be dropped into a web container is far more deployment-friendly than setting up a special folder on the target machine whose structure mirrors the development environment.

A war deployment option was one of the main features standing in the way of a public release. There's quite a bit of tidying up to do in the way the build/packaging process works, but it's fair to say that we're now a big step closer to a first public release of Impala.

Wednesday, January 30, 2008

Modularity: what is it, and why is it important?

Modularity is taken for granted as something which has value in software engineering, perhaps without always a well-articulated understanding of what it is and why it is valuable. I came to this realisation when I found myself having some difficulty attempting to explain why the modularity that Impala brings can be beneficial to a large application. Instinctively, I felt I had a good understanding, but it's useful to put that understanding into words.

What is modularity?

Googling turned up remarkably few articles dedicated to the subject. The definition in the Wikipedia entry was quite good; I quote the relevant sections:

"Programs that have many direct interrelationships between any two random parts of the program code are less modular (more tightly coupled) than programs where those relationships occur mainly at well-defined interfaces between modules.
Modules provide a separation between interface and implementation. A module interface expresses the elements that are provided and required by the module. The elements defined in the interface are visible to other modules. The implementation contains the working code that corresponds to the elements declared in the interface."

Modularity in Impala

This definition fits very well with how modularity is implemented in Impala. Modules fit into a hierarchy, and an Impala module depends directly only on its parent. This means that two modules at the same level have no direct dependencies on each other. Modules communicate with each other through well-defined interfaces - specifically, Java interfaces defined in a shared parent module.

Child modules dynamically contribute their implementations to a proxy bean defined at a higher level. I'm also planning contributions to a service registry, as is the case with OSGi. The proxies can themselves be dynamically registered, although static wiring of these is necessary to support traditional named-bean style dependency injection.
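To make this concrete: the interface lives in a shared parent module, while the implementation lives in a child module, so sibling modules only ever see the interface. A minimal sketch, with hypothetical names (this is not taken from any Impala sample):

// in the shared parent module - the contract visible to other modules
public interface OrderService {
    void placeOrder(String productCode, int quantity);
}

// in a child module - invisible to sibling modules, and exposed to consumers
// via a proxy bean defined at a higher level, so it can be reloaded without
// touching its clients
public class DefaultOrderService implements OrderService {
    public void placeOrder(String productCode, int quantity) {
        // persistence, messaging etc. would go here
        System.out.println("Ordered " + quantity + " x " + productCode);
    }
}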

Why is Modularity Important?

Modularity is a key weapon in reducing complexity in a large project. In the same way that blocks of functionality within a component should be partitioned into classes according to responsibilities, so should parts of an application be partitioned into modules. Like classes in an OO model, each module should have a well-defined - albeit higher-level - set of responsibilities. Ideally, these responsibilities should be closely related to each other.

The benefits of modularity, in my view, are as follows:
  • by definition, it reduces unnecessary cross-dependencies between code for different parts of a system. This makes code easier to maintain and extend. Impala enforces this kind of modularity because classes at the same level of the module hierarchy are not visible to each other.
  • it's easier to specify the constituents of a modular system at a higher level. An Impala application is just a collection of modules. In practical terms, it is much more convenient to specify an Impala application as a small number of modules than as a large number of Spring configuration files.
  • it's easier to read and understand a modular system. Each module has a clear and relatively narrowly defined set of functions, which can be more easily documented and explained than in the case of a system which is not modular.
  • a dynamic modular system such as Impala also has the advantage that parts of the system can be separately added, upgraded and removed. This is not possible in a system which is not modular.
Without modularity, a large system rapidly degenerates into one which is extremely hard to maintain. I would go so far as to argue that a large system which is not modular is doomed to excessive complexity, maintenance issues, and an unnecessarily high bug rate.

Wednesday, January 16, 2008

Spring Exchange 2008 in London

Today I attended the Spring Exchange, held in London. Not only is it good to get away from the day job occasionally, but Spring is obviously the critical base technology for Impala, so an opportunity to get a quick, thorough and free update from the horse's mouth, as it were, is very welcome. Thanks to SpringSource and organisers Skills Matter.

Rod Johnson delivered the keynote, which as usual found another interesting angle on his perennial topic - the success of Spring. This time, the theme was "The Changing of the Guard". It was all about the disruption taking place in enterprise development as a whole, with the leading technologies of yesteryear, J2EE and .NET, and their "One Size Fits All" approach, increasingly viewed as inadequate. A bleak future awaits J2EE, in his view. It's hard to disagree. Spring of course has pride of place in this future, evidenced for example by the continual rise of "Spring and Java" job adverts. As you would expect, he had some interesting ideas and perspectives. But I did feel like his talk went on a bit long ... I'm told it was 59 slides long!

The second speaker was Sam Brannen, who went through the new features of Spring 2.5 fairly succinctly, although without too much fanfare. Spring 2.5 now targets Java 1.6, fully supports Java EE 5, and ships OSGi-compliant bundles. However, the biggest area of development in Spring 2.5 core is the use of annotations, in particular the myriad ways they can be used to set up Spring bean definitions, effectively reducing the amount of configuration code that you need to write, though not without a few health warnings thrown in.

Some of the annotation mechanisms include:
  • JSR 250 common annotations, such as @PostConstruct, @PreDestroy and @Resource
  • the Spring @Autowired annotation
  • @Component, which identifies a class as a Spring bean
plus a whole bunch of others. While I am certainly a fan of annotations for specialised uses, such as in web frameworks like Strecks, there is a danger that such a sudden and vast proliferation of annotations will be confusing to many users. Personally, for most bread-and-butter bean definitions I think I will carry on preferring the basic XML definitions, with namespace support where appropriate.
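For reference, a bean using several of these annotations looks roughly like the sketch below. This is a generic Spring 2.5 example rather than anything shown at the conference, with hypothetical class and field names, and it assumes component scanning is enabled via <context:component-scan/>:

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.annotation.Resource;
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.mail.MailSender;
import org.springframework.stereotype.Component;

// picked up by component scanning and registered as a Spring bean
@Component
public class AccountFacade {

    // JSR 250 injection
    @Resource
    private DataSource dataSource;

    // Spring-specific injection by type
    @Autowired
    private MailSender mailSender;

    @PostConstruct
    public void init() {
        // runs once dependencies have been injected
    }

    @PreDestroy
    public void shutdown() {
        // runs before the container destroys the bean
    }
}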

On the whole, the annotations and new namespace implementations are really syntactic sugar, and they don't tackle a real problem which remains with Spring: the absence of modularity as a first-class concept at the heart of the framework. SpringSource aims to address this using OSGi, and I'm told that Spring OSGi will ship 1.0 next week. Nevertheless, Spring OSGi will not quite get to the crux of the problem, because a solution is needed now, and OSGi still needs to go through plenty of growing pains before it reaches maturity in the enterprise space.

The third talk before lunch was from Dave Syer, who described the improvements to Spring's web offering. This is one area where they really have made impressive strides, and there is plenty more to come. Admittedly, it is also an area where some real catching up was necessary.

The bit I liked best was Spring @MVC: the new annotation-based controllers which, combined with component scanning, offer a much neater way to wire up web applications. I couldn't help thinking back to my work on Strecks, and the similarity in approach! JSF support is much improved, but I'm not terribly bothered about this. There's more AJAX-y stuff, and interestingly, gracefully degrading versions of the JSF components.
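To illustrate the annotation-based controller style, a Spring 2.5 @MVC controller looks something like the following. The class, mapping and view names here are hypothetical, not taken from the talk or from any Impala sample:

import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.RequestMapping;

// discovered via component scanning - no explicit controller bean definition needed
@Controller
public class GreetingController {

    @RequestMapping("/greeting.htm")
    public String showGreeting(ModelMap model) {
        model.addAttribute("message", "Hello World!");
        return "greeting"; // logical view name, resolved by a ViewResolver
    }
}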

Both the afternoon sessions involved the impressive Adrian Colyer and Rob Harrop double act. One of the nice things about going to these kinds of conferences is that you discover some unexpected things: this time, how they were using Eclipse Mylyn to group together the different files used for different parts of their demos. A useful pedagogical technique. I liked Adrian's presentation on the internals of the Spring runtime, with some well-crafted diagrams, and Rob had some good perspectives on how to use Spring in the field.

Something which really struck a chord with me was his advice on the structure of projects for large, complex applications, stressing the importance of modularity, and suggesting some practices for selecting between different configurations for development, testing and production. Yes, I'm back to my favourite themes: modularity and configurability, and of course, Impala. Not only does Impala bring modularity to Spring applications, it also has a couple of neat ways of tweaking module configurations without requiring any nasty build hacks.

Unfortunately, I didn't get to the final round-table Q&A session due to home commitments, but if I had, I would have asked about Spring 3.0. Surprisingly little was said about Spring 3.0, and the plans they do seem to have are evolutionary rather than dramatic, at least as far as the core project is concerned.

A good day out, and I didn't even miss my train coming home!

Saturday, January 12, 2008

Spring users: seven reasons why you'll like Impala

Here are seven reasons why you'll like Impala if you already like Spring:
  1. Spring encourages interface based programming. Impala takes this one step further: modules communicate with each other using interfaces, and interface implementations are contained within modules.
  2. Modules help you to organise your code, and eliminate unnecessary cross-code and cross-bean dependencies. Modules provide a much simpler mechanism for identifying high level relationships between parts of an application than is possible using raw application context definition files and relationships between individual beans.

  3. What's not to like? There's virtually nothing that you can do in Spring that you can't do in Impala in exactly the same way as you would do it in vanilla Spring. You can still use virtually any of the Spring APIs, any Spring feature, technique, configuration mechanism, etc., unmodified! There's no new API or language to learn. You don't need to throw away any best practices. If anything, Impala makes it easier to enforce best practices, because it strongly encourages modular applications with well defined interfaces (based on Java interfaces).
  4. A small learning curve. To start benefiting from Impala, all you need to do is organise your project according to a well-defined set of conventions, and add a bean definition into the relevant Spring context files. Otherwise, everything is the same.

    Impala is not asking you to embrace a new programming model, set of APIs or even language. Simply leverage your Java and Spring knowledge, but to greater effect.
  5. Spring is great for test driven development in that it helps you manage dependencies better. Impala takes this a step further. The interactive test runner makes writing integration tests - still a pain point with plain Spring development - really easy, hence encouraging test driven development at every level.
  6. Tasks such as reloading configurations, updating application logic and refreshing logging configurations are simple with Impala. Impala leverages Spring's first-class JMX support to simplify these tasks.
  7. Impala defines a convention-based project structure which makes sense both in terms of productivity and enterprise Java development best practice.

Monday, January 7, 2008

Spring's Petclinic Sample, Impala style

I've spent a bit of time converting the Spring Petclinic sample so that it runs with Impala. Petclinic is actually a showcase for the various Spring data access technologies. I've cut down the Impala Petclinic sample to work with the following configuration:
  • data access using Hibernate
  • MySQL database
  • simple service and web tiers
To run the application, follow these steps:
  • svn co http://impala.googlecode.com/svn/trunk/petclinic petclinic
  • Open Eclipse, with the workspace set to the checkout directory (petclinic)
Now set up the database:
  • First, from petclinic/db, run createDB.txt (as root). This creates the petclinic user.
    mysql -u root -p < createDB.txt
  • Then as the petclinic user, insert the tables
    mysql -u petclinic -ppetclinic petclinic < initDB.txt
  • To insert data, run
    mysql -u petclinic -ppetclinic petclinic < populateDB.txt
To run the tests from Eclipse, simply run the class AllTests (in the project petclinic-tests) as a JUnit test.

To run up the web application, simply run the main class StartServer, which is in the project petclinic-web. This time, run it as a (main) Java application.

To run up the interactive test runner, run HibernateClinicTest as a Java application. Type u for usage. You can run this class as a standard JUnit test - it's part of the AllTests suite.

Next on my list is to create a sample based on Spring's Petstore sample (originally Clinton Begin's JPetstore application, a sample with a rather long history). This is more interesting, as it is a larger application with a greater variety of components, so it should make a good showcase for Impala.

Sunday, January 6, 2008

When will Impala be "go public"?

At the moment, there is still no public release of Impala. There have been no announcements on any popular web sites, to the Spring mailing lists or forums, or to any other public forum. This is deliberate. It is not because the necessary features are not present: most of the functionality needed for a public release is in place, the code is usable and of good quality, and there is enough functionality to provide real benefit to a project using Impala.

What still remains is to be absolutely sure that the Impala internal interfaces are correct. For the last few months (yes, literally!), I have been doing extensive refactoring, and I am progressing towards this point. If users come up with a whole new set of requirements and ideas on how Impala can be used, there should be a clearly defined set of interfaces to build upon.

The first public release of Impala won't be held back until just before 1.0. However, it will only occur when I am satisfied that the interfaces are as correct as I can make them, to support extensibility and maintainability moving forward. I think I am quite close. The interfaces are in pretty good working order, and the organisation of the code seems to make good sense.

I need to spend a bit more time working on the interactive test runner, and on deployment issues. Most of the work I've been doing with Impala so far has been within Eclipse itself. And of course there are samples and documentation, although the first public release doesn't need to depend on these.