If you are interested in the ideas of modularity, there is quite a good chance you have looked at OSGi. But there is also quite a good chance that you have been put off by the extra development overhead required to set up and maintain OSGi-based applications.
If this is the case for you, my message is this: don't let the fact that you are put off by OSGi put you off the ideas of modularity. Modularity is a bigger, more important and valuable idea than any particular technology. The ability to modularise an application can massively improve your application's ability to absorb complexity as new features and components are added, which in turn means that you remain much more productive as your application grows. It reduces the rate of entropy in your system, extending its potential life span, which of course is great for business.
This kind of modularity is one of the fundamental benefits of Impala. Also, because Impala's modules are dynamically reloadable, you get dynamic redeployment of small parts of your application, allowing you to greatly accelerate application development, maintain high levels of productivity, and retain high levels of code and functionality reuse as your application grows.
Impala does not solve all of the problems solved by OSGi. Notably, it does not provide versioning of third party libraries. However, it solves most of the important ones relevant for day to day development. Fundamentally, it allows you to modularise your application, without having to worry about micro-managing the dependencies of third party libraries in your application. Don't get me wrong - there are times when this capability can be important, especially for very large projects with large budgets and teams. But certainly not for every project, and probably not even for the typical one.
One way to think about this choice is to look at the graph below.
The basic idea behind modularity is that the growth in complexity of your application slows as your application grows in size, compared to applications without modules. This applies both for Impala and OSGi-based applications. The difference is that because the barriers to entry are lower for Impala-based applications, the benefits kick in sooner, and accumulate for longer over the duration of the project, greatly reducing the overall cost of complexity over the lifetime of a project.
So, don't shy away from modularity just because OSGi looks complex. The benefits of modularity are too valuable, and an alternative like Impala makes them attainable with fewer headaches.
Thursday, November 26, 2009
Thursday, November 5, 2009
The granularity of change in dynamic Java web applications
When writing Java web applications, you are continually making changes to your application, and to be productive you need to be able to deploy and test these changes quickly. The kinds of changes you make are of all sorts: from changes to resources to changes to markup templates to changes in the way your application is wired to changes in the code itself.
The point of this article is that not all changes are equal, both in their frequency and in the difficulty in applying them dynamically in a running web application. Let's go through some examples, and how the changes might be applied in different types of frameworks:
1. Static resources
The simplest kinds of changes for any dynamic web framework to apply are those of static resources - images, JavaScript files, page templates etc. That's because these kinds of artifacts are inherently free of dependents.
2. Application configuration
Configuration consists of things such as settings for data sources, mail servers, etc, as well as switches in your application itself. While it is in general possible to reflect these kinds of changes dynamically, there is a cost. For example, if you are using database connection pooling, pointing to a different data source dynamically is a non-trivial exercise. Also, if your application checks particular application settings at source each time the affected functionality is used, then the system is more dynamic, but also less performant. By contrast, if you only load particular settings at startup (for example using wired in property placeholders in a Spring application context), the application is more efficient but is more likely to require reloads to reflect changes.
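To make the startup-time approach concrete, here is a minimal Spring XML sketch (bean names and property keys are illustrative, not from any particular application). Because the placeholders are resolved once when the context starts, later edits to the properties file have no effect on the running application without a reload:

```xml
<!-- Properties are read once at startup; changing jdbc.properties
     afterwards does not affect the running application without a reload. -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:jdbc.properties"/>
</bean>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
      destroy-method="close">
    <property name="driverClassName" value="${jdbc.driver}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>
```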
3. Application wiring
Application wiring is a configuration of sorts, but relates more to how parts of the system are composed or wired together to form the whole application. In a Spring application, the application wiring is simply your Spring configuration.
In general, changes to application wiring require reloads. There are special cases where this doesn't apply. For example, you could introduce new Spring beans without reloading the application context. Changes to existing beans and their collaborators are harder to make.
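As a sketch of why this is so (class and bean names here are purely illustrative), consider a typical pair of collaborating beans. Swapping the implementation class or repointing the reference changes the wiring itself, which normally means reloading the context that contains both definitions:

```xml
<!-- Changing orderService's implementation class, or repointing its
     collaborator reference, alters the wiring and typically requires
     the application context to be reloaded. -->
<bean id="orderRepository" class="com.example.jdbc.JdbcOrderRepository"/>

<bean id="orderService" class="com.example.service.OrderServiceImpl">
    <property name="orderRepository" ref="orderRepository"/>
</bean>
```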
4. Scripts
Scripts are programs in your application which by definition can be altered without having to reload your entire application or even significant parts of it. However, a scripting infrastructure needs to be in place to allow changes to scripts to be introduced, recognised and reflected in the system.
5. Application code
Application code in a Java application means your Java classes. Actually, considering this group as a single category is an oversimplification, especially in a dynamic module system, where making changes to core interfaces and domain classes will impose much greater reloading requirements than changes to peripheral or implementation classes with fewer dependents.
6. Third party libraries
The libraries in your application are the jar files containing all the third party dependencies.
The challenge for frameworks
The key productivity challenge with a dynamic application framework is to make it as easy as possible to make the kinds of changes you need to make, while at the same time keeping the framework as lightweight as possible. In the next section I take a look at a number of technology stacks, what they do to make different types of reloading possible, what they get wrong, and what they get right.
A. Traditional Java web application
A traditional Java web application might consist, for example, of a Struts or JSF front end, Hibernate back end, all wired together using Spring.
The traditional Java web application has no problem reloading static resources without having to reload any other part of the application. Most web containers are able to reload an entire web application, including third party libraries, Java code, application wiring etc.
The problem is they are not much good at reloading any finer grained changes. Any changes you make to your application wiring or code will normally require a full application reload.
B. Scripted applications
Scripted applications are based on scripting languages such as Groovy (Grails) and JRuby (Rails). As well as explicitly providing support for reloading capabilities, these frameworks rely on the fact that all application functionality is in scripts rather than in compiled Java code, making fine grained reloading of parts of the application possible. The downside (if you think of it this way) is that you have to work with scripted code without any of the type safety checking of a statically typed language such as Java.
C. OSGi applications
OSGi applications offer a fairly comprehensive solution to the reloading problem. All artifacts within the application, from resources to application code to libraries are contained within modules which are treated in a more or less uniform manner by the OSGi container. This is a strength, but it is also a weakness. The strength is that it does allow third party libraries to be reloaded in a fine grained way. The weakness is that in your high level view of the application, OSGi doesn't really allow you to easily distinguish between the parts of your application which should be easy to reload - e.g. resources - and the parts which are harder to reload, but are changed much less frequently during the lifetime of the application (third party libraries).
What about Impala?
Impala tries to find the right balance in the strengths of the various approaches. Resource reloading works as with traditional Java applications - nothing special needs to take place. Impala includes mechanisms which make it easier to change configurations dynamically without requiring any module reloading. For changing static configuration and application wiring, Impala allows you to reload parts of your application at exactly the right granularity. If only a single module is affected, then only that module needs to be reloaded. If the change affects core interfaces, then the reload will automatically ripple through to the right dependent modules.
Impala even allows you to dynamically change the structure of modules within the application. Unlike OSGi, it doesn't support reloading of third party libraries. For this, an entire application reload is required. However, Impala's approach does keep your application modules in central focus, which is important as these are normally the parts of your application which change most frequently.
Sunday, November 1, 2009
Slides for Impala talk at the Server Side Europe in Prague
I'm pleased to say that my talk at The Server Side Java Symposium in Prague went well and was apparently well received. There definitely is a growing interest in the ideas of modularity and how the benefits of modularity can be achieved in practice.
I've posted a copy of my slides for the talk here in PDF format. The document contains the slides that I presented, as well as quite a few which were held in reserve but weren't actually presented during the talk.
Wednesday, October 14, 2009
How to make your Spring wirings more manageable
When Spring first came along it was a breath of fresh air - a clever way to wire up applications which did not rely on the use of all sorts of singletons all over the place. (I still remember what it was like working that way, and shudder at the thought.) The idea was simple: let classes in the application just do their own job, but leave the business of figuring out how to get collaborators to the IoC container, with the help of some XML configuration. No longer did your application code have to deal with the messy business of resolving its own dependencies.
OK, we've solved the problem with the code, but the job isn't completely done. Actually, the problem has shifted onto managing Spring wirings.
The problem with managing Spring wirings
Applications necessarily get big. You end up having to write a lot of XML. So while your code may stay nice and clean, you end up with some tricky questions about how to manage this part of your application. Of course, here you try to be as "modular" as you can, putting DAOs together, infrastructure related beans together, etc. You try to identify vertical slices for your application and put beans relating to particular vertical slices together.
The problem is that for all the bean definitions that exist in your application, some groups are inherently coupled, while others are inherently free of coupling. In a vanilla Spring application, there is no way to express these dependencies at a system level. So it is very easy for your application wiring to become an unnecessarily fragile collection of invisible dependencies, liable to break in unexpected ways when any rearrangement takes place.
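A small sketch of the kind of invisible dependency I mean (file and bean names are hypothetical): a bean reference that reaches silently into a definition in another file. Nothing in either file declares the coupling at a system level, so a rename or rearrangement only fails at runtime:

```xml
<!-- service-context.xml -->
<bean id="invoiceService" class="com.example.InvoiceServiceImpl">
    <!-- Invisible dependency: relies on a bean named "invoiceDao" being
         defined somewhere else, e.g. in dao-context.xml. Renaming or
         moving that definition breaks this wiring only at runtime. -->
    <property name="invoiceDao" ref="invoiceDao"/>
</bean>
```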
Autowiring, namespaces and class path scanners don't necessarily help
Then of course there is the drudgery of editing XML configuration files by hand. Personally, I think that is less of a problem, but Spring has gone to great lengths to free developers from some of this pain over the years, through the introduction of autowiring, class path scanners, and XML namespaces. I happily embrace all of the above as they reduce the amount of code I need to write, but they don't address the fundamental problem. They don't enhance one's ability to express dependencies between parts of your application at a system level, and where possible, to reduce these dependencies.
So how does Impala help?
Remember how easy Spring seemed when we were working with just small applications? Impala allows you to keep your applications small, or at least keep them feeling small. This is done through modules. You can think of a module as a mini-application which is able to communicate with other mini-applications in the system through well defined mechanisms and through sharing common interfaces.
The Spring configuration for each module remains pretty small. If it starts getting too big, then it's a good sign that some of its functionality needs to be split off into another module. So within the module, you only need to deal with small configurations - bite size chunks.
You can configure beans within a module however you like - through plain Spring XML, custom namespaces or through annotations. If your module needs to use functionality from other parts of the system (as most will), then you can import services directly from the shared service registry (as long as the service has been exported using an interface or class visible to the current module). If necessary, you can allow your module to depend directly on another module, either as a dependent or as a direct child.
If you need to compose your application in different ways according to environment or customer, that's easy too. Simply change which modules you deploy, or even vary the configuration within a module according to requirements.
You no longer need to wait ages for integration tests to load, because you can easily create integration tests which consist simply of the modules you need to use.
And you get the benefits of a much more productive, responsive development environment, because each of these modules can be reloaded on the fly, either individually, in groups, or as a whole - and this applies whether you are running integration tests or running your application on a web container.
Tuesday, October 6, 2009
Talk at the Server Side Europe in Prague
I am doing a talk on Impala at The Server Side Europe's conference in Prague, which is taking place on October the 27th and 28th. Really looking forward to it, especially as I used to be a regular visitor of Prague in the early 90s when I was living in Germany.
Saturday, October 3, 2009
Why web.xml makes it hard to write modular web applications
In a typical Java enterprise application, the web.xml is used to define servlets and filters, which are among the main entry points into your application from the outside world. Since web.xml cannot be reloaded, added to or modified without reloading the entire application, it is not a very convenient place to host application configuration and definitions in a dynamic module application.
Another related problem is the limitation of the request mapping capability of web containers as defined by the current servlet specification. Currently, these make it possible to map requests to servlets and filters using an exact match (e.g. /myservlet/myresource), a prefix wildcard match (e.g. /myservlet/*), or a suffix wildcard match (e.g. *.do). They don't allow you to use a combination of prefix and suffix wildcard matches. This means that you cannot, for example, use the path /myprefix/* to match application URLs, and at the same time allow your application's CSS files to be accessible at a resource such as /myprefix/styles.css.
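To illustrate the limitation, here is a sketch of the mapping styles web.xml does support (servlet names are illustrative), and the combination it doesn't:

```xml
<!-- Legal mappings under the current servlet specification: -->
<servlet-mapping>
    <servlet-name>myservlet</servlet-name>
    <url-pattern>/myservlet/*</url-pattern>   <!-- prefix wildcard -->
</servlet-mapping>
<servlet-mapping>
    <servlet-name>struts</servlet-name>
    <url-pattern>*.do</url-pattern>           <!-- suffix wildcard -->
</servlet-mapping>
<!-- Not supported: a combined prefix and suffix match such as /myprefix/*.css -->
```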
Multi-module web applications in Impala
One of the biggest changes in the recent 1.0 RC1 release of Impala is the ability to write web applications which are less reliant on web.xml, allowing dynamic registration of modules containing servlets and filters, while at the same time solving the path mapping limitation described in the previous paragraph.
In an Impala application, you cannot do away with the web.xml altogether. However, you can reduce the request handlers defined in web.xml to the following:
In the web.xml fragment below, the ModuleProxyFilter captures requests and routes them into Impala modules:
<filter>
<filter-name>web</filter-name>
<filter-class>org.impalaframework.web.spring.integration.ModuleProxyFilter</filter-class>
<init-param>
<param-name>modulePrefix</param-name>
<param-value>urlmapping-web</param-value>
</init-param>
<load-on-startup>2</load-on-startup>
</filter>
<filter-mapping>
<filter-name>web</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
The mapping rules which determine which modules service which requests are contained within the modules themselves. Here's an example from the URL mapping sample:
<web:mapping>
<web:to-module prefix = "/webview" setServletPath="true"/>
<web:to-handler extension = "htm" servletName="urlmapping-webview" filterNames = "characterEncodingFilter,sysoutLoggingFilter"/>
<web:to-handler extension = "css" servletName="urlmapping-resources"/>
</web:mapping>
<web:servlet id = "urlmapping-webview"
servletClass = "org.impalaframework.web.spring.servlet.InternalModuleServlet"/>
<web:servlet id = "urlmapping-resources"
servletClass = "org.springframework.js.resource.ResourceServlet"
initParameters = "cacheTimeout=10"/>
<web:filter id = "characterEncodingFilter"
filterClass = "org.springframework.web.filter.CharacterEncodingFilter"
initParameters = "forceEncoding=true,encoding=utf8">
</web:filter>
<web:filter id = "sysoutLoggingFilter"
filterClass = "org.impalaframework.urlmapping.webview.SysoutLoggingFilter">
</web:filter>
This module defines a number of servlets and filters whose life cycles are tied to that of the module, rather than that of web.xml. They can be dynamically registered and removed, and don't require an application restart. The modules can contain all the classes and resources necessary to service requests, without relying on the presence of resources such as JavaScript files on the context path (e.g. in the WEB-INF directory).
What about Servlet 3.0?
The changes described above are very much in line with the changes in the forthcoming Servlet 3.0 specification, which allow servlets and filters to be added via web.xml fragments and via annotations. It will also allow you to add Servlet and Filter instances programmatically. I expect that Impala will be able to take advantage of this mechanism when it becomes available, perhaps by wrapping the Servlet or Filter to ensure that it is associated with the originating module's class loader, and not the web application class loader. This will have the advantage of allowing Impala to make use of the web container's invocation infrastructure while still supporting dynamic servlet or filter registration.
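For a flavour of the web.xml fragment mechanism, here is a sketch of a Servlet 3.0 web-fragment.xml as it might be packaged in a module jar's META-INF directory (the servlet name, class and mapping are illustrative, not Impala's):

```xml
<!-- META-INF/web-fragment.xml: contributes a servlet to the web application
     without touching the main web.xml. -->
<web-fragment xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <servlet>
        <servlet-name>moduleServlet</servlet-name>
        <servlet-class>com.example.module.ModuleServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>moduleServlet</servlet-name>
        <url-pattern>/module/*</url-pattern>
    </servlet-mapping>
</web-fragment>
```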
Thursday, September 24, 2009
Impala 1.0 RC1 released
I am pleased to announce the release of Impala 1.0 RC1.
With this release, Impala is now feature complete for the 1.0 final release, with only minor enhancements and bug fixes planned before this happens.
Impala 1.0 RC1 contains a number of big feature improvements from the previous release:
- a more powerful and flexible mechanism for mapping web requests to and within modules, making it easier to build truly multi-module web applications.
- a new Spring web namespace for registering servlets, filters and other web artifacts in Impala web modules.
- enhancements to make the automatic module reloading mechanism more robust and suitable for applying in production environments.
- various other minor bug fixes and enhancements, particularly in the areas of build, dynamic services and class loading.
Enjoy!
Sunday, September 6, 2009
10 reasons for Spring users to try Impala
If you are a user of the Spring framework and you haven't tried Impala, I hope to convince you in this entry that you are really missing out.
These are the reasons why I make this claim.
When Spring was introduced in 2004-2005, it brought a big shift in the frontier of Java enterprise software development, offering solutions to many of the challenges faced by developers in a way in which earlier technologies, notably EJB, had conspicuously failed. Spring is still very much a valid technology in today's environment, but the frontiers have shifted.
One shift is in the increasing recognition of the shortcomings of the Java language itself, and the need for an eventual replacement as the premier JVM language. Another shift is the increasing awareness of the need for modularity in application development. My view is that without the backing of a truly modular framework, it is almost impossible to build a large enterprise application which does not end up becoming unwieldy, difficult to manage, and slow to build and deploy. This more than anything else leads to the perception that Java is unproductive to work with.
1. Impala gives your project a massive productivity boost
The reason for this stems from Impala's dynamic reloading capability. When you make changes to an Impala application, you only need to reload the affected modules and their dependents. Indeed, you can set Impala up so that these modules will be reloaded automatically. This allows code/deploy/test cycles to be reduced to an absolute minimum. In some of the next few points we give some further examples of how Impala improves your productivity.
2. With Impala you can make your applications truly modular
Modularity is about removing all unnecessary coupling between parts of your application, and is essential for building large applications which don't grow exponentially in complexity as they grow linearly in size.
With Impala and other dynamic modularity frameworks, modularity is achieved through a separation of interface and implementation which is just not possible in traditional Java applications. With Impala, modularity is enforced at the class loader level.
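As a minimal plain-Java illustration of this interface/implementation separation (all names here are hypothetical, nothing below is Impala API): client code depends only on an interface, so the implementation can live in, and be reloaded with, a separate module.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// The interface is all a client module needs to see; the implementation
// can live in a separate module, behind its own class loader.
interface EntryService {
    int countEntries();
}

// A stand-in implementation; in Impala this would sit in its own module
// and could be reloaded without touching clients of EntryService.
class InMemoryEntryService implements EntryService {
    private final List<String> entries =
            new ArrayList<String>(Arrays.asList("first", "second"));

    public int countEntries() {
        return entries.size();
    }
}

class ModularitySketch {
    public static void main(String[] args) {
        // In Impala the implementation would be obtained from the service
        // registry; here it is wired by hand for illustration.
        EntryService service = new InMemoryEntryService();
        System.out.println(service.countEntries()); // prints 2
    }
}
```

Because the client holds only an EntryService reference, swapping the implementation module in or out never forces the client module to be recompiled or redeployed.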
Impala makes it simple to deploy multiple flavours of the same application simply by choosing which modules to deploy. Impala gives you mechanisms for configuring individual modules, so that you can support different deployment options in a clean and simple way.
3. Impala works "out of the box"
I can't resist using this term because it features often in Rod Johnson's books. If you try any of the Impala examples, you will notice that all you typically need to do is to check out the code, then run it in your IDE. (If a database is involved, you may need a little extra setup there.) There is no need for any extra build steps, installation of third party environments, containers, etc. It's all self-contained.
4. With Impala, Spring integration testing is a doddle
Impala makes it really simple to write Spring integration tests. Gone is the need for convoluted test application context definitions such as
new ClassPathXmlApplicationContext(new String[] {
    "config-context.xml",
    "dao-context.xml",
    "service-context.xml",
    "some-context-which-you-need-for-your-test.xml",
    "some-context-which-you-dont-need-for-your-test.xml",
    "another-context-which-you-dont-need-for-your-test.xml",
});
With Impala, all you need to do in a typical integration test is to specify which modules you want to include (dependent modules get included automatically), and use Impala's API to access beans either from the root or one of the module application contexts. For example:
public class InProjectEntryDAOTest extends BaseDataTest {

    public static void main(String[] args) {
        InteractiveTestRunner.run(InProjectEntryDAOTest.class);
    }

    public void testDAO() {
        //get the entryDAO bean from the root module
        EntryDAO dao = Impala.getBean("entryDAO", EntryDAO.class);
        // ... test assertions
    }

    public RootModuleDefinition getModuleDefinition() {
        return new TestDefinitionSource("example-dao", "example-hibernate").getModuleDefinition();
    }
}
Tests work equally well whether you are running them interactively (without the need to reload the entire application or even any part of it between successive test runs), or whether you are running it as part of a suite (in which case the application modules are incrementally loaded as required). In both cases, running integration tests is very efficient, allowing you to be much more productive while still practicing TDD.
5. With Impala you can write truly multi-module web applications
A big barrier to truly modular web applications is the reliance on web.xml, because changes to web.xml require a full application reload. The forthcoming release of Impala allows you to define servlets and filters within the modules themselves, with their life cycle tied to that of the containing module. It also allows you to map requests with arbitrary URL path prefixes to individual modules. Once within a module, requests can be mapped to filters and servlets based on file extension.
These capabilities allow you to create a web application tier which is truly modular: you can package all of the filters, servlets, controllers, templates, images and JavaScript files required to service URLs with a particular prefix into the module jar itself. Indeed, you can even reduce your web.xml filter and servlet declarations to the following:
<filter>
    <filter-name>web</filter-name>
    <filter-class>org.impalaframework.web.spring.integration.ModuleProxyFilter</filter-class>
    <init-param>
        <param-name>modulePrefix</param-name>
        <param-value>urlmapping-web</param-value>
    </init-param>
    <load-on-startup>2</load-on-startup>
</filter>

<filter-mapping>
    <filter-name>web</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
See the web.xml from the URL mapping sample for the full example.
6. Impala does not reinvent the Spring programming model
Spring has been a huge success for a good reason - because it offers a much simpler programming model than was offered before. Spring offers a great solution for dependency injection as well as for making otherwise key technologies - transactions, AOP, JMX, remoting, etc - accessible or very simple to use.
Impala does not reinvent the Spring programming model - within modules you will recognise all the artifacts familiar to Spring applications: collaborators wired together via dependency injection, configured using XML configuration files, annotations, etc. The big difference is that Impala gives you a well defined way for expressing relationships between modules within the application, allowing for simpler implementations of - and clear boundaries between - the constituent parts of a bigger application.
7. With Impala you can forget about the build (most of the time)
Impala allows you to develop your applications in a vanilla Eclipse environment without having to invoke any build scripts or do any extra environment setup to run your application. This makes getting new developers started on projects extremely simple. Simply check out and go.
Of course, you will need to build for the production environment. Impala includes an Ant-based build system which allows you to build either a WAR file or a Jetty-based standalone web application, which you simply unzip in the target environment and run.
8. Impala is fundamentally simpler to use than OSGi-based alternatives
OSGi is a powerful technology, a very complete modularity solution. It is also quite a complex technology, requiring a non-trivial investment in time and energy to understand it, both conceptually and in its effect on the application environment.
From a practical point of view, applying OSGi to enterprise environments is far from trivial, and involves solving a number of challenging technical problems. For example, many common Java libraries do not ship out of the box in an OSGi-friendly way. Also, the use of the thread context class loader in many libraries poses a problem for OSGi-based applications.
Solutions to these problems do exist, but they typically involve creating a more complex, more restrictive, or less familiar environment than required for traditional Java enterprise apps. Nothing comes for free. The question is whether it is worth paying the price.
9. Impala poses no extra requirements for third party library management
In terms of the management of third party libraries, Impala is no different from traditional Java applications. Third party library jars are bundled in the lib directory of a WAR file. If you change a third party library, you will need to reload your application to apply these changes.
Where Impala differs from traditional applications is the way that you manage application modules. In a WAR file, these will be bundled in the modules directory under /WEB-INF.
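Concretely, a deployed WAR following this layout might look like the sketch below (the jar names are purely illustrative):

```
mywar.war
└── WEB-INF/
    ├── lib/          (third party jars, standard WAR treatment)
    │   └── spring-core.jar
    └── modules/      (Impala application modules)
        ├── example-dao.jar
        └── example-web.jar
```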
The relationship between modules can be hierarchical, or even in the form of a graph of dependencies. The important point is that modularity is applied to your application's code. That is, in my opinion, the part which needs it most, because that's the part of your application which changes frequently. If you are not sure whether you agree with this point, ask yourself these questions:
- How often would you actually want to run multiple versions of the same third party library in the same application?
- If you made changes to third party libraries in your application, how often would you want to apply these without restarting the application?
Some may regard the traditional approach to third party libraries as broken - but it is also very convenient when it works!
10. Impala is a practical solution for practical problems
Impala did not evolve from an ivory tower view of how Java enterprise applications are supposed to be written. Instead, it was born around two years ago out of the need to solve practical real world problems which were harming the productivity and maintainability of a large Java project. The most notable of these were the lack of any first class modularity in traditional Spring-based application development, and the slow build/deploy/test cycles arising from unwieldy integration tests.
Impala has been designed and refactored to be simple and intuitive to work with, once you've grasped the basic concepts. It allows you to get the benefits of a modular approach to application development without many of the costs, and without being swamped by technobabble.
What are you waiting for?
Working with Impala will give you the programmer's equivalent of a "spring in your step" - you will be amazed by how easy it is to get things done.
So give it a go. Try one of the samples, read the tutorial, or kick start your own application, and let me know how you get on.
Tuesday, September 1, 2009
Avoiding over eager reloading in Impala
The forthcoming version of Impala has new features which help to avoid "over-eager" reloading of modules, that is, module reloads which take place either unnecessarily or too frequently.
If you run an Impala application within Eclipse you can have a running application automatically reload modules to reflect changes in your workspace. You do this by adding the line
#Automatically detect changes and reload modules
auto.reload.modules=true
to the file impala-embedded.properties.
This is really nice, because it means you can redeploy your application without having to do anything, that is, no build step whatsoever.
The trouble is, redeployment can get a bit overeager. Some changes - for example, my Freemarker web templates - will automatically get picked up without the need for redeployment. Other changes you want Impala to ignore completely - for example, changes which occur within your Subversion (.svn) directories.
The next drop of Impala contains a mechanism to filter which files get checked for modifications, via the auto.reload.extension.includes and auto.reload.extension.excludes properties. An example is shown below:
#Includes only files ending with context.xml and class
auto.reload.extension.includes=context.xml,class
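The excludes property works the same way in reverse; assuming it takes the same comma-separated suffix list (the value below is an illustration, not from the Impala documentation), you could stop properties file changes from triggering reloads:

```properties
#Don't trigger module reloads for changes to properties files
auto.reload.extension.excludes=properties
```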
With the setting above, only classes and files ending in context.xml will be checked for modifications, reducing the number of spurious reloads. You can also get complete control over the timing of reloads by specifying a touch file.
use.touch.file=true
touch.file=/touch.txt
Using a touch file removes all spurious reloads, but does require an extra build step every time in order to update the timestamp of the touch file.
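Since Impala already ships with an Ant-based build, that extra step can be a one-line Ant target; the file location below is assumed to correspond to the /touch.txt setting above, relative to a hypothetical web root:

```xml
<!-- Illustrative Ant target: update the touch file's timestamp to trigger a reload -->
<target name="reload">
    <touch file="web/touch.txt"/>
</target>
```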
Friday, August 28, 2009
Spring Faces sample project
I recently put together a sample of Impala working with Spring Faces, that is, the combination of Spring Web Flow and JSF. The example is based on the Spring Faces flagship sample, a hotel booking application which uses JPA for persistence, and Spring Security for authentication/authorization.
See instructions on how to run the sample on the Impala wiki.
I have to confess that none of the technologies used in this sample are personal favourites - I'm much more inclined towards Spring MVC, Hibernate on its own, and a more straightforward templating technology like Freemarker for the view layer. That being said, it was an interesting exercise which threw up some interesting challenges, and it was also important to verify that Impala could work nicely with each of these technologies without a gargantuan amount of effort on the user's part.
One of these challenges was figuring out how to set up the JSF runtime to be loaded up by the module class loader, rather than the application class loader.
More significant for the evolution of Impala itself was figuring out how to create an arbitrary mapping from the request URI path prefix (the part of the URL after the host, port and application name) to an arbitrary module. In the Spring Faces sample, I needed to map all requests with a prefix of /spring and no extension (e.g. .htm) in the path to the web module containing the Web Flow definitions and Faces views. This particular requirement gave me the push I needed to implement the main features coming in the next drop of Impala: the ability to map from arbitrary URI path prefixes to modules, and, once within a module, to map individual requests to servlets or filters, themselves defined within the module, based on the URI extension.
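The two-stage mapping idea just described can be sketched in plain Java; the class and module names below are illustrative only, and nothing here is Impala's actual API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of two-stage request mapping: the URI prefix selects a
// module, then the extension selects a handler within that module.
class RequestMappingSketch {

    private final Map<String, String> prefixToModule = new LinkedHashMap<String, String>();

    public void mapPrefix(String prefix, String module) {
        prefixToModule.put(prefix, module);
    }

    // Returns the first registered module whose prefix matches the path, or null.
    public String moduleForPath(String path) {
        for (Map.Entry<String, String> entry : prefixToModule.entrySet()) {
            if (path.startsWith(entry.getKey())) {
                return entry.getValue();
            }
        }
        return null;
    }

    // Extracts the extension used for the second, within-module mapping stage
    // ("" if the path has no extension).
    public static String extensionOf(String path) {
        int dot = path.lastIndexOf('.');
        return dot < 0 ? "" : path.substring(dot + 1);
    }

    public static void main(String[] args) {
        RequestMappingSketch mapper = new RequestMappingSketch();
        mapper.mapPrefix("/spring", "webflow-module");
        System.out.println(mapper.moduleForPath("/spring/booking")); // prints webflow-module
        System.out.println(extensionOf("/spring/main.htm"));         // prints htm
    }
}
```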
The Impala Spring Faces stack is somewhat experimental, but it does offer the promise of truly modular, dynamically reloadable applications based on these technologies.
Outstanding issues
There are a few wrinkles outstanding, though. The Spring Web Flow long-running transaction is still not working: I have added a flush() call so that the booking does persist, but this happens at the time the booking is first entered rather than when it is confirmed. Also, it does seem possible, at least in some situations, to make simple changes to the flow definitions and reload these without preventing the completion of existing flows, but I'm sure there are some corner cases.
Sunday, May 17, 2009
Impala 1.0 M6 released
I am pleased to announce the 1.0M6 release of Impala. 1.0M6 is an important release, as it is the last release before 1.0 final to include major enhancements and API changes. I am now pretty comfortable that the APIs and abstractions are correct and suitable for the 1.0 final release, but as always I welcome any feedback from users on this.
The 1.0M6 release includes a major reworking of the shared service registry and proxying mechanisms, a new Spring namespace for importing and exporting services, and enhancements to the dynamic reloading of modules.
The headline improvements include the following.
- Configuration of Impala services is now much simpler, as a new Spring 'service' namespace has been provided for easily exporting and importing services.
- Service export and import can now be done not only by name but also by type or by custom attributes, the latter using a model similar to that used in OSGi.
- Impala's mechanism for proxying services obtained from the service registry has improved, and is now more easily configurable.
- It is now possible to export and import Impala services without having to specify an interface - proxying of the service implementation class is now also supported.
- Impala now supports exporting and importing services based on Spring beans which are not singletons, or which are created using non-singleton factory beans. It does this in a way that is totally transparent to users of the services, effectively allowing clients to treat all beans as singletons.
- Impala now provides implementations of java.util.List and java.util.Map, dynamically backed by beans imported from the service registry.
For more information on this release see
http://code.google.com/p/impala/wiki/Release1_0M6Announcement.
Saturday, May 16, 2009
Extending Spring MVC's annotation controller
In my latest project I am using Spring MVC's annotation-based controllers. I am definitely a fan of annotations for wiring up web applications, and suppose that, relatively speaking, I can claim to be an early adopter in this area, having created Strecks, an annotation-based framework for Struts.
Spring MVC annotations currently don't support flash scope, so I have added an extension to AnnotationMethodHandlerAdapter which supports it. Basically, you can use it as follows: in your controller method, simply set a model attribute with the prefix "flash:". The attribute will then be available in the next request using the @RequestAttribute annotation. An example below demonstrates this.
I must say I am enjoying using the new Spring MVC - it's a massive improvement over the original framework which I found pretty clunky, especially with regard to form handling.
Setup
Configuring the application is a doddle. All you need to do is register the annotation HandlerAdapter (which does the main request processing work) and HandlerMapping (which maps URLs to your controllers). You can do this using Spring config like:

<bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping"/>
<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"/>

and then you're ready to go. Controller definitions can be found automatically via class path scanning, or added explicitly into the Spring config files, which I prefer to do.
One really nice thing about the controllers is the simple way to map URLs to methods as well as to provide arguments to the methods, both using annotations. An example is shown below:
@RequestMapping("/warehouse/postProductsSubmit.htm")
public String postProductsSubmit(
        Map model,
        @ModelAttribute("command") PostProductsForm command,
        BindingResult result) {
    //do stuff
    //redirect when finished
    return "redirect:postProductsForm.htm";
}
So what's missing?
There were still a few bits I felt needed to be added to make the Spring MVC annotations truly usable for my application. Here's what they are.

Missing annotations for obvious argument types
The Spring MVC annotations recognise a whole bunch of argument types. Many of these will be automatically recognised from the Servlet API, including HttpServletRequest, HttpServletResponse, ServletRequest, ServletResponse, HttpSession, Principal, Locale, InputStream, Reader, OutputStream and Writer. Others will be recognised from Spring MVC annotations, such as @ModelAttribute and @RequestParam (which binds a request parameter).

What would be nice would be some built-in annotation types with which you could extract other kinds of information from the Servlet API environment in a non-intrusive way. Here I am thinking of the following:
- @SessionAttribute: extract and bind a named session attribute.
- @RequestAttribute: do the same for a named request attribute.
- @RequestHeader: extract a request header.
- Plus various others
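As a sketch of what such annotations might look like: the annotation types themselves are easy to declare, and a resolver can pull the named value out of the relevant scope by reflection. The example below is purely illustrative - @SessionAttribute and the resolver are hypothetical, not part of Spring MVC, and a plain Map stands in for the HttpSession so the sketch is self-contained (a real implementation would plug into Spring's WebArgumentResolver extension point).

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class SessionAttributeSketch {

    // Hypothetical annotation: binds a controller method parameter
    // to a named session attribute
    @Target(ElementType.PARAMETER)
    @Retention(RetentionPolicy.RUNTIME)
    public @interface SessionAttribute {
        String value();
    }

    // A Map stands in for the HttpSession here; a real resolver would
    // implement Spring's WebArgumentResolver and read from the request
    static Object[] resolveArguments(Method method, Map<String, Object> session) {
        Annotation[][] paramAnnotations = method.getParameterAnnotations();
        Object[] args = new Object[paramAnnotations.length];
        for (int i = 0; i < paramAnnotations.length; i++) {
            for (Annotation annotation : paramAnnotations[i]) {
                if (annotation instanceof SessionAttribute) {
                    args[i] = session.get(((SessionAttribute) annotation).value());
                }
            }
        }
        return args;
    }

    // Controller-style method using the hypothetical annotation
    public String show(@SessionAttribute("user") String user) {
        return "Hello " + user;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Object> session = new HashMap<String, Object>();
        session.put("user", "phil");
        Method show = SessionAttributeSketch.class.getMethod("show", String.class);
        System.out.println(show.invoke(new SessionAttributeSketch(), resolveArguments(show, session)));
    }
}
```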
Flash Scope
Flash scope, popularised initially by Rails, is a mechanism for transferring state from one request to the next without having to pass it via URLs. It is implemented through a session-scoped attribute which is removed as soon as the value is consumed in the subsequent request. It works particularly well with redirecting after a POST.

Flash scope is especially convenient for certain use cases because it offers the convenience of session-based attributes without the overhead of having state hanging around in the session over a long period.
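The consume-on-read behaviour can be sketched in a few lines of plain Java. The class and method names below are illustrative only, and a Map stands in for the HttpSession so the sketch runs on its own:

```java
import java.util.HashMap;
import java.util.Map;

public class FlashScopeSketch {

    // In a real implementation the store would be the HttpSession;
    // a Map stands in for it here
    private final Map<String, Object> session = new HashMap<String, Object>();

    // Called during request one: stash the value for the next request
    public void put(String name, Object value) {
        session.put("flash:" + name, value);
    }

    // Called during request two: return the value and remove it
    // immediately, so no state lingers in the session
    public Object consume(String name) {
        return session.remove("flash:" + name);
    }

    public static void main(String[] args) {
        FlashScopeSketch flash = new FlashScopeSketch();
        flash.put("mydata", "saved form data");

        System.out.println(flash.consume("mydata")); // first read: the value
        System.out.println(flash.consume("mydata")); // second read: null, already consumed
    }
}
```

The second read returning null is the point: unlike an ordinary session attribute, the value survives exactly one redirect.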
Spring MVC annotations currently don't support flash scope, so I added an extension to AnnotationMethodHandlerAdapter which supports it. Basically you can use it as follows: in your controller method, simply set a model attribute with the prefix "flash:". The attribute will be available in the next request using the @RequestAttribute annotation. The example below demonstrates this.
@RequestMapping("/submit.htm")
public String submit(Map model) {
    //do stuff
    model.put("flash:mydata", mydataObject);
    //redirect when finished
    return "redirect:show.htm";
}

@RequestMapping("/show.htm")
public void show(@RequestAttribute("mydata") MyDataClass mydata) {
    //if you redirected using flash, mydata
    //will contain the mydataObject instance
    //from the last call
}
No subclass hooks for manipulating model
When I first started with the annotation-based controllers I found it a little frustrating that there were no subclass hooks in the provided AnnotationMethodHandlerAdapter for manipulating the model. The only places you can do this are in the mapped request methods and in special @ModelAttribute methods, which also live in your controllers. An example is below:

@ModelAttribute("command")
public PostProductsForm getPostProductsForm() {
    return new PostProductsForm();
}
I'm not sure if this is still a problem, because I have found quite acceptable workarounds for the problems I was trying to solve, without having to resort to such a technique. Nevertheless, it does strike me as a sensible thing to be able to do, provided it is done in a well-defined and controlled way.
Concluding Remarks
Spring MVC annotations have added a great deal of convenience to Spring MVC without sacrificing any of the flexibility which has always been its true strength. It's not perfect, but with a few fairly minor extra features it is easy to use and to work very productively with. And of course - here comes the obligatory shameless plug! - it works even better when you use it with Impala.

Monday, May 4, 2009
Why developers don't just jump at OSGi
On paper, the choice to use OSGi should be an easy one. After all, OSGi offers an escape from the "jar classpath hell" that Java developers have been living with for years. The ability to build systems from modules that can be composed and loaded dynamically promises solutions to a range of important problems that enterprise developers have been grappling with, unsuccessfully, for years.
Yet the take-up of OSGi has been quite slow. I was curious enough the other day to take a look on Jobserve at job postings requiring OSGi, and I found only a handful of positions available. While there seems to be inexorable movement towards OSGi, driven in particular by application server vendors (who, remember, also drove the adoption of the now infamous EJB 1 and 2) and a few evangelists, we are yet to see a groundswell of enthusiasm from the mainstream developer community in the same way as, for example, with technologies like Grails and, before it, Spring.
This is a shame, because I believe the ideas that underpin OSGi are fundamentally important to writing flexible systems which can remain manageable as they grow in size and complexity.
I'd like to comment on some of the reasons why OSGi has still not taken off in a way which cements its role as the foundation for Enterprise Java applications, and also to explain why I haven't used OSGi as the basis of Impala. My intention is not to spread FUD, but to identify some of the perceptions (and potential misconceptions) held about OSGi, and to give my interpretation of the extent to which they are justified.
Some people just don't "get it"
Not everybody thinks that the idea of partitioning an application into modules is a good one. Some developers are happier just to lump all classes together under a single source directory, and don't see how an application can benefit from modules. Maybe they haven't worked on projects that really require this kind of partitioning, or have suffered from a botched attempt to modularise an application. Clearly, these developers are not going to be early adopters of OSGi or, for that matter, a technology like Impala.
OSGi won't really help the productivity of my development
Clearly, there is more work involved in setting up an OSGi application than a regular Java application. You need to ensure that all your jars, both for your application and for the third party libraries, are OSGi compliant. For your application's jars, you'll be responsible for the bundle manifests yourself, making sure that their content fits in with the structure and organisation of your application. You'll definitely want some tool to make this job easier. Also, you'll have to source third party libraries which are OSGi compliant, or, in the worst case, add the necessary OSGi metadata yourself.
The productivity advantages of dynamically updatable modules will probably kick in at some point, but not until you have a smooth running development environment set up. You can accelerate this process with the help of an OSGi-based framework such as SpringSource's dm Server, or ModuleFusion.
While OSGi will undoubtedly help you write better and more flexible applications, you don't get many wild claims that OSGi will allow you to build your applications dramatically faster. Developers who come to OSGi with those kinds of expectations will probably be disappointed.
OSGi requires a complex environment
Enterprise Java has a reputation for being complex. Not only do you need to know the Java language, you need to know Spring, Hibernate, web technologies, relational databases, etc. etc.. You need to know all sorts of related technologies, test frameworks, ANT or Maven, and more. And this is just to write traditional Java applications.
To write OSGi-based Enterprise applications, there is much more to know. You'll need a good conceptual understanding of how OSGi works - both in the way that it manages class loaders and the way services are exported and consumed. Not Java 101 stuff. You'll also need a practical understanding of the idiosyncrasies of your own OSGi environment. There will be differences in the way you build and deploy applications, and the way you manage the tools and runtime, depending on which OSGi containers and helper frameworks you use. You won't need to be a rocket scientist to figure this all out, but you will need some time, patience and experience. The wave of books coming out on OSGi will definitely help, but don't expect the junior members of your team to be able to jump straight into an OSGi project and hit the ground running.
How do I know it will all work?
Some people might be put off OSGi because of lingering thoughts that they will run into difficulties getting their applications to work in an OSGi environment, especially those with large existing code bases.
Some of the most commonly used frameworks out there are not very OSGi-friendly, typically either because they are designed and packaged in a not very modular way, or because they use class loaders in a way which does not align with the OSGi specification, for example, by using the thread context class loader to load classes. Naive use of these libraries in an OSGi environment will lead to unexpected problems.
You'll need to find a way to work around these issues. The hard way will be to try to do it yourself. The easy way will be to rely on a packaged OSGi solution, again such as dm Server or ModuleFusion. But remember, even here, there are trade-offs. In the case of the dm Server, you'll be very closely tied in to SpringSource as a vendor, and with ModuleFusion, you may need to accept a technology stack which does not include your favourite frameworks.
OSGi applications are difficult to test
This, in my opinion, is a real Achilles heel of OSGi. Because OSGi applications need to run in an OSGi container with class loading managed in a very specific way, you cannot run low-level integration tests without the tests themselves running in a container. This makes testing OSGi applications particularly challenging.
The only serious attempt I am aware of to address this problem is the Spring Dynamic Modules test framework, which dynamically creates a test bundle using your application code, launches an OSGi container, deploys the test bundle to the OSGi container (as well as the bundles you need to test plus some infrastructure bundles), and runs your test code. It's not especially pretty, but there's no substitute for real integration tests as opposed to unit tests or tests using mock objects.
For me, ease of testing is of fundamental importance in choosing technologies - it certainly is a large part of the reason for the emergence of Spring. I certainly have no appetite for a return to the days of EJB 1 and 2 when applications could only be tested on a container.
Some concluding remarks
Let me make my position clear. I am not an OSGi evangelist. I prefer to think of myself as OSGi-neutral. I have deliberately chosen not to base Impala on OSGi, but I have designed it in a way which accommodates OSGi - indeed I even have a working example of Impala running on OSGi. As OSGi gains traction - and if users demand it - Impala will provide much better support for OSGi and even offer a simple migration route to OSGi which users can choose to adopt on a per project basis.
Thursday, March 26, 2009
Where are all the Groovy web frameworks?
A little while ago when I was exploring web frameworks - as I tend to do once every few months - I spent a bit of time looking into web frameworks built in Groovy.
Groovy has all the raw materials needed for a powerful web framework implementation. Its dynamic nature and metaclass facility make it perfect for layering syntactic sugar on top of common tasks. It's very easy to dynamically update Groovy-based functionality - very little special framework support is required - which makes it potentially very productive. And it already has very powerful facilities for templating, string manipulation, etc. - all capabilities required for web application development.
With all of these advantages Groovy should be a very fertile breeding ground for web frameworks. After all, for all the dozens of Java web frameworks, you could at least expect a few out there harnessing the power and convenience of Groovy.
Sadly, this does not seem to be the case. You can use Groovy to embellish plenty of existing Java web frameworks, but precious few web frameworks appear to have been built from the ground up with Groovy's power features in mind.
I'd happily be proved wrong on this point, but from my searches Grails seems to be the only real show in town.
I like Grails' web framework. It is an excellent showcase for the power of Groovy. What I don't want is the full stack Grails experience. If I were to use Grails as a web framework, I would want to embed it in an Impala-based application with a modular back-end. It was originally in the Grails 1.1 roadmap to separate the web part of Grails. However, during a recent meetup, I got it from the horse's mouth that this wasn't going to be happening any time very soon.
According to Graeme Rocher, he would consider splitting the web framework "if his users demanded it". I think he is missing the point that by not providing a separately embeddable flavour of Grails he is isolating himself from a significant untapped user base - those looking for a capable web framework but not the full Grails stack. Maybe because Grails has become so popular it is a luxury he can afford!
So where does this leave Groovy? I don't think that it is healthy for a language to be too dependent on a single framework for its popularity. The language needs - and deserves - more variety, a choice to suit a wider range of appetites.
Friday, March 20, 2009
A natural successor to Java?
Before I started learning Java in the late 1990s, I remember picking up a book on Java whose back cover described it as the "natural successor to C++". Java has now reached that stage in its evolutionary cycle. It's not that Java isn't a productive and capable language. Because of the excellent tool set that you get with Java, it is still more productive (particularly for me) than anything else out there, including the likes of Groovy and Ruby. But just imagine how much more productive you could be with a language which had Java's tooling support, but was free from the annoyances and deficiencies of Java, which are now generally quite well understood but very difficult to fix without breaking backward compatibility.
So what are we looking for in a language that will succeed Java? Firstly, it has to be based on the JVM. While Java as a language is running out of steam in terms of new features added, the JVM has still got real momentum, if only to be judged by the plethora of languages that run on the JVM, both from the ranks of languages which have independent lives outside of the JVM (e.g. JRuby), through to languages which are designed to work on the JVM.
Second, it still needs static typing. That's a strong personal preference - I don't see myself ditching statically typed languages altogether. Dynamic languages like Groovy are great for certain tasks - web app development, integration, etc, but for the guts of a substantial application or application framework, I really do think static typing is a must have.
Third, it needs to have substantial value-add features. The one area most missing in Java is support for a more functional style of programming. As a developer who never studied computing at university, I have a background in commercially popular languages - the trend towards more functional programming styles has been around in academia for ages but is more recently moving into the enterprise programming world. While a more functional style of programming does not come that naturally to me, its absence in Java is quite often a barrier to effective code reuse, and there is no doubt that adopting a more functional style of programming will help me to become a better programmer.
Scala seems to tick the boxes nicely in these three areas, and the fact that it is still fundamentally OO will make it more accessible to existing Java developers.
I'm still absorbing Scala, so it's too early to say what I don't like about it. One concern is whether it is too feature-rich, too complicated. Too many features may make it more powerful, but may also raise barriers to adoption enough to prevent it from becoming mainstream.
Another potential contender to watch out for is Fan. I saw Stephen Colebourne's talk earlier this week, and was impressed. It strikes me as a serious attempt to fix the problematic aspects of Java without introducing any unnecessary new complexity. It has a very simple type system, and will be much more accessible than Scala, and allows easy switching from the base static typing to duck typing.
On the down side, it has a few features which may prove to be controversial, such as the inability to share mutable state between threads, and only partial support for generics. Another thing I'm not sure about is that it abstracts over .NET as well as Java. This may be of interest to those who have to work in both of these environments, but I don't see much value in it for those working primarily in Java.
While these questions rumble on in the background, I intend to add support in Impala for modules implemented in different languages. It should be very simple to do this - after all, it will be mostly just a question of adding build support.
The multi-module structure of an Impala application makes development in multiple languages a good choice, with Java for core interfaces and domain classes, a language like Scala for the more complex implementation elements, and more dynamic languages suitable for modules closer to the fringe of the application.
Monday, March 9, 2009
New Impala extensions project
I've created a new Impala Extensions project, also on Google Code. The idea is to host modules, typically extensions to Spring or Impala or both, which are generically useful, but not substantial enough to warrant their own project and too far from the core of Impala to warrant inclusion in Impala itself.
So far the extensions project contains a set of extensions to Spring's annotation-based MVC framework, and also a general-purpose event management framework, particularly useful for managing persistent and asynchronous events (as a lightweight alternative to JMS). I expect the list to grow into a number of other areas over time.
The extensions project also contains an example application which can be used as a testbed for individual modules.
Thursday, February 12, 2009
Roadmap Update
I took a look at the Impala roadmap and realised that it had got a little out of date. I've corrected this - http://code.google.com/p/impala/wiki/Roadmap.
There are some pretty cool features on their way, and plenty of work to do. The good news is that a final 1.0 is not too far away.
Monday, February 9, 2009
Impala 1.0M5 released
I am pleased to announce the release of Impala 1.0M5, which introduces many API and configuration improvements into the framework.
Following 1.0M5, only minor changes in internal APIs are now expected prior to the 1.0 final release. The 1.0M5 release also features improvements which make it much easier to configure Impala-based applications, and to add your own extensions to the framework. While Impala is still very heavily based on the Spring framework, 1.0M5 now also makes it possible to plug in other runtime frameworks into Impala's dynamic module loading mechanism.
For more information on this release see: http://code.google.com/p/impala/wiki/Release1_0M5Announcement