OpenSpaces integration with the Service Grid allows you to deploy web applications (packaged as a WAR file) onto the Service Grid. The integration is built on top of the Service Grid Processing Unit Container.
The integration allows you to make use of the following Service Grid features:
The web application itself is a pure JEE-based web application. Even the most generic web application can automatically make use of the Service Grid features. The web application can easily define a Space (either embedded or remote), with or without Spring.
The web container used behind the scenes is Jetty (with support for other containers planned for the near future). This page lists the common usage and configuration of web containers. Jetty-specific configuration and usage can be found here.
The integration can deploy either a packaged WAR file or an exploded WAR file. A packaged WAR file can be deployed using any of the deployment mechanisms (UI/CLI/Programmatic; see more here). When a WAR file is deployed, it goes through the following steps until it reaches the GSC:
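For example, a packaged WAR could be deployed from the command line. The script name, WAR file name, and exact CLI syntax below are illustrative and depend on your GigaSpaces version and installation:

```
# Deploy a packaged WAR onto the Service Grid (illustrative invocation)
./gs.sh deploy mywebapp.war
```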
Note that the deploy client, the GSMs, and the GSCs can run on different machines.
Deploying an exploded WAR is similar to deploying a packaged WAR. Here are the steps:
The directory where the web applications are extracted (up to the work directory) on the GSC side can be controlled using the com.gs.work system property.
The deploy directory location (up to the deploy directory) used on the GSM side can be controlled using the com.gs.deploy system property.
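One way to set these properties is through the JVM options picked up by the GSC and GSM startup scripts. The paths below are illustrative, and the exact environment variable name (EXT_JAVA_OPTIONS is used here) may differ between GigaSpaces versions:

```
# Custom work directory for the GSC (illustrative path)
export EXT_JAVA_OPTIONS="-Dcom.gs.work=/opt/gigaspaces/work"
./gsc.sh

# Custom deploy directory for the GSM (illustrative path)
export EXT_JAVA_OPTIONS="-Dcom.gs.deploy=/opt/gigaspaces/deploy"
./gsm.sh
```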
A Web Application deployed into the Service Grid is, at the end of the day, just another type of processing unit. This means that it inherits all the options that a processing unit has, among which is the ability to define an optional META-INF/spring/pu.xml configuration file, just like any other processing unit. Note however that class definitions and libraries on which the web application depends are placed in their standard JEE web application locations (i.e. WEB-INF/classes and WEB-INF/lib, respectively).
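Putting this together, a deployable web application might be laid out as follows (a sketch; all file names other than web.xml and pu.xml are illustrative):

```
mywebapp.war
├── index.jsp
├── WEB-INF/
│   ├── web.xml
│   ├── classes/        (application classes)
│   └── lib/            (application JAR files)
└── META-INF/
    └── spring/
        └── pu.xml      (optional processing unit configuration)
```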
Here is the structure of the class loaders when several web applications are deployed on the Service Grid:
The following table shows which user controlled locations end up in which class loader, and the important JAR files that exist within each one:
The idea behind the class loaders is to create a completely self-sufficient web application. All relevant JAR files and classes should exist within the web application (as if it were running standalone), so that deploying it into the Service Grid is a seamless experience.
A special case is gs-runtime.jar, which is automatically removed from WEB-INF/lib if it exists there, since it is already defined in the common class loader.
In terms of the class loader delegation model, the web application class loader uses parent-last delegation. This means that the web application first tries to load classes from its own class loader, and only if they are not found does it delegate up to the parent class loader. This is the recommended way to work with this class loader model.
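The parent-last order can be illustrated with a minimal sketch in plain Java. This is not the actual GigaSpaces implementation, and the class name is hypothetical; it only demonstrates the delegation order described above:

```java
// A minimal sketch of parent-last (child-first) class loading.
// The class name ParentLastClassLoader is illustrative, not a GigaSpaces API.
public class ParentLastClassLoader extends ClassLoader {

    public ParentLastClassLoader(ClassLoader parent) {
        super(parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    // First, try to load the class from this loader's own classpath
                    // (in a web application: WEB-INF/classes and WEB-INF/lib).
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    // Only when the class is not found locally, delegate to the parent.
                    c = super.loadClass(name, false);
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```

In the sketch above, findClass is not overridden, so every request falls through to the parent; a real child-first loader would override findClass to read from its own class path first.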
When deploying a web application onto the GigaSpaces Service Grid, the web.xml of the web application is automatically modified to include a BootstrapWebApplicationContextListener. The Bootstrap Context Listener provides the following services:
There are several ways that a Space (and other components) can be used, and configured within a web application. Some common scenarios are listed below.
A typical usage pattern is connecting remotely to a Space. Here is an example (either using Spring within the web application Spring context file, or using pure Java code):
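As a sketch, the Spring variant can use the OpenSpaces core namespace, and the pure Java variant the corresponding configurer classes. The Space name mySpace is an assumption for illustration:

```xml
<!-- Spring variant: a remote Space proxy looked up over Jini -->
<os-core:space id="space" url="jini://*/*/mySpace" />
<os-core:giga-space id="gigaSpace" space="space" />
```

```java
// Pure Java variant (requires the OpenSpaces jars on the classpath);
// the Space name "mySpace" is illustrative
UrlSpaceConfigurer configurer = new UrlSpaceConfigurer("jini://*/*/mySpace");
GigaSpace gigaSpace = new GigaSpaceConfigurer(configurer.space()).gigaSpace();
```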
A web application is still just a processing unit. This means that a META-INF/spring/pu.xml can be added, which can be used to define a Space and GigaSpace. Accessing the beans is simple, as they are automatically added to the web application context and can be accessed from there; each bean is stored under its bean id.
Here is an example that starts an embedded Space as part of the web application within the pu.xml. The following is the content of the pu.xml:
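A minimal pu.xml along these lines might look as follows. The Space name mySpace and the bean ids are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:os-core="http://www.openspaces.org/schema/core"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.openspaces.org/schema/core
           http://www.openspaces.org/schema/core/openspaces-core.xsd">

    <!-- Starts an embedded Space (the name mySpace is illustrative) -->
    <os-core:space id="space" url="/./mySpace" />

    <!-- Wraps the Space with a GigaSpace bean for simplified access -->
    <os-core:giga-space id="gigaSpace" space="space" />
</beans>
```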
Here is an example of a simple JSP that uses it:
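A JSP along these lines could fetch the GigaSpace bean from the servlet context by its bean id. The bean id "gigaSpace" matches a pu.xml that defines a bean with that id; the page itself is a sketch:

```jsp
<%@ page import="org.openspaces.core.GigaSpace" %>
<%
    // Beans defined in pu.xml are exposed as servlet context attributes,
    // keyed by their bean id ("gigaSpace" is assumed here)
    GigaSpace gigaSpace = (GigaSpace) application.getAttribute("gigaSpace");
%>
<html>
<body>
Number of objects in the Space: <%= gigaSpace.count(null) %>
</body>
</html>
```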
The previous section described several ways to start an embedded Space within the web application. The recommended way to work with an embedded Space is through its clustered proxy (the clustered flag on GigaSpace set to true) for interactions that originate from a web request. This is mainly because the load balancer is not aware of routing specific classes to each cluster member.
Note that event-driven operations should usually still work with the non-clustered embedded Space. For example, a web request results in writing an Order (using the clustered proxy) to the Space, and a polling container picks it up and processes it asynchronously. The polling container should work with the non-clustered, collocated proxy of the cluster member Space.
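Both proxies can be defined side by side in the pu.xml. The sketch below assumes an embedded Space named mySpace and illustrative bean ids:

```xml
<os-core:space id="space" url="/./mySpace" />

<!-- Clustered proxy: use from web request handling code -->
<os-core:giga-space id="clusteredGigaSpace" space="space" clustered="true" />

<!-- Non-clustered (collocated) proxy: use from polling containers -->
<os-core:giga-space id="gigaSpace" space="space" />
```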
When deploying a highly available web site, a load balancer is usually used to balance requests between at least two instances of a web container that runs the web application. When using GigaSpaces to deploy a web application, running more than one instance becomes simple, and the web application also gains the manageability and virtualized nature of the Service Grid.
In order to present a single point of entry to connecting clients, a load balancer is usually used. While there are many different types of load balancers (both hardware and software), the load balancing problem itself is not new (i.e. it is not introduced by deploying the web application on GigaSpaces). Examples of how to configure load balancers can be found in the specific web container sections.
GigaSpaces also comes with a built-in integration with the Apache httpd load balancer, as described in the Apache Load Balancer Agent section.