Version 7.0 includes a number of major themes: significantly better administration and monitoring capabilities, optimizations for multi-core environments, a highly optimized and flexible local cache, a reduced memory footprint, an improved deployment model, comprehensive security and support for deployment on the cloud.
Version 7.0.4 is a service pack release on top of the 7.0.x branch and as such is backwards compatible with previous 7.0.x releases. It mainly includes bug fixes on top of 7.0.3 (as can be seen in its release notes).
Version 7.0.3 is a service pack release on top of the 7.0.x branch and as such is backwards compatible with previous 7.0.x releases. It mainly includes bug fixes on top of 7.0.2 (as can be seen in its release notes).
Version 7.0.2 is a service pack release on top of the 7.0.x branch and as such is backwards compatible with both 7.0.0 and 7.0.1. It mainly includes bug fixes on top of 7.0.1 (as can be seen in its release notes). In terms of new functionality and improvements, it includes:
A number of optimizations to the SQL join mechanism (relevant to the GigaSpaces JDBC driver) which significantly improve the performance of join operations
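The release notes don't detail the specific optimizations, but a classic join optimization of this kind is replacing a nested-loop scan with a hash join; here is a generic, self-contained sketch (not GigaSpaces' actual JDBC code):

```java
import java.util.*;

class HashJoin {
    // Joins orders to customers on customerId: build a hash table on the
    // smaller side, then probe it once per row of the larger side -
    // O(n + m) instead of the O(n * m) nested-loop scan.
    static List<String> join(Map<Integer, String> customers, List<int[]> orders) {
        List<String> out = new ArrayList<>();
        for (int[] order : orders) {               // order = {orderId, customerId}
            String name = customers.get(order[1]); // single hash probe per row
            if (name != null) {
                out.add(order[0] + ":" + name);
            }
        }
        return out;
    }
}
```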
Summary: Version 7.0.1 is a service pack release on top of version 7.0.0 and as such is backwards compatible with it. In addition to a number of bug fixes on top of 7.0.0, it includes some very interesting features and improvements in 3 main areas: enterprise grade security, improved usability and APIs, and better troubleshooting and monitoring capabilities. It also includes performance optimizations for embedded space operations.
7.0.1 introduces a revamped security implementation for the best possible data grid security. This implementation was specifically designed to support enterprise and cloud data grid scenarios in which the data grid is accessed by multiple applications and serves as a central data repository. For the cloud in particular, it brings a new level of data security which enables you to safely store data in cloud environments without having to worry about the security hazards of storing your data off premises. In addition to transport level security for all data grid related communication (based on SSL), it includes support for users and roles, with a comprehensive permissions system that enforces authorization for every operation, from managing the GigaSpaces infrastructure, processing units and space (data grid) instances, down to content based authorization for accessing and operating on the data grid contents. The new security implementation is fully supported by the various management interfaces (GUI, CLI and administration API) and also provides open APIs for integration with 3rd party user registries. For more details please refer to this page. We will also dedicate a separate blog post to this important aspect of XAP 7.0.1.
API and Usability
All new space based remoting implementation for XAP.NET - Space based remoting has been around since version 6.0 of XAP for Java and extends the Spring remoting stack to provide a wealth of benefits in comparison to traditional remoting implementations, such as: high availability of exposed services, transparent client side failover, location transparency, load balancing across services in the cluster, map/reduce support and asynchronous invocation (please refer to "The Service Virtualization Framework" white paper in our whitepapers section if you'd like to learn more about the benefits of space based remoting). As of version 7.0.1, it is also available for our XAP.NET users with all the goodies that are part of the Java implementation. The documentation of this feature can be found here.
XAP.NET built-in processing unit container - In order to further ease and simplify the processing unit development and deployment experience in XAP.NET, we've implemented the Basic Processing Unit Container, which is a smart built-in implementation of the processing unit container interface. This container automatically starts and manages GigaSpaces related components for you (such as space instances, event containers and remote service endpoints), relieving you of the need to explicitly manage the life cycle of those components and allowing you to focus on your application's business logic.
Easier cache eviction policy configuration - This is a nice little configuration improvement that makes our users' lives easier when configuring the cache eviction policy. In previous releases, the recommended way to control the eviction policy was by using space properties. In 7.0.1, this has been made much more elegant with native XML namespace and Java code configuration support. Here's an example of how you would configure LRU eviction in code in 7.0.0 and 7.0.1:
// 7.0.0: LRU eviction configured via space properties
UrlSpaceConfigurer spaceConfigurer = new UrlSpaceConfigurer("/./space")
        .addProperty("space-config.engine.cache_policy", "0"); // 0 = LRU
IJSpace space = spaceConfigurer.space();

// 7.0.1: the same configuration with the native cache policy API
UrlSpaceConfigurer spaceConfigurer = new UrlSpaceConfigurer("/./space")
        .cachePolicy(new LruCachePolicy());
IJSpace space = spaceConfigurer.space();
Extended Indexing at the property level - Extended indexing allows you to index properties of objects written to the space with a BTree index, thus allowing for range queries (based on the property type's natural ordering). Prior to 7.0.1, extended indexing was only available at the class level, which meant you had to either use the extended index for all the properties of a certain class or for none at all (and make do with basic indexing). In 7.0.1, we've enabled extended indexing at the property level, so you can now choose the right indexing scheme for each property. For example, the identifier property of a certain class would typically be indexed with a basic index, which has no sense of order between indexed values and is therefore more lightweight and faster than an extended index. For a date/time or numeric property, on the other hand, you would use extended indexing if you would like to perform range queries (e.g. all the objects with a date property before 1/1/2009). This can be configured via annotations at the property level.
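The trade-off between the two index types can be illustrated with plain Java collections - an analogy, not GigaSpaces internals: a basic index behaves like a hash map (equality lookups only), while an extended index behaves like an ordered tree map, which is exactly what makes range queries possible:

```java
import java.util.*;

class IndexDemo {
    // A basic index behaves like a hash map: fast, but equality lookups only.
    static final Map<Long, String> basicIndex = new HashMap<>();
    // An extended (BTree-like) index keeps keys ordered, enabling range queries.
    static final NavigableMap<Long, String> extendedIndex = new TreeMap<>();

    // All entries whose key is strictly below the given bound,
    // analogous to "all objects with a date property before 1/1/2009".
    static Collection<String> before(long bound) {
        return extendedIndex.headMap(bound, false).values();
    }
}
```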
As with every release, we have dedicated quite a lot of time to further optimize areas which we knew could be improved. In this release we have focused on the performance of embedded space operations; by optimizing the concurrency of internal thread and object pools we have achieved improvements of up to 200% for embedded space operations. The following graphs show across-the-board improvements in comparison to 7.0.0 (these were measured with 8 threads concurrently accessing the space). It is important to mention that these improvements were achieved without sacrificing the consistency and correctness of space operations:
Better troubleshooting and monitoring
More information in log files - We are continuing to improve the quality and amount of information exposed via our log files. 7.0.0 introduced some major logging improvements (per-processing-unit log messages, an improved file naming scheme, time-based rollover policies and more). In 7.0.1 we've added some more helpful information to help you tune and troubleshoot the system. The first addition is GC awareness: you can define an upper threshold for GC pauses (10 seconds by default), and if a garbage collection cycle takes longer, the system will log it for future reference. In addition, we've added logging for the recovery process (which occurs when a backup space instance starts and recovers the data from the primary), so you can now tell exactly what happened during the process and how long it took.
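One common way to detect long pauses - shown here as an illustrative sketch, not GigaSpaces' actual implementation - is a watchdog thread that sleeps for a fixed interval and treats any excess drift beyond a threshold as a suspected GC (or OS) pause:

```java
import java.util.ArrayList;
import java.util.List;

class PauseWatchdog implements Runnable {
    private final long intervalMillis;   // how often we sample
    private final long thresholdMillis;  // drift above this is logged as a pause
    final List<Long> pauses = new ArrayList<>();
    private volatile boolean running = true;

    PauseWatchdog(long intervalMillis, long thresholdMillis) {
        this.intervalMillis = intervalMillis;
        this.thresholdMillis = thresholdMillis;
    }

    public void run() {
        long last = System.nanoTime();
        while (running) {
            try { Thread.sleep(intervalMillis); } catch (InterruptedException e) { return; }
            long now = System.nanoTime();
            // If we slept much longer than asked, something (e.g. GC) paused us.
            long drift = (now - last) / 1_000_000 - intervalMillis;
            if (drift > thresholdMillis) {
                pauses.add(drift);  // a real system would log this with a timestamp
            }
            last = now;
        }
    }

    void stop() { running = false; }
}
```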
Replication statistics in administration API - This is an important addition to the administration API which enables you to monitor the rate at which objects are replicated and whether or not there's an issue with the replication mechanism. For more details refer to the org.openspaces.admin.space.SpaceInstanceStatistics interface (specifically, the getReplicationStatistics() method).
Version 7.0.0 was released on July 14, 2009. To get more details on the release please use the following links:
Milestone release date: RC1: May 27, 2009; RC2: June 21, 2009
All new grid based task execution framework for XAP.NET with support for parallel task execution across the grid and Map/Reduce style invocations. For more details please refer to this page.
Better HA and deployment control with Deployment Zones: This version includes support for deployment zones, which enable you to tag a GigaSpaces container as belonging to one or more zones. Zones can represent physical availability zones (such as cabinets, racks or even physical locations), machine types (e.g. small, medium, large) or anything else you want to consider when deploying your processing units.
At deployment time, you can specify zone requirements for your processing unit, e.g. deploy the processing unit only to specific zones, and make sure that primaries and backups are not deployed in the same zone.
FIFO operation modifier: In previous versions, FIFO support was limited to the class and forced FIFO behavior on all operations related to the annotated class. With RC2, FIFO is supported at the operation level, by using the new ReadModifiers.FIFO modifier, and the new fifoMode attribute of the @SpaceClass annotation. This means that you can control on a per operation basis whether or not you will get the objects from the space in FIFO ordering.
ID based operations in XAP.NET for improved usability and performance of such operations.
Multiple UI improvements and enhancements:
Web application statistics: The UI now shows the total number of requests, the request throughput and the request latency for deployed web applications. These statistics are also exposed via the new Administration API and can be used for dynamically scaling the web application.
Expand/Collapse support for UI component tree
Sortable tables in hosts and applications view
New splash screen and icons for better usability
New information in status bar about the number of GSMs, GSCs and GSAs in the cluster
Logging improvements for easier troubleshooting and monitoring:
Time based roll-over policy for log files. Logs are now rolled over daily instead of when reaching a certain capacity.
Improved naming convention for log files: Log file names now include the date and time at which they were created, the component that created them (e.g. GSC, GSM), the host name on which they were created and the id of the process that created them. For example: 2009-06-18~12.05-gigaspaces-gsc-host01-4792.log
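The naming scheme can be reproduced with a few lines of Java (the build method and its parameters are illustrative helpers, not a GigaSpaces API):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

class LogFileName {
    // Rebuilds a name in the documented pattern:
    // <date>~<time>-gigaspaces-<component>-<host>-<pid>.log
    static String build(Date created, String component, String host, long pid) {
        String stamp = new SimpleDateFormat("yyyy-MM-dd~HH.mm").format(created);
        return stamp + "-gigaspaces-" + component + "-" + host + "-" + pid + ".log";
    }
}
```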
Richer log messages which now also include the logging component's name. For example:
2009-06-10 18:43:56,328 myPU.1  WARNING [com.gigaspaces.core.cluster.replication] -
Replicator: connection with target space will be reestablished
Milestone 7 was never publicly released due to some issues discovered prior to the milestone planned release date.
Therefore it was skipped and the issues were fixed in milestone 8.
Significantly reduced memory footprint: Objects stored in the space consume as little as half the memory compared to previous releases. This is due to changes in the internal data structures the space uses to store objects and their related metadata.
Significantly improved performance of the local cache in general, and of id based operations with a local cache in particular: readById operations are up to forty (!!!) times faster than before and support millions of reads per second. This is due to a new reference based storage model which reduces the overhead of serving read requests to a bare minimum and does not require new object creation or serialization.
More efficient read-through database access: concurrent requests for the same data will only require a single database hit, thus easing the load on the database in extreme situations and improving overall read-through performance
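Request coalescing of this kind can be sketched in plain Java (an illustration of the general technique, not GigaSpaces' actual read-through code): the first caller for a given key performs the database load, while concurrent callers for the same key wait on the same future instead of hitting the database again:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.FutureTask;
import java.util.concurrent.atomic.AtomicInteger;

class CoalescingLoader {
    private final Map<String, FutureTask<String>> inFlight = new ConcurrentHashMap<>();
    final AtomicInteger dbHits = new AtomicInteger(); // counts simulated DB hits

    String get(String key) throws Exception {
        FutureTask<String> task = new FutureTask<>(() -> {
            dbHits.incrementAndGet();   // simulated database hit
            return "value-for-" + key;  // simulated query result
        });
        FutureTask<String> existing = inFlight.putIfAbsent(key, task);
        if (existing == null) {
            task.run();                 // only the first caller actually loads
            existing = task;
            // a production version would also evict the entry once loaded,
            // or move the value into the cache proper
        }
        return existing.get();          // concurrent callers share the result
    }
}
```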
Improved application monitoring and consolidated view of the physical infrastructure in the GUI: With m8, the UI contains 3 main tabs:
Applications: Shows the deployed processing units and displays information about their configured components, such as spaces, event containers, remote services and JEE applications.
Hosts: Shows the physical infrastructure, i.e. which machines in the network run GigaSpaces components (GSA, GSC, GSM) and what the runtime state of each host and of the components deployed on it is. The information provided includes JVM statistics and configuration, and in the next milestone will also include machine wide statistics such as CPU, memory and network utilization.
Improved logging: There were a number of improvements to the logging framework in this milestone:
Support for time-based rollover policy which starts a new log file every X days, weeks or months
Improved log message format, which now also includes (if applicable) the name of the processing unit from which the log message was issued
Milestone 6 highlights
Milestone release date: March 16, 2009
Simplification of product jars: In an effort to make the initial user experience with the product smoother, we have reduced the number of jars needed for development and runtime. This is reflected in a new jar structure under the product's lib directory. The lib directory now contains 3 sub directories as follows:
lib/required: contains mandatory jars required for compile time and runtime (when running standalone clients and spaces), as follows:
gs-runtime.jar: includes all classes in former JSpaces.jar, and all jini and service grid jars, i.e. everything needed to run and compile a GS client and Space if you're not using OpenSpaces.
gs-openspaces.jar: former openspaces.jar
spring.jar and commons-logging.jar - mandatory if you use OpenSpaces (since it requires Spring, and Spring requires commons-logging)
So for applications that use OpenSpaces, 4 jars altogether are required to compile and run a standalone GigaSpaces client or space; for those that do not, a single jar is required. All of these jars are located under the lib/required directory.
lib/optional: contains optional jars (such as GS-Mule integration jars) and other build products such as openspaces schema files and sources.
lib/platform: Contains jars which are required by the GigaSpaces runtime environment and tooling. In most cases you will not need to include any of the jars in this directory in your compile time or runtime classpath. The former lib/ext directory is now located under lib/platform/ext.
Security enhancements: All product components that can be accessed remotely (GSM, GSC, GSA) can now be secured, and will not allow remote access unless the remote user is authenticated.
Additional statistics exposed via the new Admin API: The new Admin API enables you to receive detailed statistics about processing unit components, such as JEE containers, event containers and remote services.
UI Changes: We are continuously improving our UI and m6 is no exception. Main changes include a clearer processing unit view, which includes all components of the processing unit, such as spaces, web application, event containers and remote services. We have also rearranged certain pieces of information to better reflect the runtime state of your application.
Glassfish v3 Prelude support: You can now deploy a JEE web application on an embedded Glassfish v3 container. This is done by using the jee.container=glassfish deployment property. Complete documentation of this feature will soon be available.
Milestone 4 highlights
Milestone release date: Feb. 1, 2009
Improved read concurrency for multicore boxes. This is the first stage of our multicore optimizations and lock free read operations in version 7.0.
Reliable notifications support: Notifications can now be reliable, i.e. guaranteed to be delivered at least once to the client. This is documented here and is relevant for the event session API. The notify container will expose this in milestone 5.
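The at-least-once contract can be sketched as follows (a minimal illustration of the general idea, not the GigaSpaces implementation): events stay in an unacknowledged set until the client acknowledges them, and anything unacknowledged is redelivered on reconnect, so duplicates are possible but loss is not:

```java
import java.util.*;
import java.util.concurrent.*;

class AtLeastOnceChannel {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private final Map<String, String> unacked = new ConcurrentHashMap<>();

    void publish(String id, String event) {
        unacked.put(id, event); // remembered until the client acknowledges it
        pending.offer(id);
    }

    // The client pulls an event; it stays in 'unacked' until acknowledged.
    Map.Entry<String, String> receive() throws InterruptedException {
        String id = pending.take();
        return new AbstractMap.SimpleEntry<>(id, unacked.get(id));
    }

    void ack(String id) { unacked.remove(id); }

    // On reconnect, everything not acknowledged is queued again, which is why
    // delivery is "at least once": duplicates are possible, loss is not.
    void redeliverUnacked() { pending.addAll(unacked.keySet()); }
}
```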
Milestone 3 highlights
Milestone release date: Jan. 11, 2009
Significantly improved classloading model - Putting classes in shared-lib is no longer required. All processing unit libraries can now be placed in the lib directory, and are deployed to the processing unit's classloader rather than the GSC-wide classloader. This enables better undeploy capabilities and eliminates class sharing between processing units.
An all new GigaSpaces Agent which manages the entire life cycle of the GigaSpaces runtime components (GSM, GSC, lookup service, etc.) and enables you to control them remotely via the UI or the new Admin API (see the next bullet on this list).
Support for time based window scenarios by maintaining the lease as part of the POJO instance. You can now annotate a field with the @SpaceLeaseExpiration annotation and the space will use it to store and retrieve the lease expiration time of the instance. This value can later be propagated to an external data source, and based on it, expired instances can be filtered out when data is loaded from the database.
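The filtering step on the database side can be sketched in plain Java (the Trade class and filterExpired method are hypothetical, for illustration only): instances whose persisted lease expiration has already passed are simply dropped when loading:

```java
import java.util.*;
import java.util.stream.*;

class Trade {
    final String id;
    final long leaseExpiration; // millis since epoch, as persisted from the
                                // @SpaceLeaseExpiration-annotated field
    Trade(String id, long leaseExpiration) {
        this.id = id;
        this.leaseExpiration = leaseExpiration;
    }
}

class LeaseFilter {
    // Keep only instances whose lease has not yet expired at load time.
    static List<Trade> filterExpired(List<Trade> loaded, long now) {
        return loaded.stream()
                     .filter(t -> t.leaseExpiration > now)
                     .collect(Collectors.toList());
    }
}
```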
UI improvements: Ability to view the processing unit elements and the space cluster that belongs to it in the same tab.
A refactored cache eviction policy mechanism, providing significantly improved performance for our LRU eviction policy. The speed-up gained by this new implementation is more than 100%. This is still a work in progress; more performance optimizations will come in the next milestones.