Integration of GigaSpaces with other development frameworks is key to the GigaSpaces strategy of interoperability and freedom from vendor lock-in.
Prior to version 6.5, only the Java and .NET frameworks were supported. There was a relatively large gap between the functionality of the Java version and that of the .NET version, specifically in the application server (XAP) capabilities. The .NET version was built to address basic caching scenarios rather than application-server scenarios.
Version 6.5 includes enhanced support for existing frameworks in Java (JEE, Spring, Hibernate) and the new Mule support. The .NET version has been significantly enhanced to support database interaction through nHibernate. The .NET framework supports .NET Processing Units that deploy .NET business logic in the GigaSpaces containers. The C++ API is a complete rewrite of the previous JNI framework, and supports both data-caching scenarios and application-server development scenarios by enabling C++ workers in OpenSpaces deployments. Java, .NET and C++ performance has been significantly enhanced in 6.5. More specific notes on Java and C++ are provided separately.
Keep your existing framework and application code, while injecting GigaSpaces as the scaling and performance engine (as well as the container), to manage the SLA of the entire application.
High-performance interoperability – the language barrier becomes almost obsolete.
Avoid vendor lock-in.
Business Logic Virtualization Layer
Service Virtualization Framework (SVF) – SessionBean++
For services previously exposed via EJB 3.0 or Spring remoting, the service virtualization layer provides significant added value. It allows POJO-based business services to enjoy the transparent business logic and data distribution that GigaSpaces provides, and boosts their scalability with zero code changes. It enables virtualization of multiple Service Beans that run on multiple machines as a single service, and can map method invocations on that virtualized service to particular instances, based on method argument content. It also enables execution of map/reduce-style invocations. The client can choose a different method of invocation for that service, such as synchronous, asynchronous, or parallel, without changing the client code or service implementation.
Seamless gridifying of Service Beans.
Implicit failover to a backup service without breaking the client.
Implicit load-balancing of method invocation between services.
Enables a variety of invocation methods without changing the service code, such as asynchronous (pull) or synchronous (push), parallel, batch execution, etc. This allows the application to decouple the service logic from the invocation layer. In this way, different clients can use the invocation methods that best fit their needs, without depending on changes on the server side.
Enables scaling-out (between machines) and scaling up (within the same machine) at the same time.
Seamless collocation support for when the client and service implementation run on the same VM. In this case, client invocations are passed by reference to the collocated space and service bean.
Built-in support for data affinity – method invocation can be routed to the appropriate service instance, based on the locality of certain data with that specific service. This allows you to achieve minimum latency and maximum performance, while avoiding unnecessary network calls between the services and the data layer.
Designed for testability – collocation allows you to run the exact same business logic in a scaled-down, single-process environment for functional testing, and in a fully functional cluster, without changes to either the client or the server code. This simplifies the process of continuous integration and interactive testing.
Built-in transaction support – method invocations are now transaction-aware, and can implicitly share the same transaction context with the Data Grid layer. Unlike session bean transactions, transaction awareness here means that the method doesn't just receive a transaction context – the actual method invocation is written to the underlying space-based messaging under a transaction. This also enables recovery from failure during asynchronous (pull) method invocation: the asynchronous invocation is executed after a failure, even if the client that issued the request is no longer available. An example of this behavior is batch processing.
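The content-based routing of invocations described above can be sketched as a simple hash-based partition selection. This is a conceptual illustration only – the class and method names below are hypothetical, not the GigaSpaces API:

```java
// Conceptual sketch: route a service invocation to a partition based
// on the content of a routing argument, giving data affinity. Uses
// floorMod so the result is always a valid, non-negative index.
public class RoutingSketch {
    static int partitionFor(Object routingArg, int partitions) {
        return Math.floorMod(routingArg.hashCode(), partitions);
    }

    public static void main(String[] args) {
        int partitions = 4;
        // Invocations carrying the same routing argument always land
        // on the same partition, so the service instance holding the
        // related data receives the call.
        int p1 = partitionFor("order-42", partitions);
        int p2 = partitionFor("order-42", partitions);
        System.out.println(p1 == p2);                     // true
        System.out.println(p1 >= 0 && p1 < partitions);   // true
    }
}
```

The same hash is used for both data and invocations, which is what allows a routed call to land next to its data.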
Dynamic Services injection through JRuby/Groovy
Dynamic service injection is built on top of the SVF layer, and was designed to use dynamic languages such as JRuby and Groovy, as well as the new Java 6 scripting support. This enables users to introduce new business logic and services dynamically, without worrying about classpath, packaging and other matters that are normally required to ship Java code. This simplifies injecting new services into a live system without any downtime.
Injecting new business logic dynamically to an existing cluster without any downtime or changes to configuration.
Configuration of event containers has been greatly simplified by using annotations (@Polling, @Notify), in the same spirit as EJB3's simplified configuration. Matching is performed based on POJO/SQLQuery templates, using an @EventTemplate-annotated method within the listener. As before, event listeners are pure POJOs. Choosing between event-driven listeners and a remoting-based service is just a matter of configuration (annotation- or XML-based; see SVF for more information).
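The annotation-driven template matching described above can be sketched in plain Java. The annotation here is a local stand-in to illustrate the pattern, not the actual OpenSpaces @EventTemplate:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Sketch: a container scanning a pure-POJO listener for an annotated
// template method, in the spirit of @EventTemplate matching.
public class EventTemplateSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface EventTemplate {}   // stand-in for the OpenSpaces annotation

    public static class OrderListener {
        @EventTemplate
        public String template() { return "status = 'NEW'"; }  // matching template
    }

    static Object findTemplate(Object listener) throws Exception {
        for (Method m : listener.getClass().getMethods()) {
            if (m.isAnnotationPresent(EventTemplate.class)) {
                return m.invoke(listener);  // container uses this as the match template
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(findTemplate(new OrderListener())); // status = 'NEW'
    }
}
```

The listener stays a plain POJO; only the annotation tells the container which method supplies the template.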
Messages sent through the JMS API can be routed, based on the content of the message, to a receiver that has affinity with a specific data set. This optimization avoids an extra hop between the message receiver and the data tier.
Data awareness - messages and data are routed using the same underlying distribution model. This makes it extremely easy to route messages to where the data is, without a complex external mediation service.
Reduced latency (fewer network hops)
Better scalability - removes dependencies between different partitions at each tier
Better reliability - fewer moving parts
Improved performance - data can be collocated with the service
Non-intrusive API - Event listener implementation is totally abstracted from the specific GigaSpaces API.
Consistent behavior across all APIs - all messaging APIs share the same underlying messaging framework. This leads to better consistency in the way fail-over and scalability is achieved, as well as simplicity in terms of configuration.
A common mode for building enterprise SOA architecture relies on an Enterprise Service Bus that glues multiple services together in a loosely-coupled manner. Mule is commonly used as an open source ESB. This integration adds the value of GigaSpaces runtime into the Mule framework.
GigaSpaces users can now have an end-to-end SOA solution
Mule users can now benefit from the scaling, performance and clustering of GigaSpaces in Mule
Data Virtualization Layer
Persistency as a Service
Persistency as a Service provides the ability to store data in the Data Grid and keep it in-sync with an external database or other data source reliably, with minimum performance impact. Release 6.5 includes several optimizations for this layer on several fronts:
Simplifying configuration, using a new set of XML namespace elements.
Setting up a production-ready Data Grid, fully integrated with existing databases, can now be done in a matter of minutes.
Out-of-the-box performance is at least 10 times better than performance when using a centralized database directly.
Enhanced Hibernate support
This version comes with a new implementation of the GigaSpaces External Data Source interface, which is used by the space for persisting and retrieving objects from an external database. This implementation is now open-sourced, and has been moved into the OpenSpaces framework. It uses Hibernate to interact with the database, and includes the following main features:
Parallel loading of data from the database – by default, loading is performed by multiple threads in parallel, even for a single database table. This considerably improves performance (up to XX times faster than the old implementation). In addition, loading is done in batches (chunks) and uses Hibernate's result scrolling features for improved performance and memory utilization.
Support for Hibernate StatelessSession is also included. This provides even better performance, since no caching is performed by Hibernate in this mode of operation.
Improved plugability and abstractions for extending the existing framework. Interfaces have been simplified, and the new implementation includes a few more extension points and abstractions, to facilitate easier integration with existing databases.
Dealing with large data sets is now possible with the out-of-the-box configuration, and in most cases doesn't require custom development or complex data sources.
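The parallel, chunked loading described above can be sketched with a thread pool splitting one large result set into ranges. The "table" here is a simple in-memory list; chunk size and thread count are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: load a large result set in parallel chunks, in the spirit
// of the parallel initial-load feature described above.
public class ParallelLoadSketch {
    static List<Integer> loadParallel(List<Integer> table, int chunkSize, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<List<Integer>>> futures = new ArrayList<>();
        for (int start = 0; start < table.size(); start += chunkSize) {
            final int from = start;
            final int to = Math.min(start + chunkSize, table.size());
            // Each chunk is fetched by its own worker thread
            futures.add(pool.submit(() -> new ArrayList<>(table.subList(from, to))));
        }
        List<Integer> loaded = new ArrayList<>();
        for (Future<List<Integer>> f : futures) loaded.addAll(f.get());
        pool.shutdown();
        return loaded;
    }

    public static void main(String[] args) throws Exception {
        List<Integer> table = new ArrayList<>();
        for (int i = 0; i < 1000; i++) table.add(i);
        System.out.println(loadParallel(table, 100, 4).size()); // 1000
    }
}
```

Collecting futures in submission order preserves ordering while the fetches themselves run concurrently, which is the same idea as batched, scrolled loading from a single table.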
Stored procedure is a term used with databases to describe business logic that runs inside the database. The collocation of the business logic with the database provides a performance boost, as well as extensibility of the SQL data processing semantics. The fact that the data is stored in memory, in native object format, makes it possible to leverage existing dynamic languages, enabling execution of advanced data processing logic collocated with the data. Unlike stored procedures, this feature has been designed to enable dynamic injection of new "procedures" on the fly, without bringing the entire server down.
High-performance data processing
Extensibility beyond the built-in SQLQuery semantics
Changing or adding new processing logic, without bringing the data down, and without affecting other clients
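The dynamic injection of new "procedures" described above can be sketched as a concurrent registry that accepts new logic while the server keeps serving. The registry and procedure names are illustrative, not a GigaSpaces API:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch: registering data-processing "procedures" at runtime, in the
// spirit of dynamic, space-side logic injection without a restart.
public class DynamicProcedureSketch {
    private final Map<String, Function<int[], Integer>> procedures =
            new ConcurrentHashMap<>();

    // New logic can be injected while the "server" keeps running
    void register(String name, Function<int[], Integer> proc) {
        procedures.put(name, proc);
    }

    int execute(String name, int[] data) {
        return procedures.get(name).apply(data);
    }

    public static void main(String[] args) {
        DynamicProcedureSketch server = new DynamicProcedureSketch();
        server.register("sum", d -> Arrays.stream(d).sum());
        System.out.println(server.execute("sum", new int[]{1, 2, 3})); // 6
        // Inject a new procedure on the fly, without bringing the server down
        server.register("max", d -> Arrays.stream(d).max().orElse(0));
        System.out.println(server.execute("max", new int[]{1, 2, 3})); // 3
    }
}
```

Because the logic runs next to the in-memory data, each call avoids moving the data set to the client – the same rationale as database stored procedures.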
Initial load of large data sets
An initial load can be performed from an external data source, or from another space peer on the network. Prior to release 6.5, the initial loading process was performed sequentially. When dealing with large data sets, this basically meant long pauses during recovery time. In version 6.5, the load in both cases has been made parallel, and load time can be up to 10 times faster.
Recovery from failure is faster and smoother
Deployment & Provisioning
New SLA definitions
Prior to version 6.5, users had to manually configure the primary and backup machines per partition, in order to ensure that a primary and its backup do not run on the same machine.
As of 6.5, a new SLA <max-per-machine> element has been introduced. Using this SLA, the GSM is responsible for separating the two. This capability is available for both the data and the business logic services. (This feature has also been ported back to 6.0.X.)
Better utilization of machines, in case of a scale-down scenario
Prior to version 6.5, when updating a service collocated with data, the entire Processing Unit had to be brought down, bringing both the data and business logic down with it. As of version 6.5, a set of services can be reloaded independently from the data.
Upgrades to running services can be made without downtime.
You can now use a single button in the GigaSpaces UI to deploy and monitor an entire cluster.
Testing and debugging of cluster deployment is significantly simpler. You can now view a close-to-production setup in a single console, and easily debug your application in this scaled-down environment.
Prior to version 6.5, deployment of new Processing Units could be done only if the Processing Unit was placed in a certain directory on the machine running the GSM. As of 6.5, the GSM acts as a distributed repository, and enables deployment of Processing Units from any directory over the net. It automatically copies the Processing Unit from the client machine to the deployment repository on the GSM machine, as an implicit part of the deployment process.
Simple deployment process
Simple maintenance – deployment can be performed from anywhere on the net.
Enables better real-time troubleshooting and debugging.
Simplifies the process of correlating between events in a distributed environment.
Performance & Scalability
Internal data structures, the client proxy, transaction handling and destructive operations have been modified in version 6.5 to come as close as possible to being lock-free. Various tests have been conducted, including tests on Azul-based machines (which are now an integral part of the GigaSpaces testing environment).
Scaling up is simplified – most of the complexity (and optimization) is now maintained by the GigaSpaces runtime, and is therefore no longer the responsibility of the developer.
Better efficiency – higher productivity from existing resources.
Improved out-of-the-box scale-out (See more benchmark results in this blog post)
Grid-based deployment often spawns a large number of clients at the same time, resulting in a peak load of discovery requests. Prior to version 6.5, meeting this need required complex tuning of the discovery process. As of 6.5, the discovery algorithm has been improved to handle such peaks, and discovery time has also been reduced.
Larger grid deployments are now supported out-of-the-box
Large space cluster deployment has also been improved
Large object performance
Large objects (X megabytes in size) previously required expensive allocation of memory buffers. This resulted in a heavy load on garbage collection, and hard-to-predict behavior. Version 6.5 allows you to control the allocation of these buffers, so there is less dependency on the size of data objects.
Better performance with large objects
More deterministic behavior when dealing with large objects
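The controllable buffer allocation described above can be sketched as a fixed pool of reusable serialization buffers, which trades fresh per-object allocation (and the GC pressure it causes) for reuse. Pool size and buffer size are illustrative:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;

// Sketch: reuse serialization buffers instead of allocating a fresh
// one per large object, reducing garbage-collection load and making
// memory behavior more deterministic.
public class BufferPoolSketch {
    private final ArrayBlockingQueue<ByteBuffer> pool;

    BufferPoolSketch(int buffers, int bufferSize) {
        pool = new ArrayBlockingQueue<>(buffers);
        for (int i = 0; i < buffers; i++) {
            pool.add(ByteBuffer.allocate(bufferSize));
        }
    }

    ByteBuffer acquire() throws InterruptedException {
        return pool.take();            // blocks instead of allocating a new buffer
    }

    void release(ByteBuffer buf) {
        buf.clear();                   // reset position/limit for reuse
        pool.offer(buf);
    }

    public static void main(String[] args) throws InterruptedException {
        BufferPoolSketch bp = new BufferPoolSketch(2, 1024 * 1024);
        ByteBuffer b = bp.acquire();
        // ... serialize a large object into b ...
        bp.release(b);
        System.out.println(bp.acquire() != null); // true
    }
}
```

Capping the pool also bounds peak memory use, which is what makes behavior with large objects predictable.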
Transport layer scalability (LRMI)
Connection and threading consumption at the transport layer has been improved to support large numbers of concurrent blocking operations. As of version 6.5, a blocking client operation doesn't consume a thread at the server side, so the communication layer doesn't reach starvation (blocking operations are now managed at a higher level in the implementation stack). This means that the out-of-the-box configuration can now easily handle a large number of concurrent blocking operations without tuning.
Better stability under a large concurrent load on a single-space instance
Improved scaling compared to tier-based implementations
Based on extensive research done by an independent 3rd party consultancy, GigaSpaces XAP proved far superior to a traditional JEE-based solution, in terms of throughput and latency.
The research project included porting a typical JEE OLTP application to GigaSpaces XAP, and benchmarking both applications on the same hardware.
The results on the same hardware were 6 times more throughput, with up to 10 times less latency, as depicted in the accompanying graphs.
Better "bang for the buck" - the same hardware achieves much better performance, so each transaction costs far less than the JEE alternative.
Better and more predictable scaling when business requirements change - scalability with SBA proved near-linear, while scalability with JEE proved unpredictable; unlike SBA, adding 5 times more machines did not achieve a performance increase of the same order of magnitude.
Efficient memory overflow detection
Prior to version 6.5, the memory manager was turned off by default, because it introduced performance overhead. In addition, the memory manager did not monitor all possible scenarios that could lead to memory overflow. As of 6.5, the memory manager overhead has been optimized significantly, and is now turned on by default.
Better control over memory utilization (before the JVM crash) and garbage collection hiccups.
Monitoring the system allows you to take corrective action and fix problems
The memory manager is also used to detect when replication buffers are full. In this way, potential loss of data is prevented in case the Mirror Service or database is down.
Prior to version 6.5, the GigaSpaces transport configuration didn't expose all the configuration attributes that affect detection time in the case of a machine crash or cable disconnection. Version 6.5 provides better defaults that are optimized for fast failure detection, and exposes more fine-grained configuration that allows you to control failure detection behavior and timeouts.
Fast failure detection
Support large objects
(See Performance & Scalability)
Reduced dependency of system stability on user object size
Support for large cluster deployment
(See Performance & Scalability)
Larger clusters are now supported out-of-the-box
Robust dynamic discovery
(See Performance & Scalability)
Better stability in a scenario of too many discovery requests
Prior to version 6.5, users had to figure out how to set up GigaSpaces in Maven on their own. In addition, the GigaSpaces libraries and dependency setup were not optimized for use in a Maven repository. As of version 6.5, the library structure has been modified slightly to fit better with Maven's versioning structure.
Simple integration of GigaSpaces in an existing Maven-based environment
Maven: project creation plugin
Prior to version 6.5, users who wanted to create a new project either copied an example and modified it to fit their own project name, classpath dependencies, etc., or used an external Ant-based utility to create a project template. These options didn't cover the entire project lifecycle: compiling, packaging, unit tests, deployment, redeployment, etc. As of version 6.5, the Maven plugin leverages the Maven project structure and utilities to simplify the project creation and maintenance process (in a similar fashion to the Ruby and Groovy frameworks). It is now possible to create new projects based on pre-defined templates; compile, package and deploy them; and export them to Eclipse or Ant.
Simple project creation, testing, packaging and maintenance, both for Maven and non-Maven users.
The GigaSpaces UI allows you to run a scaled-down environment that uses the exact same setup as a full environment. This ensures that test coverage in such a scenario is close to 100% from a functionality perspective.
The new Maven plugin includes built-in unit testing. It also provides a simple way to run in a standalone process container, in the same container used for debugging (skipping the deployment stage), or in a full-blown container (the same as in production), with only a simple command-line switch. These can run from the command line, or from inside the IDE.
Improved coverage – shortens the project development cycle and provides the ability to detect bugs early
Improved reliability – less chances for unexpected failure
Improved time-to-market and ease of development – the development of large cluster applications is now significantly simpler
Simplified Processing Unit configuration
Prior to version 6.5, the user had to configure the main elements of the Processing Unit (polling containers, remoting, etc.) in the pu.xml file. As of version 6.5, the Processing Unit configuration leverages the Spring 2.5 annotation style (including support for component scanning), and enables simple injection of beans into event containers.
Configuration is less error-prone than with XML alone
Simplifies the configuration of Processing Units
Enable the configuration of OpenSpaces components outside of Spring
A new configurer has been added for remoting (Sync and Async) and other components, to enable the setup of the same components used with OpenSpaces, in an environment that doesn't use Spring as a container.
Enables embedding of OpenSpaces in non-Spring environments, such as other J2EE containers, console applications, etc.
Simplified product configuration
Some changes have been made in version 6.5 to simplify the configuration of the GigaSpaces environment:
Configuration overrides – allow you to override the default configuration without touching the "factory" setup. This way, users only make required changes and don't need to be exposed to the rest.
The number of configuration files has been reduced.
The required setup of both the cluster and the space is now exposed through OpenSpaces configuration, XML namespaces and annotations. This provides a single point for setting up the entire GigaSpaces configuration.
Reduces the complexity of setting up the GigaSpaces environment.
Simplifies the migration between versions of GigaSpaces – migration can be simple because there is no need to change the "factory" setup.
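The configuration-override mechanism described above can be sketched as a simple two-layer lookup: user overrides are consulted first, with the untouched "factory" defaults as the fallback. Property keys here are illustrative:

```java
import java.util.Properties;

// Sketch: layering user overrides on top of untouched "factory"
// defaults, in the spirit of the configuration-override mechanism.
public class ConfigOverrideSketch {
    // Consult the overrides first, then fall back to the factory setup
    static String resolve(Properties factory, Properties overrides, String key) {
        return overrides.getProperty(key, factory.getProperty(key));
    }

    public static void main(String[] args) {
        Properties factory = new Properties();   // shipped defaults, never edited
        factory.setProperty("cluster.schema", "partitioned-sync2backup");
        factory.setProperty("space.name", "mySpace");

        Properties overrides = new Properties(); // only the user's changes
        overrides.setProperty("space.name", "tradeSpace");

        System.out.println(resolve(factory, overrides, "space.name"));     // tradeSpace
        System.out.println(resolve(factory, overrides, "cluster.schema")); // partitioned-sync2backup
    }
}
```

Because the factory layer is never edited, upgrading to a new product version replaces the defaults without disturbing the user's overrides – which is what makes migration simple.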
Prior to version 6.5, not all ports allocated by GigaSpaces were configurable, and therefore in some cases they weren't accessible through the firewall. As of version 6.5, all ports allocated by GigaSpaces are configurable, providing fine-grained control over firewall port setup.
Users can now control the Lookup Service multicast settings.
Prior to version 6.5, interoperability between .NET/C++ and Java was not well defined. Release 6.5 introduces new features in that area which give smoother mapping between the representations of data objects in all those languages, such as:
AliasName - enables the mapping of different class and property names to fit the writing convention of each language.
Storage Type - provides control over the way certain user-defined types in .NET are serialized when written to the space. The user can choose native .NET serialization (the default), the portable serialization format, or interop serialization mode.
Native collections support - support for mapping almost all of the native .NET collections, such as Dictionary, to their Java equivalents (Map, etc.).
Nullable types support - nullable .NET types such as int? and long? are mapped to their Java equivalents (Integer, Long).
Interoperability is now made simple
Performance trade-offs that are often a cost of interoperability have been reduced significantly.
GigaSpaces is now a cross-language platform that easily allows the integration of components written in different languages, without a third-party interoperability solution. This opens up a few options that would otherwise not exist:
An organization can now have a single platform that supports various languages, thus reducing the cost of ownership normally associated with per-platform license, integration and maintenance costs.
Applications have more flexibility in choosing the right tool for the right job. For example, users can choose Excel as the front end, and either .NET/VBScript, C++ or Java as the back-end system.
Web applications written in ASP.NET can leverage the SessionStateStoreProvider and plug in GigaSpaces as a reliable session state store, sharing session state between different IIS instances.
Better scaling of .NET web applications
Improved performance (compared to database session state sharing).
Data Virtualization Layer
Seamless integration with existing database
Prior to 6.5, integration with existing databases was very limited, and didn't support out-of-the-box mapping between the data model running in the .NET application and that of the existing database.
As of 6.5, nHibernate is used to enable seamless mapping between the data model of the existing database and the one used in the .NET application. This feature was designed to run within a Mirror Service, thus enabling asynchronous writes to the database in the same way as the Java equivalent. Initial space load and synchronous writes to the database are also supported.
Prior to 6.5, a .NET application couldn't be deployed in the same way as Java services, and couldn't leverage the Processing Unit and SLA-driven container capabilities. This resulted in a more limited use of the .NET framework, more toward a caching scenario than the full Space-Based Architecture. As of 6.5, a .NET application can be deployed as a service bundled within a Processing Unit. In this way, .NET services can leverage the lifecycle, self-healing and other capabilities that were previously available only to Java developers.
Prior to 6.5, .NET users didn't have access to the space admin API, as was available with the Java version of the product. As of 6.5, admin API capabilities, such as GetClusterMemberNames, SpaceCopy and SpaceModeChanged event have been added to the .NET API.
Prior to 6.5, each call from the .NET layer to the underlying GigaSpaces libraries involved a few JNI calls. This introduced performance overhead compared with native .NET or pure Java.
As of 6.5, the underlying interface has changed so that the entire remoting operation and meta data handling is done at the .NET layer. In this way each call from .NET involves only one call to JNI.
Across-the-board performance improvements of up to 50% in single operations and 200% in multiple operations.
Equivalent to native .NET client. The removal of the JNI calls makes the current .NET implementations equivalent to a native .NET client, from a performance and overhead perspective. The JNI call becomes just like another call to an external library written in C or C++.
Support for 64-bit
Prior to 6.5, .NET 64-bit was not supported. As of 6.5, this limitation has been removed, and GigaSpaces .NET fully supports 64-bit platforms.
Performance - users can leverage 64-bit to store more than 4 GB of data per process. This reduces the size of the GigaSpaces cluster for the deployment of large data sets, and can potentially reduce the CPU cost involved in such deployments.
As of version 6.5, .NET services can run within the same processing unit as with Java, and therefore inherit the same testability features from the Java environment.
nHibernate version 1.2
An nHibernate-based open-source implementation of the External Data Source is provided with the product.
The new C++ API is the third generation of the GigaSpaces C++ API. The first version of the GigaSpaces C++ API was introduced around 3 years ago, when C++ engineers used an internal proprietary API (the ExternalEntry) to interact with the space. This API was based on the excellent CodeMesh libraries. The main problem with this approach was that it wasn't intuitive and was Java-centric: C++ engineers had to use Java semantics when writing their C++ applications, and had no ability to easily and dynamically scale the C++ business logic or deploy it across the GigaSpaces grid containers.
The new GigaSpaces 6.5 C++ API has been designed to address the following:
Being C++ friendly – allowing C++ engineers to use standard tools.
Ease of use – the ability to build C++-based applications in a matter of minutes.
Platform Support – to run on every operating system, 32 and 64-bit.
Better performance – to have similar or better performance than the Java-based clients.
Better Interoperability – to cope with space classes that contain nested classes.
Common development flow, configuration and deployment based on OpenSpaces – one common space runtime for all applications: Java, .NET and C++.
Using an alternative approach for internal JNI calls – based on lightweight command pattern protocol.
The most complex part of data interoperability across different development languages, is the ability for every class structure to be digested by every development language correctly, and to provide one common denominator that is understood by the IMDG. GigaSpaces Portable Binary Serialization (PBS) overcomes this gap.
The PBS mechanism efficiently marshals and serializes objects from different development languages into one universal data structure that is stored within the IMDG. With the PBS approach, a C++ object (which can be a graph object) holds references to other objects or arrays of objects, and is stored within the IMDG, in a way that a Java application using a matching class structure can read the data. The same is true with a .NET application.
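The idea behind PBS – writing fields in a fixed, language-neutral order so any runtime with a matching class structure can read them back – can be sketched in a greatly simplified form. This is a conceptual illustration, not the actual PBS wire format:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch: serialize fields in a fixed, language-neutral order so a
// Java, .NET or C++ reader with a matching class structure can
// reconstruct the object -- the core idea behind PBS, simplified.
public class PbsSketch {
    static byte[] marshal(int id, String name) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeInt(id);        // field order is part of the contract
        out.writeUTF(name);      // UTF encoding, readable from any language
        return bytes.toByteArray();
    }

    static String unmarshal(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int id = in.readInt();   // read back in the same agreed order
        String name = in.readUTF();
        return id + ":" + name;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(unmarshal(marshal(7, "trade"))); // 7:trade
    }
}
```

The real PBS additionally handles object graphs, arrays and references; the invariant is the same: one universal byte layout agreed on by all language runtimes.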
The GigaSpaces container hosts C++ services. It manages the lifecycle of these services and allows them to interact with their data source (the space) in the most efficient manner. The C++ services can perform any space operation, or invoke other services, via the space acting as the service invocation layer. GigaSpaces ensures that services survive failures by keeping track of the number of service instances running across the GigaSpaces grid. If one of the instances fails, new instances are started. In the same manner, based on the predefined SLA, additional instances might be started or shut down. With this approach, you can scale the capacity of the system to cope with additional incoming requests in real time, without human intervention. C++ services can be deployed in the following manners:
Pure C++ business logic with its space(s) running in a remote process(s).
Collocated business logic and space instance(s) sharing the same memory address space, where the C++ business logic accesses its collocated spaces without involving remote calls.
A mixture of the above
Once the C++ business logic is deployed with collocated spaces, there are no remote calls involved when the C++ business logic interacts with the space. Remote calls are involved when the business logic accesses remote spaces explicitly (space proxy configured in clustered mode), and when the spaces are configured to have replica spaces. In the latter case, every destructive operation triggers a replication event, which transports the local space changes to the replica space(s). Data and operation replication can be done in a synchronous or asynchronous manner.
When the business logic is deployed with a collocated space, it can inherit the space's active mode (primary or backup). This means that you can have the business logic running in standby mode, i.e. not initialized, as long as its collocated space is running in backup mode. A space in backup mode gets its operations only from the existing primary space, and is not accessible to clients for direct interaction. Once the primary space and its collocated business logic fail (normally or abnormally), the backup's collocated C++ business logic is initialized and started.
Simple business logic management
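The synchronous versus asynchronous replication of destructive operations described above can be sketched in a greatly simplified form. The replica here is an in-process list and the example is in Java for consistency with the rest of this document; real replication crosses process and machine boundaries:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: synchronous vs asynchronous replication of destructive
// operations to a replica space, conceptually.
public class ReplicationSketch {
    final List<String> replica = new ArrayList<>();
    final BlockingQueue<String> asyncQueue = new LinkedBlockingQueue<>();

    void writeSync(String op) {
        replica.add(op);          // replica is updated before the call returns
    }

    void writeAsync(String op) {
        asyncQueue.offer(op);     // queued; a background replicator applies it later
    }

    void drain() {                // stands in for the background replicator thread
        asyncQueue.drainTo(replica);
    }

    public static void main(String[] args) {
        ReplicationSketch r = new ReplicationSketch();
        r.writeSync("write A");
        r.writeAsync("write B");
        System.out.println(r.replica.size()); // 1 -- async write not yet applied
        r.drain();
        System.out.println(r.replica.size()); // 2
    }
}
```

Synchronous replication gives the replica the operation before the write completes (stronger consistency, higher latency); asynchronous replication decouples the two (lower latency, a window of potential loss) – the trade-off the configuration above exposes.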
Messaging Virtualization Layer
Notification API, and C++ Data and Operation Replication Support
The C++ API supports data messaging in various ways:
Publish/subscribe model - an API for registering for notifications is provided, allowing the client to use a listener object that is triggered once a matching event occurs within the space. One-to-one, one-to-many, and single-consumer messaging models are supported.
Data distribution - replication of data and events for C++ objects is supported. Data and operations can be replicated synchronously or asynchronously, across the LAN or WAN, in a transparent manner.
All messaging options support FIFO, persistency, monitoring and interoperability.
This leads to:
Simple and transparent C++ data transfer to local nodes or remote nodes.
C++ Space Domain Classes acting as the Message delivery transport - i.e. one unified data and messaging foundation.
Data Virtualization Layer
C++ objects within the IMDG
C++ objects can be stored in the IMDG, running across multiple machines using different platforms. A C++ client running on a Windows 32-bit OS can store its data within an IMDG spanning Windows, Sun or Linux 64-bit machines, leveraging their large memory address space.
For fast data access, a C++ client can run a local cache, or a local view, with some of the IMDG data set located within the same memory address space that the C++ business logic is running in.
The IMDG can load its data from the backend database on demand, or in a lazy manner, and delegate updates via the Mirror Service, as a background activity, into the backend database. This architecture removes the database from the critical path of the application transaction and offloads the database. The IMDG maps the C++ data to the RDBMS using standard mapping technologies (Hibernate).
Fast data access
Data locality control
Deployment & Provisioning
Regular and Service Based
C++ application deployment can be done as a regular standalone C++ process that is not managed by the GigaSpaces provisioning environment, or as a C++ service that enjoys SLA and self-healing features once deployed into the GigaSpaces container.
Standalone C++ Process
When the C++ application runs as a standalone application, with the space running in another process, the C++ business logic interacts with the space via remote calls. All space operations are conducted using C++ objects, while the actual interaction is performed via PBS objects: the C++ objects are transformed into PBS objects at runtime and sent to the space via the C++ and Java runtime layers.
In this case, a failure of the C++ process requires human intervention to restart it, whether on the same machine or on a different one. Scaling the application likewise requires additional third-party systems, or human intervention, to start additional instances of the application.
C++ Service
When the C++ business logic runs as a service (also called a worker or bean), collocated with the space, no remote calls are involved when interacting with the space. Interactions with the space use C++ objects, just as in the standalone configuration. The C++ service life cycle is managed by GigaSpaces, allowing it to survive failures seamlessly. When additional service instances are required, GigaSpaces instantiates them on the relevant machines, increasing system capacity dynamically.
High-availability - Business logic survivability when deployed as a service.
Management & Monitoring
GUI and CLI tools support
Space monitoring and service deployment tools are available for C++ objects and C++ services. C++ Space classes and objects can be viewed and queried via the same standard tools available to .NET and Java users. The C++ service life cycle is managed via these standard tools, allowing C++ users to deploy services onto machines running GigaSpaces containers.
Short and simple development cycle
Smooth user experience.
Performance & Scalability
PBS Library Usage
C++ objects use the PBS libraries, allowing the C++ client to interact with the space runtime whether running remotely or collocated. The PBS layer compacts the data sent over the wire, and serializes non-primitive C++ Space domain class fields in an optimized manner. This provides fast response times and efficient network utilization.
Data is transported between the C++ runtime and the space runtime hosted within the Java runtime via a very efficient protocol that minimizes the number of JNI calls involved.
The C++ client runs a sophisticated, highly concurrent space proxy that supports multi-threaded applications, allowing the client to scale within the process in a near-linear manner.
Embedded space performance numbers (all numbers are for a single client thread with a 1K payload):
o Single mode - 20K space operations per second.
o Batch mode - 40K space operations per second.
Remote space performance numbers:
o Single mode - 4K space operations per second.
o Batch mode - 20K space operations per second.
These numbers scale almost linearly up to the number of cores on the client or space machine.
Low latency for remote space operations.
Scalable in two dimensions: in-process and out-of-process.
Once the C++ objects are stored within the space memory, they are highly available due to the synchronous replication policy of the space. You can enforce an SLA that ensures a specific number of copies of your C++ objects across the GigaSpaces grid. In the same manner, C++ services can have an SLA and survive failures.
No data loss.
High-availability for data and services
Integrated with the IDE
Developers can use their preferred development tool for the coding phase. The command-line tools required for the coding, debugging, and deployment phases can be integrated with the preferred C++ IDE (such as Microsoft Visual Studio).
C++ domain classes and C++ services are introduced to GigaSpaces via their relevant configuration files. The configuration files are standard XML based, and provide a structured way to specify the behavior of C++ domain classes and C++ services.
C++ classes used as Space domain classes can be introduced via configuration or code. When introduced via configuration, you specify, for each space domain class, its metadata decorations.
Here is an example of such a configuration:
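The original example is not reproduced here, so the following is an illustrative sketch only: the element and attribute names (`cpp-class`, `property`, `routing-property`) are hypothetical and do not reproduce the exact GigaSpaces configuration schema. It shows the general shape of such a file - one element per domain class, with per-field metadata.

```xml
<!-- Hypothetical sketch of a C++ domain class configuration;
     element and attribute names are illustrative, not the real schema. -->
<cpp-class name="Stock">
    <!-- Per-field metadata decorations for the space domain class. -->
    <property name="ticker" type="string" index="true"/>
    <property name="price"  type="double" index="false"/>
    <!-- Field used to route the object to a partition. -->
    <routing-property name="ticker"/>
</cpp-class>
```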