The GigaSpaces Elastic Service Manager allows a user to deploy a data grid that can scale out and in to meet changing demands. A simple handler interface is used to request additional computing resources and to release ones that are no longer required.
The Cisco Unified Computing System is a next-generation data center platform that:
For more details about UCS, see http://www.cisco.com/en/US/netsol/ns944/
The Cisco Unified Computing System includes an innovative XML API that offers a programmatic way to integrate with, and interact with, any of the more than 9,000 managed objects in UCS. Managed objects are abstractions of UCS physical and logical components such as adaptors, chassis, blade servers, and fabric interconnects.
Developers can use any programming language to generate XML documents containing UCS API methods. The complete and standard structure of the UCS XML API makes it a powerful tool that is simple to learn and implement.
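For example, a client typically authenticates with the aaaLogin method and then queries managed objects by class with configResolveClass. The request documents are plain XML; the credentials and cookie value below are placeholders:

```xml
<!-- Authenticate and obtain a session cookie -->
<aaaLogin inName="admin" inPassword="password" />

<!-- Use the returned cookie to list all blade servers in the system -->
<configResolveClass cookie="1234567890/abcdef" classId="computeBlade" inHierarchical="false" />
```

Every API call follows this same pattern: a single XML element whose name is the method and whose attributes are the parameters, which is why the API is straightforward to generate from any language.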
The UCS Manager API allows a developer to provision bare-metal machines and set them up to their requirements, including configuring pools of compute resources, MAC addresses, and SAN WWN addresses. These features make it an ideal, scalable platform that GigaSpaces can leverage to scale out its data grid and application server. For more details about the UCS Manager, see http://developer.cisco.com/web/unifiedcomputing/home
The scaling agent is an implementation of the GigaSpaces ESM Scaling Handler interface that uses the UCS Manager API to handle GigaSpaces scaling events. The scaling handler interface is copied in the code segment below, and the design of the UCS Elastic Scale Handler is also explained in greater detail in the following sections.
Deployment of a data grid is initiated by connecting to a running instance of the ESM, and requesting a new data grid deployment. From this point on, the ESM controls the provisioning and removal of resources.
Here is the control flow of a typical deployment:
The project source code includes a demo application, com.gigaspaces.ucs.deploy.ESMDemo. This class initializes a data grid and performs some basic operations on it, and it is the best place to start. Here you can change the test grid's settings, such as its capacity and availability, and see how they affect the grid deployment.
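The demo's deployment step can be pictured roughly as follows. This is Java-like pseudocode only; the exact class and method names (AdminFactory, ElasticDataGridDeployment, capacity, highlyAvailable) depend on the GigaSpaces version in use and should be checked against the shipped ESMDemo source:

```
// Pseudocode: connect to a running ESM and request an elastic data grid.
admin = new AdminFactory().addLocator("esm-host").createAdmin()
esm   = admin.getElasticServiceManagers().waitForAtLeastOne()

deployment = new ElasticDataGridDeployment("demo-grid")
                 .capacity("2GB", "4GB")   // min/max memory for the grid
                 .highlyAvailable(true)    // keep a backup per partition

esm.deploy(deployment)  // from here on, the ESM provisions and removes machines
```

Changing the capacity or availability arguments here is what lets you observe different scale-out behavior in the demo.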
A video file attached to this page shows the demo code in action.
This section describes the implementation of the elastic handler used in the UCS integration (see com.gigaspaces.ucs.scaleHandler.UCSElasticScaleHandler in the downloadable package). The handler uses the UCS Manager XML API, so an understanding of this API is important. The UCS Manager API programmer's guide is available here.
This section describes the main technical details of the integration with the UCS Manager API.
The handler uses JAXB to marshal and unmarshal the XML documents used by the Manager API. The domain objects were generated by the JAXB compiler from the XSD schema files available from Cisco (and also included in the project sources). XML documents are sent to the UCS Manager over HTTP/S using the Apache HttpClient library.
By default, JAXB loads all domain classes at start-up in order to speed up marshaling and unmarshaling. Unfortunately, this means that a large number of domain classes can cause a significant delay during start-up, and the UCS Management domain is quite large. To avoid this delay we use the fastBoot option, which loads domain objects the first time they are used. Because the number of classes the handler actually uses is fairly small, the resulting delay is minor.
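With the JAXB reference implementation, fastBoot is enabled through a system property set before the JAXBContext is created. A minimal sketch (the property name is specific to the JAXB RI; other providers may differ):

```java
public class FastBootConfig {
    // Vendor property of the JAXB reference implementation; an assumption
    // if you are using a different JAXB provider.
    static final String FAST_BOOT =
            "com.sun.xml.bind.v2.runtime.JAXBContextImpl.fastBoot";

    public static void main(String[] args) {
        // Defer loading of JAXB domain classes until first use, instead of
        // eagerly at JAXBContext creation time.
        System.setProperty(FAST_BOOT, "true");
        // Any JAXBContext.newInstance(...) call made after this point will
        // skip the eager class-loading step.
        System.out.println(System.getProperty(FAST_BOOT));
    }
}
```

The trade-off is that the first marshal/unmarshal of each class pays the loading cost, which is acceptable here because the handler touches only a small subset of the UCS domain.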
The service facade is a class that wraps the UCS Manager API and exposes the functionality that the handler requires. It implements the main methods of the UCS Manager API (login, configConfMo, configResolveChildren, configResolveClass, etc.) as well as various methods used by the handler when handling GigaSpaces events, such as starting and stopping machines.
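A minimal sketch of such a facade might look like the interface below. The low-level method names come from the documented UCS XML API; the higher-level machine-lifecycle helpers (startMachine, stopMachine) are illustrative names, not the actual signatures in the project:

```java
// Illustrative sketch of a UCS Manager service facade. The UCS method names
// (login/aaaLogin, configConfMo, configResolveClass, configResolveChildren)
// mirror the UCS XML API; the lifecycle helpers are assumptions.
public interface UCSService {
    // Thin wrappers around the UCS Manager XML API methods.
    String login(String user, String password);        // returns session cookie
    void configConfMo(String cookie, String dn, String configXml);
    String configResolveClass(String cookie, String classId);
    String configResolveChildren(String cookie, String dn);

    // Higher-level operations used by the elastic scale handler.
    String startMachine();            // boot a blade, return its address
    void stopMachine(String address); // release the blade back to the pool
}
```

Keeping the handler behind a facade like this isolates the GigaSpaces event-handling logic from the XML plumbing, so the transport or marshaling details can change without touching the handler.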
The GigaSpaces elastic handler does not require any prior installation of applications or libraries on a newly booted machine. Instead, the handler transfers all required files to the new machine over a secure copy connection (SCP, port 22) and then launches a script (again, over SSH) that sets up the files as required and starts the GigaSpaces agent. From this point on, the ESM manages this machine according to the current demands on the cluster.
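Conceptually, the bootstrap amounts to assembling and running two remote commands. The sketch below only builds the command strings; the host, user, archive, and script names are hypothetical placeholders, not values from the project:

```java
// Hypothetical sketch of the commands the handler would issue to bootstrap
// a new machine: copy the GigaSpaces files over scp, then start the agent
// remotely via ssh.
public class BootstrapSketch {
    static String copyCommand(String archive, String user, String host, String dir) {
        return String.format("scp %s %s@%s:%s", archive, user, host, dir);
    }

    static String launchCommand(String user, String host, String script) {
        // The remote script unpacks the files and starts the GigaSpaces agent.
        return String.format("ssh %s@%s sh %s", user, host, script);
    }

    public static void main(String[] args) {
        System.out.println(copyCommand("gigaspaces.tar.gz", "root", "10.1.1.20", "/opt/gigaspaces"));
        System.out.println(launchCommand("root", "10.1.1.20", "/opt/gigaspaces/start-agent.sh"));
    }
}
```

Because everything is pushed at boot time, the blade images stay generic and the installer scripts (next section) remain the single place to adapt for a specific deployment.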
The application files and scripts used by the installer can be modified to adapt them to a specific deployment scenario.