Describes the service recipe sections and syntax
Service Recipe Sections
- Prolog: Service name, icon filename, tier type, number of instances
- Life-cycle: Event names mapped to handling scripts or closures
- Plug-ins: Life detector, Details and Monitoring plugins
- UI (soon to be ported to an external file): Widgets and Menus configuration per service
The Service Recipe Prolog
The tier type is indicated for end-user information and for GUI modeling. Supported tier types are:
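As a point of reference, a minimal prolog might look like the following sketch. The service name, icon, tier type, and instance count shown here are illustrative values for a hypothetical Tomcat service, not values taken from this page:

```groovy
service {
    // Prolog: identity and deployment information for the service
    name "tomcat"            // service name
    icon "tomcat.png"        // icon filename, relative to the recipe folder
    type "APP_SERVER"        // tier type, used for end-user information and GUI modeling
    numInstances 1           // number of instances to deploy
}
```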
Lifecycle Event Handler
Lifecycle event handlers can be an external Groovy script (located in the same folder as the service recipe), a Windows batch or Linux/Unix shell script (also in the same folder), or an inline closure in the case of very short code. If you want to use scripts written in other languages, you can run them from a Groovy script, or from a batch/shell script, passing them the context through environment variables.
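For example, a very short handler can be written as an inline closure directly in the recipe; the event and message below are illustrative, not taken from this page:

```groovy
lifecycle {
    // An inline closure handler for a short, one-off task
    init { println "initializing the service" }
}
```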
In some cases you want a recipe that can run on several operating systems. To do so, you can use the following lifecycle event handler notation, which maps an operating-system name pattern to the matching script:
"Win.*" : "tomcat_run.bat",
"Linux" : "tomcat_run.sh",
"Mac.*" : "tomcat_run.sh"
"Win.*" : "tomcat_stop.bat",
"Linux" : "tomcat_stop.sh",
"Mac.*" : "tomcat_stop.sh"
Best practices for lifecycle event handlers:
The best approach is divide and conquer: do one step at a time. For example, in the pre-install event, only get the service binaries from the repository or the internet; in the install event, only unzip the binaries of the service; in the post-install event, tweak any configuration files that need updating, and so on.
This methodology makes the recipes easier to maintain and troubleshoot, and makes the external scripts reusable across different application recipes.
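Following this approach, a lifecycle section might be split as in the sketch below; the script names are hypothetical and only illustrate the one-step-per-event idea:

```groovy
lifecycle {
    preInstall  "get_binaries.groovy"    // only download the service binaries
    install     "unzip_binaries.groovy"  // only unzip the downloaded binaries
    postInstall "tweak_config.groovy"    // only adjust configuration files
}
```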
Passing context to event handlers
Lifecycle scripts get the context injected into them as environment variables or as system variables.
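For example, a Groovy handler can read an injected value from the environment. The variable name below is an assumption used for illustration, not a documented name:

```groovy
// Read a context value injected as an environment variable.
// "USM_SERVICE_NAME" is a hypothetical variable name, used here for illustration only.
def serviceName = System.getenv("USM_SERVICE_NAME")
if (serviceName != null) {
    println "running lifecycle handler for service: ${serviceName}"
}
```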
Service Recipe Plugins
Cloudify Service Recipes use several types of plugins for different purposes:
- Process Level Monitoring - this plugin is transparent to the recipe developer. It monitors the OS process of the service and reports the following metrics:
  - Process CPU time
  - Process CPU usage
  - Kernel CPU time
  - Process memory
  - Number of page faults
- Service Start Detector Plugin - currently used for detecting a successful start of the service, in order to invoke postStart and start dependent services
  - Currently, org.openspaces.usm.liveness.PortLivenessDetector, which verifies that a socket has been opened on port X, is the only built-in detector
- Details Plugin - used to report custom service details to the Admin API and to the Web Console
  - Currently, org.openspaces.usm.jmx.JmxDetails, using JMX MBean attributes, is the only built-in plugin
- Monitoring Plugin - used to sample custom metrics once every X seconds and report them to the Admin API and the Web Console
  - Currently, org.openspaces.usm.jmx.JmxMonitor, using JMX MBean attributes, is the only built-in plugin
There are two modes of using plugins:
- Configure a built-in plugin
- Add a custom plugin. Custom plugins and their dependencies should be placed (as jars) under the usmlib sub-folder
"Port" : ,
"TimeoutInSeconds" : 60,
"Host" : "127.0.0.1"
"Current Http Threads Busy": [
"Current Http Threads count": [