Apache Slider Architecture
Slider is a YARN application that deploys non-YARN-enabled applications in a YARN cluster.
Slider consists of a YARN application master, the "Slider AM", and a client application which communicates with YARN and the Slider AM via remote procedure calls and/or REST requests. The client application offers command line access, as well as low-level API access for test purposes.
The deployed application must be a program that can be run across a pool of YARN-managed servers, dynamically locating its peers. It is not Slider's responsibility to configure the peer servers, apart from supplying some initial application-specific application instance configuration. (The full requirements of an application are described in another document.)
Every application instance is described as a set of one or more components; each component can have a different program/command, and a different set of configuration options and parameters.
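As a rough sketch, such a specification might look like the following. This is illustrative Python, not Slider's actual JSON specification format; all names and fields are hypothetical.

```python
# Hypothetical application instance specification: each component names
# its own command, instance count and per-component options.
app_spec = {
    "name": "hbase-1",
    "components": {
        "master": {"instances": 1, "command": "bin/hbase master",
                   "yarn.memory": 1024},
        "worker": {"instances": 4, "command": "bin/hbase regionserver",
                   "yarn.memory": 2048},
    },
}

def total_instances(spec):
    """Total number of containers the AM would request for this spec."""
    return sum(c["instances"] for c in spec["components"].values())
```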
The AM takes the details of which roles to start and requests a YARN container for each component; it then monitors the state of the application instance, receiving messages from YARN when a remotely executed process finishes. When that happens, it deploys another instance of that component.
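The reaction to a finished container can be sketched as follows; this is a minimal illustrative model, not Slider's actual code, and the operation tuples are hypothetical.

```python
def on_container_completed(desired, running, component):
    """Sketch of the AM's reaction to a finished container: decrement the
    running count for the component, and if it is now below the desired
    count, emit a request for a replacement container."""
    running[component] -= 1
    if running[component] < desired[component]:
        return [("request_container", component)]
    return []
```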
A key goal of Slider is to support the deployment of existing applications into a YARN application instance, without having to extend Slider itself.
The application master consists of
- The AM engine which handles all integration with external services, specifically YARN and any Slider clients
- A provider specific to deploying a class of applications.
- The Application State.
The Application State is the model of the application instance, containing
- A specification of the desired state of the application instance: the number of instances of each role, their YARN and process memory requirements, and some other options.
- A map of the current instances of each role across the YARN cluster, including reliability statistics for each node used by the application instance.
- The Role History: a record of which nodes each role was deployed on, so that the same nodes can be re-requested in future. This is persisted to disk and re-read if present, for faster application startup.
- Queues tracking outstanding requests, released containers and starting nodes.
The Application Engine integrates with the outside world: the YARN Resource Manager ("the RM"), and the node-specific Node Managers, receiving events from the services, requesting or releasing containers via the RM, and starting applications on assigned containers.
After any notification of a change in the state of the cluster (or an update to the client-supplied cluster specification), the Application Engine passes the information on to the Application State class, which updates its state and then returns a list of cluster operations to be submitted: requests for containers of different types, potentially on specified nodes, or requests to release containers.
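This review pass can be sketched as a pure function from desired and actual per-role counts to a list of operations. Again, this is an illustrative model, not Slider's implementation.

```python
def review(desired, actual):
    """Sketch of the Application State review pass: compare desired and
    actual per-role container counts and emit the list of YARN operations
    (requests or releases) needed to close each gap."""
    ops = []
    for role in set(desired) | set(actual):
        gap = desired.get(role, 0) - actual.get(role, 0)
        if gap > 0:
            ops += [("request", role)] * gap
        elif gap < 0:
            ops += [("release", role)] * (-gap)
    return ops
```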
As those requests are met and allocation messages passed to the Application Engine, it works with the Application State to assign them to specific components, then invokes the provider to build up the launch context for that application.
The provider has the task of populating container requests with the file references, environment variables and commands needed to start the provider's supported programs.
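A launch context built by a provider might be sketched as below; the field names are illustrative, not Slider's or YARN's actual API.

```python
def build_launch_context(component, spec):
    """Sketch of a provider populating a container launch context with
    file references, environment variables and the start command."""
    return {
        "resources": spec.get("files", []),  # e.g. HDFS paths to localize
        "env": {"COMPONENT_NAME": component, **spec.get("env", {})},
        "command": spec["command"],
    }
```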
The core provider deploys a minimal agent on the target containers; as the agent checks in to the agent provider's REST API, it executes the commands issued to it.
The set of commands this agent executes focuses on downloading archives from HDFS, expanding them, and then running Python scripts which perform the actual configuration and execution of the target application, primarily through template expansion.
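That command sequence can be sketched as follows; the paths, filenames and flags are all hypothetical, chosen only to illustrate the download/expand/run pattern.

```python
def plan_agent_commands(archive_hdfs_path, install_dir, config_script):
    """Sketch of the command sequence an agent executes: fetch an archive
    from HDFS, unpack it, then run the Python script that templates the
    configuration and starts the target application."""
    return [
        f"hdfs dfs -get {archive_hdfs_path} /tmp/pkg.tar.gz",
        f"tar -xzf /tmp/pkg.tar.gz -C {install_dir}",
        f"python {config_script} --install-dir {install_dir}",
    ]
```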
To summarize: Slider is not a classic YARN analysis application, which allocates and schedules work across the cluster in short-to-medium-lived containers with the lifespan of a query or an analytics session; it is instead designed for applications with lifespans of days to months. Slider works to keep the actual state of its application cluster matching the desired state, while the application itself has the tasks of recovering from node failure, locating peer nodes and working with data in an HDFS filesystem.
As such it is one of the first applications designed to use YARN as a platform for long-lived services, Samza being the other key example. These applications' needs of YARN are different, and their application master design is focused on maintaining the distributed application in its desired state, rather than on the ongoing progress of submitted work.
The clean model-view-controller split was implemented to isolate the model and aid mock testing of large clusters with simulated scale, and hence increase confidence that Slider can scale to work in large YARN clusters and with larger application instances.
The application master is designed to be a crash-only application; clients are free to terminate the application instance by asking YARN directly.
There is an RPC call to stop the application instance. This is a nicety which includes a message in the termination log, and could, in future, perhaps warn the provider that the application instance is being torn down. That would be a potentially dangerous feature to add, as provider implementors may start to expect the method to be called reliably. Slider is designed to fail without warning, to rebuild its state on a YARN-initiated restart, and to be manually terminated without any advance notice.
The RPC interface allows the client to query the current application state, and to update it by pushing out a new JSON specification.
The core operations are:

- getJSONClusterStatus(): get the status of the application instance as a JSON document.
- flexCluster(): update the desired count of role instances in the running application instance.
- stopCluster(): stop the application instance.
There are some other low-level operations for extra diagnostics and testing, but they are of limited importance.
The flexCluster() call takes a JSON application instance specification and forwards it to the AM, which extracts the desired counts of each role to update the Application State. A change in the desired size of the application instance is treated like any reported node failure: it triggers a re-evaluation of the application state, building up the list of container add and release requests to make of the YARN resource manager.
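The flex path can be sketched as the same gap-closing logic, driven by a spec update; this is an illustrative model, not Slider's code, and the operation tuples are hypothetical.

```python
def flex(desired, actual, role_updates):
    """Sketch of flexCluster(): merge the new desired role counts into
    the model, then re-evaluate, emitting the container requests or
    releases needed to close each gap (just as a node failure would)."""
    desired.update(role_updates)
    ops = []
    for role, want in desired.items():
        gap = want - actual.get(role, 0)
        if gap > 0:
            ops += [("request", role)] * gap
        elif gap < 0:
            ops += [("release", role)] * (-gap)
    return ops
```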
The final operation, stopCluster(), stops the application instance.
Security and Identity
Slider's security model is described in detail in an accompanying document.
Agent to Application Master Secure Communication
By default, one-way SSL is used to secure the communication between Slider agents and the Application Master; two-way SSL can also be enabled. A more detailed discussion of the SSL implementation in Slider can be found in the SSL documentation.