This document serves as a basic user's guide for the CAS Workflow Manager project. The goal of the document is to allow users to check out, build, and install a base version of the CAS Workflow Manager, as well as perform basic configuration tasks. For advanced topics, such as configuring the Workflow Manager to Scale and other tips and tricks, please see our Advanced Guide.

The remainder of this guide walks through downloading and building the Workflow Manager, performing basic configuration, and running a simple end-to-end example.

Download and Build

The most recent CAS Workflow Manager source can be downloaded from the Apache OODT website, or it can be checked out from the OODT repository using Subversion. The CAS-Workflow project is located at ../../workflow/. We recommend checking out the latest released version (v1.5.1 at the time of writing).

Maven is the build management system used for OODT projects. We currently support Maven 2.0 and later. For more information on Maven, see our Maven Guide.
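The build steps below assume that Subversion, Maven, and a Java runtime are already installed. A quick way to verify this (a generic shell sketch, not part of OODT) is to check that each tool is on your PATH:

```shell
# Report whether each tool this guide relies on is available on the PATH.
for tool in svn mvn java; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

If any tool reports MISSING, install it before continuing.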

Assuming a *nix-like environment, with both Maven and Subversion clients installed and on your path, an example of the checkout and build process is presented below:

> mkdir /usr/local/src
> cd /usr/local/src
> svn checkout workflow

After the Subversion command completes, you will have the source for the CAS-Workflow project in the /usr/local/src/workflow directory.

In order to build the project from this source, issue the following commands:

> cd /usr/local/src/workflow
> mvn package    

Note that this command performs a number of tasks, including running the unit tests, which can take some time depending on your system. At the end of the process, you will see a Maven success banner.

Once the Maven command completes successfully, you should have a target directory under the workflow source directory. The project has been built in this directory as a distribution tar ball. To move the built project out of the source directory and unpack the tar ball, issue the following commands:

> cd /usr/local
> mv src/workflow/target/cas-workflow-vX.Y.Z-dist.tar.gz ./
> tar -xvzf cas-workflow-vX.Y.Z-dist.tar.gz
> export WORKFLOW_HOME=/usr/local/cas-workflow-vX.Y.Z

The resultant directory layout from the unpacked tarball is as follows:

bin/ etc/ logs/ doc/ lib/ policy/ LICENSE.txt CHANGES.txt

A basic description of the files and subdirectories of the deployment is presented below:

  • bin - contains scripts for running the Workflow Manager, including the "wmgr" server script, and the "wmgr-client" client script.
  • etc - contains the configuration files for the Workflow Manager, including the properties file used to configure the server options.
  • logs - the default directory into which log files are written.
  • doc - contains Javadoc documentation, and user guides for using the Workflow Manager.
  • lib - the required Java jar files to run the Workflow Manager.
  • policy - the default XML-based element and product type policy, used when running with the Lucene Workflow Instance Repository and/or the XML Workflow Repository, along with the ThreadPoolWorkflowEngine.
  • CHANGES.txt - contains the CHANGES present in this released version of the Workflow Manager.
  • LICENSE.txt - the LICENSE for the Workflow Manager project.

Now you have a built Workflow Manager at /usr/local/cas-workflow-vX.Y.Z. In the next section, we will discuss how you can configure the Workflow Manager for basic operations.

Configuration in 2 Minutes or Less

The reason for entitling this section "in 2 Minutes or Less" is to show that in its base deployment, with very minimal configuration, we can have the Workflow Manager in a usable state, capable of managing workflow tasks to completion. For the record, I haven't timed it, but it's pretty fast...

We are going to set up the Workflow Manager to use the XML-based Workflow Repository, the ThreadPoolWorkflowEngine, and Lucene Workflow Instance Repository extension points. The first step is to edit the wmgr script in $WORKFLOW_HOME/bin. Make the following changes:

  • Set the SERVER_PORT variable to the desired port on which you'd like to run the Workflow Manager. Our default port is 9001.
  • Set the JAVA_HOME variable to point to the location of your installed Java runtime. If you do not know where this is, run > which java and use the reported path.
  • Set the RUN_HOME variable to point to the location to which you'd like the Workflow Manager PID file written. Typically this should default to /var/run, but not all system administrators allow users to write to /var/run.
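Concretely, the three variables in the wmgr script might end up looking like the following sketch. The values shown are examples for a typical Linux host; substitute paths appropriate to your own environment:

```shell
# Illustrative settings from $WORKFLOW_HOME/bin/wmgr -- adjust per host.
SERVER_PORT=9001                 # port the Workflow Manager listens on
JAVA_HOME=/usr/lib/jvm/default   # example path; check with `which java`
RUN_HOME=/tmp                    # PID file location; /var/run if writable

echo "port=${SERVER_PORT} run_home=${RUN_HOME}"
```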

The second step in configuration is to edit $WORKFLOW_HOME/bin/wmgr-client script, making the following change:

  • Set the JAVA_HOME variable to point to the location of your installed JRE runtime.

In the third step of this configuration, you will set the Workflow Manager's various extension points. For more information about the functionality of these extension points, see our Developer Guide. By default, the Workflow Manager is built to use the XML-based Workflow Repository, the ThreadPoolWorkflowEngine, and Lucene Workflow Instance Repository extension points.

Make the following changes to the Workflow Manager properties file in $WORKFLOW_HOME/etc/:

  • Specify the path to the directory where the Workflow Manager will create the Lucene index and associated files by setting the org.apache.oodt.cas.workflow.instanceRep.lucene.idxPath property to $WORKFLOW_HOME/repo. Make sure that this directory does NOT exist the first time you run the Workflow Manager. If the Workflow Manager does not find a directory at the specified location, it will create all of the necessary directory structure and ancillary files.
  • Specify the path to the directory where the XML policy files are stored for the XML Workflow Repository. This path is set by org.apache.oodt.cas.workflow.repo.dirs. The default location (and default policy files) are located at $WORKFLOW_HOME/policy in the vanilla deployment of the Workflow Manager. Note that these properties need to be fully specified URLs (e.g., they should start with file://).
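Put together, the two entries in the properties file might look like the following sketch. The install path is an example only; property files do not expand environment variables like $WORKFLOW_HOME, so write the paths out in full:

```
# Lucene workflow instance repository index location
org.apache.oodt.cas.workflow.instanceRep.lucene.idxPath=/usr/local/cas-workflow-vX.Y.Z/repo

# XML workflow repository policy directories (note the file:// URL form)
org.apache.oodt.cas.workflow.repo.dirs=file:///usr/local/cas-workflow-vX.Y.Z/policy
```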
Optionally, you can change the default logging properties for the CAS Workflow Manager by editing the logging properties file in $WORKFLOW_HOME/etc/. We have tried to select sensible defaults for the average user, but if you would like more or less information generated in the Workflow Manager logs, you can edit the levels of the different categories of information in this properties file. The following logging levels are available: INFO, WARNING, FINE, FINER, FINEST, and ALL.

With this last step, you have configured the Workflow Manager. In order to test your configuration, cd to $WORKFLOW_HOME/bin and type:

> ./wmgr start

This will start up the Workflow Manager's XML-RPC server interface. Your Workflow Manager is now ready to run! You can test it by running a command that executes the preconfigured Hello World workflow.

Run the below command, assuming that you started the Workflow Manager on the default port of 9001:

> ./wmgr-client --url http://localhost:9001 --operation \
                              --sendEvent \
                              --eventName test 

You should see a variety of INFO messages, including the following:

INFO: WorkflowManager: Received event: test
INFO: WorkflowManager: Workflow testWorkflow retrieved for event test
INFO: Task: [Hello World] has no required metadata fields
INFO: Executing task: [Hello World] locally
Hello World: Chris
INFO: Task: [Goodbye World] has no required metadata fields
INFO: Executing task: [Goodbye World] locally
Goodbye World: Chris

Note that we have elided some of the timestamp information for the purposes of clarity. If you see the canonical Hello World statement, you have succeeded in configuring the CAS Workflow Manager, hopefully in 2 minutes or less (and even if it took a little more time, you have to admit it was by and large painless).

Learn By Example

Coming Soon...


In this Basic User Guide, we have covered a number of topics, including Workflow Manager installation, configuration, and a basic example of use. For more advanced topics, including the use of alternative queuing strategies and best practices for addressing scaling issues, see our Advanced Guide.