Work Queue User's Manual
Last Updated May 2012
Work Queue is Copyright (C) 2009 The University of Notre Dame.
This software is distributed under the GNU General Public License.
See the file COPYING for details.
Overview
Work Queue is a framework for building master/worker applications.
In Work Queue, a Master process is a custom, application-specific program
that uses the Work Queue API to define and submit a large number
of small tasks. The tasks are executed by many Worker processes,
which can run on any available machine. A single Master may direct
hundreds to thousands of Workers, allowing users to easily construct
highly scalable programs.
Work Queue is a stable framework that has been used to create
highly scalable scientific applications in biometrics, bioinformatics,
economics, and other fields. It can also be used as an execution engine
for the Makeflow workflow engine.
Work Queue is part of the Cooperating Computing
Tools (CCTools). You can download the CCTools from this web page,
follow the installation instructions, and you
are ready to go. From the same website,
or from within the CCTools distribution, you can view documentation for the full set
of features of the Work Queue API.
Building and Running Work Queue
Let's begin by running a simple but complete example of a master and a worker.
After trying it out, we will then show how to write a program from scratch.
We assume that you have already downloaded and installed the cctools in
the directory CCTOOLS. Next, download the example file for the language of your
choice.
If you are using the C example, compile it like this:
gcc work_queue_example.c -o work_queue_example -I${CCTOOLS}/include/cctools -L${CCTOOLS}/lib -ldttools -lm
If you are using the Python example, set PYTHONPATH to include the Python modules in cctools:
export PYTHONPATH=${PYTHONPATH}:${CCTOOLS}/lib/python2.6/site-packages
If you are using the Perl example, set PERL5LIB to include the Perl modules in cctools:
export PERL5LIB=${PERL5LIB}:${CCTOOLS}/lib/perl5/site_perl
This example program simply compresses a bunch of files in parallel. List the files to be
compressed on the command line. Each will be transmitted to a remote worker, compressed,
and then sent back to the master. (This isn't necessarily faster than doing it locally,
but it is easy to run.)
For example, to compress files a, b, and c, run this:
./work_queue_example a b c
You will see this right away:
listening on port 9123...
submitted task: /usr/bin/gzip < a > a.gz
submitted task: /usr/bin/gzip < b > b.gz
submitted task: /usr/bin/gzip < c > c.gz
waiting for tasks to complete...
The master is now waiting for workers to connect and begin requesting work.
(Without any workers, it will wait forever.) You can start one worker on the
same machine by opening a new shell and running:
work_queue_worker MACHINENAME 9123
(Obviously, substitute the name of your machine for MACHINENAME.) If you have
access to other machines, you can ssh there and run workers as well.
In general, the more you start, the faster the work gets done.
If a worker should fail, the work queue infrastructure will retry the work
elsewhere, so it is safe to submit many workers to an unreliable
system.
If you have access to a Condor pool, you can use this shortcut to submit
ten workers at once via Condor:
% condor_submit_workers MACHINENAME 9123 10
Submitting job(s)..........
Logging submit event(s)..........
10 job(s) submitted to cluster 298.
Or, if you have access to an SGE cluster, do this:
% sge_submit_workers MACHINENAME 9123 10
Your job 153083 ("worker.sh") has been submitted
Your job 153084 ("worker.sh") has been submitted
Your job 153085 ("worker.sh") has been submitted
...
When the master completes, your workers will still be available
(unless the master shut them down itself), so you can either run
another master with the same workers, or you can remove the workers
with kill, condor_rm, or qdel as appropriate.
If you forget to remove them, they will exit automatically after fifteen minutes.
(This can be adjusted with the -t option to work_queue_worker.)
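For example, assuming the -t option takes the idle timeout in seconds, a worker that
waits up to an hour for work before exiting could be started like this:
work_queue_worker -t 3600 MACHINENAME 9123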
Writing a Master Program
To write your own program using Work Queue, begin with the C, Python, or Perl example
as a starting point. Here is a basic outline for a Work Queue master:
q = work_queue_create(port);
for(all tasks) {
    t = work_queue_task_create(command);
    /* add to the task description */
    work_queue_submit(q,t);
}
while(!work_queue_empty(q)) {
    t = work_queue_wait(q,5);
    work_queue_task_delete(t);
}
work_queue_delete(q);
First create a queue that is listening on a particular TCP port. In C:
q = work_queue_create(port);
In Python:
q = WorkQueue(port)
The master then creates tasks to submit to the queue.
Each task consists of a command line to run and a statement of
what data is needed, and what data will be produced by the command.
Input data can be provided in the form of a file or a local memory buffer.
Output data can be retrieved in the form of a file or from the standard output of the program.
You must also specify whether each piece of data, input or output, needs to be cached at
the worker site for later use. In the example, we specify a command that takes a
single input file, produces a single output file, and requires both files to be
cached. In C:
t = work_queue_task_create(command);
work_queue_task_specify_file(t,infile,infile,WORK_QUEUE_INPUT,WORK_QUEUE_CACHE);
work_queue_task_specify_file(t,outfile,outfile,WORK_QUEUE_OUTPUT,WORK_QUEUE_CACHE);
In Python:
t = Task(command)
t.specify_file(infile,infile,WORK_QUEUE_INPUT,cache=True)
t.specify_file(outfile,outfile,WORK_QUEUE_OUTPUT,cache=True)
If a file does not need to be cached at the execution site, you can avoid wasting
storage there by specifying it like this. In C:
work_queue_task_specify_file(t,outfile,outfile,WORK_QUEUE_OUTPUT,WORK_QUEUE_NOCACHE);
In Python:
t.specify_file(outfile,outfile,WORK_QUEUE_OUTPUT,cache=False)
You can also run a program that is not necessarily installed at the
remote location, by specifying it as an input file. If the file
is installed on the local machine, then specify the full local path,
and the plain remote path. For example, in C:
t = work_queue_task_create("./my_compress_program < a > a.gz");
work_queue_task_specify_file(t,"/usr/local/bin/my_compress_program","my_compress_program",WORK_QUEUE_INPUT,WORK_QUEUE_CACHE);
work_queue_task_specify_file(t,"a","a",WORK_QUEUE_INPUT,WORK_QUEUE_CACHE);
work_queue_task_specify_file(t,"a.gz","a.gz",WORK_QUEUE_OUTPUT,WORK_QUEUE_CACHE);
In Python:
t = Task("./my_compress_program < a > a.gz")
t.specify_file("/usr/local/bin/my_compress_program","my_compress_program",WORK_QUEUE_INPUT,cache=True)
t.specify_file("a","a",WORK_QUEUE_INPUT,cache=True)
t.specify_file("a.gz","a.gz",WORK_QUEUE_OUTPUT,cache=True)
Once a task has been fully specified, it can be submitted to the queue where it
gets assigned a unique taskid. In C:
taskid = work_queue_submit(q,t);
In Python:
taskid = q.submit(t)
Next, wait for a task to complete, stating how long you are willing
to wait for a result, in seconds. (If no tasks have completed by the timeout,
work_queue_wait will return null.) In C:
t = work_queue_wait(q,5);
In Python:
t = q.wait(5)
A completed task will have its output files written to disk.
You may examine the standard output of the task in t->output
and the exit code in t->return_status. When you are done
with the task, delete it. In C:
work_queue_task_delete(t);
In Python, the task is deleted automatically when the Task object goes out of scope.
Continue submitting and waiting for tasks until all work is complete.
You may check to make sure that the queue is empty with work_queue_empty.
When all is done, delete the queue. In C:
work_queue_delete(q);
In Python, the queue is deleted automatically when the WorkQueue object goes out of scope.
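Putting these steps together, here is a rough sketch in C of a master along the lines of
the gzip example above. This is only a sketch, not the downloadable work_queue_example.c
itself; error handling is minimal and the file-name buffers are fixed-size for brevity:

#include "work_queue.h"

#include <stdio.h>

int main(int argc, char *argv[])
{
    /* Listen on the same port shown in the example output above. */
    struct work_queue *q = work_queue_create(9123);
    if(!q) {
        fprintf(stderr, "couldn't create queue\n");
        return 1;
    }
    printf("listening on port 9123...\n");

    int i;
    for(i = 1; i < argc; i++) {
        char outfile[1024], command[2048];
        snprintf(outfile, sizeof(outfile), "%s.gz", argv[i]);
        snprintf(command, sizeof(command), "/usr/bin/gzip < %s > %s", argv[i], outfile);

        /* Each task sends one input file and brings back one output file. */
        struct work_queue_task *t = work_queue_task_create(command);
        work_queue_task_specify_file(t, argv[i], argv[i], WORK_QUEUE_INPUT, WORK_QUEUE_CACHE);
        work_queue_task_specify_file(t, outfile, outfile, WORK_QUEUE_OUTPUT, WORK_QUEUE_NOCACHE);
        work_queue_submit(q, t);
        printf("submitted task: %s\n", command);
    }

    printf("waiting for tasks to complete...\n");
    while(!work_queue_empty(q)) {
        struct work_queue_task *t = work_queue_wait(q, 5);
        if(t) {
            printf("task %d finished with exit code %d\n", t->taskid, t->return_status);
            work_queue_task_delete(t);
        }
    }

    work_queue_delete(q);
    return 0;
}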
Full details of all of the Work Queue functions can be found in
the Work Queue API.
Advanced Usage
The technique described above is suitable for distributed programs of
tens to hundreds of workers. As you scale your program up to larger sizes,
you may find the following features helpful. All are described in the
Work Queue API.
- Pipelined Submission. If you have a very large number of tasks to run,
it may not be possible to submit all of the tasks and then wait for all of them. Instead,
submit a small number of tasks, then alternate waiting and submitting to keep a constant
number in the queue. work_queue_hungry will tell you if more submissions are warranted;
see the sketch after this list.
- Fast Abort. A large computation can often be slowed down by stragglers. If you have
a large number of small tasks that take a short amount of time, then Fast Abort can help.
The Fast Abort feature keeps statistics on task execution times and proactively aborts tasks
that are statistical outliers. See work_queue_activate_fast_abort.
- Immediate Data. For a large number of tasks or workers, it may be impractical
to create local input files for each one. If the master already has the necessary input
data in memory, it can pass the data directly to the remote task with
work_queue_task_specify_buffer.
- String Interpolation. If you have workers distributed across
multiple operating systems (such as Linux, Cygwin, Solaris) and/or architectures (such
as i686, x86_64) and have files specific to each of these systems, this feature
will help. The strings $OS and $ARCH are available for use in the specification of input
file names. Work Queue will automatically resolve these strings to the operating system
and architecture of each connected worker and transfer the input file corresponding
to the resolved file name. For example, in C:
work_queue_task_specify_file(t,"a.$OS.$ARCH","a",WORK_QUEUE_INPUT,WORK_QUEUE_CACHE);
In Python:
t.specify_file("a.$OS.$ARCH","a",WORK_QUEUE_INPUT,cache=True)
This will transfer a.Linux.x86_64 to workers running on a Linux system with an
x86_64 architecture and a.Cygwin.i686 to workers on Cygwin with an i686
architecture.
- Cancel Task. This feature is useful in workflows where there are redundant tasks
or tasks that become obsolete as other tasks finish. Tasks that have been submitted can be
cancelled and immediately retrieved without waiting for Work Queue to return them in
work_queue_wait. The tasks to cancel can be identified by either their
taskid or tag. For example, in C:
t = work_queue_cancel_by_tasktag(q,"task3");
In Python:
t = q.cancel_by_tasktag("task3")
This cancels the task with the tag 'task3'. Note that if several tasks share
the same tag, work_queue_cancel_by_tasktag will cancel and retrieve only one of the
matching tasks.
- Statistics. The queue tracks a fair number of statistics that count the number
of tasks, number of workers, number of failures, and so forth. Obtain this data with work_queue_get_stats
in order to make a progress bar or other user-visible information; the sketch below shows one way to do this.
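To illustrate how several of these features fit together, here is a rough sketch in C of a
master loop that activates Fast Abort, uses work_queue_hungry to keep the queue topped up,
and reports progress with work_queue_get_stats. The function make_next_task and the variable
total_tasks are hypothetical stand-ins for your own application logic, and the statistics
fields printed here may differ slightly between CCTools versions:

/* Sketch only: make_next_task() and total_tasks stand in for
   application-specific code that creates each task to run. */

struct work_queue *q = work_queue_create(9123);

/* Proactively abort tasks that run more than three times the average time. */
work_queue_activate_fast_abort(q, 3.0);

int submitted = 0;
int completed = 0;

while(completed < total_tasks) {
    /* Pipelined submission: only submit more work when the queue is hungry. */
    while(submitted < total_tasks && work_queue_hungry(q)) {
        struct work_queue_task *t = make_next_task(submitted);
        work_queue_submit(q, t);
        submitted++;
    }

    struct work_queue_task *t = work_queue_wait(q, 5);
    if(t) {
        completed++;
        work_queue_task_delete(t);
    }

    /* Report progress; field names are illustrative and may vary by version. */
    struct work_queue_stats s;
    work_queue_get_stats(q, &s);
    printf("workers busy: %d  tasks running: %d  tasks complete: %d/%d\n",
           s.workers_busy, s.tasks_running, completed, total_tasks);
}

work_queue_delete(q);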
For More Information
For the latest information about Work Queue, please visit our web site and
subscribe to our mailing list.