Service Composition Software Documentation

This is an implementation of the middleware functionality of the service composition architecture. A brief description of the architecture is presented below.

The operational model is that we have a set of composable services belonging to different service providers. A portal provider composes these services. Our architecture provides mechanisms for choosing a service instance for a particular client in a performance-sensitive manner, and for providing backup instances when failures occur.

A service-level path consists of the service instances chosen for a particular client session.

Our architecture is shown in the figure below. We have an overlay network of service clusters. These service clusters form the middleware platform on which services are deployed by different service providers. The service execution platforms form peering relationships with one another, and these peering relationships define an overlay network in the wide-area Internet. The peering relationships serve two purposes: (a) exchange of performance and liveness information between peers, and (b) composition of services across the peers. Service-level paths are formed on top of this overlay network -- this is shown as the set of blue lines in the figure. The dotted blue lines represent the alternate service-level paths.



The two shades of brown in the service-level path signify the two different services being composed. Note the noop services in between: these simply provide connectivity, and do not perform any data manipulation or transformation.

The data source itself can reside within the overlay network, or outside it. For instance, if the data source is a live video feed, it is likely to be outside the overlay network. If it is a video-on-demand service, it could reside within the network, deployed at multiple service execution platforms. In any case, the client is likely to be outside the overlay network, and it has to find a service cluster of the overlay network close to itself, with which it communicates.

Each service cluster has a cluster-manager (CM), and is assigned a service-cluster ID (SCID), which is a non-zero unsigned 16-bit number (unsigned short).
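
For concreteness, here is a minimal sketch in C of how an SCID could be represented and checked; the type and function names below are illustrative assumptions, not identifiers from this package.

#include <stdio.h>

/* A service-cluster ID (SCID): an unsigned 16-bit value, with 0 reserved as invalid. */
typedef unsigned short scid_t;

/* Hypothetical helper: returns 1 if the SCID is valid (non-zero), 0 otherwise. */
int scid_is_valid(scid_t scid)
{
    return scid != 0;
}

int main(void)
{
    scid_t scid = 42;
    printf("SCID %hu is %s\n", scid, scid_is_valid(scid) ? "valid" : "invalid");
    return 0;
}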

The software architecture is also shown in the architecture diagram. The two vertical layers are not part of the cluster-manager functionality, although some (unused) code for the service location component is part of the package. In this package, the functionality of both vertical layers is achieved through run-time configuration and command-line arguments.

The package includes the following implementations:

Here are the directories in the package:

Dependencies

You need the following packages to compile this software:

This software has been tested on Linux i386 platforms (RedHat 7.1, RedHat 7.2). I do not think there should be any major problems in using it on other UNIX platforms, but I have not actually written the configure.in to do this automatically. Sorry...

Compiling the software

To compile the software, type the following commands.

aclocal
autoconf
automake -a
./configure
make clean
make

To compile for running without the wide-area network emulator, edit "configure.in" to comment out the line containing "AC_DEFINE(IP_TUNNEL)", then run the above commands again.
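
For example, assuming the line appears in configure.in in roughly this form (the exact line in the package may differ), it can be commented out with autoconf's dnl comment marker:

dnl Compile without the wide-area network emulator:
dnl AC_DEFINE(IP_TUNNEL)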

Make sure you have the right paths for the Stanford graph-base and the Festival packages in serv-comp/Makefile.am and tts/Makefile.am respectively.
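
As a purely hypothetical illustration (the variable names and install locations below are assumptions, not the package's actual settings), the relevant lines in serv-comp/Makefile.am might look something like this:

# Hypothetical paths -- adjust to wherever the Stanford graph-base is installed;
# the actual variable names used by this package may differ
INCLUDES = -I/usr/local/sgb/include
LDADD    = -L/usr/local/sgb/lib -lgb

tts/Makefile.am would similarly need to point at the Festival headers and libraries.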

There is a set of compile-time configuration options. These are explained along with the documentation for the individual directories in the package.

Running the software

The steps to run the software are as follows (a rough sketch of the full sequence is given after the list):
  1. Run the timeSyncServer from the udp-lib sub-package.
  2. Start the emulator (if compiled to run with the emulator) from the models sub-package.
  3. Run the ControlServer program from the serv-comp sub-package.
  4. On each machine that is supposed to run the cluster-manager software, run startServComp from the serv-comp sub-package. You could use the startMillNodes.pl Perl script from the serv-comp package to start up all the cluster managers on different machines.
  5. Start up the services (this can be done right after step 1) -- see test cases below.
  6. Start up the client to set up service-level paths -- see the test cases below.
  7. To stop the cluster-managers cleanly, hit Ctrl-C on the ControlServer. This makes sure that the log-files of the cluster-managers are written out properly. After running each test case, you can examine the logs to see what paths have been created. Use merge.pl from the udp-lib sub-package to merge the logs.
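
Here is a rough sketch of that sequence. It assumes each program is started from its own sub-package directory, and it omits all command-line arguments (these are documented with the individual sub-packages), so treat it as an outline rather than an exact invocation:

# 1. Time synchronization server (udp-lib sub-package)
./timeSyncServer &

# 2. Wide-area network emulator (models sub-package), if compiled with IP_TUNNEL

# 3. Control server (serv-comp sub-package)
./ControlServer &

# 4. Cluster-manager on each participating machine (serv-comp sub-package) ...
./startServComp &
# ... or start all cluster managers from one place:
./startMillNodes.pl

# 5, 6. Start the services, then the client (see the test cases below)

# 7. When done: Ctrl-C on the ControlServer, then merge the logs (udp-lib sub-package)
./merge.pl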

The different run-time configuration files, as well as command-line arguments are explained along with the different sub-packages.

NOTE: You have to wait for all the cluster-managers to stabilize before starting up the client that makes requests to the cluster-managers. The ControlServer serves this purpose.

Testing the software

Individual test cases are included in the sub-packages, and you can use those to get a feel for the software. Here are the test cases for the entire software package:
  1. This test case runs a dummy service and sets up some dummy service-level paths.
  2. This test case runs the text-to-speech composed service.


Bhaskaran Raman, bhaskar@cs.berkeley.edu
Last modified: Tue Jan 22 18:13:29 PST 2002