This is an implementation of the middleware functionality of the service composition architecture described here. A brief description of the architecture is presented below.
The operational model is that we have a set of composable services belonging to different service providers. A portal provider composes these services. Our architecture provides mechanisms for the performance-sensitive choice of a service instance for a particular client, and for failing over to backup instances when failures occur.
A service-level path consists of the service instances chosen for a particular client session.
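To make the terminology concrete, the following is a minimal sketch of how a service-level path might be represented: an ordered list of service instances, one per composed service, chosen for a single client session. The types and field names below are illustrative only and are not taken from the package source.

    #include <string>
    #include <vector>

    // Hypothetical illustration only; not the package's actual data structures.
    struct ServiceInstance {
        std::string serviceName;    // e.g. a text-to-speech service
        std::string providerName;   // the service provider hosting this instance
        unsigned short scid;        // service-cluster ID where the instance runs
    };

    // A service-level path: the ordered set of service instances chosen for
    // one client session.  Alternate (backup) paths have the same shape and
    // are switched to when an instance on the primary path fails.
    struct ServiceLevelPath {
        std::vector<ServiceInstance> instances;
    };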
Our architecture is shown in the figure below. We have an overlay network of service clusters; these clusters form the middleware platform on which different service providers deploy their services. The service execution platforms form peering relationships with one another, and these peering relationships define an overlay network in the wide-area Internet. The peering relationships serve two purposes: (a) the exchange of performance and liveness information between peers, and (b) the composition of services across peers. Service-level paths are formed on top of this overlay network -- this is shown as the set of blue lines in the picture. The dotted blue lines represent the alternate service-level paths.
The two shades of brown in the service-level path signify the two different services being composed. Note the noop services in between: these simply provide connectivity, and do not perform any data manipulation or transformation.
The data source itself can reside within the overlay network, or outside it. For instance, if the data source is a live video feed, it is likely to be outside the overlay network. If it is a video-on-demand service, it could reside within the network, deployed at multiple service execution platforms. In either case, the client is likely to be outside the overlay network, and it has to find a node of the overlay network close to itself, with which it communicates.
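As a rough illustration of the information exchanged over a peering relationship, a cluster manager might keep per-peer state along the following lines. This is a sketch under assumed names; the package's internal representation may differ.

    #include <ctime>

    // Hypothetical per-peer state kept by a cluster manager; illustrative only.
    struct PeerEntry {
        unsigned short scid;       // the peer cluster's service-cluster ID
        double         latencyMs;  // performance information measured to this peer
        std::time_t    lastHeard;  // liveness: time the peer was last heard from
        bool           alive;      // considered up if heard from recently
    };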
Each service cluster has a cluster-manager (CM), and is assigned a service-cluster ID (SCID), which is a non-zero 16-bit unsigned number (unsigned short).
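For concreteness, the SCID constraint can be captured as follows. This is a sketch only; the package's own typedef and conventions may differ, and the use of 0 as an "unassigned" sentinel is an assumption made here for illustration.

    // Sketch: an SCID is a non-zero 16-bit unsigned value, so 0 can be
    // treated as "invalid / unassigned".  (Assumption for illustration;
    // the package may use a different convention.)
    typedef unsigned short scid_t;

    static inline bool scid_is_valid(scid_t scid) {
        return scid != 0;    // any value in 1..65535 is a legal SCID
    }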
The software architecture is also shown in the architecture diagram. The two vertical layers are not part of the cluster-manager functionality, although some (unused) code for the service location component is included in the package. In this package, the functionality of both vertical layers is achieved through run-time configuration and command-line arguments.
The package includes the following implementations:
Here are the directories in the package:
You need the following packages to compile this software:
This software has been tested on Linux i386 platforms (RedHat 7.1, RedHat 7.2). I do not think there should be any major problems using it on other UNIX platforms, but I have not written configure.in to handle this automatically. Sorry...
To compile the software, type the following commands.
    aclocal
    autoconf
    automake -a
    ./configure
    make clean
    make
To compile for running without the wide-area network emulator, edit "configure.in" to comment out the line containing "AC_DEFINE(IP_TUNNEL)", then run the above commands again.
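The IP_TUNNEL define selects code paths at compile time; the snippet below is a hypothetical illustration of the kind of conditional compilation this controls. The function and messages are examples only, not code from the package.

    #include <cstdio>

    // Hypothetical illustration of compile-time selection via IP_TUNNEL.
    static void send_data_example() {
    #ifdef IP_TUNNEL
        // Built with AC_DEFINE(IP_TUNNEL) left in configure.in: traffic is
        // routed through the wide-area network emulator.
        std::printf("sending through the wide-area network emulator tunnel\n");
    #else
        // IP_TUNNEL commented out: traffic goes over the real network.
        std::printf("sending directly over the network\n");
    #endif
    }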
Make sure you have the right paths for the Stanford graph-base and the Festival packages in serv-comp/Makefile.am and tts/Makefile.am respectively.
A set of compile-time configuration options is available; these are explained in the documentation for the individual directories in the package.
The run-time configuration files, as well as the command-line arguments, are explained along with the individual sub-packages.
NOTE: You have to wait for all the cluster-managers to stabilize before starting up the client that makes requests to them. The ControlServer serves this purpose.
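The intent is a start-up barrier: the client must not issue requests until every cluster-manager has stabilized. The sketch below illustrates that ordering requirement only; the function names and the polling stub are assumptions, not the ControlServer's actual interface.

    #include <cstdio>
    #include <unistd.h>

    // Stand-in for whatever query the ControlServer actually provides;
    // this stub is purely illustrative.
    static int query_stable_cluster_managers() {
        static int reported = 0;
        return ++reported;   // stub: pretend one more CM stabilizes per call
    }

    // Block until the expected number of cluster-managers have stabilized;
    // only then is it safe to start the client that issues requests.
    static void wait_for_cluster_managers(int expected) {
        while (query_stable_cluster_managers() < expected) {
            std::printf("waiting for cluster-managers to stabilize...\n");
            sleep(1);
        }
        std::printf("all %d cluster-managers stable; client may start\n", expected);
    }

    int main() {
        wait_for_cluster_managers(3);   // e.g. a deployment with three clusters
        return 0;
    }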