
Distributed proxying

In this approach, all the cooperating servers are equivalent: any of them can handle requests from any client. The servers make efficient use of network bandwidth by communicating over IP multicast. The protocol is presented graphically in Figure 2.1, and its steps are as follows:

1.
The client randomly picks a proxy A from a list of cooperating servers and sends its request to A.
2.
If A has the object cached, it returns the object to the client; otherwise,
3.
A multicasts a query for the requested object to the other cooperating servers.
4.
If another proxy, call it B, has the object cached, B sends an ACK to A; A then tells the client to contact B, and B sends the cached object to the client.
5.
If none of the cooperating servers has the object, A requests it from the remote server, returns it to the client, and caches the object.

Figure 2.1 : Illustration of the distributed proxying protocol.

Although it might appear strange that proxy A redirects the client to proxy B instead of having B send the copy to A, which would then pass it to the client, on closer inspection the redirection is a good design choice. Without it, two proxies would be busy serving a single client, and bandwidth on the inter-proxy network would be wasted carrying an extra copy of the object. Another possible solution would be to use multicast at the client level, but that would require modifying the client browser, which, again, is undesirable. The solution presented above still requires a client modification to implement the redirection mechanism, but it is a minor one: an extension to HTTP defining a special redirect code, and a change at the client to interpret that code.
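The client-side change really is small. A minimal sketch, assuming a hypothetical extension status code (the value 330, the `Location` header reuse, and the `send` transport are all illustrative, not part of the described system or of standard HTTP):

```python
PROXY_REDIRECT = 330   # hypothetical extension status code, not defined by HTTP

def fetch(url, proxy, send):
    """send(proxy, url) -> (status, headers, body) stands in for one HTTP exchange."""
    status, headers, body = send(proxy, url)
    if status == PROXY_REDIRECT:
        # The only client-side change: on the extension code, retry once
        # at the proxy named in the response (here reusing a Location header).
        status, headers, body = send(headers["Location"], url)
    return status, body

# Toy transport: proxy "A" redirects to "B", which holds the cached object.
def send(proxy, url):
    if proxy == "A":
        return PROXY_REDIRECT, {"Location": "B"}, b""
    return 200, {}, b"cached object"

print(fetch("/page", "A", send))   # → (200, b'cached object')
```

Everything else in the browser's request path is untouched; a client that ignores the extension code simply fails the request rather than misbehaving.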


Anil Gracias
2001-01-18