Lecture 3: Modeling the performance of computer networks
=========================================================

* Two main attributes measure the performance of a network: throughput (how many bits per second get through the network) and delay (how long it takes a bit to go from one end to the other). Note that these are two orthogonal concepts: think of them as the width and the length of a pipe through which data flows.

* Throughput is related to other quantities like the bandwidth and datarate of a link. A link has a certain "nominal" bandwidth or datarate at which it can send data; however, not all of it may be used all the time to send useful bits. There may also be packet losses and retransmissions. Throughput measures the number of useful bits delivered at the receiver, and is different from, but related to, the individual link data rates.

* The throughput of a transfer is limited by the link with the slowest throughput along the path - the bottleneck link. You cannot pump data faster than the rate of the slowest link. Note that the bottleneck link need not always be the link with the slowest nominal datarate. Sometimes a high-speed link may be shared by several flows, causing each flow to receive only a small share; that link then becomes the bottleneck.

* Sometimes you may not be able to send at the bottleneck rate, because your protocol may incur other delays, like waiting for ACKs. So, while the instantaneous throughput can reach the bottleneck link rate, the average throughput may be lower. The way to compute average throughput is always the same: take the amount of data sent over a period of time and compute the ratio. If a file of size F takes T units of time to be transferred, the average throughput is F/T.

* Problem: Concept of average throughput. Note the difference between bottleneck bandwidth/instantaneous throughput and average throughput in this problem; a short code sketch of the calculation follows this bullet. Consider a 125 KB file that needs to be sent through a network path. The bottleneck bandwidth of the path is 1 Mbps. The one-way delay between sender and receiver is 20 ms. Suppose the sender continuously sends data at the bottleneck rate, no packets are lost, and there are no retransmissions. How long does it take to send the file? Ans: 125*8*1000 bits / 1000*1000 bps = 1 second. The average throughput is 1 Mbps, which equals the bottleneck bandwidth. Now suppose the sender needs to wait for an ACK after sending every 1 KB packet, and assume the ACK also takes 20 ms to come back. The sender can now send 1 KB every 20+20 = 40 ms, so the average throughput is 1*8*1000 bits / 40 ms = 200 kbps - one-fifth of what it was before the ACK requirement. The time taken to send the file is therefore 5 times larger = 5 seconds. You can also compute the 5 seconds directly: 1 KB takes 40 ms, so 125 KB takes 125 * 40 ms = 5 sec.
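Here is a minimal Python sketch of the arithmetic in this problem. The names are illustrative, and, as in the problem statement, the stop-and-wait case treats each 1 KB packet as occupying one 40 ms round trip (the packet's own transmission time is ignored):

```python
FILE_SIZE_BITS = 125 * 1000 * 8   # 125 KB file
BOTTLENECK_BPS = 1_000_000        # 1 Mbps bottleneck bandwidth
ONE_WAY_DELAY_S = 0.020           # 20 ms one-way delay
PACKET_BITS = 1 * 1000 * 8        # 1 KB packets

# Case 1: the sender streams continuously at the bottleneck rate.
t_stream = FILE_SIZE_BITS / BOTTLENECK_BPS
print(f"continuous:    {t_stream:.0f} s, "
      f"avg throughput {FILE_SIZE_BITS / t_stream / 1e6:.0f} Mbps")

# Case 2: stop-and-wait - one 1 KB packet per 40 ms round trip.
rtt = 2 * ONE_WAY_DELAY_S                 # 40 ms
packets = FILE_SIZE_BITS // PACKET_BITS   # 125 packets
t_ack = packets * rtt
print(f"stop-and-wait: {t_ack:.0f} s, "
      f"avg throughput {FILE_SIZE_BITS / t_ack / 1e3:.0f} kbps")
```

This prints 1 s / 1 Mbps for the continuous case and 5 s / 200 kbps for the stop-and-wait case, matching the numbers above.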
* Delay: the delay of an end-to-end path is the sum of the delays on all links and at all intermediate nodes. A packet experiences several components of delay (two short sketches after this list illustrate them):
  (1) When a packet leaves a node, it first experiences transmission delay: all the bits of the packet have to be put out on the link. If a link can transmit data at R bits/s, a packet of size B bits requires B/R seconds just to be put on the wire.
  (2) Next comes propagation delay: the bits have to propagate at the speed of waves in the transmission medium to reach the other end. This delay depends on the length of the wire, and is usually significant only for long-distance links. If d is the distance the wave has to travel and s is the speed in the medium, the propagation delay is d/s. The speed of light is 3*10^8 m/s in free space, so a radio wave takes 1 microsecond to cover a distance of 300 metres. The speed of light in copper is around 2*10^8 m/s, so it takes 10 nanoseconds to travel a 2-metre wire. (If the propagation delay is smaller than the transmission delay, the first bit of the packet reaches the other end before the sender finishes putting all the bits on the wire, so the limiting factor is really how fast the link is. On the other hand, if the propagation delay exceeds the transmission delay, as is the case for long-distance links, the first bit reaches the other end well after the last bit has been sent.)
  (3) Next, once the packet arrives at the other end point, it must be processed by the switch or router. This processing delay could involve looking up routing tables, computing header checksums, etc. Again, this is usually not a significant component with today's high-speed hardware.
  (4) Once the other end point processes the packet and decides which link to send it on, the packet may have to be queued until that link becomes free. This delay is called the queueing delay. It is the most unpredictable part of the delay, as it depends on the traffic sent by other nodes. Note that queueing can happen at the input port or the output port, depending on the design of the switch/router. A large branch of study ("queueing theory") is devoted to modeling and understanding this delay under various conditions. Internet traffic is often bursty, and hence queueing delays occur even if the aggregate traffic is, on average, less than the capacity of the links. That is, suppose incoming packets arrive at an aggregate rate of L bits/s and the link rate is R bits/s; then as long as L < R, the queue does not grow without bound on average, but bursts of arrivals can still build up temporary queues and cause delay.
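As a quick check on the first two components, here is a small sketch using the symbols defined above (B, R, d, s); the numbers are the ones worked out in the text:

```python
def transmission_delay(packet_bits, link_bps):
    """Time to put all B bits of a packet onto an R bits/s link: B/R seconds."""
    return packet_bits / link_bps

def propagation_delay(distance_m, speed_mps):
    """Time for a signal to travel distance d at speed s: d/s seconds."""
    return distance_m / speed_mps

print(propagation_delay(300, 3e8))        # 1e-06 s: radio wave over 300 m of free space
print(propagation_delay(2, 2e8))          # 1e-08 s: 2 m copper wire
print(transmission_delay(1000 * 8, 1e6))  # 0.008 s: 1 KB packet on a 1 Mbps link
```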
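To see why burstiness matters for the queueing component, here is a toy single-queue simulation; the slot size and burst pattern are made-up illustrative values, not a queueing-theory result. The average arrival rate is 500 bits/slot, half the link rate, yet a backlog builds up during every burst:

```python
R = 1000       # the link drains up to 1000 bits per time slot
queue = 0      # bits currently waiting in the queue

for slot in range(20):
    # A 5000-bit burst arrives every 10th slot, so the average
    # arrival rate L = 500 bits/slot is well below R = 1000.
    arrivals = 5000 if slot % 10 == 0 else 0
    queue += arrivals
    queue -= min(queue, R)   # send at most R bits this slot
    print(f"slot {slot:2d}: backlog = {queue:4d} bits")
```

Even though the link is only 50% loaded on average, bits arriving at the start of a burst wait several slots before being sent: that waiting time is exactly the queueing delay.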