Exercise 2 -- Network Interconnections
Due: Thursday, March 9, 2000 (in class)
Suppose you had a token ring network and a bus network using the CSMA/CD protocol organized as shown in the diagrams below. The most obvious way one of these networks might get "damaged" is to have one of the wires accidentally cut. So, assume that in each network something accidentally damages the wire between machines "C" and "D," severing the connection.
In a real network, an accidental cable break might cause electrical damage to the machines connected to the broken cable or other secondary problems. For this question, however, assume that the damage done is so clean that the only thing interrupted is the ability to send data along the wire between C and D. Everything else works as well as it possibly could after the break.
On the other hand, in real implementations of these networks, the machines follow special instructions when they detect connection problems in an attempt to recover from any damage. For this problem, I want you to instead assume the machines act as if nothing special has happened.
With these assumptions:
A better known example of an odd unit of measure for distance is the "light year." Since when is a "year" a unit of distance?
When measuring networks, a very good unit of measure might be called the "light bit." The idea is to choose the distance that a signal traveling through the network traverses in the time spent sending one bit's worth of data as the basic unit of distance. Since electrical signals and light in fiber travel at just about the speed of light, this is basically the distance light travels in the time it takes to send one bit.
The actual length of this unit depends, of course, on the data transmission rate of the network considered. For example, when data is sent through a 10 Megabit per second Ethernet, the time required to send a bit is 1/10,000,000 of a second. The speed of light is about 3 x 10^8 meters/sec. So, the length of a "bit" sent through an Ethernet is:
(3 x 10^8)/(10^7) = 30 meters
I'd like you to do some measuring in these units.
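The "light bit" calculation above generalizes to any data rate. As a quick sketch (the function name and the rounded speed of light are my own choices, not part of the assignment):

```python
SPEED_OF_LIGHT = 3e8  # meters per second, approximate


def light_bit_length(bits_per_second):
    """Distance a signal travels during the time it takes to send one bit."""
    return SPEED_OF_LIGHT / bits_per_second


# 10 Mbps Ethernet: (3 x 10^8) / (10^7) = 30 meters per bit
print(light_bit_length(10_000_000))   # 30.0
# 100 Mbps ring: one bit stretches over only 3 meters of cable
print(light_bit_length(100_000_000))  # 3.0
```

Note that a faster network has a *shorter* light bit, so more bits fit on the wire at once.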
When a network tries to deliver a packet, the efficiency of the process can be measured by dividing the time spent actually sending the bits of a packet by the total time from when the system starts trying to send the packet to the point when the last bit is sent. If no time is wasted, the efficiency will be 1. If something delays the beginning of the transmission of the packet, the efficiency will be:

time-spent-sending/(length-of-delay + time-spent-sending)

Obviously, if the delay is greater than 0, the efficiency will be less than 1.
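The efficiency formula is simple enough to express directly. A minimal sketch (the function name is mine; both arguments must be in the same time units):

```python
def efficiency(delay, send_time):
    """Fraction of total time spent actually transmitting bits.

    delay     -- time wasted before transmission begins
    send_time -- time spent sending the packet's bits
    """
    return send_time / (delay + send_time)


print(efficiency(0, 10))   # no delay: efficiency is 1.0
print(efficiency(10, 10))  # delay equal to send time: efficiency is 0.5
```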
In Ethernet, the main cause of inefficient transmission is collisions. If a station encounters one or more collisions before it manages to send a packet successfully, the efficiency will be less than one.
Suppose that on some Ethernet, two computers, A and B, both have a packet to send. Assume that A starts sending before B, but not long enough before B to avoid a collision. Also assume that after detecting the collision, A attempts to send its packet again as soon as it detects that the network is again idle, and that nothing collides with this second attempt, allowing A to deliver its packet successfully.
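In this scenario, the delay in the efficiency formula is the time A wastes on the failed first attempt before its successful retransmission begins. Measuring everything in bit times keeps the arithmetic clean. A hedged sketch (the 100-bit-time overhead below is purely an illustrative number, not a value given in the assignment):

```python
def collision_efficiency(packet_bits, wasted_bit_times):
    """Efficiency of a send that loses some bit times to a collision.

    packet_bits      -- length of the packet, in bits (one bit = one bit time)
    wasted_bit_times -- bit times lost before the successful attempt begins
    """
    return packet_bits / (wasted_bit_times + packet_bits)


# Illustrative: a 1000-bit packet whose first attempt wastes 100 bit times
print(collision_efficiency(1000, 100))  # about 0.909
```

The real numbers for the assignment depend on how far apart A and B are and when B begins sending, since those determine how long A transmits before detecting the collision.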
The basic cause of inefficiency on a token ring is the time spent waiting for the token to arrive. So, assume that just one station, A, on a token ring has a packet to send. Assume that the total circumference of the ring is 1 kilometer, that the data rate of the ring is 100 million bits per second, and that the packet A wants to send is 1000 bits long.
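As a rough sketch of the arithmetic involved, the worst case is that the token has just passed A and must travel the full circumference of the ring before A can send. This sketch assumes signals travel at the speed of light and ignores any per-station repeater delays, which a real answer would need to consider:

```python
SPEED_OF_LIGHT = 3e8  # meters per second, approximate


def token_wait(circumference_m, fraction=1.0):
    """Time for the token to travel some fraction of the ring, in seconds."""
    return fraction * circumference_m / SPEED_OF_LIGHT


def ring_efficiency(packet_bits, rate_bps, wait_s):
    """Efficiency given a token wait: send time over total time."""
    send_time = packet_bits / rate_bps
    return send_time / (wait_s + send_time)


wait = token_wait(1000)  # full 1 km ring: about 3.33 microseconds
# Sending 1000 bits at 100 Mbps takes 10 microseconds
print(ring_efficiency(1000, 100e6, wait))  # 0.75 in this worst case
```

In the best case the token is already at A, the wait is 0, and the efficiency is 1; the average case falls in between.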
If you have questions, you are encouraged to ask them through the discussion area for this homework assignment.