Function of Each OSI Model Layer
We have described the layers in this chapter using several metaphors. What are these layers actually responsible for? The following sections provide a brief summary of what each layer does. We will start with the physical layer and gradually move up to the application layer. In later chapters, we will describe the functions of each layer at length.
The Physical Layer
As mentioned earlier in the example, the physical layer's job is that of a transporter: to carry the bits from one end to the other. It has to use the communication medium available, i.e., a wired or a wireless connection, to transfer the bits to the other end. It is interesting to note that there is more than one mechanism to transfer the bits from one end to the other using the same medium (Figure 1.4). The study of the physical layer describes different ways of transferring bits from one end to another and their pros and cons.
In the next chapter, we will see two basic ways to transfer bits: analog signaling and digital signaling. We will also see that a fiber optic cable uses light to send and receive bits. We will also notice that the medium can be a copper wire, a fiber optic cable, or even vacuum. The layer has to decide what will represent a 1 and what will represent a 0, what is to be done when multiple senders and recipients are in the fray, and so on.
When there are multiple senders and receivers, one way to send a message is to broadcast it to the entire network. The intended receiver accepts the message and the rest reject it. The other way is to find the exact recipient and send the message along a direct path. The first approach does not require much effort on the sender's part, but the second one requires some knowledge of where the receiver is located in the network. We will look at both these approaches when we study the physical layer later in this chapter.
Thus the physical layer handles the transfer of bits from the sender to the receiver: it converts the bits to voltages or light pulses, and at the other end, it converts the incoming voltage values or light pulses back into bits. The physical layer also determines whether the message is to be broadcast or sent to a single node.
The Data Link Layer
The data link layer's job is to send the bits using the physical layer. Additionally, it provides quality control by ensuring that the bits sent and the bits received remain identical. This is important, as there is every chance of the data getting corrupted in transit. The data link layer provides ways for the sender and the receiver to recognize erroneous or unintended data. To ensure error-free transmission, the data link layer adds additional bits to the data using some algorithm. These additional bits are calculated from the data itself. The calculation procedure is designed such that the additional bits are different for different data. The same algorithm is applied to the data at the other end and the results are compared. If the results match, the data is accepted; otherwise it is rejected. This is known as the error-detection mechanism. In some cases, the bits added are designed not only to detect errors but also to correct them. In this case, the algorithm is designed such that the pattern of change in the bits due to an error is predictable by looking at the additional bits. This is known as the error-correction process.
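The compute-compare-reject idea above can be sketched in a few lines. This is a minimal illustration, not taken from the text: it uses a single XOR byte as the "additional bits", whereas real data link layers use much stronger codes such as CRC-32.

```python
def checksum(data: bytes) -> int:
    """Compute the additional bits from the data itself.
    Here: a simple 8-bit XOR over all bytes (illustrative only)."""
    c = 0
    for byte in data:
        c ^= byte
    return c

def verify(data: bytes, received_check: int) -> bool:
    """Receiver applies the same algorithm and compares the results."""
    return checksum(data) == received_check

payload = b"hello"
check = checksum(payload)          # sender computes and attaches this
assert verify(payload, check)      # results match: data accepted
corrupted = b"hellp"               # one byte changed in transit
assert not verify(corrupted, check)  # results differ: data rejected
```

A single XOR byte can detect any one-bit error but misses many multi-bit errors, which is exactly why real links use richer codes; the structure of the mechanism, however, is the same.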
To perform these actions, the data link layer encapsulates the data into the body of a unit called a frame. For identification, the frame carries the sender's and receiver's addresses. (This is the same as the addresses written on the box!) The data link layer also puts a mark on the frame to detect errors, if any. The data link layer may also ask for confirmation of delivery from the other end. This is done by having the receiver send an acknowledgement as soon as the data is received. When the sender receives the acknowledgement, it knows that the frame has reached the other end.
It is also possible that the sender is sending frames at a faster rate than the receiver can handle. To solve this problem, a mechanism called flow control is employed. It usually involves a message from the receiver asking the sender to slow down. Thus the data link layer has to deal with error handling, framing, acknowledgements, and flow control (Figure 1.5).
The Network Layer
The network layer's job is to look at the actual destination address and decide the intermediate router through which the packet can be delivered. Figures 1.6 to 1.9 show the network layer's processes.
The process of finding the next immediate router for a final destination is actually two processes in one. First of all, the network layer must be aware of the locations of the different routers; only then can it decide a path to the final destination. To learn who is where in the network, the network layer deploys various methods, collectively known as routing algorithms. The outcome of a routing algorithm is usually a table known as the routing table. Once this table is in place, the network layer can decide a route for every incoming packet. This part is known as forwarding. Every network layer must do both routing and forwarding.
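The routing/forwarding split can be made concrete with a toy sketch. Assume the routing algorithm has already produced the table below (the entries and next-hop names are hypothetical); forwarding is then just a per-packet lookup, here using the longest matching prefix, as IP routers do:

```python
import ipaddress

# Output of some routing algorithm: destination prefix -> next hop.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",   # more specific route
    ipaddress.ip_network("0.0.0.0/0"): "default-gw",   # catch-all
}

def forward(dst: str) -> str:
    """Forwarding: for one packet, pick the next hop whose prefix
    matches the destination most specifically."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

assert forward("10.1.2.3") == "router-B"    # /16 wins over /8
assert forward("10.9.9.9") == "router-A"
assert forward("8.8.8.8") == "default-gw"
```

Routing (building the table) runs comparatively rarely; forwarding (the lookup) runs for every single packet, which is why routers optimize it so heavily.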
There are two different forwarding schemes. A popular scheme, used in the Internet, is to choose an independent route for every packet belonging to the same connection. Here every packet of the same connection is treated afresh, and the routing decision for that packet may differ from that of its predecessors or successors. This mechanism is essential for connectionless transfer, as the path is not predetermined and the packet can travel on virtually any valid path to reach the destination. The other mechanism is to decide a route for the connection that remains unchanged throughout its lifetime. Every packet belonging to the connection travels along this path only. This method is essential for connection-oriented transfer, as the connection establishment process decides the path of communication as well. We have already looked at the difference between connection-oriented and connectionless transfer and learned that the Internet follows the connectionless mechanism.
Routing invariably includes a process to handle congestion, a situation in which lines and routers get choked with more packets than they can handle. Consider driving through your city: to avoid congested roads, we try to choose a path with less traffic. Similarly, it is important for the layer that decides the route to avoid congested paths that can delay the communication.
One more issue for the network layer is handling the differences between different networks. There is an interesting case in which nodes of one type of network are connected by an internetwork of some other type. Here the networks of the first type are far apart from each other and are connected only by networks of the other type. Think of traffic belonging to one particular network flowing through some other network. How can such traffic be handled? The answer is a technique called tunneling, an important function of the network layer. Let us understand tunneling with an example.
An example of a connection-oriented network is ATM, or Asynchronous Transfer Mode. It was considered a very promising mechanism some time ago, and it is still in use in some places in telephone companies. In ATM, the connection is established beforehand and released at the end (a connection-oriented network layer!). Assume the sender and the receiver belong to two different ATM networks and the intermediate network is TCP/IP. In such cases, we need answers to questions such as:
• How can the network layer of TCP/IP accept the ATM cell (the network layer data unit is known as a cell in ATM) and deliver it at the other end?
• How is connection orientation managed?
• More importantly, how is addressing managed?
TCP/IP uses 32-bit addresses to identify a source or destination, while ATM uses a much smaller address. The problem is solved using tunneling (Figure 1.10). In this process, the ATM cells (it can even be a bunch of cells) are embedded inside an IP datagram and carried to the other end, where the reverse process extracts the original ATM cells and passes them over to the receiving ATM network. Thus most of the questions asked above do not need to be answered, as no conversion takes place!
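The essence of tunneling is that the foreign data is carried as an opaque payload and comes out unchanged. A minimal sketch, with a plain dictionary standing in for an IP datagram and made-up addresses:

```python
def encapsulate(atm_cell: bytes, src_ip: str, dst_ip: str) -> dict:
    """Tunnel entry point: wrap the ATM cell, unmodified, inside the
    payload of an IP datagram. The intermediate TCP/IP network only
    ever sees an ordinary IP datagram between two IP addresses."""
    return {"src": src_ip, "dst": dst_ip, "payload": atm_cell}

def decapsulate(datagram: dict) -> bytes:
    """Tunnel exit point: extract the original cell and hand it to
    the receiving ATM network, untouched."""
    return datagram["payload"]

cell = b"\x01\x02" + b"X" * 48   # a stand-in for an ATM cell's bytes
dgram = encapsulate(cell, "192.0.2.1", "192.0.2.99")
assert decapsulate(dgram) == cell   # no address or format conversion occurred
```

Because the cell is never interpreted by the IP network, the awkward questions about translating ATM addressing and connection state into IP terms simply never arise.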
A network layer that has to route between wired and wireless networks faces similar issues. The network layer's main job is to handle routing and forwarding. Additionally, it manages connections for connection-oriented transfer, and it finds solutions to problems like congestion and to a variety of traffic issues by using tunneling.
The Transport Layer
The transport layer is the secretary of the application layer. In most cases, it lends a sense of responsibility to the actual communication by deploying various techniques. The transport layer's job includes communicating directly with the transport layer at the other end. We have already seen that the secretary in our example sends a letter and receives an acknowledgement to confirm to the manager that the job is done.
Suppose we are using an application like Telnet (technically, the Telnet client). When we type ls (a command to list all the files on a remote Telnet server running Linux or UNIX), Telnet passes that command to the TCP process running on our machine. The TCP process running on our machine establishes a connection, sends the message ls to the TCP at the other end, and gets the acknowledgement back. When the Telnet server sends a list of files to us over the TCP connection, our TCP process sends an acknowledgement back and passes the content (the list of files) to our Telnet client so that we can see the list on our screen.
For data transmission, a connection-oriented service is usually preferred, but in the case of real-time transmission, particularly live audio or video, it is essential to have a connectionless service. Here the transport layer is not supposed to provide a connection-oriented, reliable service to the application layer. Why? We will find the answer in the following paragraphs.
The transport layer ensures reliability by employing a simple technique of timing out and retransmission. Let us try to understand the same using an example.
Suppose A is sending a file to B. The file contains five paragraphs, numbered 1 to 5. Suppose A starts sending the file at time t. Thus the TCP process running there receives the file at time t from our application. Suppose at the same time, the TCP process of our machine sends the first paragraph to the other end. After a time delay of Δt, the second paragraph is sent, and so on. TCP, after sending each paragraph, starts a timer. The value of the timer indicates the time by which the acknowledgement of the paragraph is expected. The timer value is decided by TCP using a well-designed algorithm, and it is usually a good enough estimate of the round-trip time to the specific destination.
Suppose the round-trip time is estimated as x. The timer value is then (usually) set to t + 2x for the first packet, t + Δt + 2x for the second packet, and so on. The timer value is more than the actual round-trip time (here we have taken it to be exactly double the estimated round-trip time) to avoid unnecessary retransmission in case of negligible delays. If the acknowledgement does not come back by that time, the timer is said to expire and the TCP process retransmits that paragraph (without consulting the sending application, FTP in our case).
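The timeout-and-retransmit bookkeeping above can be sketched as follows. This is an illustrative model, not real TCP: the ack delays are given as plain numbers, and a missing or late acknowledgement simply marks the paragraph for retransmission.

```python
def paragraphs_to_retransmit(paragraphs, rtt_estimate, ack_delay):
    """ack_delay maps paragraph number -> seconds until its ack
    arrives, or None if the ack never arrives. The timeout is set
    to twice the round-trip estimate, as in the text."""
    timeout = 2 * rtt_estimate
    retransmit = []
    for num in paragraphs:
        delay = ack_delay.get(num)
        if delay is None or delay > timeout:
            # Timer expired: resend, without consulting the application.
            retransmit.append(num)
    return retransmit

# Round-trip estimated at 0.5 s, so each timer fires after 1.0 s.
acks = {1: 0.4, 2: 0.5, 3: None, 4: 0.6, 5: 1.4}  # 3 lost; 5's ack too late
assert paragraphs_to_retransmit([1, 2, 3, 4, 5], 0.5, acks) == [3, 5]
```

Note that paragraph 5 is retransmitted even though its acknowledgement did eventually arrive: the sender cannot tell a lost paragraph from a delayed one, which is exactly the ambiguity the next paragraph discusses.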
There can be various reasons for not receiving the acknowledgement in time. It is possible that the paragraph got lost in transit and the TCP process at the other end never received it at all; obviously there would then be no acknowledgement. Another possible reason is that the process received the paragraph and sent an acknowledgement as well, but the acknowledgement was lost in transit. The third possibility is that neither the paragraph nor the acknowledgement was lost: either the paragraph took more time to reach the receiver, or the acknowledgement took more time to reach us, perhaps due to accidental congestion in the network. Our timer, without really knowing any of this, has timed out waiting for the acknowledgement.
Now assume that the 3rd paragraph is lost. It is retransmitted after the 5th paragraph, as its timer expires just when the 5th paragraph is transmitted. Now we are sending the retransmitted 3rd paragraph and the receiver is receiving it. If the receiver presents the paragraphs in the order in which they arrived, the user will get the 4th paragraph after the 2nd one, and similarly the 3rd paragraph will be presented after the 5th one. This might create chaos. To avoid this situation, the receiver should first collect all the paragraphs, arrange them in order, and then give them out to the user. This is the usual mechanism for file transfer (FTP), remote login (Telnet), or even Internet browsing (HTTP) to provide reliability in data transfer. Data transmission using TCP automatically ensures retransmission of any data that has not been delivered.
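The receiver-side reordering just described is a buffer-and-sort step. A minimal sketch (sequence numbers and paragraph contents are illustrative):

```python
def reassemble(received):
    """Collect (sequence number, data) pairs in arrival order,
    then deliver the data to the user in sequence order."""
    buffer = dict(received)
    return [buffer[seq] for seq in sorted(buffer)]

# Paragraph 3 was lost, retransmitted, and arrived after paragraph 5.
arrivals = [(1, "p1"), (2, "p2"), (4, "p4"), (5, "p5"), (3, "p3")]
assert reassemble(arrivals) == ["p1", "p2", "p3", "p4", "p5"]
```

The cost of this tidiness is delay: paragraph 4 cannot be handed to the user until the retransmitted paragraph 3 arrives, which is precisely why this scheme is unsuitable for live audio and video.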
The mechanism of resending missing data later does not work well for real-time audio or video. Each video frame must be displayed as soon as it arrives to make the movie look continuous. What happens if a frame or two is missed? Can we display them later? The best option is to skip them! Similarly, in an audio transfer, if a word or two is lost in transit, just keep listening to the words after that. A human viewer or listener has a great ability to fill in what is missing from the presentation. If, in a live video, a frame or a sequence of frames is missed for a short period, the user can always guess the missing content from the context. Therefore, it is fine if the lost data stays lost. If we retransmit and display a frame at a later time, it will create more confusion.
Let us take an example to reinforce the point. Suppose in a cartoon movie, Mickey Mouse is shown moving from left to right. There is a creature sitting in the middle of the screen. Frame by frame, Mickey comes closer to that creature. In one of the frames, Mickey kicks the creature, the next frame shows the creature in the sky, and the next sequence shows the creature falling on top of Mickey. Now assume the frames are sent one by one and the kicking frame is lost. The viewers will automatically assume that the creature must have been kicked. That is absolutely fine so far. If the kicking frame is retransmitted, however, we have a sequencing problem. If Mickey is shown kicking the creature after the creature falls down, it will leave the viewers wondering how Mickey came out from beneath the crashed creature and kicked it. So it is better to leave the lost frame lost. Now the question arises: if we do not need retransmission of lost data, then why have timers and count transmitted frames at all? If none of this is required, TCP becomes pure overhead.
For all such cases, UDP (User Datagram Protocol, an alternative to TCP as a transport layer protocol in the Internet) is preferred. UDP is a protocol that skips a few checks that TCP normally performs, and it is comparatively lightweight. Almost all real-time transmissions use UDP as their transport layer protocol.
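The lightweight nature of UDP is visible in code: no connection setup, no acknowledgements, no retransmission. A minimal sketch using Python's standard socket API over the loopback interface (the message content is arbitrary):

```python
import socket

# Receiver: bind a UDP socket; the OS picks a free port for us.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connection is established; the datagram is just sent.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame-42", ("127.0.0.1", port))

# Delivery here succeeds, but UDP itself made no such promise:
# had the datagram been lost, it would simply have stayed lost.
data, addr = receiver.recvfrom(1024)
assert data == b"frame-42"

sender.close()
receiver.close()
```

Compare this with TCP, where a three-way handshake, sequence numbers, and acknowledgements would all be running behind the same two lines of application code.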
Thus we have two different protocols at the transport layer: one that provides a connection-oriented service over a connectionless network layer (TCP over IP), and another that provides a connectionless service over a connectionless network layer (UDP over IP). These two protocols represent the two possible kinds of transport layer service. The application layer hosts the applications written by the user, so when two different transport services are provided, an application writer (a programmer) has two choices for the application. The TCP/IP model provides just that.
In the OSI model, it is the network layer where the choice is provided, which is an interesting case. In the OSI protocol stack, a network layer can be either connection-oriented or connectionless, so the data transfer can be done either way. This gives a choice to the service provider (ISP), who can provide either a connectionless data transfer (like IP) or a connection-oriented transfer (like ATM). Unfortunately, the choice in this case is not given directly to an application. The transport layer has a choice of selecting a network layer, but the application layer does not have a choice.
What is the advantage of providing two different types of network layer? Why were they provided? The ISP gets the advantage of choosing between two services: he can either pick the service he prefers or provide both. Is there any real advantage in having two different network layers? There does not seem to be any. Then why was it provided in the OSI model? The OSI model was influenced by telephone companies, and therefore it originally provided only connection-oriented transfer at the network layer, so that data transfer could be billed on the basis of connection time rather than the volume of data. As this mechanism failed and the connectionless mechanism of the TCP/IP model became successful, some of the committee members demanded a connectionless version of the network layer. It was provided as an addition to the original connection-oriented scheme.
The transport layer does one more important job. There can be multiple applications running on our machine, all of them taking the services of the same transport layer. (A single secretary shared by multiple managers!) In such cases, it is important to track the requirements of each application and provide the requisite services accordingly. This process is known as multiplexing. It is important to note that this issue arises in other layers as well, but the volume, the complexity, and how dynamically the numbers change are much greater in the transport layer than in the others.
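Transport layers typically tell applications apart by port number. The sketch below shows the receiving half (demultiplexing); the well-known port numbers are real, but the handler names and segment tuples are illustrative:

```python
# One transport entity serving several applications, keyed by port.
handlers = {21: "ftp-app", 23: "telnet-app", 80: "web-app"}

def demultiplex(segments):
    """Hand each incoming (destination port, payload) segment to the
    application bound to that port; unknown ports are discarded."""
    delivered = {}
    for dst_port, payload in segments:
        app = handlers.get(dst_port, "discard")
        delivered.setdefault(app, []).append(payload)
    return delivered

incoming = [(80, "GET /"), (23, "ls"), (80, "GET /img"), (9999, "???")]
out = demultiplex(incoming)
assert out["web-app"] == ["GET /", "GET /img"]   # both web segments together
assert out["telnet-app"] == ["ls"]
assert out["discard"] == ["???"]                  # nobody bound to port 9999
```

On the sending side, multiplexing is the mirror image: each application's data is stamped with its source port so the replies can be demultiplexed back to it.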
The transport layer does almost everything a data link layer does. The only difference is that the data link layer connects immediate neighbours, while the transport layer manages the connection with a remote recipient. (The warehouse keeper sends the consignment to the next destination, while the secretary gets it to its ultimate destination.)
Thus the transport layer manages the connection with other transport layers, handles all the issues that are there in the data link layer, and also takes care of multiplexing (Figure 1.11).
The Application Layer
This is the layer all of us see and interact with. When we use Telnet to connect to a remote server, or FTP to upload or download a file, or HTTP to browse the Internet, the FTP client or the Telnet client starts interacting with the application layer, which is embedded in the application that we run (i.e., the FTP client program). The client runs at the application layer. The FTP program that we run is actually a client to a larger program called the FTP server. The server also runs at the application layer. Our FTP client talks to the FTP server at the other end to download and upload files. The interaction proceeds by utilizing the entire protocol stack, from the application layer down to the physical layer, that we have just discussed. The FTP client sends the command that we type (ls, cd, put, get, etc.) to the application layer. The application layer decides which server should reply to this request. The application layer then asks either TCP or UDP (or some other transport protocol, for that matter) to carry the request to that server. In the case of an FTP client, we may be looking at some FTP server on some machine using TCP. Once our FTP client decides to have a connection with some FTP server, our TCP process establishes a connection with the TCP process running on that server. After that, the TCP process on the receiving machine connects to its FTP server. Thus the application layer connects an application on one machine with another application on a different machine. The application layer is important for the following reasons:
• It is the layer the user application interacts with. It must be equipped with a good user interface. When we run telnet or FTP or HTTP (the browser), we usually have such an interface. Other layers can concentrate more on functioning rather than the interface.
• It is the layer that provides service to the user. Users may want the service in various forms. An API, or application programming interface, is a popular mechanism: it gives the user a programmatic interface to interact with the application, which is very useful for programmers. If you would like to incorporate FTP or Telnet in your own application, or you want a routine for sending and receiving mail in your own program, then an API is the best option. Another popular interface is the GUI; almost all applications today provide one. There is also the CUI, or character user interface, which comes in handy in some cases, e.g., basic mobile phones with menus. Mobile UI is a complex problem to solve because it deals with relatively small screens.
• The application layer must provide open solutions to make sure all these interfaces can be laid on top of it. Such versatility is not required in interfaces for other layers.
The application layer must work differently with different applications. In fact, we need many more application layer protocols than protocols at any other layer, because we deal with many applications using the same transport, network, and other layers (Figure 1.12).