Common language is the pillar of effective communication. When a shared language leads to a set of shared behaviours, a protocol is born. The New Oxford American Dictionary defines a protocol as
“The official procedure or system of rules governing affairs of state or diplomatic occasions”.
At a higher level, a collection of related protocols is called a protocol suite. Today's Internet is based on the TCP/IP suite, which draws its origins from the ARPANET reference model (ARM), developed as part of a DARPA project. The core objective of the ARM was to design an architecture that interconnects small, locally administered packet-switched computer networks into one large super-network. As with any protocol, the design guidelines stem from specifications and requirements that connect architecture with implementation, and the TCP/IP suite is no exception. The design, covering both the protocols and their physical realisation, comes down to deciding which features are needed and where in the network they should logically be implemented. This is exactly where the TCP/IP suite differs from many other early works, such as Xerox's XNS and IBM's SNA. In this article, I try to shed light on the design philosophy that shaped today's Internet, whose influence on modern life is indisputable. In other words: how did the Internet end up with the architecture we witness today?
First Fundamental Goal
The first fundamental goal of the DARPA Internet architecture was to devise an effective technique for the multiplexed utilisation of existing interconnected networks, yielding a larger network service. In other words, the top-level goal was to integrate separately administered entities into one common utility. The network concept adopted for this purpose was the packet-switched network. This was because, at the time, many existing networks were packet switched, and many applications, such as remote login, were well served by packet switching. The internetworking technique on which consensus was reached was the use of gateways that store and forward packets between networks. This method was chosen because the principle of store-and-forward switching was already well understood from the earlier ARPANET project. In this manner, the first fundamental goal was established.
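The store-and-forward idea can be made concrete with a toy sketch: a gateway buffers each packet in full before handing it to the next network. All names here (`Gateway`, `net_a`, `net_b`) are illustrative, not from any real system.

```python
from collections import deque

class Gateway:
    """Toy store-and-forward gateway sitting between networks."""

    def __init__(self):
        self.buffer = deque()  # packets fully received, awaiting forwarding

    def receive(self, packet, dst_network):
        # Store: the entire packet is buffered before any forwarding happens.
        self.buffer.append((packet, dst_network))

    def forward(self, networks):
        # Forward: drain the buffer, handing each packet to its next network.
        while self.buffer:
            packet, dst = self.buffer.popleft()
            networks[dst].append(packet)  # "transmit" onto the next network

# Two attached networks, modelled simply as lists of delivered packets.
nets = {"net_a": [], "net_b": []}
gw = Gateway()
gw.receive(b"hello", "net_b")
gw.receive(b"world", "net_b")
gw.forward(nets)
```

The key property the sketch captures is that the gateway never needs a pre-established circuit: it only needs buffer space and a next-hop decision per packet.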
Second Level Goals
A more detailed set of goals specifies the exact features that have profound implications for the Internet architecture. These goals, called the second-level goals, are:
- Internet communication must continue despite the loss of networks or gateways
- The Internet must support multiple types of communication services
- The Internet architecture must accommodate a wide variety of networks
- The Internet architecture must be cost effective
- Attaching a host must require a low level of effort
- The resources used in the Internet architecture must be accountable
These goals, in the given order of priority, specify a set of rules by which different designs are adopted in the Internet architecture. Even reordering these goals could change the adopted architecture dramatically. For example, the ordering given above is well suited to military purposes; had the architecture been meant for commercial deployment, the priorities would start from the bottom of the list.
We start with the first goal, which says that the network should not shut down when a network or gateway fails. This goal shaped the Internet architecture dramatically. Up to the 1960s, the dominant network concept was circuit switching, which served as the basis for telephone networks. Although in a circuit-switched network the performance of a connection, in terms of latency and transmission speed, is reliable and predictable, two main problems exist. First, circuit switching does not utilise network resources efficiently. This is due to the time-division multiplexing (TDM) used in these networks: resources such as the transmission medium are allocated to a specific user for a given period of time even when that user is idle. Second, it is not resilient to failure: if the allocated circuit is disrupted, the transmission is torn down, which violates the requirement at the top of the list. A packet-switched network, by contrast, is both robust and efficient. In a packet-switched network, information is transported as separate packets, each of which can potentially take a different route toward the destination. This brings robustness: if one path from source to destination fails, communication can continue (as long as some path remains). It is efficient because the underlying transmission medium is not allocated exclusively to one user at the expense of others. This is made possible by statistical multiplexing, which mixes traffic from different sources based on their traffic patterns or arrival statistics. However, it was still unclear how to achieve full robustness: the communication state that maintains an Internet connection should be distributed, and this distribution should be implemented in a cost-efficient manner.
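The efficiency gap between the two multiplexing schemes can be illustrated with a small simulation. This is a deliberately simplified model (no queueing, made-up parameters): each of `N_USERS` users has a packet to send in a given slot with probability `P_ACTIVE`.

```python
import random

random.seed(0)
N_USERS, SLOTS, P_ACTIVE = 8, 10_000, 0.2  # assumed toy parameters

# Each user independently has a packet to send in a slot with prob. P_ACTIVE.
traffic = [[random.random() < P_ACTIVE for _ in range(SLOTS)]
           for _ in range(N_USERS)]

# TDM-style allocation: slot t is reserved for user t % N_USERS,
# so the slot is wasted whenever that particular user is idle.
tdm_used = sum(traffic[t % N_USERS][t] for t in range(SLOTS))

# Statistical multiplexing: slot t is used if *any* user has a packet
# (a simplification: instantaneous demand, no queueing dynamics).
stat_used = sum(any(traffic[u][t] for u in range(N_USERS)) for t in range(SLOTS))

print(f"TDM utilisation:         {tdm_used / SLOTS:.2f}")
print(f"Statistical utilisation: {stat_used / SLOTS:.2f}")
```

With these parameters the reserved-slot scheme utilises the link at roughly the per-user activity rate, while statistical multiplexing fills most slots, since it only needs *someone* to have traffic.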
The design principle that best satisfies distributed state is to keep the state of a connection in the hosts rather than inside the network. This approach to reliability is called fate sharing: the state of a communication shares the fate of the endpoints themselves, and is not held in any intermediate node such as a switch or gateway. This is the core principle behind a class of networks known as stateless, or datagram, networks. It is exactly on this philosophy that virtual-circuit networks such as X.25 were judged unsuitable as a foundation for the Internet. In a connectionless network, everything needed to handle a packet is embedded in the packet itself, and connection establishment and maintenance are handled entirely by the hosts.
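The stateless idea can be sketched as follows: every datagram carries enough information to be forwarded on its own, and the forwarder consults only a routing table (knowledge of topology), never per-conversation state. Field and host names here are illustrative, not real IP fields.

```python
# Each datagram carries everything a forwarder needs; no per-connection
# state is kept inside the network.
def make_datagram(src, dst, seq, payload):
    return {"src": src, "dst": dst, "seq": seq, "payload": payload}

# A stateless forwarder: its only knowledge is a routing table, which
# describes topology, not individual conversations. (Toy topology.)
ROUTES = {"host_b": "link_1", "host_c": "link_2"}

def forward(datagram):
    # The forwarding decision is a pure function of the packet itself
    # plus the routing table -- no connection setup is ever required.
    return ROUTES[datagram["dst"]]

pkt = make_datagram("host_a", "host_b", seq=7, payload=b"data")
next_hop = forward(pkt)
```

Because `forward` holds no memory of past packets, a gateway that crashes and restarts loses nothing a host cannot recover from, which is the essence of fate sharing.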
Different Types of Services
The second most important goal in the list is to support different applications with different requirements, expressed in terms of latency, bandwidth, and reliability. Unfortunately, there is a serious hurdle in achieving this goal. In designing large complex systems, such as operating systems and protocol suites, one of the most important questions is where a particular feature or function should be placed. The answer that most influenced the design of the TCP/IP suite is the end-to-end argument. Its core statement is that a function in question can be completely and correctly implemented only with the knowledge and help of the applications standing at the ends of the communication system; providing the feature inside the communication system itself is therefore incomplete and often redundant. In short, this principle tells us that service features such as error correction, acknowledgement, and in-order delivery should not be implemented in the lower layers of the communication system. The TCP/IP suite thus aims at a dumb network with smart hosts attached to it. Using fate sharing and the end-to-end argument, the TCP/IP suite provides the facility to run connection-oriented services, such as TCP, over a connectionless datagram layer. It is noteworthy that at the beginning of the protocol design, TCP and IP were treated as a single layer. However, the need for services that TCP could not serve well led to the separation of TCP and IP. One such service is the real-time delivery of digitised speech, which imposes strict transmission-delay requirements, while one of the main sources of delay in the network is the retransmission mechanism integral to TCP's reliable in-order delivery.
Based on this fact, separating TCP and IP into two independent layers provides an infrastructure, the datagram, that supports different service types, each with its own application requirements. Despite a common misunderstanding that treats the datagram as a service in itself, it is just a building block for packet transmission.
Varieties of Networks
One of the main factors driving the success of the Internet is its ability to run over almost any network technology: long-haul networks (such as X.25 networks), local area networks, and satellite networks, among others. This success is mainly due to the fact that the protocol design makes minimal assumptions about how packets are delivered. The basic assumption is the delivery of packets of reasonable size, with some (not necessarily perfect) degree of reliability, via an addressing mechanism. By contrast, assumptions such as in-order delivery, network-level broadcast or multicast, traffic prioritisation, and internal knowledge of speed or delay are explicitly not made; these can be provided by higher layers or by the end-point applications. This is a real strong point of the Internet architecture; without it, engineering the architecture separately for every kind of underlying network would have been a must.
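The minimal contract can be written down as an interface. The sketch below is an assumption-laden illustration (the class and method names are invented, not from any real API); what matters is how little the interface promises, and how much it deliberately leaves out.

```python
from abc import ABC, abstractmethod

class BestEffortNetwork(ABC):
    """The only contract the architecture assumes of an underlying network:
    addressed packets of reasonable size, delivered with *some* reliability.
    Absent on purpose: in-order delivery, broadcast/multicast, priorities,
    and any visibility into internal speed or delay."""

    MAX_PACKET_SIZE = 576  # a 'reasonable size' floor, echoing early IP practice

    @abstractmethod
    def send(self, address, packet: bytes) -> None:
        """Best-effort: may silently lose, reorder, or duplicate packets."""

# Any technology that can meet this contract can carry Internet traffic.
class Loopback(BestEffortNetwork):
    def __init__(self):
        self.delivered = []

    def send(self, address, packet):
        self.delivered.append((address, packet))

net = Loopback()
net.send("host_b", b"ping")
```

A satellite link, an X.25 network, and an Ethernet all plug in the same way: each supplies its own `send`, and everything the contract omits is recovered, if needed, end to end.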
The three goals discussed so far have had a profound effect on shaping today's Internet architecture. The remaining goals played a minor role and, in some cases, were never engineered completely, such as the distributed management of the Internet. Furthermore, there are some sources of inefficiency in the protocol implementation, such as high overhead and retransmission in the case of packet loss, which need reconsideration in application design.
In this article, we have looked at some of the main factors that drove the design of the TCP/IP protocol suite. The datagram is the main architectural feature of the Internet, providing basic packet transmission with minimal reliability. Using the datagram as a foundation frees the Internet from constraining itself to a fixed set of services, and leaves all of that to the applications.
Clark, David. "The Design Philosophy of the DARPA Internet Protocols." Symposium Proceedings on Communications Architectures and Protocols (SIGCOMM '88), 1988.