Wednesday, January 21, 2009

How Does The Internet Work?

The question then arises: how can every node on the Internet talk to every other node, when they all use different link-level protocols?

The answer is fairly simple: we need another protocol which controls how stuff flows through the network. The link-level protocol describes how to get from one node to another if they're connected directly: the `network protocol' tells us how to get from one point in the network to any other, going through other links if necessary.

For the Internet, the network protocol is the Internet Protocol (version 4), or `IP'. It's not the only protocol out there (Apple's AppleTalk, Novell's IPX, Digital's DECNet and Microsoft's NetBEUI being others) but it's the most widely adopted. There's a newer version of IP called IPv6, but it's still not common.

So to send a message from one side of the globe to another, your computer writes a bit of Internet Protocol, sends it to your modem, which uses some modem link-level protocol to send it to the modem it's dialed up to, which is probably plugged into a terminal server (basically a big box of modems), which sends it to a node inside the ISP's network, which sends it out usually to a bigger node, which sends it to the next node... and so on. A node which connects two or more networks is called a `router': it will have one interface for each network it joins. We call this array of protocols a `protocol stack':
[ Application: Handles Porn ]       [ Application Layer: Serves Porn ]
             ^                                     ^
             v                                     v
[ TCP: Handles Retransmission ]     [ TCP: Handles Retransmission ]
             ^                                     ^
             v                                     v
[ IP: Handles Routing ]             [ IP: Handles Routing ]
             ^                                     ^
             v                                     v
[ Link: Handles A Single Hop ]      [ Link: Handles A Single Hop ]
             |                                     |
             +-------------------------------------+


So in the diagram, we see Netscape (the Application on the top left) retrieving a web page from a web server (the Application on the top right). To do this it will use the `Transmission Control Protocol', or `TCP': over 90% of Internet traffic today is TCP, as it is used for the Web and email.

So Netscape makes the request for a TCP connection to the remote web server: this is handed to the TCP layer, which hands it to the IP layer, which figures out which direction it has to go in, hands it onto the appropriate link layer, which transmits it to the other end of the link.
At the other end, the link layer hands it up to the IP layer, which sees it is destined for this host (if not, it might hand it down to a different link layer to go out to the next node), hands it up to the TCP layer, which hands it to the server.
So we have the following breakdown:
The application (Netscape, or the web server at the other end) decides who it wants to talk to, and what it wants to send.
The TCP layer sends special packets to start the conversation with the other end, and then packs the data into a TCP `packet': a packet is just a term for a chunk of data which passes through a network. The TCP layer hands this packet to the IP layer: it then keeps sending it to the IP layer until the TCP layer at the other end replies to say that it has received it. This is called `retransmission', and has a whole heap of complex rules which control when to retransmit, how long to wait, etc. It also gives each packet a set of numbers, which mean that the other end can sort them into the right order.
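The numbering-and-retransmission behaviour just described can be sketched in a few lines. This is a toy simulation, not real TCP: the loss rate, the chunk sizes, and the instant acknowledgements are all invented for illustration.

```python
import random

def transmit(packets, loss_rate=0.3, seed=42):
    """Simulate TCP-style delivery: number each chunk, resend until acked."""
    rng = random.Random(seed)
    received = {}                          # seq -> data collected at the far end
    for seq, data in enumerate(packets):
        while seq not in received:         # keep retransmitting until acked
            if rng.random() > loss_rate:   # this copy survived the lossy link
                received[seq] = data       # far end stores it and (implicitly) acks
    # the receiver uses the sequence numbers to restore the original order
    return [received[seq] for seq in sorted(received)]

chunks = [b"GET /", b" HTTP", b"/1.0"]
assert b"".join(transmit(chunks)) == b"GET / HTTP/1.0"
```

Even with packets going missing, the sequence numbers let the far end sort everything back into the order it was sent.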
The IP layer looks at the destination of the packet, and figures out the next node to send the packet to. This simple act is called `routing', and ranges from really simple (if you only have one modem, and no other network interfaces, all packets should go out that interface) to extremely complex (if you have 15 major networks connected directly to you).
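A routing decision like this can be sketched with Python's standard ipaddress module. The table below is hypothetical: the networks and the interface names (eth0, vpn0, ppp0) are invented for the example, and a real router's table would be far larger.

```python
import ipaddress

# Hypothetical routing table: (destination network, outgoing interface).
routes = [
    (ipaddress.ip_network("192.168.1.0/24"), "eth0"),   # the local LAN
    (ipaddress.ip_network("10.0.0.0/8"),     "vpn0"),   # a private network
    (ipaddress.ip_network("0.0.0.0/0"),      "ppp0"),   # default: the modem
]

def route(dest: str) -> str:
    """Pick the most specific (longest-prefix) route that matches."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, iface) for net, iface in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

assert route("192.168.1.7") == "eth0"   # directly attached network
assert route("8.8.8.8") == "ppp0"       # everything else goes out the modem
```

The single-modem case from the text is just the last entry: a default route matching everything.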

Tuesday, January 20, 2009

The OSI Reference Model

The operation of a modern day network can be understood by dissecting it into seven layers. This seven layer model is known as the OSI Reference Model and describes how the vast majority of the digital networks on earth function. OSI is the acronym for Open Systems Interconnection, an effort formed by the International Organization for Standardization in 1982 with the goal of producing a standard reference model for the hardware and software connection of digital equipment. The important concept to realize about the OSI Reference Model is that it does not define a network standard, but rather provides guidelines for the creation of network standards. The model has proven so apt that almost all major network standards in use today map closely onto its seven layers. Though seven layers may at first appear to make a network seem overly complex, the seven layer OSI Model has proven over the past twenty years to be an efficient and effective way to understand this extremely complex subject.

THE PHYSICAL LAYER:

The first and foundational layer of a network is the Physical Layer. The Physical Layer is literally what its name implies: the physical infrastructure of a network. This includes the cabling or other transmission medium and the network interface hardware placed inside of computers and other devices which enables them to connect to the transmission medium. The purpose of the Physical Layer is to take binary information from higher layers, translate it into a transmission signal or frequency, transmit the information across the transmission medium, receive this information at the destination, and finally translate it back into binary before passing it up to higher layers. Transmission signals or frequencies vary between network standards and can be as simple as pulses of electricity over copper wiring or as complex as flickers of light on optical lines or amplified radio frequency transmissions. The information that enters and exits the Physical Layer must be bits: 0s and 1s in binary. The higher layers are responsible for providing the Physical Layer with binary information. Since nearly all information inside of a computer is already digital, this is not difficult to achieve. The Physical Layer does not examine the binary information, nor does it validate it or make changes to it. The Physical Layer is simply intended to transport the binary information between higher layers located at points A and B.
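The translation step the Physical Layer performs can be illustrated by flattening bytes into a bit string and back. This is only a sketch of the idea; real transceivers encode the bits as voltages, light, or radio, usually with additional line coding.

```python
def to_bits(data: bytes) -> str:
    """Flatten bytes into the 0s and 1s the Physical Layer actually carries."""
    return "".join(format(byte, "08b") for byte in data)

def from_bits(bits: str) -> bytes:
    """Reassemble the received bit stream into bytes for the higher layers."""
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

signal = to_bits(b"Hi")
assert signal == "0100100001101001"   # 'H' = 0x48, 'i' = 0x69
assert from_bits(signal) == b"Hi"
```

Note that the bits pass through unexamined and unchanged, exactly as the layer's job description requires.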

THE DATA LINK LAYER:
The second layer in the OSI Model is the Data Link Layer. The primary focus of the Data Link Layer is revealed in its common nickname, the Physical Address Layer. The only layer in the OSI Model that specifically addresses both hardware and software, the Data Link Layer receives information on its software side from higher layers, places this information inside of "frames", and finally gives the frame to the Physical Layer, Layer 1, for transmission as pure binary. A frame essentially takes the information passed down from a higher layer and surrounds it with Physical Addressing information. This information is important to the Data Link Layer on the receiving end of the transmission. When the frame, in binary form, arrives at the destination node, it is passed from the transmission medium to the Data Link Layer (Layer 2) by the Physical Layer (Layer 1). The Data Link Layer on the receiving node then checks the frame surrounding the information received to see if its Physical Address matches the node's own. If the Physical Address does not match, the frame and its encapsulated data are discarded. If the Physical Address is a match, then the information is removed from the frame and passed up to the next highest layer in the OSI Model. The Physical Address check is obviously not of much use if there are only two nodes on a network, but it suddenly becomes extremely valuable when three or more nodes exist. The Physical Addressing system allows multiple nodes to share the same network medium while retaining the ability to address a transmission to one specific node. On the simplest networks, all nodes receive every frame transmitted on the network, but discard frames not specifically addressed to them.
The Physical Address used in the Data Link Layer's Physical Addressing system is known as a MAC (Media Access Control) address and is embedded physically into the node's Network Interface Card during manufacturing. Every NIC's MAC address is unique in order to prevent addressing conflicts. It is this relationship that causes the Data Link Layer to be known as the only layer that addresses both hardware and software. This layer is where the information on the network makes the move from the physical infrastructure of the network into the software realm. The remaining layers of the OSI Reference Model are entirely software.
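The frame-and-filter behaviour described above can be sketched as follows. The MAC addresses and the frame layout here are simplified stand-ins: a real Ethernet frame also carries a type field and a frame check sequence.

```python
MY_MAC = "aa:bb:cc:dd:ee:01"   # hypothetical address burned into this node's NIC

def make_frame(dst_mac, src_mac, payload):
    """Wrap higher-layer data in a simple frame keyed by Physical Address."""
    return {"dst": dst_mac, "src": src_mac, "payload": payload}

def receive(frame):
    """Accept only frames addressed to this node; discard the rest."""
    if frame["dst"] != MY_MAC:
        return None                    # not for us: frame and data are dropped
    return frame["payload"]            # strip the frame, pass the data upward

frame = make_frame("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02", b"hello")
assert receive(frame) == b"hello"
assert receive(make_frame("ff:ff:ff:ff:ff:00", "aa:bb:cc:dd:ee:02", b"x")) is None
```

Every node on the shared medium runs the same check, which is how many nodes can share one wire yet hear only their own traffic.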

THE NETWORK LAYER:
OSI Layer 3 is known as the Network Layer. The first layer to deal entirely in software, the Network Layer's purpose is to direct network traffic to a destination node whose Physical Address is not known. This is achieved through a system known as Logical Addressing. Logical Addresses are software addresses assigned to a node at Layer 3 of the OSI Model. Since these addresses are defined by software, rather than being effectively random and permanent like Physical Addresses, Logical Addresses can be hierarchical. This is what makes extremely large networks possible. Up to this point, only small networks were practical, since all traffic was addressed to all nodes. That works fine until more than one node attempts to use the network at once, at which point a data "collision" occurs. While OSI Layer 4 protocols may attempt to compensate for collisions by retransmitting packets until they reach the destination node without issue, this degrades network performance dramatically as the number of nodes on a network grows: the larger the network, the greater the issue becomes. OSI Layer 3 takes on this problem with its Logical Addressing system and a concept known as routing.
The Oxford American Dictionary defines routing as "sending or directing along a specified course". Layer 3 routing on a network takes this foundational definition and puts it to use to enable millions of computers, rather than just a handful, to communicate at once without interference. This is achieved by having a smart device working at Layer 3 that handles network signals from each node directly, rather than nodes just blindly repeating packets at Layer 1 until they happen to reach their destination. Such a device is known as a network router. A network router sits in the center of a network, with all nodes having a direct link to it rather than being linked to each other. This strategic position allows the router to intercept and direct all traffic on the network. A routed network can be illustrated by a star formation, as shown in Diagram 1. On a routed network, Layer 3 packets are no longer broadcast to all nodes, but rather received by the router and passed on only to the appropriate node. This is a valuable concept because it allows for the collision-free transport of packets across a network.
As well as being linked directly to all nodes in a local network, a router can be linked directly to other routers. This allows groups of nodes separated by distance to communicate with each other in a practical way. It would not be practical to have nodes separated by a great distance all connect to a single router. The amount of cabling required would be immense, and depending on the number of nodes involved, the router may not possess the required number of physical connections. Placing a router at each group of nodes and running a single line from router to router, however, is quite practical. Routers can be chained in a line, or as shown in Diagram 2, can be connected by a central router. This concept is virtually infinitely scalable and is very efficient.
When a node starts a transmission, the OSI Layer 3 protocol takes the information passed down from higher layers and encapsulates it, together with the Logical Address of the destination node, in a unit called a packet. This packet, after passing through the remaining lower layer protocols, is transmitted over the network medium from the node to the router. The router reads the Logical Address that the packet contains and compares it to its list of the Logical Addresses of nodes directly connected to it. If the packet's destination address matches an entry in this list, the packet is transmitted directly on the line that leads straight to the destination node. If the router does not know of a direct connection to the destination node, the packet is transmitted on a line leading directly to another router. That router then treats the packet much like the first router did upon receipt: the packet's Logical Address is checked for matches against the list of Logical Addresses belonging to nodes directly connected to the router. If the packet reaches a router with connections only to other routers, as shown in Diagram 2, the router uses the Logical Address's orderly, hierarchical numbering scheme to determine the closest router to the destination node and then transmits the packet to that router.
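The hop-by-hop forwarding just described can be sketched as a toy simulation. The router names, attached-node lists, and addresses are all hypothetical, and a real router consults a full routing table rather than a single peer fallback.

```python
# Hypothetical two-router network: each router knows its directly attached
# nodes and falls back to a peer router for everything else.
routers = {
    "R1": {"attached": {"10.1.0.5", "10.1.0.6"}, "peer": "R2"},
    "R2": {"attached": {"10.2.0.5", "10.2.0.9"}, "peer": "R1"},
}

def deliver(start_router, dest, max_hops=4):
    """Follow a packet from router to router until a direct link is found."""
    path = []
    router = start_router
    for _ in range(max_hops):
        path.append(router)
        if dest in routers[router]["attached"]:
            path.append(dest)                # direct line to the destination
            return path
        router = routers[router]["peer"]     # hand the packet to the next router
    raise RuntimeError("destination unreachable")

assert deliver("R1", "10.2.0.9") == ["R1", "R2", "10.2.0.9"]
```

Each router makes only a local decision, yet the packet still finds its way across the whole network.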
As we can now see, Layer 3 of the OSI Reference Model is one of the most complex, but most functionally important, parts of the modern day network. The dominant Layer 3 protocol is IP, which stands for Internet Protocol and handles virtually all traffic on the internet today. The fashion in which Layer 3 protocols connect computers in a star-shaped, extensible network is much of the reason the internet is commonly called the "web".

THE TRANSPORT LAYER:
OSI Layer 4 is known as the Transport Layer. Since we are now above Layer 3, all information transferred is assumed to have arrived at the correct destination node and to be on its way up to Layer 4. The Transport Layer is responsible for the reliability of the link between two end users and for dividing up the data being transmitted by assigning port numbers to its Layer 4 packages, known as segments. Ports can be thought of as virtual destination mailboxes or outlets. When information reaches a Layer 4 protocol, the segment is examined to determine the destination port of the data it contains. Once the port is determined, just as in all of the layers below, the wrapper is discarded and the payload data passed up to the next layer's protocol.
Ports allow more than one set of Layer 5-7 protocols to exist on a single node. This is important if the node has more than one purpose. Modern home computers utilize many ports during everyday use, because the modern computer user demands that a computer serve many purposes at once. Higher layer protocols that provide services such as email, web browsing, text chat, file transfer and more each operate on their own unique Layer 4 port, allowing all of these protocols to be operated at once without interference.
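The port system can be sketched as a simple dispatch table. Ports 25 (SMTP) and 80 (HTTP) are real well-known port numbers; the handlers and segment format here are invented for illustration.

```python
# Hypothetical services bound to well-known ports on a single node.
handlers = {
    25: lambda data: "SMTP got " + data,    # email delivery
    80: lambda data: "HTTP got " + data,    # web serving
}

def demultiplex(segment):
    """Read the destination port, strip the wrapper, dispatch the payload."""
    port, payload = segment["port"], segment["payload"]
    if port not in handlers:
        raise ValueError(f"no service listening on port {port}")
    return handlers[port](payload)

assert demultiplex({"port": 80, "payload": "GET /"}) == "HTTP got GET /"
assert demultiplex({"port": 25, "payload": "MAIL"}) == "SMTP got MAIL"
```

One node, one address, yet two independent services: exactly the interference-free coexistence the port system is meant to provide.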
On the reliability front, Transport Layer protocols can be capable of running a checksum on the payload data they carry. This allows the protocol to determine the integrity of incoming payload data. If this data has been corrupted or its integrity otherwise compromised, the Layer 4 protocol will request that the segment be retransmitted.
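The integrity check can be illustrated with the classic Internet checksum (a ones'-complement sum of 16-bit words), which is the style of checksum TCP and UDP use; this sketch omits the pseudo-header a real implementation would also cover.

```python
def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words, in the style of TCP and UDP."""
    if len(data) % 2:                      # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

checksum = internet_checksum(b"data")
# Re-summing the data together with its checksum verifies to zero;
# any corruption in transit makes the verification fail.
assert internet_checksum(b"data" + checksum.to_bytes(2, "big")) == 0
```

A receiver that computes a nonzero result knows the segment was damaged and can ask for a retransmission.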

THE SESSION LAYER:
While an optional layer in most protocol suites today, OSI Layer 5, known as the Session Layer, still serves a purpose in the OSI Reference Model. The Session Layer draws the outline for protocols that manage the combination and synchronization of data from two separate higher layers. Layer 5 protocols are responsible for ensuring that the data is synced and consistent before being transmitted. A good example situation is the streaming of live multimedia, where near-perfect synchronization between video and audio is desired.

THE PRESENTATION AND APPLICATION LAYERS:
The sixth and seventh layers in the OSI Reference Model are the Presentation Layer and the Application Layer. The primary purpose of these layers is to facilitate the movement of formatted information between applications interacting with end users on nodes, by way of the lower layer protocols. Commonly used top layer protocols are HTTPS (for the secure transfer of web page related files), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP, used for the sending of email messages), and SSH (Secure Shell, used for secure remote shell access to a computer's operating system).
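As a small illustration of what an Application Layer protocol looks like, here is a minimal HTTP/1.1 GET request being formatted; the host name is a placeholder, and the finished bytes would normally be handed to a Layer 4 (TCP) connection for delivery.

```python
def http_get_request(host: str, path: str = "/") -> bytes:
    """Format a minimal HTTP/1.1 GET request, ready for a Layer 4 connection."""
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
        "",                # a blank line ends the header section
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = http_get_request("example.com")
assert request.startswith(b"GET / HTTP/1.1\r\nHost: example.com")
assert request.endswith(b"\r\n\r\n")
```

Notice that the top layers deal in structured, human-readable text; everything below them only ever sees opaque payload bytes.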

Friday, January 16, 2009

Network Topology

Network topology is the study of the arrangement or mapping of the elements (links, nodes, etc.) of a network, especially the physical (real) and logical (virtual) interconnections between nodes.[1][2] A local area network (LAN) is one example of a network that exhibits both a physical topology and a logical topology. Any given node in the LAN will have one or more links to one or more other nodes in the network and the mapping of these links and nodes onto a graph results in a geometrical shape that determines the physical topology of the network. Likewise, the mapping of the flow of data between the nodes in the network determines the logical topology of the network. The physical and logical topologies might be identical in any particular network but they also may be different.
Basic types of topologies
Bus
Linear bus:
The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has exactly two endpoints (this is the 'bus', which is also commonly referred to as the backbone, or trunk) – all data that is transmitted between nodes in the network is transmitted over this common transmission medium and is able to be received by all nodes in the network virtually simultaneously (disregarding propagation delays)[1].
Note: The two endpoints of the common transmission medium are normally terminated with a device called a terminator that exhibits the characteristic impedance of the transmission medium and which dissipates or absorbs the energy that remains in the signal to prevent the signal from being reflected or propagated back onto the transmission medium in the opposite direction, which would cause interference with and degradation of the signals on the transmission medium (See Electrical termination).
Distributed bus:
The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has more than two endpoints that are created by adding branches to the main section of the transmission medium – the physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology (i.e., all nodes share a common transmission medium).
Star
The type of network topology in which each of the nodes of the network is connected to a central node with a point-to-point link in a 'hub' and 'spoke' fashion, the central node being the 'hub' and the nodes that are attached to the central node being the 'spokes' (e.g., a collection of point-to-point links from the peripheral nodes that converge at a central node) – all data that is transmitted between nodes in the network is transmitted to this central node, which is usually some type of device that then retransmits the data to some or all of the other nodes in the network, although the central node may also be a simple common connection point (such as a 'punch-down' block) without any active device to repeat the signals.
Extended star:
A type of network topology in which a network that is based upon the physical star topology has one or more repeaters between the central node (the 'hub' of the star) and the peripheral or 'spoke' nodes, the repeaters being used to extend the maximum transmission distance of the point-to-point links between the central node and the peripheral nodes beyond that which is supported by the transmitter power of the central node or beyond that which is supported by the standard upon which the physical layer of the physical star network is based.
Distributed Star:
A type of network topology that is composed of individual networks that are based upon the physical star topology connected together in a linear fashion – i.e., 'daisy-chained' – with no central or top level connection point (e.g., two or more 'stacked' hubs, along with their associated star connected nodes or 'spokes').

Ring
The type of network topology in which each of the nodes of the network is connected to two other nodes in the network and with the first and last nodes being connected to each other, forming a ring – all data that is transmitted between nodes in the network travels from one node to the next node in a circular manner and the data generally flows in a single direction only.
Dual-ring:
The type of network topology in which each of the nodes of the network is connected to two other nodes in the network, with two connections to each of these nodes, and with the first and last nodes being connected to each other with two connections, forming a double ring – the data flows in opposite directions around the two rings, although, generally, only one of the rings carries data during normal operation, and the two rings are independent unless there is a failure or break in one of the rings, at which time the two rings are joined (by the stations on either side of the fault) to enable the flow of data to continue using a segment of the second ring to bypass the fault in the primary ring.

Mesh
The value of a fully meshed network grows rapidly with the number of subscribers, since any two endpoints (up to and including all of them at once) can communicate directly; the price is that connecting n endpoints requires n(n-1)/2 point-to-point links.
Fully connected:
The type of network topology in which each of the nodes of the network is connected to each of the other nodes in the network with a point-to-point link – this makes it possible for data to be simultaneously transmitted from any single node to all of the other nodes.
Partially connected:
The type of network topology in which some of the nodes of the network are connected to more than one other node in the network with a point-to-point link – this makes it possible to take advantage of some of the redundancy that is provided by a physical fully connected mesh topology without the expense and complexity required for a connection between every node in the network.
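The link count that makes a fully connected mesh expensive can be sketched by simply enumerating every pair of nodes; the node names below are arbitrary.

```python
from itertools import combinations

def full_mesh_links(nodes):
    """A full mesh needs one point-to-point link per pair: n(n-1)/2 links."""
    return list(combinations(nodes, 2))

nodes = ["A", "B", "C", "D", "E"]
links = full_mesh_links(nodes)
assert len(links) == len(nodes) * (len(nodes) - 1) // 2   # 10 links for 5 nodes
```

Five nodes already need ten links; a partially connected mesh keeps only the pairs whose redundancy is worth paying for.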
Tree
Also known as a hierarchical network.
The type of network topology in which a central 'root' node (the top level of the hierarchy) is connected by point-to-point links to one or more nodes one level lower in the hierarchy (i.e., the second level). Each of the second level nodes is in turn connected, again by point-to-point links, to one or more nodes one level lower still (i.e., the third level), and so on. The top level central 'root' node is the only node that has no other node above it in the hierarchy, and the hierarchy of the tree is symmetrical: each node in the network has a specific fixed number of nodes connected to it at the next lower level, that number being referred to as the 'branching factor' of the hierarchical tree.

Tuesday, January 13, 2009

MODERN NETWORK HARDWARE & INFRASTRUCTURE STANDARDS

ETHERNET (IEEE 802.3):

Ethernet is used to link computers in both small residential and large commercial situations and is the most widely used network hardware standard today. It often delivers internet access from other, longer range hardware standards to multiple computers within a home or workplace. Ethernet equipment is relatively small, affordable, and can carry data at high speeds. The original specification called for speeds of 10mbps, while newer technologies have brought typical speeds to between 100mbps and 1000mbps. Ethernet is rarely used outside of the local area networks found inside businesses or homes due to its range limitations: Ethernet installations typically cannot run for more than a few thousand feet. This leaves other network infrastructures to link computers over great distances. Ethernet most commonly forms what is known as a LAN, or Local Area Network. A LAN is a collection of computers in close proximity that are linked together to form a network. This network may then be linked to other LANs via network infrastructure standards with long distance capabilities.
Originally developed at Xerox's PARC (Palo Alto Research Center) facility, the project was predominantly conceived and headed by Robert Metcalfe. In 1976, Metcalfe and his team published a paper entitled Ethernet: Distributed Packet Switching for Local Computer Networks, which laid out conceptual specifications for the Ethernet standard. Though the standard defined in this paper was for 3mbps, 8-bit communication, Ethernet would soon evolve into its more modern-day form when Metcalfe left Xerox to form 3Com. In 1980, he encouraged major companies such as DEC, Intel, and Xerox to participate in a standard he called "DIX" (for DEC, Intel, Xerox). This standard defined Ethernet as having 10mbps speeds and would end up competing directly against the day's largest proprietary systems.
The Ethernet standard demonstrates its flexibility by supporting multiple transmission mediums. An early medium, known as 10BASE2, utilized BNC coaxial connectors and coaxial cabling. This was the standard for many years, transmitting data at a rate of 10mbps. 10BASE2, however, became increasingly cumbersome and required high maintenance: a complete circuit was needed for proper operation, meaning that a single failed node or cable break on a large network could halt the whole network. In the early 90s, the newer 10BASE-T standard emerged. 10BASE-T utilizes twisted pair cable, which is similar to copper phone line but differs in that it carries four twisted pairs instead of one or two. Operating at speeds of 10mbps, or later 100mbps (100BASE-T), this standard has become and remains the most widely used network standard in the world. In the late 90s, Gigabit Ethernet came into existence, allowing for transfer speeds of up to 1000mbps over the same twisted pair cabling. The Gigabit Ethernet standard is also capable of transmitting over optical cable, though this ability has not gained a following due to the existence of superior high end fiber optic network standards.
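Across all of these mediums the framing stays the same, and it can be illustrated by parsing the 14-byte header that precedes every Ethernet payload. The sample frame below is constructed by hand (the source MAC is invented), and a real frame would also end with a frame check sequence.

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Split the 14-byte header: destination MAC, source MAC, EtherType."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), ethertype, frame[14:]

# A hand-built frame: broadcast destination, made-up source, IPv4 payload.
frame = bytes.fromhex("ffffffffffff" + "00259c123456") + b"\x08\x00" + b"payload"
dst, src, ethertype, payload = parse_ethernet_header(frame)
assert dst == "ff:ff:ff:ff:ff:ff"       # all-ones MAC means broadcast
assert ethertype == 0x0800              # 0x0800 marks an IPv4 payload
assert payload == b"payload"
```

The EtherType field is how the link layer tells the receiving node which higher protocol should get the payload next.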
While there are numerous other standards operating under the IEEE 802.3 specification, most others are used in niche markets or private deployments for very large network backbones. Ethernet has been and will remain for years the most used standard for the transmission of digital information over short distances.


WI-FI (802.11x):

Wi-Fi is a standard developed to perform nearly the same role as Ethernet does in consumer settings, but without the wires. Taking to the air, Wi-Fi allows a node to lie anywhere within a 100 to 1000 foot range of a Wi-Fi enabled router and have a constant, secure connection to the Local Area Network. Wi-Fi originated with speeds of just 11mbps in the form of IEEE 802.11b, but today can achieve speeds between 54mbps and 108mbps.
The original Wi-Fi standard was developed at AT&T by Vic Hayes. It was initially designed to provide wireless communication for cashier systems in retail locations and operated at speeds of 1 or 2mbps. In the late 90s, the IEEE ratified the 802.11b specification, providing wireless Ethernet-like connectivity for nodes at speed steps between 1 and 11mbps. The varied speeds allow a node's hardware to switch to a lower transmission speed when further from the access point in order to maintain the connection over a longer distance.
In 2003 and 2004, IEEE 802.11g, a newer standard based on the Wi-Fi specification, emerged. 802.11g provides speeds between 11 and 54mbps while still maintaining backward compatibility with the older 802.11b standard and its 1 to 11mbps speed range. This means 802.11g is able to offer superior speeds while remaining capable of reverting to lower speed, longer range transmission rates when necessary.
In 2005, an even newer standard known as 802.11n began to emerge. While not officially ratified to date, so-called "Pre-N" devices have begun to be sold in the consumer marketplace based upon the 802.11n standard that is still in the ratification process at the IEEE. The 802.11n specification calls for transmission speeds of 108 to 540mbps while still maintaining full support for the 802.11b/g standards' speeds between 1 and 54mbps. While the 802.11g standard used the longer range, lower speed, backward compatible 802.11b standard to increase the range and connection stability nodes received when further from the 'g' access point, 802.11n uses previous standards almost exclusively for compatibility with older equipment. This is because 802.11n devices are able to communicate at 54 to 108mbps at ranges greater than those offered by 802.11b operating at 1 to 5mbps. 802.11n is not expected to receive widespread adoption until late 2006 or 2007, both because it has not yet received IEEE certification and because it must attempt to displace the current standard's enormous market saturation.
The issue of range has greatly marred the performance of 802.11 equipment for years. This problem is obvious when it is considered that most 802.11b equipment actually only operates at 5 to 7mbps, and 802.11g equipment at 24 to 36mbps, during real-world use. Fortunately, with the improvements brought by 802.11n and future standards, this problem will begin to fade as the speeds achieved during everyday use close in on the technical maximum speeds offered by emerging standards.


BLUETOOTH:

Bluetooth, IEEE 802.15.1, is a short range wireless network standard originally developed by Ericsson Corporation. Designed to allow nodes to participate in a network using the lowest possible amount of power, Bluetooth supports three power/range levels: 1mW/10cm, 2.5mW/10m, and 100mW/100m. Bluetooth's current maximum transmission rate is 2.1mbps. While this seems very low compared to much older Wi-Fi standards such as 802.11b, Bluetooth is designed to fit a special section of the market, rather than to be a widespread, high-performance technology. Bluetooth is almost always used in a paired or "ad-hoc" type network. In an ad-hoc network, no router exists, but the nodes are simply responsible for negotiating communication among themselves automatically. A paired network is simply an ad-hoc network with only two nodes.
Common Bluetooth devices and applications include mobile phone headsets, PC-to-organizer/PDA synchronization, and other situations in which small devices need low power, short range communication capability. Bluetooth is a staple feature on most of today's newest and smallest portable information and communication devices.

The Internet Protocol Suite

The Internet Protocol Suite is a stack of protocols commonly described in terms of the OSI Reference Model. Undeniably the single most used protocol stack in the world, the IP Suite is the primary power behind the internet and a large number of other networks of all sizes. This suite is known as the TCP/IP suite or the IP Suite, despite the fact that it is actually a suite of specifications and consists of more than just the TCP and IP protocols. To make things easier to understand, the TCP/IP suite is often explained using just four layers, each of which represents multiple OSI layers.
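The layering can be pictured as nested envelopes: each layer wraps the data from the layer above with its own header, and the receiving node peels them off in reverse order. The tuples below are stand-ins for the real binary headers.

```python
def encapsulate(payload: str):
    """Wrap application data in transport, internetwork, and link headers."""
    segment = ("TCP", payload)     # transport layer adds ports and sequencing
    packet = ("IP", segment)       # internetwork layer adds logical addressing
    frame = ("ETH", packet)        # link layer adds physical addressing
    return frame

def decapsulate(frame) -> str:
    """Peel the headers off in reverse order at the receiving node."""
    _, packet = frame
    _, segment = packet
    _, payload = segment
    return payload

frame = encapsulate("GET /index.html")
assert frame == ("ETH", ("IP", ("TCP", "GET /index.html")))
assert decapsulate(frame) == "GET /index.html"
```

Each layer only ever inspects its own wrapper, which is what lets the layers be specified, and replaced, independently.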


THE INTERNET PROTOCOL SUITE LINK LAYER:

While not technically a part of the Internet Protocol Suite, the IP Suite relies on a link layer, just as any other protocol stack would. Without the Link Layer, which represents OSI Layers One and Two, the higher protocols defined in the TCP/IP stack would not function.
An interesting advanced topic that can be considered here is the concept of a Virtual Private Network (VPN) or network "tunnel". A network tunnel links two remote local area networks as if they were one local area network. This operates by running a VPN stack with a TCP/IP stack on top. While this concept may seem complex, the same principles discussed earlier in this document in relation to stacking apply not only to protocols, but to stacks of protocols. While theoretically this concept could extend without limits, it never really does due to protocol overhead (the space consumed by packet headers) and the fact that no widespread practical use has ever existed for more than two or three nested tunnels.
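The overhead argument above can be made concrete with a toy calculation: assume each nesting level adds a fixed-size header (20 bytes here, which happens to be the minimum IPv4 header size) and watch the cost accumulate with each nested tunnel.

```python
HEADER = 20   # illustrative per-level header size, in bytes

def tunnel(payload: bytes, depth: int) -> bytes:
    """Wrap a packet inside `depth` nested tunnel packets."""
    for _ in range(depth):
        payload = b"H" * HEADER + payload   # each nesting level adds a header
    return payload

data = b"x" * 1000
assert len(tunnel(data, 1)) == 1020
assert len(tunnel(data, 3)) == 1060   # overhead grows with every nested tunnel
```

With small payloads the header fraction grows quickly, which is part of why deeply nested tunnels never caught on in practice.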


THE INTERNET PROTOCOL SUITE INTERNETWORK LAYER:

Also known as the Internet Layer, due to its almost exclusive use on that medium, this is the level at which packets are routed and switched between networks. The Internet Protocol (IP, not to be confused with the IP Suite) is responsible for getting this job done. As shown in the routing example in Diagram 3, each node is identified by an IP address, and IP uses those addresses to move packets toward their destination. IP is an OSI Layer 3 protocol.
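The basic routing decision IP makes at this layer can be sketched with Python's standard ipaddress module (the addresses and the network below are made up for illustration):

```python
# A minimal sketch of the Layer 3 forwarding decision: is the destination
# on my own network, or must the packet go to a router?
import ipaddress

local_net = ipaddress.ip_network("192.168.1.0/24")  # assumed local network

def next_hop(destination: str) -> str:
    # On the local network: deliver directly over the link layer.
    # Anywhere else: hand the packet to the default router.
    if ipaddress.ip_address(destination) in local_net:
        return "deliver directly"
    return "forward to default router"

print(next_hop("192.168.1.42"))  # deliver directly
print(next_hop("8.8.8.8"))       # forward to default router
```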


THE INTERNET PROTOCOL SUITE TRANSPORT LAYER:

The Internet Protocol Suite's Transport Layer is where the TCP/IP suite shows its broad diversity and capability. Supporting multiple varied mainstream protocols, the IP Suite's Transport Layer provides many options for the protocol and associated feature-set that a node's applications may use. IP Suite Transport Layer protocols fall under the specification of OSI Layers Four and Five. Here are some of the most common IP Suite Transport Layer protocols.

TCP-
TCP is a reliable, connection-oriented protocol, and is possibly the most commonly used IP Suite Transport Layer protocol. It is reliable in the sense that it detects packets that fail to reach their destination intact and retransmits them. To help avoid congestion, TCP also monitors the network's current load and free capacity, based on the behavior of other TCP traffic, and throttles its sending rate to prevent packet overload and loss. In addition, TCP delivers data to the receiving application in the order it was originally sent. TCP performs best with applications that do not require strictly timely delivery but do require that the information arrive intact and in order. TCP is classified as an OSI Layer 4 (Transport Layer) protocol.
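The connection-oriented behavior described above can be seen in a few lines with Python's standard socket module, using the loopback interface so everything runs on one machine:

```python
# A minimal sketch of TCP's connection-oriented, reliable byte stream.
import socket
import threading

def serve(server: socket.socket) -> None:
    conn, _addr = server.accept()     # completes the three-way handshake
    with conn:
        data = conn.recv(1024)        # bytes arrive intact and in order
        conn.sendall(b"ACK:" + data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))         # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,)).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

print(reply)  # b'ACK:hello'
```

Retransmission and congestion control happen invisibly inside the operating system's TCP implementation; the application only ever sees the clean, ordered stream.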

UDP-
Often viewed as being similar to TCP, UDP differs in that it is an unreliable protocol. This does not mean that it serves its purpose poorly, but rather that UDP does not verify that its packets have reached the destination node successfully, and will not hold up future packets to retransmit failed ones. UDP is therefore typically used in applications where perfect integrity of the transmitted information is not required but timely delivery is. UDP is useful in applications such as multimedia streaming because it does not stop to resend bad packets, thus preventing pauses in the media stream. UDP is classified as an OSI Layer 4 (Transport Layer) protocol.
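Contrast the TCP pattern with UDP's connectionless datagrams, again sketched with the standard socket module on the loopback interface (where, conveniently, datagrams are not actually lost):

```python
# A minimal sketch of UDP: no handshake, no retransmission. Each
# sendto() is an independent datagram that may, on a real network,
# be lost, duplicated, or reordered.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # port 0 = let the OS pick a port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"frame-1", ("127.0.0.1", port))  # fire and forget

data, addr = receiver.recvfrom(1024)  # one datagram, delivered as-is
print(data)  # b'frame-1'

sender.close()
receiver.close()
```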

RTP-
RTP is a Session Layer (OSI Layer 5) protocol that lies on top of UDP (an OSI Layer 4 protocol). RTP is specifically designed to deliver streaming audio and video content on time and in order. Utilizing UDP for its unreliable, time-conscious transmission, RTP adds sequence numbers and timestamps that let the end node's application play packets back in a timely manner and in the originally intended order.
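RTP itself is a binary protocol, but the core idea of its sequence numbers can be sketched in a toy form: the receiver restores the sender's order on top of a transport that may deliver datagrams out of order.

```python
# Toy sketch of RTP-style reordering: each packet carries a sequence
# number, and the receiver plays payloads back in sequence order.
def reorder(packets: list) -> bytes:
    # Each packet is a (sequence_number, payload) pair.
    return b"".join(payload for _seq, payload in sorted(packets))

# Arrival order does not match send order, as can happen over UDP.
arrived = [(2, b"world"), (0, b"hello"), (1, b" ")]
print(reorder(arrived))  # b'hello world'
```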


THE INTERNET PROTOCOL SUITE APPLICATION LAYER:

The IP Suite's Application Layer is where the things a common user actually interacts with come into play. Representing the OSI Reference Model's Layers Six and Seven, the IP Suite has a large number of protocols commonly used on its highest layer.

THE UNIFORM RESOURCE LOCATOR CONCEPT-
In order to allow the IP Suite the flexibility to operate using a variety of Transport and Application Layer protocols, a uniform way to reference these protocols' resources is needed. The IP Suite uses a system known as the Uniform Resource Locator (URL). A URL, as shown in Diagram 5, commonly consists of three parts, though various protocols may have an expanded syntax to reflect expanded capability.
The first segment of a URL indicates the Application Layer protocol that will be used for the request. Common examples are http://, https://, and ftp://. The second segment of a URL is an IP address or host name. This tells the IP protocol (OSI Layer 3) the logical address of the node where the requested resource is located. The third segment of the URL, indicated in Diagram 5 by the position of the number '80', is the port number. The concept of Layer 4 ports is introduced in Section 2.4 of this document. The port number in a URL tells the transport protocol (TCP or UDP) the remote port it should attempt to access. Diagram 5 is a URL telling the IP Suite that it should use the HTTP protocol to access the HTTP server operating on port 80 of the node located at 127.0.0.1.
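The three-part split described above can be performed with Python's standard urllib.parse module, here applied to the same URL shown in Diagram 5:

```python
# Splitting a URL into the three parts described above.
from urllib.parse import urlparse

url = urlparse("http://127.0.0.1:80/index.html")

print(url.scheme)    # 'http'       -> the Application Layer protocol
print(url.hostname)  # '127.0.0.1'  -> the node's IP address or host name
print(url.port)      # 80           -> the transport-layer port number
```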

HTTP-
Possibly the most recognizable protocol yet discussed here, HTTP is the HyperText Transfer Protocol. Following the specification of OSI Layer 7, the HTTP protocol is responsible for fetching, sending, and receiving files per the requests of the end user. HTTP is commonly used inside of computer programs called browsers to allow for the quick viewing of many filetypes and for ease of navigation among them.
The average person would likely recognize HTTP as being 'those four letters typed at the beginning of a web page address', and would be correct since websites operate primarily on the HTTP protocol. Thus a website's URL might look something like: http://NSGN.net. HTTP typically operates by default on TCP Port 80.
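Under the hood, an HTTP exchange is plain text sent over a TCP connection. The sketch below builds a request and parses a canned response without touching the network; example.com stands in for any real host.

```python
# What an HTTP/1.1 exchange looks like on the wire. No network is used
# here; we only build the request text and parse a canned response.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"       # example.com is a placeholder host
    "Connection: close\r\n"
    "\r\n"                        # blank line ends the header block
)

# A canned response, as a server listening on TCP port 80 might send it.
response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html></html>"
status_line, _, rest = response.partition("\r\n")
print(status_line)  # HTTP/1.1 200 OK
```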

HTTPS-
HTTPS operates identically to HTTP except that it encrypts all packets it handles on-the-fly. HTTPS requires an encryption certificate to operate properly. A certificate is a digital document that binds a server's identity to its public encryption key; the server alone holds the matching private key. Using the certificate, the two end users negotiate session keys, so only they can make use of the information transmitted over the HTTPS connection. HTTPS typically operates by default on TCP Port 443.
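The client side of this certificate checking is visible in Python's standard ssl module: the default context refuses to talk to a server that cannot present a certificate trusted by the system's certificate authorities.

```python
# A sketch of the client-side TLS setup that HTTPS relies on.
import ssl

context = ssl.create_default_context()

# By default the client demands a certificate from the server...
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
# ...and also checks that the certificate matches the host name dialed.
print(context.check_hostname)                    # True
```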

FTP-
FTP is the File Transfer Protocol. It is a protocol used to transfer files between two nodes over a network. While FTP is far from the only file transfer protocol designed to run on top of the IP Suite, it is one of the first and remains in wide use. The FTP protocol is commonly used through computer programs known as "FTP clients". These software applications send and receive FTP commands and present the resulting information to the user. An FTP session, depending on the software application in use, may sometimes be initiated by a URL beginning with ftp:// . FTP typically operates by default on TCP Port 21.

SSH-
SSH is the Secure SHell protocol. Used primarily on business or server computer operating systems, the SSH protocol allows a node to be remotely controlled or administrated. The SSH protocol typically operates by default on TCP Port 22.

Friday, January 2, 2009

Types Of Networking

PEER TO PEER NETWORKING


Based on their layout (not the physical but the logical layout, also referred to as topology), there are two types of networks. A network is referred to as peer-to-peer if most computers are similar and run workstation operating systems:
It typically has a mix of Microsoft Windows 9X, Me, Windows XP Home Edition, or Windows XP Professional (you can also connect Novell SUSE Linux as part of a Microsoft Windows-based network; the current release of that operating system is really easy to install and make part of the network).
In a peer-to-peer network, each computer holds its own files and resources. Other computers can access these resources, but a computer that has a particular resource must be turned on for other computers to access it. For example, if a printer is connected to computer A and computer B wants to print to that printer, computer A must be turned on.


CLIENT/SERVER NETWORKING


A computer network is referred to as client/server if (at least) one of the computers is used to "serve" other computers referred to as "clients". Besides the computers, other types of devices can be part of the network:
In a client/server environment, each computer still holds (or can hold) its own resources and files. Other computers can also access the resources stored in a computer, as in a peer-to-peer scenario. One of the particularities of a client/server network is that files and resources can be centralized: the server holds them and the other computers access them there. Since the server is always on, the client machines can access the files and resources without caring whether any particular workstation is on.

Another big advantage of a client/server network is that security is created, managed, and can be tightly enforced. To access the network, a person, called a user, must provide some credentials, including a username and a password. If the credentials are not valid, the user can be prevented from accessing the network.
The client/server type of network also provides many other advantages such as centralized backup, Intranet capability, Internet monitoring, etc.

In this series of lessons, the network we will build is based on Microsoft Windows operating systems (I have been able to fully connect some versions of Linux, such as Novell SUSE Linux, into a Microsoft Windows-based network, but at the time of this writing I will not be able to address that).
In our lessons, we will mention the names of companies or provide links. These are only indications and not advertisements. Any other company or link that provides the mentioned service is suitable.

Characteristics of a Computer Network


The primary purpose of a computer network is to share resources:

1. You can play music from a CD on one computer while sitting at another computer
2. You may have a computer with a CD writer or a backup system that another computer lacks; in this case, you can burn CDs or make backups on the computer that has the hardware, using data from the computer that doesn't
3. You may have a computer that doesn't have a DVD player; in this case, you can place a movie DVD in the computer that has a DVD player, and then view the movie on the computer that lacks one
4. You can connect a printer (or a scanner, or a fax machine) to one computer and let the other computers of the network print (or scan, or fax) to it
5. You can place a CD with pictures in one computer and let other computers access those pictures
6. You can create files and store them on one computer, then access those files from the other computer(s) connected to it