A computer network consists of two or more computing devices connected by a medium allowing the exchange of electronic information. These computing devices can be mainframes, workstations, PCs, or specialized computers; they can also be connected to a variety of peripherals, including printers, modems, and CD-ROM towers. Most networks are supported by a host of specialized software and hardware that makes these connections possible, including routers, bridges, and gateways, which help accommodate traffic between unlike systems.
Many different types of computer networks exist. Some, such as local area networks (LANs), metropolitan area networks (MANs), and wide area networks (WANs), are defined by their geographic layout and the differing technologies that support such layouts. LANs are by far the most common, and in most cases, the fastest. Networks may be public, such as the Internet; semi-public, such as subscription networks (including subscription-based Internet service providers and other content-based networks); or private, such as internal corporate LANs, WANs, intranets, and extranets. Most networks are private, but of course the relatively few public ones, like the Internet, support a very large user base. Networks may also be open, or linked to other networks, or closed, which means they are self-contained and do not allow connectivity with outside resources. Most modern corporate networks are somewhere in between; they often allow access to the outside, but tightly restrict access from the outside. "Open" can also describe whether network technology is based on widely accepted standards that multiple hardware/software vendors support, versus a closed or proprietary system that is dependent on a single developer (or very few).
Research facilities sponsored by the U.S. Department of Defense were among the first to develop computer networks. Perhaps the most famous example of such a network is the Internet, which began in 1969 as Arpanet, part of a project to link computers at four research sites. One of the most significant developments in these early networks was the concept of packet switching, which encodes data for transmission over networks into small chunks of information that each carry meta-information about where the data are coming from, where they are going, and how each piece fits into the whole. Packet switching, the basis of all modern networking, enables a transmission to be routed through any number of computers to get to its destination, and provides an efficient means of retrieving lost information. If a packet is lost or corrupted, only a single packet need be re-sent, which is handled behind the scenes by the networking software, rather than starting the entire transmission over again.
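The packet-switching mechanics described above can be sketched in a few lines of Python. The host names, the four-byte payload size, and the dictionary header fields are all illustrative stand-ins, not any real protocol's format:

```python
import math

def packetize(message: bytes, payload_size: int = 4) -> list:
    """Split a message into packets, each carrying addressing and sequencing metadata."""
    total = math.ceil(len(message) / payload_size)
    return [
        {
            "src": "hostA", "dst": "hostB",   # where the data come from / where they go
            "seq": i, "total": total,         # how this piece fits into the whole
            "payload": message[i * payload_size:(i + 1) * payload_size],
        }
        for i in range(total)
    ]

def reassemble(packets: list) -> bytes:
    """Packets may arrive out of order; sequence numbers restore the original message."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize(b"HELLO, NETWORK")
pkts.reverse()  # simulate out-of-order arrival over different routes
assert reassemble(pkts) == b"HELLO, NETWORK"
```

Because each packet is self-addressed, a receiver can also detect a missing sequence number and request that single packet again, rather than the whole transmission.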
Several of the most defining advances occurred in the early 1980s. Coming on the heels of IBM's mid-1970s introduction of the Systems Network Architecture (SNA), a proprietary set of highly stable protocols for networking mainframes and mid-range systems, a few important industry-wide standards were reached that cleared the path for widespread implementation of networking. The first of these was the debut of the Institute of Electrical and Electronics Engineers' (IEEE) 802.x series of standards, which prescribed the technical specifications for various types of network data exchanges. The IEEE standards, which are updated and expanded periodically, are still in force today. Next, a common architecture model called the Open Systems Interconnection (OSI, see below) was adopted by the International Organization for Standardization (ISO). Although the OSI was only a broad model, it provided network developers with an internationally accepted classification of the different network functions and processes and how they ought to work together. The OSI and the IEEE standards were complementary.
The Ethernet LAN protocols both influenced the formation of technical standards and became the most widespread embodiment of those standards. Ethernet was pioneered in the late 1970s by Xerox at its famous Palo Alto Research Center (PARC) with assistance from then-Digital Equipment Corp. (later part of Compaq Corp.) and Intel Corp. Indeed, the experimental Ethernet was the model on which the original IEEE standard was based, and Ethernet quickly became (and still is) the most common commercially produced LAN protocol.
Ethernet employs several hardware standards for various bandwidths and device connections, but it is perhaps best characterized by its use of a protocol called Carrier Sense Multiple Access with Collision Detection (CSMA/CD). CSMA/CD is essentially a set of rules for how competing devices can share finite network resources. Through this protocol a computer on the network can determine whether it can send data immediately or whether it must compete with another device for network services. A collision occurs when two devices attempt to transmit at the same time, and the CSMA/CD protocol provides a simple mechanism for resolving this contention: each colliding device stops transmitting, waits a randomly chosen interval, and tries again, repeating the process until it either succeeds or reaches a maximum number of attempts. If the maximum is reached, the operation may be aborted and data may be lost.

• Ethernet — a series of widely used hardware/software protocols for local area networks
• Local area networks (LANs) — networks that are confined to a single building or part of a building and that employ technology to capitalize on the advantages of close proximity (usually speed)
• Metropolitan area networks (MANs) — networks that are accessed from multiple sites situated in a relatively concentrated area (within 50 km or so) and that function as a faster alternative to wide area networks
• Nodes — individual computers on a network
• OSI — Open Systems Interconnection model, a broadly defined international model for the hierarchy of data communications between networked computers
• Packets — also called datagrams, measured pieces of information (usually ranging from 500 to 2,500 bytes) in a data transfer that are each separately addressed to their destination and reassembled into the full original message at the receiving end
• Protocols — sets of rules dictating how hardware and software communicate with other devices
• Storage area networks (SANs) — high-performance networks of storage/backup devices integrated with one or more primary computer networks
• Topology — the structure of how networked computers are actually connected to each other and to other network resources
• Wide area networks (WANs) — networks that span two or more separate buildings and use technologies that maximize the ease and cost-effectiveness of connections between distant locations (often at the expense of speed)
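A rough simulation of the CSMA/CD contention scheme follows. The fixed collision probability is an arbitrary stand-in for real carrier sensing, and the backoff window doubling mirrors the binary exponential backoff used by Ethernet:

```python
import random

MAX_ATTEMPTS = 16  # classic Ethernet gives up after 16 tries

def backoff_slots(attempt: int) -> int:
    """Binary exponential backoff: pick a random delay from a window
    that doubles on each successive collision (capped at 2**10 slots)."""
    return random.randrange(2 ** min(attempt, 10))

def transmit(collision_prob: float = 0.5):
    """Return the attempt number on which transmission succeeded,
    or None if the maximum number of attempts was reached."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        collided = random.random() < collision_prob  # another station sent simultaneously
        if not collided:
            return attempt
        _ = backoff_slots(attempt)  # wait this many slot times before retrying
    return None  # maximum reached: operation aborted, data may be lost

result = transmit()
```

With a 50 percent collision chance, most transmissions succeed within a few attempts; only rarely does the operation exhaust all sixteen and abort.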
Since its inception Ethernet has enjoyed regular, if less rapid, advances in speed parallel to those in microprocessing. The latest generation of Ethernet standards, finalized in late 1998, is Gigabit Ethernet. This standard supports transmission of up to 1 billion bits of data per second, a hundredfold improvement over the original Ethernet, which carried data at 10 million bits per second (Mbps). Gigabit Ethernet followed an enhanced 100 Mbps standard from the mid-1990s known as Fast Ethernet.
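The practical effect of that speed progression is easy to quantify. A quick back-of-envelope calculation for moving a 100-megabyte file at each Ethernet generation (ignoring protocol overhead, which reduces real-world throughput):

```python
# 100 MB expressed in bits (decimal megabytes, as network rates are decimal)
FILE_BITS = 100 * 8 * 10**6

generations = [
    ("Ethernet", 10 * 10**6),        # 10 Mbps
    ("Fast Ethernet", 100 * 10**6),  # 100 Mbps
    ("Gigabit Ethernet", 10**9),     # 1 Gbps
]

for name, bits_per_second in generations:
    print(f"{name}: {FILE_BITS / bits_per_second:.1f} s")
# Original Ethernet needs 80 seconds; Gigabit Ethernet under one second.
```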
The other significant development was that of the Internet Protocol (IP) and its many derivatives, which have been the center of innovation from the late 1980s until the present. IP, which is very basic, actually dates to the early 1970s, when the Internet's predecessor, Arpanet, was in its formative years. At its core, IP is a simple packet transmission protocol and an addressing scheme. This means that IP has certain parameters for how packets, often called datagrams, are addressed and formatted for exchange between two computers. IP forms the basis for a number of popular WAN and client/server protocols, notably Transmission Control Protocol/Internet Protocol (TCP/IP), which was developed during the 1970s and adopted for Arpanet in 1983.
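To make the "addressing and formatting" concrete, the sketch below packs a minimal IPv4-style datagram header. The field order follows the real IPv4 layout, but the checksum is left as a zero placeholder and the addresses are documentation examples, so this is an illustration rather than a working network stack:

```python
import socket
import struct

def make_header(src: str, dst: str, payload_len: int) -> bytes:
    """Pack a minimal 20-byte IPv4-style header: version/IHL, total length,
    TTL, protocol, and source/destination addresses.
    (The checksum field is left zero for brevity; a real stack computes it.)"""
    version_ihl = (4 << 4) | 5           # IPv4, 5 x 32-bit header words
    total_len = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_len,       # version/IHL, type of service, total length
        0, 0,                            # identification, flags/fragment offset
        64, 17, 0,                       # time to live, protocol (17 = UDP), checksum placeholder
        socket.inet_aton(src), socket.inet_aton(dst),
    )

hdr = make_header("192.0.2.1", "192.0.2.2", payload_len=8)
assert len(hdr) == 20  # the minimum IPv4 header is exactly 20 bytes
```

Every datagram on an IP network carries a header like this, which is what allows any intermediate router to forward the packet toward its destination without knowing anything else about the transmission.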
Networks can allow businesses to reduce expenses and improve efficiency by sharing data and common equipment, such as printers, among many different computers. While printers can be shared in other ways, such as by carrying information on floppy disks from one PC to another or by using manual or electronic data switches, networks can accommodate more users with less frustration. The power of mainframes or minicomputers can be used in harmony with personal computers: the larger machines can process larger and more complex jobs, such as maintaining the millions of records needed by a national company, while PCs operated by individual users handle smaller jobs such as word processing. Older equipment can also be rotated to less demanding jobs as workstations are upgraded. Many software programs offer network license agreements, which can be more cost effective than purchasing individual copies for each machine. The costs of implementing a network depend on issues of performance, compatibility, and whether value must be added to a turnkey system through additional programming or the addition of special components.
By coordinating all data and applications through a single network, backup copies of data for all systems can be made more consistently than could be expected if left to individual users. Updated software for all machines on a network can be installed through a single PC, and a centralized system simplifies other aspects of administration as well. With the proper software, computer security can also be implemented more effectively on a network than across many individual hard drives. Access to files can be restricted to password holders, or it can be limited to inquiry-only access for public users. Generally, security measures are more vulnerable on machines with single-user operating systems than on those with network security precautions.
The types of machines that can be connected to a network include PCs, intelligent workstations, dumb terminals, host computers, clients, and file and other types of servers. File servers control network activity such as printing and data sharing, as well as controlling security. Important factors to consider in selecting a file server include its speed, processor performance, memory, hard drive capacity, and most importantly, its compatibility with network software.
A corporate trend since the mid-1990s has been toward so-called network computers (NCs), a variation on the long-established notion of dumb terminals supported by a powerful central system. Spurred by advances in Internet technology, IT managers found that they could save on the high cost of buying and maintaining full-featured PCs for every desktop when only a handful of corporate applications were used, and these could conceivably be retrieved from (or run off) a central computer, the server. Advances in software and data portability, such as HTML documents on the Web and Sun Microsystems' platform-independent Java language, encouraged the idea that NC users could simply download whatever programs and files they needed from a central repository, rather than storing such information locally on each computer.
Servers are computers that run software to facilitate various kinds of network activities; the software packages that enable such activities are sometimes also called servers. A single physical computer may host a number of server-related processes. The three main types of server functions are file servers, network servers, and printer servers. File servers can be run in either a dedicated or a nondedicated mode. Nondedicated file servers can be used as workstations as well, although workstation functions can take up much of the processor's capacity, resulting in delays for network users. Also, if a workstation program causes the file server to lock up, the entire network may be affected and suffer possible data corruption. One compromise for a small office is to use a nondedicated file server as a workstation for a light user. A disk subsystem can increase the performance of a file server in large network applications. Network servers are used to facilitate network activities, such as processing e-mail, while printer servers manage traffic on networked printers.
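The client/server idea behind a file server can be sketched with Python's standard socket libraries. This toy server returns the contents of a named "file" from an in-memory table, which stands in for a shared disk; the file name and contents are invented for the example, and a real network operating system would add authentication, locking, and caching:

```python
import socket
import socketserver
import threading

class FileRequestHandler(socketserver.StreamRequestHandler):
    """Toy file server: the client sends a file name, the server returns its contents."""
    FILES = {"readme.txt": b"shared over the network\n"}  # stand-in for a shared disk

    def handle(self):
        name = self.rfile.readline().strip().decode()
        self.wfile.write(self.FILES.get(name, b"NOT FOUND"))

# Bind to an OS-assigned port and serve requests on a background thread.
server = socketserver.TCPServer(("127.0.0.1", 0), FileRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A workstation acting as a client requests the shared file.
with socket.create_connection(server.server_address) as sock:
    sock.sendall(b"readme.txt\n")
    data = sock.recv(1024)

server.shutdown()
assert data == b"shared over the network\n"
```

Whether such a server is dedicated or nondedicated is simply a question of whether the machine running this process is also doing a user's workstation duties.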
Highlighting the need for network storage space, particularly for critical system backups, has been the development of a relatively new set of network technologies known as storage area networks (SANs). SANs, which are high-speed networks of storage devices that can work in conjunction with any number of servers and other network devices, can be deployed as a solution to the inefficiencies of maintaining a host of separate disk subsystems. Although most companies of any size perform routine system backups, the process of backing up as well as restoring data can be slow and cumbersome—a competitive liability for companies that depend heavily on their systems being available 24 hours a day. SANs are used to reduce this liability and improve efficiency.
Connecting devices such as bridges, routers, and gateways are used to subdivide networks both physically and logically, to extend the range of cabling, and to connect dissimilar networks. Subdividing a network into segments is also useful for isolating faults. Repeaters simply extend the physical distance that network data can travel by receiving and retransmitting information packets; they do not provide isolation between the components they join. Connecting devices are classified according to the functional layer at which they operate.
Bridges operate at OSI layer two (also known as the data link layer; see Figure 2). They are used to isolate segments from a network backbone, to connect two networks with identical lower layers, and to convert one lower level technology into another. They can be configured to transmit only appropriate messages (filtering).
Routers operate at layer three (network layer). They can also be used to isolate network segments from a backbone, but unlike bridges, they can connect segments with different lower-layer protocols. Routing can also be performed in software, though usually not as quickly. "Brouters" are a hybrid between bridges and routers that operate at layers two or three.
Gateways operate at layer four (transport layer) or higher. They are required for minicomputer or mainframe access from PCs and are much more complex and costly than other connecting devices. They are capable of converting data for use between dissimilar networks.
Some type of medium is required to connect network components. Various types of cables exist for this purpose; as with most hardware, their price is related to their performance. Two PCs can be connected quite simply and cheaply by using a null modem cable. At the upper end of the spectrum, wireless and even satellite connections are used by large corporations and the military.
The earliest cable to become widely used was coaxial cable (nicknamed "coax"). Because it is shielded and resistant to electrical noise, it has proven useful in factory settings. Twisted-pair cable, also called UTP (unshielded twisted pair), has replaced coax in most applications because it is cost effective. Although UTP is similar to telephone wire, early noise problems prevented it from being accepted more quickly. Underwriters Laboratories rates UTP cable from Levels I through V based on performance; Levels I and II are suitable only for low-grade or slower applications.
Fiber optics is the most expensive and the fastest of the cables. Fiber-optic technology has been shown to achieve speeds of several hundred gigabits per second (Gbps) or faster, although most commercial applications to date have settled for between 2.5 and 10 Gbps. Experts have theorized that multiplexing technology can push fiber-optic capacity into the terabits—or trillion bits—per second (Tbps). For these reasons, it is frequently used for high-volume backbones connecting network segments. Another benefit of fiber-optic cable is that it is immune to electrical interference.
Wireless systems are also used for connecting workstations with the file server. Microwave dishes are among the oldest means of connecting computers over long distances, though they are limited to line-of-sight transmissions and can be affected by weather conditions. Depending upon frequency, microwave equipment can transmit up to 30 miles. Another option is satellite transmission, which has been used to transmit price changes among stores in national retail chains.
Networks also require connectors to interface computing devices with the connecting media. While mainframes usually have connectors built in, most PCs require the addition of a network interface card (NIC). Larger, more powerful computers require more expensive connections due to the cost of their high-performance microprocessors and support circuitry. Such devices often implement the protocol to which electronic messages on the network must conform.
Network software is needed to perform network functions. In a LAN, some type of network software is typically installed in each computer on the network, and a network operating system is run on the network servers. Two of the most common LAN networking packages are Microsoft's Windows NT and Novell's NetWare. Functions of network software include file transfer and real-time messaging, automatic formatting of e-mail, and creating directories and unique addresses for each node. Management utilities such as problem detection, performance analysis, configuration assistance, usage and accounting management (billing), and network security are usually included in network software packages.
The topology, or the physical layout, of the network is the concern of configuration management. The three main arrangements are the bus, ring, and star as shown in Figure 1 below. In the bus configuration, each node is connected to a common cable and detects messages addressed to it. Because it is reliable and uses the least amount of cabling, this layout is often used in offices. However, fiber-optic systems cannot usually be arranged this way.
In the ring layout, packets of information are retransmitted along adjacent nodes. It has the possibility of greater transmission distances and fiber-optic systems can use this layout. However, the components necessary can be more expensive. A popular implementation of ring topology is IBM's Token Ring configuration.
In the star arrangement, all traffic is routed through one central node. It offers the advantages of simplified monitoring and security. Also, unlike the other layouts, the failure of one node does not bring down the entire network, unless the failed node is the central one. That vulnerability is addressed in the clustered star layout, in which a number of star networks are linked together.
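The strength and the weakness of the star layout can both be shown by modeling the topology as an adjacency list and checking which nodes remain reachable after a failure (the node names here are arbitrary):

```python
def reachable(adjacency: dict, start: str, failed: set) -> set:
    """Breadth-first search over surviving nodes."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for neighbor in adjacency[node]:
            if neighbor not in failed and neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return seen

# A star: every leaf node connects only through the central hub.
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}

# A leaf failing leaves the rest of the network connected...
assert reachable(star, "a", failed={"b"}) == {"a", "hub", "c"}
# ...but losing the central node isolates every leaf.
assert reachable(star, "a", failed={"hub"}) == {"a"}
```

The clustered star layout mitigates this by linking several hubs, so the loss of one hub disconnects only its own leaves rather than the whole network.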
While topology refers to the physical layout of the network, architecture refers to the broad design of the rules computers must follow in order to communicate. The specific procedures that must be followed are called protocols; in this sense architectures are collections of protocols and may include other standards or specifications for hardware and software connectivity.
Architectures can be classified as either centralized or decentralized. The former is useful when many users need the same information; less maintenance is required to update the network. However, distributed processing via decentralized networks is becoming the standard as it allows work to be spread out, taking advantage of the capabilities of the ubiquitous and increasingly powerful PC.
The introduction of standards has reduced the cost of networking dissimilar products. Standardization organizations, whether government- or industry-sponsored, publish standards as profiles or abstract models that leave some parameters open for software and hardware developers. For example, one thing that has not been defined in operating standards for modems is what to do if transmission speed must be reduced due to a drop in line quality. Individual manufacturers have been left to solve this problem, with the result that different makes of modems may be unable to communicate in such a situation.
Although no networking system follows it exactly (often they leave out or combine the functions of certain upper layers), the OSI model continues to influence network architecture. OSI is based on layered architectures; i.e., different layers in the software and hardware are devoted to different network functions. The lower layers handle the physical exchange of data between machines, while the upper layers provide services closer to the user's applications.
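The layering idea can be illustrated as encapsulation: on the sending side each layer wraps the data from the layer above with its own header, and the receiver peels the headers off in reverse order. The layer names and header strings below are simplified placeholders for the OSI model's actual layers:

```python
# Selected layers, top to bottom; real OSI defines seven.
LAYERS = ["application", "transport", "network", "data-link"]

def encapsulate(data: str) -> str:
    """Sending side: application data moves down the stack, gaining a header per layer."""
    for layer in LAYERS[1:]:
        data = f"[{layer}-hdr|{data}]"
    return data

def decapsulate(frame: str) -> str:
    """Receiving side: headers are removed in reverse order, outermost first."""
    for layer in reversed(LAYERS[1:]):
        prefix = f"[{layer}-hdr|"
        assert frame.startswith(prefix) and frame.endswith("]")
        frame = frame[len(prefix):-1]
    return frame

frame = encapsulate("GET /index.html")
assert decapsulate(frame) == "GET /index.html"
```

This is why connecting devices can be classified by layer: a bridge only needs to read the outermost (data-link) header, while a gateway must unwrap the frame all the way up to the transport layer or beyond.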
It is the often complicated job of the network manager to ensure that all the hardware and software work together. An important aspect of the job is fault management, or the detection, isolation, and resolution of problems in the network. Performance management ensures that data exchange proceeds at an acceptable rate, a factor influenced by workload and the configuration of the network. Other duties include accounting management, monitoring user activity on the network, and security management (limiting access of certain files or the network itself to authorized users). Because network administration can require vast knowledge of rapidly changing technical issues—knowledge that is hard to maintain when network managers are also preoccupied with day-to-day service problems and other internal concerns—many larger companies have chosen to outsource some or all of these duties to specialized firms.