A graphical user interface (GUI, often pronounced "gooey") is a type of computer input/output system that represents directions, programs, and files with pictures and spatial relations. In addition to the visual elements on the screen, the interface in its broadest sense includes the input devices, such as the mouse and keyboard, through which the user manipulates those elements.

In the typical GUI, instead of laboriously typing commands at a prompt to tell the computer what to do, the user can simply choose commands by activating or manipulating the pictures—e.g., clicking on a button or dragging an icon—with an input device like a mouse. GUIs are intended to make computers "user friendly" by simplifying tasks and decisions, and by creating a visual representation of a computer system to which people can more easily relate.

A significant aspect of GUIs is that they're not merely different to look at, but they also can increase the efficiency of learning and usage over text-based interfaces. They can also lead to higher productivity because they lend themselves better to performing multiple tasks at once. Well-designed GUIs not only represent files, programs, and procedures visually, but also provide streamlined methods for completing tasks and take into account the users' needs and expectations.

In addition to familiar GUIs like Microsoft Corp.'s Windows and Apple Computer's Macintosh operating systems, other widely used GUIs include IBM's OS/2 and a variety of Unix/Linux-based GUIs.



The term "interface" was adopted early on by the computer industry, during the 1950s, to describe computer video displays, or "user interfaces." Early user interfaces, such as the one employed by the Whirlwind Computer developed at MIT in 1950, consisted of cathode-ray tubes (CRTs) that displayed simple input prompts. They featured crude interaction and a dearth of graphical representation. Users had to enter complicated commands flawlessly and typically had to commit the commands to memory or refer to user manuals.

During the 1960s more advanced displays were developed that offered richer interactive capabilities, meaning that the user could provide input to the computer more easily and efficiently. Among the earliest advanced interactive systems were those developed by Ivan Sutherland, including his Sketchpad drawing system. Among other innovations, Sutherland created a graphical interaction method that allowed computer users to provide input using a light pen (a handheld pointing device that sensed objects on a cathode-ray tube). His research laid the groundwork for more advanced systems that emerged during the mid- and late 1960s, including computer-aided design (CAD) and computer-aided manufacturing (CAM) systems that used styli to draw forms and choose commands.

Computer systems that utilized graphical interface technology were implemented in the commercial sector during the late 1960s and early 1970s, but only by the most technologically advanced enterprises—the General Motors Corp. engineering department, for example, was among the first organizations to use graphical interaction technology for product design.

Xerox Corp.'s Palo Alto Research Center (PARC) is generally credited with developing the first true GUI, first demonstrated on its experimental Alto computer and later introduced commercially on the company's ill-fated Star computer model. The system featured a pointing device, the mouse, that interacted with visual elements to accomplish various tasks.

Although Xerox's product proved a failure, Apple Computer, Inc. soon popularized the GUI concept with the 1984 release of its Macintosh personal computers. The Macintosh rejected the traditional command-driven approach pioneered by International Business Machines Corp. (IBM) in favor of a menu-driven, object-oriented interface. The Macintosh's user-friendly interface was designed to mimic the top of a desk. Programs and files were accessible through pictures, or icons, on the computer screen, and many features and procedures could be accessed through mouse actions rather than typing. Likewise, other commands and choices were represented by icons of trash cans, chalkboard erasers, diskettes, and other relevant symbols.

Although many purists initially denounced the Macintosh GUI as simplistic and juvenile, Macintosh became the computer of choice for many businesses that sought to reduce training expenses and simplify their PC systems. In response, Microsoft Corp. developed Windows, an "overlay" GUI for its popular command-driven MS-DOS operating system run on many IBM PCs. However, despite its popularity, it took Windows a decade before it began to capitalize on some of the more powerful uses of GUIs.


A corollary to the creation of GUIs was the concurrent development of increasingly powerful microprocessors and efficient memory storage devices. New microprocessors delivered the processing prowess necessary to handle the complex mathematical demands of GUIs, and powerful memory devices accommodated the massive amounts of code needed to run GUIs.

Besides advancements in processing and storage, however, progress in graphical display technology contributed to the gradual dominance of GUIs. Of particular importance is raster graphics technology. Raster video displays use bit maps (arrays of dots, or pixels, on a computer screen) to form graphics. A simple bit map represents each pixel with a bit (a binary digit, 0 or 1) of information. The bit map is made up of a rectangular array of pixels, each of which is independently addressable by the computer. The value of each bit in the array can be changed to cause the corresponding pixel to become white or black. Systems that assign multiple bits to each pixel create advanced bit maps called "pixel images." Pixel images assign different shades of gray, or even different colors and intensities, to each pixel.
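The distinction above between a 1-bit bitmap and a multi-bit pixel image can be sketched in a few lines of Python (an illustrative sketch only; the helper name `set_pixel` and the dimensions are hypothetical):

```python
# A 1-bit bitmap: each pixel holds a single bit, 0 (black) or 1 (white).
WIDTH, HEIGHT = 8, 4
bitmap = [[0] * WIDTH for _ in range(HEIGHT)]

def set_pixel(bmp, x, y, value):
    """Each pixel is independently addressable by its (x, y) coordinates."""
    bmp[y][x] = value

set_pixel(bitmap, 3, 1, 1)   # turn one pixel white

# A "pixel image": multiple bits per pixel allow shades of gray or color.
# With 8 bits per pixel, the values 0-255 encode 256 gray levels.
pixel_image = [[0] * WIDTH for _ in range(HEIGHT)]
pixel_image[1][3] = 128      # mid-gray, rather than just black or white

print(bitmap[1][3])       # 1
print(pixel_image[1][3])  # 128
```

The only difference between the two structures is the range of values a pixel may hold, which is exactly the "multiple bits per pixel" idea described above.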

GUIs use raster displays to form text and graphics on a computer screen because the array of pixels can be manipulated to create forms (i.e. text and pictures) that, when viewed from a distance, are seemingly contiguous. Raster displays were developed as an alternative to vector displays, which create graphics using solid lines. Raster displays are much more amenable to graphic applications because, besides allowing more acute manipulation of images, they make it easier for the user to create shades and colors.
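How a raster display approximates the "solid lines" of a vector display can be illustrated with Bresenham's classic line algorithm, which selects the pixels that best follow the ideal line (a simplified sketch, not drawn from any particular display system):

```python
def rasterize_line(x0, y0, x1, y1):
    """Bresenham's algorithm: approximate a straight line with pixels."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))       # light this pixel
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:                  # step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:                  # step vertically
            err += dx
            y0 += sy
    return pixels

print(rasterize_line(0, 0, 4, 2))
# [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```

Viewed from a distance, the stair-stepped pixels appear as a contiguous line, which is precisely the effect the passage above describes.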


The primary benefit of GUIs is that they create a visual representation of a computer system and its functions that is more natural and easier for users to comprehend and conceptualize, hence the idiom "user friendly." Because computer files, programs, and commands are represented by familiar icons, the user is effectively able to operate in an affable environment. This is particularly powerful when visual standards are implemented across multiple applications (e.g., a data file in any number of unlike applications is always opened by choosing the "Open" option from a menu called "File"). In contrast, traditional command-driven interfaces force users to remember commands or search for the proper directives in "help" text or manuals. Using a command-driven interface is analogous to telling somebody how to repair a car engine over the telephone, while a GUI effectively lets you look at the engine and fix it yourself without having to carefully recall and verbalize specific instructions.

In addition to facilitating visualization, a major and obvious benefit of GUIs is that they allow users to quickly accomplish object-oriented tasks, such as drawing lines and shapes, repositioning or resizing pictures and text, and other graphical manipulations. Such tasks are usually accomplished through peripheral input devices such as a mouse, stylus, or joystick. A less obvious GUI benefit is that applications developed for use on GUIs are device-independent. That means that as an interface changes to incorporate new peripheral devices, such as a printer or memory device, applications can utilize those peripherals with little or no modification. Command-driven interfaces, in contrast, might require the user to supply commands that would tell the software exactly what the peripherals are and how they should be used.
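The device-independence point can be sketched abstractly (all class and function names here are hypothetical, not taken from any real GUI toolkit): the application draws against a generic device interface, and a new peripheral plugs in simply by implementing that interface, with no change to the application code.

```python
from abc import ABC, abstractmethod

class OutputDevice(ABC):
    """Generic interface the application draws against."""
    @abstractmethod
    def draw_text(self, text: str) -> str: ...

class Screen(OutputDevice):
    def draw_text(self, text):
        return f"screen: {text}"

class Printer(OutputDevice):
    # A newly added peripheral: the application code below is untouched.
    def draw_text(self, text):
        return f"printer: {text}"

def render_document(device: OutputDevice, text: str) -> str:
    # Application logic is written once, against the abstraction.
    return device.draw_text(text)

print(render_document(Screen(), "hello"))   # screen: hello
print(render_document(Printer(), "hello"))  # printer: hello
```

A command-driven system without such an abstraction would instead require the user, or the program, to spell out device-specific instructions for every peripheral.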

While GUIs have many "spatial" advantages, they also incorporate conventional "linguistic" interface techniques used by command-driven systems. For example, an architect or engineer may benefit from being able to draw lines and shade regions using a mouse. But technical drawings require that he or she be able to tell the computer precisely, perhaps within 1/1000th of an inch or less, where to place a point or coordinate. This requirement brings to light an advantage command-driven systems have over some GUIs: verbal commands are often more precise than point-and-click visual directives. Therefore, GUIs are generally an amalgam of object-oriented and command-driven technologies.
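This mix of spatial and linguistic input can be sketched as follows (a hypothetical CAD-style helper, not any real package's API): a mouse click supplies an approximate point, which a typed command may override with an exact coordinate, with both snapped to a 1/1000-inch grid.

```python
def place_point(clicked, typed=None, grid=0.001):
    """Place a point from a mouse click, optionally overridden by a
    typed coordinate, snapped to a 1/1000-inch grid."""
    x, y = typed if typed is not None else clicked

    def snap(v):
        return round(v / grid) * grid

    return (snap(x), snap(y))

# Mouse alone: the imprecise click is snapped to the grid.
print(place_point((1.2503, 2.0004)))                      # (1.25, 2.0)
# A typed command overrides the click with an exact coordinate.
print(place_point((1.2503, 2.0004), typed=(1.25, 2.0)))   # (1.25, 2.0)
```

The snapping step stands in for the GUI's spatial input, while the `typed` override plays the role of the precise verbal command described above.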


The test of any GUI is how easily it allows users to accomplish exactly what they need to do, quickly and accurately. Software developers must model the tasks the user will want to perform and design an interface to optimize for those tasks. Effective GUIs possess both internal and external consistency, are easy to understand and interpret, and minimize the number of steps it takes to get the proper results.


Internal consistency means that terminology, layout, and procedures in one part of the program are consistent with those in another part. For example, a common failure of internal consistency occurs in some Windows word processing programs that display different icon toolbars depending on the task the user is trying to complete. Say the user is creating a form letter using the mail merge feature. In some programs the normal toolbar may disappear or be modified to include the mail merge icons. However, if the user needs to reference one of the features on the normal toolbar, he or she must enter a setup screen and reselect that toolbar, or else find the menu-based command for that feature. Although the automatic toolbar switch was no doubt intended as a convenience, it removes from view features that remain logical options the user is accustomed to seeing, making it a time-wasting internal inconsistency.


Equally important is consistency with other widely used programs or the operating system itself. External consistency extends to terminology, layout, and procedures that follow the conventions of the operating system and of other applications the user already knows.

In Macintosh systems, for example, most applications include a standard dialog box for opening files. The box lists the current folder and its contents and contains several standard buttons labeled "Desktop," "Cancel," "Open," and so on. Suppose a software vendor chose to ignore the convention by leaving off the "Desktop" button or by placing "Open" above "Cancel" rather than below. Such inconsistencies could be annoying at the very least, and might cause a user to choose the wrong option. While in this example the choices and consequences can be readily inferred, in a complex software package there may be hundreds of such design choices that can make the application easier or harder to use in relation to the norms of the operating system and other applications.


Poorly designed GUIs fail to take advantage of the medium's power to make computing intuitive and efficient. They make users go through needless sequences of keystrokes or mouse clicks to complete simple tasks, they obscure functions and procedures with cryptic or inadequate labeling, and they consume their users' time and patience with their idiosyncrasies. (This is of course not to diminish the great value of path-breaking innovations—those departures from convention can be well justified.) In his book The Essential Guide to User Interface Design, Wilbert O. Galitz attributed bad design to two main factors:

  1. a lack of time devoted to design efficiency, e.g., the software vendor rushed the program to the market without spending much time on design issues; and
  2. a lack of understanding of how design affects efficiency in real-life applications, e.g., failing to put the most commonly used options in the most convenient place.

Inexperienced software developers may be forgiven if they alternately cram too many features and options into one confusing screen and fail to include linkage to the same features on other relevant screens. However, design problems are endemic in GUIs, and poorly conceived designs can be a drain on both productivity and patience. This observation highlights the reality that GUIs aren't, as some people assume, intrinsically better than text-based systems just because they're graphical. The entire interface—all of the ways in which the user interacts with the software—must be suited to the tasks it is needed for.

For all of MS-DOS's limitations, longtime users of popular DOS-based business programs regularly reported losses in efficiency once they migrated to the Windows version of the same program. The reason wasn't necessarily that they were old fashioned and didn't appreciate the Windows interface; it was often that the early Windows implementations of the programs added awkward or time-consuming steps to common tasks and failed to capture the efficiencies, including speed, of the DOS versions.


As Galitz noted, the trend since the introduction of GUIs has been away from an application orientation ("I will open the word processor in order to edit my document") and toward an object orientation ("I will select my document and it will open whatever application I need"). Newer GUIs are further smoothing the road between accessing data stored on local computers and that obtained from networks such as corporate intranets and the Internet. Some forward-looking observers have coined the phrase "network user interface," or NUI, to describe these sorts of integrated computing environments.

For Windows users, the first brush with a NUI came in 1998 with Microsoft's release of Windows 98, a minor upgrade to its more revolutionary Windows 95 version. The 1998 version included optional functionality to integrate the Internet browser with the main desktop and to store commonly used web addresses among commonly used local files. A more radical departure was in store with Windows 2000, which was to finally rid Windows of its DOS ties and was to be based instead on Microsoft's popular Windows NT networking platform.

Finally, graphics in current user interfaces may gradually begin to lose favor to other forms of input/output, notably sound. Significant development efforts were underway by several of the leading software developers at the turn of the 21st century to create reliable voice-recognition (VR) software, implying a vocal user interface. While some modest VR products were already offered in the consumer market for dictating documents and executing simple system commands, they were still very prone to errors. Advanced speech recognition systems would need to have a near-perfect recognition rate.


Fowler, Susan. GUI Design Handbook. New York: McGraw-Hill, 1998.

Galitz, Wilbert O. The Essential Guide to User Interface Design. New York: John Wiley & Sons, 1996.

Halfhill, Tom R. "Good-Bye, GUI, Hello, NUI." Byte, July 1997.

"The History of Windows." PC Magazine, August 1998.

Sarna, David E.Y., and George J. Febish. "Life without Microsoft." Datamation, June 1997.

Weinschenck, Susan, and Sarah C. Yeo. Guidelines for Enterprise Wide GUI Design. New York: John Wiley & Sons, 1995.
