ARTIFICIAL INTELLIGENCE (AI)




Artificial intelligence (AI) encompasses a diverse range of computer applications, or components within applications, that use sets of rules and knowledge to make inferences. Unlike its roots as an esoteric discipline devoted to making computers emulate the human mind, modern AI—along with the technologies it has inspired—has many practical ramifications and delivers real benefits to users. In business applications, AI capabilities are often integrated with systems that serve the day-to-day needs of the enterprise, such as inventory tracking, manufacturing process controls, and customer service databases. Often, however, these newer, practical implementations of AI are not labeled as such because of negative associations with the term.

WHAT IS ARTIFICIAL INTELLIGENCE?

Defining AI succinctly is difficult because it takes so many forms, and partly because several groups of researchers with drastically different motivations are working in the field. One area of agreement is that artificial intelligence is a field of scientific inquiry rather than an end product. Perhaps the best definition is the one offered by Marvin Minsky: "Artificial intelligence is the science of making machines do things that would require intelligence if done by men."

HISTORY OF ARTIFICIAL INTELLIGENCE

Charles Babbage (1791-1871), an English mathematician, is generally acknowledged to be the father of modern computing. Around 1823 he built a working model of the world's first practical mechanical calculator. He then began work on his "analytical engine," which contained the basic elements of a modern-day computer. Unfortunately, he was unable to raise the funds needed to build the machine. Nevertheless, his ideas lived on.

Herman Hollerith (1860-1929), an American inventor, built the first working electromechanical tabulating machine, which was used to process the results of the 1890 U.S. census. There followed a series of rapid improvements to machines that allegedly "thought." The first true electronic computer, the Electronic Numerical Integrator and Computer (ENIAC), was completed in 1946. The so-called "giant brain" replaced mechanical switches with glass vacuum tubes. ENIAC used 17,468 vacuum tubes and occupied 1,800 square feet—the size of an average house. It weighed 30 tons. Scientists began at once to build smaller computers.

The transistor, invented at Bell Laboratories in 1947, marked the beginning of the second generation of computers, which appeared around 1959. Transistors replaced vacuum tubes, sped up processing considerably, and made possible a large increase in computer memory. Roughly a decade later, International Business Machines Corp. (IBM) ushered in third-generation computers, which replaced transistors with integrated circuits. A single integrated circuit could replace a large number of transistors on a silicon chip less than one-eighth of an inch square. More importantly, integrated circuits allowed manufacturers to dramatically reduce the size of computers. New software that took advantage of the increased speed and memory complemented these third-generation computers—which themselves proved to be short lived.

Only a few years after the appearance of integrated circuits in computers, Intel Corp. introduced the microprocessor in 1971: a single chip containing a computer's central processing unit. Prior to that time, computers contained specialized chips for functions such as logic and programming; Intel's invention placed all of a computer's processing functions on one chip. Scientists continued to improve on computers.

Miniaturization of chips led to large-scale integration (LSI) and very-large-scale integration (VLSI). LSI and VLSI enabled peripherals such as printers, and the software that drove them, to interact far more quickly with computers. They also made possible the microcomputer, which revolutionized the role of computers in business. More importantly, LSI and VLSI heightened scientists' interest in the development of AI.

DEVELOPMENTS IN ARTIFICIAL INTELLIGENCE

AI is the construction and/or programming of computers to imitate human thought processes. Scientists are trying to design computers capable of processing natural languages and reasoning. They believe that once machines can process natural languages such as English or Spanish, humans will be able to give instructions and ask questions without learning special computer languages. When that day arrives, machines, like humans, will be able to learn from past experience and apply what they have learned to solve new problems. Scientists have a long way to go, but they have made what they believe is a giant step in that direction with the invention of "fuzzy logic."

FUZZY LOGIC.

Since their inception, computers have acted on a "yes" or "no" basis. They simply have not been able to recognize "maybe." Even the most sophisticated computers, capable of performing millions of calculations per second, cannot distinguish between "slightly" and "very." This simple difference confounded AI scientists for years. However, an American researcher, Dr. Lotfi A. Zadeh of the University of California, Berkeley, proposed a possible answer, which he termed "fuzzy logic."

The concept is based on feeding the computer "fuzzy sets," or groupings of concrete information and relative concepts. For example, in a fuzzy set for industrial furnaces, a temperature of 1,000 degrees might have a "membership" (relative value) of 0.95, while a temperature of 600 degrees might have a membership of 0.50. A computer program might then apply instructions such as, "the higher the temperature, the lower the pressure must be." This approach means that programmers can teach machines to compute with words instead of just numbers.
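
To make the idea concrete, the sketch below (written in Python) shows one way a fuzzy membership function and the rule "the higher the temperature, the lower the pressure must be" might be expressed. The 600- and 1,000-degree breakpoints echo the furnace example above; the interpolation, the maximum pressure, and the other details are hypothetical choices made purely for illustration.

# A minimal fuzzy-logic sketch: a membership function maps a crisp
# temperature onto a degree of "hotness" between 0 and 1, and a simple
# rule turns that degree into a pressure setting.

def hot_membership(temp_f: float) -> float:
    """Degree to which a furnace temperature counts as 'hot' (0.0 to 1.0)."""
    if temp_f <= 600:
        return 0.50 * temp_f / 600             # ramps up to 0.50 at 600 degrees
    if temp_f >= 1000:
        return 0.95                            # capped at the 0.95 membership cited above
    return 0.50 + 0.45 * (temp_f - 600) / 400  # linear interpolation in between

def pressure_setting(temp_f: float, max_pressure: float = 100.0) -> float:
    """Apply the rule: the higher the temperature, the lower the pressure must be."""
    return max_pressure * (1.0 - hot_membership(temp_f))

for t in (300, 600, 800, 1000):
    print(t, round(hot_membership(t), 2), round(pressure_setting(t), 1))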

Historically, most complex mathematical models developed by programmers have computed strictly with numbers. The fuzzy logic approach to AI did not catch on in the American scientific community, however. It did catch on among the Japanese.

Japan-based Hitachi, Ltd. developed an artificial intelligence system based on fuzzy logic that allowed an automated subway system in Sendai, Japan, to brake more swiftly and smoothly than it could under human guidance. The Japanese Ministry of International Trade and Industry budgeted $36 million in 1990 to subsidize the initial operation of a Laboratory for International Fuzzy Engineering. Development of fuzzy engineering also took hold in China, Russia, and much of Western Europe. American scientists, however, pursued other aspects of AI.

In the early 1990s, a University of North Carolina professor developed a microprocessor chip with an all-digital architecture that allows it to run in conventional computers. The chip can handle 580,000 "if-then" decisions per second, which is more than 100 times faster than the best Japanese fuzzy-logic chip. Many U.S. companies have experimented with this and similar chips. The Oak Ridge National Laboratory is using the chip in robots to be employed in radioactive areas of nuclear power plants, the Oricon Corporation has used fuzzy logic in a signal analysis system for submarines, and NASA has experimented with fuzzy logic to help dock spacecraft.

EXPERT SYSTEMS.

Other AI applications are also in use; one is the so-called expert system. Expert systems are computer-based systems that apply the substantial knowledge of a specialist—be it in medicine, law, insurance, or almost any field—to help solve complex problems without requiring a human to work through each one. In developing such systems, designers usually work with experts to determine the information and decision rules (heuristics) that the experts use when confronted with particular types of problems. In essence, these programs attempt to imitate an expert's behavior rather than reason through problems on their own.
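
As an illustration of how such decision rules might be encoded, the following Python sketch shows a tiny forward-chaining rule engine. The rules themselves are hypothetical troubleshooting heuristics invented for this example and are not drawn from any real expert system.

# Each rule pairs a condition over the known facts with a conclusion to add.
# The engine keeps firing rules until no new conclusions can be inferred.

RULES = [
    (lambda f: f.get("engine_cranks") is False, "check_battery"),
    (lambda f: "check_battery" in f and f.get("battery_ok") is False, "replace_battery"),
    (lambda f: f.get("engine_cranks") and f.get("engine_starts") is False, "check_fuel_supply"),
]

def run_rules(facts: dict) -> dict:
    """Forward-chain over the rule base until it reaches a fixed point."""
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if conclusion not in facts and condition(facts):
                facts[conclusion] = True
                changed = True
    return facts

print(run_rules({"engine_cranks": False, "battery_ok": False}))
# adds 'check_battery' and then 'replace_battery' to the fact base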

There are several advantages to expert systems. For example, they give novices "instant expertise" in a particular area. They capture knowledge and expertise that might be lost if a human expert retires or dies. Moreover, the knowledge of multiple experts can be integrated, at least theoretically, to make the system's expertise more comprehensive than that of any individual. Expert systems are not subject to human problems of illness or fatigue, and, if they are well designed, can be less prone to inconsistencies and mistakes. These benefits make them particularly attractive to businesses.

Companies also use expert systems for training and analysis. General Electric, for instance, developed a system called Delta that helps maintenance workers identify and correct malfunctions in locomotives. Digital Equipment Corporation uses XCON (derived from "expert configurer") to match customers' needs with the most appropriate combination of computer input, output, and memory devices. The system uses more than 3,000 decision rules and 5,000 product descriptions to analyze sales orders and design layouts, ensuring that the company's equipment will work when it arrives at customers' sites. XCON catches most configuration errors and eliminates the need for completely assembling a computer system for testing and then breaking it down again for shipment to the customer. The system is expensive, however: DEC spends $2 million per year just to update XCON. In fact, cost is one of the most prohibitive factors in the development of AI systems. When such a system is implemented effectively, however, the money it saves in staff hours and averted human errors can quickly recoup development costs. In large corporations the savings can accrue to tens of millions of dollars per year.

A moderate-sized system, consisting of about 300 decision rules, generally costs between $250,000 and $500,000 to design. That is a great deal of money to spend on a system that does little more than play chess—which is essentially what some designers built back in the 1960s.

EXPERIMENTAL GAMES.

Scientists in the 1960s developed chess-playing machines in an attempt to create machines that could think for themselves. They made tremendous strides in developing sophisticated decision trees that could map out possible moves, but those trees contained so many potential alternatives that even contemporary supercomputers could not assess them all within a reasonable amount of time. By reducing the number of alternatives considered, designers eventually enabled the machines to play at the chess-master level. To simulate the thinking process, the computers processed large amounts of data on alternative moves. Some of these experiments continue to the present. A highly publicized success in this area came in 1997, when an IBM supercomputer named Deep Blue beat world chess champion Garry Kasparov in a match.
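
The kernel of such programs is a search over a game tree, with the depth of the search limited to keep the number of alternatives manageable. The Python sketch below illustrates the idea on a tiny abstract tree; it is not a chess engine, and the tree, scores, and depth limit are all made up for illustration.

# Depth-limited minimax over a toy game tree. Inner nodes list the positions
# reachable in one move; leaves carry a score from the first player's view.

GAME_TREE = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
LEAF_SCORES = {"a1": 3, "a2": -1, "b1": 0, "b2": 5}

def minimax(node: str, depth: int, maximizing: bool) -> int:
    """Best score reachable from `node`, looking at most `depth` moves ahead."""
    if node in LEAF_SCORES:
        return LEAF_SCORES[node]
    if depth == 0:
        return 0  # depth limit reached: fall back to a crude static estimate
    scores = [minimax(child, depth - 1, not maximizing) for child in GAME_TREE[node]]
    return max(scores) if maximizing else min(scores)

print(minimax("start", depth=2, maximizing=True))
# prints 0: the opponent can hold branch 'a' to -1, so 'b' is the better move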

NEURAL NETWORKS.

Neural networks go one step further than expert systems in bringing stored knowledge to bear on practical problems. Instead of simply leading the user to a piece of knowledge already captured in the system, neural networks process patterns in data to acquire abilities they were not explicitly programmed with. In a sense, they learn to do things for the user: the system is trained by being fed data, which it then analyzes for patterns. This approach has proven highly effective in a number of fields, including finance, information technology management, and health care.

For instance, a neural network might be employed to predict which loan applicants are too risky. Rather than programming the computer with exact, user-defined criteria for what constitutes a risky applicant, the lender would train the neural network on a large volume of application data from past loans—especially details about the problematic ones. The neural network would process the data thoroughly and arrive at its own evaluation criteria. Then, as new applications come in, the computer would use this knowledge to predict the risks involved. As time passes, the neural network could receive periodic (or even continuous) retraining on new data so that it continues to hone its accuracy based on current trends. Real-life systems such as this have enjoyed a high success rate and have reduced the number of bad loans at the lending institutions that use them.
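
A heavily simplified sketch of this training process appears below (in Python). It trains a single artificial neuron, the smallest possible "network," on a handful of made-up loan records; the features, labels, learning rate, and number of training passes are hypothetical choices for illustration, and a production system would use far more data and a larger network.

import math

# Synthetic history: (debt-to-income ratio, years employed) -> 1 if the loan went bad.
PAST_LOANS = [
    ((0.60, 0.5), 1), ((0.55, 1.0), 1), ((0.20, 8.0), 0),
    ((0.25, 5.0), 0), ((0.70, 0.2), 1), ((0.15, 10.0), 0),
]

def predict(weights, bias, x):
    """Sigmoid output: estimated probability that the applicant is too risky."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, learning_rate=0.1):
    """Gradient descent on log loss: the network derives its own weighting of the inputs."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            error = predict(weights, bias, x) - y
            weights = [w - learning_rate * error * xi for w, xi in zip(weights, x)]
            bias -= learning_rate * error
    return weights, bias

w, b = train(PAST_LOANS)
print(round(predict(w, b, (0.65, 0.3)), 2))  # high debt, short tenure: high risk score
print(round(predict(w, b, (0.18, 9.0)), 2))  # low debt, long tenure: low risk score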

APPLICATIONS OF AI IN THE BUSINESS WORLD

AI is being used extensively in the business world, even though the discipline itself is still in the embryonic stages of development. Its applications cross a wide spectrum: AI is being applied in management and administration, science, engineering, manufacturing, financial and legal areas, military and space endeavors, medicine, and diagnostics.

Some AI implementations include natural language processing, database retrieval, expert consulting systems, theorem proving, robotics, automatic programming, scheduling, and solving perceptual problems. Management is relying more and more on knowledge work systems, which aid professionals such as architects, engineers, and medical technicians in creating and disseminating new knowledge and information. One such system is in use at Square D, an electrical component manufacturer, where a computer does the design work for giant units of electrical equipment. The units generally share the same basic elements but vary in required size, specifications, and features. However, as is the case with most AI-type systems, human intervention is still required: an engineer checks the computer-produced drawing before the equipment is put into production.

Senior managers in many companies use AI-based strategic planning systems to assist in functions such as competitive analysis, technology deployment, and resource allocation. They also use programs to assist in equipment configuration design, product distribution, regulatory-compliance advisement, and personnel assessment. AI is contributing heavily to management's organization, planning, and controlling operations, and will continue to do so with more frequency as programs are refined.

AI is also influential in science and engineering, where applications help organize and manipulate the ever-increasing amounts of information available to scientists and engineers. AI has been used in complex processes such as mass spectrometry analysis, biological classification, and the creation of semiconductor circuits and automobile components. It is also used with increasing frequency in diffraction and image analysis; power plant and space station design; and robot sensing, control, and programming. It is the increased use of robotics in business that alarms many critics of artificial intelligence.

Robots are being utilized more frequently in the business world. In 1990, over 200,000 robots were in use in U.S. factories, and experts predict that by the year 2025 robots could replace humans in almost all manufacturing jobs, including not only the mundane tasks but also those requiring specialized skills. They will be performing jobs such as shearing sheep, scraping barnacles from the bottoms of ships, and sandblasting walls. There are jobs, however, such as surgery, that robots are unlikely ever to perform on their own. Of course, there will still be a need for people to design, build, and maintain robots. Yet once scientists develop robots that can think as well as act, there may be less need for human intervention. Thus, the social ramifications of AI are of major concern to people today.

THE FUTURE OF ARTIFICIAL INTELLIGENCE

In spite of its great advances and strong promise, AI, in name, has suffered from low esteem in both academic and corporate settings. To some, the name is inextricably—and unfavorably—associated with impractical chess-playing computers and reclusive professors trying to build a "thinking machine." As a result, many developers of AI theories and applications consciously shun the moniker, preferring instead the newer jargon of fuzzy applications, flexible software, and data-mining tools. By avoiding the label AI, they have found more receptive audiences among corporate decision-makers and private investors for their AI-inspired technologies.

Thus, while the practices and ideas known as AI are hardly dead, the name itself is drifting toward obscurity. This is true not only because of the perceived stigma, but also as a consequence of the diversity of ways in which AI concepts have been implemented. Furthermore, these concepts are verging on ubiquity in applications programming. Such disparate objectives as building a customer order system, implementing a self-diagnostic manufacturing system, designing a sophisticated search engine, and adding voice-recognition capabilities to applications all employ AI theories and methods. Indeed, Ford Motor Company was slated to implement an engine-diagnostic neural network in its car computers beginning in the 2001 model year. With AI so entrenched in modern software development, it has lost much of its distinction from software generally.

FURTHER READING:

Armstrong, Larry. "Ford Slips a Neural Net under the Hood." Business Week, 28 September 1998. Available from www.businessweek.com.

Buck, Neena. "Just Don't Call It AI." Computerworld, 13 January 1997.

Gruppe, Fritz H., Tony von Sadovszky, and M. Mehdi Owrang O. "An Executive's Guide to Artificial Intelligence." Information Strategy, Fall 1995, 44-48.

Lyons, Daniel. "Artificial Intelligence Gets Real." Forbes, 30 November 1998.

——. "The New Face of Artificial Intelligence." Forbes, 30 November 1998.

Taylor, Paul. "Breakthroughs in Business Intelligence." Financial Times, 7 May 1997. Available from www.ft.com.


