History Trails
 

Questions of Knowledge
Investigations in the Field of Artificial Intelligence

Outside Renee Elio's door, someone has tacked a bumper sticker to the wall. The sticker proclaims the coming-of-age of a new area of scientific research. "Artificial Intelligence—it's for real," it says.

As she explains her work, Dr. Elio gives an added meaning to the message of the sticker. Artificial intelligence, she says, is really about knowledge and intelligence in general—in that respect there's nothing artificial about it. Then she adds: "But that's my bias." While others may choose to look at "artificial" or "machine intelligence" in complete isolation from the human cognitive processes, to her this field of investigation is intimately interwoven with questions about the way knowledge underlies the abilities we associate with intelligence in people: problem solving, language understanding, learning, planning, and reasoning—to name but a few.

Dr. Elio is an assistant professor in the University's department of computing science. Her research interests are in the related areas of artificial intelligence, expert systems, human information processing and user-system interfaces. A native of Connecticut with degrees from Smith College in Massachusetts and Carnegie Mellon University in Pittsburgh, she brings a diverse background to her teaching and research. Her first degree was in cognitive psychology, and before coming to the University she spent two and a half years at the Alberta Research Council where she developed an artificial intelligence system for predicting hailstorms (an example of a so-called expert system).

Dr. Elio's research can be divided into two broad areas. The first involves knowledge representation associated with expert systems: systems which attempt to mimic the thinking of human experts to provide "intelligent consultants" to address real-world problems. In particular, she is interested in how best to represent and use causal knowledge, an understanding of how elements in complex devices, systems, or processes work. (If your car fails to start when you turn the key, you don't get out and check the air in the tires; that's causal knowledge.) More facts may mean a smarter expert system, she says, but issues of organization and control are extremely important. "If a system is given a large amount of information, how is it best organized and deployed?" is the question she asks.

Dr. Elio's other broad interest is in machine learning and the mechanisms underlying the learning process. This involves two approaches: simulations of human learning using data from actual experiments related to human learning; and the methodology of getting machines to learn independently of how humans learn. Teaching machines how to learn requires a good understanding of how knowledge is represented, she says. A crucial goal in the science of artificial intelligence is, then, knowledge representation: how to acquire, modify and use knowledge to solve problems.

In contrast to a data base, a knowledge base, explains Dr. Elio, is a collection not only of facts but also of relationships among facts and mechanisms which operate on those relationships to draw inferences. The possession of knowledge, as opposed to data, implies that while every fact is not always available, there are processes and mechanisms which allow inferences leading to new facts. A knowledge base can be structured in many ways, she says. "One way would be by semantic relationships, that is, different associations among the same set of facts." By "semantic relationships" she means relationships that carry some meaning, allowing inferences to be made. She gives an example: "We know that canaries are birds and there could be lots of facts stored about birds, some of which will be true of canaries. Once we know we are dealing with birds, we know that there are a number of things that can be labelled as 'valid' and 'invalid' assertions about birds and, thus, canaries. Knowing Tweety is a canary, we can infer Tweety is yellow. We also might infer that Tweety does not talk—but that's a risky assumption here since Tweety, it turns out, is a cartoon character."
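The Tweety example can be sketched as a small program. This is a minimal illustration only, assuming a simple "is-a" hierarchy with property inheritance; the class and its methods are invented here, not taken from any actual system.

```python
# A toy knowledge base: facts are attached to categories, and "is-a"
# links let a query inherit facts stored at more general levels.

class KnowledgeBase:
    def __init__(self):
        self.is_a = {}          # e.g. "canary" -> "bird"
        self.properties = {}    # e.g. "bird" -> {"can_fly": True}

    def add_is_a(self, child, parent):
        self.is_a[child] = parent

    def add_property(self, category, prop, value):
        self.properties.setdefault(category, {})[prop] = value

    def infer(self, subject, prop):
        # Walk up the is-a chain: anything stored about birds is assumed
        # to hold for canaries unless a more specific level overrides it.
        category = subject
        while category is not None:
            props = self.properties.get(category, {})
            if prop in props:
                return props[prop]
            category = self.is_a.get(category)
        return None  # not stored and not inferable

kb = KnowledgeBase()
kb.add_is_a("canary", "bird")
kb.add_is_a("tweety", "canary")
kb.add_property("bird", "can_fly", True)
kb.add_property("canary", "color", "yellow")

print(kb.infer("tweety", "color"))    # inherited from the "canary" level
print(kb.infer("tweety", "can_fly"))  # inherited from the "bird" level
```

Nothing about Tweety is stored directly; both answers are reached through the semantic relationships, which is the distinction the passage draws between a knowledge base and a data base.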

Dr. Elio also points out that a knowledge-based system will contain expectations, things to be anticipated based on items of information. For example, when we ask someone, "Do you know the time?" we don't expect "Yes" for an answer. "The ability for systems to understand what is left unsaid is an important research topic in building natural language systems and intelligent user-system interfaces," says Dr. Elio.

She goes on to add that knowledge bases don't replace data bases but are used for reasoning from information on hand. "Sometimes," she points out, "this involves recovery from wrong inferences, just as people do." This recovery process represents a major research goal in the treatment of what is referred to as non-monotonic logic. In systems based on conventional monotonic logic, the number of facts known to be true strictly increases over time; a new fact can never cause an old fact to become invalid. But as soon as a situation arises where assumptions are made along the way (for instance, we might reason, "I don't know if this statement is true, but it usually is, so treat it as true") and an assumption is later found to be wrong, making it necessary to backtrack and withdraw facts, we are dealing with non-monotonic logic.

To elaborate further, Dr. Elio invents the example of Sam the eagle. "When we first learn that Sam is an eagle, we naturally assume that he is alive. If we later discover that Sam is stuffed, many default assumptions (that Sam would fly, need food and water, and so on) are no longer true and neither are any conclusions that were based on his vitality." Default reasoning is a crucial aspect of intelligent behavior and much of what we call common sense reasoning, says Dr. Elio, and the provision of means for recovery when it fails is a major task for researchers in the area of knowledge representation.
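The Sam-the-eagle example suggests how default reasoning with withdrawal might look in code. Everything below is an illustrative assumption: a toy reasoner that records which assumption supports each default conclusion, and withdraws those conclusions when a contradicting fact arrives (the non-monotonic step).

```python
# A toy non-monotonic reasoner: hard facts are never withdrawn, but
# default conclusions carry their supporting assumption and are retracted
# when a new fact negates that assumption.

class DefaultReasoner:
    def __init__(self):
        self.facts = set()       # hard facts, never withdrawn
        self.conclusions = {}    # conclusion -> the assumption supporting it

    @staticmethod
    def _negate(statement):
        return statement[4:] if statement.startswith("not ") else "not " + statement

    def tell(self, fact):
        self.facts.add(fact)
        # Non-monotonic step: a new fact can invalidate an assumption,
        # so withdraw every conclusion that rested on its negation.
        invalidated = self._negate(fact)
        self.conclusions = {c: s for c, s in self.conclusions.items()
                            if s != invalidated}

    def assume(self, conclusion, assumption):
        # Draw `conclusion` by default unless the facts contradict `assumption`.
        if self._negate(assumption) not in self.facts:
            self.conclusions[conclusion] = assumption

    def believes(self, statement):
        return statement in self.facts or statement in self.conclusions

r = DefaultReasoner()
r.tell("sam is an eagle")
r.assume("sam can fly", "sam is alive")    # eagles are assumed alive by default
r.assume("sam needs food", "sam is alive")
print(r.believes("sam can fly"))           # True: the default still holds

r.tell("not sam is alive")                 # we learn Sam is stuffed
print(r.believes("sam can fly"))           # False: the conclusion is withdrawn
```

In a monotonic system the set of beliefs could only grow; here, telling the system one new fact shrinks it, which is exactly the backtracking behavior the passage describes.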

In highlighting the difference between a knowledge-based and a non-knowledge-based system, Dr. Elio draws upon her experience at the Alberta Research Council. "Determining future weather conditions could be achieved by a system which is based on a statistical model of weather forecasting, employing data tables, equations, and logic trees." That would not be a knowledge-based approach. "A knowledge-based system about weather," she explains, "tries to maintain meteorological concepts and strategies that simulate the reasoning of an expert in the field."

Dr. Elio and a graduate student are currently using the latter approach as they look at how to represent a qualitative understanding of some processes in cell biology. Through an understanding of the structure of a cell and the biological processes affecting it, the computer should be able to make reasonable predictions about "what if" situations. For instance, if the cell were put in solution, would it burst or shrink? In many cases a person with a reasonable understanding of how a cell works can answer such questions without resort to equations, through qualitative reasoning.

The goal of the cell biology project is not really the building of a system that can make predictions (a set of equations might do that) but rather to explore issues in representing causal knowledge. "With this kind of knowledge," she says, "the system will be able to examine its knowledge about mechanisms and processes and, perhaps, reason about what assumptions might be wrong if its predictions did not match what was subsequently presented as observed data."
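The flavor of such qualitative reasoning can be sketched in a few lines: instead of solving equations, compare relative solute concentrations and apply a directional rule. The rule, the level names, and the function are illustrative assumptions, not taken from the actual cell-biology system.

```python
# Qualitative prediction: water moves toward the higher solute
# concentration (osmosis), so only the *relative* levels matter,
# never their numeric values.

LEVELS = {"low": 0, "medium": 1, "high": 2}   # an ordering, nothing more

def predict_cell_response(outside, inside):
    """Predict, qualitatively, what a cell does when placed in solution."""
    o, i = LEVELS[outside], LEVELS[inside]
    if o < i:                    # hypotonic surroundings: water flows in
        return "swells (may burst)"
    if o > i:                    # hypertonic surroundings: water flows out
        return "shrinks"
    return "no net change"       # isotonic

print(predict_cell_response("low", "high"))   # e.g. a cell in nearly pure water
```

Because the knowledge is stored as a mechanism (water follows the concentration gradient) rather than as a numeric model, a system built this way has something to examine when a prediction fails, which is the point made in the quotation above.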

A common thread running through almost all of the research in which Dr. Elio is involved is her interest in how knowledge helps us learn. In the realm of machine learning this has led to an interest in spontaneous knowledge reconstruction: how does a system know that its current organization is somehow flawed, and how does it qualitatively reorganize its knowledge? "Knowledge reorganization is a form of learning," she says, and elaborates: "It is a recognition that A isn't really associated with B but rather with C. We learn these things from experience, so knowledge reorganization includes finding new associations, possibly breaking old ones, and realizing that several associations should be packaged together since they're always used at the same time." The long-range goal is to discover simple general mechanisms that will allow machines to reorganize knowledge based on experience in using that knowledge.

Dr. Elio also maintains a keen interest in how humans learn. "Given a proposed theory of how we learn," she says, "one tries to build a model that will allow simulation of the supposed process on the computer, employing the mechanisms thought to be important."

One such mechanism is the process of generalization, which she explains with another example, the essential nature of a chair. "If this is the concept to be learned, we might be shown a four-legged red object followed by a four-legged blue object. From this we would generalize that chairs have four legs, but that their color isn't crucial. Further information (say we were shown a three-legged chair) could modify the conclusions, refining the process even further."
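The chair example can be rendered as a simple generalization procedure, in the spirit of classic concept-learning algorithms: keep only the attribute values shared by every example seen so far. The function and the attribute names are illustrative assumptions.

```python
# Generalize a concept from positive examples: any attribute whose value
# varies across examples (like color) is dropped as inessential.

def generalize(examples):
    """Each example is a dict of attribute -> value for one observed chair."""
    concept = dict(examples[0])
    for example in examples[1:]:
        for attr, value in list(concept.items()):
            if example.get(attr) != value:
                del concept[attr]   # the value varies, so it isn't essential
    return concept

chairs = [
    {"legs": 4, "color": "red", "has_seat": True},
    {"legs": 4, "color": "blue", "has_seat": True},
]
print(generalize(chairs))   # color drops out; legs=4 and has_seat survive

# A further example (a three-legged chair) refines the concept again:
chairs.append({"legs": 3, "color": "green", "has_seat": True})
print(generalize(chairs))   # now only has_seat remains essential
```

As in the passage, each new example can only remove conditions from the hypothesis, and the three-legged chair shows why the leg count, too, eventually proves non-essential.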

To test a psychological theory or model, the computer can act much like a human subject, says Dr. Elio. It is given examples, it learns, and it makes decisions based on its learning mechanisms. If the computer's performance, including the errors, matches human performance, then the model underlying its behavior may be a plausible model of human behavior for the same task.

Dr. Elio's work touches on only some of the major themes in the area of artificial intelligence, now a very wide-ranging field of investigation. Other main branches include natural language understanding, robotics and computer vision. Some of these are being pursued by other researchers in the department of computing science: Len Schubert specializes in robotics, planning and natural language understanding; Jeff Pelletier shares the interest in natural language understanding; and Tony Marsland and Jonathon Schaeffer examine issues in heuristic search, an important aspect of intelligent problem solving.

However, as diverse as the field of artificial intelligence has become, it still all boils down to a theory of knowledge, how it is acquired, organized, modified, used and retrieved, says Dr. Elio. And the potential applications are indeed real. "It will allow us to construct increasingly intelligent machines and, in doing so, to invent a theory of machine knowledge and perhaps discover something about a theory of human knowledge."

Published Winter 1986.

       