(Robert Holte, Robert Laganière, Mario Marchand, Stan Matwin, Douglas Skuce, Stan Szpakowicz)
Last update: September 1996
The research activities of the Artificial Intelligence group concentrate on machine learning, knowledge representation and acquisition, information retrieval, natural language processing, decision analysis and machine vision.
Machine Learning is a subfield of Artificial Intelligence concerned with systems that change themselves in order to better perform a given task. Activities of the Machine Learning group (R. Holte, M. Marchand, S. Matwin) range from pure research to industrial applications. The ML group's primary topics of interest are empirical learning, case-based reasoning, inductive logic programming, change of representation, learning theory and neural networks.
Work coordinated by Holte and Matwin has been sponsored by NSERC, Precarn, Inc., ITRC, the Canada Centre for Remote Sensing, the Pacific Forestry Centre, the Canadian Space Agency, and by a number of companies, including Nortel, Netron, IBM Canada, Lanvista, Inc., and RES, Inc. Application areas are typically related to software engineering (members of the group held an NSERC strategic grant "Machine Learning Applications in Software Reuse") and to collecting, managing and interpreting data on natural resources. A number of systems and projects arose from this work: CAESAR (CAse-basEd Software Reuse), CABARESS (CAse-BAsed REmote Sensing Shell), CEHDS (Canadian Environmental Hazards Detection Systems), GUIDAR (Graphical User Interface Design And Reuse), LOPSTER (inductive LOgic Programming with Subunification of TERms), and SAFIR (an analogy-based system for database definition and query reuse). The machine learning research team includes three faculty members, two post-doctoral fellows and some fifteen graduate students. The Ottawa Machine Learning Group holds a weekly seminar in which faculty, invited guests and graduate students present their current research.
In recent years, Marchand and his group have been designing learning algorithms for neural networks that are provably efficient within a rigorous mathematical model of learning now called the Probably Approximately Correct (PAC) model. Surprisingly, almost none of the learning algorithms proposed for neural networks comes with a rigorous performance guarantee. The group's main successes have been obtained for the so-called nonoverlapping neural networks: loop-free networks in which each node has only one outgoing connection. The group is now trying to extend its methods to networks of nonoverlapping stochastic perceptrons. Also under investigation are the learning capabilities of support vector machines, "generalized neural networks" that implement a learning strategy based on Vapnik's structural risk minimization principle.
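As a toy illustration of the kind of threshold unit such networks compose, the classical perceptron learning rule can be sketched as follows. This is a generic textbook procedure, not one of the group's PAC algorithms, and the data are invented for the example:

```python
# Minimal perceptron learning sketch -- a generic illustration of the kind
# of threshold node composed in nonoverlapping networks, NOT the group's
# specific PAC algorithm.

def perceptron_train(samples, epochs=20, lr=1.0):
    """Learn weights w and bias b for binary labels in {-1, +1}."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:                      # misclassified: nudge weights
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Linearly separable toy data (an AND-like function), so the rule converges
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = perceptron_train(data)
```

On linearly separable data such as this, the learned unit classifies every training example correctly; the PAC framework mentioned above asks, in addition, for guarantees on examples not seen during training.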
A basic problem in Artificial Intelligence applications is the creation of knowledge resources, that is, systems that can serve to store and retrieve knowledge in structured formats. Traditionally, these are called knowledge bases, which are highly abstracted, structured, condensed and often formalized sources of knowledge. Years of experience have shown that building such knowledge bases can be very difficult. This has two main causes: a lack of sufficiently powerful tools for creating complex knowledge bases, and a lack of understanding of the basic conceptual and linguistic needs and behaviour of users. The research of D. Skuce addresses these problems. His group, consisting of about eight people, has developed a number of tools for building various types of knowledge resources. The oldest of these is the CODE system (Conceptually Oriented Design/Description Environment), a general-purpose knowledge management system. It assists in the various operations necessary to create a knowledge base: inputting, structuring, debugging, retrieving, explaining, and so on. CODE uses an advanced frame knowledge representation and has a highly developed graphical user interface. CODE has been used by companies such as Bell-Northern Research, Mitel, and Boeing for tasks including software engineering and corporate knowledge capture.
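The general idea behind a frame representation, concepts with slots whose values can be inherited down an is-a hierarchy, can be sketched minimally as follows. All names here are hypothetical; CODE's actual representation is far richer:

```python
# Minimal frame-style knowledge representation sketch -- hypothetical names,
# meant only to illustrate the general frame idea (concepts with slots and
# inheritance), not CODE's actual, much richer, representation.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent          # single-inheritance (is-a) link
        self.slots = dict(slots)      # slot values stated locally

    def get(self, slot):
        """Look up a slot locally, then up the is-a hierarchy."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

# A tiny concept hierarchy: local slot values override inherited ones
vehicle = Frame("vehicle", wheels=4, powered=True)
bicycle = Frame("bicycle", parent=vehicle, wheels=2, powered=False)
mountain_bike = Frame("mountain-bike", parent=bicycle, terrain="off-road")

mountain_bike.get("wheels")   # 2, inherited from bicycle
mountain_bike.get("terrain")  # "off-road", stated locally
```

A real system like CODE adds much more on top of this skeleton: multiple inheritance, constraints on slot values, debugging of inconsistencies, and explanation of where a value came from.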
In 1995 Skuce's research turned toward a somewhat different approach to building large knowledge resources. His group is now developing a new knowledge management system called IKARUS (Intelligent Knowledge Acquisition and Retrieval Universal System) that will merge the functionality found in classical knowledge base systems such as CODE, information retrieval tools such as Glimpse, and World Wide Web-based collaborative work tools. (The group's systems are now all Web-based, so anyone anywhere can access them collaboratively.) With IKARUS, one will be able to interactively find answers to questions, either in knowledge bases or in text, and retrieve not just whole documents but individual facts or sentences, depending on the type of query. The answers can be organized in hierarchical concept or topic structures, so that they can be reused by others. Material such as manuals, textbooks, or documents on the Internet would be typical sources. For example, one would be able to ask questions such as "what is Perl?", "how do I print a PostScript file in Unix?", or "when did Roosevelt die?". The system will use machine learning techniques such as neural networks to learn relationships between concepts and words. Such knowledge bases should make it easy to organize and use large amounts of knowledge based on textual sources. They will be easy to navigate, accessible globally via the World Wide Web, and linguistically sophisticated, that is, they will contain much knowledge about how words behave.
Natural language processing offers an interesting way of creating knowledge bases. The long-term goal of the TANKA project (Text Analysis for Knowledge Acquisition) is the construction of a semi-automatic system that assists a person who builds a conceptual model of a domain by processing an unedited technical text. The system works on successive text fragments and submits its findings to the user for approval. Syntactic analysis, interactive semantic analysis and rote learning are the basic mechanisms. Many elements of the system are already in place. This work, coordinated by S. Szpakowicz, is closely related to the MaLTe project (Machine Learning and Text analysis), which was supported for three years by an NSERC strategic grant "Machine Learning and Text Analysis Towards Semi-automatic Knowledge Acquisition". The MaLTe project, run by Matwin and Szpakowicz, combined natural language processing techniques with machine learning techniques in a system that improves the acquisition of knowledge directly from text. Two faculty members, two postdoctoral fellows and several graduate students worked in this research direction. An outgrowth of this work is Matwin and Szpakowicz's new project on text summarization; see the detailed information on this and other related projects of the KAML group (Knowledge Acquisition and Machine Learning).
Negoplan is a knowledge-based decision analysis and simulation system, built upon the idea of restructurable modelling. Negoplan can also be applied to individual and cooperative decision making and to symbolic simulation. The system maintains representations of the goals, structure and behaviour of three sides that interact in decision making: an agent whose decisions are supported by the system, other participating agents, and the decision environment. This project was supported for four years by an NSERC Strategic Grant "Automation and Support of Decisions with Strategic Interactions". It was run by G. Kersten (Carleton University, School of Business) and Szpakowicz. The team included four post-doctoral fellows and several graduate students. A large application of Negoplan has been developed: a prototype system to train and test the decision-making skills of medical students. Current work on Negoplan includes an extension of the system to allow the modelling of more than three sides of a decision problem.
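The general flavour of such rule-based modelling, a top goal that reduces to subgoals via AND/OR rules and is evaluated against the facts currently holding, can be sketched as follows. The rule and goal names are invented for illustration and are not taken from Negoplan itself:

```python
# Hedged sketch of rule-based goal decomposition of the general kind used in
# knowledge-based decision modelling: a top goal reduces to subgoals via
# AND/OR rules, evaluated against the facts currently holding. All rule and
# goal names are hypothetical, not Negoplan's own.

rules = {
    # goal: ("and" | "or", list of subgoals or elementary facts)
    "agreement_reached": ("and", ["price_acceptable", "terms_acceptable"]),
    "price_acceptable":  ("or",  ["price_below_limit", "discount_offered"]),
    "terms_acceptable":  ("and", ["delivery_on_time"]),
}

def holds(goal, facts):
    """A goal holds if it is an observed fact or its rule is satisfied."""
    if goal in facts:
        return True
    if goal not in rules:
        return False
    op, subgoals = rules[goal]
    results = [holds(g, facts) for g in subgoals]
    return all(results) if op == "and" else any(results)

facts = {"discount_offered", "delivery_on_time"}
holds("agreement_reached", facts)   # True: both subgoals are satisfied
```

Restructurable modelling goes further than this static sketch: the rule base itself can be revised as the decision situation evolves, so the goal structure is not fixed in advance.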
Recent technological progress has facilitated the increased use of images as a source of information. Machine vision, another important area of Artificial Intelligence, is now becoming a thriving field of research with possible applications in robotics, medical imaging, remote sensing, and so on. The research of R. Laganière's team (currently three graduate students) concerns the problem of 3D scene interpretation from multiple views. Energy minimization methods are investigated in order to process low-level information, while knowledge-based approaches are considered for integrating higher-level information into the process of 3D reconstruction. Most experiments are conducted using stereoscopic sequences (taken with two side-by-side cameras), often used as the "eyes" of an autonomous mobile robot. 3D interpretation is, in this case, essential for the robot to move and interact with its environment. Stereoscopic sequences are also the basis of 3DTV, a recent technology used in virtual reality and telepresence; here, 3D interpretation is used for image coding and enhancement.
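The low-level step underlying stereoscopic 3D interpretation, finding, for each pixel in one image, the horizontal shift (disparity) of its match in the other image, can be illustrated with a toy one-dimensional block-matching sketch. This is a generic sum-of-squared-differences matcher on invented data, not the team's energy-minimization method:

```python
# Toy 1-D stereo block matching: for each pixel of the left scanline, find
# the horizontal shift (disparity) into the right scanline that minimizes a
# sum-of-squared-differences (SSD) cost over a small window. A generic
# illustration of the matching step behind 3D reconstruction, not the
# group's energy-minimization approach.

def disparity_1d(left, right, window=1, max_disp=4):
    """Per-pixel disparity for two 1-D scanlines (lists of intensities)."""
    n = len(left)
    disps = []
    for i in range(window, n - window):
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_disp + 1):
            if i - d - window < 0:        # window would fall off the image
                break
            # SSD over a small window centred on pixel i
            cost = sum((left[i + k] - right[i - d + k]) ** 2
                       for k in range(-window, window + 1))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disps.append(best_d)
    return disps

# The right scanline is the left one shifted by 2 pixels, so in the textured
# middle region the recovered disparity is 2
left  = [0, 0, 10, 20, 30, 20, 10, 0, 0, 0]
right = [10, 20, 30, 20, 10, 0, 0, 0, 0, 0]
disps = disparity_1d(left, right)
```

Disparity is inversely proportional to depth, which is why a dense disparity map is the raw material of 3D reconstruction; energy minimization methods such as those investigated by the team add smoothness terms so that neighbouring pixels agree, instead of matching each window independently as above.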