Decision-Support and Intelligent Tutoring Systems in Medical Education

Published in: Clinical and Investigative Medicine, volume 23, no. 4: August 2000.


Dr. Monique Frize, P. Eng., O.C.,
Dept. of Systems and Computer Engineering, Carleton University
1125 Colonel By Drive,
Ottawa, ON, Canada K1S 5B6
and
School of Information Technology and Engineering, University of Ottawa
161 Louis-Pasteur,
Ottawa, ON, Canada K1N 6N5
Email: frize@site.uottawa.ca

Dr. Claude Frasson,
Département d'informatique et de recherche opérationnelle, Université de Montréal
2920 Chemin de la Tour
Montréal, Québec, Canada H3T 1J4
tel. 514-343-7019; fax: 514-343-5834
Email: frasson@iro.umontreal.ca


Presented at CIHR Workshop in Sydney, B.C.
February 26-28, 2000

Address for Reprints and Correspondence: Dr. M. Frize


Abstract: One of the challenges in medical education is teaching the decision-making process. This learning process varies with the experience of the student and can be supported by various tools. In this paper we present several approaches that can strengthen this process, from decision-support tools such as scoring systems, Bayesian models, and neural networks, to cognitive models that reproduce how students progressively build their knowledge in memory and that foster new pedagogical methods.

Background on the Development of Decision-Support Tools in Medicine

The earliest decision-support tools in medicine were scoring systems, usually severity-of-illness indices. New scoring systems continue to be developed and are extensively used in the clinical setting; in critical care medicine, examples are the APACHE score used with adult patients and the SNAP score used with neonates. But the use of computers in the development of decision-support tools offers a broader solution to the estimation of outcomes than scoring systems. The development of Clinical Diagnostic Decision-Support Systems (CDDSS) in medicine<1> began with clinical algorithms that automated the calculation of parameters that could only be obtained indirectly from measured values. This was followed by the use of clinical databanks in conjunction with certain analytic functions; other types were mathematical pathophysiological models, pattern-recognition systems, Bayesian statistical systems, decision-analytical systems, and symbolic reasoning systems (also called expert systems). Most of these developments were small systems dedicated to a narrowly focused diagnosis or medical environment. Automated differential blood-count analyzers and cytological recognition systems for analyzing Pap smears are examples of such systems.
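The scoring-system idea can be illustrated with a minimal sketch: points are awarded for physiological values outside a normal band, and the total is the severity index. The variables, cut-offs, and weights below are invented for illustration and do not correspond to the actual APACHE or SNAP definitions.

```python
# Minimal severity-of-illness score: sum of weighted physiological derangements.
# All thresholds and point values here are illustrative only, not APACHE or SNAP.

def severity_score(heart_rate, mean_bp, temperature):
    score = 0
    # Each variable contributes points when it falls outside a "normal" band;
    # extreme derangements earn more points than moderate ones.
    if heart_rate < 40 or heart_rate > 180:
        score += 4
    elif heart_rate < 55 or heart_rate > 140:
        score += 2
    if mean_bp < 50 or mean_bp > 160:
        score += 4
    elif mean_bp < 70 or mean_bp > 130:
        score += 2
    if temperature < 30.0 or temperature > 41.0:
        score += 4
    elif temperature < 34.0 or temperature > 39.0:
        score += 1
    return score

print(severity_score(heart_rate=150, mean_bp=65, temperature=36.5))  # 2 + 2 + 0 = 4
```

The appeal of such indices is their transparency: a clinician can see exactly which derangement contributed each point, which is harder with the computational models discussed next.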

The development of the Bayesian model led to several applications. Among the first to use this model were Warner and colleagues<1>, who obtained the probabilities used in the diagnosis of congenital heart disease from a literature review, from cases they reviewed themselves, and from experts' estimates based on knowledge of pathophysiology. The first widely used clinical Bayesian system was that of de Dombal et al.<1> for the diagnosis of acute abdominal pain. Many groups have since developed, implemented, and refined Bayesian methods for making diagnostic decisions. An alternative approach was based on heuristic reasoning (empirical rules of thumb), such as the system reported by Weiss et al.<1>, an expert-system shell for the diagnosis of rheumatological diseases.
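The Bayesian approach can be sketched as a naive-Bayes calculation over clinical findings. The diseases, findings, priors, and likelihoods below are invented purely for illustration; systems such as de Dombal's estimated their probabilities from large series of real abdominal-pain cases.

```python
# Naive-Bayes diagnosis: P(disease | findings) is proportional to
# P(disease) * product of P(finding | disease), assuming the findings are
# conditionally independent given the disease. All numbers are illustrative.

priors = {"appendicitis": 0.25, "cholecystitis": 0.15, "non-specific pain": 0.60}

# P(finding present | disease)
likelihoods = {
    "appendicitis":      {"rlq_pain": 0.80, "fever": 0.60, "nausea": 0.70},
    "cholecystitis":     {"rlq_pain": 0.10, "fever": 0.55, "nausea": 0.65},
    "non-specific pain": {"rlq_pain": 0.15, "fever": 0.10, "nausea": 0.30},
}

def posterior(findings):
    """Return P(disease | observed findings), normalized over the disease list."""
    scores = {}
    for disease, prior in priors.items():
        p = prior
        for finding, present in findings.items():
            lik = likelihoods[disease][finding]
            p *= lik if present else (1.0 - lik)
        scores[disease] = p
    total = sum(scores.values())
    return {d: p / total for d, p in scores.items()}

post = posterior({"rlq_pain": True, "fever": True, "nausea": True})
print(max(post, key=post.get))  # appendicitis is the most probable diagnosis here
```

Note how the prior keeps common but benign diagnoses in play: "non-specific pain" starts ahead, and only the accumulated likelihood ratios of the findings overturn it.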

Knowledge-Based Systems and Case-Based Reasoning (CBR)

Knowledge-based systems are becoming increasingly accepted as clinical decision-aid tools. Case-based reasoning (CBR) has been particularly successful in applications such as matching cases on abnormal cardiac patterns<2>. Frize et al.<3> combined an expert shell with case-based reasoning and a graphical user interface to display closely matching patient cases in both an adult and a neonatal intensive care unit. In this last example, the idea is for the system to imitate a physician's approach of "remembering similar past cases" in order to establish a differential diagnosis and/or determine a course of therapy. The system displays the five or ten cases that most closely match the newly admitted patient; the large database used with the system is expected to provide more cases than the physician's own memory of past cases. The system then allows physicians to retrieve information on these 'similar' patients, such as demographics, diagnoses, and complications.
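One simple way to realize such "closest match" retrieval is a nearest-neighbour search over the case database. The patient attributes, distance weights, and toy cases below are assumed for illustration only; they are not those of the system described.

```python
# Case-based retrieval: rank stored patient cases by weighted distance to a new
# admission and return the k closest. Attributes, weights, and cases are
# illustrative only.
import math

case_base = [
    {"id": "A", "age": 67, "heart_rate": 120, "mean_bp": 60, "diagnosis": "sepsis"},
    {"id": "B", "age": 45, "heart_rate": 85,  "mean_bp": 90, "diagnosis": "asthma"},
    {"id": "C", "age": 70, "heart_rate": 115, "mean_bp": 65, "diagnosis": "sepsis"},
    {"id": "D", "age": 30, "heart_rate": 95,  "mean_bp": 85, "diagnosis": "trauma"},
]

WEIGHTS = {"age": 1.0, "heart_rate": 1.0, "mean_bp": 1.5}  # assumed importance

def distance(new_patient, case):
    # Weighted Euclidean distance over the shared numeric attributes.
    return math.sqrt(sum(
        w * (new_patient[attr] - case[attr]) ** 2 for attr, w in WEIGHTS.items()
    ))

def closest_cases(new_patient, k=2):
    # The k stored cases most similar to the new admission.
    return sorted(case_base, key=lambda c: distance(new_patient, c))[:k]

new_patient = {"age": 68, "heart_rate": 118, "mean_bp": 62}
for case in closest_cases(new_patient):
    print(case["id"], case["diagnosis"])  # the two nearest past cases
```

In a real system the similarity measure is the delicate part: attributes must be normalized and weighted so that clinically important differences dominate the ranking.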

Artificial Neural Networks (ANNs)

Artificial neural networks (ANNs) have also been used and tested as decision-aid tools in a variety of applications. For example, Baxt used ANNs as an aid to diagnose acute coronary occlusion<4> and later myocardial infarction<5>. Frize et al.<6> performed studies estimating the duration of artificial ventilation, as well as mortality and length of stay (LOS), in an adult intensive care unit. Again, to remain as close as possible to the manner in which physicians work, an ANN model was selected which, once trained, provides an estimate of selected clinical outcomes, simulating a clinician's consideration of potential patient outcomes: "And for this particular patient, this is what I think will happen"<6,7>.
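The outcome-estimation idea can be sketched with a minimal feed-forward network trained by gradient descent. The toy data (an outcome driven by the sum of two normalized inputs), the network size, and the learning rate are all assumptions for illustration; the studies cited above used real ICU variables and larger networks.

```python
# A minimal feed-forward neural network (one hidden layer) trained to estimate a
# probability-like outcome (e.g. mortality risk) from input variables.
# Toy data, network size, and learning rate are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: 2 normalized inputs -> outcome 1 when their sum is high.
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, sigmoid output for a probability estimate.
W1 = rng.normal(0, 1, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, size=(4, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predicted outcome probability
    # Backpropagate the batch-averaged cross-entropy gradient.
    dp = (p - y) / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

def predict(x):
    # Outcome estimate for a new (normalized) patient vector.
    h = sigmoid(np.array(x) @ W1 + b1)
    return float(sigmoid(h @ W2 + b2)[0])

print(round(predict([0.9, 0.9]), 3))
```

Once trained, the network answers the clinician's question "what do I think will happen for this patient?" with a single risk estimate, at the cost of being far less transparent than a scoring system.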

A progression can be observed from the early concentration on various algorithmic alternatives in the seventies, through the development of comprehensive systems such as INTERNIST-1/QMR or HELP in the eighties, with explanation capability, critiquing, and embeddedness as the distinctive advanced features, to the evidence-based approaches of the nineties. The electronic medical record and the automatic incorporation of guidelines into CDDSS are other current efforts.

Creation of Models that Can Learn

Although different approaches can be used to model a complex process, we should distinguish the ways in which a model is created and, in particular, how it can learn. Learning starts with expert knowledge, with observation, or with neural network analysis, and the model is then tested on test or real cases. Case studies, experts, and techniques such as neural networks or case-based reasoning are all likely to be useful in refining the models. Some medical areas can be summarized by an explanatory model, such as a care guideline expressed as an algorithm. These algorithms can be modeled behind an electronic patient record to suggest next steps and even probabilities for certain types of outcomes. Patient care should be distinguished from patient-care evaluation, which includes adaptation to new technologies and treatments that support care. Of course, medical education retains an important role in these technological developments.

Decision-Support Tools of the Future

Most of the decision-aid tools that currently exist have been developed for very specific illnesses or for a particular medical environment. In the future, several questions need to be addressed: For example, what decision-aid tools support primary care medicine? How and by whom are such tools used? Are the systems integrated into clinical practice and, if so, how widely? How do we define success in this realm? Do the tools include support for cultural and religious sensitivity? What kind of tools, in addition to those mentioned above, would help physicians manage knowledge explosion and change? An important consideration in any system design is privacy and confidentiality. All patient identifiers in databases should be removed prior to their use, and everyone involved in the research projects should be aware of these concerns. Studies should be submitted to and approved by the appropriate local ethics committees. Another important aspect is to ensure that users understand the limitations of the system so that they will use it appropriately and effectively. The strengths and weaknesses of each of the various technical approaches should be studied, and ways to evaluate them reviewed and discussed.

Human Agents and Cognitive Agents in Medical Education

Humans are examples of cognitive agents: complex cognitive systems that acquire, store, and structure knowledge in memory in order to solve specific or general problems. The difference from intelligent software agents is that humans can learn and structure their knowledge in an extremely complex and efficient network of nodes that can be expanded to finer granularity according to context. During their education in medical school, however, humans generally adopt one method of knowledge acquisition and stay with this approach throughout the entire medical curriculum. Unfortunately, the uniformity of medical teaching (and of the education system in general) does not allow these learning mechanisms to be changed or improved, which, as a consequence, can produce good or poor physicians at the end of the medical curriculum.

The use of intelligent tutoring systems can improve learning in medicine if we can detect the learner's stage of understanding. We have developed and tested various cognitive agents (pieces of software with certain particularities; see Castelfranchi<8> for more details) able to trigger a specific pedagogy<11> at each level of understanding. In fact, we can draw a parallel between these agents and human cognition. The structure of knowledge in memory allows us to distinguish a three-level cognitive architecture (Figure 1) for a human agent (a student in medicine, for instance). These levels (from bottom to top) characterize the theoretical transformation of knowledge from novice to expert.

Figure 1. A cognitive architecture of a human agent

When students begin their curriculum, they learn basic knowledge in biology, physics, and chemistry in order to build a base of fundamental knowledge. At that time, however, the knowledge is just stored and not yet linked to applied cases in medicine. They continue by trying to apply this knowledge to concrete biomedical situations. We are still at the first level of the architecture: the student is only able to identify basic symptoms or situations and generate an immediate (right or wrong) diagnosis.

By progressing to clinical problems, students acquire procedural knowledge, a step in which elements of knowledge are linked through examples of situations. They begin to make hypotheses and establish strategies to select or reject some of them. Problem-based learning is a good approach to foster this step. However, some students tend to learn the case itself as integrated knowledge, without distinguishing between semantic links and facts. They hesitate to move from subject-matter learning to a problem-solving approach, and sometimes this transforms the clinical-reasoning problem into case-based-reasoning learning. Intermediates generally acquire their knowledge through this period.

The ultimate layer, which not all students reach, concerns knowledge acquisition through contact with real patients. Here we distinguish two tendencies: (1) the physician continues to accumulate experience with a more complete base of cases; (2) the physician is able to induce, from a set of cases, a new case and new rules using generalization techniques. Only experts reach this last level.

The problem in medical education is that very few teachers are able to detect with sufficient precision the cognitive level of the learner and thus apply a pedagogical strategy to strengthen the transition between the different layers. We have therefore developed and tested several cognitive agents (similar to the agents indicated above) to detect these situations and apply adequate strategies<10,11>. They proved very useful for acquiring the right reflexes in the decision-making process. These agents can also be populated with the types of decision-support tools mentioned above.

Conclusion

Many research questions remain to be explored, particularly whether these technological developments are going to be useful to physicians in general practice and in medical education. The workshop in Victoria was very useful in raising the awareness of all present to the many facets that should be taken into account when developing tools for clinical practice or for medical education. Without this holistic view, these tools may have many shortcomings and be more a burden than an aid. The multidisciplinarity of the teams working on these research questions becomes an essential part of future success.

References
1. Berner ES, ed. Clinical Decision Support Systems: Theory and Practice. New York: Springer, 1998.
2. Lau F. Development and Validation of a Decision-Support System for Cardiovascular Intensive Care. Can. Medical Informatics 1994; May/June:28-29.
3. Frize M, Taylor KB, Nickerson BG, Solven FG, Borkar H. A Knowledge-based System for the Intensive Care Unit. Proc. of the 15th Ann. Int. Conf. IEEE/EMBS, 1993; San Diego:677-678.
4. Baxt WG. Use of an Artificial Neural Network for Data Analysis in Clinical Decision-Making: the Diagnosis of Acute Coronary Occlusion. Neural Comput. 1990; 2(4):480-489.
5. Baxt WG. Use of an Artificial Neural Network for the Diagnosis of Myocardial Infarction. Annals of Internal Medicine 1991; 115:843-848.
6. Frize M, Solven FG, Stevenson M, Nickerson BG, Buskard T, Taylor K. Computer-Assisted Decision-support Systems for Patient Management in an Intensive Care Unit. Proc. Medinfo '95 1995; Vancouver:1009-1012.
7. Frize M, Wang L, Ennett C, Nickerson BG, Solven FG, Stevenson M. New Advances and Validation of Knowledge Management Tools for Critical Care Using Classifier Techniques. Proc. AMIA Annual Symposium 1998:553-558.
8. Castelfranchi C. Guarantees for Autonomy in Cognitive Agent Architecture. In: Wooldridge M, Jennings NR, eds. Intelligent Agents: Theories, Architectures and Languages. LNAI vol. 890. Heidelberg: Springer Verlag, 1995:56-70.
9. Frasson C, Kaltenbach M. Strengthening the Novice-Expert Shift Using the Self-Explanation Effect. Journal of Artificial Intelligence in Education, special issue on student modelling, 1993; 3(4):477-494.
10. Frasson C, Mengelle T, Aïmeur E, Gouardères G. An Actor-based Architecture for Intelligent Tutoring Systems. Proc. ITS'96, Lecture Notes in Computer Science No. 1086, Springer Verlag, Montréal, 1996:57-65.
11. Frasson C, Mengelle T, Aïmeur E. Using Pedagogical Agents in a Multi-strategic Intelligent Tutoring System. AIED Workshop on Pedagogical Agents, AI-ED 97, World Conference on Artificial Intelligence and Education, Japan, 1997:40-47.