Contribution to the Panel Discussion: Formal Methods after Fifteen Years
(FORTE'95, Oct. 1995, Montreal)

Introduction by the Moderator

Luigi Logrippo
Telecommunications Software Engineering Research Group
Department of Computer Science
University of Ottawa
Ottawa, Ont. Canada, K1N 6N5
Luigi@csi.uottawa.ca

The FORTE 1995 conference was held in Montreal, organized by Gregor von Bochmann, Rachida Dssouli, and Omar Rafiq. One of the activities that took place was a panel presentation and discussion among the coauthors of this paper. This exercise lasted about two hours and elicited several remarkable contributions from the panelists and the audience. It was thought worthwhile to record the panelists' contributions in the form of a joint paper, this one.

What does 'after 15 years' mean? In 1981, the First International Workshop on Protocol Specification, Testing, and Verification (PSTV) took place in Teddington, UK, under the auspices of the National Physical Laboratory, chaired by David Rayner. Considering that the organization of the workshop probably started in 1980, that year has been taken as the date of birth of our research area. Ten years later, on the occasion of the Tenth PSTV Symposium, R.E. Miller, D. Sidhu, C.A. Vissers, and C. West were asked to write papers on the topic 'The first ten years, the next ten years'. These talks are still relevant today [1].

These fifteen years have witnessed a number of developments, both in research and in applications. Many of these developments have occurred in relation to the standardization effort, within CCITT (now ITU) and ISO, of the Open Systems Interconnection (OSI) protocols and services. At the beginning of their work, the OSI community resolved that formal methods were needed, that they would be developed quickly, and that they would be used in the standardization work. History unfolded differently, however. First, the development of the Formal Description Techniques (or FDTs, the languages that were to provide the front end to the formal methods) turned out to be a complex project, which lasted much longer than initially expected. The languages became available only in the late eighties, when much of OSI standardization had already been completed. Second, several languages were developed (ASN.1, Estelle, LOTOS, SDL, TTCN) where only one was initially planned. Third, when people started looking at the languages, they found them falling short of the initial optimistic expectations. Most notably, the languages, and the associated tools and methodologies, were still incomplete and experimental; they were difficult to learn, and insufficient documentation existed. Implicitly, standards developers and industry questioned the wisdom of spending energy on something unproven.

Meanwhile, other similar languages and methods were developed in the research community, independently of standardization efforts. Some of these are mentioned in other sections.

What seems to be used today are mostly languages and methods that were already available when the OSI standardization effort started. Such are those based on Extended Finite State models, which have been under development since the sixties. The language SDL (of which a first version was available in 1976) and its related tools are widely used. Languages such as ASN.1 and TTCN, which were developed later for specific application areas, without the ambition of covering the whole spectrum of protocol and service specification, are also thriving. Another well-established language is Z, which has likewise been available for a long time (although the use of Z in the area of telecommunications appears to be still limited). Methods based on temporal logic and process algebras are still seen as 'researchy'. Interestingly, it appears that the formal languages and methods developed for OSI have shared the fate of OSI itself, which has been (at least temporarily) overtaken by preexisting protocols, such as TCP/IP.

Although the existing formal languages have proven themselves in many applications, deep limitations in them are being exposed on several fronts. New application areas, such as multimedia, mobile systems, and intelligent networks, present new requirements from the point of view of expressive power. Also, new advances in the various aspects of software development theory and practice present new challenges that these established languages do not always meet. However, better languages may not be mature enough for use for a number of years.

What to make of this? Perhaps it should be noted that the speed of penetration of truly new ideas in Computer Science is in the neighborhood of 20 years: it took that long for systems and languages such as Unix, Smalltalk, C, and SDL to take hold in industry. This is the time it takes to develop good methodologies, implementations, and textbooks, to train practitioners, and so on.

With this background in mind, the moderator's questions to the panelists were: