SEG-2106 "Software Construction"

Chapter 3-1: Concurrency 

Additional readings:

Hardware and software concurrency

Physical concurrency

A single CPU of a computer executes one instruction after another; there is no concurrency. If several CPUs run in parallel (either within the same computer or in separate computers), we have some form of concurrency (also called parallelism). Concurrency realized at the hardware level is called physical concurrency, and it is closely related to parallel computer architectures.

Concurrency in software

The book by Sebesta mentions the following levels of concurrency:

The sharing of a CPU among several concurrent threads or programs: Each operating system has a scheduler which allocates short time periods of the CPU to the different active programs running on the computer. At the end of such a time period, it determines which program should obtain the CPU in the next time period. A similar scheduler is included in each program that supports several concurrent threads - this thread scheduler determines how the time period, allocated by the operating system scheduler to the program, is shared among the threads within that program. For instance, the program running a Java VM has a scheduler to share the CPU among the threads of the Java program (if it contains threads in addition to the main thread).
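As a minimal sketch (class and thread names are my own choosing), the following Java program creates two threads in addition to the main thread; how their print statements interleave is decided by the JVM's thread scheduler, not by the program:

```java
// Minimal sketch: two worker threads created by the main thread; the JVM's
// thread scheduler decides how CPU time is shared between them.
public class TwoThreads {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + ": step " + i);
            }
        };
        Thread t1 = new Thread(work, "worker-1");
        Thread t2 = new Thread(work, "worker-2");
        t1.start();
        t2.start();
        t1.join();  // main thread waits for both workers before exiting
        t2.join();
    }
}
```

Running the program several times may produce different orderings of the "worker-1" and "worker-2" lines, which already illustrates the scheduler's freedom.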

Note that the behavior of a scheduler is not always defined at all levels of detail; e.g., for the thread scheduler in a Java virtual machine, not all aspects are defined. Normally, one distinguishes the following states for a process (thread, task, ...) that is scheduled by the scheduler (see diagram below):
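Java makes such states directly observable: the JVM refines the classic ready/running/blocked diagram into the states NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING and TERMINATED. A small sketch (timing values are my own choosing, so the middle observation is only "very likely", not guaranteed):

```java
// Observing thread states through Thread.getState().
public class ThreadStates {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
        });
        System.out.println(t.getState());  // NEW: created but not yet started
        t.start();
        Thread.sleep(50);                  // give it time to reach sleep()
        System.out.println(t.getState());  // typically TIMED_WAITING: blocked in sleep
        t.join();
        System.out.println(t.getState());  // TERMINATED: run() has returned
    }
}
```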


Dependencies between concurrent processes

Different kinds of dependencies

In the simplest case, concurrent processes perform their processing independently from one another. But often they have to coordinate their work, which is done through interprocess communication and synchronization. One may classify their interrelationship as follows:

Note: Competitive and cooperative synchronization and communication can be realized through message-passing primitives or through shared variables (including some basic synchronization primitive that provides, as an atomic operation, the testing and (conditional) updating of a variable). Such primitives can be provided by an operating system for communication between concurrently executing programs, or by the hardware of a concurrent computer architecture. Examples of synchronization primitives for shared variables are the machine instruction "test and set" and the semaphore concept described in Section 12.3 of Sebesta's book.
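In Java, the effect of a "test and set" instruction is available through the atomic classes; for instance, AtomicBoolean.compareAndSet atomically tests a variable and updates it only if the test succeeds. A spin lock built on it (a sketch, not production code; the class name is my own choosing):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a spin lock built on an atomic test-and-set style primitive.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // busy-wait until we succeed in atomically flipping false -> true
        while (!locked.compareAndSet(false, true)) { /* spin */ }
    }

    public void unlock() {
        locked.set(false);
    }

    // Demonstration: two threads incrementing a shared counter under the lock.
    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                counter[0]++;      // critical section protected by the lock
                lock.unlock();
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter[0]);  // 20000: no lost updates
    }
}
```

Without the lock, the two increments could interleave and updates would be lost; the atomic test-and-update is what makes mutual exclusion possible here.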

The non-deterministic nature of concurrent systems (remembering what we discussed in Chapter 1-3)

Concurrent processes give rise to a large number of possible interaction sequences because the executions of the different processes may be interleaved in many ways. In the case of independent concurrency (as defined, for instance, by parallel regions of UML State diagrams) one assumes by default that there is no interrelationship between the concurrent processes (no communication, no shared resources, no shared variables). This has the advantage that the behavior of each process can be analysed independently of the other processes. However, the number of possible overall execution sequences is then the largest, because the actions of the different processes may be interleaved arbitrarily in the global execution trace.

An example is shown in the following diagrams, where (a) and (b) represent two independent processes, defined in the form of LTSs, and (c) shows the resulting global behavior (obtained by reachability analysis, which is simple in this case). In this example, one assumes that the LTS transitions take no time; therefore the execution of two transitions of two different LTSs can be assumed to be sequential, one after the other. When the LTSs are independent and two transitions of different LTSs are enabled, their order of execution is not determined; some non-determinism is therefore introduced by this form of concurrency. This is called interleaving semantics.

[Diagrams (a), (b): two independent LTSs; (c): the resulting global behavior]

The main point is that one cannot predict which execution path will be taken by the interleaved concurrent processes. Suppose there is some dependency between the two processes which leads to difficulties when the transition z is taken from the global state (3, C); then it is very difficult to test the system to ensure that this dependency is correctly implemented, because we cannot force the system to go through a particular path. It is worse if one does not know where a problem may reside in an implementation - the chance of executing a transition path that encounters the problem is very small indeed during normal testing. Conclusion: concurrent systems are very difficult to test for correctness. Therefore one should try to verify them by logical reasoning.
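Interleaving semantics can be imitated with two Java threads appending their "transitions" to a shared trace (class, method and transition names are my own choosing): each thread's own order is preserved, but the global interleaving may differ from run to run.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Two independent "processes" with transitions (x, y) and (a, b); their
// executions are interleaved into one global trace (interleaving semantics).
public class Interleaving {
    public static List<String> run() throws InterruptedException {
        List<String> trace = Collections.synchronizedList(new ArrayList<>());
        Thread p = new Thread(() -> { trace.add("x"); trace.add("y"); });
        Thread q = new Thread(() -> { trace.add("a"); trace.add("b"); });
        p.start(); q.start();
        p.join(); q.join();
        return trace;  // one of six possible interleavings, e.g. [x, a, y, b]
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Note that a test can only check properties that hold for every interleaving (all four transitions occur; x precedes y; a precedes b) - it cannot force the scheduler through one particular path, which is exactly the testing difficulty described above.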

Some notes on the "message passing" concept

"Message passing" means that one process sends a message to another process and then continues its local processing. The message may take some time to get to the other process and may be stored in the input queue of the destination process if the latter is not immediately ready to receive the message. Then the message is received by the destination process, when the latter arrives at a point in its local processing where it is ready to receive messages. This is called asynchronous message passing (because sending and receiving is not at the same time, although the reception is necessarily after the sending).

Asynchronous message passing implies the following synchronization between the two processes:

In the case of synchronous message passing (as realized, for instance, in the Ada language, see Sebesta's book, Sections 12.5.1 through 12.5.3), one assumes that sending and receiving take place at the same time (there is no need for an intermediate buffer). This is also called rendezvous and implies closer synchronization: the combined send-and-receive operation can only occur if both parties (the sending and receiving processes) are ready to do their part. Therefore the sending process may have to wait for the receiving process, or the receiving process may have to wait for the sending one (the latter also occurs with asynchronous message passing).
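Java's SynchronousQueue provides a rendezvous in this sense: it has no internal buffer, so a put() and a take() can only complete together, and whichever party arrives first waits for the other. A sketch (class and message names are my own choosing):

```java
import java.util.concurrent.SynchronousQueue;

// Rendezvous sketch: put() and take() on a SynchronousQueue complete
// together; there is no intermediate buffer.
public class Rendezvous {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> channel = new SynchronousQueue<>();

        Thread sender = new Thread(() -> {
            try {
                channel.put("request");  // blocks until the receiver takes it
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        sender.start();

        String msg = channel.take();     // blocks until the sender puts
        System.out.println("rendezvous completed: " + msg);
        sender.join();
    }
}
```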

Resource sharing

Need for mutual exclusion

Resource reservation

Resource reservation is a scheme for realizing mutual exclusion; it may be realized through semaphores or monitors (as programmable in Java).
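With a Java semaphore initialized to one permit, acquire() reserves the resource (waiting if another process holds it) and release() frees it. A sketch of mutual exclusion for a shared counter (class and method names are my own choosing):

```java
import java.util.concurrent.Semaphore;

// Resource reservation with a semaphore: one permit gives mutual exclusion.
public class SemaphoreMutex {
    private static final Semaphore mutex = new Semaphore(1);
    private static int shared = 0;

    static void increment() throws InterruptedException {
        mutex.acquire();          // reserve the resource (wait if it is taken)
        try {
            shared++;             // critical section
        } finally {
            mutex.release();      // free the resource for the next process
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            try {
                for (int i = 0; i < 10_000; i++) increment();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(shared);  // 20000: no updates lost
    }
}
```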

The monitor concept
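Every Java object can serve as a monitor: synchronized methods give mutual exclusion on the object's lock (competitive synchronization), and wait()/notifyAll() provide condition synchronization inside the monitor (cooperative synchronization). A classic one-slot buffer as a sketch (the class name is my own choosing):

```java
// Monitor sketch in Java: synchronized methods give mutual exclusion;
// wait()/notifyAll() suspend and resume threads on the monitor's condition.
public class OneSlotBuffer {
    private Integer slot = null;  // null means "empty"

    public synchronized void put(int value) throws InterruptedException {
        while (slot != null) wait();   // wait until the slot is free
        slot = value;
        notifyAll();                   // wake a consumer waiting in get()
    }

    public synchronized int get() throws InterruptedException {
        while (slot == null) wait();   // wait until a value is present
        int value = slot;
        slot = null;
        notifyAll();                   // wake a producer waiting in put()
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        OneSlotBuffer buf = new OneSlotBuffer();
        Thread producer = new Thread(() -> {
            try { for (int i = 0; i < 3; i++) buf.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        for (int i = 0; i < 3; i++) System.out.println(buf.get());
        producer.join();
    }
}
```

Note the while loops around wait(): a woken thread must re-test its condition, since another thread may have changed the buffer between the notification and the wake-up.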

Concurrency in Java (Please read in the book by Sebesta)


Some additional questions:

Initially written: March 22, 2003; updated: March 19, 2004; typographic revision: April 4, 2005; translated to English: March 3, 2008; revised 2009, 2010, 2011, 2013