Threats to predictability exist at all layers of abstraction: the processor architecture, the software development for individual tasks, the task-coordination level, and distributed operation.
At the heart of unpredictability at the processor level is the interference between architecture components. Timing anomalies and domino effects severely damage predictability. These problems are aggravated by several forms of concurrency, e.g., super-scalarity, out-of-order execution, and dynamically scheduled multi-threading.
The restricted processor-memory channel bandwidth and the growing speed gap between processor and memory have led to the introduction of deep memory hierarchies and several types of speculation. These are among the strongest threats to predictability.
Dynamic power management technology, which is critical for reducing the power consumption of hardware, has a significant impact on predictability, too.
Internal kernel mechanisms such as scheduling, mutual exclusion, interrupt handling and communication can heavily affect task execution behaviour and hence the timing predictability of a system. For example, preemptive scheduling reduces program locality in the cache, increasing the worst-case execution time of tasks compared with non-preemptive execution. Semaphores for mutual exclusion are prone to priority inversion and may introduce unbounded blocking in the execution of high-priority tasks.
Programs with pointers, including pointers to functions, are hard to analyse statically. The object-oriented programming style, although attractive as a software development methodology, introduces run-time variability into the execution time through the dynamic binding of methods to calls.
Communicating tasks interfere on the common communication resource. Dependent tasks and task chains require guarantees on the end-to-end delays of events or packets. Finally, the use of common resources such as I/O systems and memory becomes more critical because of true concurrency. The different architectural components are designed assuming different input event models and use different arbitration and resource-sharing strategies. This results in a heterogeneous architecture that makes any kind of compositional analysis difficult. As processor architectures become increasingly parallel (multi-core), communication will become an increasingly important factor in both efficiency and predictability.
System design has traditionally profited from principles such as the separation of concerns and the abstraction from resources. The most significant was the abstraction from machine time: on higher levels of abstraction, only transitions were counted, or only orders of magnitude were considered. Multi-layered designs encapsulate resources and offer new abstractions to the next higher level, e.g., several virtual machines instead of one real machine, a large virtual memory instead of a limited-size physical memory, and services independent of their location.
Preemptive scheduling is supposed to enhance predictability at the task-coordination level. However, it imposes additional requirements on the next lower level and results in a larger variability due to cache effects. As a result, predictability at the task-coordination level may suffer. There may even be several instances of scheduling in a multi-layered system, with uncoordinated scheduling and sharing of common resources on all of these levels.
The primary goal of the interrupt mechanism is to reduce the latency in transferring data from a peripheral device to application tasks and vice versa. In other words, application tasks are delayed in favour of data transfers, which are implicitly considered more important for the system. In real-time control applications, however, the delay introduced in a control task by an interrupt execution can often be as critical as the delay introduced in the data transfer.
To address this problem, a portion of the device driver (or the entire handler) has to be scheduled together with the other tasks in the system. Hence, the interrupt handling mechanism has a strong dependency on the task scheduler.
In battery-operated embedded systems, DVS techniques are commonly used for developing energy-aware strategies that balance performance against energy consumption. Unfortunately, due to inter-task dependencies (e.g., precedence constraints and mutual exclusion relations), task response times have a highly nonlinear relation with the processor speed. As a consequence, an increase of the processor speed may not always correspond to a performance improvement, and a small decrease of the speed can cause abrupt performance degradation.
Predictability requires proofs that can only be provided on the basis of correct models and by sound methods. Therefore, predictability depends on two orthogonal issues: properties of the system, and the methods and tools to extract estimates of the system behaviour.
One example is the determination of bounds on the execution times of tasks. A predictable system will exhibit a small variability of the execution times under all admissible inputs, and the appropriate set of tools will derive tight bounds on the execution times. Only if both aspects match is a predictable system obtained.
Even in the case of complete knowledge about the whole system, system complexity may prevent exhaustive methods for determining safe and precise guarantees concerning performance. Limited analysability of a complete state space results in a loss of precision. Examples are the use of pointers in software, certain cache-replacement and speculation strategies as well as other peculiarities of the hardware architecture.