Real-time systems are computer systems that have to perform their actions subject to timing constraints. In addition to performing these actions correctly, their correctness also depends on the fulfilment of the timing constraints.

The validation of these timing aspects is called schedulability analysis. A real-time system often consists of many tasks, and existing techniques for schedulability analysis require that the worst-case execution time (WCET) of each task is known.
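
As an illustration of how such techniques use WCETs (this is the classical rate-monotonic utilisation test of Liu and Layland, not an approach specific to this work), a set of n independent periodic tasks with WCETs C_i and periods T_i is, under the classical assumptions, guaranteed to be schedulable under rate-monotonic priorities if

    \sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; n \left( 2^{1/n} - 1 \right).

A test of this kind is only meaningful if every C_i is a safe upper bound on the task's execution time.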

Since the real WCET of a program is in general not computable, an upper bound on the real WCET has to be computed instead. For real-time systems, these WCET predictions must be safe, i.e., the real WCET must never be underestimated. On the other hand, to increase the probability of a successful schedulability analysis, the predictions should be as precise as possible.

Most static analyses, including approaches to WCET analysis, examine the control flow of the program. Because the behaviour of the program is typically not known in advance, an approximation to the control flow is used as the basis of the analysis. This approximation is called a control flow graph.
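
As a minimal sketch (with hypothetical addresses and a representation chosen only for illustration, not the data structures used later in this work), such a graph can be stored as a mapping from each basic block to the set of blocks that may execute next; the successor sets must over-approximate every control transfer that the analysis cannot exclude:

    # Blocks are identified by their start addresses; the successor sets
    # over-approximate the control flow that can occur at run time.
    cfg: dict[int, set[int]] = {
        0x100: {0x108},           # block at 0x100 falls through to 0x108
        0x108: {0x100, 0x110},    # conditional branch: loop back or exit
        0x110: set(),             # returning block, no local successors
    }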

For good performance, today's real-time systems use modern hardware architectures. These architectures employ heuristic components, such as caches and pipelines, to speed up programs in typical situations. Neglecting these speed-ups in a WCET analysis would lead to a dramatic overestimation of the real WCET of the program.

To take hardware components into account, the program usually has to be analysed at the hardware level. Therefore, binary executables are the basis of such analyses. Further, the timing behaviour of typical modern hardware depends on the data that is processed, and in particular on the addresses that are used to access memory. Here, too, full information about the addresses is available from the binary executable.

The first part of this work presents a novel approach to extracting control flow graphs from binary executables. This task is non-trivial in general, since the possible control flow is often not obvious, e.g., when function pointers, switch tables or dynamic dispatch come into play. Further, it is often hard to predict the influence of a particular machine instruction on the control flow, e.g., because its usage is ambiguous.

The reconstruction algorithms presented in this work are designed with real-time systems in mind. The requirements for a safe analysis also have to be considered during the extraction of control flow graphs from binaries, since an analysis can only be safe if the underlying data structure is safe as well.

The reconstruction of control flow graphs from binary executables will be conceptually split into two separate tasks: a) given a stream of bytes from a binary executable and the address held in the processor's code pointer, the precise classification of the instruction that will be executed by the machine; b) given a set of instruction classifications and the possible program start nodes, the automatic composition of a safe and precise control flow graph.
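
The following sketch shows, in Python-like form, what the interface between the two tasks could look like; the names and fields are illustrative assumptions, not the actual interface of the tools described later:

    from dataclasses import dataclass

    @dataclass
    class Classification:
        """Illustrative result of task a) for one instruction."""
        addr: int        # address at which the instruction starts
        length: int      # length of its encoding in bytes
        kind: str        # e.g. "compute", "branch", "call", "return"
        targets: tuple   # statically possible successors (fall-through and/or branch targets)

    def classify_instruction(code: bytes, addr: int) -> Classification:
        """Task a): decide which instruction the machine would execute at addr."""
        raise NotImplementedError  # see the decoder sketch below

    def compose_cfg(classifications, entry_points):
        """Task b): compose a safe and precise control flow graph from the classifications."""
        raise NotImplementedError  # see the composition sketch below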

To solve the first task, we will use very efficient decision trees to convert raw bytes into instruction classifications. An algorithm will be presented that computes the decision trees automatically from very concise specifications that can trivially be derived from the vendor's architecture documentation. This novel approach frees the user from the error-prone manual programming that was required in the past.
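
To give an impression of the idea, the following sketch uses a made-up eight-bit instruction encoding (not any real architecture): the specification maps mnemonics to bit patterns with don't-care positions, and a decision tree is derived by repeatedly choosing a bit position that discriminates between the remaining patterns. The real specifications and trees are considerably richer; the names and patterns here are assumptions for illustration only.

    # Hypothetical 8-bit patterns: '0'/'1' are fixed bits, '.' marks a don't-care bit.
    SPEC = {
        "ADD":  "0000....",
        "LOAD": "0001....",
        "JMP":  "1100....",
        "CALL": "1101....",
        "RET":  "11111111",
    }

    def build_tree(patterns):
        """Recursively build a decision tree; inner nodes test a single bit position."""
        if len(patterns) == 1:
            return next(iter(patterns))                  # leaf: unique classification
        for pos in range(8):                             # find a discriminating bit
            bits = {p[pos] for p in patterns.values()}
            if bits == {"0", "1"}:
                zeros = {n: p for n, p in patterns.items() if p[pos] == "0"}
                ones  = {n: p for n, p in patterns.items() if p[pos] == "1"}
                return (pos, build_tree(zeros), build_tree(ones))
        raise ValueError("patterns too ambiguous for this simplified sketch")

    def decode(tree, byte):
        """Walk the decision tree for one instruction byte.
        (A full decoder would also verify the remaining fixed bits.)"""
        while not isinstance(tree, str):
            pos, zero_branch, one_branch = tree
            tree = one_branch if (byte >> (7 - pos)) & 1 else zero_branch
        return tree

    tree = build_tree(SPEC)
    print(decode(tree, 0b0001_0010))                     # -> LOAD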

For the reconstruction of control flow from a set of instruction classifications, a bottom-up approach will be presented. This algorithm overcomes problems that top-down approaches usually have. Top-down approaches are adequate for producing disassembly listings and for debugging purposes, but static analysis imposes additional requirements on safety and precision that top-down algorithms cannot fulfil. Our bottom-up approach meets these requirements and is, furthermore, implemented very efficiently.
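
To give a rough idea of how classified instructions can be composed into a graph, the following is a generic sketch of basic-block formation, not necessarily the exact algorithm presented later: block boundaries (leaders) are derived from the classifications, the instruction stream is cut at every leader, and each block inherits the possible successors of its last instruction.

    from collections import namedtuple

    # Stand-in for the classification records sketched earlier; names are illustrative.
    Classification = namedtuple("Classification", "addr length kind targets")

    def compose_cfg(classifications, entry_points):
        """Split classified instructions into basic blocks and connect the blocks.
        Returns (blocks, edges): blocks maps a block's start address to its
        instruction addresses, edges maps it to its possible successor blocks."""
        instrs = sorted(classifications, key=lambda c: c.addr)
        addrs = {c.addr for c in instrs}
        # 1. Leaders: program entry points, every target of a control transfer,
        #    and the address following such a transfer.
        leaders = set(entry_points)
        for c in instrs:
            if tuple(c.targets) != (c.addr + c.length,):      # not a plain fall-through
                leaders.update(c.targets)
                leaders.add(c.addr + c.length)
        # 2. Cut the instruction stream at every leader and record the edges.
        blocks, edges, current = {}, {}, None
        for c in instrs:
            if current is None or c.addr in leaders:
                current = c.addr
                blocks[current], edges[current] = [], []
            blocks[current].append(c.addr)
            nxt = c.addr + c.length
            if nxt in leaders or nxt not in addrs:            # the block ends here
                edges[current] = list(c.targets)
                current = None
        return blocks, edges

    # Toy program: 0x00 falls through to 0x02, which branches back to 0x00 or on to 0x04.
    prog = [Classification(0x00, 2, "compute", (0x02,)),
            Classification(0x02, 2, "branch",  (0x00, 0x04)),
            Classification(0x04, 2, "return",  ())]
    print(compose_cfg(prog, entry_points=[0x00]))
    # -> ({0: [0, 2], 4: [4]}, {0: [0, 4], 4: []})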

The second part of this work deals with the timing analysis of real-time systems itself. Timing analysis that is close to the hardware can be split into two parts: a) the analysis of the behaviour of the hardware components at all blocks of the program, and b) the computation of a global upper bound on the WCET from the results of the analysis of each block. The latter analysis is called path analysis.

An established technique for path analysis uses Integer Linear Programming (ILP). The idea is as follows: the program's control flow is described by a set of constraints, and the execution times of the program's blocks are combined in an objective function. An upper bound on the WCET of the whole program is then obtained by maximising the objective function subject to the control flow constraints.
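
For illustration, the common form of this formulation (sketched here generically; the thesis refines and extends it) assigns to every basic block b an execution count variable x_b and an execution time c_b, and to every control flow edge e a traversal count x_e. An upper bound on the WCET is then obtained as

    \max \sum_{b} c_b \, x_b
    \quad \text{subject to} \quad
    x_{\mathit{entry}} = 1, \qquad
    \sum_{e \in \mathit{in}(b)} x_e \;=\; x_b \;=\; \sum_{e \in \mathit{out}(b)} x_e
    \ \text{ for every block } b,

together with flow facts such as loop bounds, e.g. a constraint x_e \le 10 \, x_{e'} stating that a loop back edge e is taken at most ten times per execution of the loop entry edge e'.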

Because of the complex behaviour of modern hardware, sophisticated techniques for WCET analysis typically have to be used. For instance, routine invocations in the program should not be analysed in isolation, since their timing behaviour may differ greatly from invocation to invocation. This is because the state of the machine's hardware components is typically very different for each invocation, and this state strongly influences performance.

For this reason, analyses usually perform better when they consider routine invocations in different execution contexts, where the contexts depend on the history of the program execution.

To make use of contexts, both parts of the WCET analysis, the analysis of the hardware components and the path analysis, must handle them. Up to now, it has not been examined how path analysis can be performed with an arbitrary static assignment of execution contexts. This work will close this gap by presenting a new approach to ILP-based path analysis that can handle contexts, providing a high degree of flexibility to the user of the WCET analysis tool.
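
To illustrate one way in which contexts could enter such a formulation (a sketch only, not necessarily the construction developed later in this thesis), the count variables can be indexed by context: a block b analysed in context k receives its own count variable x_{b,k} and execution time c_{b,k}, the objective becomes

    \max \sum_{b} \sum_{k \in \mathit{ctx}(b)} c_{b,k} \, x_{b,k},

and the flow constraints are stated over pairs (b, k), with call and return edges connecting the caller's contexts to the matching contexts of the callee.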

All algorithms presented in this thesis are implemented in tools that are now widely used in educational as well as industrial applications.