In computer science, abstract interpretation is a theory of sound approximation of the semantics of computer programs, based on monotonic functions over ordered sets, especially lattices. It can be viewed as a partial execution of a computer program which gains information about its semantics (e.g., control-flow, data-flow) without performing all the calculations.
Its main concrete application is formal static analysis, the automatic extraction of information about the possible executions of computer programs; such analyses have two main usages: inside compilers, to decide whether certain optimizations or program transformations are applicable; and for debugging or even the certification of programs against classes of bugs.
Abstract interpretation was formalized by the French computer scientists Patrick Cousot and Radhia Cousot, a husband-and-wife team, in the late 1970s.[1][2]
This section illustrates abstract interpretation by means of real-world, non-computing examples.
Consider the people in a conference room. Assume a unique identifier for each person in the room, like a social security number in the United States. To prove that someone is not present, all one needs to do is see if their social security number is not on the list. Since two different people cannot have the same number, it is possible to prove or disprove the presence of a participant simply by looking up their number.
However it is possible that only the names of attendees were registered. If the name of a person is not found in the list, we may safely conclude that that person was not present; but if it is, we cannot conclude definitely without further inquiries, due to the possibility of homonyms (for example, two people named John Smith). Note that this imprecise information will still be adequate for most purposes, because homonyms are rare in practice. However, in all rigor, we cannot say for sure that somebody was present in the room; all we can say is that they were possibly here. If the person we are looking up is a criminal, we will issue an alarm; but there is of course the possibility of issuing a false alarm. Similar phenomena will occur in the analysis of programs.
If we are only interested in some specific information, say, "was there a person of age n in the room?", keeping a list of all names and dates of birth is unnecessary. We may safely and without loss of precision restrict ourselves to keeping a list of the participants' ages. If this is already too much to handle, we might keep only the age of the youngest person, m, and of the oldest person, M. If the question is about an age strictly lower than m or strictly higher than M, then we may safely respond that no such participant was present. Otherwise, we may only be able to say that we do not know.
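To make this abstraction concrete, here is a small illustrative sketch in Python (not part of the original example; the names and data are hypothetical) that keeps only the youngest and oldest ages and answers the question soundly:

```python
# Hypothetical sketch: abstracting a guest list by the youngest and oldest age only.
ages = [23, 31, 45, 52, 67]            # concrete data: every participant's age
abstraction = (min(ages), max(ages))   # abstract data: just the two bounds m and M

def was_someone_of_age_present(query_age, bounds):
    """Answer "no" soundly when possible, otherwise admit uncertainty."""
    m, M = bounds
    if query_age < m or query_age > M:
        return "no"          # definitely absent: outside the retained bounds
    return "don't know"      # the bounds alone cannot settle the question

print(was_someone_of_age_present(18, abstraction))  # "no"
print(was_someone_of_age_present(40, abstraction))  # "don't know", even though no 40-year-old attended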
In the case of computing, concrete, precise information is in general not computable within finite time and memory (see Rice's theorem and the halting problem). Abstraction is used to allow for generalized answers to questions (for example, answering "maybe" to a yes/no question, meaning "yes or no", when we (an algorithm of abstract interpretation) cannot compute the precise answer with certainty); this simplifies the problems, making them amenable to automatic solutions. One crucial requirement is to add enough vagueness so as to make problems manageable while still retaining enough precision for answering the important questions (such as "might the program crash?").
Given a programming or specification language, abstract interpretation consists of giving several semantics linked by relations of abstraction. A semantics is a mathematical characterization of a possible behavior of the program. The most precise semantics, describing very closely the actual execution of the program, are called the concrete semantics. For instance, the concrete semantics of an imperative programming language may associate to each program the set of execution traces it may produce – an execution trace being a sequence of possible consecutive states of the execution of the program; a state typically consists of the value of the program counter and the memory locations (globals, stack and heap). More abstract semantics are then derived; for instance, one may consider only the set of reachable states in the executions (which amounts to considering the last states in finite traces).
The goal of static analysis is to derive a computable semantic interpretation at some point. For instance, one may choose to represent the state of a program manipulating integer variables by forgetting the actual values of the variables and only keeping their signs (+, − or 0). For some elementary operations, such as multiplication, such an abstraction does not lose any precision: to get the sign of a product, it is sufficient to know the sign of the operands. For some other operations, the abstraction may lose precision: for instance, it is impossible to know the sign of a sum whose operands are respectively positive and negative.
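As a hedged illustration of this sign abstraction (a minimal Python sketch, not a reference implementation; TOP is a made-up name for "unknown sign"):

```python
# Minimal sign domain: "+", "-", "0", and TOP ("unknown sign").
TOP = "?"

def sign(n):
    return "+" if n > 0 else "-" if n < 0 else "0"

def abstract_mul(a, b):
    # Multiplication never loses precision in the sign domain.
    if a == "0" or b == "0":
        return "0"
    if TOP in (a, b):
        return TOP
    return "+" if a == b else "-"

def abstract_add(a, b):
    # Addition can lose precision: a positive plus a negative could have any sign.
    if a == b:
        return a
    if a == "0":
        return b
    if b == "0":
        return a
    return TOP

print(abstract_mul(sign(-3), sign(7)))   # "-", exact
print(abstract_add(sign(-3), sign(7)))   # "?", precision lost
```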
Sometimes a loss of precision is necessary to make the semantics decidable (see Rice's theorem and the halting problem). In general, there is a compromise to be made between the precision of the analysis and its decidability (computability), or tractability (computational cost).
In practice the abstractions that are defined are tailored to both the program properties one desires to analyze, and to the set of target programs. The first large scale automated analysis of computer programs with abstract interpretation was motivated by the accident that resulted in the destruction of the first flight of the Ariane 5 rocket in 1996.[3]
Let L be an ordered set, called the concrete set, and let L′ be another ordered set, called the abstract set. These two sets are related to each other by defining total functions that map elements from one to the other.
A function α is called an abstraction function if it maps an element x in the concrete set L to an element α(x) in the abstract set L′. That is, element α(x) in L′ is the abstraction of x in L.
A function γ is called a concretization function if it maps an element x′ in the abstract set L′ to an element γ(x′) in the concrete set L. That is, element γ(x′) in L is a concretization of x′ in L′.
Let L1, L2, L3, and L4 be ordered sets. The concrete semantics f is a monotonic function from L1 to L2. A function f′ from L3 to L4 is said to be a valid abstraction of f if, for all x′ in L3, we have (f ∘ γ)(x′) ≤ (γ ∘ f′)(x′).
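The validity condition can be checked exhaustively when the concrete set is small enough to enumerate. The following hypothetical Python sketch (not from the article) abstracts sets of small integers by their sign, takes negation as the concrete semantics, and verifies f(γ(x′)) ⊆ γ(f′(x′)) for every abstract element:

```python
# Hypothetical check of the validity condition f(gamma(x')) <= gamma(f'(x'))
# on a deliberately tiny universe so that gamma can be enumerated.
UNIVERSE = set(range(-3, 4))

def gamma(s):
    """Concretization: a sign denotes all universe values of that sign."""
    return {"+": {n for n in UNIVERSE if n > 0},
            "-": {n for n in UNIVERSE if n < 0},
            "0": {0},
            "?": UNIVERSE}[s]

def f(concrete_set):
    """Concrete semantics: negate every value (subsets ordered by inclusion)."""
    return {-n for n in concrete_set}

def f_abstract(s):
    """Abstract counterpart of negation on signs."""
    return {"+": "-", "-": "+", "0": "0", "?": "?"}[s]

# Validity: for every abstract element, f(gamma(x')) is contained in gamma(f'(x')).
assert all(f(gamma(s)) <= gamma(f_abstract(s)) for s in ["+", "-", "0", "?"])
```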
Program semantics are generally described using fixed points in the presence of loops or recursive procedures. Suppose that L is a complete lattice and let f be a monotonic function from L into L. Then, any x′ such that f(x′) ≤ x′ is an abstraction of the least fixed point of f, which exists, according to the Knaster–Tarski theorem.
The difficulty is now to obtain such an x′. If L is of finite height, or at least verifies the ascending chain condition (all ascending sequences are ultimately stationary), then such an x′ may be obtained as the stationary limit of the ascending sequence x0 ≤ x1 ≤ … defined by induction as follows: x0 = ⊥ (the least element of L) and xn+1 = xn ∨ f(xn).
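A sketch of this ascending iteration, assuming a sign lattice like the one in the earlier sketch, now with an explicit bottom element (the toy loop and all names are illustrative, not from the article):

```python
# Hypothetical sketch: ascending iteration x0 = bottom, x_{n+1} = x_n join f(x_n)
# in the finite-height sign lattice  BOT <= {"-", "0", "+"} <= TOP.
BOT, TOP = "_", "?"

def join(a, b):
    if a == BOT: return b
    if b == BOT: return a
    return a if a == b else TOP

def add(a, b):
    """Abstract addition on signs (as in the sign-domain sketch above)."""
    if BOT in (a, b): return BOT
    if a == b: return a
    if a == "0": return b
    if b == "0": return a
    return TOP

def transfer(s):
    """Abstract effect of one iteration of:  x = 0;  while ...: x = x + 1"""
    return join("0", add(s, "+"))

x = BOT
while True:
    nxt = join(x, transfer(x))
    if nxt == x:        # stationary limit reached: a post-fixpoint of transfer
        break
    x = nxt
print(x)                # "?" -- sound but imprecise: signs cannot express "x >= 0"
```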
In other cases, it is still possible to obtain such an x′ through a (pair-)widening operator ∇,[4] defined as a binary operator which satisfies the following conditions: x ∇ y must be an upper bound of both x and y, and for any ascending sequence y0 ≤ y1 ≤ …, the sequence defined by x0 = ⊥ and xn+1 = xn ∇ yn must be ultimately stationary.
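For the interval domain introduced below, whose lattice has infinite ascending chains, a commonly used widening pushes unstable bounds to infinity. The following Python sketch is only an illustration of that idea (the analysed loop is hypothetical):

```python
# Hypothetical interval widening: bounds that keep growing are pushed to +/- infinity,
# which forces the ascending iteration to stabilize in finitely many steps.
import math

def widen(old, new):
    lo = old[0] if new[0] >= old[0] else -math.inf   # lower bound still stable? keep it
    hi = old[1] if new[1] <= old[1] else math.inf    # upper bound grew? give up on it
    return (lo, hi)

# Analysing  x = 0;  while ...: x = x + 1  with intervals:
x = (0, 0)
while True:
    body = (x[0] + 1, x[1] + 1)               # abstract effect of one loop iteration
    new = (min(0, body[0]), max(0, body[1]))  # join with the initial value 0
    widened = widen(x, new)
    if widened == x:
        break
    x = widened
print(x)   # (0, inf): sound, reached after two widening steps instead of diverging
```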
In some cases, it is possible to define abstractions using Galois connections (α, γ), where α is from L to L′ and γ is from L′ to L. This supposes the existence of best abstractions, which is not necessarily the case. For instance, if we abstract sets of couples (x, y) of real numbers by enclosing convex polyhedra, there is no optimal abstraction of the disc defined by x² + y² ≤ 1.
One can assign to each variable x available at a given program point an interval [Lx, Hx]. A state assigning the value v(x) to variable x will be a concretization of these intervals if, for all x, we have v(x) ∈ [Lx, Hx]. From the intervals [Lx, Hx] and [Ly, Hy] for variables x and y, respectively, one can easily obtain intervals for x + y (namely, [Lx + Ly, Hx + Hy]) and for x − y (namely, [Lx − Hy, Hx − Ly]); note that these are exact abstractions, since the set of possible outcomes for, say, x + y, is precisely the interval [Lx + Ly, Hx + Hy]. More complex formulas can be derived for multiplication, division, etc., yielding so-called interval arithmetics.[5]
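A minimal sketch of these two interval operations (illustrative Python, not from the cited source):

```python
# Interval arithmetic for + and -, following the rules above.
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])     # [Lx + Ly, Hx + Hy]

def interval_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])     # [Lx - Hy, Hx - Ly]

print(interval_add((0, 1), (2, 3)))   # (2, 4)
print(interval_sub((0, 1), (2, 3)))   # (-3, -1)
```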
Let us now consider the following very simple program:
y = x; z = x - y;
With reasonable arithmetic types, the result for z should be zero. But if we do interval arithmetic starting from x in [0, 1], we get z in [−1, +1]. While each of the operations taken individually was exactly abstracted, their composition is not.
The problem is evident: we did not keep track of the equality relationship between x and y; actually, this domain of intervals does not take into account any relationships between variables, and is thus a non-relational domain. Non-relational domains tend to be fast and simple to implement, but imprecise.
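Reusing the interval subtraction rule from the sketch above, the loss of precision can be reproduced directly (again, an illustrative example):

```python
def interval_sub(a, b):                # same rule as in the sketch above
    return (a[0] - b[1], a[1] - b[0])

x = (0, 1)
y = x                                  # only y's interval is recorded, not the fact y == x
z = interval_sub(x, y)
print(z)                               # (-1, 1), even though z is always 0 at run time
```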
Some examples of relational numerical abstract domains are: congruence relations on integers; convex polyhedra, with somewhat high computational costs; "octagons"; difference-bound matrices; linear equalities; and combinations thereof (such as the reduced product,[2] cf. right picture).
When one chooses an abstract domain, one typically has to strike a balance between keeping fine-grained relationships and keeping the computational cost manageable.
While high-level languages such as Python or Haskell use unbounded integers by default, lower-level programming languages such as C or assembly language typically operate on finite-sized machine words, which are more suitably modeled using the integers modulo 2^n (where n is the bit width of a machine word). There are several abstract domains suitable for various analyses of such variables.
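For instance (an illustrative snippet, not from the source), 8-bit machine arithmetic can be modelled in Python by reducing every result modulo 2^8:

```python
# Modelling 8-bit machine arithmetic: every result is reduced modulo 2**8.
MASK = (1 << 8) - 1
print((250 + 10) & MASK)   # 4, not 260: the addition wraps around
```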
The bitfield domain treats each bit in a machine word separately, i.e., a word of width n is treated as an array of n abstract values. The abstract values are taken from the set {0, 1, ⊥}, where ⊥ denotes a bit whose value is unknown, and the abstraction and concretization functions are given by:[14][15] γ(0) = {0}, γ(1) = {1}, γ(⊥) = {0, 1}, α({0}) = 0, α({1}) = 1, α({0, 1}) = ⊥. Bitwise operations on these abstract values are identical with the corresponding logical operations in some three-valued logics:[16]
AND | 0 | 1 | ⊥
0   | 0 | 0 | 0
1   | 0 | 1 | ⊥
⊥   | 0 | ⊥ | ⊥

OR | 0 | 1 | ⊥
0  | 0 | 1 | ⊥
1  | 1 | 1 | 1
⊥  | ⊥ | 1 | ⊥

XOR | 0 | 1 | ⊥
0   | 0 | 1 | ⊥
1   | 1 | 0 | ⊥
⊥   | ⊥ | ⊥ | ⊥
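The following Python sketch (hypothetical code; None stands for the unknown bit ⊥) implements these three-valued bit operations and applies them bitwise to a small word:

```python
# Sketch of the bitfield domain's bit-level operations; None stands for the unknown bit.
def and3(a, b):
    if a == 0 or b == 0: return 0           # 0 AND anything is 0, even an unknown bit
    if a is None or b is None: return None
    return 1

def or3(a, b):
    if a == 1 or b == 1: return 1           # 1 OR anything is 1
    if a is None or b is None: return None
    return 0

def xor3(a, b):
    if a is None or b is None: return None  # XOR with an unknown bit is unknown
    return a ^ b

# A 4-bit word with two unknown bits, AND-ed with the mask 0b0011:
word = [None, 1, 0, None]                   # most significant bit first
mask = [0, 0, 1, 1]
print([and3(w, m) for w, m in zip(word, mask)])   # [0, 0, 0, None]
```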
Further domains include the signed interval domain and the unsigned interval domain. All three of these domains support forwards and backwards abstract operators for common operations such as addition, shifts, xor, and multiplication. These domains can be combined using the reduced product.[17]