DFAs, NFAs, pushdown automata, Turing machines... All are mathematical entities that model computation. These abstract systems have concrete, practical applications in computer science (CS).
For example, deterministic finite automata (DFAs) are associated with regular expressions, on which computer programs that involve pattern matching frequently rely. Also, knowing theoretical results such as the inability of any computation to determine whether or not another computation will stop (the Halting Problem) can keep programmers from attempting to write impossible computer programs.
Automata represent one approach to mathematically modeling computation. There are others.
For example, in the 1930s the mathematical logician Alonzo Church created a formalism of computation based on functions, called the *λ-calculus*. The key notion in this approach is an operator (i.e., function) called λ that is capable of generating other functions.
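This idea of λ as a function-generating operator survives directly in modern programming languages. As an illustrative sketch in Python, whose `lambda` keyword is named for Church's calculus (the names `make_adder` and `add_three` here are our own, for demonstration):

```python
# Python's `lambda` is named after Church's lambda-calculus.
# Here one lambda generates other functions: `make_adder` is a
# function whose return value is itself a newly built function.
make_adder = lambda n: (lambda x: x + n)

add_three = make_adder(3)   # a freshly generated function
print(add_three(10))        # → 13
```

Each call to `make_adder` produces a distinct function, just as applying the λ operator produces new functions in the calculus.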
One of the earliest high-level programming languages, LISP (for LISt Processing language, 1959), is a practical computer implementation of the λ-calculus. LISP was designed originally for research in artificial intelligence (AI), a field in CS that perpetually seeks to extend the capabilities of computers to carry out tasks that humans can do. Scheme and Clojure are contemporary programming languages descended from the original LISP, and other widely used “functional” programming languages such as ML and Haskell are based on the λ-calculus. Programmers use these languages to develop useful applications, and researchers use them to explore new frontiers in computing.
From a theoretical viewpoint, the λ-calculus embodies all essential features of functional computation. This holds because the relationship between “inputs” (domain values in mathematics, arguments/parameters in programming) and “outputs” (range values in mathematics, return values in programming) from functions expresses everything in a purely functional system of computations (no “state changes”), and the λ-calculus is the mathematical theory of functions considered entirely according to their “inputs” and “outputs.”
In fact, it can be proven that any other foundation for functional computation, such as Turing machines (which can express any type of computation), will have exactly the same expressive power for functional computation as the λ-calculus [Pierce 95].
However, all of the computational models we’ve mentioned so far (Turing machines, λ-calculus, etc.) are for sequential computations only. This means that we assume only a single computational entity. Until a few years ago, it was reasonable to assume that only one computational processor would be available for most computations, because most computers had only one computational circuit for carrying out instructions.
Many computations require parallelizing according to the computational steps instead of (or in addition to) parallelizing according to the data. When a computation has multiple processors carrying out different sequences of computational steps in order to accomplish its work, we say that computation has task parallelism.
For example, imagine a computation that extracts certain elements from a body of text (e.g., proper names), then sorts those elements, and finally removes duplicates. With multiple processors, one might program one processor to extract those elements, another to perform the sorting operation, and a third to remove the duplicates. In effect, we have an assembly line of processes, also called a pipeline by computer scientists.
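The three-stage pipeline described above can be sketched in Python using generators, so that the stages pass work along like stations on an assembly line (the stage names and the toy “capitalized word” test for proper names are our own illustrative assumptions):

```python
# Stage 1: extract candidate proper names from lines of text.
def extract_names(lines):
    for line in lines:
        for word in line.split():
            if word[:1].isupper():   # toy heuristic for proper names
                yield word

# Stage 3: remove duplicates while preserving order.
def remove_duplicates(items):
    seen = set()
    for item in items:
        if item not in seen:
            seen.add(item)
            yield item

text = ["Alice met Bob", "Bob met Carol", "Carol met Alice"]
# extract -> sort -> deduplicate, like an assembly line.
# (sorted() is stage 2; sorting must gather all its input first,
# so that stage acts as a barrier in the pipeline.)
result = list(remove_duplicates(sorted(extract_names(text))))
print(result)  # → ['Alice', 'Bob', 'Carol']
```

With real task parallelism, each stage would run on its own processor, with queues carrying items between stages; the generator version shows the same structure in a single process.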
Computer scientists have found other computations exceedingly difficult to parallelize effectively. Notably, nobody knows how to parallelize finite state machines (FSMs) well, as a general class of computations. [View from Berkeley 06, p.16]
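A tiny FSM makes the difficulty visible: each transition needs the state produced by the previous transition, so the steps form a chain of dependencies that resists being split across processors. A minimal sketch, with an illustrative parity-tracking machine of our own devising:

```python
# A finite state machine that tracks the parity of 1-bits seen.
# Each step reads the *previous* state, so the loop is an
# inherently sequential chain of dependencies.
transitions = {
    ("even", 0): "even",
    ("even", 1): "odd",
    ("odd", 0): "odd",
    ("odd", 1): "even",
}

state = "even"
for bit in [1, 1, 0, 1]:
    state = transitions[(state, bit)]  # must wait for the prior state
print(state)  # → odd
```

Because step *n* cannot begin until step *n − 1* has produced its state, there is no obvious way to hand different portions of the input to different processors, which is exactly the general difficulty noted above.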
The π-calculus, introduced in the next section, is an example of such a model of parallel computation.