ALGORITHMS OF INFORMATICS

Volume 1



Table of Contents

Preface
Introduction
I. AUTOMATA
1. Automata and Formal Languages
1.1 Languages and grammars
1.1.1 Operations on languages
1.1.2 Specifying languages
1.1.3 Chomsky hierarchy of grammars and languages
1.1.4 Extended grammars
1.1.5 Closure properties in the Chomsky-classes
1.2 Finite automata and regular languages
1.2.1 Transforming nondeterministic finite automata
1.2.2 Equivalence of deterministic finite automata
1.2.3 Equivalence of finite automata and regular languages.
1.2.4 Finite automata with special moves
1.2.5 Minimization of finite automata
1.2.6 Pumping lemma for regular languages
1.2.7 Regular expressions
1.3 Pushdown automata and context-free languages
1.3.1 Pushdown automata
1.3.2 Context-free languages
1.3.3 Pumping lemma for context-free languages
1.3.4 Normal forms of the context-free languages
2. Compilers
2.1 The structure of compilers
2.2 Lexical analysis
2.2.1 The automaton of the scanner
2.2.2 Special problems
2.3 Syntactic analysis
2.3.1 LL(1) parser
2.3.2 LR(1) parsing
3. Compression and Decompression
3.1 Facts from information theory
3.1.1 The Discrete Memoryless Source
3.1.2 Prefix codes
3.1.3 Kraft's inequality and noiseless coding theorem
3.1.4 Shannon-Fano-Elias-codes
3.1.5 The Huffman coding algorithm
3.2 Arithmetic coding and modelling
3.2.1 Arithmetic coding
3.2.2 Modelling
3.3 Ziv-Lempel-coding
3.3.1 LZ77
3.3.2 LZ78
3.4 The Burrows-Wheeler-transform
3.5 Image compression
3.5.1 Representation of data
3.5.2 The discrete cosine transform
3.5.3 Quantisation
3.5.4 Coding
4. Reliable Computation
4.1 Probability theory
4.1.1 Terminology
4.1.2 The law of large numbers (with “large deviations”)
4.2 Logic circuits
4.2.1 Boolean functions and expressions
4.2.2 Circuits
4.2.3 Fast addition by a Boolean circuit
4.3 Expensive fault-tolerance in Boolean circuits
4.4 Safeguarding intermediate results
4.4.1 Cables
4.4.2 Compressors
4.4.3 Propagating safety
4.4.4 Endgame
4.4.5 The construction of compressors
4.5 The reliable storage problem
4.5.1 Clocked circuits
4.5.2 Storage
4.5.3 Error-correcting codes
4.5.4 Refreshers
II. COMPUTER ALGEBRA
5. Algebra
5.1 Fields, vector spaces, and polynomials
5.1.1 Ring theoretic concepts
5.1.2 Polynomials
5.2 Finite fields
5.3 Factoring polynomials over finite fields
5.3.1 Square-free factorisation
5.3.2 Distinct degree factorisation
5.3.3 The Cantor-Zassenhaus algorithm
5.3.4 Berlekamp's algorithm
5.4 Lattice reduction
5.4.1 Lattices
5.4.2 Short lattice vectors
5.4.3 Gauss' algorithm for two-dimensional lattices
5.4.4 A Gram-Schmidt orthogonalisation and weak reduction
5.4.5 Lovász-reduction
5.4.6 Properties of reduced bases
5.5 Factoring polynomials in \mathbb{Q}[x]
5.5.1 Preparations
5.5.2 The Berlekamp-Zassenhaus algorithm
5.5.3 The LLL algorithm
6. Computer Algebra
6.1 Data representation
6.2 Common roots of polynomials
6.2.1 Classical and extended Euclidean algorithm
6.2.2 Primitive Euclidean algorithm
6.2.3 The resultant
6.2.4 Modular greatest common divisor
6.3 Gröbner basis
6.3.1 Monomial order
6.3.2 Multivariate division with remainder
6.3.3 Monomial ideals and Hilbert's basis theorem
6.3.4 Buchberger's algorithm
6.3.5 Reduced Gröbner basis
6.3.6 The complexity of computing Gröbner bases
6.4 Symbolic integration
6.4.1 Integration of rational functions
6.4.2 The Risch integration algorithm
6.5 Theory and practice
6.5.1 Other symbolic algorithms
6.5.2 An overview of computer algebra systems
7. Cryptology
7.1 Foundations
7.1.1 Cryptography
7.1.2 Cryptanalysis
7.1.3 Algebra, number theory, and graph theory
7.2 Diffie and Hellman's secret-key agreement protocol
7.3 RSA and factoring
7.3.1 RSA
7.3.2 Digital RSA signatures
7.3.3 Security of RSA
7.4 The protocols of Rivest, Rabi, and Sherman
7.5 Interactive proof systems and zero-knowledge
7.5.1 Interactive proof systems and Arthur-Merlin games
7.5.2 Zero-knowledge protocol for graph isomorphism
8. Complexity Theory
8.1 Foundations
8.2 NP-completeness
8.3 Algorithms for the satisfiability problem
8.3.1 A deterministic algorithm
8.3.2 A randomised algorithm
8.4 Graph isomorphism and lowness
8.4.1 Reducibilities and complexity hierarchies
8.4.2 Graph isomorphism is in the low hierarchy
8.4.3 Graph isomorphism is in SPP
III. NUMERICAL METHODS
9. Competitive Analysis
9.1 Notions, definitions
9.2 The k-server problem
9.3 Models related to computer networks
9.3.1 The data acknowledgement problem
9.3.2 The file caching problem
9.3.3 On-line routing
9.4 On-line bin packing models
9.4.1 On-line bin packing
9.4.2 Multidimensional models
9.5 On-line scheduling
9.5.1 On-line scheduling models
9.5.2 LIST model
9.5.3 TIME model
10. Game Theory
10.1 Finite games
10.1.1 Enumeration
10.1.2 Games represented by finite trees
10.2 Continuous games
10.2.1 Fixed-point methods based on best responses
10.2.2 Applying Fan's inequality
10.2.3 Solving the Kuhn-Tucker conditions
10.2.4 Reduction to optimization problems
10.2.5 Method of fictitious play
10.2.6 Symmetric matrix games
10.2.7 Linear programming and matrix games
10.2.8 The method of von Neumann
10.2.9 Diagonally strictly concave games
10.3 The oligopoly problem
11. Recurrences
11.1 Linear recurrence equations
11.1.1 Linear homogeneous equations with constant coefficients
11.1.2 Linear nonhomogeneous recurrence equations
11.2 Generating functions and recurrence equations
11.2.1 Definition and operations
11.2.2 Solving recurrence equations by generating functions
11.2.3 The Z-transform method
11.3 Numerical solution
12. Scientific Computing
12.1 Floating point arithmetic and error analysis
12.1.1 Classical error analysis
12.1.2 Forward and backward errors
12.1.3 Rounding errors and floating point arithmetic
12.1.4 The floating point arithmetic standard
12.2 Linear systems of equations
12.2.1 Direct methods for solving linear systems
12.2.2 Iterative methods for linear systems
12.2.3 Error analysis of linear algebraic systems
12.3 Eigenvalue problems
12.3.1 Iterative solutions of the eigenvalue problem
12.4 Numerical program libraries and software tools
12.4.1 Standard linear algebra subroutines
12.4.2 Mathematical software
Bibliography

List of Figures

1.1. Finite automaton.
1.2. The finite automaton of Example 1.9.
1.3. Nondeterministic finite automata.
1.4. Transition tables of the NFA in Fig. 1.3
1.5. The equivalent DFA with NFA \mathrm{A} in Fig. 1.3.
1.6. Equivalent DFA's (Example 1.11).
1.7. Non equivalent DFA's (Example 1.12).
1.8. DFA of the Example 1.13.
1.9. NFA associated to grammar in Example 1.14.
1.10. Relations between regular grammars and finite automata. To any regular grammar one may construct an NFA which accepts the language generated by that grammar. Any NFA can be transformed in an equivalent DFA. To any DFA one may construct a regular grammar which generates the language accepted by that DFA.
1.11. Finite automata with \epsilon-moves.
1.12. NFA equivalent to FA with \varepsilon-moves given in Fig. 1.11.
1.13. (a) Representation of an NFA. Initial states are represented by a circle with an arrow, final states by a double circle. (b) Union of two NFA's.
1.14. (a) Product of two FA. (b) Iteration of an FA.
1.15. Minimization of DFA.
1.16. Minimum automaton equivalent with DFA in Fig. 1.15.
1.17. Sketch of DFA used in the proof of the pumping lemma.
1.18. Properties of regular expressions.
1.19. DFA from Example 1.20, to which regular expression is associated by Method 1.
1.20. DFA in Example 1.21 to which a regular expression is associated by Method 1. The computations are in Figure 1.21.
1.21. Determining a regular expression associated to the DFA in Figure 1.20 using sets R_{ij}^{k}.
1.22. Possible equivalent transformations for finding regular expression associated to an automaton.
1.23. Transformation of the finite automaton in Fig. 1.19.
1.24. Steps of Example 1.23.
1.25. Possible transformations to obtain finite automaton associated to a regular expression.
1.26. Associating finite automaton to regular expression \varepsilon+(0+1)^{*}1.
1.27. Finite automaton associated to regular expression \varepsilon+(0+1)^{*}1.
1.28. DFA's to minimize for Exercise 1.2-5
1.29. Finite automata for Exercises 1.2-6 and 1.2-7.
1.30. DFA for Exercise 1.2-9.
1.31. Pushdown automaton.
1.32. Example of pushdown automaton.
1.33. Transition graph of the Example 1.27
1.34. Computation tree to show acceptance of the word 1001 (see Example 1.27).
1.35. Computation tree to show that the pushdown automaton in Example 1.27 does not accept word 101.
1.36. Recognising a word by empty stack (see Example 1.28).
1.37. Derivation (or syntax) tree of word aaaabb.
1.38. Decomposition of tree in the proof of pumping lemma.
2.1. The translator.
2.2. The structure of compilers.
2.3. The programs of the analysis and the synthesis.
2.4. The positive integer and real number.
2.5. The identifier.
2.6. A comment.
2.7. The character string.
2.8. LL(k) grammar.
2.9. The sentential form and the analysed text.
2.10. The structure of the LL(1) parser.
2.11. The syntax tree of the sentence i+i*i.
2.12. The LR(k) grammar.
2.13. The \left[A\rightarrow\alpha.\beta,a\right] LR(1)-item.
2.14. The function closure(\left[A\rightarrow\alpha.B\beta,a\right]).
2.15. The automaton of the Example 2.15.
2.16. The structure of the LR(1) parser.
2.17. The syntax tree of the sentence aab.
2.18. The automaton of the Example 2.22.
3.1. Frequency of letters in 1000 characters of English.
3.2. Example of a code tree.
3.3. Example of Shannon code and Shannon-Fano-Elias-code.
3.4. Example of the Shannon-Fano-algorithm.
3.5. Example of a Huffman code.
3.6. Example of arithmetic encoding with interval expansion.
3.7. Table of the first values for the Krichevsky-Trofimov-estimator.
3.8. An example for a tree source.
3.9. Weighted context tree for source sequence '0100111' with past \dots010. The pair (a_{s},b_{s}) denotes a_{s} zeros and b_{s} ones preceded by the corresponding context s. For the contexts s=111,101,110,000 it is P_{w}^{s}=P_{e}(0,0)=1.
4.1. AND, OR and NOT gate.
4.2. The assignment (values on nodes, configuration) gets propagated through all the gates. This is the “computation”.
4.3. Naive parallel addition.
4.4. Failure at a gate.
4.5. An executive organ.
4.6. A restoring organ.
4.7. An executive organ followed by a restoring organ.
4.8. Reliable circuit from a fault-free circuit.
4.9. A shift register.
4.10. Part of a circuit which computes the sum of two binary numbers x,y. We feed the digits of x and y, beginning with the lowest-order ones, at the input nodes. The digits of the sum come out on the output edge. A shift register holds the carry.
4.11. A “computer” consists of some memory (shift registers) and a Boolean circuit operating on it. We can define the size of computation as the size of the computer times the number of steps.
4.12. Transmission through a noisy channel.
4.13. Using a refresher.
4.14. A regular expander.
6.1. The general scheme of modular computations.
6.2. Illustration of the operation of the Classical-Euclidean algorithm in \mathbb{Z} and \mathbb{Q}[x]. In case (a), the input is a=-18, b=30, a,b\in\mathbb{Z}. The first two lines of the pseudocode compute the absolute values of the input numbers. The loop between lines 3 and 6 is executed four times; the values of r, c and d in these iterations are shown in the table. The Classical-Euclidean(-18, 30) algorithm outputs 6 as result. In case (b), the input parameters are a=12x^{4}-68x^{3}+52x^{2}-92x+56, b=-12x^{3}+80x^{2}-84x+24\in\mathbb{Q}[x]. The first two lines compute the normal form of the polynomials, and the while loop is executed three times. The output of the algorithm is the polynomial \textrm{normal}(c)=x-2/3.
6.3. Illustration of the operation of the Primitive-Euclidean algorithm with input a(x)=12x^{4}-68x^{3}+52x^{2}-92x+56, b(x)=-12x^{3}+80x^{2}-84x+24\in\mathbb{Z}[x]. The first two lines of the program compute the primitive parts of the polynomials. The loop between lines 3 and 6 is executed three times; the table shows the values of r, c and d in the iterations. In line 7, variable \gamma equals \textrm{gcd}(4,4)=4. The Primitive-Euclidean(a,b) algorithm returns 4\cdot(3x-2) as result.
7.1. A typical scenario in cryptography.
7.2. Example of an encryption by the shift cipher.
7.3. Vigenère square: Plaintext “H” is encrypted as “A” by key “T”.
7.4. Example of an encryption by the Vigenère cipher.
7.5. Three graphs: G is isomorphic to H, but not to F.
7.6. Two ciphertexts encrypting the same plaintext, see Exercise 7.1-1.
7.7. The Diffie-Hellman secret-key agreement protocol.
7.8. The RSA protocol.
7.9. Example of an RSA encryption (M = Message).
7.10. Digital RSA signatures.
7.11. The Rivest-Sherman protocol for secret-key agreement, based on \sigma.
7.12. Goldreich, Micali, and Wigderson's zero-knowledge protocol for \mathtt{GI}.
7.13. Simulation of the zero-knowledge protocol for \mathtt{GI} without knowing \pi.
8.1. A Turing machine with one input and two work tapes.
8.2. Transition function \delta of M for L=\{a^{n}b^{n}c^{n}\mid n\geq1\}.
8.3. M's states, their meaning and their intention.
8.4. Comparison of some polynomial and exponential time functions.
8.5. What happens when the computers get faster?
8.6. Transition graph of a stochastic automaton for describing Random-SAT.
8.7. Runtimes of some algorithms for the satisfiability problem.
10.1. Prisoner's dilemma.
10.2. Game with no equilibrium.
10.3. Finite tree of Example 10.3.
10.4. Tree for Exercise 10.1-5.
10.5. The graph of function g(\lambda).
11.1. The form of particular solutions.
11.2. Binary trees with two and three vertices.
12.1. Forward and backward error.
12.2. Gaussian elimination.

AnTonCom, Budapest, 2011

This electronic book was prepared in the framework of project Eastern Hungarian Informatics Books Repository no. TÁMOP-4.1.2-08/1/A-2009-0046

This electronic book appeared with the support of European Union and with the co-financing of European Social Fund

Nemzeti Fejlesztési Ügynökség http://ujszechenyiterv.gov.hu/ 06 40 638-638

Editor: Antal Iványi

Authors of Volume 1: László Lovász (Preface), Antal Iványi (Introduction), Zoltán Kása (Chapter 1), Zoltán Csörnyei (Chapter 2), Ulrich Tamm (Chapter 3), Péter Gács (Chapter 4), Gábor Ivanyos and Lajos Rónyai (Chapter 5), Antal Járai and Attila Kovács (Chapter 6), Jörg Rothe (Chapters 7 and 8), Csanád Imreh (Chapter 9), Ferenc Szidarovszky (Chapter 10), Zoltán Kása (Chapter 11), Aurél Galántai and András Jeney (Chapter 12)

Validators of Volume 1: Zoltán Fülöp (Chapter 1), Pál Dömösi (Chapter 2), Sándor Fridli (Chapter 3), Anna Gál (Chapter 4), Attila Pethő (Chapter 5), Lajos Rónyai (Chapter 6), János Gonda (Chapter 7), Gábor Ivanyos (Chapter 8), Béla Vizvári (Chapter 9), János Mayer (Chapter 10), András Recski (Chapter 11), Tamás Szántai (Chapter 12), Anna Iványi (Bibliography)

Authors of Volume 2: Burkhard Englert, Dariusz Kowalski, Grzegorz Malewicz, and Alexander Shvartsman (Chapter 13), Tibor Gyires (Chapter 14), Claudia Fohry and Antal Iványi (Chapter 15), Eberhard Zehendner (Chapter 16), Ádám Balogh and Antal Iványi (Chapter 17), János Demetrovics and Attila Sali (Chapters 18 and 19), Attila Kiss (Chapter 20), István Miklós (Chapter 21), László Szirmay-Kalos (Chapter 22), Ingo Althöfer and Stefan Schwarz (Chapter 23)

Validators of Volume 2: István Majzik (Chapter 13), János Sztrik (Chapter 14), Dezső Sima (Chapters 15 and 16), László Varga (Chapter 17), Attila Kiss (Chapters 18 and 19), András Benczúr (Chapter 20), István Katsányi (Chapter 21), János Vida (Chapter 22), Tamás Szántai (Chapter 23), Anna Iványi (Bibliography)

©2011 AnTonCom Infokommunikációs Kft.

Homepage: http://www.antoncom.hu/

Preface

It is a special pleasure for me to recommend to the Readers the book Algorithms of Computer Science, edited with great care by Antal Iványi. Computer algorithms form a very important and fast developing branch of computer science. Design and analysis of large computer networks, large scale scientific computations and simulations, economic planning, data protection and cryptography and many other applications require effective, carefully planned and precisely analyzed algorithms.

Many years ago we wrote a small book with Péter Gács [168] under the title Algorithms. The three volumes of the book Algorithms of Computer Science show how this topic developed into a complex area that branches off into many exciting directions. It gives me special pleasure that so many excellent representatives of Hungarian computer science have cooperated to create this book. It is obvious to me that this book will be one of the most important reference books for students, researchers and computer users for a long time.

Budapest, July 2010

László Lovász

Introduction

The first volume of the book Informatikai algoritmusok appeared in 2004 in Hungarian [127], and the second volume of the book appeared in 2005 [128]. The two volumes contained 31 chapters: the 23 chapters of the first and second volumes of the present electronic book, and further chapters on clustering, frequent elements in databases, geoinformatics, inner-point methods, number theory, Petri nets, queuing theory, and scheduling.

The Hungarian version of the first volume contains those chapters which were finished by May 2004, and the second volume contains the chapters finished by April 2005.

The printed English version contains the chapters submitted by April 2007. Volume 1 [129] contains the chapters belonging to the fundamentals of informatics, while the second volume [130] contains the chapters having a closer connection with some applications.

The first and second volumes of the given book represent an extended and corrected electronic version of the printed book written in English. The third volume of the present book contains new chapters.

The chapters of the first volume are divided into three parts. The chapters of Part 1 are connected with automata: Automata and Formal Languages (written by Zoltán Kása, Sapientia Hungarian University of Transylvania), Compilers (Zoltán Csörnyei, Eötvös Loránd University), Compression and Decompression (Ulrich Tamm, Chemnitz University of Technology), Reliable Computations (Péter Gács, Boston University).

The chapters of Part 2 have algebraic character: here are the chapters Algebra (written by Gábor Ivanyos and Lajos Rónyai, Budapest University of Technology and Economics), Computer Algebra (Antal Járai and Attila Kovács, Eötvös Loránd University), further Cryptology and Complexity Theory (Jörg Rothe, Heinrich Heine University).

The chapters of the third part have numeric character: Competitive Analysis (Csanád Imreh, University of Szeged), Game Theory (Ferenc Szidarovszky, The University of Arizona) and Scientific Computations (Aurél Galántai, Óbuda University and András Jeney, University of Miskolc).

The second volume is also divided into three parts. The chapters of Part 4 are connected with computer networks: Distributed Algorithms (Burkhard Englert, California State University; Dariusz Kowalski, University of Liverpool; Grzegorz Malewicz, University of Alabama; Alexander Allister Shvartsman, University of Connecticut), Parallel Algorithms (Claudia Fohry, University of Kassel and Antal Iványi, Eötvös Loránd University), Network Simulation (Tibor Gyires, Illinois State University) and Systolic Systems (Eberhard Zehendner, Friedrich Schiller University).

The chapters of Part 5 are Relational Databases and Query in Relational Databases (János Demetrovics, Eötvös Loránd University and Attila Sali, Alfréd Rényi Institute of Mathematics), Semistructured Data Bases (Attila Kiss, Eötvös Loránd University) and Memory Management (Ádám Balogh, Antal Iványi, Eötvös Loránd University).

The chapters of the third part of the second volume have close connections with biology: Bioinformatics (István Miklós, Rényi Institute of Mathematics), Human-Computer Interactions (Ingo Althöfer, Stefan Schwarz, Friedrich Schiller University), and Computer Graphics (László Szirmay-Kalos, Budapest University of Technology and Economics).

The chapters are validated by Gábor Ivanyos, István Majzik, Lajos Rónyai, András Recski, and Tamás Szántai (Budapest University of Technology and Economics), András Benczúr, Sándor Fridli, János Gonda, István Katsányi, Attila Kiss, László Varga, János Vida, and Béla Vizvári (Eötvös Loránd University), Dezső Sima (Óbuda University), Pál Dömösi, János Sztrik, and Attila Pethő (University of Debrecen), Zoltán Fülöp (University of Szeged), Anna Gál (University of Texas), and János Mayer (University of Zürich).

The first and second volumes contain verbal description, pseudocode and analysis of over 200 algorithms, and over 350 figures and 120 examples illustrating how the algorithms work. Each section ends with exercises and each chapter ends with problems. In the two volumes you can find over 330 exercises and 70 problems.

We have supplied an extensive bibliography, in the section Chapter Notes of each chapter. In the bibliography the names of the authors, journals and publishers are usually active links to the corresponding web sites (the living elements are underlined in the printed version and on the screen too).

The LaTeX style file was written by Viktor Belényesi, Zoltán Csörnyei, László Domoszlai and Antal Iványi. The figures were drawn or corrected by Kornél Locher. Anna Iványi transformed the bibliography into hypertext. The DOCBOOK version was made by Marton 2001 Kft.

Using the data on the colophon page you can contact any of the creators of the book. We welcome ideas for new exercises and problems, and also critical remarks or bug reports.

The publication of the printed book was supported by the Department of Mathematics of the Hungarian Academy of Sciences, and the electronic version received support from the European Union and from the European Social Fund.

Budapest, June 22, 2011

Antal Iványi (tony@compalg.inf.elte.hu)

Part I. AUTOMATA

Table of Contents

1. Automata and Formal Languages
1.1 Languages and grammars
1.1.1 Operations on languages
1.1.2 Specifying languages
1.1.3 Chomsky hierarchy of grammars and languages
1.1.4 Extended grammars
1.1.5 Closure properties in the Chomsky-classes
1.2 Finite automata and regular languages
1.2.1 Transforming nondeterministic finite automata
1.2.2 Equivalence of deterministic finite automata
1.2.3 Equivalence of finite automata and regular languages.
1.2.4 Finite automata with special moves
1.2.5 Minimization of finite automata
1.2.6 Pumping lemma for regular languages
1.2.7 Regular expressions
1.3 Pushdown automata and context-free languages
1.3.1 Pushdown automata
1.3.2 Context-free languages
1.3.3 Pumping lemma for context-free languages
1.3.4 Normal forms of the context-free languages
2. Compilers
2.1 The structure of compilers
2.2 Lexical analysis
2.2.1 The automaton of the scanner
2.2.2 Special problems
2.3 Syntactic analysis
2.3.1 LL(1) parser
2.3.2 LR(1) parsing
3. Compression and Decompression
3.1 Facts from information theory
3.1.1 The Discrete Memoryless Source
3.1.2 Prefix codes
3.1.3 Kraft's inequality and noiseless coding theorem
3.1.4 Shannon-Fano-Elias-codes
3.1.5 The Huffman coding algorithm
3.2 Arithmetic coding and modelling
3.2.1 Arithmetic coding
3.2.2 Modelling
3.3 Ziv-Lempel-coding
3.3.1 LZ77
3.3.2 LZ78
3.4 The Burrows-Wheeler-transform
3.5 Image compression
3.5.1 Representation of data
3.5.2 The discrete cosine transform
3.5.3 Quantisation
3.5.4 Coding
4. Reliable Computation
4.1 Probability theory
4.1.1 Terminology
4.1.2 The law of large numbers (with “large deviations”)
4.2 Logic circuits
4.2.1 Boolean functions and expressions
4.2.2 Circuits
4.2.3 Fast addition by a Boolean circuit
4.3 Expensive fault-tolerance in Boolean circuits
4.4 Safeguarding intermediate results
4.4.1 Cables
4.4.2 Compressors
4.4.3 Propagating safety
4.4.4 Endgame
4.4.5 The construction of compressors
4.5 The reliable storage problem
4.5.1 Clocked circuits
4.5.2 Storage
4.5.3 Error-correcting codes
4.5.4 Refreshers

Chapter 1. Automata and Formal Languages

Automata and formal languages play an important role in the design and implementation of compilers. In the first section grammars and formal languages are defined. The different grammars and languages are discussed based on the Chomsky hierarchy. In the second section we deal in detail with finite automata and the languages accepted by them, while in the third section pushdown automata and the corresponding accepted languages are discussed. Finally, references from a rich bibliography are given.

1.1 Languages and grammars

A finite and nonempty set of symbols is called an alphabet. The elements of an alphabet are called letters, but sometimes they are also called symbols.

Words are composed of the letters of an alphabet. If then is a word over the alphabet (the letters are not necessarily distinct). The number of letters of a word, counted with their multiplicities, is the length of the word. If then the length of is If then the word is the empty word, which will be denoted by (sometimes in other books). The set of words over the alphabet will be denoted by :

For the set of nonempty words over the notation will be used. The set of words of length over will be denoted by , and Then

The words and are equal (i.e. ), if and

We define in the binary operation called concatenation. The concatenation (or product) of the words and is the word . It is clear that This operation is associative but not commutative. Its neutral element is because for all . with the concatenation is a monoid.

We introduce the power operation. If then and for The reversal (or mirror image) of the word is . The reversal of sometimes is denoted by or . It is clear that and

Word is a prefix of the word if there exists a word such that . If then is a proper prefix of . Similarly is a suffix of if there exists a word such that . The proper suffix can also be defined. Word is a subword of the word if there are words and such that If then is a proper subword.

A subset of is called a language over the alphabet . Sometimes this is called a formal language because the words are here considered without any meanings. Note that is the empty language while is a language which contains the empty word.
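
As a small illustration of these notions, the sketch below (in Python, with words represented as strings, so that concatenation is simply u + v) implements the power, reversal, prefix, suffix and subword operations just defined. The function names are ours, not taken from the text.

def power(u, n):
    # n-th power of the word u: u concatenated with itself n times; u^0 is the empty word.
    return u * n

def reversal(u):
    # Mirror image of the word u.
    return u[::-1]

def is_prefix(u, v):
    # u is a prefix of v if v = u x for some word x.
    return v.startswith(u)

def is_suffix(u, v):
    # u is a suffix of v if v = x u for some word x.
    return v.endswith(u)

def is_subword(u, v):
    # u is a subword of v if v = x u y for some words x and y.
    return u in v

assert power("ab", 3) == "ababab"
assert reversal("abc") == "cba"
assert is_prefix("ab", "abba") and is_suffix("ba", "abba") and is_subword("bb", "abba")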

1.1.1 Operations on languages

If are languages over we define the following operations

union

intersection

difference

complement

multiplication

power

if

iteration or star operation

mirror

We will use also the notation

The union, product and iteration are called regular operations.
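
For finite languages the regular operations can be tried out directly. The following sketch represents languages as Python sets of words; the function names are ours, and since the star of a language containing a nonempty word is infinite, the iteration is necessarily truncated at a given power.

def product(L1, L2):
    # Concatenation (product) of two languages.
    return {u + v for u in L1 for v in L2}

def power(L, n):
    # n-th power of a language; the 0-th power contains only the empty word.
    result = {""}
    for _ in range(n):
        result = product(result, L)
    return result

def star_up_to(L, n):
    # Union of the powers L^0, L^1, ..., L^n: a finite approximation of the star.
    result = set()
    for k in range(n + 1):
        result |= power(L, k)
    return result

L1, L2 = {"a", "ab"}, {"b", ""}
print(L1 | L2)                 # union
print(product(L1, L2))         # {'a', 'ab', 'abb'}
print(star_up_to({"ab"}, 2))   # {'', 'ab', 'abab'}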

1.1.2 Specifying languages

Languages can be specified in several ways. For example a language can be specified using

1) the enumeration of its words,

2) a property, such that all words of the language have this property but other words do not,

3) a grammar.

Specifying languages by listing their elements.

For example the following are languages

Even though we cannot enumerate all elements of an infinite set, infinite languages can also be specified by enumeration if, after enumerating the first few elements, we can continue the enumeration using a rule. The following is such a language

Specifying languages by properties.

The following sets are languages

where denotes the number of letters in word and the number of letters .

Specifying languages by grammars.

We now define the generative grammar, or grammar for short.

Definition 1.1 A grammar is an ordered quadruple , where

is the alphabet of variables (or nonterminal symbols),

is the alphabet of terminal symbols, where ,

is a finite set, that is is the finite set of productions of the form , where and contains at least one nonterminal symbol,

is the start symbol.

Remarks. Instead of the notation sometimes is used.

In the production or word is called the left-hand side of the production while the right-hand side. If for a grammar there is more than one production with the same left-hand side, then these productions can be written together, separating the alternative right-hand sides by the symbol |.

We define on the set the relation called direct derivation

In fact, we replace in an occurrence of the subword by and we get . Other notations for the same relation are or .

If we want to emphasize the used grammar , then the notation can be replaced by . Relation is the reflexive and transitive closure of , while denotes its transitive closure. Relation is called a derivation.

From the definition of a reflexive and transitive relation we can deduce the following: , if there exist the words and . This can be written shortly If then . In the same way we can define the relation , except that always, so at least one direct derivation will be used.

Definition 1.2 The language generated by grammar is the set

So contains all words over the alphabet which can be derived from the start symbol using the productions from .
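
The notion of derivation can be illustrated by a brute-force search: starting from the start symbol, we repeatedly apply the productions at every possible position and collect the sentential forms that contain only terminals. The sketch below assumes the usual convention that nonterminals are uppercase and terminals are lowercase letters; it is meant only to illustrate Definition 1.2, not as an efficient algorithm, and the bound on word length is ours.

def generated_words(start, productions, max_steps, max_len=10):
    current = {start}          # sentential forms reached so far
    words = set()              # terminal words collected
    for _ in range(max_steps):
        nxt = set()
        for w in current:
            for left, right in productions:
                i = w.find(left)
                while i != -1:                       # apply the production at every occurrence
                    u = w[:i] + right + w[i + len(left):]
                    if len(u) <= max_len:
                        nxt.add(u)
                    i = w.find(left, i + 1)
        current = nxt
        words |= {w for w in current if w == "" or w.islower()}
    return words

# The grammar with productions S -> aSb | ab generates { a^n b^n : n >= 1 }.
P = [("S", "aSb"), ("S", "ab")]
print(sorted(generated_words("S", P, max_steps=5), key=len))
# ['ab', 'aabb', 'aaabbb', 'aaaabbbb', 'aaaaabbbbb']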

Example 1.1 Let where

It is easy to see that because

where up to the last but one replacement the first production () was used, while at the last replacement the production was used. This derivation can be written Therefore can be derived from for all and no other words can be derived from .

Definition 1.3 Two grammars and are equivalent, and this is denoted by if .

Example 1.2 The following two grammars are equivalent because both of them generate the language .

, where

,

where

First let us prove by mathematical induction that for If then

The inductive hypothesis is We use production , then times production , and then production , afterwards again times production . Therefore

If now we use production we get for , but by the production , so for any . We have to prove in addition that using the productions of the grammar we cannot derive words other than those of the form . It is easy to see that a successful derivation (which ends in a word containing only terminals) can be obtained only in the presented way.

Similarly for

Here the productions ( times), , ( times), , ( times), , ( times) were used, in this order. But So , . It is also easy to see that other words cannot be derived using grammar .

The grammars and are not equivalent because .

Theorem 1.4 Not all languages can be generated by grammars.

Proof. We encode grammars for the proof as words on the alphabet . For a given grammar let and The encoding is the following:

the code of is the code of is

In the code of the grammar the letters are separated by 000, the code of the arrow is 0000, and the productions are separated by 00000.

It is enough, of course, to encode the productions only. For example, consider the grammar

.

The code of is 10101, the code of is 1001001, the code of is 10011001. The code of the grammar is

From this encoding it follows that the grammars with terminal alphabet can be enumerated as and that the set of these grammars is a denumerably infinite set.

Footnote. Let us suppose that in the alphabet there is a linear order , let us say . The words which are codes of grammars can be enumerated by ordering them first by their lengths and, among words of equal length, alphabetically, using the order of their letters. Alternatively, we can use the lexicographic order, which means that ( is before ) if is a proper prefix of or there exist decompositions and , where , , are subwords, and letters with .

Consider now the set of all languages over denoted by , that is . The set is denumerable because its words can be ordered. Let this order , where . We associate to each language an infinite binary sequence in the following way:

It is easy to see that the set of all such binary sequences is not denumerable, because each sequence can be considered as a positive number less than 1 using its binary representation (the binary point is considered to be before the first digit). Conversely, to each positive number less than 1 in binary representation a binary sequence can be associated. So, the cardinality of the set of infinite binary sequences is equal to the cardinality of the interval , which is of the power of the continuum. Therefore the set is of continuum cardinality. Now associate to each grammar with terminal alphabet the corresponding generated language over . Since the set of grammars is denumerable, there exists a language in without an associated grammar, that is, a language which cannot be generated by any grammar.

1.1.3 Chomsky hierarchy of grammars and languages

Putting some restrictions on the form of productions, four types of grammars can be distinguished.

Definition 1.5 Define for a grammar the following four types.

A grammar is of type 0 (phrase-structure grammar) if there are no restrictions on productions.

A grammar is of type 1 (context-sensitive grammar) if all of its productions are of the form , where , , . A production of the form can also be accepted if the start symbol does not occur in the right-hand side of any production.

A grammar is of type 2 (context-free grammar) if all of its productions are of the form , where , . A production of the form can also be accepted if the start symbol does not occur in the right-hand side of any production.

A grammar is of type 3 (regular grammar) if its productions are of the form or , where and . A production of the form can also be accepted if the start symbol does not occur in the right-hand side of any production.

If a grammar is of type then language is also of type .

This classification was introduced by Noam Chomsky.

A language is of type () if there exists a grammar of type which generates the language , so .

Denote by () the class of the languages of type . It can be proved that

By the definition of different type of languages, the inclusions () are evident, but the strict inclusions () must be proved.
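
The classification can also be checked mechanically on the productions. The sketch below assumes single uppercase letters for nonterminals and lowercase letters for terminals, and, instead of the exact context-sensitive form, uses the simpler length-nondecreasing (monotone) test as a stand-in for the type 1 condition; the handling of the exceptional production for the empty word is omitted. The function names are ours.

def is_regular(productions):
    # A -> u or A -> uB, with u a (possibly empty) terminal word and A, B nonterminals.
    def ok(left, right):
        if len(left) != 1 or not left.isupper():
            return False
        body = right[:-1] if right and right[-1].isupper() else right
        return body == "" or body.islower()
    return all(ok(l, r) for l, r in productions)

def is_context_free(productions):
    # A single nonterminal on every left-hand side.
    return all(len(l) == 1 and l.isupper() for l, r in productions)

def is_monotone(productions):
    # |left| <= |right| for every production (stand-in for the type 1 condition).
    return all(len(l) <= len(r) for l, r in productions)

def chomsky_type(productions):
    if is_regular(productions):
        return 3
    if is_context_free(productions):
        return 2
    if is_monotone(productions):
        return 1
    return 0

print(chomsky_type([("S", "aS"), ("S", "a")]))    # 3
print(chomsky_type([("S", "aSb"), ("S", "ab")]))  # 2
print(chomsky_type([("AB", "BA"), ("A", "a")]))   # 1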

Example 1.3 We give an example for each type of context-sensitive, context-free and regular grammars.

Context-sensitive grammar. , where

Elements of are:

Language contains words of the form with and .

Context-free grammar. , where

Elements of are:

Language contains algebraic expressions which can be correctly built using letter , operators and and brackets.

Regular grammar. , where

Elements of are:

Language contains words over the alphabet with at least two letters at the beginning.

It is easy to prove that any finite language is regular. The productions are constructed so as to generate all words of the language. For example, if is in the language, then we introduce the productions: , , , where is the start symbol of the language and are distinct nonterminals. We define such productions for all words of the language, using different nonterminals for different words, except for the start symbol . If the empty word is also an element of the language, then the production is also considered.

The empty set is also a regular language, because the regular grammar generates it.
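
The construction sketched above for a finite language can be written down directly: for each word we chain a fresh nonterminal after each letter, and the last letter closes the chain. The nonterminal names N1, N2, ... introduced below are ours.

def regular_grammar_for(words, start="S"):
    productions, counter = [], 0
    for w in words:
        if w == "":
            productions.append((start, ""))       # the production S -> empty word
            continue
        prev = start
        for letter in w[:-1]:                     # chain a fresh nonterminal after each letter
            counter += 1
            fresh = "N" + str(counter)
            productions.append((prev, letter + fresh))
            prev = fresh
        productions.append((prev, w[-1]))         # the last letter closes the chain
    return productions

print(regular_grammar_for(["ab", "c"]))
# [('S', 'aN1'), ('N1', 'b'), ('S', 'c')]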

Eliminating unit productions.

A production of the form is called a unit production, where . Unit productions can be eliminated from a grammar in such a way that the new grammar will be of the same type and equivalent to the first one.

Let be a grammar with unit productions. Define an equivalent grammar without unit productions. The following algorithm will construct the equivalent grammar.

Eliminate-Unit-Productions()

  1  if the unit productions  and  are in  put also 
       the unit production  in  while  can be extended
  2  if the unit production  and the production  () are in  
       put also the production  in 
  3  let  be the set of productions of  except unit productions 
  4  RETURN  

Clearly, and are equivalent. If is of type , then is also of type .
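
A possible runnable version of Eliminate-Unit-Productions is sketched below; productions are stored as pairs of strings, nonterminals are single uppercase letters, and the helper names are ours. The steps mirror the pseudocode: first the set of unit productions is closed transitively, then each unit production is combined with the non-unit productions, and finally all unit productions are dropped.

def eliminate_unit_productions(productions):
    def is_unit(right):
        return len(right) == 1 and right.isupper()

    units = {(l, r) for l, r in productions if is_unit(r)}
    # Step 1: transitive closure of the unit productions.
    changed = True
    while changed:
        changed = False
        for (a, b) in list(units):
            for (c, d) in list(units):
                if b == c and a != d and (a, d) not in units:
                    units.add((a, d))
                    changed = True
    # Step 2: for each unit production A -> B and non-unit B -> alpha, add A -> alpha.
    non_unit = [(l, r) for l, r in productions if not is_unit(r)]
    new = set(non_unit)
    for (a, b) in units:
        for (l, r) in non_unit:
            if l == b:
                new.add((a, r))
    # Step 3: the result contains no unit productions.
    return sorted(new)

P = [("S", "A"), ("A", "B"), ("B", "aB"), ("B", "b")]
print(eliminate_unit_productions(P))
# [('A', 'aB'), ('A', 'b'), ('B', 'aB'), ('B', 'b'), ('S', 'aB'), ('S', 'b')]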

Example 1.4 Use the above algorithm in the case of the grammar , where contains

Using the first step of the algorithm, we get the following new unit productions:

            (because of and ),

            (because of and ),

            (because of and ),

            (because of and ),

            (because of and ),

            (because of and ).

In the second step of the algorithm only productions with or in the right-hand side will be considered, since only productions , and can be used (the other productions are all unit productions). We get the following new productions:

            (because of and ),

              (because of and ),

            (because of and ),

            (because of and ),

            (because of and ).

The new grammar will have the productions:

Grammars in normal forms.

A grammar is said to be in normal form if its productions have no terminal symbols on the left-hand side.

We need the following notions. For alphabets and a homomorphism is a function for which . It is easy to see that for arbitrary value is uniquely determined by the restriction of on , because

If a homomorphism is a bijection then is an isomorphism.

Theorem 1.6 To any grammar an equivalent grammar in normal form can be associated.

Proof. Grammars of type 2 and 3 have only a nonterminal on the left-hand side of each production, so they are already in normal form. The proof has to be done only for grammars of type 0 and 1.

Let be the original grammar and we define the grammar in normal form as .

Let be those terminal symbols which occur in the left-hand side of productions. We introduce the new nonterminals . The following notation will be used: , , and .

Define the isomorphism , where

Define the set of productions as

In this case if and only if From this the theorem follows immediately, because
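
The construction used in the proof can be sketched as follows: every terminal that occurs on the left-hand side of some production is replaced, on both sides of every production, by a fresh nonterminal, and a production rewriting the fresh nonterminal to that terminal is added. The bracketed names X[a] below are ours; terminals are lowercase letters.

def to_normal_form(productions):
    bad = {ch for l, _ in productions for ch in l if ch.islower()}
    fresh = {a: "X[" + a + "]" for a in bad}        # a new nonterminal for each such terminal
    def sub(w):
        return "".join(fresh.get(ch, ch) for ch in w)
    new = [(sub(l), sub(r)) for l, r in productions]
    new += [(fresh[a], a) for a in sorted(bad)]     # the productions X[a] -> a
    return new

print(to_normal_form([("S", "aSb"), ("aS", "ab")]))
# [('S', 'X[a]Sb'), ('X[a]S', 'X[a]b'), ('X[a]', 'a')]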

Example 1.5 Let , where contains

In the left-hand side of productions the terminals occur, therefore consider the new nonterminals , and include in also the new productions , and .

Terminals will be replaced by nonterminals respectively, and we get the set as

Let us see what words can be generated by this grammar. It is easy to see that because

, so

We prove by mathematical induction that for . For this is the case, as we have seen before. Continuing the derivation we get , which is what we had to prove.

But So . These words can be generated also in .

1.1.4 Extended grammars

In this subsection extended grammars of type 1, 2 and 3 will be presented.

Extended grammar of type 1. All productions are of the form , where , except possibly the production .

Extended grammar of type 2. All productions are of the form , where

Extended grammar of type 3. All productions are of the form or , where .

Theorem 1.7 To any extended grammar an equivalent grammar of the same type can be associated.

Proof. Denote by the extended grammar and by the corresponding equivalent grammar of the same type.

Type 1. Define the productions of grammar by rewriting the productions , where , of the extended grammar into the form allowed in the case of grammar in the following way.

Let () be a production of , which is not in the required form. Add to the set of productions of the following productions, where are new nonterminals:

Furthermore, add to the set of productions of without any modification the productions of which are of permitted form, i.e. . Inclusion can be proved because each used production of in a derivation can be simulated by productions obtained from it. Furthermore, since the productions of can be used only in the prescribed order, we could not obtain other words, so also is true.

Type 2. Let . Productions of form have to be eliminated, only can remain, if doesn't occur in the right-hand side of productions. For this define the following sets:

Since for we have , and is a finite set, there must exist a for which . Let us denote this set by . It is easy to see that a nonterminal is in if and only if . (In addition if and only if .)

We define the productions of starting from the productions of in the following way. For each production with of add to the set of productions of this one and all productions which can be obtained from it by eliminating from one or more nonterminals which are in , but only in the case when the right-hand side does not become .

It is not difficult to see that this grammar generates the same language as does, except for the empty word . So, if , then the proof is finished. But if , then there are two cases. If the start symbol does not occur in the right-hand side of any production, then by introducing the production , grammar will also generate the empty word. If occurs in the right-hand side of a production, then we introduce a new start symbol and the new productions and . Now the empty word can also be generated by grammar .

Type 3. First we use for the procedure defined for grammars of type 2 to eliminate productions of the form . From the obtained grammar we eliminate the unit productions using the algorithm ELIMINATE-UNIT-PRODUCTIONS.

In the obtained grammar for each production , where , add to the productions of also the following

where are new nonterminals. It is easy to prove that grammar built in this way is equivalent to .

Example 1.6 Let be an extended grammar of type 1, where , and contains the following productions:

The only production which is not context-sensitive is . Using the method given in the proof, we introduce the productions:

Now the grammar is context-sensitive, where the elements of are

It can be proved that .

Example 1.7 Let be an extended grammar of type 2, where contains:

Then , , . The productions of the new grammar are:

The original grammar generates also the empty word and because occurs in the right-hand side of a production, a new start symbol and two new productions will be defined: . The context-free grammar equivalent to the original grammar is with the productions:

Both of these grammars generate language .

Example 1.8 Let be the extended grammar of type 3 under examination, where :

First, we eliminate production . Since , the productions will be

The latter production (which is a unit production) can also be eliminated, by replacing it with . Productions and have to be transformed. Since both productions have the same right-hand side, it is enough to introduce only one new nonterminal and to use the productions and instead of . Production will be replaced by . The new grammar is , where :

It can be proved that .

1.1.5 Closure properties in the Chomsky-classes

We will prove the following theorem, by which the Chomsky-classes of languages are closed under the regular operations, that is, the union and product of two languages of type is also of type , the iteration of a language of type is also of type ().

Theorem 1.8 The class () of languages is closed under the regular operations.

Proof. For the proof we will use extended grammars. Consider the extended grammars and of type each. We can suppose that .

Union. Let .

We will show that . If , then from the assumption that and are of type it follows by definition that is also of type . If and one of the grammars generates the empty word, then we eliminate from the corresponding production (possibly both) () and replace it by the production .

Product. Let .

We will show that By definition, if then will be of the same type. If and there is production in but there is no production in then production will be replaced by . We will proceed the same way in the symmetrical case. If there is in production and in production then they will be replaced by .

In the case of regular grammars (), because is not a regular production, we need to use another grammar , which differs from only in that instead of productions of the form , contains productions of the form .

Iteration. Let .

In the case of grammars of type 2 let . Then also is of type 2.

In the case of grammars of type 3, as in the case of the product, we change the productions, that is , where differs from only in that each will be replaced by , and the others are not changed. Then will also be of type 3.

The productions given in the case of type 2 are not valid for , because when applying the production we can get derivations of the form , , , where can be the left-hand side of a production. In this case, replacing by its right-hand side in derivation , we can generate a word which is not in the iterated language. To avoid such situations, first let us assume that the grammar is in normal form, i.e. the left-hand sides of productions do not contain terminals (see Section 1.1); second, we introduce a new nonterminal , so the set of nonterminals now is , and the productions are the following:

Now we can avoid situations in which the left-hand side of a production could extend over the limits of words in a derivation because of the iteration. The above derivations can be used only by beginning with and getting the derivation . Here we cannot replace unless the last symbol in is a terminal symbol, and only after using a production of the form .

It is easy to show that for each type.

Exercises

1.1-1 Give a grammar which generates language and determine its type.

1.1-2 Let be an extended context-free grammar, where ,

.

Give an equivalent context-free grammar.

1.1-3 Show that and are regular languages over arbitrary alphabet .

1.1-4 Give a grammar to generate language , where represents the number of 0's in word and the number of 1's.

1.1-5 Give a grammar to generate all natural numbers.

1.1-6 Give a grammar to generate the following languages, respectively:

,

,

,

.

1.1-7 Let be an extended grammar, where , and contains the productions:

Determine the type of this grammar. Give an equivalent, non-extended grammar of the same type. What language does it generate?

1.2 Finite automata and regular languages

Finite automata are computing models with an input tape and a finite set of states (Fig. 1.1). Among the states some are called initial and some final. At the beginning the automaton reads the first letter of the input word written on the input tape. Beginning in an initial state, the automaton reads the letters of the input word one after another while changing its states, and if after reading the last input letter the current state is a final one, we say that the automaton accepts the given word. The set of words accepted by such an automaton is called the language accepted (recognized) by the automaton.

Figure 1.1.  Finite automaton.


Definition 1.9 A nondeterministic finite automaton (NFA) is a system , where

is a finite, nonempty set of states,

is the input alphabet,

is the set of transitions (or of edges), where ,

is the set of initial states,

is the set of final states.

An NFA is in fact a directed, labelled graph, whose vertices are the states and there is a (directed) edge labelled with from vertex to vertex if . Among vertices some are initial and some final states. Initial states are marked by a small arrow entering the corresponding vertex, while the final states are marked with double circles. If two vertices are joined by two edges with the same direction then these can be replaced by only one edge labelled with two letters. This graph can be called a transition graph.

Example 1.9 Let , where , ,

,

The automaton can be seen in Fig. 1.2.

Figure 1.2.  The finite automaton of Example 1.9.



In the case of an edge vertex is the start-vertex, the end-vertex and the label. Now define the notion of the walk as in the case of graphs. A sequence

of edges of an NFA is a walk with the label . If then and . Such a walk is called an empty walk. For a walk the notation

will be used, or if then we write shortly . Here is the start-vertex and the end-vertex of the walk. The states in a walk are not necessarily distinct. A walk is productive if its start-vertex is an initial state and its end-vertex is a final state. We say that an NFA accepts or recognizes a word if this word is the label of a productive walk. The empty word is accepted by an NFA if there is an empty productive walk, i.e. there is an initial state which is also a final state.

Figure 1.3.  Nondeterministic finite automata.


Figure 1.4.  Transition tables of the NFA in Fig. 1.3



The set of words accepted by an NFA will be called the language accepted by this NFA. The language accepted or recognized by NFA is

The NFA and are equivalent if .

Sometimes the following transition function is useful:

This function associates to a state and an input letter the set of states into which the automaton can go if its current state is and the head is on input letter .

Denote by the cardinal (the number of elements) of .

Footnote. The same notation is used for the cardinality of a set and the length of a word, but this causes no confusion, because we denote words by lowercase letters and sets by capital letters. The only exception is , but this cannot be confused with a word.

An NFA is a deterministic finite automaton (DFA) if

In Fig. 1.2 a DFA can be seen.

Condition can be replaced by

If for a DFA for each state and for each letter then it is called a complete DFA.

Every DFA can be transformed in a complete DFA by introducing a new state, which can be called a snare state. Let be a DFA. An equivalent and complete DFA will be , where is the new state and . It is easy to see that . Using the transition function we can easily define the transition table. The rows of this table are indexed by the elements of , its columns by the elements of . At the intersection of row and column we put . In the case of Fig. 1.2, the transition table is:

The NFA in Fig. 1.3 are not deterministic: the first (automaton A) has two initial states, the second (automaton B) has two transitions with from state (to states and ). The transition tables of these two automata are in Fig. 1.4. is the set of words over which do not begin with two zeroes (of course is in the language), and is the set of words which contain as a subword.

Eliminating inaccessible states.

Let be a finite automaton. A state is accessible if it is on a walk which starts by an initial state. The following algorithm determines the inaccessible states building a sequence , , of sets, where is the set of initial states, and for any is the set of accessible states, which are at distance at most from an initial state.

Inaccessible-States(A)

  1   
  2   
  3  REPEAT 
  4     
  5    FOR all  
  6       DO FOR all  
  7          DO  
  8  UNTIL  
  9   
 10  RETURN  

The inaccessible states of the automaton can be eliminated without changing the accepted language.

If and then the running time of the algorithm (the number of steps) in the worst case is , because the number of steps in the two embedded loops is at most and in the REPEAT loop at most .

Set has the property that if and only if . The above algorithm can be extended by inserting this condition, to decide whether language is empty or not.
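
A minimal Python sketch of this computation (the data representation is an assumption made here: delta maps a pair (state, letter) to the set of successor states):

def accessible_states(states, sigma, delta, initial):
    reached = set(initial)                    # U_0 = I
    frontier = set(initial)
    while frontier:                           # repeat while new states appear
        new = set()
        for p in frontier:
            for a in sigma:
                new |= delta.get((p, a), set())
        frontier = new - reached
        reached |= frontier
    return reached

The inaccessible states are those in states but not in the returned set, and the accepted language is nonempty exactly when the returned set meets the set of final states.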

Eliminating nonproductive states.

Let be a finite automaton. A state is productive if it is on a walk which ends in a terminal state. For finding the productive states the following algorithm uses the function :

This function for a state and a letter gives the set of all states from which using this letter the automaton can go into the state .

Nonproductive-States(A)

  1   
  2   
  3  REPEAT 
  4     
  5    FOR all  
  6       DO FOR all  
  7          DO  
  8  UNTIL  
  9   
 10  RETURN  

The nonproductive states of the automaton can be eliminated without changing the accepted language.

If is the number of states, the number of letters in the alphabet, then the running time of the algorithm is also as in the case of the algorithm INACCESSIBLE-STATES.

The set given by the algorithm has the property that if and only if . So, with a small modification it can be used to decide whether language is empty or not.
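
A symmetric Python sketch, working backwards from the final states along reversed transitions (same hypothetical data representation as in the previous sketch):

def productive_states(states, sigma, delta, finals):
    reached = set(finals)
    frontier = set(finals)
    while frontier:
        new = set()
        for (p, a), succ in delta.items():    # p is productive if some
            if succ & frontier:               # transition from p reaches a
                new.add(p)                    # productive state
        frontier = new - reached
        reached |= frontier
    return reached                            # nonproductive: states - reached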

1.2.1 Transforming nondeterministic finite automata

In what follows we will show that any NFA can be transformed into an equivalent DFA.

Theorem 1.10 For any NFA one may construct an equivalent DFA.

Proof. Let be an NFA. Define a DFA , where

,

edges of are those triplets for which are not empty, and ,

,

.

We prove that .

a) First prove that . Let . Then there exists a walk

Using the transition function of NFA we construct the sets , . Then and since we get , so . Thus, there exists a walk

There are sets for which , and for we have , and

is a productive walk. Therefore . That is .

b) Now we show that . Let . Then there is a walk

Using the definition of we have , i.e. there exists , that is by the definitions of and there is such that . Similarly, there are the states such that , where , thus, there is a walk

so .

In constructing DFA we can use the corresponding transition function :

The empty set was excluded from the states, so we used here instead of .

Figure 1.5.  The equivalent DFA with NFA \mathrm{A} in Fig. 1.3.


Example 1.10 Apply Theorem 1.10 to transform NFA in Fig. 1.3. Introduce the following notation for the states of the DFA:

where is the initial state. Using the transition function we get the transition table:

This automaton contains many inaccessible states. By algorithm INACCESSIBLE-STATES we determine the accessible states of DFA:

.

Initial state is also a final state. States and are final states. States are inaccessible and can be removed from the DFA. The transition table of the resulted DFA is

The corresponding transition graph is in Fig. 1.5.

The algorithm given in Theorem 1.10 can be simplified. It is not necessary to consider all subsets of the set of states of the NFA. The states of the DFA can be obtained successively. Begin with the state and determine the states for all . For the newly obtained states we determine the states accessible from them. This can be continued until no new states arise.

In our previous example is the initial state. From this we get

The transition table is

which is the same (excepted the notation) as before.

The next algorithm constructs for an NFA the transition table of the equivalent DFA , but without determining the final states (which can easily be included). The value of ISIN() in the algorithm is true if state is already in and false otherwise. Let be an ordered list of the letters of .

NFA-DFA(A)

  1   
  2   
  3          counts the rows. 
  4          counts the states. 
  5  REPEAT 
  6    FOR         counts the columns. 
  7       DO  
  8          IF  
  9             THEN IF ISIN() 
 10                THEN  
 11                ELSE  
 12                    
 13                    
 14                    
 15             ELSE  
 16     
 17  UNTIL  
 18  RETURN transition table  of  

Since the REPEAT loop is executed as many times as the number of states of the new automaton, in the worst case the running time can be exponential, because, if the number of states of the NFA is , then the DFA can have as many as states. (The number of subsets of a set of elements is , including the empty set.)
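
The construction can be sketched in Python as follows (again under the assumption, made only for this illustration, that delta maps (state, letter) to a set of states; DFA states are built on demand as frozensets of NFA states, so only the accessible part of the potential exponentially many states is ever created):

def nfa_to_dfa(sigma, delta, initial, finals):
    start = frozenset(initial)
    dfa_delta, dfa_finals = {}, set()
    todo, seen = [start], {start}
    while todo:                               # the REPEAT loop
        S = todo.pop()
        if S & set(finals):
            dfa_finals.add(S)
        for a in sigma:
            T = frozenset(q for p in S for q in delta.get((p, a), ()))
            if T:                             # the empty set is not a state
                dfa_delta[(S, a)] = T
                if T not in seen:
                    seen.add(T)
                    todo.append(T)
    return seen, dfa_delta, start, dfa_finals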

By Theorem 1.10, to any NFA one may construct an equivalent DFA. Conversely, any DFA is also an NFA by definition. So, nondeterministic finite automata accept the same class of languages as deterministic finite automata.

1.2.2 Equivalence of deterministic finite automata

In this subsection we will use complete deterministic finite automata only. In this case has a single element. In formulae, sometimes, instead of the set we will use its single element. We introduce for a set the function which gives us the single element of the set , so . Using walks which begin with the initial state and have the same label in two DFA's we can determine the equivalence of these DFA's. If only one of these walks ends in a final state, then the two DFA's are not equivalent.

Consider two DFA's and over the same alphabet. We want to determine whether or not they are equivalent. We construct a table with elements of the form , where and . Beginning with the second column of the table, we associate a column to each letter of the alphabet . If the first element of the th row is , then at the intersection of the th row and the column associated to letter we put the pair .

In the first column of the first row we put and complete the first row using the above method. If in any column of the first row there occurs a pair of states of which one is a final state and the other is not, then the algorithm ends: the two automata are not equivalent. If there is no such pair of states, every new pair is written in the first column. The algorithm continues with the next unfilled row. If no new pair of states occurs in the table and for each pair both states are final or both are not, then the algorithm ends and the two DFA's are equivalent.

If , and , then taking into account that in the worst case the REPEAT loop is executed times and the FOR loop times, the running time of the algorithm in the worst case will be , or if then .

Our algorithm was described to determine the equivalence of two complete DFA's. If we have to determine the equivalence of two NFA's, first we transform them into complete DFA's and after this we can apply the above algorithm.

DFA-Equivalence()

  1  write in the first column of the first row the pair  
  2   
  3  REPEAT 
  4     
  5    let  be the pair in the first column of the th row 
  6    FOR all  
  7       DO write in the column associated to  in the th row 
                the pair 
  8          IF one state in  is final 
                   and the other not
  9             THEN RETURN NO 
 10             ELSE write pair  
                      in the next empty row of the first column,
                      if not occurred already in the first column
 11  UNTIL the first element of th row becomes empty 
 12  RETURN YES 
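
A compact Python sketch of the same pair-exploration idea, assuming two complete DFA's given (a convention of this sketch) by an initial state, a transition dict (state, letter) -> state and a set of final states over a common alphabet sigma:

def dfa_equivalent(sigma, dfa1, dfa2):
    q0, d1, f1 = dfa1
    p0, d2, f2 = dfa2
    todo, seen = [(q0, p0)], {(q0, p0)}
    while todo:                               # each pair is a row of the table
        q, p = todo.pop()
        if (q in f1) != (p in f2):            # one state final, the other not
            return False
        for a in sigma:
            pair = (d1[(q, a)], d2[(p, a)])
            if pair not in seen:
                seen.add(pair)
                todo.append(pair)
    return True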

Example 1.11 Determine if the two DFA's in Fig. 1.6 are equivalent or not. The algorithm gives the table

The two DFA's are equivalent because all possible pairs of states are considered and in every pair both states are final or both are not final.

Figure 1.6.  Equivalent DFA's (Example 1.11).


Example 1.12 The table of the two DFA's in Fig. 1.7 is:

These two DFA's are not equivalent, because in the last column of the second row in the pair the first state is final and the second not.

Figure 1.7.  Non equivalent DFA's (Example 1.12).


1.2.3 Equivalence of finite automata and regular languages.

We have seen that NFA's accept the same class of languages as DFA's. The following theorem states that this class is that of regular languages.

Theorem 1.11 If is a language accepted by a DFA, then one may construct a regular grammar which generates language .

Proof. Let be the DFA accepting language , that is . Define the regular grammar with the productions:

If for and , then put production in .

If and , then put also production in .

Prove that .

Let and . Thus, since accepts word , there is a walk

Then there are in the productions

(in the right-hand side of the last production does not occur, because ), so there is the derivation

Therefore, .

Conversely, let and . Then there exists a derivation

in which productions

were used, which by definition means that in DFA there is a walk

and since is a final state, .

If the DFA accepts also the empty word , then in the above grammar we introduce a new start symbol instead of , consider the new production and for each production introduce also .

Example 1.13 Let be a DFA, where . The corresponding transition table is

The transition graph of is in Fig. 1.8. By Theorem 1.11 we define regular grammar with the productions in

One may prove that .

Figure 1.8.  DFA of the Example 1.13.



The method described in the proof of Theorem 1.11 can easily be given as an algorithm. The productions of the regular grammar obtained from the DFA can be determined by the following algorithm.

Regular-Grammar-from-DFA(A)

  1   
  2  FOR all  
  3    DO FOR all  
  4       DO FOR all  
  5          DO IF  
  6             THEN  
  7                IF  
  8                   THEN  
  9  IF  
 10    THEN  
 11  RETURN  

It is easy to see that the running time of the algorithm is , if the number of states is and the number of letters in the alphabet is . In lines 2–4 we can consider only one loop, if we use the elements of . Then the worst case running time is , where is the number of transitions of the DFA. This is also , since all transitions are possible. This algorithm is:

Regular-Grammar-from-Dfa'(A)

  1   
  2  FOR all  
  3    DO  
  4       IF  
  5          THEN  
  6  IF  
  7    THEN  
  8  RETURN  
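
A Python sketch of this one-pass variant (the encoding of productions as pairs is an assumption of the sketch: (p, (a, q)) stands for p -> aq and (p, (a,)) for p -> a; the special treatment of the empty word is omitted here):

def regular_grammar_from_dfa(delta, initial, finals):
    productions = set()
    for (p, a), q in delta.items():           # one pass over the transitions
        productions.add((p, (a, q)))          # p -> a q
        if q in finals:
            productions.add((p, (a,)))        # p -> a
    return initial, productions               # start symbol and productions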

Theorem 1.12 If is a regular language, then one may construct an NFA that accepts language .

Proof. Let be the grammar which generates language . Define NFA :

, where (i.e. is a new symbol),

For every production , define transition in .

For every production , define transition in .

Prove that .

Let , . Then there is in a derivation of word :

This derivation is based on productions

Then, by the definition of the transitions of NFA there exists a walk

Thus, . If , there is production , but in this case the initial state is also a final one, so . Therefore, .

Let now . Then there exists a walk

If is the empty word, then instead of we have in the above formula , which also is a final state. In other cases only can be as last symbol. Thus, in there exist the productions

and there is the derivation

thus, and therefore .

Example 1.14 Let be a regular grammar. The NFA associated is , where . The corresponding transition table is

The transition graph is in Fig. 1.9. This NFA can be simplified, states and can be contracted in one final state.

Figure 1.9.  NFA associated to grammar in Example 1.14.



Using the above theorem we define an algorithm which associate an NFA to a regular grammar .

NFA-from-Regular-Grammar(A)

  1   
  2   
  3  FOR all  
  4    DO FOR all  
  5       DO IF  
  6          THEN  
  7       FOR all  
  8          DO IF  
  9             THEN  
 10  IF  
 11    THEN  
 12    ELSE  
 13  RETURN A 

As in the case of the algorithm REGULAR-GRAMMAR-FROM-DFA, the running time is , where is the number of nonterminals and the number of terminals. The loops in lines 3, 4 and 7 can be replaced by only one, which uses the productions. The running time in this case is better and is equal to , if is the number of productions. This algorithm is:

NFA-from-Regular-Grammar'(A)

  1   
  2   
  3  FOR all  
  4    DO IF  
  5       THEN  
  6    IF  
  7       THEN  
  8  IF  
  9    THEN  
 10    ELSE  
 11  RETURN A 
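
A Python sketch of this production-by-production variant, using the same hypothetical encoding of productions as above ((A, (a, B)) for A -> aB, (A, (a,)) for A -> a, (A, ()) for A -> epsilon); the name "Z" for the new final state is an assumption of the sketch:

def nfa_from_regular_grammar(start, productions):
    final = "Z"                               # new state, assumed fresh
    delta, finals = {}, {final}
    for A, rhs in productions:
        if rhs == ():                         # S -> epsilon: the initial
            finals.add(start)                 # state is also final
            continue
        a = rhs[0]
        target = rhs[1] if len(rhs) == 2 else final
        delta.setdefault((A, a), set()).add(target)
    return delta, {start}, finals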

From Theorems 1.10, 1.11 and 1.12 it results that the class of regular languages coincides with the class of languages accepted by NFA's and also with the class of languages accepted by DFA's. The result of these three theorems is illustrated in Fig. 1.10 and can also be summarised in the following theorem.

Figure 1.10.  Relations between regular grammars and finite automata. To any regular grammar one may construct an NFA which accepts the language generated by that grammar. Any NFA can be transformed in an equivalent DFA. To any DFA one may construct a regular grammar which generates the language accepted by that DFA.


Theorem 1.13 The following three classes of languages are the same:

the class of regular languages,

the class of languages accepted by DFA's,

the class of languages accepted by NFA's.

Operation on regular languages

It is known (see Theorem 1.8) that the set of regular languages is closed under the regular operations, that is, if are regular languages, then the languages , and are also regular. The following statements are also true for regular languages.

The complement of a regular language is also regular. This is easy to prove using automata. Let be a regular language and let be a DFA which accepts language . It is easy to see that the DFA accepts language . So, is also regular.

The intersection of two regular languages is also regular. Since , the intersection is also regular.

The difference of two regular languages is also regular. Since , the difference is also regular.

1.2.4 Finite automata with special moves

A finite automaton with -moves (FA with -moves) extends NFA in such way that it may have transitions on the empty input , i.e. it may change a state without reading any input symbol. In the case of a FA with -moves for the set of transitions it is true that .

The transition function of a FA with -moves is:

The FA with -moves in Fig. 1.11 accepts words of form , where and .

Figure 1.11.  Finite automata with \epsilon -moves.


Theorem 1.14 To any FA with -moves one may construct an equivalent NFA (without -moves).

Let be an FA with -moves and let us construct an equivalent NFA . The following algorithm determines the sets and . For a state denote by the set of states (including itself) into which one may go from using -moves only. This may be extended also to sets

Clearly, for all and both and may be computed. Suppose in the sequel that these are given.

The following algorithm determines the transitions using the transition function , which is defined in line 5.

If and , then lines 2–6 show that the running time in worst case is .

Eliminate-Epsilon-Moves(A)

  1   
  2  FOR all  
  3    DO FOR all  
  4       DO  
  5           
  6   
  7  RETURN  
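
A Python sketch of the elimination, assuming (as a convention of this sketch) that epsilon-moves are stored in delta under the letter None and that delta maps (state, letter) to a set of states:

def eps_closure(delta, p):
    closure, todo = {p}, [p]
    while todo:
        q = todo.pop()
        for r in delta.get((q, None), set()):
            if r not in closure:
                closure.add(r)
                todo.append(r)
    return closure

def eliminate_epsilon_moves(states, sigma, delta, finals):
    new_delta, new_finals = {}, set()
    for p in states:
        cl = eps_closure(delta, p)
        if cl & set(finals):                  # p becomes final if its closure
            new_finals.add(p)                 # contains a final state
        for a in sigma:
            step = set()
            for q in cl:
                step |= delta.get((q, a), set())
            target = set()
            for r in step:
                target |= eps_closure(delta, r)
            if target:
                new_delta[(p, a)] = target
    return new_delta, new_finals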

Example 1.15 Consider the FA with -moves in Fig. 1.11. The corresponding transition table is:

Apply algorithm ELIMINATE-EPSILON-MOVES.

, ,

, and its intersection with is not empty, thus .

,

.

,

,

,

,

.

The transition table of NFA is:

and the transition graph is in Fig. 1.12.

Figure 1.12.  NFA equivalent to FA with \varepsilon -moves given in Fig. 1.11.



Define regular operations on NFA: union, product and iteration. The result will be an FA with -moves.

Operation will be given also by diagrams. An NFA is given as in Fig. 1.13(a). Initial states are represented by a circle with an arrow, final states by a double circle.

Figure 1.13.  (a) Representation of an NFA. Initial states are represented by a circle with an arrow, final states by a double circle. (b) Union of two NFA's.



Let and be NFA. The result of any operation is a FA with -moves . Suppose that always. If not, we can rename the elements of any set of states.

Union. , where

,

,

,

,

.

For the result of the union see Fig. 1.13(b). The result is the same if instead of a single initial state we choose as set of initial states the union . In this case the result automaton will be without -moves. By the definition it is easy to see that .

Product. , where

,

,

,

,

.

For the result automaton see Fig. 1.14(a). Here also .

Figure 1.14.  (a) Product of two FA. (b) Iteration of an FA.


Iteration. , where

,

,

,

,

.

The iteration of an FA can be seen in Fig. 1.14(b). For this operation it is also true that .

The definition of these three operations proves again that regular languages are closed under the regular operations.

1.2.5 Minimization of finite automata

A DFA is called minimum state automaton if for any equivalent complete DFA it is true that . We give an algorithm which builds for any complete DFA an equivalent minimum state automaton.

Figure 1.15.  Minimization of DFA.



States and of an DFA are equivalent if for arbitrary word we reach from both either final or nonfinal states, that is

if for any word

If two states are not equivalent, then they are distinguishable. In the following algorithm the distinguishable states will be marked by a star, and equivalent states will be merged. The algorithm associates a list of pairs of states with some pairs of states, in expectation of a later marking by a star: if we mark a pair of states by a star, then all pairs on the associated list will also be marked by a star. The algorithm is given for a DFA without inaccessible states. The DFA used is complete, so contains exactly one element; the function defined in Subsection 1.2.2, which gives the unique element of a set, will also be used here.

Automaton-Minimization()

  1  mark with a star all pairs of states  for which  and  or  and  
  2  associate an empty list with each unmarked pair  
  3  FOR all unmarked pair of states  and for all symbol  
          examine pairs of states 
          IF any of these pairs is marked,
             THEN mark also pair  with all the elements on the list before
                associated with pair 
             ELSE IF all the above pairs are unmarked
                THEN put pair  on each list associated with pairs
                   , unless 
  4  merge all unmarked (equivalent) pairs 

Figure 1.16.  Minimum automaton equivalent with DFA in Fig. 1.15.



After finishing the algorithm, if a cell of the table does not contain a star, then the states corresponding to its row and column indices are equivalent and may be merged. Merging of states is continued as long as it is possible. We can say that the equivalence relation decomposes the set of states into equivalence classes, and the states in such a class may all be merged.

Remark. The above algorithm can also be used in the case of a DFA which is not complete, that is, there are states for which no transition exists. Then a pair may occur, and if is a final state, consider this pair marked.
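
The marking step can also be iterated to a fixpoint instead of maintaining the lists; the following Python sketch takes this simpler (slightly less efficient) route and assumes a complete DFA with delta given as a dict (state, letter) -> state:

from itertools import combinations

def distinguishable_pairs(states, sigma, delta, finals):
    marked = {frozenset((p, q)) for p, q in combinations(states, 2)
              if (p in finals) != (q in finals)}
    changed = True
    while changed:
        changed = False
        for p, q in combinations(states, 2):
            pair = frozenset((p, q))
            if pair in marked:
                continue
            for a in sigma:
                succ = frozenset((delta[(p, a)], delta[(q, a)]))
                if len(succ) == 2 and succ in marked:
                    marked.add(pair)          # successors distinguishable,
                    changed = True            # so p and q are too
                    break
    return marked                             # unmarked pairs may be merged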

Example 1.16 Let be the DFA in Fig. 1.15. We will use a table for marking pairs with a star. Marking pair means putting a star in the cell corresponding to row and column (or row and column ).

First we mark pairs , , , and (because is the single final state). Then consider all unmarked pairs and examine them as the algorithm requires. Let us begin with pair . Associate with it pairs , that is , . Because pair is already marked, mark also pair .

In the case of pair the new pairs are and . With pair associate pair on a list, that is

Now continuing with one obtains the pairs and , with which nothing is associated by the algorithm.

Continue with pair . The associated pairs are and . None of them are marked, so associate with them on a list pair , that is

Now continuing with we get the pairs and , and because this latter is marked we mark pair and also pair associated to it on a list. Continuing we will get the table in Fig. 1.15, that is we get that and . After merging them we get an equivalent minimum state automaton (see Fig. 1.16).

1.2.6 Pumping lemma for regular languages

The following theorem, called the pumping lemma for historical reasons, may be used efficiently to prove that a language is not regular. It gives a necessary (but not sufficient) condition for a language to be regular.

Theorem 1.15 (pumping lemma) For any regular language there exists a natural number (depending only on ), such that any word of with length at least may be written as such that

(1) ,

(2) ,

(3) for all .

Proof. If is a regular language, then there is a DFA which accepts it (by Theorems 1.12 and 1.10). Let be this DFA, so . Let be the number of its states, that is . Let and . Then, because the automaton accepts word , there are states and a walk

Figure 1.17.  Sketch of DFA used in the proof of the pumping lemma.



Because the number of states is and , by the pigeonhole principle the states cannot all be distinct (see Fig. 1.17): there are at least two of them which are equal.

Footnote. Pigeonhole principle: If we have to put more than objects into boxes, then at least one box will contain at least two objects.

Let , where and is the least such index. Then . Decompose word as:

.

This decomposition immediately yields to and . We will prove that for any .

Because , there exists a walk

and because of , this may be written also as

From this walk can be omitted or can be inserted many times. So, there are the following walks:

Therefore for all , and this proves the theorem.

Example 1.17 We use the pumping lemma to show that is not regular. Assume that is regular, and let be the corresponding natural number given by the pumping lemma. Because the length of the word is , this word can be decomposed as in the lemma. We prove that this leads to a contradiction. Let be the decomposition as in the lemma. Then , so and can contain no letters other than , and because we must have , word contains at least one . Then for will contain a different number of 's and 's, therefore for any . This contradicts the third assertion of the lemma, hence the assumption that is regular is false. Therefore .

Because the context-free grammar generates language , we have . From these two follow that .

Example 1.18 We show that is not regular. ( is the number of 0's in , while the number of 1's).

We proceed as in the previous example using here word , where is the natural number associated by lemma to language .

Example 1.19 We prove, using the pumping lemma, that is not a regular language. Let be, where here is also the natural number associated to by the pumping lemma. From we have that contains no other letters than , but it contains at least one. By lemma we have , that is not possible. Therefore is not regular.

Pumping lemma has several interesting consequences.

Corollary 1.16 Regular language is not empty if and only if there exists a word , , where is the natural number associated to by the pumping lemma.

Proof. The assertion in one direction is obvious: if there exists a word shorter than in , then . Conversely, let and let be the shortest word in . We show that . If , then we apply the pumping lemma and obtain the decomposition , with and . This is a contradiction, because and is the shortest word in . Therefore .

Corollary 1.17 There exists an algorithm that can decide if a regular language is or not empty.

Proof. Assume that , where is a DFA. By Corollary 1.16 and Theorem 1.15, language is not empty if and only if it contains a word shorter than , where is the number of states of automaton . Thus it is enough to decide whether there is a word shorter than which is accepted by automaton . Because the number of words shorter than is finite, the problem can be decided.

When we gave an algorithm for the inaccessible states of a DFA, we remarked that the procedure can also be used to decide whether the language accepted by that automaton is empty. Because finite automata accept regular languages, we already have two procedures to decide whether a regular language is empty. Moreover, we have a third procedure, if we take into account that the algorithm for finding the productive states can also be used to decide whether a regular language is empty.

Corollary 1.18 A regular language is infinite if and only if there exists a word such that , where is the natural number associated to language , given by the pumping lemma.

Proof. If is infinite, then it contains words longer than , and let be the shortest word longer than in . Because is regular we can use the pumping lemma, so , where , thus is also true. By the lemma . But because and the shortest word in longer than is , we get . From we get also .

Conversely, if there exists a word such that , then using the pumping lemma, we obtain that , and for any , therefore is infinite.

Now, the question is: how can we apply the pumping lemma for a finite regular language, since by pumping words we get an infinite number of words? The number of states of a DFA accepting language is greater than the length of the longest word in . So, in there is no word with length at least , when is the natural number associated to by the pumping lemma. Therefore, no word in can be decomposed in the form , where , , , and this is why we can not obtain an infinite number of words in .

1.2.7 Regular expressions

In this subsection we introduce for any alphabet the notion of regular expressions over and the corresponding representing languages. A regular expression is a formula, and the corresponding language is a language over . For example, if , then , , are regular expressions over which represent respectively languages , , . The exact definition is the following.

Definition 1.19 Define recursively a regular expression over and the language it represents.

is a regular expression representing the empty language.

is a regular expression representing language .

If , then is a regular expression representing language .

If , are regular expressions representing languages and respectively, then , , are regular expressions representing languages , and respectively.

Regular expression over can be obtained only by using the above rules a finite number of times.

Some brackets can be omitted in the regular expressions if taking into account the priority of operations (iteration, product, union) the corresponding languages are not affected. For example instead of we can consider .

Two regular expressions are equivalent if they represent the same language, that is if , where and are the languages represented by regular expressions and respectively. Figure 1.18 shows some equivalent expressions.

Figure 1.18.  Properties of regular expressions.



We show that to any finite language can be associated a regular expression which represent language . If , then . If , then , where for any expression is a regular expression representing language . This latter can be done by the following rule. If , then , else if , where depends on , then , where the brackets are omitted. We prove the theorem of Kleene which refers to the relationship between regular languages and regular expression.

Theorem 1.20 (Kleene's theorem) Language is regular if and only if there exists a regular expression over representing language .

Proof. First we prove that if is a regular expression, then language which represents is also regular. The proof will be done by induction on the construction of expression.

If , , , then , , respectively. Since is finite in all three cases, it is also regular.

If , then , where and are the languages which represent the regular expressions and respectively. By the induction hypothesis languages and are regular, so is also regular, because regular languages are closed under union. The cases and can be proved in a similar way.

Conversely, we prove that if is a regular language, then a regular expression can be associated to it which represents exactly the language . If is regular, then there exists a DFA for which . Let the states of the automaton be . Define the languages for all and . is the set of words for which automaton goes from state to state without using any state with index greater than . Using the transition graph we can say: a word is in if from state we arrive at state following the edges of the graph, not using any state , and concatenating the corresponding labels on the edges we get exactly that word. The sets can also be defined formally:

, if ,

,

for all .

We can prove by induction that the sets can be described by regular expressions. Indeed, if , then for all and the languages are finite, so they can be expressed by regular expressions representing exactly these languages. Moreover, if for all and language can be expressed by a regular expression, then language can also be expressed by a regular expression, constructed correspondingly from the regular expressions representing languages , , and , using the above formula for .

Finally, if is the set of final states of the DFA , then can be expressed by a regular expression obtained from expressions representing languages using operation .

Further on we give some procedures which associate regular expressions to DFA's and, conversely, DFA's to regular expressions.

Associating regular expressions to finite automata.

We present here three methods, each of which associates to a DFA the corresponding regular expression.

Method 1. Using the result of the theorem of Kleene, we will construct the sets , and write a regular expression which represents the language , where is the set of final states of the automaton.
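
This dynamic programming over k can be sketched in Python as follows; the representation is an assumption of the sketch (states numbered 1..n, a complete transition dict delta, expressions built as plain strings with "eps" for the empty word and None for the empty language, and no simplification of the result):

def kleene_regex(n, sigma, delta, initial, finals):
    EMPTY = None                              # stands for the empty language
    def union(r, s):
        if r is EMPTY: return s
        if s is EMPTY: return r
        return r + "+" + s
    def concat(r, s):
        if r is EMPTY or s is EMPTY: return EMPTY
        return "(" + r + ")(" + s + ")"
    def star(r):
        return "eps" if r is EMPTY else "(" + r + ")*"

    # k = 0: letters labelling a direct transition, plus eps on the diagonal
    R = {(i, j): EMPTY for i in range(1, n + 1) for j in range(1, n + 1)}
    for (i, a), j in delta.items():
        R[(i, j)] = union(R[(i, j)], a)
    for i in range(1, n + 1):
        R[(i, i)] = union("eps", R[(i, i)])

    for k in range(1, n + 1):                 # allow intermediate states <= k
        R = {(i, j): union(R[(i, j)],
                           concat(concat(R[(i, k)], star(R[(k, k)])), R[(k, j)]))
             for i in range(1, n + 1) for j in range(1, n + 1)}

    result = EMPTY                            # union over the final states
    for f in finals:
        result = union(result, R[(initial, f)])
    return result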

Figure 1.19.  DFA from Example 1.20, to which regular expression is associated by Method 1.


Figure 1.20.  DFA in Example 1.21 to which a regular expression is associated by Method 1. The computations are in Figure 1.21.


Figure 1.21.  Determining a regular expression associated to the DFA in Figure 1.20 using the sets R_{ij}^{k} .


Example 1.20 Consider the DFA in Fig. 1.19.

Then the regular expression corresponding to is .

Example 1.21 Find a regular expression associated to DFA in Fig. 1.20. The computations are in Figure 1.21. The regular expression corresponding to is .

Method 2. Now we generalize the notion of finite automaton, considering words instead of letters as labels of edges. In such an automaton each walk determines a regular expression, which determines a regular language. The regular language accepted by a generalized finite automaton is the union of the regular languages determined by the productive walks. It is easy to see that generalized finite automata accept regular languages.

The advantage of generalized finite automata is that the number of their edges can be reduced by equivalent transformations, which do not change the accepted language and which lead to a graph with only one edge, whose label is exactly the accepted language. The possible equivalent transformations can be seen in Fig. 1.22. If some of the vertices 1, 2, 4, 5 in the figure coincide, then in the result they are merged and a loop appears.

First, the automaton is transformed by corresponding -moves to have only one initial and one final state. Then, applying the equivalent transformations until the graph has only one edge, we obtain as the label of this edge the regular expression associated to the automaton.

Figure 1.22.  Possible equivalent transformations for finding regular expression associated to an automaton.


Figure 1.23.  Transformation of the finite automaton in Fig. 1.19.


Example 1.22 In the case of Fig. 1.19 the result is obtained by steps illustrated in Fig. 1.23. This result is , which represents the same language as obtained by Method 1 (See example 1.20).

Example 1.23 In the case of Fig. 1.20 it is not necessary to introduce a new initial and final state. The steps of the transformations can be seen in Fig. 1.24. The resulting regular expression can also be written as , which is the same as that obtained by the previous method.

Figure 1.24.  Steps of Example 1.23.


Method 3. The third method for writing regular expressions associated to finite automata uses formal equations. A variable is associated to each state of the automaton (different variables to different states). Associate to each state an equation whose left side contains , and whose right side contains a sum of terms of the form or , where is a variable associated to a state and is its corresponding input symbol. If there is no incoming edge into the state corresponding to , then the right side of the equation with left side contains ; otherwise it is the sum of all terms of the form for which there is a transition labelled with letter from the state corresponding to to the state corresponding to . If the state corresponding to is both an initial and a final state, then the right side of the equation with left side will also contain a term equal to . For example, in the case of Fig. 1.20 let be the variables corresponding to the states . The corresponding equations are

.

If an equation is of the form , where are arbitrary words not containing , then it is easy to see by a simple substitution that is a solution of the equation.
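
In a commonly used notation (the symbols below are placeholders chosen here, since the formula itself is not reproduced above), this rule, often called Arden's rule, reads:

% X is the unknown; \alpha and \beta are regular expressions not containing X
X = \alpha X + \beta
\quad\Longrightarrow\quad
X = \alpha^{*}\beta ,
\qquad\text{since}\qquad
\alpha(\alpha^{*}\beta) + \beta = (\alpha\alpha^{*} + \varepsilon)\beta = \alpha^{*}\beta .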

Because these equations are linear, all of them can be written in the form or , where do not contain any variable. Substituting this into the other equations, the number of remaining equations is reduced by one. In this way the system of equations can be solved for each variable.

The solution will be given by variables corresponding to final states summing the corresponding regular expressions.

In our example from the first equation we get . From here , or , and solving this we get . Variable can be obtained immediately and we obtain .

Using this method in the case of Fig. 1.19, the following equations will be obtained

Therefore

.

Adding the two equations we will obtain

, from where (considering as and as ) we get the result

.

From here the value of after the substitution is

,

which is equivalent to the expression obtained using the other methods.

Associating finite automata to regular expressions.

Associate to the regular expression a generalized finite automaton:

After this, use the transformations in Fig. 1.25 step by step, until an automaton with labels equal to letters from or will be obtained.

Example 1.24 Start from the regular expression . The steps of the transformations are in Fig. 1.26(a)-(e). The last finite automaton (see Fig. 1.26(e)) can be put in a simpler form, as can be seen in Fig. 1.26(f). After eliminating the -moves and transforming it into a deterministic finite automaton, the DFA in Fig. 1.27 will be obtained, which is equivalent to the DFA in Fig. 1.19.

Figure 1.25.  Possible transformations to obtain finite automaton associated to a regular expression.


Figure 1.26.  Associating a finite automaton to the regular expression \varepsilon+(0+1)^{*}1 .


Figure 1.27.  Finite automaton associated to the regular expression \varepsilon+(0+1)^{*}1 .


Exercises

1.2-1 Give a DFA which accepts natural numbers divisible by 9.

1.2-2 Give a DFA which accepts the language containing all words formed by

a. an even number of 0's and an even number of 1's,

b. an even number of 0's and an odd number of 1's,

c. an odd number of 0's and an even number of 1's,

d. an odd number of 0's and an odd number of 1's.

1.2-3 Give a DFA to accept respectively the following languages:

.

1.2-4 Give an NFA which accepts words containing at least two 0's and any number of 1's. Give an equivalent DFA.

1.2-5 Minimize the DFA's in Fig. 1.28.

Figure 1.28.  DFA's to minimize for Exercise 1.2-5



1.2-6 Show that the DFA in 1.29.(a) is a minimum state automaton.

Figure 1.29.  Finite automata for Exercises 1.2-6 and 1.2-7.



1.2-7 Transform NFA in Fig. 1.29.(b) in a DFA, and after this minimize it.

1.2-8 Define finite automaton which accepts all words of the form (), and finite automaton which accepts all words of the form (). Define the union automaton , and then eliminate the -moves.

1.2-9 Associate to DFA in Fig. 1.30 a regular expression.

Figure 1.30.  DFA for Exercise 1.2-9.



1.2-10 Associate to regular expression a DFA.

1.2-11 Prove, using the pumping lemma, that none of the following languages are regular:

.

1.2-12 Prove that if is a regular language, then is also regular.

1.2-13 Prove that if is a regular language, then the following languages are also regular.

.

1.2-14 Show that the following languages are all regular.

,

,

.

1.3 Pushdown automata and context-free languages

In this section we deal with the pushdown automata and the class of languages — the context-free languages — accepted by them.

As we have seen in Section 1.1, a context-free grammar is one with productions of the form , , . The production is also permitted if does not appear in the right-hand side of any production. Language is the context-free language generated by grammar .

1.3.1 Pushdown automata

We have seen that finite automata accept the class of regular languages. Now we get to know a new kind of automata, the so-called pushdown automata, which accept context-free languages. Pushdown automata differ from finite automata mainly in that they may change state without reading any input symbol (i.e. by reading the empty word) and that they possess a stack memory, which uses the so-called stack symbols (see Fig. 1.31).

Figure 1.31.  Pushdown automaton.



The pushdown automaton gets a word as input, and starts working from an initial state, having in the stack a special symbol, the initial stack symbol. While working, the pushdown automaton changes its state based on the current state, the next input symbol (or the empty word) and the stack top symbol, and replaces the top symbol in the stack with a (possibly empty) word.

There are two types of acceptance. The pushdown automaton accepts a word by final state when after reading it the automaton enters a final state. The pushdown automaton accepts a word by empty stack when after reading it the automaton empties its stack. We show that these two modes of acceptance are equivalent.

Definition 1.21 A nondeterministic pushdown automaton is a system

where

is the finite, non-empty set of states

is the input alphabet,

is the stack alphabet,

is the set of transitions or edges,

is the initial state,

is the start symbol of stack,

is the set of final states.

A transition means that if the pushdown automaton is in state , reads letter from the input tape (instead of an input letter we can also consider the empty word ), and the top symbol in the stack is , then the pushdown automaton enters state and replaces in the stack by word . Writing word in the stack is done in natural order (the letters of word are put in the stack letter by letter, from left to right). Instead of writing the transition we will use the more suggestive notation .

Here, as in the case of finite automata, we can define a transition function

which associate to current state, input letter and top letter in stack pairs of the form , where is the word written in stack and the new state.

Because the pushdown automaton is nondeterministic, we will have for the transition function

(if the pushdown automaton reads an input letter and moves to right), or

(without move on the input tape).

A pushdown automaton is deterministic, if for any and we have

, and

if , then , .

We can associate to any pushdown automaton a transition table, exactly as in the case of finite automata. The rows of this table are indexed by the elements of , the columns by the elements of and (to each and there corresponds a column). At the intersection of the row corresponding to state and the column corresponding to and we put the pairs if . The transition graph, in which the label of an edge corresponds to the transition , can also be defined.

Figure 1.32.  Example of pushdown automaton.


Example 1.25 . Elements of are:

The transition function:

The transition table:

Because for the transition function every set which is not empty contains only one element (e.g. ), in the above table each cell contains only one element, and the set notation is not used. Generally, if a set has more than one element, then its elements are written one under other. The transition graph of this pushdown automaton is in Fig. 1.32.

The current state, the unread part of the input word and the content of the stack constitute a configuration of the pushdown automaton, i.e. for each , and the triplet can be a configuration. If and , then the pushdown automaton can change its configuration in two ways:

,

if

,

if .

The reflexive and transitive closure of the relation will be denoted by . Instead of using , sometimes is considered.

How does such a pushdown automaton work? Starting with the initial configuration, we consider all possible next configurations, and after this the next configurations of these, and so on, as long as it is possible.

Definition 1.22 Pushdown automaton accepts (recognizes) word by final state if there exists a sequence of configurations of for which the following are true:

the first element of the sequence is ,

one can go from each element of the sequence to the next element, except when the sequence has only one element,

the last element of the sequence is , where and .

Therefore pushdown automaton accepts word by final state, if and only if for some and . The set of words accepted by final state by pushdown automaton will be called the language accepted by by final state and will be denoted by .

Definition 1.23 Pushdown automaton accepts (recognizes) word by empty stack if there exists a sequence of configurations of for which the following are true:

the first element of the sequence is ,

one can go from each element of the sequence to the next element,

the last element of the sequence is and is an arbitrary state.

Therefore pushdown automaton accepts a word by empty stack if for some . The set of words accepted by empty stack by pushdown automaton will be called the language accepted by empty stack by and will be denoted by .
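
Acceptance by empty stack can be tested by exploring the configurations breadth-first, exactly as in the definition. The Python sketch below makes several assumptions of its own: delta maps (state, input letter or None for an epsilon-move, stack top) to a set of pairs (new state, pushed word); the stack is a string whose last character is the top symbol; and a step bound guards against infinite epsilon-loops, so for some automata the bound may have to be raised.

from collections import deque

def accepts_by_empty_stack(delta, q0, Z0, word, limit=100000):
    todo = deque([(q0, word, Z0)])
    steps = 0
    while todo and steps < limit:
        steps += 1
        p, u, stack = todo.popleft()
        if not stack:                         # empty stack: accept only if
            if not u:                         # the whole input has been read
                return True
            continue
        top = stack[-1]
        for q, w in delta.get((p, None, top), set()):     # epsilon-moves
            todo.append((q, u, stack[:-1] + w))
        if u:                                             # read one letter
            for q, w in delta.get((p, u[0], top), set()):
                todo.append((q, u[1:], stack[:-1] + w))
    return False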

Example 1.26 Pushdown automaton of Example 1.25 accepts the language by final state. Consider the derivation for words and .

Word is accepted by the considered pushdown automaton because

and because is a final state the pushdown automaton accepts this word. But the stack being empty, it accepts this word also by empty stack.

Because the initial state is also a final state, the empty word is accepted by final state, but not by empty stack.

To show that word is not accepted, we need to study all possibilities. It is easy to see that in our case there is only a single possibility:

, but the computation cannot be continued, so word is not accepted.

Example 1.27 The transition table of the pushdown automaton is:

Figure 1.33.  Transition graph of the Example 1.27



The corresponding transition graph can be seen in Fig. 1.33. Pushdown automaton accepts the language . Because is nondeterministic, all the configurations obtained from the initial configuration can be illustrated by a computation tree. For example, the computation tree associated to the initial configuration can be seen in Fig. 1.34. From this computation tree we can observe that, because is a leaf of the tree, pushdown automaton accepts word 1001 by empty stack. The computation tree in Fig. 1.35 shows that pushdown automaton does not accept word , because the configurations in the leaves cannot be continued and none of them has the form .

Figure 1.34.  Computation tree to show acceptance of the word 1001 (see Example 1.27).

Figure 1.35.  Computation tree to show that the pushdown automaton in Example 1.27 does not accept the word 101.

Theorem 1.24 A language is accepted by a nondeterministic pushdown automaton by empty stack if and only if it can be accepted by a nondeterministic pushdown automaton by final state.

Proof. a) Let

be the pushdown automaton which accepts by empty stack language . Define pushdown automaton , where and

.

Working of : Pushdown automaton with an -move first goes to the initial state of , writing (the initial stack symbol of ) into the stack (beside ). After this it works as . If for a given word empties its stack, then still has in the stack, which can be deleted by an -move while reaching a final state. can reach a final state only if has emptied the stack.

b) Let be a pushdown automaton, which accepts language by final state. Define pushdown automaton , where , and

.

Working of : Pushdown automaton with an -move writes into the stack beside the initial stack symbol of , then works as , i.e. it reaches a final state for each accepted word. After this it empties the stack by an -move. can empty the stack only if goes to a final state.

The next two theorems prove that the class of languages accepted by nondeterministic pushdown automata is just the set of context-free languages.

Theorem 1.25 If is a context-free grammar, then there exists a nondeterministic pushdown automaton which accepts by empty stack, i.e. .

We outline the proof only. Let be a context-free grammar. Define pushdown automaton , where , and the set of transitions is:

If there is in the set of productions of a production of type , then put in the transition ,

For any letter put in the transition . If there is a production in , the pushdown automaton puts into the stack the mirror of with an -move. If the input letter coincides with the letter on the top of the stack, then the automaton deletes it from the stack. If on the top of the stack there is a nonterminal , then the mirror of the right-hand side of a production which has on its left-hand side will be put into the stack. If, after reading all letters of the input word, the stack is empty, then the pushdown automaton has recognised the input word.

The following algorithm builds for a context-free grammar the pushdown automaton , which accepts by empty stack the language generated by .

From-Cfg-to-Pushdown-Automaton()

  1  FOR all production  
          DO put in  the transition 
  2  FOR all terminal   
          DO put in  the transition 
  3  RETURN   

If has productions and terminals, then the number of steps of the algorithm is .
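
The construction of the algorithm can be sketched in a few lines of Python. The sketch below (hypothetical names; it reuses the pda_accepts function and its transition format from the earlier sketch, in which the first character of the pushed string becomes the new stack top, so the right-hand side of a rule is pushed as it is) builds the single-state automaton from a context-free grammar.

  def cfg_to_pda(productions, terminals, start_symbol):
      """Build the transitions of a one-state PDA accepting L(G) by empty stack.

      productions: list of (left_nonterminal, right_side_string) pairs.
      A rule A -> alpha becomes an epsilon-move replacing A on the stack top by
      alpha; a terminal on the top matching the input letter is popped.
      (For grammars with left-recursive productions the naive search of
      pda_accepts may not terminate.)
      """
      q = "q"
      delta = {}
      for left, right in productions:
          delta.setdefault((q, "", left), set()).add((q, right))
      for a in terminals:
          delta.setdefault((q, a, a), set()).add((q, ""))
      return delta, q, start_symbol

  # Hypothetical grammar S -> aSb | ab, generating a^n b^n (n >= 1):
  delta, q0, Z0 = cfg_to_pda([("S", "aSb"), ("S", "ab")], {"a", "b"}, "S")
  print(pda_accepts("aaabbb", delta, q0, Z0))   # True, accepted by empty stack
  print(pda_accepts("aabbb", delta, q0, Z0))    # False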

Example 1.28 Let . Then , with the following transition table.

Let us see how pushdown automaton accepts word , which in grammar can be derived in the following way:

where productions and were used. Word is accepted by empty stack (see Fig. 1.36).

Figure 1.36.  Recognising a word by empty stack (see Example 1.28).


Theorem 1.26 For a nondeterministic pushdown automaton there always exists a context-free grammar such that accepts language by empty stack, i.e. .

Instead of a proof we will give a method to obtain grammar . Let be the nondeterministic pushdown automaton in question.

Then , where

and .

Productions in will be obtained as follows.

For every state put in the production .

If , where , () and , put in for all possible states productions

.

If , where , and , put in production

.

The context-free grammar defined in this way is an extended one, to which an equivalent context-free grammar can be associated. The proof of the theorem is based on the fact that to every sequence of configurations by which the pushdown automaton accepts a word we can associate a derivation in grammar . This derivation generates just the word in question, because of the productions of the form , which were defined for all possible states . Using Example 1.27 we show how a derivation can be associated to a sequence of configurations. The pushdown automaton defined in the example recognizes word 00 by the sequence of configurations

,

which sequence is based on the transitions

,

,

.

To these transitions, by the definition of grammar , the following productions can be associated

(1) for all states ,

(2) ,

(3) .

Furthermore, for each state productions were defined.

By the existence of production there exists the derivation , where can be chosen arbitrarily. Let us choose in the above production (1) the state to be equal to . Then there exists also the derivation

,

where can be chosen arbitrarily. If , then the derivation

will result. Now let be equal to , then

,

which proves that word 00 can be derived using the above grammar.

The next algorithm builds for a pushdown automaton a context-free grammar , which generates the language accepted by pushdown automaton by empty stack.

From-Pushdown-Automaton-to-Cf-Grammar()

  1  FOR all  
  2    DO put in  production  
  3  FOR all         ,   
       (), 
  4       DO FOR all states  
  5          DO put in  productions  
  6  FOR all         ,   
  7       DO put in  production   

If the automaton has states and productions, then the above algorithm executes at most steps, so in the worst case the number of steps is . Finally, without proof, we mention that the class of languages accepted by deterministic pushdown automata is a proper subset of the class of languages accepted by nondeterministic pushdown automata. This points to the fact that pushdown automata behave differently from finite automata.
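
The construction behind Theorem 1.26 and the above algorithm is the standard triple construction. The following sketch is our own illustration (the representation of the composite nonterminals as Python tuples, and all names, are hypothetical): a nonterminal (p, Z, q) stands for the inputs that take the automaton from state p with Z on top of the stack to state q while removing Z.

  from itertools import product

  def pda_to_cfg(states, transitions, start_state, start_stack):
      """Triple construction: returns productions as (left, right) pairs, where
      left is 'S' or a triple (p, Z, q) and right is a tuple of terminals and
      triples. transitions has the same format as in the earlier sketches:
      (state, input_or_'', stack_top) -> set of (next_state, pushed_string)."""
      prods = []
      for q in states:                                  # S -> (q0, Z0, q) for every state q
          prods.append(("S", ((start_state, start_stack, q),)))
      for (p, a, Z), moves in transitions.items():
          for (r, pushed) in moves:
              head = (a,) if a else ()                  # the letter read, if any
              if pushed == "":                          # popping move: (p, Z, r) -> a
                  prods.append(((p, Z, r), head))
              else:                                     # one triple per pushed symbol, the
                  k = len(pushed)                       # intermediate states chosen freely
                  for mids in product(states, repeat=k):
                      body = tuple((prev, Y, nxt) for prev, Y, nxt
                                   in zip((r,) + mids[:-1], pushed, mids))
                      prods.append(((p, Z, mids[-1]), head + body))
      return prods

  # Applied to the one-state automaton of the grammar S -> aSb | ab:
  delta = {("q", "", "S"): {("q", "aSb"), ("q", "ab")},
           ("q", "a", "a"): {("q", "")}, ("q", "b", "b"): {("q", "")}}
  for left, right in pda_to_cfg({"q"}, delta, "q", "S")[:5]:
      print(left, "->", right)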

Example 1.29 As an example, consider the pushdown automaton from Example 1.28: . Grammar is:

where for all , instead of we used for short. The transitions:

Based on these, the following productions are defined:

.

It is easy to see that can be eliminated, and the productions will be:

,

,

, ,

and these productions can be replaced by:

,

.

1.3.2 Context-free languages

Consider a context-free grammar . A derivation tree of is a finite, ordered, labelled tree whose root is labelled by the start symbol , every interior vertex is labelled by a nonterminal and every leaf by a terminal. If an interior vertex labelled by a nonterminal has descendants, then there exists in a production such that the descendants are labelled by the letters , , . The result of a derivation tree is a word over , which can be obtained by reading the labels of the leaves from left to right. A derivation tree is also called a syntax tree.

Consider the context-free grammar . It generates language . Derivation of word is:

In Fig. 1.37 the corresponding derivation tree can be seen, whose result is .

Figure 1.37.  Derivation (or syntax) tree of the word aaaabb.


To every derivation we can associate a syntax tree. Conversely, to a syntax tree more than one derivation can be associated. For example, to the syntax tree in Fig. 1.37 the derivation

also can be associated.

Definition 1.27 Derivation is a leftmost derivation, if for all there exist words , and productions , for which we have

Consider grammar:

.

In this grammar word has two different leftmost derivations:

,

.

Definition 1.28 A context-free grammar is ambiguous if in there exists a word with more than one leftmost derivation. Otherwise is unambiguous.

The above grammar is ambiguous, because word has two different leftmost derivations. A language can be generated by more than one grammar, and among them there can be both ambiguous and unambiguous ones. A context-free language is inherently ambiguous if there is no unambiguous grammar which generates it.

Example 1.30 Examine the following two grammars.

Grammar is ambiguous because

and

.

Grammar is unambiguous.

It can be proved that .

1.3.3 Pumping lemma for context-free languages

As for regular languages, there exists a pumping lemma for context-free languages as well.

Theorem 1.29 (pumping lemma) For any context-free language there exists a natural number (which depends only on ), such that every word of the language longer than can be written in the form and the following are true:

(1) ,

(2) ,

(3) ,

(4) is also in for all .

Figure 1.38.  Decomposition of tree in the proof of pumping lemma.


Proof. Let be a grammar without unit productions, which generates language . Let be the number of nonterminals, and let be the maximum of the lengths of right-hand sides of productions, i.e. . Let and , such that . Then there exists a derivation tree with the result . Let be the height of (the maximum of the path lengths from the root to the leaves). Because in all interior vertices have at most descendants, has at most leaves, i.e. . On the other hand, because of , we get that . From this it follows that in derivation tree there is a path from the root to a leaf on which there are more than vertices. Consider such a path. Because the number of nonterminals in is and on this path the vertices different from the leaf are labelled with nonterminals, by the pigeonhole principle there must be a nonterminal on this path which occurs at least twice.

Let us denote by the nonterminal which, on this path from the root to the leaf, is the first to repeat. Denote by the subtree whose root is this occurrence of . Similarly, denote by the subtree whose root is the second occurrence of on this path. Let be the result of the tree . Then the result of is of the form , while that of is of the form . Derivation tree with this decomposition of can be seen in Fig. 1.38. We show that this decomposition of satisfies conditions (1)–(4) of the lemma. Because in there are no -productions (except maybe the case ), we have . Furthermore, because each interior vertex of the derivation tree has at least two descendants (since there are no unit productions), so does the root of , hence . Because is the first repeated nonterminal on this path, the height of is at most , and from this follows.

After eliminating from all vertices of except the root, the result of the obtained tree is , i.e. .

Similarly, after eliminating we get , and finally, because of the definition of , we get . Then . Therefore and for all . Therefore, for all we have , i.e. for all .

Now we present two consequences of the lemma.

Corollary 1.30 .

Proof. This consequence states that there exists a context-sensitive language which is not context-free. To prove this it is sufficient to find a context-sensitive language for which the lemma is not true. Let this language be .

To show that this language is context-sensitive it is enough to give a suitable grammar. In Example 1.2 both grammars are extended context-sensitive, and we know that to each extended grammar of type an equivalent grammar of the same type can be associated.

Let be the natural number associated to by the lemma, and consider the word . Because of , if is context-free then can be decomposed as such that conditions (1)–(4) are true. We show that this leads us to a contradiction.

Firstly, we show that the words and can each contain only one type of letter. Indeed, if either or contained more than one type of letter, then in the word the letters would not appear in the order , so , which contradicts condition (4) of the lemma.

If both and contain at most one type of letter, then in the word the numbers of the different letters will not all be the same, so . This also contradicts condition (4) of the lemma. Therefore is not context-free.
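
The argument can be checked mechanically for a concrete word. The brute-force sketch below (an illustration for one fixed n, not a proof; all names are hypothetical) tries every decomposition allowed by conditions (1)–(3) and verifies that pumping always leads out of the language a^n b^n c^n.

  def in_L(word):
      """Membership in { a^n b^n c^n : n >= 1 }."""
      n = len(word) // 3
      return n >= 1 and word == "a" * n + "b" * n + "c" * n

  def pumpable(z, n):
      """Is there a decomposition z = u v w x y with |vwx| <= n and |vx| >= 1
      such that u v^i w x^i y stays in the language for i = 0, 1, 2?"""
      for i in range(len(z) + 1):                          # start of vwx
          for j in range(i, min(i + n, len(z)) + 1):       # end of vwx
              u, vwx, y = z[:i], z[i:j], z[j:]
              for p in range(len(vwx) + 1):
                  for q in range(p, len(vwx) + 1):
                      v, w, x = vwx[:p], vwx[p:q], vwx[q:]
                      if len(v) + len(x) >= 1 and all(
                              in_L(u + v * k + w + x * k + y) for k in (0, 1, 2)):
                          return True
      return False

  n = 5                      # playing the role of the constant of the lemma
  z = "a" * n + "b" * n + "c" * n
  print(pumpable(z, n))      # False: no decomposition survives pumping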

Corollary 1.31 The class of context-free languages is not closed under the intersection.

Proof. We give two context-free languages whose intersection is not context-free. Let , and

where :

,

,

,

and , where :

,

,

.

Languages and are context-free. But

is not context-free (see the proof of Corollary 1.30).

1.3.4 Normal forms of the context-free languages

In the case of arbitrary grammars the normal form was defined (see Section 1.1) as grammars with no terminals in the left-hand side of productions. The normal forms in the case of context-free languages contain some restrictions on the right-hand sides of productions. Two normal forms (Chomsky and Greibach) will be discussed.

Chomsky normal form

Definition 1.32 A context-free grammar is in Chomsky normal form, if all productions have form or , where , .

Example 1.31 Grammar is in Chomsky normal form and .

To each -free context-free language an equivalent grammar in Chomsky normal form can be associated. The next algorithm transforms an -free context-free grammar into a grammar which is in Chomsky normal form.

Chomsky-Normal-Form()

  1   
  2  eliminate unit productions, and let  the new set of productions 
       (see algorithm ELIMINATE-UNIT-PRODUCTIONS in Section 1.1)
  3  in  replace in each production with at least two letters in right-hand side 
       all terminals  by a new nonterminal , and add this nonterminal to 
       and add production  to 
  4  replace all productions , where  and , 
       by the following:
        ,
        ,
        
        ,
        ,
       where  are new nonterminals, and add them to 
  5  RETURN  
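
Steps 3 and 4 of the algorithm are purely mechanical and can be sketched as follows (a simplified illustration with hypothetical names; it assumes that steps 1–2, the elimination of unit productions, have already been carried out, and it encodes terminals as lowercase and nonterminals as uppercase strings).

  def cnf_steps_3_4(productions):
      """Replace terminals in long right-hand sides by new nonterminals (step 3)
      and break right-hand sides longer than two into chains (step 4).

      productions: list of (nonterminal, right_side) with right_side a tuple of
      symbols. Returns the new production list."""
      new_prods, term_nt = [], {}

      def nt_for(a):                            # step 3: fresh nonterminal X_a -> a
          if a not in term_nt:
              term_nt[a] = "X_" + a
              new_prods.append((term_nt[a], (a,)))
          return term_nt[a]

      counter = 0
      for left, right in productions:
          if len(right) == 1:                   # A -> a stays as it is
              new_prods.append((left, right))
              continue
          right = tuple(s if s.isupper() else nt_for(s) for s in right)
          while len(right) > 2:                 # step 4: a long right-hand side is split
              counter += 1
              fresh = "Y_" + str(counter)
              new_prods.append((left, (right[0], fresh)))
              left, right = fresh, right[1:]
          new_prods.append((left, right))
      return new_prods

  # Hypothetical unit- and epsilon-free grammar S -> aSb | ab:
  for left, right in cnf_steps_3_4([("S", ("a", "S", "b")), ("S", ("a", "b"))]):
      print(left, "->", " ".join(right))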

Example 1.32 Let . It is easy to see that . Steps of transformation to Chomsky normal form are the following:

Step 1:

Step 2: After eliminating the unit production the productions are:

,

.

Step 3: We introduce three new nonterminals because of the three terminals in the productions. Let these be . Then the productions are:

,

,

,

.

Step 4: Only one new nonterminal (let it be ) must be introduced because of a single production with three letters in the right-hand side. Therefore , and the productions in are:

,

,

,

,

.

All these productions are in required form.

Greibach normal form

Definition 1.33 A context-free grammar is in Greibach normal form if all productions are of the form , where , , .

Example 1.33 Grammar is in Greibach normal form and .

To each -free context-free grammar an equivalent grammar in Greibach normal form can be given. We give an algorithm which transforms a context-free grammar in Chomsky normal form into a grammar in Greibach normal form.

First, we give an order of the nonterminals: , where is the start symbol. The algorithm will use the notations , .

Greibach-Normal-Form()

  1   
  2   
  3  FOR  TO        Case  
  4    DO FOR  TO  
  5       DO for all productions  and  (where  has no  
                as first letter) in  productions ,
                delete from  productions 
  6    IF there is a production        Case  
  7       THEN put in  the new nonterminal , 
                for all productions  put in  productions 
                and , delete from  production ,
                 for all productions  (where  is not the first letter of )
                put in  production 
  8  FOR  DOWNTO        Case  
  9    DO FOR  TO  
 10       DO for all productions  and  
                put in  production  and
                delete from  productions ,
 11  FOR  TO        Case  
 12    DO FOR  TO  
 13       DO for all productions  and  
                put in  production  and
                delete from  productions 
 14  RETURN  

The algorithm first transforms productions of the form such that or , where the latter is in Greibach normal form. After this, introducing a new nonterminal, it eliminates productions , and using substitutions all productions of the form and will be transformed into Greibach normal form.
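
The heart of the step in lines 6–7 is the elimination of direct left recursion. The following sketch (our own illustration, with the new nonterminal named by appending a prime; it handles one nonterminal and only direct left recursion) shows the rewriting of A -> A alpha | beta into A -> beta | beta Z and Z -> alpha | alpha Z.

  def eliminate_direct_left_recursion(nonterminal, productions):
      """Rewrite the productions of one nonterminal so that none of them starts
      with the nonterminal itself. productions: list of (left, right) pairs,
      right being a tuple of symbols. Returns the new production list."""
      z = nonterminal + "'"
      recursive = [r[1:] for l, r in productions
                   if l == nonterminal and r and r[0] == nonterminal]
      others = [r for l, r in productions
                if l == nonterminal and (not r or r[0] != nonterminal)]
      rest = [(l, r) for l, r in productions if l != nonterminal]
      if not recursive:
          return productions
      new = rest
      for beta in others:
          new += [(nonterminal, beta), (nonterminal, beta + (z,))]
      for alpha in recursive:
          new += [(z, alpha), (z, alpha + (z,))]
      return new

  # Hypothetical example: A -> A a | b  becomes  A -> b | b A',  A' -> a | a A'.
  for left, right in eliminate_direct_left_recursion("A", [("A", ("A", "a")), ("A", ("b",))]):
      print(left, "->", " ".join(right))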

Example 1.34 Transform productions in Chomsky normal form

into Greibach normal form.

Steps of the algorithm:

3–5: Production must be transformed. For this production is appropriate. Put in the set of productions and eliminate .

The productions will be:

6–7: Elimination of production will be made using productions:

Then, after steps 6–7, the productions will be:

8–10: We make substitutions in productions with in the left-hand side. The result is:

11–13: Similarly with productions with in left-hand side:

After the elimination in steps 8–13 of productions in which substitutions were made, the following productions, which are now in Greibach normal form, result:

Example 1.35 Language

can be generated by grammar

First, we will eliminate the single unit production, and after this we will give an equivalent grammar in Chomsky normal form, which will then be transformed into Greibach normal form.

Productions after the elimination of production :

.

We introduce productions , and replace terminals by the corresponding nonterminals:

,

,

.

After introducing two new nonterminals (, ):

,

,

,

,

.

This is now in Chomsky normal form. Rename the nonterminals as in the algorithm. Then, after applying the replacements

replaced by , replaced by , replaced by , replaced by , replaced by , replaced by , replaced by ,

our grammar will have the productions:

,

,

,

,

.

In steps 3–5 of the algorithm the new productions will occur:

then

, then

.

Therefore

,

,

,

.

Steps 6–7 will be skipped, because we have no left-recursive productions. In steps 8–10 after the appropriate substitutions we have:

,

,

,

,

,

.

Exercises

1.3-1 Give pushdown automata to accept the following languages:

,

,

.

1.3-2 Give a context-free grammar to generate language , and transform it into Chomsky and Greibach normal forms. Give a pushdown automaton which accepts .

1.3-3 What languages are generated by the following context-free grammars?

.

1.3-4 Give a context-free grammar to generate words with an equal number of letters and .

1.3-5 Prove, using the pumping lemma, that a language whose words contain an equal number of letters , and cannot be context-free.

1.3-6 Let the grammar , where

,

,

,

Show that word if a then if a then c else c has two different leftmost derivations.

1.3-7 Prove that if is context-free, then is also context-free.

 PROBLEMS 

1-1 Linear grammars

A grammar which has productions only of the form or , where , is called a linear grammar. If in a linear grammar all productions are of the form or , then it is called a left-linear grammar. Prove that the language generated by a left-linear grammar is regular.

1-2 Operator grammars

An -free context-free grammar is called an operator grammar if in the right-hand sides of productions there are no two successive nonterminals. Show that, for every -free context-free grammar, an equivalent operator grammar can be built.

1-3 Complement of context-free languages

Prove that the class of context-free languages is not closed under complement.

 CHAPTER NOTES 

In the definition of finite automata, instead of the transition function we have used the transition graph, which in many cases helps us to give simpler proofs.

There exist a lot of classical books on automata and formal languages. We mention the following: two books of Aho and Ullman [5], [6] from 1972 and 1973, the book of Gécseg and Peák [87] from 1972, two books of Salomaa [221], [222] from 1969 and 1973, a book of Hopcroft and Ullman [118] from 1979, a book of Harrison [108] from 1978, and a book of Manna [174], which in 1981 was published also in Hungarian. We mention also a book of Sipser [242] from 1997 and a monograph of Rozenberg and Salomaa [220]. In a book of Lothaire (a common name of French authors) [166] on the combinatorics of words we can read about other types of automata. The paper of Giammarresi and Montalbano [89] generalises the notion of finite automata. A newer monograph is that of Hopcroft, Motwani and Ullman [117]. In German we recommend the textbook of Asteroth and Baier [14]. The concise description of the transformation into Greibach normal form is based on this book.

A practical introduction to formal languages is written by Webber [270].

Other books in English: [32], [40], [68], [142], [149], [159], [164], [165], [177], [185], [238], [240], [251], [252].

At the end of the next chapter, on compilers, other books on the subject are mentioned.

Chapter 2. Compilers

When a programmer writes down a solution of her problem, she writes a program in a programming language. These programming languages are very different from the proper languages of computers, the machine languages. Therefore we have to produce the executable forms of the programs created by the programmer. We need a software or hardware tool that translates the source language program – written in a high-level programming language – into the target language program, written in a lower-level programming language, mostly into a machine code program.

There are two fundamental methods to execute a program written in a higher-level language. The first is using an interpreter. In this case, the generated machine code is not saved but executed immediately. The interpreter can be considered a special computer whose machine code is the high-level language. Essentially, when we use an interpreter, we create a two-level machine; its lower level is the real computer, on which the higher level, the interpreter, is built. The higher level is usually realized by a computer program, but, for some programming languages, there are special hardware interpreter machines.

The second method is using a compiler program. The difference between this method and the first one is that here the result of the translation is not executed, but it is saved in an intermediate file called the target program.

The target program may be executed later, and the result of the program is received only then. In this case, in contrast with interpreters, the times of translation and execution are distinguishable.

With respect to translation the two methods are identical, since the interpreter and the compiler both generate target programs. For this reason we speak about compilers only. We will deal with these translator programs, called compilers (Figure 2.1).

Figure 2.1. The translator.



Our task is to study the algorithms of compilers. This chapter deals with the translators of high-level imperative programming languages; the translation methods of logic or functional languages will not be investigated.

First the structure of compilers will be given. Then we will deal with scanners, that is, lexical analysers. In the topic of parsers – syntactic analysers – the two most successful methods will be studied: the LL and the LALR parsing methods. The advanced methods of semantic analysis use O-ATG grammars, and the task of code generation is also described by this type of grammars. In this book these topics are not considered, nor will we study such important and interesting problems as symbol table handling, error repairing or code optimising. The reader can find modern and efficient methods for these problems in the bibliography.

2.1 The structure of compilers

A compiler translates the source language program (in short, source program) into a target language program (in short, target program). Moreover, it creates a list by which the programmer can check her own program. This list contains the detected errors, too.

Using the notation program (input)(output) the compiler can be written by

 compiler (source program)(target program, list). 

In what follows, the structure of compilers is studied, and the tasks of the program elements are described, using the previous notation.

The first program of a compiler transforms the source language program into a character stream that is easy to handle. This program is the source handler.

 source handler(source program)(character stream). 

The form of the source program depends on the operating system. The source handler reads the file of the source program using the operating system, and omits the characters marking the ends of lines, since these characters have no importance in the next steps of compilation. This modified, “filtered” character stream will be the input data of the next steps.

The list created by the compiler has to contain the original source language program written by the programmer, instead of this modified character stream. Hence we define a list handler program,

 list handler (source program, errors)(list), 

which creates the list according to the file form of the operating system, and puts this list on a secondary memory.

It is practical to join the source handler and the list handler programs, since they have the same input files. This program is the source handler.

 source handler (source program, errors)(character stream, list). 

The target program is created by the compiler from the generated target code. It is placed on secondary memory, too, usually in a transferable binary form. Of course this form depends on the operating system. This task is done by the code handler program.

 code handler (target code)(target program).  

Using the above programs, the structure of a compiler is the following (Figure 2.2):

source handler (source program, errors)(character stream, list),

compiler (character stream)(target code, errors),

code handler (target code)(target program).

Figure 2.2.  The structure of compilers.



This decomposition is not a sequence: the three program elements are not executed sequentially. The decomposition consists of three independent working units. Their connections are indicated by their inputs and outputs.

In what follows we do not deal with the handlers because of their dependence on computers, operating systems and peripherals – although the outer form, the connection with the user and the availability of the compiler are determined mainly by these programs. The task of the program compiler is the translation. It consists of two main subtasks: analysing the input character stream, and synthesising the target code. The first problem of the analysis is to determine the groups of characters that belong together in the character stream. These are the symbolic items, e.g., the constants, names of variables, keywords, operators. This is done by the lexical analyser, in short, the scanner.

From the character stream the scanner makes a series of symbols and during this task it detects lexical errors.

 scanner (character stream)(series of symbols, lexical errors). 

This series of symbols is the input of the syntactic analyser, in short, the parser. Its task is to check the syntactic structure of the program. This process is similar to the checking of the verb, the subject, predicates and attributes of a sentence by a language teacher in a language lesson. The errors detected during this analysis are the syntactic errors. The result of the syntactic analysis is the syntax tree of the program, or some similar equivalent structure.

 parser (series of symbols)(syntactically analysed program, syntactic errors). 

The third program of the analysis is the semantic analyser. Its task is to check the static semantics. For example, when the semantic analyser checks declarations and the types of variables in the expression a + b, it verifies whether the variables a and b are declared, whether they are of the same type, and whether they have values. The errors detected by this program are the semantic errors.

 semantic analyser (syntactically analysed program)(analysed program, semantic errors). 

The output of the semantic analyser is the input of the programs of the synthesis. The first step of the synthesis is the code generation, which is done by the code generator:

 code generator (analysed program)(target code). 

The target code usually depends on the computer and the operating system. It is usually an assembly language program or machine code. The next step of synthesis is the code optimisation:

 code optimiser (target code)(target code). 

The code optimiser transforms the target code in such a way that the new code is better in many respects, for example in running time or size.

Figure 2.3.  The programs of the analysis and the synthesis.



As follows from the considerations above, a compiler consists of the following components (the structure of the compiler program is shown in Figure 2.3):

source handler (source program, errors)(character stream, list),

scanner (character stream)(series of symbols, lexical errors),

parser (series of symbols)(syntactically analysed program, syntactic errors),

semantic analyser (syntactically analysed program)(analysed program, semantic errors),

code generator (analysed program)(target code),

code optimiser (target code)(target code),

code handler (target code)(target program).

The algorithm of the part of the compiler that performs analysis and synthesis is the following:

Compiler

  1  determine the symbolic items in the text of source program 
  2  check the syntactic correctness of the series of symbols 
  3  check the semantic correctness of the series of symbols 
  4  generate the target code 
  5  optimise the target code 

The tasks described in the first two points will be analysed in the next sections.
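
The connections listed above can be illustrated by a toy pipeline. In the sketch below every component is only a stand-in of a few characters (all of them hypothetical, they do no real analysis); the point is merely that each stage consumes the output of the previous one and contributes its own error list.

  def scanner(chars):            return chars.split(), []          # symbols, lexical errors
  def parser(symbols):           return ("prog", symbols), []      # syntax tree, syntactic errors
  def semantic_analyser(tree):   return tree, []                   # analysed program, semantic errors
  def code_generator(program):   return [("PUSH", s) for s in program[1]]
  def code_optimiser(code):      return code                       # identity "optimisation"

  def compiler(character_stream):
      symbols, lex_errors   = scanner(character_stream)
      tree, syn_errors      = parser(symbols)
      analysed, sem_errors  = semantic_analyser(tree)
      target_code = code_optimiser(code_generator(analysed))
      return target_code, lex_errors + syn_errors + sem_errors

  print(compiler("a + b"))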

Exercises

2.1-1 Using the above notations, give the structure of interpreters.

2.1-2 Take a programming language, and write program fragments in which there are lexical, syntactic and semantic errors.

2.1-3 Give aspects in which the code optimiser can create better target code than the original.

2.2 Lexical analysis

The source handler transforms the source program into a character stream. The main task of the lexical analyser (scanner) is recognising the symbolic units in this character stream. These symbolic units are named symbols.

Unfortunately, in different programming languages the same symbolic units consist of different character streams, and different symbolic units consist of the same character streams. For example, there is a programming language in which the character strings 1. and .10 denote real numbers. If we concatenate these symbols, then the result is the character stream 1..10. The fact that an algebraic operator is missing between the two numbers will be detected by the next analyser, which does the syntactic analysis. However, there are programming languages in which this character stream is decomposed into three components: 1 and 10 are the lower and upper limits of an interval type variable.

The lexical analyser determines not only the characters of a symbol, but also the attributes derived from the surrounding text. Such attributes are, e.g., the type and value of a symbol.

The scanner assigns codes to the symbols, the same code to the same sort of symbols. For example the code of all integer numbers is the same; another unique code is assigned to variables.

The lexical analyser transforms the character stream into a series of symbol codes, and the attributes of a symbol are written into this series immediately after the code of the symbol concerned.

The output information of the lexical analyser is not “readable”: it is usually a series of binary codes. We note that, from the viewpoint of the compiler, from this step of the compilation on it does not matter from which characters the symbol was made, i.e. whether the code of the if symbol was made from the English if, the Hungarian ha or the German wenn characters. Therefore, for a programming language using English keywords, it is easy to construct another programming language using keywords of another language. In the compiler of this new programming language only the lexical analyser has to be modified; the other parts of the compiler are unchanged.

2.2.1 The automaton of the scanner

The exact definition of the symbolic units can be given by regular grammars, regular expressions or deterministic finite automata. The theories of regular grammars, regular expressions and deterministic finite automata were studied in previous chapters.

In practice the lexical analyser may be a part of the syntactic analyser. The main reason to distinguish these analysers is that a lexical analyser made from a regular grammar is much simpler than a lexical analyser made from a context-free grammar. Context-free grammars are used to create syntactic analysers.

One of the most popular methods to create the lexical analyser is the following:

  1. describe symbolic units in the language of regular expressions, and from this information construct the deterministic finite automaton which is equivalent to these regular expressions,

  2. implement this deterministic finite automaton.

We note that regular expressions are used for describing symbols because they are more convenient and readable than regular grammars. There are standard programs, such as the lex of UNIX systems, that generate a complete lexical analyser from regular expressions. Moreover, there are generator programs that give the automaton of the scanner, too.

A very simple implementation of the deterministic finite automaton uses multiway CASE instructions. The conditions of the branches are the characters of the state transitions, and the instructions of a branch represent the new state the automaton reaches when it carries out the given state transition.

The main principle of the lexical analyser is building a symbol from the longest possible series of characters. For example the string ABC is one three-letter symbol, rather than three one-letter symbols. This means that the alternative instructions of the CASE branch read characters as long as they are parts of the symbol being constructed.

Functions can belong to the final states of the automaton. For example, the function converts constant symbols into the inner binary form of constants, or the function writes identifiers into the symbol table.

The input stream of the lexical analyser contains tabulators and space characters, since the source handler expunges only the carriage return and line feed characters. In most programming languages it is possible to write a lot of spaces or tabulators between symbols. From the point of view of compilers these characters have no importance after their recognition, hence they are called white spaces.

Expunging white spaces is the task of the lexical analyser. The description of the white space is the following regular expression:

where space and the tab tabulator are the characters which build the white space symbols and is the symbol of the or function. No actions have to be performed with these white space symbols; the scanner does not pass them to the syntactic analyser.

Some examples of regular expressions:

Example 2.1 Introduce the following notations: Let be an arbitrary digit, and let be an arbitrary letter,

the non-visible characters are denoted by their short names, and let be the name of the empty character stream. denotes a character distinct from . The regular expressions are:

  1. real number: ,

  2. positive integer and real number: ,

  3. identifier: ,

  4. comment: ,

  5. comment terminated by : ,

  6. string of characters: .

Figure 2.4.  The positive integer and real number.



Deterministic finite automata constructed from regular expressions 2 and 3 are in Figures 2.4 and 2.5.

Figure 2.5.  The identifier.



The task of the lexical analyser is to determine the text of symbols, but not all the characters of a regular expression belong to the symbol. As in the 6th example, the first and the last " characters do not belong to the symbol. To solve this problem, a buffer is created for the scanner. After recognising a symbol, the characters of this symbol will be in the buffer. Now the deterministic finite automaton is supplemented by a transfer function, where means that the character is inserted into the buffer.

Figure 2.6.  A comment.


Example 2.2 The 4th and 6th regular expressions of Example 2.1 are supplemented by the function; the automata for these expressions are in Figures 2.6 and 2.7. The automaton of the 4th regular expression has no function, since it recognises comments. The automaton of the 6th regular expression recognises This is a “string” from the character string ”This is a “”string”””.

Figure 2.7.  The character string.



Now we write the algorithm of the lexical analyser given by a deterministic finite automaton. (A state which is a set with one element will be denoted by the only element of the set.)

Let be the deterministic finite automaton which is the scanner. We augment the alphabet with a new notation: let others denote all the characters not in . Accordingly, we modify the transition function :

The algorithm of parsing, using the augmented automaton , follows:

Lex-analyse()

  1  ,  first character of  
  2   
  3  WHILE  and  
  4    DO IF  
  5       THEN  
  6           next character of  
  7       ELSE  
  8  IF  and  
  9    THEN  
 10  ELSE  
 11  RETURN  

The algorithm has two parameters: the first one is the input character string terminated by , the second one is the automaton of the scanner. In line 1 the state of the scanner is set to , the start state of the automaton, and the first character of the input string is determined. The variable indicates that the algorithm is analysing the input string; the value analysing is assigned to this variable in line 2. In line 5 a state transition is executed. It can be seen that the above augmentation is needed for termination in the case of an unexpected, invalid character. In lines 8–10 O.K. means that the analysed character string is correct, and ERROR signals that a lexical error was detected. In the case of successful termination the variable contains the character, at erroneous termination it contains the invalid character.

We note that the algorithm LEX-ANALYSE recognises only one symbol, and then it terminates. The program written in a programming language consists of a lot of symbols, hence after recognising a symbol, the algorithm has to be continued by detecting the next symbol. The work of the analyser is restarted at the state of the automaton. We propose the full algorithm of the lexical analyser as an exercise (see Problem 2-1).
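
A direct Python transcription of the algorithm may look as follows (a sketch with hypothetical names; the transition function is a dictionary, and a character with no entry plays the role of others).

  def lex_analyse(x, delta, q0, finals, end_mark="#"):
      """Recognise a single symbol with a deterministic finite automaton.

      x is assumed to end with the end mark. delta maps (state, character) to
      the next state. Returns ("O.K.", last_char) or ("ERROR", offending_char).
      """
      q, i = q0, 0
      a = x[i]
      while a != end_mark and (q, a) in delta:
          q = delta[(q, a)]
          i += 1
          a = x[i]
      if a == end_mark and q in finals:
          return "O.K.", a
      return "ERROR", a

  # The identifier automaton of Figure 2.5: a letter followed by letters or
  # digits; state 0 is the start state, state 1 the final state.
  import string
  delta = {(0, c): 1 for c in string.ascii_letters}
  delta.update({(1, c): 1 for c in string.ascii_letters + string.digits})
  print(lex_analyse("abc123#", delta, 0, {1}))    # ('O.K.', '#')
  print(lex_analyse("9abc#", delta, 0, {1}))      # ('ERROR', '9')
  print(lex_analyse("abc 123#", delta, 0, {1}))   # ('ERROR', ' ')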

Example 2.3 The automaton of the identifier in point 3 of Example 2.1 is in Figure 2.5. The start state is 0, and the final state is 1. The transition function of the automaton follows:

The augmented transition function of the automaton:

The algorithm LEX-ANALYSE gives the series of states and the sign O.K. to the input string abc123#, it gives the sign ERROR to the input string 9abc#, and the series and the sign ERROR to the input string abc 123.

2.2.2 Special problems

In this subsection we investigate the problems that emerge during the running of the lexical analyser, and supply solutions for them.

Keywords, standard words.

All programming languages allow identifiers having special names and predefined meanings. These are the keywords. Keywords may be used only in their original meanings. However, there are identifiers which also have a predefined meaning but which can be redefined in programs. These words are called standard words.

The number of keywords and standard words varies among programming languages. For example, there is a programming language in which three keywords are used for the zero value: zero, zeros and zeroes.

Now we investigate how the lexical analyser recognises keywords and standard words, and how it distinguishes them from the identifiers created by the programmers.

The usage of a standard word with a meaning different from its original one causes extra difficulty, not only in the compilation process but also in the readability of the program, as in the next example:

if if then else = then;

or if we declare procedures which have names begin and end:

       begin
          begin; begin end; end; begin end; 
       end;

Recognition of keywords and standard words is a simple task if they are written using special type characters (for example bold characters), or they are between special prefix and postfix characters (for example between apostrophes).

We give two methods to analyse keywords.

  1. All keywords are written as a regular expression, and the implementation of the automaton created for this expression is prepared. The disadvantage of this method is the size of the analyser program. It will be large even if the descriptions of keywords whose first letters are the same are contracted.

  2. Keywords are stored in a special keyword-table. The words can be recognised in the character stream by a general identifier recogniser. Then, by a simple search algorithm, we check whether this word is in the keyword-table. If this word is in the table then it is a keyword. Otherwise it is an identifier defined by the user. This method is very simple, but the efficiency of the search depends on the structure of the keyword-table and on the search algorithm. A well-selected mapping function and an adequate keyword-table can be very effective.

If it is possible to write standard words in the programming language, then the lexical analyser recognises the standard words using one of the above methods. But the meaning of such a standard word depends on its context. To decide whether it has its original meaning or it has been redefined by the programmer is the task of the syntactic analyser.
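
The second method above amounts to a membership test in a table. A minimal sketch (the keyword list and the function name are hypothetical) using a hash-based set, so that the test is a constant-time mapping on average:

  KEYWORDS = {"if", "then", "else", "begin", "end", "while", "do"}

  def classify(word):
      """Classify an already recognised identifier-like word."""
      return "keyword" if word in KEYWORDS else "identifier"

  print(classify("while"))   # keyword
  print(classify("total"))   # identifier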

Look ahead.

Since the lexical analyser creates a symbol from the longest possible character stream, it has to look ahead one or more characters to find the right end of a symbol. There is a classical example of this problem, the next two FORTRAN statements:

DO 10 I = 1.1000

DO 10 I = 1,1000

In the FORTRAN programming language space characters are not significant, hence the character between 1 and 1000 decides whether the statement is a DO loop statement or an assignment statement for the DO10I identifier.

To mark the right end of the symbol, we introduce the symbol / into the description of regular expressions. Its name is the lookahead operator. Using this symbol the description of the above DO keyword is the following:

This definition means that the lexical analyser recognises the first two letters D and O as the DO keyword if, looking ahead, after the O letter there are letters or digits, then there is an equal sign, after this sign there are letters or digits again, and finally there is a “,” character. The lookahead operator implies that the lexical analyser has to look ahead after the DO characters. We remark that using this lookahead method the lexical analyser recognises the DO keyword even if there is an error in the character stream, such as in the character stream DO2A=3B, but in a correct assignment statement it does not detect the DO keyword.

In the next example we are concerned with positive integers. The definition of integer numbers is a prefix of the definition of real numbers, and the definition of real numbers is a prefix of the definition of real numbers containing an explicit power-part.

The automaton for all three of these expressions is the automaton of the longest character stream, the real number containing an explicit power-part.

The problem of the lookahead symbols is resolved using the following algorithm. Put the character into a buffer, and put a piece of auxiliary information aside this character. This information is “it is invalid” if the character string read so far, including this character, is not correct; otherwise we put the type of the symbol here. If the automaton is in a final state, then the automaton recognises a real number with an explicit power-part. If the automaton is in an internal state and there is no possibility to read a next character, then the longest character stream which has valid information is the recognised symbol.
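
The buffering idea can be sketched as a "longest valid prefix" search: every read character is annotated with the symbol type if the prefix ending there is valid, and with the information "it is invalid" otherwise; the last valid position wins. The names below and the small integer/real automaton are our own illustration.

  def longest_valid_symbol(text, delta, q0, finals):
      """delta: dict (state, char) -> state; finals: dict final_state -> symbol type.
      Returns (recognised_prefix, symbol_type, rest_of_text)."""
      q, buffer, best = q0, [], None            # best = (length, symbol type)
      for ch in text:
          if (q, ch) not in delta:
              break
          q = delta[(q, ch)]
          buffer.append((ch, finals.get(q)))    # the auxiliary information per character
          if finals.get(q) is not None:
              best = (len(buffer), finals[q])
      if best is None:
          return "", None, text
      n, kind = best
      return "".join(c for c, _ in buffer[:n]), kind, text[n:]

  # A small automaton for integers and simple reals (digits '.' digits):
  import string
  delta = {}
  for d in string.digits:
      delta.update({("start", d): "int", ("int", d): "int",
                    ("dot", d): "real", ("real", d): "real"})
  delta[("int", ".")] = "dot"
  finals = {"int": "integer", "real": "real number"}
  print(longest_valid_symbol("12.3e+f#", delta, "start", finals))
  # ('12.3', 'real number', 'e+f#')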

Example 2.4 Consider the character stream 12.3e+f#, where the character # is the end sign of the analysed text. If in this character stream there were a positive integer number in the place of the character f, then this character stream would be a real number. The content of the buffer of the lexical analyser:

The recognised symbol is the real number 12.3. The lexical analysis continues at the text e+f.

The number of lookahead characters may be determined from the definition of the programming language. In modern languages this number is at most two.

The symbol table.

There are programming languages, for example C, in which lowercase and uppercase letters are distinguished. In this case the lexical analyser uses the characters of all symbols without modification. Otherwise the lexical analyser converts all characters to their lowercase form or all characters to their uppercase form. It is proposed to execute this transformation in the source handler program.

In the case of simpler programming languages the lexical analyser writes the characters of the detected symbol into the symbol table, if this symbol is not there already. After writing it in, or if this symbol has already been in the symbol table, the lexical analyser returns the table address of this symbol, and writes this information into its output. These data will be important in semantic analysis and code generation.
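
The behaviour described above is the usual interning of symbols. A minimal sketch (class and method names are our own):

  class SymbolTable:
      """The first occurrence of a symbol is stored; every occurrence returns
      the same table address (here simply an index)."""

      def __init__(self):
          self._index = {}      # symbol text -> table address
          self.entries = []     # table address -> symbol text

      def intern(self, text):
          if text not in self._index:
              self._index[text] = len(self.entries)
              self.entries.append(text)
          return self._index[text]

  table = SymbolTable()
  print(table.intern("alpha"))   # 0
  print(table.intern("beta"))    # 1
  print(table.intern("alpha"))   # 0 again: the same address as before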

Directives.

In programming languages the directives serve to control the compiler. The lexical analyser identifies directives and recognises their operands, and usually there are further tasks with these directives.

If the directive is the if of conditional compilation, then the lexical analyser has to detect all parameters of this condition, and it has to evaluate the value of the branch. If this value is false, then it has to omit the next lines until the else or endif directive. This means that the lexical analyser performs syntactic and semantic checking, and creates code-style information. This task is more complicated if the programming language allows nested conditions.

Other types of directives are the substitution of macros and including files into the source text. These tasks are far away from the original task of the lexical analyser.

The usual way to solve these problems is the following. The compiler executes a pre-processing program, and this program performs all the tasks prescribed by the directives.

Exercises

2.2-1 Give a regular expression for the comments of a programming language. In this language the delimiters of comments are and , and inside a comment the characters and may occur, but is forbidden.

2.2-2 Modify the result of the previous question supposing that the programming language allows nested comments.

2.2-3 Give a regular expression for positive integer numbers, if the pre- and post-zero characters are prohibited. Give a deterministic finite automaton for this regular expression.

2.2-4 Write a program which re-creates the original source program from the output of the lexical analyser. Pay attention to the nice and correct positioning of the re-created character streams.

2.3 Syntactic analysis

The complete definition of a programming language includes the definition of its syntax and semantics.

The syntax of programming languages cannot be written by context free grammars. It is possible by using context dependent grammars, two-level grammars or attribute grammars. For these grammars there are no efficient parsing methods, hence the description of a language consists of two parts. The main part of the syntax is given using context free grammars, and for the remaining part a context dependent or an attribute grammar is applied. For example, the description of the program structure or the description of the statement structure belongs to the first part, and the type checking, the scope of variables or the correspondence of formal and actual parameters belong to the second part.

The checking of the properties written by context free grammars is called syntactic analysis or parsing. The properties that cannot be written by context free grammars form the static semantics. These properties are checked by the semantic analyser.

The conventional semantics has the name run-time semantics or dynamic semantics. The dynamic semantics can be given by verbal methods or some interpreter methods, where the operation of the program is given by the series of state-alterations of the interpreter and its environment.

We deal with context free grammars, and in this section we will use extended grammars for the syntactic analysis. We investigate methods for checking properties which are written by context free grammars. First we give the basic notions of syntactic analysis, then the parsing algorithms will be studied.

Definition 2.1 Let be a grammar. If and then is a sentential form. If and then is a sentence of the language defined by the grammar.

The sentence has an important role in parsing. The program written by a programmer is a series of terminal symbols, and this series is a sentence if it is correct, that is, it has no syntactic errors.

Definition 2.2 Let be a grammar and is a sentential form (). We say that is a phrase of , if there is a symbol , which and . We say that is a simple phrase of , if .

We note that every sentence is a phrase. The leftmost simple phrase has an important role in parsing; it has its own name.

Definition 2.3 The leftmost simple phrase of a sentence is the handle.

The leaves of the syntax tree of a sentence are terminal symbols, the other vertices of the tree are nonterminal symbols, and the root symbol of the tree is the start symbol of the grammar.

In an ambiguous grammar there is at least one sentence which has several syntax trees. It means that this sentence has more than one analysis, and therefore there are several target programs for this sentence. This ambiguity raises a lot of problems, therefore compilers translate only languages generated by unambiguous grammars.

We suppose that the grammar has the following properties:

  1. the grammar is cycle free, that is, it has no series of derivation rules of the form (),

  2. the grammar is reduced, that is, there are no “unused symbols” in the grammar: every nonterminal occurs in a derivation, and from every nonterminal a part of a sentence can be derived. This last property means that for all it is true that , where and ().

As we have seen, the lexical analyser translates the program written by a programmer into a series of terminal symbols, and this series is the input of the syntactic analyser. The task of the syntactic analyser is to decide whether this series is a sentence of the grammar or not. To achieve this goal, the parser creates the syntax tree of the series of symbols. From the known start symbol and the leaves of the syntax tree the parser creates all vertices and edges of the tree, that is, it creates a derivation of the program. If this is possible, then we say that the program is an element of the language. This means that the program is syntactically correct.

Henceforward we will deal with left-to-right parsing methods. These methods read the symbols of the program from left to right. All real compilers use this method.

To create the inner part of the syntax tree there are several methods. One of these methods builds the syntax tree starting from its start symbol . This method is called the top-down method. If the parser goes from the leaves to the symbol , then it uses the bottom-up parsing method.

We deal with top-down parsing methods in Subsection 2.3.1. We investigate bottom-up parsers in Subsection 2.3.2; nowadays these methods are used in real compilers.

2.3.1 parser

If we analyse from top to bottom, then we start with the start symbol. This symbol is the root of the syntax tree; we attempt to construct the syntax tree. Our goal is that the leaves of the tree are the terminal symbols.

First we review the notions that are necessary for top-down parsing. Then the table methods and the recursive descent method will be analysed.

grammars

Our methods build the syntax tree top-down and read the symbols of the program from left to right. To this end we try to create terminals on the left side of the sentential forms.

Definition 2.4 If then the leftmost direct derivation of the sentential form () is , and

Definition 2.5 If all direct derivations in () are leftmost, then this derivation is said to be a leftmost derivation, and

In a leftmost derivation terminal symbols appear at the left side of the sentential forms. Therefore we use leftmost derivations in all top-down parsing methods. Hence, if we deal with top-down methods, we do not write the word “leftmost” at the arrows.

One might as well say that we create all possible syntax trees. Reading their leaves from left to right, we obtain the sentences of the language. Then we compare these sentences with the parseable text, and if a sentence is the same as the parseable text, then we can read the steps of parsing from the syntax tree which belongs to this sentence. But this method is not practical; generally it is even impossible to apply.

A good idea is the following. We start at the start symbol of the grammar, and using leftmost derivations we try to create the text of the program. If we use an unsuitable derivation at one of the steps of parsing, then we find that, at the next step, we cannot apply a proper derivation. In this case there are terminal symbols at the left side of the sentential form that are not the same as in our parseable text.

For the leftmost terminal symbols we state the following theorem.

Theorem 2.6 If () and , then .

The proof of this theorem is trivial. It is not possible to change the leftmost terminal symbols of sentential forms using derivation rules of a context free grammar.

This theorem is used during the building of the syntax tree, to check that the leftmost terminals of the tree are the same as the leftmost symbols of the parseable text. If they are different, then we have taken a wrong direction with this syntax tree. In this case we have to backtrack, and we have to apply another derivation rule. If this is impossible (because, for example, there are no more derivation rules) then we have to backtrack once again.

General top-down methods are realized by using backtrack algorithms, but these backtrack steps make the parser very slow. Therefore we will deal only with grammars that have parsing methods without backtracks.

The main properties of grammars are the following. If, by creating the leftmost derivation (), we obtain the sentential form () at some step of this derivation, and our goal is to achieve , then the next step of the derivation for nonterminal is determinable unambiguously from the first symbols of .

To look ahead symbols we define the function .

Definition 2.7 Let be the set as follows.

The set consists of the first symbols of ; for , it consists of the full . If , then .

Definition 2.8 The grammar is a grammar , if for derivations

() the equality

implies

Using this definition, if a grammar is a grammar then the symbols after the parsed determine the next derivation rule unambiguously (Figure 2.8).

Figure 2.8.  LL(k) grammar.

One can see from this definition that if a grammar is an grammar then for all it is also an grammar. If we speak about grammar then we also mean that is the least number such that the properties of the definition are true.

Example 2.5 The next grammar is a grammar. Let be a grammar whose derivation rules are:

We have to use the derivation for the start symbol if the next symbol of the parseable text is or . We use the derivation if the next symbol is the mark #.

Example 2.6 The next grammar is a grammar. Let be a grammar whose derivation rules are:

One can see that at the last step of derivations

and

if we look ahead one symbol, then in both derivations we obtain the symbol . The proper rule for symbol can be determined only by looking ahead two symbols ( or ).

There are context free grammars that are not grammars. For example the next grammar is not an grammar for any .

Example 2.7 Let be a grammar whose the derivation rules are:

consists of sentences and . If we analyse the sentence , then at the first step we cannot decide by looking ahead symbols whether we have to use the derivation or , since for all .

By the definition of the grammar, if we get the sentential form using leftmost derivations, then the next symbol determines the next rule for symbol . This is stated in the next theorem.

Theorem 2.9 Grammar is a grammar iff

implies

If there is a rule in the grammar, then the set consists of the length prefixes of the terminal series generated from . This implies that, for deciding the property , we have to check not only the derivation rules, but also the infinite derivations.

We can give good methods, used in practice, for grammars only. We define the follower series, which can follow a symbol or a series of symbols.

Definition 2.10 , and if , then () .

The second part of the definition is necessary because if there are no symbols after the in the derivation , that is , then the next symbol after is the mark # only.

() consists of the terminal symbols that can stand immediately after the symbol in the derivation

Theorem 2.11 The grammar is a grammar iff, for all nonterminal and for all derivation rules ,

In this theorem the expression means that we have to concatenate to the elements of set separately, and for all elements of this new set we have to apply the function .

It is evident that Theorem 2.11 is suitable for deciding whether a grammar is or not.

Henceforward we deal with languages determined by grammars, and we investigate the parsing methods of languages. For the sake of simplicity, we omit the indexes from the names of the functions and .

The elements of the set are determined using the next algorithm.

First()

  1  IF  
  2    THEN  
  3  IF , where  
  4    THEN  
  5  IF , where  
  6    THEN IF  
  7       THEN  
  8       ELSE  
  9       FOR all   
 10          DO  
 11             FOR  TO  
 12                DO IF  
 13                   THEN  
 14             IF  
 15                THEN  
 16  IF  
 17    THEN  
 18       FOR  TO  
 19          DO IF  
 20             THEN  
 21       IF  
 22          THEN  
 23  RETURN   

In lines 1–4 the set is given for and for a terminal symbol . In lines 5–15 we construct the elements of this set for a nonterminal . If is derived from , then we put the symbol into the set in lines 6–7 and 14–15. If the argument is a string of symbols, then the elements of the set are constructed in lines 16–22. We note that we can terminate the FOR cycle in lines 11 and 18 if , since in this case it is not possible to derive the symbol from .
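
The construction of the First sets can also be sketched as a short program. The following Python sketch computes the FIRST sets for the case k = 1 by a fixed-point iteration; the grammar representation (a dictionary rules mapping each nonterminal to the list of its right-hand sides, the empty tuple standing for an empty right-hand side) and the marker EPS are our own conventions, not part of the pseudocode above.

       # Sketch of FIRST-set computation for k = 1.
       EPS = ""                        # stands for the empty word

       def first_of_string(symbols, first):
           """FIRST of a string of grammar symbols; EPS means it can vanish."""
           result = set()
           for X in symbols:
               result |= first[X] - {EPS}
               if EPS not in first[X]:
                   break
           else:                       # every symbol may vanish, or the string is empty
               result.add(EPS)
           return result

       def first_sets(rules, terminals):
           first = {a: {a} for a in terminals}
           first.update({A: set() for A in rules})
           changed = True
           while changed:              # iterate until a fixed point is reached
               changed = False
               for A, rhss in rules.items():
                   for rhs in rhss:
                       f = first_of_string(rhs, first)
                       if not f <= first[A]:
                           first[A] |= f
                           changed = True
           return first

       # Example (the expression grammar implied by the recursive-descent
       # parser of Example 2.10): G = {"E": [("T", "E'")],
       #   "E'": [("+", "T", "E'"), ()], "T": [("F", "T'")],
       #   "T'": [("*", "F", "T'"), ()], "F": [("(", "E", ")"), ("i",)]}
       # first_sets(G, {"+", "*", "(", ")", "i"})["E"]  ->  {"(", "i"}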

In Theorem 2.11 and hereafter, it is necessary to know the elements of the set . The next algorithm constructs this set.

Follow()

  1  IF  
  2    THEN  
  3    ELSE  
  4  FOR all rules  
  5    DO IF  
  6       THEN  
  7          IF  
  8             THEN  
  9       ELSE  
 10  RETURN  

The elements of the set get into the set . In lines 4–9 we check which symbols may stand immediately after the argument if it occurs on the right side of a derivation rule. It is obvious that no is in this set, and the symbol is in the set only if the argument is the rightmost symbol of a sentential form.
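
A corresponding sketch for the Follow sets, reusing EPS and first_of_string from the previous sketch and treating the end mark # as the only follower of the start symbol, may look as follows.

       def follow_sets(rules, start, first):
           """Sketch of FOLLOW-set computation (k = 1); '#' is the end mark."""
           follow = {A: set() for A in rules}
           follow[start].add("#")
           changed = True
           while changed:                      # fixed-point iteration again
               changed = False
               for A, rhss in rules.items():
                   for rhs in rhss:
                       for i, X in enumerate(rhs):
                           if X not in rules:  # FOLLOW is defined for nonterminals
                               continue
                           tail = first_of_string(rhs[i + 1:], first)
                           new = tail - {EPS}
                           if EPS in tail:     # X may be rightmost in a sentential form
                               new |= follow[A]
                           if not new <= follow[X]:
                               follow[X] |= new
                               changed = True
           return follow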

Parsing with table

Suppose that we analyse a series of terminal symbols and the part has already been analysed without errors. We analyse the text with a top-down method, so we use leftmost derivations. Suppose that our sentential form is , that is, it has the form or () (Figure 2.9).

Figure 2.9.  The sentential form and the analysed text.



In the first case the next step is the substitution of the symbol . We know the next element of the input series; this is the terminal , so we can determine the correct substitution of the symbol . This substitution is the rule for which . If such a rule exists then, by the definition of the grammar, it is unique. If there is no such rule, then a syntactic error has been found.

In the second case the next symbol of the sentential form is the terminal symbol , thus we expect the symbol to be the next symbol of the analysed text. If this is the case, that is, , then the symbol is correct and we can go further: we put the symbol into the already analysed part of the text. If , then there is a syntactic error. We can see that the position of the error is known, and the erroneous symbol is the terminal symbol .

The action of the parser is the following. Let # denote the right end of the analysed text, that is, the mark # is the last symbol of the text. We use a stack during the analysis; the bottom of the stack is also marked by #. We give serial numbers to the derivation rules, and during the analysis we write the number of each applied rule into a list. At the end of the parsing we can build the syntax tree from this list (Figure 2.10).

We denote the state of the parser by triples . The symbol is the text not yet analysed. is the part of the sentential form corresponding to the not yet analysed text; this information is in the stack, with the symbol at the top of the stack. is the list of the serial numbers of the production rules.

When we analyse the text, we observe the symbol at the top of the stack and the first symbol of the not yet analysed text. The latter is called the actual symbol. There are pointers to the top of the stack and to the actual symbol.

Figure 2.10.  The structure of the LL(1) parser.


We use a top-down parser, therefore the initial content of the stack is . If the text to be analysed is , then the initial state of the parsing process is the triple , where denotes the empty list.

We analyse the text, a series of symbols, using a parsing table. The rows of this table are labelled by the symbols that may appear at the top of the stack, the columns are labelled by the possible next input symbols, and we assign the mark # to the last row and the last column of the table. Hence the number of rows of the table is greater by one than the number of symbols of the grammar, and the number of columns is greater by one than the number of terminal symbols of the grammar.

The element of the table is as follows.

We fill in the parsing table using the following algorithm.

LL(1)-Table-Fill-in()

  1  FOR all  
  2    DO IF  the -th rule 
  3       THEN FOR all 
  4          DO  
  5          IF  
  6             THEN FOR all  
  7                DO  
  8  FOR all  
  9    DO  
 10   
 11  FOR all  and all  
 12    DO IF ”empty” 
 13       THEN  
 14  RETURN  

In line 10 we write the text accept into the lower right corner of the table. In lines 8–9 we write the text pop into the main diagonal of the square labelled by terminal symbols. The loop in lines 1–7 writes a tuple whose first element is the right-hand side of a derivation rule and whose second element is the serial number of this rule. In lines 12–13 we write error texts into the empty positions.
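
Under the same assumed grammar representation, the table construction can be sketched in a few lines; the rule numbering, assigned in the order in which the rules are listed, and the dictionary form of the table are our own choices.

       def ll1_table(rules, terminals, start):
           """Sketch of LL(1) parsing-table construction.

           table[(A, a)] is (rhs, rule number); the diagonal of the terminal
           square holds "pop", the (#, #) corner holds "accept", and every
           missing entry means error, as described above."""
           first = first_sets(rules, terminals)
           follow = follow_sets(rules, start, first)
           table, number = {}, 0
           for A, rhss in rules.items():
               for rhs in rhss:
                   number += 1
                   f = first_of_string(rhs, first)
                   lookaheads = (f - {EPS}) | (follow[A] if EPS in f else set())
                   for a in lookaheads:
                       if (A, a) in table:        # two rules compete: not LL(1)
                           raise ValueError("conflict at " + str((A, a)))
                       table[(A, a)] = (rhs, number)
           for a in terminals:
               table[(a, a)] = "pop"
           table[("#", "#")] = "accept"
           return table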

The actions of the parser are described by state transitions. The initial state is , where the initial text is , and the parsing process is finished when the parser reaches the state , which is the final state. If in an intermediate step the text is and the symbol is at the top of the stack, then the possible state transitions are as follows.

The letters O.K. mean that the analysed text is syntactically correct; the text ERROR means that a syntactic error is detected.

The actions of this parser are described by the next algorithm.

LL(1)-Parser()

  1  ,  
  2  REPEAT 
  3    IF  and  
  4       THEN  
  5       ELSE IF  
  6          THEN        Then . 
  7          ELSE IF  
  8             THEN       Then . 
  9             ELSE       Then . 
 10  UNTIL OR  
 11  RETURN  

The input parameters of this algorithm are the text and the parsing table . The variable describes the state of the parser: its value is analyse during the analysis, and it is either O.K. or ERROR at the end. The parser determines its action from the actual symbol and the symbol at the top of the stack, using the parsing table . In lines 3–4 the parser builds the syntax tree using the derivation rule . In lines 5–6 the parser executes a shift action, since there is a symbol at the top of the stack. In lines 8–9 the algorithm finishes its work if the stack is empty and the parser is at the end of the text; otherwise a syntactic error was detected. At the end, the result is O.K. or ERROR in the variable , and the triple appears at the output of the algorithm. If the text was correct, then we can create the syntax tree of the analysed text from the third element of the triple. If there was an error, then the first element of the triple points to the position of the erroneous symbol.
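
The parser itself is a short loop driven by this table. The sketch below assumes the table produced above and a text given as a list of terminal symbols ending with the mark #; it returns the verdict together with the list of applied rule numbers, from which the syntax tree can be built.

       def ll1_parse(text, table, start):
           """Table-driven LL(1) parser sketch."""
           stack = ["#", start]            # bottom mark, start symbol on top
           applied = []                    # serial numbers of the applied rules
           pos = 0
           while True:
               top, a = stack[-1], text[pos]
               entry = table.get((top, a))
               if entry == "accept":
                   return ("O.K.", applied)
               if entry == "pop":          # expected terminal found in the text
                   stack.pop()
                   pos += 1
               elif isinstance(entry, tuple):
                   rhs, number = entry     # expand the nonterminal at the top
                   stack.pop()
                   stack.extend(reversed(rhs))   # leftmost symbol ends up on top
                   applied.append(number)
               else:
                   return ("ERROR at position " + str(pos), applied)

       # e.g.  ll1_parse(list("i+i*i") + ["#"],
       #                 ll1_table(G, {"+", "*", "(", ")", "i"}, "E"), "E")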

Example 2.8 Let be a grammar, where the set of derivation rules is:

From these rules we can determine the sets. To fill in the parsing table, the following sets are required:

,

,

,

,

,

,

,

.

The parsing table is as follows. The empty positions in the table mean errors.

Example 2.9 Using the parsing table of the previous example, analyse the text .

The syntax tree of the analysed text is shown in Figure 2.11.

Figure 2.11.  The syntax tree of the sentence i+i*i.

Recursive-descent parsing method

There is another frequently used method for backtrack-free top-down parsing. Its essence is that we write a real program for the given grammar. We create procedures for the symbols of the grammar, and the recursive procedure calls realize the stack of the parser and the stack management. This is a top-down parsing method, and the procedures call each other recursively; this is the origin of the name of the method, the recursive-descent method.

To check the terminal symbols we create the procedure Check. Let the parameter of this procedure be the expected symbol, that is, the leftmost unchecked terminal symbol of the sentential form, and let the actual symbol be the symbol being analysed at that moment.

       procedure Check(a);
       begin
          if actual_symbol = a
             then Next_symbol
             else Error_report
       end;

The procedure Next_symbol reads the next symbol; it is a call to the lexical analyser. This procedure determines the next symbol and puts it into the variable actual_symbol. The procedure Error_report creates an error report and then finishes the parsing.

We create procedures for the symbols of the grammar as follows. The procedure of a nonterminal symbol is the following.

       procedure A;
       begin
          T(A)
       end;

where T(A) is determined by the symbols on the right-hand side of the derivation rule having the symbol on its left-hand side.

The grammars used for syntactic analysis are reduced grammars. This means that there are no unnecessary symbols in the grammar, and every nonterminal occurs on the left-hand side of at least one derivation rule. Therefore, if we consider the symbol , there is at least one production rule for it.

  1. If there is only one production rule for the symbol ,

    1. let the program of the rule be as follows: Check(a),

    2. for the rule we give the procedure call B,

    3. for the rule we give the next block:

    4. begin

      T(X_1);

      T(X_2);

      ...

      T(X_n)

      end;

  2. If there are several rules for the symbol :

    1. If the rules are -free, that is, from () it is not possible to derive , then T(A) is

      case actual_symbol of

      First(alpha_1) : T(alpha_1);

      First(alpha_2) : T(alpha_2);

      ...

      First(alpha_n) : T(alpha_n)

      end;

      where First(alpha_i) denotes the set .

      We note that this is the first point in the recursive-descent parsing method where we use the fact that the grammar is an grammar.

    2. We use the grammar to describe a programming language, therefore it is not convenient to require that the grammar be -free. For the rules we create the following T(A) program:

      case actual_symbol of

      First(alpha_1) : T(alpha_1);

      First(alpha_2) : T(alpha_2);

      ...

      First(alpha_(n-1)) : T(alpha_(n-1));

      Follow(A) : skip

      end;

      where Follow(A) is the set .

      In particular, if the rules for some , that is , then the -th row of the case statement is

      Follow(A) : skip

In the program T(A), if possible, we use an if-then-else or while statement instead of the case statement.

The start procedure, that is, the main program of the parser, is the procedure created for the start symbol of the grammar.

We can create the recursive-descent parsing program with the next algorithm. The input of this algorithm is the grammar , and the result is the parsing program . In this algorithm we use a WRITE-PROGRAM procedure, which appends the new program lines to the program . We will not go into the details of this algorithm.

Create-Rec-Desc()

  1   
  2  WRITE-PROGRAM( 
  3    procedure Check(a); 
  4    begin 
  5    if actual_symbol = a 
  6       then Next_symbol 
  7       else Error_report 
  8    end; 
  9    ) 
 10  FOR all symbol  of the grammar  
 11    DO IF  
 12       THEN WRITE-PROGRAM( 
 13          program S; 
 14          begin 
 15             REC-DESC-STAT  
 16          end; 
 17          ) 
 18       ELSE WRITE-PROGRAM( 
 19          procedure A; 
 20          begin 
 21          REC-DESC-STAT  
 22          end; 
 23          ) 
 24  RETURN  

The algorithm creates the Check procedure in lines 2–9. Then, for every nonterminal of the grammar , it determines its procedure using the algorithm REC-DESC-STAT. In lines 11–17 we can see that for the start symbol we create the main program. The output of the algorithm is the parsing program.

Rec-Desc-Stat()

  1  IF there is only one rule  
  2    THEN REC-DESC-STAT1        . 
  3    ELSE REC-DESC-STAT2        . 
  4  RETURN  

The form of the statements of the parsing program depends on the derivation rules of the symbol . Therefore the algorithm REC-DESC-STAT divides the task into two parts. The algorithm REC-DESC-STAT1 deals with the case when there is only one derivation rule, and the algorithm REC-DESC-STAT2 creates the program for the alternatives.

Rec-Desc-Stat1()

  1  IF  
  2    THEN WRITE-PROGRAM( 
  3       Check(a) 
  4       ) 
  5  IF  
  6    THEN WRITE-PROGRAM( 
  7       B 
  8       ) 
  9  IF   
 10    THEN WRITE-PROGRAM( 
 11       begin 
 12          REC-DESC-STAT1 ; 
 13          REC-DESC-STAT1 ; 
 14            
 15          REC-DESC-STAT1  
 16       end; 
 17  RETURN  

Rec-Desc-Stat2()

  1  IF the rules  are - free 
  2    THEN WRITE-PROGRAM( 
  3       case actual_symbol of 
  4          First(alpha_1) : REC-DESC-STAT1 ; 
  5          ... 
  6          First(alpha_n) : REC-DESC-STAT1  
  7       end; 
  8       ) 
  9  IF there is a -rule,   
 10    THEN WRITE-PROGRAM( 
 11       case actual_symbol of 
 12          First(alpha_1) : REC-DESC-STAT1 ; 
 13          ... 
 14          First(alpha_(i-1)) : REC-DESC-STAT1 ; 
 15          Follow(A) : skip; 
 16          First(alpha_(i+1)) : REC-DESC-STAT1  ; 
 17          ... 
 18          First(alpha_n) : REC-DESC-STAT1  
 19       end; 
 20       ) 
 21  RETURN  

These two algorithms create the program described above.

Checking the end of the parsed text is achieved in the recursive-descent parsing method with the following modification. We generate a new derivation rule for the end mark #. If the start symbol of the grammar is , then we create the new rule , where the new symbol is the start symbol of the new grammar. The mark # is considered a terminal symbol. Then we generate the parsing program for this new grammar.

Example 2.10 We augment the grammar of Example 2.8 in the above manner. The production rules are as follows.

In Example 2.8 we gave the necessary and sets. We use the following sets:

,

,

,

,

,

.

In the comments of the program lines we indicate which of these sets is used. A comment starts with the character pair --.

The program of the recursive-descent parser is the following.

       program S';
       begin
          E;
          Check(#)
       end.
       procedure E;
       begin
          T;
          E'
       end;
       procedure E';
       begin
          case actual_symbol of
          + : begin -- First(+TE')
             Check(+);
             T;
             E'
          end;
          ),# : skip -- Follow(E')
          end
       end;
       procedure T;
       begin
          F;
          T'
       end;
       procedure T';
       begin
          case actual_symbol of
          * : begin -- First(*FT')
             Check(*);
             F;
             T'
          end;
          +,),# : skip -- Follow(T')
          end
       end;
       procedure F;
       begin
          case actual_symbol of
          ( : begin -- First((E))
             Check(();
             E;
             Check())
          end;
          i : Check(i) -- First(i)
          end
       end;

We can see that the main program of this parser belongs to the symbol .

2.3.2 parsing

If we analyse bottom-up, then we start with the program text. We search for the handle of the sentential form, and we substitute the nonterminal symbol that belongs to the handle for this handle. After this first step, we repeat the procedure several times. Our goal is to reach the start symbol of the grammar. This symbol will be the root of the syntax tree, and by this time the terminal symbols of the program text are the leaves of the tree.

First we review the notions which are necessary in the parsing.

To analyse bottom-up, we have to determine the handle of the sentential form. The problem is to create a good method that finds the handle, and to choose the right substitution if there is more than one possibility.

Definition 2.12 If , then the rightmost substitution of the sentential form () is , that is

Definition 2.13 If in the derivation () all of the substitutions were rightmost substitutions, then this derivation is a rightmost derivation.

In a rightmost derivation, terminal symbols are on the right side of the sentential form. By the connection between the notion of the handle and the rightmost derivation, if we apply the steps of a rightmost derivation backwards, then we obtain the steps of a bottom-up parsing. Hence bottom-up parsing is equivalent to the “inverse” of a rightmost derivation. Therefore, when we deal with bottom-up methods, we will not write the text “rightmost” at the arrows.

General bottom-up parsing methods are realized using backtrack algorithms, similarly to the top-down parsing methods. But the backtrack steps make the parser very slow. Therefore we deal only with grammars that have parsing methods without backtracks.

Henceforward we present a very efficient algorithm for a large class of context-free grammars. This class contains the grammars of the programming languages.

The parsing is called parsing; the grammar is called grammar. means the “Left to Right” method, and means that if we look ahead symbols then we can determine the handles of the sentential forms. The parsing method is a shift-reduce method.

We deal with parsing only, since for every grammar there is an equivalent grammar. This fact is very important for us because, using this type of grammars, it is enough to look ahead one symbol in all cases.

Creating parsers is not an easy task. However, there are standard programs (for example yacc in UNIX systems) that create the complete parsing program from the derivation rules of a grammar. Using these programs, the task of writing parsers is not too hard.

After studying the grammars we will deal with the parsing method. This method is used in the compilers of modern programming languages.

grammars.

As we did previously, we write a mark # to the right end of the text to be analysed. We introduce a new nonterminal symbol and a new rule into the grammar.

Definition 2.14 Let be the augmented grammar belonging to the grammar , where

Assign serial numbers to the derivation rules of the grammar, and let be the 0th rule. With this numbering, applying the 0th rule means that the parsing process is concluded and the text is correct.

We note that if the original start symbol does not occur on the right side of any rule, then there is no need for this augmentation. However, for the sake of generality, we deal with augmented grammars only.

Definition 2.15 The augmented grammar is an LR(k) grammar , if for derivations

() the equality

implies

The feature of grammars is that, in the sentential form , looking ahead symbols from unambiguously decides whether is the handle or not. If the handle is , then we have to reduce the form using the rule , which results in the new sentential form . The reason is the following: suppose that, for the sentential forms and (their prefixes are the same), , and we can reduce to and to . In this case, since the grammar is a grammar, and hold. Therefore, in this case, either is the handle or is never the handle.

Figure 2.12.  The LR(k) grammar.

Example 2.11 Let be a grammar and let the derivation rules be as follows.

This grammar is not an grammar, since using notations of the definition, in the derivations

it holds that , and .

Example 2.12 The next grammar is a grammar. , the derivation rules are:

.

In the next example we show that there is a context-free grammar that is not an grammar for any .

Example 2.13 Let be a grammar and let the derivation rules be

Now for all

and

but

It is not certain that, for a grammar, we can find an equivalent grammar. However, grammars have this nice property.

Theorem 2.16 For all grammar there is an equivalent grammar.

The great significance of this theorem is that it makes it sufficient to study grammars instead of grammars.

canonical sets.

Now we define a very important notion of parsing.

Definition 2.17 If is the handle of the () sentential form, then the prefixes of are the viable prefixes of .

Example 2.14 Let be a grammar and let the derivation rule be as follows.

is a sentential form, and the first is the handle. The viable prefixes of this sentential form are .

By the above definition, symbols after the handle are not part of any viable prefix. Hence the task of finding the handle is the task of finding the longest viable prefix.

For a given grammar, the set of viable prefixes is determined, but it is obvious that the size of this set is not always finite.

The significance of viable prefixes is the following. We can assign the states of a deterministic finite automaton to viable prefixes, and we can assign state transitions to the symbols of the grammar. From the initial state we reach a state along the symbols of a viable prefix. Using this property, we will give a method to create an automaton that executes the task of parsing.

Definition 2.18 If is a rule of a grammar, then let

be a LR(1)-item, where is the core of the -item, and is the lookahead symbol of the -item.

Figure 2.13.  The \left[A\rightarrow\alpha.\beta,a\right] LR(1)-item.


The lookahead symbol plays a role only in reductions, i.e. when the item has the form . It means that we can execute a reduction only if the symbol follows the handle .

Definition 2.19 The -item is valid for the viable prefix if

and is the first symbol of or if then .

Example 2.15 Let be a grammar and let the derivation rules be as follows.

Using these rules, we can derive . Here is a viable prefix, and the item is valid for this viable prefix. Similarly, , and the -item is valid for the viable prefix .

To create the parser, we construct the canonical sets of -items. To achieve this, we have to define the functions closure and read.

Definition 2.20 Let the set be a set of -items for a given grammar. The set closure() consists of the next -items:

1. every element of the set is an element of the set ,

2. if , and is a derivation rule of the grammar, then for all ,

3. the set must be expanded using step 2 until no more items can be added to it.

By the definitions, if the -item is valid for the viable prefix , then the -item is valid for the same viable prefix in the case (Figure 2.14). It is obvious that the function closure creates all of the -items that are valid for the viable prefix .

Figure 2.14.  The function closure(\left[A\rightarrow\alpha.B\beta,a\right]).


We can compute the function , i.e. the closure of the set , by the following algorithm. The result of this algorithm is the set .

Closure-Set-of-Items()

  1   
  2  FOR all  -item 
  3    DO  
  4  RETURN  

Closure-Item()

  1   
  2  IF the -item  has form  
  3    THEN  
  4        
  5       REPEAT 
  6          FOR for all  which have form  
  7             DO FOR for all rules  
  8                DO FOR for all symbols  
  9                   DO  
 10           
 11          IF  
 12             THEN   
 13                 
 14       UNTIL  
 15  RETURN  

The algorithm CLOSURE-ITEM creates , the closure of the item . If, in the argument , the “point” is followed by a terminal symbol, then the result is this item only (line 1). If the “point” in is followed by a nonterminal symbol , then we can create new items from every rule having the symbol on its left side (line 9). We have to check this condition for all new items as well; the REPEAT cycle is in lines 5–14. These steps are executed until no more items can be added (line 14). The set contains the items to be checked, the set contains the new items. The operation is in line 10.
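
The closure operation is also easy to express directly on item sets. In the sketch below an LR(1)-item is a tuple (A, rhs, dot, a), where dot is the position of the point in rhs and a is the lookahead symbol; the grammar representation and the helpers first_of_string and EPS are the ones introduced for the LL(1) sketches, with # included among the terminals. These conventions are our own assumptions.

       def closure(items, rules, first):
           """Sketch of the closure of a set of LR(1)-items."""
           result, work = set(items), list(items)
           while work:
               A, rhs, dot, a = work.pop()
               if dot < len(rhs) and rhs[dot] in rules:   # point stands before a nonterminal B
                   B, beta = rhs[dot], rhs[dot + 1:]
                   lookaheads = first_of_string(beta + (a,), first) - {EPS}
                   for gamma in rules[B]:
                       for b in lookaheads:
                           item = (B, gamma, 0, b)
                           if item not in result:
                               result.add(item)
                               work.append(item)   # new items must be closed as well
           return frozenset(result)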

Definition 2.21 Let be a set of -items for the grammar . Then the set read() () consists of the following -items.

1. if , then all items of the set are in ,

2. the set is extended using step 1 until no more items can be added to it.

The function “reads the symbol ” in the items of , and after this operation the sign “point” in the items gets to the right side of . If the set contains the valid -items for the viable prefix , then the set contains the valid -items for the viable prefix .

The algorithm READ-SET-OF-ITEMS executes the function read. The result is the set .

Read-Set()

  1   
  2  FOR all  
  3    DO  
  4  RETURN  

Read-Item()

  1  IF  and  
  2    THEN  
  3    ELSE  
  4  RETURN  

Using these algorithms we can create all of the items that describe the state reached after reading the symbol .
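
The read function then simply moves the point over a given symbol and closes the result; this sketch uses the closure function from the previous sketch.

       def read(items, X, rules, first):
           """Sketch of the read function: step over the symbol X."""
           moved = {(A, rhs, dot + 1, a)
                    for (A, rhs, dot, a) in items
                    if dot < len(rhs) and rhs[dot] == X}
           return closure(moved, rules, first) if moved else frozenset()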

Now we introduce the following notation for -items, to give shorter descriptions. Let

be a notation for items

Example 2.16 The -item is an item of the grammar in Example 2.15. For this item

We can create the canonical sets of -items or shortly the -canonical sets with the following method.

Definition 2.22 Canonical sets of -items are the following.

,

Create the set for a symbol . If this set is not empty and it is not equal to canonical set then it is the next canonical set .

Repeat this operation for all possible terminal and nonterminal symbols . If we get a nonempty set that is not equal to any of the previous sets, then this set is a new canonical set, and its index is greater by one than the maximal index of the previously generated canonical sets.

Repeat the above operation for all previously generated canonical sets and for all symbols of the grammar until no new canonical sets arise.

The sets

are the canonical sets of -items of the grammar .

The number of -items of a grammar is finite, hence the above method terminates in finite time.

The next algorithm creates canonical sets of the grammar .

Create-Canonical-Sets()

  1   
  2   
  3   
  4  REPEAT 
  5     
  6    FOR all 
  7       DO  
  8          FOR all 
  9             DO  
 10                IF  and  
 11                   THEN   
 12                       
 13                       
 14                       
 15  UNTIL   
 16  RETURN  

The result of the algorithm is . The first canonical set is the set given in line 2. Further canonical sets are created by the functions CLOSURE-SET-OF-ITEMS(READ-SET) in line 9. Line 10 checks whether the new set differs from the previous sets, and if so, then this set becomes a new canonical set in lines 11–12. The FOR cycle in lines 6–14 guarantees that these operations are executed for all previously generated sets. The REPEAT cycle in lines 4–15 generates new canonical sets as long as it is possible.
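
Building on the closure and read sketches, the whole canonical collection can be computed as follows. The sketch assumes an augmented grammar whose start symbol has a single rule, a set of terminals, and # as the initial lookahead symbol; it also records the state transitions, which will be needed for the parsing tables.

       def canonical_sets(rules, start, terminals):
           """Sketch of constructing the canonical sets of LR(1)-items."""
           first = first_sets(rules, terminals | {"#"})
           symbols = set(terminals) | set(rules)
           H0 = closure({(start, rules[start][0], 0, "#")}, rules, first)
           sets, transitions, work = [H0], {}, [0]
           while work:
               i = work.pop()
               for X in symbols:
                   J = read(sets[i], X, rules, first)
                   if not J:
                       continue
                   if J not in sets:              # a genuinely new canonical set
                       sets.append(J)
                       work.append(len(sets) - 1)
                   transitions[(i, X)] = sets.index(J)
           return sets, transitions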

Example 2.17 The canonical sets of -items for Example 2.15 are as follows.

The automaton of the parser is in Figure 2.15.

Figure 2.15.  The automaton of Example 2.15.

parser.

If the canonical sets of -items

were created, then we assign the state of an automaton to the set . The relation between the states of the automaton and the canonical sets of -items is stated by the next theorem. This theorem is the “great” theorem of -parsing.

Theorem 2.23 The set of the -items valid for a viable prefix can be assigned to the automaton state such that there is a path from the initial state to the state labelled by .

This theorem states that we can create the automaton of the parser using canonical sets. Now we give a method to create this parser from canonical sets of -items.

The deterministic finite automaton can be described by a table, which is called the parsing table. The rows of the table are assigned to the states of the automaton.

The parsing table has two parts. The first is the action table. Since the operations of the parser are determined by the symbols of the analysed text, the action table is divided into columns labelled by the terminal symbols. The action table contains information about the action to be performed at a given state for a given symbol. These actions can be shifts or reductions. The sign of a shift operation is , where is the next state. The sign of a reduction is , where is the serial number of the applied rule. A reduction by the rule with serial number zero means the termination of the parsing and that the parsed text is syntactically correct; for this reason we call this operation accept.

The second part of the parsing table is the goto table. This table contains information about the shifts caused by nonterminals. (Shifts belonging to terminals are in the action table.)

Let be the set of states of the automaton. The -th row of the table is filled in from the -items of the canonical set .

The -th row of the action table:

  • if and then ,

  • if and , then , where is the -th rule of the grammar,

  • if , then .

The method of filling in the goto table:

  • if , then .

  • In both tables we have to write the text error into the empty positions.

These action and goto tables are called canonical parsing tables.

Theorem 2.24 The augmented grammar is grammar iff we can fill in the parsing tables created for this grammar without conflicts.

We can fill in the parsing tables with the next algorithm.

Fill-in-LR(1)-Table()

  1  FOR all LR(1) canonical sets  
  2    DO FOR all LR(1)-items 
  3       IF  and  
  4          THEN  
  5       IF  and  and  the -th rule 
  6          THEN  
  7       IF  
  8          THEN  
  9       IF  
 10          THEN  
 11    FOR all  
 12       DO IF “empty” 
 13          THEN  
 14    FOR all  
 15       DO IF ”empty” 
 16          THEN   
 17  RETURN action, goto  

We fill in the tables line by line. In lines 2–6 of the algorithm we fill in the action table, in lines 9–10 we fill in the goto table. In lines 11–16 we write error into the positions that remained empty.

Figure 2.16.  The structure of the LR(1) parser.


Now we deal with the steps of the parsing (Figure 2.16).

The state of the parsing is described by configurations. A configuration of the parser consists of two parts: the first is the stack and the second is the unexpended input text.

The stack of the parser is a double stack: we write or read two data items with the operations push and pop. The stack consists of pairs; the first element of a pair is a terminal or nonterminal symbol, and the second element is the serial number of a state of the automaton. The content of the stack in the start state is .

The start configuration is , where denotes the unexpended text.

The parsing is successful if the parser reaches the final state. In the final state the content of the stack is , and the parser is at the end of the text.

Suppose that the parser is in the configuration . The next move of the parser is determined by .

State transitions are the following.

  • If , i.e. the parser executes a shift, then the actual symbol and the new state are written into the stack. That is, the new configuration is

  • If , then we execute a reduction by the -th rule . In this step we delete rows, i.e. we delete elements from the stack, and then we determine the new state using the goto table. If after the deletion there is the state at the top of the stack, then the new state is .

    where .

  • If , then the parsing is completed, and the analysed text was correct.

  • If , then the parsing terminates, and a syntactic error was discovered at the symbol .

This parser is often called the canonical parser.

Denote the action and goto tables together by . We can give the following algorithm for the steps of the parser.

LR(1)-Parser()

  1  ,  
  2  REPEAT 
  3     
  4    IF  
  5       THEN  
  6       ELSE IF  and  is the -th rule and 
  7           and  
  8          THEN  
  9          ELSE IF  
 10             THEN  
 11             ELSE  
 12  UNTIL  OR  
 13  RETURN  

The input parameters of the algorithm are the text and the table . The variable indicates the action of the parser: it has the value parsing in the intermediate states, and its value is O.K. or ERROR at the final states. In line 3 we detail the configuration of the parser, which is necessary in lines 6–8. Using the action table, the parser determines its move from the symbol at the top of the stack and from the actual symbol . In lines 4–5 we execute a shift step, in lines 6–8 a reduction. The algorithm is completed in lines 9–11. At this point, if the parser is at the end of the text and the state 0 is at the top of the stack, then the text is correct; otherwise a syntax error was detected. Accordingly, the output of the algorithm is O.K. or ERROR, and the final configuration appears at the output, too. In the case of an error, the first symbol of the second element of the configuration is the erroneous symbol.
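
The core of the parser is again a small table-driven loop. In the sketch below action maps (state, terminal) to ("shift", new state), ("reduce", rule number), or "accept"; goto maps (state, nonterminal) to a state; and rule_table maps a rule number to the pair (left-hand side, right-hand side). These representations are our own assumptions; a missing action entry means a syntactic error.

       def lr1_parse(text, action, goto, rule_table):
           """Shift-reduce driver sketch; 'text' ends with the mark '#'."""
           stack = [("#", 0)]                    # pairs (symbol, state)
           pos = 0
           while True:
               state, a = stack[-1][1], text[pos]
               entry = action.get((state, a))
               if entry == "accept":
                   return "O.K."
               if entry is None:
                   return "ERROR at position " + str(pos)
               kind, arg = entry
               if kind == "shift":               # push the symbol and the new state
                   stack.append((a, arg))
                   pos += 1
               else:                             # reduce by rule number arg
                   A, rhs = rule_table[arg]
                   if rhs:                       # delete |rhs| (symbol, state) pairs
                       del stack[-len(rhs):]
                   prev_state = stack[-1][1]
                   stack.append((A, goto[(prev_state, A)]))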

Example 2.18 The action and goto tables of the parser for the grammar of Example 2.15 are as follows. The empty positions denote errors.

Example 2.19 Using the tables of the previous example, analyse the text .

The syntax tree of the sentence is in Figure 2.17.

Figure 2.17.  The syntax tree of the sentence aab.

parser.

Our goal is to decrease the number of states of the parser, since not only the size but also the speed of the compiler depends on the number of states. At the same time, we do not want our new method to radically cut down the set of usable grammars and languages.

There are a lot of -items in the canonical sets that are very similar: their cores are the same, only their lookahead symbols are different. If there are two or more canonical sets that contain only such similar items, then we merge these sets.

If the canonical sets and are mergeable, then let .

Execute all possible mergings of canonical sets. After renumbering the indexes we obtain the sets ; these are the merged canonical sets or canonical sets.

We create the parser from these united canonical sets.
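
Merging can be sketched directly on the canonical sets produced earlier: two LR(1) canonical sets are mergeable exactly when their cores, i.e. the items stripped of their lookahead symbols, coincide, so it suffices to group the sets by their cores and take unions. The item representation is the one assumed in the previous sketches.

       def merge_by_core(lr1_sets):
           """Sketch of LALR(1) merging of LR(1) canonical sets."""
           merged = {}                                   # core -> united item set
           for items in lr1_sets:
               core = frozenset((A, rhs, dot) for (A, rhs, dot, _) in items)
               merged.setdefault(core, set()).update(items)
           return [frozenset(s) for s in merged.values()]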

Example 2.20 Using the canonical sets of Example 2.17, we can merge the following canonical sets:

and ,

and ,

and .

In Figure 2.15 it can be seen that the mergeable sets are in equivalent or similar positions in the automaton.

There is no difficulty with the function read if we use merged canonical sets. If

and

then

We can prove this in the following way. By the definition of the function read, the set depends only on the cores of the -items in , and it is independent of the lookahead symbols. Since the cores of the -items in the sets are the same, the cores of the -items of

are also the same. It follows that these sets are mergeable into a set , thus .

However, after merging canonical sets of -items, the elements of the merged set can cause difficulties. Suppose that .

  • After merging there are no shift-shift conflicts. If

    and

    then there is a shift for the symbol , and we have seen that the function read does not cause a problem, i.e. the set is equal to the set .

  • If there is an item

    in the canonical set and there is an item

    in the set , then the merged set is an inadequate set for the symbol , i.e. there is a shift-reduce conflict in the merged set.

    But this case never happens. Both items are elements of the set and of the set . These sets are mergeable, thus they differ in their lookahead symbols only. It follows that there is an item in the set . Using Theorem 2.24 we get that the grammar is not a grammar; we get a shift-reduce conflict from the set for the parser, too.

  • However, after merging, a reduce-reduce conflict may arise. The properties of grammars do not exclude this case. In the next example we show such a case.

Example 2.21 Let be a grammar, and the derivation rules are as follows.

This grammar is a grammar. For the viable prefix the -items

for the viable prefix the -items

create two canonical sets.

After merging these two sets we get a reduce-reduce conflict. If the input symbol is or , then the handle is , but we cannot decide whether we have to use the rule or the rule for the reduction.

Now we give the method for creating a parsing table. First we give the canonical sets of -items

then we merge the canonical sets in which the sets constructed from the cores of the items are identical. Let

be the canonical sets.

For the calculation of the size of the action and goto tables and for filling in these tables we use the sets . The method is the same as for the parsers. The constructed tables are called parsing tables.

Definition 2.25 If filling in the parsing tables does not produce conflicts, then the grammar is said to be an grammar.

The run of the parser is the same as that of the parser.

Example 2.22 Denote the result of merging the canonical sets and by . Let be the state belonging to this set.

The canonical sets of the grammar of Example 2.15 were given in Example 2.17, and the mergeable sets were shown in Example 2.20. For this grammar we can create the following parsing tables.

The filling in of the tables is conflict-free, therefore the grammar is an grammar. The automaton of this parser is in Figure 2.18.

Figure 2.18.  The automaton of Example 2.22.

Example 2.23 Analyse the text using the parsing table of the previous example.

The syntax tree of the parsed text is in Figure 2.17.

As can be seen from the previous example, grammars are grammars. The converse assertion is not true. In Example 2.21 there is a grammar which is but which is not an grammar.

Programming languages can be described by grammars. The most frequently used method in compilers of programming languages is the method. The advantage of the parser is that the sizes of its parsing tables are smaller than the sizes of the parsing tables.

For example, the parsing tables for the Pascal language have a few hundred lines, whilst the parsers for this language have a few thousand lines.

Exercises

2.3-1 Find the grammars among the following grammars (we give their derivation rules only).

1.

2.

3.

4.

2.3-2 Prove that the next grammars are grammars (we give their derivation rules only).

1.

2.

3.

2.3-3 Prove that the next grammars are not grammars (we give their derivation rules only).

1.

2.

3.

2.3-4 Show that a language has only one sentence.

2.3-5 Prove that the next grammars are grammars (we give their derivation rules only).

1.

2.

2.3-6 Prove that the next grammars are grammars. (we give their derivation rules only).

1 .

2.

2.3-7 Prove that the next grammars are not grammars for any (we give their derivation rules only).

1.

2.

2.3-8 Prove that the next grammars are but are not grammars (we give their derivation rules only).

1.

2.

2.3-9 Create parsing table for the above grammars.

2.3-10 Using the recursive descent method, write the parsing program for the above grammars.

2.3-11 Create canonical sets and the parsing tables for the above grammars.

2.3-12 Create merged canonical sets and the parsing tables for the above grammars.

 PROBLEMS 

2-1 Lexical analysis of a program text

The algorithm LEX-ANALYSE in Section 2.2 gives a scanner for a text that is described by only one regular expression or deterministic finite automaton, i.e. this scanner is able to analyse only one symbol. Create an automaton which executes the complete lexical analysis of a programming language, and give the algorithm LEX-ANALYSE-LANGUAGE for this automaton. Let the input of the algorithm be the text of a program, and the output be the series of symbols. It is obvious that when the automaton reaches a final state, it restarts from the initial state to analyse the next symbol. The algorithm finishes its work when it is at the end of the text or a lexical error is detected.

2-2 Series of symbols augmented with data of symbols

Modify the algorithm of the previous task in such a way that the output is the series of symbols augmented with the appropriate attributes. For example, the attribute of a variable is the character string of its name, and the attribute of a number is its value and type. It is practical to write pointers to the data of the symbols in place of the data themselves.

2-3 parser from canonical sets

If we omit the lookahead symbols from the -items, then we get -items. We can define the functions closure and read for -items too, disregarding the lookahead symbols. Using a method similar to that of , we can construct the canonical sets

One can observe that the number of merged canonical sets is equal to the number of canonical sets, since the cores of -items of the merged canonical sets are the same as the items of the canonical sets. Therefore the number of states of parser is equal to the number of states of its parser.

Using this property, we can construct canonical sets from canonical sets, by completing the items of the canonical sets with lookahead symbols. The result of this procedure is the set of canonical sets.

It is obvious that the right side of an -item begins with the point only if the item was constructed by the function closure. (We note that there is one exception, the item of the canonical set .) Therefore there is no need to store all items of the canonical sets. Let the kernel of the canonical set be the -item , and let the kernel of any other canonical set be the set of -items in which the point is not at the first position of the right side of the item. We give an canonical set by its kernel, since all of its items can be constructed from the kernel using the function closure.

If we complete the items of the kernels of the canonical sets, then we get the kernels of the merged canonical sets. That is, if the kernel of an canonical set is , then by completion we get the kernel of the canonical set, .

If we know , then we can construct easily. If and , then . For -items, if and , then we also have to determine the lookahead symbols, i.e. the symbols such that .

If and , then certainly . In this case we say that the lookahead symbol was spontaneously generated for this item of the canonical set . The symbol does not play an important role in the construction of the lookahead symbol.

If then is an element of the set , and the lookahead symbol is . In this case we say that the lookahead symbol is propagated from into the item of the set .

If the kernel of an canonical set is given then we construct the propagated and spontaneously generated lookahead symbols for items of by the following algorithm.

For all items we construct the set , where is a dummy symbol,

  • if and then and the symbol is spontaneously generated into the item of the set ,

  • if then , and the symbol is propagated from into the item of the set .

The kernel of the canonical set has only one element. The core of this element is . For this item we can give the lookahead symbol directly. Since the cores of the kernels of all canonical sets are given, using the above method we can calculate all of the propagated and spontaneously generated symbols.

Give the algorithm which constructs canonical sets from canonical sets using the methods of propagation and spontaneous generation.

 CHAPTER NOTES 

The theory and practice of compilers, computers and programming languages are of the same age. The construction of the first compilers dates back to the 1950's. Writing compilers was a very hard task at that time; the first Fortran compiler took 18 man-years to implement [6]. Since then, more and more precise definitions and solutions have been given to the problems of compilation, and better and better methods and utilities have been used in the construction of translators.

The development of formal languages and automata was a great leap forward, and we can say that this development was driven by the demand for writing compilers. Nowadays this task is a simple routine project. New results and new discoveries are expected in the field of code optimisation only.

One of the earliest nondeterministic and backtrack algorithms appeared in the 1960's. The first two dynamic programming algorithms were the CYK (Cocke-Younger-Kasami) algorithm from 1965–67 and the Earley algorithm from 1965. The idea of precedence parsers is from the end of the 1970's and the beginning of the 1980's. The grammars were defined by Knuth in 1965; the definition of grammars dates from the beginning of the 1970's. grammars were studied by De Remer in 1971, and the elaboration of parsing methods was finished at the beginning of the 1980's [7][5][6].

By the middle of the 1980's it became obvious that the parsing methods are the really efficient methods, and since then the methods have been used in compilers [7].

A lot of excellent books deal with the theory and practice of compilers. Perhaps the most successful of them was the book of Gries [100]; in this book there are interesting results for precedence grammars. The first successful book that wrote about the new algorithms was that of Aho and Ullman [5]; we can also find the CYK and Earley algorithms here. It was followed by the “dragon book” of Aho and Ullman [6]; its extended and corrected edition was published in 1986 by Aho, Sethi and Ullman [7].

Without aiming at completeness, we mention the books of Fischer and LeBlanc [74], Tremblay and Sorenson [258], Waite and Goos [267], Hunter [121], Pittman [206] and Mak [171]. Advanced achievements can be found in recently published books, among others in the book of Muchnick [189], that of Grune, Bal, Jacobs and Langendoen [102], the book of Cooper and Torczon [52], and in a chapter of the book by Louden [169].

Chapter 3. Compression and Decompression

Algorithms for data compression usually proceed as follows. They encode a text over some finite alphabet into a sequence of bits, hereby exploiting the fact that the letters of this alphabet occur with different frequencies. For instance, an “e” occurs more frequently than a “q” and will therefore be assigned a shorter codeword. The quality of the compression procedure is then measured in terms of the average codeword length.

So the underlying model is probabilistic, namely we consider a finite alphabet and a probability distribution on this alphabet, where the probability distribution reflects the (relative) frequencies of the letters. Such a pair—an alphabet with a probability distribution—is called a source. We shall first introduce some basic facts from Information Theory. Most important is the notion of entropy, since the source entropy characterises the achievable lower bounds for compressibility.

The source model best understood is the discrete memoryless source. Here the letters occur independently of each other in the text. The use of prefix codes, in which no codeword is the beginning of another one, allows one to compress the text down to the entropy of the source. We shall study this in detail. The lower bound is obtained via Kraft's inequality; the achievability is demonstrated by the use of Huffman codes, which can be shown to be optimal.

There are some assumptions on the discrete memoryless source, which are not fulfilled in most practical situations. Firstly, usually this source model is not realistic, since the letters do not occur independently in the text. Secondly, the probability distribution is not known in advance. So the coding algorithms should be universal for a whole class of probability distributions on the alphabet. The analysis of such universal coding techniques is much more involved than the analysis of the discrete memoryless source, such that we shall only present the algorithms and do not prove the quality of their performance. Universal coding techniques mainly fall into two classes.

Statistical coding techniques estimate the probability of the next letters as accurately as possible. This process is called modelling of the source. Having enough information about the probabilities, the text is encoded, where usually arithmetic coding is applied. Here the probability is represented by an interval and this interval will be encoded.

Dictionary-based algorithms store patterns, which occurred before in the text, in a dictionary and at the next occurrence of a pattern this is encoded via its position in the dictionary. The most prominent procedure of this kind is due to Ziv and Lempel.

We shall also present a third universal coding technique which falls in neither of these two classes. The algorithm due to Burrows and Wheeler has become quite prominent in recent years, since implementations based on it perform very well in practice.

All algorithms mentioned so far are lossless, i. e., there is no information lost after decoding. So the original text will be recovered without any errors. In contrast, there are lossy data compression techniques, where the text obtained after decoding does not completely coincide with the original text. Lossy compression algorithms are used in applications like image, sound, video, or speech compression. The loss should, of course, only marginally affect the quality. For instance, frequencies not realizable by the human eye or ear can be dropped. However, the understanding of such techniques requires a solid background in image, sound or speech processing, which would be far beyond the scope of this chapter, so we shall illustrate only the basic concepts behind image compression algorithms such as JPEG.

We emphasise here the recent developments such as the Burrows-Wheeler transform and the context–tree weighting method. Rigorous proofs will only be presented for the results on the discrete memoryless source which is best understood but not a very realistic source model in practice. However, it is also the basis for more complicated source models, where the calculations involve conditional probabilities. The asymptotic computational complexity of compression algorithms is often linear in the text length, since the algorithms simply parse through the text. However, the running time relevant for practical implementations is mostly determined by the constants as dictionary size in Ziv-Lempel coding or depth of the context tree, when arithmetic coding is applied. Further, an exact analysis or comparison of compression algorithms often heavily depends on the structure of the source or the type of file to be compressed, such that usually the performance of compression algorithms is tested on benchmark files. The most well-known collections of benchmark files are the Calgary Corpus and the Canterbury Corpus.

3.1 Facts from information theory

3.1.1 The Discrete Memoryless Source

The source model discussed throughout this chapter is the Discrete Memoryless Source (DMS). Such a source is a pair , where , is a finite alphabet and is a probability distribution on . A discrete memoryless source can also be described by a random variable , where for all . A word is the realization of the random variable , where the are identically distributed and independent of each other. So the probability is the product of the probabilities of the single letters.

Estimations for the letter probabilities in natural languages are obtained by statistical methods. If we consider the English language and choose for the Latin alphabet with an additional symbol for space and punctuation marks, the probability distribution can be derived from the frequency table in Figure 3.1, which is obtained from the copy-fitting tables used by professional printers. So , etc.

Figure 3.1.  Frequency of letters in 1000 characters of English.


Observe that this source model is often not realistic. For instance, in English texts the combination `th' occurs more often than `ht'. This could not be the case if an English text were produced by a discrete memoryless source, since then .

In the discussion of the communication model it was pointed out that the encoder wants to compress the original data into a short sequence of binary digits, hereby using a binary code, i. e., a function . To each element a codeword is assigned. The aim of the encoder is to minimise the average length of the codewords. It turns out that the best possible data compression can be described in terms of the entropy of the probability distribution . The entropy is given by the formula

where the logarithm is to the base 2. We shall also use the notation according to the interpretation of the source as a random variable.
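
As a small illustration, the entropy of a source given as a dictionary of letter probabilities can be computed directly from this formula; the dictionary representation is just a convenient assumption here.

       from math import log2

       def entropy(p):
           """H(P) = - sum of p(x) * log2 p(x), with the convention 0 * log 0 = 0."""
           return -sum(px * log2(px) for px in p.values() if px > 0)

       # e.g. a source with two letters of probabilities 0.9 and 0.1:
       # entropy({"a": 0.9, "b": 0.1})  ->  about 0.469 bits per letter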

3.1.2 Prefix codes

A code (of variable length) is a function , . Here is the set of codewords, where for the codeword is where denotes the length of , i. e., the number of bits used to represent .

A code is uniquely decipherable (UDC), if every word in is representable by at most one sequence of codewords.

A code is a prefix code, if no codeword is prefix of another one, i. e., for any two codewords and , , with holds . So in at least one of the first components and differ.

Messages encoded using a prefix code are uniquely decipherable. The decoder proceeds by reading the next letter until a codeword is formed. Since cannot be the beginning of another codeword, it must correspond to the letter . Now the decoder continues until another codeword is formed. The process may be repeated until the end of the message. So after having found the codeword the decoder instantaneously knows that is the next letter of the message. Because of this property a prefix code is also denoted as instantaneous code.
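
This instantaneous decoding procedure is easy to state as a program. The sketch below uses a hypothetical prefix code given as a dictionary from letters to bit strings; since no codeword is a prefix of another, the first complete codeword that appears determines the next letter.

       def decode_prefix(bits, code):
           """Decode a bit string with a prefix code (instantaneous decoding)."""
           inverse = {w: x for x, w in code.items()}
           message, current = [], ""
           for b in bits:
               current += b
               if current in inverse:        # a codeword is formed: emit its letter
                   message.append(inverse[current])
                   current = ""
           if current:
               raise ValueError("trailing bits do not form a codeword")
           return "".join(message)

       # e.g. with the (hypothetical) prefix code {"a": "0", "b": "10", "c": "11"}:
       # decode_prefix("100110", {"a": "0", "b": "10", "c": "11"})  ->  "baca"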

The criterion for data compression is to minimise the average length of the codewords. So if we are given a source , where and is a probability distribution on , the average length is defined by

The following prefix code for English texts has average length .

We can still do better, if we do not encode single letters, but blocks of letters for some . In this case we replace the source by for some . Remember that for a word , since the source is memoryless. If e.g. we are given an alphabet with two letters, and , , then the code defined by , has average length . Obviously we cannot find a better code. The combinations of two letters now have the following probabilities:

The prefix code defined by

has average length . So could be interpreted as the average length the code requires per letter of the alphabet . When we encode blocks of letters we are interested in the behaviour of

It follows from the Noiseless Coding Theorem, which is stated in the next section, that the entropy of the source .

In our example for the English language we have . So the code presented above, where only single letters are encoded, is already nearly optimal in respect of . Further compression is possible, if we consider the dependencies between the letters.

3.1.3 Kraft's inequality and noiseless coding theorem

We shall now introduce a necessary and sufficient condition for the existence of a prefix code with prescribed word lengths .

Theorem 3.1 (Kraft's inequality) Let . A uniquely decipherable code with word lengths exists, if and only if

Proof. The central idea is to interpret the codewords as nodes of a rooted binary tree with depth . The tree is required to be complete (every path from the root to a leaf has length ) and regular (every inner node has outdegree 2). The example in Figure 3.2 for may serve as an illustration.

Figure 3.2.  Example of a code tree.



So the nodes with distance from the root are labeled with the words . The upper successor of is labeled , its lower successor is labeled .

The shadow of a node labeled by is the set of all the leaves which are labeled by a word (of length ) beginning with . In other words, the shadow of consists of the leaves labeled by a sequence with prefix . In our example is the shadow of the node labeled by .

Now suppose we are given positive integers . We further assume that . As the first codeword, is chosen. Since , we have (otherwise only one letter would have to be encoded). Hence some nodes are left on the level which are not in the shadow of . We pick the first of these remaining nodes and go back steps towards the root. Since , we shall find a node labelled by a sequence of bits which is not a prefix of . So we can choose this sequence as . Now again, either , and we are done, or by the hypothesis , and we can find a node on the level which is not contained in the shadows of and . We find the next codeword as shown above. The process can be continued until all codewords are assigned.

Conversely, observe that $\sum_{i=1}^{a} 2^{-L_i} = \sum_{j=1}^{T} w_j \, 2^{-j}$, where $w_j$ is the number of codewords with length $j$ in the uniquely decipherable code and $T$ again denotes the maximal word length.

The $s$-th power of this term can be expanded as

$\Bigl( \sum_{j=1}^{T} w_j \, 2^{-j} \Bigr)^{s} = \sum_{k=s}^{sT} N_k \, 2^{-k} .$

Here $N_k = \sum_{i_1 + \dots + i_s = k} w_{i_1} \cdots w_{i_s}$ is the total number of messages of $s$ source letters whose coded representation is of length $k$.

Since the code is uniquely decipherable, to every sequence of $k$ code letters corresponds at most one possible message. Hence $N_k \le 2^k$ and $\sum_{k=s}^{sT} N_k \, 2^{-k} \le sT$. Taking the $s$-th root this yields $\sum_{j=1}^{T} w_j \, 2^{-j} \le (sT)^{1/s}$.

Since this inequality holds for any $s$ and $\lim_{s \to \infty} (sT)^{1/s} = 1$, we have the desired result

$\sum_{i=1}^{a} 2^{-L_i} = \sum_{j=1}^{T} w_j \, 2^{-j} \le 1 .$
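The constructive direction of the proof translates directly into a short program. The following sketch (our own illustration, not the book's pseudocode) checks Kraft's inequality and, if it holds, assigns to each length the lexicographically smallest codeword that does not have a previously chosen codeword as a prefix; codewords are returned in order of nondecreasing length.

  def kraft_sum(lengths):
      return sum(2 ** -l for l in lengths)

  def prefix_code_from_lengths(lengths):
      """Construct a binary prefix code with the given word lengths."""
      if kraft_sum(lengths) > 1:
          raise ValueError("Kraft's inequality is violated")
      code, next_value = [], {}
      for l in sorted(lengths):
          value = next_value.get(l, 0)            # smallest still admissible codeword of length l
          code.append(format(value, "b").zfill(l))
          for m in set(lengths):                  # block the shadow of the chosen codeword
              if m >= l:
                  next_value[m] = max(next_value.get(m, 0), (value + 1) << (m - l))
      return code

  print(prefix_code_from_lengths([1, 2, 3, 3]))   # ['0', '10', '110', '111']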

Theorem 3.2 (Noiseless Coding Theorem) For a source $(\mathcal{X}, P)$ it is always possible to find a uniquely decipherable code $c: \mathcal{X} \to \{0,1\}^*$ with average length

$H(P) \le \overline{L}(c) < H(P) + 1,$

where $H(P) = -\sum_{x \in \mathcal{X}} P(x) \log P(x)$ denotes the entropy of the source.

Proof. Let $L_1, \dots, L_a$ denote the codeword lengths of an optimal uniquely decipherable code. Now we define a probability distribution $Q$ by $Q(x_i) = \frac{2^{-L_i}}{r}$ for $i = 1, \dots, a$, where $r = \sum_{i=1}^{a} 2^{-L_i}$. By Kraft's inequality $r \le 1$.

For two probability distributions $P$ and $Q$ on $\mathcal{X}$ the I-divergence $D(P\|Q)$ is defined by

$D(P\|Q) = \sum_{x \in \mathcal{X}} P(x) \log \frac{P(x)}{Q(x)} .$

I-divergence is a good measure for the distance of two probability distributions. In particular, the I-divergence $D(P\|Q) \ge 0$ always holds. So for any probability distribution $P$

$D(P\|Q) = -H(P) - \sum_{x \in \mathcal{X}} P(x) \log Q(x) \ge 0 .$

From this it follows that

$H(P) \le -\sum_{i=1}^{a} P(x_i) \log Q(x_i) = -\sum_{i=1}^{a} P(x_i) \log \frac{2^{-L_i}}{r} = \sum_{i=1}^{a} P(x_i) L_i + \log r .$

Since $r \le 1$, $\log r \le 0$ and hence $H(P) \le \sum_{i=1}^{a} P(x_i) L_i = \overline{L}(c)$.

In order to prove the right-hand side of the Noiseless Coding Theorem we define the codeword lengths $L_i = \lceil -\log P(x_i) \rceil$, $i = 1, \dots, a$. Observe that $-\log P(x_i) \le L_i < -\log P(x_i) + 1$ and hence $2^{-L_i} \le P(x_i)$.

So $\sum_{i=1}^{a} 2^{-L_i} \le \sum_{i=1}^{a} P(x_i) = 1$ and from Kraft's inequality we know that there exists a uniquely decipherable code with word lengths $L_1, \dots, L_a$. This code has average length

$\sum_{i=1}^{a} P(x_i) \lceil -\log P(x_i) \rceil < \sum_{i=1}^{a} P(x_i) \bigl( -\log P(x_i) + 1 \bigr) = H(P) + 1 .$

3.1.4 Shannon-Fano-Elias-codes

In the proof of the Noiseless Coding Theorem it was explicitly shown how to construct a prefix code $c$ for a given probability distribution $P$. The idea was to assign to each $x$ a codeword of length $\lceil -\log P(x) \rceil$ by choosing an appropriate vertex in the tree introduced. However, this procedure does not always yield an optimal code: for some distributions a prefix code with strictly smaller average length exists.

Shannon gave an explicit procedure for obtaining codes with codeword lengths $\lceil -\log P(x) \rceil$ using the binary representation of cumulative probabilities (Shannon remarked this procedure was originally due to Fano). The elements of the source are ordered according to increasing probabilities. Then the codeword for $x_j$ consists of the first $\lceil -\log P(x_j) \rceil$ bits of the binary expansion of the sum $\sum_{i < j} P(x_i)$. This procedure was further developed by Elias. The elements of the source now may occur in any order. The Shannon-Fano-Elias-code has as codeword for $x_j$ the first $\lceil -\log P(x_j) \rceil + 1$ bits of the binary expansion of the sum $\sum_{i < j} P(x_i) + \frac{1}{2} P(x_j)$.
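Both constructions amount to truncating a binary expansion. The following sketch (our own illustration with made-up probabilities) computes the two codes for a list of probabilities given in the order in which the cumulative sums are formed.

  from math import ceil, log2

  def binary_expansion(x, length):
      """First `length` bits of the binary expansion of x in [0, 1)."""
      bits = []
      for _ in range(length):
          x *= 2
          bit = int(x)
          bits.append(str(bit))
          x -= bit
      return "".join(bits)

  def shannon_code(probs):
      """Codeword of x_j: first ceil(-log p_j) bits of the cumulative sum before x_j."""
      cumulative, code = 0.0, []
      for p in probs:
          code.append(binary_expansion(cumulative, ceil(-log2(p))))
          cumulative += p
      return code

  def sfe_code(probs):
      """Shannon-Fano-Elias: first ceil(-log p_j)+1 bits of the midpoint cumulative sum."""
      cumulative, code = 0.0, []
      for p in probs:
          code.append(binary_expansion(cumulative + p / 2, ceil(-log2(p)) + 1))
          cumulative += p
      return code

  probs = [0.11, 0.11, 0.11, 0.11, 0.11, 0.2, 0.25]
  print(shannon_code(probs))
  print(sfe_code(probs))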

We shall illustrate these procedures with the example in Figure 3.3.

Figure 3.3.  Example of Shannon code and Shannon-Fano-Elias-code.



A more efficient procedure is also due to Shannon and Fano. The Shannon-Fano-algorithm will be illustrated by the same example in Figure 3.4.

Figure 3.4.  Example of the Shannon-Fano-algorithm.



The messages are first written in order of nonincreasing probabilities. Then the message set is partitioned into two most equiprobable subsets $\mathcal{X}_0$ and $\mathcal{X}_1$. A 0 is assigned to each message contained in one subset and a 1 to each of the remaining messages. The same procedure is repeated for the subsets $\mathcal{X}_0$ and $\mathcal{X}_1$; that is, $\mathcal{X}_0$, for instance, will be partitioned into two subsets $\mathcal{X}_{00}$ and $\mathcal{X}_{01}$. Now the codeword corresponding to a message contained in $\mathcal{X}_{00}$ will start with 00 and that corresponding to a message in $\mathcal{X}_{01}$ will begin with 01. This procedure is continued until each subset contains only one message.

However, this algorithm does not yield an optimal code in general either, since for the same source there is a prefix code with smaller average length.

3.1.5 The Huffman coding algorithm

The Huffman coding algorithm is a recursive procedure, which we shall illustrate with the same example as for the Shannon-Fano-algorithm in Figure 3.5. The source is successively reduced by one element. In each reduction step we add up the two smallest probabilities and insert their sum into the increasingly ordered sequence of the remaining probabilities, thus obtaining a new probability distribution over one element fewer. Finally we arrive at a source with two elements ordered according to their probabilities. The first element is assigned a 0, the second element a 1. Now we again "blow up" the source until the original source is restored. In each step the codewords of the two elements that were merged are obtained by appending 0 or 1, respectively, to the codeword corresponding to their sum.

Figure 3.5.  Example of a Huffman code.

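As a minimal illustration of the reduction and blow-up steps (our own sketch with placeholder probabilities, not the book's pseudocode), the procedure can be implemented with a priority queue:

  import heapq
  from itertools import count

  def huffman_code(probabilities):
      """Build a binary Huffman code for a dict {symbol: probability}."""
      tiebreak = count()        # avoids comparing dicts when probabilities are equal
      heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probabilities.items()]
      heapq.heapify(heap)
      while len(heap) > 1:
          p0, _, code0 = heapq.heappop(heap)        # the two smallest probabilities
          p1, _, code1 = heapq.heappop(heap)
          merged = {s: "0" + w for s, w in code0.items()}     # blow up: prepend 0 and 1
          merged.update({s: "1" + w for s, w in code1.items()})
          heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
      return heap[0][2]

  probs = {"a": 0.45, "b": 0.25, "c": 0.15, "d": 0.10, "e": 0.05}
  code = huffman_code(probs)
  print(code, sum(p * len(code[s]) for s, p in probs.items()))   # code and its average length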

Correctness.

The following theorem demonstrates that the Huffman coding algorithm always yields a prefix code optimal with respect to the average codeword length.

Theorem 3.3 We are given a source $(\mathcal{X}, P)$, where $\mathcal{X} = \{x_1, \dots, x_a\}$ and the probabilities are ordered non-increasingly, $P(x_1) \ge P(x_2) \ge \dots \ge P(x_a)$. A new probability distribution $P'$ is defined by

$P'(x_i) = P(x_i) \ (i = 1, \dots, a-2), \qquad P'(x') = P(x_{a-1}) + P(x_a),$

where $x'$ replaces the two least probable letters $x_{a-1}$ and $x_a$.

Let $c'$ be an optimal prefix code for $P'$. Now we define a code $c$ for the distribution $P$ by

$c(x_i) = c'(x_i) \ (i = 1, \dots, a-2), \qquad c(x_{a-1}) = c'(x')0, \qquad c(x_a) = c'(x')1 .$

Then $c$ is an optimal prefix code for $P$ and $L_{\mathrm{opt}}(P) = L_{\mathrm{opt}}(P') + P(x_{a-1}) + P(x_a)$, where $L_{\mathrm{opt}}(P)$ denotes the average length of an optimal prefix code for probability distribution $P$.

Proof. For a probability distribution $P$ on $\mathcal{X} = \{x_1, \dots, x_a\}$ with $P(x_1) \ge P(x_2) \ge \dots \ge P(x_a)$ there exists an optimal prefix code $c$ with

i) $L(c(x_1)) \le L(c(x_2)) \le \dots \le L(c(x_a))$;

ii) $L(c(x_{a-1})) = L(c(x_a))$;

iii) $c(x_{a-1})$ and $c(x_a)$ differ exactly in the last position.

This holds, since:

i) Assume that there are letters $x_i$ and $x_j$ with $P(x_i) > P(x_j)$ and $L(c(x_i)) > L(c(x_j))$. Then the code obtained by interchanging the codewords $c(x_i)$ and $c(x_j)$ has average length not exceeding the original one, since

$P(x_i) L(c(x_j)) + P(x_j) L(c(x_i)) \le P(x_i) L(c(x_i)) + P(x_j) L(c(x_j)) .$

ii) Assume we are given a code with $L(c(x_{a-1})) < L(c(x_a))$. Because of the prefix property we may drop the last $L(c(x_a)) - L(c(x_{a-1}))$ bits of $c(x_a)$ and thus obtain a new prefix code with smaller average length.

iii) If no two codewords of maximal length agree in all places but the last, then we may drop the last digit of all such codewords to obtain a better code.

Now we are ready to prove the statement from the theorem. From the definition of $c$ and $P'$ we have

$\overline{L}(P, c) = \sum_{i=1}^{a-2} P(x_i) L(c'(x_i)) + \bigl( P(x_{a-1}) + P(x_a) \bigr) \bigl( L(c'(x')) + 1 \bigr) = \overline{L}(P', c') + P(x_{a-1}) + P(x_a) .$

Now let $c_{\mathrm{opt}}$ be an optimal prefix code for $P$ with the properties ii) and iii) from the preceding lemma. We define a prefix code $c''$ for $P'$ by $c''(x_i) = c_{\mathrm{opt}}(x_i)$ for $i = 1, \dots, a-2$, and $c''(x')$ is obtained by dropping the last bit of $c_{\mathrm{opt}}(x_{a-1})$ or $c_{\mathrm{opt}}(x_a)$.

Now

$\overline{L}(P, c_{\mathrm{opt}}) = \overline{L}(P', c'') + P(x_{a-1}) + P(x_a) \ge \overline{L}(P', c') + P(x_{a-1}) + P(x_a) = \overline{L}(P, c)$

and hence $c$ is an optimal prefix code for $P$, since $c'$ was optimal for $P'$.

Analysis.

If $a$ denotes the size of the source alphabet, the Huffman coding algorithm needs $a-1$ additions and correspondingly many code modifications (appending 0 or 1). Further we need insertions into an ordered sequence, such that the total complexity can be roughly estimated to be $\mathcal{O}(a \log a)$. However, observe that by the Noiseless Coding Theorem the quality of the compression rate can only be improved by jointly encoding blocks of, say, $k$ letters, which would result in a Huffman code for a source of size $a^k$. So, the price for better compression is a rather drastic increase in complexity. Further, the codewords for all $a^k$ blocks have to be stored. Encoding a sequence of letters can then be done blockwise by table lookup, in a number of steps linear in the length of the sequence.

Exercises

3.1-1 Show that the code with and is uniquely decipherable but not instantaneous for any .

3.1-2 Compute the entropy of the source , with and .

3.1-3 Find the Huffman-codes and the Shannon-Fano-codes for the sources with as in the previous exercise for and calculate their average codeword lengths.

3.1-4 Show that always .

3.1-5 Show that the redundancy of a prefix code for a source with probability distribution can be expressed as a special I–divergence.

3.1-6 Show that the I-divergence $D(P\|Q) \ge 0$ for all probability distributions $P$ and $Q$ over some alphabet $\mathcal{X}$, with equality exactly if $P = Q$, but that the I-divergence is not a metric.

3.2 Arithmetic coding and modelling

In statistical coding techniques such as Shannon-Fano- or Huffman-coding the probability distribution of the source is modelled as accurately as possible and then the words are encoded such that a higher probability results in a shorter codeword length.

We know that Huffman-codes are optimal with respect to the average codeword length. However, the entropy is approached by increasing the block length. On the other hand, for long blocks of source symbols, Huffman-coding is a rather complex procedure, since it requires the calculation of the probabilities of all sequences of the given block length and the construction of the corresponding complete code.

For compression techniques based on statistical methods, arithmetic coding is often preferred. Arithmetic coding is a straightforward extension of the Shannon-Fano-Elias-code. The idea is to represent a probability by an interval. In order to do so, the probabilities have to be calculated very accurately. This process is denoted as modelling of the source. So statistical compression techniques consist of two stages: modelling and coding. As just mentioned, coding is usually done by arithmetic coding. The different algorithms, like, for instance, DMC (Dynamic Markov Coding) and PPM (Prediction by Partial Matching), differ in the way they model the source. We are going to present the context-tree weighting method, a transparent algorithm for the estimation of block probabilities due to Willems, Shtarkov, and Tjalkens, which also allows a straightforward analysis of its efficiency.

3.2.1 Arithmetic coding

The idea behind arithmetic coding is to represent a message $x^n = (x_1, \dots, x_n)$ by the interval $[Q^n(x^n), Q^n(x^n) + P^n(x^n))$, where $Q^n(x^n)$ is the sum of the probabilities of those sequences which are smaller than $x^n$ in lexicographic order.

A codeword $c(x^n)$ assigned to message $x^n$ also corresponds to an interval. Namely, we identify a codeword $c$ of length $L$ with the interval $[\frac{j}{2^L}, \frac{j+1}{2^L})$, where $c$ is the binary expansion of the numerator $j$ in the fraction $\frac{j}{2^L}$. The special choice of codeword $c(x^n)$ will be obtained from $P^n(x^n)$ and $Q^n(x^n)$ as follows: the codeword length is $L(x^n) = \lceil -\log P^n(x^n) \rceil + 1$ and the codeword consists of the first $L(x^n)$ bits of the binary expansion of $Q^n(x^n) + \frac{1}{2} P^n(x^n)$, just as for the Shannon-Fano-Elias-code.

So message $x^n$ is encoded by a codeword $c(x^n)$ whose interval is inside the interval $[Q^n(x^n), Q^n(x^n) + P^n(x^n))$.

Let us illustrate arithmetic coding by the following example of a discrete memoryless source with and .

At first glance it may seem that this code is much worse than the Huffman code we found previously for the same source. On the other hand, it can be shown that arithmetic coding always achieves an average codeword length $\overline{L} < H(P^n) + 2$, which is only two bits above the lower bound in the Noiseless Coding Theorem. Huffman coding would usually yield an even better code. However, this "negligible" loss in compression rate is compensated by several advantages. The codeword is directly computed from the source sequence, which means that we do not have to store the code as in the case of Huffman coding. Further, the relevant source models allow us to easily compute $P^n(x_1 \dots x_{n-1} x_n)$ and $Q^n(x_1 \dots x_{n-1} x_n)$ from $P^{n-1}(x_1 \dots x_{n-1})$, usually by multiplication by $P(x_n)$. This means that the sequence to be encoded can be parsed sequentially bit by bit, unlike in Huffman coding, where we would have to encode blockwise.

Encoding.

The basic algorithm for encoding a sequence $x^n = (x_1, \dots, x_n)$ by arithmetic coding works as follows. We assume that conditional probabilities $P(x_i \mid x_1, \dots, x_{i-1})$ are available (in the case $P(x_i \mid x_1, \dots, x_{i-1}) = P(x_i)$ for all $i$ the discrete memoryless source arises, but in the section on modelling more complicated formulae come into play) and hence

$P^n(x_1, \dots, x_n) = \prod_{i=1}^{n} P(x_i \mid x_1, \dots, x_{i-1}) .$

Starting with the interval $[B_0, B_0 + A_0) = [0, 1)$, the first $i$ letters of the text to be compressed determine the current interval $[B_i, B_i + A_i)$. These current intervals are successively refined via the recursions

$B_{i+1} = B_i + A_i \, Q(x_{i+1} \mid x_1, \dots, x_i), \qquad A_{i+1} = A_i \, P(x_{i+1} \mid x_1, \dots, x_i),$

where $Q(x \mid x_1, \dots, x_i) = \sum_{y < x} P(y \mid x_1, \dots, x_i)$.

The quantity $A_i$ is usually denoted as the augend. The final interval $[B_n, B_n + A_n)$ will then be encoded by a codeword interval as described above. So the algorithm looks as follows.

Arithmetic-Encoder()

  1   
  2   
  3  FOR  TO  
  4    DO   
  5        
  6   
  7   
  8  RETURN  
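A hedged Python sketch of this recursion for a memoryless source follows (our own illustration; the codeword-selection rule mirrors the Shannon-Fano-Elias-style choice described above, and the floating-point arithmetic restricts it to short sequences):

  from math import ceil, log2

  def arithmetic_encode(sequence, probs):
      """probs: dict letter -> probability of a memoryless source (placeholder model)."""
      letters = sorted(probs)                     # fixes the lexicographic order
      B, A = 0.0, 1.0                             # current interval [B, B + A)
      for x in sequence:
          Q = sum(probs[y] for y in letters if y < x)   # cumulative probability of smaller letters
          B, A = B + A * Q, A * probs[x]                # interval refinement
      L = ceil(-log2(A)) + 1                      # codeword length
      value, bits = B + A / 2, []                 # a point safely inside the final interval
      for _ in range(L):                          # first L bits of its binary expansion
          value *= 2
          bit = int(value)
          bits.append(str(bit))
          value -= bit
      return "".join(bits)

  print(arithmetic_encode("abbaab", {"a": 0.8, "b": 0.2}))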

We shall illustrate the encoding procedure by the following example from the literature. Let the discrete, memoryless source be given with ternary alphabet and , , . The sequence has to be encoded. Observe that and for all . Further , , and .

The above algorithm yields

Hence and . From this can be calculated that and finally whose binary representation is codeword .

Decoding.

Decoding is very similar to encoding. The decoder recursively "undoes" the encoder's recursion. We divide the current interval into subintervals with bounds defined by the cumulative probabilities. Then we find the subinterval in which the codeword (read as a binary fraction) lies; this subinterval determines the next symbol. Then we subtract the lower bound of this subinterval and rescale by multiplication by the reciprocal of its length.

Arithmetic-Decoder()

  1  FOR TO  
  2    DO  
  3       WHILE  
  4          DO  
  5              
  6              
  7  RETURN  

Observe that when the decoder only receives the codeword he does not know when the decoding procedure terminates; for instance, the same codeword may also arise as the encoding of a shorter or longer sequence. In the above pseudocode it is implicit that the number $n$ of symbols has also been transmitted to the decoder, in which case it is clear what the last letter to be encoded was. Another possibility would be to provide a special end-of-file (EOF) symbol with a small probability, which is known to both the encoder and the decoder. When the decoder sees this symbol, he stops decoding. In this case line 1 would be replaced by

  1  WHILE (the decoded symbol is not EOF)

(and would have to be increased). In our above example, the decoder would receive the codeword , the binary expansion of up to bits. This number falls in the interval which belongs to the letter , hence the first letter . Then he calculates . Again this number is in the interval and the second letter is . In order to determine the calculation must be performed. Again such that also . Finally . Since , the last letter of the sequence must be .

Correctness.

Recall that message $x^n$ is encoded by a codeword $c(x^n)$ whose interval is inside the interval $[Q^n(x^n), Q^n(x^n) + P^n(x^n))$. This follows from the choice of the codeword length $L(x^n) = \lceil -\log P^n(x^n) \rceil + 1$, which guarantees $2^{-L(x^n)} \le \frac{1}{2} P^n(x^n)$.

Obviously a prefix code is obtained, since a codeword can only be a prefix of another one if their corresponding intervals overlap, and the intervals are obviously disjoint for different messages $x^n$.

Further, we mentioned already that arithmetic coding compresses down to the entropy up to two bits. This is because for every sequence $x^n$ the codeword length is $L(x^n) = \lceil -\log P^n(x^n) \rceil + 1 < -\log P^n(x^n) + 2$. It can also be shown that the additional transmission of the block length $n$ or the introduction of the EOF symbol only results in a negligible loss of compression.

However, the basic algorithms we presented are not useful in order to compress longer files, since with increasing block length the intervals are getting smaller and smaller, such that rounding errors will be unavoidable. We shall present a technique to overcome this problem in the following.

Analysis.

The basic algorithm for arithmetic coding is linear in the length of the sequence to be encoded. Usually, arithmetic coding is compared to Huffman coding. In contrast to Huffman coding, we do not have to store the whole code, but can obtain the codeword directly from the corresponding interval. However, for a discrete memoryless source, where the probability distribution is the same for all letters, this is not such a big advantage, since the Huffman code will be the same for all letters (or blocks of letters) and hence has to be computed only once. Huffman coding, on the other hand, does not use any multiplications which slow down arithmetic coding.

For the adaptive case, in which the probabilities may change for different letters to be encoded, a new Huffman code would have to be calculated for each new letter. In this case arithmetic coding is usually preferred. We shall investigate such situations in the section on modelling. For implementations in practice floating point arithmetic is avoided. Instead, the subdivision of the interval $[0, 1)$ is represented by a subdivision of an integer range, say $\{0, \dots, M\}$, with proportions according to the source probabilities. Now integer arithmetic can be applied, which is faster and more precise.

Precision problem.

In the basic algorithms for arithmetic encoding and decoding the shrinking of the current interval would require the use of high precision arithmetic for longer sequences. Further, no bit of the codeword is produced until the complete sequence has been read in. This can be overcome by coding each bit as soon as it is known and then doubling the length of the current interval $[LO, HI)$, say, so that this expansion represents only the unknown part of the interval. This is the case when the leading bits of the lower and upper bound are the same, i.e. the interval is completely contained either in $[0, \frac12)$ or in $[\frac12, 1)$. The following expansion rules guarantee that the current interval does not become too small.

Case 1 ($HI \le \frac12$): $LO \leftarrow 2 \cdot LO$, $HI \leftarrow 2 \cdot HI$.

Case 2 ($LO \ge \frac12$): $LO \leftarrow 2 \cdot (LO - \frac12)$, $HI \leftarrow 2 \cdot (HI - \frac12)$.

Case 3 ($\frac14 \le LO < \frac12 < HI \le \frac34$): $LO \leftarrow 2 \cdot (LO - \frac14)$, $HI \leftarrow 2 \cdot (HI - \frac14)$.

The last case, called underflow (or follow), prevents the interval from shrinking too much when the bounds are close to $\frac12$. Observe that if the current interval is contained in $[\frac14, \frac34)$ with $LO < \frac12 \le HI$, we do not know the next output bit, but we do know that whatever it is, the following bit will have the opposite value. However, in contrast to the other cases we cannot continue encoding here, but have to wait (remain in the underflow state and increase a counter counting the number of subsequent underflows) until the current interval falls into either $[0, \frac12)$ or $[\frac12, 1)$. In this case we encode the leading bit of this interval, 0 for $[0, \frac12)$ and 1 for $[\frac12, 1)$, followed by as many inverse bits as underflows were counted, and then reset the counter to 0. The procedure stops when all letters are read in and the current interval does not allow any further expansion.

Arithmetic-Precision-Encoder()

  1   
  2   
  3   
  4   
  5  FOR  TO  
  6    DO  
  7        
  8        
  9       WHILE  AND NOT ( AND ) 
 10          DO IF  
 11             THEN  many s 
 12                  
 13                  
 14                  
 15             ELSE IF  
 16                THEN ,  many 0s 
 17                     
 18                     
 19                    
 20                ELSE IF  AND  
 21                THEN  
 22                     
 23                 
 24  IF  
 25    THEN ,  many 1s) 
 26  RETURN  
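The expansion rules can be sketched in a few lines of Python (our own illustration; the case boundaries and the final flushing step follow one common textbook convention and may differ in detail from the pseudocode above):

  def precision_encode(sequence, probs):
      letters = sorted(probs)
      LO, HI, underflow, out = 0.0, 1.0, 0, []

      def emit(bit):
          nonlocal underflow
          out.append(str(bit))
          out.extend(str(1 - bit) * underflow)    # pending inverse bits after underflows
          underflow = 0

      for x in sequence:
          span = HI - LO
          Q = sum(probs[y] for y in letters if y < x)
          LO, HI = LO + span * Q, LO + span * (Q + probs[x])
          while True:
              if HI <= 0.5:                               # Case 1: interval in [0, 1/2)
                  emit(0); LO, HI = 2 * LO, 2 * HI
              elif LO >= 0.5:                             # Case 2: interval in [1/2, 1)
                  emit(1); LO, HI = 2 * (LO - 0.5), 2 * (HI - 0.5)
              elif LO >= 0.25 and HI <= 0.75:             # Case 3: underflow
                  underflow += 1; LO, HI = 2 * (LO - 0.25), 2 * (HI - 0.25)
              else:
                  break
      underflow += 1                                      # flush: one final disambiguating bit
      emit(0 if LO < 0.25 else 1)
      return "".join(out)

  print(precision_encode("abbaab", {"a": 0.8, "b": 0.2}))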

We shall illustrate the encoding algorithm in Figure 3.6 by our example from above. An underflow occurs in the sixth row: we keep track of the underflow state and later encode the inverse of the next bit; here this inverse bit is the one in the ninth row.

Figure 3.6.  Example of arithmetic encoding with interval expansion.



Precision-decoding involves the consideration of a third variable besides the interval bounds LO and HI.

3.2.2 Modelling

Modelling of memoryless sources.

In this section we shall only consider binary sequences $x^n \in \{0,1\}^n$ to be compressed by an arithmetic coder. Further, we write shortly $P(x^n)$ instead of $P^n(x^n)$ in order to allow further subscripts and superscripts for the description of the special situation. $P_e$ will denote estimated probabilities, $P_w$ weighted probabilities, and $P^s$ probabilities assigned to a special context $s$.

The application of arithmetic coding is quite appropriate if the probability distribution of the source is such that $P(x_1 x_2 \dots x_{n-1} x_n)$ can easily be calculated from $P(x_1 x_2 \dots x_{n-1})$. Obviously this is the case when the source is discrete and memoryless, since then $P(x_1 x_2 \dots x_{n-1} x_n) = P(x_1 x_2 \dots x_{n-1}) \cdot P(x_n)$.

Even when the underlying parameter $\theta = P(X_i = 1)$ of a binary, discrete memoryless source is not known, there is an efficient way due to Krichevsky and Trofimov to estimate the probabilities via

$P(X_{n+1} = 1 \mid x^n) = \frac{b + \frac12}{a + b + 1},$

where $a$ and $b$ denote the number of 0s and 1s, respectively, in the sequence $x^n = (x_1, \dots, x_n)$. So given the sequence $x^n$ with $a$ many 0s and $b$ many 1s, the probability that the next letter will be a 1 is estimated as $\frac{b + 1/2}{a + b + 1}$. The estimated block probability of a sequence containing $a$ zeros and $b$ ones then is

$P_e(a, b) = \frac{\frac12 \cdot \frac32 \cdots (a - \frac12) \cdot \frac12 \cdot \frac32 \cdots (b - \frac12)}{1 \cdot 2 \cdots (a + b)}$

with initial value $P_e(0, 0) = 1$, as in Figure 3.7, where the values of the Krichevsky-Trofimov-estimator $P_e(a, b)$ for small $a$ and $b$ are listed.

Figure 3.7.  Table of the first values for the Krichevsky-Trofimov-estimator.



Note that the summand $\frac12$ in the numerator guarantees that the probability for the next letter to be a 1 is positive even when the symbol 1 did not occur in the sequence so far. In order to avoid infinite codeword length, this phenomenon has to be carefully taken into account in all approaches to estimating the parameters when arithmetic coding is applied.
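A small sketch of this estimator, using exact fractions (our own illustration):

  from fractions import Fraction

  def kt_next(a, b):
      """Estimated probability that the next letter is a 1, after a zeros and b ones."""
      return Fraction(2 * b + 1, 2 * (a + b + 1))

  def kt_block(sequence):
      """Estimated block probability P_e(a, b), built up letter by letter."""
      a = b = 0
      prob = Fraction(1)                  # initial value P_e(0, 0) = 1
      for bit in sequence:
          p1 = kt_next(a, b)
          prob *= p1 if bit == 1 else 1 - p1
          if bit == 1:
              b += 1
          else:
              a += 1
      return prob

  print(kt_block([0, 1, 0, 0, 1, 1, 1]))   # P_e(3, 4) for this particular sequence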

Models with known context tree.

In most situations the source is not memoryless, i.e., the dependencies between the letters have to be considered. A suitable way to represent such dependencies is the use of a suffix tree, which we denote as context tree. The context of symbol $x_t$ is the suffix $s$ preceding $x_t$. To each context (or leaf in the suffix tree) $s$ there corresponds a parameter $\theta_s = P(X_t = 1 \mid s)$, which is the probability of the occurrence of a 1 when the last sequence of past source symbols is equal to context $s$ (and hence $1 - \theta_s$ is the probability of a 0 in this case). We are distinguishing here between the model (the suffix tree) and the parameters ($\theta_s$).

Example 3.1 Let and , , and . The corresponding suffix tree jointly with the parsing process for a special sequence can be seen in Figure 3.8.

Figure 3.8.  An example for a tree source.



The actual probability of the sequence ' ' given the past ' ' is , since the first letter is preceded by suffix , the second letter is preceded by suffix , etc.

Suppose the model $\mathcal{S}$ is known, but not the parameters $\theta_s$. The problem now is to find a good coding distribution for this case. The tree structure makes it easy to determine which context precedes a particular symbol. All symbols having the same context (or suffix) $s$ form a memoryless source subsequence whose probability is determined by the unknown parameter $\theta_s$. In our example these are the three subsequences determined by the contexts of the three leaves. One uses the Krichevsky-Trofimov-estimator for this case. For each node $s$ in the suffix tree, we count the number $a_s$ of zeros and the number $b_s$ of ones preceded by suffix $s$. For the children $0s$ and $1s$ of parent node $s$ obviously $a_{0s} + a_{1s} = a_s$ and $b_{0s} + b_{1s} = b_s$ must be satisfied.

In our example for the root , and , . Further , , , , , , , , , and . These last numbers are not relevant for our special source but will be important later on, when the source model or the corresponding suffix tree, respectively, is not known in advance.

Example 3.2 Let the model be as in the previous example. Encoding a subsequence is done by successively updating the corresponding counters $a_s$ and $b_s$. For example, when we encode a sequence given some past using the above suffix tree and the Krichevsky-Trofimov-estimator we obtain

where , and are the probabilities of the subsequences ' ', ' ' and ' ' in the context of the leaves. These subsequences are assumed to be memoryless.

The context-tree weighting method.

Suppose we have a good coding distribution $P_1$ for source 1 and another one, $P_2$, for source 2. We are looking for a good coding distribution for both sources. One possibility is to compute both $P_1(x^n)$ and $P_2(x^n)$; then 1 bit is needed to identify the best model, which will then be used to compress the sequence. This method is called selecting. Another possibility is to employ the weighted distribution

$P_w(x^n) = \frac{P_1(x^n) + P_2(x^n)}{2} .$

We shall now present the context-tree weighting algorithm. Under the assumption that the context tree is a full binary tree of depth $D$, only $a_s$ and $b_s$, i.e. the numbers of zeros and ones in the subsequence of bits preceded by context $s$, are stored in each node $s$ of the context tree.

Further, to each node $s$ is assigned a weighted probability $P_w^s$ which is recursively defined as

$P_w^s = \begin{cases} \frac{P_e(a_s, b_s) + P_w^{0s} \cdot P_w^{1s}}{2}, & L(s) < D, \\ P_e(a_s, b_s), & L(s) = D, \end{cases}$

where $L(s)$ describes the length of the (binary) string $s$ and $P_e(a_s, b_s)$ is the estimated probability using the Krichevsky-Trofimov-estimator.

Example 3.3 After encoding the sequence '0100111' given the past '...010' we obtain the context tree of depth 3 in Figure 3.9. The weighted probability $P_w^{\lambda}$ of the root node $\lambda$ finally yields the coding probability corresponding to the parsed sequence.

Figure 3.9.  Weighted context tree for source sequence '0100111' with past ...010. The pair $(a_s, b_s)$ denotes $a_s$ zeros and $b_s$ ones preceded by the corresponding context $s$. For the contexts $s = 111, 101, 110, 000$ it is $P_w^s = P_e(0,0) = 1$.


Recall that for the application in arithmetic coding it is important that the probabilities $P(x^t 0)$ and $P(x^t 1)$ can be efficiently calculated from $P(x^t)$. This is possible with the context-tree weighting method, since only those weighted probabilities have to be updated whose counters change. This just occurs for the contexts along the path from the root to the leaf in the context tree preceding the new symbol $x_{t+1}$, namely the contexts $x_t$, $x_{t-1}x_t$, ..., $x_{t-D+1} \dots x_t$ and the root $\lambda$. Along this path, $a_s \leftarrow a_s + 1$ has to be performed when $x_{t+1} = 0$, and $b_s \leftarrow b_s + 1$ has to be performed when $x_{t+1} = 1$, and the corresponding probabilities $P_e(a_s, b_s)$ and $P_w^s$ have to be updated.

This suggests the following algorithm for updating the context tree when reading the next letter $x_{t+1}$. Recall that for each node $s$ of the tree we store the parameters $a_s$, $b_s$ and $P_w^s$. These parameters have to be updated in order to obtain the new weighted probability of the root. We use the convention that the empty context $\lambda$ denotes the root.

Update-Context-Tree()

  1   
  2  IF  
  3    THEN  
  4        
  5    ELSE  
  6        
  7  FOR  TO  
  8    DO  
  9       IF  
 10          THEN  
 11              
 12          ELSE  
 13              
 14     
 15  RETURN  

The probability $P_w^{\lambda}$ assigned to the root $\lambda$ in the context tree will be used for the successive subdivisions in arithmetic coding. Initially, before reading $x_1$, the parameters in the context tree are $a_s = 0$, $b_s = 0$, and $P_w^s = 1$ for all contexts $s$ in the tree. In our example the updates for the sequence '0100111' given the past '...010' yield the successive weighted probabilities of the root after each letter.
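As a rough illustration (names and structure are ours, not the text's), the weighted probability of the root can be computed by keeping the pair $(a_s, b_s)$ for every context of a full tree of depth D and evaluating the recursion for the weighted probabilities bottom-up; exact fractions avoid rounding issues.

  from fractions import Fraction

  D = 3   # depth of the full context tree

  def kt(a, b):
      """Krichevsky-Trofimov block probability P_e(a, b)."""
      p, x, y = Fraction(1), 0, 0
      while x < a:
          p *= Fraction(2 * x + 1, 2 * (x + y + 1)); x += 1
      while y < b:
          p *= Fraction(2 * y + 1, 2 * (x + y + 1)); y += 1
      return p

  def weighted(counts, s=""):
      """Weighted probability of the node with context string s."""
      a, b = counts.get(s, (0, 0))
      if len(s) == D:
          return kt(a, b)
      return (kt(a, b) + weighted(counts, "0" + s) * weighted(counts, "1" + s)) / 2

  def ctw_root_probability(sequence, past):
      """Root probability after processing `sequence`; assumes len(past) >= D."""
      counts, history = {}, list(past)
      for bit in sequence:
          for depth in range(D + 1):          # contexts of length 0..D preceding this bit
              s = "".join(history[len(history) - depth:]) if depth > 0 else ""
              a, b = counts.get(s, (0, 0))
              counts[s] = (a + 1, b) if bit == "0" else (a, b + 1)
          history.append(bit)
      return weighted(counts)

  print(ctw_root_probability("0100111", "010"))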

Correctness.

Recall that the quality of a code concerning its compression capability is measured with respect to the average codeword length. The average codeword length of the best code comes as close as possible to the entropy of the source. The difference between the average codeword length and the entropy is denoted as the redundancy $\rho(c)$ of code $c$, hence

$\rho(c) = \overline{L}(c) - H(P),$

which obviously is the weighted (by $P(x^n)$) sum of the individual redundancies

$\rho(x^n) = L(x^n) + \log P(x^n) .$

The individual redundancy $\rho(x^n \mid \mathcal{S})$ of sequences $x^n$, given the (known) source model $\mathcal{S}$, is bounded for all $x^n$ and all parameters $\theta$ by

$\rho(x^n \mid \mathcal{S}) \le \frac{|\mathcal{S}|}{2} \log \frac{n}{|\mathcal{S}|} + |\mathcal{S}| + 2 .$

The individual redundancy of sequences using the context-tree weighting algorithm (and hence a complete tree of all possible contexts as model) is bounded by

$\rho(x^n \mid \mathcal{S}) \le 2|\mathcal{S}| - 1 + \frac{|\mathcal{S}|}{2} \log \frac{n}{|\mathcal{S}|} + |\mathcal{S}| + 2 .$

Comparing these two formulae, we see that the difference of the individual redundancies is $2|\mathcal{S}| - 1$ bits. This can be considered as the cost of not knowing the model, i.e. the model redundancy. So the redundancy splits into the parameter redundancy, i.e. the cost of not knowing the parameter, and the model redundancy. It can be shown that the expected redundancy behaviour of the context-tree weighting method achieves the asymptotic lower bound due to Rissanen, who could demonstrate that about $\frac12 \log n$ bits per parameter is the minimum possible expected redundancy as $n \to \infty$.

Analysis.

The computational complexity is proportional to the number of nodes that are visited when updating the tree, which is about $D + 1$ per symbol. Therefore, the number of operations necessary for processing $n$ symbols is linear in $n$. However, these operations are mainly multiplications with factors requiring high precision.

As for most modelling algorithms, the drawback of practical implementations is the huge amount of memory required. A complete tree of depth $D$ has to be stored and updated. Only with increasing $D$ do the estimations of the probabilities become more accurate, and hence the average codeword length of an arithmetic code based on these estimations becomes shorter. The size of the memory, however, depends exponentially on the depth of the tree.

We presented the context-tree weighting method only for binary sequences. Note that in this case the cumulative probability of a binary sequence $x^n$ can be calculated as

$Q^n(x^n) = \sum_{j : x_j = 1} P(x_1 \dots x_{j-1} 0) .$

For compression of sources with larger alphabets, for instance ASCII-files, we refer to the literature.

Exercises

3.2-1 Compute the arithmetic codes for the sources , with and and compare these codes with the corresponding Huffman-codes derived previously.

3.2-2 For the codes derived in the previous exercise compute the individual redundancies of each codeword and the redundancies of the codes.

3.2-3 Compute the estimated probabilities for the sequence and all its subsequences using the Krichevsky-Trofimov-estimator.

3.2-4 Compute all parameters and the estimated probability for the sequence given the past , when the context tree is known. What will be the codeword of an arithmetic code in this case?

3.2-5 Compute all parameters and the estimated probability for the sequence given the past , when the context tree is not known, using the context-tree weighting algorithm.

3.2-6 Based on the computations from the previous exercise, update the estimated probability for the sequence given the past .

3.2-7 Show that for the cumulative probability of a binary sequence $x^n$ it is

$Q^n(x^n) = \sum_{j : x_j = 1} P(x_1 \dots x_{j-1} 0) .$

3.3 Ziv-Lempel-coding

In 1976-1978 Jacob Ziv and Abraham Lempel introduced two universal coding algorithms which, in contrast to the statistical coding techniques considered so far, do not make explicit use of the underlying probability distribution. The basic idea here is to replace a previously seen string with a pointer into a history buffer (LZ77) or with the index of a dictionary (LZ78). LZ algorithms are widely used; "zip" and its variations use the LZ77 algorithm. So, in contrast to the presentation by several authors, Ziv-Lempel-coding is not a single algorithm. Originally, Lempel and Ziv introduced a method to measure the complexity of a string, akin to Kolmogorov complexity. This led to two different algorithms, LZ77 and LZ78. Many modifications and variations have been developed since. However, we shall present the original algorithms and refer to the literature for further information.

3.3.1 LZ77

The idea of LZ77 is to pass a sliding window over the text to be compressed. One looks for the longest substring in this window representing the next letters of the text. The window consists of two parts: a history window, in which the last letters of the text considered so far are stored, and a lookahead window containing the next letters of the text. In the simplest case the sizes of both windows are fixed; usually, the history window is much bigger than the lookahead window. Then one encodes the triple (offset, length, letter). Here the offset is the number of letters one has to go back in the text to find the matching substring, the length is just the length of this matching substring, and the letter to be stored is the letter following the matching substring. Let us illustrate this procedure with an example. Assume the window is of size 15, with 10 letters of history and 5 letters of lookahead buffer, and that the sliding window has now arrived at

i. e., the history window contains the 10 letters and the lookahead window contains the five letters . The longest substring matching the first letters of the lookahead window is of length , which is found nine letters back from the right end of the history window. So we encode , since is the next letter (the string is also found five letters back, in the original LZ77 algorithm one would select the largest offset). The window then is moved letters forward

The next codeword is , since the longest matching substring is of length found letters backwards and is the letter following this substring in the lookahead window. We proceed with

and encode . Further

Here we encode . Observe that the match can extend into the lookahead window.

There are many subtleties to be taken into account. If a symbol has not yet appeared in the text, offset and length are set to 0. If there are two matching strings of the same length, one has to choose between the first and the second offset; both variations have advantages. Initially one might start with an empty history window and with the first letters of the text to be compressed in the lookahead window; there are also further variations.

A common modification of the original scheme is to output only the pair (offset, length) and not the following letter of the text. Using this coding procedure one has to take into consideration the case in which the next letter does not occur in the history window. In this case, usually the letter itself is stored, such that the decoder has to distinguish between pairs of numbers and single letters. Further variations do not necessarily encode the longest matching substring.
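A minimal LZ77 sketch (our own illustration with small fixed window sizes and a naive match search; a real implementation would use a better data structure):

  def lz77_encode(text, history_size=10, lookahead_size=5):
      """Encode text as a list of (offset, length, next_letter) triples."""
      i, triples = 0, []
      while i < len(text):
          history_start = max(0, i - history_size)
          best_offset, best_length = 0, 0
          for start in range(history_start, i):           # naive search for the longest match
              length = 0
              while (length < lookahead_size - 1 and i + length < len(text) - 1
                     and text[start + length] == text[i + length]):
                  length += 1                             # the match may extend into the lookahead window
              if length > best_length:
                  best_offset, best_length = i - start, length
          triples.append((best_offset, best_length, text[i + best_length]))
          i += best_length + 1
      return triples

  def lz77_decode(triples):
      out = []
      for offset, length, letter in triples:
          for _ in range(length):
              out.append(out[-offset])
          out.append(letter)
      return "".join(out)

  triples = lz77_encode("abracadabra")
  print(triples, lz77_decode(triples))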

3.3.2 LZ78

LZ78 does not use a sliding window but a dictionary, which is represented here as a table with an index and an entry. LZ78 parses the text to be compressed into a collection of strings, where each string is the longest matching string seen so far plus the symbol following it in the text to be compressed. The new string is added to the dictionary. The new entry is coded as $(i, z)$, where $i$ is the index of the existing table entry and $z$ is the appended symbol.

As an example, consider the string shown below, which is divided by LZ78 into strings as indicated. String no. 0 is here the empty string.

Since we are not using a sliding window, there is no limit for how far back strings can reach. However, in practice the dictionary cannot continue to grow infinitely. There are several ways to manage this problem. For instance, after having reached the maximum number of entries in the dictionary, no further entries can be added to the table and coding becomes static. Another variation would be to replace older entries. The decoder knows how many bits must be reserved for the index of the string in the dictionary, and hence decompression is straightforward.
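A compact LZ78 sketch (our own illustration; the dictionary is a Python dict mapping phrases to indices):

  def lz78_encode(text):
      """Parse text into (index, letter) pairs; index 0 refers to the empty string."""
      dictionary = {"": 0}
      pairs, phrase = [], ""
      for letter in text:
          if phrase + letter in dictionary:
              phrase += letter                            # keep extending the longest match
          else:
              pairs.append((dictionary[phrase], letter))
              dictionary[phrase + letter] = len(dictionary)
              phrase = ""
      if phrase:                                          # leftover match at the end of the text
          pairs.append((dictionary[phrase[:-1]], phrase[-1]))
      return pairs

  def lz78_decode(pairs):
      table, out = [""], []
      for index, letter in pairs:
          entry = table[index] + letter
          table.append(entry)
          out.append(entry)
      return "".join(out)

  pairs = lz78_encode("abracadabra")
  print(pairs, lz78_decode(pairs))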

Correctness.

Ziv-Lempel coding asymptotically achieves the best possible compression rate, which again is the entropy rate of the source. The source model, however, is much more general than the discrete memoryless source. The stochastic process generating the next letter is assumed to be stationary (the probability of a sequence does not depend on the instant of time, i.e. $P(X_1 = x_1, \dots, X_n = x_n) = P(X_{t+1} = x_1, \dots, X_{t+n} = x_n)$ for all $t$, $n$ and all sequences $(x_1, \dots, x_n)$). For stationary processes the limit $\lim_{n \to \infty} \frac{1}{n} H(X_1, \dots, X_n)$ exists and is defined to be the entropy rate.

If $c(n)$ denotes the number of strings in the parsing process of LZ78 for a text of length $n$ generated by a stationary source, then the number of bits required to encode all these strings is about $c(n)(\log c(n) + 1)$. It can be shown that $\frac{c(n)(\log c(n) + 1)}{n}$ converges to the entropy rate of the source. However, this would require that all strings can be stored in the dictionary.

Analysis.

If we fix the size of the sliding window or the dictionary, the running time of encoding a sequence of $n$ letters will be linear in $n$. However, as usual in data compression, there is a tradeoff between compression rate and speed. A better compression is only possible with larger memory. Increasing the size of the dictionary or the window will, however, result in slower performance, since the most time-consuming task is the search for the matching substring or the position in the dictionary.

Decoding in both LZ77 and LZ78 is straightforward. Observe that with LZ77 decoding is usually much faster than encoding, since the decoder already obtains the information at which position in the history he can read out the next letters of the text to be recovered, whereas the encoder has to find the longest matching substring in the history window. So algorithms based on LZ77 are useful for files which are compressed once and decompressed more frequently.

Further, the encoded text is not necessarily shorter than the original text. Especially in the beginning of the encoding the coded version may expand a lot. This expansion has to be taken into consideration.

For implementation it is not optimal to represent the text as an array. A suitable data structure will be a circular queue for the lookahead window and a binary search tree for the history window in LZ77, while for LZ78 a dictionary tree should be used.

Exercises

3.3-1 Apply the algorithms LZ77 and LZ78 to the string “abracadabra”.

3.3-2 Which type of files will be well compressed with LZ77 and LZ78, respectively? For which type of files are LZ77 and LZ78 not so advantageous?

3.3-3 Discuss the advantages of encoding the first or the last offset, when several matching substrings are found in LZ77.

3.4 The Burrows-Wheeler-transform

The Burrows-Wheeler-transform will best be demonstrated by an example. Assume that our original text is “WHEELER”. This text will be mapped to a second text and an index according to the following rules.

1) We form a matrix consisting of all cyclic shifts of the original text . In our example

2) From $M$ we obtain a new matrix $M'$ by simply ordering the rows of $M$ lexicographically. Here this yields the matrix

3) The transformed string $\hat{X}$ then is just the last column of the matrix $M'$ and the index $I$ is the number of the row of $M'$ in which the original text is contained. In our example $\hat{X} = $ "HELWEER" and $I = 6$; we start counting the rows with row no. 0.

This gives rise to the following pseudocode. We write here instead of and instead of , since the purpose of the vector notation is only to distinguish the vectors from the letters in the text.

BWT-Encoder()

  1  FOR  TO  
  2    DO  
  3  FOR  TO  
  4    DO FOR  TO  
  5          DO  
  6  FOR  TO  
  7    DO row  of  row  of  in lexicographic order 
  8  FOR  TO  
  9    DO  
 10    
 11  WHILE (row  of  row  of ) 
 12    DO  
 13    
 14  RETURN  and  

It can be shown that this transformation is invertible, i.e., it is possible to reconstruct the original text from its transform $\hat{X}$ and the index $I$. This is because these two parameters just yield enough information to find out the underlying permutation of the letters. Let us illustrate this reconstruction using the above example again. From the transformed string $\hat{X}$ we obtain a second string $\tilde{X}$ by simply ordering the letters in $\hat{X}$ in ascending order. Actually, $\tilde{X}$ is the first column of the matrix $M'$ above. So, in our example $\hat{X} = $ "HELWEER" and $\tilde{X} = $ "EEEHLRW".

Now obviously the first letter of our original text is the letter in position $I$ of the sorted string $\tilde{X}$, so here $\tilde{X}[6] = \mathrm{W}$. Then we look at the position of the letter just considered in the string $\hat{X}$; here there is only one W, which is letter no. 3 in $\hat{X}$. This position gives us the location of the next letter of the original text, namely $\tilde{X}[3] = \mathrm{H}$. H is found in position no. 0 in $\hat{X}$, hence the next letter is $\tilde{X}[0] = \mathrm{E}$. Now there are three E-s in the string $\hat{X}$ and we take the first one not used so far, here the one in position no. 1, and hence the next letter is $\tilde{X}[1] = \mathrm{E}$. We iterate this procedure and find $\tilde{X}[4] = \mathrm{L}$, $\tilde{X}[2] = \mathrm{E}$, $\tilde{X}[5] = \mathrm{R}$, which completes the original text "WHEELER".

This suggests the following pseudocode.

BWT-Decoder()

  1   sort  
  2   
  3  FOR  TO  
  4    DO  
  5       WHILE ( OR  is a component of ) 
  6          DO  
  7               
  8              
  9  RETURN  

This algorithm suggests a more formal description. Since the decoder only knows $\hat{X}$, he has to sort this string to find out $\tilde{X}$. To each letter of the transformed string $\hat{X}$ we record the position in $\tilde{X}$ from which the process described above jumps to it. The vector computed in the pseudocode thus yields a permutation of the row indices of the matrix $M'$. In our example the positions visited are $6, 3, 0, 1, 4, 2, 5$, yielding the letters W, H, E, E, L, E, R. This permutation can be used to reconstruct the original text of length $n$ by starting at position $I$ and reading off one letter of $\tilde{X}$ in each of the $n$ steps.
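A compact sketch of the transform and its inverse (our own illustration; constructing all cyclic shifts explicitly is fine for short strings, real implementations use suffix sorting):

  def bwt_encode(text):
      """Return the transformed string and the index of the original text."""
      rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
      transformed = "".join(row[-1] for row in rotations)    # last column of the sorted matrix
      return transformed, rotations.index(text)

  def bwt_decode(transformed, index):
      """Invert the transform by repeatedly jumping to the matching occurrence."""
      ordered = sorted(range(len(transformed)), key=lambda i: transformed[i])
      # ordered[k] is the position in `transformed` of the k-th letter of the sorted string
      out, position = [], index
      for _ in range(len(transformed)):
          out.append(transformed[ordered[position]])
          position = ordered[position]
      return "".join(out)

  code, index = bwt_encode("WHEELER")
  print(code, index, bwt_decode(code, index))     # HELWEER 6 WHEELER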

Observe that so far the original data have only been transformed and are not compressed, since the string $\hat{X}$ has exactly the same length as the original string. So what is the advantage of the Burrows-Wheeler transformation? The idea is that the transformed string can be much more efficiently encoded than the original string. The dependencies among the letters have the effect that in the transformed string there appear long blocks consisting of the same letter.

In order to exploit such frequent blocks of the same letter, Burrows and Wheeler suggested the following move-to-front-code, which we shall illustrate again with our example above.

We write down a list containing the letters used in our text in alphabetic order indexed by their position in this list.

Then we parse through the transformed string letter by letter, note the index of the next letter and move this letter to the front of the list. So in the first step we note 1—the index of the H, move H to the front and obtain the list

Then we note 1 and move E to the front,

note 2 and move L to the front,

note 4 and move W to the front,

note 2 and move E to the front,

note 0 and leave E at the front,

note 4 and move R to the front,

So we obtain the sequence $1\,1\,2\,4\,2\,0\,4$ as our move-to-front-code. The pseudocode may look as follows, where $L$ is a list of the letters occurring in the string $\hat{X}$.

Move-To-Front()

  1   list of  letters occurring in  ordered alphabetically
  2  FOR  TO  
  3    DO   
  4       WHILE () 
  5           
  6        
  7  FOR  TO  
  8    DO  
  9  RETURN  

The move-to-front-code will finally be compressed, for instance by Huffman-coding.
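A small sketch of move-to-front coding and its inverse (our own illustration):

  def move_to_front_encode(text, alphabet):
      symbols, code = sorted(alphabet), []
      for letter in text:
          index = symbols.index(letter)
          code.append(index)
          symbols.insert(0, symbols.pop(index))    # move the letter to the front of the list
      return code

  def move_to_front_decode(code, alphabet):
      symbols, text = sorted(alphabet), []
      for index in code:
          letter = symbols[index]
          text.append(letter)
          symbols.insert(0, symbols.pop(index))
      return "".join(text)

  mtf = move_to_front_encode("HELWEER", "EHLRW")
  print(mtf, move_to_front_decode(mtf, "EHLRW"))   # [1, 1, 2, 4, 2, 0, 4] HELWEER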

Correctness.

The compression is due to the move-to-front-code obtained from the transformed string . It can easily be seen that this move-to-front coding procedure is invertible, so one can recover the string from the code obtained as above.

Now it can be observed that in the move-to-front-code small numbers occur more frequently. Unfortunately, this will become obvious only with much longer texts than in our example; in long strings it was observed that even about 70 per cent of the numbers are 0. This irregularity in distribution can be exploited by compressing the sequence obtained after move-to-front-coding, for instance by Huffman-codes or run-length codes.

The algorithm performed very well in practice regarding the compression rate as well as the speed. The asymptotic optimality of compression has been proven for a wide class of sources.

Analysis.

The most complex part of the Burrows-Wheeler transform is the sorting of the block yielding the transformed string $\hat{X}$. Due to fast sorting procedures, especially suited for the type of data to be compressed, compression algorithms based on the Burrows-Wheeler transform are usually very fast. On the other hand, compression is done blockwise. The text to be compressed has to be divided into blocks of appropriate size such that the matrices $M$ and $M'$ still fit into the memory. So the decoder has to wait until the whole next block is transmitted and cannot work sequentially bit by bit as in arithmetic coding or Ziv-Lempel coding.

Exercises

3.4-1 Apply the Burrows-Wheeler-transform and the move-to-front code to the text “abracadabra”.

3.4-2 Verify that the transformed string and the index of the position in the sorted text (containing the first letter of the original text to be compressed) indeed yield enough information to reconstruct the original text.

3.4-3 Show how in our example the decoder would obtain the string "HELWEER" from the move-to-front code and the letters E, H, L, W, R occurring in the text. Describe the general procedure for decoding move-to-front codes.

3.4-4 We followed here the encoding procedure presented by Burrows and Wheeler. Can the encoder obtain the transformed string $\hat{X}$ even without constructing the two matrices $M$ and $M'$?

3.5 Image compression

The idea of image compression algorithms is similar to the one behind the Burrows-Wheeler-transform. The text to be compressed is transformed to a format which is suitable for application of the techniques presented in the previous sections, such as Huffman coding or arithmetic coding. There are several procedures based on the type of image (for instance, black/white, greyscale or colour image) or compression (lossless or lossy). We shall present the basic steps—representation of data, discrete cosine transform, quantisation, coding—of lossy image compression procedures like the standard JPEG.

3.5.1 Representation of data

A greyscale image is represented as a two-dimensional array $A$, where each entry $A(i,j)$ represents the intensity (or brightness) at position $(i,j)$ of the image. Each entry is either a signed or an unsigned 8-bit integer, i.e., in the range $\{-128, \dots, 127\}$ or $\{0, \dots, 255\}$.

A position in a colour image is usually represented by three greyscale values $R(i,j)$, $G(i,j)$, and $B(i,j)$ per position, corresponding to the intensity of the primary colours red, green and blue.

In order to compress the image, the three arrays (or channels) $R$, $G$, $B$ are first converted to the luminance/chrominance space by the $YC_bC_r$-transform (performed entry-wise)

$Y = 0.299\,R + 0.587\,G + 0.114\,B, \qquad C_b = B - Y, \qquad C_r = R - Y,$

where the chrominance values are possibly rescaled by constant factors.

$Y$ is the luminance or intensity channel, where the coefficients weighting the colours have been found empirically and represent the best possible approximation of the intensity as perceived by the human eye. The chrominance channels $C_r$ and $C_b$ contain the colour information on red and blue as the differences from $Y$. The information on green is for the most part contained in the luminance $Y$.

A first compression for colour images is commonly already obtained after application of the $YC_bC_r$-transform by removing irrelevant information. Since the human eye is less sensitive to rapid colour changes than to changes in intensity, the resolution of the two chrominance channels $C_b$ and $C_r$ is reduced by a factor of 2 in both the vertical and the horizontal direction, which after sub-sampling results in arrays of $\frac14$ of the original size.
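In code, the colour transform and the sub-sampling are simple entry-wise and block-wise operations (sketch; the chrominance scaling constants follow one common JPEG/JFIF convention and are assumptions, not values from the text):

  import numpy as np

  def rgb_to_ycbcr(R, G, B):
      """Entry-wise colour transform; R, G, B are float arrays of equal shape."""
      Y = 0.299 * R + 0.587 * G + 0.114 * B
      Cb = 0.564 * (B - Y)          # scaled blue difference (assumed JFIF-style constant)
      Cr = 0.713 * (R - Y)          # scaled red difference (assumed JFIF-style constant)
      return Y, Cb, Cr

  def subsample(channel):
      """Halve the resolution in both directions by averaging 2x2 blocks."""
      h, w = channel.shape
      return channel[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

  rng = np.random.default_rng(0)
  R, G, B = (rng.integers(0, 256, (16, 16)).astype(float) for _ in range(3))
  Y, Cb, Cr = rgb_to_ycbcr(R, G, B)
  print(Y.shape, subsample(Cb).shape)    # (16, 16) (8, 8): chrominance at a quarter of the size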

The arrays are then subdivided into $8 \times 8$ blocks, on which the actual (lossy) data compression procedure is successively applied.

Let us consider the following example based on a real image, on which the steps of compression will be illustrated. Assume that the $8 \times 8$ block of 8-bit unsigned integers below is obtained as a part of an image.

3.5.2 The discrete cosine transform

Each $8 \times 8$ block $f$, say, is transformed into a new block $F$. There are several possible transforms; usually the discrete cosine transform is applied, which here obeys the formula

$F(u,v) = \frac14 \, c_u \, c_v \sum_{i=0}^{7} \sum_{j=0}^{7} f(i,j) \cos\frac{(2i+1)u\pi}{16} \cos\frac{(2j+1)v\pi}{16},$

where $c_0 = \frac{1}{\sqrt{2}}$ and $c_u = 1$ for $u > 0$.

The cosine transform is applied after shifting the unsigned integers to signed integers by subtraction of 128.

DCT()

  1  FOR  TO 7 
  2    DO FOR  TO 7 
  3       DO  DCT - coefficient of matrix  
  4  RETURN  
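A direct numpy sketch of this transform and its inverse (our own illustration; a production codec would use a fast algorithm instead of the explicit double sum):

  import numpy as np

  C = np.array([1 / np.sqrt(2)] + [1.0] * 7)       # normalisation constants c_u
  COS = np.array([[np.cos((2 * i + 1) * u * np.pi / 16) for i in range(8)] for u in range(8)])

  def dct_block(f):
      """2-D DCT of an 8x8 block of level-shifted samples (pixel values minus 128)."""
      return 0.25 * np.outer(C, C) * (COS @ f @ COS.T)

  def idct_block(F):
      """Inverse 2-D DCT, reconstructing the level-shifted samples."""
      return 0.25 * (COS.T @ (np.outer(C, C) * F) @ COS)

  block = np.arange(64).reshape(8, 8).astype(float) - 128.0    # placeholder data
  print(np.allclose(idct_block(dct_block(block)), block))      # True: the pair is exact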

The coefficients need not be calculated according to the formula above. They can also be obtained via a related Fourier transform (see Exercises) such that a Fast Fourier Transform may be applied. JPEG also supports wavelet transforms, which may replace the discrete cosine transform here.

The discrete cosine transform can be inverted via

$f(i,j) = \frac14 \sum_{u=0}^{7} \sum_{v=0}^{7} c_u \, c_v \, F(u,v) \cos\frac{(2i+1)u\pi}{16} \cos\frac{(2j+1)v\pi}{16},$

where $c_u$ and $c_v$ are the normalisation constants from above.

In our example, the transformed block is

where the entries are rounded.

The discrete cosine transform is closely related to the discrete Fourier transform and similarly maps signals to frequencies. Removing higher frequencies results in a less sharp image, an effect that is tolerated, such that higher frequencies are stored with less accuracy.

Of special importance is the entry $F(0,0)$, which can be interpreted as a measure of the intensity of the whole block.

3.5.3 Quantisation

The discrete cosine transform maps integers to real numbers, which in each case have to be rounded to be representable. Of course, this rounding already results in a loss of information. However, the transformed block $F$ will now be much easier to manipulate. A quantisation takes place, which maps the entries of $F$ to integers by division by the corresponding entry $Q(u,v)$ of a luminance quantisation matrix $Q$. In our example we use

The quantisation matrix has to be carefully chosen in order to leave the image at highest possible quality. Quantisation is the lossy part of the compression procedure. The idea is to remove information which should not be “visually significant”. Of course, at this point there is a tradeoff between the compression rate and the quality of the decoded image. So, in JPEG the quantisation table is not included into the standard but must be specified (and hence be encoded).

Quantisation()

  1  FOR  TO 7 
  2    DO FOR  TO 7 
  3       DO  
  4  RETURN  

This quantisation transforms block $F$ to a new block $T$ with $T(u,v) = \bigl\lfloor \frac{F(u,v)}{Q(u,v)} \bigr\rceil$, where $\lfloor x \rceil$ is the closest integer to $x$. This block will finally be encoded. Observe that in the transformed block, besides the entry $T(0,0)$, all other entries are relatively small numbers, which has the effect that $T$ mainly consists of 0s.
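In code, quantisation and its approximate inverse are entry-wise operations (sketch; the quantisation matrix below is a commonly used placeholder, not the one from the example):

  import numpy as np

  Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
                [12, 12, 14, 19, 26, 58, 60, 55],
                [14, 13, 16, 24, 40, 57, 69, 56],
                [14, 17, 22, 29, 51, 87, 80, 62],
                [18, 22, 37, 56, 68, 109, 103, 77],
                [24, 35, 55, 64, 81, 104, 113, 92],
                [49, 64, 78, 87, 103, 121, 120, 101],
                [72, 92, 95, 98, 112, 100, 103, 99]])

  def quantise(F):
      """Round each DCT coefficient to the nearest multiple of its quantisation step."""
      return np.rint(F / Q).astype(int)

  def dequantise(T):
      """Approximate reconstruction used by the decoder."""
      return T * Q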

Coefficient $T(0,0)$ deserves special consideration. It is called the DC term (direct current), while the other entries are denoted AC coefficients (alternating current).

3.5.4 Coding

Matrix $T$ will finally be encoded by a Huffman code. We shall only sketch the procedure. First the DC term will be encoded by its difference to the DC term of the previously encoded block. For instance, if the previous DC term was 12, then only the difference of the current DC term from 12 will be encoded.

After that the AC coefficients are encoded according to the zig-zag order $T(0,1)$, $T(1,0)$, $T(2,0)$, $T(1,1)$, $T(0,2)$, $T(0,3)$, $T(1,2)$, etc. In our example this yields a short sequence of nonzero entries followed by 55 zeros. This zig-zag order exploits the fact that there are long runs of successive zeros. These runs will be even more efficiently represented by application of run-length coding, i.e., we encode the number of zeros before the next nonzero element in the sequence followed by this element.
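A short helper generating this traversal order (our own sketch):

  def zigzag_order(n=8):
      """Positions (u, v) of an n x n block in zig-zag order, skipping the DC position (0, 0)."""
      order = sorted(((u, v) for u in range(n) for v in range(n)),
                     key=lambda p: (p[0] + p[1],                              # anti-diagonals first
                                    p[0] if (p[0] + p[1]) % 2 else p[1]))     # alternate direction
      return order[1:]

  def zigzag_scan(block):
      return [block[u][v] for u, v in zigzag_order(len(block))]

  print(zigzag_order()[:7])    # [(0, 1), (1, 0), (2, 0), (1, 1), (0, 2), (0, 3), (1, 2)]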

Integers are written in such a way that small numbers have shorter representations. This is achieved by splitting their representation into size (number of bits to be reserved) and amplitude (the actual value). So 0 has size 0, $-1$ and 1 have size 1; $-3$, $-2$, 2, and 3 have size 2, etc.

In our example this yields a sequence of (run, size) pairs with the corresponding amplitudes for the DC difference and the nonzero AC coefficients, and a final end-of-block symbol indicating that only zeros follow from now on. A pair $(1, s)$, for instance, means that there is 1 zero followed by an element of size $s$ and the corresponding amplitude.

These pairs are then assigned codewords from a Huffman code. There are different Huffman codes for the pairs (run, size) and for the amplitudes. These Huffman codes have to be specified and hence be included into the code of the image.

In the following pseudocode for the encoding of a single $8 \times 8$ block we shall denote the different Huffman codes by encode-1, encode-2, encode-3.

Run-Length-Code()

  1   
  2   
  3   
  4  FOR  TO 14 
  5    DO FOR  TO l 
  6       DO IF  
  7          THEN  
  8          ELSE  
  9       IF  
 10          THEN  
 11          ELSE  
 12              
 13              
 14  IF  
 15    THEN  
 16  RETURN  

At the decoding end matrix $T$ will be reconstructed. Finally, by multiplication of each entry $T(u,v)$ by the corresponding entry $Q(u,v)$ of the quantisation matrix we obtain an approximation to the block $F$, here

To this approximation the inverse cosine transform is applied. This allows us to decode (approximately) the original $8 \times 8$ block of the original image, in our example as

Exercises

3.5-1 Find size and amplitude for the representation of the integers , , and .

3.5-2 Write the entries of the following matrix in zig-zag order

How would this matrix be encoded if the difference of the DC term to the previous one was ?

3.5-3 In our example after quantising the sequence , , , , , , has to be encoded. Assume the Huffman codebooks would yield to encode the difference from the preceding block's DC, , , and for the amplitudes , , and , respectively, and , , , and for the pairs , , , and , respectively. What would be the bitstream to be encoded for the block in our example? How many bits would hence be necessary to compress this block?

3.5-4 What would be matrices , and , if we had used

for quantising after the cosine transform in the block of our example?

3.5-5 What would be the zig-zag-code in this case (assuming again that the DC term would have difference from the previous DC term)?

3.5-6 For any sequence define a new sequence by

This sequence can be expanded to a Fourier-series via

Show how the coefficients of the discrete cosine transform

arise from this Fourier series.

 PROBLEMS 

3-1 Adaptive Huffman-codes

Dynamic and adaptive Huffman-coding is based on the following property. A binary code tree has the sibling property if each node has a sibling and if the nodes can be listed in order of nonincreasing probabilities with each node being adjacent to its sibling. Show that a binary prefix code is a Huffman-code exactly if the corresponding code tree has the sibling property.

3-2 Generalisations of Kraft's inequality

In the proof of Kraft's inequality it is essential to order the lengths $L_1 \le L_2 \le \dots \le L_a$. Show that the construction of a prefix code for given lengths is not always possible if we are not allowed to order the lengths. This scenario of unordered lengths occurs with the Shannon-Fano-Elias-code and in the theory of alphabetic codes, which are related to special search problems. Show that in this case a prefix code with lengths $L_1, \dots, L_a$ exists if and only if

If we additionally require the prefix codes to be also suffix-free, i.e., no codeword is the end of another one, it is an open problem to show that Kraft's inequality holds with the 1 on the right-hand side replaced by 3/4, i.e., that

$\sum_{i=1}^{a} 2^{-L_i} \le \frac34$

already guarantees the existence of such a fix-free code.

3-3 Redundancy of Krichevsky-Trofimov-estimator

Show that using the Krichevsky-Trofimov-estimator, when the parameter $\theta$ of a discrete memoryless source is unknown, the individual redundancy of a sequence $x^n$ is at most $\frac12 \log n + 3$ for all sequences $x^n$ and all $\theta$.

3-4 Alternatives to move-to-front-codes

Find further procedures which like move-to-front-coding prepare the text for compression after application of the Burrows-Wheeler-transform.

 CHAPTER NOTES 

The frequency table of the letters in English texts is taken from [272]. The Huffman coding algorithm was introduced by Huffman in [119]. A pseudocode can be found in [54], where the Huffman coding algorithm is presented as a special Greedy algorithm. There are also adaptive or dynamic variants of Huffman-coding, which adapt the Huffman-code if it is no longer optimal for the actual frequency table, for the case that the probability distribution of the source is not known in advance. The “3/4-conjecture” on Kraft's inequality for fix-free-codes is due to Ahlswede, Balkenhol, and Khachatrian [4].

Arithmetic coding has been introduced by Rissanen [212] and Pasco [199]. For a discussion of implementation questions see [158],[277]. In the section on modelling we are following the presentation of Willems, Shtarkov and Tjalkens in [275]. The exact calculations can be found in their original paper [274] which received the Best Paper Award of the IEEE Information Theory Society in 1996. The Krichevsky-Trofimov-estimator had been introduced in [153].

We presented the two original algorithms LZ77 and LZ78 [281],[282] due to Lempel and Ziv. Many variants, modifications and extensions have been developed since then, concerning the handling of the dictionary, the pointers, the behaviour after the dictionary is complete, etc. For a description, see, for instance, [25] or [26]. Most of the prominent tools for data compression are variations of Ziv-Lempel-coding. For example "zip" and "gzip" are based on LZ77 and a variant of LZ78 is used by the program "compress".

The Burrows-Wheeler transform was introduced in the technical report [36]. It became popular in the sequel, especially because of the Unix compression tool "bzip" based on the Burrows-Wheeler-transform, which outperformed most dictionary-based tools on several benchmark files. It also avoids arithmetic coding, for which patent rights have to be taken into consideration. Further investigations on the Burrows-Wheeler-transform have been carried out, for instance in [21],[70],[155].

We only sketched the basics behind lossy image compression, especially the preparation of the data for application of techniques as Huffman coding. For a detailed discussion we refer to [256], where also the new JPEG2000 standard is described. Our example is taken from [268].

JPEG, short for Joint Photographic Experts Group, is very flexible. For instance, it also supports lossless data compression. None of the steps presented in the section on image compression is fixed uniquely. There are models involving more basic colours and further transforms besides the $YC_bC_r$-transform (for which even different scaling factors for the chrominance channels are used; the formula presented here is from [256]). The cosine transform may be replaced by another operation like a wavelet transform. Further, there is freedom in choosing the quantisation matrix, responsible for the quality of the compressed image, and the Huffman code. On the other hand, this has the effect that these parameters have to be explicitly specified and hence are part of the coded image.

The ideas behind procedures for video and sound compression are rather similar to those for image compression. In principle, they follow the same steps. The amount of data in these cases, however, is much bigger. Again information is lost by removing irrelevant information not perceivable by the human eye or ear (for instance using psychoacoustic models) and by quantising, where the quality should not be reduced significantly. More refined quantising methods are applied in these cases.

Most information on data compression algorithms can be found in the literature on information theory, for instance [55], [104], since the analysis of the achievable compression rates requires knowledge of source coding theory. Recently, several books on data compression have appeared, for instance [26], [106], [191], [224], [225], to which we refer for further reading. The benchmark files of the Calgary Corpus and the Canterbury Corpus are available at [37] or [38].

The book of I. Csiszár and J. Körner [58] analyses different aspects of information theory, including the problems of data compression.

Chapter 4. Reliable Computation

Any planned computation will be subject to different kinds of unpredictable influences during execution. Here are some examples:

  • (1) Loss or change of stored data during execution.

  • (2) Random, physical errors in the computer.

  • (3) Unexpected interactions between different parts of the system working simultaneously, or loss of connections in a network.

  • (4) Bugs in the program.

  • (5) Malicious attacks.

Up to now, it does not seem that the problem of bugs can be solved just with the help of appropriate algorithms. The discipline of software engineering addresses this problem by studying and improving the structure of programs and the process of their creation.

Malicious attacks are addressed by the discipline of computer security. A large part of the recommended solutions involves cryptography.

Problems of kind (3) are very important and a whole discipline, distributed computing, has been created to deal with them.

The problem of storage errors is similar to the problems of reliable communication, studied in information theory: it can be viewed as communication from the present to the future. In both cases, we can protect against noise with the help of error-correcting codes (you will see some examples below).

In this chapter, we will discuss some sample problems, mainly from category (2). In this category, a distinction should also be made between permanent and transient errors. An error is permanent when a part of the computing device is damaged physically and remains faulty for a long time, until some outside intervention repairs it. It is transient if it happens only in a single step: the part of the device in which it happened is not damaged, and in the next step it operates correctly again. For example, if a position in memory turns from 0 to 1 by accident, but a subsequent write operation can write a 0 again, then a transient error happened. If the bit turned to 1 and the computer cannot change it to 0 again, this is a permanent error.

Some of these problems, especially the ones concerning transient errors, are as old as computing. The details of any physical error depend on the kind of computer the computation is implemented on (and, of course, on the kind of computation we want to carry out). But after abstracting away from a lot of distracting details, we are left with some clean but challenging theoretical formulations, and some rather pleasing solutions. There are also interesting connections to other disciplines, like statistical physics and biology.

The computer industry has been amazingly successful over the last five decades in making the computer components smaller, faster, and at the same time more reliable. Among the daily computer horror stories seen in the press, the one conspicuously missing is where the processor wrote a 1 in place of a 0, just out of caprice. (It indisputably happens, but too rarely to become the identifiable source of some visible malfunction.) On the other hand, the generality of some of the results on the correction of transient errors makes them applicable in several settings. Though individual physical processors are very reliable (the error rate is maybe once in every executions), when considering a whole network as performing a computation, the problems caused by unreliable network connections or possibly malicious network participants are not unlike the problems caused by unreliable processors.

The key idea for making a computation reliable is redundancy, which might be formulated as the following two procedures:

(i) Store information in such a form that losing any small part of it is not fatal: it can be restored using the rest of the data. For example, store it in multiple copies.

(ii) Perform the needed computations repeatedly, to make sure that the faulty results can be outvoted.
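As a toy illustration of these two ideas, here is a small Python sketch (our own, not from the text): a bit is stored in three copies and recovered by majority vote, and a computation is run three times with its results outvoted; the fault probability 0.01 is an arbitrary illustrative value.

import random

def majority3(a, b, c):
    # The value held by at least two of the three arguments.
    return (a & b) | (a & c) | (b & c)

def noisy(bit, p):
    # Flips the bit with probability p, modelling a transient fault.
    return bit ^ (random.random() < p)

def store_and_recover(bit, p=0.01):
    # Idea (i): keep three noisy copies, recover the bit by majority vote.
    copies = [noisy(bit, p) for _ in range(3)]
    return majority3(*copies)

def compute_with_vote(f, x, p=0.01):
    # Idea (ii): run the computation three times; each run may be faulty.
    results = [noisy(f(x), p) for _ in range(3)]
    return majority3(*results)

print(store_and_recover(1))                  # almost always 1
print(compute_with_vote(lambda v: v % 2, 7)) # almost always 1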

Our chapter will only use these methods, but there are other remarkable ideas which we cannot follow up here. For example, method (ii) seems especially costly; it is desirable to avoid a lot of repeated computation. The following ideas target this dilemma.

(A) Perform the computation directly on the information in its redundant form: then maybe recomputations can be avoided.

(B) Arrange the computation into “segments” in such a way that those partial results that are to be used later can be cheaply checked at each “milestone” between segments. If the checking finds an error, repeat the last segment.

4.1 Probability theory

The present chapter does not require great sophistication in probability theory, but there are some facts coming up repeatedly which we will review here. If you need additional information, you will find it in any graduate probability theory text.

4.1.1 Terminology

A probability space is a triple $(\Omega, \mathcal{A}, P)$ where $\Omega$ is the set of elementary events, $\mathcal{A}$ is a set of subsets of $\Omega$ called the set of events and $P: \mathcal{A} \to [0,1]$ is a function. For $A \in \mathcal{A}$, the value $P(A)$ is called the probability of event $A$. It is required that $\Omega \in \mathcal{A}$ and that $A \in \mathcal{A}$ implies $\Omega \setminus A \in \mathcal{A}$. Further, if a (possibly infinite) sequence of sets is in $\mathcal{A}$ then so is their union. Also, it is assumed that $P(\Omega) = 1$ and that if $A_1, A_2, \ldots \in \mathcal{A}$ are disjoint then $P\bigl(\bigcup_i A_i\bigr) = \sum_i P(A_i)$.

For $P(B) > 0$, the conditional probability of $A$ given $B$ is defined as $P(A \mid B) = P(A \cap B)/P(B)$.

Events $A_1, \ldots, A_n$ are independent if for any sequence of indices $1 \le i_1 < \cdots < i_k \le n$ we have $P(A_{i_1} \cap \cdots \cap A_{i_k}) = P(A_{i_1}) \cdots P(A_{i_k})$.

Example 4.1 Let where is the set of all subsets of and . This is an example of a discrete probability space: one that has a countable number of elements.

More generally, a discrete probability space is given by a countable set $\Omega = \{\omega_1, \omega_2, \ldots\}$, and a sequence $p_1, p_2, \ldots$ with $p_i \ge 0$, $\sum_i p_i = 1$. The set of events is the set of all subsets of $\Omega$, and for an event $A \subseteq \Omega$ we define $P(A) = \sum_{i : \omega_i \in A} p_i$.

A random variable over a probability space is a function from to the real numbers, with the property that every set of the form is an event: it is in . Frequently, random variables are denoted by capital letters , possibly with indices, and the argument is omitted from . The event is then also written as . This notation is freely and informally extended to more complicated events. The distribution of a random variable is the function . We will frequently only specify the distribution of our variables, and not mention the underlying probability space, when it is clear from the context that it can be specified in one way or another. We can speak about the joint distribution of two or more random variables, but only if it is assumed that they can be defined as functions on a common probability space. Random variables with a joint distribution are independent if every -tuple of events of the form is independent.

The expected value of a random variable $X$ taking values $x_1, x_2, \ldots$ with probabilities $p_1, p_2, \ldots$ is defined as $\mathrm{E} X = \sum_i p_i x_i$.

It is easy to see that the expected value is a linear function of the random variable: $\mathrm{E}(\alpha X + \beta Y) = \alpha\,\mathrm{E} X + \beta\,\mathrm{E} Y$,

even if $X, Y$ are not independent. On the other hand, if the variables $X, Y$ are independent then the expected values can also be multiplied: $\mathrm{E}(XY) = \mathrm{E} X \cdot \mathrm{E} Y$.

There is an important simple inequality called the Markov inequality, which says that for an arbitrary nonnegative random variable $X$ and any value $\lambda > 0$ we have $P(X \ge \lambda) \le \mathrm{E} X / \lambda$.

4.1.2 The law of large numbers (with “large deviations”)

In what follows the bounds

will be useful. Of these, the well-known upper bound holds since the graph of the function is below its tangent line drawn at the point . The lower bound is obtained from the identity and

Consider independent random variables that are identically distributed, with

Let

We want to estimate the probability for any constant . The “law of large numbers” says that if then this probability converges fast to 0 as while if then it converges fast to 1. Let

where the inequality (useful for small and ) comes via and from (4.3). Using the concavity of logarithm, it can be shown that is always nonnegative, and is 0 only if (see Exercise 4.1-1).

Theorem 4.1 (Large deviations for coin-toss) If then

This theorem shows that if then converges to 0 exponentially fast. Inequality (4.5) will allow the following simplification:

useful for small and .

Proof. For a certain real number (to be chosen later), let be the random variable that is if and if , and let : then

Applying the Markov inequality (4.2) and (4.1), we get

where . Let us choose , this is if . Then we get , and hence

This theorem also yields some convenient estimates for binomial coefficients. Let

This is sometimes called the entropy of the probability distribution (measured in logarithms over base instead of base 2). From inequality (4.3) we obtain the estimate

which is useful for small .

Corollary 4.2 We have, for :

In particular, taking with gives

Proof. Theorem 4.1 says for the case :

Substituting , and noting the symmetries , and (4.7) gives (4.8).

Remark 4.3 Inequality (4.6) also follows from the trivial estimate combined with (4.9).
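The qualitative content of Theorem 4.1 can be checked by a small Monte Carlo experiment; the following Python sketch (our own illustration, with arbitrary parameters p = 0.3, delta = 0.1 and 20000 trials) estimates the probability that the empirical mean of n coin tosses exceeds p + delta, and the estimates shrink rapidly as n grows.

import random

def tail_probability(n, p=0.3, delta=0.1, trials=20000):
    # Estimates P( (X_1 + ... + X_n)/n >= p + delta ) for i.i.d. coin tosses.
    hits = 0
    for _ in range(trials):
        successes = sum(random.random() < p for _ in range(n))
        if successes / n >= p + delta:
            hits += 1
    return hits / trials

for n in (10, 50, 100, 200):
    print(n, tail_probability(n))   # decays roughly exponentially in n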

Exercises

4.1-1 Prove the statement made in the main text that is always nonnegative, and is 0 only if .

4.1-2 For , derive from Theorem 4.1 the useful bound

Hint. Let , and use the Taylor formula: , where .

4.1-3 Prove that in Theorem 4.1, the assumption that are independent and identically distributed can be weakened: replaced by the single inequality

4.2 Logic circuits

In a model of computation taking errors into account, the natural assumption is that errors occur everywhere. The most familiar kind of computer, which is separated into a single processor and memory, seems extremely vulnerable under such conditions: while the processor is not “looking”, noise may cause irreparable damage in the memory. Let us therefore rather consider computation models that are parallel: information is being processed everywhere in the system, not only in some distinguished places. Then error correction can be built into the work of every part of the system. We will concentrate on the best known parallel computation model: Boolean circuits.

4.2.1 Boolean functions and expressions

Let us look inside a computer (actually inside an integrated circuit, with a microscope). Discouraged by a lot of physical detail irrelevant to abstract notions of computation, we will decide to look at the blueprints of the circuit designer, at the stage when they show the smallest elements of the circuit still according to their computational functions. We will see a network of lines that can be in two states (of electric potential), “high” or “low”, or in other words “true” or “false”, or, as we will write, 1 or 0. The points connected by these lines are the familiar logic components: at the lowest level of computation, a typical computer processes bits. Integers, floating-point numbers, characters are all represented as strings of bits, and the usual arithmetical operations can be composed of bit operations.

Definition 4.4 A Boolean vector function is a mapping . Most of the time, we will take and speak of a Boolean function.

The variables in are sometimes called Boolean variables or bits.

Example 4.2 Given an undirected graph with nodes, suppose we want to study the question whether it has a Hamiltonian cycle (a sequence listing all vertices of such that is an edge for each and also is an edge). This question is described by a Boolean function as follows. The graph can be described with Boolean variables (): is 1 if and only if there is an edge between nodes and . We define if there is a Hamiltonian cycle in and 0 otherwise.

Example 4.3 [Boolean vector function] Let , let the input be two integers , written as -bit strings: . The output of the function is their product (written in binary): if , then .

Figure 4.1.  AND, OR and NOT gate.


There are only four one-variable Boolean functions: the identically 0 function, the identically 1 function, the identity and the negation: . We mention only the following two-variable Boolean functions: the operation of conjunction (logical AND):

this is the same as multiplication. The operation of disjunction, or logical OR:

It is easy to see that : in other words, disjunction can be expressed using the functions and the operation of composition. The following two-argument Boolean functions are also frequently used:

A finite number of Boolean functions is sufficient to express all others: thus, arbitrarily complex Boolean functions can be “computed” by “elementary” operations. In some sense, this is what happens inside computers.
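For instance, the fact that disjunction can be expressed from negation and conjunction, and that further functions such as exclusive or can be obtained by composition from the basis AND, OR, NOT, can be verified mechanically over all inputs; the following few lines of Python are our own check, not part of the text.

AND = lambda x, y: x & y
OR  = lambda x, y: x | y
NOT = lambda x: 1 - x

for x in (0, 1):
    for y in (0, 1):
        # De Morgan: x OR y equals NOT(NOT x AND NOT y)
        assert OR(x, y) == NOT(AND(NOT(x), NOT(y)))
        # exclusive or obtained by composition from the basis
        assert (x ^ y) == AND(OR(x, y), NOT(AND(x, y)))
print("both identities hold on all inputs")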

Definition 4.5 A set of Boolean functions is a complete basis if every other Boolean function can be obtained by repeated composition from its elements.

Claim 4.6 The set forms a complete basis; in other words, every Boolean function can be represented by a Boolean expression using only these connectives.

The proof can be found in all elementary introductions to propositional logic. Note that since can be expressed using , this latter set is also a complete basis (and so is ).

From now on, under a Boolean expression (formula), we mean an expression built up from elements of some given complete basis. If we do not mention the basis then the complete basis will be meant.

In general, one and the same Boolean function can be expressed in many ways as a Boolean expression. Given such an expression, it is easy to compute the value of the function. However, most Boolean functions can still be expressed only by very large Boolean expressions (see Exercise 4.2-4).

4.2.2 Circuits

A Boolean expression is sometimes large because, when writing it, there is no possibility of reusing partial results. (For example, in the expression

the part occurs twice.) This deficiency is corrected by the following more general formalism.

A Boolean circuit is essentially an acyclic directed graph, each of whose nodes computes a Boolean function (from some complete basis) of the bits coming into it on its input edges, and sends out the result on its output edges (see Figure 4.2). Let us give a formal definition.

Figure 4.2.  The assignment (values on nodes, configuration) gets propagated through all the gates. This is the “computation”.


Definition 4.7 Let be a complete basis of Boolean functions. For an integer let be a set of nodes. A Boolean circuit over is given by the following tuple:

For every node there is a natural number showing its number of inputs. The sources, nodes with , are called input nodes: we will denote them, in increasing order, as

To each non-input node a Boolean function

from the complete basis is assigned: it is called the gate of node . It has as many arguments as the number of entering edges. The sinks of the graph, nodes without outgoing edges, will be called output nodes: they can be denoted by

(Our Boolean circuits will mostly have just a single output node.) To every non-input node and every belongs a node (the node sending the value of input variable of the gate of ). The circuit defines a graph whose set of edges is

We require for each (we identified the with the natural numbers ): this implies that the graph is acyclic. The size

of the circuit is the number of nodes. The depth of a node is the maximal length of directed paths leading from an input node to . The depth of a circuit is the maximum depth of its output nodes.

Definition 4.8 An input assignment, or input configuration to our circuit is a vector with giving value to node

for , . The function can be extended to a unique configuration on all other nodes of the circuit as follows. If gate has arguments then

For example, if , and () are the input nodes to then . The process of extending the configuration by the above equation is also called the computation of the circuit. The vector of the values for is the result of the computation. We say that the Boolean circuit computes the vector function

The assignment procedure can be performed in stages: in stage , all nodes of depth receive their values.

We assign values to the edges as well: the value assigned to an edge is the one assigned to its start node.
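A minimal sketch of this propagation in Python, in our own notation rather than the book's, is the following: the non-input nodes are listed in an order compatible with the acyclicity requirement, and the configuration is extended node by node.

def evaluate_circuit(gates, inputs):
    """inputs: values of the input nodes.
    gates: list of (function, argument_node_list) for the non-input nodes,
    listed so that every argument node precedes the node using it."""
    values = list(inputs)                     # configuration on the input nodes
    for func, args in gates:
        values.append(func(*(values[a] for a in args)))
    return values                             # configuration on all nodes

# Example: nodes 0, 1 are inputs; node 2 computes AND(0, 1); node 3 computes NOT(2).
gates = [(lambda a, b: a & b, [0, 1]), (lambda a: 1 - a, [2])]
print(evaluate_circuit(gates, [1, 1]))        # [1, 1, 1, 0]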

4.2.3 Fast addition by a Boolean circuit

The depth of a Boolean circuit can be viewed as the shortest time it takes to compute the output vector from the input vector by this circuit. As an example application of Boolean circuits, let us develop a circuit that computes the sum of its input bits very fast. We will need this result later in the present chapter for error-correcting purposes.

Figure 4.3.  Naive parallel addition.


Definition 4.9 We will say that a Boolean circuit computes a near-majority if it outputs a bit with the following property: if of all input bits is equal to then .

The depth of our circuit is clearly , since the output must have a path to the majority of inputs. In order to compute the majority, we will also solve the task of summing the input bits.

Theorem 4.10

(a) Over the complete basis consisting of the set of all 3-argument Boolean functions, for each there is a Boolean circuit of input size and depth whose output vector represents the sum of the input bits as a binary number.

(b) Over this same complete basis, for each there is a Boolean circuit of input size and depth computing a near-majority.

Proof. First we prove (a). For simplicity, assume : if is not of this form, we may add some fake inputs. The naive approach would be to proceed according to Figure 4.3: to first compute . Then, to compute , and so on. Then will indeed be computed in stages.

It is somewhat troublesome that here is a number, not a bit, and therefore must be represented by a bit vector, that is, by a group of nodes in the circuit, not just by a single node. However, the general addition operation

when performed in the naive way, will typically take more than a constant number of steps: the numbers have length up to and therefore the addition may add to the depth, bringing the total depth to .

The following observation helps to decrease the depth. Let be three numbers in binary notation: for example, . There are simple parallel formulas to represent the sum of these three numbers as the sum of two others, that is to compute where are numbers also in binary notation

Since both formulas are computed by a single 3-argument gate, 3 numbers can be reduced to 2 (while preserving the sum) in a single parallel computation step. Two such steps reduce 4 numbers to 2. In steps, therefore, they reduce a sum of terms to a sum of 2 numbers of length . Adding these two numbers in the regular way increases the depth by : we found that bits can be added in steps.
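The 3-to-2 reduction step used above can be written with bitwise operations; the following Python sketch is our own rendering of the standard carry-save idea: three binary numbers are replaced by two whose sum is the same, using only constant-depth bitwise operations.

def three_to_two(a, b, c):
    s = a ^ b ^ c                              # bitwise sum without carries
    t = ((a & b) | (a & c) | (b & c)) << 1     # the carries, shifted up one position
    return s, t                                # s + t == a + b + c

a, b, c = 13, 22, 7
s, t = three_to_two(a, b, c)
assert s + t == a + b + c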

To prove (b), construct the circuit as in the proof of (a), but without the last addition: the output is two -bit numbers whose sum we are interested in. The highest-order nonzero bit of these numbers is at some position . If the sum is more than then one of these numbers has a nonzero bit at position or . We can determine this in two applications of 3-input gates.

Exercises

4.2-1 Show that is a complete basis.

4.2-2 Show that the function forms a complete basis by itself.

4.2-3 Let us fix the complete basis . Prove Claim 4.6 (or look up its proof in a textbook). Use it to give an upper bound for an arbitrary Boolean function of variables, on:

  • (a) the smallest size of a Boolean expression for ;

  • (b) the smallest size of a Boolean circuit for ;

  • (c) the smallest depth of a Boolean circuit for ;

4.2-4 Show that for every there is a Boolean function of variables such that every Boolean circuit in the complete basis computing contains nodes.

Hint. For a constant , bound from above the number of Boolean circuits with at most nodes and compare it with the number of Boolean functions over variables.

4.2-5 Consider a circuit with inputs, whose single output bit is computed from the inputs by levels of 3-input majority gates. Show that there is an input vector which is 1 in only positions but with which outputs 1. Thus a small minority of the inputs, when cleverly arranged, can command the result of this circuit.

4.3 Expensive fault-tolerance in Boolean circuits

Let be a Boolean circuit as given in Definition 4.7. When noise is allowed then the values

will not be determined by the formula (4.11) anymore. Instead, they will be random variables . The random assignment will be called a random configuration.

Definition 4.11 At vertex , let

Figure 4.4.  Failure at a gate.


In other words, if gate is not equal to the value computed by the noise-free gate from its inputs . (See Figure 4.4.) The set of vertices where is non-zero is the set of faults.

Let us call the difference the deviation at node .

Let us impose conditions on the kind of noise that will be allowed. Each fault should occur only with probability at most , two specific faults should only occur with probability at most , and so on.

Definition 4.12 For an , let us say that the random configuration is -admissible if

(a) for .

(b) For every set of non-input nodes, we have

In other words, in an -admissible random configuration, the probability of having faults at different specific gates is at most . This is how we require that not only is the fault probability low but also, faults do not “conspire”. The admissibility condition is satisfied if faults occur independently with probability .

Our goal is to build a circuit that will work correctly, with high probability, despite the ever-present noise: in other words, in which errors do not accumulate. This concept is formalized below.

Definition 4.13 We say that the circuit with output node is -resilient if for all inputs , all -admissible configurations , we have .

Let us explore this concept. There is no -resilient circuit with , since even the last gate can fail with probability . So, let us, a little more generously, allow . Clearly, for each circuit and for each we can choose small enough so that is -resilient. But this is not what we are after: hopefully, one does not need more reliable gates every time one builds a larger circuit. So, we hope to find a function

and an with the property that for all , , and for every Boolean circuit of size there is some -resilient circuit of size computing the same function as . If we achieve this then we can say that we prevented the accumulation of errors. Of course, we want to make relatively small, and large (allowing more noise). The function can be called the redundancy: the factor by which we need to increase the size of the circuit to make it resilient. Note that the problem is nontrivial even with, say, . Unless the accumulation of errors is prevented we will gradually lose all information about the desired output, and no could be guaranteed.

How can we correct errors? A simple idea is this: do “everything” 3 times and then continue with the result obtained by majority vote.

Definition 4.14 For odd natural number , a -input majority gate is a Boolean function that outputs the value equal to the majority of its inputs.

Note that a -input majority can be computed using gates of type AND and NOT.

Why should majority voting help? The following informal discussion helps in understanding the benefits and pitfalls. Suppose for a moment that the output is a single bit. If the probability of each of the three independently computed results failing is then the probability that at least of them fails is bounded by . Since the majority vote itself can fail with some probability the total probability of failure is bounded by . We decrease the probability of failure, provided the condition holds.
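The arithmetic behind this condition can be illustrated numerically. In the following sketch (our own; the relation eps = p/10 between the voting-gate failure probability and p is an arbitrary illustrative choice) the probability that at least two of three independent copies fail, plus the failure probability of the vote itself, is compared with the failure probability p of a single copy.

def vote_failure_bound(p, eps):
    # P(at least two of three independent copies fail) + P(the vote itself fails).
    at_least_two_fail = 3 * p**2 * (1 - p) + p**3   # exact value, at most 3*p**2
    return at_least_two_fail + eps

for p in (0.1, 0.01, 0.001):
    eps = p / 10                    # illustrative assumption on the voting gate
    print(p, vote_failure_bound(p, eps))   # smaller than p once p is small enough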

We found that if is small, then repetition and majority vote can “make it” smaller. Of course, in order to keep the error probability from accumulating, we would have to perform this majority operation repeatedly. Suppose, for example, that our computation has stages. Our bound on the probability of faulty output after stage is . We plan to perform the majority operation after each stage . Let us perform stage three times. The probability of failure is now bounded by

Here, the error probabilities of the different stages accumulate, and even if we only get a bound . So, this strategy will not work for arbitrarily large computations.

Here is a somewhat mad idea to avoid accumulation: repeat everything before the end of stage three times, not only stage itself. In this case, the growing bound (4.15) would be replaced with

Now if and then also , so errors do not accumulate. But we paid an enormous price: the fault-tolerant version of the computation reaching stage is 3 times larger than the one reaching stage . To make stages fault-tolerant this way will cost a factor of in size. This way, the function introduced above may become exponential in .

The theorem below formalizes the above discussion.

Theorem 4.15 Let be a finite and complete basis for Boolean functions. If then every function can be computed by an -resilient circuit over .

Proof. For simplicity, we will prove the result for a complete basis that contains the three-argument majority function and contains no functions with more than three arguments. We also assume that faults occur independently. Let be a noise-free circuit of depth computing function . We will prove that there is an -resilient circuit of depth computing . The proof is by induction on . The sufficient conditions on and will emerge from the proof.

The statement is certainly true for , so suppose . Let be the output gate of the circuit , then . The subcircuits computing the functions have depth . By the inductive assumption, there exist -resilient circuits of depth that compute . Let be a new circuit containing copies of the circuits (with the corresponding input nodes merged), with a new node in which is computed as is applied to the outputs of . Then the probability of error of is at most if since each circuit can err with probability and the node with gate can fail with probability .

Let us now form by taking three copies of (with the inputs merged) and adding a new node computing the majority of the outputs of these three copies. The error probability of is at most . Indeed, error will be due to either a fault at the majority gate or an error in at least two of the three independent copies of . So under condition

the circuit is -resilient. This condition will be satisfied by .

The circuit constructed in the proof above is at least times larger than . So, the redundancy is enormous. Fortunately, we will see a much more economical solution. But there are interesting circuits with small depth, for which the factor is not extravagant.

Theorem 4.16 Over the complete basis consisting of all 3-argument Boolean functions, for all sufficiently small , if then for each there is an -resilient Boolean circuit of input size , depth and size outputting a near-majority (as given in Definition 4.9).

Proof. Apply Theorem 4.15 to the circuit from part (a) of Theorem 4.10: it gives a new, -deep -resilient circuit computing a near-majority. The size of any such circuit with 3-input gates is at most

Exercises

4.3-1 Exercise 4.2-5 suggests that the iterated majority vote is not safe against manipulation. However, it works very well under some circumstances. Let the input to be a vector of independent Boolean random variables with . Denote the (random) output bit of the circuit by . Assuming that our majority gates can fail with probability independently, prove

Hint. Define , , , and prove .

4.3-2 We say that a circuit computes the function in an -input-robust way, if the following holds: For any input vector , for any vector of independent Boolean random variables “perturbing it” in the sense , for the output of circuit on input we have . Show that if the function is computable on an -input-robust circuit then .

4.4 Safeguarding intermediate results

In this section, we will see ways to introduce fault-tolerance that scale up better. Namely, we will show:

Theorem 4.17 There are constants such that for

for all , , for every deterministic computation of size there is an -resilient computation of size with the same result.

Let us introduce a concept that will simplify the error analysis of our circuits, making it independent of the input vector .

Definition 4.18 In a Boolean circuit , let us call a majority gate at a node a correcting majority gate if for every input vector of , all input wires of node have the same value. Consider a computation of such a circuit . This computation will make some nodes and wires of tainted. We define taintedness by the following rules:

– The input nodes are untainted.

– If a node is tainted then all of its output wires are tainted.

– A correcting majority gate is tainted if either it fails or a majority of its inputs are tainted.

– Any other gate is tainted if either it fails or one of its inputs is tainted.

Clearly, if for all -admissible random configurations the output is tainted with probability then the circuit is -resilient.

4.4.1 Cables

So far, we have only made use of redundancy idea (ii) of the introduction to the present chapter: repeating computation steps. Let us now try to use idea (i) (keeping information in redundant form) in Boolean circuits. To protect information traveling from gate to gate, we replace each wire of the noiseless circuit by a “cable” of wires (where will be chosen appropriately). Each wire within the cable is supposed to carry the same bit of information, and we hope that a majority will carry this bit even if some of the wires fail.

Definition 4.19 In a Boolean circuit , a set of edges is called a cable if in a noise-free computation of this circuit, each of these edges carries the same Boolean value. The width of the cable is its number of elements. Let us fix an appropriate constant threshold . Consider any possible computation of the noisy version of the circuit , and a cable of width in . This cable will be called -safe if at most of its wires are tainted.

Figure 4.5.  An executive organ.


Let us take a Boolean circuit that we want to make resilient. As we replace the wires of with cables of containing wires each, we will replace each noiseless 2-argument gate at a node by a module called the executive organ of gates, which, for each , passes the th wire of both incoming cables into the th node of the organ. Each of these nodes contains a gate of one and the same type . The wires emerging from these nodes form the output cable of the executive organ.

The number of tainted wires in this output cable may become too high: indeed, if there were tainted wires in the cable and also in the cable then there could be as many as such wires in the cable (not even counting the possible new taints added by faults in the executive organ). The crucial part of the construction is to attach to the executive organ a so-called restoring organ: a module intended to decrease the taint in a cable.

Figure 4.6.  A restoring organ.


4.4.2 Compressors

How to build a restoring organ? Keeping in mind that this organ itself must also work in noise, one solution is to build (for an appropriate ) a special -resilient circuit that computes the near-majority of its inputs in independent copies. Theorem 4.16 provides a circuit of size to do this.

It turns out that, at least asymptotically, there is a better solution. We will look for a very simple restoring organ: one whose own noise we can analyse easily. What could be simpler than a circuit having only one level of gates? We fix an odd positive integer constant (for example, ). Each gate of our organ will be a -input majority gate.

Definition 4.20 A multigraph is a graph in which between any two vertices there may be several edges, not just 0 or 1. Let us call a bipartite multigraph with inputs and outputs -half-regular if each output node has degree . Such a graph is a -compressor if it has the following property: for every set of at most inputs, the number of those output points connected to at least elements of (with multiplicity) is at most .

The compressor property is interesting generally when . For example, in an -compressor the outputs have degree , and the majority operation in these nodes decreases every error set confined to 10% of all inputs to just 5% of all outputs. A compressor with the right parameters could serve as our restoring organ: it decreases a minority to a smaller minority and may in this way restore the safety of a cable. But are there compressors?

Theorem 4.21 For all , all integers with

there is an such that for all integer there is a -compressor.

Note that for , the theorem does not guarantee a compressor with .

Proof. We will not give an explicit construction for the multigraph, we will just show that it exists. We will select a -half-regular multigraph randomly (each such multigraph with the same probability), and show that it will be a -compressor with positive probability. This proof method is called the probabilistic method. Let

Our construction will be somewhat more general, allowing outputs. Let us generate a random bipartite -half-regular multigraph with inputs and outputs in the following way. To each output, we draw edges from random input nodes chosen independently and with uniform distribution over all inputs. Let be an input set of size , let be an output node and let be the event that has or more edges from . Then we have

On the average (in expected value), the event will occur for different output nodes . For an input set , let be the event that the set of nodes for which holds has size . By inequality (4.6) we have

The number of sets of inputs with elements is, using inequality (4.7),

The probability that our random graph is not a compressor is at most as large as the probability that there is at least one input set for which event holds. This can be bounded by

where

As we decrease the first term of this expression dominates. Its coefficient is positive according to the assumption (4.17). We will have if

Example 4.4 Choosing , , the value will work.

We turn a -compressor into a restoring organ by placing -input majority gates into its outputs. If the majority elements sometimes fail then the output of is random. Assume that at most inputs of are tainted. Then outputs can only be tainted if majority gates fail. Let

be the probability of this event. Assuming that the gates of fail independently with probability , inequality 4.6 gives

Example 4.5 Choose , , as in Example 4.4, further (this will satisfy the inequality (4.19) needed later). With , we get .

The attractively small degree led to an extremely unattractive probability bound on the failure of the whole compressor. This bound does decrease exponentially with cable width , but only an extremely large would make it small.

Example 4.6 Choosing again , but (voting in each gate of the compressor over 41 wires instead of 7), leads to somewhat more realistic results. This choice allows . With , again, we get .

These numbers look less frightening, but we will still need many scores of wires in the cable to drive down the probability of compression failure. And although in practice our computing components fail with frequency much less than , we may want to look at the largest that still can be tolerated.
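To get a feeling for how a one-level restoring organ behaves, one can also simulate a randomly chosen half-regular organ directly. The following Python sketch is our own experiment with illustrative parameters (1000 wires, degree 7, a 10% tainted input set), and it ignores failures of the majority gates themselves; it measures the size of the tainted minority after one restoring step.

import random

def random_restoring_organ(n, d=7):
    # Each of the n outputs draws d input indices independently and uniformly.
    return [[random.randrange(n) for _ in range(d)] for _ in range(n)]

def restore(organ, tainted_inputs):
    # An output becomes tainted when a majority of its incoming wires are tainted
    # (gate failures are not modelled in this noise-free check).
    tainted_outputs = set()
    for out, ins in enumerate(organ):
        if sum(i in tainted_inputs for i in ins) * 2 > len(ins):
            tainted_outputs.add(out)
    return tainted_outputs

n = 1000
organ = random_restoring_organ(n)
tainted = set(random.sample(range(n), n // 10))   # 10% of the inputs tainted
print(len(restore(organ, tainted)) / n)           # typically far below 0.10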

Figure 4.7.  An executive organ followed by a restoring organ.


4.4.3 Propagating safety

Compressors allow us to construct a reliable Boolean circuit all of whose cables are safe.

Definition 4.22 Given a Boolean circuit with a single bit of output (for simplicity), a cable width and a Boolean circuit with inputs and outputs, let

be the Boolean circuit that we obtain as follows. The input nodes of are the same as those of . We replace each wire of with a cable of width , and each gate of with an executive organ followed by a restoring organ that is a copy of the circuit . The new circuit has outputs: the outputs of the restoring organ of belonging to the last gate of .

In noise-free computations, on every input, the output of is the same as the output of , but in identical copies.

Lemma 4.23 There are constants and for every cable width a circuit of size and gate size with the following property. For every Boolean circuit of gate size and number of nodes , for every , for every -admissible configuration of , the probability that not every cable of is -safe is .

Proof. We know that there are and with the property that for all a -compressor exists. Let be chosen to satisfy

and define

Let be a restoring organ built from a -compressor. Consider a gate of circuit , and the corresponding executive organ and restoring organ in . Let us estimate the probability of the event that the input cables of this combined organ are -safe but its output cable is not. Assume that the two incoming cables are safe: then at most of the outputs of the executive organ are tainted due to the incoming cables: new taint can still occur due to failures. Let be the event that the executive organ taints at least more of these outputs. Then , using the estimate (4.18). The outputs of the executive organ are the inputs of the restoring organ. If no more than of these are tainted then, in case the organ operates perfectly, it would decrease the number of tainted wires to . Let be the event that the restoring organ taints an additional of these wires. Then again, . If neither nor occur then at most (see (4.19)) tainted wires emerge from the restoring organ, so the outgoing cable is safe. Therefore and hence .

Let be the nodes of the circuit . Since the incoming cables of the whole circuit are safe, the event that there is some cable that is not safe is contained in ; hence the probability is bounded by .

4.4.4 Endgame

Figure 4.8.  Reliable circuit from a fault-free circuit.


Proof. [Proof of Theorem 4.17] We will prove the theorem only for the case when our computation is a Boolean circuit with a single bit of output. The generalization with more bits of output is straightforward. The proof of Lemma 4.23 gives us a circuit whose output cable is safe except for an event of probability . Let us choose in such a way that this becomes :

It remains to add a little circuit to this output cable to extract from it the majority reliably. This can be done using Theorem 4.16, adding a small extra circuit of size that can be called the coda to . Let us call the resulting circuit .

The probability that the output cable is unsafe is . The probability that the output cable is safe but the “coda” circuit fails is bounded by . So, the probability that fails is , by the assumption .

Let us estimate the size of . By Theorem 4.21, we can choose cable width . We have , hence

Example 4.7 Take the constants of Example 4.6, with defined in equation (4.20): then , , , , , , giving

so making as small as possible (ignoring that it must be integer), we get . With , this allows . In addition to this truly unpleasant cable size, the size of the “coda” circuit is , which dominates the size of the rest of (though as it becomes asymptotically negligible).

As Example 4.7 shows, the actual price in redundancy computable from the proof is unacceptable in practice. The redundancy sounds good, since it is only logarithmic in the size of the computation, and by choosing a rather large majority gate (41 inputs), the factor in the here also does not look bad; still, we do not expect the final price of reliability to be this high. How much can this redundancy be improved by optimization or other methods? Problem 4-6 shows that in a slightly more restricted error model (all faults are independent and have the same probability), with more randomization, better constants can be achieved. Exercises 4.4-1, 4.4-2 and 4.4-5 are concerned with an improved construction for the “coda” circuit. Exercise 4.5-2 shows that the coda circuit can be omitted completely. But none of these improvements brings the redundancy to an acceptable level. Even aside from the discomfort caused by their random choice (this can be helped), concentrators themselves are rather large and unwieldy. The problem is probably with using circuits as a model for computation. There is no natural way to break up a general circuit into subunits of non-constant size in order to deal with the reliability problem in a modular style.

4.4.5 The construction of compressors

This subsection is sketchier than the preceding ones, and assumes some knowledge of linear algebra.

We have shown that compressors exist. How expensive is it to find a -compressor, say, with , , , as in Example 4.6? In a deterministic algorithm, we could search through all the approximately -half-regular bipartite graphs. For each of these, we could check all possible input sets of size : as we know, their number is . The cost of checking each subset is , so the total number of operations is . Though this number is exponential in , recall that in our error-correcting construction, for the size of the noiseless circuit: therefore the total number of operations needed to find a compressor is polynomial in .

The proof of Theorem 4.21 shows that a randomly chosen -half-regular bipartite graph is a compressor with large probability. Therefore there is a faster, randomized algorithm for finding a compressor. Pick a random -half-regular bipartite graph, check if it is a compressor: if it is not, repeat. We will be done in a constant expected number of repetitions. This is a faster algorithm, but it is still exponential in , since each check takes operations.

Is it possible to construct a compressor explicitly, avoiding any search that takes exponential time in ? The answer is yes. We will only show here, however, that the compressor property is implied by a certain property involving linear algebra, which can be checked in polynomial time. Certain explicitly constructed graphs are known that possess this property. These are generally sought after not so much for their compressor property as for their expander property (see the section on reliable storage).

For vectors , let denote their inner product. A -half-regular bipartite multigraph with nodes can be defined by an incidence matrix , where is the number of edges connecting input to output . Let be the vector . Then , so is an eigenvector of with eigenvalue . Moreover, is the largest eigenvalue of . Indeed, denoting by for any row vector , we have .

Theorem 4.24 Let be a multigraph defined by the matrix . For all , and

there is an such that if the second largest eigenvalue of the matrix is then is a -compressor.

Proof. The matrix has largest eigenvalue . Since it is symmetric, it has a basis of orthogonal eigenvectors of unit length with corresponding nonnegative eigenvalues

where and . Recall that in the orthonormal basis , any vector can be written as . For an arbitrary vector , we can estimate as follows.

Let now be a set of size and where for and 0 otherwise. Then, coordinate of counts the number of edges coming from the set to the node . Also, , the number of elements of . We get

Suppose that there are nodes with , then this says

Since (4.22) implies , it follows that is a -compressor for small enough .

It is actually sufficient to look for graphs with large and where are constants. To see this, let us define the product of two bipartite multigraphs with vertices by the multigraph belonging to the product of the corresponding matrices.

Suppose that is symmetric: then its second largest eigenvalue is and the ratio of the two largest eigenvalues of is . Therefore using for a sufficiently large as our matrix, the condition (4.22) can be satisfied. Unfortunately, taking the power will increase the degree , taking us probably even farther away from practical realizability.

We found that there is a construction of a compressor with the desired parameters as soon as we find multigraphs with arbitrarily large sizes , with symmetric matrices and with a ratio of the two largest eigenvalues of bounded by a constant independent of . There are various constructions of such multigraphs (see the references in the historical overview). The estimation of the desired eigenvalue quotient is never very simple.

Exercises

4.4-1 The proof of Theorem 4.17 uses a “coda” circuit of size . Once we proved this theorem we could, of course, apply it to the computation of the final majority itself: this would reduce the size of the coda circuit to . Try out this approach on the numerical examples considered above to see whether it results in a significant improvement.

4.4-2 The proof of Theorem 4.21 provided also bipartite graphs with the compressor property, with inputs and outputs. An idea to build a smaller “coda” circuit in the proof of Theorem 4.17 is to concatenate several such compressors, decreasing the number of cables in a geometric series. Explore this idea, keeping in mind, however, that as decreases, the “exponential” error estimate in inequality 4.18 becomes weaker.

4.4-3 In a noisy Boolean circuit, let if the gate at vertex fails and otherwise. Further, let if is tainted, and 0 otherwise. Suppose that the distribution of the random variables does not depend on the Boolean input vector. Show that then the joint distribution of the random variables is also independent of the input vector.

4.4-4 This exercise extends the result of Exercise 4.3-1 to random input vectors: it shows that if a random input vector has only a small number of errors, then the iterated majority vote of Exercise 4.2-5 may still work for it, if we rearrange the input wires randomly. Let , and let be a vector of integers . We define a Boolean circuit as follows. This circuit takes input vector , computes the vector where (in other words, just leads a wire from input node to an “intermediate node” ) and then inputs into the circuit .

Denote the (possibly random) output bit of by . For any fixed input vector , assuming that our majority gates can fail with probability independently, denote . Assume that the input is a vector of (not necessarily independent) Boolean random variables, with . Denoting , assume . Prove that there is a choice of the vector for which

The choice may depend on the distribution of the random vector .

Hint. Choose the vector (and hence the circuit ) randomly, as a random vector where the variables are independent and uniformly distributed over , and denote . Then prove

For this, interchange the averaging over and . Then note that is the probability of when the “wires” are chosen randomly “on the fly” during the computation of the circuit.

4.4-5 Taking the notation of Exercise 4.4-3, suppose, like there, that the random variables are independent of each other, and their distribution does not depend on the Boolean input vector. Take the Boolean circuit introduced in Definition 4.22, and define the random Boolean vector where if and only if the th output node is tainted. Apply Exercise 4.4-4 to show that there is a circuit that can be attached to the output nodes to play the role of the “coda” circuit in the proof of Theorem 4.17. The size of is only linear in , not as for the coda circuit in the proof there. But we assumed a little more about the fault distribution, and the choice of the “wiring” also depends on the circuit .

4.5 The reliable storage problem

There is hardly any simpler computation than not doing anything, just keeping the input unchanged. This task does not fit well, however, into the simple model of Boolean circuits as introduced above.

4.5.1 Clocked circuits

Figure 4.9.  A shift register.


An obvious element of ordinary computations is missing from the above described Boolean circuit model: repetition. If we want to repeat some computation steps, then we need to introduce timing into the work of computing elements and to store the partial results between consecutive steps. Let us look at the drawings of the circuit designer again. We will see components like in Figure 4.9, with one ingoing edge and no operation associated with them; these will be called shift registers. The shift registers are controlled by one central clock (invisible on the drawing). At each clock pulse, the assignment value on the incoming edge jumps onto the outgoing edges and “stays in” the register. Figure 4.10 shows how shift registers may be used inside a circuit.

Definition 4.25 A clocked circuit over a complete basis is given by a tuple just like a Boolean circuit in 4.10. Also, the circuit defines a graph similarly. Recall that we identified nodes with the natural numbers . To each non-input node either a gate is assigned as before, or a shift register: in this case (there is only one argument). We do not require the graph to be acyclic, but we do require every directed cycle (if there is any) to pass through at least one shift register.

Figure 4.10.  Part of a circuit which computes the sum of two binary numbers x, y. We feed the digits of x and y beginning with the lowest-order ones, at the input nodes. The digits of the sum come out on the output edge. A shift register holds the carry.

The circuit works in a sequence of clock cycles. Let us denote the input vector at clock cycle by , the shift register states by , and the output vector by . The part of the circuit going from the inputs and the shift registers to the outputs and the shift registers defines two Boolean vector functions and . The operation of the clocked circuit is described by the following equations (see Figure 4.11, which does not show any inputs and outputs).

Figure 4.11.  A “computer” consists of some memory (shift registers) and a Boolean circuit operating on it. We can define the size of computation as the size of the computer times the number of steps.


Frequently, we have no inputs or outputs during the work of the circuit, so the equations (4.23) can be simplified to

How to use a clocked circuit described by this equation for computation? We write some initial values into the shift registers, and propagate the assignment using the gates, for the given clock cycle. Now we send a clock pulse to the registers, causing them to write the new values to their output edges (which are identical to the input edges of the circuit). After this, the new assignment is computed, and so on.

How to compute a function with the help of such a circuit? Here is a possible convention. We enter the input (only in the first step), and then run the circuit until it signals, at an extra output edge, that the desired result can be read off the other output nodes.

Example 4.8 This example uses a convention different from the one described above: new input bits are supplied in every step, and the output is also delivered continuously. For the binary adder of Figure 4.10, let and be the two input bits in cycle , let be the content of the carry, and be the output in the same cycle. Then the equations (4.23) now have the form

where is the majority operation.
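A direct Python rendering of this serial adder (our own sketch) is the following; the shift register is simply the variable carrying the carry from one clock cycle to the next.

def majority3(a, b, c):
    return (a & b) | (a & c) | (b & c)

def serial_add(x_bits, y_bits):
    """x_bits, y_bits: the digits of the two summands, lowest-order first."""
    carry, out = 0, []
    for x, y in zip(x_bits, y_bits):
        out.append(x ^ y ^ carry)           # output bit of the current cycle
        carry = majority3(x, y, carry)      # new content of the shift register
    out.append(carry)
    return out

# 6 + 3 = 9: inputs [0,1,1] and [1,1,0], output [1,0,0,1] (lowest-order first)
print(serial_add([0, 1, 1], [1, 1, 0]))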

4.5.2 Storage

A clocked circuit is an interesting parallel computer, but let us now pose a task for it that is trivial in the absence of failures: information storage. We would like to store a certain amount of information in such a way that it can be recovered after some time, despite failures in the circuit. For this, the transition function introduced in (4.24) cannot be just the identity: it will have to perform some error-correcting operations. The restoring organs discussed earlier are natural candidates. Indeed, suppose that we use memory cells to store a bit of information. We can call the content of this -tuple safe when the number of memory cells that dissent from the correct value is under some threshold . Let the rest of the circuit be a restoring organ built on a -compressor with . Suppose that the input cable is safe. Then the probability that after the transition, the new output cable (and therefore the new state) is not safe is for some constant . Suppose we keep the circuit running for steps. Then the probability that the state is not safe in some of these steps is , which is small as long as is significantly smaller than . When storing bits of information, the probability that any of the bits loses its safety in some step is .
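The following toy simulation (our own, with arbitrary illustrative parameters) mimics this storage scheme: one bit is held in a block of memory cells, every cell may flip in each step, and a majority-based refresh over a few randomly chosen cells plays the role of the restoring organ.

import random

def store_bit(bit=1, n_cells=101, p=0.01, steps=1000, d=7):
    cells = [bit] * n_cells
    for _ in range(steps):
        # transient faults: each cell flips independently with probability p
        cells = [c ^ (random.random() < p) for c in cells]
        # refresh: each cell is rewritten with the majority of d random cells
        cells = [int(sum(random.choice(cells) for _ in range(d)) * 2 > d)
                 for _ in range(n_cells)]
    return int(sum(cells) * 2 > n_cells)      # the majority of the cells

print(store_bit())    # prints 1 (the stored bit survives) with high probability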

To make this discussion rigorous, an error model must be introduced for clocked circuits. Since we will only consider simple transition functions like the majority vote above, with a single computation step between times and , we will make the model also very simple.

Definition 4.26 Consider a clocked circuit described by equation (4.24), where at each time instant , the configuration is described by the bit vector . Consider a sequence of random bit vectors for . Similarly to (4.13) we define

Thus, says that a failure occurs at the space-time point . The sequence will be called -admissible if (4.14) holds for every set of space-time points with .

By the construction just described, it is possible to keep bits of information for steps in

memory cells. More precisely, the cable will be safe with large probability in any admissible evolution ().

Can we not do better? The reliable information storage problem is related to the problem of information transmission: given a message , a sender wants to transmit it to a receiver through a noisy channel. Only now the sender and the receiver are the same person, and the noisy channel is just the passing of time. Below, we develop some basic concepts of reliable information transmission, and then we will apply them to the construction of a reliable data storage scheme that is more economical than the naive, repetition-based solution seen above.

4.5.3 Error-correcting codes

Error detection.

To protect information, we can use redundancy in a way more efficient than repetition. We might even add only a single redundant bit to our message. Let , be the word we want to protect. Let us create the error check bit

For example, . Our codeword will be subject to noise and it turns into a new word, . If differs from in a single changed (not deleted or added) bit then we will detect this, since then violates the error check relation

We will not be able to correct the error, since we do not know which bit was corrupted.
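A few lines of Python (our own example) make the mechanism concrete: the check bit is the mod-2 sum of the message bits, a single flipped bit violates the check relation, but the violation does not reveal the position of the error.

def add_parity(bits):
    return bits + [sum(bits) % 2]        # append the error check bit

def parity_ok(word):
    return sum(word) % 2 == 0            # the error check relation

codeword = add_parity([1, 0, 1, 1])      # -> [1, 0, 1, 1, 1]
assert parity_ok(codeword)

corrupted = codeword[:]
corrupted[2] ^= 1                        # a single changed bit
assert not parity_ok(corrupted)          # detected, but its position is unknown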

Correcting a single error.

To also correct corrupted bits, we need to add more error check bits. We may try to add two more bits:

Then an uncorrupted word must satisfy the error check relations

or, in matrix notation , where

Note . The matrix is called the error check matrix, or parity check matrix.

Another way to write the error check relations is

Now if is corrupted, even if only in a single position, unfortunately we still cannot correct it: since , the error could be in position 1 or 5 and we could not tell the difference. If we choose our error-check matrix in such a way that the column vectors are all different (and of course nonzero), then we can always correct an error, provided there is only one. Indeed, if the error was in position 3 then

Since all column vectors are different, if we see the vector we can conclude that the bit is corrupted. This code is called the Hamming code. For example, the following error check matrix defines the Hamming code of size 7:

In general, if we have error check bits then our code can have size , hence the number of bits left to store information, the information bits, is . So, to protect bits of information from a single error, the Hamming code adds error check bits. This is much better than repeating every bit 3 times.
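The decoding rule can be sketched in a few lines of Python (our own implementation of the standard length-7 Hamming code): the columns of the error check matrix are the binary representations of the positions 1 through 7, so a nonzero syndrome directly names the corrupted position.

# 3 x 7 error check matrix: column j is the binary representation of j (j = 1..7).
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]

def syndrome(word):
    # word: list of 7 bits; the syndrome is the number whose binary digits are H*word (mod 2).
    return sum((sum(H[i][j] * word[j] for j in range(7)) % 2) << i for i in range(3))

def correct(word):
    s = syndrome(word)
    if s:                                # nonzero syndrome: flip the indicated position
        word[s - 1] ^= 1
    return word

received = [0] * 7                       # the all-zero codeword ...
received[4] ^= 1                         # ... corrupted in position 5
assert correct(received) == [0] * 7      # the single error is corrected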

Codes.

Let us summarize the error-correction scenario in general terms.

Figure 4.12.  Transmission through a noisy channel.

In order to fight noise, the sender encodes the message by an encoding function into a longer string which, for simplicity, we also assume to be binary. This codeword will be changed by noise into a string . The receiver gets and applies to it a decoding function .

Definition 4.27 The pair of functions and is called a code if holds for all . The strings are called messages, words of the form are called codewords. (Sometimes the set of all codewords by itself is also called a code.) For every message , the set of words is called the decoding set of . (Of course, different decoding sets are disjoint.) The number

is called the rate of the code.

We say that our code corrects errors if for all possible messages , if the received word differs from the codeword in at most positions, then .

If the rate is then the -bit codewords carry bits of useful information. In terms of decoding sets, a code corrects errors if each decoding set contains all words that differ from in at most symbols (the set of these words is a kind of “ball” of radius ).

The Hamming code corrects a single error, and its rate is close to 1. One of the important questions connected with error-correcting codes is how much do we have to lower the rate in order to correct more errors.

Having a notion of codes, we can formulate the main result of this section about information storage.

Theorem 4.28 (Network information storage) There are constants with the following property. For all sufficiently large , there is a code with message length and codeword length , and a Boolean clocked circuit of size with inputs and outputs, such that the following holds. Suppose that at time 0, the memory cells of the circuit contain string . Suppose further that the evolution of the circuit has -admissible failures. Then we have

This theorem shows that it is possible to store bits of information for time , in a clocked circuit of size

As long as the storage time is below the exponential bound for a certain constant , this circuit size is only a constant times larger than the amount of information it stores. (In contrast, in (4.26) we needed an extra factor when we used a separate restoring organ for each bit.)

The theorem says nothing about how difficult it is to compute the codeword at the beginning and how difficult it is to carry out the decoding at the end. Moreover, it is desirable to perform these two operations in a noise-tolerant fashion as well. We will return to the problem of decoding later.

Linear algebra.

Since we will be dealing more with bit matrices, it is convenient to introduce the algebraic structure

which is a two-element field. Addition and multiplication in are defined modulo 2 (of course, for multiplication this is no change). It is also convenient to endow the set of binary strings with the structure of an -dimensional vector space over the field . Most theorems and algorithms of basic linear algebra apply to arbitrary fields: in particular, one can define the row rank of a matrix as the maximum number of linearly independent rows, and similarly the column rank. Then it is a theorem that the row rank is equal to the column rank. From now on, in algebraic operations over bits or bit vectors, we will write in place of unless this leads to confusion. To save space, we will frequently write column vectors horizontally: we write

where denotes the transpose of matrix . We will write

for the identity matrix over the vector space .
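As a small worked example of linear algebra over the two-element field, the following Python sketch (our own) computes the rank of a 0/1 matrix by Gaussian elimination modulo 2 and checks, on the Hamming-type check matrix used above, that row rank and column rank coincide.

def rank_gf2(M):
    """Row rank of a 0/1 matrix over the two-element field,
    computed by Gaussian elimination modulo 2."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rows):
            if r != rank and M[r][col]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(rank_gf2(H))                               # row rank: 3
print(rank_gf2([list(col) for col in zip(*H)]))  # column rank (rank of the transpose): 3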

Linear codes.

Let us generalize the idea of the Hamming code.

Definition 4.29 A code with message length and codeword length is linear if, when viewing the message and code vectors as vectors over the field , the encoding function can be computed according to the formula

with an matrix called the generator matrix of the code. The number is called the number of information bits in the code, the number

the number of error-check bits.

Example 4.9 The matrix in (4.27) can be written as , with

Then the error check relation can be written as

This shows that the bits can be taken to be the message bits, or “information bits”, of the code, making the Hamming code a linear code with the generator matrix . (Of course, over the field .)
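Encoding with a linear code is just a matrix product over the two-element field. The sketch below uses a hypothetical [6,3] systematic code with generator matrix G = [I | P] and error-check matrix H = [P^T | I] (these are not the matrices of Example 4.9, which are not reproduced here) and verifies that every codeword satisfies the check relation.

import itertools
import numpy as np

# A hypothetical [6,3] systematic linear code over GF(2):
# G = [I | P] is the generator matrix, H = [P^T | I] the error-check matrix.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
G = np.hstack([np.eye(3, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def encode(m):
    return (np.array(m) @ G) % 2

# Every codeword m*G satisfies the check relation H*c = 0 over GF(2).
print(all(not ((H @ encode(m)) % 2).any()
          for m in itertools.product([0, 1], repeat=3)))   # True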

The following statement is proved using standard linear algebra, and it generalizes the relation between error check matrix and generator matrix seen in Example 4.9.

Claim 4.30 Let be given with .

(a) For every matrix of rank over there is a matrix of rank with the property

(b) For every matrix of rank over there is an matrix of rank with property (4.28).

Definition 4.31 For a vector , let denote the number of its nonzero elements: we will also call it the weight of .

In what follows it will be convenient to define a code starting from an error-check matrix . If the matrix has rank then the code has rate

We can fix any subset of linearly independent columns, and call the indices error check bits and the indices the information bits. (In Example 4.9, we chose .) Important operations can be performed on a code, however, without fixing any separation into error-check bits and information bits.

4.5.4 Refreshers

Correcting a single error was not too difficult; finding a similar scheme to correct 2 errors is much harder. However, in storing bits, typically (much more than 2) of those bits will be corrupted in every step. There are ingenious and quite efficient codes of positive rate (independent of ) correcting even this many errors. When applied to information storage, however, the error-correction mechanism itself must also work in noise, so we are looking for a particularly simple one. It works in our favor, however, that not all errors need to be corrected: it is sufficient to cut down their number, similarly to the restoring organ in reliable Boolean circuits above.

For simplicity, as gates of our circuit we will allow certain Boolean functions with a large, but constant, number of arguments. On the other hand, our Boolean circuit will have just depth 1, similarly to a restoring organ of Section 4.4. The output of each gate is the input of a memory cell (shift register). For simplicity, we identify the gate and the memory cell and call it a cell. At each clock tick, a cell reads its inputs from other cells, computes a Boolean function on them, and stores the result (till the next clock tick). But now, instead of a majority vote among the input values, the Boolean function computed by each cell will be slightly more complicated.

Our particular restoring operations will be defined with the help of a certain parity check matrix . Let be a vector of bits. For some , let (from “vertical”) be the set of those indices with . For integer , let (from “horizontal”) be the set of those indices with . Then the condition can also be expressed by saying that for all , we have . The sets are called the parity check sets belonging to the matrix . From now on, the indices will be called checks, and the indices locations.

Definition 4.32 A linear code is a low-density parity-check code with bounds if the following conditions are satisfied:

(a) For each we have ;

(b) For each we have .

In other words, the weight of each row is at most and the weight of each column is at most .

In our constructions, we will keep the bounds constant while the length of codewords grows. Consider a situation when is a codeword corrupted by some errors. To check whether bit is incorrect we may check all the sums

for all . If all these sums are 0 then we would not suspect to be in error. If only one of these is nonzero, we will know that has some errors but we may still think that the error is not in bit . But if a significant number of these sums is nonzero then we may suspect that is a culprit and may want to change it. This idea suggests the following definition.

Definition 4.33 For a low-density parity-check code with bounds , the refreshing operation associated with the code is the following, to be performed simultaneously for all locations :

Find out whether more than of the sums are nonzero among the ones for . If this is the case, flip .

Let denote the vector obtained from by this operation. For parameters , let us call a -refresher if for each vector of length with weight the weight of the resulting vector decreases thus:
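A minimal Python sketch of the refreshing operation follows; the exact threshold of Definition 4.33 is replaced here by an illustrative fraction, and a genuine refresher additionally requires a parity-check matrix with the expansion properties discussed below, so this only illustrates the update rule itself.

def refresh(y, H, threshold=0.5):
    """One refreshing step for a parity-check code.

    H is a 0/1 parity-check matrix (list of rows). For every location j,
    count the violated checks containing j; if more than a `threshold`
    fraction of them are violated, flip bit j. All flips happen
    simultaneously. The threshold is an illustrative stand-in for the
    bound in Definition 4.33."""
    violated = [sum(h * b for h, b in zip(row, y)) % 2 for row in H]
    out = y[:]
    for j in range(len(y)):
        checks = [i for i, row in enumerate(H) if row[j]]
        if checks and sum(violated[i] for i in checks) > threshold * len(checks):
            out[j] ^= 1
    return out

# A toy check matrix; the single deviation from the all-zero codeword is flipped back.
H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
print(refresh([1, 0, 0, 0], H))   # [0, 0, 0, 0]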

Notice the similarity of refreshers to compressors. The following lemma shows the use of refreshers, and is an example of the advantages of linear codes.

Lemma 4.34 For an -refresher , let be an -vector and a codeword of length with . Then .

Proof. Since is a codeword, , implying . Therefore the error correction flips the same bits in as in : , giving . So, if , then .

Theorem 4.35 There is a parameter and integers such that for all sufficiently large codelength and there is a -refresher with at least information bits.

In particular, we can choose , , .

Figure 4.13.  Using a refresher.

We postpone the proof of this theorem, and apply it first.

Proof of Theorem 4.28. Theorem 4.35 provides us with a device for information storage. Indeed, we can implement the operation using a single gate of at most inputs for each bit of . Now as long as the inequality holds for some codeword , the inequality follows with . Of course, some gates will fail and introduce new deviations resulting in some rather than . Let and . Then, just as earlier, the probability that there are more than failures is bounded by the exponentially decreasing expression . With fewer than new deviations, we will still have . The probability that at any time the number of failures is more than is bounded by

Example 4.10 Let . Using the sample values in Theorem 4.35 we can take , , so the information rate is . With the corresponding values of , and , we have . The probability that there are more than failures is bounded by

This is exponentially decreasing with , albeit initially very slowly: it is not really small until . Still, for , it gives .

Decoding?

In order to use a refresher for information storage, first we need to enter the encoded information into it, and at the end, we need to decode the information from it. How can this be done in a noisy environment? We have nothing particularly smart to say here about encoding besides the reference to the general reliable computation scheme discussed earlier. On the other hand, it turns out that if is sufficiently small then decoding can be avoided altogether.

Recall that in our codes, it is possible to designate certain symbols as information symbols. So, in principle it is sufficient to read out these symbols. The question is only how likely it is that any one of these symbols will be corrupted. The following theorem bounds from above the probability that any given symbol is corrupted at any given time.

Theorem 4.36 For parameters , integers , codelength , with , consider a -refresher. Build a Boolean clocked circuit of size with inputs and outputs based on this refresher, just as in the proof of Theorem 4.28. Suppose that at time 0, the memory cells of the circuit contain string . Suppose further that the evolution of the circuit has -admissible failures. Let be the bits stored at time . Then implies

for some depending on .

Remark 4.37 What we are bounding is only the probability of a corrupt symbol in the particular position . Some of the symbols will certainly be corrupt, but any one symbol one points to will be corrupt only with probability .

The upper bound on required in the condition of the theorem is very severe, underscoring the theoretical character of this result.

Proof. As usual, it is sufficient to assume . Let , and let be the set of circuit elements which fail at time . Let us define the following sequence of integers:

It is easy to see by induction

The first members of the sequence are 1,2,3,4,6,8,11,15,18,24,32, and for they are 1,1,1,2,2,3,4,5,6,8,11.

Lemma 4.38 Suppose that . Then either there is a time at which circuit elements failed, or there is a sequence of sets for and with the following properties.

(a) For , every element of shares some error-check with some element of . Also every element of shares some error-check with some element of .

(b) We have for , on the other hand .

(c) We have , , for all , and .

Proof. We will define the sequence recursively, and will say when to stop. If then we set , , and stop. Suppose that is already defined. Let us define (or if ). Let be the set of those which share some error-check with an element of , and let . The refresher property implies that either or

In the former case, there must have been some time with , otherwise could never become larger than . In the latter case, the property implies

Now if then let be any subset of with size (there is one), else let and a set of size (there is one). This construction has the required properties.

For a given , the number of different choices for is bounded by

where we used (4.9). Similarly, the number of different choices for is bounded by

It follows that the number of choices for the whole sequence is bounded by

On the other hand, the probability for a fixed to have is . This way, we can bound the probability that the sequence ends exactly at by

where we used (4.29). For small this gives

Therefore

where we used and the property for . We can bound the last expression by with an appropriate constant .

We found that the event happens either if there is a time at which circuit elements failed (this has probability bound ) or an event of probability occurs.

Expanders.

We will construct our refreshers from bipartite multigraphs with a property similar to compressors: expanders.

Definition 4.39 Here, we will distinguish the two parts of the bipartite (multi) graphs not as inputs and outputs but as left nodes and right nodes. A bipartite multigraph is -regular if the points of the left set have degree and the points in the right set have degree . Consider such a graph, with the left set having nodes (then the right set has nodes). For a subset of the left set of , let consist of the points connected by some edge to some element of . We say that the graph expands by a factor if we have . For , our graph is an -expander if expands every subset of size of the left set by a factor .

Figure 4.14.  A regular expander.

Definition 4.40 Given an -regular bipartite multigraph , with left set and right set , we assign to it a parity-check code as follows: if is connected to , and 0 otherwise.

Now for every possible error set , the set describes the set of parity checks that the elements of participate in. Under some conditions, the lower bound on the size of guarantees that a sufficient number of errors will be corrected.

Theorem 4.41 Let be an -expander with integer . Let . Then is a -refresher.

Proof. More generally, for any , let be an -expander with integer . We will prove that is a -refresher. For an -dimensional bit vector with , , assume

Our goal is to show : in other words, that in the corrected vector the number of errors decreases at least by a factor of .

Let be the set of bits in that the error correction operation fails to flip, with , and the set of bits that were 0 but that the operation turns to 1, with . Our goal is to bound . The key observation is that each element of shares at least half of its neighbors with elements of , and similarly, each element of shares at least half of its neighbors with other elements of . Therefore both and contribute relatively weakly to the expansion of . Since this expansion is assumed strong, the size of must be limited.

Let

By expansion, .

First we show . Assume namely that, on the contrary, , and let be a subset of such that (an integer, according to the assumptions of the theorem). By expansion,

Each bit in has at most neighbors that are not neighbors of ; so,

Combining these:

since . This contradiction with (4.30) shows .

Now implies (recalling that each element of contributes at most new neighbors):

Each must share at least half of its neighbors with others in . Therefore contributes at most neighbors on its own; the contribution of the other must be divided by 2, so the total contribution of to the neighbors of is at most :

Combining with (4.31):

Random expanders.

Are there expanders good enough for Theorem 4.41? The maximum expansion factor is the degree and we require a factor of . It turns out that a random choice works here, too, similarly to the one used in the “construction” of compressors.

The choice has to be made in such a way that the result is an -regular bipartite multigraph of left size . We will start with left nodes and right nodes . Now we choose a random matching, that is, a set of edges with the property that every left node is connected by an edge to exactly one right node. Let us call the resulting graph . We now obtain as follows: we collapse each group of left nodes into a single node: into one node, into another node, and so on. Similarly, we collapse each group of right nodes into a single node: into one node, into another node, and so on. The edges between any pair of nodes in are inherited from the ancestors of these nodes in . This results in a graph with left nodes of degree and right nodes of degree . The process may give multiple edges between nodes of ; this is why is a multigraph. Two nodes of will be called cluster neighbors if they are collapsed to the same node of .
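The matching-and-collapse construction can be sketched in a few lines of Python (function and parameter names are ours; the sketch only produces a random regular bipartite multigraph and makes no attempt to verify the expansion property).

import random

def random_regular_bipartite(n_left, d_left, d_right):
    """Random (d_left, d_right)-regular bipartite multigraph, built by
    expanding each left node into d_left copies and each right node into
    d_right copies, picking a uniform random perfect matching between the
    copies, and collapsing each group back into a single node."""
    assert (n_left * d_left) % d_right == 0
    n_right = n_left * d_left // d_right
    left_copies = list(range(n_left * d_left))
    right_copies = list(range(n_right * d_right))
    random.shuffle(right_copies)          # a random matching between the copies
    edges = [(lc // d_left, rc // d_right)  # collapse the copies into cluster nodes
             for lc, rc in zip(left_copies, right_copies)]
    return edges   # repeated pairs are possible: the result is a multigraph

edges = random_regular_bipartite(n_left=12, d_left=3, d_right=4)
print(len(edges))          # 36 edges; every left node has degree 3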

Theorem 4.42 Suppose

Then the above random choice gives an -expander with positive probability.

Example 4.11 If , then the inequality in the condition of the theorem becomes

Proof. Let be a set of size in the left set of . We will estimate the probability that has too few neighbors. In the above choice of the graph we might as well start with assigning edges to the nodes of , in some fixed order of the nodes of the preimage of in . There are edges to assign. Let us call a node of the right set of occupied if it has a cluster neighbor already reached by an earlier edge. Let be a random variable that is 1 if the th edge goes to an occupied node and 0 otherwise. There are

choices for the th edge and at most of these are occupied. Therefore

Using the large deviations theorem in the generalization given in Exercise 4.1-3, we have, for :

Now, the number of different neighbors of is , hence

Let us now multiply this with the number

of sets of size :

where in the last step we assumed . This is if

Substituting gives the formula of the theorem.

Proof of Theorem 4.35. Theorem 4.41 shows how to get a refresher from an expander, and Theorem 4.42 shows the existence of expanders for certain parameters. Example 4.11 shows that the parameters can be chosen as needed for the refreshers.

Exercises

4.5-1 Prove Claim 4.30.

4.5-2 Apply the ideas of the proof of Theorem 4.36 to the proof of Theorem 4.17, showing that the “coda” circuit is not needed: each wire of the output cable carries the correct value with high probability.

 PROBLEMS 

4-1 Critical value

Consider a circuit as in Exercise 4.2-5, assuming that each gate fails with probability independently of all the others and of the input. Assume that the input vector is all 0, and let be the probability that the circuit outputs a 1. Show that there is a value with the property that for all we have , and for , we have . Estimate also the speed of convergence in both cases.

4-2 Regular compressor

We defined a compressor as a -halfregular bipartite multigraph. Let us call a compressor regular if it is a -regular multigraph (the input nodes also have degree ). Prove a theorem similar to Theorem 4.21: for each there is an integer and an such that for all integer there is a regular -compressor.

Hint. Choose a random -regular bipartite multigraph by the following process: (1. Replace each vertex by a group of vertices. 2. Choose a random complete matching between the new input and output vertices. 3. Merge each group of vertices into one vertex again.) Prove that the probability, over this choice, that a -regular multigraph is not a compressor is small. For this, express the probability with the help of factorials and estimate the factorials using Stirling's formula.

4-3 Two-way expander

Recall the definition of expanders. Call a -expander regular if it is a -regular multigraph (the input nodes also have degree ). We will call this multigraph a two-way expander if it is an expander in both directions: from to and from to . Prove a theorem similar to the one in Problem 4-2: for all there is an such that for all integers there is a two-way regular -expander.

4-4 Restoring organ from 3-way voting

The proof of Theorem 4.21 did not guarantee a -compressor with any , . If we only want to use 3-way majority gates, consider the following construction. First create a 3-halfregular bipartite graph with inputs and outputs , with a 3-input majority gate in each . Then create new nodes , with a 3-input majority gate in each . The gate of computes the majority of , the gate of computes the majority of , and so on. Calculate whether a random choice of the graph will turn the circuit with inputs and outputs into a restoring organ. Then consider three stages instead of two, where has outputs and see what is gained.

4-5 Restoring organ from NOR gates

The majority gate is not the only gate capable of strengthening the majority. Recall the NOR gate introduced in Exercise 4.2-2, and form . Show that a construction similar to Problem 4-4 can be carried out with used in place of 3-way majority gates.

4-6 More randomness, smaller restoring organs

Taking the notation of Exercise 4.4-3, suppose, as there, that the random variables are independent of each other, and their distribution does not depend on the Boolean input vector. Apply the idea of Exercise 4.4-5 to the construction of each restoring organ. Namely, construct a different restoring organ for each position: the choice depends on the circuit preceding this position. Show that in this case, our error estimates can be significantly improved. The improvement comes, just as in Exercise 4.4-5, since now we do not have to multiply the error probability by the number of all possible sets of size of tainted wires. Since we know the distribution of this set, we can average over it.

4-7 Near-sorting with expanders

In this problem, we show that expanders can be used for “near-sorting”. Let be a regular two-way -expander, whose two parts of size are and . According to a theorem of Kőnig, (the edge-set of) every -regular bipartite multigraph is the disjoint union of (the edge-sets of) complete matchings . To such an expander, we assign a Boolean circuit of depth as follows. The circuit's nodes are subdivided into levels . On level we have two disjoint sets of size of nodes , (). The Boolean value on will be and respectively. Denote the vector of values at stage by . If is an edge in the matching , then we put an gate into , and a gate into :

This network is trying to “sort” the 0's to and the 1's to in stages. More generally, the values in the vectors could be arbitrary numbers. Then if still means and means then each vector is a permutation of the vector . Let . Prove that is -sorted in the sense that for all , at least among the smallest values of is in its left half and at least among the largest values are in its right half.

4-8 Restoring organ from near-sorters

Develop a new restoring organ using expanders, as follows. First, split each wire of the input cable , to get two sets . Attach the -sorter of Problem 4-7, getting outputs . Now split the wires of into two sets . Attach the -sorter again, getting outputs . Keep only for the output cable. Show that the Boolean vector circuit leading from to can be used as a restoring organ.

 CHAPTER NOTES 

The large deviation theorem (Theorem 4.1), or theorems similar to it, are sometimes attributed to Chernoff or Bernstein. One of its frequently used variants is given in Exercise 4.1-2.

The problem of reliable computation with unreliable components was addressed by John von Neumann in [192] on the model of logic circuits. A complete proof of the result of that paper (with a different restoring organ) appeared first in the paper [63] of R. L. Dobrushin and S. I. Ortyukov. Our presentation relied on parts of the paper [205] of N. Pippenger.

The lower-bound result of Dobrushin and Ortyukov in the paper [62] (corrected in [203], [211] and [85]) shows that redundancy of is unavoidable for a general reliable computation whose complexity is . However, this lower bound only shows the necessity of putting the input into a redundantly encoded form (otherwise critical information may be lost in the first step). As shown in [205], for many important function classes, linear redundancy is achievable.

It seems natural to separate the cost of the initial encoding: it might be possible to perform the rest of the computation with much less redundancy. An important step in this direction has been made by D. Spielman in the paper [248] in (essentially) the clocked-circuit model. Spielman takes a parallel computation with time running on elementary components and makes it reliable using only times more processors and running it times longer. The failure probability will be . This is small as long as is not much larger than . So, the redundancy is bounded by some power of the logarithm of the space requirement; the time requirement does not enter explicitly. In Boolean circuits, time and space complexity are not defined separately. The size of the circuit is analogous to the quantity obtained in other models by taking the product of space and time complexity.

Questions more complex than Problem 4-1 have been studied in [204]. The method of Problem 4-2 for generating random -regular multigraphs is analyzed, for example, in [27]. It is much harder to generate simple regular graphs (not multigraphs) uniformly. See for example [143].

The result of Exercise 4.2-4 is due to C. Shannon, see [234]. The asymptotically best circuit size for the worst functions was found by Lupanov in [170]. Exercise 4.3-1 is based on [63], and Exercise 4.3-2 is based on [62] (and its corrections).

Problem 4-7 is based on the starting idea of the depth sorting networks in [9].

For storage in Boolean circuits we partly relied on A. V. Kuznietsov's paper [156] (the main theorem, on the existence of refreshers, is from M. Pinsker). Low density parity check codes were introduced by R. G. Gallager in the book [80], and their use in reliable storage was first suggested by M. G. Taylor in the paper [257]. New, constructive versions of these codes were developed by M. Sipser and D. Spielman in the paper [247], with superfast coding and decoding.

Expanders, invented by Pinsker in [202], have been used extensively in theoretical computer science: see for example [188] for some more detail. This book also gives references on the construction of graphs with large eigenvalue gap. Exercise 4.4-4 and Problem 4-6 are based on [63].

The use of expanders in the role of refreshers was suggested by Pippenger (private communication): our exposition follows Sipser and Spielman in [243]. Random expanders were found, for example, by Pinsker. The needed expansion rate ( times the left degree) is larger than what can be inferred from the size of the eigenvalue gap. As shown in [202] (see the proof of Theorem 4.42), random expanders have the needed expansion rate. Lately, constructive expanders with nearly maximal expansion rate were announced by Capalbo, Reingold, Vadhan and Wigderson in [39].

Reliable computation is also possible in a model of parallel computation that is much more regular than logic circuits: in cellular automata. We cannot present those results here: see for example the papers [84], [86].

Part II. COMPUTER ALGEBRA

Table of Contents

5. Algebra
5.1 Fields, vector spaces, and polynomials
5.1.1 Ring theoretic concepts
5.1.2 Polynomials
5.2 Finite fields
5.3 Factoring polynomials over finite fields
5.3.1 Square-free factorisation
5.3.2 Distinct degree factorisation
5.3.3 The Cantor-Zassenhaus algorithm
5.3.4 Berlekamp's algorithm
5.4 Lattice reduction
5.4.1 Lattices
5.4.2 Short lattice vectors
5.4.3 Gauss' algorithm for two-dimensional lattices
5.4.4 A Gram-Schmidt orthogonalisation and weak reduction
5.4.5 Lovász-reduction
5.4.6 Properties of reduced bases
5.5 Factoring polynomials in
5.5.1 Preparations
5.5.2 The Berlekamp-Zassenhaus algorithm
5.5.3 The LLL algorithm
6. Computer Algebra
6.1 Data representation
6.2 Common roots of polynomials
6.2.1 Classical and extended Euclidean algorithm
6.2.2 Primitive Euclidean algorithm
6.2.3 The resultant
6.2.4 Modular greatest common divisor
6.3 Gröbner basis
6.3.1 Monomial order
6.3.2 Multivariate division with remainder
6.3.3 Monomial ideals and Hilbert's basis theorem
6.3.4 Buchberger's algorithm
6.3.5 Reduced Gröbner basis
6.3.6 The complexity of computing Gröbner bases
6.4 Symbolic integration
6.4.1 Integration of rational functions
6.4.2 The Risch integration algorithm
6.5 Theory and practice
6.5.1 Other symbolic algorithms
6.5.2 An overview of computer algebra systems
7. Cryptology
7.1 Foundations
7.1.1 Cryptography
7.1.2 Cryptanalysis
7.1.3 Algebra, number theory, and graph theory
7.2 Diffie and Hellman's secret-key agreement protocol
7.3 RSA and factoring
7.3.1 RSA
7.3.2 Digital RSA signatures
7.3.3 Security of RSA
7.4 The protocols of Rivest, Rabi, and Sherman
7.5 Interactive proof systems and zero-knowledge
7.5.1 Interactive proof systems and Arthur-Merlin games
7.5.2 Zero-knowledge protocol for graph isomorphism
8. Complexity Theory
8.1 Foundations
8.2 NP-completeness
8.3 Algorithms for the satisfiability problem
8.3.1 A deterministic algorithm
8.3.2 A randomised algorithm
8.4 Graph isomorphism and lowness
8.4.1 Reducibilities and complexity hierarchies
8.4.2 Graph isomorphism is in the low hierarchy
8.4.3 Graph isomorphism is in SPP

Chapter 5. Algebra

First, in this chapter, we will discuss some of the basic concepts of algebra, such as fields, vector spaces and polynomials (Section 5.1). Our main focus will be the study of polynomial rings in one variable. These polynomial rings play a very important role in constructive applications. After this, we will outline the theory of finite fields, putting a strong emphasis on the problem of constructing them (Section 5.2) and on the problem of factoring polynomials over such fields (Section 5.3). Then we will study lattices and discuss the Lenstra-Lenstra-Lovász algorithm which can be used to find short lattice vectors (Section 5.4). We will present a polynomial time algorithm for the factorisation of polynomials with rational coefficients; this was the first notable application of the Lenstra-Lenstra-Lovász algorithm (Section 5.5).

5.1 Fields, vector spaces, and polynomials

In this section we will overview some important concepts related to rings and polynomials.

5.1.1 Ring theoretic concepts

We recall some definitions introduced in Chapters 31–33 of the textbook Introduction to Algorithms. In the sequel all cross references to Chapters 31–33 refer to results in that book.

A set with at least two elements is called a ring, if it has two binary operations, the addition, denoted by the sign, and the multiplication, denoted by the sign. The elements of form an Abelian group with respect to the addition, and they form a monoid (that is, a semigroup with an identity), whose identity element is denoted by 1, with respect to the multiplication. We assume that . Further, the distributive properties also hold: for arbitrary elements we have

Being an Abelian group with respect to the addition means that the operation is associative, commutative, it has an identity element (denoted by 0), and every element has an inverse with respect to this identity. More precisely, these requirements are the following:

associative property: for all triples we have ;

commutative property: for all pairs we have ;

existence of the identity element: for the zero element of and for all elements of , we have ;

existence of the additive inverse: for all there exists , such that .

It is easy to show that each of the elements in has a unique inverse. We usually denote the inverse of an element by .

Concerning the multiplication, we require that it must be associative and that the multiplicative identity should exist. The identity of a ring is the multiplicative identity of . The usual name of the additive identity is zero. We usually omit the sign when writing the multiplication, for example we usually write instead of .

Example 5.1 Rings.

(i) The set of integers with the usual operations and .

(ii) The set of residue classes modulo with respect to the addition and multiplication modulo .

(iii) The set of -matrices with real entries with respect to the addition and multiplication of matrices.

Let and be rings. A map is said to be a homomorphism, if preserves the operations, in the sense that and holds for all pairs . A homomorphism is called an isomorphism, if is a one-to-one correspondence, and the inverse is also a homomorphism. We say that the rings and are isomorphic, if there is an isomorphism between them. If and are isomorphic rings, then we write . From an algebraic point of view, isomorphic rings can be viewed as identical.

For example the map which maps an integer to its residue modulo 6 is a homomorphism: , etc.

A useful and important ring theoretic construction is the direct sum. The direct sum of the rings and is denoted by . The underlying set of the direct sum is , that is, the set of ordered pairs where . The operations are defined componentwise: for we let

Easy calculation shows that is a ring with respect to the operations above. This construction can easily be generalised to more than two rings. In this case, the elements of the direct sum are the -tuples, where is the number of rings in the direct sum, and the operations are defined componentwise.

Fields.

A ring is said to be a field, if its non-zero elements form an Abelian group with respect to the multiplication. The multiplicative inverse of a non-zero element is usually denoted .

The best-known examples of fields are the sets of rational numbers, real numbers, and complex numbers with respect to the usual operations. We usually denote these fields by , respectively.

Another important class of fields consists of the fields of -elements where is a prime number. The elements of are the residue classes modulo , and the operations are the addition and the multiplication defined on the residue classes. The distributive property can easily be derived from the distributivity of the integer operations. By Theorem 33.12, is a group with respect to the addition, and, by Theorem 33.13, the set of non-zero elements of is a group with respect to the multiplication. In order to prove this latter claim, we need to use that is a prime number.

Characteristic, prime field.

In an arbitrary field, we may consider the set of elements of the form , that is, the set of elements that can be written as the sum of copies of the multiplicative identity where is a positive integer. Clearly, one of the two possibilities must hold:

(a) none of the elements is zero;

(b) is zero for some .

In case (a) we say that is a field with characteristic zero. In case (b) the characteristic of is the smallest such that . In this case, the number must be a prime, for, if , then , and so either or .

Suppose that denotes the smallest subfield of that contains . Then is said to be the prime field of . In case (a) the subfield consists of the elements where is an integer and is a positive integer. In this case, is isomorphic to the field of rational numbers. The identification is obvious: .

In case (b) the characteristic is a prime number, and is the set of elements where . In this case, is isomorphic to the field of residue classes modulo .

Vector spaces.

Let be a field. An additively written Abelian group is said to be a vector space over , or simply an -vector space, if for all elements and , an element is defined (in other words, acts on ) and the following hold:

Here are arbitrary elements of , the elements are arbitrary in , and the element 1 is the multiplicative identity of .

The space of -matrices over is an important example of vector spaces. Their properties are studied in Chapter 31.

A vector space over a field is said to be finite-dimensional if there is a collection of finitely many elements in such that each of the elements can be written as a linear combination for some . Such a set is called a generating set of . The cardinality of the smallest generating set of is referred to as the dimension of over , denoted . In a finite-dimensional vector space, a generating system containing elements is said to be a basis.

A set of elements of a vector space is said to be linearly independent, if, for , the equation implies . It is easy to show that a basis in is a linearly independent set. An important property of linearly independent sets is that such a set can be extended to a basis of the vector space. The dimension of a vector space coincides with the cardinality of its largest linearly independent set.

A non-empty subset of a vector space is said to be a subspace of , if it is an (additive) subgroup of , and holds for all and . It is obvious that a subspace can be viewed as a vector space.

The concept of homomorphisms can be defined for vector spaces, but in this context we usually refer to them as linear maps. Let and be vector spaces over a common field . A map is said to be linear, if, for all and , we have

The linear mapping is an isomorphism if is a one-to-one correspondence and its inverse is also a homomorphism. Two vector spaces are said to be isomorphic if there is an isomorphism between them.

Lemma 5.1 Suppose that is a linear mapping. Then is a subspace in . If is one-to-one, then . If, in this case, , then and the mapping is an isomorphism.

Proof. As

we obtain that is a subspace. Further, it is clear that the images of the elements of a generating set of form a generating set for . Let us now suppose that is one-to-one. In this case, the image of a linearly independent subset of is linearly independent in . It easily follows from these observations that the image of a basis of is a basis of , and so . If we assume, in addition, that , then a basis of is also a basis of , as it is a linearly independent set, and so it can be extended to a basis of . Thus and the mapping must be a one-to-one correspondence. It is easy to see, and is left to the reader, that is a linear mapping.

The direct sum of vector spaces can be defined similarly to the direct sum of rings. The direct sum of the vector spaces and is denoted by . The underlying set of the direct sum is , and the addition and the action of the field are defined componentwise. It is easy to see that

Finite multiplicative subgroups of fields.

Let be a field and let be a finite multiplicative subgroup of . That is, the set contains finitely many elements of , each of which is non-zero, is closed under multiplication, and the multiplicative inverse of an element of also lies in . We aim to show that the group is cyclic, that is, can be generated by a single element. The main concepts related to cyclic groups can be found in Section 33.3.4. The order of an element is the smallest positive integer such that .

The cyclic group generated by an element is denoted by . Clearly, , and an element generates the group if and only if and are relatively prime. Hence the group has exactly generators where is Euler's totient function (see Subsection 33.3.2).

The following identity is valid for an arbitrary integer :

Here the summation index runs through all positive divisors of . In order to verify this identity, consider all the rational numbers with . The number of these is exactly . After simplifying these fractions, they will be of the form where is a positive divisor of . A fixed denominator will occur exactly times.
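The divisor-sum identity is easy to verify numerically; here is a small Python check with a naive totient function (our own illustration).

from math import gcd

def phi(n):
    """Euler's totient function by direct count."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n = 36
# The sum of phi(d) over all positive divisors d of n equals n.
print(sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n)   # True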

Theorem 5.2 Suppose that is a field and let be a finite multiplicative subgroup of . Then there exists an element such that .

Proof. Suppose that . Lagrange's theorem (Theorem 33.15) implies that the order of an element is a divisor of . We claim, for an arbitrary , that there are at most elements in with order . The elements with order are roots of the polynomial . If has an element with order , then, by Lemma 5.5, (the lemma will be verified later). Therefore all the elements of with order are contained in the group , which, in turn, contains exactly elements of order .

If had no element of order , then the order of each of the elements of would be a proper divisor of . In this case, however, using the identity above and the fact that , we obtain

which is a contradiction.

5.1.2 Polynomials

Suppose that is a field and that are elements of . Recall that an expression of the form

where is an indeterminate, is said to be a polynomial over (see Chapter 32). The scalars are the coefficients of the polynomial . The degree of the zero polynomial is zero, while the degree of a non-zero polynomial is the largest index such that . The degree of is denoted by .

The set of all polynomials over in the indeterminate is denoted by . If

and

are polynomials with degree not larger than , then their sum is defined as the polynomial

whose coefficients are .

The product of the polynomials and is defined as the polynomial

with degree at most whose coefficients are given by the equations . On the right-hand side of these equations, the coefficients with index greater than are considered zero. Easy computation shows that is a commutative ring with respect to these operations. It is also straightforward to show that has no zero divisors, that is, whenever , then either or .

Division with remainder and divisibility.

The ring of polynomials over is quite similar, in many ways, to the ring of integers. One of their similar features is that the procedure of division with remainder can be performed in both rings.

Lemma 5.3 Let be polynomials such that . Then there exist polynomials and such that

and either or . Moreover, the polynomials and are uniquely determined by these conditions.

Proof. We verify the claim about the existence of the polynomials and by induction on the degree of . If or , then the assertion clearly holds. Let us suppose, therefore, that . Then subtracting a suitable multiple of from , we obtain that the degree of is smaller than . Then, by the induction hypothesis, there exist polynomials and such that

and either or . It is easy to see that, in this case, the polynomials and are as required.

It remains to show that the polynomials and are unique. Let and be polynomials, possibly different from and , satisfying the assertions of the lemma. That is, , and so . If the polynomial on the left-hand side is non-zero, then its degree is at least , while the degree of the polynomial on the right-hand side is smaller than . This, however, is not possible.
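The division procedure from the proof can be written out directly. Below is a Python sketch for polynomials over a prime field, with coefficients listed from the constant term upwards (the representation and names are our own; the modular inverse via pow needs Python 3.8 or newer).

def poly_divmod(f, g, p):
    """Division with remainder in GF(p)[x].
    Polynomials are coefficient lists, lowest degree first; g must be nonzero.
    Returns (q, r) with f = q*g + r and deg r < deg g (or r = [])."""
    f = f[:]
    q = [0] * max(len(f) - len(g) + 1, 1)
    inv_lead = pow(g[-1], -1, p)              # inverse of the leading coefficient of g
    while len(f) >= len(g) and any(f):
        shift = len(f) - len(g)
        coeff = (f[-1] * inv_lead) % p
        q[shift] = coeff
        for i, gi in enumerate(g):            # subtract coeff * x^shift * g from f
            f[shift + i] = (f[shift + i] - coeff * gi) % p
        while f and f[-1] == 0:               # drop leading zeros
            f.pop()
    return q, f

# (x^3 + 2x + 1) divided by (x + 1) over GF(5)
q, r = poly_divmod([1, 2, 0, 1], [1, 1], 5)
print(q, r)        # [3, 4, 1] [3], i.e. quotient x^2 + 4x + 3 and remainder 3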

Let be a commutative ring with a multiplicative identity and without zero divisors, and set . The ring is said to be a Euclidean ring if there is a function such that , for all ; and, further, if , , then there are elements such that , and if , then . The previous lemma shows that is a Euclidean ring where the role of the function is played by the degree function.

The concept of divisibility in can be defined similarly to the definition of the corresponding concept in the ring of integers. A polynomial is said to be a divisor of a polynomial (the notation is ), if there is a polynomial such that . The non-zero elements of , which are clearly divisors of each of the polynomials, are called the trivial divisors or units. A non-zero polynomial is said to be irreducible, if whenever with , then either or is a unit.

Two polynomials are called associates, if there is some such that .

Using Lemma 5.3, one can easily prove the unique factorisation theorem in the ring of polynomials following the argument of the proof of the corresponding theorem in the ring of integers (see Section 33.1). The role of the absolute value of integers is played by the degree of polynomials.

Theorem 5.4 An arbitrary polynomial can be written in the form

where is a unit, the polynomials are pairwise non-associate and irreducible, and, further, the numbers are positive integers. Furthermore, this decomposition is essentially unique in the sense that whenever

is another such decomposition, then , and, after possibly reordering the factors , the polynomials and are associates, and moreover for all .

Two polynomials are said to be relatively prime, if they have no common irreducible divisors.

A scalar is a root of a polynomial , if . Here the value is obtained by substituting into the place of in .

Lemma 5.5 Suppose that is a root of a polynomial . Then there exists a polynomial such that . Hence the polynomial may have at most roots.

Proof. By Lemma 5.3, there exists and such that . Substituting for , we find that . The second assertion now follows by induction on from the fact that the roots of are also roots of .

The cost of the operations with polynomials.

Suppose that are polynomials of degree at most . Then the polynomials can obviously be computed using field operations. The product can be obtained, using its definition, by field operations. If the Fast Fourier Transform can be performed over , then the multiplication can be computed using only field operations (see Theorem 32.2). For general fields, the cost of the fastest known multiplication algorithms for polynomials (for instance the Schönhage–Strassen-method) is , that is, field operations.

The division with remainder, that is, determining the polynomials and for which and either or , can be performed using field operations following the straightforward method outlined in the proof of Lemma 5.3. There is, however, an algorithm (the Sieveking–Kung algorithm) for the same problem using only steps. The details of this algorithm are, however, not discussed here.

Congruence, residue class ring.

Let with , and let . We say that is congruent to modulo , or simply , if divides the polynomial . This concept of congruence is similar to the corresponding concept introduced in the ring of integers (see 33.3.2). It is easy to see from the definition that the relation is an equivalence relation on the set . Let (or simply if is clear from the context) denote the equivalence class containing . From Lemma 5.3 we obtain immediately, for each , that there is a unique such that , and either (if divides ) or . This polynomial is called the representative of the class . The set of equivalence classes is traditionally denoted by .

Lemma 5.6 Let and let . Suppose that and . Then

and

Proof. The first congruence is valid, as

and the right-hand side of this is clearly divisible by . The second and the third congruences follow similarly from the identities

and

respectively.

The previous lemma makes it possible to define the sum and the product of two congruence classes and as and , respectively. The lemma claims that the sum and the product are independent of the choice of the congruence class representatives. The same way, we may define the action of on the set of congruence classes: we set .

Theorem 5.7 Suppose that and that .

(i) The set of residue classes is a commutative ring with an identity under the operations and defined above.

(ii) The ring contains the field as a subring, and it is an -dimensional vector space over . Further, the residue classes form a basis of .

(iii) If is an irreducible polynomial in , then is a field.

Proof. (i) The fact that is a ring follows easily from the fact that is a ring. Let us, for instance, verify the distributive property:

The zero element of is the class , the additive inverse of the class is the class , while the multiplicative identity element is the class . The details are left to the reader.

(ii) The set is a subring isomorphic to . The correspondence is obvious: . By part (i), is an additive Abelian group, and the action of satisfies the vector space axioms. This follows from the fact that the polynomial ring is itself a vector space over . Let us, for example, verify the distributive property:

The other properties are left to the reader.

We claim that the classes are linearly independent. For, if

then , as the zero polynomial is the unique polynomial with degree less than that is divisible by . On the other hand, for a polynomial , the degree of the class representative of is less than . Thus the class can be expressed as a linear combination of the classes . Hence the classes form a basis of , and so .

(iii) Suppose that is irreducible. First we show that has no zero divisors. If , then divides , and so divides either or . That is, either or . Suppose now that with . We claim that the classes are linearly independent. Indeed, an equation implies , and, in turn, it also yields that . Therefore the classes form a basis of . Hence there exist coefficients for which

Thus we find that the class has a multiplicative inverse, and so is a field, as required.

We note that the converse of part (iii) of the previous theorem is also true, and its proof is left to the reader (Exercise 5.1-1).

Example 5.2 We usually represent the elements of the residue class ring by their representatives, which are polynomials with degree less than .

1. Suppose that is the field of two elements, and let . Then the ring has 8 elements, namely

Practically speaking, the addition of the classes is just the addition of polynomials. For instance

When computing the product, we compute the product of the representatives, and substitute it (or reduce it) with its remainder after dividing by . For instance,

The polynomial is irreducible over , since it has degree 3, and has no roots. Hence the residue class ring is a field.

2. Let and let . The elements of the residue class ring are the classes of the form where . The ring is not a field, since is not irreducible. For instance, .
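Arithmetic in such a residue class ring is easy to implement: multiply the representatives and reduce the product modulo the defining polynomial. Here is a Python sketch for a monic modulus, with coefficients listed from the constant term upwards (names, representation, and the chosen factors are our own illustration in the ring of the first example above).

def mulmod(a, b, m, p=2):
    """Product of the residue classes of a and b in GF(p)[x]/(m).
    Coefficient lists are ordered from the constant term upwards;
    the modulus m is assumed to be monic."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    while len(prod) >= len(m):            # reduce modulo m
        c = prod[-1]
        if c:
            shift = len(prod) - len(m)
            for k, mk in enumerate(m):    # cancel the leading term
                prod[shift + k] = (prod[shift + k] - c * mk) % p
        prod.pop()
    return prod

m = [1, 1, 0, 1]                          # x^3 + x + 1 over GF(2)
print(mulmod([1, 1], [0, 1, 1], m))       # class of (x+1)(x^2+x): [1, 0, 0], i.e. 1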

Lemma 5.8 Let be a field containing a field and let .

(i) If is finite-dimensional as a vector space over , then there is a non-zero polynomial such that is a root of .

(ii) Assume that there is a polynomial with , and let be such a polynomial with minimal degree. Then the polynomial is irreducible in . Further, if with then is a divisor of .

Proof. (i) For a sufficiently large , the elements are linearly dependent over . A linear dependence gives a polynomial such that .

(ii) If , then, as , the element is a root of either or . As was chosen to have minimal degree, one of the polynomials is a unit, and so is irreducible. Finally, let such that . Let be the polynomials as in Lemma 5.3 for which . Substituting for into the last equation, we obtain , which is only possible if .

Definition 5.9 The polynomial in the last lemma is said to be a minimal polynomial of .

It follows from the previous lemma that the minimal polynomial is unique up to a scalar multiple. It will often be helpful to assume that the leading coefficient (the coefficient of the term with the highest degree) of the minimal polynomial is 1.

Corollary 5.10 Let be a field containing , and let . Suppose that is irreducible and that . Then is a minimal polynomial of .

Proof. Suppose that is a minimal polynomial of . By the previous lemma, and is irreducible. This is only possible if the polynomials and are associates.

Let be a field containing and let . Let denote the smallest subfield of that contains and .

Theorem 5.11 Let be a field containing and let . Suppose that