ALGORITHMS OF INFORMATICS

Volume 2



Table of Contents

IV. COMPUTER NETWORKS
13. Distributed Algorithms
13.1 Message passing systems and algorithms
13.1.1 Modeling message passing systems
13.1.2 Asynchronous systems
13.1.3 Synchronous systems
13.2 Basic algorithms
13.2.1 Broadcast
13.2.2 Construction of a spanning tree
13.3 Ring algorithms
13.3.1 The leader election problem
13.3.2 The leader election algorithm
13.3.3 Analysis of the leader election algorithm
13.4 Fault-tolerant consensus
13.4.1 The consensus problem
13.4.2 Consensus with crash failures
13.4.3 Consensus with Byzantine failures
13.4.4 Lower bound on the ratio of faulty processors
13.4.5 A polynomial algorithm
13.4.6 Impossibility in asynchronous systems
13.5 Logical time, causality, and consistent state
13.5.1 Logical time
13.5.2 Causality
13.5.3 Consistent state
13.6 Communication services
13.6.1 Properties of broadcast services
13.6.2 Ordered broadcast services
13.6.3 Multicast services
13.7 Rumor collection algorithms
13.7.1 Rumor collection problem and requirements
13.7.2 Efficient gossip algorithms
13.8 Mutual exclusion in shared memory
13.8.1 Shared memory systems
13.8.2 The mutual exclusion problem
13.8.3 Mutual exclusion using powerful primitives
13.8.4 Mutual exclusion using read/write registers
13.8.5 Lamport's fast mutual exclusion algorithm
14. Network Simulation
14.1 Types of simulation
14.2 The need for communications network modelling and simulation
14.3 Types of communications networks, modelling constructs
14.4 Performance targets for simulation purposes
14.5 Traffic characterisation
14.6 Simulation modelling systems
14.6.1 Data collection tools and network analysers
14.6.2 Model specification
14.6.3 Data collection and simulation
14.6.4 Analysis
14.6.5 Network Analysers
14.6.6 Sniffer
14.7 Model Development Life Cycle (MDLC)
14.8 Modelling of traffic burstiness
14.8.1 Model parameters
14.8.2 Implementation of the Hurst parameter
14.8.3 Validation of the baseline model
14.8.4 Consequences of traffic burstiness
14.8.5 Conclusion
14.9 Appendix A
14.9.1 Measurements for link utilisation
14.9.2 Measurements for message delays
15. Parallel Computations
15.1 Parallel architectures
15.1.1 SIMD architectures
15.1.2 Symmetric multiprocessors
15.1.3 Cache-coherent NUMA architectures
15.1.4 Non-cache-coherent NUMA architectures
15.1.5 No remote memory access architectures
15.1.6 Clusters
15.1.7 Grids
15.2 Performance in practice
15.3 Parallel programming
15.3.1 MPI programming
15.3.2 OpenMP programming
15.3.3 Other programming models
15.4 Computational models
15.4.1 PRAM
15.4.2 BSP, LogP and QSM
15.4.3 Mesh, hypercube and butterfly
15.5 Performance in theory
15.6 PRAM algorithms
15.6.1 Prefix
15.6.2 Ranking
15.6.3 Merge
15.6.4 Selection
15.6.5 Sorting
15.7 Mesh algorithms
15.7.1 Prefix on chain
15.7.2 Prefix on square
16. Systolic Systems
16.1 Basic concepts of systolic systems
16.1.1 An introductory example: matrix product
16.1.2 Problem parameters and array parameters
16.1.3 Space coordinates
16.1.4 Serialising generic operators
16.1.5 Assignment-free notation
16.1.6 Elementary operations
16.1.7 Discrete timesteps
16.1.8 External and internal communication
16.1.9 Pipelining
16.2 Space-time transformation and systolic arrays
16.2.1 Further example: matrix product
16.2.2 The space-time transformation as a global view
16.2.3 Parametric space coordinates
16.2.4 Symbolically deriving the running time
16.2.5 How to unravel the communication topology
16.2.6 Inferring the structure of the cells
16.3 Input/output schemes
16.3.1 From data structure indices to iteration vectors
16.3.2 Snapshots of data structures
16.3.3 Superposition of input/output schemes
16.3.4 Data rates induced by space-time transformations
16.3.5 Input/output expansion
16.3.6 Coping with stationary variables
16.3.7 Interleaving of calculations
16.4 Control
16.4.1 Cells without control
16.4.2 Global control
16.4.3 Local control
16.4.4 Distributed control
16.4.5 The cell program as a local view
16.5 Linear systolic arrays
16.5.1 Matrix-vector product
16.5.2 Sorting algorithms
16.5.3 Lower triangular linear equation systems
V. DATA BASES
17. Memory Management
17.1 Partitioning
17.1.1 Fixed partitions
17.1.2 Dynamic partitions
17.2 Page replacement algorithms
17.2.1 Static page replacement
17.2.2 Dynamic paging
17.3 Anomalies
17.3.1 Page replacement
17.3.2 Scheduling with lists
17.3.3 Parallel processing with interleaved memory
17.3.4 Avoiding the anomaly
17.4 Optimal file packing
17.4.1 Approximation algorithms
17.4.2 Optimal algorithms
17.4.3 Shortening of lists (SL)
17.4.4 Upper and lower estimations (ULE)
17.4.5 Pairwise comparison of the algorithms
17.4.6 The error of approximate algorithms
18. Relational Database Design
18.1 Functional dependencies
18.1.1 Armstrong-axioms
18.1.2 Closures
18.1.3 Minimal cover
18.1.4 Keys
18.2 Decomposition of relational schemata
18.2.1 Lossless join
18.2.2 Checking the lossless join property
18.2.3 Dependency preserving decompositions
18.2.4 Normal forms
18.2.5 Multivalued dependencies
18.3 Generalised dependencies
18.3.1 Join dependencies
18.3.2 Branching dependencies
19. Query Rewriting in Relational Databases
19.1 Queries
19.1.1 Conjunctive queries
19.1.2 Extensions
19.1.3 Complexity of query containment
19.2 Views
19.2.1 View as a result of a query
19.3 Query rewriting
19.3.1 Motivation
19.3.2 Complexity problems of query rewriting
19.3.3 Practical algorithms
20. Semi-structured Databases
20.1 Semi-structured data and XML
20.2 Schemas and simulations
20.3 Queries and indexes
20.4 Stable partitions and the PT-algorithm
20.5 A()-indexes
20.6 D()- and M()-indexes
20.7 Branching queries
20.8 Index refresh
VI. APPLICATIONS
21. Bioinformatics
21.1 Algorithms on sequences
21.1.1 Distances of two sequences using linear gap penalty
21.1.2 Dynamic programming with arbitrary gap function
21.1.3 Gotoh algorithm for affine gap penalty
21.1.4 Concave gap penalty
21.1.5 Similarity of two sequences, the Smith-Waterman algorithm
21.1.6 Multiple sequence alignment
21.1.7 Memory-reduction with the Hirschberg algorithm
21.1.8 Memory-reduction with corner-cutting
21.2 Algorithms on trees
21.2.1 The small parsimony problem
21.2.2 The Felsenstein algorithm
21.3 Algorithms on stochastic grammars
21.3.1 Hidden Markov Models
21.3.2 Stochastic context-free grammars
21.4 Comparing structures
21.4.1 Aligning labelled, rooted trees
21.4.2 Co-emission probability of two HMMs
21.5 Distance based algorithms for constructing evolutionary trees
21.5.1 Clustering algorithms
21.5.2 Neighbour joining
21.6 Miscellaneous topics
21.6.1 Genome rearrangement
21.6.2 Shotgun sequencing
22. Computer Graphics
22.1 Fundamentals of analytic geometry
22.1.1 Cartesian coordinate system
22.2 Description of point sets with equations
22.2.1 Solids
22.2.2 Surfaces
22.2.3 Curves
22.2.4 Normal vectors
22.2.5 Curve modelling
22.2.6 Surface modelling
22.2.7 Solid modelling with blobs
22.2.8 Constructive solid geometry
22.3 Geometry processing and tessellation algorithms
22.3.1 Polygon and polyhedron
22.3.2 Vectorization of parametric curves
22.3.3 Tessellation of simple polygons
22.3.4 Tessellation of parametric surfaces
22.3.5 Subdivision curves and meshes
22.3.6 Tessellation of implicit surfaces
22.4 Containment algorithms
22.4.1 Point containment test
22.4.2 Polyhedron-polyhedron collision detection
22.4.3 Clipping algorithms
22.5 Translation, distortion, geometric transformations
22.5.1 Projective geometry and homogeneous coordinates
22.5.2 Homogeneous linear transformations
22.6 Rendering with ray tracing
22.6.1 Ray surface intersection calculation
22.6.2 Speeding up the intersection calculation
22.7 Incremental rendering
22.7.1 Camera transformation
22.7.2 Normalizing transformation
22.7.3 Perspective transformation
22.7.4 Clipping in homogeneous coordinates
22.7.5 Viewport transformation
22.7.6 Rasterization algorithms
22.7.7 Incremental visibility algorithms
23. Human-Computer Interaction
23.1 Multiple-choice systems
23.1.1 Examples of multiple-choice systems
23.2 Generating multiple candidate solutions
23.2.1 Generating candidate solutions with heuristics
23.2.2 Penalty method with exact algorithms
23.2.3 The linear programming—penalty method
23.2.4 Penalty method with heuristics
23.3 More algorithms for interactive problem solving
23.3.1 Anytime algorithms
23.3.2 Interactive evolution and generative design
23.3.3 Successive fixing
23.3.4 Interactive multicriteria decision making
23.3.5 Miscellaneous
Bibliography

List of Figures

14.1. Estimation of the parameters of the most common distributions.
14.2. An example normal distribution.
14.3. An example Poisson distribution.
14.4. An example exponential distribution.
14.5. An example uniform distribution.
14.6. An example Pareto distribution.
14.7. Exponential distribution of interarrival time with 10 sec on the average.
14.8. Probability density function of the Exp (10.0) interarrival time.
14.9. Visualisation of anomalies in packet lengths.
14.10. Large deviations between delta times.
14.11. Histogram of frame lengths.
14.12. The three modelling abstraction levels specified by the Project, Node, and Process editors.
14.13. Example for graphical representation of scalar data (upper graph) and vector data (lower graph).
14.14. Four graphs represented by the Analysis Tool.
14.15. Data Exchange Chart.
14.16. Summary of Delays.
14.17. Diagnosis window.
14.18. Statistics window.
14.19. Impact of adding more bandwidth on the response time.
14.20. Baseline model for further simulation studies.
14.21. Comparison of RMON Standards.
14.22. The self-similar nature of Internet network traffic.
14.23. Traffic traces.
14.24. Measured network parameters.
14.25. Part of the real network topology where the measurements were taken.
14.26. “Message Source” remote client.
14.27. Interarrival time and length of messages sent by the remote client.
14.28. The Pareto probability distribution for mean 440 bytes and Hurst parameter H=0.55.
14.29. The internal links of the 6Mbps ATM network with variable rate control (VBR).
14.30. Parameters of the 6Mbps ATM connection.
14.31. The “Destination” subnetwork.
14.32. Utilisation of the frame relay link in the baseline model.
14.33. Baseline message delay between the remote client and the server.
14.34. Input buffer level of remote router.
14.35. Baseline utilisations of the DS-3 link and Ethernet link in the destination.
14.36. Network topology of bursty traffic sources with various Hurst parameters.
14.37. Simulated average and peak link utilisation.
14.38. Response time and burstiness.
14.39. Relation between the number of cells dropped and burstiness.
14.40. Utilisation of the frame relay link for fixed size messages.
14.41. Utilisation of the frame relay link for Hurst parameter H=0.55.
14.42. Utilisation of the frame relay link for Hurst parameter H=0.95 (many high peaks).
14.43. Message delay for fixed size message.
14.44. Message delay for H=0.55 (longer response time peaks).
14.45. Message delay for H=0.95 (extremely long response time peak).
14.46. Settings.
14.47. New alert action.
14.48. Mailing information.
14.49. Settings.
14.50. Network topology.
15.1. SIMD architecture.
15.2. Bus-based SMP architecture.
15.3. ccNUMA architecture.
15.4. Ideal, typical, and super-linear speedup curves.
15.5. Locality optimisation by data transformation.
15.6. A simple MPI program.
15.7. Structure of an OpenMP program.
15.8. Matrix-vector multiply in OpenMP using a parallel loop.
15.9. Parallel random access machine.
15.10. Types of parallel random access machines.
15.11. A chain consisting of six processors.
15.12. A square of size 4\times4.
15.13. A 3-dimensional cube of size 2\times2\times2.
15.14. A 4-dimensional hypercube \mathcal{H}_{4}.
15.15. A butterfly model.
15.16. A ring consisting of 6 processors.
15.17. Computation of prefixes of 16 elements using Optimal-Prefix.
15.18. Input data of array ranking and the result of the ranking.
15.19. Work of algorithm Det-Ranking on the data of Example 15.4.
15.20. Sorting of 16 numbers by algorithm Odd-Even-Merge.
15.21. A work-optimal merge algorithm Optimal-Merge.
15.22. Selection of maximal integer number.
15.23. Prefix computation on square.
16.1. Rectangular systolic array for matrix product. (a) Array structure and input scheme. (b) Cell structure.
16.2. Two snapshots for the systolic array from Figure 16.1.
16.3. Hexagonal systolic array for matrix product. (a) Array structure and principle of the data input/output. (b) Cell structure.
16.4. Image of a rectangular domain under projection. Most interior points have been suppressed for clarity. Images of previous vertex points are shaded.
16.5. Partitioning of the space coordinates.
16.6. Detailed input/output scheme for the systolic array from Figure 16.3(a).
16.7. Extended input/output scheme, correcting Figure 16.6.
16.8. Interleaved calculation of three matrix products on the systolic array from Figure 16.3.
16.9. Resetting registers via global control. (a) Array structure. (b) Cell structure.
16.10. Output scheme with delayed output of results.
16.11. Combined local/global control. (a) Array structure. (b) Cell structure.
16.12. Matrix product on a rectangular systolic array, with output of results and distributed control. (a) Array structure. (b) Cell structure.
16.13. Matrix product on a rectangular systolic array, with output of results and distributed control. (a) Array structure. (b) Cell on the upper border.
16.14. Bubble sort algorithm on a linear systolic array. (a) Array structure with input/output scheme. (b) Cell structure.
17.1. Task system \tau_{1} and its optimal schedule.
17.2. Scheduling of the task system \tau_{1} at list L'.
17.3. Scheduling of the task system \tau_{1} using list L on m'=4 processors.
17.4. Scheduling of \tau_{2} with list L on m=3 processors.
17.5. Scheduling task system \tau_{3} on m=3 processors.
17.6. Task system \tau and its optimal scheduling S_{OPT} on two processors.
17.7. Optimal list scheduling of task system \tau'.
17.8. Scheduling S_{7}(\tau_{4}) belonging to list L=(T_{1},\dots,T_{n}).
17.9. Scheduling S_{8}(\tau_{4}) belonging to list L'.
17.10. Identical graph of task systems \tau_{5} and \tau_{5}'.
17.11. Schedulings S_{9}(\tau_{5}) and S_{10}(\tau_{5}').
17.12. Graph of the task system \tau_{6}.
17.13. Optimal scheduling S_{11}(\tau_{6}).
17.14. Scheduling S_{12}(\tau_{6}').
17.15. Precedence graph of task system \tau_{7}.
17.16. The optimal scheduling S_{13}(\tau_{7}) (a=mm'-m'+3, b=a+1, c=mm'-m'+m+1).
17.17. The optimal scheduling S_{14}(\tau_{7}) (a=mm'-2m'+m+2, b=m+1, c=2m+2, d=mm'-2m'+2m+2, e=m+m'+1, f=mm'-m'+m+1).
17.18. Summary of the numbers of discs.
17.19. Pairwise comparison of algorithms.
17.20. Results of the pairwise comparison of algorithms.
18.1. Application of Join-test(R,F,\rho).
19.1. The database CinePest.
19.2. The three levels of database architecture.
19.3. GMAPs for the university domain.
19.4. The graph G.
19.5. The graph G'.
19.6. A taxonomy of work on answering queries using views.
20.1. Edge-labeled graph assigned to a vertex-labeled graph.
20.2. An edge-labeled graph and the corresponding vertex-labeled graph.
20.3. The graph corresponding to the XML file “forbidden”.
20.4. A relational database in the semi-structured model.
20.5. The schema of the semi-structured database given in Figure 20.4.
21.1. The tree on which we introduce the Felsenstein algorithm. Evolutionary times are denoted with v's on the edges of the tree.
21.2. A dendrogram.
21.3. Connecting leaf n+1 to the dendrogram.
21.4. Calculating d_{u,k} according to the Centroid method.
21.5. Connecting leaf n+1 for constructing an additive tree.
21.6. Some tree topologies for proving Theorem 21.7.
21.7. The configuration of nodes i, j, k and l if i and j follow a cherry motif.
21.8. The possible places for node m on the tree.
21.9. Representation of the -1,\,+2,\,+5,\,+3,\,+4 signed permutation with an unsigned permutation, and its graph of desire and reality.
22.1. Functions defining the sphere, the block, and the torus.
22.2. Parametric forms of the sphere, the cylinder, and the cone, where u,v\in[0,1].
22.3. Parametric forms of the ellipse, the helix, and the line segment, where t\in[0,1].
22.4. A Bézier curve defined by four control points and the respective basis functions (m=3).
22.5. Construction of B-spline basis functions. A higher order basis function is obtained by blending two consecutive basis functions on the previous level using a linearly increasing and a linearly decreasing weighting, respectively. Here the number of control points is 5, i.e. m=4. Arrows indicate the useful interval [t_{k-1},t_{m+1}] where we can find m+1 basis functions that add up to 1. The right side of the figure depicts control points with triangles and curve points corresponding to the knot values by circles.
22.6. A B-spline interpolation. Based on points \vec{p}_{0},\ldots,\vec{p}_{m} to be interpolated, control points \vec{c}_{-1},\ldots,\vec{c}_{m+1} are computed to make the start and end points of the segments equal to the interpolated points.
22.7. Iso-parametric curves of surface.
22.8. The influence decreases with the distance. Spheres of influence of similar signs increase, of different signs decrease each other.
22.9. The operations of constructive solid geometry for a cone of implicit function f and for a sphere of implicit function g: union (\max(f,g)), intersection (\min(f,g)), and difference (\min(f,-g)).
22.10. Constructing a complex solid by set operations. The root and the leaves of the CSG tree represent the complex solid and the primitives, respectively. Other nodes define the set operations (U: union, \setminus: difference).
22.11. Types of polygons. (a) simple; (b) complex, single connected; (c) multiply connected.
22.12. Diagonal and ear of a polygon.
22.13. The proof of the existence of a diagonal for simple polygons.
22.14. Tessellation of parametric surfaces.
22.15. Estimation of the tessellation error.
22.16. T vertices and their elimination with forced subdivision.
22.17. Construction of a subdivision curve: at each step midpoints are obtained, then the original vertices are moved to the weighted average of neighbouring midpoints and of the original vertex.
22.18. One smoothing step of the Catmull-Clark subdivision. First the face points are obtained, then the edge midpoints are moved, and finally the original vertices are refined according to the weighted sum of its neighbouring edge and face points.
22.19. Original mesh and its subdivision applying the smoothing step once, twice and three times, respectively.
22.20. Generation of the new edge point with butterfly subdivision.
22.21. Possible intersections of the per-voxel tri-linear implicit surface and the voxel edges. From the possible 256 cases, these 15 topologically different cases can be identified, from which the others can be obtained by rotations. Grid points where the implicit function has the same sign are depicted by circles.
22.22. Polyhedron-point containment test. A convex polyhedron contains a point if the point is on that side of each face plane where the polyhedron is. To test a concave polyhedron, a half line is cast from the point and the number of intersections is counted. If the result is an odd number, then the point is inside, otherwise it is outside.
22.23. Point in triangle containment test. The figure shows the case when point \vec{p} is on the left of oriented lines \vec{ab} and \vec{bc}, and on the right of line \vec{ca}, that is, when it is not inside the triangle.
22.24. Point in triangle containment test on coordinate plane xy. Third vertex \vec{c} can be either on the left or on the right side of oriented line \vec{ab}, which can always be traced back to the case of being on the left side by exchanging the vertices.
22.25. Polyhedron-polyhedron collision detection. Only a part of collision cases can be recognized by testing the containment of the vertices of one object with respect to the other object. Collision can also occur when only edges meet, but vertices do not penetrate to the other object.
22.26. Clipping of simple convex polygon \vec{p}[0],\ldots,\vec{p}[5] results in polygon \vec{q}[0],\ldots,\vec{q}[4]. The vertices of the resulting polygon are the inner vertices of the original polygon and the intersections of the edges and the boundary plane.
22.27. When concave polygons are clipped, the parts that should fall apart are connected by an even number of edges.
22.28. The 4-bit codes of the points in a plane and the 6-bit codes of the points in space.
22.29. The embedded model of the projective plane: the projective plane is embedded into a three-dimensional Euclidean space, and a correspondence is established between points of the projective plane and lines of the embedding three-dimensional Euclidean space by fitting the line to the origin of the three-dimensional space and the given point.
22.30. Ray tracing.
22.31. Partitioning the virtual world by a uniform grid. The intersections of the ray and the coordinate planes of the grid are at regular distances c_{x}/v_{x}, c_{y}/v_{y}, and c_{z}/v_{z}, respectively.
22.32. Encapsulation of the intersection space by the cells of the data structure in a uniform subdivision scheme. The intersection space is a cylinder of radius r. The candidate space is the union of those spheres that may overlap a cell intersected by the ray.
22.33. A quadtree partitioning the plane, whose three-dimensional version is the octree. The tree is constructed by halving the cells along all coordinate axes until a cell contains “just a few” objects, or the cell size gets smaller than a threshold. Objects are registered in the leaves of the tree.
22.34. A kd-tree. A cell containing “many” objects is recursively subdivided into two cells with a plane that is perpendicular to one of the coordinate axes.
22.35. Notations and cases of algorithm Ray-First-Intersection-with-kd-Tree. t_{in}, t_{out}, and t are the ray parameters of the entry, exit, and the separating plane, respectively. d is the signed distance between the ray origin and the separating plane.
22.36. Kd-tree based space partitioning with empty space cutting.
22.37. Steps of incremental rendering. (a) Modelling defines objects in their reference state. (b) Shapes are tessellated to prepare for further processing. (c) Modelling transformation places the object in the world coordinate system. (d) Camera transformation translates and rotates the scene to get the eye to be at the origin and to look parallel with axis -z. (e) Perspective transformation converts projection lines meeting at the origin to parallel lines, that is, it maps the eye position onto an ideal point. (f) Clipping removes those shapes and shape parts which cannot be projected onto the window. (g) Hidden surface elimination removes those surface parts that are occluded by other shapes. (h) Finally, the visible polygons are projected and their projections are filled with their visible colours.
22.38. Parameters of the virtual camera: eye position \vec{eye}, target \vec{lookat}, and vertical direction \vec{up}, from which camera basis vectors \vec{u},\vec{v},\vec{w} are obtained, front f_{p} and back b_{p} clipping planes, and vertical field of view fov (the horizontal field of view is computed from aspect ratio aspect).
22.39. The normalizing transformation sets the field of view to 90 degrees.
22.40. The perspective transformation maps the finite frustum of pyramid defined by the front and back clipping planes, and the edges of the window onto an axis aligned, origin centred cube of edge size 2.
22.41. Notations of the Bresenham algorithm: s is the signed distance between the closest pixel centre and the line segment along axis Y, which is positive if the line segment is above the pixel centre. t is the distance along axis Y between the pixel centre just above the closest pixel and the line segment.
22.42. Polygon fill. Pixels inside the polygon are identified scan line by scan line.
22.43. Incremental computation of the intersections between the scan lines and the edges. Coordinate X always increases with the reciprocal of the slope of the line.
22.44. The structure of the active edge table.
22.45. A triangle in the screen coordinate system. Pixels inside the projection of the triangle on plane XY need to be found. The Z coordinates of the triangle in these pixels are computed using the equation of the plane of the triangle.
22.46. Incremental Z coordinate computation for a left oriented triangle.
22.47. Polygon-window relations: (a) distinct; (b) surrounding ; (c) intersecting; (d) contained.
22.48. A BSP-tree. The space is subdivided by the planes of the contained polygons.
23.1. 1000 shortest paths in a 100\times100 grid-graph, printed in overlap.
23.2. The graph for Examples 23.1, 23.2 and 23.6.
23.3. \overline{\phi}_{\varepsilon_{i}} for \varepsilon_{1}=0.025,\;\varepsilon_{2}=0.050,\;\dots,\;\varepsilon_{30}=0.750 on 25\times25 grids.
23.4. Example graph for the LP-penalty method.
23.5. An example for a non-unique decomposition in two paths.
23.6. \overline{\phi}_{\varepsilon_{i}} for \varepsilon_{0}=0,\;\varepsilon_{1}=0.025,\;\dots,\;\varepsilon_{30}=0.750 on 25\times25 grids.

AnTonCom, Budapest, 2011

This electronic book was prepared in the framework of project Eastern Hungarian Informatics Books Repository no. TÁMOP-4.1.2-08/1/A-2009-0046

This electronic book appeared with the support of the European Union and with the co-financing of the European Social Fund

Nemzeti Fejlesztési Ügynökség http://ujszechenyiterv.gov.hu/ 06 40 638-638

Editor: Antal Iványi

Authors of Volume 1: László Lovász (Preface), Antal Iványi (Introduction), Zoltán Kása (Chapter 1), Zoltán Csörnyei (Chapter 2), Ulrich Tamm (Chapter 3), Péter Gács (Chapter 4), Gábor Ivanyos and Lajos Rónyai (Chapter 5), Antal Járai and Attila Kovács (Chapter 6), Jörg Rothe (Chapters 7 and 8), Csanád Imreh (Chapter 9), Ferenc Szidarovszky (Chapter 10), Zoltán Kása (Chapter 11), Aurél Galántai and András Jeney (Chapter 12)

Validators of Volume 1: Zoltán Fülöp (Chapter 1), Pál Dömösi (Chapter 2), Sándor Fridli (Chapter 3), Anna Gál (Chapter 4), Attila Pethő (Chapter 5), Lajos Rónyai (Chapter 6), János Gonda (Chapter 7), Gábor Ivanyos (Chapter 8), Béla Vizvári (Chapter 9), János Mayer (Chapter 10), András Recski (Chapter 11), Tamás Szántai (Chapter 12), Anna Iványi (Bibliography)

Authors of Volume 2: Burkhard Englert, Dariusz Kowalski, Grzegorz Malewicz, and Alexander Shvartsman (Chapter 13), Tibor Gyires (Chapter 14), Claudia Fohry and Antal Iványi (Chapter 15), Eberhard Zehendner (Chapter 16), Ádám Balogh and Antal Iványi (Chapter 17), János Demetrovics and Attila Sali (Chapters 18 and 19), Attila Kiss (Chapter 20), István Miklós (Chapter 21), László Szirmay-Kalos (Chapter 22), Ingo Althöfer and Stefan Schwarz (Chapter 23)

Validators of Volume 2: István Majzik (Chapter 13), János Sztrik (Chapter 14), Dezső Sima (Chapters 15 and 16), László Varga (Chapter 17), Attila Kiss (Chapters 18 and 19), András Benczúr (Chapter 20), István Katsányi (Chapter 21), János Vida (Chapter 22), Tamás Szántai (Chapter 23), Anna Iványi (Bibliography)

©2011 AnTonCom Infokommunikációs Kft.

Homepage: http://www.antoncom.hu/

Part IV. COMPUTER NETWORKS

Table of Contents

13. Distributed Algorithms
13.1 Message passing systems and algorithms
13.1.1 Modeling message passing systems
13.1.2 Asynchronous systems
13.1.3 Synchronous systems
13.2 Basic algorithms
13.2.1 Broadcast
13.2.2 Construction of a spanning tree
13.3 Ring algorithms
13.3.1 The leader election problem
13.3.2 The leader election algorithm
13.3.3 Analysis of the leader election algorithm
13.4 Fault-tolerant consensus
13.4.1 The consensus problem
13.4.2 Consensus with crash failures
13.4.3 Consensus with Byzantine failures
13.4.4 Lower bound on the ratio of faulty processors
13.4.5 A polynomial algorithm
13.4.6 Impossibility in asynchronous systems
13.5 Logical time, causality, and consistent state
13.5.1 Logical time
13.5.2 Causality
13.5.3 Consistent state
13.6 Communication services
13.6.1 Properties of broadcast services
13.6.2 Ordered broadcast services
13.6.3 Multicast services
13.7 Rumor collection algorithms
13.7.1 Rumor collection problem and requirements
13.7.2 Efficient gossip algorithms
13.8 Mutual exclusion in shared memory
13.8.1 Shared memory systems
13.8.2 The mutual exclusion problem
13.8.3 Mutual exclusion using powerful primitives
13.8.4 Mutual exclusion using read/write registers
13.8.5 Lamport's fast mutual exclusion algorithm
14. Network Simulation
14.1 Types of simulation
14.2 The need for communications network modelling and simulation
14.3 Types of communications networks, modelling constructs
14.4 Performance targets for simulation purposes
14.5 Traffic characterisation
14.6 Simulation modelling systems
14.6.1 Data collection tools and network analysers
14.6.2 Model specification
14.6.3 Data collection and simulation
14.6.4 Analysis
14.6.5 Network Analysers
14.6.6 Sniffer
14.7 Model Development Life Cycle (MDLC)
14.8 Modelling of traffic burstiness
14.8.1 Model parameters
14.8.2 Implementation of the Hurst parameter
14.8.3 Validation of the baseline model
14.8.4 Consequences of traffic burstiness
14.8.5 Conclusion
14.9 Appendix A
14.9.1 Measurements for link utilisation
14.9.2 Measurements for message delays
15. Parallel Computations
15.1 Parallel architectures
15.1.1 SIMD architectures
15.1.2 Symmetric multiprocessors
15.1.3 Cache-coherent NUMA architectures
15.1.4 Non-cache-coherent NUMA architectures
15.1.5 No remote memory access architectures
15.1.6 Clusters
15.1.7 Grids
15.2 Performance in practice
15.3 Parallel programming
15.3.1 MPI programming
15.3.2 OpenMP programming
15.3.3 Other programming models
15.4 Computational models
15.4.1 PRAM
15.4.2 BSP, LogP and QSM
15.4.3 Mesh, hypercube and butterfly
15.5 Performance in theory
15.6 PRAM algorithms
15.6.1 Prefix
15.6.2 Ranking
15.6.3 Merge
15.6.4 Selection
15.6.5 Sorting
15.7 Mesh algorithms
15.7.1 Prefix on chain
15.7.2 Prefix on square
16. Systolic Systems
16.1 Basic concepts of systolic systems
16.1.1 An introductory example: matrix product
16.1.2 Problem parameters and array parameters
16.1.3 Space coordinates
16.1.4 Serialising generic operators
16.1.5 Assignment-free notation
16.1.6 Elementary operations
16.1.7 Discrete timesteps
16.1.8 External and internal communication
16.1.9 Pipelining
16.2 Space-time transformation and systolic arrays
16.2.1 Further example: matrix product
16.2.2 The space-time transformation as a global view
16.2.3 Parametric space coordinates
16.2.4 Symbolically deriving the running time
16.2.5 How to unravel the communication topology
16.2.6 Inferring the structure of the cells
16.3 Input/output schemes
16.3.1 From data structure indices to iteration vectors
16.3.2 Snapshots of data structures
16.3.3 Superposition of input/output schemes
16.3.4 Data rates induced by space-time transformations
16.3.5 Input/output expansion
16.3.6 Coping with stationary variables
16.3.7 Interleaving of calculations
16.4 Control
16.4.1 Cells without control
16.4.2 Global control
16.4.3 Local control
16.4.4 Distributed control
16.4.5 The cell program as a local view
16.5 Linear systolic arrays
16.5.1 Matrix-vector product
16.5.2 Sorting algorithms
16.5.3 Lower triangular linear equation systems

Chapter 13. Distributed Algorithms

We define a distributed system as a collection of individual computing devices that can communicate with each other. This definition is very broad: it includes anything from a VLSI chip, to a tightly coupled multiprocessor, to a local area cluster of workstations, to the Internet. Here we focus on more loosely coupled systems. In a distributed system as we view it, each processor has its own semi-independent agenda, but for various reasons, such as sharing of resources, availability, and fault-tolerance, processors need to coordinate their actions.

Distributed systems are highly desirable, but it is notoriously difficult to construct efficient distributed algorithms that perform well in realistic system settings. These difficulties are not just practical in nature; they are also fundamental. In particular, many of them are introduced by three factors: asynchrony, limited local knowledge, and failures. Asynchrony means that global time may not be available, and that both absolute and relative times at which events take place at individual computing devices often cannot be known precisely. Moreover, each computing device can only be aware of the information it receives; it therefore has an inherently local view of the global status of the system. Finally, computing devices and network components may fail independently, so that some remain functional while others do not.

We will begin by describing the models used to analyse distributed systems in the message-passing model of computation. We present and analyse selected distributed algorithms based on these models. We include a discussion of fault-tolerance in distributed systems and consider several algorithms for reaching agreement in the message-passing model in settings prone to failures. Given that global time is often unavailable in distributed systems, we present approaches for providing logical time that allows one to reason about causality and consistent states in distributed systems. Moving on to more advanced topics, we present a spectrum of broadcast services often considered in distributed systems and algorithms implementing these services. We also present advanced rumor gathering algorithms. Finally, we consider the mutual exclusion problem in the shared-memory model of distributed computation.

13.1 Message passing systems and algorithms

We present our first model of distributed computation, for message passing systems without failures. We consider both synchronous and asynchronous systems, and present selected algorithms for message passing systems with arbitrary network topology in both settings.

13.1.1 Modeling message passing systems

In a message passing system, processors communicate by sending messages over communication channels, where each channel provides a bidirectional connection between two specific processors. We call the pattern of connections described by the channels the topology of the system. This topology is represented by an undirected graph, where each node represents a processor, and an edge is present between two nodes if and only if there is a channel between the two processors represented by the nodes. The collection of channels is also called the network. An algorithm for such a message passing system with a specific topology consists of a local program for each processor in the system. This local program provides the processor with the ability to perform local computations, and to send messages to and receive messages from each of its neighbours in the given topology.

Each processor in the system is modeled as a possibly infinite state machine. A configuration is a vector C = (q_0, ..., q_{n-1}) where each q_i is the state of a processor p_i. Activities that can take place in the system are modeled as events (or actions) that describe indivisible system operations. Examples of events include local computation events and delivery events where a processor receives a message. The behaviour of the system over time is modeled as an execution, a (finite or infinite) sequence of configurations (c_i) alternating with events (a_i): c_0, a_1, c_1, a_2, c_2, .... Executions must satisfy a variety of conditions that are used to represent the correctness properties, depending on the system being modeled. These conditions can be classified as either safety or liveness conditions. A safety condition for a system is a condition that must hold in every finite prefix of any execution of the system. Informally it states that nothing bad has happened yet. A liveness condition is a condition that must hold a certain (possibly infinite) number of times. Informally it states that eventually something good must happen. An important liveness condition is fairness, which requires that an (infinite) execution contains infinitely many actions by a processor, unless after some configuration no actions are enabled at that processor.
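To make these definitions concrete, the following minimal Python sketch (our illustration, not part of the original text; all names are hypothetical) represents an execution as a sequence alternating configurations and events, and checks a safety condition on every finite prefix, i.e. on every configuration reached so far.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    State = int                        # placeholder processor state
    Configuration = Tuple[State, ...]  # one state per processor

    @dataclass
    class Event:
        processor: int                 # index of the processor taking the step
        kind: str                      # e.g. "comp" or "deliver"

    def safety_holds(execution: List, predicate: Callable[[Configuration], bool]) -> bool:
        # A safety condition must hold in every finite prefix of the execution,
        # that is, in every configuration c_0, c_1, c_2, ... reached so far.
        configurations = execution[0::2]    # even positions are configurations
        return all(predicate(c) for c in configurations)

    # Toy safety condition: "no processor state ever exceeds 1".
    execution = [(0, 0), Event(0, "comp"), (1, 0), Event(1, "comp"), (1, 1)]
    print(safety_holds(execution, lambda c: all(s <= 1 for s in c)))   # True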

13.1.2 Asynchronous systems

We say that a system is asynchronous if there is no fixed upper bound on how long it takes for a message to be delivered or how much time elapses between consecutive steps of a processor. An obvious example of such an asynchronous system is the Internet. In an implementation of a distributed system there are often upper bounds on message delays and processor step times. But since these upper bounds are often very large and can change over time, it is often desirable to develop an algorithm that is independent of any timing parameters, that is, an asynchronous algorithm.

In the asynchronous model we say that an execution is admissible if each processor has an infinite number of computation events, and every message sent is eventually delivered. The first of these requirements models the fact that processors do not fail. (It does not mean that a processor's local program contains an infinite loop. An algorithm can still terminate by having a transition function not change a processor's state after a certain point.)

We assume that each processor's set of states includes a subset of terminated states. Once a processor enters such a state it remains in it. The algorithm has terminated if all processors are in terminated states and no messages are in transit.

The message complexity of an algorithm in the asynchronous model is the maximum over all admissible executions of the algorithm, of the total number of (point-to-point) messages sent.

A timed execution is an execution that has a nonnegative real number associated with each event, the time at which the event occurs. To measure the time complexity of an asynchronous algorithm we first assume that the maximum message delay in any execution is one unit of time. Hence the time complexity is the maximum time until termination among all timed admissible executions in which every message delay is at most one. Intuitively this can be viewed as taking any execution of the algorithm and normalising it in such a way that the longest message delay becomes one unit of time.
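The normalisation step can be pictured with a tiny Python sketch (our illustration, hypothetical names, not from the text): given the event times of a timed execution and the message delays observed in it, rescale all times so that the longest delay becomes one unit.

    def normalise(event_times, message_delays):
        # event_times: nonnegative reals, one per event of the timed execution
        # message_delays: delays observed in the same timed execution
        longest = max(message_delays)           # this delay becomes one time unit
        return [t / longest for t in event_times]

    # If the longest delay was 3 seconds, an event at 7.5 seconds occurs at time 2.5.
    print(normalise([0.0, 3.0, 7.5], [1.0, 3.0, 2.0]))   # [0.0, 1.0, 2.5]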

13.1.3 Synchronous systems

In the synchronous model processors execute in lock-step. The execution is partitioned into rounds so that every processor can send a message to each neighbour, the messages are delivered, and every processor computes based on the messages just received. This model is very convenient for designing algorithms. Algorithms designed in this model can in many cases be automatically simulated to work in other, more realistic timing models.

In the synchronous model we say that an execution is admissible if it is infinite. From the round structure it follows that every processor takes an infinite number of computation steps and that every message sent is eventually delivered. Hence in a synchronous system with no failures, once a (deterministic) algorithm has been fixed, the only thing that can vary between executions is the initial configuration. On the other hand, in an asynchronous system there can be many different executions of the same algorithm, even with the same initial configuration and no failures, since the interleaving of processor steps and the message delays are not fixed.

The notion of terminated states and the termination of the algorithm is defined in the same way as in the asynchronous model.

The message complexity of an algorithm in the synchronous model is the maximum over all admissible executions of the algorithm, of the total number of messages sent.

To measure time in a synchronous system we simply count the number of rounds until termination. Hence the time complexity of an algorithm in the synchronous model is the maximum number of rounds in any admissible execution of the algorithm until the algorithm has terminated.

13.2 Basic algorithms

We begin with some simple examples of algorithms in the message passing model.

13.2.1 Broadcast

We start with a simple algorithm Spanning-Tree-Broadcast for the (single message) broadcast problem, assuming that a spanning tree of the network graph with n nodes (processors) is already given. Later, we will remove this assumption. A distinguished processor p_r wishes to send a message M to all other processors. The spanning tree rooted at p_r is maintained in a distributed fashion: Each processor has a distinguished channel that leads to its parent in the tree as well as a set of channels that lead to its children in the tree. The root p_r sends the message M on all channels leading to its children. When a processor receives the message M on a channel from its parent, it sends M on all channels leading to its children.

Spanning-Tree-Broadcast

       Initially M is in transit from p_r to all its children in the spanning tree.
       Code for p_r:
  1    upon receiving no message: // first computation event by p_r
  2       TERMINATE
        
       Code for p_i, 0 ≤ i ≤ n-1, i ≠ r:
  3    upon receiving M from parent:
  4       SEND M to all children
  5       TERMINATE

The algorithm Spanning-Tree-Broadcast is correct whether the system is synchronous or asynchronous. Moreover, the message and time complexities are the same in both models.
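To make the round structure concrete, here is a minimal Python simulation of Spanning-Tree-Broadcast under the synchronous model; the tree layout, the processor names and the dictionary representation are illustrative assumptions, not part of the algorithm above.

from collections import deque

def spanning_tree_broadcast(children, root):
    """Simulate rounds of the broadcast on a rooted spanning tree.

    children[p] lists the children of processor p; the result maps every
    processor to the round in which it receives the message M."""
    received_round = {root: 0}              # the root holds M before round 1
    frontier = deque([root])
    rnd = 0
    while frontier:
        rnd += 1
        next_frontier = deque()
        for parent in frontier:
            for child in children.get(parent, []):
                received_round[child] = rnd   # parent forwards M on the child link
                next_frontier.append(child)
        frontier = next_frontier
    return received_round

# A tree of depth 2: every processor receives M by round 2, and n-1 = 3
# messages are sent in total, matching Theorem 13.2.
tree = {"p_r": ["p_1", "p_2"], "p_1": ["p_3"], "p_2": [], "p_3": []}
print(spanning_tree_broadcast(tree, "p_r"))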

Using simple inductive arguments we will first prove a lemma that shows that by the end of round t, the message M reaches all processors at distance t (or less) from p_r in the spanning tree.

Lemma 13.1 In every admissible execution of the broadcast algorithm in the synchronous model, every processor at distance t from p_r in the spanning tree receives the message M in round t.

Proof. We proceed by induction on the distance t of a processor from p_r. First let t = 1. It follows from the algorithm that each child of p_r receives the message M in round 1.

Assume that each processor at distance t-1 received the message M in round t-1. We need to show that each processor p_i at distance t receives the message in round t. Let p_j be the parent of p_i in the spanning tree. Since p_j is at distance t-1 from p_r, by the induction hypothesis, p_j received M in round t-1. By the algorithm, p_i will hence receive M in round t.

By Lemma 13.1 the time complexity of the broadcast algorithm is d, where d is the depth of the spanning tree. Now since d is at most n-1 (when the spanning tree is a chain) we have:

Theorem 13.2 There is a synchronous broadcast algorithm for n processors with message complexity n-1 and time complexity d, when a rooted spanning tree with depth d is known in advance.

We now move to an asynchronous system and apply a similar analysis.

Lemma 13.3 In every admissible execution of the broadcast algorithm in the asynchronous model, every processor at distance t from p_r in the spanning tree receives the message M by time t.

Proof. We proceed by induction on the distance t of a processor from p_r. First let t = 1. It follows from the algorithm that M is initially in transit to each processor p_i at distance 1 from p_r. By the definition of time complexity for the asynchronous model, p_i receives M by time 1.

Assume that each processor at distance t-1 received the message M by time t-1. We need to show that each processor p_i at distance t receives the message by time t. Let p_j be the parent of p_i in the spanning tree. Since p_j is at distance t-1 from p_r, by the induction hypothesis, p_j sends M to p_i when it receives M, that is, by time t-1. By the algorithm, p_i will hence receive M by time t.

We immediately obtain:

Theorem 13.4 There is an asynchronous broadcast algorithm for n processors with message complexity n-1 and time complexity d, when a rooted spanning tree with depth d is known in advance.

13.2.2 Construction of a spanning tree

The asynchronous algorithm called Flood, discussed next, constructs a spanning tree rooted at a designated processor r. The algorithm is similar to the Depth First Search (DFS) algorithm. However, unlike DFS where there is just one processor with “global knowledge” about the graph, in the Flood algorithm, each processor has “local knowledge” about the graph, processors coordinate their work by exchanging messages, and processors and messages may get delayed arbitrarily. This makes the design and analysis of the Flood algorithm challenging, because we need to show that the algorithm indeed constructs a spanning tree despite a conspiratorial selection of these delays.

Algorithm description.

Each processor has four local variables. The links adjacent to a processor are identified with distinct numbers starting from 1 and stored in a local variable called neighbours. We will say that the spanning tree has been constructed when the variable parent stores the identifier of the link leading to the parent of the processor in the spanning tree, except that this variable is NONE for the designated processor r; children is a set of identifiers of the links leading to the children processors in the tree; and other is a set of identifiers of all other links. So the knowledge about the spanning tree may be “distributed” across processors.

The code of each processor is composed of segments. There is a segment (lines 1–4) that describes how the local variables of a processor are initialised. Recall that the local variables are initialised that way before time 0. The next three segments (lines 5–10, 11–14 and 15–18) describe the instructions that any processor executes in response to having received a message: <adopt>, <approved> or <rejected>. The last segment (lines 19–21) is only included in the code of processor r. This segment is executed only when the local variable parent of processor r is NIL. At some point of time, it may happen that more than one segment can be executed by a processor (e.g., because the processor received <adopt> messages from two processors). Then the processor executes the segments serially, one by one (segments of any given processor are never executed concurrently). However, instructions of different processors may be arbitrarily interleaved during an execution. Every message that can be processed is eventually processed and every segment that can be executed is eventually executed (fairness).

Flood

       Code for any processor v, 1 ≤ v ≤ n
  1  INITIALISATION
  2    parent ← NIL
  3    children ← ∅
  4    other ← ∅
        
  5  PROCESS MESSAGE <adopt> that has arrived on link j
  6    IF parent = NIL
  7       THEN parent ← j
  8          SEND <approved> to link j
  9          SEND <adopt> to all links in neighbours \ {j}
 10       ELSE SEND <rejected> to link j
        
 11  PROCESS MESSAGE <approved> that has arrived on link j
 12    children ← children ∪ {j}
 13    IF children ∪ other = neighbours \ {parent}
 14       THEN TERMINATE
        
 15  PROCESS MESSAGE <rejected> that has arrived on link j
 16    other ← other ∪ {j}
 17    IF children ∪ other = neighbours \ {parent}
 18       THEN TERMINATE
       Extra code for the designated processor r
 19  IF parent = NIL
 20    THEN parent ← NONE
 21       SEND <adopt> to all links in neighbours

Let us outline how the algorithm works. The designated processor r sends an <adopt> message to all its neighbours, and assigns NONE to the parent variable (NIL and NONE are two distinguished values, different from any natural number), so that it never again sends the message to any neighbour.

When a processor processes message <adopt> for the first time, the processor assigns to its own parent variable the identifier of the link on which the message has arrived, responds with an <approved> message to that link, and forwards an <adopt> message to every other link. However, when a processor processes message <adopt> again, then the processor responds with a <rejected> message, because the parent variable is no longer NIL.

When a processor processes message <approved>, it adds the identifier of the link on which the message has arrived to the set children. It may turn out that the sets children and other combined form identifiers of all links adjacent to the processor except for the identifier stored in the parent variable. In this case the processor enters a terminating state.

When a processor processes message <rejected>, the identifier of the link is added to the set other. Again, when the union of children and other is large enough, the processor enters a terminating state.
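Before turning to the correctness proof, the following compact Python sketch simulates Flood in a single process; the random choice of which pending message to deliver next mimics arbitrary (but fair) delays, and the graph, processor names and helper structures are illustrative assumptions.

import random

def flood(adjacency, root):
    """Return the parent link chosen by each processor (NONE is modelled by
    letting the designated root point to itself)."""
    parent = {v: None for v in adjacency}                # parent = NIL everywhere
    parent[root] = root                                   # root: parent = NONE
    pending = [(root, w, "adopt") for w in adjacency[root]]
    while pending:
        k = random.randrange(len(pending))                # arbitrary delivery order
        sender, receiver, kind = pending.pop(k)
        if kind == "adopt":
            if parent[receiver] is None:
                parent[receiver] = sender                 # adopt the sender as parent
                pending += [(receiver, w, "adopt")
                            for w in adjacency[receiver] if w != sender]
                pending.append((receiver, sender, "approved"))
            else:
                pending.append((receiver, sender, "rejected"))
        # <approved> and <rejected> only drive the termination bookkeeping,
        # which this sketch does not track.
    return parent

g = {"r": ["a", "b"], "a": ["r", "b", "c"], "b": ["r", "a"], "c": ["a"]}
print(flood(g, "r"))    # e.g. {'r': 'r', 'a': 'r', 'b': 'r', 'c': 'a'}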

Correctness proof.

We now argue that Flood constructs a spanning tree. The key moments in the execution of the algorithm are when any processor assigns a value to its parent variable. These assignments determine the “shape” of the spanning tree. The facts that any processor eventually executes an instruction, any message is eventually delivered, and any message is eventually processed, ensure that the knowledge about these assignments spreads to neighbours. Thus the algorithm is expanding a subtree of the graph, albeit the expansion may be slow. Eventually, a spanning tree is formed. Once a spanning tree has been constructed, eventually every processor will terminate, even though some processors may have terminated even before the spanning tree has been constructed.

Lemma 13.5 For any 1 ≤ k ≤ n, there is a time t_k which is the first moment when there are exactly k processors whose parent variables are not NIL, and these processors and their parent variables form a tree rooted at r.

Proof. We prove the statement of the lemma by induction on k. For the base case, assume that k = 1. Observe that processor r eventually assigns NONE to its parent variable. Let t_1 be the moment when this assignment happens. At that time, the parent variable of any processor other than r is still NIL, because no <adopt> messages have been sent so far. Processor r and its parent variable form a tree with a single node and no arcs. Hence they form a rooted tree. Thus the inductive hypothesis holds for k = 1.

For the inductive step, suppose that 1 ≤ k < n and that the inductive hypothesis holds for k. Consider the time t_k which is the first moment when there are exactly k processors whose parent variables are not NIL. Because k < n, there is a non-tree processor. But the graph is connected, so there is a non-tree processor adjacent to the tree. (For any subset T of processors, a processor v is adjacent to T if and only if there is an edge in the graph from v to a processor in T.) Recall that, by definition, the parent variable of such a processor is NIL. By the inductive hypothesis, the k tree processors must have executed the lines of their code that send <adopt> messages, and so each either has already sent or will eventually send an <adopt> message to all its neighbours on links other than the parent link. So the non-tree processors adjacent to the tree have already received or will eventually receive <adopt> messages. Eventually, each of these adjacent processors will, therefore, assign a value other than NIL to its parent variable. Let t_{k+1} be the first moment when any processor performs such an assignment, and let us denote this processor by v. This cannot be a tree processor, because such a processor never again assigns any value to its parent variable. Could v be a non-tree processor that is not adjacent to the tree? It could not, because such a processor does not have a direct link to a tree processor, so it cannot receive <adopt> directly from the tree, and so this would mean that at some time between t_k and t_{k+1} some other non-tree processor u must have sent an <adopt> message to v, and so u would have to assign a value other than NIL to its parent variable some time after t_k but before t_{k+1}, contradicting the fact that t_{k+1} is the first such moment. Consequently, v is a non-tree processor adjacent to the tree, such that, at time t_{k+1}, v assigns to its parent variable the index of a link leading to a tree processor. Therefore, time t_{k+1} is the first moment when there are exactly k+1 processors whose parent variables are not NIL, and, at that time, these processors and their parent variables form a tree rooted at r. This completes the inductive step, and the proof of the lemma.

Theorem 13.6 Eventually each processor terminates, and when every processor has terminated, the subgraph induced by the parent variables forms a spanning tree rooted at r.

Proof. By Lemma 13.5, we know that there is a moment t_n which is the first moment when all processors and their parent variables form a spanning tree.

Is it possible that every processor has terminated before time t_n? By inspecting the code, we see that a processor terminates only after it has received <rejected> or <approved> messages from all its neighbours other than the one to which the parent link leads. A processor receives such messages only in response to <adopt> messages that the processor sends. Just before time t_n, there is still a processor that has not even sent its <adopt> messages. Hence, not every processor has terminated before time t_n.

Will every processor eventually terminate? We notice that, by time t_n, each processor either has already sent or will eventually send an <adopt> message to all its neighbours other than the one to which the parent link leads. Whenever a processor receives an <adopt> message, the processor responds with <rejected> or <approved>, even if the processor has already terminated. Hence, eventually, each processor will receive either a <rejected> or an <approved> message on each link to which the processor has sent an <adopt> message. Thus, eventually, each processor terminates.

We note that the fact that a processor has terminated does not mean that a spanning tree has already been constructed. In fact, it may happen that processors in a different part of the network have not even received any message, let alone terminated.

Theorem 13.7 The message complexity of Flood is O(e), where e is the number of edges in the graph G.

The proof of this theorem is left as Problem 13-1.

Exercises

13.2-1 It may happen that a processor has terminated even though another processor has not even received any message. Show a simple network and how to delay message delivery and processor computation to demonstrate that this can indeed happen.

13.2-2 It may happen that a processor has terminated but may still respond to a message. Show a simple network and how to delay message delivery and processor computation to demonstrate that this can indeed happen.

13.3 Ring algorithms

One often needs to coordinate the activities of processors in a distributed system. This can frequently be simplified when there is a single processor that acts as a coordinator. Initially, the system may not have any coordinator, or an existing coordinator may fail and so another may need to be elected. This creates the problem where processors must elect exactly one among them, a leader. In this section we study the problem for special types of networks—rings. We will develop an asynchronous algorithm for the problem. As we shall demonstrate, the algorithm has asymptotically optimal message complexity. In the current section, we will see a distributed analogue of the well-known divide-and-conquer technique often used in sequential algorithms to keep their time complexity low. The technique used in distributed systems helps reduce the message complexity.

13.3.1 The leader election problem

The leader election problem is to elect exactly one leader among a set of processors. Formally each processor has a local variable leader initially equal to NIL. An algorithm is said to solve the leader election problem if it satisfies the following conditions:

  1. in any execution, exactly one processor eventually assigns TRUE to its leader variable, all other processors eventually assign FALSE to their leader variables, and

  2. in any execution, once a processor has assigned a value to its leader variable, the variable remains unchanged.

Ring model.

We study the leader election problem on a special type of network—the ring. Formally, the graph that models a distributed system consists of n nodes that form a simple cycle; no other edges exist in the graph. The two links adjacent to a processor are labeled CW (Clock-Wise) and CCW (Counter Clock-Wise). Processors agree on the orientation of the ring, i.e., if a message is passed on in the CW direction n times, then it visits all n processors and comes back to the one that initially sent the message; the same holds for the CCW direction. Each processor has a unique identifier that is a natural number, i.e., the identifier of each processor is different from the identifier of any other processor; the identifiers do not have to be consecutive numbers 1, ..., n. Initially, no processor knows the identifier of any other processor. Also processors do not know the size n of the ring.

13.3.2 The leader election algorithm

Bully elects a leader among asynchronous processors p_1, ..., p_n. Identifiers of processors are used by the algorithm in a crucial way. Briefly speaking, each processor tries to become the leader, the processor that has the largest identifier among all processors blocks the attempts of other processors, declares itself to be the leader, and forces others to declare themselves not to be leaders.

Let us begin with a simpler version of the algorithm to exemplify some of the ideas of the algorithm. Suppose that each processor sends a message around the ring containing the identifier of the processor. Any processor passes on such a message only if the identifier that the message carries is strictly larger than the identifier of the processor. Thus the message sent by the processor that has the largest identifier among the processors of the ring will always be passed on, and so it will eventually travel around the ring and come back to the processor that initially sent it. The processor can detect that such a message has come back, because no other processor sends a message with this identifier (identifiers are distinct). We observe that no other message will make it all around the ring, because the processor with the largest identifier will not pass it on. We could say that the processor with the largest identifier “swallows” the messages that carry smaller identifiers. Then the processor becomes the leader and sends a special message around the ring forcing all others to decide not to be leaders. The algorithm has O(n^2) message complexity, because each processor induces at most n messages, and the leader induces n extra messages; and one can assign identifiers to processors and delay processors and messages in such a way that the messages sent by a constant fraction of the processors are passed on around the ring for a constant fraction of n hops. The algorithm can be improved so as to reduce the message complexity to O(n lg n), and such an improved algorithm will be presented in the remainder of the section.
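A minimal Python sketch of this simplified election, run as a round-by-round simulation in one direction of the ring (the identifiers and the message counter are illustrative assumptions; the final round of "not leader" messages is omitted):

def simple_ring_election(ids):
    """ids[j] is the identifier of processor j; j forwards to (j+1) mod n."""
    n = len(ids)
    in_transit = {(j + 1) % n: ids[j] for j in range(n)}   # everyone sends its id
    messages = n
    while True:
        next_transit = {}
        for j, ident in in_transit.items():
            if ident == ids[j]:
                return ident, messages                     # own id came back: leader
            if ident > ids[j]:
                next_transit[(j + 1) % n] = ident          # pass the larger id on
                messages += 1
            # otherwise processor j swallows the message
        in_transit = next_transit

print(simple_ring_election([3, 7, 1, 5]))                  # (7, number of messages)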

The key idea of the Bully algorithm is to make sure that not too many messages travel far, which will ensure O(n lg n) message complexity. Specifically, the activity of any processor is divided into phases. At the beginning of a phase, a processor sends “probe” messages in both directions: CW and CCW. These messages carry the identifier of the sender and a certain “time-to-live” value that limits the number of hops that each message can make. The probe message may be passed on by a processor provided that the identifier carried by the message is larger than the identifier of the processor. When the message reaches the limit, and has not been swallowed, then it is “bounced back”. Hence when the initial sender receives two bounced back messages, one from each direction, then the processor is certain that there is no processor with a larger identifier up until the limit in the CW nor the CCW direction, because otherwise such a processor would swallow a probe message. Only then does the processor enter the next phase through sending probe messages again, this time with the time-to-live value increased by a factor of 2, in an attempt to find out whether there is a processor with a larger identifier in a twice as large neighbourhood. As a result, a probe message that the processor sends will make many hops only when there is no processor with a larger identifier in a large neighbourhood of the processor. Therefore, fewer and fewer processors send messages that can travel longer and longer distances. Consequently, as we will soon argue in detail, the message complexity of the algorithm is O(n lg n).

We detail the Bully algorithm. Each processor has five local variables. The variable id stores the unique identifier of the processor. The variable leader stores TRUE when the processor decides to be the leader, and FALSE when it decides not to be the leader. The remaining three variables are used for bookkeeping: asleep determines if the processor has ever sent a <probe,id,0,0> message that carries the identifier id of the processor. Any processor may send a <probe,id,phase,2^phase> message in both directions (CW and CCW) for different values of phase. Each time such a message is sent, a <reply,id,phase> message may be sent back to the processor. The variables CWreplied and CCWreplied are used to remember whether the replies have already been processed by the processor.

The code of each processor is composed of five segments. The first segment (lines 1–5) initialises the local variables of the processor. The second segment (lines 6–8) can only be executed when the local variable asleep is TRUE. The remaining three segments (lines 9–17, 18–25, and 26–30) describe the actions that the processor takes when it processes each of the three types of messages: <probe,ids,phase,ttl>, <reply,ids,phase> and <terminate> respectively. The messages carry parameters ids, phase and ttl that are natural numbers.

We now describe how the algorithm works. Recall that we assume that the local variables of each processor have been initialised before time 0 of the global clock. Each processor eventually sends a <probe,id,0,0> message carrying the identifier id of the processor. At that time we say that the processor enters phase number zero. In general, when a processor sends a message <probe,id,phase,2^phase>, we say that the processor enters phase number phase. Message <probe,id,0,0> is never sent again because FALSE is assigned to asleep in line 7. It may happen that by the time this message is sent, some other messages have already been processed by the processor.

When a processor processes a message <probe,ids,phase,ttl> that has arrived on link CW (the link leading in the clock-wise direction), then the actions depend on the relationship between the parameter ids and the identifier id of the processor. If ids is smaller than id, then the processor does nothing else (the processor swallows the message). If ids is equal to id and the processor has not yet decided, then, as we shall see, the probe message that the processor sent has circulated around the entire ring. Then the processor sends a <terminate> message, decides to be the leader, and terminates (the processor may still process messages after termination). If ids is larger than id, then the actions of the processor depend on the value of the parameter ttl (time-to-live). When the value of ttl is strictly larger than zero, then the processor passes on the probe message with ttl decreased by one. If, however, the value of ttl is already zero, then the processor sends back (in the CW direction) a reply message. Symmetric actions are executed when the <probe,ids,phase,ttl> message has arrived on link CCW, in the sense that the directions of sending messages are respectively reversed – see the code for details.

Bully

       Code for any processor v, 1 ≤ v ≤ n
  1  INITIALISATION
  2    asleep ← TRUE
  3    CWreplied ← FALSE
  4    CCWreplied ← FALSE
  5    leader ← NIL
        
  6  IF asleep
  7    THEN asleep ← FALSE
  8       SEND <probe,id,0,0> to links CW and CCW
        
  9  PROCESS MESSAGE <probe,ids,phase,ttl> that has arrived
          on link CW (resp. CCW)
 10    IF id = ids and leader = NIL
 11       THEN SEND <terminate> to link CCW
 12          leader ← TRUE
 13          TERMINATE
 14    IF ids > id and ttl > 0
 15       THEN SEND <probe,ids,phase,ttl-1>
             to link CCW (resp. CW)
 16    IF ids > id and ttl = 0
 17       THEN SEND <reply,ids,phase> to link CW (resp. CCW)
        
 18  PROCESS MESSAGE <reply,ids,phase> that has arrived on link CW (resp. CCW)
 19    IF id ≠ ids
 20       THEN SEND <reply,ids,phase> to link CCW (resp. CW)
 21       ELSE CWreplied ← TRUE (resp. CCWreplied)
 22    IF CWreplied and CCWreplied
 23       THEN CWreplied ← FALSE
 24          CCWreplied ← FALSE
 25          SEND <probe,id,phase+1,2^(phase+1)>
                to links CW and CCW
        
 26  PROCESS MESSAGE <terminate> that has arrived on link CW
 27    IF leader = NIL
 28       THEN SEND <terminate> to link CCW
 29          leader ← FALSE
 30          TERMINATE

When a processor processes a message <reply,ids,phase> that has arrived on link CW, then the processor first checks if ids is different from the identifier id of the processor. If so, the processor merely passes on the message. However, if ids = id, then the processor records the fact that a reply has been received from direction CW, by assigning TRUE to CWreplied. Next the processor checks if both the CWreplied and CCWreplied variables are TRUE. If so, the processor has received replies from both directions. Then the processor assigns FALSE to both variables. Next the processor sends a probe message. This message carries the identifier id of the processor, the next phase number phase+1, and an increased time-to-live parameter 2^(phase+1). Symmetric actions are executed when <reply,ids,phase> has arrived on link CCW.

The last type of message that a processor can process is <terminate>. The processor checks if it has already decided to be or not to be the leader. When no decision has been made so far, the processor passes on the <terminate> message and decides not to be the leader. This message eventually reaches a processor that has already decided, and then the message is no longer passed on.
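The full asynchronous message flow is involved, but the phase structure itself can be captured in a few lines. The Python sketch below simulates the phases synchronously: in phase i a still-active candidate survives exactly when no larger identifier lies within 2^i hops in either direction, which is precisely the condition under which both of its probes are bounced back rather than swallowed. The ring layout and the driver call are illustrative assumptions.

def bully_phases(ids):
    """Return the identifier of the elected leader on the ring given by ids."""
    n = len(ids)
    candidates = set(range(n))              # processors that enter the current phase
    phase = 0
    while len(candidates) > 1:
        ttl = 2 ** phase
        survivors = set()
        for j in candidates:
            # the probes of processor j survive iff ids[j] is the maximum
            # within ttl hops in both the CW and the CCW direction
            neighbourhood = [ids[(j + d) % n] for d in range(-ttl, ttl + 1)]
            if max(neighbourhood) == ids[j]:
                survivors.add(j)
        candidates = survivors
        phase += 1
    return ids[candidates.pop()]

print(bully_phases([9, 2, 14, 4, 11, 7]))   # elects 14, the largest identifier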

13.3.3 Analysis of the leader election algorithm

We begin the analysis by showing that the algorithm Bully solves the leader election problem.

Theorem 13.8 Bully solves the leader election problem on any ring with asynchronous processors.

Proof. We need to show that the two conditions listed at the beginning of the section are satisfied. The key idea that simplifies the argument is to focus on one processor. Consider the processor p with the maximum id among all processors in the ring. This processor eventually executes lines 6–8. Then the processor sends <probe,id,0,0> messages in the CW and CCW directions. Note that whenever the processor sends <probe,id,phase,2^phase> messages, each such message is always passed on by other processors, until the ttl parameter of the message drops down to zero, or the message travels around the entire ring and arrives at p. If the message never arrives at p, then a processor eventually receives the probe message with ttl equal to zero, and the processor sends a reply back to p. Then, p eventually receives <reply,id,phase> messages from each direction, and enters phase number phase+1 by sending probe messages <probe,id,phase+1,2^(phase+1)> in both directions. These messages carry a larger time-to-live value compared to the value from the previous phase number phase. Since the ring is finite, eventually ttl becomes so large that processor p receives a probe message that carries the identifier of p. Note that p will eventually receive two such messages. The first time when p processes such a message, the processor sends a <terminate> message and terminates as the leader. The second time when p processes such a message, lines 11–13 are not executed, because the variable leader is no longer NIL. Note that no other processor can execute lines 11–13, because a probe message originated at another processor cannot travel around the entire ring, since p is on the way, and p would swallow the message; and since identifiers are distinct, no other processor sends a probe message that carries the identifier of processor p. Thus no processor other than p can assign TRUE to its leader variable. Any processor other than p will receive the <terminate> message, assign FALSE to its leader variable, and pass on the message. Finally, the <terminate> message will arrive back at p, and p will not pass it on anymore. The argument presented thus far ensures that eventually exactly one processor assigns TRUE to its leader variable, all other processors assign FALSE to their leader variables, and once a processor has assigned a value to its leader variable, the variable remains unchanged.

Our next task is to give an upper bound on the number of messages sent by the algorithm. The subsequent lemma shows that the number of processors that can enter a phase decays exponentially as the phase number increases.

Lemma 13.9 Given a ring of size n, the number of processors that enter phase number i ≥ 0 is at most n/2^(i-1).

Proof. There are exactly n processors that enter phase number 0, because each processor eventually sends the <probe,id,0,0> message. The bound stated in the lemma says that the number of processors that enter phase 0 is at most 2n, so the bound evidently holds for i = 0. Let us consider any of the remaining cases, i.e., let us assume that i ≥ 1. Suppose that a processor v enters phase number i, and so by definition it sends the message <probe,id,i,2^i>. In order for a processor to send such a message, each of the two probe messages <probe,id,i-1,2^(i-1)> that the processor sent in the previous phase in both directions must have made 2^(i-1) hops always arriving at a processor with a strictly lower identifier than the identifier of v (because otherwise, if a probe message arrives at a processor with a strictly larger or the same identifier, then the message is swallowed, and so a reply message is not generated, and consequently v cannot enter phase number i). As a result, if a processor enters phase number i, then there is no other processor within 2^(i-1) hops in either direction that can ever enter the phase. Suppose that there are x processors that enter phase i. We can associate with each such processor v the 2^(i-1) consecutive processors that follow v in the CW direction. This association assigns 2^(i-1)+1 distinct processors to each of the x processors. So there must be at least x(2^(i-1)+1) distinct processors in the ring. Hence x(2^(i-1)+1) ≤ n, and so we can weaken this bound by dropping the +1, and conclude that x ≤ n/2^(i-1), as desired.

Theorem 13.10 The algorithm Bully has O(n lg n) message complexity, where n is the size of the ring.

Proof. Note that any processor in phase i sends messages that are intended to travel 2^i hops away and 2^i hops back in each direction (CW and CCW). This contributes at most 4·2^i messages per processor that enters phase number i. The contribution may be smaller than 4·2^i if a probe message gets swallowed on the way away from the processor. Lemma 13.9 provides an upper bound on the number of processors that enter phase number i. What is the highest phase that a processor can ever enter? The number of processors that can be in phase i is at most n/2^(i-1). So when n/2^(i-1) < 1, then there can be no processor that ever enters phase i. Thus no processor can enter any phase beyond phase number h = 1 + ⌈lg n⌉, because n < 2^((h+1)-1). Finally, a single processor sends one termination message that travels around the ring once, which contributes n further messages. So for the total number of messages sent by the algorithm we get the O(n lg n) upper bound.
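For concreteness, a short way to assemble these pieces (the per-phase cost of at most 4·2^i messages for each of the at most n/2^(i-1) processors that enter phase i, plus the n messages of the final <terminate> round) is the sum

\[
n \;+\; \sum_{i=0}^{\,1+\lceil \lg n\rceil} 4\cdot 2^{i}\cdot \frac{n}{2^{\,i-1}}
  \;=\; n \;+\; 8n\,\bigl(2+\lceil \lg n\rceil\bigr)
  \;=\; O(n\lg n).
\]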

Burns furthermore showed that the asynchronous leader election algorithm is asymptotically optimal: any uniform algorithm solving the leader election problem in an asynchronous ring must send a number of messages at least proportional to n lg n.

Theorem 13.11 Any uniform algorithm for electing a leader in an asynchronous ring sends Ω(n lg n) messages.

The proof, for any given algorithm, is based on constructing certain executions of the algorithm on rings of size n/2. Then two rings of size n/2 are pasted together in such a way that the constructed executions on the smaller rings are combined, and Θ(n) additional messages are received. This construction strategy yields the desired logarithmic multiplicative overhead.

Exercises

13.3-1 Show that the simplified Bully algorithm has Ω(n^2) message complexity, by appropriately assigning identifiers to processors on a ring of size n, and by determining how to delay processors and messages.

13.3-2 Show that the algorithm Bully has O(n lg n) message complexity.

13.4 Fault-tolerant consensus

The algorithms presented so far are based on the assumption that the system on which they run is reliable. Here we present selected algorithms for unreliable distributed systems, where the active (or correct) processors need to coordinate their activities based on common decisions.

It is inherently difficult for processors to reach agreement in a distributed setting prone to failures. Consider the deceptively simple problem of two failure-free processors attempting to agree on a common bit using a communication medium where messages may be lost. This problem is known as the two generals problem. Here two generals must coordinate an attack using couriers that may be destroyed by the enemy. It turns out that it is not possible to solve this problem using a finite number of messages. We prove this fact by contradiction. Assume that there is a protocol used by processors A and B involving a finite number of messages. Let us consider such a protocol that uses the smallest number of messages, say k messages. Assume without loss of generality that the last (k-th) message is sent from A to B. Since this final message is not acknowledged by B, A must determine the decision value whether or not B receives this message. Since the message may be lost, B must determine the decision value without receiving this final message. But now both A and B decide on a common value without needing the k-th message. In other words, there is a protocol that uses only k-1 messages for the problem. But this contradicts the assumption that k is the smallest number of messages needed to solve the problem.

In the rest of this section we consider agreement problems where the communication medium is reliable, but where the processors are subject to two types of failures: crash failures, where a processor stops and does not perform any further actions, and Byzantine failures, where a processor may exhibit arbitrary, or even malicious, behaviour as the result of the failure. The algorithms presented deal with the so called consensus problem, first introduced by Lamport, Pease, and Shostak. The consensus problem is a fundamental coordination problem that requires processors to agree on a common output, based on their possibly conflicting inputs.

13.4.1 The consensus problem

We consider a system in which each processor p_i has a special state component x_i, called the input, and y_i, called the output (also called the decision). The variable x_i initially holds a value from some well ordered set of possible inputs and y_i is undefined. Once an assignment to y_i has been made, it is irreversible. Any solution to the consensus problem must guarantee:

  • Termination: In every admissible execution, y_i is eventually assigned a value, for every nonfaulty processor p_i.

  • Agreement: In every execution, if y_i and y_j are assigned, then y_i = y_j, for all nonfaulty processors p_i and p_j. That is, nonfaulty processors do not decide on conflicting values.

  • Validity: In every execution, if for some value v, x_i = v for all processors p_i, and if y_i is assigned for some nonfaulty processor p_i, then y_i = v. That is, if all processors have the same input value, then any value decided upon must be that common input.

Note that in the case of crash failures this validity condition is equivalent to requiring that every nonfaulty decision value is the input of some processor. Once a processor crashes it is of no interest to the algorithm, and no requirements are put on its decision.

We begin by presenting a simple algorithm for consensus in a synchronous message passing system with crash failures.

13.4.2 Consensus with crash failures

Since the system is synchronous, an execution of the system consists of a series of rounds. Each round consists of the delivery of all messages, followed by one computation event for every processor. The set of faulty processors can be different in different executions, that is, it is not known in advance. Let F be a subset of at most f processors, the faulty processors. Each round contains exactly one computation event for the processors not in F and at most one computation event for every processor in F. Moreover, if a processor in F does not have a computation event in some round, it does not have such an event in any further round. In the last round in which a faulty processor has a computation event, an arbitrary subset of its outgoing messages are delivered.

Consensus-with-Crash-Failures

       Code for processor p_i, 0 ≤ i ≤ n-1.
       Initially V = {x}
       round k, 1 ≤ k ≤ f+1
  1  SEND {v ∈ V : p_i has not already sent v} to all processors
  2  RECEIVE S_j from p_j, 0 ≤ j ≤ n-1, j ≠ i
  3  V ← V ∪ ⋃_{j≠i} S_j
  4  IF k = f+1
  5    THEN y ← min(V)
In the previous algorithm, which is based on an algorithm by Dolev and Strong, each processor maintains a set V of the values it knows to exist in the system. Initially, the set contains only its own input. In later rounds the processor updates its set by joining it with the sets received from other processors. It then broadcasts any new additions to the set to all processors. This continues for f+1 rounds, where f is the maximum number of processors that can fail. At this point, the processor decides on the smallest value in its set of values.
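A round-by-round Python sketch of this algorithm follows; the crash model (which processors stop sending, and from which round on) is an illustrative assumption supplied by the caller, and crashing exactly at a round boundary is a simplification of the partial-delivery behaviour described above.

def crash_consensus(inputs, f, crash_from=None):
    """inputs[i] is x_i; crash_from maps a processor index to the first round
    in which it no longer sends anything.  Returns the decisions y_i of the
    processors that never crash."""
    crash_from = crash_from or {}
    n = len(inputs)
    never = f + 2                                  # sentinel: "never crashes"
    V = [{v} for v in inputs]                      # V_i initially {x_i}
    for rnd in range(1, f + 2):                    # rounds 1, ..., f+1
        outgoing = [set(V[i]) for i in range(n)]
        for i in range(n):
            if crash_from.get(i, never) <= rnd:
                continue                           # i has crashed: sends nothing
            for j in range(n):
                V[j] |= outgoing[i]                # deliver i's whole set to j
        # (resending all of V_i instead of only the new values keeps the sketch short)
    return {i: min(V[i]) for i in range(n) if crash_from.get(i, never) > f + 1}

# Processor 2 crashes before it ever sends, so its input 8 may be lost,
# but all nonfaulty processors still agree (here on 3).
print(crash_consensus([5, 3, 8, 3], f=1, crash_from={2: 1}))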

To prove the correctness of this algorithm we first notice that the algorithm requires exactly f+1 rounds. This implies termination. Moreover the validity condition is clearly satisfied since the decision value is the input of some processor. It remains to show that the agreement condition holds. We prove the following lemma:

Lemma 13.12 In every execution, at the end of round f+1, V_i = V_j for every two nonfaulty processors p_i and p_j.

Proof. We prove the claim by showing that if x ∈ V_i at the end of round f+1 then x ∈ V_j at the end of round f+1, for every pair of nonfaulty processors p_i and p_j.

Let r be the first round in which x is added to V_i for any nonfaulty processor p_i. If x is initially in V_i let r = 0. If r ≤ f then, in round r+1 ≤ f+1, p_i sends x to each p_j, causing p_j to add x to V_j, if it is not already present.

Otherwise, suppose r = f+1 and let p_j be a nonfaulty processor that receives x for the first time in round f+1. Then there must be a chain of f+1 processors p_{i_1}, ..., p_{i_{f+1}} that transfers the value x to p_j. Hence p_{i_1} sends x to p_{i_2} in round one, and so on, until p_{i_{f+1}} sends x to p_j in round f+1. But then p_{i_1}, ..., p_{i_{f+1}} is a chain of f+1 processors. Hence at least one of them, say p_{i_k}, must be nonfaulty. Hence p_{i_k} adds x to its set in round k-1 < r, contradicting the minimality of r.

This lemma, together with the aforementioned observations, implies the following theorem.

Theorem 13.13 The previous consensus algorithm solves the consensus problem in the presence of f crash failures in a message passing system in f+1 rounds.

The following theorem was first proved by Fischer and Lynch for Byzantine failures. Dolev and Strong later extended it to crash failures. The theorem shows that the previous algorithm, assuming the given model, is optimal.

Theorem 13.14 There is no algorithm which solves the consensus problem in fewer than f+1 rounds in the presence of f crash failures, if n ≥ f+2.

What if failures are not benign? That is, can the consensus problem be solved in the presence of Byzantine failures? And if so, how?

13.4.3 Consensus with Byzantine failures

In a computation step of a faulty processor in the Byzantine model, the new state of the processor and the message sent are completely unconstrained. As in the reliable case, every processor takes a computation step in every round and every message sent is delivered in that round. Hence a faulty processor can behave arbitrarily and even maliciously. For example, it could send different messages to different processors. It can even appear that the faulty processors coordinate with each other. A faulty processor can also mimic the behaviour of a crashed processor by failing to send any messages from some point on.

In this case, the definition of the consensus problem is the same as in the message passing model with crash failures. The validity condition in this model, however, is not equivalent with requiring that every nonfaulty decision value is the input of some processor. Like in the crash case, no conditions are put on the output of faulty processors.

13.4.4 Lower bound on the ratio of faulty processors

Pease, Shostak and Lamport first proved the following theorem.

Theorem 13.15 In a system with n processors and f Byzantine processors, there is no algorithm which solves the consensus problem if n ≤ 3f.

13.4.5 A polynomial algorithm

The following algorithm uses messages of constant size, takes 2(f+1) rounds, and assumes that n > 4f. It was presented by Berman and Garay.

This consensus algorithm for Byzantine failures contains f+1 phases, each taking two rounds. Each processor has a preferred decision for each phase, initially its input value. At the first round of each phase, processors send their preferences to each other. Let maj_i be the majority value in the set of values received by processor p_i at the end of the first round of phase k. If no majority exists, a default value v⊥ is used. In the second round of the phase processor p_k, called the king of the phase, sends its majority value maj_k to all processors. If p_i receives more than n/2 + f copies of maj_i (in the first round of the phase) then it sets its preference for the next phase to be maj_i; otherwise it sets its preference to the phase king's preference, received in the second round of the phase. After f+1 phases, the processor decides on its preference. Each processor maintains a local array pref with n entries.

We prove correctness using the following lemmas. Termination is immediate. We next note the persistence of agreement:

Lemma 13.16 If all nonfaulty processors prefer v at the beginning of phase k, then they all prefer v at the end of phase k, for all k, 1 ≤ k ≤ f+1.

Proof. Since all nonfaulty processors prefer v at the beginning of phase k, they all receive at least n-f copies of v (including their own) in the first round of phase k. Since n > 4f, n-f > n/2 + f, implying that all nonfaulty processors will prefer v at the end of phase k.

Consensus-with-Byzantine-failures

       Code for processor p_i, 0 ≤ i ≤ n-1.
       Initially pref[i] = x, and pref[j] = v⊥ for any j ≠ i
       round 2k-1, 1 ≤ k ≤ f+1
  1  SEND <pref[i]> to all processors
  2  RECEIVE <v_j> from p_j and assign to pref[j], for all 0 ≤ j ≤ n-1, j ≠ i
  3  let maj be the majority value of pref[0], ..., pref[n-1] (v⊥ if none)
  4  let mult be the multiplicity of maj
       round 2k, 1 ≤ k ≤ f+1
  5  IF i = k
  6    THEN SEND <maj> to all processors
  7  RECEIVE <king-maj> from p_k (v⊥ if none)
  8  IF mult > n/2 + f
  9    THEN pref[i] ← maj
 10    ELSE pref[i] ← king-maj
 11  IF k = f+1
 12    THEN y ← pref[i]

This implies the validity condition: if all processors start with the same input v they will continue to prefer v and finally decide on v in phase f+1. Agreement is achieved by the king breaking ties. Since each phase has a different king and there are f+1 phases, at least one phase has a nonfaulty king.

Lemma 13.17 Let k be a phase whose king p_k is nonfaulty. Then all nonfaulty processors finish phase k with the same preference.

Proof. Suppose all nonfaulty processors use the majority value received from the king for their preference. Since the king is nonfaulty, it sends the same message and hence all the nonfaulty preferences are the same.

Suppose a nonfaulty processor p_i uses its own majority value v for its preference. Thus p_i receives more than n/2 + f messages for v in the first round of phase k. Hence every processor, including p_k, receives more than n/2 messages for v in the first round of phase k and sets its majority value to v. Hence every nonfaulty processor has v for its preference.

Hence at the end of phase k all nonfaulty processors have the same preference, and by Lemma 13.16 they will decide on the same value at the end of the algorithm. Hence the algorithm has the agreement property and solves consensus.

Theorem 13.18 There exists an algorithm for n processors which solves the consensus problem in the presence of f Byzantine failures within 2(f+1) rounds using constant size messages, if n > 4f.
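A compact Python simulation of the phase-king structure is sketched below for binary inputs; the way Byzantine processors misbehave (sending independent random bits to each recipient) is an illustrative assumption, as are the processor indices used in the driver.

import random

def phase_king(inputs, f, byzantine=frozenset()):
    """Run f+1 phases over n > 4f processors and return the decisions of the
    nonfaulty processors (inputs are bits)."""
    n = len(inputs)
    pref = list(inputs)
    for k in range(f + 1):                         # phase k; its king is processor k
        # first round of the phase: everybody broadcasts its preference
        said = [[random.randint(0, 1) if i in byzantine else pref[i]
                 for _ in range(n)] for i in range(n)]
        maj, mult = [], []
        for j in range(n):
            received = [said[i][j] for i in range(n)]
            m = max(set(received), key=received.count)   # ties: arbitrary default
            maj.append(m)
            mult.append(received.count(m))
        # second round of the phase: the king broadcasts its majority value
        king_value = random.randint(0, 1) if k in byzantine else maj[k]
        for j in range(n):
            if j in byzantine:
                continue
            pref[j] = maj[j] if mult[j] > n // 2 + f else king_value
    return [pref[j] for j in range(n) if j not in byzantine]

# Five processors, one Byzantine: the four nonfaulty ones keep their common input.
print(phase_king([1, 0, 1, 1, 1], f=1, byzantine={1}))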

13.4.6 Impossibility in asynchronous systems

As shown before, the consensus problem can be solved in synchronous systems in the presence of both crash (benign) and Byzantine (severe) failures. What about asynchronous systems? Under the assumption that the communication system is completely reliable, and the only possible failures are caused by unreliable processors, it can be shown that if the system is completely asynchronous then there is no consensus algorithm even in the presence of only a single processor failure. The result holds even if the processors only fail by crashing. The impossibility proof relies heavily on the system being asynchronous. This result was first shown in a breakthrough paper by Fischer, Lynch and Paterson. It is one of the most influential results in distributed computing.

The impossibility holds for both shared memory systems if only read/write registers are used, and for message passing systems. The proof first shows it for shared memory systems. The result for message passing systems can then be obtained through simulation.

Theorem 13.19 There is no consensus algorithm for a read/write asynchronous shared memory system that can tolerate even a single crash failure.

And through simulation the following assertion can be shown.

Theorem 13.20 There is no algorithm for solving the consensus problem in an asynchronous message passing system with n processors, one of which may fail by crashing.

Note that these results do not mean that consensus can never be solved in asynchronous systems. Rather the results mean that there are no algorithms that guarantee termination, agreement, and validity, in all executions. It is reasonable to assume that agreement and validity are essential, that is, if a consensus algorithm terminates, then agreement and validity are guaranteed. In fact there are efficient and useful algorithms for the consensus problem that are not guaranteed to terminate in all executions. In practice this is often sufficient because the special conditions that cause non-termination may be quite rare. Additionally, since in many real systems one can make some timing assumption, it may not be necessary to provide a solution for asynchronous consensus.

Exercises

13.4-1 Prove the correctness of the algorithm Consensus-with-Crash-Failures.

13.4-2 Prove the correctness of the consensus algorithm in the presence of Byzantine failures.

13.4-3 Prove Theorem 13.20.

13.5 Logical time, causality, and consistent state

In a distributed system it is often useful to compute a global state that consists of the states of all processors. Having access to the global state allows us to reason about system properties that depend on all processors, for example to be able to detect a deadlock. One may attempt to compute a global state by stopping all processors, and then gathering their states to a central location. Such a method is ill-suited for many distributed systems that must continue computation at all times. This section discusses how one can compute a global state that is quite intuitive, yet consistent, in a precise sense. We first discuss a distributed algorithm that imposes a global order on instructions of processors. This algorithm creates the illusion of a global clock available to processors. Then we introduce the notion of one instruction causally affecting another instruction, and an algorithm for computing which instruction affects which. The notion turns out to be very useful in defining a consistent global state of a distributed system. We close the section with distributed algorithms that compute a consistent global state of a distributed system.

13.5.1 Logical time

The design of distributed algorithms is easier when processors have access to a (Newtonian) global clock, because then each event that occurs in the distributed system can be labeled with the reading of the clock, processors agree on the ordering of any events, and this consensus can be used by algorithms to make decisions. However, the construction of a global clock is difficult. There exist algorithms that approximate the ideal global clock by periodically synchronising drifting local hardware clocks. However, it is possible to totally order events without using hardware clocks. This idea is called the logical clock.

Recall that an execution is an interleaving of instructions of the programs. Each instruction can be either a computational step of a processor, or sending a message, or receiving a message. Any instruction is performed at a distinct point of global time. However, the reading of the global clock is not available to processors. Our goal is to assign values of the logical clock to each instruction, so that these values appear to be readings of the global clock. That is, it is possible to postpone or advance the instants when instructions are executed in such a way that each instruction that has been assigned a value of the logical clock is executed exactly at that instant of the global clock, and that the resulting execution is a valid one, in the sense that it can actually occur when the algorithm is run with the modified delays.

The Logical-Clock algorithm assigns logical time to each instruction. Each processor has a local variable called counter. This variable is initially zero and it gets incremented every time the processor executes an instruction. Specifically, when a processor executes any instruction other than sending or receiving a message, the variable counter gets incremented by one. When a processor sends a message, it increments the variable by one, and attaches the resulting value to the message. When a processor receives a message, then the processor retrieves the value attached to the message, then calculates the maximum of this value and the current value of counter, increments the maximum by one, and assigns the result to the counter variable. Note that every time an instruction is executed, the value of counter is incremented by at least one, and so it grows as the processor keeps on executing instructions. The value of logical time assigned to an instruction is defined as the pair (counter, id), where counter is the value of the variable counter right after the instruction has been executed, and id is the identifier of the processor. The values of logical time form a total order, where pairs are compared lexicographically. This logical time is also called Lamport time. The pair can equivalently be represented by a single number (a quotient formed from counter and id).
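A minimal Python rendering of the three counter-update rules (the class wrapper and the two-processor driver are illustrative assumptions):

class LamportClock:
    def __init__(self, proc_id):
        self.id = proc_id
        self.counter = 0

    def local_event(self):
        self.counter += 1                       # ordinary computation step
        return (self.counter, self.id)          # logical time of this instruction

    def send(self):
        self.counter += 1                       # sending is also an instruction
        return self.counter                     # value attached to the message

    def receive(self, attached):
        self.counter = max(self.counter, attached) + 1
        return (self.counter, self.id)

p, q = LamportClock(1), LamportClock(2)
q.local_event()                                 # q's clock: (1, 2)
attached = p.send()                             # p's clock: (1, 1), attaches 1
print(q.receive(attached))                      # q's clock: (2, 2) -- after the send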

Remark 13.21 For any execution, logical time satisfies three conditions:

(i) if an instruction x is performed by a processor before an instruction y is performed by the same processor, then the logical time of x is strictly smaller than that of y,

(ii) any two distinct instructions of any two processors get assigned different logical times,

(iii) if an instruction x sends a message and an instruction y receives this message, then the logical time of x is strictly smaller than that of y.

Our goal now is to argue that the logical clock provides to processors the illusion of a global clock. Intuitively, the reason why such an illusion can be created is that we can take any execution of a deterministic algorithm, compute the logical time of each instruction, and run the execution again delaying or speeding up processors and messages in such a way that each instruction is executed at the instant of the global clock given by the instruction's logical time. Thus, without access to a hardware clock or other external measurements not captured in our model, the processors cannot distinguish the reading of a logical clock from the reading of a real global clock. Formally, the reason why the re-timed sequence is a valid execution that is indistinguishable from the original execution is summarised in the subsequent corollary that follows directly from Remark 13.21.

Corollary 13.22 For any execution α, let lt be the assignment of logical time to its instructions, and let β be the sequence of the instructions of α ordered by their logical time. Then for each processor, the subsequence of instructions executed by the processor in β is the same as the subsequence in α. Moreover, each message is received in β after it is sent in β.

13.5.2 Causality

In a system execution, an instruction can affect another instruction by altering the state of the computation in which the second instruction executes. We say that one instruction can causally affect (or influence) another, if the information that one instruction produces can be passed on to the other instruction. Recall that in our model of a distributed system, each instruction is executed at a distinct instant of global time, but processors do not have access to the reading of the global clock. Let us illustrate causality. If two instructions are executed by the same processor, then we could say that the instruction executed earlier can causally affect the instruction executed later, because it is possible that the result of executing the former instruction was used when the later instruction was executed. We stress the word possible, because in fact the later instruction may not use any information produced by the former. However, when defining causality, we simplify the problem of capturing how processors influence other processors, and focus on what is possible. If two instructions x and y are executed by two different processors, then we could say that instruction x can causally affect instruction y, when the processor that executes x sends a message when or after executing x, and the message is delivered before or during the execution of y at the other processor. It may also be the case that influence is passed on through intermediate processors or multiple instructions executed by processors, before reaching the second processor.

We will formally define the intuition that one instruction can causally affect another in terms of a relation called happens before, which relates pairs of instructions. The relation is defined for a given execution, i.e., we fix a sequence of instructions executed by the algorithm and the instants of global time when the instructions were executed, and define which pairs of instructions are related by the happens before relation. The relation is introduced in two steps. If instructions x and y are executed by the same processor, then we say that x happens before y if and only if x is executed before y. When x and y are executed by two different processors, then we say that x happens before y if and only if there is a chain of instructions and messages

snd_1, rcv_1, snd_2, rcv_2, ..., snd_k, rcv_k,

for k ≥ 1, such that snd_1 is either equal to x or is executed after x by the same processor that executes x; rcv_k is either equal to y or is executed before y by the same processor that executes y; rcv_h is executed before snd_{h+1} by the same processor, 1 ≤ h < k; and snd_h sends a message that is received by rcv_h, 1 ≤ h ≤ k. Note that no instruction happens before itself. We write x ≺ y when x happens before y. We omit the reference to the execution for which the relation is defined, because it will be clear from the context which execution we mean. We say that two instructions x and y are concurrent when neither x ≺ y nor y ≺ x. The question is how processors can determine if one instruction happens before another in a given execution according to our definition. This question can be answered through a generalisation of the Logical-Clock algorithm presented earlier. This generalisation is called vector clocks.

The Vector-Clocks algorithm allows processors to relate instructions, and this relation is exactly the happens before relation. Each processor p_i maintains a vector V_i of n integers. The j-th coordinate of the vector is denoted by V_i[j]. The vector is initialised to the zero vector (0, ..., 0). The vector is modified each time processor p_i executes an instruction, in a way similar to the way counter was modified in the Logical-Clock algorithm. Specifically, when a processor p_i executes any instruction other than sending or receiving a message, the coordinate V_i[i] gets incremented by one, and other coordinates remain intact. When a processor sends a message, it increments V_i[i] by one, and attaches the resulting vector to the message. When a processor receives a message, then the processor retrieves the vector W attached to the message, calculates the coordinate-wise maximum of the current vector V_i and the vector W, except for the coordinate V_i[i] that just gets incremented by one, and assigns the result to the variable V_i:

        
       V_i[i] ← V_i[i] + 1
       FOR ALL j ≠ i
          V_i[j] ← max(V_i[j], W[j])

We label each instruction x executed by processor p_i with the value of the vector V_i right after the instruction has been executed. The label is denoted by VT(x) and is called the vector timestamp of instruction x. Intuitively, VT(x) represents the knowledge of processor p_i about how many instructions each processor has executed at the moment when p_i executed instruction x. This knowledge may be obsolete.

Vector timestamps can be used to order instructions that have been executed. Specifically, given two instructions x and y, and their vector timestamps VT(x) and VT(y), we write VT(x) ≤ VT(y) when the vector VT(x) is majorised by the vector VT(y), i.e., for all k, the coordinate VT(x)[k] is at most the corresponding coordinate VT(y)[k]. We write VT(x) < VT(y) when VT(x) ≤ VT(y) but VT(x) ≠ VT(y).

The next theorem explains that the Vector-Clocks algorithm indeed implements the happens before relation, because we can decide if two instructions happen or not before each other, just by comparing the vector timestamps of the instructions.

Theorem 13.23 For any execution and any two instructions x and y, x happens before y if and only if VT(x) < VT(y).

Proof. We first show the forward implication. Suppose that x happens before y. Hence x and y are two different instructions. If the two instructions are executed on the same processor, then x must be executed before y. Only a finite number of instructions have been executed by the time y has been executed. The Vector-Clocks algorithm increases a coordinate by at least one as it calculates the vector timestamps of the instructions from x until y inclusive, and no coordinate is ever decreased. Thus VT(x) < VT(y). If x and y were executed on different processors, then by the definition of the happens before relation, there must be a finite chain of instructions and messages leading from x to y. But then by the Vector-Clocks algorithm, the value of a coordinate of the vector timestamp gets increased at each move along the chain, and so again VT(x) < VT(y).

Now we show the reverse implication. Suppose that it is not the case that . We consider a few subcases, always concluding that it is not the case that . First, it could be the case that and are the same instruction. But then obviously the vector clocks assigned to and are the same, and so it cannot be the case that . Let us, therefore, assume that and are different instructions. If they are executed by the same processor, then cannot be executed before , and so is executed after . Thus, by monotonicity of vector timestamps, , and so it is not the case that . The final subcase is when and are executed by two distinct processors and . Let us focus on the component of vector clock of processor right after was executed. Let its value be . Recall that other processors can only increase the value of their components by adopting the value sent by other processors. Hence, in order for the value of component of processor to be or more at the moment is executed, there must be a chain of instructions and messages that passes a value at least , originating at processor . This chain starts at or at an instruction executed by subsequent to . But the existence of such a chain would imply that happens before , which we assumed was not the case. So the component of vector clock is strictly smaller than the component of vector clock . Thus it cannot be the case that .

This theorem tells us that we can decide if two distinct instructions and are concurrent, by checking that it is not the case that nor is it the case that .

13.5.3 Consistent state

The happens before relation can be used to compute a global state of a distributed system, such that this state is in some sense consistent. Shortly, we will formally define the notion of consistency. Each processor executes instructions. A cut is defined as a vector of non-negative integers. Intuitively, the vector denotes the states of processors. Formally, denotes the number of instructions that processor has executed. Not all cuts correspond to collections of states of distributed processors that could be considered natural or consistent. For example, if a processor has received a message from and we record the state of in the cut by making appropriately large, but make so small that the cut contains the state of the sender before the moment when the message was sent, then we could say that such a cut is not natural: there are instructions recorded in the cut that are causally affected by instructions that are not recorded in the cut. We consider such cuts inconsistent and therefore undesirable. Formally, a cut is inconsistent when there are processors and such that the instruction number of processor is causally affected by an instruction subsequent to instruction number of processor . So in an inconsistent cut there is a message that “crosses” the cut in a backward direction. Any cut that is not inconsistent is called a consistent cut.

The Consistent-Cut algorithm uses vector timestamps to find a consistent cut. We assume that each processor is given the same cut as an input. Then processors must determine a consistent cut that is majorised by . Each processor has an infinite table of vectors. Processor executes instructions, and stores vector timestamps in consecutive entries of the table. Specifically, entry of the table is the vector timestamp of the -th instruction executed by the processor; we define to be the zero vector. Processor begins calculating a cut right after the moment when the processor has executed instruction number . The processor determines the largest number that is at most , such that the vector is majorised by . The vector that processors collectively find turns out to be a consistent cut.
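
A minimal Python sketch of the local search performed by each processor follows; the table of vector timestamps and the input cut are assumed to be available as ordinary lists, and all names are illustrative.

def majorised(v, w):
    # True if vector v is coordinate-wise at most vector w
    return all(a <= b for a, b in zip(v, w))

def consistent_cut_entry(table_i, K, i):
    # table_i[k] is the vector timestamp of the k-th instruction executed by
    # processor p_i; table_i[0] is the zero vector.  The processor returns the
    # largest k that is at most K[i] such that table_i[k] is majorised by K.
    for k in range(K[i], -1, -1):
        if majorised(table_i[k], K):
            return k        # always succeeds: table_i[0] is majorised by any cut

The vector assembled from the values returned by all the processors is the consistent cut of Theorem 13.24.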

Theorem 13.24 For any cut , the cut computed by the Consistent-Cut algorithm is a consistent cut majorised by .

Proof. First observe that there is no need to consider entries of further than . Each of these entries is not majorised by , because the -th coordinate of any of these vectors is strictly larger than . So we can indeed focus on searching among the first entries of . Let be the largest entry such that the vector is majorised by the vector . We know that such a vector exists, because is a zero vector, and such a vector is majorised by any cut .

We argue that is a consistent cut by way of contradiction. Suppose that the vector is an inconsistent cut. Then, by definition, there are processors and such that there is an instruction of processor subsequent to instruction number , such that happens before instruction number of processor . Recall that is the furthest entry of majorised by . So entry is not majorised by , and since all subsequent entries, including the one for instruction , can have only larger coordinates, the entries are not majorised by either. But, happens before instruction number , so entry can only have larger coordinates than respective coordinates of the entry corresponding to , and so cannot be majorised by either. This contradicts the assumption that is majorised by . Therefore, must be a consistent cut.

There is a trivial algorithm for finding a consistent cut. The algorithm picks . However, the Consistent-Cut algorithm is better in the sense that the consistent cut found is maximal. That this is indeed true is left as an exercise.

There is an alternative way to find a consistent cut. The Consistent-Cut algorithm requires that we attach vector timestamps to messages and remember vector timestamps for all instructions executed so far by the algorithm whose consistent cut we want to compute. This may be too costly. The algorithm called Distributed-Snapshot avoids this cost. In the algorithm, a processor initiates the calculation of a consistent cut by flooding the network with a special message that acts like a sword that cuts the execution of the algorithm consistently. In order to prove that the cut is indeed consistent, we require that messages are received by the recipient in the order they were sent by the sender. Such an ordering can be implemented using sequence numbers.

In the Distributed-Snapshot algorithm, each processor has a variable called counter that counts the number of instructions of algorithm executed by the processor so far. In addition the processor has a variable that will store the -th coordinate of the cut. This variable is initialised to . Since the variables counter only count the instructions of algorithm , the instructions of Distributed-Snapshot algorithm do not affect the counter variables. In some sense the snapshot algorithm runs in the “background”. Suppose that there is exactly one processor that can decide to take a snapshot of the distributed system. Upon deciding, the processor “floods” the network with a special message <Snapshot>. Specifically, the processor sends the message to all its neighbours and assigns counter to . Whenever a processor receives the message and the variable is still , then the processor sends <Snapshot> message to all its neighbours and assigns current to . The sending of <Snapshot> messages and assignment are done by the processor without executing any instruction of (we can think of Distributed-Snapshot algorithm as an “interrupt”). The algorithm calculates a consistent cut.
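
The handlers of a single processor can be sketched as follows. The network layer (a send_to_neighbours function) and the instruction counter of the underlying algorithm are assumed, and the "not yet set" initial value of the cut variable is represented by None; all names are illustrative.

# Sketch of one processor's part of Distributed-Snapshot (illustrative names).
class SnapshotParticipant:
    def __init__(self, send_to_neighbours):
        self.counter = 0                 # instructions of the underlying algorithm executed so far
        self.cut = None                  # this processor's coordinate of the cut, initially unset
        self.send_to_neighbours = send_to_neighbours

    def on_instruction(self):
        # called whenever the underlying algorithm executes one of its instructions
        self.counter += 1

    def initiate_snapshot(self):
        # only the single initiating processor calls this
        self._record_and_flood()

    def on_snapshot_message(self):
        # a <Snapshot> message arrived from a neighbour
        if self.cut is None:
            self._record_and_flood()

    def _record_and_flood(self):
        self.cut = self.counter          # record this coordinate of the cut
        self.send_to_neighbours("<Snapshot>")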

Theorem 13.25 Let for any processors and , the messages sent from to be received in the order they are sent. The Distributed-Snapshot algorithm eventually finds a consistent cut . The algorithm sends messages, where is the number of edges in the graph.

Proof. The fact that each variable is eventually different from follows from our model, because we assumed that instructions are eventually executed and messages are eventually received, so the <Snapshot> messages will eventually reach all nodes.

Suppose that is not a consistent cut. Then there is a processor such that instruction number or later sends a message < > other than <Snapshot>, and the message is received on or before a processor executes instruction number . So the message < > must have been sent after the message <Snapshot> was sent from to . But messages are received in the order they are sent, so processes <Snapshot> before it processes < >. But then message < > arrives after the snapshot was taken at . This is the desired contradiction.

Exercises

13.5-1 Show that logical time preserves the happens before () relation. That is, show that if for events and it is the case that , then , where is the logical time of an event.

13.5-2 Show that any vector clock that captures concurrency between processors must have at least coordinates.

13.5-3 Show that the vector calculated by the algorithm Consistent-Cut is in fact a maximal consistent cut majorised by . That is that there is no that majorises and is different from , such that is majorised by .

13.6 Communication services

Among the fundamental problems in distributed systems where processors communicate by message passing are the tasks of spreading and gathering information. Many distributed algorithms for communication networks can be constructed using building blocks that implement various broadcast and multicast services. In this section we present some basic communication services in the message-passing model. Such services typically need to satisfy some quality of service requirements dealing with ordering of messages and reliability. We first focus on broadcast services, then we discuss more general multicast services.

13.6.1 Properties of broadcast services

In the broadcast problem, a selected processor , called a source or a sender, has the message , which must be delivered to all processors in the system (including the source). The interface of the broadcast service is specified as follows:

  • bc-send : an event of processor that sends a message to all processors.

  • bc-recv : an event of processor that receives a message sent by processor .

In the above definitions qos denotes the quality of service provided by the system. We consider two kinds of quality of service:

  • Ordering: how does the order of received messages depend on the order of messages sent by the source?

  • Reliability: how does the set of received messages depend on the failures in the system?

The basic model of a message-passing distributed system normally does not guarantee any ordering or reliability of messaging operations. In the basic model we only assume that each pair of processors is connected by a link, and message delivery is independent on each link — the order of received messages may not be related to the order of the sent messages, and messages may be lost in the case of crashes of senders or receivers.

We present some of the most useful requirements for ordering and reliability of broadcast services. The main question we address is how to implement a stronger service on top of the weaker service, starting with the basic system model.

Variants of ordering requirements.

Applying the definition of happens before to messages, we say that message happens before message if either and are sent by the same processor and is sent before , or the bc-recv event for happens before the bc-send event for .

We identify four common broadcast services with respect to the message ordering properties:

  • Basic Broadcast: no order of messages is guaranteed.

  • Single-Source FIFO (first-in-first-out): messages sent by one processor are received by each processor in the same order as sent; more precisely, for all processors and messages , if processor sends before it sends then processor does not receive message before message .

  • Causal Order: messages are received in the same order as they happen; more precisely, for all messages and every processor , if happens before then does not receive before .

  • Total Order: the same order of received messages is preserved in each processor; more precisely, for all processors and messages , if processor receives before it receives then processor does not receive message before message .

It is easy to see that Causal Order implies the Single-Source FIFO requirements (since the relation “happens before” for messages includes the order of messages sent by one processor), and each of the given services trivially implies Basic Broadcast. There are no additional relations between these four services. For example, there are executions that satisfy the Single-Source FIFO property, but not Causal Order. Consider two processors and . In the first event broadcasts message , next processor receives , and then broadcasts message . It follows that happens before . But if processor receives before , which may happen, then this execution violates Causal Order. Note that the Single-Source FIFO requirement is trivially preserved, since each processor broadcasts only one message.

We denote by bb the Basic Broadcast service, by ssf the Single-Source FIFO, by co the Causal Order and by to the Total Order service.

Reliability requirements.

In the model without failures we would like to guarantee the following properties of broadcast services:

  • Integrity: each message received in event bc-recv has been sent in some bc-send event.

  • No-Duplicates: each processor receives a message not more than once.

  • Liveness: each message sent is received by all processors.

In the model with failures we define the notion of reliable broadcast service, which satisfies Integrity, No-Duplicates and two kinds of Liveness properties:

  • Nonfaulty Liveness: each message sent by a non-faulty processor must be received by every non-faulty processor.

  • Faulty Liveness: each message sent by a faulty processor is either received by all non-faulty processors or by none of them.

We denote by rbb the Reliable Basic Broadcast service, by rssf the Reliable Single-Source FIFO, by rco the Reliable Causal Order, and by rto the Reliable Total Order service.

13.6.2 Ordered broadcast services

We now describe implementations of algorithms for various broadcast services.

Implementing basic broadcast on top of asynchronous point-to-point messaging.

The bb service is implemented as follows. If event occurs then processor sends message via every link from to , where . If a message comes to processor then it enables event .

To provide reliability we do the following. We build the reliable broadcast on the top of basic broadcast service. When occurs, processor enables event . If event occurs and message-coordinate appears for the first time then processor first enables event (to inform other non-faulty processors about message in case when processor is faulty), and next enables event .

We prove that the above algorithm provides reliability for the basic broadcast service. First observe that the Integrity and No-Duplicates properties follow directly from the fact that each processor enables only if message-coordinate is received for the first time. Nonfaulty Liveness is preserved since links between non-faulty processors enable events correctly. Faulty Liveness is guaranteed by the fact that if there is a non-faulty processor which receives message from the faulty source , then before enabling processor sends message using event. Since is non-faulty, each non-faulty processor gets message in some event, and then accepts it (enabling event ) during the first such event.
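
The relay rule can be sketched in Python as follows; the bb_send primitive (a basic broadcast to all processors) and the deliver callback (the local bc-recv event) are assumed, and all names are illustrative.

# Sketch of reliable broadcast built on top of basic broadcast (illustrative names).
class ReliableBroadcast:
    def __init__(self, my_id, bb_send, deliver):
        self.my_id = my_id
        self.bb_send = bb_send           # basic-broadcast a packet to all processors
        self.deliver = deliver           # enable the local bc-recv(m, j, rbb) event
        self.seen = set()                # (message, original sender) pairs already handled

    def rbb_send(self, m):
        # the bc-send(m, rbb) event of this processor
        self.bb_send((m, self.my_id))

    def on_bb_recv(self, packet):
        # the bc-recv(<m, j>, bb) event
        if packet not in self.seen:
            self.seen.add(packet)
            self.bb_send(packet)         # re-broadcast first, in case the source is faulty
            m, j = packet
            self.deliver(m, j)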

Implementing single-source FIFO on top of basic broadcast service.

Each processor has its own counter (timestamp), initialised to . If event occurs then processor sends message with its current timestamp attached, using . If an event occurs then processor enables event just after events have been enabled, where are the messages such that events have been enabled.

Note that if we use reliable Basic Broadcast instead of Basic Broadcast as the background service, the above implementation of Single-Source FIFO becomes Reliable Single-Source FIFO service. We leave the proof to the reader as an exercise.
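
One way to realise the rule above is with per-sender sequence numbers and a hold-back buffer, as in the following sketch; bb_send and deliver stand for the assumed primitives of the underlying service, and the names are illustrative.

from collections import defaultdict

# Sketch of Single-Source FIFO broadcast on top of basic broadcast.
class SsfBroadcast:
    def __init__(self, my_id, bb_send, deliver):
        self.my_id, self.bb_send, self.deliver = my_id, bb_send, deliver
        self.next_seq = 0                     # own counter (timestamp)
        self.expected = defaultdict(int)      # next sequence number expected per sender
        self.held = defaultdict(dict)         # sender -> {sequence number: message}

    def ssf_send(self, m):
        self.bb_send((m, self.my_id, self.next_seq))
        self.next_seq += 1

    def on_bb_recv(self, packet):
        m, j, seq = packet
        self.held[j][seq] = m
        while self.expected[j] in self.held[j]:
            # deliver the longest consecutive run available from sender j
            self.deliver(self.held[j].pop(self.expected[j]), j)
            self.expected[j] += 1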

Implementing causal order and total order on the top of single-source FIFO service.

We present an ordered broadcast algorithm which works in the asynchronous message-passing system providing single-source FIFO broadcast service. It uses the idea of timestamps, but in a more advanced way than in the implementation of ssf. We denote by cto the service satisfying the causal and total order requirements.

Each processor maintains in a local array its own increasing counter (timestamp), and the estimated values of timestamps of other processors. Timestamps are used to mark messages before sending—if is going to broadcast a message, it increases its timestamp and uses it to tag this message (lines 11-13). During the execution processor estimates values of timestamps of other processors in the local vector —if processor receives a message from processor with a tag (timestamp of ), it puts into (lines 23–32). Processor sets its current timestamp to be the maximum of the estimated timestamps in the vector plus one (lines 24–26). After updating the timestamp processor sends an update message. Processor accepts a message with associated timestamp from processor if pair is the smallest among other received messages (line 42), and each processor has at least as large a timestamp as known by processor (line 43). The details are given in the code below.

Ordered-Broadcast

       Code for any processor , 
<01>INITIALISATION</01>
<02>    for every </02>
        
 11  IF  occurs 
 12    THEN  
 13       ENABLE  
        
 21  IF  occurs 
 22    THEN ADD triple  to pending 
 23        
 24    IF  
 25       THEN  
 26          ENABLE  
        
 31  IF  occurs 
 32    THEN  
        
 41  IF 
 42     is the pending triple with the smallest  and 
           for every 
 43  THEN ENABLE  
 44    REMOVE triple  from pending 
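
The rules described above can also be sketched in Python. The ssf_send primitive and the deliver callback are assumed, all names are illustrative, and the sketch restates the textual description rather than transcribing the pseudocode line by line.

# Sketch of the Ordered-Broadcast rules (illustrative names).
class OrderedBroadcast:
    def __init__(self, my_id, n, ssf_send, deliver):
        self.my_id, self.n = my_id, n
        self.ssf_send, self.deliver = ssf_send, deliver
        self.ts = [0] * n              # ts[my_id]: own timestamp; other entries: estimates
        self.pending = []              # triples (timestamp, sender, message)

    def ob_send(self, m):
        self.ts[self.my_id] += 1
        self.ssf_send(("msg", m, self.my_id, self.ts[self.my_id]))

    def on_ssf_recv(self, packet):
        if packet[0] == "msg":
            _, m, j, t = packet
            self.pending.append((t, j, m))
            self.ts[j] = max(self.ts[j], t)
            if t > self.ts[self.my_id]:
                # adopt a larger timestamp and announce it with an update message
                self.ts[self.my_id] = max(self.ts) + 1
                self.ssf_send(("update", self.my_id, self.ts[self.my_id]))
        else:                          # an "update" message
            _, j, t = packet
            self.ts[j] = max(self.ts[j], t)
        self._try_accept()

    def _try_accept(self):
        while self.pending:
            t, j, m = min(self.pending)          # smallest (timestamp, sender) pair
            if all(self.ts[k] >= t for k in range(self.n)):
                self.pending.remove((t, j, m))
                self.deliver(m, j)               # enable the local receive event
            else:
                break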

Ordered-Broadcast satisfies the causal order requirement. We leave the proof to the reader as an exercise (in the latter part we show how to achieve stronger reliable causal order service and provide the proof for that stronger case).

Theorem 13.26 Ordered-Broadcast satisfies the total order requirement.

Proof. Integrity follows from the fact that each processor can enable event only if the triple is pending (lines 41–45), which may happen after receiving a message from processor (lines 21–22). No-Duplicates property is guaranteed by the fact that there is at most one pending triple containing message sent by processor (lines 13 and 21–22).

Liveness follows from the fact that each pending triple satisfies conditions in lines 42–43 in some moment of the execution. The proof of this fact is by induction on the events in the execution — suppose to the contrary that is the triple with smallest which does not satisfy conditions in lines 42–43 at any moment of the execution. It follows that there is a moment from which triple has smallest coordinates among pending triples in processor . Hence, starting from this moment, it must violate condition in line 43 for some . Note that , by updating rules in lines 23–25. It follows that processor never receives a message from with timestamp greater than , which by updating rules in lines 24–26 means that processor never receives a message from , which contradicts the liveness property of broadcast service.

To prove Total Order property it is sufficient to prove that for every processor and messages sent by processors with timestamps respectively, each of the triples , are accepted according to the lexicographic order of . There are two cases.

Case 1. Both triples are pending in processor at some moment of the execution. Then condition in line 42 guarantees acceptance in order of .

Case 2. Triple (without loss of generality) is accepted by processor before triple is pending. If then still the acceptance is according to the order of . Otherwise , and by condition in line 43 we get in particular that , and consequently . This can not happen because of the ssf requirement and the assumption that processor has not yet received message from via the broadcast service.

Now we address the reliable versions of the Causal Order and Total Order services. The Reliable Causal Order requirements can be implemented on the top of the Reliable Basic Broadcast service in an asynchronous message-passing system with processor crashes using the following algorithm. It uses the same data structures as the previous Ordered-Broadcast. The main differences between the reliable Causally-Ordered-Broadcast and Ordered-Broadcast are as follows: instead of using integer timestamps, processors use vector timestamps , and they do not estimate the timestamps of other processors, only compare in lexicographic order their own (vector) timestamps with the received ones. The intuition behind the vector timestamp of processor is that it stores information about how many messages have been sent by and how many have been accepted by from every , where .

In the course of the algorithm processor increases the corresponding position in its vector timestamp before sending a new message (line 12), and increases the th position of its vector timestamp after accepting a new message from processor (line 38). After receiving a new message from processor together with its vector timestamp , processor adds triple to pending and accepts this triple if it is the first not yet accepted message received from processor (condition in line 33) and the number of accepted messages (from each processor ) by processor was not bigger at the moment of sending than it is now in processor (condition in line 34). The detailed code of the algorithm follows.

Reliable-Causally-Ordered-Broadcast

       Code for any processor , 
<01>INITIALISATION</01>
<02>    for every </02>
<03>   pending list is empty</03>
        
 11  IF  occurs 
 12    THEN  
 13       ENABLE  
        
 21  IF  occurs 
 22    THEN ADD triple  to pending 
        
 31  IF  is the pending triple, and 
 32    , and 
 33     for every  
 34    THEN ENABLE  
 35       REMOVE triple  from pending 
 36        

We argue that the algorithm Reliable-Causally-Ordered-Broadcast provides the Reliable Causal Order broadcast service on the top of the system equipped with the Reliable Basic Broadcast service. The Integrity and No-Duplicates properties are guaranteed by the rbb broadcast service and the facts that each message is added to pending at most once and a non-received message is never added to pending. Nonfaulty and Faulty Liveness can be proved by a single induction on the execution, using the fact that non-faulty processors have received all messages sent, which guarantees that the conditions in lines 33–34 are eventually satisfied. The Causal Order requirement holds since if message happens before message then each processor accepts messages according to the lexicographic order of , and these vector-arrays are comparable in this case. Details are left to the reader.
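
The vector-timestamp rules just described can be sketched as follows. The rbb_send primitive and the deliver callback are assumed, all names are illustrative, and the sender accepting its own message directly is one reasonable reading of the enabling step in the pseudocode.

# Sketch of Reliable-Causally-Ordered-Broadcast (illustrative names).
class CausalBroadcast:
    def __init__(self, my_id, n, rbb_send, deliver):
        self.my_id, self.n = my_id, n
        self.rbb_send, self.deliver = rbb_send, deliver
        self.vt = [0] * n              # vt[j]: messages of processor j counted so far
        self.pending = []              # triples (message, sender, vector timestamp)

    def cb_send(self, m):
        self.vt[self.my_id] += 1       # count the own message before sending
        self.rbb_send((m, self.my_id, list(self.vt)))
        self.deliver(m, self.my_id)    # accept the own message immediately

    def on_rbb_recv(self, triple):
        if triple[1] == self.my_id:    # the own copy has already been accepted
            return
        self.pending.append(triple)
        self._try_accept()

    def _try_accept(self):
        progress = True
        while progress:
            progress = False
            for m, j, w in list(self.pending):
                next_from_j = (w[j] == self.vt[j] + 1)           # first not yet accepted message from j
                deps_met = all(w[k] <= self.vt[k]
                               for k in range(self.n) if k != j) # causal dependencies already accepted
                if next_from_j and deps_met:
                    self.pending.remove((m, j, w))
                    self.deliver(m, j)
                    self.vt[j] += 1    # one more message accepted from processor j
                    progress = True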

Note that the Reliable Total Order broadcast service can not be implemented in the general asynchronous setting with processor crashes, since it would solve consensus in this model: the first accepted message would determine the agreement value (contradicting the fact that consensus is not solvable in the general model).

13.6.3 Multicast services

Multicast services are similar to the broadcast services, except that each multicast message is destined for a specified subset of all processors. In the multicast service we provide two types of events, where qos denotes the quality of service required:

  • : an event of processor which sends a message together with its id to all processors in a destination set .

  • : an event of processor which receives a message sent by processor .

Note that the event mc-recv is similar to bc-recv.

As in the case of a broadcast service, we would like to provide useful ordering and reliability properties of the multicast services. We can adapt the ordering requirements from the broadcast services. Basic Multicast does not require any ordering properties. Single-Source FIFO requires that if one processor multicasts messages (possibly to different destination sets), then the messages received in each processor (if any) must be received in the same order as sent by the source. The definition of Causal Order remains the same. Instead of Total Order, which is difficult to achieve since the destination sets may be different, we define another ordering property:

  • Sub-Total Order: orders of received messages in all processors may be extended to the total order of messages; more precisely, for any messages and processors , if and receives both messages then they are received in the same order by and .

The reliability conditions for multicast are somewhat different from the conditions for reliable broadcast.

  • Integrity: each message received in event was sent in some mc-send event with destination set containing processor .

  • No Duplicates: each processor receives a message not more than once.

  • Nonfaulty Liveness: each message sent by a non-faulty processor must be received by every non-faulty processor in the destination set.

  • Faulty Liveness: each message sent by a faulty processor is either received by all non-faulty processors in the destination set or by none of them.

One way of implementing ordered and reliable multicast services is to use the corresponding broadcast services (for Sub-Total Order the corresponding broadcast requirement is Total Order). More precisely, if event occurs, processor enables event . When an event occurs, processor enables event if , otherwise it ignores this event. The proof that such a method provides the required multicast quality of service is left as an exercise.
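
The reduction can be sketched in a few lines; bc_send and deliver stand for the broadcast primitives of the matching quality of service, and the names are illustrative.

# Sketch of a multicast service implemented on top of a broadcast service.
def mc_send(bc_send, m, destination_set):
    # broadcast the message together with its destination set
    bc_send((m, frozenset(destination_set)))

def on_bc_recv(my_id, deliver, packet, sender):
    m, destination_set = packet
    if my_id in destination_set:
        deliver(m, sender)     # enable mc-recv only at the intended recipients
    # otherwise the event is ignored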

13.7 Rumor collection algorithms

Reliable multicast services can be used as building blocks in constructing algorithms for more advanced communication problems. In this section we illustrate this method for the problem of collecting rumors by synchronous processors prone to crashes. (Since we consider only fair executions, we assume that at least one processor remains operational to the end of the computation).

13.7.1 Rumor collection problem and requirements

The classic problem of collecting rumors, or gossip, is defined as follows:

At the beginning, each processor has its distinct piece of information, called a rumor; the goal is to make every processor know all the rumors.

However, in the model with processor crashes we need to re-define the gossip problem to respect crash failures of processors. Both the Integrity and No-Duplicates properties are the same as in the reliable broadcast service; the only difference (which follows from the specification of the gossip problem) is in the Liveness requirements:

  • Non-faulty Liveness: the rumor of every non-faulty processor must be known by each non-faulty processor.

  • Faulty Liveness: if processor has crashed during execution then each non-faulty processor either knows the rumor of or knows that is crashed.

The efficiency of gossip algorithms is measured in terms of time and message complexity. Time complexity measures the number of (synchronous) steps from the beginning to the termination. Message complexity measures the total number of point-to-point messages sent (more precisely, if a processor sends a message to three other processors in one synchronous step, it contributes three to the message complexity).

The following simple algorithm completes gossip in just one synchronous step: each processor broadcasts its rumor to all processors. The algorithm is correct, because each message received contains a rumor, and a message not received means the failure of its sender. A drawback of such a solution is that a quadratic number of messages could be sent, which is quite inefficient.
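
The one-round algorithm can be simulated as follows; for simplicity the sketch assumes that a crashed processor's broadcast reaches nobody, whereas in general different survivors may receive different subsets of the faulty processors' messages. All names are illustrative.

# Sketch of the one-round gossip: everybody broadcasts its rumor, and a
# missing message is interpreted as the crash of its sender.
def one_round_gossip(rumors, crashed):
    n = len(rumors)
    view = {j: (rumors[j] if j not in crashed else "crashed") for j in range(n)}
    # every surviving processor ends the round with this knowledge
    return {i: dict(view) for i in range(n) if i not in crashed}

# Example: 4 processors, processor 2 crashes before sending.
print(one_round_gossip(["a", "b", "c", "d"], crashed={2})[0])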

We would like to perform gossip not only quickly, but also with fewer point-to-point messages. There is a natural trade-off between time and communication. Note that in the system without processor crashes such a trade-off may be achieved, e.g., by sending messages over the (almost) complete binary tree, and then the time complexity is , while the message complexity is . Hence by slightly increasing the time complexity we may achieve an almost linear improvement in message complexity. However, if the underlying communication network is prone to failures of components, then irregular failure patterns disturb the flow of information and make gossiping last longer. The question we address in this section is: what is the best trade-off between time and message complexity in the model with processor crashes?

13.7.2 Efficient gossip algorithms

In this part we describe a family of gossip algorithms, among which we can find some efficient ones. They are all based on the same generic code, and their efficiency depends on the quality of the two data structures plugged into the generic algorithm. Our goal is to prove that these data structures can be chosen so that the obtained algorithm is always correct, and efficient if the number of crashes in the execution is at most , where is a parameter.

We start with a description of these structures: communication graphs and communication schedules.

Communication graph.

A graph consists of a set of vertices and a set of edges. Graphs in this section are always simple, which means that edges are pairs of vertices, with no direction associated with them. Graphs are used to describe communication patterns. The set of vertices of a graph consists of the processors of the underlying distributed system. Edges in determine the pairs of processors that communicate directly by exchanging messages, but this does not necessarily mean the existence of a physical link between them. We abstract from the communication mechanism: messages that are exchanged between two vertices connected by an edge in may need to be routed and traverse a possibly long path in the underlying physical communication network. The graph topologies we use, for a given number of processors, vary depending on the upper bound on the number of crashes we would like to tolerate in an execution. A graph that matters, at a given point in an execution, is the one induced by the processors that have not crashed till this step of the execution.

To obtain an efficient gossip algorithm, communication graphs should satisfy some suitable properties, for example the following property :

Definition 13.27 Let be a pair of positive integers. Graph is said to satisfy property , if has vertices, and if, for each subgraph of size at least , there is a subgraph of , such that the following hold:

1:

2:

3: The diameter of is at most

4: If , then

In the above definition, clause (1.) requires the existence of subgraphs whose vertices have the potential of (informally) inheriting the properties of the vertices of , clause (2.) requires the subgraphs to be sufficiently large, linear in size, clause (3.) requires the existence of paths of at most logarithmic length in the subgraphs that can be used for communication, and clause (4.) imposes monotonicity on the required subgraphs. Observe that graph is connected, even if is not, since its diameter is finite. The following result shows that graphs satisfying property can be constructed, and that their degree is not too large.

Theorem 13.28 For each , there exists a graph satisfying property . The maximum degree of graph is .

Communication schedules.

A local permutation is a permutation of all the integers in the range . We assume that prior to the computation a set of local permutations is given. Each processor has such a permutation from . For simplicity we assume that . A local permutation is used to collect rumors in a systematic way according to the order given by this permutation, while communication graphs are rather used to exchange already collected rumors within a large and compact non-faulty graph component.

Generic algorithm.

We start with specifying a goal that gossiping algorithms need to achieve. We say that processor has heard about processor if either knows the original input rumor of or knows that has already failed. We may reformulate correctness of a gossiping algorithm in terms of hearing about other processors: algorithm is correct if the Integrity and No-Duplicates properties are satisfied and if each processor has heard about every other processor by the termination of the algorithm.

The code of a gossiping algorithm includes objects that depend on the number of processors in the system, and also on the bound on the number of failures which are “efficiently tolerated” (if the number of failures is at most then the message complexity of the designed algorithm is small). The additional parameter is a termination threshold which influences the time complexity of the specific implementation of the generic gossip scheme. Our goal is to construct a generic gossip algorithm which is correct for any additional parameters and any communication graph and set of schedules, while efficient for some values and structures and .

Each processor starts gossiping as a collector. Collectors actively seek information about the rumors of the other processors, by sending direct inquiries to some of them. A collector becomes a disseminator after it has heard about all the processors. Processors with this status disseminate their knowledge by sending local views to selected other processors.

Local views. Each processor starts with knowing only its ID and its input information . To store incoming data, processor maintains the following arrays:

  , and , 

each of size . All these arrays are initialised to store the value nil. For an array of processor , we denote its th entry by - intuitively this entry contains some information about processor . The array Rumor is used to store all the rumors that a processor knows. At the start, processor sets to its own input . Each time processor learns some , it immediately sets to this value. The array Active is used to store a set of all the processors that the owner of the array knows as crashed. Once processor learns that some processor has failed, it immediately sets to failed. Notice that processor has heard about processor , if one among the values and is not equal to NIL.

The purpose of using the array Pending is to facilitate dissemination. Each time processor learns that some other processor is fully informed, that is, it is either a disseminator itself or has been notified by a disseminator, then it marks this information in . Processor uses the array to send dissemination messages in a systematic way, by scanning to find those processors that possibly still have not heard about some processor.

The following is a useful terminology about the current contents of the arrays Active and Pending. Processor is said to be active according to , if has not yet received any information implying that crashed, which is the same as having nil in . Processor is said to need to be notified by if it is active according to and is equal to nil.

Phases. An execution of a gossiping algorithm starts with the processors initialising all the local objects. Processor initialises its list with nil at all the locations, except for the th one, which is set equal to . The remaining part of execution is structured as a loop, in which phases are iterated. Each phase consists of three parts: receiving messages, local computation, and multicasting messages. Phases are of two kinds: regular phase and ending phase. During regular phases processor: receives messages, updates local knowledge, checks its status, sends its knowledge to neighbours in communication graphs as well as inquiries about rumors and replies about its own rumor. During ending phases processor: receives messages, sends inquiries to all processors from which it has not heard yet, and replies about its own rumor. The regular phases are performed times; the number is a termination threshold. After this, the ending phase is performed four times. This defines a generic gossiping algorithm.

Generic-Gossip

       Code for any processor , 
<01>INITIALISATION</01>
<02>   processor  becomes a collector</02>
<03>   initialisation of arrays ,  and </03>
        
 11  REPEAT  times 
 12    PERFORM regular phase 
        
 20  REPEAT  times 
 21    PERFORM ending phase 

Now we describe communication and kinds of messages used in regular and ending phases.

Graph and range messages used during regular phases. A processor may send a message to its neighbour in the graph , provided that it is still active according to . Such a message is called a graph one. Sending only these messages is not sufficient to complete gossiping, because the communication graph may become disconnected as a result of node crashes. Hence other messages are also sent, to cover all the processors in a systematic way. In this kind of communication processor considers the processors as ordered by its local permutation , that is, in the order . Some of the additional messages sent in this process are called range ones.

During a regular phase processors send the following kinds of range messages: inquiring, reply and notifying messages. A collector sends an inquiring message to the first processor about which has not heard yet. Each recipient of such a message sends back a range message that is called a reply one.

Disseminators send range messages also to subsets of processors. Such messages are called notifying ones. The target processor selected by disseminator is the first one that still needs to be notified by . Notifying messages need not be replied to: a sender already knows the rumors of all the processors that are active according to it, and the purpose of the message is to disseminate this knowledge.

Regular-Phase

       Code for any processor , 
<01>RECEIVE messages</01>
        
 11  PERFORM local computation 
 12    UPDATE the local arrays 
 13    IF  is a collector, that has already heard about all the processors 
 14       THEN  becomes a disseminator 
 15    COMPUTE set of destination processors: FOR each processor  
 16       IF  is active according to  and  is a neighbour of  in graph  
 17          THEN add  to destination set for a graph message 
 18       IF     is a collector and  is the first processor 
                   about which  has not heard yet
 19          THEN send an inquiring message to  
 20       IF  is a disseminator and  is the first processor 
                   that needs to be notified by 
 21          THEN send a notifying message to  
 22       IF  is a collector, from which an inquiring message was received 
                   in the receiving step of this phase
 23          THEN send a reply message to  
        
 30  SEND graph/inquiring/notifying/reply messages to corresponding destination sets 

Last-resort messages used during ending phases. Messages sent during the ending phases are called last-resort ones. These messages are categorised into inquiring, replying, and notifying, similarly as the corresponding range ones, which is because they serve a similar purpose. Collectors that have not heard about some processors yet send direct inquiries to all of these processors simultaneously. Such messages are called inquiring ones. They are replied to by the non-faulty recipients in the next step, by way of sending reply messages. This phase converts all the collectors into disseminators. In the next phase, each disseminator sends a message to all the processors that need to be notified by it. Such messages are called notifying ones.

The number of graph messages, sent by a processor at a step of the regular phase, is at most as large as the maximum node degree in the communication graph. The number of range messages, sent by a processor in a step of the regular phase, is at most as large as the number of inquiries received plus a constant - hence the global number of point-to-point range messages sent by all processors during regular phases may be accounted for as a constant times the number of inquiries sent (which is one per processor per phase). In contrast to that, there is no a priori upper bound on the number of messages sent during the ending phase. By choosing the termination threshold to be large enough, one may control how many rumors still need to be collected during the ending phases.

Updating local view. A message sent by a processor carries its current local knowledge. More precisely, a message sent by processor brings the following: the ID , the arrays , , and , and a label to notify the recipient about the character of the message. A label is selected from the following: graph_message, inquiry_from_collector, notification_from_disseminator, this_is_a_reply, their meaning is self-explanatory. A processor scans a newly received message from some processor to learn about rumors, failures, and the current status of other processors. It copies each rumor from the received copy of into , unless it is already there. It sets to failed, if this value is at . It sets to done, if this value is at . It sets to done, if is a disseminator and the received message is a range one. If is itself a disseminator, then it sets to done immediately after sending a range message to . If a processor expects a message to come from processor , for instance a graph one from a neighbour in the communication graph, or a reply one, and the message does not arrive, then knows that processor has failed, and it immediately sets to failed.

Ending-Phase

       Code for any processor , 
<01>RECEIVE messages</01>
        
 11  PERFORM local computation 
 12    UPDATE the local arrays 
 13    IF  is a collector, that has already heard about all the processors 
 14       THEN  becomes a disseminator 
 15    COMPUTE set of destination processors: FOR each processor  
 16       IF  is a collector and it has not heard about  yet 
 17          THEN send an inquiring message to  
 18       IF  is a disseminator and  needs to be notified by  
 19          THEN send a notifying message to  
        
 20       IF an inquiring message was received from  
                in the receiving step of this phase
 21          THEN send a reply message to  
        
 30  SEND inquiring/notifying/reply messages to corresponding destination sets 

Correctness. Ending phases guarantee correctness, as is stated in the next fact.

Lemma 13.29 Generic-Gossip is correct for every communication graph and set of schedules .

Proof. The Integrity and No-Duplicates properties follow directly from the code and the multicast service in the synchronous message-passing system. It remains to prove that each processor has heard about all processors. Consider the step just before the first ending phase. If a processor has not heard about some other processor yet, then it sends a last-resort message to in the first ending phase. It is replied to in the second ending phase, unless processor has crashed already. In any case, in the third ending phase, processor either learns the input rumor of or it gets to know that has failed. The fourth ending phase provides an opportunity for all the processors that notifying messages were sent to by to receive them.

The choice of the communication graph , the set of schedules and the termination threshold does, however, influence the time and message complexities of the specific implementation of the Generic Gossip algorithm. First consider the case when is a communication graph satisfying property from Definition 13.27, contains random permutations, and for a sufficiently large positive constant . Using Theorem 13.28 we get the following result.

Theorem 13.30 For every and , for some constant , there is a graph such that the implementation of the generic gossip scheme with as a communication graph and a set of random permutations completes gossip in expected time and with expected message complexity , if the number of crashes is at most .

Consider a small modification of the Generic Gossip scheme: during a regular phase every processor sends an inquiring message to the first (instead of one) processors according to permutation , where is the maximum degree of the communication graph used. Note that this does not influence the asymptotic message complexity, since besides inquiring messages, in every regular phase each processor sends graph messages.

Theorem 13.31 For every there are parameters and and there is a graph such that the implementation of the modified Generic Gossip scheme with as a communication graph and a set of random permutations completes gossip in expected time and with expected message complexity , for any number of crashes.

Since in the above theorem the set is selected prior to the computation, we obtain the following existential deterministic result.

Theorem 13.32 For every there are parameters and and there are graph and set of schedules such that the implementation of the modified Generic Gossip scheme with as a communication graph and schedules completes gossip in time and with message complexity , for any number of crashes.

Exercises

13.7-1 Design executions showing that there is no relation between Causal Order and Total Order and between Single-Source FIFO and Total Order broadcast services. For simplicity consider two processors and two messages sent.

13.7-2 Does a broadcast service satisfying the Single-Source FIFO and Causal Order requirements satisfy the Total Order property? Does a broadcast service satisfying the Single-Source FIFO and Total Order requirements satisfy the Causal Order property? If yes, provide a proof; if not, show a counterexample.

13.7-3 Show that if we use reliable Basic Broadcast instead of Basic Broadcast in the implementation of the Single-Source FIFO service, then we obtain a reliable Single-Source FIFO broadcast.

13.7-4 Prove that the Ordered-Broadcast algorithm implements the Causal Order service on the top of the Single-Source FIFO one.

13.7-5 What is the total number of point-to-point messages sent in the algorithm Ordered-Broadcast in case of broadcasts?

13.7-6 Estimate the total number of point-to-point messages sent during the execution of Reliable-Causally-Ordered-Broadcast, if it performs broadcast and there are processor crashes during the execution.

13.7-7 Show an execution of the algorithm Reliable-Causally-Ordered-Broadcast which violates Total Order requirement.

13.7-8 Write a code of the implementation of reliable Sub-Total Order multicast service.

13.7-9 Show that the described method of implementing multicast services on the top of corresponding broadcast services is correct.

13.7-10 Show that the random graph - in which each node selects independently at random edges from itself to other processors - satisfies property from Definition 13.27 and has degree with probability at least .

13.7-11 The leader election problem is as follows: all non-faulty processors must elect one non-faulty processor in the same synchronous step. Show that leader election can not be solved faster than the gossip problem in a synchronous message-passing system with processor crashes.

13.8 Mutual exclusion in shared memory

We now describe the second main model used to describe distributed systems, the shared memory model. To illustrate algorithmic issues in this model we discuss solutions for the mutual exclusion problem.

13.8.1 Shared memory systems

The shared memory is modeled in terms of a collection of shared variables, commonly referred to as registers. We assume the system contains processors, , and registers . Each processor is modeled as a state machine. Each register has a type, which specifies:

  1. the values it can hold,

  2. the operations that can be performed on it,

  3. the value (if any) to be returned by each operation, and

  4. the new register value resulting from each operation.

Each register can have an initial value.

For example, an integer valued read/write register can take on all integer values and has operations read(R,v) and write(R,v). The read operation returns the value of the last preceding write, leaving unchanged. The write(R,v) operation has an integer parameter , returns no value and changes 's value to . A configuration is a vector , where is a state of and is a value of register . The events are computation steps at the processors where the following happens atomically (indivisibly):

  1. chooses a shared variable to access with a specific operation, based on 's current state,

  2. the specified operation is performed on the shared variable,

  3. 's state changes based on its transition function, based on its current state and the value returned by the shared memory operation performed.

A finite sequence of configurations and events that begins with an initial configuration is called an execution. In the asynchronous shared memory system, an infinite execution is admissible if it has an infinite number of computation steps.

13.8.2 The mutual exclusion problem

In this problem a group of processors need to access a shared resource that cannot be used simultaneously by more than a single processor. The solution needs to have the following two properties. (1) Mutual exclusion: Each processor needs to execute a code segment called a critical section so that at any given time at most one processor is executing it (i.e., is in the critical section). (2) Deadlock freedom: If one or more processors attempt to enter the critical section, then one of them eventually succeeds as long as no processor stays in the critical section forever. These two properties do not provide any individual guarantees to any processor. A stronger property is (3) No lockout: A processor that wishes to enter the critical section eventually succeeds as long as no processor stays in the critical section forever. Original solutions to this problem relied on special synchronisation support such as semaphores and monitors. We will present some of the distributed solutions using only ordinary shared variables.

We assume the program of a processor is partitioned into the following sections:

  • Entry / Try: the code executed in preparation for entering the critical section.

  • Critical: the code to be protected from concurrent execution.

  • Exit: the code executed when leaving the critical section.

  • Remainder: the rest of the code.

A processor cycles through these sections in the order: remainder, entry, critical and exit. A processor that wants to enter the critical section first executes the entry section. After that, if successful, it enters the critical section. The processor releases the critical section by executing the exit section and returning to the remainder section. We assume that a processor may transition any number of times from the remainder to the entry section. Moreover, variables, both shared and local, accessed in the entry and exit section are not accessed in the critical and remainder section. Finally, no processor stays in the critical section forever. An algorithm for a shared memory system solves the mutual exclusion problem with no deadlock (or no lockout) if the following hold:

  • Mutual Exclusion: In every configuration of every execution at most one processor is in the critical section.

  • No deadlock: In every admissible execution, if some processor is in the entry section in a configuration, then there is a later configuration in which some processor is in the critical section.

  • No lockout: In every admissible execution, if some processor is in the entry section in a configuration, then there is a later configuration in which that same processor is in the critical section.

In the context of mutual exclusion, an execution is admissible if for every processor , either takes an infinite number of steps or ends in the remainder section. Moreover, no processor is ever stuck in the exit section (unobstructed exit condition).

13.8.3 Mutual exclusion using powerful primitives

A single bit suffices to guarantee mutual exclusion with no deadlock if a powerful test&set register is used. A test&set variable is a binary variable which supports two atomic operations, test&set and reset, defined as follows:

       test&set(: memory address) returns binary value:
          
          
          return ()
       reset(: memory address):
          

The test&set operation atomically reads and updates the variable. The reset operation is merely a write. There is a simple mutual exclusion algorithm with no deadlock, which uses one test&set register.

Mutual exclusion using one test&set register

       Initially  equals 
        
        :
  1  wait until  
        
        :
  2   
        

Assume that the initial value of is . In the entry section, processor repeatedly tests until it returns . The last such test will assign to , causing any following test by other processors to return , prohibiting any other processor from entering the critical section. In the exit section resets to ; another processor waiting in the entry section can now enter the critical section.
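
The algorithm can be illustrated with the following Python sketch, in which the atomicity of test&set is emulated with an internal lock; all names are illustrative.

import threading

# Sketch of mutual exclusion with a single test&set register.
class TestAndSetRegister:
    def __init__(self):
        self._bit = 0
        self._guard = threading.Lock()   # emulates the atomicity of test&set

    def test_and_set(self):
        # atomically read the current value and set the register to 1
        with self._guard:
            old = self._bit
            self._bit = 1
            return old

    def reset(self):
        # the reset operation is merely a write of 0
        self._bit = 0

V = TestAndSetRegister()

def entry_section():
    while V.test_and_set() == 1:         # spin until the returned value is 0
        pass

def exit_section():
    V.reset()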

Theorem 13.33 The algorithm using one test&set register provides mutual exclusion without deadlock.

13.8.4 Mutual exclusion using read/write registers

If a powerful primitive such as test&set is not available, then mutual exclusion must be implemented using only read/write operations.

The bakery algorithm

Lamport's bakery algorithm for mutual exclusion is an early, classical example of such an algorithm that uses only shared read/write registers. The algorithm guarantees mutual exclusion and no lockout for processors using registers (but the registers may need to store integer values that cannot be bounded ahead of time).

Processors wishing to enter the critical section behave like customers in a bakery. They all get a number and the one with the smallest number in hand is the next one to be “served”. Any processor not standing in line has number , which is not counted as the smallest number.

The algorithm uses the following shared data structures: Number is an array of integers, holding in its -th entry the current number of processor . Choosing is an array of boolean values such that is true while is in the process of obtaining its number. Any processor that wants to enter the critical section attempts to choose a number greater than any number of any other processor and writes it into . To do so, processors read the array Number and pick the greatest number read as their own number. Since however several processors might be reading the array at the same time, symmetry is broken by choosing (, ) as 's ticket. An ordering on tickets is defined using the lexicographical ordering on pairs. After choosing its ticket, waits until its ticket is minimal: For all other , waits until is not in the process of choosing a number and then compares their tickets. If 's ticket is smaller, waits until executes the critical section and leaves it.

Bakery

       Code for processor , .
       Initially  and
        FALSE, for 
        
        :
  1  TRUE 
  2   
  3  FALSE 
  4  FOR  TO  DO 
  5    WAIT UNTIL FALSE 
  6    WAIT UNTIL  or   
        :
  7   
        

We leave the proofs of the following theorems as Exercises 13.8-2 and 13.8-3.

Theorem 13.34 Bakery guarantees mutual exclusion.

Theorem 13.35 Bakery guarantees no lockout.
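
A Python sketch of the entry and exit sections follows. Shared lists stand in for the Number and Choosing registers, tickets are compared as (number, id) pairs, and the code only illustrates the control flow; it is not a faithful model of atomic register semantics.

# Sketch of the bakery algorithm for N_PROC threads (illustrative names).
N_PROC = 4
Number = [0] * N_PROC          # Number[i]: current ticket of processor i (0 = not in line)
Choosing = [False] * N_PROC    # Choosing[i]: processor i is picking a ticket

def bakery_entry(i):
    Choosing[i] = True
    Number[i] = max(Number) + 1            # choose a number larger than any number read
    Choosing[i] = False
    for j in range(N_PROC):
        if j == i:
            continue
        while Choosing[j]:                 # wait until processor j has finished choosing
            pass
        while Number[j] != 0 and (Number[j], j) < (Number[i], i):
            pass                           # wait while processor j holds a smaller ticket

def bakery_exit(i):
    Number[i] = 0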

A bounded mutual exclusion algorithm for processors

Lamport's Bakery algorithm requires the use of unbounded values. We next present an algorithm that removes this requirement. In this algorithm, first presented by Peterson and Fischer, processors compete pairwise using a two-processor algorithm in a tournament tree arrangement. All pairwise competitions are arranged in a complete binary tree. Each processor is assigned to a specific leaf of the tree. At each level, the winner in a given node is allowed to proceed to the next higher level, where it will compete with the winner moving up from the other child of this node (if such a winner exists). The processor that finally wins the competition at the root node is allowed to enter the critical section.

Let . Consider a complete binary tree with leaves and a total of nodes. The nodes of the tree are numbered inductively in the following manner: The root is numbered ; the left child of node numbered is numbered and the right child is numbered . Hence the leaves of the tree are numbered .

With each node , three binary shared variables are associated: , and . All variables have an initial value of . The algorithm is recursive. The code of the algorithm consists of a procedure Node which is executed when a processor accesses node , while assuming the role of processor . Each node has a critical section. It includes the entry section at all the nodes on the path from the node's parent to the root, the original critical section and the exit code on all nodes from the root to the node's parent. To begin, processor executes the code of node .

Tournament-Tree

       procedure Node(: integer; side: )
  1   
  2  WAIT UNTIL ( or ) 
  3   
  4  IF  
  5    THEN IF () 
  6          THEN goto line 1 
  7          ELSE WAIT UNTIL  
  8  IF  
  9    THEN  
 10    ELSE Node() 
 11     
 12     
       end procedure

This algorithm uses bounded values and as the next theorem shows, satisfies the mutual exclusion, no lockout properties:

Theorem 13.36 The tournament tree algorithm guarantees mutual exclusion.

Proof. Consider any execution. We begin at the nodes closest to the leaves of the tree. A processor enters the critical section of such a node if it reaches line 10 (it moves up to the next node). Assume we are at a leaf node v where p_i and p_j start, p_i playing side 0 and p_j playing side 1. Assume that both processors are in the critical section of v at some point. It follows from the code that Want[v][0] = Want[v][1] = 1 at this point. Assume, without loss of generality, that p_j's last write to Want[v][1] before entering the critical section follows p_i's last write to Want[v][0] before entering the critical section. Note that p_j can enter the critical section (of v) either through line 5 or line 7; in both cases p_j reads Want[v][0] = 0. However, p_j's read of Want[v][0] follows p_j's write to Want[v][1] in line 3, which by assumption follows p_i's write to Want[v][0]. Hence p_j's read of Want[v][0] should return 1, a contradiction.

The claim follows by induction on the levels of the tree.

Theorem 13.37 The tournament tree algorithm guarantees no lockout.

Proof. Consider any admissible execution. Assume that some processor p_i is starved. Hence from some point on p_i is forever in the entry section. We now show that p_i cannot be stuck forever in the entry section of a node v; the claim then follows by induction. Let side 0 be the side p_i plays at v and let p_j be the processor that may compete at v on side 1.

Case 1: Suppose p_j executes line 11, setting Priority[v] to 0, at some later point. Then Priority[v] equals 0 forever after (p_i, being stuck in the entry section, never executes line 11 at v). Thus p_i passes the test in line 2 and is never sent back to line 1 by line 6, since the condition of line 4 is false for p_i. Hence p_i must be waiting in line 7, waiting for Want[v][1] to become 0, which never occurs. Thus, from some point on, p_j is always executing between lines 3 and 12. But since p_j does not stay in the critical section forever, this would mean that p_j is stuck in the entry section forever, which is impossible: since Priority[v] = 0, p_j would reach line 5, find Want[v][0] = 1, execute the goto in line 6 and reset Want[v][1] to 0 in line 1.

Case 2: Suppose p_j never executes line 11 from some later point on. Since p_j does not stay in the critical section forever, from some point on p_j is either forever in the entry section of v or forever in the remainder section. Suppose p_j is forever in the entry section; then p_j waits forever in line 2 or in line 7 (after a goto in line 6 it has to pass the test of line 2 again). If p_j waits in line 2, then Want[v][1] equals 0 henceforth (it was set to 0 in line 1), so p_i passes the tests in lines 2, 5 and 7 and enters the critical section of v, a contradiction. If p_j waits in line 7, then Priority[v] equals 1 forever (nobody writes it any more), so p_i cannot wait in line 7 and must be stuck in line 2; but then p_i has set Want[v][0] to 0 in line 1, so p_j passes the test in line 7, a contradiction. So p_j cannot be forever in the entry section. Finally, if p_j is forever in the remainder section, Want[v][1] equals 0 henceforth, so p_i cannot be stuck at line 2, 5 or 7, a contradiction.

The claim follows by induction on the levels of the tree.

Lower bound on the number of read/write registers

So far, all deadlock-free mutual exclusion algorithms presented require the use of at least n shared variables, where n is the number of processors. Since it was possible to develop an algorithm that uses only bounded values, the question arises whether there is a way of reducing the number of shared variables used. Burns and Lynch first showed that any deadlock-free mutual exclusion algorithm using only shared read/write registers must use at least n shared variables, regardless of their size. The proof of this theorem allows the variables to be multi-writer variables. This means that each processor is allowed to write to each variable. Note that if the variables are single-writer, the theorem is obvious, since each processor needs to write something to a (separate) variable before entering the critical section. Otherwise a processor could enter the critical section without any other processor knowing, allowing another processor to enter the critical section concurrently, a contradiction to the mutual exclusion property.

The proof by Burns and Lynch introduces a new proof technique, a covering argument: given any deadlock-free mutual exclusion algorithm A, it shows that there is some reachable configuration of A in which each of the n processors is about to write to a distinct shared variable. This is called a covering of the shared variables. The existence of such a configuration can be shown using induction, and it exploits the fact that any processor, before entering the critical section, must write to at least one shared variable. The proof constructs a covering of all shared variables. A processor then enters the critical section. Immediately thereafter the covering writes are released so that no processor can detect the processor in the critical section. Another processor now concurrently enters the critical section, a contradiction.

Theorem 13.38 Any deadlock-free mutual exclusion algorithm using only read/write registers must use at least n shared variables.

13.8.5 Lamport's fast mutual exclusion algorithm

In all mutual exclusion algorithms presented so far, the number of steps taken by a processor before entering the critical section depends on n, the number of processors, even in the absence of contention (where multiple processors attempt to concurrently enter the critical section), that is, when a single processor is the only processor in the entry section. In most real systems, however, the expected contention is usually much smaller than n.

A mutual exclusion algorithm is said to be fast if a processor enters the critical section within a constant number of steps when it is the only processor trying to enter the critical section. Note that a fast algorithm requires the use of multi-writer, multi-reader shared variables. If only single-writer variables were used, a processor would have to read at least n variables.

Such a fast mutual exclusion algorithm is presented by Lamport.

Fast-Mutual-Exclusion

       Code for processor p_i, 1 ≤ i ≤ n. Initially Fast-Lock and Slow-Lock are 0, and          Want[i] is false for all i, 1 ≤ i ≤ n
       
        ⟨Entry⟩:
  1  Want[i] ← TRUE
  2  Fast-Lock ← i
  3  IF Slow-Lock ≠ 0
  4    THEN Want[i] ← FALSE
  5    WAIT UNTIL Slow-Lock = 0
  6    goto 1
  7  Slow-Lock ← i
  8  IF Fast-Lock ≠ i
  9    THEN Want[i] ← FALSE
 10       for all j, WAIT UNTIL Want[j] = FALSE
 11       IF Slow-Lock ≠ i
 12          THEN WAIT UNTIL Slow-Lock = 0
 13               goto 1
        ⟨Critical Section⟩
        ⟨Exit⟩:
 14  Slow-Lock ← 0
 15  Want[i] ← FALSE
        ⟨Remainder⟩

Lamport's algorithm is based on the correct combination of two mechanisms, one for allowing fast entry when no contention is detected, and the other for providing deadlock freedom in the case of contention. Two variables, Fast-Lock and Slow-Lock, are used for controlling access when there is no contention. In addition, each processor p_i has a boolean variable Want[i] whose value is true if p_i is interested in entering the critical section and false otherwise. A processor can enter the critical section either by finding Fast-Lock = i (in this case it enters the critical section on the fast path) or by finding Slow-Lock = i, in which case it enters the critical section along the slow path.

Consider the case where no processor is in the critical section or in the entry section. In this case, Slow-Lock is 0 and all Want entries are false. Once a processor p_i enters the entry section, it sets Want[i] to true and Fast-Lock to i. Then it checks Slow-Lock, which is 0, and sets Slow-Lock to i. Then it checks Fast-Lock again and, since no other processor is in the entry section, it reads Fast-Lock = i and enters the critical section along the fast path with three writes and two reads.

If Fast-Lock ≠ i, then p_i waits until all Want flags are reset. After some processor executes the for loop in line 10, the value of Slow-Lock remains unchanged until some processor leaving the critical section resets it. Hence at most one processor p_j may find Slow-Lock = j, and this processor enters the critical section along the slow path. Note that Lamport's Fast-Mutual-Exclusion algorithm does not guarantee lockout freedom.
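The sketch below is an illustrative Python rendering of the same entry and exit protocol, with processor identifiers 1..N and 0 standing for “free”; the path_taken counter and the worker threads are our own additions for demonstration, and the busy waiting relies on CPython's atomic reads and writes of simple variables.

# Illustrative Python sketch of Fast-Mutual-Exclusion (not production code).
import threading

N = 3
FREE = 0                               # processor ids are 1..N, 0 marks "free"
fast_lock = FREE
slow_lock = FREE
want = [False] * (N + 1)
path_taken = {"fast": 0, "slow": 0}    # our own bookkeeping, updated inside the CS
in_cs = 0

def lock(i):
    global fast_lock, slow_lock
    while True:                        # "goto 1" is modelled by restarting the loop
        want[i] = True
        fast_lock = i
        if slow_lock != FREE:          # contention detected: back off and retry
            want[i] = False
            while slow_lock != FREE: pass
            continue
        slow_lock = i
        if fast_lock == i:             # fast path: nobody interfered
            path_taken["fast"] += 1
            return
        want[i] = False                # slow path
        for j in range(1, N + 1):
            while want[j]: pass
        if slow_lock == i:             # this processor wins the slow path
            path_taken["slow"] += 1
            return
        while slow_lock != FREE: pass  # otherwise wait and start over

def unlock(i):
    global slow_lock
    slow_lock = FREE
    want[i] = False

def worker(i):
    global in_cs
    for _ in range(30):
        lock(i)
        in_cs += 1
        assert in_cs == 1
        in_cs -= 1
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(1, N + 1)]
for t in threads: t.start()
for t in threads: t.join()
print(path_taken)                      # how often the fast and the slow path were taken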

Theorem 13.39 Algorithm Fast-Mutual-Exclusion guarantees mutual exclusion without deadlock.

Exercises

13.8-1 An algorithm solves the 2-mutual exclusion problem if at any time at most two processors are in the critical section. Present an algorithm for solving the 2-mutual exclusion problem using test & set registers.

13.8-2 Prove that the Bakery algorithm satisfies the mutual exclusion property.

13.8-3 Prove that the Bakery algorithm provides no lockout.

13.8-4 Isolate a bounded mutual exclusion algorithm with no lockout for two processors from the tournament tree algorithm. Show that your algorithm has the mutual exclusion property. Show that it has the no lockout property.

13.8-5 Prove that algorithm Fast-Mutual-Exclusion has the mutual exclusion property.

13.8-6 Prove that algorithm Fast-Mutual-Exclusion has the no deadlock property.

13.8-7 Show that algorithm Fast-Mutual-Exclusion does not satisfy the no lockout property, i.e. construct an execution in which a processor is locked out of the critical section.

13.8-8 Construct an execution of algorithm Fast-Mutual-Exclusion in which two processors are in the entry section and both read at least n variables before entering the critical section.

 PROBLEMS 

13-1 Number of messages of the algorithm Flood

Prove that the algorithm Flood sends O(e) messages in any execution, given a graph G with n vertices and e edges. What is the exact number of messages as a function of the number of vertices and edges in the graph?

13-2 Leader election in a ring

Assume that messages can only be sent in the clockwise direction, and design an asynchronous algorithm for leader election on a ring that has O(n log n) message complexity.

Hint. Let processors work in phases. Each processor begins in the active mode with a value equal to the identifier of the processor, and under certain conditions can enter the relay mode, where it just relays messages. An active processor waits for messages from two active processors, and then inspects the values sent by the processors, and decides whether to become the leader, remain active and adopt one of the values, or start relaying. Determine how the decisions should be made so as to ensure that if there are three or more active processors, then at least one will remain active; and no matter what values active processors have in a phase, at most half of them will still be active in the next phase.

13-3 Validity condition in asynchronous systems

Show that the validity condition is equivalent to requiring that every nonfaulty decision value be the input of some processor.

13-4 Single source consensus

An alternative version of the consensus problem requires that the input value of one distinguished processor (the general) be distributed to all the other processors (the lieutenants). This problem is also called single source consensus problem. The conditions that need to be satisfied are:

  • Termination: Every nonfaulty lieutenant must eventually decide,

  • Agreement: All the nonfaulty lieutenants must have the same decision,

  • Validity: If the general is nonfaulty, then the common decision value is the general's input.

So if the general is faulty, then the nonfaulty processors need not decide on the general's input, but they must still agree with each other. Consider the synchronous message passing system with Byzantine faults. Show how to transform a solution to the consensus problem (in Subsection 13.4.5) into a solution to the general's problem and vice versa. What are the message and round overheads of your transformation?

13-5 Bank transactions

Imagine that there are n banks that are interconnected. Each bank i starts with an amount of money m_i. Banks do not remember the initial amount of money. Banks keep on transferring money among themselves by sending messages of type <10> that represent the value of a transfer. At some point of time a bank decides to find the total amount of money in the system. Design an algorithm for calculating m_1 + ... + m_n that does not stop monetary transactions.

 CHAPTER NOTES 

The definition of distributed systems presented in the chapter is derived from the book by Attiya and Welch [24]. The model of distributed computation, for message passing systems without failures, was proposed by Attiya, Dwork, Lynch and Stockmeyer [23].

Modeling the processors in the distributed systems in terms of automata follows the paper of Lynch and Fisher [229].

The concept of the execution sequences is based on the papers of Fischer, Gries, Lamport and Owicki [229], [261], [262].

The definition of the asynchronous systems reflects the presentation in the papers of Awerbuch [25], and Peterson and Fischer [270].

The algorithm Spanning-Tree-Broadcast is presented after the paper due to Segall [297].

The leader election algorithm Bully was proposed by Hector Garcia-Molina in 1982 [127]. The asymptotic optimality of this algorithm was proved by Burns [51].

The two generals problem is presented as in the book of Gray [144].

The consensus problem was first studied by Lamport, Pease, and Shostak [214], [268]. They proved that the Byzantine consensus problem is unsolvable if n ≤ 3f [268].

One of the basic results in the theory of asynchronous systems is that the consensus problem is not solvable even if we have reliable communication systems, and one single faulty processor which fails by crashing. This result was first shown in a breakthrough paper by Fischer, Lynch and Paterson [108].

The algorithm Consensus-with-Crash-Failures is based on the paper of Dolev and Strong [90].

Berman and Garay [40] proposed an algorithm for the solution of the Byzantine consensus problem for the case n > 4f. Their algorithm needs 2(f + 1) rounds.

The Bakery algorithm for mutual exclusion using only shared read/write registers is due to Lamport [212]. This algorithm requires arbitrarily large values. This requirement is removed by Peterson and Fischer [270]. After this, Burns and Lynch proved that any deadlock-free mutual exclusion algorithm using only shared read/write registers must use at least n shared variables, regardless of their size [52].

The algorithm Fast-Mutual-Exclusion is presented by Lamport [213]. The source of the problems 13-3, 13-4, 13-5 is the book of Attiya and Welch [24].

Important textbooks on distributed algorithms include the monumental volume by Nancy Lynch [228] published in 1997, the book published by Gerard Tel [320] in 2000, and the book by Attiya and Welch [24]. Also of interest is the monograph by Claudia Leopold [221] published in 2001, and the book by Nicola Santoro [296], which appeared in 2006.

A recent book on distributed systems is due to A. D. Kshemkalyani and M. Singhal [206].

Finally, several important open problems in distributed computing can be found in a recent paper of Aspnes et al. [21].

Chapter 14. Network Simulation

In this chapter we discuss methods and techniques to simulate the operations of computer network systems and network applications in a real-world environment. Simulation is one of the most widely used techniques in network design and management to predict the performance of a network system or network application before the network is physically built or the application is rolled out.

14.1 Types of simulation

A network system is a set of network elements, such as routers, switches, links, users, and applications working together to achieve some tasks. The scope of a simulation study may only be a system that is part of another system, as in the case of subnetworks. The state of a network system is the set of relevant variables and parameters that describe the system at a certain time and that comprise the scope of the study. For instance, if we are interested in the utilisation of a link, we want to know only the number of bits transmitted via the link in a second and the total capacity of the link, rather than the amount of buffers available for the ports in the switches connected by the link.

Instead of building a physical model of a network, we build a mathematical model representing the behaviour and the logical and quantitative relations between network elements. By changing the relations between network elements, we can analyse the model without constructing the network physically, assuming that the model behaves similarly to the real system, i.e., it is a valid model. For instance, we can calculate the utilisation of a link analytically, using the formula U = D/C, where D is the amount of data sent per unit time and C is the capacity of the link in bits per second. This is a very simple model that is very rare in real world problems. Unfortunately, the majority of real world problems are too complex to answer questions using simple mathematical equations. In highly complex cases the simulation technique is more appropriate.
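As a toy instance of such an analytic model (all numbers below are made up for illustration), the utilisation of a 100 Mbps link carrying 15 Mbit of traffic per second follows directly from the formula:

# Analytic link utilisation U = D / C (illustrative numbers).
bits_sent_per_second = 15_000_000      # D: observed traffic, bits per second
link_capacity = 100_000_000            # C: link capacity, bits per second

U = bits_sent_per_second / link_capacity
print(f"link utilisation = {U:.0%}")   # 15%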

Simulation models can be classified in many ways. The most common classifications are as follows:

  • Static and dynamic simulation models: A static model characterises a system independently of time. A dynamic model represents a system that changes over time.

  • Stochastic and deterministic models: If a model represents a system that includes random elements, it is called a stochastic model. Otherwise it is deterministic. Queueing systems, the underlying systems in network models, contain random components, such as arrival time of packets in a queue, service time of packet queues, output of a switch port, etc.

  • Discrete and continuous models: A continuous model represents a system with state variables changing continuously over time. Examples are differential equations that define the relationships for the extent of change of some state variables according to the change of time. A discrete model characterises a system where the state variables change instantaneously at discrete points in time. At these discrete points some event or events may occur, changing the state of the system. For instance, the arrival of a packet at a router at a certain time is an event that changes the state of the port buffer in the router.

In our discussion, we assume dynamic, stochastic, and discrete network models. We refer to these models as discrete-event simulation models.
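To make the discrete-event view concrete, the following minimal Python sketch simulates a single output link with a FIFO queue: packet arrivals and transmission completions are the events kept in a time-ordered event list, and the state (queue length, busy link) changes only at those instants. The parameters and names are our own illustrative assumptions, not those of any particular simulation package.

# Minimal discrete-event simulation of one link with a FIFO queue (illustrative).
import heapq, random

random.seed(1)
LINK_BPS = 1_000_000            # link capacity: 1 Mbps
MEAN_GAP = 0.01                 # mean packet interarrival time in seconds
PKT_BITS = 8000                 # fixed packet size: 1000 bytes
SIM_TIME = 100.0                # simulated seconds

events = [(random.expovariate(1 / MEAN_GAP), "arrival")]   # (time, kind) event list
queue_len, link_busy = 0, False
busy_time, sent = 0.0, 0

while events:
    t, kind = heapq.heappop(events)
    if t > SIM_TIME:
        break
    if kind == "arrival":
        # schedule the next arrival, then enqueue or start transmitting this packet
        heapq.heappush(events, (t + random.expovariate(1 / MEAN_GAP), "arrival"))
        if link_busy:
            queue_len += 1
        else:
            link_busy = True
            heapq.heappush(events, (t + PKT_BITS / LINK_BPS, "departure"))
    else:                        # departure: a packet finished transmission
        busy_time += PKT_BITS / LINK_BPS
        sent += 1
        if queue_len > 0:
            queue_len -= 1
            heapq.heappush(events, (t + PKT_BITS / LINK_BPS, "departure"))
        else:
            link_busy = False

print(f"packets sent: {sent}, link utilisation ~ {busy_time / SIM_TIME:.2f}")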

Due to the complex nature of computer communications, network models tend to be complex as well. The development of special computer programs for a certain simulation problem is a possibility, but it may be very time consuming and inefficient. Recently, the application of simulation and modelling packages has become more customary, saving coding time and allowing the modeller to concentrate on the modelling problem at hand instead of the programming details. At first glance, the use of such network simulation and modelling packages as COMNET, OPNET, etc. creates the risk that the modeller has to rely on modelling techniques and hidden procedures that may be proprietary and may not be available to the public. In the following sections we will discuss the simulation methodology and show how to address this risk by using validation procedures to make sure that the real network system will perform the same way as predicted by the simulation model.

14.2 The need for communications network modelling and simulation

In a world of more and more data, computers, storage systems, and networks, the design and management of systems are becoming an increasingly challenging task. As networks become faster, larger, and more complex, traditional static calculations are no longer reasonable approaches for validating the implementation of a new network design and multimillion dollar investments in new network technologies. Complex static calculations and spreadsheets are not appropriate tools any more due to the stochastic nature of network traffic and the complexity of the overall system.

Organisations depend more and more on new network technologies and network applications to support their critical business needs. As a result, poor network performance may have serious impacts on the successful operation of their businesses. In order to evaluate the various alternative solutions for a certain design goal, network designers increasingly rely on methods that help them evaluate several design proposals before the final decision is made and the actual system is built. A widely accepted method is performance prediction through simulation. A simulation model can be used by a network designer to analyse design alternatives and study the behaviour of a new system or the modifications to an existing system without physically building it. A simulation model can also represent the network topology and tasks performed in a network in order to obtain statistical results about the network's performance.

It is important to understand the difference between simulation and emulation. The purpose of emulation is to mimic the original network and reproduce every event that happens in every network element and application. In simulation, the goal is to generate statistical results that represent the behaviour of certain network elements and their functions. In discrete event simulation, we want to observe events as they happen over time, and collect performance measures to draw conclusions on the performance of the network, such as link utilisation, response times, routers' buffer sizes, etc.

Simulation of large networks with many network elements can result in a large model that is difficult to analyse due to the large amount of statistics generated during simulation. Therefore, it is recommended to model only those parts of the network which are significant regarding the statistics we are going to obtain from the simulation. It is crucial to incorporate only those details that are significant for the objectives of the simulation. Network designers typically set the following objectives:

  • Performance modelling: Obtain statistics for various performance parameters of links, routers, switches, buffers, response time, etc.

  • Failure analysis: Analyse the impacts of network element failures.

  • Network design: Compare statistics about alternative network designs to evaluate the requirements of alternative design proposals.

  • Network resource planning: Measure the impact of changes on the network's performance, such as addition of new users, new applications, or new network elements.

Depending on the objectives, the same network might need different simulation models. For instance, if the modeller wants to determine the overhead of a new service of a protocol on the communication links, the model's links need to represent only the traffic generated by the new service. In another case, when the modeller wants to analyse the response time of an application under maximum offered traffic load, the model can ignore the traffic corresponding to the new service of the protocol analysed in the previous model.

Another important question is the granularity of the model, i.e., the level of detail at which a network element is modelled. For instance, we need to decide whether we want to model the internal architecture of a router or we want to model an entire packet switched network. In the former case, we need to specify the internal components of a router, the number and speed of processors, types of buses, number of ports, amount of port buffers, and the interactions between the router's components. But if the objective is to analyse the application level end-to-end response time in the entire packet switched network, we would specify the types of applications and protocols, the topology of the network and link capacities, rather than the internal details of the routers. Although the low level operations of the routers affect the overall end-to-end response time, modelling the detailed operations does not significantly contribute to the simulation results when looking at an entire network. Modelling the details of the routers' internal operations in the order of magnitude of nanoseconds does not contribute significantly to the end-to-end delay analysis in the higher order of magnitude of microseconds or seconds. The additional accuracy gained from higher model granularity is far outweighed by the model's complexity and the time and effort required by the inclusion of the routers' details.

Simplification can also be made by applying statistical functions. For instance, cell errors in an ATM network do not have to be modelled explicitly by changing a bit in the cell's header on a communication link and generating a wrong CRC at the receiver. Rather, a statistical function can be used to decide when a cell has been damaged or lost. The details of a cell do not have to be specified in order to model cell errors.

These examples demonstrate that the goal of network simulation is to reproduce the functionality of a network pertinent to a certain analysis, not to emulate it.

14.3 Types of communications networks, modelling constructs

A communications network consists of network elements, nodes (senders and receivers) and connecting communications media. Among several criteria for classifying networks we use two: transmission technology and scale. The scale or distance also determines the technique used in a network: wireline or wireless. The connection of two or more networks is called an internetwork. The most widely known internetwork is the Internet.

According to transmission technology we can broadly classify networks as broadcast and point-to-point networks:

  • In broadcast networks a single communication channel is shared by every node. Nodes communicate by sending packets or frames received by all the other nodes. The address field of the frame specifies the recipient or recipients of the frame. Only the addressed recipient(s) will process the frame. Broadcast technologies also allow the addressing of a frame to all nodes by dedicating it as a broadcast frame processed by every node in the network. It is also possible to address a frame to be sent to all or any members of only a group of nodes. The operations are called multicasting and anycasting, respectively.

  • Point-to-point networks consist of many connections between pairs of nodes. A packet or frame sent from a source to a destination may have to first traverse intermediate nodes where they are stored and forwarded until it reaches the final destination.

Regarding our other classification criterion, the scale of the network, we can classify networks by their physical area coverage:

  • Personal Area Networks (PANs) support a person's needs. For instance, a wireless network of a keyboard, a mouse, and a personal digital assistant (PDA) can be considered as a PAN.

  • Local area networks (LANs), typically owned by a person, department, a smaller organisation at home, on a single floor or in a building, cover a limited geographic area. LANs connect workstations, servers, and shared resources. LANs can be further classified based on the transmission technology, speed measured in bits per second, and topology. Transmissions technologies range from traditional 10 Mbps LANs to today's 10 Gbps LANs. In terms of topology, there are bus and ring networks and switched LANs.

  • Metropolitan area networks (MANs) span a larger area, such as a city or a suburb. A widely deployed MAN is the cable television network distributing not just one-way TV programs but two-way Internet services as well in the unused portion of the transmission spectrum. Other MAN technologies are the Fiber Distributed Data Interface (FDDI) and IEEE wireless technologies as discussed below.

  • Wide area networks (WANs) cover a large geographical area, a state, a country or even a continent. A WAN consists of hosts (clients and servers) connected by subnets owned by communications service providers. The subnets deliver messages from the source host to the destination host. A subnet may contain several transmission lines, each one connecting a pair of specialised hardware devices called routers. Transmission lines are made of various media; copper wire, optical fiber, wireless links, etc. When a message is to be sent to a destination host or hosts, the sending host divides the message into smaller chunks, called packets. When a packet arrives on an incoming transmission line, the router stores the packet before it selects an outgoing line and forwards the packet via that line. The selection of the outgoing line is based on a routing algorithm. The packets are delivered to the destination host(s) one-by-one where the packets are reassembled into the original message.

Wireless networks can be categorised as short-range radio networks, wireless LANs, and wireless WANs.

  • In short range radio networks, for instance Bluetooth, various components, digital cameras, Global Positioning System (GPS) devices, headsets, computers, scanners, monitors, and keyboards are connected via short-range radio connections within 20–30 feet. The components are in primary-secondary relation. The main system unit, the primary component, controls the operations of the secondary components. The primary component determines what addresses the secondary devices use, when and on what frequencies they can transmit.

  • A wireless LAN consists of computers and access points equipped with a radio modem and an antenna for sending and receiving. Computers communicate with each other directly in a peer-to-peer configuration or via the access point that connects the computers to other networks. Typical coverage area is around 300 feet. The wireless LAN protocols are specified under the family of IEEE 802.11 standards for a range of speed from 11 Mbps to 108 Mbps.

  • Wireless WANs comprise low-bandwidth and high-bandwidth networks. The low bandwidth radio networks used for cellular telephones have evolved through three generations. The first generation was designed only for voice communications utilising analog signalling. The second generation also transmitted only voice but based on digital transmission technology. The current third generation is digital and transmits both voice and data at most 2 Mbps. Fourth and further generation cellular systems are under development. High-bandwidth WANs provide high-speed access from homes and businesses bypassing the telephone systems. The emerging IEEE 802.16 standard delivers services to buildings, not to mobile stations as the IEEE 802.11 standards do, and operates in the much higher 10-66 GHz frequency range. The distance between buildings can be several miles.

  • Wired or wireless home networking is getting more and more popular, connecting various devices that can be made accessible via the Internet. Home networks may consist of PCs, laptops, PDAs, TVs, DVDs, camcorders, MP3 players, microwaves, refrigerators, A/C, lights, alarms, utility meters, etc. Many homes are already equipped with high-speed Internet access (cable modem, DSL, etc.) through which people can download music and movies on demand.

The various components and types of communications networks correspond to the modelling constructs and the different steps of building a simulation model. Typically, a network topology is built first, followed by adding traffic sources, destinations, workload, and setting the parameters for network operation. The simulation control parameters determine the experiment and the running of the simulation. Prior to starting a simulation various statistics reports can be activated for analysis during or after the simulation. Statistical distributions are available to represent specific parameterisations of built-in analytic distributions. As the model is developed, the modeller creates new model libraries that can be reused in other models as well.

14.4 Performance targets for simulation purposes

In this section we discuss a non-exhaustive list of network attributes that have a profound effect on the perceived network performance and are usual targets of network modelling. These attributes are the goals of the statistical analysis, design, and optimisation of computer networks. Fundamentally, network models are constructed by defining the statistical distribution of the arrival and service rate in a queueing system that subsequently determines these attributes.

  • Link capacity

    Channel or link capacity is the number of messages per unit time handled by a link. It is usually measured in bits per second. One of the most famous of all results of information theory is Shannon's channel coding theorem: “For a given channel there exists a code that will permit the error-free transmission across the channel at a rate R, provided R ≤ C, where C is the channel capacity.” Equality is achieved only when the Signal-to-noise Ratio (SNR) is infinite. See more details in textbooks on information and coding theory.

  • Bandwidth

    Bandwidth is the difference between the highest and lowest frequencies available for network signals. Bandwidth is also a loose term used to describe the throughput capacity of a specific link or protocol measured in Kilobits, Megabits, Gigabits, Terabits, etc., in a second.

  • Response time

    The response time is the time it takes a network system to react to a certain source's input. The response time includes the transmission time to the destination, the processing time at both the source and destination and at the intermediate network elements along the path, and the transmission time back to the source. Average response time is an important measure of network performance. For users, the lower the response time the better. Response time statistics (mean and variation) should be stationary; they should not depend on the time of the day. Note that a low average response time does not guarantee that there are no extremely long response times due to network congestion.

  • Latency

    Delay or latency is the amount of time it takes for a unit of data to be transmitted across a network link. Latency and bandwidth are the two factors that determine the speed of a link. It includes the propagation delay (the time taken for the electrical or optical signals to travel the distance between two points) and processing time. For instance, the latency, or round-trip delay, from a ground station of a satellite communication link up to the satellite and back down to another ground station (over 34,000 km each way) is approximately 270 milliseconds. The round-trip delay between the east and west coast of the US is around 100 ms, and transglobal is about 125 ms. The end-to-end delay of a data path between source and destination spanning multiple segments is affected not only by the media's signal speed, but also by the network devices, routers, switches along the route that buffer, process, route, switch, and encapsulate the data payload. Erroneous packets and cells, signal loss, accidental device and link failures and overloads can also contribute to the overall network delay. Bad cells and packets force retransmissions from the initial source. These packets are typically dropped with the expectation of a later retransmission, resulting in slowdowns that cause packets to overflow buffers.

  • Routing protocols

    The route is the path that network traffic takes from the source to the destination. The path in a LAN is not a critical issue because there is only one path from any source to any destination. When the network connects several enterprises and consists of several paths, routers, and links, finding the best route or routes becomes critical. A route may traverse through multiple links with different capacities, latencies, and reliabilities. Routes are established by routing protocols. The objective of the routing protocols is to find an optimal or near optimal route between source and destination avoiding congestions.

  • Traffic engineering

    A new breed of routing techniques is being developed using the concept of traffic engineering. Traffic engineering implies the use of mechanisms to avoid congestion by allocating network resources optimally, rather than continually increasing network capacities. Traffic engineering is accomplished by mapping traffic flows to the physical network topology along predetermined paths. The optimal allocation of the forwarding capacities of routers and switches is the main target of traffic engineering. It provides the ability to divert traffic flows away from the optimal path calculated by the traditional routing protocols into a less congested area of the network. The purpose of traffic engineering is to balance the offered load on the links, routers, and switches in a way that none of these network elements is over- or under-utilised.

  • Protocol overhead

    Protocol messages and application data are embedded inside the protocol data units, such as frames, packets, and cells. A main interest of network designers is the overhead of protocols. Protocol overhead concerns the question: How fast can we really transmit using a given communication path and protocol stack, i.e., how much bandwidth is left for applications? Most protocols also introduce additional overhead associated with in-band protocol management functions. Keep-alive packets, network alerts, control and monitoring messages, poll, select, and various signalling messages are transmitted along with the data streams.

  • Burstiness

    The most dangerous cause of network congestion is the burstiness of the network traffic. Recent results make it evident that high-speed Internet traffic is more bursty and its variability cannot be predicted as assumed previously. It has been shown that network traffic has similar statistical properties on many time scales. Traffic that is bursty on many or all time scales can be described statistically using the notion of long-range dependency. Long-range dependent traffic has observable bursts on all time scales. One of the consequences is that combining the various flows of data, as it happens in the Internet, does not result in the smoothing of traffic. Measurements of local and wide area network traffic have proven that the widely used Markovian process models cannot be applied to today's network traffic. If the traffic were a Markovian process, the traffic's burst length would be smoothed by averaging over a long time scale, contradicting the observations of today's traffic characteristics. The harmful consequences of bursty traffic will be analysed in a case study in Section 14.9.

  • Frame size

    Network designers are usually worried about large frames because they can fill up routers' buffers much faster than smaller frames, resulting in lost frames and retransmissions. Although the processing delay for larger frames is the same as for smaller ones, i.e., larger packets are seemingly more efficient, routers and switches can process internal queues with smaller packets faster. Larger frames are also targets for fragmentation by dividing them into smaller units to fit in the Maximum Transmission Unit (MTU). MTU is a parameter that determines the largest datagram that can be transmitted by an IP interface. On the other hand, smaller frames may create more collisions in an Ethernet network or have lower utilisation on a WAN link.

  • Dropped packet rate

    Packets may be dropped by the data link and network layers of the OSI architecture. The transport layer maintains buffers for unacknowledged packets and retransmits them to establish an error-free connection between sender and receiver. The rate of dropping packets at the lower layers determines the rate of retransmitting packets at the transport layer. Routers and switches may also drop packets due to the lack of internal buffers. Buffers fill up quicker when WAN links get congested which causes timeouts and retransmissions at the transport layer. The TCP's slow start algorithm tries to avoid congestions by continually estimating the round-trip propagation time and adjusting the transmission rate according to the measured variations in the roundtrip time.

14.5 Traffic characterisation

Communications networks transmit data with random properties. Measurements of network attributes are statistical samples taken from random processes, for instance, response time, link utilisation, interarrival time of messages, etc. In this section we review basic statistics that are important in network modelling and performance prediction. After a family of statistical distributions has been selected that corresponds to a network attribute under analysis, the next step is to estimate the parameters of the distribution. In many cases the sample average or mean and the sample variance are used to estimate the parameters of a hypothesised distribution. Advanced software tools include the computations for these estimates. The mean is interpreted as the most likely value about which the samples cluster. The following equations can be used when discrete or continuous raw data are available. Let X_1, X_2, ..., X_n be samples of size n. The mean of the sample is defined by

   X̄ = (X_1 + X_2 + ... + X_n) / n .

The sample variance S² is defined by

   S² = (X_1² + X_2² + ... + X_n² − n·X̄²) / (n − 1) .

If the data are discrete and grouped in a frequency distribution, the equations above are modified as

   X̄ = (f_1·X_1 + f_2·X_2 + ... + f_k·X_k) / n ,
   S² = (f_1·X_1² + f_2·X_2² + ... + f_k·X_k² − n·X̄²) / (n − 1) ,

where k is the number of different values of X and f_j is the frequency of the value X_j of X. The standard deviation S is the square root of the variance S².

The variance and standard deviation show the deviation of the samples around the mean value. Small deviation from the mean demonstrates a strong central tendency of the samples. Large deviation reveals little central tendency and shows large statistical randomness.
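The estimates above are easy to compute directly; the short Python helpers below (our own names, with made-up sample values) implement the raw and the grouped formulas:

# Sample mean and variance for raw and for frequency-grouped data (illustrative).
def mean_and_variance(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = (sum(x * x for x in samples) - n * mean * mean) / (n - 1)
    return mean, var

def grouped_mean_and_variance(values, freqs):
    n = sum(freqs)
    mean = sum(f * x for x, f in zip(values, freqs)) / n
    var = (sum(f * x * x for x, f in zip(values, freqs)) - n * mean * mean) / (n - 1)
    return mean, var

# Raw response time samples in milliseconds (made-up numbers).
print(mean_and_variance([12.0, 15.5, 11.2, 14.8, 13.9, 16.1]))

# Discrete data grouped as value/frequency pairs.
print(grouped_mean_and_variance([10, 12, 14, 16], [3, 5, 4, 2]))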

Numerical estimates of the distribution parameters are required to reduce the family of distributions to a single distribution and test the corresponding hypothesis. Figure 14.1 describes estimators for the most common distributions occurring in network modelling. If θ denotes a parameter, the estimator is denoted by θ̂. Except for an adjustment to remove bias in the estimate of the variance of the normal distribution and in the estimate of the upper endpoint of the uniform distribution, these estimators are the maximum likelihood estimators based on the sample data.

Figure 14.1.  Estimation of the parameters of the most common distributions.



Probability distributions describe the random variations that occur in the real world. Although we call the variations random, randomness has different degrees; the different distributions correspond to how the variations occur. Therefore, different distributions are used for different simulation purposes. Probability distributions are represented by probability density functions. Probability density functions show how likely a certain value is. Cumulative distribution functions give the probability of selecting a number at or below a certain value. For example, if the cumulative distribution function value at 1 was equal to 0.85, then 85% of the time, selecting from this distribution would give a number less than 1. The value of a cumulative distribution function at a point is the area under the corresponding probability density curve to the left of that value. Since the total area under the probability density function curve is equal to one, cumulative distribution functions converge to one as we move toward the positive direction. In most of the modelling cases, the modeller does not need to know all details to build a simulation model successfully. He or she has only to know which distribution is the most appropriate one for the case.

Below, we summarise the most common statistical distributions. We use the simulation modelling tool COMNET to depict the respective probability density functions (PDF). From the practical point of view, a PDF can be approximated by a histogram with all the frequencies of occurrences converted into probabilities.

  • Normal distribution

    It typically models the distribution of a compound process that can be described as the sum of a number of component processes. For instance, the time to transfer a file (response time) sent over the network is the sum of times required to send the individual blocks making up the file. In modelling tools the normal distribution function takes two positive, real numbers: mean and standard deviation. It returns a positive, real number. The stream parameter specifies which random number stream will be used to provide the sample. It is also often used to model message sizes. For example, a message could be described with mean size of 20,000 bytes and a standard deviation of 5,000 bytes.

    Figure 14.2.  An example normal distribution.



  • Poisson distribution

    It models the number of independent events occurring in a certain time interval; for instance, the number of packets of a packet flow received in a second or a minute by a destination. In modelling tools, the Poisson distribution function takes one positive, real number, the mean. The “number” parameter in Figure 14.3 specifies which random number stream will be used to provide the sample. This distribution, when provided with a time interval, returns an integer which is often used to represent the number of arrivals likely to occur in that time interval. Note that in simulation, it is more useful to have this information expressed as the time interval between successive arrivals. For this purpose, the exponential distribution is used.

    Figure 14.3.  An example Poisson distribution.



  • Exponential distribution

    It models the time between independent events, such as the interarrival time between packets sent by the source of a packet flow. Note that the number of events is Poisson distributed if the time between events is exponentially distributed. In modelling tools, the Exponential distribution function (see Figure 14.4) takes one positive, real number, the mean, and the stream parameter that specifies which random number stream will be used to provide the sample. Other application areas include: time between database transactions, time between keystrokes, file access, emails, name lookup requests, HTTP lookup, X-window protocol exchange, etc.

    Figure 14.4.  An example exponential distribution.



  • Uniform distribution

    The uniform distribution (see Figure 14.5) models data that range over an interval of values, each of which is equally likely. The distribution is completely determined by the smallest possible value min and the largest possible value max. For discrete data, there is a related discrete uniform distribution as well. Packet lengths are often modelled by a uniform distribution. In modelling tools the Uniform distribution function takes three positive, real numbers: min, max, and stream. The stream parameter specifies which random number stream will be used to provide the sample.

    Figure 14.5.  An example uniform distribution.



  • Pareto distribution

    The Pareto distribution (see Figure 14.6) is a power-law type distribution for modelling bursty sources (not long-range dependent traffic). The distribution is heavily peaked but the tail falls off slowly. It takes three parameters: location, shape, and offset. The location specifies where the distribution starts, the shape specifies how quickly the tail falls off, and the offset shifts the distribution.

    Figure 14.6.  An example Pareto distribution.



A common use of probability distribution functions is to define various network parameters. A typical network parameter for modelling purposes is the time between successive instances of messages when multiple messages are created. The specified time is from the start of one message to the start of the next message. As it is discussed above, the most frequent distribution to use for interarrival times is the exponential distribution (see Figure 14.7).

Figure 14.7.  Exponential distribution of interarrival time with 10 sec on the average.


The parameters entered for the exponential distribution are the mean value and the random stream number to use. Network traffic is often described as a Poisson process. This generally means that the number of messages in successive time intervals has been observed and the distribution of the number of observations in an interval is Poisson distributed. In modelling tools, the number of messages per unit of time is not entered. Rather, the interarrival time between messages is required. It may be proven that if the number of messages per unit time interval is Poisson-distributed, then the interarrival time between successive messages is exponentially distributed. The interarrival distribution in the following dialog box for a message source in COMNET is defined by Exp (10.0). It means that the time from the start of one message to the start of the next message follows an exponential distribution with 10 seconds on the average. Figure 14.8 shows the corresponding probability density function.

Figure 14.8.  Probability density function of the Exp (10.0) interarrival time.



Many simulation models focus on the simulation of various traffic flows. Traffic flows can be simulated by either specifying the traffic characteristics as input to the model or by importing actual traffic traces that were captured during certain application transactions under study. The latter will be discussed in a subsequent section on Baselining.
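When traffic characteristics are specified as input, synthetic traffic can be generated by drawing interarrival times from the chosen distribution. The sketch below (our own illustration) uses an exponential distribution with a 10-second mean, mirroring the Exp (10.0) example above, and counts arrivals per minute to show that the counts behave like a Poisson variable with mean 6:

# Exponential interarrival times with a 10 s mean and the resulting Poisson counts.
import random

random.seed(7)
MEAN_GAP = 10.0                        # mean interarrival time in seconds
HORIZON = 100_000.0                    # length of the generated trace in seconds

arrivals, t = [], 0.0
while True:
    t += random.expovariate(1.0 / MEAN_GAP)    # next interarrival gap
    if t > HORIZON:
        break
    arrivals.append(t)

print("observed mean gap:", HORIZON / len(arrivals))        # close to 10 s

windows = int(HORIZON // 60)
counts = [0] * windows                 # arrivals per 60 s window
for a in arrivals:
    w = int(a // 60)
    if w < windows:
        counts[w] += 1
print("mean arrivals per minute:", sum(counts) / len(counts))   # close to 60/10 = 6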

Network modellers usually start the modelling process by first analysing the captured traffic traces to visualise network attributes. It helps the modeller understand the application level processes deep enough to map the corresponding network events to modelling constructs. Common tools can be used before building the model. After the preliminary analysis, the modeller may disregard processes, events that are not important for the study in question. For instance, the capture of traffic traces of a database transaction reveals a large variation in frame lengths. Figure 14.9 helps visualise the anomalies:

Figure 14.9.  Visualisation of anomalies in packet lengths.


The analysis of the same trace (Figure 14.10) also discloses a large deviation of the interarrival times of the same frames (delta times):

Figure 14.10.  Large deviations between delta times.


Approximating the cumulative probability distribution function by a histogram of the frame lengths of the captured traffic trace (Figure 14.11) helps the modeller determine the family of the distribution:

Figure 14.11.  Histogram of frame lengths.


14.6 Simulation modelling systems

14.6.1 Data collection tools and network analysers

This section summarises the main features of the widely used discrete event simulation tools OPNET and COMNET, and of the supporting network analysers, Network Associates' Sniffer and OPNET's Application Characterisation Environment.

OPtimized Network Engineering Tools (OPNET) is a comprehensive simulation system capable of modelling communication networks and distributed systems with detailed protocol modelling and performance analysis. OPNET consists of a number of tools that fall into three categories corresponding to the three main phases of modelling and simulation projects: model specification, data collection and simulation, and analysis.

14.6.2 Model specification

During model specification the network modeller develops a representation of the network system under study. OPNET implements the concept of model reuse, i.e., models are based on embedded models developed earlier and stored in model libraries. The model is specified at various levels of details using specification editors. These editors categorise the required modelling information corresponding to the hierarchical structure of an actual network system. The highest level editor, the Project Editor develops network models consisting of network topology, subnets, links, and node models specified in the Node Editor. The Node Editor describes nodes' internal architecture, functional elements and data flow between them. Node models in turn, consist of modules with process models specified by the Process Editor. The lowest level of the network hierarchy, the process models, describes the module's behaviour in terms of protocols, algorithms, and applications using finite state machines and a high-level language.

There are several other editors to define various data models referenced by process- or node-level models, e.g., packet formats and control information between processes. Additional editors create, edit, and view probability density functions (PDFs) to control certain events, such as the interarrival time of sending or receiving packets, etc. The model-specification editors provide a graphical interface for the user to manipulate objects representing the models and the corresponding processes. Each editor can specify objects and operations corresponding to the model's abstraction level. Therefore, the Project Editor specifies nodes and link objects of a network, the Node Editor specifies processors, queues, transmitters, and receivers in the network nodes, and the Process Editor specifies the states and transitions in the processes. Figure 14.12 depicts the abstraction level of each editor:

Figure 14.12.  The three modelling abstraction levels specified by the Project, Node, and Process editors.


14.6.3 Data collection and simulation

OPNET can produce many types of output during simulation depending on how the modeller defined the types of output. In most cases, modellers use the built in types of data: output vectors, output scalars, and animation:

  • Output vectors represent time-series simulation data consisting of list of entries, each of which is a time-value pair. The first value in the entries can be considered as the independent variable and the second as the dependent variable.

  • Scalar statistics are individual values derived from statistics collected during simulation, e.g., average transmission rate, peak number of dropped cells, mean response time, or other statistics.

  • OPNET can also generate animations that are viewed during simulation or replay after simulation. The modeller can define several forms of animations, for instance, packet flows, state transitions, and statistics.

14.6.4 Analysis

Typically, much of the data collected during simulations is stored in output scalar and output vector files. In order to analyse these data, OPNET provides the Analysis Tool, which is a collection of graphing and numerical processing functions. The Analysis Tool presents data in the form of graphs or traces. Each trace consists of a list of abscissa and ordinate pairs. Traces are held and displayed in analysis panels. The Analysis Tool supports a variety of methods for processing simulation output data and computing new traces. Calculations, such as histograms, PDF, CDF, and confidence intervals, are included. The Analysis Tool also supports the use of mathematical filters to process vector or trace data. Mathematical filters are defined as hierarchical block diagrams based on a predefined set of calculus, statistical, and arithmetic operators. The example diagrams below (Figures 14.13 and 14.14) show graphs generated by the Analysis Tool:

Figure 14.13.  Example for graphical representation of scalar data (upper graph) and vector data (lower graph).


Figure 14.14.  Four graphs represented by the Analysis Tool.

COMNET is another popular discrete-event simulation system. We will discuss it briefly and demonstrate its features in Section 14.9.

14.6.5 Network Analysers

There is an increasing interest in predicting, measuring, modelling, and diagnosing application performance across the application lifecycle from development through deployment to production. Characterising the application's performance is extremely important in critical application areas, such as eCommerce. In the increasingly competitive eCommerce market, the application's performance is critical, especially where the competition is just “one click” away. Application performance affects revenue. When an application performs poorly it is always the network that is blamed rather than the application. These performance problems may result from several areas including application design or slow database servers. Using tools like ACE and Network Associates' Sniffer, network modellers can develop methodologies to identify the source of application slowdowns and resolve their causes. After analysing the applications, modellers can make recommendations for performance optimisation. The result is faster applications and better response times. The Application Characterisation Environment (ACE) is a tool for visualising, analysing, and troubleshooting network applications. Network managers and application developers can use ACE to

  • Locate network and application bottlenecks.

  • Diagnose network and application problems.

  • Analyse the effect of anticipated network changes on the response time of existing applications.

  • Predict application performance under varying configurations and network conditions

The performance of an application is determined by network attributes that are affected by the various components of a communication network. The following list contains some example for these attributes and the related network elements:

  • Network media

    • Bandwidth (Congestion, Burstiness)

    • Latency (TCP window size, High latency devices, Chatty applications)

  • Nodes

  • Clients

    • User time

    • Processing time

    • Starved for data

  • Servers

    • Processing time

    • Multi-tier waiting data

    • Starved for data

  • Application

    • Application turns (Too many turns – Chatty applications)

    • Threading (Single vs. multi-threaded)

    • Data profile (Bursty, Too much data processing)

Analysis of an application requires two phases:

  • Capture packet traces while an application is running to build a baseline for modelling the application. We can use ACE's capturing tool or any other network analyser to capture packet traces. The packet traces can be captured by strategically deployed capture agents.

  • Import the capture file to create a representation of the application's transactions called an application task for further analysis of the messages and protocol data units generated by the application.

After creating the application task, we can perform the following operations over the captured traffic traces:

  • View and edit the captured packet traces on different levels of the network protocol stack in different windows. We can also use these windows to remove or delete sections of an application task in order to focus on the transactions of interest.

  • Perform application level analysis by identifying and diagnosing bottlenecks. We can measure the components of the total response time in terms of application level time, processing time, and network time and view detailed statistics on the network and application. We can also decode and analyse the network and application protocol data units from the contents of the packet traces.

  • Predict application performance in “what-if” scenarios and for testing projected changes.

Without going into specific details, we illustrate some of the features above through a simple three-tier application. We want to determine the reason or reasons for the slow response time experienced by a Client that remotely accesses an Application Server (App Server) to retrieve information from a Database Server (DB Server). The connection is over an ADSL line between the client and the Internet, and a 100Mbps Ethernet connection between the App Server and the DB Server. We want to identify the cause of the slow response time and recommend solutions. We deployed capture agents at the network segments between the client and the App Server and between the servers. The agents captured traffic traces simultaneously during a transaction between the client and the App Server and between the App Server and the DB Server, respectively. Then the traces were merged and synchronised to obtain the best possible analysis of delays at each tier and in the network.

After importing the trace into ACE, we can analyse the transaction in the Data Exchange Chart, which depicts the flow of application messages among tiers over time.

Figure 14.15.  Data Exchange Chart.



The Data Exchange Chart shows packets of various sizes being transmitted between the Client and the servers. The overall transaction response time is approximately 6 seconds. When the “Show Dependencies” checkbox is checked, the white dependency lines indicate large processing delays on the Application Server and Client tiers. For further analysis, we generate the “Summary of Delays” window showing how the total response time of the application is divided into four general categories: Application delay, Propagation delay, Transmission delay and Protocol/Congestion delay. Based on this chart we can see the relation between application and network related delays during the transaction between the client and the servers. The chart clearly shows that the application delay far outweighs the Propagation, Transmission, and Protocol/Congestion delays slowing down the transaction.

Figure 14.16.  Summary of Delays.



The “Diagnosis” function (Figure 14.17) provides a more granular analysis of possible bottlenecks by analysing factors that often cause performance problems in networked applications. Values over a specified threshold are marked as bottlenecks or potential bottlenecks.

Figure 14.17.  Diagnosis window.



The diagnosis of the transaction confirms that the primary bottleneck is the processing delay on the Application Server. The processing delay is due to file I/O, CPU processing, or memory access. The diagnosis also reveals another bottleneck, the chattiness of the application, which leads us to the next step: we investigate the application behaviour in terms of application turns, which can be obtained from the transaction statistics. An application turn is a change in direction of the application-message flow.

The statistics of the transaction (Figure 14.18) disclose that the number of application turns is high, i.e., the data sent by the transaction at a time is small. This may cause significant application and network delays. Additionally, a significant portion of application processing time can be spent processing the many requests and responses. The Diagnosis window indicates a “Chattiness” bottleneck without a “Network Cost of Chattiness” bottleneck, which means the following:

Figure 14.18.  Statistics window.


  • The application does not create significant network delays due to chattiness.

  • The application creates significant processing delays due to overhead associated with handling many small application level requests and responses.

  • The application's “Network Cost of Chattiness” could dramatically increase in a high-latency network.

The recommendation is that the application should send fewer, larger application messages. This will utilise network and tier resources more efficiently. For example, a database application should avoid sending a set of records one record at a time.

Would the response time decrease significantly if we added more bandwidth to the link between the client and the App Server (Figure 14.19)? Answering this question is important because adding more bandwidth is expensive. Using the prediction feature we can answer the question. In the following chart we varied the bandwidth from 128 Kbps to 10 Mbps. The chart shows that beyond approximately 827 Kbps there is no significant improvement in response time, i.e., for this application the recommended bandwidth is no more than 827 Kbps, which can be provided by a higher speed DSL line.

Figure 14.19.  Impact of adding more bandwidth on the response time.

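The reasoning behind this kind of prediction can be illustrated with a back-of-the-envelope calculation. The sketch below is a simplified estimate, not ACE's prediction algorithm, and the transaction figures in it (300 KB exchanged, 200 turns, 40 ms round-trip time, 4 s of processing) are invented for illustration: once the per-turn and processing delays dominate, increasing the bandwidth hardly changes the response time.

  # Illustrative response time estimate: processing delay + per-turn latency
  # + transmission time on the bottleneck link (not ACE's algorithm).
  def response_time(total_bytes, turns, rtt_s, app_delay_s, bandwidth_bps):
      transmission = total_bytes * 8 / bandwidth_bps   # serialization on the link
      network = turns * rtt_s                          # each turn costs one round trip
      return app_delay_s + network + transmission

  # Hypothetical transaction: 300 KB in 200 turns, 40 ms RTT, 4 s of processing
  for kbps in (128, 256, 512, 1024, 2048, 10000):
      t = response_time(300_000, 200, 0.040, 4.0, kbps * 1000)
      print(f"{kbps:>6} kbps -> {t:.2f} s")
  # Beyond roughly 1 Mbps the transmission term becomes negligible compared with
  # the processing and per-turn delays, so extra bandwidth no longer helps.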


After the analysis of the application's performance, we can immediately create the starting baseline model from the captured traffic traces for further simulation studies as illustrated in Figure 14.20.

Figure 14.20.  Baseline model for further simulation studies.


14.6.6 Sniffer

Another popular network analyser is Network Associates' Sniffer. (Network Associates has recently renamed it to Netasyst.) It is a powerful network visualisation tool consisting of a set of functions to:

  • Capture network traffic for detailed analysis.

  • Diagnose problems using the Expert Analyzer.

  • Monitor network activity in real time.

  • Collect detailed utilisation and error statistics for individual stations, conversations, or any portion of your network.

  • Save historical utilisation and error information for baseline analysis.

  • Generate visible and audible real-time alarms and notify network administrators when troubles are detected.

  • Probe the network with active tools to simulate traffic, measure response times, count hops, and troubleshoot problems.

For further details we refer the reader to the vendor's documentation at http://www.nai.com.

14.7 Model Development Life Cycle (MDLC)

There are several approaches to network modelling. One possible approach is the creation of a starting model that follows the network topology and approximates the assumed network traffic statistically. After some changes are made, the modeller can investigate the impact of changes of some system parameters on the network or application performance. This approach is used when it is more important to investigate the performance difference between two scenarios than to start from a model based on real network traffic. For instance, assuming certain client/server transactions, we want to measure the change of the response time as a function of the link utilisation (20%, 40%, 60%, etc.). In this case it is not critically important to start from a model based on actual network traffic; it is enough to specify a certain amount of data transmission estimated by a frequent user or designer. We investigate, for this amount of data, how much the response time will increase as the link utilisation increases relative to the starting scenario.

The most common approach to network modelling follows the methodologies of proactive network management. It implies the creation of a network model using actual network traffic as input to simulate the current and future behaviour of the network and to predict the impact of the addition of new applications on network performance. By making use of modelling and simulation tools, network managers can change the network model by adding new devices, workstations, servers, and applications, or they can upgrade the links to higher speed network connections and perform “what-if” scenarios before the implementation of the actual changes. We follow this approach in our further discussions because it has been widely accepted in academia, the corporate world, and industry. In the subsequent paragraphs we elaborate a sequence of modelling steps, called the Model Development Life Cycle (MDLC), that the author has applied in various real-life scenarios of modelling large enterprise networks. The MDLC has the following steps:

  • Identification of the topology and network components.

  • Data collection.

  • Construction and validation of the baseline model. Perform network simulation studies using the baseline.

  • Creation of the application model using the details of the traffic generated by the applications.

  • Integration of the application and baseline model and completion of simulation studies.

  • Further data gathering as the network grows and changes and as we learn more about the applications.

  • Repeat the same sequence.

In the following, we expand the steps above:

Identification of the topology and network components.

Topology data describes the physical network components (routers, circuits, and servers) and how they are connected. It includes the location and configuration description of each internetworking device, how those devices are connected (the circuit types and speeds), the type of LANs and WANs, the location of the servers, addressing schemes, a list of applications and protocols, etc.

Data collection.

In order to build the baseline model we need to acquire topology and traffic data. Modellers can acquire topology data either by entering the data manually or by using network management tools and the configuration files of network devices. Several performance management tools use the Simple Network Management Protocol (SNMP) to query the Management Information Base (MIB) maintained by SNMP agents running in the network's routers and other internetworking devices. This process is known as SNMP discovery. We can import topology data from routers' configuration files to build a representation of the topology for the network in question. Some performance management tools can import data using the map file from a network management platform, such as HP OpenView or IBM NetView. Using the network management platform's export function, the map file can be imported by the modelling tool.

The network traffic input to the baseline model can be derived from various sources: traffic descriptions from interviews and network documents, design or maintenance documents, MIB/SNMP reports, and network analyser and Remote Monitoring (RMON) traffic traces. RMON is a network management protocol that allows network information to be gathered at a single node. RMON traces are collected by RMON probes that collect data at different levels of the network architecture depending on the probe's standard. Figure 14.21 lists the most widely used standards and the level of data collection:

Figure 14.21.  Comparison of RMON Standards.



Network traffic can be categorised as usage-based data and application-based data. The primary difference between usage- and application-based data is the degree of detail that the data provides and the conclusions that can be made based on the data. The division can be specified by two adjacent OSI layers, the Transport layer and the Session layer: usage-based data is for investigating performance issues up to and including the Transport layer; application-based data is for analysing the rest of the network architecture above the Transport layer. (In Internet terminology this is equivalent to the cut between the TCP level and the applications above the TCP level.)

The goal of collecting usage-based data is to determine the total traffic volume before the applications are implemented on the network. Usage-based data can be gathered from SNMP agents in routers or other internetworking devices. SNMP queries sent to the routers or switches provide statistics about the exact number of bytes that have passed through each LAN interface, WAN circuit, or Permanent Virtual Circuit (PVC) interface. We can use the data to calculate the percentage of utilisation of the available bandwidth for each circuit, as the sketch below illustrates.
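For illustration, the utilisation calculation can be sketched as follows. The snippet assumes two successive readings of the standard MIB-II ifInOctets (or ifOutOctets) interface counter and a known link speed; the counter values and the polling interval below are made-up examples, not measurements from this chapter.

  # Link utilisation from two successive SNMP ifInOctets/ifOutOctets readings.
  # The poll interval and counter values are assumed inputs (e.g. polled every 300 s).
  def utilisation(octets_t0, octets_t1, interval_s, link_bps, counter_bits=32):
      delta = (octets_t1 - octets_t0) % (2 ** counter_bits)   # handle counter wrap-around
      return 100.0 * delta * 8 / (interval_s * link_bps)

  # Example: a 56 kbps circuit transferring 90,000 octets in a 300-second interval
  print(round(utilisation(1_200_000, 1_290_000, 300, 56_000), 1), "% utilised")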

The purpose of gathering application-based data is to determine the amount of data generated by an application and the type of demand the application makes. It allows the modeller to understand the behaviour of the application and to characterise the application-level traffic. Data from traffic analysers or from RMON2-compatible probes, Sniffer, NETScout Manager, etc., provide specifics about the application traffic on the network. Strategically placed data collection devices can gather enough data to provide clear insight into the traffic behaviour and flow patterns of the network applications. Typical application-level data collected by traffic analysers include:

  • The type of applications.

  • Hosts communicating by network layer addresses (i.e., IP addresses).

  • The duration of the network conversation between any two hosts (start time and end time).

  • The number of bytes in both the forward and return directions for each network conversation.

  • The average size of the packets in the forward and return directions for each network conversation.

  • Traffic burstiness.

  • Packet size distributions.

  • Packet interarrival distributions.

  • Packet transport protocols.

  • Traffic profile, i.e., message and packet sizes, interarrival times, and processing delays.

  • Frequency of application execution for a typical user.

  • Major interactions of participating nodes and sequences of events.

Construction and validation of the baseline model. Perform network simulation studies using the baseline.

The goal of building a baseline model is to create an accurate model of the network as it exists today. The baseline model reflects the current “as is” state of the network. All studies will assess changes to the baseline model. This model can most easily be validated since its predictions should be consistent with current network measurements. The baseline model generally only predicts basic performance measures such as resource utilisation and response time.

The baseline model is a combination of the topology and the usage-based traffic data that have been collected earlier. It has to be validated against the performance parameters of the current network, i.e., we have to prove that the model behaves similarly to the actual network. The baseline model can be used either for analysis of the current network or as the basis for further application and capacity planning. Using the import functions of a modelling tool, the baseline can be constructed by first importing the topology data gathered in the data collection phase of the modelling life cycle. Topology data is typically stored in topology files (.top or .csv) created by Network Management Systems, for instance HP OpenView or Network Associates' Sniffer. Traffic files can be categorised as follows:

  • Conversation pair traffic files that contain aggregated end-to-end network load information, host names, packet counts, and byte counts for each conversation pair. The data sets allow the modelling tool to preserve the bursty nature of the traffic. These files can be captured by various data collection tools.

  • Event trace traffic files that contain network load information in the form of individual conversations on the network rather than summarised information. During simulation the file can replay the captured network activity on an event by event basis.

Before simulation the modeller has to decide on the following simulation parameters:

  • Run length: The runtime length must exceed the longest message delay in the network. During this time the simulation should produce a sufficient number of events to allow the model to generate enough samples of every event.

  • Warm-up period: The simulation warm-up period is the time needed to initialise packets, buffers, message queues, circuits, and the various elements of the model. The warm-up period is equal to a typical message delay between hosts. Simulation warm-up is required to ensure that the simulation has reached steady-state before data collection begins.

  • Multiple replications: There may be a need for multiple runs of the same model when statistics are not sufficiently close to the true values. We also need multiple runs prior to validation, when we execute multiple replications to determine the variation of statistics between replications. A common cause of variation between replications is rare events.

  • Confidence interval: A confidence interval is an interval used to estimate the likely size of a population parameter. It gives an estimated range of values that has a specified probability of containing the parameter being estimated. The most commonly used intervals are the 95% and 99% confidence intervals, which have .95 and .99 probabilities respectively of containing the parameter. In simulation, the confidence interval provides an indicator of the precision of the simulation results; fewer replications result in a broader confidence interval and less precision, as the sketch below illustrates.
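A minimal sketch of such a confidence interval calculation, applied to a performance measure estimated from independent replications, is shown below; the replication values are invented and the Student-t critical values are hard-coded for a few sample sizes.

  # 95% confidence interval for a performance measure estimated from
  # independent simulation replications (Student-t critical values hard-coded).
  import statistics

  T_95 = {2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571, 9: 2.262, 29: 2.045}  # df -> t

  def confidence_interval_95(replication_means):
      n = len(replication_means)
      mean = statistics.mean(replication_means)
      s = statistics.stdev(replication_means)     # sample standard deviation
      t = T_95.get(n - 1, 1.96)                   # fall back to the normal quantile
      half = t * s / n ** 0.5
      return mean - half, mean + half

  # Example: mean link utilisation (%) from five replications with different seeds
  print(confidence_interval_95([3.0, 3.4, 2.9, 3.2, 3.1]))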

In many modelling tools, after importing both the topology and traffic files, the baseline model is created automatically. It has to be checked for construction errors prior to any attempts at validation by performing the following steps:

  • Execute a preliminary run to confirm that all source-destination pairs are present in the model.

  • Execute a longer simulation with warm-up and measure the sent and received message counts and link utilisation to confirm that correct traffic volume is being transmitted.

Validating the baseline model is the proof that the simulation produces the same performance parameters that are confirmed by actual measurements on the physical network. The network parameters below can usually be measured in both the model and in the physical network:

  • Number of packets sent and received

  • Buffer usage

  • Packet delays

  • Link utilisation

  • Node's CPU utilisation

Confidence intervals and the number of independent samples affect how close a match between the model and the real network is to be expected. In most cases, the best that we can expect is an overlap of the confidence interval of predicted values from the simulation and the confidence interval of the measured data. A very close match may require too many samples of the network and too many replications of the simulation to make it practical.

Creation of the application model using the details of the traffic generated by the applications.

Application models are studied whenever there is a need to evaluate the impact of a networked application on the network performance or to evaluate the application's performance affected by the network. Application models provide traffic details between network nodes generated during the execution of the application. The steps of building an application model are similar to the ones for baseline models.

  • Gather data on application events and user profiles.

  • Import application data into a simulation model manually or automatically.

  • Identify and correct any modelling errors.

  • Validate the model.

Integration of the application and baseline models and completion of simulation studies.

The integration of the application model(s) and the baseline model consists of the following steps:

  • Start with the baseline model created from usage-based data.

  • Use the information from the application usage scenarios (locations of users, number of users, transaction frequencies) to determine where and how to load the application profiles onto the baseline model.

  • Add the application profiles generated in the previous step to the baseline model to represent the additional traffic created by the applications under study.

Completion of Simulation studies consists of the following steps:

  • Use a modelling tool to run the model or simulation to completion.

  • Analyse the results: Look at the performance parameters of the target transactions in comparison to the goals established at the beginning of the simulation.

  • Analyse the utilisation and performance of various network elements, especially where the goals are not being met.

Typical simulation studies include the following cases:

  • Capacity analysis

    Capacity analysis studies the changes of network parameters, for instance:

    • Changes in the number and location of users.

    • Changes in network elements capacity.

    • Changes in network technologies.

    A modeller may be interested in the effect of the changes above on the following network parameters:

    • Switches and routers' utilisation

    • Communications link utilisation

    • Buffer utilisation

    • Retransmitted and lost packets

  • Response time analysis

    The scope of response time analysis is the study of message and packet transmission delay:

    • Application and network level packet end-to-end delay.

    • Packet round trip delay.

    • Message/packet delays.

    • Application response time.

  • Application Analysis

    The scope of application studies is the ratio of the total application response time relative to the individual components of network and application delay. Application analysis provides statistics on various measures of network and application performance in addition to the items discussed in a previous section.

Further data gathering as the network grows and as we learn more about the applications

The goal of this phase is to analyse or predict how a network will perform both under current conditions and when changes to traffic load (new applications, users, or network structure) are introduced:

  • Identify modifications to the network infrastructure that will alter capacity usage of the network's resources.

  • A redesign can include increasing or decreasing capacity, relocating network elements among existing network sites, or changing communications technology.

  • Modify the models to reflect these changes.

  • Assess known application development or deployment plans in terms of projected network impact.

  • Assess business conditions and plans in terms of their impact on the network from projected additional users, new sites, and other effects of the plans.

  • Use ongoing Baselining techniques to watch usage trends over time, especially related to Internet and intranet usage.

14.8 Modelling of traffic burstiness

Recent measurements of local area network traffic and wide-area network traffic have proved that the widely used Markovian process models cannot be applied to today's network traffic. If the traffic were a Markovian process, the traffic's burst length would be smoothed by averaging over a long time scale, contradicting the observations of today's traffic characteristics. Measurements of real traffic also prove that traffic burstiness is present on a wide range of time scales. Traffic that is bursty on many or all time scales can be characterised statistically using the concept of self-similarity. Self-similarity is often associated with objects in fractal geometry, objects that appear to look alike regardless of the scale at which they are viewed. In the case of stochastic processes like time series, the term self-similarity refers to the process' distribution, which, when viewed at varying time scales, remains the same. A self-similar time series has noticeable bursts, i.e., long periods with extremely high values, on all time scales. Characteristics of network traffic, such as packets/sec, bytes/sec, or the length of frames, can be considered as stochastic time series. Therefore, measuring traffic burstiness is the same as characterising the self-similarity of the corresponding time series.

The self-similarity of network traffic has been observed in numerous studies. These studies show that packet loss, buffer utilisation, and response time are totally different when simulations use either real traffic data or synthetic data that include self-similarity.

Background.

Let $X = (X_t : t = 0, 1, 2, \dots)$ be a covariance stationary stochastic process. Such a process has a constant mean $\mu = E[X_t]$, finite variance $\sigma^2 = E[(X_t - \mu)^2]$, and an autocorrelation function $r(k) = E[(X_t - \mu)(X_{t+k} - \mu)]/\sigma^2$, that depends only on $k$. It is assumed that $X$ has an autocorrelation function of the form:

$$r(k) \sim a\, k^{-\beta} \quad \text{as } k \to \infty ,$$

where $0 < \beta < 1$ and $a$ is a positive constant. Let $X^{(m)} = (X_k^{(m)} : k = 1, 2, 3, \dots)$ represent a new time series obtained by averaging the original series $X$ over nonoverlapping blocks of size $m$. For each $m = 1, 2, 3, \dots$, $X^{(m)}$ is specified by $X_k^{(m)} = (X_{km-m+1} + \dots + X_{km})/m$, $k \geq 1$. Let $r^{(m)}$ denote the autocorrelation function of the aggregated time series $X^{(m)}$.

Definition of self-similarity.

The process $X$ is called exactly self-similar with self-similarity parameter $H = 1 - \beta/2$ if the corresponding aggregated processes $X^{(m)}$ have the same correlation structure as $X$, i.e. $r^{(m)}(k) = r(k)$ for all $m = 1, 2, \dots$ and $k = 1, 2, \dots$.

A covariance stationary process $X$ is called asymptotically self-similar with self-similarity parameter $H = 1 - \beta/2$, if for all $k$ large enough, $r^{(m)}(k) \to r(k)$ as $m \to \infty$.

Definition of long-range dependency.

A stationary process is called long-range dependent if the sum of the autocorrelation values approaches infinity: $\sum_k r(k) = \infty$. Otherwise, it is called short-range dependent. It can be derived from the definitions that while short-range dependent processes have exponentially decaying autocorrelations, the autocorrelations of long-range dependent processes decay hyperbolically; i.e., the related distribution is heavy-tailed. In practical terms, a random variable with a heavy-tailed distribution generates extremely large values with high probability. The degree of self-similarity is expressed by the parameter $H$, the Hurst parameter. The parameter $H$ represents the speed of decay of a process' autocorrelation function. As $H \to 1$, the extent of both self-similarity and long-range dependence increases. It can also be shown that for self-similar processes with long-range dependency $1/2 < H < 1$.

Traffic models.

Traffic modelling originates in traditional voice networks. Most of the models have relied on the assumption that the underlying processes are Markovian (or, more generally, short-range dependent). However, today's high-speed digital packet networks are more complex and bursty than traditional voice traffic due to the diversity of network services and technologies.

Several sophisticated stochastic models have been developed as a reaction to these new developments, such as Markov-modulated Poisson processes, fluid flow models, Markovian arrival processes, batched Markovian arrival process models, packet train models, and Transform-Expand-Sample models. These models mainly focus on the analytical treatment of the related queueing problems. They are usually not compared to real traffic patterns and not proven to match the statistical properties of actual traffic data.

Another category of models attempts to characterise the statistical properties of actual traffic data. For a long time, the area of networking research lacked adequate traffic measurements. However, during the past years, large quantities of network traffic measurements have become available, collected from the Web and from high-speed networks. Some of these data sets consist of high-resolution traffic measurements over hours, days, or weeks. Other data sets provide information over time periods ranging from weeks to months and years. Statistical analyses of these high time-resolution traffic measurements have proved that actual traffic data from packet networks reveal self-similarity. These results point out the difference between traditional models and measured traffic data. While the assumed processes in traditional packet traffic models are short-range dependent, measured packet traffic data show evidence of long-range dependency. Figure 14.22 illustrates the difference between Internet traffic and voice traffic for different numbers of aggregated users. As the number of voice flows increases, the traffic becomes more and more smoothed, contrary to the Internet traffic.

Figure 14.22.  The self-similar nature of Internet network traffic.


Quite the opposite of the well-developed field of short-range dependent queueing models, fewer theoretical results exist for queueing systems with long-range dependence; some results are available in the literature. In terms of modelling, the two major groups of self-similar models are fractional Gaussian noises and fractional ARIMA processes. The Gaussian models accurately represent the aggregation of many traffic streams. Another well-known model, the M/Pareto model, has been used in modelling network traffic that is not sufficiently aggregated for the Gaussian model to apply.

Black box vs. structural models.

We share the view that the approach of traditional time series analysis can be called black box modelling, as opposed to structural modelling, which concentrates on the environment in which the models' data was collected, i.e., the complex hierarchies of network components that make up today's communications systems. While the authors of these studies admit that black box models can be and are useful in other contexts, they argue that black box models are of no use for understanding the dynamic and complex nature of the traffic in modern packet networks. Black box models are also of little use in designing, managing and controlling today's networks. In order to provide physical explanations for empirically observed phenomena such as long-range dependency, we need to replace black box models with structural models. The attractive feature of structural traffic models is that they take into account the details of the layered architecture of today's networks and can analyse the interrelated network parameters that ultimately determine the performance and operation of a network. Time series models usually handle these details as black boxes. Because actual networks are complex systems, in many cases black box models require numerous parameters to represent a real system accurately. For network designers, who are important users of traffic modelling, black box models are not very useful, since it is rarely possible to measure or estimate a model's numerous parameters in a complex network environment. For a network designer, a model ought to be simple and meaningful for a particular network; it should be able to rely on actual network measurements, and its results ought to be relevant to the performance and the operation of a real network.

For a long time, traffic models were developed independently of traffic data collected in real networks. These models could not be applied in practical network design. Today, the availability of huge data sets of measured network traffic and the increasing complexity of the underlying network structure emphasise the application of Ockham's Razor in network modelling. (Ockham's Razor is a principle attributed to the mediaeval philosopher William of Ockham. According to this principle, modellers should not make more assumptions than the minimum needed. This principle is also called the Principle of Parsimony and motivates all scientific modelling and theory building. It states that modellers should choose the simplest model among a set of otherwise equivalent models of a given phenomenon. In any given model, Ockham's Razor helps modellers include only those variables that are really needed to explain the phenomenon. Following the principle, model development becomes easier, reducing the possibilities for inconsistencies, ambiguities and redundancies.)

Structural models are presented, for instance, in papers that demonstrate how the self-similar nature of aggregated network traffic of all conversations between hosts can be explained in terms of the traffic dynamics generated by the individual hosts. These papers introduce structural traffic models that have a physical meaning in the network context and underline the predominance of long-range dependence in the packet arrival patterns generated by the individual conversations between hosts. The models provide insight into how individual network connections behave in local and wide area networks. Although the models go beyond the black box modelling methodology by taking into account the physical structure of the aggregated traffic patterns, they do not include the physical structure of the intertwined structure of links, routers, switches, and their finite capacities along the traffic paths.

Crovella and Bestavros demonstrated that World Wide Web traffic shows characteristics that are consistent with self-similarity. They showed that transmission times may be heavy-tailed, due to the distribution of available file sizes in the Web. It is also shown that silent times may be heavy-tailed, primarily due to the effect of user “think time”. Similarly to the structural models due to Willinger et al., their paper does not analyse the impact of self-similar traffic on the parameters of the links and the routers' buffers that ultimately determine a network's performance.

This chapter describes a traffic model that belongs to the structural model category above. We implement the M/Pareto model within the discrete event simulation package COMNET, which allows the analysis of the negative impact of self-similar traffic not just on one single queue, but on the overall performance of various interrelated network components, such as links, buffers, response time, etc. The commercially available package does not readily provide tools for modelling self-similar, long-range dependent network traffic. The model-generated traffic is based on measurements collected from a real ATM network. The choice of the package emphasises the need for integrated tools that could be useful not just for theoreticians, but also for network engineers and designers. This chapter intends to narrow the gap between existing, well-known theoretical results and their applicability in everyday, practical network analysis and modelling. It is highly desirable that appropriate traffic models should be accessible from measuring, monitoring, and controlling tools. Our model can help network designers and engineers, the ultimate users of traffic modelling, understand the dynamic nature of network traffic and assist them to design, measure, monitor, and control today's complex, high-speed networks in their everyday practice.

Implications of burstiness on high-speed networks.

Various papers discuss the impact of burstiness on network congestion. Their conclusions are:

  • Congested periods can be quite long with losses that are heavily concentrated.

  • Linear increases in buffer size do not result in large decreases in packet drop rates.

  • A slight increase in the number of active connections can result in a large increase in the packet loss rate.

Results show that packet traffic “spikes” (which cause actual losses) ride on longer-term “ripples”, which in turn ride on still longer-term “swells”.

Another area where burstiness can affect network performance is a link with priority scheduling between classes of traffic. In an environment, where the higher priority class has no enforced bandwidth limitations (other than the physical bandwidth), interactive traffic might be given priority over bulk-data traffic. If the higher priority class is bursty over long time scales, then the bursts from the higher priority traffic could obstruct the lower priority traffic for long periods of time.

The burstiness may also have an impact on networks where the admission control mechanism is based on measurements of recent traffic, rather than on policed traffic parameters of individual connections. Admission control that considers only recent traffic patterns can be misled following a long period of fairly low traffic rates.

14.8.1 Model parameters

Each transaction between a client and a server consists of active periods followed by inactive periods. Transactions consist of groups of packets sent in each direction. Each group of packets is called a burst. The burstiness of the traffic can be characterised by the following time parameters:

  • Transaction Interarrival Time (TIAT): The time between the first packet in a transaction and the first packet of the next immediate transaction.

  • Burst Interarrival Time, $1/\lambda$, where $\lambda$ is the arrival rate of bursts: the time between bursts.

  • Packet Interarrival Time, $1/r$, where $r$ is the arrival rate of packets: the time between packets in a burst.

The Hurst parameter.

It is anticipated that the rapid and ongoing aggregation of more and more traffic onto integrated multiservice networks will eventually result in traffic smoothing. Once the degree of aggregation is sufficient, the process can be modelled by a Gaussian process. Currently, network traffic does not show characteristics that are close to Gaussian. In many networks the degree of aggregation is not enough to balance the negative impact of bursty traffic. However, until traffic becomes Gaussian, existing methods can still provide accurate measurement and prediction of bursty traffic.

Most of the methods are based on an estimate of the Hurst parameter $H$: the higher the value of $H$, the higher the burstiness, and consequently, the worse the queueing performance of switches and routers along the traffic path. Some estimation methods are more reliable than others. The reliability depends on several factors; e.g., the estimation technique, sample size, time scale, traffic shaping or policing, etc. Based on published measurements we investigated the methods with the smallest estimation error*.

Footnote. The estimation methods considered were: Variance, Aggregated Variance, Higuchi, Variance of Residuals, Rescaled Adjusted Range (R/S), Whittle Estimator, Periodogram, and Residuals of Regression.

Among those, we chose the Rescaled Adjusted Range (R/S) method because we found it implemented in the Benoit package. The Hurst parameter calculated by the package is input to our method.
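The essence of the R/S method can be sketched as follows. This is a simplified illustration, not the Benoit package's implementation: it computes the rescaled adjusted range for several block sizes and estimates $H$ as the least-squares slope of log(R/S) against log(block size); production estimators are more careful about block selection, bias correction, and confidence intervals.

  # Simplified Rescaled Adjusted Range (R/S) estimate of the Hurst parameter.
  import math

  def rs_statistic(block):
      mean = sum(block) / len(block)
      dev = [x - mean for x in block]
      z, cum = [], 0.0
      for d in dev:                      # cumulative deviations from the block mean
          cum += d
          z.append(cum)
      r = max(z) - min(z)                # adjusted range
      s = math.sqrt(sum(d * d for d in dev) / len(block))   # block standard deviation
      return r / s if s > 0 else 0.0

  def hurst_rs(series, block_sizes=(8, 16, 32, 64, 128)):
      xs, ys = [], []
      for n in block_sizes:
          blocks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
          rs = [rs_statistic(b) for b in blocks if len(b) == n]
          rs = [v for v in rs if v > 0]
          if rs:
              xs.append(math.log(n))
              ys.append(math.log(sum(rs) / len(rs)))
      # least-squares slope of log(R/S) versus log(n) approximates H
      mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
      return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

  # Usage: hurst_rs(bytes_per_second_trace) on a measured "bytes/sec" time series.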

The M/Pareto traffic model and the Hurst parameter.

Recent results have proven that the M/Pareto model is appropriate for modelling long-range dependent traffic flow characterised by long bursts. Originally, the model was introduced and applied in the analysis of ATM buffer levels. The M/Pareto model was also used to predict the queueing performance of Ethernet, VBR video, and IP packet streams in a single server queue. We apply the M/Pareto model not just for a single queue, but also for predicting the performance of an interconnected system of links, switches and routers affecting the individual network elements' performance.

The M/Pareto model is a Poisson process of overlapping bursts with arrival rate $\lambda$. A burst generates packets with arrival rate $r$. Each burst, from the time of its arrival, continues for a Pareto-distributed time period. The use of the Pareto distribution results in generating extremely long bursts that characterise long-range dependent traffic.

The probability that a Pareto-distributed random variable $X$ exceeds a threshold $x$ is

$$\Pr(X > x) = (x/\delta)^{-\gamma} \quad \text{for } x \ge \delta, \text{ and } 1 \text{ otherwise}, \qquad \delta > 0,\ 1 < \gamma < 2. \qquad (1)$$

The mean of $X$, the mean duration of a burst, is

$$E[X] = \frac{\delta \gamma}{\gamma - 1}, \qquad (2)$$

and its variance is infinite. Assuming a time interval of length $t$, the mean number of packets $M$ in the time interval is

$$M = \frac{\lambda\, t\, r\, \delta \gamma}{\gamma - 1}, \qquad (3)$$

where $\lambda$ is the arrival rate of bursts and $r$ is the arrival rate of packets within a burst.

The M/Pareto model is asymptotically self-similar, and it can be shown that the Hurst parameter satisfies

$$H = \frac{3 - \gamma}{2}. \qquad (4)$$
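A toy generator for such traffic can make the model concrete. The sketch below is not the COMNET implementation: it draws Poisson burst arrivals with rate lambda, Pareto-distributed burst durations with parameters delta and gamma, and a constant packet rate r within each burst, and returns the resulting packets-per-second time series.

  # Minimal sketch of an M/Pareto traffic generator: Poisson burst arrivals,
  # Pareto-distributed burst durations, constant packet rate within each burst.
  import random

  def m_pareto_counts(lam, r, delta, gamma, horizon, seed=1):
      """Return packets generated per 1-second slot over [0, horizon)."""
      rng = random.Random(seed)
      counts = [0.0] * horizon
      t = rng.expovariate(lam)                           # first burst arrival (Poisson process)
      while t < horizon:
          duration = delta * rng.random() ** (-1.0 / gamma)   # Pareto(delta, gamma) sample
          start, end = t, min(t + duration, horizon)
          for slot in range(int(start), int(end) + 1):
              if slot < horizon:
                  overlap = min(end, slot + 1) - max(start, slot)
                  if overlap > 0:
                      counts[slot] += r * overlap        # r packets/sec while the burst is active
          t += rng.expovariate(lam)                      # next burst arrival
      return counts

  # Example: lambda = 1 burst/sec, r = 100 packets/sec, gamma = 1.9 (i.e. H = 0.55)
  trace = m_pareto_counts(lam=1.0, r=100.0, delta=1.0, gamma=1.9, horizon=600)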

14.8.2 Implementation of the Hurst parameter

We implemented the Hurst parameter and a modified version of the M/Pareto model in the discrete event simulation system COMNET. By using discrete event simulation methodology, we can get realistic results in measuring network parameters, such as utilisation of links and the queueing performance of switches and routers. Our method can model and measure the harmful consequences of aggregated bursty traffic and predict its impact on the overall network's performance.

Traffic measurements.

In order to build the baseline model, we collected traffic traces in a large corporate network with the Concord Network Health network analyser system. We took measurements from various broadband and narrowband links, including 45Mbps ATM, 56Kbps, and 128Kbps frame relay connections. The Concord Network Health system can measure the traffic in certain time intervals at network nodes, such as routers and switches. We set the time intervals to 6000 seconds and measured the number of bytes and packets sent and received per second, packet latency, dropped packets, discard eligible packets, etc. Concord Network Health cannot measure the number of packets in a burst and the duration of the bursts as assumed in the M/Pareto model above. Due to this limitation of our measuring tool, we slightly modify our traffic model according to the data available. We took snapshots of the traffic every five minutes on a narrowband frame relay connection between a remote client workstation and a server at the corporate headquarters, the traffic destination, in the following format:

Figure 14.23.  Traffic traces.



The mean number of bytes, the message delay from the client to server, the input buffer level at the client's local router, the number of blocked packets, the mean utilisations of the 56Kbps frame relay, the DS-3 segment of the ATM network, and the 100Mbps Ethernet link at the destination are summarised in Figure 14.24.

Figure 14.24.  Measured network parameters.



COMNET represents a transaction by a message source, a destination, the size of the message, communication devices, and links along the path. The rate at which messages are sent is specified by an interarrival time distribution, the time between two consecutive messages. The Poisson distribution in the M/Pareto model generates bursts or messages with arrival rate $\lambda$, the number of arrivals that are likely to occur in a certain time interval. In simulation, this information is expressed by the time interval between successive arrivals, $1/\lambda$. For this purpose, we use the Exponential distribution. Using the Exponential distribution for the interarrival time results in an arrival pattern characterised by the Poisson distribution. In COMNET, we implemented the interarrival time with the function Exp(). The interarrival time in the model is set to one second, matching the sampling time interval set in Concord Network Health and corresponding to an arrival rate $\lambda = 1$/sec.

In the M/Pareto model, each burst continues for a Pareto-distributed time period. Concord Network Health cannot measure the duration of a burst; hence, we assume that a burst is characterised by the number of bytes in a message sent or received in a second. Since the ATM cell rate algorithm ensures that equal-length messages are processed in equal time, longer messages require proportionally longer processing time. Therefore, we can say that the distribution of the duration of bursts is the same as the distribution of the length of bursts. Hence, we modify the M/Pareto model by substituting the Pareto-distributed duration of bursts with the Pareto-distributed length of bursts, and we derive the parameters of the Pareto distribution not from the mean duration of bursts, but from the mean length of bursts.

The Pareto-distributed length of bursts is defined in COMNET by two parameters: the location and the shape. The location parameter corresponds to $\delta$, the shape parameter corresponds to the $\gamma$ parameter of the M/Pareto model in (1), and it can be calculated from the relation (4) as

$$\gamma = 3 - 2H. \qquad (5)$$

The Pareto distribution can have infinite mean and variance. If the shape parameter is greater than 2, both the mean and variance are finite. If the shape parameter is greater than 1, but less than or equal to 2, the mean is finite, but then the variance is infinite. If the shape parameter is less than or equal to 1, both the mean and variance are infinite.

From the mean $m$ of the Pareto distribution, $m = \delta\gamma/(\gamma - 1)$, we get:

$$\delta = \frac{(\gamma - 1)\, m}{\gamma}. \qquad (6)$$

The relations (5) and (6) allow us to model bursty traffic based on real traffic traces by performing the following steps:

  • a. Collect traffic traces using the Concord Network Health network analyser.

  • b. Compute the Hurst parameter by making use of the Benoit package with the traffic trace as input.

  • c. Use the Exponential and Pareto distributions in the COMNET modelling tool with the parameters calculated above to specify the distribution of the interarrival time and length of messages.

  • d. Generate traffic according to the modified M/Pareto model and measure network performance parameters.

The traffic generated according to the steps above is bursty with parameter H calculated from real network traffic.
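Steps (b)-(d) reduce to a small calculation. The sketch below applies relations (5) and (6) to the measured Hurst parameter and the measured mean burst length; the printed values reproduce the Pareto parameters used in the validation model of the next subsection.

  # Deriving the COMNET Pareto parameters from the measured Hurst parameter H
  # and the measured mean burst length m (bytes), using relations (5) and (6).
  def pareto_parameters(hurst, mean_burst_bytes):
      shape = 3.0 - 2.0 * hurst                             # (5): gamma = 3 - 2H
      location = (shape - 1.0) * mean_burst_bytes / shape   # (6): delta = (gamma - 1) m / gamma
      return location, shape

  # With H = 0.55 and a mean burst length of 440 bytes this gives Par(208.42, 1.9).
  print(pareto_parameters(0.55, 440.0))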

14.8.3 Validation of the baseline model

We validate our baseline model by comparing various model parameters of a 56Kbps frame relay and a 6Mbps ATM connection with the same parameters of the real network as traced by the Concord Network Health network analyser. For simplicity, we use only the “Bytes Total/sec” column of the trace, i.e., we assume that the total number of bytes in this column is sent in one direction only, from the client to the server. The Hurst parameter of the real traffic trace is calculated by the Benoit package. The topology is as follows:

Figure 14.25.  Part of the real network topology where the measurements were taken.



The “Message sources” icon is a subnetwork that represents a site with a token ring network, a local router, and a client sending messages to the server in the “Destination” subnetwork:

Figure 14.26.  “Message Source” remote client.



The interarrival time and the length of messages are defined by the Exponential and Pareto functions Exp(1) and Par(208.42, 1.9) respectively. The Pareto distribution's location (208.42) and shape (1.9) are calculated from formulas (5) and (6) by substituting the measured mean length of bursts (440 bytes) and $H = 0.55$.

Figure 14.27.  Interarrival time and length of messages sent by the remote client.



The corresponding heavy-tailed Pareto probability distribution and cumulative distribution functions are illustrated in Figure 14.28 (the horizontal axis represents the number of bytes):

Figure 14.28.  The Pareto probability distribution for mean 440 bytes and Hurst parameter H=0.55.


The “Frame Relay” icon represents a frame relay cloud with 56K committed information rate (CIR). The “Conc” router connects the frame relay network to a 6Mbps ATM network with variable rate control (VBR) as shown in Figures 14.29 and 14.30:

Figure 14.29.  The internal links of the 6Mbps ATM network with variable rate control (VBR).


Figure 14.30.  Parameters of the 6Mbps ATM connection.



The “Destination” icon denotes a subnetwork with the server:

Figure 14.31.  The “Destination” subnetwork.



The average utilisation of the frame relay link in the model is almost identical to the utilisation obtained from the real measurements (3.1%):

Figure 14.32.  Utilisation of the frame relay link in the baseline model.


The message delay in the model is also very close to the measured delay between the client and the server (78 msec):

Figure 14.33.  Baseline message delay between the remote client and the server.



The input buffer level of the remote client's router in the model is almost identical with the measured buffer level of the corresponding router:

Figure 14.34.  Input buffer level of remote router.



Similarly, the utilisations of the model's DS-3 link segment of the ATM network and the Ethernet link in the destination network closely match with the measurements of the real network:

Figure 14.35.  Baseline utilisations of the DS-3 link and Ethernet link in the destination.



It can also be shown from the model's traffic trace that the Hurst parameter of the model-generated messages is close to that of the measured traffic, i.e., the model generates almost the same bursty traffic as the real network. Furthermore, the number of dropped packets in the model was zero, similarly to the number of dropped packets in the real measurements. Therefore, we start from a model that closely represents the real network.

14.8.4 Consequences of traffic burstiness

In order to illustrate our method, we developed a COMNET simulation model to measure the consequences of bursty traffic on network links, message delays, routers' input buffers, and the number of dropped packets due to the aggregated traffic of a large number of users. The model implements the Hurst parameter as described in Section 14.8.2. We repeated the simulation for 6000 sec, 16000 sec and 18000 sec to allow infrequent events to occur a reasonable number of times. We found that the results are very similar in each simulation.

Topology of bursty traffic sources.

The “Message Source” subnetworks transmit messages as in the baseline model above, but with different burstiness, characterised by several Hurst parameter values, and, for comparison, with fixed message sizes. Initially, we simulate four subnetworks with four users per subnetwork, each sending the same volume of data (a mean of 440 bytes per second) as in the validating model above:

Figure 14.36.  Network topology of bursty traffic sources with various Hurst parameters.


Link utilisation and message delay.

First, we are going to measure and illustrate the extremely high peaks in frame relay link utilisation and message delay. The model traffic is generated with message sizes determined by various Hurst parameters and fixed size messages for comparison. The COMNET modelling tool has a trace option to capture its own model generated traffic. It has been verified that for the model-generated traffic flows with various Hurst parameters the Benoit package computed similar Hurst parameters for the captured traces.

The following table shows the simulated average and peak link utilisation of the different cases. The utilisation is expressed in the [0, 1] scale not in percentages:

Figure 14.37.  Simulated average and peak link utilisation.


The enclosed charts in Appendix A clearly demonstrate that even though the average link utilisation is almost identical, the frequency and the size of the peaks increase with the burstiness, causing cell drops in routers and switches. We received the following results for response time measurements:

Figure 14.38.  Response time and burstiness.



The charts in the Appendix A graphically illustrate the relation between response times and various Hurst parameters.

Input buffer level for large number of users.

We also measured the number of cells dropped at a router's input buffer in the ATM network due to a surge of bursty cells. We simulated the aggregated traffic of approximately 600 users, each sending the same number of bytes per second as in the measured real network. The number of blocked packets is summarised in the following table:

Figure 14.39.  Relation between the number of cells dropped and burstiness.


14.8.5 Conclusion

This chapter presented a discrete event simulation methodology to measure various network performance parameters while transmitting bursty traffic. It has been proved in recent studies that combining bursty data streams produces a bursty aggregate data flow. The studies imply that the methods and models used in traditional network design require modifications. We categorise our modelling methodology as structural modelling, in contrast to black box modelling. Structural models focus on the environment in which the models' data was collected, i.e., the complex hierarchies of network components that make up today's communications systems. Although black box models are useful in other contexts, they are not easy to use in designing, managing and controlling today's networks. We implemented a well-known model, the M/Pareto model, within the discrete event simulation package COMNET, which allows the analysis of the negative impact of self-similar traffic not just on one single queue, but on the overall performance of various interrelated network components as well. Using real network traces, we built and validated a model by which we could measure and graphically illustrate the impact of bursty traffic on link utilisation, message delays, and buffer performance of Frame Relay and ATM networks. We illustrated that increasing burstiness results in extremely high link utilisation, response times, and numbers of dropped packets, and we measured the various performance parameters by simulation.

The choice of the package emphasises the need for integrated tools that could be useful not just for theoreticians, but also for network engineers and designers. This chapter intends to narrow the gap between existing, well-known theoretical results and their applicability in everyday, practical network analysis and modelling. It is highly desirable that appropriate traffic models should be accessible from measuring, monitoring, and controlling tools. Our model can help network designers and engineers, the ultimate users of traffic modelling, understand the dynamic nature of network traffic and assist them in their everyday practice.

14.9 Appendix A

14.9.1 Measurements for link utilisation

The following charts demonstrate that even though the average link utilisation for the various Hurst parameters is almost identical, the frequency and the size of the peaks increase with the burstiness, causing cell drops in routers and switches. The utilisation is expressed in the [0, 1] scale not in percentages:

Figure 14.40.  Utilisation of the frame relay link for fixed size messages.


Figure 14.41.  Utilisation of the frame relay link for Hurst parameter H=0.55.

Figure 14.42.  Utilisation of the frame relay link for Hurst parameter H=0.95 (many high peaks).

14.9.2 Measurements for message delays

Figures 14.43–14.45 illustrate the relation between response time and various Hurst parameters:

Figure 14.43.  Message delay for fixed size message.


Figure 14.44.  Message delay for H=0.55 (longer response time peaks).

Figure 14.45.  Message delay for H=0.95 (extremely long response time peak).

Exercises

14.9-1 Name some attributes, events, activities and state variables that belong to the following concepts:

  • Server

  • Client

  • Ethernet

  • Packet switched network

  • Call set up in cellular mobile network

  • TCP Slow start algorithm

14.9-2 Read the article about the application of network simulation and write a report on how the article approaches the validation of the model.

14.9-3 This exercise presupposes that a network analyser software package (e.g., LAN Analyzer for Windows or similar) is available for analysing the network traffic. The steps below refer to this software.

  • Start transferring a file between a client and a server on the LAN. Observe the detailed statistics of the utilisation of the data link and the number of packets per second, then save the diagram.

  • Read the Capturing and analysing Packet chapter in the Help of LAN Analyzer.

  • Examine the packets between the client and the server during the file transfer.

  • Save the captured trace information about the packets in .csv format. Analyse this file using a spreadsheet manager. Note any unusual protocol events, such as excessively long intervals between two packets, too many bad packets, etc.

14.9-4 In this exercise we examine the network analysing and baseline-making functions of the Sniffer. The baseline defines the activities that characterise the network; being familiar with it allows us to recognise abnormal operation, which can be caused by a problem or by the growth of the network. Baseline data has to be collected during typical network operation. For statistics like bandwidth utilisation and number of packets per second we need to make a chart that illustrates the information over a given time interval, because data sampled over too short an interval can be misleading. After adding one or more network components a new baseline should be made, so that the activities before and after the expansion can later be compared. The collected data can be exported to spreadsheet managers and modelling tools, which provide further analysis possibilities and help in handling the gathered data.

Sniffer is a very effective network analysing tool. It has several integrated functions.

  • Gathering traffic-trace information for detailed analysis.

  • Problem diagnosis with Expert Analyzer.

  • Real-time monitoring of the network activities.

  • Collecting detailed error and utilization statistics of nodes, dialogues or any parts of the network.

  • Storing the previous utilization and fault information for baseline analysis.

  • When a problem occurs it creates visible or audible alert notifications for the administrators.

  • Monitoring the network with active devices for traffic simulation, measuring response times, counting hops and detecting faults.

  • The History Samples option of the Monitor menu allows us to record the network activities within a given time interval. This data can be used for baseline creation, which helps to set thresholds; during abnormal operation, exceeding these thresholds triggers alerts. Furthermore, this data is useful for determining long-term changes in the network load, so that network expansions can be planned in advance.

  • At most 10 network activities can be monitored simultaneously. Multiple statistics can be started for a given activity, so short-term and long-term tendencies can be analysed concurrently. The network activities available for these statistics depend on the adapter selected in the Adapter dialogue box. For example, in a token ring network the samples of different token ring frame types (e.g., Beacon frames) can be observed, and in Frame Relay networks the samples of different Frame Relay frame types (e.g., LMI frames). The available events depend on the adapter.

Practices:

  • Set up a filter (Capture/Define filter) between your PC and a remote Workstation to sample the IP traffic.

  • Set up the following at the Monitor/History Samples/Multiple History: Octets/sec, utilization, Packets/sec, Collisions/sec and Broadcasts/sec.

  • Configure sample interval for 1 sec. (right click on the Multiple icon and Properties/Sample).

  • Start network monitoring (right click on the Multiple icon and Start Sample).

  • Simulate typical network traffic, e.g., download a large file from a server.

  • Record the “Multiple History” during this period of time. This can be considered as baseline.

  • Set the value of Octets/sec to tenfold of the baseline value at Tools/Options/MAC/Threshold. Define an alert for Octets/sec: when this threshold is exceeded, a message will be sent to our email address. In Figure 14.46 we suppose that this threshold is 1,000.

    Figure 14.46.  Settings.

    Settings.


  • Alerts can be defined as shown in Figure 14.47.

    Figure 14.47.  New alert action.

    New alert action.


  • Set the SMTP server to its own local mail server (Figure 14.48).

    Figure 14.48.  Mailing information.

    Mailing information.


  • Set the Severity of the problem to Critical (Figure 14.49).

    Figure 14.49.  Settings.

    Settings.


  • Collect tracing information (Capture/Start) about network traffic during file download.

  • Stop capture after finished downloading (Capture/Stop then Display).

  • Analyse the packets' TCP/IP layers with the Expert Decode option.

  • Check the “Alert message” received from Sniffer Pro. A message similar to the following, indicating that the Octets/sec threshold has been exceeded, should arrive:

From: ...

Subject: Octets/s: current value = 22086, High Threshold = 9000

To: ...

This event occurred on ...

Save the following files:

  • The “Baseline screens”

  • The Baseline Multiple History.csv file

  • The “alarm e-mail”.

14.9-5 The goal of this practice is to build and validate a baseline model using a network modelling tool. It's supposed that a modelling tool such as COMNET or OPNET is available for the modeller.

First collect response time statistics by pinging a remote computer. The ping command measures the time required for a packet to take a round trip between the client and the server. A possible format of the command is the following: ping hostname -n x -l y -w z > filename, where “x” is the number of packets to be sent, “y” is the packet length in bytes, “z” is the timeout to wait for each reply (in milliseconds) and “filename” is the name of the file that will contain the collected statistics.

For example, the ping 138.87.169.13 -n 5 -l 64 > c:\ping.txt command produces the following file:

Pinging 138.87.169.13 with 64 bytes of data:

Reply from 138.87.169.13: bytes=64 time=178ms TTL=124

Reply from 138.87.169.13: bytes=64 time=133ms TTL=124

Reply from 138.87.169.13: bytes=64 time=130ms TTL=124

Reply from 138.87.169.13: bytes=64 time=127ms TTL=124

Reply from 138.87.169.13: bytes=64 time=127ms TTL=124

  • Create a histogram for these time values and the sequence number of the packets by using a spreadsheet manager.

  • Create a histogram about the number of responses and the response times.

  • Create the cumulative distribution function of the response times, indicating the details at the tail of the distribution.

  • Create the baseline model of the transfers. Define the traffic attributes by the density function created in the previous step.

  • Validate the model.

  • What is the link utilisation in the case of messages with lengths of 32 and 64 bytes?

14.9-6 It is supposed that a modelling tool (e.g., COMNET, OPNET) is available to the modeller. In this practice we intend to determine where to store some frequently accessed image files in a lab. The prognosis is that the addition of new clients next year will triple the usage of these image files. The files can be stored on the server or on the client workstations; we prefer storing them on a server for easier administration. We will create a baseline model of the current network and measure the link utilisation caused by the file transfers. Furthermore, we validate the model with the correct traffic attributes. By scaling the traffic we can forecast the link utilisation for the tripled traffic after the addition of the new clients.

  • Create the topology of the baseline model.

  • Capture traffic trace information during the transfer and import them.

  • Run and validate the model (The number of transferred messages in the model must be equal to the number in the trace file, the time of simulation must be equal to the sum of the Interpacket Times and the link utilization must be equal to the average utilization during capture).

  • Print reports about the number of transferred messages, the message delays, the link utilization of the protocols and the total utilization of the link.

  • Let's triple the traffic.

  • Print reports about the number of transferred messages, the message delay, the link utilization of the protocols and the total utilization of the link.

  • If the link-utilization is under the baseline threshold then we leave the images on the server otherwise we move them to the workstations.

  • What is your recommendation: which is the better place to store the image files, the clients or the server?

14.9-7 The aim of this practice is to compare the performance of shared and switched Ethernet. It can be shown that transforming a shared Ethernet into a switched one is only reasonable if the number of collisions exceeds a given threshold.

a. Create the model of a client/server application that uses a shared Ethernet LAN. The model includes a 10Base5 Ethernet segment that connects one Web server and three groups of workstations. Each group has three PCs and a source that generates “Web Request” messages, to which the Web server application responds. Each “Web Request” generates traffic toward the server; when the “Web Request” message is received by the server, a “Web Response” message is generated and sent to the appropriate client.

  • Each “Web Request” is a message of 10,000 bytes sent by the source to the Web server every Exp(5) seconds. Set the text of the message to “Web Request”.

  • The Web server sends back a message with the “Web Response” text. The size of the message varies between 10,000 and 100,000 bytes, as determined by the Geo(10000, 100000) distribution. The server responds only to the received “Web Request” messages. Set the reply message to “Web Response”.

  • For the rest of the parameters use the default values.

  • Select the “Channel Utilization” and the “Collision Stats” options in the “Links Report”.

  • Select the “Message Delay” option in the “Message + Response Source Report”.

  • Run the simulation for 100 seconds. Animation option can be set.

  • Print the report that shows the “Link Utilization”, the “Collision Statistics” and the report about the message delays between the sources of the traffic.

b. In order to reduce the response time, transform the shared LAN into a switched LAN. Keeping the client/server parameters unchanged, deploy an Ethernet switch between the clients and the server. (The server is connected to the switch with a full duplex 10Base5 connection.)

  • Print the report of “Link Utilization” and “Collision Statistics”, furthermore the report about the message delays between the sources of the traffic.

c. For both models change the 10Base5 connections to 10BaseT. Unlike in the previous situations, the response times will not improve to the same extent; explain why.

14.9-8 A part of a corporate LAN consists of two subnets, each serving a department. One operates according to the IEEE 802.3 CSMA/CD 10BaseT Ethernet standard, while the other communicates with the IEEE 802.5 16Mbps Token Ring standard. The two subnets are connected with a Cisco 2500 series router. The Ethernet LAN includes 10 PCs, one of which functions as a dedicated mail server for both departments. The Token Ring LAN also includes 10 PCs, one of which operates as a file server for the departments.

The corporation plans to engage new employees for both departments. The current network configuration may not be able to serve the new employees, but the corporation has no method to measure the network utilisation and latency. Before engaging the new employees the corporation would like to estimate these current baseline levels. Employees have already complained about slow downloads from the file server.

According to a survey, most of the shared traffic flowing through the LAN originates from the following sources: electronic mail, application file transfers and voice-based messaging systems (department heads can send voice messages to their employees). Conversations with the employees and estimates of the average message sizes provide the basis for the statistical description of the message parameters.

E-mail is used by all employees in both departments. The interviews revealed that the interval between mail sendings can be characterised by an Exponential distribution. The size of the mails can be described by a Uniform distribution between 500 and 2,000 bytes. All emails are transferred to the email server located in the Ethernet LAN, where they are stored in the appropriate user's mailbox.

Users read messages by requesting them from the email server. Mailbox checking can be characterised by a Poisson process with a mean interval of 900 seconds. The size of the messages used for this transaction is 60 bytes. When a user wants to download an email, the server reads the user's mailbox file and transfers the requested mail to the user's PC. The time required to read the files and to process the messages inside can be described by a Uniform distribution over the interval of 3 to 5 seconds. The size of the mails can be described by a Normal distribution whose mean value is 40,000 bytes and standard deviation is 10,000 bytes.

Both departments have 8 employees, each with their own computer, and they download files from the file server. The arrival interval of these requests can be described by an Exponential distribution with a mean value of 900 ms. The request size follows a Uniform distribution, with a minimum of 10 bytes and a maximum of 20 bytes. The requests are only sent to the file server located in the Token Ring network. When a request arrives at the server, it reads the requested file and sends it to the PC; this processing results in very low latency. The size of the files can be described by a Normal distribution whose mean value is 20,000 bytes and standard deviation is 25,000 bytes.

Voice-based messaging is used only by the heads of the two departments, who send such messages only to their employees located in the same department. The sender application makes a connection to the employee's PC; after a successful connection the message is transferred. The size of these messages can be described by a Normal distribution with a mean value of 50,000 bytes and a standard deviation of 1,200 bytes. The arrival interval can be described by a Normal distribution whose mean value is 1,000 seconds and standard deviation is 10 seconds.

TCP/IP is used by all message sources, and the estimated time of packet construction is 0.01 ms.
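
To illustrate what such a statistical traffic description means in practice, the following small C sketch draws e-mail inter-arrival times and message sizes with the inverse transform method. The distribution shapes follow the description above; the mean sending interval is a hypothetical placeholder (the survey value is not given here), and a modelling tool such as COMNET or OPNET generates these samples internally.

       /* Sketch: drawing e-mail traffic samples with the inverse transform
          method. Distribution shapes follow the description above; the mean
          sending interval MEAN_SEND is a hypothetical placeholder value. */
       #include <stdio.h>
       #include <stdlib.h>
       #include <math.h>

       #define MEAN_SEND 600.0      /* hypothetical mean sending interval, s */

       /* uniform random number in (0,1) */
       static double u01(void) { return (rand() + 1.0) / ((double)RAND_MAX + 2.0); }

       /* exponentially distributed value with the given mean */
       static double exp_sample(double mean) { return -mean * log(u01()); }

       /* uniformly distributed value between a and b */
       static double unif_sample(double a, double b) { return a + (b - a) * u01(); }

       int main(void) {
           double t = 0.0;
           for (int i = 1; i <= 5; i++) {
               t += exp_sample(MEAN_SEND);                 /* next sending time */
               double size = unif_sample(500.0, 2000.0);   /* mail size, bytes  */
               printf("mail %d sent at t = %.1f s, size = %.0f bytes\n", i, t, size);
           }
           return 0;
       }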

The topology of the network must be similar to the one in COMNET, Figure 14.50.

Figure 14.50.  Network topology.

Network topology.

The following reports can be used for the simulation:

  • Link Reports: Channel Utilization and Collision Statistics for each link.

  • Node Reports: Number of incoming messages for the node.

  • Message and Response Reports: The delay of the messages for each node.

  • Session Source Reports: Message delay for each node.

By running the model, a much higher response time will be observed at the file server. What type of solution can be proposed to reduce the response time when the quality of service level requires a lower response time? Is it a good idea to set up a second file server on the LAN? What else can be modified?

 CHAPTER NOTES 

Law and Kelton's monograph [217] provides a good overview of network systems; e.g., the definition of networks in Section 14.1 is taken from it. For the classification of computer networks we recommend two monographs, by Sima, Fountain and Kacsuk [304], and by Tanenbaum [313].

Concerning the basics of probability, the book of Alfréd Rényi [286] is recommended. We have summarised the most common statistical distributions based on the book of Banks et al. [29]. A review of the COMNET simulation modelling tool, used to depict the density functions, can be found in two publications of CACI (Consolidated Analysis Centers, Inc.) [53], [186].

Concerning the background of mathematical simulation, the monograph of Ross [291] is useful, and concerning queueing theory the book of Kleinrock [200].

The definition of channel capacity can be found in the dictionaries that are available on the Internet [173], [340]. Information and code theory related details can be found in Jones and Jones' book [187].

Taqqu and Co. [220], [317] deal with long-range dependency.

Figure 14.1, which describes the estimation of the most common distributions in network modelling, is taken from the book of Banks, Carson and Nelson [29].

The OPNET software and its documentation can be downloaded from the address found in [259]. Each phase of simulation is discussed fully in this document.

The effect of traffic burstiness is analysed on the basis of Tibor Gyires's and H. Joseph Wenn's articles [151], [152].

Leland and Co., Crovella and Bestavros [77] report measurements about network traffic.

The self-similarity of network traffic is dealt with by Erramilli, Narayan and Willinger [99], Willinger et al. [344], Beran [37], Mandelbrot [234], and Paxson and Floyd [267]; long-range dependent processes were studied by Mandelbrot and van Ness [235].

Traffic routing models can be found in the following publications: [17], [160], [180], [241], [253], [254], [266], [344].

Figure 14.22 is from the article of Listanti, Eramo and Sabella [224]. The papers [38], [92], [147], [267] contain data on traffic. Long-range dependency was analysed by Addie, Zukerman and Neame [5], Duffield and O'Connell [91], and Narayan and Willinger [99]. The expression “black box modelling” was introduced by Willinger and Paxson [342] in 1997.

Information about the principle of Ockham's Razor can be found on the web page of Francis Heylighen [164]. More information about Sniffer is on Network Associates' web site [239].

Willinger, Taqqu, Sherman and Wilson [343] analyse a structural model. Crovella and Bestavros [77] analysed the traffic of World Wide Web.

The effect of burstiness on network congestion is dealt with by Neuts [253], and by Molnár, Vidács and Nilsson [248].

The Pareto model and the effect of the Hurst parameter are studied by Addie, Zukerman and Neame [5]. The Benoit package can be downloaded from the Internet [326].

Chapter 15. Parallel Computations

Parallel computing is concerned with solving a problem faster by using multiple processors in parallel. These processors may belong to a single machine, or to different machines that communicate through a network. In either case, the use of parallelism requires splitting the problem into tasks that can be solved simultaneously.

In the following, we will take a brief look at the history of parallel computing, and then discuss reasons why parallel computing is harder than sequential computing. We explain differences from the related subjects of distributed and concurrent computing, and mention typical application areas. Finally, we outline the rest of this chapter.

Although the history of parallel computing can be traced back even further, the first parallel computer is commonly said to be Illiac IV, an experimental 64-processor machine that became operational in 1972. The parallel computing area boomed in the late 80s and early 90s, when several new companies were founded to build parallel machines of various types. Unfortunately, software was difficult to develop and non-portable at that time. Therefore, the machines were only adopted in the most compute-intensive areas of science and engineering, a market too small to compensate for the high development costs. Thus many of the companies had to give up.

On the positive side, people soon discovered that cheap parallel computers can be built by interconnecting standard PCs and workstations. As networks became faster, these so-called clusters soon achieved speeds of the same order as the special-purpose machines. At present, the Top 500 list, a regularly updated survey of the most powerful computers worldwide, contains 42% clusters. Parallel computing also profits from the increasing use of multiprocessor machines which, while designed as servers for web etc., can as well be deployed in parallel computing. Finally, software portability problems have been solved by establishing widely used standards for parallel programming. The most important standards, MPI and OpenMP, will be explained in Subsections 15.3.1 and 15.3.2 of this book.

In summary, there is now an affordable hardware basis for parallel computing. Nevertheless, the area has not yet entered the mainstream, which is largely due to difficulties in developing parallel software. Whereas writing a sequential program requires finding an algorithm, that is, a sequence of elementary operations that solves the problem, and formulating the algorithm in a programming language, parallel computing poses additional challenges:

  • Elementary operations must be grouped into tasks that can be solved concurrently.

  • The tasks must be scheduled onto processors.

  • Depending on the architecture, data must be distributed to memory modules.

  • Processes and threads must be managed, i.e., started, stopped and so on.

  • Communication and synchronisation must be organised.

Of course, it is not sufficient to find any grouping, schedule etc. that work, but it is necessary to find solutions that lead to fast programs. Performance measures and general approaches to performance optimisation will be discussed in Section 15.2, where we will also elaborate on the items above. Unlike in sequential computing, different parallel architectures and programming models favour different algorithms.

In consequence, the design of parallel algorithms is more complex than the design of sequential algorithms. To cope with this complexity, algorithm designers often use simplified models. For instance, the Parallel Random Access Machine (see Subsection 15.4.1) provides a model in which opportunities and limitations of parallelisation can be studied, but it ignores communication and synchronisation costs.

We will now contrast parallel computing with the related fields of distributed and concurrent computing. Like parallel computing, distributed computing uses interconnected processors and divides a problem into tasks, but the purpose of division is different. Whereas in parallel computing, tasks are executed at the same time, in distributed computing tasks are executed at different locations, using different resources. These goals overlap, and many applications can be classified as both parallel and distributed, but the focus is different. Parallel computing emphasises homogeneous architectures, and aims at speeding up applications, whereas distributed computing deals with heterogeneity and openness, so that applications profit from the inclusion of different kinds of resources. Parallel applications are typically stand-alone and predictable, whereas distributed applications consist of components that are brought together at runtime.

Concurrent computing is not bound to the existence of multiple processors, but emphasises the fact that several sub-computations are in progress at the same time. The most important issue is guaranteeing correctness for any execution order, which can be parallel or interleaved. Thus, the relation between concurrency and parallelism is comparable to the situation of reading several books at a time. Reading the books concurrently corresponds to having a bookmark in each of them and to keep track of all stories while switching between books. Reading the books in parallel, in contrast, requires to look into all books at the same time (which is probably impossible in practice). Thus, a concurrent computation may or may not be parallel, but a parallel computation is almost always concurrent. An exception is data parallelism, in which the instructions of a single program are applied to different data in parallel. This approach is followed by SIMD architectures, as described below.

Due to the emphasis on speed, typical application areas of parallel computing are science and engineering, especially numerical solvers and simulations. These applications tend to have high and increasing computational demands, since more computing power allows one to work with more detailed models that yield more accurate results. A second reason for using parallel machines is their higher memory capacity, due to which more data fit into a fast memory level such as cache.

The rest of this chapter is organised as follows: In Section 15.1, we give a brief overview and classification of current parallel architectures. Then, we introduce basic concepts such as task and process, and discuss performance measures and general approaches to the improvement of efficiency in Section 15.2. Next, Section 15.3 describes parallel programming models, with focus on the popular MPI and OpenMP standards. After having given this general background, the rest of the chapter delves into the subject of parallel algorithms from a more theoretical perspective. Based on example algorithms, techniques for parallel algorithm design are introduced. Unlike in sequential computing, there is no universally accepted model for parallel algorithm design and analysis, but various models are used depending on purpose. Each of the models represents a different compromise between the conflicting goals of accurately reflecting the structure of real architectures on one hand, and keeping algorithm design and analysis simple on the other. Section 15.4 gives an overview of the models, Section 15.5 introduces the basic concepts of parallel algorithmics, Sections 15.6 and 15.7 explain deterministic example algorithms for PRAM and mesh computational model.

15.1 Parallel architectures

A simple but well-known classification of parallel architectures was given in 1972 by Michael Flynn. He divides computers into four classes: SISD, SIMD, MISD, and MIMD architectures, as follows:

  • SI stands for “single instruction”, that is, the machine carries out a single instruction at a time.

  • MI stands for “multiple instruction”, that is, different processors may carry out different instructions at a time.

  • SD stands for “single data”, that is, only one data item is processed at a time.

  • MD stands for “multiple data”, that is, multiple data items may be processed at a time.

SISD computers are von-Neumann machines. MISD computers have probably never been built. Early parallel computers were SIMD, but today most parallel computers are MIMD. Although the scheme is of limited classification power, the abbreviations are widely used.

The following more detailed classification distinguishes parallel machines into SIMD, SMP, ccNUMA, nccNUMA, NORMA, clusters, and grids.

15.1.1 SIMD architectures

As depicted in Figure 15.1, a SIMD computer is composed of a powerful control processor and several less powerful processing elements (PEs). The PEs are typically arranged as a mesh so that each PE can communicate with its immediate neighbours. A program is a single thread of instructions. The control processor, like the processor of a sequential machine, repeatedly reads the next instruction and decodes it. If the instruction is sequential, the control processor carries out the instruction on data in its own memory. If the instruction is parallel, the control processor broadcasts the instruction to the various PEs, and these simultaneously apply the instruction to different data in their respective memories. As an example, let the instruction be LD reg, 100. Then, all processors load the contents of memory address 100 to reg, but memory address 100 is physically different for each of them. Thus, all processors carry out the same instruction, but read different values (therefore “SIMD”). For a statement of the form if test then if_branch else else_branch, first all processors carry out the test simultaneously, then some carry out if_branch while the rest sit idle, and finally the rest carry out else_branch while the former sit idle. In consequence, SIMD computers are only suited for applications with a regular structure. The architectures have been important historically, but have nowadays almost disappeared.

Figure 15.1.  SIMD architecture.

SIMD architecture.

15.1.2 Symmetric multiprocessors

Symmetric multiprocessors (SMP) contain multiple processors that are connected to a single memory. Each processor may access each memory location through standard load/store operations of the hardware. Therefore, programs, including the operating system, must only be stored once. The memory can be physically divided into modules, but the access time is the same for each pair of a processor and a memory module (therefore “symmetric”). The processors are connected to the memory by a bus (see Figure 15.2), by a crossbar, or by a network of switches. In either case, there is a delay for memory accesses which, partially due to competition for network resources, grows with the number of processors.

Figure 15.2.  Bus-based SMP architecture.

Bus-based SMP architecture.


In addition to main memory, each processor has one or several levels of cache with faster access. Between memory and cache, data are moved in units of cache lines. Storing a data item in multiple caches (and writing to it) gives rise to coherency problems. In particular, we speak of false sharing if several processors access the same cache line, but use different portions of it. Since coherency mechanisms work at the granularity of cache lines, each processor assumes that the other would have updated its data, and therefore the cache line is sent back and forth.
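
As an illustration of false sharing, the following sketch (C with POSIX threads; the cache line size and all names are our own assumptions) lets two threads increment neighbouring counters. In the unpadded variant both counters typically share one cache line, so the line is sent back and forth between the caches; padding each counter to a full line avoids this.

       /* Sketch of false sharing: two threads update adjacent counters.
          Compile e.g. with: cc -O2 -pthread falseshare.c
          The padded variant usually runs noticeably faster on an SMP. */
       #include <pthread.h>
       #include <stdio.h>

       #define ITER 50000000L
       #define LINE 64                      /* assumed cache line size in bytes */

       struct padded { long value; char pad[LINE - sizeof(long)]; };

       static long shared_plain[2];           /* both counters in one cache line */
       static struct padded shared_padded[2]; /* one counter per cache line      */

       static void *work_plain(void *arg) {
           long id = (long)arg;
           for (long i = 0; i < ITER; i++) shared_plain[id]++;
           return NULL;
       }

       static void *work_padded(void *arg) {
           long id = (long)arg;
           for (long i = 0; i < ITER; i++) shared_padded[id].value++;
           return NULL;
       }

       int main(void) {
           pthread_t t[2];
           for (long i = 0; i < 2; i++) pthread_create(&t[i], NULL, work_plain, (void *)i);
           for (int i = 0; i < 2; i++)  pthread_join(t[i], NULL);
           for (long i = 0; i < 2; i++) pthread_create(&t[i], NULL, work_padded, (void *)i);
           for (int i = 0; i < 2; i++)  pthread_join(t[i], NULL);
           printf("%ld %ld %ld %ld\n", shared_plain[0], shared_plain[1],
                  shared_padded[0].value, shared_padded[1].value);
           return 0;
       }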

15.1.3 Cache-coherent NUMA architectures

NUMA stands for Non-Uniform Memory Access, and contrasts with the symmetry property of the previous class. The general structure of ccNUMA architectures is depicted in Figure 15.3.

Figure 15.3.  ccNUMA architecture.

ccNUMA architecture.


As shown in the figure, each processor owns a local memory, which can be accessed faster than the rest called remote memory. All memory is accessed through standard load/store operations, and hence programs, including the operating system, must only be stored once. As in SMPs, each processor owns one or several levels of cache; cache coherency is taken care of by the hardware.

15.1.4 Non-cache-coherent NUMA architectures

nccNUMA (non cache coherent Non-Uniform Memory Access) architectures differ from ccNUMA architectures in that the hardware puts into a processor's cache only data from local memory. Access to remote memory can still be accomplished through standard load/store operations, but it is now up to the operating system to first move the corresponding page to local memory. This difference simplifies hardware design, and thus nccNUMA machines scale to higher processor numbers. On the backside, the operating system gets more complicated, and the access time to remote memory grows. The overall structure of Figure 15.3 applies to nccNUMA architectures as well.

15.1.5 No remote memory access architectures

NORMA (NO Remote Memory Access) architectures differ from the previous class in that the remote memory must be accessed through slower I/O operations as opposed to load/store operations. Each node, consisting of processor, cache and local memory, as depicted in Figure 15.3, holds an own copy of the operating system, or at least of central parts thereof. Whereas SMP, ccNUMA, and nccNUMA architectures are commonly classified as shared memory machines, SIMD architectures, NORMA architectures, clusters, and grids (see below) fall under the heading of distributed memory.

15.1.6 Clusters

According to Pfister, a cluster is a type of parallel or distributed system that consists of a collection of interconnected whole computers that are used as a single, unified computing resource. Here, the term “whole computer” denotes a PC, workstation or, increasingly important, SMP, that is, a node that consists of processor(s), memory, possibly peripheries, and operating system. The use as a single, unified computing resource is also denoted as single system image (SSI). For instance, we speak of SSI if it is possible to login into the system as a whole instead of into individual nodes, or if there is a single file system. Obviously, the SSI property is gradual, and hence the borderline to distributed systems is fuzzy. The borderline to NORMA architectures is fuzzy as well, where the classification depends on the degree to which the system is designed as a whole instead of being built from individual components.

Clusters can be classified according to their use for parallel computing, high throughput computing, or high availability. Parallel computing clusters can be further divided into dedicated clusters, which are solely built for the use as parallel machines, and campus-wide clusters, which are distributed systems with part-time use as a cluster. Dedicated clusters typically do not contain peripheries in their nodes, and are interconnected through a high-speed network. Nodes of campus-wide clusters, in contrast, are often desktop PCs, and the standard network is used for intra-cluster communication.

15.1.7 Grids

A grid is a hardware/software infrastructure for shared usage of resources and problem solution. Grids enable coordinated access to resources such as processors, memories, data, devices, and so on. Parallel computing is one out of several emerging application areas. Grids differ from other parallel architectures in that they are large, heterogeneous, and dynamic. Management is complicated by the fact that grids cross organisational boundaries.

15.2 Performance in practice

As explained in the introduction, parallel computing splits a problem into tasks that are solved independently. The tasks are implemented as either processes or threads. A detailed discussion of these concepts can be found in operating system textbooks such as Tanenbaum. Briefly stated, processes are programs in execution. For each process, information about resources such as memory segments, files, and signals is stored, whereas threads exist within processes such that multiple threads share resources. In particular, threads of a process have access to shared memory, while processes (usually) communicate through explicit message exchange. Each thread owns a separate PC and other register values, as well as a stack for local variables. Processes can be considered as units for resource usage, whereas threads are units for execution on the CPU. As less information needs to be stored, it is faster to create, destroy and switch between threads than it is for processes.

Whether threads or processes are used depends on the architecture. On shared-memory machines, threads are usually faster, although processes may be used for program portability. On distributed memory machines, only processes are a priori available. Threads can be used if there is a software layer (distributed shared memory) that implements a shared memory abstraction, but these threads have higher communication costs.

Whereas the notion of tasks is problem-related, the notions of processes and threads refer to implementation. When designing an algorithm, one typically identifies a large number of tasks that can potentially be run in parallel, and then maps several of them onto the same process or thread.

Parallel programs can be written in two styles that can also be mixed: With data parallelism, the same operation is applied to different data at a time. The operation may be a machine instruction, as in SIMD architectures, or a complex operation such as a function application. In the latter case, different processors carry out different instructions at a time. With task parallelism, in contrast, the processes/threads carry out different tasks. Since a function may have an if or case statement as the outermost construct, the borderline between data parallelism and task parallelism is fuzzy.

Parallel programs that are implemented with processes can be further classified as using Single Program Multiple Data (SPMD) or Multiple Program Multiple Data (MPMD) coding styles. With SPMD, all processes run the same program, whereas with MPMD they run different programs. MPMD programs are task-parallel, whereas SPMD programs may be either task-parallel or data-parallel. In SPMD mode, task parallelism is expressed through conditional statements.

As the central goal of parallel computing is to run programs faster, performance measures play an important role in the field. An obvious measure is execution time, yet more frequently the derived measure of speedup is used. For a given problem, speedup is defined by

       S(p) = T* / T(p),

where T* denotes the running time of the fastest sequential algorithm, and T(p) denotes the running time of the parallel algorithm on p processors. Depending on context, speedup may alternatively refer to using p processes or threads instead of p processors. A related, but less frequently used measure is efficiency, defined by

       E(p) = S(p) / p.

Unrelated to this definition, the term efficiency is also used informally as a synonym for good performance.

Figure 15.4 shows ideal, typical, and super-linear speedup curves. The ideal curve reflects the assumption that an execution that uses twice as many processors requires half of the time. Hence, ideal speedup corresponds to an efficiency of one. Super-linear speedup may arise due to cache effects, that is, the use of multiple processors increases the total cache size, and thus more data accesses can be served from cache instead of from slower main memory.

Figure 15.4.  Ideal, typical, and super-linear speedup curves.

Ideal, typical, and super-linear speedup curves.


Typical speedup stays below ideal speedup, and grows up to some number of processors. Beyond that, use of more processors slows down the program. The difference between typical and ideal speedups has several reasons:

  • Amdahl's law states that each program contains a serial portion that is not amenable to parallelisation. If this portion takes the fraction f of the sequential running time, then T(p) >= f * T*, and thus S(p) <= 1/f, that is, the speedup is bounded from above by a constant. Fortunately, another observation, called the Gustafson-Barsis law, reduces the practical impact of Amdahl's law. It states that in typical applications, the parallel variant does not speed up a fixed problem, but runs larger instances thereof. In this case, the serial portion may grow slower than the total work, so that the bound 1/f is no longer constant (standard formulations of both laws are given after this list).

  • Task management, that is, the starting, stopping, interrupting and scheduling of processes and threads, induces a certain overhead. Moreover, it is usually impossible to evenly balance the load among the processes/threads.

  • Communication and synchronisation slow down the program. Communication denotes the exchange of data, and synchronisation denotes other types of coordination such as the guarantee of mutual exclusion. Even with high-speed networks, communication and synchronisation costs are orders of magnitude higher than computation costs. Apart from physical transmission costs, this is due to protocol overhead and delays from competition for network resources.
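
For reference, the standard textbook formulations of the two laws, with f denoting the serial fraction of the work and the notation of the speedup definition above (Exercise 15.2-4 asks for the proofs), are:

       \[ S(p) \;\le\; \frac{1}{f + (1-f)/p} \;\le\; \frac{1}{f} \qquad \text{(Amdahl's law)} \]

       \[ S(p) \;=\; f + (1-f)\,p \qquad \text{(Gustafson-Barsis law, scaled speedup)} \]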

Performance can be improved by minimising the impact of the factors listed above. Amdahl's law is hard to circumvent, except that a different algorithm with a smaller serial fraction f may be devised, possibly at the price of a larger T*. Algorithmic techniques will be covered in later sections; for the moment, we concentrate on the other performance factors.

As explained in the previous section, tasks are implemented as processes or threads such that a process/thread typically carries out multiple tasks. For high performance, the granularity of processes/threads should be chosen in relation to the architecture. Too many processes/threads unnecessarily increase the costs of task management, whereas too few processes/threads lead to poor machine usage. It is useful to map several processes/threads onto the same processor, since the processor can switch when it has to wait for I/O or other delays. Large-granularity processes/threads have the additional advantage of a better communication-to-computation ratio, whereas fine-granularity processes/threads are more amenable to load balancing.

Load balancing can be accomplished with static or dynamic schemes. If the running time of the tasks can be estimated in advance, static schemes are preferable. In these schemes, the programmer assigns to each process/thread some number of tasks with about the same total costs. An example of a dynamic scheme is master/slave. In this scheme, first a master process assigns one task to each slave process. Then, repeatedly, whenever a slave finishes a task, it reports to the master and is assigned a next task, until all tasks have been processed. This scheme achieves good load balancing at the price of overhead for task management.
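
A sketch of the master/slave scheme, written with the MPI message-passing functions introduced later in Subsection 15.3.1 (the task itself is only a placeholder computation; tags and buffer layout are our own choices):

       /* Sketch of the master/slave scheme with MPI (see Subsection 15.3.1).
          The "task" is a placeholder computation; tags and buffer layout are
          our own choices. Process 0 is the master, all others are slaves. */
       #include <mpi.h>
       #include <stdio.h>

       #define NTASKS   100
       #define TAG_TASK   1
       #define TAG_RESULT 2
       #define TAG_STOP   3

       static double solve_task(int task) { return (double)task * task; }

       int main(int argc, char **argv) {
           int rank, size;
           MPI_Init(&argc, &argv);
           MPI_Comm_rank(MPI_COMM_WORLD, &rank);
           MPI_Comm_size(MPI_COMM_WORLD, &size);

           if (rank == 0) {                       /* master */
               int next = 0, active = 0;
               double result;
               MPI_Status st;
               /* hand out one task per slave initially */
               for (int s = 1; s < size && next < NTASKS; s++, next++, active++)
                   MPI_Send(&next, 1, MPI_INT, s, TAG_TASK, MPI_COMM_WORLD);
               /* collect results and reassign tasks until all are done */
               while (active > 0) {
                   MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, TAG_RESULT,
                            MPI_COMM_WORLD, &st);
                   active--;
                   if (next < NTASKS) {
                       MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_TASK,
                                MPI_COMM_WORLD);
                       next++; active++;
                   }
               }
               for (int s = 1; s < size; s++)     /* tell all slaves to stop */
                   MPI_Send(&next, 1, MPI_INT, s, TAG_STOP, MPI_COMM_WORLD);
           } else {                               /* slave */
               int task;
               MPI_Status st;
               for (;;) {
                   MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
                   if (st.MPI_TAG == TAG_STOP) break;
                   double result = solve_task(task);
                   MPI_Send(&result, 1, MPI_DOUBLE, 0, TAG_RESULT, MPI_COMM_WORLD);
               }
           }
           MPI_Finalize();
           return 0;
       }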

The highest impact on performance usually comes from reducing communication/synchronisation costs. Obvious improvements result from changes in the architecture or system software, in particular from reducing latency, that is, the delay for accessing a remote data item, and from increasing bandwidth, that is, the amount of data that can be transferred per unit of time.

The algorithm designer or application programmer can reduce communication/synchronisation costs by minimising the number of interactions. An important approach to achieve this minimisation is locality optimisation. Locality, a property of (sequential or parallel) programs, reflects the degree of temporal and spatial concentration of accesses to the same data. In distributed-memory architectures, for instance, data should be stored at the processor that uses the data. Locality can be improved by code transformations, data transformations, or a combination thereof. As an example, consider the following program fragment to be executed on three processors:

       for (i=0; i<N; i++) in parallel
          for (j=0; j<N; j++)
             f(A[i][j]);

Here, the keyword “in parallel” means that the iterations of the outer loop are evenly distributed among the three processors so that the first processor runs iterations 0, ..., N/3 - 1, the second runs iterations N/3, ..., 2N/3 - 1, and the third runs iterations 2N/3, ..., N - 1 (rounded if necessary). The function f is supposed to be free of side effects.

Figure 15.5.  Locality optimisation by data transformation.

Locality optimisation by data transformation.


With the data distribution of Figure 15.5a), locality is poor, since many accesses refer to remote memory. Locality can be improved by changing the data distribution to that of Figure 15.5b) or, alternatively, by changing the program into

       for (j=0; j<N; j++) in parallel
          for (i=0; i<N; i++)
             f(A[i][j]);

The second alternative, code transformations, has the advantage of being applicable selectively to a portion of code, whereas data transformations influence the whole program so that an improvement in one part may slow down another. Data distributions are always correct, whereas code transformations must respect data dependencies, which are ordering constraints between statements. For instance, in

       a = 3; (1)
       b = a; (2)

a data dependence occurs between statements (1) and (2). Exchanging the statements would lead to an incorrect program.

On shared-memory architectures, a programmer does not specify the data distribution, but locality has a high impact on performance as well. Programs run faster if data that are used together are stored in the same cache line. On shared-memory architectures, the data layout is chosen by the compiler, e.g. row-wise in C. The programmer has only indirect influence through the manner in which he or she declares data structures.

Another opportunity to reduce communication costs is replication. For instance, it pays off to store frequently used data at multiple processors, or to repeat short computations instead of communicating the result.

Synchronisations are necessary for correctness, but they slow down program execution, first because of their own execution costs, and second because they cause processes to wait for each other. Therefore, excessive use of synchronisation should be avoided. In particular, critical sections (in which processes/threads require exclusive access to some resource) should be kept at a minimum. We speak of sequentialisation if only one process is active at a time while the others are waiting.
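
A small sketch (C with POSIX threads; names and the work function are our own) of keeping a critical section short: each thread accumulates into a private variable, and only the final update of the shared total is protected by the mutex.

       /* Sketch: keeping a critical section short. Each thread accumulates
          into a private variable; only the final update of the shared total
          is protected by the mutex. Compile e.g. with: cc -O2 -pthread sum.c */
       #include <pthread.h>
       #include <stdio.h>

       #define NTHREADS 4

       static double total = 0.0;
       static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

       static double expensive_work(int i) {     /* placeholder computation */
           double x = 0.0;
           for (int k = 1; k <= 1000; k++) x += 1.0 / (i + k);
           return x;
       }

       static void *worker(void *arg) {
           int id = *(int *)arg;
           double local = 0.0;
           for (int i = id; i < 10000; i += NTHREADS)
               local += expensive_work(i);       /* no lock needed here */
           pthread_mutex_lock(&lock);            /* short critical section */
           total += local;
           pthread_mutex_unlock(&lock);
           return NULL;
       }

       int main(void) {
           pthread_t t[NTHREADS];
           int id[NTHREADS];
           for (int i = 0; i < NTHREADS; i++) {
               id[i] = i;
               pthread_create(&t[i], NULL, worker, &id[i]);
           }
           for (int i = 0; i < NTHREADS; i++) pthread_join(t[i], NULL);
           printf("total = %f\n", total);
           return 0;
       }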

Finally, performance can be improved by latency hiding, that is, parallelism between computation and communication. For instance, a process can start a remote read some time before it needs the result (prefetching), or write data to remote memory in parallel to the following computations.

Exercises

15.2-1 For standard matrix multiplication, identify tasks that can be solved in parallel. Try to identify as many tasks as possible. Then, suggest different opportunities for mapping the tasks onto (a smaller number of) threads, and compare these mappings with respect to their efficiency on a shared-memory architecture.

15.2-2 Consider a parallel program that takes as input a number n and computes as output the number of primes in the range 2, ..., n. Task i of the program should determine whether i is a prime, by systematically trying out all potential factors, that is, dividing i by 2, 3, ..., up to the square root of i. The program is to be implemented with a fixed number of processes or threads. Suggest different opportunities for this implementation and discuss their pros and cons. Take into account both static and dynamic load balancing schemes.

15.2-3 Determine the data dependencies of the following stencil code:

       for (t=0; t<tmax; t++)
          for (i=1; i<n; i++)
             for (j=1; j<n; j++)
                a[i][j] += a[i-1][j] + a[i][j-1];

Restructure the code so that it can be parallelised.

15.2-4 Formulate and prove the bounds on the speedup known as Amdahl's law and the Gustafson-Barsis law. Explain the apparent contradiction between these laws. What can you say about the practical speedup?

15.3 Parallel programming

Partly due to the use of different architectures and the novelty of the field, a large number of parallel programming models has been proposed. The most popular models today are message passing as specified in the Message Passing Interface standard (MPI), and structured shared-memory programming as specified in the OpenMP standard. These programming models are discussed in Subsections 15.3.1 and 15.3.2, respectively. Other important models such as threads programming, data parallelism, and automatic parallelisation are outlined in Subsection 15.3.3.

15.3.1 MPI programming

As the name says, MPI is based on the programming model of message passing. In this model, several processes run in parallel and communicate with each other by sending and receiving messages. The processes do not have access to a shared memory, but accomplish all communication through explicit message exchange. A communication involves exactly two processes: one that executes a send operation, and another that executes a receive operation. Beyond message passing, MPI includes collective operations and other communication mechanisms.

Message passing is asymmetric in that the sender must state the identity of the receiver, whereas the receiver may either state the identity of the sender, or declare its willingness to receive data from any source. As both sender and receiver must actively take part in a communication, the programmer must plan in advance when a particular pair of processes will communicate. Messages can be exchanged for several purposes:

  • exchange of data with details such as the size and types of data having been planned in advance by the programmer

  • exchange of control information that concerns a subsequent message exchange, and

  • synchronisation that is achieved since an incoming message informs the receiver about the sender's progress. Additionally, the sender may be informed about the receiver's progress, as will be seen later. Note that synchronisation is a special case of communication.

The MPI standard was introduced in 1994 by the MPI Forum, a group of hardware and software vendors, research laboratories, and universities. A significantly extended version, MPI-2, appeared in 1997. MPI-2 has about the same core functionality as MPI-1, but introduces additional classes of functions.

MPI describes a set of library functions with language bindings to C, C++, and Fortran. With notable exceptions in MPI-2, most MPI functions deal with interprocess communication, leaving issues of process management, such as facilities to start and stop processes, open. Such facilities must be added outside the standard, and are consequently not portable. For this and other reasons, MPI programs typically use a fixed set of processes that are started together at the beginning of a program run. Programs can be coded in SPMD or MPMD styles. It is possible to write parallel programs using only six base functions (a minimal sketch is given after the following list):

  • MPI_Init must be called before any other MPI function.

  • MPI_Finalize must be called after the last MPI function.

  • MPI_Comm_size yields the total number of processes in the program.

  • MPI_Comm_rank yields the number of the calling process, with processes being numbered starting from 0.

  • MPI_Send sends a message. The function has the following parameters:

    • address, size, and data type of the message,

    • number of the receiver,

    • message tag, which is a number that characterises the message in a similar way like the subject characterises an email,

    • communicator, which is a group of processes as explained below.

  • MPI_Recv receives a message. The function has the same parameters as MPI_Send, except that only an upper bound is required for the message size, a wildcard may be used for the sender, and an additional parameter called status returns information about the received message, e.g. sender, size, and tag.
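
A minimal SPMD-style sketch that uses only these six base functions (plus the MPI_ANY_SOURCE wildcard mentioned above) might look as follows; it is our own illustrative program, not necessarily the one shown in Figure 15.6:

       /* Minimal SPMD sketch using only the six base functions (plus the
          MPI_ANY_SOURCE wildcard): every process except 0 sends its rank to
          process 0, which prints the incoming greetings. */
       #include <mpi.h>
       #include <stdio.h>

       int main(int argc, char **argv) {
           int rank, size;
           MPI_Init(&argc, &argv);
           MPI_Comm_rank(MPI_COMM_WORLD, &rank);
           MPI_Comm_size(MPI_COMM_WORLD, &size);

           if (rank != 0) {
               MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
           } else {
               int who;
               MPI_Status status;
               for (int i = 1; i < size; i++) {
                   MPI_Recv(&who, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                            MPI_COMM_WORLD, &status);
                   printf("received a message from process %d\n", who);
               }
           }
           MPI_Finalize();
           return 0;
       }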

Figure 15.6 depicts an example MPI program.

Figure 15.6.  A simple MPI program.

A simple MPI program.


Although the above functions are sufficient to write simple programs, many more functions help to improve the efficiency and/or structure MPI programs. In particular, MPI-1 supports the following classes of functions:

  • Alternative functions for pairwise communication: The base MPI_Send function, also called standard mode send, returns when either the message has been delivered to the receiver, or the message has been buffered by the system. This decision is left to MPI. Variants of MPI_Send enforce one of the alternatives: In synchronous mode, the send function only returns when the receiver has started receiving the message, thus synchronising in both directions. In buffered mode, the system is required to store the message if the receiver has not yet issued MPI_Recv.

    On both the sender and receiver sides, the functions for standard, synchronous, and buffered modes each come in blocking and nonblocking variants. Blocking variants have been described above. Nonblocking variants return immediately after having been called, to let the sender/receiver continue with program execution while the system accomplishes communication in the background. Nonblocking communications must be completed by a call to MPI_Wait or MPI_Test to make sure the communication has finished and the buffer may be reused. Variants of the completion functions allow waiting for multiple outstanding requests (a small sketch of a nonblocking exchange is given after this list).

    MPI programs can deadlock, for instance if a process P first issues a send to process Q and then a receive from Q, while Q does the same with respect to P. As a possible way out, MPI supports a combined send/receive function.

    In many programs, a pair of processes repeatedly exchanges data with the same buffers. To reduce communication overhead in these cases, a kind of address label can be used, called persistent communication. Finally, the MPI functions MPI_Probe and MPI_Iprobe allow a process to inspect the size and other characteristics of a message before receiving it.

  • Functions for Datatype Handling: In simple forms of message passing, an array of equally-typed data (e.g. float) is exchanged. Beyond that, MPI allows to combine data of different types in a single message, and to send data from non-contiguous buffers such as every second element of an array. For these purposes, MPI defines two alternative classes of functions: user-defined data types describe a pattern of data positions/types, whereas packaging functions help to put several data into a single buffer. MPI supports heterogeneity by automatically converting data if necessary.

  • Collective communication functions: These functions support frequent patterns of communication such as broadcast (one process sends a data item to all other processes). Although any pattern can be implemented by a sequence of sends/receives, collective functions should be preferred since they improve program compactness/understandability, and often have an optimised implementation. Moreover, implementations can exploit specifics of an architecture, and so a program that is ported to another machine may run efficiently on the new machine as well, by using the optimised implementation of that machine.

  • Group and communicator management functions: As mentioned above, the send and receive functions contain a communicator argument that describes a group of processes. Technically, a communicator is a distributed data structure that tells each process how to reach the other processes of its group, and contains additional information called attributes. The same group may be described by different communicators. A message exchange only takes place if the communicator arguments of MPI_Send and MPI_Recv match. Hence, the use of communicators partitions the messages of a program into disjoint sets that do not influence each other. This way, communicators help structuring programs, and contribute to correctness. For libraries that are implemented with MPI, communicators allow to separate library traffic from traffic of the application program. Groups/communicators are necessary to express collective communications. The attributes in the data structure may contain application-specific information such as an error handler. In addition to the (intra)communicators described so far, MPI supports intercommunicators for communication between different process groups.

    MPI-2 adds four major groups of functions:

  • Dynamic process management functions: With these functions, new MPI processes can be started during a program run. Additionally, independently started MPI programs (each consisting of multiple processes) can get into contact with each other through a client/server mechanism.

  • One-sided communication functions: One-sided communication is a type of shared-memory communication in which a group of processes agrees to use part of their private address spaces as a common resource. Communication is accomplished by writing into and reading from that shared memory. One-sided communication differs from other shared-memory programming models such as OpenMP in that explicit function calls are required for the memory access.

  • Parallel I/O functions: A large set of functions allows multiple processes to collectively read from or write to the same file.

  • Collective communication functions for intercommunicators: These functions generalise the concept of collective communication to intercommunicators. For instance, a process of one group may broadcast a message to all processes of another group.
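
As a small illustration of the nonblocking point-to-point functions mentioned above (buffer sizes and tags are our own choices), a receiver can post MPI_Irecv early, overlap the transfer with computation, and complete it with MPI_Wait before the data are used:

       /* Sketch: overlapping communication with computation (latency hiding)
          using nonblocking MPI. Process 1 posts the receive early, computes
          while the message is in transit, and completes it with MPI_Wait. */
       #include <mpi.h>
       #include <stdio.h>

       #define N 1000

       int main(int argc, char **argv) {
           int rank;
           double buf[N], local[N];
           MPI_Request req;
           MPI_Status status;

           MPI_Init(&argc, &argv);
           MPI_Comm_rank(MPI_COMM_WORLD, &rank);

           if (rank == 0) {
               for (int i = 0; i < N; i++) buf[i] = i;
               MPI_Send(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
           } else if (rank == 1) {
               MPI_Irecv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
               for (int i = 0; i < N; i++)     /* useful work while the  */
                   local[i] = 0.5 * i;         /* message is in transit  */
               MPI_Wait(&req, &status);        /* now buf may be used    */
               printf("buf[0] + local[0] = %f\n", buf[0] + local[0]);
           }
           MPI_Finalize();
           return 0;
       }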

15.3.2 OpenMP programming

OpenMP derives its name from being an open standard for multiprocessing, that is for architectures with a shared memory. Because of the shared memory, we speak of threads (as opposed to processes) in this section.

Shared-memory communication is fundamentally different from message passing: Whereas message passing immediately involves two processes, shared-memory communication uncouples the processes by inserting a medium in-between. We speak of read/write instead of send/receive, that is, a thread writes into memory, and another thread later reads from it. The threads need not know each other, and a written value may be read by several threads. Reading and writing may be separated by an arbitrary amount of time. Unlike in message passing, synchronisation must be organised explicitly, to let a reader know when the writing has finished, and to avoid concurrent manipulation of the same data by different threads.

OpenMP is one type of shared-memory programming, while others include one-sided communication as outlined in Subsection 15.3.1, and threads programming as outlined in Subsection 15.3.3. OpenMP differs from other models in that it enforces a fork-join structure, which is depicted in Figure 15.7. A program starts execution as a single thread, called master thread, and later creates a team of threads in a so-called parallel region. The master thread is part of the team. Parallel regions may be nested, but the threads of a team must finish together. As shown in the figure, a program may contain several parallel regions in sequence, with possibly different numbers of threads.

Figure 15.7.  Structure of an OpenMP program.



As another characteristic, OpenMP uses compiler directives as opposed to library functions. Compiler directives are hints that a compiler may or may not take into account. In particular, a sequential compiler ignores the directives. OpenMP supports incremental parallelisation, in which one starts from a sequential program, inserts directives at the most performance-critical sections of code, later inserts more directives if necessary, and so on.

OpenMP was introduced in 1998; version 2.0 appeared in 2002. In addition to compiler directives, OpenMP uses a few library functions and environment variables. The standard is available for C, C++, and Fortran.

Programming OpenMP is easier than programming MPI since the compiler does part of the work. An OpenMP programmer chooses the number of threads, and then specifies work sharing in one of the following ways:

  • Explicitly: A thread can request its own number by calling the library function omp_get_thread_num. Then, a conditional statement evaluating this number explicitly assigns tasks to the threads, similarly to SPMD-style MPI programs.

  • Parallel loop: The compiler directive #pragma omp parallel for indicates that the following for loop may be executed in parallel so that each thread carries out several iterations (tasks). An example is given in Figure 15.8. The programmer can influence the work sharing by specifying parameters such as schedule(static) or schedule(dynamic). Static scheduling means that each thread gets an about equal-sized block of consecutive iterations. Dynamic scheduling means that first each thread is assigned one iteration, and then, repeatedly, a thread that has finished an iteration gets the next one, as in the master/slave paradigm described before for MPI. Different from master/slave, the compiler decides which thread carries out which tasks, and inserts the necessary communications.

  • Task-parallel sections: The directive #pragma omp parallel sections allows the programmer to specify a list of tasks that are assigned to the available threads. Both the parallel loop and the sections form are sketched below.
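
As a hedged sketch of these two work-sharing forms (array sizes and variable names are ours), the following C fragment uses a parallel loop with a schedule clause and a pair of task-parallel sections.

#include <omp.h>
#include <stdio.h>

#define N 1000

int main(void)
{
    static double a[N], b[N];
    int i;

    /* parallel loop: iterations are distributed among the threads */
    #pragma omp parallel for schedule(dynamic)
    for (i = 0; i < N; i++)
        a[i] = 2.0 * i;

    /* task-parallel sections: each section is carried out by one thread */
    #pragma omp parallel sections
    {
        #pragma omp section
        {
            int j;
            for (j = 0; j < N; j++) b[j] = a[j] + 1.0;
        }
        #pragma omp section
        {
            printf("running with %d threads\n", omp_get_num_threads());
        }
    }
    return 0;
}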

Threads communicate through shared memory, that is, they write to or read from shared variables. Only some of the variables are shared, while others are private to a particular thread. Whether a variable is private or shared is determined by default rules that the programmer can override.

Figure 15.8.  Matrix-vector multiply in OpenMP using a parallel loop.



Many OpenMP directives deal with the synchronisation that is necessary for mutual exclusion and for providing a consistent view of shared memory. Some synchronisations are inserted implicitly by the compiler. For instance, at the end of a parallel loop all threads wait for each other before proceeding with the next loop.

15.3.3 Other programming models

While MPI and OpenMP are the most popular models, other approaches have practical importance as well. Here, we outline threads programming, High Performance Fortran, and automatic parallelisation.

Like OpenMP, threads programming with POSIX threads (pthreads) or Java threads uses shared memory. Threads operate on a lower abstraction level than OpenMP in that the programmer is responsible for all details of thread management and work sharing. In particular, threads are created explicitly, one at a time, and each thread is assigned a function to be carried out. Threads programming focuses on task parallelism, whereas OpenMP programming focuses on data parallelism. Thread programs may be unstructured, that is, any thread may create and stop any other. OpenMP programs are often compiled into thread programs.
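
A minimal hedged sketch of this lower abstraction level (function and variable names are ours): each POSIX thread is created individually, assigned the same start function, and joined by its creator.

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

/* the function assigned to each thread */
static void *worker(void *arg)
{
    long id = (long) arg;
    printf("thread %ld working\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    long i;

    /* threads are created explicitly, one at a time */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *) i);

    /* the creating thread waits for all workers to finish */
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);

    return 0;
}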

Data parallelism provides for a different programming style that is explicitly supported by languages such as High Performance Fortran (HPF). While data parallelism can be expressed in MPI, OpenMP etc., data-parallel languages center on this approach. As one of its major constructs, HPF has a parallel loop whose iterations are carried out independently, that is, without communication. The data-parallel style makes programs easier to understand since there is no need to take care of concurrent activities. On the downside, it may be difficult to force applications into this structure. HPF is targeted at distributed memory architectures with a single (global) address space, and much of the language deals with expressing data distributions. Whereas MPI programmers distribute data by explicitly sending them to the right place, HPF programmers specify the data distribution on a similar level of abstraction as OpenMP programmers specify the scheduling of parallel loops. Details are left to the compiler. An important concept of HPF is the owner-computes rule, according to which the owner of the left-hand side variable of an assignment carries out the operation. Thus, data distribution implies the distribution of computations.

Especially for programs from scientific computing, a significant performance potential comes from parallelising loops. This parallelisation can often be accomplished automatically by parallelising compilers. In particular, these compilers check for data dependences that prevent parallelisation. Many programs can be restructured to circumvent such dependences, for instance by exchanging outer and inner loops. Parallelising compilers find these restructurings for important classes of programs.

Exercises

15.3-1 Sketch an MPI program for the prime number problem of Exercise 15.2-3. The program should deploy the master/slave paradigm. Does your program use SPMD style or MPMD style?

15.3-2 Modify your program from Exercise 15.3-1 so that it uses collective communication.

15.3-3 Compare MPI and OpenMP with respect to programmability, that is, give arguments why or to which extent it is easier to program in either MPI or OpenMP.

15.3-4 Sketch an OpenMP program that implements the stencil code example of Exercise 15.2-3.

15.4 Computational models

15.4.1 PRAM

The most popular computational model is the Parallel Random Access Machine (PRAM) which is a natural generalisation of the Random Access Machine (RAM).

The PRAM model consists of synchronised processors, a shared memory with memory cells, and the local memories of the processors. Figure 15.9 shows the processors and the shared random access memory.

There are variants of this model. They differ in whether multiple processors are allowed to access the same memory cell in a step, and in how the resulting conflicts are resolved. In particular the following variants are distinguished:

Figure 15.9.  Parallel random access machine.



On the basis of the properties of the read/write operations, the following types are distinguished:

  • EREW (Exclusive-Read Exclusive-Write) PRAM,

  • ERCW (Exclusive-Read Concurrent-Write) PRAM,

  • CREW (Concurrent-Read Exclusive-Write) PRAM,

  • CRCW (Concurrent-Read Concurrent-Write) PRAM.

Figure 15.10(a) shows the case when at most one processor has access to a memory cell (ER), and Figure 15.10(d) shows the case when multiple processors have access to the same cell (CW).

Figure 15.10.  Types of parallel random access machines.



The types of concurrent writing are common, priority, arbitrary and combined.

15.4.2 BSP, LogP and QSM

Here we consider the models BSP, LogP and QSM.

Bulk-synchronous Parallel Model (BSP) describes a computer as a collection of nodes, each consisting of a processor and memory. BSP supposes the existence of a router and a barrier synchronisation facility. The router transfers messages between the nodes, the barrier synchronises all or a subset of nodes. According to BSP, computation is partitioned into supersteps. In a superstep each processor independently performs computations on data in its own memory, and initiates communications with other processors. The communication is guaranteed to complete by the beginning of the next superstep.

The parameter g is defined such that gh is the time it takes to route an h-relation under continuous traffic conditions. An h-relation is a communication pattern in which each processor sends and receives up to h messages.

The cost of a superstep is determined as w + hg + l, where w is the maximum amount of local computation of any processor and h is the maximum number of communications initiated by any processor. The cost of a program is the sum of the costs of the individual supersteps.

BSP contains a cost model that involves three parameters: the number of processors p, the cost l of a barrier synchronisation, and a characterisation g of the available bandwidth.

The LogP model was motivated by inaccuracies of BSP and by the restrictive requirement to follow the superstep structure. Its name lists its four parameters: the latency L, the overhead o, the gap g and the number of processors P.

While LogP improves on BSP with respect to reflectivity, QSM improves on it with respect to simplicity. In contrast to BSP, QSM is a shared-memory model. As in BSP, the computation is structured into supersteps, and each processor has its own local memory. In a superstep, a processor performs computations on values in the local memory, and initiates read/write operations to the shared memory. All shared-memory accesses complete by the beginning of the next superstep. QSM allows for concurrent reads and writes. Let the maximum number of accesses to any cell in a superstep be k. Then QSM charges costs max(w, g·h, k) for the superstep, with w, h and g defined as in BSP.

15.4.3 Mesh, hypercube and butterfly

The mesh is also a popular computational model. A d-dimensional mesh is a d-dimensional grid having a processor at each grid point. The edges are the communication lines, which work in both directions. The processors are labelled by d-tuples.

Each processor is a RAM having its own local memory. Each processor can execute in one step such basic operations as addition, subtraction, multiplication, division, comparison, reading from and writing into the local memory, etc. The processors work in a synchronised way, according to a global clock.

The simplest mesh is the chain, belonging to the value d = 1. Figure 15.11 shows a chain consisting of 6 processors.

Figure 15.11.  A chain consisting of six processors.



The processors of a chain are P_1, P_2, ..., P_p. P_1 is connected with P_2, P_p is connected with P_{p-1}, and every other processor P_i is connected with P_{i-1} and P_{i+1}.

If d = 2, then we get a rectangle. If in addition the two side lengths are equal, then we get a square. Figure 15.12 shows a square of size 4×4.

Figure 15.12.  A square of size 4×4.



A square contains several chains of processors. The processors having identical first index form a row of processors, and the processors having the same second index form a column of processors. Algorithms running on a square often consist of operations executed only by the processors of some rows or columns.

If d = 3, then the corresponding mesh is a brick. In the special case when all side lengths are equal, the mesh is called a cube. Figure 15.13 shows a cube of size 2×2×2.

Figure 15.13.  A 3-dimensional cube of size 2×2×2.



The next model of computation is the d-dimensional hypercube \mathcal{H}_d. This model can be considered as a generalisation of the square and the cube: the square represented in Figure 15.12 is a 2-dimensional hypercube, and the cube represented in Figure 15.13 is a 3-dimensional hypercube. The processors of \mathcal{H}_d can be labelled by binary numbers consisting of d bits. Two processors of \mathcal{H}_d are connected iff the Hamming distance of their labels equals 1. Therefore each processor of \mathcal{H}_d has d neighbours, and the diameter of \mathcal{H}_d is d. Figure 15.14 represents \mathcal{H}_4.
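
The Hamming-distance-1 connectivity can be made concrete with a small sketch of our own (labels and function names are illustrative, not from the book): the neighbours of a processor are obtained by flipping each of the d bits of its label.

#include <stdio.h>

/* print the labels of the neighbours of processor `label` in a
   d-dimensional hypercube: flip each of the d bits in turn */
static void hypercube_neighbours(unsigned label, int d)
{
    int bit;
    for (bit = 0; bit < d; bit++)
        printf("neighbour in dimension %d: %u\n", bit, label ^ (1u << bit));
}

int main(void)
{
    hypercube_neighbours(5u, 4);   /* processor 0101 of the hypercube H_4 */
    return 0;
}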

Figure 15.14.  A 4-dimensional hypercube \mathcal{H}_{4}.



The butterfly model consists of processors and edges arranged into columns and levels. The processors can be labelled by a pair (i, j), where i is the column index and j is the level of the given processor. Figure 15.15 shows a butterfly model containing 32 processors in 8 columns and 4 levels.

Figure 15.15.  A butterfly model.



Finally Figure 15.16 shows a ring containing 6 processors.

Figure 15.16.  A ring consisting of 6 processors.


15.5 Performance in theory

In the previous section we considered the performance measures used in practice.

In the theoretical investigations the algorithms are tested using abstract computers called computation models.

The required quantity of resources can be characterised using absolute and relative measures.

Let , resp. denote the time necessary in worst case to solve the problem of size by the sequential algorithm A, resp. parallel algorithm P (using processors).

In a similar way let , resp. the time necessary for algorithm A, resp. P in best case to solve the problem of size (algorithm P can use processors).

Let , resp. the time needed by any sequential, resp. parallel algorithm to solve problem of size (algorithm P can use processors). These times represent a lower bound of the corresponding running time.

Let suppose the distribution function of the problem of size is given. Then let , resp. the expected value of the time necessary for algorithm A, resp. P to solve problem of size (algorithm P uses processors).

In the analysis it is often supposed that input data of equal size have equal probability. For such cases we use the corresponding notation and the term average running time.

The value of the performance measures and depend on the used computation model too. For the simplicity of notations we suppose that the algorithms determine the computation model.

Usually the context shows in a unique way the investigated problem. If so, then the parameter is omitted.

Among these performance measures the following inequalities hold:

In a similar way for the characteristic data of the parallel algorithms the following inequalities are true:

For the expected running time we have

and

These notations can be used not only for the running time but also for any other resource, such as memory requirement, number of messages, etc.

Now we define some relative performance measures.

Speedup shows how many times smaller the running time of a parallel algorithm is than the running time of a sequential algorithm solving the same problem.

The speedup (or relative number of steps or relative speed) of a given parallel algorithm P, comparing it with a given sequential algorithm A, is defined as

If for a sequential algorithm A and a parallel algorithm P holds

then the speedup of P comparing with A is linear, if

then the speedup of P comparing with A is sublinear, and if

then the speedup of P comparing with A is superlinear.

In the case of parallel algorithms, a very important performance measure is the work, defined as the product of the running time and the number of processors used:

This definition is used even if some processors work only during a small fraction of the running time. Therefore the real work can be much smaller than the value given by formula (15.15).

The efficiency is a measure of the fraction of time for which the processors are usefully employed; it is defined as the ratio of the work of the sequential algorithm to the work of the parallel algorithm P:

One can observe that the ratio of the speedup to the number of processors used gives the same value. If the parallel work is not smaller than the sequential work, then the efficiency is between zero and one, and relatively large values are beneficial.
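
In symbols, writing T_seq(n) for the running time of the sequential algorithm A and T_par(n, p) for the running time of the parallel algorithm P on p processors (this notation is ours, introduced only for the summary below), the three relative measures read:

\[
s(n,p)=\frac{T_{\mathrm{seq}}(n)}{T_{\mathrm{par}}(n,p)},\qquad
W(n,p)=p\cdot T_{\mathrm{par}}(n,p),\qquad
E(n,p)=\frac{T_{\mathrm{seq}}(n)}{p\cdot T_{\mathrm{par}}(n,p)}=\frac{s(n,p)}{p}.
\]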

In connection with the analysis of the parallel algorithms the work-efficiency is a central concept. If for a parallel algorithm P and sequential algorithm A holds

then algorithm P is work-optimal compared with A.

This definition is equivalent to the equality

According to this definition a parallel algorithm is work-optimal only if the order of its total work is not greater than the order of the total work of the considered sequential algorithm.

A weaker requirement is the following. If there exists a finite positive integer such that

then algorithm P is work-efficient compared with A.

If a sequential algorithm A, resp. a parallel algorithm P uses only , resp. units of a given resource, then A, resp. P is called—for the given resource and the considered model of computation—asymptotically optimal.

If an A sequential or a P parallel algorithm uses only the necessary amount of some resource for all possible size of the input, that is , resp. units, and so we have

for A and

for P, then we say that the given algorithm is absolute optimal for the given resource and the given computation model. In this case we say that this amount is the accurate complexity of the given problem.

Comparing two algorithms and having

we say that the growth rates of the running times of the two algorithms asymptotically have the same order.

Comparing the running times of two algorithms A and B (e.g. in the worst case), the estimation sometimes depends on the size of the input: for some values algorithm A is better, while for other values algorithm B is better. A possible formal definition is as follows. If the two functions are defined for all positive integers, and for some positive integer the following hold:

  1. ;

  2. ,

then this number is called the crossover point of the two functions.

For example, comparing the multiplication of two matrices according to the definition with the algorithm of Strassen, we get one crossover point, whose value is about 20.

Exercises

15.5-1 Suppose that the parallel algorithms P and Q solve the selection problem. Algorithm P uses processors and its running time is . Algorithm Q uses processors and its running time is . Determine the work, speedup and efficiency for both algorithms. Are these algorithms work-optimal or at least work-efficient?

15.5-2 Analyse the following two assertions.

a) Running time of algorithm P is at least .

b) Since the running time of algorithm P is , and the running time of algorithm B is , algorithm B is more efficient.

15.5-3 Extend the definition of the crossover point to noninteger values and parallel algorithms.

15.6 PRAM algorithms

In this section we consider parallel algorithms solving simple problems such as prefix computation, ranking of the elements of an array, merging, selection and sorting.

In the analysis of the algorithms we try to give the accurate order of the worst-case running time and to decide whether the presented algorithm is work-optimal, or at least work-efficient. When parallel algorithms are compared with sequential algorithms, the best known sequential algorithm is always chosen.

To describe these algorithms we use the following pseudocode conventions.

         IN PARALLEL FOR  TO 
          DO 
             
             .
             .
             .
             

For PRAM ordered into a square grid of size the instruction begin with

         IN PARALLEL FOR  TO ,  TO 
             DO

For a -dimensional mesh of size the similar instruction begins with

         IN PARALLEL FOR  TO  TO 
             DO

It is allowed that in this commands represents a group of processors.

15.6.1 Prefix

Let a binary associative operator be defined over a set. We suppose that the operation involves only one set and that the set is closed under this operation.

A binary operation is associative on a set, if for all holds

Let the elements of the sequence be elements of the set . Then the input data are the elements of the sequence , and the prefix problem is the computation of the elements . These elements are called prefixes.

It is worth remarking that in other areas of parallel computing the initial subsequences of a sequence are called prefixes.

Example 15.1 Associative operations. If is the set of integer numbers, means addition and the sequence of the input data is , then the sequence of the prefixes is . If the alphabet and the input data are the same, but the operation is the multiplication, then . If the operation is the minimum (it is also an associative operation), then . In this case the last prefix is the minimum of the input data.

The prefix problem can be solved by sequential algorithms in time. Any sequential algorithm A requires time to solve the prefix problem. There are parallel algorithms for different models of computation that result in a work-optimal solution of the prefix problem.

In this subsection at first the algorithm CREW-Prefix is introduced, which solves the prefix problem in time, using CREW PRAM processors.

Next is algorithm EREW-Prefix, having similar quantitative characteristics, but requiring only EREW PRAM processors.

These algorithms solve the prefix problem more quickly than the sequential algorithms, but the order of the necessary work is larger.

Therefore the algorithm Optimal-Prefix is of particular interest: it uses fewer CREW PRAM processors, makes only logarithmically many steps, and its work is linear, so it is work-optimal. Its speedup is of the order of the number of processors used.

For the sake of simplicity in the further we write usually instead of .

A CREW PRAM algorithm.

As the first parallel algorithm, a recursive algorithm is presented which runs on the CREW PRAM model of computation. When designing parallel algorithms, the divide-and-conquer principle is often used, as we will see in the case of the next algorithm too.

The input is the number of processors and the input array; the output is the array of prefixes. We suppose that the number of elements is a power of 2. Since we always use the algorithms with the same number of processors, we omit the number of processors from the list of input parameters. In the mathematical descriptions we prefer to consider the input and output as sequences, while in the pseudocodes sometimes as arrays.

CREW-Prefix()

  1  IF  
  2    THEN  
  3       RETURN  
  4  IF  
  5    THEN  IN PARALLEL FOR  TO  
             DO    compute recursive ,
                the prefixes, belonging to 
           IN PARALLEL FOR  TO 
             DO compute recursive 
                the prefixes, belonging to 
  6     IN PARALLEL FOR  
             DO read  from the global memory and compute 
  7  RETURN  

Example 15.2 Calculation of prefixes of 8 elements on 8 processors. Let and . The input data of the prefix calculation are 12, 3, 6, 8, 11, 4, 5 and 7, the associative operation is the addition.

The run of the recursive algorithm consists of rounds. In the first round (step 4) the first four processors get the input data 12, 3, 6, 8, and compute recursively the prefixes 12, 15, 21, 29 as output. At the same time the other four processors get the input data 11, 4, 5, 7, and compute the prefixes 11, 15, 20, 27.

According to the recursive structure and work as follows. and get and , resp. and get and as input. Recursivity mean for and , that gets and gets , computing at first and , then updates . After this computes and .

While and , according to step 4, compute the final values and , and compute the local provisional values of and .

In the second round (step 5) the first four processors are idle, while the second four processors compute the final values, adding 29 to the provisional values 11, 15, 20 and 27 and obtaining 40, 44, 49 and 56.

In the remaining part of the section we use the notation instead of and give the number of used processors in verbal form. If , then we usually prefer to use .

Theorem 15.1 Algorithm CREW-Prefix uses time on p CREW PRAM processors to compute the prefixes of p elements.

Proof. The lines 4–6 require steps, the line 7 does steps. So we get the following recurrence:

Solution of this recursive equation is .

CREW-Prefix is not work-optimal, since its work is larger and we know a sequential algorithm requiring only linear time, but it is work-efficient, since all sequential prefix algorithms require linear time.

An EREW PRAM algorithm.

In the following algorithm we use exclusive memory accesses instead of concurrent ones, therefore it can be implemented on the EREW PRAM model. Its input is the number of processors and the input sequence, and its output is the sequence containing the prefixes.

EREW-Prefix()

  1   
  2   IN PARALLEL FOR  TO  
  3    DO  
  4   
  5  WHILE  
  6    DO  IN PARALLEL FOR  TO  
  7       DO  
  8            
  9  RETURN  

Theorem 15.2 Algorithm EREW-Prefix computes the prefixes of elements on EREW PRAM processors in time.

Proof. The commands in lines 1–3 and 9 are executed in constant time. Lines 4–7 are executed as many times as the assignment in line 8, that is, logarithmically many times.
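
The doubling idea behind EREW-Prefix can be sketched as follows (a hedged illustration of ours: OpenMP threads stand in for the PRAM processors, and a temporary array emulates the synchronous steps). In round d every position at distance at least d from the left end adds the value d positions to its left; after logarithmically many rounds each position holds its prefix.

#include <stdio.h>

#define P 8

int main(void)
{
    int x[P] = {12, 3, 6, 8, 11, 4, 5, 7};   /* input data of Example 15.2 */
    int y[P], i, d;

    for (i = 0; i < P; i++) y[i] = x[i];

    /* doubling: in round d, position i >= d adds the value d places left */
    for (d = 1; d < P; d *= 2) {
        int tmp[P];
        #pragma omp parallel for
        for (i = 0; i < P; i++)
            tmp[i] = (i >= d) ? y[i - d] + y[i] : y[i];
        for (i = 0; i < P; i++) y[i] = tmp[i];
    }

    for (i = 0; i < P; i++) printf("%d ", y[i]);   /* 12 15 21 29 40 44 49 56 */
    printf("\n");
    return 0;
}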

A work-optimal algorithm.

Next we consider a recursive work-optimal algorithm, which uses CREW PRAM processors. Input is the length of the input sequence and the sequence , output is the sequence , containing the computed prefixes.

Optimal-Prefix()

  1   IN PARALLEL FOR  TO  
  2    DO compute recursive , 
             the prefixes of the following  input data
             
  3   IN PARALLEL FOR  TO  
  4    DO using CREW-Prefix compute , 
             the prefixes of the following  elements:
             
  5   IN PARALLEL FOR  TO  
  6    DO FOR  TO  
  7          DO  
  8   FOR  TO  
  9    DO  
 10  RETURN  

This algorithm runs in logarithmic time. The following two formulas help to show it:

and

where summing goes using the corresponding associative operation.

Theorem 15.3 (parallel prefix computation in time) Algorithm Optimal-Prefix computes the prefixes of elements on CREW PRAM processors in time.

Proof. Line 1 runs in time, line 2 runs time, line 3 runs time.

This theorem implies that the work of Optimal-Prefix is linear, therefore Optimal-Prefix is a work-optimal algorithm.
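
A hedged sketch of the block-wise idea behind a work-optimal prefix algorithm (our own variable names; the number of blocks plays the role of the reduced processor count): each block first computes its local prefixes, then the block totals are prefix-summed, and finally every block is shifted by the total of the preceding blocks.

#include <stdio.h>

#define N      16
#define BLOCKS  4                 /* plays the role of the reduced processor count */

int main(void)
{
    int x[N], block_sum[BLOCKS], i, b, len = N / BLOCKS;

    for (i = 0; i < N; i++) x[i] = i + 1;          /* sample input 1..16 */

    /* phase 1: each block computes its local prefixes independently */
    #pragma omp parallel for
    for (b = 0; b < BLOCKS; b++) {
        int j;
        for (j = b * len + 1; j < (b + 1) * len; j++)
            x[j] += x[j - 1];
        block_sum[b] = x[(b + 1) * len - 1];
    }

    /* phase 2: prefix over the block totals (done sequentially here) */
    for (b = 1; b < BLOCKS; b++)
        block_sum[b] += block_sum[b - 1];

    /* phase 3: each block adds the total of all preceding blocks */
    #pragma omp parallel for
    for (b = 1; b < BLOCKS; b++) {
        int j, offset = block_sum[b - 1];
        for (j = b * len; j < (b + 1) * len; j++)
            x[j] += offset;
    }

    for (i = 0; i < N; i++) printf("%d ", x[i]);    /* 1 3 6 10 ... 136 */
    printf("\n");
    return 0;
}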

Figure 15.17.  Computation of prefixes of 16 elements using Optimal-Prefix.



Let the elements of the sequence be the elements of the alphabet . Then the input data of the prefix computation are the elements of the sequence , and the prefix problem is the computation of the elements . These computable elements are called prefixes.

We remark that in some books on parallel programming the elements of the sequence are often called prefixes.

Example 15.3 Associative operations. If is the set of integers, denotes the addition and the sequence of the input data is 3, -5, 8, 2, 5, 4, then the prefixes are 3, -2, 6, 8, 13, 17. If the alphabet and the input data are the same, the operation is the multiplication, then the output data (prefixes) are 3, -15, -120, -240, -1200, -4800. If the operation is the minimum (it is also associative), then the prefixes are 3, -5, -5, -5, -5, -5. The last prefix equals to the smallest input data.

The prefix problem can be solved sequentially in linear time. Any sequential algorithm A needs at least linear time. There exist work-efficient parallel algorithms solving the prefix problem.

Our first parallel algorithm is CREW-Prefix, which uses CREW PRAM processors and requires logarithmic time. Then we continue with algorithm EREW-Prefix, having similar quantitative characteristics, but running on the EREW PRAM model too.

These algorithms solve the prefix problem more quickly than the sequential algorithms, but the order of their work is larger.

Algorithm Optimal-Prefix requires fewer CREW PRAM processors and, in spite of the reduced number of processors, requires only logarithmic time. So its work is linear, its efficiency is constant, and it is work-optimal.

15.6.2 Ranking

The input of the list ranking problem is a list represented by an array : each element contains the index of its right neighbour (and maybe further data). The task is to determine the rank of the elements. The rank is defined as the number of the right neighbours of the given element.

Since the further data are not necessary to find the solution, for simplicity we suppose that the elements of the array contain only the index of the right neighbour. This index is called the pointer. The pointer of the rightmost element equals zero.

Example 15.4 Input of list ranking. Let the array be the one represented in the first row of Figure 15.18; each entry names the right neighbour of the corresponding element. The last element has rank 0. The element preceding it in the list has rank 1, since only one element is to the right of it; an element has rank 4 if four elements are to the right of it. The second row of Figure 15.18 shows the elements in decreasing order of their ranks.

Figure 15.18.  Input data of array ranking and the result of the ranking.



The list ranking problem can be solved in linear time using a sequential algorithm. At first we determine the head of the list, which is the unique element to which no pointer points. In our case the head is the element that no other element points to. The head of the list has the largest rank, its right neighbour the next largest one, and finally the rank of the last element is zero.

In this subsection we present a deterministic list ranking algorithm which uses EREW PRAM processors and runs in logarithmic time in the worst case. The pseudocode of algorithm Det-Ranking is as follows.

The input of the algorithm is the number of elements to be ranked and the array containing the index of the right neighbour of each element; the output is the array containing the computed ranks.

Det-Ranking()

  1  IN PARALLEL FOR  TO  
  2    DO IF  
  3       THEN  
  4       ELSE  
  5  FOR  TO  
  6    DO  IN PARALLEL FOR  TO  
  7       DO IF  
  8          THEN  
  9              
 10  RETURN  

The basic idea behind the algorithm Det-Ranking is pointer jumping. According to this algorithm, at the beginning each element contains the index of its right neighbour and, accordingly, its provisional rank equals 1 (with the exception of the last element of the list, whose rank equals zero). This initial state is represented in the first row of Figure 15.19.

Figure 15.19.  Work of algorithm Det-Ranking on the data of Example 15.4.



Then the algorithm modifies the pointers so that each element points to the right neighbour of its right neighbour (if it exists, otherwise to the end of the list). This state is represented in the second row of Figure 15.19.

If we have one processor per element, then this can be done in constant time. After this, each element (with the exception of the last one) points to the element whose original distance was two. In the next step of the pointer jumping the elements will point to the element whose original distance was 4 (if there is no such element, then to the last one), as shown in the third row of Figure 15.19.

In the next step the pointer part of the elements points to the neighbour at distance 8 (or to the last element, if there is no element at distance 8), according to the last row of Figure 15.19.

In each step of the algorithm each element updates its information on the number of elements between itself and the element pointed to by its pointer. The initial value of the rank field is 1 for the majority of the elements, but it is 0 for the rightmost element (first line of Figure 15.19). During the pointer jumping an element whose pointer does not yet point to the end of the list gets as its new rank the sum of its own rank and the rank of the element it points to, and afterwards its pointer is redirected to the pointer of that element. For example, in the second row of Figure 15.19 a rank of 2 appears because the previous rank was 1 and the rank of the right neighbour was also 1, and the corresponding pointer now points to the right neighbour of the original right neighbour.

Theorem 15.4 Algorithm Det-Ranking computes the ranks of an array consisting of elements on EREW PRAM processors in time.

Since the work of Det-Ranking is , this algorithm is not work-optimal, but it is work-efficient.

The list ranking problem corresponds to a list prefix problem where each element equals 1, except the last element of the list, which equals 0. One can easily modify Det-Ranking to get a prefix algorithm.
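
The pointer jumping technique can be sketched as follows (a hedged illustration of ours; the example list and the double buffering that emulates the synchronous PRAM steps are not taken from the book).

#include <stdio.h>

#define N 6

int main(void)
{
    /* next[i] = index of the right neighbour of element i; 0 marks the end.
       Index 0 itself is unused so that 0 can serve as the null pointer.
       Example list order: 6 -> 4 -> 1 -> 3 -> 2 -> 5 */
    int next[N + 1] = {0, 3, 5, 2, 1, 0, 4};
    int rank[N + 1], nnext[N + 1], nrank[N + 1], i, round;

    for (i = 1; i <= N; i++)
        rank[i] = (next[i] != 0) ? 1 : 0;

    /* pointer jumping: in each round every element looks two steps ahead */
    for (round = 0; (1 << round) < N; round++) {
        for (i = 1; i <= N; i++) {
            if (next[i] != 0) {
                nrank[i] = rank[i] + rank[next[i]];
                nnext[i] = next[next[i]];
            } else {
                nrank[i] = rank[i];
                nnext[i] = next[i];
            }
        }
        for (i = 1; i <= N; i++) { rank[i] = nrank[i]; next[i] = nnext[i]; }
    }

    for (i = 1; i <= N; i++) printf("rank[%d] = %d\n", i, rank[i]);
    return 0;
}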

15.6.3 Merge

The input of the merging problem is two sorted sequences and and the output is one sorted sequence containing the elements of the input.

If the length of the input sequences is given, then the merging problem can be solved in linear time using a sequential processor. Since we have to investigate all elements and write them into the corresponding cells of the output, the running time of any algorithm is at least linear. We get this lower bound even if we count only the number of necessary comparisons.

Merge in logarithmic time.

Let the two input sequences be given. For the sake of simplicity let their length be a power of two and let the elements be different.

To merge two sequences it is enough to know the ranks of the keys, since then we can write the keys—using one processor per key—into the corresponding memory locations with one parallel write operation. The running time of the following algorithm is logarithmic, therefore it is called Logarithmic-Merge.

Theorem 15.5 Algorithm Logarithmic-Merge merges two sequences of length on CREW PRAM processors in time.

Proof. Let the rank of an element within its own sequence be its index there. If we assign a single processor to the element, then this processor can determine, using binary search, the number of elements of the other sequence that are smaller than the given element. When this number is known, the processor computes the rank of the element in the union of the two sequences as the sum of the two values. If the element belongs to the other sequence, the method is the same.

Summarising the time requirements we get that, using one CREW PRAM processor per element, the running time is dominated by the binary searches and is therefore logarithmic.

This algorithm is not work-optimal, only work-efficient.
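
A hedged sketch of the rank computation behind Logarithmic-Merge (names and the sequential loop that replaces the per-element processors are ours): the output position of every key is its own index plus the number of smaller keys in the other sequence, found by binary search.

#include <stdio.h>

#define N 4

/* number of elements of b[0..n-1] that are smaller than key (binary search) */
static int count_smaller(const int b[], int n, int key)
{
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (b[mid] < key) lo = mid + 1; else hi = mid;
    }
    return lo;
}

int main(void)
{
    int a[N] = {1, 5, 8, 11}, b[N] = {3, 9, 12, 18}, c[2 * N], i;

    /* one "processor" per element: compute its rank and write it directly */
    for (i = 0; i < N; i++) {
        c[i + count_smaller(b, N, a[i])] = a[i];
        c[i + count_smaller(a, N, b[i])] = b[i];
    }

    for (i = 0; i < 2 * N; i++) printf("%d ", c[i]);   /* 1 3 5 8 9 11 12 18 */
    printf("\n");
    return 0;
}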

Odd-even merging algorithm.

The following recursive algorithm Odd-Even-Merge follows the classical divide-and-conquer principle.

Let and be the two input sequences. We suppose that is a power of 2 and the elements of the arrays are different. The output of the algorithm is the sequence , containing the merged elements. This algorithm requires EREW PRAM processors.

Odd-Even-Merge()

  1  IF  
  2    THEN get  by merging  and  with one comparison 
  3       RETURN  
  4  IF  
  5    THEN  IN PARALLEL FOR  TO  
  6       DO merge recursively  and 
  7           to get  
  8     IN PARALLEL FOR  TO  
  9       DO merge recursively  and 
 10           to get  
 11     IN PARALLEL FOR  TO  
 12       DO  
 13           
 14          IF  
 15             THEN  
 16                 
 17                 
 18  RETURN  

Example 15.5 Merge of twice eight numbers. Let = 1, 5, 8, 11, 13, 16, 21, 26 and = 3, 9, 12, 18, 23, 27, 31, 65. Figure 15.20 shows the sort of 16 numbers.

At first the elements of the first input with odd indices and its elements with even indices form two subsequences, and in the same way we get two subsequences of the second input. Then comes the recursive merge of the two odd subsequences and the recursive merge of the two even subsequences.

After this Odd-Even-Merge shuffles and , resulting the sequence : the elements of with odd indices come from and the elements with even indices come from .

Finally we compare each element with even index with the next element, and if necessary (that is, if they are not in the right order) they are exchanged.

Figure 15.20.  Sorting of 16 numbers by algorithm Odd-Even-Merge.


Theorem 15.6 (merging in time) Algorithm Odd-Even-Merge merges two sequences of length elements in time using EREW PRAM processors.

Proof. Denote the running time of the algorithm by the usual symbol. Step 1 requires constant time, step 2 requires the time of the recursive calls plus the time of the final compare-exchange step. Therefore we get the recursive equation

having the solution .
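
The recursive structure of Odd-Even-Merge can be sketched sequentially as follows (a hedged illustration of ours; on an EREW PRAM the marked loops would be carried out in parallel).

#include <stdio.h>

/* merge the sorted arrays a[0..n-1] and b[0..n-1] into c[0..2n-1];
   n is assumed to be a power of two and all keys distinct */
static void odd_even_merge(const int a[], const int b[], int c[], int n)
{
    if (n == 1) {                                /* base case: one comparison */
        c[0] = (a[0] < b[0]) ? a[0] : b[0];
        c[1] = (a[0] < b[0]) ? b[0] : a[0];
        return;
    }
    {
        int ao[n / 2], ae[n / 2], bo[n / 2], be[n / 2], d[n], e[n], i;
        for (i = 0; i < n / 2; i++) {            /* split into odd/even parts */
            ao[i] = a[2 * i];  ae[i] = a[2 * i + 1];
            bo[i] = b[2 * i];  be[i] = b[2 * i + 1];
        }
        odd_even_merge(ao, bo, d, n / 2);        /* merge the odd parts       */
        odd_even_merge(ae, be, e, n / 2);        /* merge the even parts      */
        for (i = 0; i < n; i++) {                /* shuffle d and e           */
            c[2 * i] = d[i];
            c[2 * i + 1] = e[i];
        }
        for (i = 1; i < n; i++)                  /* final compare-exchange    */
            if (c[2 * i] < c[2 * i - 1]) {
                int t = c[2 * i - 1]; c[2 * i - 1] = c[2 * i]; c[2 * i] = t;
            }
    }
}

int main(void)
{
    int a[8] = {1, 5, 8, 11, 13, 16, 21, 26};
    int b[8] = {3, 9, 12, 18, 23, 27, 31, 65};   /* data of Example 15.5 */
    int c[16], i;

    odd_even_merge(a, b, c, 8);
    for (i = 0; i < 16; i++) printf("%d ", c[i]);
    printf("\n");
    return 0;
}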

We prove the correctness of this algorithm using the zero-one principle.

A comparison-based sorting algorithm is oblivious if the sequence of comparisons is fixed (the elements to be compared do not depend on the results of the earlier comparisons). This definition means that the sequence of the pairs of elements to be compared is given in advance.

Theorem 15.7 (zero-one principle) If an oblivious comparison-based sorting algorithm correctly sorts an arbitrary 0-1 sequence of length n, then it also correctly sorts any sequence of length n consisting of arbitrary keys.

Proof. Let A be a comparison-based oblivious sorting algorithm and consider a sequence of elements sorted incorrectly by A. Suppose that A sorts the elements in increasing order. Then the incorrectly sorted output contains an element at some position in spite of the fact that the input contains at least that many keys smaller than this element.

Let be the first (having the smallest index) such element of . Substitute in the input sequence the elements smaller than by 0's and the remaining elements by 1's. This modified sequence is a 0-1 sequence therefore A sorts it correctly. This observation implies that in the sorted 0-1 sequence at least 0's precede the 1, written on the place of .

Now colour the elements of the input sequence that are smaller than the chosen element red, and the remaining elements blue (in the original and in the transformed sequence too). We can show by induction that the coloured sequences are identical at the start and remain identical after each comparison. According to the colours we have three types of comparisons: blue-blue, red-red and blue-red. If the compared elements have the same colour, then in both sequences (whether they are exchanged or not) the colours remain unchanged. If we compare elements of different colours, then in both sequences the red element ends up at the position with the smaller index. So finally we get a contradiction, proving the assertion of the theorem.

Example 15.6 A non-comparison-based sorting algorithm. Let the input be a bit sequence. We can sort this sequence simply by counting the zeros: if we count a certain number of zeros, then we write down that many zeros followed by the remaining ones. Of course, this algorithm does not sort sequences of arbitrary keys. Since the algorithm is not comparison-based, this fact does not contradict the zero-one principle.

But merging is a special case of sorting, and Odd-Even-Merge is an oblivious sorting algorithm, so the zero-one principle can be applied to it.

Theorem 15.8 Algorithm Odd-Even-Merge sorts correctly sequences consisting of arbitrary numbers.

Proof. Let and sorted 0-1 sequences of length . Let the number of zeros at the beginning of . Then the number of zeros in equals to , while the number of zeros in is . Therefore the number of zeros in equals to and the number of zeros in equals to .

The difference of the two numbers of zeros is at most 2. This difference equals 2 exactly when the numbers of zeros in the two input sequences are both odd; otherwise the difference is at most 1. Suppose that the difference is 2 (the proof in the other cases is similar). In this case the merged odd subsequence contains two additional zeros. When the algorithm shuffles the two merged subsequences, the obtained sequence begins with a block of zeros, ends with a block of ones, and between them there is a short “dirty” part in which a 1 precedes a 0. After the comparison and exchange in the last step of the algorithm the whole sequence becomes sorted.

A work-optimal merge algorithm.

Algorithm Work-Optimal-Merge uses fewer processors, but still solves the merging problem in logarithmic time. This algorithm divides the original problem into parts so that each part contains only a small number of elements.

Let the two input sequences be given. Divide the first one into parts so that each part contains at most the prescribed number of elements, and consider the largest element of each part.

Assign a processor to each of these largest elements. These processors determine (by binary search) the correct place (according to the sorted order) of their element in the other sequence. These places divide the other sequence into parts (some of these parts can be empty). We call such a part the subset corresponding to the respective part of the first sequence (see Figure 15.21).

Figure 15.21.  A work-optimal merge algorithm Optimal-Merge.



The algorithm obtains the merged sequence by merging each part with its corresponding subset—the first part with the first subset, the second part with the second subset, and so on—and then concatenating these merged sequences.

Theorem 15.9 Algorithm Optimal-Merging merges two sorted sequences of length in time on CREW PRAM processors.

Proof. We use the previous algorithm.

The length of the parts is , but the length of the parts can be much larger. Therefore we repeat the partition. Let an arbitrary pair. If , then and can be merged using one processor in time. But if , then divide into parts—then each part contains at most keys. Assign a processor to each part. This assigned processor finds the subset corresponding to this subsequence in : time is sufficient to do this. So the merge of and can be reduced to subproblems, where each subproblem is the merge of two sequences of length.

The number of processors used is bounded accordingly, which is not larger than the number of processors available.

This theorem implies the following corollary.

Corollary 15.10 Optimal-Merging is work-optimal.

15.6.4 Selection

In the selection problem some elements and a positive integer are given and the corresponding smallest element is to be selected. Since selection requires the investigation of all elements, and our operations can handle at most two elements at a time, the running time of any sequential algorithm is at least linear.

Since a sequential algorithm A is known that requires only linear time, A is asymptotically optimal.

The search problem is similar: there the algorithm has to decide whether a given element appears in the given sequence, and if so, where. Here a negative answer is also possible, and the properties of an element alone decide whether it meets the requirements or not.

We investigate three special cases and work-efficient algorithms to solve them.

Selection in constant time using processors.

Suppose we wish to select the largest key. Algorithm Quadratic-Select solves this task in constant time using CRCW processors.

The input ( different keys) is the sequence , and the selected largest element is returned as .

Quadratic-Select()

  1  IF  
  2    THEN  
  3       RETURN  
  4   IN PARALLEL FOR  TO ,  TO  
          DO IF 
  5       THEN FALSE 
  6       ELSE TRUE 
  7   IN PARALLEL FOR  TO  
  8    DO TRUE 
  9   IN PARALLEL FOR  TO ,  TO  
 10    IF FALSE 
 11       THEN FALSE 
 12   IN PARALLEL FOR  TO  
 13    DO IF TRUE 
 14       THEN  
 15  RETURN  

In the first round (lines 4–6) the keys are compared in a parallel manner, using all the processors, so that each processor computes one logical value. We suppose that the keys are different. If the elements are not different, then instead of a key alone we can use the pair formed by the key and its index (this solution requires an additional number of bits). Since there is a unique key for which all comparisons result in FALSE, this unique key can be found with a logical OR operation in lines 7–11.

Theorem 15.11 (selection in time) Algorithm Quadratic-Select determines the largest key of different keys in time using CRCW common PRAM processors.

Proof. The first and third rounds require unit time, and the second round also requires constant time, so the total running time is constant.

The speedup of this algorithm is . The work of the algorithm is . So the efficiency is . It follows that this algorithm is not work-optimal; it is not even work-efficient.
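
A hedged sketch of the all-pairs comparison idea behind Quadratic-Select (names are ours; the inner loop serialises the comparisons and the concurrent writes that a CRCW PRAM would perform in one step):

#include <stdio.h>

#define N 8

int main(void)
{
    int a[N] = {12, 3, 6, 8, 11, 4, 5, 7};
    int is_max[N], i, j, y = 0;

    /* round 1: all pairs are compared; element i loses if some a[j] is larger */
    #pragma omp parallel for private(j)
    for (i = 0; i < N; i++) {
        is_max[i] = 1;
        for (j = 0; j < N; j++)
            if (a[j] > a[i])
                is_max[i] = 0;     /* plays the role of the concurrent write */
    }

    /* round 2: the unique survivor is the maximum */
    for (i = 0; i < N; i++)
        if (is_max[i])
            y = a[i];

    printf("maximum = %d\n", y);
    return 0;
}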

Selection in logarithmic time on processors.

Now we show that the maximal element among the keys can be found quickly even using only common CRCW PRAM processors. The technique used is divide-and-conquer. For simplicity, let the number of keys be a square number.

The input and the output are the same as at the previous algorithm.

Quick-Selection()

  1  IF  
  2    THEN  
  3       RETURN  
  4  IF  
  5    THEN divide the input into groups  and 
             divide the processors into groups 
  6   IN PARALLEL FOR  TO  
  7    DO recursively determines the maximal element  of the group  
  8  Quadratic-Select() 
  9  RETURN  

The algorithm divides the input into groups so that each group contains the same number of elements, and divides the processors into groups of corresponding size. Then each group of processors computes recursively the maximal element of its group of elements. Finally the previous algorithm Quadratic-Select gets as input the sequence of group maxima and finds the maximum y of the input sequence.

Theorem 15.12 (selection in time) Algorithm Quick-Select determines the largest of different elements in time using common CRCW PRAM processors.

Proof. Let the running time of the algorithm be denoted as usual. Step 1 requires the time of the recursive calls, step 2 requires constant time. Therefore the running time satisfies the recursive equation

having the solution .

The total work of algorithm Quick-Select is , so its efficiency is , therefore Quick-Select is not work-optimal; it is only work-efficient.

Selection from integer numbers.

If the problem is to find the maximum of keys that consist of a single bit each, then the problem can be solved using a logical OR operation, and so it requires only constant time using the given processors. Now we try to extend this observation. Let a positive integer constant be given, and suppose that the keys are integers from an interval whose size is polynomial in the number of keys. Then the keys can be represented using a logarithmic number of bits. For simplicity we suppose that all the keys are given as binary numbers of the same length.

The following algorithm Integer-Selection requires only constant time and CRCW PRAM processors to find the maximum.

The basic idea is to partition the bits of the numbers into a constant number of parts of equal length. Figure 15.22 shows the partition.

Figure 15.22.  Selection of maximal integer number.



The input of Integer-Selection is the number of processors and the sequence containing different integer numbers, and the output is the maximal number.

Integer-Selection()

  1  FOR  TO  
  2    DO compute the maximum  of the remaining numbers on the base of 
             their -th part
  3       delete the numbers whose -th part is smaller than  
  4  one of the remaining numbers 
  5  RETURN  

The algorithm starts by searching for the maximum on the basis of the first part of the numbers. Then it deletes the numbers whose first part is smaller than this maximum. This is repeated for the second, ..., last part of the numbers. Any of the numbers that remain is maximal.

Theorem 15.13 (selection from integer numbers) If the numbers are integers drawn from the interval , then algorithm Integer-Selection determines the largest number among numbers for any positive in time using CRCW PRAM processors.

Proof. Suppose that we start with the selection of the numbers whose most significant part is maximal. It is sure that the numbers whose first part is smaller than this maximum are not maximal, therefore they can be deleted. If we execute this basic operation for all parts, then exactly those numbers are deleted that are not maximal, and all maximal elements remain.

If a key contains only a few bits, then its value is correspondingly small. So algorithm Integer-Selection in its first step determines the maximum of integers taken from a small interval. The algorithm assigns a processor to each number and uses common memory locations that initially contain zeros. In one step each processor writes into the memory cell addressed by the value of its number. After that the maximum of all numbers can be determined from these memory cells using the processors, by Theorem 15.11, in constant time.

General selection.

Let the sequence contain different numbers and let the problem be to select the required smallest element of the sequence. Suppose we have CREW processors.

General-Selection()

  1  divide the  processors into  groups  so, that group  
       contains the processors  and divide
       the  elements into  groups  so, that group 
       contains the elements 
  2   IN PARALLEL FOR  TO  
  3    DO determine  (how many elements of  are smaller, than ) 
  4   IN PARALLEL FOR  TO  
  5    DO using Optimal-Prefix determine  
          (how many elements of  are smaller, than )
  6   IN PARALLEL FOR  TO  
  7    DO IF  
  8          THEN RETURN  

Theorem 15.14 (general selection) The algorithm General-Selection determines the -th smallest of different numbers in time using processors.

Proof. In lines 2–3 each processor group works as a sequential processor, therefore these lines require time proportional to the size of a group. Lines 4–5 require the time of the prefix computation according to Theorem 15.3. Lines 6–8 can be executed in constant time, so the total running time is as stated in the theorem.

The work of General-Selection is , therefore this algorithm is not work-efficient.

15.6.5 Sorting

Given a sequence, the sorting problem is to rearrange the elements of the sequence, e.g. into increasing order.

It is well known that any sequential comparison-based sorting algorithm needs Ω(n lg n) comparisons in the worst case, and there are comparison-based sorting algorithms with O(n lg n) running time.

There are also algorithms, using special operations or sorting numbers with special features, which solve the sorting problem in linear time. If we have to investigate all elements of the input and the permitted operations can handle at most 2 elements at a time, then the running time of any algorithm is at least linear. So it is true that there are asymptotically optimal sequential algorithms both among the comparison-based and among the non-comparison-based sorting algorithms. In this subsection we consider three different parallel sorting algorithms.

Sorting in logarithmic time using processors.

Using the ideas of algorithms Quadratic-Selection and Optimal-Prefix we can sort elements using processors in time.

Quadratic-Sort()

  1  IF  
  2    THEN  
  3       RETURN  
  4   IN PARALLEL FOR  TO ,  TO  
          DO IF 
  5       THEN  
  6       ELSE  
  7  divide the processors into  groups  so, that group  contains 
          processors 
  8   IN PARALLEL FOR  TO  
  9    DO compute  
 10   IN PARALLEL FOR  TO  
 11    DO  
 12  RETURN  

In lines 4–7 the algorithm compares all pairs of the elements (as Quadratic-Selection does), then in lines 7–9 (in a similar way as Optimal-Prefix works) it counts how many elements of the input are smaller than the investigated element, and finally in lines 10–12 one processor of each group writes the final result into the corresponding memory cell.

Theorem 15.15 (sorting in time) Algorithm Quadratic-Sort sorts elements using CRCW PRAM processors in time.

Proof. Lines 8–9 require time, and the remaining lines require only constant time.

Since the work of Quadratic-Sort is , this algorithm is not work-efficient.

Odd-even algorithm with running time.

The next algorithm uses the Odd-Even-Merge algorithm and the classical divide-and-conquer principle. The input is the sequence , containing the numbers to be sorted, and the output is the sequence , containing the sorted numbers.

Odd-Even-Sort()

  1  IF  
  2    THEN  
  3  IF  
  4    THEN let  and . 
  5     IN PARALLEL FOR  TO  
  6       DO sort recursively  to get  
  7     IN PARALLEL FOR  TO  
  8       DO sort recursively  to get  
  9     IN PARALLEL FOR  TO  
 10       DO merge  and  using Odd-Even-Merge() 
 11  RETURN  

The running time of this EREW PRAM algorithm is .

Theorem 15.16 (sorting in time) Algorithm Odd-Even-Sort sorts elements in time using EREW PRAM processors.

Proof. Lines 3–4 require constant time, lines 5–8 require the time of the two recursive calls, lines 9–10 require the time of Odd-Even-Merge, and line 11 requires constant time. Therefore the running time satisfies the recurrence

having the solution .

Example 15.7 Sorting on 16 processors. Sort using 16 processors the following numbers: 62, 19, 8, 5, 1, 13, 11, 16, 23, 31, 9, 3, 18, 12, 27, 34. At first we get the odd and even parts, then the first 8 processors gets the sequence , while the other 8 processors get . The output of the first 8 processors is , while the output of the second 8 processors is . The merged final result is .

The work of the algorithm is , its efficiency is , and its speedup is . The algorithm is not work-optimal, but it is work-efficient.

Algorithm of Preparata with running time.

If we have more processors, then the running time can be decreased. The following recursive algorithm due to Preparata uses CREW PRAM processors and time. Input is the sequence , and the output is the sequence containing the sorted elements.

Preparata()

  1  IF  
  2    THEN sort  using  processors and Odd-Even-Sort 
  3    RETURN  
  4  divide the  elements into  parts  so, that each part 
       contains  elements, and divide the processors into  groups
         so, that each group contains  processors
  5   IN PARALLEL FOR  TO  
  6    DO sort the part  recursively to get a sorted sequence  
  7       divide the processors into  groups  
          containing  processors
  8   IN PARALLEL FOR  TO  TO  
  9    DO merge  and  
 10  divide the processors into  groups  so, that each group 
       contains  processors
 11   IN PARALLEL FOR  TO  
 12    DO determine the ranks of the  element in  using the local ranks 
             received in line 9 and using the algorithm Optimal-Prefix
 13       the elements of  having a rank  
 14  RETURN  

This algorithm uses the divide-and-conquer principle. It divides the input into parts, then merges each pair of parts. This merging yields local ranks of the elements. The global rank of an element can be computed by summing up these local ranks.

Theorem 15.17 (sorting in time) Algorithm Preparata sorts elements in time using CREW PRAM processors.

Proof. Let the running time be . Lines 4–6 require time, lines 7–12 together . Therefore satisfies the equation

having the solution .

The work of Preparata is the same as the work of Odd-Even-Sort, but the speedup is better: . The efficiency of both algorithms is .

Exercises

15.6-1 The memory cell of the global memory contains some data. Design an algorithm, which copies this data to the memory cells in time, using EREW PRAM processors.

15.6-2 Design an algorithm which solves the previous Exercise 15.6-1 using fewer EREW PRAM processors while preserving the running time.

15.6-3 Design an algorithm having running time and determining the maximum of numbers using common CRCW PRAM processors.

15.6-4 Let be a sequence containing keys. Design an algorithm to determine the rank of any key using CREW PRAM processors and time.

15.6-5 Design an algorithm having constant running time which decides, using common CRCW PRAM processors, whether a given array contains the element 5, and if it does, gives the largest index for which this holds.

15.6-6 Design an algorithm to merge two sorted sequences of equal length in logarithmic time, using CREW PRAM processors.

15.6-7 Determine the running time, speedup, work, and efficiency of all algorithms discussed in this section.

15.7 Mesh algorithms

To illustrate another model of computation we present two algorithms solving the prefix problem on meshes.

15.7.1 Prefix on chain

Let us suppose that each processor of the chain stores one element in its local memory, and that after the parallel computation the corresponding prefix will be stored in the local memory of the same processor. At first we introduce a naive algorithm. Its input is the sequence of elements, and its output is the sequence containing the prefixes.

Chain-Prefix()

  1   sends  to  
  2   IN PARALLEL FOR  TO  
  3  FOR  TO  
  4    DO gets  from , then computes and stores  
             stores , and sends  to 
  5   gets  from , then computes and stores  

To tell the truth, this is not a real parallel algorithm.

Theorem 15.18 Algorithm Chain-Prefix determines the prefixes of p elements using a chain in time.

Proof. The cycle in lines 2–5 requires linear time, and the remaining lines require constant time.

Since the prefixes can be determined in linear time using a single sequential processor, while the work of the chain algorithm is substantially larger, Chain-Prefix is not work-efficient.
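
Chain-Prefix maps naturally onto message passing. The following hedged MPI sketch of ours lets each process hold one element, receive the running prefix from its left neighbour, and forward the updated value to its right neighbour, mirroring the pipeline of the pseudocode above.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, x, prefix;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    x = rank + 1;                 /* the element stored by this processor */
    prefix = x;

    if (rank > 0) {               /* receive the prefix of the left neighbour */
        int left;
        MPI_Recv(&left, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        prefix = left + x;
    }
    if (rank < size - 1)          /* pass the running prefix to the right */
        MPI_Send(&prefix, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);

    printf("process %d: prefix = %d\n", rank, prefix);

    MPI_Finalize();
    return 0;
}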

15.7.2 Prefix on square

An algorithm, similar to Chain-Prefix, can be developed for a square too.

Let us consider a square of the given size. We need an indexing of the processors. There are many different indexing schemes, but for the next algorithm Square-Prefix one of the simplest solutions, the row-major indexing scheme, is sufficient: the processors are numbered row by row.

The input and the output are the same, as in the case of Chain-Prefix.

The processors form the processor row and the processors form the processor column . The input stored by the processors of row is denoted by , and the similar output is denoted by .

The algorithm works in 3 rounds. In the first round (lines 1–8) the processor rows compute the row-local prefixes (working as the processors of Chain-Prefix do). In the second round (lines 9–17) one processor column computes the prefixes of the row results obtained in the first round, and the processors of this column send the computed prefixes to their neighbours. Finally, in the third round the rows determine the final prefixes.

Square-Prefix()

  1   IN PARALLEL FOR  TO  
  2    DO sends  to  
  3   IN PARALLEL FOR  TO  
  4    FOR  TO  
  5       DO gets  from , then computes and 
  6          stores , and sends  to  
  7   IN PARALLEL FOR  TO  
  8    DO gets  from , then computes and stores  
  9   sends  to  
 10   IN PARALLEL FOR  TO  
 11    FOR  TO  
 12       DO gets  from , then computes and stores 
                stores , and sends  to 
 13   gets  from , then computes and stores  
 14   IN PARALLEL FOR  TO  
 15    DO send  to  
 16   IN PARALLEL FOR  TO  
 17    DO sends  to  
 18   IN PARALLEL FOR  DOWNTO  
 19    FOR  TO  
 20       DO gets  from , then computes and 
 21          stores , and sends  to  
 22   IN PARALLEL FOR  TO  
 23    DO gets  from , then computes and stores  

Theorem 15.19 Algorithm Square-Prefix solves the prefix problem using a square of size with row-major indexing in time.

Proof. In the first round, lines 1–2 contain one parallel operation, lines 3–6 require operations, and line 8 again one operation, that is, altogether operations. Similarly, in the third round lines 18–23 require time units, and in the second round lines 9–17 require time units. The sum of the necessary time units is .

Example 15.8 Prefix computation on a square of size . Figure 15.23(a) shows 16 input elements. In the first round Square-Prefix computes the row-local prefixes; part (b) of the figure shows the results. Then in the second round only the processors of the fourth column work and determine the column-local prefixes – the results are in part (c) of the figure. Finally, in the third round the algorithm determines the final results, shown in part (d) of the figure.

Figure 15.23.  Prefix computation on square.

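The three rounds of Square-Prefix can be retraced with a short sequential simulation. The Python sketch below assumes a square of q x q processors stored in row-major order and addition as the prefix operation; the message pattern of rounds 2 and 3 is simplified, but the computed values match the algorithm, and all names are ours.

  # Sequential simulation of Square-Prefix on a q x q mesh (row-major order),
  # with addition as the associative operation (illustration only).
  def square_prefix(a, q):
      m = [a[i * q:(i + 1) * q] for i in range(q)]    # m[i][j]: memory of P(i,j)
      # Round 1: every row computes its row-local prefixes, as in Chain-Prefix.
      for i in range(q):
          for j in range(1, q):
              m[i][j] += m[i][j - 1]
      # Round 2: the last column accumulates the row sums; carry[i] is the sum
      # of all rows above row i, i.e. the value row i has to receive.
      carry = [0] * q
      for i in range(1, q):
          carry[i] = carry[i - 1] + m[i - 1][q - 1]
      # Round 3: every row adds the received carry to its local prefixes.
      for i in range(q):
          for j in range(q):
              m[i][j] += carry[i]
      return [x for row in m for x in row]

  print(square_prefix(list(range(1, 17)), 4))          # prefixes of 1..16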

 CHAPTER NOTES 

The basic sources of this chapter are, for architectures and models, the book of Leopold [221] and the book of Sima, Fountain and Kacsuk [304]; for parallel programming, the book due to Kumar et al. [141] and [221]; for parallel algorithms, the books of Berman and Paul [41] and of Cormen, Leiserson and Rivest [72], the book written by Horowitz, Sahni and Rajasekaran [167], the book [176], and the recent book due to Casanova, Legrand and Robert [58].

The website [324] contains the Top 500 list, a regularly updated survey of the most powerful computers worldwide; at the time of writing, 42% of the listed systems were clusters.

The described classifications of computers were proposed by Flynn [113] and Leopold [221]. Figures 15.1, 15.2, 15.3, 15.4, 15.5 and 15.7 are taken from the book of Leopold [221], and program 15.6 from the book written by Gropp et al. [145].

The clusters are characterised using the book of Pfister [273]; grids are presented on the basis of the book and manuscript of Foster and Kesselman [117], [118].

The problems of shared memory are dealt with in the book written by Hwang and Xu [172], in the book due to Kleiman, Shah, and Smaalders [199], and in the textbook of Tanenbaum and van Steen [315].

Details on concepts such as tasks, processes and threads can be found in many textbooks, e.g. in [303], [314]. The decomposition of tasks into smaller parts is analysed by Tanenbaum and van Steen [315].

The laws concerning speedup were described by Amdahl [15], Gustafson-Barsis [150] and Brent [47]. Kandemir, Ramanujam and Choudhary review the different methods of improving locality [188]. Wolfe [346] analyses in detail the connection between the transformation of the data and the program code. In connection with code optimisation, the book published by Kennedy and Allen [197] is a useful source.

The MPI programming model is presented following Gropp, Snir, Nitzberg, and Lusk [145], while the description of the OpenMP model is based on the paper due to Chandra, Dagum, Kohr, Maydan, McDonald and Menon [60], and further on a review found on the internet [258].

Lewis and Berg [222] discuss pthreads, while Oaks and Wong [257] discuss Java threads in detail. A description of High Performance Fortran can be found in the book by Koelbel et al. [204]. Among others, Wolfe [346] studied parallelising compilers.

The concept of PRAM is due to Fortune and Wyllie and has been known since 1978 [116]. BSP was proposed in 1990 by Valiant [334]. LogP was suggested as an alternative to BSP by Culler et al. in 1993 [78]. QSM was introduced in 1999 by Gibbons, Matias and Ramachandran [132].

The majority of the pseudocode conventions used in Section 15.6 and the description of crossover points and comparison of different methods of matrix multiplication can be found in [72].

The Readers interested in further programming models, such as skeletons, parallel functional programming, coordination languages and parallel mobile agents, can find a detailed description in [221]. Further problems and parallel algorithms are analysed in the books of Leighton [218], [219], in the chapter Memory Management of this book [28], and in the book of Horowitz, Sahni and Rajasekaran [167]. A model of scheduling of parallel processes is discussed in [130], [174], [345].

Cost-optimal parallel merge is analysed by Wu and Olariu in [349]. New ideas in parallel sorting (such as the application of multiple comparisons to get a constant-time sorting algorithm) can be found in the paper of Gasarch, Golub and Kruskal [126].

Chapter 16. Systolic Systems

Systolic arrays probably constitute a perfect kind of special purpose computer. In their simplest appearance, they may provide only one operation, that is repeated over and over again. Yet, systolic arrays show an abundance of practice-oriented applications, mainly in fields dominated by iterative procedures: numerical mathematics, combinatorial optimisation, linear algebra, algorithmic graph theory, image and signal processing, speech and text processing, et cetera.

A systolic array can be tailored to the structure of its one and only algorithm with such accuracy that the time and place of each executed operation are fixed once and for all, and communicating cells are permanently and directly connected, with no switching required. The algorithm has in fact become hardwired. Systolic algorithms in this respect are considered to be hardware algorithms. Please note that the term systolic algorithm usually does not refer to a set of concrete algorithms for solving a single specific computational problem, as for instance sorting; this is quite in contrast to terms like sorting algorithm. Rather, systolic algorithms constitute a special style of specification, programming, and computation. So algorithms from many different areas of application can be systolic in style, but probably not all well-known algorithms from such an area are suited to systolic computation.

Hence, this chapter does not intend to present all systolic algorithms, nor will it introduce even the most important systolic algorithms from any field of application. Instead, with a few simple but typical examples, we try to lay the foundations for the Readers' general understanding of systolic algorithms. The rest of this chapter is organised as follows: Section 16.1 shows some basic concepts of systolic systems by means of an introductory example. Section 16.2 explains how systolic arrays formally emerge from space-time transformations. Section 16.3 deals with input/output schemes. Section 16.4 is devoted to all aspects of control in systolic arrays. In Section 16.5 we study the class of linear systolic arrays, raising further questions.

16.1 Basic concepts of systolic systems

The designation systolic follows from the operational principle of the systolic architecture. The systolic style is characterised by an intensive application of both pipelining and parallelism, controlled by a global and completely synchronous clock. Data streams pulsate rhythmically through the communication network, like streams of blood are driven from the heart through the veins of the body. Here, pipelining is not constrained to a single space axis but concerns all data streams possibly moving in different directions and intersecting in the cells of the systolic array.

A systolic system typically consists of a host computer, and the actual systolic array. Conceptually, the host computer is of minor importance, just controlling the operation of the systolic array and supplying the data. The systolic array can be understood as a specialised network of cells rapidly performing data-intensive computations, supported by massive parallelism. A systolic algorithm is the program collaboratively executed by the cells of a systolic array.

Systolic arrays may appear very differently, but usually share a couple of key features: discrete time scheme, synchronous operation, regular (frequently two-dimensional) geometric layout, communication limited to directly neighbouring cells, and spartan control mechanisms.

In this section, we explain fundamental phenomena in the context of systolic arrays, driven by a running example. A computational problem usually allows several solutions, each implemented by a specific systolic array. Among these, the most attractive designs (in whatever respect) may be very complex. Note, however, that in this educational text we are less interested in advanced solutions, but strive to present important concepts compactly and intuitively.

16.1.1 An introductory example: matrix product

Figure 16.1 shows a rectangular systolic array consisting of 15 cells for multiplying a matrix by an matrix . The parameter is not reflected in the structure of this particular systolic array, but in the input scheme and the running time of the algorithm.

The input scheme depicted is based on the special choice of parameter . Therefore, Figure 16.1 gives a solution to the following problem instance:

where

and

The cells of the systolic array can exchange data through links, drawn as arrows between the cells in Figure 16.1(a). Boundary cells of the systolic array can also communicate with the outside world. All cells of the systolic array share a common connection pattern for communicating with their environment. The completely regular structure of the systolic array (placement and connection pattern of the cells) induces regular data flows along all connecting directions.

Figure 16.1(b) shows the internal structure of a cell. We find a multiplier, an adder, three registers, and four ports, plus some wiring between these units. Each port represents an interface to some external link that is attached to the cell. All our cells are of the same structure.

Each of the registers A, B, C can store a single data item. The designations of the registers are suggestive here, but arbitrary in principle. Registers A and B get their values from input ports, shown in Figure 16.1(b) as small circles on the left resp. upper border of the cell.

The current values of registers A and B are used as operands of the multiplier and, at the same time, are passed through output ports of the cell, see the circles on the right resp. lower border. The result of the multiplication is supplied to the adder, with the second operand originating from register C. The result of the addition eventually overwrites the past value of register C.

Figure 16.1.  Rectangular systolic array for matrix product. (a) Array structure and input scheme. (b) Cell structure.


16.1.2 Problem parameters and array parameters

The 15 cells of the systolic array are organised as a rectangular pattern of three rows by five columns, exactly as with matrix . Also, these dimensions directly correspond to the number of rows of matrix and the number of columns of matrix . The size of the systolic array, therefore, corresponds to the size of some data structures for the problem to solve. If we had to multiply an matrix by an matrix in the general case, then we would need a systolic array with rows and columns.

The quantities are parameters of the problem to solve, because the number of operations to perform depends on each of them; they are thus problem parameters. The size of the systolic array, in contrast, depends on the quantities and , only. For this reason, and also become array parameters for this particular systolic array, whereas is not an array parameter.

Remark. For matrix product, we will see another systolic array in Section 16.2, with dimensions dependent on all three problem parameters .

An systolic array as shown in Figure 16.1 would also permit multiplying an matrix by an matrix , where and . This is important if we intend to use the same systolic array for the multiplication of matrices of varying dimensions. Then we would operate on a properly dimensioned rectangular subarray only, consisting of rows and columns, and located, for instance, in the upper left corner of the complete array. The remaining cells would also work, but without any contribution to the solution of the whole problem; they should do no harm, of course.

16.1.3 Space coordinates

Now let's assume that we want to assign unique space coordinates to each cell of a systolic array, for characterising the geometric position of the cell relative to the whole array. In a rectangular systolic array, we simply can use the respective row and column numbers, for instance. The cell marked with in Figure 16.1 thus would get the coordinates (1,1), the cell marked with would get the coordinates (1,2), cell would get (2,1), and so on. For the remainder of this section, we take space coordinates constructed in such a way for granted.

In principle it does not matter where the coordinate origin lies, where the axes are pointing to, which direction in space corresponds to the first coordinate, and which to the second. In the system presented above, the order of the coordinates has been chosen corresponding to the designation of the matrix components. Thus, the first coordinate stands for the rows numbered top to bottom from position 1, the second component stands for the columns numbered left to right, also from position 1.

Of course, we could have made a completely different choice for the coordinate system. But the presented system perfectly matches our particular systolic array: the indices of a matrix element computed in a cell agree with the coordinates of this cell. The entered rows of the matrix carry the same number as the first coordinate of the cells they pass; correspondingly for the second coordinate, concerning the columns of the matrix B. All links (and thus all passing data flows) are in parallel to some axis, and towards ascending coordinates.

It is not always so clear how expressive space coordinates can be determined; we refer to the systolic array from Figure 16.3(a) as an example. But however the coordinate system is chosen, it is important that the regular structure of the systolic array is obviously reflected in the coordinates of the cells. Therefore, almost always integral coordinates are used. Moreover, the coordinates of cells with minimum Euclidean distance should differ in one component only, and then by distance 1.

16.1.4 Serialising generic operators

Each active cell from Figure 16.1 computes exactly the element of the result matrix . Therefore, the cell must evaluate the dot product

This is done iteratively: in each step, a product is calculated and added to the current partial sum for . Obviously, the partial sum has to be cleared—or set to another initial value, if required—before starting the accumulation. Inspired by the classical notation of imperative programming languages, the general proceeding could be specified in pseudocode as follows:

Matrix-Product()

  1  FOR  TO  
  2    DO FOR  TO  
  3       DO  
  4          FOR  TO  
  5             DO  
  6  RETURN  

If , we have to perform multiplications, additions, and assignments, each. Hence the running time of this algorithm is of order for any sequential processor.
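
Spelled out in a concrete language, the triple loop reads as follows. The Python sketch uses hypothetical names (matrix_product, n1, n2, n3) and passes the dimensions explicitly; it is only a sequential reference, not the systolic implementation.

  # Straightforward sequential matrix product C = A * B, where A has n1 rows and
  # n2 columns, and B has n2 rows and n3 columns (a sketch, not the book's code).
  def matrix_product(A, B, n1, n2, n3):
      C = [[0] * n3 for _ in range(n1)]
      for i in range(n1):
          for j in range(n3):
              C[i][j] = 0                       # clear the partial sum
              for k in range(n2):
                  C[i][j] += A[i][k] * B[k][j]  # accumulate a(i,k) * b(k,j)
      return C

  A = [[1, 2], [3, 4], [5, 6]]                  # 3 x 2
  B = [[1, 0, 2], [0, 1, 3]]                    # 2 x 3
  print(matrix_product(A, B, 3, 2, 3))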

The sum operator is one of the so-called generic operators, which combine an arbitrary number of operands. In the systolic array from Figure 16.1, all additions contributing to a particular sum are performed in the same cell. However, there are plenty of examples where the individual operations of a generic operator are spread over several cells—see, for instance, the systolic array from Figure 16.3.

Remark. Further examples of generic operators are: product, minimum, maximum, as well as the Boolean operators AND, OR, and EXCLUSIVE OR.

Thus, generic operators usually have to be serialised before the calculations to perform can be assigned to the cells of the systolic array. Since the distribution of the individual operations to the cells is not unique, generic operators generally must be dealt with in another way than simple operators with fixed arity, as for instance the dyadic addition.

16.1.5 Assignment-free notation

Instead of using an imperative style as in algorithm Matrix-Product, we better describe systolic programs by an assignment-free notation which is based on an equational calculus. Thus we avoid side effects and are able to directly express parallelism. For instance, we may be bothered about the reuse of the program variable from algorithm Matrix-Product. So, we replace with a sequence of instances , that stand for the successive states of . This approach yields a so-called recurrence equation. We are now able to state the general matrix product from algorithm Matrix-Product by the following assignment-free expressions:

System (16.1) explicitly describes the fine structure of the executed systolic algorithm. The first equation specifies all input data, the third equation all output data. The systolic array implements these equations by input/output operations. Only the second equation corresponds to real calculations.

Each equation of the system is accompanied, on the right side, by a quantification. The quantification states the set of values the iteration variables and (and, for the second equation, also ) should take. Such a set is called a domain. The iteration variables of the second equation can be combined in an iteration vector . For the input/output equations, the iteration vector would consist of the components and , only. To get a closed representation, we augment this vector by a third component , that takes a fixed value. Inputs then are characterised by , outputs by . Overall we get the following system:

Note that although the domains for the input/output equations now are formally also of dimension 3, as a matter of fact they are only two-dimensional in the classical geometric sense.
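
The difference to the imperative version can be made tangible in code: in the sketch below, every instance c[i][j][k] is written exactly once, mirroring the input, calculation, and output equations. This is our own simplified rendering of the recurrence idea, with addition and multiplication as the concrete operations.

  # Assignment-free view of the matrix product: every value is written only once.
  # c[i][j][k] stands for the k-th partial sum of the dot product for element (i, j).
  def matrix_product_recurrence(A, B, n1, n2, n3):
      c = [[[None] * (n2 + 1) for _ in range(n3)] for _ in range(n1)]
      for i in range(n1):
          for j in range(n3):
              c[i][j][0] = 0                                              # input equation
              for k in range(1, n2 + 1):
                  c[i][j][k] = c[i][j][k - 1] + A[i][k - 1] * B[k - 1][j] # calculation
      return [[c[i][j][n2] for j in range(n3)] for i in range(n1)]        # output equation

  print(matrix_product_recurrence([[1, 2]], [[3], [4]], 1, 2, 1))         # [[11]]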

16.1.6 Elementary operations

From equations as in system (16.2), we can directly infer the atomic entities to perform in the cells of the systolic array. We find these operations by instantiating each equation of the system with all points of the respective domain. If an equation contains several suboperations corresponding to one point of the domain, these are seen as a compound operation, and are always processed together by the same cell in one working cycle.

In the second equation of system (16.2), for instance, we find the multiplication and the successive addition . The corresponding elementary operations—multiplication and addition—are indeed executed together as a multiply-add compound operation by the cell of the systolic array shown in Figure 16.1(b).

Now we can assign a designation to each elementary operation, also called coordinates. A straightforward method to define suitable coordinates is provided by the iteration vectors used in the quantifications.

Applying this concept to system (16.1), we can for instance assign the tuple of coordinates to the calculation . The same tuple is assigned to the input operation , but with setting . By the way: all domains are disjoint in this example.

If we always use the iteration vectors as designations for the calculations and the input/output operations, there is no further need to distinguish between coordinates and iteration vectors. Note, however, that this decision also mandates that all operations belonging to a certain point of the domain together constitute a compound operation—even when they appear in different equations and possibly are not related. For simplicity, we always use the iteration vectors as coordinates in the sequel.

16.1.7 Discrete timesteps

The various elementary operations always happen in discrete timesteps in the systolic cells. All these timesteps driving a systolic array are of equal duration. Moreover, all cells of a systolic array work completely synchronously, i.e., they all start and finish their respective communication and calculation steps at the same time. Successive timesteps controlling a cell seamlessly follow each other.

Remark. But haven't we learned from Albert Einstein that strict simultaneity is physically impossible? Indeed, all we need here are cells that operate almost simultaneously. Technically this is guaranteed by providing to all systolic cells a common clock signal that switches all registers of the array. Within the bounds of the usually achievable accuracy, the communication between the cells happens sufficiently synchronised, and thus no loss of data occurs concerning send and receive operations. Therefore, it should be justified to assume a conceptional simultaneity for theoretical reasoning.

Now we can slice the physical time into units of a timestep, and number the timesteps consecutively. The origin on the time axis can be arbitrarily chosen, since time is running synchronously for all cells. A reasonable decision would be to take as the time of the first input in any cell. Under this regime, the elementary compound operation of system (16.2) designated by would be executed at time . On the other hand, it would be evenly justified to assign the time to the coordinates ; because this change would only induce a global time shift by three time units.

So let us assume for the following that the execution of an instance starts at time . The first calculation in our example then happens at time , the last at time . The running time thus amounts to timesteps.

16.1.8 External and internal communication

Normally, the data needed for calculation by the systolic array initially are not yet located inside the cells of the array. Rather, they must be infused into the array from the outside world. The outside world in this case is a host computer, usually a scalar control processor accessing a central data storage. The control processor, at the right time, fetches the necessary data from the storage, passes them to the systolic array in a suitable way, and eventually writes back the calculated results into the storage.

Each cell must access the operands and during the timestep concerning index value . But only the cells of the leftmost column of the systolic array from Figure 16.1 get the items of the matrix directly as input data from the outside world. All other cells must be provided with the required values from a neighbouring cell. This is done via the horizontal links between neighbouring cells, see Figure 16.1(a). The item successively passes the cells . Correspondingly, the value enters the array at cell , and then flows through the vertical links, reaching the cells up to cell . An arrowhead in the figure shows in which direction the link is oriented.

Frequently, it is considered problematic to transmit a value over large distances within a single timestep, in a distributed or parallel architecture. Now suppose that, in our example, cell got the value during timestep from cell , or from the outside world. For the reasons described above, is not passed from cell to cell in the same timestep , but one timestep later, i.e., at time . This also holds for the values . The delay is visualised in the detail drawing of the cell from Figure 16.1(b): input data flowing through a cell always pass one register, and each passed register induces a delay of exactly one timestep.

Remark. For systolic architectures, it is mandatory that any path between two cells contains at least one register—even when forwarding data to a neighbouring cell, only. All registers in the cells are synchronously switched by the global clock signal of the systolic array. This results in the characteristic rhythmical traffic on all links of the systolic array. Because of the analogy with pulsating veins, the medical term systole has been reused for the name of the concept.

To elucidate the delayed forwarding of values, we augment system (16.1) with further equations. Repeatedly used values like are represented by separate instances, one for each access. The result of this proceeding—that is very characteristic for the design of systolic algorithms—is shown as system (16.3).

Each of the partial sums in the progressive evaluation of is calculated in a certain timestep, and then used only once, namely in the next timestep. Therefore, cell must provide a register (named C in Figure 16.1(b)) where the value of can be stored for one timestep. Once the old value is no longer needed, the register holding can be overwritten with the new value . When eventually the dot product is completed, the register contains the value , that is the final result . Before performing any computation, the register has to be cleared, i.e., preloaded with a zero value—or any other desired value.

In contrast, there is no need to store the values and permanently in cell . As we can learn from Figure 16.1(a), each row of the matrix is delayed by one timestep with respect to the preceding row. And so are the columns of the matrix . Thus the values and arrive at cell exactly when the calculation of is due. They are put to the registers A resp. B, then immediately fetched from there for the multiplication, and in the same cycle forwarded to the neighbouring cells. The values and are of no further use for cell after they have been multiplied, and need not be stored there any longer. So A and B are overwritten with new values during the next timestep.

It should be obvious from this exposition that we urgently need to make economic use of the memory contained in a cell. Any calculation and any communication must be coordinated in space and time in such a way that storing of values is limited to the shortest-possible time interval. This goal can be achieved by immediately using and forwarding the received values. Besides the overall structure of the systolic array, choosing an appropriate input/output scheme and placing the corresponding number of delays in the cells essentially facilitates the desired coordination. Figure 16.1(b) in this respect shows the smallest possible delay by one timestep.

Geometrically, the input scheme of the example resulted from skewing the matrices and . Thereby some places in the input streams for matrix became vacant and had to be filled with zero values; otherwise, the calculation of the would have been garbled. The lengths of the input streams depend on the problem parameter .

As can be seen in Figure 16.1, the items of matrix are calculated stationary, i.e., all additions contributing to an item happen in the same cell. Stationary variables don't move at all during the calculation in the systolic array. Stationary results eventually must be forwarded to a border of the array in a supplementary action for getting delivered to the outside world. Moreover, it is necessary to initialise the register for item . Performing these extra tasks requires a high expenditure of runtime and hardware. We will further study this problem in Section 16.4.

16.1.9 Pipelining

The characteristic operating style with globally synchronised discrete timesteps of equal duration and the strict separation in time of the cells by registers suggest systolic arrays to be special cases of pipelined systems. Here, the registers of the cells correspond to the well-known pipeline registers. However, classical pipelines come as linear structures, only, whereas systolic arrays frequently extend into more spatial dimensions—as visible in our example. A multi-dimensional systolic array can be regarded as a set of interconnected linear pipelines, with some justification. Hence it should be apparent that basic properties of one-dimensional pipelining also apply to multi-dimensional systolic arrays.

A typical effect of pipelining is the reduced utilisation at startup and during shut-down of the operation. Initially, the pipe is empty, no pipeline stage active. Then, the first stage receives data and starts working; all other stages are still idle. During the next timestep, the first stage passes data to the second stage and itself receives new data; only these two stages do some work. More and more stages become active until all stages process data in every timestep; the pipeline is now fully utilised for the first time. After a series of timesteps at maximum load, with duration dependent on the length of the data stream, the input sequence ceases; the first stage of the pipeline therefore runs out of work. In the next timestep, the second stage stops working, too. And so on, until eventually all stages have fallen asleep again. Phases of reduced activity diminish the average performance of the whole pipeline, and the relative contribution of this drop in productivity is all the worse, the more stages the pipeline has in relation to the length of the data stream.

We now study this phenomenon to some depth by analysing the two-dimensional systolic array from Figure 16.1. As expected, we find a lot of idling cells when starting or finishing the calculation. In the first timestep, only cell performs some useful work; all other cells in fact do calculations that work like null operations—and that's what they are supposed to do in this phase. In the second timestep, cells and come to real work, see Figure 16.2(a). Data is flooding the array until eventually all cells are doing work. After the last true data item has left cell , the latter is no longer contributing to the calculation but merely reproduces the finished value of . Step by step, more and more cells drop off. Finally, only cell makes a last necessary computation step; Figure 16.2(b) shows this concluding timestep.

Figure 16.2.  Two snapshots for the systolic array from Figure 16.1.


Exercises

16.1-1 What must be changed in the input scheme from Figure 16.1(a) to multiply a matrix by a matrix on the same systolic array? Could the calculations be organised such that the result matrix would emerge in the lower right corner of the systolic array?

16.1-2 Why is it necessary to clear spare slots in the input streams for matrix , as shown in Figure 16.1? Why haven't we done the same for matrix also?

16.1-3 If the systolic array from Figure 16.1 should be interpreted as a pipeline: how many stages would you suggest to adequately describe the behaviour?

16.2 Space-time transformation and systolic arrays

Although the approach taken in the preceding section should be sufficient for a basic understanding of the topic, we have to work harder to describe and judge the properties of systolic arrays in a quantitative and precise way. In particular the solution of parametric problems requires a solid mathematical framework. So, in this section, we study central concepts of a formal theory on uniform algorithms, based on linear transformations.

16.2.1 Further example: matrix product

System (16.3) can be computed by a multitude of other systolic arrays, besides that from Figure 16.1. In Figure 16.3, for example, we see such an alternative systolic array. Whereas the same function is evaluated by both architectures, the appearance of the array from Figure 16.3 is very different:

  • The number of cells now is considerably larger, altogether 36, instead of 15.

  • The shape of the array is hexagonal, instead of rectangular.

  • Each cell now has three input ports and three output ports.

  • The input scheme is clearly different from that of Figure 16.1(a).

  • And finally: the matrix here also flows through the whole array.

The cell structure from Figure 16.3(b) at first view does not appear essentially distinguished from that in Figure 16.1(b). But the differences matter: there are no cyclic paths in the new cell, thus stationary variables can no longer appear. Instead, the cell is provided with three input ports and three output ports, passing items of all three matrices through the cell. The direction of communication at the ports on the right and left borders of the cell has changed, as well as the assignment of the matrices to the ports.

Figure 16.3.  Hexagonal systolic array for matrix product. (a) Array structure and principle of the data input/output. (b) Cell structure.


16.2.2 The space-time transformation as a global view

How is system (16.3) related to Figure 16.3? No doubt you were able to fully understand the operation of the systolic array from Section 16.1 without any special aid. But for the present example this is considerably more difficult—so now you may be sufficiently motivated for the use of a mathematical formalism.

We can assign two fundamental measures to each elementary operation of an algorithm for describing the execution in the systolic array: the time when the operation is performed, and the position of the cell where the operation is performed. As will become clear in the sequel, after fixing the so-called space-time transformation there are hardly any degrees of freedom left for further design: practically all features of the intended systolic array strictly follow from the chosen space-time transformation.

As for the systolic array from Figure 16.1, the execution of an instance in the systolic array from Figure 16.3 happens at time . We can represent this expression as the dot product of a time vector

by the iteration vector

hence

so in this case

The space coordinates of the executed operations in the example from Figure 16.1 can be inferred as from the iteration vector according to our decision in Subsection 16.1.3. The chosen map is a projection of the space along the axis. This linear map can be described by a projection matrix

To find the space coordinates, we multiply the projection matrix by the iteration vector , written as

The projection direction can be represented by any vector perpendicular to all rows of the projection matrix,

For the projection matrix from (16.8), one of the possible projection vectors would be .

Projections are very popular for describing the space coordinates when designing a systolic array. Also in our example from Figure 16.3(a), the space coordinates are generated by projecting the iteration vector. Here, a feasible projection matrix is given by

A corresponding projection vector would be .

We can combine the projection matrix and the time vector in a matrix , that fully describes the space-time transformation,

The first and second rows of are constituted by the projection matrix , the third row by the time vector .

For the example from Figure 16.1, the matrix giving the space-time transformation reads as

for the example from Figure 16.3 we have

Space-time transformations may be understood as a global view to the systolic system. Applying a space-time transformation—that is linear, here, and described by a matrix —to a system of recurrence equations directly yields the external features of the systolic array, i.e., its architecture—consisting of space coordinates, connection pattern, and cell structure.

Remark. Instead of purely linear maps, we alternatively may consider general affine maps, additionally providing a translative component, . Though as long as we treat all iteration vectors with a common space-time transformation, affine maps are not really required.
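
A small numeric sketch may help at this point. The concrete values in it are our reconstruction only: we assume the time vector (1,1,1) for both designs, and as projection matrices ((1,0,0),(0,1,0)) for the rectangular array of Figure 16.1 (projection along the third axis) and ((0,1,-1),(-1,0,1)) for the hexagonal array of Figure 16.3 (projection along (1,1,1)). These choices are consistent with the cell counts and running times quoted in this chapter, but they are not taken literally from the formulas above.

  # Apply a space-time matrix T to an iteration vector v = (i, j, k): the first
  # two result components are the space coordinates, the third is the timestep.
  def space_time(T, v):
      return tuple(sum(T[r][c] * v[c] for c in range(3)) for r in range(3))

  T_rect = ((1, 0, 0), (0, 1, 0), (1, 1, 1))     # assumed: projection along the third axis
  T_hex  = ((0, 1, -1), (-1, 0, 1), (1, 1, 1))   # assumed: projection along (1, 1, 1)

  print(space_time(T_rect, (2, 3, 1)))           # (2, 3, 6): cell (2, 3), timestep 6
  print(space_time(T_hex, (2, 3, 1)))            # (2, -1, 6): cell (2, -1), timestep 6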

16.2.3 Parametric space coordinates

If the domains are numerically given and contain few points in particular, we can easily calculate the concrete set of space coordinates via equation (16.9). But when the domains are specified parametrically as in system (16.3), the positions of the cells must be determined by symbolic evaluation. The following explanation especially dwells on this problem.

Suppose that each cell of the systolic array is represented geometrically by a point with space coordinates in the two-dimensional space . From each iteration vector of the domain , by equation (16.9) we get the space coordinates of a certain processor, : the operations denoted by are projected onto cell . The set of space coordinates states the positions of all cells in the systolic array necessary for correct operation.

To our advantage, we normally use domains that can be described as the set of all integer points inside a convex region, here a subset of —called dense convex domains. The convex hull of such a domain with a finite number of domain points is a polytope, with domain points as vertices. Polytopes map to polytopes again by arbitrary linear transformations. Now we can make use of the fact that each projection is a linear transformation. Vertices of the destination polytope then are images of vertices of the source polytope.

Remark. But not all vertices of a source polytope need to be projected to vertices of the destination polytope, see for instance Figure 16.4.

Figure 16.4.  Image of a rectangular domain under projection. Most interior points have been suppressed for clarity. Images of previous vertex points are shaded.



When projected by an integer matrix , the lattice maps to the lattice if can be extended by an integer time vector to a unimodular space-time matrix . Practically any dense convex domain, apart from some exceptions irrelevant to usual applications, thereby maps to another dense convex set of space coordinates, that is completely characterised by the vertices of the hull polytope. To determine the shape and the size of the systolic array, it is therefore sufficient to apply the matrix to the vertices of the convex hull of .

Remark. Any square integer matrix with determinant is called unimodular. Unimodular matrices have unimodular inverses.

We apply this method to the integer domain

from system (16.3). The vertices of the convex hull here are

For the projection matrix from (16.11), the vertices of the corresponding image have the positions

Since has eight vertices, but the image only six, it is obvious that two vertices of have become interior points of the image, and thus are of no relevance for the size of the array; namely the vertices and . This phenomenon is sketched in Figure 16.4.

The settings , , and yield the vertices (3,0), (3,-2), (0,-2), (-4,2), (-4,4), and (-1,4). We see that space coordinates in principle can be negative. Moreover, the choice of an origin—that here lies in the interior of the polytope—might not always be obvious.

As the image of the projection, we get a systolic array with hexagonal shape and parallel opposite borders. On these, we find , , and integer points, respectively; cf. Figure 16.5. Thus, as opposed to our first example, all problem parameters here are also array parameters.

Figure 16.5.  Partitioning of the space coordinates.



The area function of this region is of order , and thus depends on all three matrix dimensions. So this is quite different from the situation in Figure 16.1(a), where the area function—for the same problem—is of order .

Improving on this approximate calculation, we finally count the exact number of cells. For this process, it might be helpful to partition the entire region into subregions for which the number of cells comprised can be easily determined; see Figure 16.5. The points (0,0), , , and are the vertices of a rectangle with cells. If we translate this point set up by cells and right by cells, we exactly cover the whole region. Each shift by one cell up and right contributes just another cells. Altogether this yields cells.

For , , and we thereby get a number of 36 cells, as we have already learned from Figure 16.3(a).
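
The symbolic count can also be double-checked by brute force: project every point of the dense rectangular domain and count the distinct images. The Python sketch below uses the hexagonal projection matrix and the problem parameters 3, 4, 5 assumed earlier for illustration; with these values it reproduces the 36 cells of Figure 16.3(a).

  # Count the cells of the systolic array by projecting all domain points
  # (i, j, k) with 1 <= i <= n1, 1 <= j <= n2, 1 <= k <= n3 (illustrative values).
  def count_cells(P, n1, n2, n3):
      cells = {
          (P[0][0] * i + P[0][1] * j + P[0][2] * k,
           P[1][0] * i + P[1][1] * j + P[1][2] * k)
          for i in range(1, n1 + 1)
          for j in range(1, n2 + 1)
          for k in range(1, n3 + 1)
      }
      return cells

  P_hex = ((0, 1, -1), (-1, 0, 1))               # assumed hexagonal projection matrix
  print(len(count_cells(P_hex, 3, 4, 5)))        # 36 cells, a hexagonal point set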

16.2.4 Symbolically deriving the running time

The running time of a systolic algorithm can be symbolically calculated by an approach similar to that in Subsection 16.2.3. The time transformation according to formula (16.6) is a linear map as well. We find the timesteps of the first and the last calculations as the minimum resp. maximum in the set of execution timesteps. Following the discussion above, it thereby suffices to vary over the vertices of the convex hull of .

The running time is then given by the formula

Adding one is mandatory here, since the first as well as the last timestep belong to the calculation.

For the example from Figure 16.3, the vertices of the polytope as enumerated in (16.16) are mapped by (16.7) to the set of images

With the basic assumption , we get a minimum of 3 and a maximum of , thus a running time of timesteps, as for the systolic array from Figure 16.1—no surprise, since the domains and the time vectors agree.

For the special problem parameters , , and , a running time of timesteps can be derived.

If , the systolic algorithm shows a running time of order , using systolic cells.
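
A corresponding numeric check maps the vertices of the domain with the time vector, takes minimum and maximum, and adds one. Again the time vector (1,1,1) and the parameters 3, 4, 5 are only our assumed illustration; under these assumptions the earliest calculation timestep is 3 and the latest is 12, hence 10 timesteps for the pure calculation.

  # Running time from the vertices of the domain: t(v) = tau . v, then max - min + 1.
  from itertools import product

  def running_time(tau, n1, n2, n3):
      vertices = product((1, n1), (1, n2), (1, n3))
      times = [sum(t * x for t, x in zip(tau, v)) for v in vertices]
      return max(times) - min(times) + 1     # first and last timestep both count

  print(running_time((1, 1, 1), 3, 4, 5))    # 10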

16.2.5 How to unravel the communication topology

The communication topology of the systolic array is induced by applying the space-time transformation to the data dependences of the algorithm. Each data dependence results from a direct use of a variable instance to calculate another instance of the same variable, or an instance of another variable.

Remark. In contrast to the general situation where a data dependence analysis for imperative programming languages has to be performed by highly optimising compilers, data dependences here are always flow dependences. This is a direct consequence of the assignment-free notation we employ.

The data dependences can be read off the quantified equations in our assignment-free notation by comparing their right and left sides. For example, we first analyse the equation from system (16.3).

The value is calculated from the values , , and . Thus we have a data flow from to , a data flow from to , and a data flow from to .

All properties of such a data flow that matter here can be covered by a dependence vector, which is the iteration vector of the calculated variable instance minus the iteration vector of the correspondingly used variable instance.

The iteration vector for is ; that for is . Thus, as the difference vector, we find

Correspondingly, we get

and

In the equation from system (16.3), we cannot directly recognise which is the calculated variable instance, and which is the used variable instance. This example elucidates the difference between equations and assignments. When fixing that should follow from by a copy operation, we get the same dependence vector as in (16.20). Correspondingly for the equation .

A variable instance with iteration vector is calculated in cell . If for this calculation another variable instance with iteration vector is needed, implying a data dependence with dependence vector , the used variable instance is provided by cell . Therefore, we need a communication from cell to cell . In systolic arrays, all communication has to be via direct static links between the communicating cells. Due to the linearity of the transformation from (16.9), we have .

If , communication happens exclusively inside the calculating cell, i.e., in time, only—and not in space. Passing values in time is via registers of the calculating cell.

For , in contrast, communication between different cells is needed. Then a link along the flow direction must be provided from/to all cells of the systolic array. The vector , oriented in counter flow direction, leads from space point to space point .

If there is more than one dependence vector , we need an appropriate link for each of them at every cell. Take for example the formulas (16.19), (16.20), and (16.21) together with (16.11), then we get , , and . In Figure 16.3(a), terminating at every cell, we see three links corresponding to the various vectors . This results in a hexagonal communication topology—instead of the orthogonal communication topology from the first example.
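
The projections of the dependence vectors can be computed mechanically. In the sketch below we assume the dependence vectors (0,0,1), (0,1,0) and (1,0,0) for the three data flows, together with the two projection matrices assumed earlier; under these assumptions the rectangular projection sends one dependence vector to the null vector (a stationary result), whereas the hexagonal projection yields three distinct link directions.

  # Project the dependence vectors to obtain the link directions of the array.
  def project(P, d):
      return tuple(sum(P[r][c] * d[c] for c in range(3)) for r in range(2))

  deps = {'c': (0, 0, 1), 'a': (0, 1, 0), 'b': (1, 0, 0)}   # assumed dependence vectors
  P_rect = ((1, 0, 0), (0, 1, 0))
  P_hex  = ((0, 1, -1), (-1, 0, 1))

  for name, d in deps.items():
      print(name, project(P_rect, d), project(P_hex, d))
  # c (0, 0) (-1, 1)   stationary in the rectangular array, moving in the hexagonal one
  # a (0, 1) (1, 0)
  # b (1, 0) (0, -1)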

16.2.6 Inferring the structure of the cells

Now we apply the space-related techniques from Subsection 16.2.5 to time-related questions. A variable instance with iteration vector is calculated in timestep . If this calculation uses another variable instance with iteration vector , the former had been calculated in timestep . Hence communication corresponding to the dependence vector must take exactly timesteps.

Since (16.6) describes a linear map, we have . According to the systolic principle, each communication must involve at least one register. The dependence vectors are fixed, and so the choice of a time vector is constrained by

In case , we must provide registers for stationary variables in all cells. But each register is overwritten with a new value in every timestep. Hence, if , the old value must be carried on to a further register. Since this is repeated for timesteps, the cell needs exactly registers per stationary variable. The values of the stationary variable successively pass all these registers before eventually being used. If , the transport of values analogously goes by registers, though these are not required to belong all to the same cell.

For each dependence vector , we thus need an appropriate number of registers. In Figure 16.3(b), we see three input ports at the cell, corresponding to the dependence vectors , , and . Since for these we have . Moreover, due to (16.7) and (16.4). Thus, we need one register per dependence vector. Finally, the regularity of system (16.3) forces three output ports for every cell, opposite to the corresponding input ports.

Good news: we can infer in general that each cell needs only a few registers, because the number of dependence vectors is statically bounded with a system like (16.3), and for each of the dependence vectors the amount of registers has a fixed and usually small value.

The three input and output ports at every cell now permit the use of three moving matrices. Very differently from Figure 16.1, a dot product here is not calculated within a single cell, but dispersed over the systolic array. As a prerequisite, we had to dissolve the sum into a sequence of single additions. We call this principle a distributed generic operator.

Apart from the three input ports with their registers, and the three output ports, Figure 16.3(b) shows a multiplier chained to an adder. Both units are induced in each cell by applying the transformation (16.9) to the domain of the equation from system (16.3). According to this equation, the addition has to follow the calculation of the product, so the order of the hardware operators as seen in Figure 16.3(b) is implied.

The source cell for each of the used operands follows from the projection of the corresponding dependence vector. Here, variable is related to the dependence vector . The projection constitutes the flow direction of matrix . Thus the value to be used has to be expected, as observed by the calculating cell, in opposite direction , in this case from the port in the lower left corner of the cell, passing through register A. All the same, comes from the right via register B, and from above through register C. The calculated values , , and are output into the opposite directions through the appropriate ports: to the upper right, to the left, and downwards.

If alternatively we use the projection matrix from (16.8), then for we get the direction . The formula results in the requirement of exactly one register C for each item of the matrix . This register provides the value for the calculation of , and after this calculation receives the value . All this reasoning matches with the cell from Figure 16.1(b). Figure 16.1(a) correspondingly shows no links for matrix between the cells: for the matrix is stationary.

Exercises

16.2-1 Each projection vector induces several corresponding projection matrices .

  • a. Show that

    is also a projection matrix fitting the projection vector .

  • b. Use this projection matrix to transform the domain from system (16.3).

  • c. The resulting space coordinates differ from those in Subsection 16.2.3. Why, in spite of this, are both point sets topologically equivalent?

  • d. Analyse the cells in both arrangements for common and differing features.

16.2-2 Apply all techniques from Section 16.2 to system (16.3), employing a space-time matrix

16.3 Input/output schemes

In Figure 16.3(a), the input/output scheme is only sketched by the flow directions for the matrices . The necessary details to understand the input/output operations are now provided by Figure 16.6.

Figure 16.6.  Detailed input/output scheme for the systolic array from Figure 16.3(a).



The input/output scheme in Figure 16.6 shows some new phenomena when compared with Figure 16.1(a). The input and output cells belonging to any matrix are no longer threaded all on a single straight line; now, for each matrix, they lie along two adjacent borders, which additionally may differ in the number of links to the outside world. The data structures from Figure 16.6 also differ from those in Figure 16.1(a) in the angle of inclination. Moreover, the matrices and from Figure 16.6 arrive at the boundary cells with only one third of the data rate, compared to Figure 16.1(a).

Spending some effort, even here it might be possible in principle to construct—item by item—the appropriate input/output scheme fitting the present systolic array. But it is much safer to apply a formal derivation. The following subsections are devoted to the presentation of the various methodical steps for achieving our goal.

16.3.1 From data structure indices to iteration vectors

First, we need to construct a formal relation between the abstract data structures and the concrete variable instances in the assignment-free representation.

Each item of the matrix can be characterised by a row index and a column index . These data structure indices can be comprised in a data structure vector . Item in system (16.3) corresponds to the instances , with any . The coordinates of these instances all lie on a line along direction in space . Thus, in this case, the formal change from data structure vector to coordinates can be described by the transformation

In system (16.3), the coordinate vector of every variable instance equals the iteration vector of the domain point representing the calculation of this variable instance. Thus we also may interpret formula (16.23) as a relation between data structure vectors and iteration vectors. Abstractly, the desired iteration vectors can be inferred from the data structure vector by the formula

The affine vector is necessary in more general cases, though it is always null in our example.

Because of , the representation for matrix correspondingly is

Concerning matrix , each variable instance may denote a different value. Nevertheless, all instances to a fixed index pair can be regarded as belonging to the same matrix item , since they all stem from the serialisation of the sum operator for the calculation of . Thus, for matrix , following formula (16.24) we may set

16.3.2 Snapshots of data structures

Each of the three matrices is generated by two directions with regard to the data structure indices: along a row, and along a column. The difference vector (0,1) thereby describes a move from an item to the next item of the same row, i.e., in the next column: . Correspondingly, the difference vector (1,0) stands for sliding from an item to the next item in the same column and next row: .

Input/output schemes of the appearance shown in Figures 16.1(a) and 16.6 denote snapshots: all positions of data items depicted, with respect to the entire systolic array, are related to a common timestep.

As we can notice from Figure 16.6, the rectangular shapes of the abstract data structures are mapped to parallelograms in the snapshot, due to the linearity of the applied space-time transformation. These parallelograms can be described by difference vectors along their borders, too.

Next we will translate difference vectors of data structure vectors into spatial difference vectors for the snapshot. To this end, by choosing the parameter in formula (16.24), we pick a pair of iteration vectors that are mapped to the same timestep under our space-time transformation. For the moment it is not important which concrete timestep we thereby get. Thus, we set up

implying

and thus

Due to the linearity of all used transformations, the wanted spatial difference vector hence follows from the difference vector of the data structure as

or

With the aid of formula (16.31), we now can determine the spatial difference vectors for matrix . As mentioned above, we have

Noting , we get

For the rows, we have the difference vector , yielding the spatial difference vector . Correspondingly, from for the columns we get . If we check with Figure 16.6, we see that the rows of in fact run along the vector , the columns along the vector .

Similarly, we get for the rows of , and for the columns of ; as well as for the rows of , and for the columns of .

Applying these instruments, we are now able to reliably generate appropriate input/output schemes—although separately for each matrix at the moment.
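
The derivation of the spatial difference vectors can be mechanised as well: choose an iteration difference that realises the given data structure difference, adjust it along the free index so that the time difference vanishes, and project it. The sketch below does this for one of the input matrices, assuming its items are indexed by the first and third iteration variables (so that the second one is free) and using the transformation matrices assumed earlier; the resulting vectors are therefore illustrative only and not read off Figure 16.6.

  # Snapshot difference vectors for a matrix with free index j: choose the
  # iteration difference with zero time difference, then project it.
  def snapshot_diff(P, di, dk):
      dj = -(di + dk)          # zero time difference, assuming time vector (1, 1, 1)
      d = (di, dj, dk)
      return tuple(sum(P[r][c] * d[c] for c in range(3)) for r in range(2))

  P_hex = ((0, 1, -1), (-1, 0, 1))
  print(snapshot_diff(P_hex, 0, 1))    # along a row:    (-2, 1)
  print(snapshot_diff(P_hex, 1, 0))    # along a column: (-1, -1)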

16.3.3 Superposition of input/output schemes

Now, the shapes of the matrices for the snapshot have been fixed. But we still have to adjust the matrices relative to the systolic array—and thus, also relative to each other. Fortunately, there is a simple graphical method for doing the task.

We first choose an arbitrary iteration vector, say . The latter we map with the projection matrix to the cell where the calculation takes place,

The iteration vector (1,1,1) represents the calculations , , and ; these in turn correspond to the data items , , and . We now lay the input/output schemes for the matrices on the systolic array in a way that the entries , , and all are located in cell .

In principle, we would be done now. Unfortunately, our input/output schemes overlap with the cells of the systolic array, and are therefore not easily perceivable. Thus, we simultaneously retract the input/output schemes of all matrices in counter flow direction, place by place, until there is no more overlapping. With this method, we get exactly the input/output scheme from Figure 16.6.

As an alternative to this nice graphical method, we also could formally calculate an overlap-free placement of the various input/output schemes.

Only after specifying the input/output schemes can we correctly calculate the number of timesteps effectively needed. The first relevant timestep starts with the first input operation. The last relevant timestep ends with the last output of a result. For the example, we determine from Figure 16.6 the beginning of the calculation with the input of the data item in timestep 0, and the end of the calculation after the output of the result in timestep 14. Altogether, we identify 15 timesteps—five more than with the pure treatment of the real calculations.

16.3.4 Data rates induced by space-time transformations

The input schemes of the matrices and from Figure 16.1(a) have a dense layout: if we drew the borders of the matrices shown in the figure, there would be no spare places comprised.

Not so in Figure 16.6. In any input data stream, each data item is followed there by two spare places. For the input matrices this means: the boundary cells of the systolic array receive a proper data item only every third timestep.

This property is a direct result of the employed space-time transformation. In both examples, the abstract data structures themselves are dense. But how close the various items really come in the input/output scheme depends on the absolute value of the determinant of the transformation matrix : in every input/output data stream, the proper items follow each other with a spacing of exactly places. This indeed holds for Figure 16.1; as for Figure 16.6, we can now rate the loose spacing as a practical consequence of .
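
This relation is easy to verify for the two running examples, again with the space-time matrices we assumed for illustration: the rectangular design then has a determinant of absolute value 1 (dense streams, as in Figure 16.1(a)), the hexagonal one of absolute value 3 (two spare places after each item, as in Figure 16.6).

  # Spacing of proper items in the data streams: |det T| for the two assumed designs.
  def det3(T):
      return (T[0][0] * (T[1][1] * T[2][2] - T[1][2] * T[2][1])
              - T[0][1] * (T[1][0] * T[2][2] - T[1][2] * T[2][0])
              + T[0][2] * (T[1][0] * T[2][1] - T[1][1] * T[2][0]))

  T_rect = ((1, 0, 0), (0, 1, 0), (1, 1, 1))
  T_hex  = ((0, 1, -1), (-1, 0, 1), (1, 1, 1))
  print(abs(det3(T_rect)), abs(det3(T_hex)))    # 1 3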

What to do with spare places as those in Figure 16.6? Although each cell of the systolic array from Figure 16.3 in fact does useful work only every third timestep, it would be nonsense to pause during two out of three timesteps. Strictly speaking, we can argue that values on places marked with dots in Figure 16.6 have no influence on the calculation of the shown items , because they never reach an active cell at time of the calculation of a variable . Thus, we may simply fill spare places with any value, no danger of disturbing the result. It is even feasible to execute three different matrix products at the same time on the systolic array from Figure 16.3, without interference. This will be our topic in Subsection 16.3.7.

16.3.5 Input/output expansion

When further studying Figure 16.6, we can identify another problem. Check, for example, the itinerary of through the cells of the systolic array. According to the space-time transformation, the calculations contributing to the value of happen in the cells , , , and . But the input/output scheme from Figure 16.6 tells us that also passes through cell before, and eventually visits cell , too.

This may be interpreted as some spurious calculations being introduced into the system (16.3) by the used space-time transformation, here, for example, at the new domain points (2,2,0) and (2,2,5). The reason for this phenomenon is that the domains of the input/output operations are not parallel to the chosen projection direction. Thus, some input/output operations are projected onto cells that do not belong to the boundary of the systolic array. But in the interior of the systolic array, no input/output operation can be performed directly. The problem can be solved by extending the trajectory, in flow or counter-flow direction, from these inner cells up to the boundary of the systolic array. But thereby we introduce some new calculations, and possibly also some new domain points. This technique is called input/output expansion.

We must prevent the additional calculations taking place in the cells (-2,0) and (3,0) from corrupting the correct value of . For the matrix product, this is quite easy—though the general case is more difficult. The generic sum operator has a neutral element, namely zero. Thus, if we can guarantee that the new calculations only add zero, there will be no harm. All we have to do is always provide at least one zero operand to every spurious multiplication; this can be achieved by filling appropriate input slots with zero items.

Figure 16.7.  Extended input/output scheme, correcting Figure 16.6.



Figure 16.7 shows an example of a properly extended input/output scheme. Preceding and following the items of matrix , the necessary zero items have been filled in. Since the entered zeroes count like data items, the input/output scheme from Figure 16.6 has been retracted again by one place. The calculation now begins already in timestep , but ends as before with timestep 14. Thus we need 16 timesteps altogether.

16.3.6 Coping with stationary variables

Let us come back to the example from Figure 16.1(a). For inputting the items of matrices and , no expansion is required, since these items are always used in boundary cells first. But not so with matrix ! The items of are calculated in stationary variables, hence always in the same cell. Thus most results are produced in inner cells of the systolic array, from where they have to be moved—in a separate action—to boundary cells of the systolic array.

Although this new challenge at first sight appears very similar to the problem from Subsection 16.3.5, and thus very easy to solve, the situation here is in fact completely different. It is not sufficient to extend existing data flows forward or backward up to the boundary of the systolic array. Since for stationary variables the dependence vector is projected to the null vector, which constitutes no extensible direction, there can be no spatial flow induced by this dependency. Possibly, we can construct some auxiliary extraction paths, but usually there are many degrees of freedom. Moreover, we then need a control mechanism inside the cells. For all these reasons, the problem is treated further in Section 16.4.

16.3.7 Interleaving of calculations

As can easily be noticed, the utilisation of the systolic array from Figure 16.3 with the input/output scheme from Figure 16.7 is quite poor. Even without any deeper study of the starting phase and the closing phase, we cannot ignore that the average utilisation of the array is below one third—after all, each cell makes a proper contribution to the calculation in at most every third timestep.

A simple technique to improve this behaviour is to interleave calculations. If we have three independent matrix products, we can successively input their respective data, delayed by only one timestep, without any changes to the systolic array or its cells. Figure 16.8 shows a snapshot of the systolic array, with parts of the corresponding input/output scheme. Now we must check by a formal derivation whether this idea is really working. Therefore, we slightly modify system (16.3). We augment the variables and the domains by a fourth dimension, needed to distinguish the three matrix products:

Figure 16.8.  Interleaved calculation of three matrix products on the systolic array from Figure 16.3.



Obviously, in system (16.32), problems with different values of are not related. Now we must preserve this property in the systolic array. A suitable space-time matrix would be

Notice that is not square here. But for calculating the space coordinates, the fourth dimension of the iteration vector is completely irrelevant, and thus can simply be neutralised by corresponding zero entries in the fourth column of the first and second rows of .

The last row of again constitutes the time vector . Appropriate choice of embeds the three problems to solve into the space-time continuum, avoiding any intersection. Corresponding instances of the iteration vectors of the three problems are projected to the same cell with a respective spacing of one timestep, because the fourth entry of equals 1.

Finally, we calculate the average utilisation—with or without interleaving—for the concrete problem parameters , , and . For a single matrix product, we have to perform calculations, considering a multiplication and a corresponding addition as a compound operation, i.e., counting both together as only one calculation; input/output operations are not counted at all. The systolic array has 36 cells.

Without interleaving, our systolic array altogether takes 16 timesteps for calculating a single matrix product, resulting in an average utilisation of calculations per timestep and cell. When applying the described interleaving technique, the calculation of all three matrix products needs only two timesteps more, i.e., 18 timesteps altogether. But the number of calculations performed thereby has tripled, so we get an average utilisation of the cells amounting to calculations per timestep and cell. Thus, by interleaving, we were able to improve the utilisation of the cells to 267 per cent!
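With the figures stated above, the gain from interleaving can be checked by a one-line computation (writing $W$ for the number of compound multiply-add operations of a single matrix product, which is not reproduced here):

$$
\frac{\;3W/(18\cdot 36)\;}{\;W/(16\cdot 36)\;} \;=\; \frac{3\cdot 16}{18} \;=\; \frac{8}{3} \;\approx\; 2.67 ,
$$

i.e., the utilisation indeed rises to roughly 267 per cent of its original value.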

Exercises

16.3-1 From equation (16.31), formally derive the spatial difference vectors of matrices and for the input/output scheme shown in Figure 16.6.

16.3-2 Augmenting Figure 16.6, draw an extended input/output scheme that forces both operands of all spurious multiplications to zero.

16.3-3 Apply the techniques presented in Section 16.3 to the systolic array from Figure 16.1.

16.3-4 Prove the properties claimed in Subsection 16.3.7 for the special space-time transformation (16.33) with respect to system (16.32).

16.4 Control

Figure 16.9.  Resetting registers via global control. (a) Array structure. (b) Cell structure.



So far we have assumed that each cell of a systolic array behaves in exactly the same way during every timestep. Admittedly, there are some relevant examples of such systolic arrays. In general, however, the cells successively have to work in several operation modes, switched by some control mechanism. In the sequel, we study some typical situations for exerting control.

16.4.1 Cells without control

The cell from Figure 16.3(b) contains the registers A, B, and C, which—when activated by the global clock signal—accept the data applied to their inputs and then reliably reproduce these values at their outputs for one clock cycle. Apart from this system-wide activity, the function calculated by the cell is invariant over all timesteps: a fused multiply-add operation is applied to the three input operands , , and , with the result passed to a neighbouring cell; during the same cycle, the operands and are also forwarded to two other neighbouring cells. So in this case, the cell needs no control at all.

The initial values for the execution of the generic sum operator—which could also be different from zero here—are provided to the systolic array via the input streams, see Figure 16.7; the final results continue to flow in the same direction up to the boundary of the array. Therefore, the input/output activities of the cell from Figure 16.3(b) constitute an intrinsic part of the normal cell function. The price to pay for this extremely simple cell function without any control is a restriction in all three dimensions of the matrices: on a systolic array like that from Figure 16.3, with fixed array parameters , an matrix can only be multiplied by an matrix if the relations , , and hold.
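As a minimal sketch of such an uncontrolled cell (names are my own, not the book's notation), its behaviour during one clock cycle can be written as a plain function: a fused multiply-add on the three incoming operands, with two of them forwarded unchanged.

#include <stdio.h>

/* One uncontrolled systolic cell: every clock cycle it performs the same
 * fused multiply-add and forwards two of its operands to neighbouring
 * cells.  All identifiers are illustrative. */
struct cell_io {
    double a_out;   /* operand A, forwarded unchanged           */
    double b_out;   /* operand B, forwarded unchanged           */
    double c_out;   /* running sum, updated by the multiply-add */
};

static struct cell_io cell_step(double a_in, double b_in, double c_in)
{
    struct cell_io out;
    out.c_out = c_in + a_in * b_in;   /* fused multiply-add          */
    out.a_out = a_in;                 /* pass A to one neighbour     */
    out.b_out = b_in;                 /* pass B to another neighbour */
    return out;
}

int main(void)
{
    /* feeding (a, b, c) = (2, 3, 10) yields c_out = 16 */
    struct cell_io r = cell_step(2.0, 3.0, 10.0);
    printf("a=%g b=%g c=%g\n", r.a_out, r.b_out, r.c_out);
    return 0;
}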

16.4.2 Global control

In this respect, constraints for the array from Figure 16.1 are not so restrictive: though the problem parameters and also are bounded by and , there is no constraint for . Problem parameters unconstrained in spite of fixed array parameters can only emerge in time but not in space, thus mandating the use of stationary variables.

Before a new calculation can start, each register assigned to a stationary variable has to be reset to an initial state independent from the previously performed calculations. For instance, concerning the systolic cell from Figure 16.3(b), this should be the case for register C. By a global signal similar to the clock, register C can be cleared in all cells at the same time, i.e., reset to a zero value. To prevent a corruption of the reset by the current values of A or B, at least one of the registers A or B must be cleared at the same time, too. Figure 16.9 shows an array structure and a cell structure implementing this idea.

16.4.3 Local control

Figure 16.10.  Output scheme with delayed output of results.



Unfortunately, for the matrix product the principle of global control is not sufficient without further measures, since the systolic array presented in Figure 16.1 lacks yet another essential property: the results are not passed to the boundary but stay in the cells.

At first sight, it seems quite simple to forward the results to the boundary: when the calculation of an item is finished, the links from cell to the neighbouring cells and are no longer needed to forward items of the matrices and . These links can be reused then for any other purpose. For example, we could pass all items of through the downward-directed links to the lower border of the systolic array.

But it turns out that passing through results from the upper cells is hampered by ongoing calculations in the lower parts of the array. If the result , finished in timestep , were passed to cell in the next timestep, a conflict would arise between two values: since only one value per timestep can be sent from cell via the lower port, we would be forced to hold back either or , the result currently finished in cell . This effect would propagate down through all the cells below.

To fix the problem, we could slow down the forwarding of the items . If it took two timesteps for to pass a cell, no collisions could occur. The results then stage a procession through the same link, each separated from the next by one timestep. From the lower boundary cell of a column, the host computer first receives the result of the bottom row, then that of the penultimate row; this procedure continues until eventually we see the result of the top row. Thus we get the output scheme shown in Figure 16.10.

How can a cell recognise when to change from forwarding items of matrix to passing items of matrix through the lower port? We can solve this task by an automaton combining global control with local control in the cell:

If we send a global signal to all cells at exactly the moment when the last items of and are input to cell , each cell can start a countdown process: in each successive timestep, we decrement a counter initially set to the number of the remaining calculation steps. Thereby cell still has to perform calculations before changing to propagation mode. Later, the already mentioned global reset signal switches the cell back to calculation mode.

Figure 16.11 presents a systolic array implementing this local/global principle. Basically, the array structure and the communication topology have been preserved. But each cell can run in one of two states now, switched by a control logic:

Figure 16.11.  Combined local/global control. (a) Array structure. (b) Cell structure.


  1. In calculation mode, as before, the result of the addition is written to register C. At the same time, the value in register B—i.e., the operand used for the multiplication—is forwarded through the lower port of the cell.

  2. In propagation mode, registers B and C are connected in series. In this mode, the only function of the cell is to guide each value received at the upper port down to the lower port, thereby enforcing a delay of two timesteps.

The first value output from cell in propagation mode is the currently calculated value , stored in register C. All further output values are results forwarded from cells above. A formal description of the algorithm implemented in Figure 16.11 is given by the assignment-free system (16.34).

It remains to explain how the control signals in a cell are generated in this model. As a prerequisite, the cell must contain a state flip-flop indicating the current operation mode. The output of this flip-flop is connected to the control inputs of both multiplexors, see Figure 16.11(b). The global reset signal clears the state flip-flop, as well as the registers A and C: the cell then works in calculation mode.

The global ready signal starts the countdown in all cells, so in every timestep the counter is diminished by 1. The counter is initially set to the precalculated value , dependent on the position of the cell. When the counter reaches zero, the flip-flop is set: the cell switches to propagation mode.
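A hedged C sketch of this control automaton (the counter value, register names, and signal handling are illustrative assumptions, not the book's notation): the ready signal starts the countdown, reaching zero sets the state flip-flop, and the reset signal clears it again. The small test in main lets a single cell accumulate a three-term dot product and then drain the result through its lower port.

#include <stdbool.h>
#include <stdio.h>

struct cell {
    double A, B, C;     /* data registers                              */
    bool   propagate;   /* state flip-flop: false = calculation mode   */
    bool   counting;    /* has the countdown been started?             */
    int    counter;     /* remaining calculation steps (cell-specific) */
};

struct outputs { double right, down; };

static struct outputs cell_step(struct cell *c, double in_left, double in_top,
                                bool global_ready, bool global_reset)
{
    struct outputs out;
    if (global_reset) {           /* global reset: back to calculation  */
        c->propagate = false;
        c->counting  = false;
        c->A = c->C = 0.0;        /* clear C and one operand register   */
    }
    if (global_ready)
        c->counting = true;       /* start the countdown                */

    if (!c->propagate) {          /* --- calculation mode ---           */
        c->C += c->A * c->B;      /* multiply-add into stationary C     */
        out.right = c->A;         /* forward operands to neighbours     */
        out.down  = c->B;
        c->A = in_left;           /* latch next operands                */
        c->B = in_top;
        if (c->counting && --c->counter == 0)
            c->propagate = true;  /* countdown finished: switch mode    */
    } else {                      /* --- propagation mode ---           */
        out.down  = c->C;         /* B and C in series: two-step delay  */
        out.right = 0.0;          /* right port unused in this mode     */
        c->C = c->B;
        c->B = in_top;            /* result arriving from the cell above */
    }
    return out;
}

int main(void)
{
    struct cell c = { 0.0, 0.0, 0.0, false, false, 4 };
    double a[4] = { 1, 2, 3, 0 }, b[4] = { 4, 5, 6, 0 };  /* 1*4+2*5+3*6 = 32 */
    for (int t = 0; t < 4; ++t)
        cell_step(&c, a[t], b[t], t == 0, false);
    struct outputs o = cell_step(&c, 0, 0, false, false);  /* first propagation step */
    printf("result drained through lower port: %g\n", o.down);  /* prints 32 */
    return 0;
}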

If we refrain from directly resetting register C, the last value passed from register B to register C of a cell before the reset can be used as a freely chosen initial value for the next dot product to be evaluated in the cell. We then even calculate, as already on the systolic array from Figure 16.3, the more general problem

detailed by the following equation system:

16.4.4 Distributed control

The method sketched in Figure 16.11 still has the following drawbacks:

  1. The systolic array uses global control signals, requiring a high technical accuracy.

  2. Each cell needs a counter with counting register, introducing a considerable hardware expense.

  3. The initial value of the counter varies between the cells. Thus, each cell must be individually designed and implemented.

  4. The input data of any successive problem must wait outside the cells until all results from the current problem have left the systolic array.

These disadvantages can be avoided if control signals are propagated like data—meaning distributed control. Therefore, we preserve the connections of the registers B and C with the multiplexors from Figure 16.11(b), but do not generate any control signals in the cells; also, there will be no global reset signal. Instead, a cell receives the necessary control signal from one of its neighbours, stores it in a new one-bit register S, and appropriately forwards it to further neighbouring cells. The primary control signals are generated by the host computer and fed into the systolic array only through boundary cells. Figure 16.12(a) shows the required array structure, Figure 16.12(b) the modified cell structure.

Switching to propagation mode proceeds down a column cell by cell, delayed by one timestep per cell. The delay introduced by register S is therefore sufficient.

Resetting to calculation mode is performed via the same control wire, and thus also happens with a delay of one timestep per cell. But since the results sink down at only half speed, we have to wait sufficiently long before the reset: if a cell is switched to calculation mode in timestep , it goes to propagation mode in timestep , and is reset back to calculation mode in timestep .

So we learned that in a systolic array, distributed control induces a different macroscopic timing behaviour than local/global control. Whereas the systolic array from Figure 16.12 can start the calculation of a new problem (16.35) every timesteps, the systolic array from Figure 16.11 must wait for timesteps. The time difference resp. is called the period, its reciprocal being the throughput.

System (16.37 and 16.38), divided into two parts during typesetting, formally describes the relations between distributed control and the calculations. We thereby assume an infinite, densely packed sequence of matrix product problems, the additional iteration variable being unbounded. The equations headed by variables with an alias describe nothing but pure identity relations.

Figure 16.12.  Matrix product on a rectangular systolic array, with output of results and distributed control. (a) Array structure. (b) Cell structure.


Formula (16.39) shows the corresponding space-time matrix. Note that one entry of is not constant but depends on the problem parameters:

Interestingly, the cells in a row also switch one timestep later for each position moved to the right. Sacrificing some regularity, we could use this circumstance to relieve the host computer by applying control to the systolic array at cell (1,1) only. We would then have to change the control scheme in the following way:

Figure 16.13.  Matrix product on a rectangular systolic array, with output of results and distributed control. (a) Array structure. (b) Cell on the upper border.



Figure 16.13 shows the result of this modification. We now need cells of two kinds: cells on the upper border of the systolic array must be like that in Figure 16.13(b); all other cells would be as before, see Figure 16.13(c). Moreover, the communication topology on the upper border of the systolic array would be slightly different from that in the regular area.

16.4.5 The cell program as a local view

The chosen space-time transformation largely determines the architecture of the systolic array. Mapping recurrence equations to space-time coordinates yields an explicit view of the geometric properties of the systolic array, but gives no real insight into the function of the cells. In contrast, the processes performed inside a cell can be directly expressed by a cell program. This approach is of particular interest when dealing with a programmable systolic array, whose cells are indeed controlled by a repetitive program.

Like the global view, i.e., the structure of the systolic array, the local view given by a cell program is in fact already fixed by the space-time transformation. But this local view is only induced implicitly here, and thus an explicit representation, suitable as a cell program, must be extracted by a further mathematical transformation.

In general, we denote instances of program variables with the aid of index expressions, that refer to iteration variables. Take, for instance, the equation

from system (16.3). The instance of the program variable is specified using the index expressions , , and , which can be regarded as functions of the iteration variables .

As we have noticed, the set of iteration vectors from the quantification becomes a set of space-time coordinates when applying a space-time transformation (16.12) with transformation matrix from (16.14),

Since each cell is denoted by space coordinates , and the cell program must refer to the current time , the iteration variables in the index expressions for the program variables are not suitable, and must be translated into the new coordinates . Therefore, using the inverse of the space-time transformation from (16.41), we express the iteration variables as functions of the space-time coordinates ,

The existence of such an inverse transformation is guaranteed if the space-time transformation is injective on the domain—and so it should always be: otherwise, several instances would have to be calculated by the same cell in the same timestep. In the example, invertibility is guaranteed by the square, non-singular matrix , even without reference to the domain. With respect to the time vector and any projection vector , the property is sufficient.
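To see what such an inverse looks like in concrete terms, here is the computation for the same assumed matrix as in the earlier illustration (again not necessarily the matrix used in the text):

$$
T=\begin{pmatrix}0&1&-1\\1&-1&0\\1&1&1\end{pmatrix},\qquad
T^{-1}=\frac{1}{3}\begin{pmatrix}1&2&1\\1&-1&1\\-2&-1&1\end{pmatrix},
$$

so that, for instance, the first iteration variable becomes $(x+2y+t)/3$ in terms of the space-time coordinates $(x,y,t)$; the fractional coefficients already hint at the unpleasant index expressions mentioned below.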

Replacing iteration variables by space-time coordinates, which might be interpreted as a transformation of the domain, frequently yields very unpleasant index expressions. Here, for example, from we get

But, by a successive transformation of the index sets, we can relabel the instances of the program variables such that the reference to cell and time becomes more evident. In particular, it seems worthwhile to transform the equation system back into output normal form, i.e., to denote the results calculated during timestep in cell by instances of the program variables. We best gain a real understanding of this approach via an abstract mathematical formalism, which we can then fit to our special situation.

Therefore, let

be a quantified equation over a domain , with program variables and . The index functions and generate the instances of the program variables as tuples of index expressions.

By transforming the domain with a function that is injective on , equation (16.43) becomes

where is a function that constitutes an inverse of on . The new index functions are and . Transformations of index sets don't touch the domain; they can be applied to each program variable separately, since only the instances of this program variable are renamed, and in a consistent way. With such renamings and , equation (16.44) becomes

If output normal form is desired, has to be the identity.

In the simplest case (as in our example), is the identity, and is an affine transformation of the form , with constant —the already known dependence vector. can then be represented in the same way, with . Transformation of the domains happens by the space-time transformation , with an invertible matrix . For all index transformations, we choose the same . Thus equation (16.45) becomes

For the generation of a cell program, we have to know the following information for every timestep: the operation to perform, the source of the data, and the destination of the results—known from assembler programs as opc, src, dst.

The operation to perform (opc) follows directly from the function . For a cell with control, we must also find the timesteps at which to perform this individual function . The set of these timesteps, as a function of the space coordinates, can be determined by projecting the set onto the time axis; for general polyhedral domains, for example, with the aid of Fourier–Motzkin elimination.

In system (16.46), we get a new dependence vector , consisting of two components: a (vectorial) spatial part and a (scalar) temporal part. The spatial part , as a difference vector, specifies which neighbouring cell has calculated the operand. We can directly translate this information, concerning the input of operands to cell , into a port specifier with port position , serving as the src operand of the instruction. In the same way, the cell calculating the operand, with position , must write this value to a port with port position , used as the dst operand in the instruction.

The temporal part of specifies, as a time difference , when the calculation of the operand has been performed. If , this information is irrelevant, because the reading cell always gets the output of the immediately preceding timestep from neighbouring cells. However, for , the value must be buffered for timesteps, either by the producer cell , or by the consumer cell —or by both, sharing the burden. This buffering can be realised in the cell program, for example, with copy instructions executed by the producer cell , preserving the value of the operand by passing it through registers until its final output from the cell.

Applying this method to system (16.37 and 16.38), with transformation matrix as in (16.39), yields

The iteration variable l, being relevant only for the input/output scheme, can be set to a fixed value prior to the transformation. The cell program for the systolic array from Figure 16.12, performed once in every timestep, reads as follows:

Cell-Program

  1   
  2   
  3   
  4   
  5   
  6  IF  
  7    THEN  
  8        
  9    ELSE  
 10        

The port specifiers stand for local input/output to/from the cell. For each, a pair of qualifiers is derived from the geometric position of the port relative to the centre of the cell. Port is situated on the left border of the cell, on the right border; is above the centre, below. Each port specifier can be augmented by a bit range: stands for bit 0 of the port only; denotes bits 1 to . The designations without port qualifiers stand for registers of the cell.
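Since the ten lines of the cell program above are not reproduced here, the following C fragment is only a plausible reconstruction of its branch structure, with assumed port and register names; the small test in main shows the control bit entering the column at the top and reaching the second cell one timestep later, just as the distributed control scheme prescribes.

#include <stdio.h>

struct dcell { double A, B, C; int S; };      /* S: one-bit control register */
struct dout  { double right, down; int ctl_down; };

static struct dout dcell_step(struct dcell *c, double in_left, double in_top,
                              int ctl_top)
{
    struct dout out;
    out.right    = c->A;          /* operand A always travels to the right  */
    out.ctl_down = c->S;          /* control bit is forwarded like data     */
    if (c->S == 0) {              /* calculation mode                       */
        out.down = c->B;          /* operand B travels downwards            */
        c->C    += c->A * c->B;   /* accumulate the dot product             */
    } else {                      /* propagation mode                       */
        out.down = c->C;          /* results sink down, B and C in series   */
        c->C     = c->B;
    }
    c->A = in_left;               /* latch operands and control bit         */
    c->B = in_top;                /* arriving in this timestep              */
    c->S = ctl_top;
    return out;
}

int main(void)
{
    struct dcell col[2] = { {0, 0, 0, 0}, {0, 0, 0, 0} };
    for (int t = 0; t < 3; ++t) {
        /* the host injects the "propagate" bit at the top boundary */
        struct dout o0 = dcell_step(&col[0], 0.0, 0.0, 1);
        struct dout o1 = dcell_step(&col[1], 0.0, o0.down, o0.ctl_down);
        (void)o1;
        printf("t=%d  S of cell 0: %d  S of cell 1: %d\n", t, col[0].S, col[1].S);
    }
    return 0;
}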

By application of matrix from (16.13) to system (16.36), we get

Now the advantages of distributed control become obvious. The cell program for (16.47) can be written with reference to the respective timestep only. Thus, we need no reaction to global control signals, no counting register, no counting operations, and no coding of the local cell coordinates.

Exercises

16.4-1 Specify appropriate input/output schemes for performing, on the systolic arrays presented in Figures 16.11 and 16.12, two evaluations of system (16.36) that follow each other closest in time.

16.4-2 How could we change the systolic array from Figure 16.12, to efficiently support the calculation of matrix products with parameters or ?

16.4-3 Write a cell program for the systolic array from Figure 16.3.

16.4-4 What throughput does the systolic array from Figure 16.3 allow for the assumed values of ? What about general ?

16.4-5 Modify the systolic array from Figure 16.1 such that the results stored in stationary variables are output through additional links directed half right down, i.e., from cell to cell . Develop an assignment-free equation system functionally equivalent to system (16.36) that is compatible with the extended structure. What does the resulting input/output scheme look like? Which period is obtained?

16.5 Linear systolic arrays

Figure 16.14.  Bubble sort algorithm on a linear systolic array. (a) Array structure with input/output scheme. (b) Cell structure.



The explanations in the sections above focused heavily on two-dimensional systolic arrays, but in principle they also apply to one-dimensional systolic arrays, called linear systolic arrays in the sequel. The most relevant difference between the two kinds concerns the boundary of the systolic array. Linear systolic arrays can be regarded as consisting of boundary cells only; under this assumption, input from and output to the host computer needs no special attention. However, the geometry of a linear systolic array provides one full dimension as well as one fictitious dimension, and thus communication along the full-dimensional axis may involve similar questions as in Subsection 16.3.5. Finally, the boundary of the linear systolic array can also be defined in a radically different way, namely as consisting of the two end cells only.

16.5.1 Matrix-vector product

If we set one of the problem parameters or to the value 1 for a systolic array like that from Figure 16.1, the matrix product amounts to multiplying a matrix by a vector, from the left or from the right. The two-dimensional systolic array then degenerates to a one-dimensional systolic array. The vector by which to multiply is provided as an input data stream through an end cell of the linear systolic array. The matrix items are input to the array simultaneously, using the complete broadside.

As for the full matrix product, the results emerge in stationary variables. But now they can either be drained along the array to one of the end cells, or sent directly from the producer cells to the host computer. The two methods result in different control mechanisms, time schemes, and running times.

Now, would it be possible to provide all inputs via end cells? The answer is negative if the running time is to be of complexity . Matrix contains items, so there are items per timestep to read. But the number of items receivable through an end cell during one timestep is bounded. Thus, the input/output data rate—of order here—may already constrain the possible design space.
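As a hedged illustration of this input scheme (problem sizes and the exact timing offsets are my own assumptions), the following C simulation feeds the vector through the end cell one item per timestep and applies the matrix items broadside, so that cell i receives the item of column j exactly when the corresponding vector item passes by; the results accumulate in stationary variables, one per cell.

#include <stdio.h>

#define N 3   /* rows, i.e. number of cells            */
#define M 4   /* columns, i.e. length of the vector    */

int main(void)
{
    double A[N][M] = {{1, 2, 3, 4}, {5, 6, 7, 8}, {9, 10, 11, 12}};
    double x[M]    = {1, 1, 1, 1};
    double y[N]    = {0};           /* stationary accumulators, one per cell */
    double pipe[N + 1] = {0};       /* vector items travelling along the array */

    /* timestep loop: x[t] enters cell 0; cell i works on the vector item
       that entered i timesteps earlier, paired with the broadside input
       A[i][t-i] arriving at the same moment. */
    for (int t = 0; t < M + N - 1; ++t) {
        for (int i = N - 1; i >= 0; --i)
            pipe[i + 1] = pipe[i];              /* shift x one cell onward */
        pipe[0] = (t < M) ? x[t] : 0.0;         /* new item from the host  */
        for (int i = 0; i < N; ++i) {
            int j = t - i;                      /* column index seen by cell i */
            if (j >= 0 && j < M)
                y[i] += A[i][j] * pipe[i];      /* multiply-add in cell i  */
        }
    }
    for (int i = 0; i < N; ++i)
        printf("y[%d] = %g\n", i, y[i]);        /* expected: 10, 26, 42 */
    return 0;
}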

16.5.2 Sorting algorithms

For sorting, the task is to bring the elements from a set , subset of a totally ordered basic set , into an ascending order where for . A solution to this problem is described by the following assignment-free equation system, where denotes the maximum in :

By completing a projection along direction to a space-time transformation

we get the linear systolic array from Figure 16.14, as an implementation of the bubble sort algorithm.

Correspondingly, the space-time matrix

would induce another linear systolic array, that implements insertion sort. Eventually, the space-time matrix

would lead to still another linear systolic array, this one for selection sort.

For the sorting problem, we have input items, output items, and timesteps. This results in an input/output data rate of order . In contrast to the matrix-vector product from Subsection 16.5.1, the sorting problem with any prescribed input/output data rate in principle allows the communication to be performed exclusively through the end cells of a linear systolic array.

Note that, in all three variants of sorting described so far, direct input is necessary to all cells: the values to sort for bubble sort, the constant values for insertion sort, and both for selection sort. However, instead of inputting the constants, the cells could generate them, or read them from a local memory.

All three variants require a cell control: insertion sort and selection sort use stationary variables; bubble sort has to switch between the processing of input data and the output of calculated values.
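The following toy C simulation (my own sketch, not a transcription of system (16.49)) demonstrates the compare-and-forward principle shared by these arrays: each value entering the chain is compared against the stationary register of every cell it passes; the cell keeps the larger value and forwards the smaller one. For simplicity the simulation pushes each input through the whole chain before the next one enters, which yields the same final register contents as the pipelined systolic schedule; the cells end up holding the input in descending order.

#include <stdio.h>

#define N 6

int main(void)
{
    double in[N]   = {3.5, 1.0, 4.0, 1.5, 5.0, 2.0};
    double cell[N];                  /* stationary register of each cell    */
    int    used[N] = {0};            /* has the cell captured a value yet?  */

    for (int t = 0; t < N; ++t) {
        double moving = in[t];       /* value currently travelling          */
        for (int i = 0; i < N; ++i) {
            if (!used[i]) { cell[i] = moving; used[i] = 1; break; }
            if (moving > cell[i]) {  /* keep the maximum, forward the rest  */
                double tmp = cell[i];
                cell[i] = moving;
                moving  = tmp;
            }
        }
    }
    for (int i = 0; i < N; ++i)      /* cell 0 ends up with the overall max */
        printf("%g ", cell[i]);
    printf("\n");                    /* prints: 5 4 3.5 2 1.5 1             */
    return 0;
}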

16.5.3 Lower triangular linear equation systems

System (16.53) below describes a localised algorithm for solving the linear equation system , where the matrix is a lower triangular matrix.

All previous examples had in common that, apart from copy operations, the same kind of calculation had to be performed for each domain point: fused multiply-add for the matrix algorithms, minimum and maximum for the sorting algorithms. In contrast, system (16.53) contains some domain points where multiply and subtract is required, as well as some others needing division. When projecting system (16.53) to a linear systolic array, we get fixed or varying cell functions, depending on the chosen projection direction. Peculiarly, when projecting along , we get a single cell with a divider; all other cells need a multiply/subtract unit. Projection along or yields identical cells, each containing a divider as well as a multiply/subtract unit. The projection vector results in a linear systolic array with three different cell types: the two end cells need only a divider; all other cells contain a multiply/subtract unit, alternately with or without a divider. Thus, a certain projection can introduce inhomogeneities into a systolic array—which may or may not be desirable.
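System (16.53) itself is not reproduced above, but its sequential core is ordinary forward substitution. The toy C sketch below (data values are arbitrary) marks which operations correspond to the two kinds of domain points distinguished in the text: multiply/subtract steps and divisions.

#include <stdio.h>

#define N 3

int main(void)
{
    /* Lower triangular system A x = b, solved by forward substitution. */
    double A[N][N] = {{2, 0, 0},
                      {3, 1, 0},
                      {1, 4, 5}};
    double b[N] = {4, 7, 23};
    double x[N];

    for (int i = 0; i < N; ++i) {
        double s = b[i];
        for (int j = 0; j < i; ++j)
            s -= A[i][j] * x[j];     /* multiply/subtract domain points   */
        x[i] = s / A[i][i];          /* division domain point (diagonal)  */
    }
    for (int i = 0; i < N; ++i)
        printf("x[%d] = %g\n", i, x[i]);   /* expected: 2, 1, 3.4 */
    return 0;
}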

Exercises

16.5-1 For both variants of matrix-vector product as in Subsection 16.5.1—output of the results by an end cell versus communication by all cells—specify a suitable array structure with input/output scheme and cell structure, including the necessary control mechanisms.

16.5-2 Study the effects of further projection directions on system (16.53).

16.5-3 Construct systolic arrays implementing insertion sort and selection sort, as mentioned in Subsection 16.5.2. Also draw the corresponding cell structures.

16.5-4 The systolic array for bubble sort from Figure 16.14 could be operated without control by cleverly organising the input streams. Can you find the trick?

16.5-5 What purpose does the value serve in system (16.49)? How could system (16.49) be formulated without this constant value? What consequences would this have for the systolic arrays described?

 PROBLEMS 

16-1 Band matrix algorithms

In Sections 16.1, 16.2, and Subsections 16.5.1, and 16.5.3, we always assumed full input matrices, i.e., each matrix item used could be nonzero in principle. (Though in a lower triangular matrix, items above the main diagonal are all zero. Note, however, that these items are not inputs to any of the algorithms described.)

In contrast, practical problems frequently involve band matrices, cf. Kung/Leiserson [207]. In such a matrix, most diagonals are zero, except for a small band around the main diagonal. Formally, we have for all with or , where and are positive integers. The band width, i.e., the number of diagonals where nonzero items may appear, here amounts to .

Now the question arises whether we could profit from the band structure in one or more input matrices to optimise the systolic calculation. One opportunity would be to delete cells doing no useful work. Other benefits could be shorter input/output data streams, reduced running time, or higher throughput.

Study all systolic arrays presented in this chapter for improvements with respect to these criteria.

 CHAPTER NOTES 

The term systolic array was coined by Kung and Leiserson in their seminal paper [207].

Karp, Miller, and Winograd did some pioneering work [190] for uniform recurrence equations.

Essential stimuli for a theory on the systematic design of systolic arrays have been Rao's PhD dissertation [282] and the work of Quinton [281].

The contribution of Teich and Thiele [319] shows that a formal derivation of the cell control can be achieved by methods very similar to those for a determination of the geometric array structure and the basic cell function.

The up-to-date book by Darte, Robert, and Vivien [79] joins advanced methods from compiler design and systolic array design, dealing also with the analysis of data dependences.

The monograph [358] still seems to be the most comprehensive work on systolic systems.

Each systolic array can also be modelled as a cellular automaton. The registers in a cell together hold the state of the cell. Thus, a factorised state space is adequate. Cells of different kind, for instance with varying cell functionality or position-dependent cell control, can be described with the aid of further components of the state space.

Each systolic algorithm also can be regarded as a PRAM algorithm with the same timing behaviour. Thereby, each register in a systolic cell corresponds to a PRAM memory cell, and vice versa. The EREW PRAM model is sufficient, because in every timestep exactly one systolic cell reads from this register, and then exactly one systolic cell writes to this register.

Each systolic system is also a special kind of synchronous network as defined by Lynch [228]. The time complexity measures agree. Communication complexity is usually not an issue with systolic arrays. The restriction to input/output through boundary cells, frequently demanded for systolic arrays, can also be modelled in a synchronous network. The concept of failures is not required for systolic arrays.

The book written by Sima, Fountain and Kacsuk [304] considers systolic systems in detail.

Part V. DATA BASES

Chapter 17. Memory Management

The main task of computers is to execute programs (usually even several programs running simultaneously). These programs and their data must be in the main memory of the computer during execution.

Since the main memory is usually too small to store all these data and programs, modern computer systems also have secondary storage for the temporary storage of data and programs.

In this chapter the basic algorithms of memory management will be covered. In Section 17.1 static and dynamic partitioning will be discussed, and in Section 17.2 the most popular paging methods.

In Section 17.3 the most famous anomalies in the history of operating systems—the stunning features of the FIFO page replacement algorithm, interleaved memory, and list processing algorithms—will be analysed.

Finally, Section 17.4 discusses optimal and approximation algorithms for the optimisation problem in which files of given sizes have to be stored on the smallest possible number of disks.

17.1 Partitioning

A simple way of sharing the memory between programs is to divide the whole address space into slices, and to assign such a slice to every process. These slices are called partitions. The solution does not require any special hardware support; the only thing needed is that programs should be ready to be loaded at different memory addresses, i.e., they should be relocatable. This must be required since it cannot be guaranteed that a program always gets into the same partition, because the total size of the executable programs is usually much larger than the size of the whole memory. Furthermore, we cannot determine in advance which programs can run simultaneously and which cannot, for processes are generally independent of each other, and in many cases their owners are different users. Therefore, it is also possible that the same program is executed by different users at the same time, and different instances work with different data, which can therefore not be stored in the same part of the memory. Relocation can be easily performed if the linker works not with absolute but with relative memory addresses, which means it does not use exact addresses in the memory but a base address and an offset. This method is called base addressing, where the initial address is stored in the so-called base register. Most processors support this addressing method; therefore, the program will not be slower than when using absolute addresses. By using base addressing it can also be ensured that—due to an error or the intentional behaviour of a user—the program does not read or modify the data of other programs stored at lower addresses of the memory. If the solution is extended by another register, the so-called limit register, which stores the largest allowed offset, i.e., the size of the partition, then it can also be assured that the program cannot access other programs stored at higher memory addresses.
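A minimal C sketch of the base/limit addressing just described (register names, the rejection handling, and the concrete numbers are illustrative assumptions, not an actual MMU interface):

#include <stdio.h>
#include <stdbool.h>

/* The base register holds the start address of the partition, the limit
 * register its size; every program-relative address is checked against
 * the limit and then shifted by the base. */
struct partition_regs {
    size_t base;    /* base register  */
    size_t limit;   /* limit register */
};

static bool translate(const struct partition_regs *r, size_t offset,
                      size_t *absolute)
{
    if (offset >= r->limit)
        return false;            /* access outside the own partition */
    *absolute = r->base + offset;
    return true;
}

int main(void)
{
    struct partition_regs p = { 0x40000, 0x10000 };  /* 64 KiB partition */
    size_t abs_addr;

    if (translate(&p, 0x0123, &abs_addr))
        printf("offset 0x0123 -> absolute 0x%zx\n", abs_addr);
    if (!translate(&p, 0x20000, &abs_addr))
        printf("offset 0x20000 rejected: outside the partition\n");
    return 0;
}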

Partitioning was often used in mainframe operating systems in the past. Most modern operating systems, however, use virtual memory management, which requires special hardware support.

Partitioning as a memory-sharing method is not only applicable in operating systems. When writing a program in a language close to machine code, it can happen that different data structures of variable size—which are created and destroyed dynamically—have to be placed into a contiguous memory space. These data structures are similar to processes, with the exception that security problems like addressing outside their own area do not have to be dealt with. Therefore, most of the algorithms listed below can, with some minor modifications, be useful for application development as well.

Basically, there are two ways of dividing the address space into partitions. One of them divides the initially empty memory area into slices, the number and size of which are predetermined at the beginning, and tries to place the processes and other data structures contiguously into them, and to remove them from the partitions when they are not needed any more. These are called fixed partitions, since both their place and their size have been fixed previously, when starting the operating system or the application. The other method is to allocate slices from the free parts of the memory to newly created processes and data structures on demand, and to deallocate the slices again when those end. This solution is called dynamic partitioning, since partitions are created and destroyed dynamically. Both methods have advantages as well as disadvantages, and their implementations require totally different algorithms. These will be discussed in the following.

17.1.1 Fixed partitions

Using fixed partitions the division of the address space is fixed at the beginning, and cannot be changed later while the system is up. In the case of operating systems the operator defines the partition table which is activated at next reboot. Before execution of the first application, the address space is already partitioned. In the case of applications partitioning has to be done before creation of the first data structure in the designated memory space. After that data structures of different sizes can be placed into these partitions.

In the following we examine only the case of operating systems, and leave it to the Reader to adapt the problem and the algorithms to given applications, since these can differ significantly depending on the kind of application.

The partitioning of the address space must be done after examining the sizes and the number of the processes that may run on the system. Obviously, there is a maximum size, and programs exceeding it cannot be executed. The size of the largest partition corresponds to this maximum size. To reach an optimal partitioning, statistical surveys often have to be carried out, and the sizes of the partitions have to be modified according to these statistics before the system is restarted the next time. We do not discuss the implementation of this solution now.

Since there are a constant number () of partitions, their data can be stored in one or more arrays of constant length. We do not deal with the particular place of the partitions at this level of abstraction either; we suppose that they are stored in a constant array as well. When placing a process in a partition, we store the index of that partition in the process header instead of its starting address. Concrete implementations can of course differ from this method. The sizes of the partitions are stored in array . Our processes are numbered from to . The array keeps track of the processes executed in the individual partitions, while its inverse, array , stores the places where the individual processes are executed. A process is either running or waiting for a partition. This information is stored in the Boolean array : if process number is waiting, then TRUE, else FALSE. The space requirements of the processes are different. Array stores the minimum sizes of the partitions required to execute the individual processes.

Having partitions of different sizes and processes with different space requirements, we obviously would not like small processes to be placed into large partitions while smaller partitions, into which larger processes do not fit, remain empty. Therefore, our goal is to assign to each partition a process fitting into it in such a way that there is no larger waiting process that would fit into it as well. This is ensured by the following algorithm:

Largest-Fit()

  1  FOR  TO  
  2    DO IF  
  3       THEN Load-Largest() 

Finding the largest process whose space requirement is not larger than a particular size is a simple conditional maximum search. If we cannot find any process meeting the requirements, we must leave the partition empty.

Load-Largest()

  1   
  2   
  3  FOR  TO  
  4    DO IF  and  and  
  5       THEN  
  6           
  7  IF  
  8    THEN  
  9        
 10       FALSE 

The basic criterion of correctness for all the algorithms loading processes into the partitions is that they should not load a process into a partition into which it does not fit. This requirement is fulfilled by the above algorithm, since it can be derived from the conditional maximum search theorem with exactly the mentioned condition.

Another essential criterion is that it should not load more than one process into the same partition, and also should not load a single process into several partitions simultaneously. The first case can be excluded, because we call the Load-Largest algorithm only for the partitions for which , and if we load a process into partition number , then we assign the index of the loaded process as a value, which is a positive integer. The second case can be proved similarly: the condition of the conditional maximum search excludes the processes for which FALSE, and if the process number is loaded into one of the partitions, then the value of is set to FALSE.

However, the fact that the algorithm does not load a process into a partition where it does not fit, does not load more than one process into the same partition, and does not load a single process into several partitions simultaneously is insufficient. These requirements are fulfilled even by an empty algorithm. Therefore, we have to require something more: namely, that it should not leave a partition empty if there is a process that would fit into it. To ensure this, we need an invariant which holds during the whole loop and at the end of the loop implies our new requirement. Let this invariant be the following: after the examination of partitions, there is no positive for which , and for which there is a positive such that TRUE and .

  • Initialisation: At the beginning of the algorithm we have examined partitions, so there is not any positive .

  • Maintenance: If the invariant holds for at the beginning of the loop, first we have to check whether it holds for the same at the end of the loop as well. It is obvious, since the first partitions are not modified when examining the -th one, and for the processes they contain FALSE, which does not satisfy the condition of the conditional maximum search in the Load-Largest algorithm. The invariant holds for the -th partition at the end of the loop as well, because if there is a process which fulfills the condition, the conditional maximum search certainly finds it, since the condition of our conditional maximum search corresponds to the requirement of our invariant set on each partition.

  • Termination: Since the loop traverses a fixed interval in steps of one, it certainly terminates. Since the loop body is executed exactly as many times as the number of partitions, after the end of the loop there is no positive for which , and for which there is a positive such that TRUE and , which means that we did not fail to fill any partition that could be assigned to a process fitting into it.

The loop in rows 1–3 of the Largest-Fit algorithm is always executed in its entirety, so the loop body is executed times. The loop body performs a conditional maximum search on the empty partitions, i.e., on the partitions for which . Since the condition in row 4 of the Load-Largest algorithm has to be evaluated for each , the conditional maximum search runs in . Although the loading algorithm is not called for partitions for which , as far as the running time is concerned, in the worst case all the partitions might be empty; therefore the time complexity of our algorithm is .
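The following C sketch mirrors the strategy described above; the array names merely follow the prose (size, space, waiting, part, place), the concrete values are arbitrary, and an empty partition is marked here by -1.

#include <stdio.h>
#include <stdbool.h>

#define M 3          /* number of partitions */
#define N 5          /* number of processes  */

int  size[M]    = { 40, 100, 60 };          /* partition sizes            */
int  space[N]   = { 90, 30, 55, 70, 10 };   /* process space requirements */
bool waiting[N] = { true, true, true, true, true };
int  part[M];                               /* process loaded in partition (-1: empty) */
int  place[N];                              /* partition of each process   (-1: none)  */

/* Conditional maximum search: load into partition p the largest waiting
 * process that fits, if any. */
static void load_largest(int p)
{
    int best = -1;
    for (int j = 0; j < N; ++j)
        if (waiting[j] && space[j] <= size[p] &&
            (best == -1 || space[j] > space[best]))
            best = j;
    if (best != -1) {
        part[p] = best;
        place[best] = p;
        waiting[best] = false;
    }
}

int main(void)
{
    for (int p = 0; p < M; ++p) part[p]  = -1;
    for (int j = 0; j < N; ++j) place[j] = -1;

    for (int p = 0; p < M; ++p)      /* Largest-Fit over all empty partitions */
        if (part[p] == -1)
            load_largest(p);

    for (int p = 0; p < M; ++p)
        printf("partition %d (size %d): process %d\n", p, size[p], part[p]);
    return 0;
}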

Unfortunately, the fact that the algorithm fills all the empty partitions with waiting processes fitting into them whenever possible is not always sufficient. A very common requirement is that the execution of every process should be started within a certain time limit. The above algorithm does not ensure this, even if there is an upper limit on the execution time of the processes. The problem is that whenever the algorithm is executed, there might always be new processes that prevent those that have been waiting for a long time from being executed. This is shown in the following example.

Example 17.1 Suppose that we have two partitions with sizes of 5 kB and 10 kB. We also have two processes with space requirements of 8 kB and 9 kB. The execution time of both processes is 2 seconds. But at the end of the first second a new process appears with space requirement of 9 kB and execution time of 2 seconds again, and the same happens in every 2 seconds, i. e., in the third, fifth, etc. second. If we have a look at our algorithm, we can see that it always has to choose between two processes, and the one with space requirement of 9 kB will always be the winner. The other one with 8 kB will never get into the memory, although there is no other partition into which it would fit.

To fulfil this new requirement, we have to modify our algorithm slightly: processes that have been waiting for a long time must be preferred over all the other processes, even if their space requirement is smaller than that of the others. Our new algorithm processes all the partitions, just like the previous one.

Largest-or-Long-Waiting-Fit()

  1  FOR  TO  
  2    DO IF  
  3       THEN Load-Largest-or-Long-Waiting() 

However, this time we keep track of the waiting time of each process. Since the algorithm is only executed when one or more partitions become free, we cannot examine the actual time, but rather the number of cases in which the process would have fit into a partition but we chose another process to fill it. To implement this, the conditional maximum search algorithm has to be modified: an operation has to be performed also on items that meet the requirement (they are waiting for memory and they would fit) but are not the largest ones among those. This operation is a simple increment of a counter. We assume that the value of the counter is 0 when the process starts. The condition of the search has to be modified as well: if the value of the counter of a process is too high (i.e., higher than a certain ), and it is higher than the value of the counter of the process with the largest space requirement found so far, then we replace the latter with this new process. The pseudocode of the algorithm is the following:

Load-Largest-or-Long-Waiting()

  1   
  2   
  3  FOR  TO  
  4    DO IF  and  
  5       THEN IF ( and ) or 
                
  6          THEN  
  7              
  8              
  9          ELSE  
 10  IF  
 11    THEN  
 12        
 13       FALSE 

The fact that the algorithm does not place multiple processes into the same partition can be proved in the same way as for the previous algorithm, since the outer loop and the condition of the branch have not been changed. To prove the other two criteria (namely that a process will be placed neither into more than one partition, nor into a partition into which it does not fit), we have to see that the condition of the conditional maximum search algorithm has been modified in a way that preserves this property. It is easy to see that the condition has been split into two parts, the first of which corresponds exactly to our requirement, and if it is not satisfied, the algorithm certainly does not place the process into the partition. The property that no partitions are left empty also remains, since the condition for choosing a process has not been restricted, but extended. Therefore, if the previous algorithm found all the processes that met the requirements, the new one finds them as well. Only the order of the processes fulfilling the criteria has been altered. The time complexity of the loops has not changed either, nor has the condition under which the inner loop has to be executed. So the time complexity of the algorithm is the same as in the original case.
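Extending the toy arrays of the earlier Largest-Fit sketch, a hedged version of the modified conditional maximum search might look as follows; the threshold MAXWAIT and the exact tie-breaking rule are my own assumptions, since the book's pseudocode is not reproduced above. A fitting but passed-over process has its counter incremented, and a process whose counter exceeds the threshold is preferred over a merely larger one.

#define MAXWAIT 3

int counter[N];   /* how often each process was passed over (starts at 0) */

static void load_largest_or_long_waiting(int p)
{
    int best = -1;
    for (int j = 0; j < N; ++j) {
        if (!waiting[j] || space[j] > size[p])
            continue;                          /* not waiting or does not fit */
        if (best == -1
            || (counter[j] > MAXWAIT && counter[j] > counter[best])
            || (counter[best] <= MAXWAIT && space[j] > space[best])) {
            if (best != -1)
                counter[best]++;               /* passed over: age the old best */
            best = j;
        } else {
            counter[j]++;                      /* passed over: age this process */
        }
    }
    if (best != -1) {
        part[p] = best;
        place[best] = p;
        waiting[best] = false;
        counter[best] = 0;                     /* loaded: reset its counter */
    }
}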

We have to examine whether the algorithm satisfies the condition that a process can wait for memory only for a given time, if we suppose that there is some upper limit on the execution time of the processes (otherwise the problem is insoluble, since all the partitions might be taken by infinite loops). Furthermore, let us suppose that the system is not overloaded, i.e., we can find an upper estimate for the number of waiting processes at every instant of time. Knowing both limits, it is easy to see that in the worst case, to get assigned to a given partition, a process has to wait for the processes with higher counters than its own (at most many), and for at most many processes larger than itself. Therefore, it is indeed possible to give an upper limit for the maximum waiting time for memory in the worst case: it is .

Example 17.2 In our previous example the process with the space requirement of 8 kB has to wait for other processes, each of which lasts for 2 seconds, i.e., the process with the space requirement of 8 kB has to wait exactly 2k seconds to get into the partition of size 10 kB.

In our algorithms so far, the absolute space requirement of the processes served as the basis of their priorities. However, this method is not fair: if there is a partition into which two processes would fit, and neither of them fits into a smaller partition, then the difference in their sizes does not matter, since sooner or later the smaller one, too, has to be placed into the same partition, or into another one that is not smaller. Therefore, instead of the absolute space requirement, the size of the smallest partition into which the given process fits should be taken into consideration when determining the priorities. Furthermore, if the partitions are ordered increasingly according to their sizes, then the index of the smallest suitable partition in this ordered list can serve as the priority of the process. It is called the rank of the process. The following algorithm calculates the ranks of all the processes.

Calculate-Rank()

  1  Sort() 
  2  FOR  TO  
  3    DO  
  4        
  5        
  6       WHILE  or  
  7          DO IF  
  8             THEN  
  9             ELSE  
 10           

It is easy to see that this algorithm first orders the partitions increasingly according to their sizes, and then calculates the rank of each process. However, this has to be done only at the beginning, or when a new process arrives. In the latter case the inner loop has to be executed only for the new processes. The ordering of the partitions does not have to be performed again, since the partitions do not change. The only thing that must be calculated is the smallest partition the process fits into. This can be solved by logarithmic search, an algorithm whose correctness has already been proved. The time complexity of the rank calculation is easy to determine: the ordering of the partitions takes steps, while the logarithmic search takes , and has to be executed for processes. Therefore the total number of steps is .
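A hedged C sketch of this rank calculation (identifiers and values are illustrative): the partition sizes are sorted once, and the rank of each process is then found by a logarithmic (binary) search for the smallest partition it fits into.

#include <stdio.h>
#include <stdlib.h>

#define M 4          /* number of partitions */
#define N 5          /* number of processes  */

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    int size[M]  = { 100, 40, 60, 80 };          /* partition sizes      */
    int space[N] = { 55, 10, 90, 70, 200 };      /* process requirements */
    int rank[N];

    qsort(size, M, sizeof size[0], cmp_int);     /* sorted: 40 60 80 100 */

    for (int j = 0; j < N; ++j) {
        int lo = 0, hi = M;                      /* logarithmic search in size[lo..hi) */
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (size[mid] >= space[j])
                hi = mid;                        /* mid still fits: search left  */
            else
                lo = mid + 1;                    /* too small: search right      */
        }
        rank[j] = lo;                            /* index of the smallest fitting
                                                    partition; M if none fits    */
        printf("process %d (space %d): rank %d\n", j, space[j], rank[j]);
    }
    return 0;
}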

After calculating the ranks we have to do the same as before, but for ranks instead of space requirements.

Long-Waiting-or-Not-Fit-Smaller()

  1  FOR  TO  
  2    DO IF