Preface

Due to the development of hardware technologies (such as VLSI) in the early 1980s, interest in parallel and distributed computing grew rapidly, and in the late 1980s the study of parallel algorithms and architectures became one of the main topics in computer science. To bring the topic to educators and students, several books on parallel computing were written. The influential textbook “Introduction to Parallel Algorithms and Architectures” by F. Thomson Leighton in 1992 was one of the milestones in the development of parallel architectures and parallel algorithms. But in the last decade or so the main interest in parallel and distributed computing has moved from the design of parallel algorithms and expensive parallel computers to the new distributed reality: the world of interconnected computers that cooperate (often asynchronously) in order to solve different tasks. Communication has become one of the most frequently used terms of computer science for the following reasons:

(i) Given the high performance of current computers, communication is often more time-consuming than the computing time of the processors. As a result, the capacity of communication channels is the bottleneck in the execution of many distributed algorithms.

(ii) Many tasks on the Internet are pure communication tasks. We do not want to compute anything; we only want to execute some information exchange or to extract some information as quickly and as cheaply as possible. Moreover, we do not have a central database containing all basic knowledge. Instead, we have a distributed memory where the basic knowledge is distributed among the local memories of a large number of different computers.

The growing importance of solving pure communication tasks in the interconnected world is the main motivation for writing this book.

Theoretical Computer Science

by Juraj Hromkovic

Published 18 September 2003

The aim of this textbook is not only to provide an elegant route through the theoretical fundamentals of computer science, but also to show that theoretical computer science is a fascinating discipline, full of spectacular contributions and miracles, deep in its research, and yet directly applicable. Thus, we aim to excite people about its study. To achieve these goals we do not hesitate to devote considerable space to motivation, and especially to the informal development of crucial ideas and concepts, followed by their transparent yet rigorous presentation.

An additional aim is to present the development of the computer scientist's way of thinking, so we do not restrict this book to classical areas such as computability and automata theory. We also present fundamental concepts such as approximation and randomization in algorithmics, and we explain the basic ideas of cryptography and interconnection network design.


Algorithmic design, especially for hard problems, is more essential for solving them successfully than any standard improvement of current computer technologies. Because of this, the design of algorithms for solving hard problems is the core of current algorithmic research, from the theoretical point of view as well as from the practical point of view. There are many general textbooks on algorithmics, and several specialized books devoted to particular approaches such as local search, randomization, approximation algorithms, or heuristics. But there is no textbook that focuses on the design of algorithms for hard computing tasks and that systematically explains, combines, and compares the main possibilities for attacking hard algorithmic problems. As this topic is fundamental for computer science, this book tries to close the gap.

Another motivation, and probably the main reason for writing this book, is connected to education. The considered area has developed very dynamically in recent years, and research on this topic has produced several profound results, new concepts, and new methods. Some of these contributions are so fundamental that one can speak of paradigms that should be included in the education of every computer science student. Unfortunately, this is very far from reality, because these paradigms are not sufficiently known in the computer science community, and so they are insufficiently communicated to students and practitioners.

The communication complexity of two-party protocols is a complexity measure only about 15 years old, but it is already considered one of the fundamental complexity measures of recent complexity theory. Similarly to Kolmogorov complexity in the theory of sequential computations, communication complexity is used as a method for studying the complexity of concrete computing problems in parallel information processing. In particular, it is applied to prove lower bounds that say which computer resources (time, hardware, memory size) are necessary to compute a given task. Besides estimating the computational difficulty of computing problems, the proved lower bounds are useful for proving the optimality of algorithms that have already been designed. In some cases, knowledge about the communication complexity of a given problem may even be helpful in the search for efficient algorithms for that problem.

The study of communication complexity has become a well-defined independent area of complexity theory. In addition to a strong relation to several fundamental complexity measures (and so to several fundamental problems of complexity theory), communication complexity has contributed to the study and understanding of the nature of determinism, nondeterminism, and randomness in algorithmics. There already exists non-trivial mathematical machinery for handling the communication complexity of concrete computing problems, which gives hope that the approach based on communication complexity will be instrumental in the study of several central open problems of recent complexity theory.
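
To make the measure concrete, here is a minimal sketch (not from the book; all names are illustrative) of the classic two-party EQUALITY problem: Alice holds an n-bit string x, Bob holds an n-bit string y, and they must decide whether x = y while exchanging as few bits as possible. Deterministically, any protocol must exchange n bits in the worst case, while the well-known randomized fingerprinting protocol needs only O(log n) bits at the cost of a small error probability, illustrating the gap between determinism and randomness mentioned above.

```python
import random

def random_prime(bound: int) -> int:
    """Rejection-sample a prime below `bound` (naive trial division)."""
    while True:
        candidate = random.randrange(2, bound)
        if all(candidate % d != 0 for d in range(2, int(candidate ** 0.5) + 1)):
            return candidate

def equality_protocol(x: int, y: int, n: int) -> bool:
    """Randomized fingerprint protocol for EQUALITY on n-bit inputs.

    Alice sends Bob a random prime p and her fingerprint x mod p:
    roughly O(log n) bits in total instead of all n bits of x.
    If x == y the protocol always accepts; if x != y it errs only
    when p happens to divide x - y, which is unlikely because the
    difference (below 2**n) has at most n distinct prime factors.
    """
    p = random_prime(max(4, 2 * n * n))  # Alice picks a small random prime
    return (x % p) == (y % p)            # Bob compares the fingerprints

if __name__ == "__main__":
    n = 64
    x = random.getrandbits(n)
    print(equality_protocol(x, x, n))      # True: equal inputs, no error
    print(equality_protocol(x, x ^ 1, n))  # False with high probability
```

Note the design choice that gives the saving: instead of revealing her whole input, Alice commits to a short random hash of it, so the number of transmitted bits depends on the size of the prime (logarithmic in n), not on n itself.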