Book 83

Source Coding Theory

by Robert M. Gray

Published 31 October 1989
Source coding theory has as its goal the characterization of the optimal performance achievable in idealized communication systems that must code an information source for transmission over a digital communication or storage channel to a user. The user must decode the information into a form that is a good approximation to the original. A code is optimal within some class if it achieves the best possible fidelity given whatever constraints are imposed on the code by the available channel. In theory, the primary constraint imposed on a code by the channel is its rate or resolution, the number of bits per second or per input symbol that it can transmit from sender to receiver. In the real world, complexity may be as important as rate. The origins and the basic form of much of the theory date from Shannon's classical development of noiseless source coding and source coding subject to a fidelity criterion (also called rate-distortion theory) [73], [74]. Shannon combined a probabilistic notion of information with limit theorems from ergodic theory and a random coding technique to describe the optimal performance of systems with a constrained rate but with unconstrained complexity and delay. An alternative approach, called asymptotic or high-rate quantization theory and based on different techniques and approximations, was introduced by Bennett at approximately the same time [4]. This approach constrained the delay but allowed the rate to grow large.
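As a point of reference for the rate-constrained setting described above (stated in standard notation, not necessarily the book's), Shannon's rate-distortion function gives the smallest rate, in bits per source symbol, at which a source X can be reproduced within average distortion D:

\[
  R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\; \mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X}),
\]

where I(X;\hat{X}) is the mutual information between the source and its reproduction, d is the fidelity criterion, and the minimum is taken over all conditional distributions of the reproduction given the source.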

Book 159

Herb Caen, a popular columnist for the San Francisco Chronicle, recently quoted a Voice of America press release as saying that it was reorganizing in order to "eliminate duplication and redundancy." This quote both states a goal of data compression and illustrates its common need: the removal of duplication (or redundancy) can provide a more efficient representation of data, and the quoted phrase is itself a candidate for such surgery. Not only can the number of words in the quote be reduced without losing information, but the statement would actually be enhanced by such compression, since it would no longer exemplify the wrong that the policy is supposed to correct. Here compression can streamline the phrase and minimize the embarrassment while improving the English style. Compression in general is intended to provide efficient representations of data while preserving the essential information contained in the data. This book is devoted to the theory and practice of signal compression, i.e., data compression applied to signals such as speech, audio, images, and video signals (excluding other data types such as financial data or general-purpose computer data). The emphasis is on the conversion of analog waveforms into efficient digital representations and on the compression of digital information into the fewest possible bits. Both operations should yield the highest possible reconstruction fidelity subject to constraints on the bit rate and implementation complexity.

Book 322

The Fourier transform is one of the most important mathematical tools in a wide variety of fields in science and engineering. In the abstract it can be viewed as the transformation of a signal in one domain (typically time or space) into another domain, the frequency domain. Applications of Fourier transforms, often called Fourier analysis or harmonic analysis, provide useful decompositions of signals into fundamental or "primitive" components, provide shortcuts to the computation of complicated sums and integrals, and often reveal hidden structure in data. Fourier analysis lies at the base of many theories of science and plays a fundamental role in practical engineering design. The origins of Fourier analysis in science can be found in Ptolemy's decomposing celestial orbits into cycles and epicycles and Pythagoras' decomposing music into consonances. Its modern history began with the eighteenth-century work of Bernoulli, Euler, and Gauss on what later came to be known as Fourier series. J. Fourier, in his 1822 Théorie analytique de la chaleur [16] (still available as a Dover reprint), was the first to claim that arbitrary periodic functions could be expanded in a trigonometric (later called a Fourier) series, a claim that was eventually shown to be incorrect, although not too far from the truth. It is an amusing historical sidelight that this work won a prize from the French Academy, in spite of serious concerns expressed by the judges (Laplace, Lagrange, and Legendre) regarding Fourier's lack of rigor.
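For concreteness, under one common normalization convention (not necessarily the one adopted in the book), the Fourier transform pair for an integrable signal f and the Fourier series of a function with period T can be written as

\[
  \hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt,
  \qquad
  f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{f}(\omega)\, e^{i\omega t}\, d\omega,
\]

\[
  f(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{i 2\pi n t / T},
  \qquad
  c_n = \frac{1}{T} \int_{0}^{T} f(t)\, e^{-i 2\pi n t / T}\, dt .
\]

The first pair expresses the passage from the time domain to the frequency domain and back; the second is the trigonometric expansion of a periodic function that Fourier claimed in 1822.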

Book 571

In the current age of information technology, the issues of distributing and utilizing images efficiently and effectively are of substantial concern. Solutions to many of the problems arising from these issues are provided by techniques of image processing, among which segmentation and compression are topics of this book.
Image segmentation is a process for dividing an image into its constituent parts. For block-based segmentation using statistical classification, an image is divided into blocks and a feature vector is formed for each block by grouping statistics of its pixel intensities. Conventional block-based segmentation algorithms classify each block separately, assuming independence of feature vectors.
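A minimal sketch of the conventional block-based approach just described, written in Python with NumPy (illustrative only; the block size, the choice of intensity statistics, and the nearest-mean classifier are assumptions, not the book's algorithm):

import numpy as np

def block_features(image, block_size=8):
    # Split a grayscale image into non-overlapping blocks and form a
    # small feature vector per block from simple intensity statistics.
    h, w = image.shape
    hb, wb = h // block_size, w // block_size
    features = np.empty((hb, wb, 2))
    for i in range(hb):
        for j in range(wb):
            block = image[i * block_size:(i + 1) * block_size,
                          j * block_size:(j + 1) * block_size]
            features[i, j] = (block.mean(), block.std())
    return features

def classify_blocks_independently(features, class_means):
    # The conventional scheme: each block is assigned to the class with
    # the nearest mean feature vector, ignoring its neighbors.
    dists = np.linalg.norm(features[..., None, :] - class_means, axis=-1)
    return dists.argmin(axis=-1)

The book's contribution, described next, replaces the last step, which treats the feature vectors as independent, with a model of the statistical dependence among neighboring blocks.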
Image Segmentation and Compression Using Hidden Markov Models presents a new algorithm that models the statistical dependence among image blocks by two-dimensional hidden Markov models (HMMs). Formulas for estimating the model according to the maximum likelihood criterion are derived from the EM algorithm. To segment an image, optimal classes are searched jointly for all the blocks by the maximum a posteriori (MAP) rule. The 2-D HMM is extended to multiresolution so that more context information is exploited in classification and fast progressive segmentation schemes can be formed naturally.
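In standard notation (illustrative, not the book's), writing u for the collection of feature vectors of all blocks in an image and c for their class labels, the MAP rule mentioned above selects the jointly most probable labeling:

\[
  \hat{c} \;=\; \arg\max_{c}\, P(c \mid u)
        \;=\; \arg\max_{c}\, P(u \mid c)\, P(c),
\]

where the likelihood P(u | c) and the prior P(c) over label configurations are both supplied by the estimated 2-D HMM, so that the labels of all blocks are searched jointly rather than block by block.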
The second issue addressed in the book is the design of joint compression and classification systems using the 2-D HMM and vector quantization. A classifier designed with the side goal of good compression often outperforms one aimed solely at classification because overfitting to training data is suppressed by vector quantization.
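For readers unfamiliar with vector quantization, a minimal encoder/decoder sketch in Python (illustrative only; codebook design and the joint classification objective discussed in the book are not shown):

import numpy as np

def vq_encode(vectors, codebook):
    # vectors: (N, d) array of feature or signal vectors.
    # codebook: (K, d) array of codewords.
    # Each vector is represented by the index of its nearest codeword,
    # so a d-dimensional block is transmitted as a single small integer.
    dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def vq_decode(indices, codebook):
    # Reconstruction is a table lookup of the transmitted indices.
    return codebook[indices]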
Image Segmentation and Compression Using Hidden Markov Models is an essential reference source for researchers and engineers working in statistical signal processing or image processing, especially those who are interested in hidden Markov models. It is also of value to those working on statistical modeling.