Monte Carlo simulation has become one of the most important tools in all fields of science. Simulation methodology relies on a good source of numbers that appear to be random. These "pseudorandom" numbers must pass statistical tests just as random samples would. Methods for producing pseudorandom numbers and transforming those numbers to simulate samples from various distributions are among the most important topics in statistical computing.
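To make the idea concrete, here is a minimal sketch (not taken from the book) of the two steps the paragraph describes: producing uniform pseudorandom numbers with a simple linear congruential generator, here using the classic Park–Miller "minimal standard" constants, and then transforming them by the inverse-CDF method into samples from a nonuniform distribution. The parameter choices are illustrative only.

```python
import math

def lcg(seed, m=2**31 - 1, a=16807, c=0):
    """Yield uniform pseudorandom numbers on (0, 1) from a linear congruential generator."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

def exponential_from_uniform(u, rate=1.0):
    """Inverse-transform method: if U ~ Uniform(0,1), then -ln(1-U)/rate ~ Exponential(rate)."""
    return -math.log(1.0 - u) / rate

gen = lcg(seed=12345)
sample = [exponential_from_uniform(next(gen)) for _ in range(5)]
print(sample)
```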

This book surveys techniques of random number generation and the use of random numbers in Monte Carlo simulation. The book covers basic principles as well as newer methods such as parallel random number generation, nonlinear congruential generators, quasi-Monte Carlo methods, and Markov chain Monte Carlo. The best methods for generating random variates from the standard distributions are presented, and general techniques useful in more complicated models and in novel settings are also described. The emphasis throughout the book is on practical methods that work well in current computing environments.
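As one classical illustration of generating variates from a standard distribution (not necessarily among the "best" methods the book recommends), the Box–Muller transform converts a pair of independent uniform deviates into a pair of independent standard normal variates:

```python
import math
import random

def box_muller(u1, u2):
    """Box-Muller transform: two Uniform(0,1) deviates to two independent N(0,1) variates."""
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

u1 = 1.0 - random.random()   # shift into (0, 1] so that log(u1) is defined
u2 = random.random()
z1, z2 = box_muller(u1, u2)
print(z1, z2)
```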

The book includes exercises and can be used as a primary or supplementary text for various courses in modern statistics. It could serve as the primary text for a specialized course in statistical computing, or as a supplementary text for a course in computational statistics or other areas of modern statistics that rely on simulation. The book, which covers recent developments in the field, could also serve as a useful reference for practitioners. Although some familiarity with probability and statistics is assumed, the book is accessible to a broad audience.

The second edition is approximately 50% longer than the first edition. It includes advances in methods for parallel random number generation, universal methods for generation of nonuniform variates, perfect sampling, and software for random number generation.


The book provides a more elementary introduction to these topics than other available books. Gentle is the author of two other Springer books.

Computational Statistics

by James E. Gentle

Published 1 January 2009

Computational inference is based on an approach to statistical methods that uses modern computational power to simulate distributional properties of estimators and test statistics. This book describes computationally intensive statistical methods in a unified presentation, emphasizing techniques, such as the PDF decomposition, that arise in a wide range of methods.
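For a simple illustration of simulating the distributional properties of an estimator (the example and its details are mine, not drawn from the book), the nonparametric bootstrap approximates the sampling variability of the sample median by resampling the observed data:

```python
import random
import statistics

def bootstrap_se(data, estimator, n_boot=2000, seed=42):
    """Estimate the standard error of an estimator by nonparametric bootstrap resampling."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in range(len(data))]
        estimates.append(estimator(resample))
    return statistics.stdev(estimates)

data = [2.3, 1.9, 3.1, 2.8, 4.0, 2.2, 3.5, 2.9, 1.7, 3.3]
print(bootstrap_se(data, statistics.median))
```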


Accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors are presented. Regardless of the software system used, the book describes and gives examples of the use of modern computer software for numerical linear algebra. It begins with a discussion of the basics of numerical computations, and then describes the relevant properties of matrix inverses, factorizations, matrix and vector norms, and other topics in linear algebra. The book is essentially self-contained, with the topics addressed constituting the essential material for an introductory course in statistical computing. Numerous exercises allow the text to be used for a first course in statistical computing or as a supplementary text for various courses that emphasize computations.
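As a small sketch of using a matrix factorization to solve a linear system arising in statistics (the library and example are my own choices, and the Cholesky-on-normal-equations approach shown is one standard method, not necessarily the numerically preferred one), consider solving least-squares normal equations:

```python
import numpy as np

# Solve the normal equations (X'X) beta = X'y via a Cholesky factorization.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)

A = X.T @ X                      # symmetric positive definite cross-product matrix
b = X.T @ y
L = np.linalg.cholesky(A)        # factor A = L L', with L lower triangular
z = np.linalg.solve(L, b)        # solve L z = b
beta = np.linalg.solve(L.T, z)   # solve L' beta = z
print(beta)
```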

This is the fourth in a series of books on statistical computing by James Gentle.