Bootstrap methods are computer-intensive methods of statistical analysis that use simulation to calculate standard errors, confidence intervals, and significance tests. The methods apply at any level of modelling, so they can be used for fully parametric, semiparametric, and completely nonparametric analysis. This 1997 book gives broad and up-to-date coverage of bootstrap methods, with numerous applied examples, developed in a coherent way with the necessary theoretical basis. Applications include stratified data; finite populations; censored and missing data; linear, nonlinear, and smooth regression models; classification; time series and spatial problems. Special features of the book include: extensive discussion of significance tests and confidence intervals; material on various diagnostic methods; and methods for efficient computation, including improved Monte Carlo simulation. Each chapter includes both practical and theoretical exercises. S-Plus programs for implementing the methods described in the text are available from the supporting website.
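
For readers who want a concrete feel for the approach, here is a minimal sketch in R using the boot package (an R port of the software accompanying the book). The gamma sample, the number of replicates, and the choice of the sample mean as the statistic are illustrative assumptions, not material from the book.

```r
library(boot)                       # bootstrap functions ported to R from the book's software

set.seed(1)
x <- rgamma(30, shape = 2)          # a small, skewed sample (illustrative data)

mean_stat <- function(data, i) mean(data[i])   # statistic(data, indices) as boot() expects

b <- boot(x, statistic = mean_stat, R = 2000)  # 2000 nonparametric resamples
b                                              # original estimate, bias, bootstrap std. error
boot.ci(b, type = c("perc", "bca"))            # percentile and BCa 95% confidence intervals
```

Replacing `mean_stat` with any other statistic of interest (a median, a regression coefficient, and so on) leaves the resampling machinery unchanged, which is the point of the general-purpose approach the book develops.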

Statistical Models

by A. C. Davison

Published 1 January 2003
Models and likelihood are the backbone of modern statistics. This 2003 book gives an integrated development of these topics that blends theory and practice, intended for advanced undergraduate and graduate students, researchers and practitioners. Its breadth is unrivaled, with sections on survival analysis, missing data, Markov chains, Markov random fields, point processes, graphical models, simulation and Markov chain Monte Carlo, estimating functions, asymptotic approximations, local likelihood and spline regressions as well as on more standard topics such as likelihood and linear and generalized linear models. Each chapter contains a wide range of problems and exercises. Practicals in the S language designed to build computing and data analysis skills, and a library of data sets to accompany the book, are available over the Web.
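
As a small illustration of the likelihood-based modelling that runs through the book, the following R sketch simulates binary data, fits nested generalized linear models, and compares them with a likelihood-ratio (deviance) test. It is not one of the book's practicals; the data and variable names are invented for the example.

```r
set.seed(2)
n     <- 200
dose  <- runif(n, 0, 4)                                  # hypothetical covariate
group <- factor(sample(c("a", "b"), n, replace = TRUE))  # hypothetical grouping factor
eta   <- -1 + 0.8 * dose + 0.5 * (group == "b")          # true linear predictor
y     <- rbinom(n, size = 1, prob = plogis(eta))         # binary response

fit0 <- glm(y ~ dose,         family = binomial)  # reduced model
fit1 <- glm(y ~ dose + group, family = binomial)  # full model

summary(fit1)                      # Wald tests for individual coefficients
anova(fit0, fit1, test = "Chisq")  # likelihood-ratio (deviance) test for the group effect
```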

Applied Asymptotics

by A. R. Brazzale, A. C. Davison, and N. Reid

Published 1 January 2007
In fields such as biology, the medical sciences, sociology, and economics, researchers often face situations in which the number of available observations, or the amount of available information, is so small that approximations based on the normal distribution may be unreliable. Theoretical work over the last quarter-century has led to new likelihood-based methods that yield very accurate approximations in finite samples, but this work has had limited impact on statistical practice. This book illustrates, by means of realistic examples and case studies, how to use the new theory, and investigates how and when it makes a difference to the resulting inference. The treatment is oriented towards practice and comes with code in the R language (available from the web) that enables the methods to be applied in a range of situations of interest to practitioners. The analysis includes some comparisons of higher-order likelihood inference with bootstrap or Bayesian methods.
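
The flavour of the small-sample problem can be seen in a short R sketch that contrasts a first-order Wald interval with a likelihood-ratio interval for an exponential rate. It does not use the higher-order routines distributed with the book; the sample size and true rate are illustrative assumptions.

```r
set.seed(3)
x <- rexp(8, rate = 1.5)            # n = 8: the normal approximation is doubtful
n <- length(x)
lam_hat <- 1 / mean(x)              # maximum likelihood estimate of the rate

loglik <- function(lam) n * log(lam) - lam * sum(x)   # exponential log-likelihood

## First-order Wald interval from the observed information n / lambda^2
se_wald <- lam_hat / sqrt(n)
wald_ci <- lam_hat + c(-1, 1) * qnorm(0.975) * se_wald

## Likelihood-ratio interval: lambda with 2 * (l(lam_hat) - l(lambda)) <= chi^2_{1, 0.95}
f <- function(lam) 2 * (loglik(lam_hat) - loglik(lam)) - qchisq(0.95, 1)
lr_ci <- c(uniroot(f, c(1e-6, lam_hat))$root,
           uniroot(f, c(lam_hat, 20 * lam_hat))$root)

rbind(wald = wald_ci, likelihood_ratio = lr_ci)   # the LR interval is asymmetric
```

With so few observations the two intervals can differ noticeably; the book's theme is how refined likelihood approximations behave in exactly this regime, and how they compare with bootstrap and Bayesian answers.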