The mathematical theory of discrete-time decision processes, also known as stochastic control, rests on two major ideas: backward induction and conditioning. It has a large number of applications in almost all branches of the natural sciences. The aim of these notes is to give a self-contained introduction to this theory and its applications. Our intention was to give a global and mathematically precise picture of the subject and to present well-motivated examples. We cover systems with complete or partial information as well as with complete or partial observation. We have tried to present in a unified way several topics, such as dynamic programming equations, stopping problems, stabilization, the Kalman-Bucy filter, the linear regulator, adaptive control and option pricing. The notes discuss a large variety of models rather than concentrating on general existence theorems.
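The interplay of the two ideas named above can be sketched in a few lines: backward induction steps through time from the horizon toward the present, and conditioning enters through the expectation of tomorrow's value given today's state and action. The following is a minimal illustrative sketch for a finite-horizon Markov decision process; the model interface (`reward`, `transition`) is an assumption made for this example, not notation from the notes.

```python
def backward_induction(horizon, states, actions, reward, transition):
    """Compute optimal value functions V_t by stepping backwards in time.

    reward(t, s, a) -> immediate reward for action a in state s at time t.
    transition(t, s, a) -> dict mapping next states to probabilities;
    this is where conditioning on the current state and action enters.
    """
    # Terminal condition: zero value at the horizon.
    V = {horizon: {s: 0.0 for s in states}}
    policy = {}
    for t in range(horizon - 1, -1, -1):  # backward in time
        V[t], policy[t] = {}, {}
        for s in states:
            best_a, best_v = None, float("-inf")
            for a in actions:
                # Reward now plus the conditional expectation of the
                # optimal value at time t+1, given state s and action a.
                v = reward(t, s, a) + sum(
                    p * V[t + 1][s2] for s2, p in transition(t, s, a).items()
                )
                if v > best_v:
                    best_a, best_v = a, v
            V[t][s], policy[t][s] = best_v, best_a
    return V, policy
```

For instance, with two states where state 1 pays a unit reward and the action "move" deterministically switches states, the computed policy moves out of state 0 immediately and then stays.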

The notes are based on lectures on stochastic processes given at the Scuola Normale Superiore in 1999 and 2000. Some new material was added, and only selected, less standard results are presented. We did not include several applications to statistical mechanics and mathematical finance covered in the lectures, as we hope to write a second part of the notes devoted to applications of stochastic processes in modelling. The main themes of the notes are constructions of stochastic processes. We present different approaches to the existence question proposed by Kolmogorov, Wiener, Itô and Prohorov. Special attention is also paid to Lévy processes. The lectures are basically self-contained and rely only on elementary measure theory and functional analysis. They might be used for more advanced courses on stochastic processes.
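One construction idea behind the approaches mentioned above can be illustrated concretely: the Wiener process arises as the weak limit of rescaled random-walk partial sums, the viewpoint underlying Prohorov- and Donsker-type weak-convergence arguments. The sketch below is only an illustration under that interpretation; the step count and the ±1 increments are illustrative choices, not definitions from the notes.

```python
import math
import random

def wiener_path(n, rng=None):
    """Return the values S_k / sqrt(n), k = 0..n, of a rescaled random walk.

    S_k = X_1 + ... + X_k with i.i.d. signs X_i in {-1, +1}. As n grows,
    the rescaled path converges in distribution to Brownian motion.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    path, s = [0.0], 0
    for _ in range(n):
        s += rng.choice((-1, 1))       # one step of the symmetric walk
        path.append(s / math.sqrt(n))  # diffusive rescaling
    return path
```

Each increment of the rescaled path has size 1/sqrt(n), so the quadratic variation over [0, 1] is exactly 1, mirroring the defining property of the Wiener process.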