The problem of errors in computation has long attracted the attention of mathematicians. With the wide application of computers its significance has taken on a new dimension: to the classical problem of errors in numerical analysis is added the new problem of errors in computer arithmetic. There is now an error classification according to which each problem in numerical analysis is connected with three types of error, and this classification has stimulated the development of numerical analysis. The author questions the completeness of this classification and suggests improvements. He applies these to linear and non-linear problems in numerical analysis, and also to systems of linear equations. His results form the main content of the monograph.
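The classification referred to is commonly taken to comprise inherent (data) error, truncation error, and rounding error; the monograph itself is not quoted here. A minimal Python sketch of the interplay between the latter two, using a forward-difference derivative (an illustrative choice, not an example from the book): as the step h shrinks, the truncation error decreases but the rounding error of computer arithmetic grows.

```python
import math

def forward_diff(f, x, h):
    # Forward-difference approximation to f'(x). Its total error has two
    # parts: truncation error O(h) from the Taylor remainder, and rounding
    # error O(eps/h) from cancellation in f(x + h) - f(x).
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # exact derivative of sin at x
for h in (1e-1, 1e-5, 1e-8, 1e-13):
    approx = forward_diff(math.sin, x, h)
    print(f"h={h:.0e}  error={abs(approx - exact):.2e}")
```

Running this shows the error falling as h decreases from 1e-1 to about 1e-8, then rising again as rounding error dominates, which is the kind of interaction between numerical-analysis error and computer-arithmetic error the monograph addresses.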