This book examines the structure of approximate solutions of autonomous discrete-time optimal control problems and presents individual turnpike results for optimal control problems without convexity (concavity) assumptions. In particular, it focuses on properties of approximate solutions that are independent of the length of the interval, for all sufficiently large intervals; these results are related to the so-called turnpike property of optimal control problems. According to this property, approximate solutions are determined mainly by the objective function and are essentially independent of the choice of interval and endpoint conditions, except in regions close to the endpoints. The book also explores the turnpike phenomenon for two large classes of autonomous optimal control problems. It is shown that the turnpike phenomenon is stable for an optimal control problem if the corresponding infinite horizon optimal control problem possesses an asymptotic turnpike property. If an optimal control problem belonging to the first class possesses the turnpike property, then the turnpike is a singleton. The stability of the turnpike property under small perturbations of the objective function and of the constraint map is established. For the second class of problems, where the turnpike is not necessarily a singleton, the stability of the turnpike property under small perturbations of the objective function is established. Containing solutions of difficult problems in optimal control and presenting new approaches, techniques and methods, this book is of interest to mathematicians working in optimal control and the calculus of variations. It can also be useful in courses for graduate students.
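
To make the turnpike property concrete, the following is a sketch of one standard formulation for discrete-time problems of the type described above; the cost function f, the constraint set \Omega, the turnpike point \bar{x} and the integers L_1, L_2 are notation introduced here only for illustration and are not taken from the text.

```latex
% A typical autonomous discrete-time problem on a horizon of length T:
\[
  \min \sum_{t=0}^{T-1} f(x_t, x_{t+1})
  \quad \text{subject to } (x_t, x_{t+1}) \in \Omega, \; t = 0, \dots, T-1,
\]
% with prescribed conditions on the endpoints x_0 and x_T.
% One standard form of the turnpike property: there exists a point \bar{x}
% such that for every \varepsilon > 0 there are \delta > 0 and integers
% L_1, L_2 for which every \delta-approximate solution (x_t)_{t=0}^{T},
% with T large enough, satisfies
\[
  \| x_t - \bar{x} \| \le \varepsilon
  \quad \text{for all } t \in \{ L_1, \dots, T - L_2 \}.
\]
% Outside two end blocks whose lengths do not depend on T, approximate
% solutions stay near the turnpike \bar{x}.
```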

Structure of Solutions of Variational Problems is devoted to recent progress made in the study of the structure of approximate solutions of variational problems considered on subintervals of a real line. Results on properties of approximate solutions which are independent of the length of the interval, for all sufficiently large intervals, are presented in a clear manner. Solutions, new approaches, techniques and methods for a number of difficult problems in the calculus of variations are illustrated throughout this book. The book also contains significant results and information about the turnpike property of variational problems. This well-known property is a general phenomenon which holds for large classes of variational problems. In relation to the turnpike property, the author examines individual (non-generic) turnpike results, sufficient and necessary conditions for the turnpike phenomenon, as well as the non-intersection property for extremals of variational problems. This book appeals to mathematicians working in optimal control and the calculus of variations, as well as to graduate students.
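
For orientation, a variational problem of the kind described above can be written, in one common form, as follows; the integrand f and the endpoint data x, y are notation introduced here only for illustration.

```latex
% A variational problem on a subinterval [0, T] of the real line:
\[
  \min \int_{0}^{T} f\bigl( z(t), z'(t) \bigr)\, dt
  \quad \text{over absolutely continuous } z : [0, T] \to \mathbb{R}^n
  \text{ with } z(0) = x, \; z(T) = y.
\]
% The turnpike property asserts that, for all sufficiently large T,
% approximate minimizers spend most of the interval [0, T] near a fixed
% trajectory (the turnpike), regardless of the endpoint data x and y.
```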

This book is devoted to the study of optimal control problems arising in forest management, an important and fascinating topic in mathematical economics that has been studied by many researchers over the years. The volume studies the forest management problem by analyzing a class of optimal control problems that contains it and showing the existence of optimal solutions over an infinite horizon. It also studies the structure of approximate solutions on finite intervals and their turnpike properties, as well as the stability of the turnpike phenomenon and the structure of approximate solutions on finite intervals in the regions close to the endpoints. The book is intended for mathematicians interested in optimization theory, optimal control, and their applications to economic theory.

This focused monograph presents a study of subgradient algorithms for constrained minimization problems in a Hilbert space. The book is of interest to experts in applications of optimization to engineering and economics. The goal is to obtain a good approximate solution of the problem in the presence of computational errors. The discussion takes into account the fact that, for every algorithm, each iteration consists of several steps and that the computational errors for different steps are, in general, different. The book is especially useful for the reader because it contains solutions to a number of difficult and interesting problems in numerical optimization. The subgradient projection algorithm is one of the most important tools in optimization theory and its applications. An optimization problem is described by an objective function and a set of feasible points. For this algorithm, each iteration consists of two steps. The first step requires a calculation of a subgradient of the objective function; the second requires a calculation of a projection onto the feasible set. The computational errors in each of these two steps are different. This book shows that the algorithm discussed generates a good approximate solution if all the computational errors are bounded from above by a small positive constant. Moreover, if the computational errors for the two steps of the algorithm are known, one can determine what approximate solution can be obtained and how many iterations are needed for this. In addition to their mathematical interest, the generalizations considered in this book have a significant practical meaning.
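
As an illustration of the two-step scheme with computational errors described above, here is a minimal numerical sketch of a projected subgradient iteration. The concrete objective (an l1-type loss), the ball constraint, the step size and the error levels are assumptions chosen for illustration only and are not taken from the book.

```python
import numpy as np

def project_onto_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius} (the feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def subgradient_projection(A, b, num_iters=500, step=1e-2,
                           subgrad_err=1e-4, proj_err=1e-4, seed=0):
    """Projected subgradient method for min ||Ax - b||_1 over a unit ball,
    with bounded perturbations modelling the computational errors in each
    of the two steps of every iteration (illustrative setting only)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[1])
    best_x, best_val = x.copy(), np.sum(np.abs(A @ x - b))
    for _ in range(num_iters):
        # Step 1: a subgradient of x -> ||Ax - b||_1, computed with a bounded error.
        g = A.T @ np.sign(A @ x - b)
        g = g + subgrad_err * rng.standard_normal(g.shape)
        # Step 2: projection onto the feasible set, also computed with a bounded error.
        x = project_onto_ball(x - step * g)
        x = x + proj_err * rng.standard_normal(x.shape)
        val = np.sum(np.abs(A @ x - b))
        if val < best_val:
            best_x, best_val = x.copy(), val
    return best_x, best_val

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    x_approx, val = subgradient_projection(A, b)
    print("approximate objective value:", val)
```

In this toy setting, shrinking subgrad_err and proj_err improves the achievable approximation quality, which is the qualitative behaviour that the analysis described above quantifies.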

The book is devoted to the study of constrained minimization problems on closed and convex sets in Banach spaces with a Fréchet differentiable objective function. Such problems are well studied in a finite-dimensional space and in an infinite-dimensional Hilbert space. When the space is a Hilbert space, there are many algorithms for solving optimization problems, including the gradient projection algorithm, which is one of the most important tools in optimization theory, nonlinear analysis and their applications. An optimization problem is described by an objective function and a set of feasible points. For the gradient projection algorithm, each iteration consists of two steps. The first step is a calculation of a gradient of the objective function, while in the second one we calculate a projection onto the feasible set. In each of these two steps there is a computational error. In our recent research we show that the gradient projection algorithm generates a good approximate solution if all the computational errors are bounded from above by a small positive constant. It should be mentioned that the properties of a Hilbert space play an important role here. When we consider an optimization problem in a general Banach space, the situation becomes more difficult and less understood. On the other hand, such problems arise in approximation theory. The book is of interest to mathematicians working in optimization. It can also be useful in courses for graduate students. The main feature of the book which appeals specifically to this audience is the study of algorithms for convex and nonconvex minimization problems in a general Banach space. The book is also of interest to experts in applications of optimization to approximation theory.
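
To make the two-step iteration with computational errors concrete, the following is a minimal sketch in a finite-dimensional (Euclidean, hence Hilbert) setting; the quadratic objective, the box constraint, the step-size choice and the error levels are illustrative assumptions and not the book's setting of a general Banach space.

```python
import numpy as np

def project_onto_box(x, lower=-1.0, upper=1.0):
    """Exact projection onto the box [lower, upper]^n (the feasible set)."""
    return np.clip(x, lower, upper)

def gradient_projection(Q, c, num_iters=200, step=None,
                        grad_err=1e-5, proj_err=1e-5, seed=0):
    """Gradient projection method for min 0.5 x^T Q x + c^T x over a box,
    with bounded perturbations modelling the computational errors in the
    gradient step and in the projection step (illustrative setting only)."""
    rng = np.random.default_rng(seed)
    n = len(c)
    x = np.zeros(n)
    if step is None:
        # 1/L step size for the L-smooth quadratic objective.
        step = 1.0 / np.linalg.norm(Q, 2)
    for _ in range(num_iters):
        # Step 1: gradient of the objective, computed with a bounded error.
        g = Q @ x + c + grad_err * rng.standard_normal(n)
        # Step 2: projection onto the feasible set, with a bounded error.
        x = project_onto_box(x - step * g) + proj_err * rng.standard_normal(n)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    M = rng.standard_normal((5, 5))
    Q = M.T @ M + np.eye(5)          # symmetric positive definite
    c = rng.standard_normal(5)
    x_approx = gradient_projection(Q, c)
    print("approximate solution:", x_approx)
```

Setting grad_err and proj_err to zero recovers the classical gradient projection iteration; the point of the analysis described above is that small, persistent errors in both steps still yield a good approximate solution.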
In this book the goal is to obtain a good approximate solution of the constrained optimization problem in a general Banach space in the presence of computational errors. It is shown that the algorithm generates a good approximate solution if the sequence of computational errors is bounded from above by a small constant. The book consists of four chapters. In the first we discuss several algorithms which are studied in the book and prove a convergence result for an unconstrained problem which is a prototype of our results for the constrained problem. In Chapter 2 we analyze convex optimization problems. Nonconvex optimization problems are studied in Chapter 3. In Chapter 4 we study continuous algorithms for minimization problems in the presence of computational errors.