Communications and Control Engineering
Controlling Chaos achieves three goals: the suppression, synchronisation and generation of chaos, each of which is the focus of a separate part of the book. The text treats the well-known Lorenz, Rössler and Hénon attractors and the Chua circuit, as well as less celebrated novel systems. Chaos is modelled using difference equations and ordinary and time-delayed differential equations. The methods for controlling chaos draw on advanced nonlinear control theory: inverse optimal control is used for stabilization, exact linearization for synchronization, and impulsive control for chaotification. Notably, a fusion of chaos and fuzzy-systems theories is employed, and time-delayed systems are also studied. The results apply to a broad class of chaotic systems.
This monograph is self-contained with introductory material providing a review of the history of chaos control and the necessary mathematical preliminaries for working with dynamical systems.
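As a toy illustration of chaos suppression, a simple proportional feedback can stabilize the unstable fixed point of the chaotic logistic map. This is a minimal sketch, not the book's inverse optimal control method; the map, the gain `k = 1.9`, and all function names are illustrative choices, with the gain picked to cancel the map's local slope at the fixed point.

```python
# Toy chaos-suppression sketch (illustrative, not the book's method):
# the logistic map x_{n+1} = r x_n (1 - x_n) is chaotic at r = 3.9.
# Feedback u_n = k (x_n - x*) around the unstable fixed point
# x* = 1 - 1/r stabilizes it; k = 1.9 cancels the local slope
# f'(x*) = 2 - r = -1.9, making x* attracting.

def logistic_step(x, r=3.9, k=0.0):
    x_star = 1.0 - 1.0 / r      # unstable fixed point of the map
    u = k * (x - x_star)        # feedback control (k = 0: uncontrolled)
    return r * x * (1.0 - x) + u

def simulate(steps, k=0.0, x0=0.5):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1], k=k))
    return xs

free = simulate(60)              # uncontrolled orbit: chaotic wandering
controlled = simulate(60, k=1.9) # controlled orbit: settles onto x*
```

With control on, the orbit converges to x* ≈ 0.7436, while the uncontrolled orbit keeps wandering over a large fraction of the unit interval.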
Adaptive Dynamic Programming for Control
by Huaguang Zhang, Derong Liu, Yanhong Luo, and Ding Wang
This book covers:
• infinite-horizon control, for which the difficulty of solving the partial differential Hamilton–Jacobi–Bellman equations directly is overcome, with proof that the iterative value function sequence converges to the infimum of all value functions obtained by admissible control law sequences;
• finite-horizon control, implemented in discrete-time nonlinear systems, showing how to obtain suboptimal control solutions within a fixed number of control steps, with results more readily applicable to real systems than those usually obtained from infinite-horizon control; and
• nonlinear games, for which a pair of mixed optimal policies is derived to solve zero-sum games when the saddle point does not exist and, when it does exist, to avoid its existence conditions.
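The convergence claim for infinite-horizon ADP above can be illustrated on the simplest possible case. The sketch below is a hypothetical scalar LQR example, not taken from the book: for x_{k+1} = a x_k + b u_k with stage cost x² + u², the ADP value iteration V_{i+1}(x) = min_u [x² + u² + V_i(ax + bu)] with V_0 = 0 reduces to a scalar Riccati recursion whose iterates increase monotonically to the optimal cost p*.

```python
# Hypothetical scalar value-iteration example (not from the book):
# system x_{k+1} = a x_k + b u_k, stage cost x^2 + u^2.
# With V_i(x) = p_i x^2 and V_0 = 0, minimizing over u gives the
# Riccati recursion p_{i+1} = 1 + a^2 p_i - a^2 b^2 p_i^2 / (1 + b^2 p_i).
# The sequence p_i is nondecreasing and converges to the optimal p*.

a, b = 1.2, 1.0   # open-loop unstable system (|a| > 1)

def riccati_step(p):
    return 1.0 + a * a * p - (a * b * p) ** 2 / (1.0 + b * b * p)

ps = [0.0]                  # p_0 = 0, i.e. V_0 = 0
for _ in range(100):
    ps.append(riccati_step(ps[-1]))

p_opt = ps[-1]                                 # limit of the value iteration
gain = a * b * p_opt / (1.0 + b * b * p_opt)   # optimal feedback u = -K x
```

Here p* solves p² - 1.44p - 1 = 0 (about 1.952), and the resulting closed-loop pole a - bK has magnitude below one, so the iteratively obtained control law is stabilizing.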
Non-zero-sum games are studied in the context of a single-network scheme, in which policies are obtained that guarantee system stability and minimize each player's individual performance function, yielding a Nash equilibrium.
In order to make the coverage suitable for the student as well as for the expert reader, Adaptive Dynamic Programming in Discrete Time:
• establishes the fundamental theory involved, with each chapter devoted to a clearly identifiable control paradigm;
• provides convergence proofs for the ADP algorithms, deepening understanding of how stability and convergence are derived for the iterative computational methods used; and
• shows how ADP methods can be put to use both in simulation and in real applications.
This text will be of considerable interest to researchers working in optimal control and its applications in operations research, applied mathematics, computational intelligence, and engineering. Graduate students in control and operations research will also find the ideas presented here to be a source of powerful methods for furthering their study.