Foundations and Trends® in Optimization
3 total works
Dynamic Network Energy Management via Proximal Message Passing
by Matt Kraning, Eric Chu, Javad Lavaei, and Stephen Boyd
Published 1 January 2014
Presents a fully decentralized method for dynamic network energy management based on message passing between devices. It considers a network of devices, such as generators, fixed loads, deferrable loads, and storage devices, each with its own dynamic constraints and objective, connected by AC and DC lines. The problem is to minimize the total network objective subject to the device and line constraints, over a given time horizon. This is a large optimization problem, with variables in each time period for each device's consumption or generation, each line's power flow, and the voltage phase angle at each AC bus.
This text develops a decentralized method for solving this problem called proximal message passing. The method is iterative: at each step, each device exchanges simple messages with its neighbors in the network and then solves its own optimization problem, minimizing its own objective function augmented by a term determined by the messages it has received. It is shown that this message passing method converges to a solution when the device objectives and constraints are convex. The method is completely decentralized and needs no global coordination other than synchronizing iterations; the problems to be solved by each device can typically be solved extremely efficiently and in parallel. The method is fast enough that even a serial implementation can solve substantial problems in reasonable time frames.
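To make the iteration concrete, the following minimal Python sketch runs a prox-style message passing loop on a single net connecting a quadratic-cost generator to a fixed load. The cost coefficient, load profile, and step parameter are hypothetical, and the updates are a simplified rendering of the scheme just described, not the authors' exact formulation.

```python
import numpy as np

# A minimal sketch: one net, two devices (quadratic-cost generator, fixed
# load). Powers use a consumption sign convention, so the net requires
# p_gen + p_load = 0 in every period. All problem data are hypothetical.

T = 4                                    # time horizon
rho = 1.0                                # proximal (step size) parameter
load = np.array([1.0, 2.0, 1.5, 0.5])    # fixed load profile (hypothetical)
alpha = 0.2                              # generator cost coefficient (hypothetical)

p_gen = np.zeros(T)                      # generator power (negative = producing)
p_load = load.copy()                     # load power (fixed)
u = np.zeros(T)                          # scaled prices on the net

for k in range(200):
    # Message: each device learns the average power imbalance on its net.
    imbalance = (p_gen + p_load) / 2.0
    # Generator prox step: argmin_{p <= 0} alpha*p^2 + (rho/2)*||p - v||^2,
    # which has the closed form below.
    v = p_gen - imbalance - u
    p_gen = np.minimum(rho * v / (2 * alpha + rho), 0.0)
    # Fixed load prox step: its only feasible schedule is its profile.
    p_load = load.copy()
    # Price update: prices move in the direction of the remaining imbalance.
    u = u + (p_gen + p_load) / 2.0

print("generator schedule:", np.round(p_gen, 3))   # approaches -load
print("residual imbalance:", np.round(p_gen + p_load, 3))
```

Each device here touches only its own data and the messages from its net, which is what makes the per-device subproblems trivially parallelizable.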
Results for several numerical experiments are reported, demonstrating the method's speed and scaling, including the solution of a problem instance with over ten million variables in under fifty minutes using a serial implementation; with decentralized computing, the solve time would be less than one second.
Proximal Algorithms
by Neal Parikh and Stephen Boyd
Published 1 January 2014
Proximal Algorithms discusses proximal operators and proximal algorithms, and illustrates their applicability to standard and distributed convex optimization in general and to many applications of recent interest in particular. Just as Newton's method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well suited to problems involving large or high-dimensional datasets.
Proximal methods sit at a higher level of abstraction than classical algorithms like Newton's method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods.
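As an illustration of how cheap these subproblems can be, the sketch below evaluates two proximal operators that have closed forms: soft thresholding, the prox of the l1 norm, and Euclidean projection onto a box, the prox of the box's indicator function.

```python
import numpy as np

# Two closed-form proximal operators, illustrating that prox evaluations
# generalize projection onto a convex set. Here
#   prox_{t f}(v) = argmin_x f(x) + (1/(2t)) * ||x - v||^2.

def prox_l1(v, t):
    """Prox of f(x) = ||x||_1: elementwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_box(v, lo, hi):
    """Prox of the indicator of the box [lo, hi]: Euclidean projection."""
    return np.clip(v, lo, hi)

v = np.array([-2.0, 0.3, 1.5])
print(prox_l1(v, t=0.5))             # [-1.5  0.   1. ]
print(prox_box(v, lo=-1.0, hi=1.0))  # [-1.   0.3  1. ]
```

Both operators cost a handful of elementwise operations per evaluation, which is why proximal algorithms scale to very large problems when the prox of each term is cheap.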
Proximal Algorithms discusses different interpretations of proximal operators and algorithms, looks at their connections to many other topics in optimization and applied mathematics, surveys some popular algorithms, and provides a large number of examples of proximal operators that commonly arise in practice.
Performance Bounds and Suboptimal Policies for Multi-Period Investment
by Stephen Boyd, Mark T. Mueller, Brendan O'Donoghue, and Yang Wang
Published 1 January 2014
Examines dynamic trading of a portfolio of assets in discrete periods over a finite time horizon, with an arbitrary time-varying distribution of asset returns. The goal is to maximize the total expected revenue from the portfolio, while respecting constraints such as a required terminal portfolio and limits on leverage and risk. The revenue takes into account the gross cash generated in trades, transaction costs, and costs associated with the positions, such as fees for holding short positions.
The model presented takes the form of a stochastic control problem with linear dynamics and convex cost function and constraints. While this problem can be solved tractably in several special cases (for example, when all costs are convex quadratic, or when there are no transaction costs), the focus is on the more general case, with nonquadratic cost terms and transaction costs.
Performance Bounds and Suboptimal Policies for Multi-Period Investment shows how to use linear matrix inequality techniques and semidefinite programming to produce a quadratic bound on the value function, which in turn gives a bound on the optimal performance. This performance bound can be used to judge the performance obtained by any suboptimal policy.
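The sketch below illustrates the underlying idea on the simplest case it covers, an undiscounted infinite-horizon linear-quadratic problem with hypothetical data; the text treats a finite horizon with more general convex costs, so this is a stand-in, not the authors' construction. For a quadratic candidate V(x) = x'Px, the Bellman inequality becomes a linear matrix inequality in P, and maximizing the resulting bound at a given initial state is a semidefinite program, here posed in CVXPY.

```python
import cvxpy as cp
import numpy as np

# LMI-based performance bound for a hypothetical LQ problem: find
# V(x) = x'Px, P >= 0, satisfying the Bellman inequality
#   V(x) <= x'Qx + u'Ru + V(Ax + Bu)   for all x, u,
# which is one PSD constraint in P. Any feasible P certifies
# x0'P x0 <= (optimal cost starting from x0).

n, m = 3, 2
rng = np.random.default_rng(1)
A = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))  # dynamics
B = rng.standard_normal((n, m))                           # input matrix
Q = np.eye(n)                                             # state cost
R = 0.1 * np.eye(m)                                       # control cost
x0 = np.ones(n)                                           # initial state

P = cp.Variable((n, n), symmetric=True)
bellman_lmi = cp.bmat([
    [Q - P + A.T @ P @ A, A.T @ P @ B],
    [B.T @ P @ A,         R + B.T @ P @ B],
]) >> 0

prob = cp.Problem(cp.Maximize(cp.quad_form(x0, P)),
                  [bellman_lmi, P >> 0])
prob.solve()
print("lower bound on optimal cost from x0:", round(prob.value, 4))
```

Maximizing x0'P x0 over all P satisfying the LMI yields the tightest bound of this quadratic form; any suboptimal policy's realized cost can then be compared against it.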
As a by-product of the performance bound computation, an approximate dynamic programming policy is obtained that requires the solution of a convex optimization problem, often a quadratic program, to determine the trades to carry out in each step.
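A per-step policy evaluation of this kind might look like the following CVXPY sketch, which minimizes a proportional transaction cost plus a quadratic approximate value of the post-trade portfolio, subject to a leverage limit. The data, the cost model, and the cost-minimization framing (the text maximizes revenue) are all hypothetical stand-ins, not the text's exact formulation.

```python
import cvxpy as cp
import numpy as np

# One step of an approximate dynamic programming trading policy: given a
# quadratic approximate value function V(x) = x'Px + p'x (hypothetical
# here, e.g. from a bound computation), choose the trade u by solving a
# small convex problem.

n = 5                                   # number of assets
rng = np.random.default_rng(0)
x = rng.standard_normal(n)              # current positions
kappa = 0.01 * np.ones(n)               # transaction cost rates (hypothetical)
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)                 # PSD matrix of the approximate value
p = rng.standard_normal(n)              # linear term of the approximate value

u = cp.Variable(n)                      # trades to execute this period
x_next = x + u                          # post-trade positions
stage_cost = kappa @ cp.abs(u)          # proportional transaction cost
value_to_go = cp.quad_form(x_next, P) + p @ x_next
constraints = [cp.norm1(x_next) <= 10]  # a leverage limit (hypothetical)

prob = cp.Problem(cp.Minimize(stage_cost + value_to_go), constraints)
prob.solve()
print("trade:", np.round(u.value, 3))
```

With only proportional transaction costs and a quadratic value term, each step reduces to a small quadratic program, so the policy is cheap to run in real time.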