Abstract: In recent years Model Predictive Control (MPC) has emerged as a powerful methodology for on-line control of complex systems. The central idea is to formulate a finite horizon optimal control problem based on a model of the system dynamics. The optimal control problem is then solved (either on-line, using numerical optimization tools, or off-line, using multi-parametric programming), the initial part of the optimal input sequence is applied, a measurement of the state is taken and the process is repeated. The periodic state measurements provide the feedback necessary to make the process robust against disturbances, including errors in the system model used in the optimization. Over the years, MPC for deterministic systems has become a mature technology, with countless applications in a wide range of domains. More recently, robust MPC (where the system model includes bounded, worst case uncertainty) has also flourished, exploiting advances in robust optimization. In comparison, MPC for systems with stochastic, potentially unbounded uncertainty has received relatively little attention. Dealing with stochastic uncertainty is important, since it will allow the MPC methodology to extend to application areas such as finance, air traffic management, and insurance, which naturally lend themselves to an MPC approach but also naturally involve stochastic models. The extension of MPC to stochastic systems poses several challenges, both conceptual and practical: How should state constraints be interpreted over finite and infinite horizons? How can input constraints be enforced? What cost functions and policies should one consider for the optimal control problem? Under what conditions are the resulting optimization problems convex? In this talk we highlight these challenges and outline methods that can be used to overcome them.
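The receding-horizon loop described in the abstract (solve a finite horizon problem, apply the first input, measure the state, repeat) can be sketched for the simplest case: an unconstrained, linear-quadratic problem with a toy double-integrator model. The dynamics, weights, horizon, and disturbance level below are illustrative assumptions, not taken from the talk; a realistic MPC scheme would add state and input constraints and a proper optimization solver.

```python
import numpy as np

def receding_horizon_mpc(A, B, Q, R, x0, horizon, steps, rng):
    """Unconstrained linear-quadratic MPC for x+ = A x + B u (toy sketch)."""
    n, m = B.shape
    # Batch prediction over the horizon: X = Phi @ x0 + Gamma @ U.
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
    Gamma = np.zeros((horizon * n, horizon * m))
    for i in range(horizon):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(horizon), Q)      # stacked state weights
    Rbar = np.kron(np.eye(horizon), R)      # stacked input weights
    H = Gamma.T @ Qbar @ Gamma + Rbar       # Hessian of the quadratic cost in U
    x = x0.copy()
    trajectory = [x.copy()]
    for _ in range(steps):
        # Solve the finite-horizon problem from the measured state...
        U = np.linalg.solve(H, -Gamma.T @ Qbar @ Phi @ x)
        u0 = U[:m]                          # ...apply only the first input;
        w = 0.01 * rng.standard_normal(n)   # the plant is hit by a disturbance,
        x = A @ x + B @ u0 + w              # and the loop repeats from the new state.
        trajectory.append(x.copy())
    return np.array(trajectory)

A = np.array([[1.0, 0.1], [0.0, 1.0]])      # sampled double integrator (illustrative)
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = 0.1 * np.eye(1)
traj = receding_horizon_mpc(A, B, Q, R, np.array([5.0, 0.0]),
                            horizon=10, steps=50, rng=np.random.default_rng(0))
```

The feedback described in the abstract enters through the re-solve from the measured state at every step, which is what keeps the loop robust to the injected disturbance `w`.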
Automatic Control Laboratory, ETH Zurich, ETL I 22, Physikstrasse 3, CH-8092 Zurich, Switzerland, Tel.: +41 44 632 89 70, Fax.: +41 44 632 12 11, Email: firstname.lastname@example.org, www.control.ee.ethz.ch
John Lygeros received a B.Eng. degree in Electrical Engineering and an M.Sc. degree in Automatic Control from Imperial College, London, U.K., in 1990 and 1991, respectively. He then received a Ph.D. degree from the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, in 1996. He held a series of postdoctoral research appointments with the National Automated Highway Systems Consortium, the Massachusetts Institute of Technology, and the University of California, Berkeley. In parallel, he was also a part-time Research Engineer with SRI International, Menlo Park, CA, and a Visiting Professor with the Department of Mathematics, Université de Bretagne Occidentale, Brest, France. Between July 2000 and March 2003, he was a university lecturer with the Department of Engineering, University of Cambridge, Cambridge, U.K. and a fellow of Churchill College. Between March 2003 and July 2006, he was an assistant professor with the Department of Electrical and Computer Engineering, University of Patras, Patras, Greece. In July 2006, he joined the Automatic Control Laboratory, ETH Zurich, Switzerland as an associate professor. His research interests include modeling, analysis, and control of hierarchical, hybrid and stochastic systems with applications to biochemical networks and large-scale engineering systems such as highway and air traffic management.
Distinguished Lecture Program
Talk Title: Stochastic model predictive control
Talk Title: Randomized optimization of deterministic and expected value criteria.
Abstract: Simulated annealing, Markov Chain Monte Carlo, and genetic algorithms are all randomized methods that can be used in practice to solve (albeit approximately) complex optimization problems. They rely on constructing appropriate Markov chains, whose stationary distribution concentrates on "good" parts of the parameter space (i.e., near the optimizers). Many of these methods come with asymptotic convergence guarantees that establish conditions under which the Markov chain converges to a globally optimal solution in an appropriate probabilistic sense. An interesting question that is usually not covered by asymptotic convergence results is the rate of convergence: How long should the randomized algorithm be executed to obtain a near optimal solution with high probability? Answering this question would allow one to determine a level of accuracy and confidence with which approximate optimality claims can be made as a function of the amount of time available for computation. In this talk we present some new results on finite-sample bounds of this type, primarily in the context of stochastic optimization with expected value criteria using Markov Chain Monte Carlo methods. The discussion will be motivated by the application of these methods to collision avoidance in air traffic management and parameter identification for biological systems.
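As a deliberately minimal instance of the class of methods described in the abstract, the sketch below runs a Metropolis-style simulated annealing search on an expected-value criterion, with the expectation replaced by a Monte Carlo average. The cost function, Gaussian proposal kernel, logarithmic cooling schedule, and sample sizes are all illustrative assumptions; they are not the specific algorithms or bounds presented in the talk.

```python
import math
import random

def simulated_annealing(noisy_cost, x0, iters, n_samples, rng):
    """Metropolis-style random search for min_x E[c(x, w)] (illustrative sketch).
    The expectation is estimated by an n_samples Monte Carlo average."""
    def estimate(x):
        return sum(noisy_cost(x, rng) for _ in range(n_samples)) / n_samples
    x, fx = x0, estimate(x0)
    best, fbest = x, fx
    for k in range(1, iters + 1):
        temp = 1.0 / math.log(k + 1)        # slow (logarithmic) cooling schedule
        y = x + rng.gauss(0.0, 0.5)         # random proposal from a Gaussian kernel
        fy = estimate(y)
        # Metropolis acceptance: always take improvements; accept worse moves
        # with probability exp(-increase / temperature), so the chain can escape
        # local minima early on and concentrates near the optimizers as temp drops.
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy
        if fx < fbest:
            best, fbest = x, fx
    return best, fbest

# Toy expected-value criterion E_w[(x^2 - 1)^2 + 0.1 w]: two global minima at x = +/-1.
rng = random.Random(0)
cost = lambda x, r: (x * x - 1.0) ** 2 + 0.1 * r.gauss(0.0, 1.0)
xopt, fopt = simulated_annealing(cost, x0=3.0, iters=2000, n_samples=20, rng=rng)
```

The finite-sample question raised in the abstract is, in these terms: how large must `iters` and `n_samples` be so that `xopt` is near-optimal with prescribed accuracy and confidence?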
Talk Title: Stochastic hybrid systems: From theory to applications.
Abstract: The term stochastic hybrid systems defines a class of control systems that involve the interaction of continuous dynamics, discrete dynamics and probabilistic uncertainty. Over the last decade stochastic hybrid systems have emerged as a powerful modeling paradigm in a wide range of application areas. This talk will provide an overview of recent developments in this rapidly evolving field of research. We will present the theoretical foundations and challenges of stochastic hybrid systems. We will also outline computational methods that can be used to analyze and control such systems, based primarily on randomized algorithms. The discussion will be motivated by applied modeling, analysis and control problems from the areas of systems biology and air traffic management.
Talk Title: Air traffic management: Challenges and opportunities for automatic control.
Abstract: Increasing levels of traffic are pushing the current Air Traffic Management (ATM) system to its limits. It is widely recognized that safely accommodating this increase in demand will require (in addition to technological advances) operational changes as well as novel, advanced decision support algorithms to assist the human operators. The operation of ATM is characterized by a hierarchy of tasks, which makes it an exceptional benchmark for control methodologies that aim to tackle complexity. Ongoing control research in the area of ATM includes large scale and distributed optimization for the management of traffic flows, innovative filtering, prediction and control methods for collision avoidance, systems methods for improving the situational awareness of human operators, optimal control for the design of safe coordination maneuvers, etc. In addition, models and simulation tools covering all levels of ATM are being developed, either to test and validate new methods, or to perform risk assessment for existing operations. This talk will highlight the problems and challenges that ATM poses for control engineers and outline advanced control and filtering methods that have been developed to improve the accuracy of aircraft trajectory prediction, detect potential safety problems, and compute maneuvers to resolve them.