Using concepts from differential topology and information theory, I shall describe a theoretical framework for search strategies aimed at rapid discovery of topological features (locations of critical points and critical level sets) of a priori unknown differentiable random fields. The theory enables the study of efficient reconnaissance strategies in which the tradeoff between speed and accuracy can be understood. The proposed approach to rapid discovery of topological features has led in a natural way to the creation of parsimonious reconnaissance routines that do not rely on any prior knowledge of the environment. The design of topology-guided search protocols uses a mathematical framework that quantifies the relationship between what has been discovered and what remains to be discovered. The quantification rests on an information-theoretic model whose properties allow us to treat search as a problem in optimal information acquisition.
Distinguished Lecturers Program
Program Description
The Control Systems Society is continuing to fund a Distinguished Lecture Series.
The primary purpose is to help Society chapters provide interesting and informative programs for the membership, but the Distinguished Lecture Series may also be of interest to industry, universities, and other parties.
The Control Systems Society has agreed to a cost-sharing plan which may be used by IEEE chapters, sections, subsections, and student groups. IEEE student groups are especially encouraged to make use of this opportunity to host excellent speakers at moderate cost.
At the request of a Society Chapter (or other IEEE groups as mentioned above), a lecture will be scheduled at a place and time that is mutually agreeable to both the Chapter and the Distinguished Lecturer. Eighty percent (80%) of the normal travel expenses for a lecture will be paid by the Society; the remaining travel expenses will be provided by the chapter. Lecturers will receive no honorarium. Note that the group organizing the lecture must have some IEEE affiliation, and the lecture must be free for IEEE members to attend.
The Society will provide 80% of the expenses for qualified users of the program, up to a maximum payable by the Society of $2000 for a visit within the same continent and $4000 for an intercontinental visit.
Procedures
When you wish to use this program, you may contact the speaker directly to make arrangements. Then, you must submit a formal proposal to the Distinguished Lecturer Program Chair for his/her approval. The proposal should be sent to the Distinguished Lecturer Program Chair by someone in the local chapter, who should identify their role in the chapter and provide some details of the invitation, including the dates. The proposal should contain budgetary quotations for airfare and accommodation from authorized sources (airline/travel agent for the former, hotel for the latter), and a clear commitment of what the local chapter will contribute. If the trip is approved, then IEEE CSS will pay a maximum of 80% of the airfare and accommodation. The local hosts should cover a minimum of 20% of the airfare and accommodation, plus all local costs such as airport transfer, local transportation, and meals. Procedures for unusual situations (such as when the speaker has other business on the trip) should be cleared through the Distinguished Lecturer Program Chair.
The expense claim filed by the distinguished lecturer upon the conclusion of the trip should contain receipts for the airfare and hotel.
Each distinguished lecturer will be limited to two trips per year, out of which at most one can be intercontinental.
Distinguished Lecturers Program Chair

Distinguished Lecturers
Distinguished Lecturer
The interaction of information and control has been a topic of interest to system theorists since the 1950s, when the fields of communications, control, and information theory were new but developing rapidly. Recent advances in our understanding of this interplay have emerged from work on the dynamical effects of state quantization and a corresponding understanding of how communication channel data rates affect system stability. While a large body of research has now emerged dealing with communication-constrained feedback channels and optimal design of information flows in networks, less attention has been paid to ways in which control systems should be designed to optimally mediate computation and communication. Such optimization problems are of interest in the context of quantum computing, and similar problems have recently been discussed in connection with protocols for the assembly of molecular components in synthetic biology.
Recently W.S. Wong has proposed the concept of control communication complexity (CCC) as a formal approach to understanding how a group of distributed agents can choose independent actions from a prescribed "action code book" that cooperatively realize common goals and objectives. A prototypical goal is the computation of a function, and CCC provides a promising new approach to understanding complexity in terms of the cost of realizing a selected evaluation. This lecture will introduce control communication complexity in terms of what are called standard parts optimal control problems. Problems in optimal ensemble-averaged motion sequences and distributed control of dynamical systems defined on Lie groups are discussed.
Distinguished Lecturer
Abstract: Nanometer-length-scale analogues of most traditional control elements, such as sensors, actuators, and feedback controllers, have been enabled by recent advances in device manufacturing and fundamental materials research. However, combining these new control elements in classical systems frameworks remains elusive. Methods that address the new generation of systems issues particular to nanoscale systems are termed here systems nanotechnology. This presentation discusses some promising control strategies and theories that have been developed to address the challenges that arise in systems nanotechnology. Specific examples are provided in which the identification, estimation, and control of complex nanoscale systems have been demonstrated in experimental implementations or in high-fidelity simulations. Some control theory problems are also described that, if resolved, would facilitate further applications.
Abstract: Most high-value products, such as those in the pharmaceutical, microelectronics, and nanotechnology industries, are manufactured in a series of processing steps that operate over finite time. These processes are usually distributed parameter systems in which tight control is required. Computationally efficient methods are proposed for the robust optimal control of finite-time distributed parameter systems (DPS), in which robustness is ensured for either deterministic or stochastic parametric uncertainties. In the deterministic case, the effects of uncertainties on the states and product quality are quantified by power series expansions combined with linear matrix inequality or structured singular value analysis. In the stochastic case, the effects of uncertainties are quantified by power series or polynomial chaos expansions. The robust performance analyses have been incorporated into fixed controllers and model predictive control algorithms. The approaches are illustrated for several application problems.
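The contrast between series-expansion and sampling-based uncertainty quantification can be illustrated with a toy example. The sketch below is not the speaker's implementation; the scalar quality map `f` (batch conversion versus a rate constant) is a hypothetical stand-in. It compares a first-order power-series estimate of the output statistics with a Monte Carlo reference under a Gaussian parametric uncertainty:

```python
import numpy as np

def propagate_first_order(f, dfdtheta, theta0, sigma):
    """First-order power-series estimate of the output mean and
    standard deviation when theta ~ N(theta0, sigma^2)."""
    mean = f(theta0)                       # nominal output
    std = abs(dfdtheta(theta0)) * sigma    # linearized spread
    return mean, std

def propagate_monte_carlo(f, theta0, sigma, n=100_000, seed=0):
    """Sampling-based reference estimate of the same statistics."""
    rng = np.random.default_rng(seed)
    out = f(theta0 + sigma * rng.standard_normal(n))
    return out.mean(), out.std()

# Hypothetical quality map: batch conversion vs. rate constant k
f = lambda k: 1.0 - np.exp(-k)
dfdk = lambda k: np.exp(-k)

m_ps, s_ps = propagate_first_order(f, dfdk, theta0=1.0, sigma=0.05)
m_mc, s_mc = propagate_monte_carlo(f, theta0=1.0, sigma=0.05)
```

For small uncertainties the two estimates agree closely; higher-order power-series or polynomial chaos terms become important as the uncertainty grows or the map becomes strongly nonlinear.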
Abstract: An overview is provided on advances in the control of molecular purity, crystal structure, and particle size distribution of pharmaceutical crystals. A nonlinear feedback control strategy is described that is robust to orders-of-magnitude variations in the crystallization kinetics, which enables the same feedback controller to apply to completely different pharmaceutical compounds by just updating the drug solubility. The control strategy enables the manufacture of large drug crystals of uniform size and specified crystal structure, regardless of whether the crystal structure is thermodynamically stable or metastable. The control strategy forces the closed-loop operations to be within a tightly constrained trajectory bundle that connects the initial states to the desired final states. A modification of the methodology is able to achieve a target crystal size distribution by employing continual manipulation of seeds manufactured using a dual-impinging-jet mixer. The methodology has been evaluated in theoretical, simulation, and experimental studies for a wide variety of pharmaceutical compounds. The presentation ends with a discussion of directions towards the simultaneous control of multiple properties, by the integration of feedback control with process design.
Distinguished Lecturer
Control of complex networks, including unmanned vehicle networks, social networks, and biological systems, is an ever-growing challenge. A standard approach is to directly control a subset of leader nodes, which then influence the remaining (follower) nodes. While the choice of leader nodes is known to impact the performance, controllability, and security of complex networks, efficient algorithms for selecting optimal leaders are currently lacking. In this talk, we give an overview of our ongoing work on leader selection in complex networks. We focus on three design criteria, namely, the robustness of the system to noise in the links between nodes, the time for the follower nodes to converge to their desired state, and the controllability of the follower nodes from the leader nodes. We present a unifying framework based on submodularity, a diminishing-returns property analogous to concavity of real-valued functions, for studying each of these criteria. Our framework enables efficient leader selection based on the criteria above, with provable guarantees on the resulting system performance. Moreover, we generalize our approach to time-varying networks, including networks with random failures, arbitrary topology variations due to node mobility, and attacks by an intelligent adversary targeting one or more links.
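As an illustration of the diminishing-returns property, the sketch below (a generic greedy routine on a small hypothetical graph, not the speakers' algorithms) selects leaders by greedily maximizing a monotone submodular coverage objective; for such objectives the greedy set is guaranteed to be within a factor (1 - 1/e) of optimal:

```python
def greedy_leaders(nodes, k, objective):
    """Greedily add the node with the largest marginal gain of a
    monotone submodular set function `objective`."""
    leaders = set()
    for _ in range(k):
        best = max((n for n in nodes if n not in leaders),
                   key=lambda n: objective(leaders | {n}) - objective(leaders))
        leaders.add(best)
    return leaders

# Hypothetical undirected network, stored as adjacency sets
adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 4}, 3: {1, 5}, 4: {2}, 5: {3}}

def coverage(S):
    """Number of nodes within one hop of a leader: monotone, submodular."""
    covered = set(S)
    for s in S:
        covered |= adj[s]
    return len(covered)

leaders = greedy_leaders(list(adj), k=2, objective=coverage)
```

On this toy graph the greedy pair {0, 3} covers 5 of 6 nodes while the optimal pair {2, 3} covers all 6, comfortably inside the (1 - 1/e) bound; the robustness, convergence-time, and controllability criteria in the talk are handled with set functions of the same submodular type.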
Distinguished Lecturer
The Grenoble Traffic Lab (GTL, http://necs.inrialpes.fr/pages/reseach/gtl.php) initiative is a real-time traffic data center (platform) intended to collect traffic road infrastructure information in real time with minimum latency and fast sampling periods. This lecture covers several aspects of the modeling, forecasting, and control of traffic systems, which are applied to the GTL. In this presentation, we first review the main flow-conservation models that are used as a basis to design physics-oriented forecasting and control algorithms. In particular, we underline fundamental properties such as downstream/upstream controllability and observability of such models, and present a new network setup for analysis. Then we present advances in traffic forecasting using graph-constrained macroscopic models, which substantially reduce the number of possible affine dynamics of the system and preserve the number of vehicles in the network. This model is used to recover the state of the traffic network and precisely localize an eventual congestion front. In the last part of the talk we discuss issues in density balancing control, where the objective is to achieve a homogeneous distribution of density on the freeway using the input flows as decision variables. The study shows that a keystone for the design of balanced states is cooperation between ramp metering and variable speed limit control.
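A minimal sketch of the flow-conservation models mentioned above is the textbook cell transmission model (this is not the GTL code, and all parameter values below are illustrative): each cell's density is updated with interface flows given by the minimum of upstream demand and downstream supply:

```python
import numpy as np

def ctm_step(rho, dt, dx, v=1.0, w=0.5, rho_max=1.0, q_max=0.25, inflow=0.2):
    """One Godunov/cell-transmission update of cell densities rho.
    Interface flow = min(upstream demand, downstream supply)."""
    demand = np.minimum(v * rho, q_max)               # what a cell can send
    supply = np.minimum(w * (rho_max - rho), q_max)   # what a cell can accept
    f = np.empty(len(rho) + 1)
    f[0] = min(inflow, supply[0])                     # boundary/on-ramp inflow
    f[1:-1] = np.minimum(demand[:-1], supply[1:])     # interior interfaces
    f[-1] = demand[-1]                                # free outflow
    return rho + (dt / dx) * (f[:-1] - f[1:])         # vehicle conservation

rho = np.full(5, 0.3)            # initial density on a 5-cell stretch
for _ in range(200):
    rho = ctm_step(rho, dt=0.5, dx=1.0)
```

With demand below capacity, the densities relax to the free-flow equilibrium rho = inflow / v; metering the boundary inflow or changing v (a speed limit) reshapes this profile, which is exactly the handle used by the density balancing controllers discussed in the talk.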
This lecture is devoted to new and challenging control problems arising in the automotive industry as a consequence of the customer-driven performance specifications adopted by car makers, which have dramatically increased the number of newly proposed automated features in which feedback interacts with the driver. The notion of "Fun-to-Drive by Feedback" relates here to the ability to design a control scheme resulting in good ride comfort as well as acceptably safe operation. The lecture shows how control techniques can be used to solve some of these problems, and discusses how these subjective notions can be formalized thanks to concepts such as passivity and model matching control. We present a series of examples concerning systems that provide assisted automated devices (assisted clutch synchronization, steer-by-wire systems, and advanced inter-distance control), in which these aspects are assessed.
Distinguished Lecturer
In many practical systems, control or decision making is triggered by certain events. Performance optimization for such systems generally differs from traditional optimization approaches, such as Markov decision processes or mathematical programming. Because the sequence of events may not possess the Markov property, these traditional approaches may not work for event-based control or optimization problems. In this talk, we discuss a new optimization framework called event-based optimization, which can be applied widely to the aforementioned optimization problems. With performance potentials as building blocks, we develop two intuitive optimization algorithms to solve the event-based optimization problem. The algorithms are based on an intuitive principle: choose the actions with the best long-term effects on the system performance. The theoretical justifications are also discussed, based on the performance difference equation of event-based optimization. We find the conditions under which the intuitive algorithms lead to the optimal event-based policy, and discuss the errors when they do not. Finally, we use practical applications to demonstrate the effectiveness of the event-based optimization framework. We hope that this framework will provide a new perspective on the performance optimization of event-triggered dynamic systems. The talk is based on joint work with Q.S. Jia, Li Xia, and J.Y. Zhang.
One of the difficulties in behavioral analysis is its time-inconsistency due to the distortion of the performance probability; standard dynamic programming fails to work in this area. In this talk, we first give a brief review of the different approaches to stochastic optimization and give an overview of the area from a sensitivity-based point of view. Then we apply this sensitivity-based approach, a powerful alternative to dynamic programming, to solve the portfolio management problem in an environment with probability distortion. We show that after changing the underlying probability measure, the distorted performance becomes locally linear, and thus we discover a property called “monolinearity” of the distorted performance. The derivative of the distorted performance is simply the expectation of the sample-path-based derivative of the performance under this new measure, which can be obtained by perturbation analysis. We also provide simulation algorithms for the derivative of the distorted performance, and hence a gradient-based search algorithm for the optimal policy. We apply this approach to the optimal portfolio selection problem with distorted performance probability and obtain a martingale property of the optimal policy, as well as a closed form for the optimal performance, consistent with results in the literature. We expect that this approach is generally applicable to optimization in other nonlinear behavioral analyses. This talk is based on joint work with Xiangwei Wan.
In the performance analysis of queuing systems, in addition to the Markov property, special structural properties of queuing systems are utilized to obtain closed-form expressions and many other results. Thanks to the efforts of many researchers over many decades, successful examples abound. However, exploiting the special features of queuing systems in performance optimization remains relatively uncultivated territory.
In this talk, we will introduce some of our efforts in this direction over the past three decades. We will show that very efficient algorithms can be developed to estimate the performance gradient and to implement performance optimization. We start from the perturbation analysis developed in the early 1980s, move to the potential-based sensitivity analysis of Markov decision processes proposed in the 1990s, and arrive at the perturbation-realization-based policy iteration of queuing networks developed in recent years. In all these approaches, the strong coupling structure among the servers is utilized, which distinguishes queuing systems from other standard Markov systems. We hope this talk may stimulate interest in this fascinating research direction. This talk is based on joint work with Li Xia.
Motivated by the portfolio management problem, we propose a composite model for Markov processes. The state space of a composite Markov process consists of two parts, J and J', in the Euclidean space R^n. When the process is in J', it evolves like a continuous-time Lévy process; once the process enters J, it makes a jump (of finite size) instantly, according to a transition function, like a discrete-time Markov chain. The composite Markov process provides a new model for the impulse stochastic control problem, with the instant jumps in J modeling the impulse control feature (e.g., selling or buying stocks in the portfolio management problem). With this model, we show that an optimal policy can be obtained by a direct comparison of the performance of any two policies.
Distinguished Lecturer
Abstract: Neuromuscular Electrical Stimulation (NMES) is prescribed by clinicians to aid in the recovery of strength, size, and function of human skeletal muscles, providing physiological and functional benefits for impaired individuals. The two primary applications of NMES are: 1) rehabilitation of skeletal muscle size and function via plastic changes in the neuromuscular system, and 2) activation of muscle to elicit movements that result in functional performance (i.e., standing, stepping, reaching, etc.), termed functional electrical stimulation (FES). In both applications, stimulation protocols of appropriate duration and intensity are critical for favorable results. Automated NMES methods hold the potential to maximize the treatment by self-adjusting to the particular individual (facilitating potential in-home use and enabling positive therapeutic outcomes from less experienced clinicians). Yet the development of automated NMES devices is complicated by the uncertain nonlinear musculoskeletal response to stimulation, including difficult-to-model disturbances such as fatigue. Unfortunately, NMES dosage (i.e., number of contractions, intensity of contractions) is limited by the onset of fatigue and poor muscle response during fatigue. This talk describes recent advances and experimental outcomes of control methods that seek to compensate for the uncertain nonlinear muscle response to electrical stimulation due to physiological variations, fatigue, and delays.
Analytical solutions to the infinite horizon optimal control problem for continuous-time nonlinear systems are generally not possible because they involve solving a nonlinear partial differential equation. Another challenge is that the optimal controller requires exact knowledge of the system dynamics. Motivated by these issues, researchers have recently used reinforcement learning methods that involve an actor and a critic to yield a forward-in-time approximate optimal control design. Methods that also seek to compensate for uncertain dynamics exploit some form of persistence of excitation assumption to yield parameter identification. However, in the adaptive dynamic programming context, this is impossible to verify a priori, and as a result researchers generally add an ad hoc probing signal to the controller that degrades the transient performance of the system. This presentation describes a forward-in-time dynamic programming approach that exploits concurrent learning tools, in which the adaptive update laws are driven by current state information and recorded state information to yield approximate optimal control solutions without the need for ad hoc probing. A unique desired goal sampling method is also introduced as a means to address the classical exploration versus exploitation conundrum. Applications are presented for autonomous systems including robot manipulators, underwater vehicles, and fin-controlled cruise missiles. Solutions are also developed for networks of systems, where the problem is cast as a differential game in which a Nash equilibrium is sought.
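A scalar flavor of the concurrent-learning idea can be sketched as a parameter estimator for y = theta^T phi whose update sums the instantaneous error with errors replayed over a recorded data stack, so that a rich memory replaces the persistence-of-excitation requirement. This is a hedged illustration, not the presenter's update laws; the regression target and data stream are hypothetical:

```python
import numpy as np

def concurrent_learning(phis, ys, memory_size=10, gamma=0.02):
    """Gradient update driven by the current sample plus a stack of
    recorded (phi, y) pairs; converges when the stack spans the
    parameter space, even if later inputs stop being exciting."""
    theta = np.zeros(len(phis[0]))
    stack = []
    for phi, y in zip(phis, ys):
        if len(stack) < memory_size:
            stack.append((phi, y))       # record early, informative data
        grad = (y - theta @ phi) * phi   # instantaneous term
        for pj, yj in stack:             # recorded-data (concurrent) term
            grad += (yj - theta @ pj) * pj
        theta += gamma * grad
    return theta

# Hypothetical target theta* = [2, -1]; the input stops exciting at t = 20
rng = np.random.default_rng(0)
theta_star = np.array([2.0, -1.0])
phis = [rng.standard_normal(2) if t < 20 else np.array([1.0, 0.0])
        for t in range(500)]
ys = [float(theta_star @ p) for p in phis]
theta_hat = concurrent_learning(phis, ys)
```

A purely instantaneous gradient update would stall on the constant input after t = 20, leaving the second component unidentified; the recorded stack keeps both components converging.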
Distinguished Lecturer
The last few years have seen significant progress in our understanding of how one should structure multi-robot systems. New control, coordination, and communication strategies have emerged, and in this talk we summarize some of these developments. In particular, we will discuss how to go from local rules to global behaviors in a systematic manner in order to achieve distributed geometric objectives, such as achieving and maintaining formations, area coverage, and swarming behaviors. We will also investigate how users can interact with networks of mobile robots in order to inject new information and objectives. The efficacy of these interactions depends directly on the interaction dynamics and the structure of the underlying information-exchange network. We will relate these network-level characteristics to controllability and manipulability notions in order to produce effective human-swarm interaction strategies.
When programming robots to perform tasks, one is inevitably forced to make abstractions at different levels in order to answer questions such as “What should the robot be doing?” and “How should it be doing it?” In this talk, we draw inspiration from choreography in order to produce these abstractions in a systematic manner. Manifestations of this idea include robotic marionettes and humanoid dancing robots that execute complex motions, and we will develop a formal framework for specifying and executing such motions by combining tools and techniques from hybrid optimal control theory and linear temporal logic.
Distinguished Lecturer
The atomic force microscope (AFM) opens a new window onto the nanoworld. It features high resolution in vacuum, gas, or liquid operating environments, and has now become a widely used tool in sectors such as nanoscale measurement and machining, biotechnology, and medical testing. Many top research institutions around the world have put substantial effort into this area.
Most conventional AFMs use piezoelectric devices to achieve their positioning. Even though the precision reaches the nanometer level, they have a serious problem: short travel range. In addition, when they are operated in liquids, large measurement errors occur, mainly because the damping is substantially increased when the cantilever is immersed in liquid. To overcome the problems of short travel range and large measurement error in liquid environments in conventional AFMs, a long-travel-range, two-state (gas and liquid) AFM needs to be designed and developed, one that has equal precision in both operating environments and is capable of scanning areas up to the millimeter level. Since electromagnetic actuators can provide long travel range and piezoelectric actuators can offer high precision, the first of the aforementioned problems can be overcome by integrating these two advantages, which accounts for the need to introduce hybrid control in the resulting AFM. On the other hand, because the liquid operating environment strongly affects the cantilever scan, coping with the lower resolution of AFM measurements made in liquids, as compared with those made in gases, becomes imperative. An adaptive Q controller has therefore been developed to resolve such problems.
Visual tracking in a dynamic environment has drawn a great deal of attention in recent years. It spans a wide research spectrum, including access control, human or vehicle detection and identification, detection of anomalous behaviors, crowd statistics and congestion analysis, and human-machine interaction in an intelligent space.
Many problems in science require estimation of the state of a system that changes over time, using a sequence of noisy measurements made on the system. Bayesian filtering provides a rigorous general framework for such dynamic state estimation problems, and visual tracking is one of them. The sensed sequence of 2D image data, which may lose some 3D information and is usually noisy, includes the target, similar objects, and cluttered background. A visual tracker designed in the spirit of the Bayesian filter can overcome these problems and achieve successful multi-target tracking even when the objects interact in a group.
To utilize the information obtained from visual tracking, for example in visual servoing of robot motion or human-machine interaction, each image frame must be processed as fast as possible; otherwise the information may change during the processing interval. To reduce the computational time, the dimension of the image processing can be efficiently constrained by spatial and temporal hypotheses generated from the predictions of a target motion model. Other sampling schemes, such as particle filtering, can also be employed to achieve real-time tracking.
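The predict / weight / resample cycle that such sampling schemes use can be shown with a minimal bootstrap particle filter for a one-dimensional random-walk target. This is a generic textbook sketch with simulated measurements, not the tracker described in the talk:

```python
import numpy as np

def bootstrap_pf(measurements, n_particles=500, q=0.1, r=0.5, seed=0):
    """SIR particle filter for x_k = x_{k-1} + w_k, z_k = x_k + v_k,
    with w ~ N(0, q^2) process noise and v ~ N(0, r^2) sensor noise."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)        # diffuse prior
    estimates = []
    for z in measurements:
        particles = particles + q * rng.standard_normal(n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)   # measurement likelihood
        w /= w.sum()
        particles = rng.choice(particles, size=n_particles, p=w)  # resample
        estimates.append(particles.mean())              # point estimate
    return np.array(estimates)

# Simulated 1-D track with noisy observations
rng = np.random.default_rng(1)
truth = np.cumsum(0.1 * rng.standard_normal(100))
z = truth + 0.5 * rng.standard_normal(100)
est = bootstrap_pf(z)
```

The filtered track is markedly closer to the true trajectory than the raw measurements; in visual tracking the Gaussian likelihood is replaced by image cues (color, edges, templates) evaluated at each particle.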
Another issue in this talk is wide-area surveillance with active camera tracking and multi-camera cooperation. Since the field of view of one camera is limited, a camera platform is usually equipped with some degrees of freedom to extend its observing range. The observing range can be extended further by combining multiple active cameras. The overall surveillance capability of the entire set of cameras can be effectively utilized through well-designed strategies for camera task assignment and camera action selection. By integrating distributed camera agents, which can track targets alone and collaborate tightly with other agents, the prospect of seamless tracking can be realized stage by stage.
Human motion analysis from image sequences has many important applications in the fields of computer vision and robotics. With vision-based interpretation of human motion in image sequences, human-machine interaction can be experienced more naturally, without hand-held devices. Therefore, this talk will start with the introduction of a two-hand tracking method using a monocular camera. To cope with self-occlusion, similar-color distractors, and cluttered environments, the methodology is based on several kinds of image cues, including locally discriminative color, motion history, gradient orientation features, and depth order reasoning. Specifically, the multiple importance sampling (MIS) particle filter generates tracking hypotheses of merged targets using the skin blob mask and the depth order estimation. These merged hypotheses are then evaluated using the visual cues of the occluded face template, hand shape orientation, and motion continuity. Next, the talk will address pose estimation, where the MIS particle filter is used to integrate multiple cues to track both arms under arbitrary motion. Due to the lack of depth information, a sequential pose estimation based on structure-from-motion (SFM) is proposed to estimate the 3D posture of the arms online. With a reliable tracking methodology, we feed the tracking result into hidden Markov models (HMM) to spot and recognize the performed action online.
Distinguished Lecturer
Feedback is ubiquitous and is a core concept in control systems, where the main objective of feedback is to deal with the influences of various uncertainties on the performance of the dynamical systems to be controlled. Although much progress has been made in control theory over the past 50 years, especially in such areas as adaptive control and robust control, the following fundamental problem remains less explored: What are the maximum capability and limitations of the feedback mechanism in dealing with uncertainties? The feedback mechanism is defined as the class of all possible feedback laws (i.e., not restricted to a particular subclass), and the maximum capability of feedback is measured by the maximum size of uncertainty that can be dealt with by the feedback mechanism. In this lecture, we will work with discrete-time (or sampled-data) nonlinear dynamical control systems with both structural and environmental uncertainties. We will reveal and prove some “Critical Values” and “Impossibility Theorems” concerning the maximum capability of the feedback mechanism for several basic classes of control systems. We will also show how the stochastic imbedding approach can be used in our investigation and how important the sensitivity function is in characterizing the capability of feedback.
A fundamental issue in complex systems theory is to understand how locally interacting agents (or particles) lead to global behaviors (or structures) of the systems. Such problems arise naturally in diverse fields ranging from the material and life sciences to social and engineering systems, and have attracted much research attention in recent years. In this lecture, we will focus on the synchronization problem for a basic class of non-equilibrium multi-agent systems (or flocks) described by the well-known Vicsek model. By working in a stochastic framework and by overcoming the widely recognized theoretical difficulty of establishing the kind of dynamical connectivity needed to guarantee synchronization of the flocks, we are able to provide a rigorous and fairly complete theory for the synchronization of flocks with large populations. The main theorems are established based on analyses of the nonlinear dynamical equations involved and of the asymptotic properties of the spectrum of random geometric graphs. Furthermore, we will show how the global behaviors of the flocks may be influenced by using the “soft control” idea, without changing the existing interaction rules of the agents.
In traditional control theory, the plant to be controlled usually does not have its own payoff function. However, this is not the case for the control or regulation of many important social and economic systems, where the systems or subsystems to be regulated may have their own objectives, which may differ from that of the global regulator or controller; we may call such systems game-based control systems. This lecture will explore the characteristics and properties of a class of game-based control systems, and will give some preliminary theoretical results concerning optimization, adaptation, and cooperation.
Distinguished Lecturer
Reinforcement Learning Structures for Real-Time Optimal Control and Differential Games
F. L. Lewis, National Academy of Inventors
IEEE Fellow, IFAC Fellow, Fellow UK Inst. Measurement & Control
Moncrief-O’Donnell Endowed Chair
Head, Advanced Controls & Sensors Group
UTA Research Institute
The University of Texas at Arlington, USA
This talk will discuss some new adaptive control structures for learning online the solutions to optimal control problems and multi-player differential games. Techniques from reinforcement learning are used to design a new family of adaptive controllers based on actor-critic learning mechanisms that converge in real time to optimal control and game-theoretic solutions. Continuous-time systems are considered.
Optimal feedback control design has been responsible for much of the successful performance of engineered systems in aerospace, industrial processes, vehicles, ships, robotics, and elsewhere since the 1960s. H-infinity control has been used for the robust stabilization of systems with disturbances. Optimal feedback control design is performed offline by solving optimal design equations, including the algebraic Riccati equation (ARE) and the game ARE. It is difficult to perform optimal designs for nonlinear systems since they rely on solutions to complicated Hamilton-Jacobi-Bellman (HJB) or Hamilton-Jacobi-Isaacs (HJI) equations. Finally, optimal design generally requires that the full system dynamics be known.
Optimal Adaptive Control. Adaptive control has provided powerful techniques for online learning of effective controllers for unknown nonlinear systems. In this talk we discuss online adaptive algorithms for learning optimal control solutions for continuous-time linear and nonlinear systems. This is a novel class of adaptive control algorithms that converge to optimal control solutions by online learning in real time. In the linear quadratic (LQ) case, the algorithms learn the solution to the ARE by adaptation along the system motion trajectories. In the case of nonlinear systems with general performance measures, the algorithms learn the (approximate smooth local) solutions of HJB or HJI equations. The algorithms are based on actor-critic reinforcement learning techniques. Methods are given that adapt to optimal control solutions without knowing the full system dynamics. Application of reinforcement learning to continuous-time (CT) systems has been hampered because the system Hamiltonian contains the full system dynamics. Using a technique known as Integral Reinforcement Learning (IRL), we will develop reinforcement learning methods that do not require knowledge of the system drift dynamics.
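A minimal sketch of the IRL idea on a scalar LQ problem, with illustrative numbers (not from the lecture): the plant is simulated to generate measurements, but the policy evaluation and improvement steps use only the measured states, the integrated reward, and the input coefficient b, never the drift coefficient a.

```python
import math

# IRL policy iteration on a scalar LQ problem (illustrative parameters).
# Plant x_dot = a*x + b*u is simulated; the learner never uses a directly.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
T = 0.1                      # reinforcement interval
K = 2.0                      # initial stabilizing gain (a - b*K < 0)

for _ in range(20):
    c = a - b * K            # closed-loop pole (inside the simulated plant only)
    x0 = 1.0
    xT = x0 * math.exp(c * T)                      # measured state at t + T
    # Measured integral reward over [t, t+T] along the closed-loop trajectory:
    rho = (q + r * K**2) * x0**2 * (math.exp(2*c*T) - 1.0) / (2.0 * c)
    # IRL Bellman equation  P*(x0^2 - xT^2) = rho  gives policy evaluation:
    P = rho / (x0**2 - xT**2)
    K = b * P / r            # policy improvement (uses b, not the drift a)

print(round(P, 4), round(K, 4))   # converges to the ARE solution 1 + sqrt(2)
```

Each pass is one policy-evaluation/policy-improvement step; the fixed point of the iteration is the optimal LQ solution computed offline from the ARE.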
Online Algorithms for Zero-Sum Games. We will develop new adaptive control algorithms for solving zero-sum games online for continuous-time dynamical systems. Methods based on reinforcement learning policy iteration will be used to design adaptive controllers that converge to the H-infinity control solution in real time. An algorithm will be given for partially known systems where the drift dynamics are not known.
Cooperative/Non-Cooperative Multi-Player Differential Games. New algorithms will be presented for solving online non-zero-sum multiplayer games for continuous-time systems. We use an adaptive control structure motivated by reinforcement learning policy iteration. Each player maintains two adaptive learning structures, a critic network and an actor network. The parameters of these two networks are tuned based on the actions of the other players in the team. The result is an adaptive control system that learns based on the interplay of agents in a game, delivering true online gaming behavior.
Optimal Design for Cooperative Control Synchronization and Games on Communication Graphs
F. L. Lewis, National Academy of Inventors
IEEE Fellow, IFAC Fellow, Fellow UK Inst. Measurement & Control
Moncrief-O’Donnell Endowed Chair
Head, Advanced Controls & Sensors Group
UTA Research Institute
The University of Texas at Arlington, USA
ABSTRACT
Distributed systems of agents linked by communication networks only have access to information from their neighboring agents, yet must achieve global agreement on team activities to be performed cooperatively. Examples include networked manufacturing systems, wireless sensor networks, networked feedback control systems, and the Internet. Sociobiological groups such as flocks, swarms, and herds have built-in mechanisms for cooperative control wherein each individual is influenced only by its nearest neighbors, yet the group achieves consensus behaviors such as heading alignment, leader following, exploration of the environment, and evasion of predators. Charles Darwin showed that local interactions between population groups over long time scales lead to global results such as the evolution of species.
In this talk we present design methods for cooperative controllers for distributed systems. The developments are for general directed graph communication structures, for both continuous-time and discrete-time agent dynamics. Cooperative control design is complicated by the fact that the graph topology properties limit what can be achieved by the local controller design. Thus, local controller designs may work properly on some communication graph topologies yet fail on other topologies. Our objective is to provide local agent feedback design methods that are independent of the graph topology and so function on a wide range of graph structures.
An optimal design method for local feedback controllers is given that decouples the control design from the graph structural properties. In the case of continuous-time systems, the optimal design method guarantees synchronization on any graph with suitable connectedness properties. In the case of discrete-time systems, a condition for synchronization is that the Mahler measure of the unstable eigenvalues of the local systems be restricted by the condition number of the graph. Thus, graphs with better topologies can tolerate a higher degree of inherent instability in the individual node dynamics.
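For readers unfamiliar with the Mahler measure, it is the product of the magnitudes of a system matrix's eigenvalues outside the unit circle, a standard measure of inherent instability. A small sketch with an illustrative 2x2 matrix:

```python
import math

# Mahler measure of an illustrative 2x2 discrete-time system matrix:
# the product of |eigenvalue| over eigenvalues outside the unit circle.
A = [[1.2, 0.3],
     [0.0, 0.5]]

tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr * tr - 4.0 * det
if disc >= 0:
    lams = [(tr + math.sqrt(disc)) / 2.0, (tr - math.sqrt(disc)) / 2.0]
    mags = [abs(l) for l in lams]
else:                        # complex-conjugate pair: |lambda| = sqrt(det)
    mags = [math.sqrt(det)] * 2

mahler = 1.0
for m in mags:
    mahler *= max(1.0, m)

print(round(mahler, 6))      # 1.2: only the unstable eigenvalue contributes
```

A Mahler measure of 1 means the system is (marginally) stable; larger values indicate the degree of instability the graph topology must be able to tolerate.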
A theory of duality between controllers and observers on communication graphs is given, including methods for cooperative output feedback control based on cooperative regulator designs.
In Part 2 of the talk, we discuss graphical games. Standard differential multiagent game theory has centralized dynamics affected by the control policies of multiple agent players. We give a new formulation for games on communication graphs. Standard definitions of Nash equilibrium are not useful for graphical games, since agents can be in Nash equilibrium yet fail to achieve synchronization. A strengthened definition of Interactive Nash equilibrium is given that guarantees that all agents are participants in the same game, and that all agents achieve synchronization while optimizing their own value functions.
Stability vs. Optimality of Cooperative Multiagent Control
F. L. Lewis, National Academy of Inventors
IEEE Fellow, IFAC Fellow, Fellow UK Inst. Measurement & Control
Moncrief-O’Donnell Endowed Chair
Head, Advanced Controls & Sensors Group
UTA Research Institute
The University of Texas at Arlington, USA
ABSTRACT
Distributed systems of agents linked by communication networks only have access to information from their neighboring agents, yet must achieve global agreement on team activities to be performed cooperatively. Examples include networked manufacturing systems, wireless sensor networks, networked feedback control systems, and the Internet. Sociobiological groups such as flocks, swarms, and herds have built-in mechanisms for cooperative control wherein each individual is influenced only by its nearest neighbors, yet the group achieves consensus behaviors such as heading alignment, leader following, exploration of the environment, and evasion of predators. Charles Darwin showed that local interactions between population groups over long time scales lead to global results such as the evolution of species.
Natural decision systems incorporate notions of optimality, since the resources available to organisms and species are limited. This talk investigates relations between the stability of cooperative control and optimality of cooperative control.
Stability. A method is given for the design of cooperative feedbacks for the continuous-time multiagent tracker problem (also called pinning control or leader-following) that guarantees stable synchronization on arbitrary graphs with spanning trees. It is seen that this design is a locally optimal control with infinite gain margin. In the case of the discrete-time cooperative tracker, local optimal design yields stability on graphs that satisfy an additional restriction based on the Mahler instability measure of the local agent dynamics.
Optimality. Global optimal control of distributed systems is complicated by the fact that, for general LQR performance indices, the resulting optimal control is not distributed in form. Therefore, it cannot be implemented on a prescribed communication graph topology. A condition is given for the existence of optimal controllers that can be implemented in distributed fashion. This condition shows that, for global optimal controllers of distributed form to exist, the performance index weighting matrices must be selected to depend on the graph structure.
Distinguished Lecturer
Air-breathing hypersonic vehicles (HSVs) are intended to be a reliable and cost-effective technology for access to space. In the past few years, a considerable research effort has been spent to further their development and design. Notwithstanding the recent success of NASA's X-43A and AFRL's X-51 experimental vehicles, the design of robust guidance and control systems for HSVs is still an open problem, due to the complexity of the dynamics and the unprecedented level of coupling between the airframe and the propulsion system. The slender geometries and light structures required for these aircraft cause significant flexible effects, and a strong coupling between propulsive and aerodynamic forces results from the integration of the scramjet engine. Because of the variability of the vehicle characteristics with flight conditions, significant uncertainties affect the models, which are known to be unstable and non-minimum phase with respect to the regulated output. Finally, the presence of unavoidable constraints on the control inputs renders the design of robust control systems an even harder endeavor, especially concerning control of the propulsion system. In this talk, we give an overview of fundamental issues in control-oriented modeling and control system design for HSVs, and present an account of the state of the art in nonlinear and adaptive control methodologies that have been developed and applied in the past few years to address these issues. In particular, we present a flight control system architecture that comprises a robust adaptive inner-loop controller and a self-optimizing guidance system. Finally, we discuss open problems and current research directions.
An established benchmark problem in active aerodynamic flow control is the suppression of pressure oscillations induced by flow over a shallow cavity. In this talk, we summarize the research activity of the Gas Dynamics and Turbulence Lab at The Ohio State University that has been devoted in the past few years to the design and experimental evaluation of model-based controllers for subsonic cavity flows. Proper orthogonal decomposition and Galerkin projection techniques are used to obtain a reduced-order model of the flow dynamics from experimental data. The model is made amenable to control design by means of an optimization-based control separation technique, which makes the control input appear explicitly in the equations. The design of a two-DOF controller based on the reduced-order model and its experimental validation are presented. An inner-loop controller based on mixed-sensitivity minimization with delay compensation is used to linearize the actuator response. An adaptive outer-loop controller based on extremum-seeking optimization is employed to suppress the cavity tones from pressure measurements. Experimental results, in qualitative agreement with the theoretical analysis, show that the controller achieves a significant attenuation of the resonant tone with a redistribution of the energy into other frequencies. The benefits of parameter adaptation over controllers of fixed structure under varying or uncertain flow conditions are also demonstrated experimentally.
Internal model-based control for nonlinear systems has experienced a vigorous growth in the past decade. The classic solution relies on augmenting the plant model with a suitable "nonlinear copy" of the model of the disturbance generator and, as a second stage, on designing a robust stabilizing unit that achieves stabilization of an error-zeroing manifold with a prescribed domain of attraction. The closely related approach of designing the stabilizing unit first, and then looking for a suitable adaptive feedforward control to offset the disturbance (a technique known as AFC in linear systems), has received little or no attention for nonlinear systems. In this talk, we investigate the problem of adaptive feedforward compensation for a class of nonlinear systems, namely that of input-to-state (and locally exponentially) convergent systems. It is shown how, under a set of assumptions reminiscent of those found in the LTI literature, the proposed scheme succeeds in achieving rejection of a harmonic disturbance at the input of a convergent nonlinear system, with a semiglobal domain of convergence. The suitability of the proposed solution is demonstrated by combining classic results from averaging analysis with modern techniques for semiglobal stabilization.
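The AFC idea can be illustrated in its simplest setting, a stable scalar linear plant with a single harmonic disturbance (all parameters below are illustrative assumptions, far simpler than the nonlinear convergent-systems setting of the talk): a gradient law adapts the feedforward coefficients until the sinusoidal input disturbance is cancelled.

```python
import math

# Adaptive feedforward compensation sketch (illustrative scalar example):
# plant x_dot = -x + d(t) - u_ff, disturbance d(t) = d1*sin(w t) + d2*cos(w t)
# with unknown coefficients d1, d2; feedforward u_ff = th1*sin + th2*cos is
# adapted by the gradient law th_dot = gamma * x * [sin, cos].
w, gamma, dt = 1.0, 5.0, 1e-3
d1, d2 = 2.0, 1.0                 # "unknown" disturbance coefficients
x, th1, th2, t = 0.0, 0.0, 0.0, 0.0

for _ in range(100_000):          # 100 s of forward-Euler integration
    s, c = math.sin(w * t), math.cos(w * t)
    d = d1 * s + d2 * c
    u_ff = th1 * s + th2 * c
    x += dt * (-x + d - u_ff)     # plant state
    th1 += dt * gamma * x * s     # gradient adaptation
    th2 += dt * gamma * x * c
    t += dt

print(round(th1, 2), round(th2, 2))   # estimates approach d1, d2
```

A Lyapunov argument (V = x^2/2 + ((d1-th1)^2 + (d2-th2)^2)/(2*gamma), giving V_dot = -x^2) shows the error decays for any positive gain in this scalar case.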
Distinguished Lecturer
The talk will discuss the contrasting possibilities of active and passive control both abstractly and in the context of automotive suspensions. It will be shown how systems and control thinking can highlight design tradeoffs and suggest new approaches which otherwise can remain hidden. The expanded possibilities for passive control using the "inerter" mechanical device will be discussed. The talk will be informed by examples of practice in Formula One racing.
The motivation is explained for revisiting certain questions in circuit theory, in particular the synthesis theory of "one-port" RLC networks (i.e., those with a pair of external driving terminals) and the questions of minimality associated with the Bott-Duffin procedure. The motivation relates to the synthesis of passive mechanical impedances and the need for a new ideal modelling element, the "inerter".
Classical results from electrical circuit synthesis are reviewed, including the procedures of Foster, Cauer, Brune, Darlington, and Bott and Duffin. The reactance theorem of Foster for lossless networks and the Bott-Duffin construction for arbitrary positive-real functions are highlighted.
Recent work on classical network synthesis is described. This includes the new concept of regularity for positive-real functions and its use to aid the classification of low-complexity networks. The important theorem of Reichert will be described, which proves that non-minimality in the sense of modern systems theory is essential for the RLC realisation of some positive-real functions. New results on the Bott-Duffin procedure will be presented.
Distinguished Lecturer
As far back as 1963, Benoit Mandelbrot (who sadly passed away just a few weeks ago) pointed out that asset price movements in the real world don't follow the Gaussian distribution. Instead they are "heavy-tailed": they display a kind of self-similarity and scale-invariance. Since then, similar patterns have been observed in extreme weather such as rainfall and, more recently, in Internet traffic. Recent research in "pure" probability theory shows that heavy-tailed random variables have some very unusual properties. For instance, if we average many observations of such variables, the averages move in a few large bursts instead of moving smoothly. Such behavior has indeed been observed in the stock market.
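The burstiness of heavy-tailed averages can be illustrated numerically. The sketch below (tail index and sample size are arbitrary choices, not from the talk) compares the share of the total carried by the largest 1% of observations for Pareto versus uniform samples; for heavy tails, a handful of huge observations dominates the sum, which is exactly why the running average jumps in bursts.

```python
import random

# Compare how much of the total sum the top 1% of observations carries,
# for heavy-tailed (Pareto, tail index 1.5) vs. light-tailed (uniform) data.
# The Pareto samples are generated by inverse-CDF sampling: U^(-1/alpha).
random.seed(1)
alpha, n = 1.5, 100_000

pareto = sorted(random.random() ** (-1.0 / alpha) for _ in range(n))
uniform = sorted(random.random() for _ in range(n))

def top_share(xs, frac=0.01):
    """Fraction of the sum contributed by the largest `frac` of samples."""
    k = int(len(xs) * frac)
    return sum(xs[-k:]) / sum(xs)

print(f"Pareto top 1% share:  {top_share(pareto):.3f}")
print(f"uniform top 1% share: {top_share(uniform):.3f}")
```

For the uniform samples the top 1% carries roughly its "fair" 2% of the sum; for the Pareto samples it carries an order of magnitude more.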
The pervasiveness of heavy-tailed distributions in so many diverse arenas has implications for modeling and risk mitigation. How do we design Internet traffic networks and storage servers if the volume of traffic is heavy-tailed? How do we hedge our equity positions if asset prices move in a heavy-tailed manner?
In this talk I will describe the issues involved through a combination of intuitive arguments, visualizations, and formal mathematics. My hope is to inspire practicing engineers to become familiar with this fascinating class of models, and theoretical researchers to study the many open problems that still remain.
Distinguished Lecturer
One of the main challenges in networked control systems is the analysis and synthesis of control over limited-rate feedback channels. The problem of the minimum data rate for stabilization of linear systems has attracted significant interest in the past decade, and it is now well known that under perfect communications the minimum data rate is determined by the unstable eigenvalues of the open-loop system. Another important issue in networked control is the uncertainty induced by the network, such as packet losses. In this lecture, we shall discuss the minimum data rate for mean-square stabilization over lossy networks. The packet loss process is modeled as an i.i.d. or Markov process. We show that the minimum data rate can be given explicitly in terms of the unstable eigenvalues of the open-loop system and the packet loss rate in the i.i.d. case, or the transition probabilities of the Markov chain in the Markovian case. The number of additional bits required to counter the effect of packet losses on stabilizability is completely quantified. We shall also discuss the problem of the minimum channel capacity for mean-square stabilization of linear systems.
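For the lossless case, the well-known minimum data rate is the sum of log2 of the unstable eigenvalue magnitudes of the open-loop system. A one-line computation with illustrative eigenvalues:

```python
import math

# Lossless minimum data rate for stabilization (in bits per sample):
# R_min = sum of log2|lambda| over eigenvalues outside the unit circle.
# The spectrum below is illustrative.
eigenvalues = [2.0, 1.25, 0.8, 0.5]   # open-loop spectrum (discrete time)

R_min = sum(math.log2(abs(l)) for l in eigenvalues if abs(l) > 1.0)
print(round(R_min, 6))   # log2(2) + log2(1.25) = 1.321928
```

The lecture's contribution is the extra bits needed beyond this baseline when the channel also drops packets.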
The problem of estimation and control over a communication network has attracted recurring interest in recent years, owing to the growing number of applications in which communication networks connect sensors, controllers, and actuators. While using communication networks to transmit data and signals has many advantages, the limited data rate and network uncertainties such as packet losses pose significant challenges for analysis and design. In particular, in wireless sensor networks, power consumption is a critical design factor, and communication costs much more energy than computation, so there is a clear motivation for minimizing communication. In this lecture, we shall discuss quantized estimation and the stability of Kalman filtering over lossy channels. For the quantized estimation problem, we focus on how to jointly design the quantizer and estimator to minimize the mean-square estimation error. Under the Gaussian assumption on the predicted density, the quantized MMSE filter is shown to have a form similar to the Kalman filter, with the raw measurement simply replaced by its quantized version. Quantization effects are explicitly quantified in terms of the number of quantization levels and the quantization thresholds. The stability of the quantized estimator in relation to the system dynamics is examined. We then discuss the stability of Kalman filtering over a network subject to random packet losses, which are modeled by a time-homogeneous ergodic Markov process. Necessary and sufficient conditions for stability of the mean estimation error covariance matrices are derived by taking into account the system structure. Stability criteria are expressed as simple inequalities in terms of the largest eigenvalue of the open-loop matrix and the transition probabilities of the Markov process. Their implications and relationships with related results in the literature are discussed.
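The effect of packet losses on estimation can be sketched with the scalar expected-covariance ("modified Riccati") recursion for i.i.d. losses; all parameters below are illustrative, and the i.i.d. scalar case is much simpler than the Markovian matrix setting of the lecture.

```python
# Scalar modified-Riccati iteration for Kalman filtering with i.i.d. packet
# losses: the measurement update is applied only with arrival probability lam.
# Illustrative parameters:
a, q, r = 1.2, 1.0, 1.0        # unstable scalar system, noise variances
lam = 0.8                      # packet arrival probability

# For a scalar system the critical arrival rate is lam_c = 1 - 1/a^2:
lam_c = 1.0 - 1.0 / a**2
print(lam > lam_c)             # True: expected covariance stays bounded

p = 1.0
for _ in range(500):
    # Expected prediction-error variance, averaging over packet arrivals:
    p = a * a * p + q - lam * (a * a * p * p) / (p + r)
print(p < 100.0)               # True: the iteration settles to a finite value
```

If lam were pushed below lam_c (about 0.31 here), the same iteration would diverge, illustrating the threshold behavior the lecture quantifies.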
Multiagent cooperation involves a collection of decision-making components with limited processing, limited sensing, and limited communication capabilities, all seeking to achieve a collective objective. Well-known examples include mobile sensor networks for environmental monitoring and surveillance and multi-UAV (unmanned aerial vehicle) formation flight. The distributed nature of information processing, sensing, and actuation makes these applications a significant departure from the traditional centralized control system paradigm. In this lecture, we shall discuss the joint effects of agent dynamics, network topology, and communication data rate on the consensusability of linear discrete-time multiagent systems. Neglecting the finite data rate constraint, a necessary and sufficient condition for consensusability under a set of distributed control protocols is given, which explicitly reveals how the intrinsic entropy rate of the agent dynamics and the communication graph affect consensusability. The result is established by solving a discrete-time simultaneous stabilization problem. A lower bound on the optimal convergence rate to consensus, which is shown to be tight in some special cases, is given as well. The consensus problem under a finite communication data rate is also investigated. We shall present a systematic approach to the design of the encoder, decoder, and control protocol to achieve exact consensus. The consensus convergence rate in relation to the bit rate, network synchronizability, and the size of the network is established. The implementation of the algorithm on a real multi-robot system as well as on a virtual platform will be demonstrated.
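A minimal consensus sketch for single-integrator agents on an illustrative four-agent ring graph, using the standard nearest-neighbor averaging protocol (no data-rate constraint, and far simpler dynamics than the general linear agents of the lecture):

```python
# Discrete-time consensus on an illustrative 4-agent ring graph using the
# standard protocol  x_i <- x_i + eps * sum over neighbors j of (x_j - x_i).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [1.0, 3.0, 5.0, 7.0]       # initial agent states
eps = 0.25                     # step size; eps < 1/(max degree) gives stability

for _ in range(200):
    x = [x[i] + eps * sum(x[j] - x[i] for j in neighbors[i])
         for i in range(4)]

avg = sum([1.0, 3.0, 5.0, 7.0]) / 4   # symmetric weights preserve the average
print(all(abs(xi - avg) < 1e-6 for xi in x))  # True: states reach the average
```

Because the update weights are symmetric, the protocol preserves the state average, so the agents converge to the mean of their initial values; the convergence rate is governed by the graph Laplacian's spectrum, the quantity the lecture relates to bit rate and network size.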