Using concepts from differential topology and information theory, I shall describe a theoretical framework for search strategies aimed at rapid discovery of topological features (locations of critical points and critical level sets) of a priori unknown differentiable random fields. The theory enables the study of efficient reconnaissance strategies in which the tradeoff between speed and accuracy can be understood. The proposed approach to rapid discovery of topological features has led in a natural way to the creation of parsimonious reconnaissance routines that do not rely on any prior knowledge of the environment. The design of topology-guided search protocols uses a mathematical framework that quantifies the relationship between what has been discovered and what remains to be discovered. The quantification rests on an information-theory-inspired model whose properties allow us to treat search as a problem in optimal information acquisition.
Distinguished Lecturers Program
Program Description
The Control Systems Society is continuing to fund a Distinguished Lecture Series.
The primary purpose is to help Society chapters provide interesting and informative programs for the membership, but the Distinguished Lecture Series may also be of interest to industry, universities, and other parties.
The Control Systems Society has agreed to a cost sharing plan which may be used by IEEE Chapters, sections, subsections, and student groups. IEEE student groups are especially encouraged to make use of this opportunity to have excellent speakers at moderate cost.
At the request of a Society Chapter (or other IEEE groups as mentioned above), a lecture will be scheduled at a place and time that is mutually agreeable to both the Chapter and the Distinguished Lecturer. Eighty percent (80%) of the funds for the normal travel expenses for a lecture will be paid by the Society; the remaining travel expenses will be provided by the chapter. Lecturers will receive no honorarium. Note that the group organizing the lecture must have some IEEE affiliation, and the lecture must be free for IEEE members to attend.
The Society will provide 80% of the expenses for qualified users of the program, up to a maximum of $1000 paid by the Society for a within-continent visit and $2000 for an intercontinental trip. The speakers are geographically distributed, so these limits should be adequate for trips to any part of the world. Travel outside of North America is encouraged, provided that the Society is not expected to spend in excess of $2000. Effective January 1, 2014, new limits of $2000 (instead of $1000) and $4000 (instead of $2000) apply.
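The cost-sharing rule above can be summarized in a small illustrative calculation. This is only a sketch of the stated arithmetic; the function name, its defaults, and the return convention are not part of the official program procedures:

```python
def css_reimbursement(travel_cost, intercontinental, after_2014=True):
    """Estimate the CSS share of a Distinguished Lecturer trip.

    The Society pays 80% of normal travel expenses, capped at a
    per-trip limit that depends on whether the trip is
    intercontinental and on the post-2014 limit increase.
    """
    if after_2014:
        cap = 4000 if intercontinental else 2000
    else:
        cap = 2000 if intercontinental else 1000
    society_share = min(0.8 * travel_cost, cap)
    chapter_share = travel_cost - society_share  # plus all local costs
    return society_share, chapter_share

# Example: a $1500 within-continent trip after 2014 -> the Society
# pays min(0.8 * 1500, 2000) = 1200 and the chapter covers the rest.
```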
Procedures
When you wish to use this program, you may contact the speaker directly to make arrangements. You must then submit a formal proposal to the Distinguished Lecturer Program Chair for his/her approval. The proposal should be sent to the Distinguished Lecturer Program Chair by someone in the local chapter, who should identify their role in the chapter and provide some details of the invitation, including the dates. The proposal should contain budgetary quotations for airfare and accommodation from authorized sources (airline/travel agent for the former, hotel for the latter), and a clear commitment of what the local chapter will contribute. If the trip is approved, IEEE CSS will pay a maximum of 80% of the airfare and accommodation. The local hosts should cover a minimum of 20% of the airfare and accommodation, plus all local costs such as airport transfer, local transportation, and meals. Procedures for unusual situations (such as when the speaker has other business on the trip) should be cleared through the Distinguished Lecturer Program Chair.
The expense claim filed by the distinguished lecturer at the conclusion of the trip should contain receipts for the airfare and hotel.
Each distinguished lecturer will be limited to two trips per year, out of which at most one can be intercontinental.
Distinguished Lecturers Program Chair

Distinguished Lecturers
Distinguished Lecturer
The interaction of information and control has been a topic of interest to system theorists that can be traced back to the 1950s, when the fields of communications, control, and information theory were new but developing rapidly. Recent advances in our understanding of this interplay have emerged from work on the dynamical effect of state quantization and a corresponding understanding of how communication channel data rates affect system stability. While a large body of research has now emerged dealing with communication-constrained feedback channels and optimal design of information flows in networks, less attention has been paid to ways in which control systems should be designed in order to optimally mediate computation and communication. Such optimization problems are of interest in the context of quantum computing, and similar problems have recently been discussed in connection with protocols for assembly of molecular components in synthetic biology.
Recently W.S. Wong has proposed the concept of control communication complexity (CCC) as a formal approach for understanding how a group of distributed agents can choose independent actions from a prescribed "action code book" that cooperatively realize common goals and objectives. A prototypical goal is the computation of a function, and CCC provides a promising new approach to understanding complexity in terms of the cost of realizing a selected evaluation. This lecture will introduce control communication complexity in terms of what are called standard parts optimal control problems. Problems in optimal ensemble averaged motion sequences and distributed control of dynamical systems defined on Lie groups are discussed.
Distinguished Lecturer
Abstract: Control systems with large arrays of sensors and actuators are increasingly common in applications such as fluid flow control, process control, smart structures, and arrays of microelectromechanical systems (MEMS) devices. These are systems in which distributed arrays of sensors and actuators interact with media described by partial differential equations. We address the important issues of controller design, controller architecture, and the communication requirements between sensors and actuators in such arrays.
We consider a special (but common) class of such systems which possess spatiotemporally invariant dynamics, and show that optimal controllers inherit this invariance property. We show how one can use multidimensional transform techniques to constructively design quadratically optimal (i.e., H2 or H-infinity) distributed controllers by solving parameterized families of Riccati equations. It turns out that such optimal controllers have an inherent degree of localization or semi-decentralization. The implications for controlled actuator/sensor arrays will be discussed. We illustrate these concepts with an example of controlling arrays of capacitively actuated microcantilevers.
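The transform-domain design idea above can be sketched numerically. The one-dimensional heat-equation plant, the dimensions, and the unit weights below are assumptions for illustration, not the lecture's actual example: a spatial Fourier transform decouples a spatially invariant plant into independent scalar systems, one Riccati equation is solved per spatial frequency, and the inverse transform yields a spatially localized feedback kernel.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Spatially invariant toy plant: a 1-D heat equation with distributed
# actuation.  A spatial Fourier transform decouples it into independent
# scalar systems A(k) = -k^2, B = 1, one per spatial wavenumber k.
N = 64
wavenumbers = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers

gains = np.empty(N)
for i, k in enumerate(wavenumbers):
    A = np.array([[-k ** 2]])  # decoupled plant at this wavenumber
    B = np.array([[1.0]])
    Q = np.array([[1.0]])      # state weight
    R = np.array([[1.0]])      # control weight
    P = solve_continuous_are(A, B, Q, R)  # one Riccati equation per k
    gains[i] = (np.linalg.inv(R) @ B.T @ P)[0, 0]  # LQR gain K(k)

# The inverse transform of K(k) is the spatial convolution kernel of
# the optimal distributed controller; its decay with distance is the
# "inherent localization" referred to in the abstract.
kernel = np.real(np.fft.ifft(gains))
```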
Abstract: The problem of describing transition and turbulence in wall-bounded shear flows such as pipes, channels, and boundary layers is an important and old problem in hydrodynamic stability. This type of turbulence is responsible for a significant portion of the drag on marine and aeronautical vehicles. Recently, a new theory of transition has emerged that appears to be in much better agreement with experiments than classical hydrodynamic stability theory. We review this theory and show the surprising parallels it has with the central notions of robust control theory. We show how tools like robust stability analysis, input-output norms, and singular value plots describe transition and turbulent flow structures with surprising fidelity. It thus appears that in the technologically important case of wall-bounded shear flows, transition is not so much a problem of linear or nonlinear instability, but rather of robustness to ambient uncertainty. This characterization of turbulence in terms of system-theoretic norms and stability margins provides a natural framework for its control. We will show how skin-friction drag reduction problems can be recast as:
- they apply to general nonlinear systems with disturbances;
- we obtain explicit (often nonconservative) bounds on the maximal allowable transmission interval that guarantee stability; and
- we show that this approach is valid for a wide range of network protocols. This provides a flexible framework for the design of NQCS, NCS, and/or QCS that is amenable to various extensions and modifications, such as a treatment of dropouts and stochastic protocols, combined controller/protocol design, and so on.
Distinguished Lecturer
Abstract: Nanometer-length-scale analogues of most traditional control elements, such as sensors, actuators, and feedback controllers, have been enabled by recent advances in device manufacturing and fundamental materials research. However, combining these new control elements in classical systems frameworks remains elusive. Methods that address the new generation of systems issues particular to nanoscale systems are termed here systems nanotechnology. This presentation discusses some promising control strategies and theories that have been developed to address the challenges that arise in systems nanotechnology. Specific examples are provided where the identification, estimation, and control of complex nanoscale systems have been demonstrated in experimental implementations or in high-fidelity simulations. Some control theory problems are also described that, if resolved, would facilitate further applications.
Abstract: Most high-value products, such as those in the pharmaceutical, microelectronics, and nanotechnology industries, are manufactured in a series of processing steps that operate over finite time. These processes are usually distributed parameter systems in which tight control is required. Computationally efficient methods are proposed for the robust optimal control of finite-time distributed parameter systems (DPS), in which robustness is ensured for either deterministic or stochastic parametric uncertainties. In the deterministic case, the effects of uncertainties on the states and product quality are quantified by power series expansions combined with linear matrix inequality or structured singular value analysis. In the stochastic case, the effects of uncertainties are quantified by power series or polynomial chaos expansions. The robust performance analyses have been incorporated into fixed controllers and model predictive control algorithms. The approaches are illustrated for several application problems.
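The stochastic-case machinery can be illustrated with a toy computation in the spirit of polynomial chaos: propagate a Gaussian parameter through a nonlinear model by Gauss-Hermite quadrature and read off the mean and variance of the output. The model, parameter values, and quadrature order below are made up for illustration and are not from the abstract:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Toy model: product quality y depends nonlinearly on an uncertain
# kinetic parameter theta ~ N(mu, sigma^2).
def quality(theta):
    return np.exp(-0.5 * theta)   # illustrative nonlinear map

mu, sigma = 1.0, 0.2

# Probabilists' Gauss-Hermite quadrature integrates against the
# standard normal density after normalizing the weights.
nodes, weights = hermegauss(20)
weights = weights / weights.sum()  # normalize to a probability measure
samples = mu + sigma * nodes       # map quadrature nodes to theta-space

mean_y = np.sum(weights * quality(samples))
var_y = np.sum(weights * quality(samples) ** 2) - mean_y ** 2
```

For this lognormal toy case the quadrature result matches the closed-form mean exp(-0.5 mu + 0.125 sigma^2) essentially to machine precision, which is the kind of cheap uncertainty quantification the expansions in the abstract provide for much larger models.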
Abstract: An overview is provided on advances in the control of molecular purity, crystal structure, and particle size distribution of pharmaceutical crystals. A nonlinear feedback control strategy is described that is robust to orders-of-magnitude variations in the crystallization kinetics, which enables the same feedback controller to apply to completely different pharmaceutical compounds by just updating the drug solubility. The control strategy enables the manufacture of large drug crystals of uniform size and specified crystal structure, regardless of whether the crystal structure is thermodynamically stable or metastable. The control strategy forces the closed-loop operations to be within a tightly constrained trajectory bundle that connects the initial states to the desired final states. A modification of the methodology is able to achieve a target crystal size distribution by employing continual manipulation of seeds manufactured using a dual-impinging-jet mixer. The methodology has been evaluated in theoretical, simulation, and experimental studies for a wide variety of pharmaceutical compounds. The presentation ends with a discussion of directions towards the simultaneous control of multiple properties, by the integration of feedback control with process design.
Distinguished Lecturer
The Grenoble Traffic Lab (GTL http://necs.inrialpes.fr/pages/reseach/gtl.php) initiative is a real-time traffic data center (platform) intended to collect traffic road infrastructure information in real time, with minimum latency and fast sampling periods. This lecture covers several aspects of modeling, forecasting, and control of traffic systems, applied to the GTL. In this presentation, we first review the main flow-conservation models that are used as a basis to design physically oriented forecasting and control algorithms. In particular, we underline fundamental properties such as downstream/upstream controllability and observability of such models, and present a new network setup for analysis. We then present advances in traffic forecasting using graph-constrained macroscopic models, which substantially reduce the number of possible affine dynamics of the system and preserve the number of vehicles in the network. This model is used to recover the state of the traffic network and precisely localize any congestion front. In the last part of the talk, we discuss density-balancing control, where the objective is to achieve a homogeneous distribution of density on the freeway using the input flows as decision variables. The study shows that a keystone for the design of balanced states is cooperation between ramp metering and variable speed limit control.
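The flow-conservation models mentioned above can be made concrete with a minimal cell transmission model (CTM) update, a standard discretization of the LWR conservation law often used in this line of work. The fundamental-diagram parameters, cell count, and boundary flows below are illustrative assumptions, not GTL values:

```python
import numpy as np

# Cell transmission model: discretize a freeway into cells and update
# vehicle densities by conservation of flow.  Triangular fundamental
# diagram: free-flow speed v, congestion wave speed w, jam density
# rho_jam, capacity q_max (illustrative normalized units).
v, w, rho_jam, q_max = 1.0, 0.5, 1.0, 0.25
dt, dx = 0.5, 1.0                   # chosen so that v*dt/dx <= 1 (CFL)

def ctm_step(rho, inflow):
    """One conservation update: rho_i += (dt/dx) * (q_in - q_out)."""
    demand = np.minimum(v * rho, q_max)              # what cells can send
    supply = np.minimum(w * (rho_jam - rho), q_max)  # what cells can take
    # Interface flow = min(demand of upstream cell, supply of downstream)
    q = np.minimum(demand[:-1], supply[1:])
    q = np.concatenate(([min(inflow, supply[0])], q, [demand[-1]]))
    return rho + (dt / dx) * (q[:-1] - q[1:])

rho = np.full(10, 0.1)              # light initial traffic
for _ in range(20):
    rho = ctm_step(rho, inflow=0.2) # metered boundary / on-ramp flow
```

The demand/supply structure is what gives the model the downstream/upstream controllability and observability properties the lecture refers to: information propagates downstream in free flow and upstream in congestion.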
This lecture is devoted to new and challenging control problems arising in the automotive industry as a consequence of the customer-driven performance specifications adopted by car manufacturers, which have dramatically increased the number of newly proposed automated features in which feedback interacts with the driver. The notion of "Fun-to-Drive by Feedback" refers here to the ability to design a control scheme that yields good ride comfort as well as acceptably safe operation. The lecture shows how control techniques can be used to solve some of these problems, and discusses how these subjective notions can be formalized through concepts such as passivity and model matching control. We present a series of examples concerning systems that provide assisted automated devices (assisted clutch synchronization, a steer-by-wire system, and advanced inter-distance control), in which these aspects are assessed.
Distinguished Lecturer
In many practical systems, control or decision making is triggered by certain events. Performance optimization for such systems generally differs from traditional optimization approaches, such as Markov decision processes or mathematical programming. Because the sequence of events may not possess the Markov property, these traditional approaches may not work for event-based control or optimization problems. In this talk, we discuss a new optimization framework called event-based optimization, which can be applied widely to the aforementioned problems. With performance potentials as building blocks, we develop two intuitive optimization algorithms to solve the event-based optimization problem. The algorithms are based on an intuitive principle: choose the actions with the best long-term effect on system performance. Theoretical justifications are also discussed, based on the performance-difference equation of event-based optimization. We identify the conditions under which the intuitive algorithms lead to the optimal event-based policy, and discuss the errors incurred when they do not. Finally, we use practical applications to demonstrate the effectiveness of the event-based optimization framework. We hope that this framework will provide a new perspective on the optimization of the performance of event-triggered dynamic systems. The talk is based on joint work with Q.-S. Jia, Li Xia, and J.-Y. Zhang.
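The performance potentials used as building blocks above can be computed directly for a small Markov chain. The transition matrix and reward below are made up; the sketch only shows the quantity (the solution of the Poisson equation) on which potential-based algorithms are built:

```python
import numpy as np

# A small ergodic Markov chain with transition matrix P and reward f.
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
f = np.array([1.0, 0.0, 2.0])

# Stationary distribution pi: solve pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
eta = pi @ f                    # long-run average performance

# Performance potentials g solve the Poisson equation
#   (I - P) g + eta * 1 = f,   normalized so that pi @ g = 0.
g = np.linalg.solve(np.eye(3) - P + np.outer(np.ones(3), pi), f - eta)
```

The potentials g quantify the long-term effect of being in each state, which is exactly what the "best long-term effect" principle in the talk compares across actions.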
One of the difficulties in behavioral analysis is its time-inconsistency due to the distortion in performance probability. Standard dynamic programming fails to work in this setting. In this talk, we first give a brief review of the different approaches in stochastic optimization and an overview of the area from a sensitivity-based point of view. We then apply this sensitivity-based approach, a powerful alternative to dynamic programming, to solve the portfolio management problem in an environment with probability distortion. We show that, after changing the underlying probability measure, the distorted performance becomes locally linear, and we thus discover a property of the distorted performance called “mono-linearity”. The derivative of the distorted performance is simply the expectation of the sample-path-based derivative of the performance under this new measure, which can be obtained by perturbation analysis. We also provide simulation algorithms for the derivative of the distorted performance, and hence a gradient-based search algorithm for the optimal policy. We apply this approach to the optimal portfolio selection problem with distorted performance probability and obtain a martingale property of the optimal policy, as well as a closed-form expression for the optimal performance, consistent with results in the literature. We expect this approach to be generally applicable to optimization in other nonlinear behavioral analysis problems. This talk is based on joint work with Xiangwei Wan.
In the performance analysis of queuing systems, special structural properties of queuing systems, in addition to the Markov property, are utilized to obtain closed-form expressions and many other results. Thanks to the efforts of many researchers over many decades, successful examples abound. However, exploiting the special features of queuing systems in performance optimization remains comparatively uncultivated territory.
In this talk, we will introduce some of our efforts in this direction over the past three decades. We will show that very efficient algorithms can be developed to estimate the performance gradient and to implement performance optimization. We start from the perturbation analysis developed in the early 1980s, move to the potential-based sensitivity analysis of Markov decision processes proposed in the 1990s, and arrive at the perturbation-realization-based policy iteration of queuing networks developed in recent years. In all these approaches, the strong coupling structure among the servers is exploited, which distinguishes queuing systems from other standard Markov systems. We hope this talk will stimulate interest in this fascinating research direction. This talk is based on joint work with Li Xia.
Motivated by the portfolio management problem, we propose a composite model for Markov processes. The state space of a composite Markov process consists of two parts, J and J', in the Euclidean space R^n. When the process is in J', it evolves like a continuous-time Lévy process; once the process enters J, it instantly makes a jump (of finite size) according to a transition function, like a discrete-time Markov chain. The composite Markov process provides a new model for the impulse stochastic control problem, with the instant jumps in J modeling the impulse control feature (e.g., selling or buying stocks in the portfolio management problem). With this model, we show that an optimal policy can be obtained by a direct comparison of the performance of any two policies.
Distinguished Lecturer
Motion coordination is a remarkable phenomenon in biological systems and an extremely useful tool in man-made groups of vehicles, mobile sensors, and embedded robotic systems. Just like animals do, groups of mobile autonomous agents need the ability to deploy over a region, assume a specified pattern, rendezvous at a common point, or jointly move in a synchronized manner. This talk illustrates ways in which systems and control theory helps us design autonomous and reliable robotic networks. We present some recently developed theoretical tools for modeling, analysis, and design of motion coordination algorithms. Numerous examples from deployment, aggregation, and consensus scenarios help illustrate the technical approach. In our exposition, we pay special attention to the characterization of the correctness and the evaluation of the performance of coordination algorithms.
Discontinuous dynamical systems, i.e., systems whose associated vector field is a discontinuous function of the state, arise in a large number of applications, including optimal control, nonsmooth mechanics, and robotic manipulation. Independently of the particular application, one always faces similar questions when dealing with them. This talk focuses on two important issues for discontinuous systems: the notion of solution and the stability analysis. We begin by introducing some of the most commonly used notions of solutions defined in the literature, discussing existence and uniqueness results, and examining various examples. Regarding the analysis of stability of discontinuous systems, we present useful notions and tools from nonsmooth analysis, including generalized gradients of locally Lipschitz functions and proximal subdifferentials of lower semicontinuous functions. Building on these notions, we establish monotonic properties of candidate Lyapunov functions along the solutions. These results are key in providing suitable generalizations of Lyapunov stability theorems and the LaSalle Invariance Principle. We illustrate the applicability of these results in several examples from mechanics, cooperative control, and distributed dynamical systems.
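The stability machinery sketched above can be stated compactly. The following is a standard formulation from the nonsmooth-analysis literature, given here as a sketch rather than the talk's exact conditions: solutions of dx/dt = f(x) with discontinuous f are understood through the Filippov set-valued map, and the decrease of a locally Lipschitz candidate V is tested via the set-valued Lie derivative built from its generalized gradient:

```latex
F[f](x) \;=\; \bigcap_{\delta > 0}\ \bigcap_{\mu(S) = 0}
  \overline{\mathrm{co}}\, f\bigl(B(x,\delta) \setminus S\bigr),
\qquad
\widetilde{\mathcal{L}}_{F} V(x) \;=\;
  \bigl\{\, a \in \mathbb{R} \;:\; \exists\, v \in F[f](x)
  \ \text{such that}\ \zeta^{\top} v = a
  \ \text{for all}\ \zeta \in \partial V(x) \,\bigr\}.
```

If the maximum of this set is nonpositive wherever the set is nonempty, then V is nonincreasing along every Filippov solution; this monotonicity property is what underlies the generalized Lyapunov theorems and the LaSalle Invariance Principle mentioned in the abstract.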
Networks of environmental sensors are playing an increasingly important role in scientific studies of the ocean, rivers, and the atmosphere. Robotic sensors can improve the efficiency of data collection, adapt to changes in the environment, and provide a robust response to individual failures. Complex statistical techniques come into play in the analysis of spatial environmental processes. Consequently, the operation of robotic sensors must be driven by statistically aware algorithms that make the most of the network capabilities for data collection and fusion. At the same time, such algorithms need to be distributed and scalable to make robotic networks capable of operating in an autonomous and robust fashion. The combination of these two objectives, complex statistical modeling and distributed coordination, presents grand technical challenges: traditional statistical modeling and inference assume full availability of all measurements and central computation. While the availability of data at a central location is certainly a desirable property, the paradigm for distributed motion coordination builds on partial, fragmented, and online information. In this talk, we present recent progress at bridging the gap between sophisticated statistical modeling and distributed motion coordination. We examine two problems for a network of robotic sensors: how to construct local representations of dynamic spatial processes in a distributed way and how to cooperatively optimize data collection for uncertainty minimization.
Distinguished Lecturer
Abstract: Neuromuscular Electrical Stimulation (NMES) is prescribed by clinicians to aid in the recovery of strength, size, and function of human skeletal muscles to obtain physiological and functional benefits for impaired individuals. The two primary applications of NMES include: 1) rehabilitation of skeletal muscle size and function via plastic changes in the neuromuscular system, and 2) activation of muscle to elicit movements that result in functional performance (i.e., standing, stepping, reaching, etc.), termed functional electrical stimulation (FES). In both applications, stimulation protocols of appropriate duration and intensity are critical for preferential results. Automated NMES methods hold the potential to maximize the treatment by self-adjusting to the particular individual (facilitating potential in-home use and enabling positive therapeutic outcomes from less experienced clinicians). Yet, the development of automated NMES devices is complicated by the uncertain nonlinear musculoskeletal response to stimulation, including difficult-to-model disturbances such as fatigue. Unfortunately, NMES dosage (i.e., number of contractions, intensity of contractions) is limited by the onset of fatigue and poor muscle response during fatigue. This talk describes recent advances and experimental outcomes of control methods that seek to compensate for the uncertain nonlinear muscle response to electrical stimulation due to physiological variations, fatigue, and delays.
Analytical solutions to the infinite-horizon optimal control problem for continuous-time nonlinear systems are generally not possible because they involve solving a nonlinear partial differential equation. Another challenge is that the optimal controller requires exact knowledge of the system dynamics. Motivated by these issues, researchers have recently used reinforcement learning methods that involve an actor and a critic to yield a forward-in-time approximate optimal control design. Methods that also seek to compensate for uncertain dynamics exploit some form of persistence-of-excitation assumption to yield parameter identification. However, in the adaptive dynamic programming context, this condition is impossible to verify a priori, and as a result researchers generally add an ad hoc probing signal to the controller, which degrades the transient performance of the system. This presentation describes a forward-in-time dynamic programming approach that exploits concurrent learning tools, in which the adaptive update laws are driven by current state information and recorded state information to yield approximate optimal control solutions without the need for ad hoc probing. A unique desired goal sampling method is also introduced as a means to address the classical exploration versus exploitation conundrum. Applications are presented for autonomous systems including robot manipulators, underwater vehicles, and fin-controlled cruise missiles. Solutions are also developed for networks of systems where the problem is cast as a differential game in which a Nash equilibrium is sought.
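The concurrent-learning idea above can be sketched in a minimal parameter-identification setting: drive a gradient update with both the current regressor and a recorded stack of past regressors, so that convergence needs only a rank condition on the stored data rather than persistent excitation of the current signal. The linear-in-the-parameters model, gains, and data below are illustrative assumptions, not the talk's actual update laws:

```python
import numpy as np

# Unknown linear-in-the-parameters model: y = phi(x) @ theta_true.
theta_true = np.array([2.0, -1.0, 0.5])
phi = lambda x: np.array([1.0, x, x ** 2])

# Recorded history collected earlier.  Convergence requires the stack
# to span the parameter space (a rank condition on stored regressors)
# instead of persistent excitation of the current signal.
stack = [(phi(x), phi(x) @ theta_true) for x in (0.0, 1.0, -1.0, 2.0)]

theta = np.zeros(3)
gamma, k_cl = 0.05, 0.05           # adaptation gains
for _ in range(4000):
    x = 0.3                        # a constant, non-exciting input
    e = phi(x) @ theta - phi(x) @ theta_true   # current-data error
    update = gamma * e * phi(x)                # standard gradient term
    for p, y in stack:                         # concurrent-learning term
        update += k_cl * (p @ theta - y) * p
    theta -= update
```

With the current signal held constant the standard gradient term alone could never identify all three parameters; the recorded stack supplies the missing directions, which is the role concurrent learning plays in removing the ad hoc probing signal.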
Distinguished Lecturer
The last few years have seen significant progress in our understanding of how one should structure multirobot systems. New control, coordination, and communication strategies have emerged and, in this talk, we summarize some of these developments. In particular, we will discuss how to go from local rules to global behaviors in a systematic manner in order to achieve distributed geometric objectives, such as achieving and maintaining formations, area coverage, and swarming behaviors. We will also investigate how users can interact with networks of mobile robots in order to inject new information and objectives. The efficacy of these interactions depends directly on the interaction dynamics and the structure of the underlying information-exchange network. We will relate these network-level characteristics to controllability and manipulability notions in order to produce effective human-swarm interaction strategies.
When programming robots to perform tasks, one is inevitably forced to make abstractions at different levels in order to answer questions such as “What should the robot be doing?” and “How should it be doing it?” In this talk, we draw inspiration from choreography in order to produce these abstractions in a systematic manner. Manifestations of this idea will include robotic marionettes and humanoid dancing robots that execute complex motions, and we will develop a formal framework for specifying and executing such motions by combining tools and techniques from hybrid optimal control theory and linear temporal logic.
Distinguished Lecturer
The atomic force microscope (AFM) opens a new window onto the nanoworld. It features high resolution in vacuum, gas, or liquid operating environments, and has become a widely used tool in sectors such as nano-measurement and machining, biotechnology, and medical testing. Many top research institutions around the world have put substantial effort into this area.
Most conventional AFMs use piezoelectric devices to achieve their positioning. Even though the precision reaches the nanometer level, they have a serious problem: short travel range. In addition, when they are operated in liquids, large measurement errors occur, mainly because the damping is substantially increased when the cantilever is immersed in liquid. To overcome the problems of short travel range and large measurement error in liquid environments that exist in conventional AFMs, a long-travel-range, two-state (gas and liquid) AFM needs to be designed and developed, one that has equal precision in both operating environments and is capable of scanning areas up to the millimeter level. Since electromagnetic actuators can provide a long travel range and piezoelectric actuators offer high precision, the first problem can be overcome by integrating these two advantages, which accounts for the need for hybrid control in the resulting AFM. On the other hand, because the liquid operating environment is very critical to the cantilever scan, coping with the lower resolution of AFM measurements made in liquids, compared with those made in gases, becomes imperative. An adaptive Q controller has therefore been developed to resolve this problem.
Visual tracking in dynamic environments has drawn a great deal of attention in recent years. It spans a wide research spectrum, including access control, human and vehicle detection and identification, detection of anomalous behaviors, crowd statistics and congestion analysis, and human-machine interaction in intelligent spaces.
Many problems in science require estimation of the state of a system that changes over time, using a sequence of noisy measurements made on the system. The Bayesian filter provides a rigorous general framework for such dynamic state estimation problems, and visual tracking is one such problem. The sensed sequence of 2D image data, which may lose some 3D information and is usually noisy, contains the target, similar objects, and cluttered background. A visual tracker designed in the spirit of the Bayesian filter can overcome these problems and successfully track multiple targets even when the objects interact in a group.
To utilize the information obtained from visual tracking, for instance in visual servoing of robot motion or human-machine interaction, each image frame must be processed as fast as possible; otherwise the information may change during the processing interval. To reduce the computational time, the dimension of the image processing can be efficiently constrained by spatial and temporal hypotheses generated from predictions of the target motion model. Sampling schemes such as particle filtering can also be employed to achieve real-time tracking.
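The Bayesian-filtering recursion behind this can be made concrete with a minimal bootstrap particle filter in one dimension. The random-walk motion model, Gaussian measurement model, and all parameter values are toy stand-ins for the visual-tracking case, not the talk's actual tracker:

```python
import numpy as np

rng = np.random.default_rng(1)

n_particles = 500
q, r = 0.1, 0.5          # process and measurement noise std. dev.

true_x = 0.0             # toy 1-D target following a random walk
particles = rng.normal(0.0, 1.0, n_particles)
weights = np.full(n_particles, 1.0 / n_particles)

estimates, truth = [], []
for t in range(50):
    true_x += rng.normal(0.0, q)        # target moves
    z = true_x + rng.normal(0.0, r)     # noisy "image" measurement

    particles += rng.normal(0.0, q, n_particles)          # predict
    weights *= np.exp(-0.5 * ((z - particles) / r) ** 2)  # update
    weights /= weights.sum()

    estimates.append(np.sum(weights * particles))  # posterior mean
    truth.append(true_x)

    # Multinomial resampling when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(truth)) ** 2))
```

The predict/update/resample cycle is exactly the recursion the abstract appeals to; the spatial and temporal hypotheses mentioned above correspond to concentrating the particles where the motion model predicts the target will be.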
Another topic in this talk is wide-area surveillance with active camera tracking and multi-camera cooperation. Since the field of view of one camera is limited, a camera platform is usually equipped with several degrees of freedom to extend its observing range. The observing range can be extended further by combining multiple active cameras. The overall surveillance capability of the entire set of cameras can be effectively utilized through well-designed strategies for camera task assignment and camera action selection. By integrating distributed camera agents that can track targets individually and collaborate tightly with one another, seamless tracking can be realized stage by stage.
Human motion analysis from image sequences has many important applications in the fields of computer vision and robotics. With vision-based interpretation of human motion, human-machine interaction (HMI) can be experienced more naturally, without hand-held devices. This talk will therefore start by introducing a two-hand tracking method with a monocular camera. To cope with self-occlusion, similar-color distracters, and cluttered environments, the methodology is based on several kinds of image cues, including locally discriminative color, motion history, gradient orientation features, and depth-order reasoning. Specifically, a multiple importance sampling (MIS) particle filter generates tracking hypotheses for merged targets using the skin-blob mask and depth-order estimation. These merged hypotheses are then evaluated using the visual cues of the occluded face template, hand shape orientation, and motion continuity. Next, the talk will address pose estimation, where the MIS particle filter is used to integrate multiple cues to track both arms under arbitrary motion. Because depth information is lacking, a sequential pose-estimation method based on structure-from-motion (SFM) is proposed to estimate the 3D posture of the arms online. With this reliable tracking methodology, we feed the tracking results into hidden Markov models (HMMs) to spot and recognize the performed actions online.
Distinguished Lecturer
This talk presents filter design methodologies for plants subject to parameter uncertainty and constant time-delay. After a brief discussion of relevant aspects of the problem, such as convexity and computational difficulties, the conservativeness of the solution is evaluated in terms of the minimum guaranteed estimation-error norm, based on a quality certificate determined from the equilibrium solution of a min-max problem. The constant time-delay is handled through a finite-dimensional comparison system well suited to frequency-domain performance specifications. The talk ends with a discussion of applying the filter design methodology to networked systems described by an appropriate Markov-chain-based model.
In this talk, several aspects of switched linear systems are presented. They are divided into two main streams, characterized by treating the switching signal either as an exogenous perturbation to be attenuated or as a control action to be designed. The talk starts by introducing basic concepts of stability, performance calculation, and switching control design based on a min-type Lyapunov function. The relevance of the proposed switching control scheme is discussed by introducing the concept of control consistency, which requires performance improvement compared to that of each isolated subsystem. This aspect makes clear the importance of switched systems in both theoretical and practical frameworks. The talk ends with two applications of the methodology: LPV systems and DC-DC converter control design.
Distinguished Lecturer
Feedback is ubiquitous and is a core concept in control systems, where its main objective is to counter the influence of various uncertainties on the performance of the dynamical system being controlled. Although much progress has been made in control theory over the past 50 years, especially in areas such as adaptive control and robust control, the following fundamental problem remains less explored: what are the maximum capability and the limitations of the feedback mechanism in dealing with uncertainty? Here the feedback mechanism is defined as the class of all possible feedback laws (i.e., not restricted to a particular subclass), and its maximum capability is measured by the maximum size of the uncertainties it can cope with. In this lecture, we work with discrete-time (or sampled-data) nonlinear dynamical control systems with both structural and environmental uncertainties. We reveal and prove several "critical values" and "impossibility theorems" concerning the maximum capability of the feedback mechanism for several basic classes of control systems. We also show how the stochastic embedding approach can be used in this investigation and how important the sensitivity function is in characterizing the capability of feedback.
A fundamental issue in complex systems theory is to understand how locally interacting agents (or particles) lead to global behaviors (or structures). Such problems arise naturally in fields ranging from the material and life sciences to social and engineering systems, and they have attracted much research attention in recent years. In this lecture, we focus on the synchronization problem for a basic class of non-equilibrium multi-agent systems (or flocks) described by the well-known Vicsek model. By working in a stochastic framework and by overcoming the widely recognized theoretical difficulty of establishing the dynamical connectivity needed to guarantee synchronization, we provide a rigorous and fairly complete theory of synchronization for flocks with large populations. The main theorems are established through analyses of the nonlinear dynamical equations involved and of the asymptotic spectral properties of random geometric graphs. Furthermore, we show how the global behaviors of flocks may be steered using the "soft control" idea, without changing the existing interaction rules of the agents.
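For readers unfamiliar with the Vicsek model, a minimal simulation is sketched below; the parameter values and the `order_parameter` synchronization measure are illustrative choices, not taken from the lecture.

```python
import math
import random

def vicsek_step(pos, theta, r=1.0, v=0.03, eta=0.1, box=5.0, rng=None):
    """One synchronous Vicsek update: each agent moves with constant speed v
    and aligns its heading with the average heading of all neighbours within
    radius r (itself included), plus bounded uniform noise of size eta."""
    rng = rng or random.Random()
    n = len(pos)
    new_theta = []
    for i in range(n):
        sx = sy = 0.0
        for j in range(n):
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            if dx * dx + dy * dy <= r * r:
                sx += math.cos(theta[j])
                sy += math.sin(theta[j])
        new_theta.append(math.atan2(sy, sx) + rng.uniform(-eta / 2, eta / 2))
    # Move with the updated heading on a torus (periodic boundary).
    new_pos = [((x + v * math.cos(t)) % box, (y + v * math.sin(t)) % box)
               for (x, y), t in zip(pos, new_theta)]
    return new_pos, new_theta

def order_parameter(theta):
    """Synchronization measure: 1 when all headings agree, near 0 when random."""
    n = len(theta)
    return math.hypot(sum(math.cos(t) for t in theta),
                      sum(math.sin(t) for t in theta)) / n

rng = random.Random(0)
pos = [(rng.uniform(0, 5), rng.uniform(0, 5)) for _ in range(60)]
theta = [rng.uniform(-math.pi, math.pi) for _ in range(60)]
for _ in range(300):
    pos, theta = vicsek_step(pos, theta, rng=rng)
```

The theoretical difficulty mentioned above is visible here: whether the headings synchronize depends on the connectivity of the random geometric graph induced by the radius r, which itself changes as the agents move.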
In traditional control theory, the plant to be controlled usually does not have its own payoff function. This is not the case, however, for the control or regulation of many important social and economic systems, where the systems or subsystems being regulated may have objectives of their own that differ from those of the global regulator or controller; we call such systems game-based control systems. This lecture explores the characteristics and properties of a class of game-based control systems and gives some preliminary theoretical results on optimization, adaptation, and cooperation.
Distinguished Lecturer
As computers, digital networks, and embedded systems become ubiquitous and increasingly complex, one needs to understand the coupling between logic-based components and continuous physical systems. This prompted a shift in the standard control paradigm, in which dynamical systems were typically described by differential or difference equations, to allow the modeling, analysis, and design of systems that combine continuous dynamics with discrete logic. This new paradigm is often called hybrid or switched control.
This talk deals precisely with systems that result from the interconnection of differential equations with logic-based decision rules. Such systems are hybrid in the sense that some of the variables describing their behavior take continuous values (e.g., the state of a differential equation) whereas others take discrete values (e.g., a Boolean variable, or the state of a finite automaton). We are particularly interested in switched systems: systems whose continuous dynamics are effectively determined by the values of one or more discrete variables.
In the talk, we present several mathematical tools that have been developed to understand the behavior of switched systems. These tools are introduced in the context of specific applications where both logic and differential equations arise naturally, drawn from areas as diverse as computer networks, vision-based robotics, and adaptive control. The goal of this talk is twofold: (i) to demonstrate that switched systems are ubiquitous and of significant practical importance, and (ii) to show that a unified theory of switched systems is becoming available.
The time evolution of chemically reacting molecules is sometimes modeled using a stochastic formulation, which takes into account the inherent randomness of molecular motion. This formulation is especially useful for complex reactions inside living cells, where small populations of key reactants can set the stage for significant stochastic effects. In this talk, we show how Stochastic Hybrid Systems can be used to construct stochastic models for chemical reactions.
Hybrid systems combine continuous-time dynamics with discrete modes of operation. The state of such a system usually has two distinct components: one that evolves continuously, typically according to a differential equation, and another that changes only through instantaneous jumps. To model chemical reactions, we actually need Stochastic Hybrid Systems (SHSs), in which transitions between discrete modes are triggered by stochastic events, much like transitions between the states of a continuous-time Markov chain. However, the rate at which transitions occur is allowed to depend on both the continuous and the discrete states of the SHS.
Several tools are available for analyzing SHSs. Among these, we discuss the use of the extended generator, infinite-dimensional moment dynamics, and finite-dimensional truncations obtained by moment closure. The application of these tools is illustrated by modeling the evolution of populations of molecules undergoing a system of chemical reactions.
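The SHS models of chemical reactions discussed here are continuous-time Markov jump processes; a standard way to sample their trajectories is Gillespie's stochastic simulation algorithm (SSA), sketched below for a simple birth-death reaction network. The rate constants and function names are illustrative assumptions.

```python
import random

def gillespie(rates, stoich, x0, t_end, seed=0):
    """Gillespie stochastic simulation algorithm (SSA).

    rates:  list of propensity functions a_j(x)
    stoich: list of state-change vectors, one per reaction channel
    Jumps between discrete states occur after exponentially distributed
    waiting times whose rate depends on the current state, which is exactly
    the state-dependent transition rate an SHS model allows.
    """
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    trajectory = [(t, tuple(x))]
    while t < t_end:
        a = [f(x) for f in rates]
        a0 = sum(a)
        if a0 == 0:
            break                             # absorbing state: no reaction can fire
        t += rng.expovariate(a0)              # time to the next reaction
        u, j, acc = rng.random() * a0, 0, a[0]
        while acc < u:                        # pick which reaction fired
            j += 1
            acc += a[j]
        x = [xi + d for xi, d in zip(x, stoich[j])]
        trajectory.append((t, tuple(x)))
    return trajectory

# Birth-death process: 0 -> X at rate 10, X -> 0 at rate 1 per molecule.
# The stationary distribution is Poisson with mean 10.
traj = gillespie(
    rates=[lambda x: 10.0, lambda x: 1.0 * x[0]],
    stoich=[(1,), (-1,)],
    x0=(0,),
    t_end=200.0,
)
tail = [x[0] for t, x in traj if t > 50.0]
mean = sum(tail) / len(tail)
```

The moment dynamics mentioned in the abstract describe the time evolution of averages such as `mean` analytically; moment closure truncates that (generally infinite) system of equations.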
Networked Control Systems (NCSs) are spatially distributed systems in which the communication between processes, sensors, actuators, and/or controllers is supported by a digital communication network. Such systems exhibit several characteristics that make them unique from a control perspective.
In this talk we address the effect of limited communication bandwidth and network latency on the overall performance of a closed-loop NCS. Not surprisingly, there is a tradeoff between the amount of communication resources utilized and the achievable control performance. For prototypical examples (linear processes and quadratic costs) we construct communication logics that achieve optimal performance with minimal communication. The effect of network latency is also investigated in this context.
Distinguished Lecturer
Reinforcement Learning Structures for Real-Time Optimal Control and Differential Games
F. L. Lewis, National Academy of Inventors
IEEE Fellow, IFAC Fellow, Fellow UK Inst. Measurement & Control
MoncriefO’Donnell Endowed Chair
Head, Advanced Controls & Sensors Group
UTA Research Institute
The University of Texas at Arlington, USA
This talk will discuss new adaptive control structures for learning online the solutions to optimal control problems and multi-player differential games. Techniques from reinforcement learning are used to design a new family of adaptive controllers, based on actor-critic learning mechanisms, that converge in real time to optimal control and game-theoretic solutions. Continuous-time systems are considered.
Optimal feedback control design has been responsible for much of the successful performance of engineered systems in aerospace, industrial processes, vehicles, ships, robotics, and elsewhere since the 1960s. H-infinity control has been used for robust stabilization of systems with disturbances. Optimal feedback designs are computed offline by solving design equations such as the algebraic Riccati equation (ARE) and the game ARE. Optimal design is difficult for nonlinear systems, since it relies on solving complicated Hamilton-Jacobi-Bellman (HJB) or Hamilton-Jacobi-Isaacs (HJI) equations. Finally, optimal design generally requires that the full system dynamics be known.
Optimal Adaptive Control. Adaptive control has provided powerful techniques for online learning of effective controllers for unknown nonlinear systems. In this talk we discuss online adaptive algorithms for learning optimal control solutions for continuous-time linear and nonlinear systems. This is a novel class of adaptive control algorithms that converge to optimal control solutions by online learning in real time. In the linear quadratic (LQ) case, the algorithms learn the solution to the ARE by adaptation along the system's motion trajectories. For nonlinear systems with general performance measures, the algorithms learn (approximate smooth local) solutions of HJ or HJI equations. The algorithms are based on actor-critic reinforcement learning techniques. Methods are given that adapt to optimal control solutions without knowing the full system dynamics. Application of reinforcement learning to continuous-time (CT) systems has been hampered by the fact that the system Hamiltonian contains the full system dynamics. Using a technique known as Integral Reinforcement Learning (IRL), we develop reinforcement learning methods that do not require knowledge of the system drift dynamics.
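The LQ case has a well-known model-based analogue: Kleinman's policy iteration, which alternates a Lyapunov-equation (policy evaluation) step with a gain update and converges to the ARE solution. The scalar sketch below illustrates that iteration only; it is not the IRL algorithm of the talk, which performs the same two steps from measured trajectory data instead of from the model (a, b).

```python
import math

def lqr_policy_iteration(a, b, q, r, k0, tol=1e-12, max_iter=50):
    """Kleinman policy iteration for the scalar LQR problem
        xdot = a*x + b*u,  cost = integral of q*x^2 + r*u^2,  u = -k*x.
    Policy evaluation solves the Lyapunov equation for the current gain;
    policy improvement updates the gain from the resulting value function."""
    k = k0                       # must be stabilizing: a - b*k < 0
    for _ in range(max_iter):
        # Evaluate: 2*(a - b*k)*p + q + r*k*k = 0  (scalar Lyapunov equation)
        p = -(q + r * k * k) / (2.0 * (a - b * k))
        k_new = b * p / r        # Improve: minimizing gain for this value of p
        if abs(k_new - k) < tol:
            return p, k_new
        k = k_new
    return p, k

# For a = b = q = r = 1 the ARE p^2 - 2p - 1 = 0 gives p = 1 + sqrt(2).
p, k = lqr_policy_iteration(a=1.0, b=1.0, q=1.0, r=1.0, k0=3.0)
```

Starting from any stabilizing gain, the iteration converges quadratically; the critic in the talk estimates p along trajectories, while the actor implements the gain update.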
Online Algorithms for Zero-Sum Games. We develop new adaptive control algorithms for solving zero-sum games online for continuous-time dynamical systems. Methods based on reinforcement learning policy iteration are used to design adaptive controllers that converge to the H-infinity control solution in real time. An algorithm is given for partially known systems in which the drift dynamics are not known.
Cooperative/Non-Cooperative Multi-Player Differential Games. New algorithms are presented for solving nonzero-sum multi-player games online for continuous-time systems. We use an adaptive control structure motivated by reinforcement learning policy iteration. Each player maintains two adaptive learning structures, a critic network and an actor network, whose parameters are tuned based on the actions of the other players. The result is an adaptive control system that learns from the interplay of agents in a game, delivering true online gaming behavior.
Optimal Design for Cooperative Control Synchronization and Games on Communication Graphs
F. L. Lewis, National Academy of Inventors
IEEE Fellow, IFAC Fellow, Fellow UK Inst. Measurement & Control
MoncriefO’Donnell Endowed Chair
Head, Advanced Controls & Sensors Group
UTA Research Institute
The University of Texas at Arlington, USA
ABSTRACT
Distributed systems of agents linked by communication networks have access only to information from their neighboring agents, yet must achieve global agreement on the team activities to be performed cooperatively. Examples include networked manufacturing systems, wireless sensor networks, networked feedback control systems, and the Internet. Sociobiological groups such as flocks, swarms, and herds have built-in mechanisms for cooperative control wherein each individual is influenced only by its nearest neighbors, yet the group achieves consensus behaviors such as heading alignment, leader following, exploration of the environment, and evasion of predators. Charles Darwin showed that local interactions between population groups over long time scales lead to global results such as the evolution of species.
In this talk we present design methods for cooperative controllers for distributed systems. The developments cover general directed-graph communication structures, for both continuous-time and discrete-time agent dynamics. Cooperative control design is complicated by the fact that the graph topology limits what the local controller design can achieve: local controller designs may work properly on some communication graph topologies yet fail on others. Our objective is to provide local agent feedback design methods that are independent of the graph topology and so function on a wide range of graph structures.
An optimal design method for local feedback controllers is given that decouples the control design from the graph structural properties. In the case of continuoustime systems, the optimal design method guarantees synchronization on any graph with suitable connectedness properties. In the case of discretetime systems, a condition for synchronization is that the Mahler measure of unstable eigenvalues of the local systems be restricted by the condition number of the graph. Thus, graphs with better topologies can tolerate a higher degree of inherent instability in the individual node dynamics.
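For intuition about what synchronization on a graph means, a minimal first-order consensus simulation is sketched below; the graph, gains, and function names are illustrative assumptions, not the optimal design of the talk.

```python
def consensus_sim(adj, x0, gain=1.0, dt=0.01, steps=2000):
    """Euler simulation of first-order consensus
        xdot_i = gain * sum_j adj[i][j] * (x_j - x_i),
    where adj[i][j] = 1 means agent i can observe agent j's state."""
    n = len(x0)
    x = list(x0)
    for _ in range(steps):
        dx = [gain * sum(adj[i][j] * (x[j] - x[i]) for j in range(n))
              for i in range(n)]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

# A directed cycle 0->1->2->3->0 (in terms of who observes whom) with one
# extra edge: it contains a spanning tree, so the local protocol reaches
# agreement even though the graph is not symmetric.
adj = [
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 0],
]
x = consensus_sim(adj, x0=[1.0, -2.0, 4.0, 0.5])
spread = max(x) - min(x)
```

On a directed graph the agreement value is a weighted (not arithmetic) average of the initial states, weighted by the left eigenvector of the graph Laplacian; this is one way the topology enters the local design.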
A theory of duality between controllers and observers on communication graphs is given, including methods for cooperative output feedback control based on cooperative regulator designs.
In Part 2 of the talk, we discuss graphical games. Standard differential multi-agent game theory assumes a centralized dynamics affected by the control policies of multiple agent players. We give a new formulation for games on communication graphs. Standard definitions of Nash equilibrium are not useful for graphical games, since the agents may be in Nash equilibrium yet fail to achieve synchronization. A strengthened definition, Interactive Nash equilibrium, is given that guarantees all agents are participants in the same game and that all agents achieve synchronization while optimizing their own value functions.
Stability vs. Optimality of Cooperative Multiagent Control
F. L. Lewis, National Academy of Inventors
IEEE Fellow, IFAC Fellow, Fellow UK Inst. Measurement & Control
MoncriefO’Donnell Endowed Chair
Head, Advanced Controls & Sensors Group
UTA Research Institute
The University of Texas at Arlington, USA
ABSTRACT
Natural decision systems incorporate notions of optimality, since the resources available to organisms and species are limited. This talk investigates the relations between stability and optimality in cooperative control.
Stability. A method is given for designing cooperative feedback laws for the continuous-time multi-agent tracker problem (also called pinning control or leader-following) that guarantees stable synchronization on arbitrary graphs containing spanning trees. This design turns out to be a locally optimal control with infinite gain margin. For the discrete-time cooperative tracker, locally optimal design yields stability on graphs that satisfy an additional restriction based on the Mahler instability measure of the local agent dynamics.
Optimality. Global optimal control of distributed systems is complicated by the fact that, for general LQR performance indices, the resulting optimal control is not distributed in form and therefore cannot be implemented on a prescribed communication graph topology. A condition is given for the existence of optimal controllers that can be implemented in distributed fashion. This condition shows that, for global optimal controllers of distributed form to exist, the performance index weighting matrices must be selected to depend on the graph structure.
Distinguished Lecturer
Abstract: In recent years Model Predictive Control (MPC) has emerged as a powerful methodology for online control of complex systems. The central idea is to formulate a finite-horizon optimal control problem based on a model of the system dynamics. The optimal control problem is then solved (either online, using numerical optimization tools, or offline, using multiparametric programming), the initial part of the optimal input sequence is applied, a measurement of the state is taken, and the process is repeated. The periodic state measurements provide the feedback necessary to make the process robust against disturbances, including errors in the system model used in the optimization. Over the years, MPC for deterministic systems has become a mature technology, with countless applications in a wide range of domains. More recently, robust MPC (where the system model includes bounded, worst-case uncertainty) has also flourished, exploiting advances in robust optimization. In comparison, MPC for systems with stochastic, potentially unbounded uncertainty has received relatively little attention. Dealing with stochastic uncertainty is important, since it will allow the MPC methodology to extend to application areas such as finance, air traffic management, and insurance, which naturally lend themselves to an MPC approach but also naturally involve stochastic models. The extension of MPC to stochastic systems poses several challenges, both conceptual and practical: how should state constraints be interpreted in the finite- and infinite-horizon context, how can input constraints be enforced, what cost functions and policies should one consider for the optimal control problem, and under what conditions are the resulting optimization problems convex? In this talk we highlight these challenges and outline methods that can be used to overcome them.
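The solve-apply-measure-repeat loop described above can be sketched for the simplest deterministic case: an unconstrained scalar LQ problem solved by a backward Riccati recursion, with a crude input clip standing in for the constrained optimization a real MPC controller would solve. All names and parameter values are illustrative assumptions.

```python
def mpc_gain(a, b, q, r, qf, horizon):
    """Backward Riccati recursion for the finite-horizon LQ problem. The
    returned gain yields the first input of the optimal open-loop sequence,
    which is exactly what receding-horizon MPC applies at each step."""
    p = qf
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return k

def run_mpc(a, b, q, r, qf, horizon, x0, steps, u_max):
    """Receding-horizon loop: re-solve the finite-horizon problem at every
    step, apply only the first input, then measure the new state."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = -mpc_gain(a, b, q, r, qf, horizon) * x
        u = max(-u_max, min(u_max, u))     # crude stand-in for an input constraint
        x = a * x + b * u                   # "measurement" of the next state
        traj.append(x)
    return traj

# Unstable scalar plant x+ = 1.2 x + u, regulated to the origin.
traj = run_mpc(a=1.2, b=1.0, q=1.0, r=1.0, qf=1.0, horizon=10,
               x0=5.0, steps=40, u_max=4.0)
```

In a true constrained or stochastic MPC, the per-step computation is an optimization problem (a QP here, or a chance-constrained program in the stochastic setting) rather than a closed-form gain, but the receding-horizon structure is identical.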
Abstract: Simulated annealing, Markov chain Monte Carlo, and genetic algorithms are all randomized methods that can be used in practice to solve (albeit approximately) complex optimization problems. They rely on constructing appropriate Markov chains whose stationary distribution concentrates on "good" parts of the parameter space (i.e., near the optimizers). Many of these methods come with asymptotic convergence guarantees that establish conditions under which the Markov chain converges to a globally optimal solution in an appropriate probabilistic sense. An interesting question usually not covered by asymptotic convergence results is the rate of convergence: how long should the randomized algorithm be executed to obtain a near-optimal solution with high probability? Answering this question would allow one to determine the level of accuracy and confidence with which approximate optimality claims can be made, as a function of the amount of time available for computation. In this talk we present some new results on finite-sample bounds of this type, primarily in the context of stochastic optimization with expected-value criteria using Markov chain Monte Carlo methods. The discussion will be motivated by the application of these methods to collision avoidance in air traffic management and to parameter identification for biological systems.
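As a concrete instance of the randomized methods mentioned, here is a minimal Metropolis-type simulated annealing sketch on a multimodal one-dimensional function. The test function, cooling schedule, and parameters are illustrative assumptions and carry no finite-sample guarantee of the kind discussed in the talk.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Metropolis-type simulated annealing: propose a Gaussian move, always
    accept improvements, accept worsenings with probability exp(-delta/T),
    and slowly lower the temperature T so the chain concentrates near the
    deepest minima of f."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = x + rng.gauss(0.0, step)
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx    # track the best point ever visited
        t *= cooling
    return best, fbest

# Multimodal objective: quadratic bowl plus an oscillation creating many
# local minima; the global minimum lies near x = -0.3.
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0
best, fbest = simulated_annealing(f, x0=4.0)
```

The finite-sample question of the talk is precisely how large `iters` must be, as a function of the desired accuracy and confidence, for `fbest` to be near-optimal with high probability.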
Abstract: The term stochastic hybrid systems defines a class of control systems that involve the interaction of continuous dynamics, discrete dynamics and probabilistic uncertainty. Over the last decade stochastic hybrid systems have emerged as a powerful modeling paradigm in a wide range of application areas. This talk will provide an overview of recent developments in this rapidly evolving field of research. We will present the theoretical foundations and challenges of stochastic hybrid systems. We will also outline computational methods that can be used to analyze and control such systems, based primarily on randomized algorithms. The discussion will be motivated by applied modeling, analysis and control problems from the areas of systems biology and air traffic management.
Abstract: Increasing levels of traffic are pushing the current Air Traffic Management (ATM) system to its limits. It is widely recognized that safely accommodating this increase in demand will require (in addition to technological advances) operational changes as well as novel, advanced decision support algorithms to assist the human operators. The operation of ATM is characterized by a hierarchy of tasks, which are exceptional benchmarks for control methodologies aiming to tackle complexity. Ongoing control research in the area of ATM includes large scale and distributed optimization for the management of traffic flows, innovative filtering, prediction and control methods for collision avoidance, systems methods for improving the situational awareness of human operators, optimal control for the design of safe coordination maneuvers, etc. In addition, models and simulation tools covering all levels of ATM are being developed, either to test and validate new methods, or to perform risk assessment for existing operations. This talk will highlight the problems and challenges that ATM poses for control engineers and outline advanced control and filtering methods that have been developed to improve the accuracy of aircraft trajectory prediction, detect potential safety problems, and compute maneuvers to resolve them.
Distinguished Lecturer
Abstract: Design of engineered systems whose operation is "best" or "optimal" in some sense is increasingly important due to a range of socioeconomic and environmental problems that we face at the dawn of the 21st century, such as climate change and increased competition in a global market. While still attracting considerable research attention, optimal control methods can be regarded as classical, and in certain areas, such as linear quadratic control, they are very well developed and understood. An underlying assumption in the classical control literature is that both the plant model and the cost to be optimized are known to the engineer designing the system. Surprisingly many engineering systems do not satisfy this basic assumption, however, and hence classical optimization methods are often not directly applicable.
Extremum seeking is an optimal control approach for situations in which the plant model and/or the cost to be optimized are not available to the designer, but measurements of the plant input and output signals are. Using these available signals, the goal is to design a controller that dynamically searches for the optimizing inputs. The method has been successfully applied to biochemical reactors, ABS control in automotive brakes, variable cam timing engine operation, electromechanical valves, axial compressors, mobile robots, mobile sensor networks, optical fibre amplifiers, and so on. Interestingly, some bacteria (such as the flagella-actuated E. coli) and schools of fish collectively search for food using extremum seeking techniques. This presents opportunities for research cross-fertilization between control engineering and biology.
While extremum seeking is an old topic, local stability of a class of extremum seeking controllers was first proved by Krstic and Wang in 2000. Subsequently, Popovic and Teel proposed an alternative framework for extremum seeking and provided corresponding stability proofs. We present an extension of the stability results of Krstic and Wang for a simplified extremum seeking scheme and show that the scheme yields semiglobal extremum seeking under appropriate assumptions if the controller parameters are tuned appropriately. An interesting tradeoff between the size of the domain of attraction and the speed of convergence is uncovered. The scheme works in essence as a steepest descent method, and we also provide a strategy, and conditions, under which it yields global stability in the presence of local extrema. Flexibility in the choice of dither is also discussed, and its effect on the convergence speed is explained. We use recent results on singular perturbations and averaging in the stability analysis. Applications of our scheme and of the Popovic and Teel approach are presented for biochemical reactors and Raman optical amplifiers, respectively.
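A minimal perturbation-based extremum seeking loop (with the usual washout filter omitted for brevity) can be sketched as follows; the static map, gains, and dither parameters are illustrative assumptions, not the scheme analyzed in the talk.

```python
import math

def extremum_seek(f, u0, a=0.2, w=5.0, k=1.0, dt=0.01, t_end=200.0):
    """Classic perturbation-based extremum seeking on a static map f.

    A sinusoidal dither a*sin(w t) is added to the estimate u_hat; the
    product of the dither and the measured output acts, after averaging,
    as a gradient estimate that drives u_hat toward a minimizer of f
    (steepest descent on average)."""
    u_hat, t = u0, 0.0
    while t < t_end:
        y = f(u_hat + a * math.sin(w * t))        # perturbed measurement only
        u_hat -= k * math.sin(w * t) * y * dt     # demodulate and integrate
        t += dt
    return u_hat

# Unknown static map with its minimum at u* = 2; the loop never sees f itself,
# only output samples.
f = lambda u: (u - 2.0) ** 2 + 1.0
u_star = extremum_seek(f, u0=0.0)
```

By averaging theory, the slow dynamics of `u_hat` behave like gradient descent with effective gain k*a/2, which is exactly where the domain-of-attraction versus convergence-speed tradeoff mentioned above appears.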
Abstract: Emerging control applications, such as drive-by-wire cars, often require some control loops to be closed over a network. The motivation for this setup comes from lower cost, ease of maintenance, greater flexibility, and low weight and volume. This motivates research into control systems in which one or several control loops are closed via a network.
Currently, there are two distinct approaches to modelling the effects of the network in such systems. The first assumes that only a finite number of bits can be transmitted over the network at any transmission instant, so sensor/actuator values must be appropriately quantized before being sent. We refer to such systems as quantized control systems (QCS). In the second approach, the network transmits sensor/actuator values in packets, and the packets are assumed large enough for quantization effects to be ignored. In this case the network can be regarded as a serial communication channel that carries signals from the many sensors and actuators in the control system. The main issue is that this channel has many "nodes" (groups of sensors and actuators), only one of which can transmit at any transmission time; hence, access to the channel must be scheduled appropriately for proper operation of the system. Such systems are often referred to in the literature as networked control systems (NCS).
While QCS and NCS deal with very similar issues, they have been treated separately in the literature with little cross-fertilization. Our goal is to present a unified approach to the analysis and design of networked and quantized control systems (NQCS) that combines time scheduling and quantization. In particular, we present an emulation-based controller design approach in which, in the first step, we design a controller ignoring the network and, in the second step, we implement the designed controller over the network with sufficiently fast transmissions and a given protocol. Our results have several attractive features, which will be discussed in the talk.
Abstract: In the vast literature on nonlinear control design, an area that has received scant attention is sampled-data control. In this problem, a continuous-time plant is typically controlled by a discrete-time feedback algorithm, with a sample-and-hold device providing the interface between continuous and discrete time.
One way to address sampled-data control is to implement a continuous-time control algorithm with a sufficiently small sampling period (i.e., emulation). However, the hardware used to sample and hold the plant measurements, or to compute the feedback control action, may make it impossible to reduce the sampling period to a level that guarantees acceptable closed-loop performance. In this case, it becomes interesting to investigate sampled-data control algorithms based on a discrete-time model of the process. Note that even if the continuous-time model of a nonlinear plant is known to the designer, we typically cannot compute an exact discrete-time model analytically; hence, a more realistic approach is to base the controller design on an approximate discrete-time model of the plant (e.g., Euler).
We present an overview of our work on sampled-data nonlinear systems. First, we discuss our framework for controller design for sampled-data nonlinear systems via their approximate discrete-time models. The conditions are very general, and we illustrate with examples that, if some of them are relaxed, the controller may stabilize the approximate model of the plant yet destabilize the exact model for all positive sampling periods. Our results adapt the notion of "consistency" from the numerical analysis literature and exploit it in our conditions and proofs. A range of controller design methods can be developed within this framework, and we present a backstepping design for strict-feedback systems as an illustration. We then investigate different techniques for emulation and for continuous-time controller redesign for sampled-data implementation. Several examples illustrate the generality and flexibility of our approach; moreover, they show that the discrete-time designs typically outperform the emulation designs in simulations.
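The consistency notion borrowed from numerical analysis can be made concrete on a plant simple enough that the exact discrete-time model is available in closed form (precisely because the example is linear; for a general nonlinear plant it is not, which is the point of the talk). The linear example and all names below are illustrative assumptions.

```python
import math

def euler_model(x, u, T):
    """Approximate discrete-time model of xdot = -x + u: one Euler step."""
    return x + T * (-x + u)

def exact_model(x, u, T):
    """Exact discretization of the linear plant xdot = -x + u under a
    zero-order hold of length T."""
    return math.exp(-T) * x + (1.0 - math.exp(-T)) * u

# One-step consistency: the Euler model's one-step error shrinks like O(T^2),
# so halving T should roughly quarter the error.
errors = []
for T in (0.1, 0.05, 0.025):
    errors.append(abs(euler_model(1.0, 0.5, T) - exact_model(1.0, 0.5, T)))
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```

Consistency of the approximate model in T is one of the conditions under which a controller designed on the Euler model also stabilizes the exact model for small enough sampling periods; the counterexamples mentioned above arise when such conditions are violated.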
Distinguished Lecturer
Today’s Internet is a gigantic, complex communications infrastructure shared by end-to-end connections that compete for bandwidth resources. What is the resulting allocation? The answer depends heavily on multiple automatic control mechanisms embedded in the network protocols, which dynamically adjust flow rates, routing, medium access, and so on, making the Internet probably the largest-scale artificial control system in operation. Is there any hope for mathematical analysis or synthesis at this huge scale? Research over more than a decade has shown that substantial progress is possible by casting the problem in the language of economic theory and convex optimization. These tools combine with methods of dynamics and control to provide useful insights into current behavior, and design proposals with provable performance at multiple protocol layers. In this talk we will give a tutorial overview of this field of research.
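The economic-theory and convex-optimization viewpoint can be made concrete with a minimal dual (congestion-pricing) algorithm for network utility maximization with logarithmic utilities, the classic model behind TCP-like rate control. The two-link example and all names below are illustrative assumptions.

```python
def num_dual(routes, capacities, iters=5000, gamma=0.01):
    """Dual congestion-price algorithm for network utility maximization:
    each source maximizes log(x) - x * (price of its path), so it sets
    x_s = 1 / path_price; each link raises its price when demand exceeds
    capacity and lowers it otherwise (projected dual gradient ascent)."""
    n_links = len(capacities)
    prices = [1.0] * n_links
    for _ in range(iters):
        # Source rates from the current prices.
        rates = [1.0 / max(sum(prices[l] for l in r), 1e-9) for r in routes]
        # Link price update, kept nonnegative.
        for l in range(n_links):
            load = sum(x for x, r in zip(rates, routes) if l in r)
            prices[l] = max(0.0, prices[l] + gamma * (load - capacities[l]))
    return rates, prices

# Two-link line network: flow 0 crosses both links, flows 1 and 2 use one each.
# Proportional fairness gives the long flow 1/3 and each short flow 2/3.
routes = [{0, 1}, {0}, {1}]
rates, prices = num_dual(routes, capacities=[1.0, 1.0])
```

The appeal of this decomposition is that it is fully distributed: sources see only the total price of their own path, and links see only their own load, mirroring how congestion control actually operates in the network.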
Users of the Internet establish end-to-end connections to support data transfers. The traffic rates obtained by these connections are the result of a global resource allocation involving multiple layers of control: flow control, routing, medium access control, and physical-layer adaptation. Research over more than a decade has provided mathematical models that help us understand the stability and fairness of this allocation among connections.
The generation of connections is, however, itself a dynamic process, and therefore also subject to stability and fairness concerns. One model of connection dynamics is in the realm of queueing theory, where connections are generated randomly between the many network endpoints, bringing a certain random file size to be transported. In this talk we will discuss recent results that characterize the stability region of this queueing system, for general file-size distributions commonly observed in practice.
Another viewpoint for the generation of connections is as a strategic game: users can actively generate parallel connections to obtain a higher share of the underlying bandwidth resources, making fairness between connections a moot point. This behavior indeed occurs with certain greedy applications, and if generalized can have negative consequences. In this talk we will advocate a "user-centric" notion of fairness, where fairness is framed in terms of aggregate rates per user, and propose admission control mechanisms to achieve this allocation in a distributed way across a network.
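The optimization viewpoint behind these talks can be illustrated with a minimal network utility maximization sketch (the single-link topology, log utilities, and step size are assumptions for illustration): each source maximizes its utility minus bandwidth cost, while the link updates a congestion price by dual gradient ascent.

```python
import numpy as np

c = 1.0        # link capacity (assumed)
n = 3          # number of flows sharing the link
p = 1.0        # congestion price (dual variable)
step = 0.1
for _ in range(500):
    x = np.full(n, 1.0 / p)                    # each source solves max log(x) - p*x  =>  x = 1/p
    p = max(p + step * (x.sum() - c), 1e-6)    # link raises its price when demand exceeds capacity
print(x, p)
```

At the optimum the price settles near n/c and each flow receives the proportionally fair share c/n, the allocation implicitly computed by TCP-like congestion control in this modeling framework.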
Distinguished Lecturer
Abstract: In the past, manipulators, machine tools, measurement devices, and many other systems were designed with rigid structures and operated at relatively low speeds. With an increasing demand for fuel efficiency, smaller actuators, and speed, lighter-weight materials are now often used in the construction of systems, making them more flexible. Flexible structures are also prevalent in space systems, where lightweight materials are necessitated by fuel efficiency when carrying the structures into space. Achieving high-performance control of flexible structures is a difficult task, but one that is now critical to the success of many important applications, ranging from the shuttle remote manipulator system, satellites, wind turbines, robot manipulators, gantry cranes, and disk drives to atomic force microscopes. The unwanted vibration that results from maneuvering a flexible structure often dictates the limiting factors in the performance and lifespan of the system.
We will discuss combined feedforward and feedback architectures and algorithms for controlling flexible structures. Depending upon the particular performance goals, such as tracking accuracy in a trajectory-following task or rapid settling time for a point-to-point motion, there are different requirements for the controller. In many applications, the actuators and sensors are separated by the flexible structure, leading to nonminimum phase characteristics that are challenging for control. Over the last few decades, many feedback and feedforward control methods have been developed for flexible structures. We will overview and compare several of these control methods and highlight recent developments and results. We will also present advances in a few application areas that have been achieved through better control of inherently flexible structures. Finally, we shall close by discussing a number of future challenges.
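One classical feedforward technique in this family is zero-vibration (ZV) input shaping, in which the command is convolved with two impulses spaced half a damped period apart so their residual vibrations cancel. The sketch below (the mode frequency and damping are assumed values, and the talk covers a much broader toolbox) compares the residual vibration of a lightly damped mode for a raw step versus a shaped step:

```python
import numpy as np

wn, zeta = 2 * np.pi, 0.05            # flexible-mode frequency and damping (assumed)
wd = wn * np.sqrt(1 - zeta**2)
K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
A1, A2, t2 = 1 / (1 + K), K / (1 + K), np.pi / wd   # ZV shaper amplitudes and delay

def simulate(shaped, t_end=6.0, dt=1e-3):
    n = int(t_end / dt)
    x = v = 0.0
    out = np.empty(n)
    for i in range(n):
        t = i * dt
        u = (A1 if t < t2 else 1.0) if shaped else 1.0   # step convolved with the shaper
        v += dt * (wn**2 * (u - x) - 2 * zeta * wn * v)  # semi-implicit Euler integration
        x += dt * v
        out[i] = x
    return out

tail = slice(int(3.0 / 1e-3), None)   # residual vibration after the command settles
unshaped_resid = np.max(np.abs(simulate(False)[tail] - 1.0))
shaped_resid = np.max(np.abs(simulate(True)[tail] - 1.0))
print(unshaped_resid, shaped_resid)
```

The shaped command reaches the same setpoint while leaving almost no residual oscillation; the price is the added half-period of command delay.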
Abstract: In many applications, such as tactical defense, unmanned aerial vehicles, and mobile robotics, multiple sensors are used to track objects and assess the environment. Multiple sensors provide large amounts of data with which to detect, track, and identify targets of interest. Using different types of sensors to obtain information allows the strengths of one sensor type to compensate for the weaknesses of another and further provides redundancy, thereby increasing system robustness.
In this talk, we will review a few multisensor fusion algorithms for tracking applications that combine measurements from multiple sensors in a consistent manner. We will then discuss some selected recent research results in developing effective methods of managing sensor resources, deriving and extending sensor fusion algorithms for distributed processing architectures, developing techniques that allow complex multisensor fusion algorithms to be evaluated and compared efficiently, and formulating methods for detecting track loss in the absence of truth data.
Abstract: Haptic interfaces enable users to feel, touch, and manipulate remote or virtual objects, and as such, haptic interfaces can facilitate human-computer and human-machine interaction in a wide range of applications ranging from scientific visualization to teleoperation to laparoscopic surgery. In this talk, we will give examples of haptic interfaces from around the world, including those we have developed in our own lab. Limitations and capabilities of current haptic interfaces will be discussed. We will also outline a number of applications of haptic interfaces, ranging from low-end applications (vibrotactile mice, joysticks) to high-end applications (medical/rehabilitation, scientific visualization). Throughout the talk, we will highlight some of our work in two areas: (1) investigating the use of haptic interfaces for scientific visualization of complex multidimensional data, as well as (2) developing low-cost yet high-quality multi-degree-of-freedom haptic interfaces in the hopes of expanding haptic interfaces to an even broader range of applications.
Distinguished Lecturer
Semidefinite programming has grown into an important computational tool for attacking problems in all areas of control. In this overview we discuss the basic ideas of how to translate stability, performance, and robust-performance objectives into the framework of linear matrix inequalities. Throughout the presentation particular emphasis is put on an investigation of those system interconnection structures that are amenable to convex optimization for controller design.
Estimators serve to reconstruct information about the internal behavior of a dynamical system on the basis of measurements that are corrupted by noise. Based on a tutorial introduction to classical Kalman filtering, we review the relevance of optimal estimator synthesis for modern control applications. This serves as a foundation for illustrating the key steps in translating optimal estimator synthesis into a semidefinite program. In practical applications, models of dynamical systems are never precise. In the last part of the talk we will demonstrate how various types of system-model mismatches can be captured by robust semidefinite programming or by integral quadratic constraints.
Various interesting optimization-based controller synthesis problems can be translated into semidefinite programs, which can in turn be solved rather efficiently. In this tutorial presentation we motivate why linear matrix inequalities naturally appear when addressing questions of stability and performance for linear control systems. If the parameters describing the optimization problems are not precisely known, it is argued why it is necessary to solve so-called robust linear matrix inequalities. In the more technical part of the presentation we will reveal how to systematically construct approximations of robust linear matrix inequalities on the basis of so-called sum-of-squares decompositions (related to Hilbert's famous 17th problem), with the benefit that the relaxation error can be made arbitrarily small.
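A minimal numerical glimpse of the stability certificates behind these LMIs: for a Hurwitz matrix A, the Lyapunov LMI AᵀP + PA ≺ 0, P ≻ 0 is feasible, and a feasible point can be produced by solving the corresponding Lyapunov equation (a dedicated SDP solver would handle the general synthesis LMIs; the matrix below is an assumed example, not from the talk).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example: eigenvalues -1 and -2 (Hurwitz)
Q = np.eye(2)
# Solve A^T P + P A = -Q; a positive definite P is a feasible point of the Lyapunov LMI
# and certifies asymptotic stability of dx/dt = A x.
P = solve_continuous_lyapunov(A.T, -Q)
eigs = np.linalg.eigvalsh(P)
print(eigs)
```

The same feasibility question, posed with additional decision variables (controller gains, multipliers), is what the synthesis LMIs in the talk optimize over.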
Distinguished Lecturer
Consensus problems have attracted significant attention in the control community over the last decade. They act as a rich source of new mathematical problems pertaining to the growing field of cooperative and distributed control. The talk focuses on consensus problems whose underlying state space is not a linear space, but instead a highly symmetric nonlinear space such as the circle and several other relevant generalizations. A geometric approach is shown to highlight the connection between several fundamental models of consensus, synchronization, and coordination, to raise significant global convergence issues not captured by linear models, and to be relevant for a number of engineering applications, including the design of coordinated motions in the plane or in three-dimensional space. Finally, nonlinear considerations on the original consensus problem defined in the positive orthant shed light on the special role of nonquadratic Lyapunov functions in this framework.
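A minimal sketch of consensus on the circle (the complete interaction graph, coupling, and initial spread are assumptions for illustration): each agent moves along the circle toward its neighbors, and for initial phases contained in a half-circle the phases synchronize. With wider initial spreads the global convergence issues mentioned above appear, which is precisely where the linear theory stops applying.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
theta = rng.uniform(-1.0, 1.0, n)   # initial phases, contained in a half-circle
dt = 0.01
for _ in range(5000):
    # each agent moves toward its neighbors along the circle (complete graph coupling)
    theta = theta + dt * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
print(np.ptp(theta))   # spread of the final phases
```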
The talk is an introduction to a recent computational framework for optimization over the set of fixed-rank positive semidefinite matrices. The foundation is geometric and the motivation is algorithmic, with a bias towards low-rank computations in large-scale problems. Special attention is given to two quotient Riemannian geometries that are rooted in classical matrix factorizations and that lead to rank-preserving efficient computations in the cone of symmetric positive definite matrices. The field of applications is vast, and the talk surveys recent developments that illustrate the potential of the approach in large-scale computational problems encountered in control, optimization, and machine learning. The talk is introductory and requires no particular background in Riemannian geometry.
Modern scientific exploration is increasingly based on distributed technologies. These include extremely large telescopes made of thousands of individual mirrors, sensor networks sharing hundreds of spatially distributed measurements in the ocean, microarrays allowing for the simultaneous measurement of thousands of gene expressions in a single cell, and the simultaneous acquisition of thousands of diffusion tensors (one per voxel) in brain imaging.
Distributed technologies overcome fundamental hardware limitations at the price of formidable computational challenges. These challenges raise new algorithmic questions that involve a mix of statistical, machine learning, and optimization tools.
This non-technical talk illustrates some of these challenges and open questions through a journey across three concrete research projects, suggesting that geometry plays a fundamental role in making those computational problems tractable.
Distinguished Lecturer
Air-breathing hypersonic vehicles (HSVs) are intended to be a reliable and cost-effective technology for access to space. In the past few years, a considerable research effort has been spent to further their development and design. Notwithstanding the recent success of NASA's X-43A and AFRL's X-51 experimental vehicles, the design of robust guidance and control systems for HSVs is still an open problem, due to the complexity of the dynamics and the unprecedented level of coupling between the airframe and the propulsion system. The slender geometries and light structures required for these aircraft cause significant flexible effects, and a strong coupling between propulsive and aerodynamic forces results from the integration of the scramjet engine. Because of the variability of the vehicle characteristics with flight conditions, significant uncertainties affect the models, which are known to be unstable and nonminimum phase with respect to the regulated output. Finally, the presence of unavoidable constraints on the control inputs renders the design of robust control systems an even harder endeavor, especially where control of the propulsion system is concerned. In this talk, we give an overview of fundamental issues in control-oriented modeling and control system design for HSVs, and present an account of the state of the art in nonlinear and adaptive control methodologies that in the past few years have been developed and applied to address these issues. In particular, we present a flight control system architecture that comprises a robust adaptive inner-loop controller and a self-optimizing guidance system. Finally, we discuss open problems and current research directions.
An established benchmark problem in active aerodynamic flow control is the suppression of pressure oscillations induced by flow over a shallow cavity. In this talk, we summarize the research activity of the Gas Dynamics and Turbulence Lab at The Ohio State University that has been devoted in the past few years to the design and experimental evaluation of model-based controllers for subsonic cavity flows. Proper orthogonal decomposition and Galerkin projection techniques are used to obtain a reduced-order model of the flow dynamics from experimental data. The model is made amenable to control design by means of an optimization-based control separation technique, which makes the control input appear explicitly in the equations. The design of a two-DOF controller based on the reduced-order model and its experimental validation are presented. An inner-loop controller based on mixed-sensitivity minimization with delay compensation is used to linearize the actuator response. An adaptive outer-loop controller based on extremum-seeking optimization is employed to suppress the cavity tones from pressure measurements. Experimental results, in qualitative agreement with the theoretical analysis, show that the controller achieves a significant attenuation of the resonant tone with a redistribution of the energy into other frequencies. The benefits of parameter adaptation over controllers of fixed structure under varying or uncertain flow conditions are also demonstrated experimentally.
Internal-model-based control for nonlinear systems has experienced vigorous growth in the past decade. The classic solution relies on augmenting the plant model with a suitable "nonlinear copy" of the model of the disturbance generator, and, as a second stage, on designing a robust stabilizing unit that achieves stabilization of an error-zeroing manifold with prescribed domain of attraction. The closely related approach of designing the stabilizing unit first, and then looking for a suitable adaptive feedforward control to offset the disturbance (a technique known as AFC in linear systems) has received little or no attention for nonlinear systems. In this talk, we investigate the problem of adaptive feedforward compensation for a class of nonlinear systems, namely that of input-to-state (and locally exponentially) convergent systems. It is shown how, under a set of assumptions reminiscent of those found in the LTI literature, the proposed scheme succeeds in achieving rejection of a harmonic disturbance at the input of a convergent nonlinear system, with a semiglobal domain of convergence. The suitability of the proposed solution is demonstrated by combining classic results from averaging analysis with modern techniques for semiglobal stabilization.
Distinguished Lecturer
The talk will discuss the contrasting possibilities of active and passive control both abstractly and in the context of automotive suspensions. It will be shown how systems and control thinking can highlight design tradeoffs and suggest new approaches which otherwise can remain hidden. The expanded possibilities for passive control using the "inerter" mechanical device will be discussed. The talk will be informed by examples of practice in Formula One racing.
The motivation is explained for revisiting certain questions in circuit theory, in particular, the synthesis theory of "one-port" (i.e., with a pair of external driving terminals) RLC networks and the questions of minimality associated with the Bott-Duffin procedure. The motivation relates to the synthesis of passive mechanical impedances and the need for a new ideal modelling element, the "inerter".
Classical results from electrical circuit synthesis are reviewed, including the procedures of Foster, Cauer, Brune, Darlington, Bott and Duffin. The reactance theorem of Foster for lossless networks and the Bott-Duffin construction for arbitrary positive-real functions are highlighted.
Recent work on classical network synthesis is described. This includes the new concept of regularity for positive-real functions and its use to aid the classification of low-complexity networks. The important theorem of Reichert will be described, which proves that non-minimality in the sense of modern systems theory is essential for the RLC realisation of some positive-real functions. New results on the procedure of Bott-Duffin will be presented.
Distinguished Lecturer
Sliding mode control is a practically realisable nonlinear control strategy which yields robust performance in the presence of uncertainty. Interesting properties result from allowing the control signal to switch; for example, total invariance of the system response to a substantial class of parameter variations and external disturbance signals is possible. Dynamic performance requirements are met by prescribing a dynamic system which exhibits the ideal performance required from the plant. An appropriate discontinuous control signal is then selected to ensure the trajectories of the system of interest are attracted to this ideal dynamics. This lecture will first provide an introduction to the basic properties of sliding mode control. The sliding mode controller design paradigm will be reviewed, and it will be shown how designers can select the ideal performance for given problem classes and how the control can be selected to ensure the ideal dynamics is attained and maintained. The lecture will briefly review certain topics of current research interest in the sliding mode control research community, including a brief introduction to the higher-order sliding mode control concept. In conclusion, the results of recent implementation studies will be presented to demonstrate both the practical issues associated with controller implementation and the merit of the proposed approach.
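The two design steps described above, choosing an ideal sliding dynamics and then a switching control that enforces it, can be sketched for a double integrator with a bounded matched disturbance (the surface slope, switching gain, and disturbance below are assumed values for illustration):

```python
import numpy as np

lam, k = 1.0, 2.0     # surface slope and switching gain; k exceeds the disturbance bound
x, v = 1.0, 0.0
dt = 1e-4
for i in range(100000):                   # 10 s of simulated time
    d = 0.5 * np.sin(100 * i * dt)        # bounded matched disturbance, |d| <= 0.5
    s = v + lam * x                       # sliding variable: s = 0 is the ideal dynamics
    u = -lam * v - k * np.sign(s)         # switch to reach s = 0 and stay there despite d
    v += dt * (u + d)                     # double integrator: x'' = u + d
    x += dt * v
print(abs(x), abs(v))
```

Once on s = 0 the state obeys the prescribed first-order dynamics x' = -lam*x regardless of the disturbance, which is the invariance property the abstract refers to; the rapid switching visible in simulation is the chattering that higher-order sliding mode methods aim to mitigate.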
Historically, the sliding mode technique developed as a robust control method characterised by a suite of feedback control laws and a decision rule. The decision rule, termed the switching function, has as its input some measure of the current system behaviour and produces as an output the particular feedback controller which should be used at that instant in time. The concept of sliding mode observers came later. These observers have unique properties, in that the ability to generate the so-called sliding motion on the error between the measured plant output and the output of the observer ensures that a sliding mode observer produces a set of state estimates that are precisely commensurate with the actual output of the plant. It is also the case that the average value of the applied observer injection signal, the so-called equivalent injection signal, contains useful information about the mismatch between the model used to define the observer and the actual plant. These unique properties, coupled with the fact that the discontinuous injection signals which were perceived as problematic for many control applications pose no disadvantage in software-based observer frameworks, have generated a groundswell of interest in sliding mode observer methods in recent years. This lecture presents an overview of both linear and nonlinear sliding mode observer paradigms. The use of the equivalent injection signal in problems relating to fault detection and condition monitoring is demonstrated. A number of application-specific results are also described.
Distinguished Lecturer
Research interests in Boolean networks (BNs) have been motivated by the large number of natural and artificial systems whose describing variables display only two distinct configurations, and hence take only two values.
Originally introduced to model simple neural networks, BNs have recently proved to be suitable to describe and simulate the behavior of genetic regulatory networks. As a further application area, BNs have also been used to describe the interactions among agents and hence to investigate consensus problems.
BNs are autonomous systems, since they evolve as automata, whose dynamics is uniquely determined once the initial conditions are assigned.
On the other hand, when the network behavior depends also on some (Boolean) control inputs, the concept of BN naturally extends to that of Boolean control network (BCN).
In the last decade, D. Cheng and coworkers have developed an algebraic framework to deal with both BNs and BCNs. The main idea underlying this approach is that a Boolean network with n state variables exhibits 2^n possible configurations, and if any such configuration is represented by means of a canonical vector of size 2^n, all the logic maps that regulate the state updating can be equivalently described by means of 2^n x 2^n Boolean matrices.
As a result, every Boolean network can be described as a discrete-time linear system.
In a similar fashion, a Boolean control network can be converted into a discrete-time bilinear system or, more conveniently, it can be seen as a family of BNs, each of them associated with a specific value of the input variables, and in that sense it represents a switched system. As a consequence of this algebraic setup, logic-based problems can be converted into algebraic problems and hence solved by resorting to the standard mathematical tools available for linear state-space models.
In this talk, by following this stream of research, we first address and characterize observability and reconstructibility of Boolean networks. Then, we extend this analysis to the class of BCNs. Finally, we address the problem of designing a state observer for a BCN.
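The canonical-vector construction described above can be sketched on a two-node example (the update rules are an assumed toy network, and for simplicity the transition matrix is assembled directly rather than via the semi-tensor product used in the talk):

```python
import numpy as np
from itertools import product

# Assumed toy BN: x1(t+1) = x1 AND x2,  x2(t+1) = x1 OR x2
def step(x1, x2):
    return (x1 and x2, x1 or x2)

states = list(product([1, 0], repeat=2))   # order the 2^n configurations as vector indices
idx = {s: i for i, s in enumerate(states)}
m = len(states)
L = np.zeros((m, m), dtype=int)
for s in states:
    L[idx[step(*s)], idx[s]] = 1           # column idx[s] is the canonical vector of its successor

# The logic dynamics is now linear: x(t+1) = L x(t), with x a canonical (one-hot) vector
x = np.zeros(m, dtype=int)
x[idx[(1, 0)]] = 1
x = L @ x
print(states[int(np.argmax(x))])
```

Each column of L holds exactly one 1, so powers of L enumerate trajectories, which is what makes observability and reconstructibility questions amenable to linear-algebraic tests.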
Research interests in Boolean networks (BNs) have been motivated by the large number of natural and artificial systems whose describing variables display only two distinct configurations, and hence take only two values.
Originally introduced to model simple neural networks, BNs have recently proved to be suitable to describe and simulate the behavior of genetic regulatory networks. As a further application area, BNs have also been used to describe the interactions among agents and hence to investigate consensus problems.
BNs are autonomous systems, since they evolve as automata, whose dynamics is uniquely determined once the initial conditions are assigned. On the other hand, when the network behavior depends also on some (Boolean) control inputs, the concept of BN naturally extends to that of Boolean control network (BCN).
In the last decade, D. Cheng and coworkers have developed an algebraic framework to deal with both BNs and BCNs. The main idea underlying this approach is that a Boolean network with n state variables exhibits 2^n possible configurations, and if any such configuration is represented by means of a canonical vector of size 2^n, all the logic maps that regulate the state updating can be equivalently described by means of 2^n x 2^n Boolean matrices.
As a result, every Boolean network can be described as a discrete-time linear system. In a similar fashion, a Boolean control network can be converted into a discrete-time bilinear system or, more conveniently, it can be seen as a family of BNs, each of them associated with a specific value of the input variables, and in that sense it represents a switched system.
As a consequence of this algebraic setup, logic-based problems can be converted into algebraic problems and hence solved by resorting to the standard mathematical tools available for linear state-space models.
The optimal control of BCNs has been recently addressed in a few contributions.
The interest in the optimal control problem for BCNs arises primarily, but not exclusively, from two research areas: game theory and biological systems (e.g., a mammalian cell cycle network).
In this talk we address the optimal control problem for BCNs, assuming a cost function that depends on both the state and the input values at every time instant. We first consider the finite-horizon optimal control problem. By resorting to the semi-tensor product, the original cost function is rewritten as a linear one, and the problem solution is obtained by means of a recursive algorithm that represents the analogue for BCNs of the difference Riccati equation for linear systems. Several optimal control problems for BCNs are shown to be easily reframed into the present setup. In particular, the cost function can be adjusted so as to include penalties on the switchings, provided that the size of the BCN state variable is suitably augmented.
We finally address the infinite-horizon optimal control problem and provide necessary and sufficient conditions for its solvability. The solution is obtained as the limit of the solution over the finite horizon [0,T], and it is always achieved in a finite number of steps.
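The backward recursion underlying the finite-horizon solution can be sketched on a one-bit toy BCN (the dynamics, stage cost, and horizon below are assumptions; the actual machinery in the talk operates on the 2^n canonical-vector representation):

```python
# Finite-horizon optimal control of an assumed toy BCN by backward recursion:
# dynamics x(t+1) = x XOR u, stage cost c(x, u) = 2*x + u, horizon T.
T = 5
f = lambda x, u: x ^ u
c = lambda x, u: 2 * x + u
V = {0: 0, 1: 0}                    # terminal cost (zero)
policy = []
for _ in range(T):
    newV, pi = {}, {}
    for x in (0, 1):
        # value iteration step: minimize stage cost plus cost-to-go over the Boolean input
        costs = {u: c(x, u) + V[f(x, u)] for u in (0, 1)}
        u_star = min(costs, key=costs.get)
        newV[x], pi[x] = costs[u_star], u_star
    V = newV
    policy.insert(0, pi)
print(V, policy[0])
```

With the full state horizon long enough, the optimal policy pays one unit of input cost from x = 1 to jump to the cheap state x = 0, and the value function stops changing, which mirrors the finite-step convergence of the infinite-horizon solution mentioned above.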
Distinguished Lecturer
As far back as 1963, Benoit Mandelbrot (who sadly passed away just a few weeks ago) pointed out that asset price movements in the real world don't follow the Gaussian distribution. Instead they are "heavy-tailed"; that is, they display a kind of self-similarity and scale-invariance. Since then, similar patterns have been observed in extreme weather such as rainfall, and more recently, in Internet traffic. Recent research in "pure" probability theory shows that heavy-tailed random variables have some very unusual properties. For instance, if we average many observations of such variables, the averages move in a few large bursts instead of moving smoothly. Such behavior has indeed been observed in the stock market.
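The bursty behavior of heavy-tailed averages is easy to reproduce numerically. The sketch below (the distributions, tail index, and sample size are choices for illustration) compares the largest single move of the running average for a Pareto sample against a light-tailed sample with the same mean:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100000
heavy = rng.pareto(1.5, n) + 1.0   # Pareto, tail index 1.5: finite mean (= 3), infinite variance
light = rng.exponential(3.0, n)    # light-tailed comparison with the same mean

def max_move(sample):
    # largest single step taken by the running average, after a burn-in period
    avg = np.cumsum(sample) / np.arange(1, len(sample) + 1)
    return np.abs(np.diff(avg[1000:])).max()

heavy_max, light_max = max_move(heavy), max_move(light)
print(heavy_max, light_max)
```

A single extreme Pareto observation drags the whole average in one jump, while the exponential average drifts smoothly, exactly the contrast the talk describes.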
The pervasiveness of heavy-tailed distributions in so many diverse arenas has implications for modeling and risk mitigation. How do we design Internet traffic networks and storage servers if the volume of traffic is heavy-tailed? How do we hedge our equity positions if asset prices move in a heavy-tailed manner?
In this talk I will describe the issues involved through a combination of intuitive arguments, visualizations, and formal mathematics. My hope is to inspire practicing engineers to become familiar with this fascinating class of models, and theoretical researchers to study the many open problems that still remain.
Distinguished Lecturer
Abstract
In order to accommodate actuator failures which are uncertain in time, pattern, and value, two adaptive backstepping control schemes for parametric strict-feedback systems are presented. First, a basic design scheme built on existing approaches is considered. It is shown that, when actuator failures occur, the transient performance of the adaptive system cannot be adjusted by changing the controller design parameters. Then, based on a prescribed performance bound (PPB) which characterizes the convergence rate and maximum overshoot of the tracking error, a new controller design scheme is given. It is shown that the tracking error satisfies the prescribed performance bound at all times. Simulation studies also verify the established theoretical results that the PPB-based scheme can improve transient performance compared with the basic scheme, while both ensure stability and asymptotic tracking with zero steady-state error in the presence of uncertain actuator failures.
Abstract
In most of the existing results on adaptive control of systems with actuator failures, only cases with a finite number of failures are considered. It is assumed that an actuator may fail only once and that the failure mode does not change afterwards. In this talk, we shall consider how to accommodate an infinite number of actuator failures or faults in controlling uncertain nonlinear systems based on the adaptive backstepping technique. With a newly proposed scheme, it is shown that all closed-loop signals are ensured bounded. The performance of the tracking error in the mean-square sense with respect to the frequency of failure/fault pattern changes is also established. Moreover, asymptotic tracking can be achieved when the total number of failures and faults is finite. The effectiveness of the proposed scheme is verified in an aircraft application through simulation studies.
Abstract
In the control of uncertain complex interconnected systems, decentralized adaptive control is an efficient and practical strategy, for reasons such as ease of design, familiarity, and the difficulty of exchanging information between subsystems. In this context, a local controller using only local information is designed for each subsystem while guaranteeing the stability and performance of the overall system. However, the simplicity of the design makes the analysis of the overall system quite difficult, especially when adaptive control approaches are employed to handle system uncertainties. Some results in the area will be covered in the following two parts:
Part 1: Decentralized Adaptive Control Based on Conventional Approaches
In this part, how to establish the stability for decentralized adaptive control systems without relative degree constraints on subsystems will be presented.
Part 2: Decentralized Adaptive Control Based on Backstepping Approaches
In this part, decentralized adaptive control using backstepping technique will be considered for interconnected systems with both input and output dynamic interactions. To clearly illustrate the approaches, we will start with linear systems and then extend the results to nonlinear systems. The L2 and L1 norms of the system outputs are also established as functions of design parameters. This implies that the transient system performance can be adjusted by choosing suitable design parameters.
Distinguished Lecturer
One of the main challenges in networked control systems is the analysis and synthesis of control over limited-rate feedback channels. The problem of the minimum data rate for stabilization of linear systems has attracted significant interest in the past decade, and it is now well known that under perfect communications the minimum data rate is related to the unstable eigenvalues of the open-loop system. Another important issue in networked control is the uncertainty induced by the network, such as packet losses. In this lecture, we shall discuss the minimum data rate for mean-square stabilization over lossy networks. The packet-loss process is modeled as an i.i.d. or Markov process. We show that the minimum data rate can be explicitly given in terms of the unstable eigenvalues of the open-loop system and the packet-loss rate for the i.i.d. case, or the transition probabilities of the Markov chain for the Markovian case. The number of additional bits required to counter the effect of the packet losses on stabilizability is completely quantified. We shall also discuss the problem of the minimum channel capacity for mean-square stabilization of linear systems.
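For a lossless channel, the classical data-rate theorem gives the minimum rate as the sum of log2 of the unstable open-loop eigenvalue magnitudes. A minimal computation of that baseline (the matrix is an assumed example; the talk's contribution is the extra bits required under packet loss):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 0.5]])   # assumed open-loop matrix: eigenvalues 2 and 0.5
eigs = np.linalg.eigvals(A)
# Data-rate theorem (perfect channel): stabilization requires R > sum of log2|lambda_i|
# over the unstable eigenvalues (|lambda_i| > 1) of the open-loop system.
R_min = sum(np.log2(abs(l)) for l in eigs if abs(l) > 1)
print(R_min)   # 1.0 bit per sample for this example
```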
The problem of estimation and control over a communication network has attracted recurring interest in recent years, due to the fact that there are more and more applications where communication networks are used to connect sensors, controllers, and actuators. While there are many advantages to using communication networks for transmitting data/signals, the limited data rate and network uncertainties such as packet losses pose significant challenges for analysis and design. In particular, in wireless sensor networks, power consumption is a critical design factor, and communication costs much more energy than computation. As such, there is a clear motivation for minimizing communications. In this lecture, we shall discuss issues of quantized estimation and the stability of Kalman filtering over lossy channels. For the quantized estimation problem, we focus on how to jointly design the quantizer and estimator to minimize the mean-square estimation error. Under the Gaussian assumption on the predicted density, the quantized MMSE filter is shown to have a similar form to the Kalman filter, with the raw measurement simply replaced by its quantized version. Quantization effects are explicitly quantified in terms of the number of quantization levels and quantization thresholds. The stability of the quantized estimator in relation to the system dynamics is examined. We then discuss the stability of Kalman filtering over a network subject to random packet losses, which are modeled by a time-homogeneous ergodic Markov process. Necessary and sufficient conditions for stability of the mean estimation error covariance matrices are derived by taking into account the system structure. Stability criteria are expressed by simple inequalities in terms of the largest eigenvalue of the open-loop matrix and the transition probabilities of the Markov process. Their implications and relationships with related results in the literature are discussed.
Multi-agent cooperation involves a collection of decision-making components with limited processing, limited sensing, and limited communication capabilities, all seeking to achieve a collective objective. Well-known examples include mobile sensor networks for environment monitoring and surveillance and multi-UAV (unmanned aerial vehicle) formation flight. The distributed nature of information processing, sensing, and actuation makes these applications a significant departure from the traditional centralized control system paradigm. In this lecture, we shall discuss the joint effects of agent dynamics, network topology, and communication data rate on the consensusability of linear discrete-time multi-agent systems. Neglecting the finite data-rate constraint, a necessary and sufficient condition for consensusability under a set of distributed control protocols is given, which explicitly reveals how the intrinsic entropy rate of the agent dynamics and the communication graph affect consensusability. The result is established by solving a discrete-time simultaneous stabilization problem. A lower bound on the optimal convergence rate to consensus, which is shown to be tight for some special cases, is given as well. The consensus problem under a finite communication data rate is also investigated. We shall present a systematic approach to the design of the encoder, decoder, and control protocol to achieve exact consensus. The consensus convergence rate in relation to the bit rate, network synchronizability, and the size of the network is established. The implementation of the algorithm on a real multi-robot system as well as on a virtual platform will be demonstrated.