Control Systems are Ubiquitous (2016)

Yutaka Yamamoto, 2013 IEEE CSS President

Table of Contents:
1 What is Control?
2 How Does it Work?
3 Feedback
4 Your Intuition Can Fail 
5 Various Issues

1 What is Control?

Control is ubiquitous. We live our everyday lives surrounded by all sorts of control systems, and we are, for the most part, unaware of them.

Anything that moves or changes in time has dynamics, and for such dynamics to function properly, control is needed. Control governs, or regulates, how a system behaves or functions. A helicopter would immediately fall once its controller stopped functioning. Our body is a huge collection of control systems: cells, tissues, and organs all function according to certain biological, chemical, or physical rules, and they are controlled by such rules. Without control our heart would malfunction or even cease to work, and we would stop living. All electric and electronic appliances are controlled according to some control laws. They include air-conditioners, refrigerators, TVs, radios, audio and video players, and of course, computers.

Automobiles and airplanes are full of control systems. The fuel supply of a car is now carefully controlled by a fuel injection system. An automobile engine may knock or stall if its ignition timing is not properly controlled. An airplane must control its direction and its 3-dimensional attitude (yaw, pitch, and roll angles), and advanced control mechanisms are employed to maintain its proper flight position.

Control is a fundamental discipline that underlies all our lives. The IEEE Control Systems Society aims at clarifying its far-reaching power, pursuing various new and exciting applications, and exploring yet unknown new methodologies that contribute to the advancement of human lives.

For more advanced, up-to-date examples, see our "Impact of Control Technology" page.

2 How Does It Work?

Let us look at a very simple example: throwing a baseball to a catcher. One says that a pitcher has good control if he/she can manipulate the necessary muscles to make the ball reach the target position of the catcher; else the pitcher does not have good control.

There are two major aspects of this example that differ from what we usually consider control:

  • The control is done by a human, i.e., this is manual control. The usual control schemes we consider today are automatic, and do not require human intervention.
  • The control is open-loop. That is, once the ball is thrown into the air, there is no mechanism to adjust its course in accordance with the current state of the ball.[1]

On the first point, it would be better if the control could be done automatically, without relying on any human effort. If we always had to turn the air-conditioning on and off manually, it would be quite bothersome; of course, before the advent of modern air-conditioning, we had to do something like this to cope with changes in room temperature.

This example also signifies the crucial difference between human control and automatic control. Any sensible air-conditioning system incorporates some sort of automatic sensing system that cooperates with the controller, so that we do not need to adjust the temperature setting all the time. More specifically, the system measures the room temperature and feeds it back to the system.

The idea of measuring the state of the system, the temperature in the above case, represents the crucial idea of feedback. This is central to the idea of control.

[1] There is a related concept of feedforward control, in which control can occur during the whole course of action, but there is no mechanism to correct the behavior of the system while it is in motion.

3 Feedback

The concept of feedback is central to most control systems. In the air-conditioning system mentioned above, the system measures the temperature, calculates the difference between the pre-set desired reference temperature and the current room temperature, and feeds this error back to the system, perhaps with some processing by a controller. This scheme can cope with fluctuations in the environment temperature. If the temperature becomes higher for some reason, the air-conditioning system automatically adjusts the room temperature without our interaction with the system.
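The loop just described can be sketched in a few lines of code. The room model, the gains, and the function name below are illustrative assumptions, not part of the original description; this is a minimal proportional-feedback sketch, not a real thermostat design.

```python
def simulate_thermostat(r=22.0, t_outside=30.0, steps=200, dt=0.1):
    """Measure the room temperature, form the error r - y,
    and feed it back as a cooling command (proportional feedback)."""
    y = t_outside        # room starts at the outside temperature
    k_leak = 0.1         # heat leaking in from outside (assumed)
    k_gain = 2.0         # controller gain (assumed)
    for _ in range(steps):
        e = r - y                    # error: reference minus measurement
        u = max(0.0, -k_gain * e)    # cooling power (the unit can only cool)
        # room dynamics: leakage toward the outside temperature, minus cooling
        y += dt * (k_leak * (t_outside - y) - u)
    return y
```

With these assumed numbers the loop settles close to the 22 degree reference without any human intervention; a pure proportional loop like this leaves a small residual error, which foreshadows the "offset" discussed in Section 4.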

This nifty idea clearly does not apply to baseball pitching. Once the ball is thrown into the air, there is no way to change its course. Hence if there is a disturbance, e.g., a gust of wind, after the ball is thrown, or if the pitcher realizes a mistake in the throw, there is no way to compensate for such problems. If only the ball had a brain and could control its direction by itself![2]

Feedback is a realization of this dream. If the controlled object (often called the plant) is exposed to fluctuations or disturbances, the control system automatically detects such variations and adjusts the system by feeding back a suitable control action to the plant.

The centrifugal governor by James Watt (Figure 1) is generally known as the first example of this feedback control principle.[3] It measures the rotation speed of the steam engine and regulates the supply of steam to the engine by comparing the measured speed with the pre-set rotation speed. While everything was processed mechanically, the advantage of automatic adjustment to a constant speed was obvious, and this invention readily contributed to the spectacular success known today as the Industrial Revolution.

Figure 1: Watt's Centrifugal Governor, Science Museum, London

We note here that not all automatic control systems are feedback control systems. Some simple systems, for example a bread toaster or a washing machine, are generally not of feedback type: they execute a control action according to a pre-assigned schedule, and do not respond to changes in the behavior of the system. In this sense, such control is of feedforward type. But it should be clear that any advanced automatic control should involve some kind of feedback mechanism.[4]

Figure 2 shows a typical control system, called a unity feedback system. "Unity" here means that the feedback loop has gain 1. But this can be replaced by any gain, or even a more complicated system; it is just a matter of calibration. What is important here are the following steps:

  1. calculate the error signal e(t) := r(t) - y(t), and
  2. feed e back to the plant P with some control processing C to make the error e(t) tend to zero as t goes to infinity.

Figure 2: Unity Feedback System
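The two steps above can be sketched in discrete time. As an illustrative assumption (not specified in the text), C is taken to be a pure gain K and the plant P a simple integrator:

```python
def unity_feedback(K=0.5, r=1.0, n=60):
    """Unity feedback loop: C = K (a gain), P = an integrator."""
    y = 0.0
    errors = []
    for _ in range(n):
        e = r - y        # step 1: error signal e(t) = r(t) - y(t)
        y = y + K * e    # step 2: feed Ke into the integrating plant P
        errors.append(abs(e))
    return y, errors

y, errors = unity_feedback()
# the output y approaches the step reference r = 1,
# and the error shrinks toward zero
```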

Simple as it may appear, this principle works well in many typical examples. Many day-to-day applications have the objective of tracking a constant pre-set value, and for such a purpose the above idea works by taking r to be a constant function; when one needs to alter this setting, we simply change the reference value. That is, we deal with the problem of tracking a step function. The temperature setting of an air-conditioning system, the destination-floor setting of an elevator, and the constant-speed cruising of a car are such examples.

Of course, whether this method works or not depends wholly on the choice or design of C. There is an interesting pitfall in such a case, and we conclude this short note by pointing out this problem.

[2] There is a famous opera, Der Freischütz by Carl Maria von Weber, that expresses this human desire that the bullet could control itself and hit the target no matter what happens.

[3] Governors had been used for other purposes, e.g., control of windmills. However, it is argued that such examples do not incorporate feedback, and Watt's invention is probably the first example where feedback is explicitly utilized.

[4] In the 1990s, fuzzy control was quite fashionable for some electric appliances, e.g., washing machines; but the boom seems to be gone now.

4 Your Intuition Can Fail

Let us look at a simple tracking problem. Consider the case of an elevator, in which people push a button to input the desired reference value to the system. Suppose you want to go to the 23rd floor and press button no. 23. This input is converted to an electronic signal, often a voltage, and used as the signal to be tracked. Under such a conversion, the current position (height) is represented by a voltage signal, and the control objective is to drive the current position exactly to the position that corresponds to the 23rd floor.

If the elevator accelerates up to some constant speed and then tries to stop when it reaches the 23rd floor, it cannot stop immediately, due to inertia: there is an overshoot. Hence the speed must already be zero when the elevator reaches the target floor. How can we accomplish this?

A simple idea might be to watch the error between the current position and the target position, and feed it back to the system according to the configuration of Figure 2. A natural choice is to employ a constant gain K for the controller C, multiply it by the error signal e, and apply the result to P.

Fortunately, this simple idea works. But there is a catch: the whole thing depends on how the system is constructed. Usually the input is applied to the elevator motor, whose rotation speed dθ/dt is proportional to the input Ke. Here θ is the rotational angle of the motor, which is proportional to the distance (and hence the height) that the elevator travels. The crucial fact is that the error input Ke is integrated once to become the rotational angle, and is hence proportional to the actual distance (or height) traveled.

While we omit its formal proof (it is taught in any first-year course in control, and is quite easy), the inclusion of such an integrator is necessary for tracking arbitrary step functions; and this property is robust under small variations of the plant parameters, provided that the overall stability is maintained. This is a special case of what is now known as the internal model principle.

That is, if the closed-loop system robustly tracks a family of exogenous signals, it must contain a model that generates that family of signals.
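A quick numerical sketch of this robustness claim, under assumed discrete-time dynamics: the motor integrates its input (position accumulates Ke), the controller is a constant gain K, and the plant gain b is deliberately varied to mimic parameter fluctuations. All numbers are illustrative.

```python
def track_step(K=0.4, b=1.0, r=23.0, n=200):
    """Position y driven through an integrator: y accumulates b*K*e."""
    y = 0.0
    for _ in range(n):
        e = r - y        # error between target floor and current position
        y += b * K * e   # the input Ke is integrated into position
    return y

# Because the loop contains an integrator, any steady state forces e = 0,
# and this holds across the whole range of plant gains b that keeps the
# loop stable (here 0 < b*K < 2):
finals = [track_step(b=b) for b in (0.5, 1.0, 1.5)]
```

In every case the position settles at the reference 23.0: tracking survives the variation in b, exactly as the internal model principle predicts.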

If, on the other hand, one attempts to lift the elevator by commanding a constant speed, it will always yield a tracking error called an "offset." One may calibrate the stopping point by measuring how close the elevator is to the destination, but this calibration has to be done for each floor, and can be quite cumbersome. Besides, the characteristics of the elevator can never be known exactly. Control theory gives a universal recipe for such problems, and for more general ones.
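The offset phenomenon can also be sketched numerically. Assume, purely for illustration, that the loop contains no integrator: the plant is a first-order lag and the controller a constant gain K.

```python
def track_step_no_integrator(K=3.0, a=0.8, r=23.0, n=50):
    """First-order-lag plant (no integrator in the loop):
    y[t+1] = a*y[t] + (1 - a)*K*e[t]."""
    y = 0.0
    for _ in range(n):
        e = r - y
        y = a * y + (1 - a) * K * e
    return y

# steady state: y = K*r/(1 + K) = 17.25, i.e. a constant
# offset of r/(1 + K) = 5.75 below the 23rd-floor target
```

Raising K shrinks the offset but never removes it; only an integrator in the loop drives the error all the way to zero.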

5 Various Issues 

There are various issues to be studied further. As noted above, for the step tracking problem, the configuration of Figure 2 works and tracks the assigned reference signal, provided that the system remains stable and contains an integrator in the forward loop. If there is a modeling error or a fluctuation in P, the system continues to work well provided that the above conditions are satisfied. That is, the tracking property here is robust against plant fluctuations. This is a great advantage of feedback and the internal model, which cannot be achieved from a naive point of view (for example, with feedforward control). Robustness is thus a central concern in designing a control system; without robustness, the designed system may not function when exposed to fluctuations of various kinds. In relation to the same type of concern, adaptation attempts to vary the controller C in accordance with the changes occurring in the plant P. Adaptive control thus aims at modifying the controller C as P changes.

We mentioned stability above. Examining the stability of a control system has been of central concern from the very beginning of control theory. If a system is not stable, i.e., unstable, one has to stabilize it by suitably designing a controller. Stability and stabilization have been, and remain, another central issue for control theory.

In the above example, the system is modeled as a linear system; however, most systems in the world are nonlinear. Linear models often work well in a neighborhood of an equilibrium point, but not much beyond it. What can we, and should we, do for nonlinear systems? How to handle and deal with nonlinear systems is another core research topic.

There are also linear systems whose state depends on spatial parameters. Such distributed-parameter systems are encountered in many practical situations, e.g., heat transfer and systems with transmission delays, and they possess infinite-dimensional state spaces. Studies of such systems are also of central interest to us.

All our studies depend crucially on the model we use. How can we obtain such a model? Even if there can be theoretical models based on first principles, e.g., Newton's laws, many systems are far too complex. In such cases, identification finds its role: arriving at a model based on external behavioral data. Modeling and identification are also very important elements in our explorations.

Let us briefly view other, perhaps more recent, issues. There can be systems consisting of heterogeneous components: for example, components operating on different time sets or time scales, or systems driven primarily by events that occur rather than by time. Such systems are called hybrid. Many control actions are determined by a supervisor, e.g., a computer, and the overall behavior of such systems is an active research topic.

Also some control actions may be transmitted through a network, and such networked control systems require more in-depth study of their behavior and overall stability. This is currently a very hot research topic.

Biological systems are a marvel of complex nonlinear behaviors, which integrate into a variety of functionalities. Systems biology is under active research and is expected to lead to breakthroughs in the medical field.

With all these studies and experiences coordinated, we need to find out how we can implement such control laws to make the actual system work. This more practical side of control comprises an integral part of control and a serious part of our study.

There are innumerable themes and subjects to be studied. Opportunities abound. The IEEE Control Systems Society strives for the improvement of such technologies. We await your challenge.