Modern Control Theory

Much of advanced programming is actually math!

Introduction

Some of this may look intimidating, but you're in good hands. Control theory is a branch of applied mathematics and engineering that deals with the control of dynamic systems, and it is a concept we use every day. The temperature in our rooms, the cruise control in our cars; it's technically even wired into our brains when we walk!

In robotics, this concept is crucial. To be successful in an FRC game, you need accurate control of arms, drivetrains, and elevators, to name only a few. At first, though, this concept may look scary, to say the least. That said, once understood, it is a fascinating concept that is, frankly, not too difficult to break down.

Open- and Closed-loop Control

In basic programming courses, such as the block-coding lessons on websites like Code.org, there is a character that needs to move a certain number of tiles. In that case, young programmers simply place the right number of "go" blocks and win. Applying this to robotics, when we want our robot to move a certain distance autonomously, the concept is rather straightforward: just tell the wheels to spin a certain number of times! In engineering, this is called open-loop or feedforward control.

However, it has obvious drawbacks, namely the sheer number of variables involved. A drivetrain needs time to brake and stop, so by the time it detects that it has finished, it has already overshot. In addition, friction on the wheels, tiny differences in electrical current, the surface you're on, even the air pressure in the room can introduce undesired error.

The early engineers who worked with mechanical systems realised this problem. Their solution: have the system correct itself. Enter closed-loop, or feedback, control.

Have you ever heard of a closed feedback loop? There's one running in your brain right now. Try balancing a broom on your hand. Notice how your immediate instinct is to move your hand until the broom looks balanced? The world-record holder was able to do this continuously for 44 minutes! Applied to the current discussion, a feedforward system would place the broom in a balanced position, but it would also duct-tape your arm in one spot. Not good!

Your brain works differently: without you consciously thinking about it, it locks on to the broom, looking for disturbances and constantly correcting for them. This is why it is called closed-loop, or feedback, control. Your "control loop" checks where you are and makes small corrections.

There are a few vocabulary words that you need to know.

Gain is the ratio between your output and your input; it sets how strongly the controller reacts

Setpoint is your target (in the broom's case, perfectly upright)

Error is the difference between where the system currently is (what your sensor measures) and the setpoint
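
For example, say an elevator's setpoint is 1.0 m and the encoder reports that the carriage is currently at 0.8 m. The error is 1.0 - 0.8 = 0.2 m, and the controller's whole job is to drive that number to zero.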

You survived!

Closed-loop Control in Robotics: PID Control

PID is an extremely efficient form of closed-loop control. It stands for Proportional-Integral-Derivative and uses math to work out how far you are from your target, how much error has built up along the way, and how quickly the error is changing, then puts all of this together to form an accurate climb (or descent) towards your setpoint. The real math for it looks like this:
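
$$
u(t) = K_p\,e(t) + K_i \int_0^t e(\tau)\,d\tau + K_d\,\frac{de(t)}{dt}
$$

Here u(t) is the output sent to the mechanism and e(t) is the error at time t.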

Woah. Beautiful, isn't it?

Don't be worried, though; it is a lot simpler than it looks.

The Proportional gain is the first term. The problem it solves is that there needs to be a correction for error that works everywhere. It works by applying an output that is directly proportional to the current error (say, 10% of it). In practice, this means that as the error decreases, the correction decreases with it, which helps avoid overshooting.

The Derivative gain accounts for the fact that there are still situations a proportional value alone won't solve. Proportional control always reacts to the error, but much of the time that isn't enough to avoid overshooting. Derivative takes in how quickly the error is changing and, through a value proportional to that rate, damps (and hopefully prevents) overshoot.

The Integral gain looks at how much error has accumulated over time. It sums all of that error up and checks whether the system is where it should be. Believe it or not, sometimes the system will settle just short of, or even past, where it is supposed to be. In everyday life, if you have a heavy blanket of fog ahead of you while driving, knowing how far you have already travelled can keep you from driving off a cliff, which wouldn't be so nice.
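
To make this concrete, here is a minimal, hand-rolled sketch of the three terms working together. The class and gain names are made up for illustration; on a real FRC robot you would normally reach for WPILib's built-in PIDController rather than rolling your own.

```java
// A bare-bones PID calculation, run once per control loop.
public class SimplePid {
    private final double kP, kI, kD; // the three gains you tune
    private double integral = 0.0;   // error summed up over time
    private double lastError = 0.0;  // error from the previous loop

    public SimplePid(double kP, double kI, double kD) {
        this.kP = kP;
        this.kI = kI;
        this.kD = kD;
    }

    /** Returns the output for this loop. dt is the time since the last call, in seconds. */
    public double calculate(double setpoint, double measurement, double dt) {
        double error = setpoint - measurement;        // how far we still have to go
        integral += error * dt;                       // how much error has built up over time
        double derivative = (error - lastError) / dt; // how fast the error is changing
        lastError = error;
        return kP * error + kI * integral + kD * derivative;
    }
}
```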

You survived again!

All of this lets your control system look at where it is and drive itself to the setpoint. If you would like a more detailed rundown, here is an excellent explanation for you:

PID Control - A brief introduction

Applying Feedforward to a Closed-loop Control System

Much of the research done in control engineering goes into predicting the values and disturbances that introduce error. In its advanced forms, this can start dealing with partial derivatives, but we don't need to worry about that!

This is because, as excellent as it is, we really want to minimise the load on a closed-loop control system. There are a few reasons for this, namely that a feedback loop like PID can only correct for error once the system is already behind. This adds a layer of not only uncertainty but also time delay, which is far from ideal in a system built for accuracy.

To demonstrate this, we know a setpoint can and will change, and in addition there will be disturbances in the system. For instance, in high-precision elevator and arm subsystems where gravity plays a large part, those forces are best counteracted by telling the system to account for them before they take effect. This can also be achieved by increasing the integral gain of a closed-loop system, but that approach is less stable overall.

As a more intuitive way of explaining this, think back to the broom-balancing example: if there were a gust of wind, we wouldn't want to have to react to that gust, but rather predict it before it happens. If we know when and where it will blow, we can compensate exactly when it does. Now that's impressive!

Feedforward can be incredibly powerful when coupled with feedback control. If we have an accurate model of our system, we can predict most of the output the system needs ahead of time, with feedback control only correcting for the small errors that remain. This results in a reliable, capable system.

Unfortunately there isn't a single intuitive equation that covers every mechanism. However, the feedforward constants most commonly used are listed here, with a sketch of how they combine after the list:

kS, the power needed to overcome static friction. This is a friction force that must be overcome to get the system moving at all.

kV, the power needed to sustain a specific velocity; in other words, the ratio of voltage to velocity. Remember that in brushless motors, steady-state velocity is roughly proportional to the applied voltage, so any given voltage has a correlated velocity.

kA, the power required to produce a specific acceleration; a coefficient relating voltage to acceleration.

kG, the power needed to counteract gravity (important for arms and elevators).
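
To show how these constants combine, here is a hedged sketch of an elevator feedforward added on top of the SimplePid class from the PID sketch above. Every class name and constant value below is a placeholder rather than a tuned number; WPILib ships ElevatorFeedforward (and friends such as ArmFeedforward and SimpleMotorFeedforward) implementing these models, so on a real robot you would normally use those instead.

```java
// Hedged sketch of an elevator feedforward plus a small PID correction.
// All constants below are placeholders, not tuned values.
public class ElevatorControlSketch {
    static final double kS = 0.20; // volts to overcome static friction
    static final double kG = 0.80; // volts to hold the carriage against gravity
    static final double kV = 1.50; // volts per metre-per-second of velocity
    static final double kA = 0.10; // volts per metre-per-second-squared of acceleration

    /** Predicted voltage for a desired velocity and acceleration. */
    static double feedforward(double velocity, double acceleration) {
        return kS * Math.signum(velocity) // friction opposes whichever way we move
             + kG                         // gravity always pulls the carriage down
             + kV * velocity              // voltage to sustain the desired velocity
             + kA * acceleration;         // extra voltage to accelerate
    }

    public static void main(String[] args) {
        SimplePid pid = new SimplePid(2.0, 0.0, 0.1); // made-up gains
        double setpointVelocity = 1.0;                // m/s we want
        double measuredVelocity = 0.9;                // m/s the encoder reports
        double dt = 0.02;                             // 20 ms loop time

        // Feedforward does the heavy lifting from the model alone;
        // feedback only cleans up the small error that remains.
        double volts = feedforward(setpointVelocity, 0.0)
                + pid.calculate(setpointVelocity, measuredVelocity, dt);
        System.out.println("Commanded voltage: " + volts);
    }
}
```

Notice how the feedforward term supplies most of the voltage from the model alone, while the feedback term only has to handle the leftover error.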

You survived level 3! I didn't though :(

By understanding what we can predict, we can get where we want far more easily. For a more detailed explanation, the same source has an excellent page on this as well:

What Is Feedforward Control?

The Bang-Bang Controller

DON'T USE IT AND DON'T SEARCH IT UP

(If you must know: a bang-bang controller simply slams the output between full power and off depending on which side of the setpoint the system is on. It can work for simple flywheels, but it is rough on most mechanisms, which is why you shouldn't use it.)
