## Lectures of Roger W. Brockett: Nonlinear Control for Physical Systems (M15 of EECI IGSC School 2014)

**Nonlinear control** theory is the area of control theory which deals with systems that are nonlinear, time-variant, or both. Control theory is an interdisciplinary branch of engineering and mathematics that is concerned with the behavior of dynamical systems with inputs, and how to modify the output by changes in the input using feedback. The system to be controlled is called the "plant". To make the output of a system follow a desired reference signal, a controller is designed that compares the output of the plant to the desired output and provides feedback to the plant to bring the output closer to the desired output. Control theory is divided into two branches:

## Contents

- Lectures of Roger W. Brockett: Nonlinear Control for Physical Systems (M15 of EECI IGSC School 2014)
- ODE phase diagrams
- Properties of nonlinear systems
- Analysis and control of nonlinear systems
- Nonlinear feedback analysis – The Lur'e problem
- Absolute stability problem
- Popov criterion
- Frobenius theorem
- References

Linear control theory applies to systems made of linear devices, meaning they obey the superposition principle: the output of a device is proportional to its input. Systems with this property are governed by linear differential equations. A major subclass is systems which in addition have parameters that do not change with time, called *linear time invariant* (LTI) systems. These systems are amenable to powerful frequency-domain mathematical techniques of great generality, such as the Laplace transform, Fourier transform, Z transform, root locus, Bode plot, and Nyquist stability criterion. These lead to a description of the system using terms like bandwidth, frequency response, eigenvalues, gain, resonant frequencies, poles, and zeros, which give solutions for system response and design techniques for most problems of interest.
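As a concrete illustration, here is a minimal sketch using scipy.signal to extract poles and frequency-response data for an LTI plant; the second-order transfer function chosen below is arbitrary and purely illustrative.

```python
import numpy as np
from scipy import signal

# A hypothetical second-order LTI plant: H(s) = 1 / (s^2 + 0.5 s + 1).
H = signal.TransferFunction([1.0], [1.0, 0.5, 1.0])

# Poles determine stability: all in the open left half-plane => stable.
print("poles:", H.poles)

# Frequency response (Bode data) over a grid of frequencies.
w, mag_db, phase_deg = signal.bode(H, w=np.logspace(-1, 1, 5))
for wi, m, p in zip(w, mag_db, phase_deg):
    print(f"w={wi:6.3f} rad/s  |H|={m:7.2f} dB  phase={p:8.2f} deg")
```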

Nonlinear control theory covers a wider class of systems that do not obey the superposition principle. It applies to more real-world systems, because all real control systems are nonlinear. These systems are often governed by nonlinear differential equations. The mathematical techniques developed to handle them are more rigorous but much less general, often applying only to narrow categories of systems. These include limit cycle theory, Poincaré maps, Lyapunov stability theory, and describing functions. If only solutions near a stable point are of interest, nonlinear systems can often be linearized by approximating them with a linear system obtained by expanding the nonlinear solution in a series, after which linear techniques can be used. Nonlinear systems are also often analyzed using numerical methods on computers, for example by simulating their operation using a simulation language. Even if the plant is linear, a nonlinear controller can have attractive features such as simpler implementation, faster response, more accuracy, or reduced control energy, which justify the more difficult design procedure.
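A minimal sketch of the linearization idea, assuming a damped pendulum as the nonlinear system (the parameter values are illustrative): the Jacobian at the equilibrium gives the approximating linear system, whose eigenvalues determine local stability.

```python
import numpy as np

# Nonlinear pendulum: state x = [theta, theta_dot] (illustrative parameters).
g_over_l, damping = 9.81, 0.2

def f(x):
    theta, omega = x
    return np.array([omega, -g_over_l * np.sin(theta) - damping * omega])

# Linearize about the equilibrium x* = 0 via a central-difference Jacobian.
def jacobian(func, x0, eps=1e-6):
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (func(x0 + dx) - func(x0 - dx)) / (2 * eps)
    return J

A = jacobian(f, np.zeros(2))
print("A =\n", A)
# Eigenvalues with negative real parts => the equilibrium is locally stable.
print("eigenvalues:", np.linalg.eigvals(A))
```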

An example of a nonlinear control system is a thermostat-controlled heating system. A building heating system such as a furnace has a nonlinear response to changes in temperature; it is either "on" or "off", lacking the fine control in response to temperature differences that a proportional (linear) device would have. Therefore, the furnace is off until the temperature falls below the "turn on" setpoint of the thermostat, when it turns on. Due to the heat added by the furnace, the temperature increases until it reaches the "turn off" setpoint of the thermostat, which turns the furnace off, and the cycle repeats. This cycling of the temperature about the desired temperature is called a *limit cycle*, and is characteristic of nonlinear control systems.
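A minimal simulation of this behavior, assuming a first-order building model and illustrative parameter values: after the initial transient, the temperature settles into a sustained oscillation between the two setpoints.

```python
# Hypothetical first-order building model with on/off (relay) heating:
# dT/dt = -a*(T - T_out) + q*u, with u switched by a hysteresis thermostat.
a, T_out, q = 0.1, 5.0, 3.0          # heat-loss rate, outdoor temp, furnace power
on_at, off_at = 19.0, 21.0           # thermostat setpoints (deg C)

T, u, dt = 18.0, 1, 0.01             # start cold, furnace initially on
history = []
for _ in range(20000):
    if T < on_at:
        u = 1                        # furnace turns on below the lower setpoint
    elif T > off_at:
        u = 0                        # furnace turns off above the upper setpoint
    T += dt * (-a * (T - T_out) + q * u)
    history.append(T)

# After the transient, T oscillates between roughly on_at and off_at: a limit cycle.
print(f"min/max over second half: {min(history[10000:]):.2f} / {max(history[10000:]):.2f}")
```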

## ODE phase diagrams
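A phase diagram (phase portrait) plots the trajectories of an ODE in its state space, making equilibria and limit cycles visible. A minimal sketch of how such a diagram is generated numerically, assuming the classic Van der Pol oscillator as the example system:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Van der Pol oscillator, a standard example with a stable limit cycle:
# x' = y,  y' = mu*(1 - x^2)*y - x
mu = 1.0

def vdp(t, z):
    x, y = z
    return [y, mu * (1.0 - x**2) * y - x]

# Trajectories from several initial conditions trace out the phase diagram;
# all of them converge onto the same closed orbit (the limit cycle).
for z0 in ([0.1, 0.0], [3.0, 0.0], [0.0, -2.0]):
    sol = solve_ivp(vdp, (0.0, 30.0), z0, max_step=0.05)
    x_end, y_end = sol.y[0, -1], sol.y[1, -1]
    print(f"start {z0} -> end ({x_end:.2f}, {y_end:.2f})")
```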

## Properties of nonlinear systems

Some properties of nonlinear dynamic systems are:

- They do not follow the principle of superposition (linearity and homogeneity).
- They may have multiple isolated equilibrium points.
- They may exhibit properties such as limit cycles, bifurcations, and chaos.
- They may have finite escape time: solutions may fail to exist for all times.

## Analysis and control of nonlinear systems

There are several well-developed techniques for analyzing nonlinear feedback systems:

- Describing function method
- Phase plane method
- Lyapunov stability analysis
- Singular perturbation method
- The Popov criterion and the Zames–Falb multipliers (described in *The Lur'e problem* below)
- Center manifold theorem
- Small-gain theorem
- Passivity analysis

Control design techniques for nonlinear systems also exist. These can be subdivided into techniques which attempt to treat the system as a linear system in a limited range of operation and use (well-known) linear design techniques for each region:

- Gain scheduling

those that attempt to introduce auxiliary nonlinear feedback in such a way that the system can be treated as linear for purposes of control design (a sketch of this idea follows the list):

- Feedback linearization

and Lyapunov-based methods:

- Lyapunov redesign
- Control Lyapunov function
- Backstepping
- Sliding mode control
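As an illustration of the second idea, here is a minimal sketch of feedback linearization, assuming a damped pendulum with a torque input (the constants are illustrative): the control cancels the nonlinearity and imposes stable linear error dynamics.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pendulum: theta'' = -a*sin(theta) - b*theta' + u   (a, b hypothetical constants)
a, b = 9.81, 0.5
theta_ref = np.pi / 4   # target angle

def closed_loop(t, x):
    theta, omega = x
    # Desired linear error dynamics: e'' + 2*e' + e = 0 with e = theta - theta_ref.
    v = -2.0 * omega - (theta - theta_ref)
    # Feedback linearization: u cancels -a*sin(theta) - b*omega and injects v.
    u = a * np.sin(theta) + b * omega + v
    return [omega, -a * np.sin(theta) - b * omega + u]

sol = solve_ivp(closed_loop, (0.0, 10.0), [0.0, 0.0], max_step=0.01)
print(f"final angle: {sol.y[0, -1]:.4f} (target {theta_ref:.4f})")
```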

## Nonlinear feedback analysis – The Lur'e problem

An early nonlinear feedback system analysis problem was formulated by A. I. Lur'e. Control systems described by the Lur'e problem have a forward path that is linear and time-invariant, and a feedback path that contains a memory-less, possibly time-varying, static nonlinearity.

The linear part can be characterized by four matrices (A,B,C,D):

ẋ = Ax + Bu,  y = Cx + Du,  u = −Φ(y),

while the nonlinear part is Φ(y), a sector nonlinearity satisfying a·y² ≤ y·Φ(y) ≤ b·y² for some a < b and all y.
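A minimal simulation of this interconnection, assuming an illustrative stable linear part, D = 0 (so the feedback is explicit), and a saturation nonlinearity, which lies in the sector [0, 1]:

```python
import numpy as np

# Lur'e interconnection: x' = A x + B u, y = C x, u = -phi(y).
A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # stable linear part (illustrative)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
phi = lambda y: np.clip(y, -1.0, 1.0)      # memoryless static nonlinearity

x, dt = np.array([[2.0], [0.0]]), 0.001
for _ in range(20000):
    y = (C @ x).item()
    u = -phi(y)                            # nonlinearity in the feedback path
    x = x + dt * (A @ x + B * u)           # forward-Euler step of the linear part
print("final state:", x.ravel())           # decays toward the origin
```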

## Absolute stability problem

Consider:

- (A,B) is controllable and (C,A) is observable
- two real numbers a, b with a<b, defining a sector for function Φ

The problem is to derive conditions involving only the transfer matrix H(s) and {a,b} such that x = 0 is a globally uniformly asymptotically stable equilibrium of the system. This is known as the Lur'e problem. There are two well-known false conjectures on absolute stability:

- Aizerman's conjecture
- Kalman's conjecture

There are counterexamples to Aizerman's and Kalman's conjectures in which the nonlinearity belongs to the sector of linear stability and a unique stable equilibrium coexists with a stable periodic solution, a *hidden oscillation*.

There are two main theorems concerning the problem:

- The circle criterion
- The Popov criterion

which give sufficient conditions for absolute stability.

## Popov criterion

The sub-class of Lur'e systems studied by Popov is described by

ẋ = Ax + bu,  ξ̇ = u    (1)

y = cx + dξ    (2)

where x ∈ R^{n}; ξ, u, y are scalars; and A, b, c, d have commensurate dimensions. The nonlinear element Φ: R → R is a time-invariant nonlinearity belonging to the *open sector* (0, ∞). This means that

Φ(0) = 0,  y·Φ(y) > 0 for all y ≠ 0.

The transfer function from u to y is given by

h(s) = d/s + c(sI − A)^{−1}b.

**Theorem:** Consider the system (1)-(2) and suppose

- A is Hurwitz
- (A,b) is controllable
- (A,c) is observable
- d > 0
- Φ ∈ (0,∞)

then the system is globally asymptotically stable if there exists a number r>0 such that

inf_{ω ∈ R} Re[(1+jωr)h(jω)] > 0.
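A minimal numerical check of this condition, assuming illustrative values of (A, b, c, d) and scanning a frequency grid for several candidate values of r:

```python
import numpy as np

# Illustrative Popov-form data for h(s) = d/s + c (sI - A)^{-1} b.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # Hurwitz: eigenvalues -1 and -2
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
d = 1.0

def h(w):
    s = 1j * w
    return d / s + (c @ np.linalg.inv(s * np.eye(2) - A) @ b).item()

# For each r > 0, estimate inf_w Re[(1 + j*w*r) h(j*w)] on a frequency grid;
# a strictly positive infimum certifies the Popov condition numerically.
ws = np.logspace(-3, 3, 2000)
for r in [0.1, 0.5, 1.0, 2.0, 5.0]:
    val = min(((1 + 1j * w * r) * h(w)).real for w in ws)
    print(f"r = {r:4.1f}: inf_w Re[(1+jwr) h(jw)] ~ {val:.4f}")
```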

Things to be noted:

- The condition involves only the frequency response h(jω) of the linear part, so it can be checked graphically: since Re[(1+jωr)h(jω)] = Re[h(jω)] − rω·Im[h(jω)], the plot of ω·Im[h(jω)] against Re[h(jω)] (the Popov plot) must lie below some line of slope 1/r through the origin.
- The criterion applies only to time-invariant nonlinearities in the open sector (0, ∞).

## Frobenius theorem

The Frobenius theorem is a deep result in differential geometry. When applied to nonlinear control, it says the following: given a system of the form

ẋ = f_{1}(x)u_{1} + … + f_{k}(x)u_{k}

where f_{1}, …, f_{k} are vector fields belonging to a distribution Δ and the u_{i} are control functions, the integral curves of x are restricted to a manifold of dimension m if span(Δ) = m and Δ is an involutive distribution.
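A minimal sketch, using sympy, of the computation behind the involutivity condition: form the Lie bracket of two vector fields and check whether it remains in their span. The vector fields below are illustrative.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = sp.Matrix([x1, x2, x3])

# Two illustrative vector fields on R^3 spanning a distribution Delta.
f1 = sp.Matrix([1, 0, x2])
f2 = sp.Matrix([0, 1, 0])

# Lie bracket [f, g] = (Dg) f - (Df) g, where D denotes the Jacobian.
def lie_bracket(f, g, X):
    return g.jacobian(X) * f - f.jacobian(X) * g

br = sp.simplify(lie_bracket(f1, f2, X))
print("[f1, f2] =", br.T)

# Involutivity test: the bracket must be a pointwise linear combination of
# f1 and f2, i.e. the 3x3 matrix [f1 f2 [f1,f2]] must be rank-deficient.
M = f1.row_join(f2).row_join(br)
print("rank of [f1 f2 [f1,f2]]:", M.rank())   # rank 3 => Delta is NOT involutive
```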