In control theory, observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs. The observability and controllability of a system are mathematical duals. The concept of observability was introduced by the Hungarian-American engineer Rudolf E. Kalman for linear dynamic systems.
Definition
Formally, a system is said to be observable if, for any possible sequence of state and control vectors, the current state can be determined in finite time using only the outputs (this definition is slanted towards the state space representation). Less formally, this means that from the system's outputs it is possible to determine the behavior of the entire system. If a system is not observable, this means the current values of some of its states cannot be determined through output sensors. This implies that their value is unknown to the controller (although they can be estimated through various means).
For time-invariant linear systems in the state space representation, there is a convenient test to check whether a system is observable. Consider a SISO system with n states; the system is observable if and only if the rank of the observability matrix

O = [ C
      CA
      CA^2
      ...
      CA^(n-1) ]

is equal to n, that is, O has full rank.
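This test is easy to carry out numerically: stack C, CA, ..., CA^(n-1) and compute the rank. A minimal NumPy sketch (the double-integrator matrices below are illustrative examples, not from the text):

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1) into the observability matrix."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

def is_observable(A, C):
    """(A, C) is observable iff the observability matrix has rank n."""
    return np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]

# Illustrative example: a double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C_pos = np.array([[1.0, 0.0]])   # measuring position: observable
C_vel = np.array([[0.0, 1.0]])   # measuring only velocity: not observable

print(is_observable(A, C_pos))   # True
print(is_observable(A, C_vel))   # False
```

Measuring velocity alone fails because the initial position never influences the output, so it cannot be recovered.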
A module designed to estimate the state of a system from measurements of the outputs is called a state observer or simply an observer for that system.
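One common concrete observer is the Luenberger observer, which corrects a model prediction using the measured output. A minimal discrete-time sketch (the matrices, initial states, and gain L below are illustrative assumptions; L is chosen so that A - LC has stable eigenvalues):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # discrete-time double integrator (illustrative)
C = np.array([[1.0, 0.0]])       # only position is measured
L = np.array([[1.0], [0.25]])    # gain placing eig(A - L C) at 0.5, 0.5

x = np.array([[2.0], [-1.0]])    # true (unknown) state
xhat = np.zeros((2, 1))          # observer starts with no knowledge

for _ in range(40):
    y = C @ x                              # measurement from the true system
    xhat = A @ xhat + L @ (y - C @ xhat)   # predict and correct
    x = A @ x                              # true system evolves (no input)

# The estimation error obeys e[k+1] = (A - L C) e[k], so it decays to zero.
print(np.allclose(x, xhat, atol=1e-6))     # True
```

Because the error dynamics are governed by A - LC, the estimate converges regardless of the initial guess whenever that matrix is stable, which is possible exactly when the pair (A, C) is observable (or, more weakly, detectable).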
Observability index
The observability index v of a linear time-invariant system is the smallest natural number for which the rank of the stacked matrix stops growing: rank(O_v) = rank(O_{v+1}), where O_k is formed by stacking C, CA, ..., CA^(k-1).
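The observability index can be computed as the number of block rows of C, CA, CA^2, ... at which the rank saturates. A minimal NumPy sketch (the example matrices are illustrative):

```python
import numpy as np

def observability_index(A, C):
    """Smallest k at which rank [C; CA; ...; CA^(k-1)] stops growing."""
    blocks = [C]
    prev_rank = np.linalg.matrix_rank(C)
    for k in range(1, A.shape[0] + 1):
        blocks.append(blocks[-1] @ A)
        r = np.linalg.matrix_rank(np.vstack(blocks))
        if r == prev_rank:   # adding another block row gained nothing
            return k
        prev_rank = r
    return A.shape[0]

# Illustrative example: 3 states, 2 outputs; two block rows already
# give full rank, so the index is 2.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(observability_index(A, C))   # 2
```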
A slightly weaker notion than observability is detectability. A system is detectable if all the unobservable states are stable.
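One standard way to check detectability is a PBH-style eigenvector test (not described in the text above, but widely used): in continuous time, (A, C) is detectable iff the matrix [lam*I - A; C] has full column rank for every eigenvalue lam of A with nonnegative real part, i.e. every non-decaying mode is visible in the output. A minimal sketch with illustrative matrices:

```python
import numpy as np

def is_detectable(A, C):
    """PBH-style check: every eigenvalue of A with nonnegative real part
    must keep [lam*I - A; C] at full column rank."""
    n = A.shape[0]
    for lam in np.linalg.eigvals(A):
        if lam.real >= 0:                      # unstable or marginal mode
            M = np.vstack([lam * np.eye(n) - A, C])
            if np.linalg.matrix_rank(M) < n:   # mode is hidden from the output
                return False
    return True

# Illustrative example: state 2 decays (eigenvalue -1), state 1 grows.
A = np.array([[1.0, 0.0],
              [0.0, -1.0]])
print(is_detectable(A, np.array([[1.0, 0.0]])))  # True: unstable mode measured
print(is_detectable(A, np.array([[0.0, 1.0]])))  # False: unstable mode hidden
```

In the first case the system is detectable despite being unobservable, because the only hidden state is stable.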
Continuous time-varying system
Consider the continuous linear time-variant system

ẋ(t) = A(t) x(t) + B(t) u(t)
y(t) = C(t) x(t).

Suppose that the matrices A(t), B(t) and C(t) are given, as well as the input u(t) and the output y(t) for all t in [t0, t1]; then x(t0) can be determined to within an additive constant vector which lies in the null space of

M(t0, t1) = ∫_{t0}^{t1} φ(t, t0)^T C(t)^T C(t) φ(t, t0) dt,

where φ(t, t0) is the state-transition matrix of ẋ = A(t) x. It is possible to determine a unique x(t0) if M(t0, t1) is nonsingular; in that case the system is said to be observable on [t0, t1]. Note that the matrix M(t0, t1), called the observability Gramian, is symmetric and positive semidefinite, so nonsingularity is equivalent to positive definiteness. Once x(t0) is known, x(t) for any t in the interval follows from the transition matrix and the inputs.
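The Gramian condition can be checked numerically by quadrature. A minimal sketch for a time-invariant double integrator, where the transition matrix φ(t, 0) = I + A·t is known in closed form because A is nilpotent (the example system and interval are illustrative):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # double integrator (illustrative)
C = np.array([[1.0, 0.0]])        # position measurement

def gramian(t1, steps=2000):
    """Approximate M(0, t1) = int_0^t1 phi^T C^T C phi dt (midpoint rule)."""
    dt = t1 / steps
    M = np.zeros((2, 2))
    for k in range(steps):
        t = (k + 0.5) * dt
        phi = np.eye(2) + A * t   # exact transition matrix for nilpotent A
        M += phi.T @ C.T @ C @ phi * dt
    return M

M = gramian(1.0)
print(np.linalg.matrix_rank(M))   # 2: Gramian nonsingular, observable on [0, 1]
```

Here the exact Gramian is [[1, 1/2], [1/2, 1/3]], which is positive definite, confirming that position measurements over any window determine both position and velocity.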
Nonlinear case
Given the system

ẋ = f(x) + Σ_{j=1}^m g_j(x) u_j,    y_i = h_i(x),  i = 1, ..., p,

with state x ∈ R^n and smooth f, g_j and h_i. Now define the observation space O_s as the linear space of functions spanned by the output functions h_i together with all of their repeated Lie derivatives along f and the g_j. The system is locally observable at x_0 if the differentials of the functions in O_s span an n-dimensional space at x_0, i.e. dim dO_s(x_0) = n (the observability rank condition).

Note: the Lie derivative of a scalar function h along a vector field f is L_f h(x) = ∇h(x) · f(x), and higher-order terms such as L_g L_f h are obtained by iterating this operation.
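The rank condition can be checked at a given point by stacking the differentials of the output and its repeated Lie derivatives. A minimal sketch for a pendulum with position output (the model, the hand-computed Lie derivatives, and the test point x0 are illustrative assumptions):

```python
import numpy as np

# Pendulum: x1' = x2, x2' = -sin(x1), output y = h(x) = x1.
# Repeated Lie derivatives of h along f, computed by hand:
#   L_f h = x2,  L_f^2 h = -sin(x1)
def obs_functions(x):
    x1, x2 = x
    return np.array([x1, x2, -np.sin(x1)])   # h, L_f h, L_f^2 h

def numerical_jacobian(fun, x, eps=1e-6):
    """Central-difference Jacobian of fun at x."""
    x = np.asarray(x, float)
    cols = []
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        cols.append((fun(x + e) - fun(x - e)) / (2 * eps))
    return np.column_stack(cols)

x0 = np.array([0.3, -0.1])
dO = numerical_jacobian(obs_functions, x0)   # differentials of O_s at x0
print(np.linalg.matrix_rank(dO) == 2)        # True: locally observable at x0
```

Already the first two rows (dh = [1, 0] and dL_f h = [0, 1]) span R^2, so the pendulum is locally observable everywhere from angle measurements alone.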
Early criteria for observability in nonlinear dynamic systems were discovered by Griffith and Kumar, Kou, Elliot and Tarn, and Singh.
Static systems and general topological spaces
Observability may also be characterized for steady state systems (systems typically defined in terms of algebraic equations and inequalities), or more generally, for sets in R^n.