Surprisal analysis


Surprisal analysis is an information-theoretic analysis technique that integrates and applies principles of thermodynamics and maximal entropy. Surprisal analysis is capable of relating the underlying microscopic properties of a system to its macroscopic bulk properties. It has already been applied to a spectrum of disciplines including engineering, physics, chemistry and biomedical engineering. Recently, it has been extended to characterize the state of living cells, specifically to monitor and characterize biological processes in real time using transcriptional data.

History

Surprisal analysis was formulated at the Hebrew University of Jerusalem as a joint effort between Raphael David Levine, Richard Barry Bernstein and Avinoam Ben-Shaul in 1972. Levine and colleagues had recognized a need to better understand the dynamics of non-equilibrium systems, particularly of small systems, to which thermodynamic reasoning does not seemingly apply. Alhassid and Levine first applied surprisal analysis in nuclear physics, to characterize the distribution of products in heavy ion reactions. Since its formulation, surprisal analysis has become a critical tool for the analysis of reaction dynamics and is an official IUPAC term.

Application

Maximum entropy methods are at the core of a new view of scientific inference, allowing analysis and interpretation of large and sometimes noisy data. Surprisal analysis extends principles of maximal entropy and of thermodynamics, where both equilibrium thermodynamics and statistical mechanics are treated as inference processes. This makes surprisal analysis an effective method of information quantification and compaction, and of providing an unbiased characterization of systems. Surprisal analysis is particularly useful for characterizing and understanding dynamics in small systems, where energy fluxes that are otherwise negligible in large systems heavily influence system behavior.

Foremost, surprisal analysis identifies the state of a system when it reaches its maximal entropy, or thermodynamic equilibrium. This is known as the balance state of the system, because once a system reaches its maximal entropy it can no longer initiate or participate in spontaneous processes. Following the determination of the balance state, surprisal analysis then characterizes all the states in which the system deviates from the balance state. These deviations are caused by constraints; the constraints prevent the system from reaching its maximal entropy. Surprisal analysis is applied both to identify and to characterize these constraints. In terms of the constraints, the probability P(n) of an event n is quantified by

P(n) = P₀(n) exp[−Σ_α λ_α G_α(n)].

Here P₀(n) is the probability of the event n in the balance state. It is usually called the “prior probability” because it is the probability of an event n prior to any constraints. The surprisal itself is defined as

surprisal ≝ −ln[P(n)/P₀(n)] = Σ_α λ_α G_α(n)

The surprisal equals the sum over the constraints and is a measure of the deviation from the balance state. The deviations are ranked by their degree of deviation from the balance state and ordered from most to least influential to the system. This ranking is provided through the use of Lagrange multipliers. The most important constraint, and usually the constraint sufficient to characterize a system, exhibits the largest Lagrange multiplier. The multiplier for constraint α is denoted above as λ_α; larger multipliers identify more influential constraints. The event variable G_α(n) is the value of the constraint α for the event n. Using the method of Lagrange multipliers requires that the prior probability P₀(n) and the nature of the constraints be experimentally identified. A numerical algorithm for determining Lagrange multipliers has been introduced by Agmon et al. Recently, singular value decomposition and principal component analysis of the surprisal were utilized to identify constraints on biological systems, extending surprisal analysis to a better understanding of biological dynamics, as shown in the figure.
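The single-constraint case above can be illustrated numerically. The following is a minimal sketch, not taken from the literature: the observed probabilities, the uniform prior, and the constraint pattern G(n) are all illustrative assumptions, and the Lagrange multiplier is estimated here by a simple least-squares fit of the surprisal to G (NumPy is assumed).

```python
import numpy as np

# Illustrative data (hypothetical, for demonstration only):
# observed probabilities P(n) over four events, and a uniform
# "balance state" prior P0(n).
P = np.array([0.40, 0.30, 0.20, 0.10])
P0 = np.full(4, 0.25)

# Surprisal of each event: -ln(P(n)/P0(n)).  Positive values mark events
# suppressed relative to the balance state, negative values enhanced ones.
I = -np.log(P / P0)

# With a single constraint, surprisal(n) = lambda * G(n).  Given a candidate
# constraint pattern G (assumed known here), the multiplier lambda is the
# least-squares slope of the surprisal against G.
G = np.array([-1.0, -0.5, 0.5, 1.5])   # hypothetical constraint values
lam = np.dot(G, I) / np.dot(G, G)

# Distribution reconstructed from the single-constraint model,
# renormalized so the probabilities sum to one.
P_model = P0 * np.exp(-lam * G)
P_model /= P_model.sum()
```

When measurements are available under many conditions (e.g. time points), the surprisals form a matrix, and a singular value decomposition of that matrix plays the role of the fit above: the leading singular vectors give the dominant constraint patterns G_α(n) and their condition-dependent weights λ_α.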

Surprisal analysis in physics

Surprisal was first introduced to better understand the specificity of energy release and the selectivity of energy requirements of elementary chemical reactions. This gave rise to a series of new experiments which demonstrated that the nascent products of elementary reactions could be probed, and that the energy is preferentially released rather than statistically distributed. Surprisal analysis was initially applied to characterize a small three-molecule system that did not seemingly conform to the principles of thermodynamics; a single dominant constraint was identified that was sufficient to describe its dynamic behavior. Similar results were then observed in nuclear reactions, where different states with varying energy partitioning are possible. Often chemical reactions require energy to overcome an activation barrier; surprisal analysis is applicable to such cases as well. Later, surprisal analysis was extended to mesoscopic systems, to bulk systems and to dynamical processes.

Surprisal analysis in biology and biomedical sciences

Recently, surprisal analysis was extended to better characterize and understand cellular processes (see figure), biological phenomena and human disease, with reference to personalized diagnostics. Surprisal analysis was first utilized to identify the genes implicated in the balance state of cells in vitro; the genes most prominent in the balance state were genes directly responsible for the maintenance of cellular homeostasis. Similarly, it has been used to discern two distinct phenotypes during the epithelial–mesenchymal transition (EMT) of cancer cells.
