Superoptimization is the process of finding the optimal code sequence for a single, loop-free sequence of instructions. It is typically performed by a dedicated program called a superoptimizer, either standalone or as part of a compiler. Standard compiler optimizations only improve code incrementally: real-world compilers generally cannot produce genuinely optimal code. A superoptimizer's goal is to find the truly optimal sequence.
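A classic illustration is the sign (signum) function, for which superoptimizer-style search can discover a short branch-free sequence that a straightforward implementation misses. The sketch below is an illustrative Python rendering of that idea, not output from any particular superoptimizer:

```python
def sign_naive(x: int) -> int:
    """Straightforward branching implementation of signum."""
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

def sign_branchfree(x: int) -> int:
    """Branch-free equivalent of the kind a superoptimizer can find:
    (x > 0) and (x < 0) evaluate to 0 or 1, so their difference
    is exactly -1, 0, or 1."""
    return (x > 0) - (x < 0)
```

The branch-free form is shorter at the machine level on many architectures because it avoids conditional jumps; finding such non-obvious equivalents is the kind of result superoptimization aims for.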
The term superoptimization was coined by Alexia Massalin in her 1987 paper, Superoptimizer: a look at the smallest program. In 1992, the GNU Superoptimizer (GSO) was developed to integrate into the GNU Compiler Collection (GCC). Later work further developed and extended these ideas. In 2001, goal-directed superoptimization was demonstrated in the Denali project at Compaq Research. In 2006, answer set programming was used for superoptimization in the Total Optimisation using Answer Set Technology (TOAST) project at the University of Bath, and superoptimization was used to automatically generate general-purpose peephole optimizers.
Typically, superoptimization is performed via exhaustive brute-force search in the space of valid instruction sequences. This is a costly method, and thus impractical for general-purpose compilers. Yet, it has been shown to be useful in optimizing performance-critical inner loops.
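The brute-force approach can be sketched as follows. The toy single-accumulator instruction set, the test-input strategy, and all names here are illustrative assumptions for this sketch, not any real superoptimizer's interface; real systems additionally verify candidates formally rather than only testing them:

```python
import itertools

# Toy instruction set: each operation maps (accumulator, input) -> accumulator.
OPS = {
    "neg":   lambda a, x: -a,
    "inc":   lambda a, x: a + 1,
    "dec":   lambda a, x: a - 1,
    "add_x": lambda a, x: a + x,
    "shl1":  lambda a, x: a * 2,
}

def run(program, x):
    """Execute a sequence of opcodes; the accumulator starts holding the input."""
    acc = x
    for op in program:
        acc = OPS[op](acc, x)
    return acc

def superoptimize(spec, tests, max_len=3):
    """Exhaustively enumerate programs by increasing length and return the
    first (hence shortest) one that matches spec on all test inputs."""
    for length in range(max_len + 1):
        for program in itertools.product(OPS, repeat=length):
            if all(run(program, x) == spec(x) for x in tests):
                return program
    return None

# Search for the shortest sequence computing 2*x + 1.
prog = superoptimize(lambda x: 2 * x + 1, tests=range(-8, 9))
```

Even this toy search space grows exponentially with program length (here, 5^n candidates for length n), which is why exhaustive superoptimization is restricted to short, performance-critical sequences.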
Superoptimizer studies, documents, and several working examples are available for free download.
GNU Superoptimizer (GSO) (1992)
Total Optimisation using Answer Set Technology (TOAST) project (2006)
The Aha! (A Hacker's Assistant) Superoptimizer (and paper about it) (2006)
Stanford Superoptimizer (2006)
PIC microcontroller SuperOptimizer (2003)
Clojure superoptimizer for the Java virtual machine (2012)
A feasibility study by Embecosm (2014)
STOKE - A stochastic superoptimizer for x86-64 assembly language.
souper - A superoptimizer for programs in the LLVM intermediate representation.