μ-law algorithm

The µ-law algorithm (sometimes written "mu-law", often approximated as "u-law") is a companding algorithm, primarily used in 8-bit PCM digital telecommunication systems in North America and Japan. It is one of two versions of the G.711 standard from ITU-T, the other version being the similar A-law, used in regions where digital telecommunication signals are carried on E-1 circuits, e.g. Europe.

Companding algorithms reduce the dynamic range of an audio signal. In analog systems, this can increase the signal-to-noise ratio (SNR) achieved during transmission; in the digital domain, it can reduce the quantization error (hence increasing the signal-to-quantization-noise ratio). These SNR increases can be traded instead for reduced bandwidth at an equivalent SNR.

Algorithm types

The µ-law algorithm may be described in an analog form and in a quantized digital form.

Continuous

For a given input x, the equation for μ-law encoding is

F(x) = \operatorname{sgn}(x) \, \frac{\ln(1 + \mu |x|)}{\ln(1 + \mu)}, \qquad -1 \le x \le 1

where μ = 255 (8 bits) in the North American and Japanese standards. Note that the range of this function is −1 to 1.

μ-law expansion is then given by the inverse equation:

F^{-1}(y) = \operatorname{sgn}(y) \, \frac{1}{\mu} \left( (1 + \mu)^{|y|} - 1 \right), \qquad -1 \le y \le 1
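For illustration, here is a minimal sketch of both mappings in C, assuming input samples already normalized to the range −1 to 1; the function names mulaw_compress and mulaw_expand are illustrative rather than taken from any standard.

#include <math.h>

#define MU 255.0  /* mu = 255 in the North American and Japanese standards */

/* Continuous mu-law compression: maps x in [-1, 1] to F(x) in [-1, 1]. */
double mulaw_compress(double x)
{
    double sign = (x < 0.0) ? -1.0 : 1.0;
    return sign * log(1.0 + MU * fabs(x)) / log(1.0 + MU);
}

/* Continuous mu-law expansion: the inverse mapping F^-1(y). */
double mulaw_expand(double y)
{
    double sign = (y < 0.0) ? -1.0 : 1.0;
    return sign * (pow(1.0 + MU, fabs(y)) - 1.0) / MU;
}

In a practical codec the compressed value would then be uniformly quantized to 8 bits, which is what the discrete form below addresses.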

Discrete

The discrete form is defined in ITU-T Recommendation G.711.

G.711 is unclear about how to code the values at the limit of a range (e.g. whether +31 codes to 0xEF or 0xF0); however, G.191 provides example code in the C language for a μ-law encoder. There is also a difference between the positive and negative ranges: for example, the negative range corresponding to +30 to +1 is −31 to −2. This is accounted for by the use of 1's complement (simple bit inversion), rather than 2's complement, to convert a negative value to a positive value during encoding.
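As a concrete illustration of the segment-based discrete form, below is a minimal C sketch of an encoder for 16-bit linear PCM input, in the general style of the widely circulated public-domain g711.c; it is not the G.191 reference code, and the constant and function names are illustrative.

#include <stdint.h>

#define MULAW_BIAS 0x84    /* 132: bias added to the magnitude before the segment search */
#define MULAW_CLIP 32635   /* keeps magnitude + bias within 15 bits */

/* Encode one 16-bit linear PCM sample as an 8-bit mu-law code word. */
uint8_t linear_to_mulaw(int16_t pcm)
{
    static const int32_t seg_end[8] = {
        0xFF, 0x1FF, 0x3FF, 0x7FF, 0xFFF, 0x1FFF, 0x3FFF, 0x7FFF
    };
    int32_t magnitude;
    uint8_t mask;  /* the final XOR with this mask complements the code and sets the sign bit */

    if (pcm < 0) {
        magnitude = -(int32_t)pcm;
        mask = 0x7F;           /* negative samples map to code words 0x00..0x7F */
    } else {
        magnitude = pcm;
        mask = 0xFF;           /* positive samples map to code words 0x80..0xFF */
    }
    if (magnitude > MULAW_CLIP)
        magnitude = MULAW_CLIP;
    magnitude += MULAW_BIAS;

    /* Segment (chord) number: index of the first segment end >= the biased magnitude. */
    int seg = 0;
    while (magnitude > seg_end[seg])
        seg++;

    /* 3 segment bits and 4 quantization bits, then the complementing XOR. */
    uint8_t code = (uint8_t)((seg << 4) | ((magnitude >> (seg + 3)) & 0x0F));
    return code ^ mask;
}

The final XOR complements the code word before transmission; note that this sketch converts negative inputs by arithmetic negation for clarity, so its behavior at the range limits discussed above may differ from the G.191 reference.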

Implementation

There are three ways of implementing a μ-law algorithm:

Analog
Use an amplifier with non-linear gain to achieve companding entirely in the analog domain.
Non-linear ADC
Use an analog-to-digital converter with quantization levels which are unequally spaced to match the μ-law algorithm.
Digital
Use the quantized digital version of the μ-law algorithm to convert data once it is in the digital domain.

Usage justification

This encoding is used because speech has a wide dynamic range. In analog transmission, when the signal is mixed with a relatively constant background noise source, its finer detail is lost. Given that the precision of the detail is compromised anyway, and assuming that the signal is to be perceived as audio by a human, one can take advantage of the fact that perceived acoustic intensity level, or loudness, is logarithmic (the Weber-Fechner law) by compressing the signal with a logarithmic-response operational amplifier. In telecommunications circuits, most of the noise is injected on the lines, so after the compressor the intended signal is perceived as significantly louder than the static, compared with an uncompressed source. This became a common solution, and so, before digital transmission was common, the μ-law specification was developed to define an inter-compatible standard.

In digital systems this pre-existing algorithm had the effect of significantly reducing the number of bits needed to encode recognizable human voice. Using μ-law, a sample could be effectively encoded in as few as 8 bits, a sample size that conveniently matched the symbol size of most standard computers.

μ-law encoding effectively reduced the dynamic range of the signal, thereby increasing the coding efficiency while biasing the signal in a way that results in a signal-to-distortion ratio that is greater than that obtained by linear encoding for a given number of bits. This is an early form of perceptual audio encoding.

The μ-law algorithm is also used in the .au format, which dates back at least to the SPARCstation 1 by Sun Microsystems as the native method used by the /dev/audio interface, widely used as a de facto standard for sound on Unix systems. The au format is also used in various common audio APIs such as the classes in the sun.audio Java package in Java 1.1 and in some C# methods.

(Figure: the μ-law decoding curve, illustrating how μ-law concentrates sampling in the smaller (softer) values. The abscissa represents the byte values 0-255 and the ordinate the 16-bit linear decoded value of the μ-law encoding.)
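Such a curve can be reproduced with a decoder like the following minimal sketch, assuming 16-bit linear output and the same bias as the encoder sketch above; the function name mulaw_to_linear is illustrative.

#include <stdint.h>

#define MULAW_BIAS 0x84  /* same bias used during encoding */

/* Decode one 8-bit mu-law code word to a 16-bit linear PCM sample. */
int16_t mulaw_to_linear(uint8_t code)
{
    code = (uint8_t)~code;                  /* undo the transmitted complement          */
    int sign  = code & 0x80;                /* bit 7: sign of the reconstructed sample  */
    int seg   = (code >> 4) & 0x07;         /* bits 4-6: segment (chord) number         */
    int quant = code & 0x0F;                /* bits 0-3: step within the segment        */

    /* Rebuild the biased magnitude, then remove the bias. */
    int magnitude = (((quant << 3) + MULAW_BIAS) << seg) - MULAW_BIAS;

    return (int16_t)(sign ? -magnitude : magnitude);
}

Feeding the byte values 0 through 255 into such a decoder and plotting the results shows the decoded levels densely spaced near zero and spread progressively further apart at larger amplitudes.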

Comparison with A-law

The µ-law algorithm provides a slightly larger dynamic range than A-law at the cost of worse proportional distortion for small signals. By convention, A-law is used for an international connection if at least one country uses it.
