
Audio bit depth


In digital audio using pulse-code modulation (PCM), bit depth is the number of bits of information in each sample, and it directly corresponds to the resolution of each sample. Examples of bit depth include Compact Disc Digital Audio, which uses 16 bits per sample, and DVD-Audio and Blu-ray Disc, which can support up to 24 bits per sample.


In basic implementations, variations in bit depth primarily affect the noise level from quantization error—thus the signal-to-noise ratio (SNR) and dynamic range. However, techniques such as dithering, noise shaping and oversampling mitigate these effects without changing the bit depth. Bit depth also affects bit rate and file size.

Bit depth is only meaningful in reference to a PCM digital signal. Non-PCM formats, such as lossy compression formats, do not have associated bit depths. For example, in MP3, quantization is performed on PCM samples that have been transformed into the frequency domain.

Binary representation

A PCM signal is a sequence of digital audio samples containing the data providing the necessary information to reconstruct the original analog signal. Each sample represents the amplitude of the signal at a specific point in time, and the samples are uniformly spaced in time. The amplitude is the only information explicitly stored in the sample, and it is typically stored as either an integer or a floating point number, encoded as a binary number with a fixed number of digits: the sample's bit depth.

The resolution of binary integers increases exponentially as the word length increases. Adding one bit doubles the resolution, adding two quadruples it, and so on. The number of possible values that can be represented by an integer bit depth can be calculated as 2^n, where n is the bit depth. Thus, a 16-bit system has a resolution of 65,536 (2^16) possible values. Integer PCM audio data is typically stored as signed numbers in two's complement format.
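
As a concrete illustration of these figures, here is a minimal sketch in plain Python (the helper name int_pcm_range is ours, not part of any standard API) that computes the number of representable values and the signed two's-complement range for a few common bit depths.

```python
def int_pcm_range(bits):
    """Return (levels, min_value, max_value) for signed two's-complement PCM."""
    levels = 2 ** bits                       # number of distinct sample values
    return levels, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

for bits in (8, 16, 24):
    levels, lo, hi = int_pcm_range(bits)
    print(f"{bits}-bit: {levels:,} values, range {lo:,} .. {hi:,}")
# e.g. 16-bit: 65,536 values, range -32,768 .. 32,767
```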

Many audio file formats and digital audio workstations (DAWs) now support PCM formats with samples represented by floating-point numbers. Both the WAV file format and the AIFF file format support floating-point representations. Unlike integers, whose bit pattern is a single series of bits, a floating-point number is instead composed of separate fields whose mathematical relation forms a number. The most common standard is IEEE 754 floating point, which is composed of three fields: a sign bit representing whether the number is positive or negative, an exponent, and a mantissa that is scaled by two raised to the exponent. The mantissa is expressed as a binary fraction in IEEE base-two floating-point formats.
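
A small sketch of this decomposition for the IEEE 754 single-precision (binary32) format, using only the Python standard library; the field widths assumed here (1 sign bit, 8 exponent bits, 23 mantissa bits) are those of binary32, and the helper name float32_fields is ours.

```python
import struct

def float32_fields(x):
    """Split a float into the sign, exponent and mantissa fields of IEEE 754 binary32."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF           # stored with a bias of 127
    mantissa = bits & 0x7FFFFF               # fractional part; leading 1 is implicit
    return sign, exponent, mantissa

print(float32_fields(-0.5))   # (1, 126, 0): (-1)^1 * 1.0 * 2^(126 - 127) = -0.5
```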

Quantization

The bit depth limits the signal-to-noise ratio (SNR) of the reconstructed signal to a maximum level determined by quantization error. The bit depth has no impact on the frequency response, which is constrained by the sample rate.

Quantization noise is a model of the quantization error introduced during analog-to-digital conversion (ADC). It is the rounding error between the analog input voltage to the ADC and the output digitized value. The noise is nonlinear and signal-dependent.

In an ideal ADC, where the quantization error is uniformly distributed between ±½ least significant bit (LSB) and where the signal has a uniform distribution covering all quantization levels, the signal-to-quantization-noise ratio (SQNR) can be calculated from

$\mathrm{SQNR} = 20 \log_{10}\!\left(2^{Q}\right) \approx 6.02 \cdot Q \ \mathrm{dB},$

where Q is the number of quantization bits and the result is measured in decibels (dB).
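
Evaluating the formula for a few common bit depths (a direct transcription of the expression above):

```python
import math

def sqnr_db(q_bits):
    """Ideal signal-to-quantization-noise ratio in dB for a full-scale uniform signal."""
    return 20 * math.log10(2 ** q_bits)      # approximately 6.02 * q_bits

for q in (8, 16, 24):
    print(f"{q}-bit: {sqnr_db(q):.1f} dB")
# 8-bit: 48.2 dB, 16-bit: 96.3 dB, 24-bit: 144.5 dB
```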

24-bit digital audio has a theoretical maximum SNR of 144 dB, compared to 96 dB for 16-bit; however, as of 2007 digital audio converter technology is limited to an SNR of about 123 dB (an effective number of bits, ENOB, of 21) because of real-world limitations in integrated circuit design. Still, this approximately matches the performance of the human auditory system. (While 32-bit converters exist, they are purely for marketing purposes and provide no practical benefit over 24-bit converters; the extra bits are either zero or encode only noise.)

The resolution of floating-point samples is less straightforward than that of integer samples, but the benefit is increased accuracy at low levels. In floating-point representation, the spacing between any two adjacent representable values is roughly proportional to the value itself, so the relative spacing is the same across the range, whereas in an integer representation the spacing is constant in absolute terms and therefore proportionally larger for low-level signals. This greatly increases the effective SNR for quiet material, because a signal is represented with the same relative accuracy at a low level as at a high level.

The trade-off between floating point and integers is that the space between large floating-point values is greater than the space between large integer values of the same bit depth. Rounding a large floating-point number results in a greater absolute error than rounding a small floating-point number, whereas rounding an integer always results in the same absolute error. In other words, integers have uniform round-off, always rounding the LSB to 0 or 1, while floating point has uniform SNR: the quantization noise level is always a fixed proportion of the signal level. A floating-point noise floor therefore rises as the signal rises and falls as the signal falls, which can result in audible variance if the bit depth is low enough.
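
The contrast can be made concrete with NumPy's spacing(), which returns the gap between a value and the next representable floating-point value. The comparison below assumes 32-bit floats and 16-bit integers; the relative gap of the float stays roughly constant with level, while the integer step is fixed in absolute terms.

```python
import numpy as np

# Gap to the next representable float32 value, at a high and a low signal level
for level in (1.0, 1.0e-4):
    gap = np.spacing(np.float32(level))
    print(f"float32 at {level:g}: step {gap:.3g}  (relative {gap / level:.2e})")

# A 16-bit integer always steps by 1, i.e. 1/32768 of full scale,
# so the *relative* step grows as the signal level falls.
print("int16 step relative to a signal at 1e-4 of full scale:",
      (1 / 32768) / 1.0e-4)
```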

Audio processing

Most processing operations on digital audio involve requantization of samples, and thus introduce additional rounding error analogous to the original quantization error introduced during analog to digital conversion. To prevent rounding error larger than the implicit error during ADC, calculations during processing must be performed at higher precisions than the input samples.

Digital signal processing (DSP) operations can be performed in either fixed point or floating point precision. In either case, the precision of each operation is determined by the precision of the hardware operations used to perform each step of the processing and not the resolution of the input data. For example, on x86 processors, floating point operations are performed at 32- or 64-bit precision and fixed point operations at 16-, 32- or 64-bit resolution. Consequently, all processing performed on Intel-based hardware will be performed at 16-, 32- or 64-bit integer precision, or 32- or 64-bit floating point precision regardless of the source format. However, if memory is at a premium, software may still choose to output lower resolution 16- or 24-bit audio after higher precision processing.
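
A rough sketch of this pattern, assuming NumPy and a simple gain change as a stand-in for arbitrary processing: 16-bit input samples are promoted to 64-bit floating point, processed, and rounded back to 16 bits only at the output.

```python
import numpy as np

def apply_gain_16bit(samples_int16, gain_db):
    """Process int16 PCM at float64 precision and requantize only at the output."""
    x = samples_int16.astype(np.float64) / 32768.0        # to the range [-1.0, 1.0)
    y = x * 10.0 ** (gain_db / 20.0)                       # processing at 64-bit precision
    y = np.clip(y, -1.0, 1.0 - 1.0 / 32768.0)              # guard against clipping
    return np.round(y * 32768.0).astype(np.int16)          # single requantization step

pcm = np.array([-32768, -1000, 0, 1000, 32767], dtype=np.int16)
print(apply_gain_16bit(pcm, -6.0))
```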

Fixed point digital signal processors often support unusual word sizes and precisions in order to support specific signal resolutions. For example, the Motorola 56000 DSP chip uses 24-bit word sizes, 24-bit multipliers and 56-bit accumulators to perform multiply-accumulate operations on two 24-bit samples without overflow or rounding. On devices that do not support large accumulators, fixed point operations may be implicitly rounded, reducing precision to below that of the input samples.
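
The headroom argument can be checked with plain Python integers; this is only a back-of-the-envelope illustration treating samples as signed 24-bit integers, not a model of the DSP56000's fractional arithmetic.

```python
# Worst-case magnitudes for signed 24-bit samples
sample_min = -(2 ** 23)                      # most negative 24-bit value
product_max = sample_min * sample_min        # 2**46: fits in a 48-bit signed product

# An accumulator with 8 extension bits above the 48-bit product width
accumulator_bits = 56
headroom_bits = accumulator_bits - 48
print("products that can be summed without overflow:", 2 ** headroom_bits)  # 256

# Summing 256 worst-case products stays within a 56-bit signed accumulator
total = 256 * product_max
assert total < 2 ** (accumulator_bits - 1)
```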

Errors compound through multiple stages of DSP at a rate that depends on the operations being performed. For uncorrelated processing steps on audio data without a DC offset, errors are assumed to be random with zero mean. Under this assumption, the standard deviation of the distribution represents the error signal, and quantization error scales with the square root of the number of operations. High levels of precision are necessary for algorithms that involve repeated processing, such as convolution. High levels of precision are also necessary in recursive algorithms, such as infinite impulse response (IIR) filters. In the particular case of IIR filters, rounding error can degrade frequency response and cause instability.
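
The square-root growth can be demonstrated numerically; the sketch below (NumPy, with an arbitrary random seed) sums N independent, zero-mean rounding errors drawn uniformly from ±½ LSB and compares the measured RMS with the √N prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
lsb = 1.0
single_rms = lsb / np.sqrt(12.0)          # RMS of one uniform +/- 0.5 LSB error

for n_ops in (1, 16, 256):
    # accumulate n_ops independent rounding errors over many trials
    errors = rng.uniform(-0.5 * lsb, 0.5 * lsb, size=(100_000, n_ops)).sum(axis=1)
    print(f"{n_ops:4d} ops: RMS {errors.std():.3f}  "
          f"(predicted {single_rms * np.sqrt(n_ops):.3f})")
```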

Dither

The noise introduced by quantization error, including rounding errors and loss of precision introduced during audio processing, can be mitigated by adding a small amount of random noise, called dither, to the signal before quantizing. Dithering eliminates the granularity of quantization error, giving very low distortion, but at the expense of a slightly raised noise floor. Measured using ITU-R 468 noise weighting, this is about 66 dB below alignment level, or 84 dB below digital full scale, which is somewhat lower than the microphone noise level on most recordings, and hence of no consequence in 16-bit audio (see programme level for more on this).
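
A minimal sketch of dithered requantization using NumPy; the ±1 LSB triangular (TPDF) dither used here is one common choice, not the only one, and the helper name quantize_with_dither is ours.

```python
import numpy as np

def quantize_with_dither(x, step, rng):
    """Requantize x to multiples of `step`, adding +/-1 LSB triangular (TPDF) dither first."""
    tpdf = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.round(x / step + tpdf) * step

rng = np.random.default_rng(0)
lsb = 1.0 / 32768.0                                 # one 16-bit quantization step
t = np.arange(48_000) / 48_000.0
tone = 0.4 * lsb * np.sin(2 * np.pi * 1000 * t)     # a tone well below one LSB

print(np.unique(np.round(tone / lsb)))              # plain rounding collapses to one level: the tone is lost
print(np.unique(quantize_with_dither(tone, lsb, rng) / lsb))   # a few levels whose average tracks the tone
```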

24-bit audio does not require dithering, as the noise level of the digital converter is always louder than the required level of any dither that might be applied. 24-bit audio could theoretically encode 144 dB of dynamic range, but based on manufacturers' datasheets no ADCs exist that can provide more than about 125 dB.

Dither can also be used to increase the effective dynamic range. The perceived dynamic range of 16-bit audio can be 120 dB or more with noise-shaped dither, taking advantage of the frequency response of the human ear.

Dynamic range

Dynamic range is the difference between the largest and smallest signal a system can record or reproduce. Without dither, the dynamic range correlates to the quantization noise floor. For example, 16-bit integer resolution allows for a dynamic range of about 96 dB.

Using higher bit depths during studio recording accommodates greater dynamic range. If the signal's dynamic range is lower than that allowed by the bit depth, the recording has headroom. The higher the bit depth, the more headroom that is available. This reduces the risk of clipping without encountering quantization errors at low volumes.

With the proper application of dither, digital systems can reproduce signals with levels lower than their resolution would normally allow, extending the effective dynamic range beyond the limit imposed by the resolution.

The use of techniques such as oversampling and noise shaping can further extend the dynamic range of sampled audio by moving quantization error out of the frequency band of interest.

Oversampling

Oversampling is an alternative method to increase the dynamic range of PCM audio without changing the number of bits per sample. In oversampling, audio samples are acquired at a multiple of the desired sample rate. Because quantization noise is assumed to be uniformly distributed with frequency, much of it falls at ultrasonic frequencies and can be removed by the digital-to-analog converter during playback.

For an increase equivalent to n additional bits of resolution, a signal must be oversampled by

$\text{number of samples} = \left(2^{n}\right)^{2} = 2^{2n}.$

For example, a 14-bit ADC can produce 16-bit 48 kHz audio if operated at 16× oversampling, or 768 kHz. Oversampled PCM therefore exchanges fewer bits per sample for more samples in order to obtain the same resolution.
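
Restated in code, this is just the 2^(2n) relation above applied to the 14-bit example:

```python
def oversampling_ratio(extra_bits):
    """Oversampling factor needed for `extra_bits` additional bits of resolution."""
    return 2 ** (2 * extra_bits)

base_rate_hz = 48_000
extra_bits = 16 - 14                        # 14-bit converter, 16-bit target
ratio = oversampling_ratio(extra_bits)      # 16x
print(ratio, "x oversampling ->", base_rate_hz * ratio, "Hz")   # 16 x -> 768000 Hz
```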

Dynamic range can also be enhanced with oversampling at signal reconstruction, absent oversampling at the source. Consider 16× oversampling at reconstruction: for each original sample point, sixteen output samples are produced, all calculated by a digital signal processor (an FIR interpolation filter) as time interpolation; this is not linear interpolation. The mechanism of the lowered noise floor is as previously discussed: the quantization noise power has not been reduced, but the noise spectrum has been spread over 16× the audio bandwidth.
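
As a rough sketch of this kind of time interpolation, SciPy's polyphase resampler can stand in for the FIR interpolation filter described above (it inserts samples and low-pass filters, which is not linear interpolation); the 1 kHz test tone is only an example input.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44_100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)             # 1 kHz tone at the original rate

# 16x oversampling at reconstruction: 16 output samples per input sample,
# computed by an FIR (polyphase) interpolation filter.
y = resample_poly(x, up=16, down=1)
print(len(x), "->", len(y))                  # 44100 -> 705600
```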

Historical note: The compact disc standard was developed by a collaboration between Sony and Philips. The first Sony consumer unit featured a 16-bit DAC; the first Philips units used dual 14-bit DACs. This caused confusion in the marketplace and even in professional circles. Years later, an electronic engineering trade journal mistakenly described the 14-bit DACs in the Philips unit as allowing only 84 dB SNR, the writer being either unaware that the unit's specifications indicated 4× oversampling or unaware of its implication. It was correctly noted that Philips had no OEM-sourced 16-bit DAC at the time, but the writer was not cognizant of the ability of digital signal processing to increase the audio SNR to 90 dB.

Noise shaping

Oversampling a signal results in equal quantization noise per unit of bandwidth at all frequencies and a dynamic range that improves with only the square root of the oversampling ratio. Noise shaping is a technique that adds additional noise at higher frequencies, which cancels out some error at lower frequencies, resulting in a larger increase in dynamic range when oversampling. With nth-order noise shaping, the dynamic range of an oversampled signal is improved by an additional 6n dB per octave of oversampling relative to oversampling without noise shaping. For example, for 20 kHz analog audio sampled at 4× oversampling with second-order noise shaping, the dynamic range is increased by 30 dB. Therefore, a 16-bit signal sampled at 176 kHz would have resolution equal to that of a 21-bit signal sampled at 44.1 kHz without noise shaping.
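
A back-of-the-envelope transcription of the simplified rule used here (about 3 dB per octave from oversampling alone, plus an additional 6 dB per octave for each order of noise shaping); it ignores the constant terms of the exact sigma-delta SNR formula but reproduces the 30 dB figure in the example.

```python
import math

def dynamic_range_gain_db(oversampling_ratio, noise_shaping_order):
    """Approximate dynamic range improvement: 3 dB/octave from oversampling
    plus an additional 6*order dB/octave from noise shaping."""
    octaves = math.log2(oversampling_ratio)
    return octaves * (3.0 + 6.0 * noise_shaping_order)

print(dynamic_range_gain_db(4, 2))    # 30.0 dB for 4x oversampling, 2nd-order shaping
print(dynamic_range_gain_db(4, 0))    # 6.0 dB with no noise shaping (1 extra bit)
```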

Noise shaping is commonly implemented with delta-sigma modulation. Using delta-sigma modulation, Super Audio CD obtains 120 dB SNR at audio frequencies using 1-bit audio with 64× oversampling.

Applications

Bit depth is a fundamental property of digital audio implementations, and there are a variety of situations in which it is a relevant measurement.

Bit rate and file size

Bit depth affects bit rate and file size. Bit rate refers to the amount of data, specifically bits, transmitted or received per second.
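
For uncompressed PCM, the bit rate is the product of sample rate, bit depth and channel count, and file size is bit rate multiplied by duration; for example, for CD audio (44.1 kHz, 16-bit, stereo):

```python
def pcm_bit_rate(sample_rate_hz, bit_depth, channels):
    """Bit rate of uncompressed PCM in bits per second."""
    return sample_rate_hz * bit_depth * channels

cd = pcm_bit_rate(44_100, 16, 2)                    # 1,411,200 bit/s
print(f"CD audio: {cd / 1000:.1f} kbit/s")
print(f"one minute: {cd * 60 / 8 / 1e6:.1f} MB")    # ~10.6 MB (decimal megabytes)
```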
