Sampling (signal processing)

In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave (a continuous signal) to a sequence of samples (a discrete-time signal).

A sample is a value or set of values at a point in time and/or space.

A sampler is a subsystem or operation that extracts samples from a continuous signal.

A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points.

Theory

Sampling can be done for functions varying in space, time, or any other dimension, and similar results are obtained in two or more dimensions.

For functions that vary with time, let s(t) be a continuous function (or "signal") to be sampled, and let sampling be performed by measuring the value of the continuous function every T seconds, which is called the sampling interval or the sampling period.  Then the sampled function is given by the sequence:

s(nT),   for integer values of n.

The sampling frequency or sampling rate, fs, is the average number of samples obtained in one second (samples per second), thus fs = 1/T.
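
As a concrete illustration, the following Python sketch (assuming NumPy is available; the 5 Hz sinusoid and 100 Hz rate are arbitrary choices for the example) forms the sequence s(nT):

    import numpy as np

    f_signal = 5.0      # a hypothetical 5 Hz continuous signal
    fs = 100.0          # sampling rate, samples per second
    T = 1.0 / fs        # sampling interval, seconds

    def s(t):
        # the continuous-time signal being sampled (chosen for this example)
        return np.sin(2 * np.pi * f_signal * t)

    n = np.arange(100)  # integer sample indices
    samples = s(n * T)  # the discrete-time sequence s(nT)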

Reconstructing a continuous function from samples is done by interpolation algorithms. The Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal lowpass filter whose input is a sequence of Dirac delta functions that are modulated (multiplied) by the sample values. When the time interval between adjacent samples is a constant (T), the sequence of delta functions is called a Dirac comb. Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s(t). That purely mathematical abstraction is sometimes referred to as impulse sampling.
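
A minimal sketch of the Whittaker–Shannon formula applied to the samples above: each sample weights a sinc pulse centered at its sampling instant, and the weighted pulses are summed (np.sinc is the normalized sinc, sin(πx)/(πx)). Because the sum is truncated to a finite number of samples, the result is only an approximation of the ideal reconstruction.

    def sinc_reconstruct(t, samples, T):
        # Whittaker-Shannon interpolation: sum of sample-weighted, shifted sinc pulses
        n = np.arange(len(samples))
        return np.sum(samples * np.sinc((t - n * T) / T))

    # Evaluate the reconstruction at an arbitrary instant between the samples
    approx = sinc_reconstruct(0.123, samples, T)   # close to s(0.123) for a bandlimited s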

Most sampled signals are not simply stored and reconstructed. But the fidelity of a theoretical reconstruction is a customary measure of the effectiveness of sampling. That fidelity is reduced when s(t) contains frequency components whose periodicity is smaller than 2 samples; or equivalently the ratio of cycles to samples exceeds ½ (see Aliasing). The quantity ½ cycles/sample × fs samples/sec = fs/2 cycles/sec (hertz) is known as the Nyquist frequency of the sampler. Therefore, s(t) is usually the output of a lowpass filter, functionally known as an anti-aliasing filter. Without an anti-aliasing filter, frequencies higher than the Nyquist frequency will influence the samples in a way that is misinterpreted by the interpolation process.
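
The ambiguity above the Nyquist frequency can be checked numerically: with fs = 100 Hz (Nyquist frequency 50 Hz), a 30 Hz cosine and a 70 Hz cosine (its alias at fs − 30 Hz) yield identical samples, so no interpolation process can tell them apart.

    import numpy as np

    n = np.arange(100)
    T = 1.0 / 100.0                          # fs = 100 Hz, Nyquist frequency 50 Hz
    low  = np.cos(2 * np.pi * 30 * n * T)    # 30 Hz, below the Nyquist frequency
    high = np.cos(2 * np.pi * 70 * n * T)    # 70 Hz = fs - 30 Hz, above it
    print(np.allclose(low, high))            # True: the sample sequences coincide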

Practical considerations

In practice, the continuous signal is sampled using an analog-to-digital converter (ADC), a device with various physical limitations. This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion.

Various types of distortion can occur, including:

  • Aliasing. Some amount of aliasing is inevitable because only theoretical, infinitely long, functions can have no frequency content above the Nyquist frequency. Aliasing can be made arbitrarily small by using a sufficiently large order of the anti-aliasing filter.
  • Aperture error results from the fact that the sample is obtained as a time average within a sampling region, rather than just being equal to the signal value at the sampling instant. In a capacitor-based sample and hold circuit, aperture error is introduced because the capacitor cannot instantly change voltage, thus requiring the sample to have non-zero width.
  • Jitter or deviation from the precise sample timing intervals.
  • Noise, including thermal sensor noise, analog circuit noise, etc.
  • Slew rate limit error, caused by the inability of the ADC input value to change sufficiently rapidly.
  • Quantization as a consequence of the finite precision of words that represent the converted values.
  • Error due to other non-linear effects of the mapping of input voltage to converted output value (in addition to the effects of quantization).

Although the use of oversampling can completely eliminate aperture error and aliasing by shifting them out of the pass band, this technique cannot be practically used above a few GHz, and may be prohibitively expensive at much lower frequencies. Furthermore, while oversampling can reduce quantization error and non-linearity, it cannot eliminate these entirely. Consequently, practical ADCs at audio frequencies are typically not limited by aliasing, aperture error, or quantization error; instead, analog noise dominates. At RF and microwave frequencies, where oversampling is impractical and filters are expensive, aperture error, quantization error, and aliasing can be significant limitations.

Jitter, noise, and quantization are often analyzed by modeling them as random errors added to the sample values. Integration and zero-order hold effects can be analyzed as a form of low-pass filtering. The non-linearities of either ADC or DAC are analyzed by replacing the ideal linear function mapping with a proposed nonlinear function.
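
A rough sketch of that modeling approach, treating jitter, analog noise, and quantization as perturbations of ideal samples of a hypothetical 1 kHz tone (the error magnitudes are arbitrary illustrative values, not measurements of any real converter):

    import numpy as np

    rng = np.random.default_rng(0)
    fs, f0, n_bits = 48000.0, 1000.0, 16
    n = np.arange(4096)
    ideal = np.sin(2 * np.pi * f0 * n / fs)

    # Jitter: evaluate the signal at slightly perturbed sampling instants
    t_jittered = (n + rng.normal(0.0, 1e-3, n.size)) / fs   # 0.001 sample periods rms
    sampled = np.sin(2 * np.pi * f0 * t_jittered)

    # Additive analog noise
    sampled += rng.normal(0.0, 1e-4, n.size)

    # Quantization: round to the nearest of 2**n_bits uniformly spaced levels on [-1, 1]
    step = 2.0 / 2**n_bits
    quantized = np.round(sampled / step) * step

    error = quantized - ideal    # total modeled deviation from the ideal samples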

Audio sampling

Digital audio uses pulse-code modulation and digital signals for sound reproduction. This includes analog-to-digital conversion (ADC), digital-to-analog conversion (DAC), storage, and transmission. In effect, the system commonly referred to as digital is in fact a discrete-time, discrete-level analog of a previous electrical analog. While modern systems can be quite subtle in their methods, the primary usefulness of a digital system is the ability to store, retrieve and transmit signals without any loss of quality.

Sampling rate

A commonly seen unit of sampling rate is S/s, which stands for samples per second. For example, 1 MS/s is one million samples per second.

When it is necessary to capture audio covering the entire 20–20,000 Hz range of human hearing, such as when recording music or many types of acoustic events, audio waveforms are typically sampled at 44.1 kHz (CD), 48 kHz, 88.2 kHz, or 96 kHz. The approximately double-rate requirement is a consequence of the Nyquist theorem. Sampling rates higher than about 50 kHz to 60 kHz cannot supply more usable information for human listeners. Early professional audio equipment manufacturers chose sampling rates in the region of 50 kHz for this reason.

There has been an industry trend towards sampling rates well beyond the basic requirements, such as 96 kHz and even 192 kHz. This is in contrast with laboratory experiments, which have failed to show that ultrasonic frequencies are audible to human observers; however, in some cases ultrasonic sounds do interact with and modulate the audible part of the frequency spectrum (intermodulation distortion). Notably, intermodulation distortion is not present in the live audio, so it represents an artificial coloration of the live sound. One advantage of higher sampling rates is that they can relax the low-pass filter design requirements for ADCs and DACs, but with modern oversampling sigma-delta converters this advantage is less important.

The Audio Engineering Society recommends 48 kHz sampling rate for most applications but gives recognition to 44.1 kHz for Compact Disc and other consumer uses, 32 kHz for transmission-related applications, and 96 kHz for higher bandwidth or relaxed anti-aliasing filtering.

Other common audio sample rates include 8 kHz (telephony), 11.025 kHz, 16 kHz, 22.05 kHz, 32 kHz, 44.1 kHz, 48 kHz, 88.2 kHz, 96 kHz, 176.4 kHz, and 192 kHz.

Bit depth

Audio is typically recorded at 8-, 16-, and 24-bit depth, which yield a theoretical maximum signal-to-quantization-noise ratio (SQNR) for a pure sine wave of, approximately, 49.93 dB, 98.09 dB, and 146.26 dB respectively. CD-quality audio uses 16-bit samples. Thermal noise limits the true number of bits that can be used in quantization. Few analog systems have signal-to-noise ratios (SNR) exceeding 120 dB. However, digital signal processing operations can have very high dynamic range, so it is common to perform mixing and mastering operations at 32-bit precision and then convert to 16- or 24-bit for distribution.
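
Those figures follow from the standard full-scale sine-wave approximation SQNR ≈ 6.02·N + 1.76 dB for an N-bit uniform quantizer, as a quick check shows:

    for n_bits in (8, 16, 24):
        sqnr = 6.0206 * n_bits + 1.7609     # dB, full-scale sine-wave assumption
        print(n_bits, round(sqnr, 2))       # prints 49.93, 98.09, 146.26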

Speech sampling

Speech signals, i.e., signals intended to carry only human speech, can usually be sampled at a much lower rate. For most phonemes, almost all of the energy is contained in the 100 Hz–4 kHz range, allowing a sampling rate of 8 kHz. This is the sampling rate used by nearly all telephony systems, which use the G.711 sampling and quantization specifications.
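
G.711 specifies 8-bit logarithmic (μ-law or A-law) quantization at 8 kHz. The sketch below shows the continuous μ-law companding curve (μ = 255) that the standard's piecewise-segment encoding approximates; it is an illustration of the idea, not a bit-exact G.711 codec.

    import numpy as np

    def mu_law_compress(x, mu=255.0):
        # x in [-1, 1]; returns the companded value, also in [-1, 1]
        return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

    def mu_law_expand(y, mu=255.0):
        # inverse of the compression curve
        return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu

    x = np.linspace(-1.0, 1.0, 9)
    print(np.allclose(mu_law_expand(mu_law_compress(x)), x))   # True: expansion inverts compression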

Video sampling

Standard-definition television (SDTV) uses either 720 by 480 pixels (US NTSC 525-line) or 704 by 576 pixels (UK PAL 625-line) for the visible picture area.

High-definition television (HDTV) uses 720p (progressive), 1080i (interlaced), and 1080p (progressive, also known as Full-HD).

In digital video, the temporal sampling rate is defined as the frame rate – or rather the field rate – rather than by the notional pixel clock. The image sampling frequency is the repetition rate of the sensor integration period. Since the integration period may be significantly shorter than the time between repetitions, the sampling frequency can be different from the inverse of the sample time:

  • 50 Hz – PAL video
  • 60 / 1.001 Hz ~= 59.94 Hz – NTSC video

Video digital-to-analog converters operate in the megahertz range (from ~3 MHz for low-quality composite video scalers in early games consoles, to 250 MHz or more for the highest-resolution VGA output).

When analog video is converted to digital video, a different sampling process occurs, this time at the pixel frequency, corresponding to a spatial sampling rate along scan lines. A common pixel sampling rate is:

  • 13.5 MHz – CCIR 601, D1 video

Spatial sampling in the other direction is determined by the spacing of scan lines in the raster. The sampling rates and resolutions in both spatial directions can be measured in units of lines per picture height.

Spatial aliasing of high-frequency luma or chroma video components shows up as a moiré pattern.

3D sampling

The process of volume rendering samples a 3D grid of voxels to produce 3D renderings of sliced (tomographic) data. The 3D grid is assumed to represent a continuous region of 3D space. Volume rendering is common in medical imaging; X-ray computed tomography (CT/CAT), magnetic resonance imaging (MRI), and positron emission tomography (PET) are some examples. It is also used for seismic tomography and other applications.
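
A minimal illustration of treating a voxel grid as samples of a continuous field is trilinear interpolation at a fractional position, a simplified stand-in for the resampling step inside a volume renderer (the grid and query point below are arbitrary):

    import numpy as np

    def trilinear_sample(volume, x, y, z):
        # volume: 3D array of voxel values; (x, y, z): fractional voxel coordinates
        x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
        dx, dy, dz = x - x0, y - y0, z - z0
        value = 0.0
        for i in (0, 1):
            for j in (0, 1):
                for k in (0, 1):
                    weight = ((dx if i else 1 - dx) *
                              (dy if j else 1 - dy) *
                              (dz if k else 1 - dz))
                    value += weight * volume[x0 + i, y0 + j, z0 + k]
        return value

    voxels = np.random.default_rng(1).random((8, 8, 8))   # a toy voxel grid
    sample = trilinear_sample(voxels, 3.2, 4.7, 1.5)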

Undersampling

When a bandpass signal is sampled slower than its Nyquist rate, the samples are indistinguishable from samples of a low-frequency alias of the high-frequency signal. That is often done purposefully in such a way that the lowest-frequency alias satisfies the Nyquist criterion, because the bandpass signal is still uniquely represented and recoverable. Such undersampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF to digital conversion.
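
For a signal confined to the band [fL, fH], the classic alias-free undersampling condition is 2·fH/n ≤ fs ≤ 2·fL/(n − 1) for some integer n with 1 ≤ n ≤ floor(fH/(fH − fL)). A small sketch that lists the acceptable rate ranges (the 70–80 MHz example band is arbitrary):

    import math

    def valid_bandpass_rates(f_low, f_high):
        # yields (n, fs_min, fs_max) ranges satisfying 2*f_high/n <= fs <= 2*f_low/(n-1)
        bandwidth = f_high - f_low
        for n in range(1, math.floor(f_high / bandwidth) + 1):
            fs_min = 2.0 * f_high / n
            fs_max = math.inf if n == 1 else 2.0 * f_low / (n - 1)
            yield n, fs_min, fs_max

    for n, lo, hi in valid_bandpass_rates(70e6, 80e6):
        print(n, lo / 1e6, hi / 1e6)   # acceptable fs ranges, in MS/s

For this example, the smallest acceptable rate is 20 MS/s (n = 8), far below the 160 MS/s suggested by the Nyquist rate of the 80 MHz upper band edge.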

Oversampling

Oversampling is used in most modern analog-to-digital converters to reduce the distortion introduced by practical digital-to-analog converters, such as a zero-order hold instead of idealizations like the Whittaker–Shannon interpolation formula.
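
As a rough numerical illustration (not a model of any particular converter), the sketch below compares the RMS error of a zero-order-hold reconstruction of a 1 kHz sine driven at 8 kS/s against the same hold driven at four times that rate; the oversampled hold tracks the waveform much more closely, which is why a simple analog filter then suffices to smooth it.

    import numpy as np

    f0 = 1000.0
    t = np.arange(0.0, 0.01, 1e-6)               # fine time grid standing in for continuous time
    target = np.sin(2 * np.pi * f0 * t)

    def zoh_reconstruct(fs):
        # hold each sample value constant until the next sampling instant
        held_times = np.floor(t * fs) / fs       # time of the most recent sample
        return np.sin(2 * np.pi * f0 * held_times)

    for fs in (8000.0, 32000.0):                 # base rate and 4x oversampling
        err = zoh_reconstruct(fs) - target
        print(fs, np.sqrt(np.mean(err ** 2)))    # RMS error shrinks at the higher rate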

Complex sampling

Complex sampling (I/Q sampling) is the simultaneous sampling of two different, but related, waveforms, resulting in pairs of samples that are subsequently treated as complex numbers. When one waveform, ŝ(t), is the Hilbert transform of the other waveform, s(t), the complex-valued function s_a(t) = s(t) + j·ŝ(t) is called an analytic signal, whose Fourier transform is zero for all negative values of frequency. In that case, the Nyquist rate for a waveform with no frequencies ≥ B can be reduced to just B (complex samples/sec), instead of 2B (real samples/sec). Equivalently, the baseband waveform s_a(t)·e^(−j2π(B/2)t) also has a Nyquist rate of B, because all of its non-zero frequency content is shifted into the interval [−B/2, B/2).

Although complex-valued samples can be obtained as described above, they are also created by manipulating samples of a real-valued waveform. For instance, the equivalent baseband waveform can be created without explicitly computing ŝ(t), by processing the product sequence [s(nT)·e^(−j2π(B/2)nT)] through a digital lowpass filter whose cutoff frequency is B/2. Computing only every other sample of the output sequence reduces the sample rate commensurate with the reduced Nyquist rate. The result is half as many complex-valued samples as the original number of real samples. No information is lost, and the original s(t) waveform can be recovered, if necessary.
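
A sketch of one such manipulation using SciPy: scipy.signal.hilbert returns the analytic signal s(nT) + j·ŝ(nT) of a real sequence, which is then shifted down by B/2 and decimated by two. The explicit lowpass of the product-sequence method is not needed here because the analytic signal already contains no negative-frequency content. The sample rate, band edge B, and test tone are arbitrary choices for the example.

    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0                                  # real-valued sample rate (2B)
    B = 500.0                                    # signal assumed confined to [0, B)
    n = np.arange(2048)
    x = np.cos(2 * np.pi * 150.0 * n / fs)       # a real test waveform inside the band

    analytic = hilbert(x)                                         # s(nT) + j*ŝ(nT)
    baseband = analytic * np.exp(-2j * np.pi * (B / 2) * n / fs)  # shift spectrum to [-B/2, B/2)
    iq = baseband[::2]                           # every other sample: B complex samples/sec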
