The objective of synthetic array heterodyne detection is to isolate regions of a large-area detector surface into virtual pixels. This provides the benefits of having multiple pixels (for example, to make an image) without having to fabricate physical pixels (i.e. isolated detector elements). The detector can be a simple device with a single output wire, over which all the virtual pixels are read out continuously and in parallel. The pixels are multiplexed in the frequency domain.
Of special interest, this solves two common and vexing problems encountered in optical heterodyne detection. First, heterodyned signals are beat frequencies between the signal source and a reference source (dubbed the local oscillator). They are not DC light levels but oscillating signals, so unlike conventional detectors the light flux from the signal cannot be integrated on a capacitor. Therefore, to have an array of pixels, each pixel must be backed by its own AC amplifier and detection circuit, which is complex. With synthetic array detection, all the signals can be amplified and detected by the same circuit. The second problem synthetic array detection solves arises not in pixel imaging, but when the signal is not spatially coherent across the surface of the detector. In this case, the beat frequencies are differently phased across the detector surface and destructively interfere, producing a low signal output. In synthetic array detection, each region of the detector has a different fundamental for its beat frequency, so there is no stationary interference even if the signal's phase varies across the surface of the detector.
Rainbow heterodyne detection
Figure 1 shows a particular implementation of the synthetic array method. This implementation is called "rainbow heterodyne detection" because the local oscillator's frequencies are spread out like a rainbow across the surface of the detector. The output from the detector is a multifrequency signal; if this output is spectrally resolved, then each electrical frequency corresponds to a different location on the detector.
While the concept is simple, there is a key difficulty that any implementation must overcome: how to generate a rainbow of optical frequencies whose spread of difference frequencies is less than the electrical bandwidth of the detector. That is to say, a typical detector might have a bandwidth on the scale of 100 MHz. If the biggest difference frequency is |ω6 − ω1|, then this difference needs to be smaller than 100 MHz. This in turn means the spacing between adjacent difference frequencies must be less than 100 MHz, and on average less than 100 MHz divided by the number of pixels. To see why this presents a problem, consider dispersing white light with a prism. For any finite-size prism, you cannot get enough dispersion to create resolved (non-overlapping) beamlets whose frequencies differ by only a few megahertz. Thus dispersion methods cannot split a broadband light source into frequency-shifted beamlets with narrowly spaced difference frequencies. One possible alternative is to have a separate laser source for every beamlet; these sources must be precisely frequency controlled so that their center frequencies are separated by the desired shifts. The primary problem with this is practical: the bandwidth and frequency drift of most lasers is much greater than 1 MHz. The lasers needed for this must be of sufficiently narrow spectral purity that they can interfere coherently with the signal source. Even so, maintaining multiple narrow-band, precisely frequency-tuned lasers is complex.
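The spacing constraint above can be made concrete with a small sketch. The numbers here (100 MHz detector bandwidth, 8 pixels, a 20 MHz lowest beat frequency) are illustrative assumptions, not values from the text, and `frequency_plan` is a hypothetical helper:

```python
# Hypothetical frequency plan for an N-pixel rainbow heterodyne receiver.
# All numeric values are illustrative assumptions.

def frequency_plan(bandwidth_hz: float, n_pixels: int, f_min_hz: float):
    """Evenly space n_pixels beat frequencies inside the detector bandwidth."""
    spacing = (bandwidth_hz - f_min_hz) / n_pixels
    return [f_min_hz + i * spacing for i in range(n_pixels)]

plan = frequency_plan(bandwidth_hz=100e6, n_pixels=8, f_min_hz=20e6)

# The largest difference frequency must stay below the detector bandwidth,
# which forces adjacent tones only megahertz apart.
assert max(plan) - min(plan) < 100e6
```

With eight pixels the adjacent spacing comes out to 10 MHz, already far finer than any prism-based dispersion of broadband light could resolve.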
One practical way to achieve this is to use an acousto-optic deflector (AOD). These devices deflect an incoming light beam by an angle proportional to the acoustic driving frequency. They also have the side effect of shifting the output optical frequency by the acoustic frequency. Thus, when one of these is driven with multiple acoustic frequencies, a series of deflected beams emerges, each with a small and different shift in optical frequency. Conveniently, this works even if the source laser has low spectral purity, since every sub-spectral component of each beamlet is mutually phase coherent with the source and shifted by the same frequency. In particular, this approach allows the use of inexpensive, high-power, or pulsed lasers as sources, because no frequency control is required. Figure 2 shows a simple two-"pixel" version of this implementation. A laser beam is deflected by 25 MHz and 29 MHz acoustic frequencies via an acousto-optic deflector. Two beams emerge, and both are combined on the detector along with the original laser beam. The 25 MHz beamlet illuminates the left half of the detector while the 29 MHz beamlet illuminates the right half. The beat frequencies against the signal beam on the detector produce 25 and 29 MHz output frequencies, so we can differentiate which photons hit the left or right half of the detector. The method scales to larger numbers of pixels, since AODs with thousands of resolvable spots (each with a different frequency) are commercially available. 2D arrays can be produced with a second AOD arranged at right angles, or by holographic methods.
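The two-pixel example can be simulated numerically: the single-wire detector output is modeled as the sum of a 25 MHz and a 29 MHz beat tone, and a Fourier transform separates the two "pixels". The amplitudes (0.7 and 0.3) are made-up light levels, and the sample rate is chosen only for illustration:

```python
import numpy as np

# Model the single-wire detector output of the Figure 2 setup as two
# beat tones; amplitudes 0.7 and 0.3 are arbitrary illustrative values.
fs = 200e6                      # sample rate, comfortably above both tones
n = 2000                        # 2000 samples -> 100 kHz frequency bins
t = np.arange(n) / fs
detector_output = (0.7 * np.cos(2 * np.pi * 25e6 * t)
                   + 0.3 * np.cos(2 * np.pi * 29e6 * t))

# Spectrally resolve the output; each tone corresponds to one virtual pixel.
spectrum = np.abs(np.fft.rfft(detector_output))
freqs = np.fft.rfftfreq(n, 1 / fs)

left_pixel = spectrum[np.argmin(np.abs(freqs - 25e6))]   # 25 MHz bin
right_pixel = spectrum[np.argmin(np.abs(freqs - 29e6))]  # 29 MHz bin
```

The recovered bin magnitudes preserve the 0.7 : 0.3 ratio of the two illumination levels, which is exactly the frequency-domain demultiplexing the method relies on.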
The method multiplexes all the spatial positions on the detector by frequency. If the frequencies are uniformly spaced, then a simple Fourier transform recovers the coherent image. However, there is no reason the frequencies have to be uniformly spaced, so one can adjust the number, size, and shape of the pixels dynamically. One can also change the heterodyne gain of each pixel individually, simply by making its LO beamlet stronger or weaker. Thus one can extend the dynamic range of the receiver by lowering the gain on bright pixels, raising it on dim ones, and possibly using larger pixels for dim regions.
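The per-pixel gain control follows from the physics of heterodyne mixing: the beat-note amplitude is proportional to the product of the signal and LO fields, i.e. to the square root of each power. A minimal sketch, with arbitrary illustrative power values and a hypothetical `beat_amplitude` helper:

```python
import math

# Sketch of per-pixel heterodyne gain control. The beat term in the
# detected photocurrent scales as 2 * E_signal * E_lo, and each field
# goes as the square root of its optical power. Powers are made up.
def beat_amplitude(p_signal_w: float, p_lo_w: float) -> float:
    """Relative amplitude of the heterodyne beat term."""
    return 2 * math.sqrt(p_signal_w) * math.sqrt(p_lo_w)

nominal = beat_amplitude(1e-12, 1e-3)   # nominal LO power on this pixel
boosted = beat_amplitude(1e-12, 4e-3)   # 4x the LO power -> 2x the gain
```

So quadrupling a beamlet's LO power doubles that pixel's gain, which is what lets the receiver trade gain between bright and dim regions without touching the signal path.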
The multiplex technique does introduce two limitations. In the case of imaging, the signals must not change faster than the Nyquist rate implied by the difference frequency between adjacent pixels; if they do, the pixels blur or alias. (For non-imaging applications, such as when one is simply trying to collect more light but is limited by spatial incoherence, this aliasing is not important, since it does not change the incoherent sum of the pixels.) Additionally, when working near the shot noise limit, the multiplex approach can raise the noise floor, since every pixel sees the shot noise of the whole array (all the pixels being connected to the same wire). (Again, for non-imaging applications this may not be important.)
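The first limitation can be quantified with a back-of-envelope calculation. Assuming, for illustration, a 100 MHz detector shared by 50 uniformly spaced pixels (numbers not from the text):

```python
# Back-of-envelope Nyquist limit for a multiplexed receiver.
# The detector bandwidth and pixel count are illustrative assumptions.
detector_bw_hz = 100e6
n_pixels = 50
adjacent_spacing_hz = detector_bw_hz / n_pixels   # 2 MHz between pixels
max_signal_bw_hz = adjacent_spacing_hz / 2        # faster changes alias
```

Under these assumptions each pixel's signal must vary more slowly than about 1 MHz, or its sidebands spill into the neighboring pixel's frequency slot.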