
Nyquist–Shannon sampling theorem

The Nyquist–Shannon sampling theorem is a theorem in the field of signal processing which serves as a fundamental bridge between continuous-time signals and discrete-time signals. It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth.

Example of magnitude of the Fourier transform of a bandlimited function

Strictly speaking, the theorem only applies to a class of mathematical functions having a Fourier transform that is zero outside of a finite region of frequencies. Intuitively we expect that when one reduces a continuous function to a discrete sequence and interpolates back to a continuous function, the fidelity of the result depends on the density (or sample rate) of the original samples. The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that are band-limited to a given bandwidth, such that no actual information is lost in the sampling process. It expresses the sufficient sample rate in terms of the bandwidth for the class of functions. The theorem also leads to a formula for perfectly reconstructing the original continuous-time function from the samples.

Perfect reconstruction may still be possible when the sample-rate criterion is not satisfied, provided other constraints on the signal are known (see § Sampling of non-baseband signals below and compressed sensing). In some cases (when the sample-rate criterion is not satisfied), additional constraints allow for approximate reconstructions, whose fidelity can be verified and quantified using Bochner's theorem.[1]

The name Nyquist–Shannon sampling theorem honours Harry Nyquist and Claude Shannon, but the theorem was also previously discovered by E. T. Whittaker (published in 1915) and Shannon cited Whittaker's paper in his work. The theorem is thus also known by the names Whittaker–Shannon sampling theorem, Whittaker–Shannon, and Whittaker–Nyquist–Shannon, and may also be referred to as the cardinal theorem of interpolation.

Introduction

Sampling is a process of converting a signal (for example, a function of continuous time or space) into a sequence of values (a function of discrete time or space). Shannon's version of the theorem states:[2]

Theorem — If a function $x(t)$ contains no frequencies higher than $B$ hertz, it is completely determined by giving its ordinates at a series of points spaced $\tfrac{1}{2B}$ seconds apart.

A sufficient sample rate is therefore anything larger than $2B$ samples per second. Equivalently, for a given sample rate $f_s$, perfect reconstruction is guaranteed possible for a bandlimit $B < f_s/2$.

When the bandlimit is too high (or there is no bandlimit), the reconstruction exhibits imperfections known as aliasing. Modern statements of the theorem are sometimes careful to explicitly state that $x(t)$ must contain no sinusoidal component at exactly frequency $B$, or that $B$ must be strictly less than ½ the sample rate. The threshold $2B$ is called the Nyquist rate and is an attribute of the continuous-time input $x(t)$ to be sampled. The sample rate must exceed the Nyquist rate for the samples to suffice to represent $x(t)$. The threshold $f_s/2$ is called the Nyquist frequency and is an attribute of the sampling equipment. All meaningful frequency components of the properly sampled $x(t)$ exist below the Nyquist frequency. The condition described by these inequalities is called the Nyquist criterion, or sometimes the Raabe condition. The theorem is also applicable to functions of other domains, such as space, in the case of a digitized image. The only change, in the case of other domains, is the units of measure attributed to $t$, $f_s$, and $B$.

 
The normalized sinc function: sin(πx) / (πx) ... showing the central peak at x = 0, and zero-crossings at the other integer values of x.

The symbol $T \triangleq 1/f_s$ is customarily used to represent the interval between samples and is called the sample period or sampling interval. The samples of function $x(t)$ are commonly denoted by $x[n] \triangleq x(nT)$ (alternatively $x_n$ in older signal-processing literature), for all integer values of $n$. Another convenient definition is $x[n] \triangleq T\cdot x(nT)$, which preserves the energy of the signal as $T$ varies.[3]

A mathematically ideal way to interpolate the sequence involves the use of sinc functions. Each sample in the sequence is replaced by a sinc function, centered on the time axis at the original location of the sample, $nT$, with the amplitude of the sinc function scaled to the sample value, $x(nT)$. Subsequently, the sinc functions are summed into a continuous function. A mathematically equivalent method uses the Dirac comb and proceeds by convolving one sinc function with a series of Dirac delta pulses, weighted by the sample values. Neither method is numerically practical. Instead, some type of approximation of the sinc functions, finite in length, is used. The imperfections attributable to the approximation are known as interpolation error.

Practical digital-to-analog converters produce neither scaled and delayed sinc functions, nor ideal Dirac pulses. Instead they produce a piecewise-constant sequence of scaled and delayed rectangular pulses (the zero-order hold), usually followed by a lowpass filter (called an "anti-imaging filter") to remove spurious high-frequency replicas (images) of the original baseband signal.

Aliasing

 
The samples of two sine waves can be identical when at least one of them is at a frequency above half the sample rate.

When $x(t)$ is a function with a Fourier transform $X(f)$:

$$X(f) \triangleq \int_{-\infty}^{\infty} x(t)\, e^{-i 2\pi f t}\, \mathrm{d}t,$$

the Poisson summation formula indicates that the samples, $x(nT)$, of $x(t)$ are sufficient to create a periodic summation of $X(f)$. The result is:

$$X_s(f) \triangleq \sum_{k=-\infty}^{\infty} X\!\left(f - k f_s\right) = \sum_{n=-\infty}^{\infty} T\cdot x(nT)\, e^{-i 2\pi n T f}, \qquad \text{(Eq.1)}$$

which is a periodic function and its equivalent representation as a Fourier series, whose coefficients are $T\cdot x(nT)$. This function is also known as the discrete-time Fourier transform (DTFT) of the sample sequence.

$X(f)$ (top blue) and $X_A(f)$ (bottom blue) are continuous Fourier transforms of two different functions, $x(t)$ and $x_A(t)$ (not shown). When the functions are sampled at rate $f_s$, the images (green) are added to the original transforms (blue) when one examines the discrete-time Fourier transforms (DTFT) of the sequences. In this hypothetical example, the DTFTs are identical, which means the sampled sequences are identical, even though the original continuous pre-sampled functions are not. If these were audio signals, $x(t)$ and $x_A(t)$ might not sound the same. But their samples (taken at rate $f_s$) are identical and would lead to identical reproduced sounds; thus $x_A(t)$ is an alias of $x(t)$ at this sample rate.
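Eq.1 can be checked numerically. The following sketch (a minimal illustration assuming NumPy; the Gaussian test signal, sample rate, and truncation ranges are arbitrary choices, not from the article) compares the periodic summation of $X(f)$ with the DTFT of the samples for $x(t) = e^{-\pi t^2}$, whose transform $X(f) = e^{-\pi f^2}$ decays fast enough that both infinite sums can be safely truncated:

```python
import numpy as np

fs = 4.0                  # sample rate (Hz); illustrative choice
T = 1.0 / fs
n = np.arange(-50, 51)    # sample indices; the Gaussian's tails vanish beyond this
k = np.arange(-20, 21)    # image indices for the periodic summation

def X(f):
    # continuous Fourier transform of x(t) = exp(-pi*t^2)
    return np.exp(-np.pi * f**2)

f = np.linspace(-2.0, 2.0, 9)
# left-hand side of Eq.1: periodic summation of X(f)
lhs = np.array([X(fi - k * fs).sum() for fi in f])
# right-hand side of Eq.1: Fourier series with coefficients T*x(nT), i.e. the DTFT
x_nT = np.exp(-np.pi * (n * T) ** 2)
rhs = np.array([(T * x_nT * np.exp(-2j * np.pi * n * T * fi)).sum() for fi in f])

print(np.max(np.abs(lhs - rhs)))  # ~1e-16: the two sides of Eq.1 agree
```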

As depicted, copies of $X(f)$ are shifted by multiples of the sampling rate $f_s$ and combined by addition. For a band-limited function ($X(f) = 0$ for all $|f| \ge B$) and sufficiently large $f_s$, it is possible for the copies to remain distinct from each other. But if the Nyquist criterion is not satisfied, adjacent copies overlap, and it is not possible in general to discern an unambiguous $X(f)$. Any frequency component above $f_s/2$ is indistinguishable from a lower-frequency component, called an alias, associated with one of the copies. In such cases, the customary interpolation techniques produce the alias, rather than the original component. When the sample rate is pre-determined by other considerations (such as an industry standard), $x(t)$ is usually filtered to reduce its high frequencies to acceptable levels before it is sampled. The type of filter required is a lowpass filter, and in this application it is called an anti-aliasing filter.
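The ambiguity is easy to reproduce. In this minimal sketch (assuming NumPy; the 0.9 Hz and 0.1 Hz frequencies are arbitrary illustrative picks), a sinusoid above the Nyquist frequency yields exactly the same samples as its low-frequency alias:

```python
import numpy as np

fs = 1.0                  # sample rate (Hz); Nyquist frequency is 0.5 Hz
n = np.arange(20)         # sample indices
t = n / fs

x_high = np.cos(2 * np.pi * 0.9 * t)  # 0.9 Hz: above the Nyquist frequency
x_low  = np.cos(2 * np.pi * 0.1 * t)  # 0.1 Hz alias, since 0.9 = 1.0 - 0.1

print(np.allclose(x_high, x_low))     # True: the sample sequences are identical
```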

 
Spectrum, Xs(f), of a properly sampled bandlimited signal (blue) and the adjacent DTFT images (green) that do not overlap. A brick-wall low-pass filter, H(f), removes the images, leaves the original spectrum, X(f), and recovers the original signal from its samples.
 
The figure on the left shows a function (in gray/black) being sampled and reconstructed (in gold) at steadily increasing sample-densities, while the figure on the right shows the frequency spectrum of the gray/black function, which does not change. The highest frequency in the spectrum is ½ the width of the entire spectrum. The width of the steadily-increasing pink shading is equal to the sample-rate. When it encompasses the entire frequency spectrum it is twice as large as the highest frequency, and that is when the reconstructed waveform matches the sampled one.

Derivation as a special case of Poisson summation

When there is no overlap of the copies (also known as "images") of $X(f)$, the $k = 0$ term of Eq.1 can be recovered by the product:

$$X(f) = H(f)\cdot X_s(f), \quad \text{where:} \quad H(f) \triangleq \begin{cases} 1 & |f| < B \\ 0 & |f| > f_s - B. \end{cases}$$
 

The sampling theorem is proved since $X(f)$ uniquely determines $x(t)$.

All that remains is to derive the formula for reconstruction. $H(f)$ need not be precisely defined in the region $[B,\ f_s - B]$ because $X_s(f)$ is zero in that region. However, the worst case is when $B = f_s/2$, the Nyquist frequency. A function that is sufficient for that and all less severe cases is:

$$H(f) = \mathrm{rect}\!\left(\frac{f}{f_s}\right) = \begin{cases} 1 & |f| < \frac{f_s}{2} \\ 0 & |f| > \frac{f_s}{2}, \end{cases}$$

where $\mathrm{rect}(\cdot)$ is the rectangular function. Therefore:

$$\begin{aligned} X(f) &= \mathrm{rect}\!\left(\frac{f}{f_s}\right)\cdot X_s(f)\\ &= \mathrm{rect}(Tf)\cdot \sum_{n=-\infty}^{\infty} T\cdot x(nT)\, e^{-i 2\pi n T f} && \text{(from Eq.1, above)}\\ &= \sum_{n=-\infty}^{\infty} x(nT)\cdot \underbrace{T\cdot \mathrm{rect}(Tf)\cdot e^{-i 2\pi n T f}}_{\mathcal{F}\left\{\mathrm{sinc}\left(\frac{t - nT}{T}\right)\right\}}. && \text{[A]} \end{aligned}$$

The inverse transform of both sides produces the Whittaker–Shannon interpolation formula:

$$x(t) = \sum_{n=-\infty}^{\infty} x(nT)\cdot \mathrm{sinc}\!\left(\frac{t - nT}{T}\right),$$

which shows how the samples, $x(nT)$, can be combined to reconstruct $x(t)$.

  • Larger-than-necessary values of $f_s$ (smaller values of $T$), called oversampling, have no effect on the outcome of the reconstruction and have the benefit of leaving room for a transition band in which $H(f)$ is free to take intermediate values. Undersampling, which causes aliasing, is not in general a reversible operation.
  • Theoretically, the interpolation formula can be implemented as a low-pass filter, whose impulse response is $\mathrm{sinc}(t/T)$ and whose input is $\textstyle\sum_{n=-\infty}^{\infty} x(nT)\cdot \delta(t - nT)$, which is a Dirac comb function modulated by the signal samples. Practical digital-to-analog converters (DAC) implement an approximation like the zero-order hold. In that case, oversampling can reduce the approximation error. A truncated form of the ideal interpolation is sketched below.
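The following minimal sketch (assuming NumPy; the test signal, bandlimit, and truncation window are illustrative choices) implements the truncated interpolation. The residual error is the interpolation error caused by cutting off the infinite sum:

```python
import numpy as np

B = 3.0            # bandlimit (Hz) of the test signal below
fs = 8.0           # sample rate > 2B, so reconstruction should succeed
T = 1.0 / fs

def x(t):
    # test signal with components at 1, 2, and 3 Hz (all below B)
    return (np.sin(2 * np.pi * 1 * t)
            + 0.5 * np.cos(2 * np.pi * 2 * t)
            + 0.25 * np.sin(2 * np.pi * 3 * t))

n = np.arange(-400, 401)          # finite window of samples
samples = x(n * T)

def reconstruct(t):
    # x(t) = sum_n x(nT) * sinc((t - nT)/T); np.sinc(u) = sin(pi*u)/(pi*u)
    return np.sum(samples * np.sinc((t - n * T) / T))

t_test = np.linspace(-1.0, 1.0, 9)
err = max(abs(reconstruct(ti) - x(ti)) for ti in t_test)
print(err)                        # small; limited only by truncating the sum
```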

Shannon's original proof

Poisson shows that the Fourier series in Eq.1 produces the periodic summation of $X(f)$, regardless of $f_s$ and $B$. Shannon, however, only derives the series coefficients for the case $f_s = 2B$. Virtually quoting Shannon's original paper:

Let $X(\omega)$ be the spectrum of $x(t)$. Then

$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\, e^{i\omega t}\, \mathrm{d}\omega = \frac{1}{2\pi}\int_{-2\pi B}^{2\pi B} X(\omega)\, e^{i\omega t}\, \mathrm{d}\omega,$$

because $X(\omega)$ is assumed to be zero outside the band $\left|\tfrac{\omega}{2\pi}\right| < B$. If we let $t = \tfrac{n}{2B}$, where $n$ is any positive or negative integer, we obtain:

$$x\!\left(\tfrac{n}{2B}\right) = \frac{1}{2\pi}\int_{-2\pi B}^{2\pi B} X(\omega)\, e^{i\omega \frac{n}{2B}}\, \mathrm{d}\omega. \qquad \text{(Eq.2)}$$

On the left are values of $x(t)$ at the sampling points. The integral on the right will be recognized as essentially[a] the $n$th coefficient in a Fourier-series expansion of the function $X(\omega)$, taking the interval $-B$ to $B$ as a fundamental period. This means that the values of the samples $x\!\left(\tfrac{n}{2B}\right)$ determine the Fourier coefficients in the series expansion of $X(\omega)$. Thus they determine $X(\omega)$, since $X(\omega)$ is zero for frequencies greater than $B$, and for lower frequencies $X(\omega)$ is determined if its Fourier coefficients are determined. But $X(\omega)$ determines the original function $x(t)$ completely, since a function is determined if its spectrum is known. Therefore the original samples determine the function $x(t)$ completely.

Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via sinc functions, what we now call the Whittaker–Shannon interpolation formula, as discussed above. He does not derive or prove the properties of the sinc function, but these would have been familiar to engineers reading his works at the time, since the Fourier pair relationship between rect (the rectangular function) and sinc was well known.

Let $x_n$ be the $n$th sample. Then the function $x(t)$ is represented by:

$$x(t) = \sum_{n=-\infty}^{\infty} x_n\, \frac{\sin\big(\pi(2Bt - n)\big)}{\pi(2Bt - n)}.$$

As in the other proof, the existence of the Fourier transform of the original signal is assumed, so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes.

Notes

  1. ^ Multiplying both sides of Eq.2 by $T = 1/(2B)$ produces, on the left, the scaled sample values $T\cdot x(nT)$ in Poisson's formula (Eq.1), and, on the right, the actual formula for Fourier expansion coefficients.

Application to multivariable signals and images

 
Subsampled image showing a Moiré pattern
 
Properly sampled image

The sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals and is normally formulated in that context. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. Grayscale images, for example, are often represented as two-dimensional arrays (or matrices) of real numbers representing the relative intensities of pixels (picture elements) located at the intersections of row and column sample locations. As a result, images require two independent variables, or indices, to specify each pixel uniquely—one for the row, and one for the column.

Color images typically consist of a composite of three separate grayscale images, one to represent each of the three primary colors—red, green, and blue, or RGB for short. Other colorspaces using 3-vectors for colors include HSV, CIELAB, XYZ, etc. Some colorspaces such as cyan, magenta, yellow, and black (CMYK) may represent color by four dimensions. All of these are treated as vector-valued functions over a two-dimensional sampled domain.

Similar to one-dimensional discrete-time signals, images can also suffer from aliasing if the sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a striped shirt with high spatial frequencies (in other words, when the distance between the stripes is small) can cause aliasing when the shirt is sampled by the camera's image sensor. The aliasing appears as a moiré pattern. The solution in this case would be to increase the spatial sampling rate by moving closer to the shirt or using a higher-resolution sensor, or to optically blur the image before acquiring it with the sensor, using an optical low-pass filter.

Another example is shown to the right in the brick patterns. The top image shows the effects when the sampling theorem's condition is not satisfied. When software rescales an image (the same process that creates the thumbnail shown in the lower image) it, in effect, runs the image through a low-pass filter first and then downsamples the image to result in a smaller image that does not exhibit the moiré pattern. The top image is what happens when the image is downsampled without low-pass filtering: aliasing results.
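A one-dimensional sketch of the same effect (assuming NumPy; the stripe frequency, downsampling factor, and box blur are illustrative stand-ins for a real resampler's low-pass filter):

```python
import numpy as np

x = np.arange(256)
stripes = np.cos(2 * np.pi * 0.45 * x)  # stripes at 0.45 cycles/pixel

factor = 4                  # downsample 4x; new Nyquist = 0.125 cycles/pixel
naive = stripes[::factor]   # no filtering: stripes alias to a false low frequency

kernel = np.ones(factor) / factor       # crude box blur as an anti-aliasing filter
blurred = np.convolve(stripes, kernel, mode="same")
filtered = blurred[::factor]            # filter first, then downsample

# The naive result still swings with full amplitude (at the wrong frequency);
# the filtered result is strongly attenuated, as a proper resampler intends.
print(np.ptp(naive), np.ptp(filtered))
```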

The sampling theorem applies to camera systems, where the scene and lens constitute an analog spatial signal source, and the image sensor is a spatial sampling device. Each of these components is characterized by a modulation transfer function (MTF), representing the precise resolution (spatial bandwidth) available in that component. Effects of aliasing or blurring can occur when the lens MTF and sensor MTF are mismatched. When the optical image sampled by the sensor contains higher spatial frequencies than the sensor can resolve, the finite area of each sampling spot provides some low-pass filtering that reduces aliasing. When the area of the sampling spot (the size of the pixel sensor) is not large enough to provide sufficient spatial anti-aliasing, a separate anti-aliasing filter (optical low-pass filter) may be included in a camera system to reduce the MTF of the optical image. Instead of requiring an optical filter, the graphics processing unit of smartphone cameras performs digital signal processing to remove aliasing with a digital filter. Digital filters also apply sharpening to amplify the contrast from the lens at high spatial frequencies, which otherwise falls off rapidly at diffraction limits.

The sampling theorem also applies to the post-processing of digital images, such as up- or down-sampling. Effects of aliasing, blurring, and sharpening may be adjusted with digital filtering implemented in software, which necessarily follows the same theoretical principles.

Critical frequency

To illustrate the necessity of $f_s > 2B$, consider the family of sinusoids generated by different values of $\theta$ in this formula:

$$x(t) = \frac{\cos(2\pi Bt + \theta)}{\cos\theta} = \cos(2\pi Bt) - \sin(2\pi Bt)\tan\theta, \quad -\pi/2 < \theta < \pi/2.$$

A family of sinusoids at the critical frequency, all having the same sample sequences of alternating +1 and −1. That is, they all are aliases of each other, even though their frequency is not above half the sample rate.

With $f_s = 2B$, or equivalently $T = 1/(2B)$, the samples are given by:

$$x(nT) = \cos(\pi n) - \underbrace{\sin(\pi n)}_{0}\,\tan\theta = (-1)^n,$$

regardless of the value of $\theta$. That sort of ambiguity is the reason for the strict inequality of the sampling theorem's condition.
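A quick numerical confirmation (assuming NumPy; the values of $\theta$ are arbitrary picks from the allowed interval) that every member of the family produces the same alternating sample sequence:

```python
import numpy as np

B = 1.0
fs = 2 * B            # the critical rate
T = 1.0 / fs
n = np.arange(8)

for theta in (-1.0, 0.0, 0.5, 1.2):   # any theta in (-pi/2, pi/2)
    samples = np.cos(2 * np.pi * B * n * T + theta) / np.cos(theta)
    print(np.allclose(samples, (-1.0) ** n))   # True for every theta
```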

Sampling of non-baseband signals

As discussed by Shannon:[2]

A similar result is true if the band does not start at zero frequency but at some higher value, and can be proved by a linear translation (corresponding physically to single-sideband modulation) of the zero-frequency case. In this case the elementary pulse is obtained from sin(x)/x by single-side-band modulation.

That is, a sufficient no-loss condition for sampling signals that do not have baseband components exists that involves the width of the non-zero frequency interval as opposed to its highest frequency component. See Sampling (signal processing) for more details and examples.

For example, in order to sample the FM radio signals in the frequency range of 100–102 MHz, it is not necessary to sample at 204 MHz (twice the upper frequency), but rather it is sufficient to sample at 4 MHz (twice the width of the frequency interval).

A bandpass condition is that $X(f) = 0$ for all nonnegative $f$ outside the open band of frequencies:

$$\left(\frac{N}{2} f_s,\ \frac{N+1}{2} f_s\right),$$

for some nonnegative integer $N$. This formulation includes the normal baseband condition as the case $N = 0$.
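For the FM example above, the condition can be checked directly (a small sketch in plain Python; the variable names are illustrative):

```python
fs = 4e6                   # sample rate: twice the 2 MHz occupied bandwidth
low, high = 100e6, 102e6   # occupied band of the FM example

N = int(2 * low / fs)      # candidate nonnegative integer in the bandpass condition
fits = (N / 2 * fs <= low) and (high <= (N + 1) / 2 * fs)
print(N, fits)             # 50 True
# Note: the band edges coincide exactly with the interval endpoints here,
# the borderline case analogous to B = fs/2 in the baseband theorem.
```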

The corresponding interpolation function is the impulse response of an ideal brick-wall bandpass filter (as opposed to the ideal brick-wall lowpass filter used above) with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses:

$$(N+1)\,\operatorname{sinc}\!\left(\frac{(N+1)t}{T}\right) - N\,\operatorname{sinc}\!\left(\frac{Nt}{T}\right).$$

Other generalizations, for example to signals occupying multiple non-contiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied then information will most likely be lost.

Nonuniform sampling

The sampling theory of Shannon can be generalized for the case of nonuniform sampling, that is, samples not taken equally spaced in time. The Shannon sampling theory for non-uniform sampling states that a band-limited signal can be perfectly reconstructed from its samples if the average sampling rate satisfies the Nyquist condition.[4] Therefore, although uniformly spaced samples may result in easier reconstruction algorithms, it is not a necessary condition for perfect reconstruction.

The general theory for non-baseband and nonuniform samples was developed in 1967 by Henry Landau.[5] He proved that the average sampling rate (uniform or otherwise) must be at least twice the occupied bandwidth of the signal, assuming it is known a priori what portion of the spectrum is occupied. In the late 1990s, this work was partially extended to cover signals whose amount of occupied bandwidth was known, but whose actual occupied portion of the spectrum was unknown.[6] In the 2000s, a complete theory was developed (see the section Sampling below the Nyquist rate under additional restrictions below) using compressed sensing. In particular, the theory, using signal-processing language, is described in a 2009 paper.[7] Its authors show, among other things, that if the frequency locations are unknown, then it is necessary to sample at least at twice the minimal rate that suffices when the locations are known; in other words, one must pay at least a factor of two for not knowing the location of the spectrum. Note that minimum sampling requirements do not necessarily guarantee stability.

Sampling below the Nyquist rate under additional restrictions

The Nyquist–Shannon sampling theorem provides a sufficient condition for the sampling and reconstruction of a band-limited signal. When reconstruction is done via the Whittaker–Shannon interpolation formula, the Nyquist criterion is also a necessary condition to avoid aliasing, in the sense that if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed. However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be a necessary condition.

A non-trivial example of exploiting extra assumptions about the signal is given by the recent field of compressed sensing, which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some domain. As an example, compressed sensing deals with signals that may have a low overall bandwidth (say, the effective bandwidth $EB$) but whose frequency locations are unknown, rather than all together in a single band, so that the passband technique does not apply. In other words, the frequency spectrum is sparse. Traditionally, the necessary sampling rate is thus $2B$. Using compressed-sensing techniques, the signal could be perfectly reconstructed if it is sampled at a rate slightly lower than $2EB$. With this approach, reconstruction is no longer given by a formula, but instead by the solution to a linear optimization program.
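To make the last point concrete, here is a toy sketch (assuming NumPy and SciPy; the DCT sparsity basis, problem sizes, and coefficient values are all illustrative choices, not a method from the cited literature). A signal that is sparse in the DCT domain is observed through a few random time samples and recovered by basis pursuit, i.e. by minimizing $\|c\|_1$ subject to $Ac = y$, rewritten as a linear program by splitting $c = u - v$ with $u, v \ge 0$:

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 64, 20                          # signal length; number of random samples

c_true = np.zeros(n)                   # sparse spectrum: 3 nonzero DCT coefficients
c_true[[3, 17, 40]] = [1.0, -0.7, 0.4]
x = idct(c_true, norm="ortho")         # time-domain signal

rows = rng.choice(n, size=m, replace=False)   # random sub-Nyquist sample locations
Psi = idct(np.eye(n), axis=0, norm="ortho")   # DCT synthesis matrix, so x = Psi @ c
A = Psi[rows, :]                              # measurement matrix
y = x[rows]                                   # observed samples

# LP: minimize sum(u) + sum(v)  subject to  A(u - v) = y,  u >= 0, v >= 0
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
c_hat = res.x[:n] - res.x[n:]

print(np.max(np.abs(c_hat - c_true)))  # near zero when recovery succeeds for this draw
```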

Another example where sub-Nyquist sampling is optimal arises under the additional constraint that the samples are quantized in an optimal manner, as in a combined system of sampling and optimal lossy compression.[8] This setting is relevant in cases where the joint effect of sampling and quantization is to be considered, and can provide a lower bound for the minimal reconstruction error that can be attained in sampling and quantizing a random signal. For stationary Gaussian random signals, this lower bound is usually attained at a sub-Nyquist sampling rate, indicating that sub-Nyquist sampling is optimal for this signal model under optimal quantization.[9]

Historical background

The sampling theorem was implied by the work of Harry Nyquist in 1928,[10] in which he showed that up to 2B independent pulse samples could be sent through a system of bandwidth B; but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. About the same time, Karl Küpfmüller showed a similar result[11] and discussed the sinc-function impulse response of a band-limiting filter, via its integral, the step-response sine integral; this bandlimiting and reconstruction filter that is so central to the sampling theorem is sometimes referred to as a Küpfmüller filter (but seldom so in English).

The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon.[2] The mathematician E. T. Whittaker published similar results in 1915,[12] J. M. Whittaker in 1935,[13] and Gabor in 1946 ("Theory of communication").

In 1948 and 1949, Claude E. Shannon published the two revolutionary articles in which he founded information theory.[14][15][2] In Shannon 1948 the sampling theorem is formulated as "Theorem 13": Let $f(t)$ contain no frequencies over $W$. Then

$$f(t) = \sum_{n=-\infty}^{\infty} X_n\, \frac{\sin\big(\pi(2Wt - n)\big)}{\pi(2Wt - n)}, \quad \text{where } X_n = f\!\left(\frac{n}{2W}\right).$$

It was not until these articles were published that the theorem known as “Shannon’s sampling theorem” became common property among communication engineers, although Shannon himself writes that this is a fact which is common knowledge in the communication art.[B] A few lines further on, however, he adds: "but in spite of its evident importance, [it] seems not to have appeared explicitly in the literature of communication theory".

Other discoverers

Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example, by Jerri[16] and by Lüke.[17] For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the term Raabe condition came to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth). Meijering[18] mentions several other discoverers and names in a paragraph and pair of footnotes:

As pointed out by Higgins [135], the sampling theorem should really be considered in two parts, as done above: the first stating the fact that a bandlimited function is completely determined by its samples, the second describing how to reconstruct the function using its samples. Both parts of the sampling theorem were given in a somewhat different form by J. M. Whittaker [350, 351, 353] and before him also by Ogura [241, 242]. They were probably not aware of the fact that the first part of the theorem had been stated as early as 1897 by Borel [25].27 As we have seen, Borel also used around that time what became known as the cardinal series. However, he appears not to have made the link [135]. In later years it became known that the sampling theorem had been presented before Shannon to the Russian communication community by Kotel'nikov [173]. In more implicit, verbal form, it had also been described in the German literature by Raabe [257]. Several authors [33, 205] have mentioned that Someya [296] introduced the theorem in the Japanese literature parallel to Shannon. In the English literature, Weston [347] introduced it independently of Shannon around the same time.28

27 Several authors, following Black [16], have claimed that this first part of the sampling theorem was stated even earlier by Cauchy, in a paper [41] published in 1841. However, the paper of Cauchy does not contain such a statement, as has been pointed out by Higgins [135].

28 As a consequence of the discovery of the several independent introductions of the sampling theorem, people started to refer to the theorem by including the names of the aforementioned authors, resulting in such catchphrases as “the Whittaker–Kotel’nikov–Shannon (WKS) sampling theorem" [155] or even "the Whittaker–Kotel'nikov–Raabe–Shannon–Someya sampling theorem" [33]. To avoid confusion, perhaps the best thing to do is to refer to it as the sampling theorem, "rather than trying to find a title that does justice to all claimants" [136].

Why Nyquist?

Exactly how, when, or why Harry Nyquist had his name attached to the sampling theorem remains obscure. The term Nyquist Sampling Theorem (capitalized thus) appeared as early as 1959 in a book from his former employer, Bell Labs,[19] and appeared again in 1963,[20] and not capitalized in 1965.[21] It had been called the Shannon Sampling Theorem as early as 1954,[22] but also just the sampling theorem by several other books in the early 1950s.

In 1958, Blackman and Tukey cited Nyquist's 1928 article as a reference for the sampling theorem of information theory,[23] even though that article does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes these entries:

Sampling theorem (of information theory)
Nyquist's result that equi-spaced data, with two or more points per cycle of highest frequency, allows reconstruction of band-limited functions. (See Cardinal theorem.)
Cardinal theorem (of interpolation theory)
A precise statement of the conditions under which values given at a doubly infinite set of equally spaced points can be interpolated to yield a continuous band-limited function with the aid of the function

$$\frac{\sin \pi x}{\pi x}.$$

Exactly what "Nyquist's result" they are referring to remains mysterious.

When Shannon stated and proved the sampling theorem in his 1949 article, according to Meijering,[18] "he referred to the critical sampling interval $T = \frac{1}{2W}$ as the Nyquist interval corresponding to the band $W$, in recognition of Nyquist's discovery of the fundamental importance of this interval in connection with telegraphy". This explains Nyquist's name on the critical interval, but not on the theorem.

Similarly, Nyquist's name was attached to Nyquist rate in 1953 by Harold S. Black:

"If the essential frequency range is limited to B cycles per second, 2B was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less half a quantum step. This rate is generally referred to as signaling at the Nyquist rate and   has been termed a Nyquist interval."[24] (bold added for emphasis; italics as in the original)

According to the OED, this may be the origin of the term Nyquist rate. In Black's usage, it is not a sampling rate, but a signaling rate.

Notes

  1. ^ The sinc function follows from rows 202 and 102 of the transform tables
  2. ^ Shannon 1949, p. 448.

References

  1. ^ Nemirovsky, Jonathan; Shimron, Efrat (2015). "Utilizing Bochners Theorem for Constrained Evaluation of Missing Fourier Data". arXiv:1506.03300 [physics.med-ph].
  2. ^ a b c d Shannon, Claude E. (January 1949). "Communication in the presence of noise". Proceedings of the Institute of Radio Engineers. 37 (1): 10–21. doi:10.1109/jrproc.1949.232969. S2CID 52873253. Reprint as classic paper in: Proc. IEEE, Vol. 86, No. 2 (Feb 1998). Archived 2010-02-08 at the Wayback Machine.
  3. ^ Ahmed, N.; Rao, K.R. (July 10, 1975). Orthogonal Transforms for Digital Signal Processing (1 ed.). Berlin Heidelberg New York: Springer-Verlag. doi:10.1007/978-3-642-45450-9. ISBN 9783540065562.
  4. ^ Marvasti, F., ed. (2000). Nonuniform Sampling, Theory and Practice. New York: Kluwer Academic/Plenum Publishers.
  5. ^ Landau, H. J. (1967). "Necessary density conditions for sampling and interpolation of certain entire functions". Acta Math. 117 (1): 37–52. doi:10.1007/BF02395039.
  6. ^ see, e.g., Feng, P. (1997). Universal minimum-rate sampling and spectrum-blind reconstruction for multiband signals. Ph.D. dissertation, University of Illinois at Urbana-Champaign.
  7. ^ Mishali, Moshe; Eldar, Yonina C. (March 2009). "Blind Multiband Signal Reconstruction: Compressed Sensing for Analog Signals". IEEE Trans. Signal Process. 57 (3): 993–1009. Bibcode:2009ITSP...57..993M. CiteSeerX 10.1.1.154.4255. doi:10.1109/TSP.2009.2012791. S2CID 2529543.
  8. ^ Kipnis, Alon; Goldsmith, Andrea J.; Eldar, Yonina C.; Weissman, Tsachy (January 2016). "Distortion rate function of sub-Nyquist sampled Gaussian sources". IEEE Transactions on Information Theory. 62: 401–429. arXiv:1405.5329. doi:10.1109/tit.2015.2485271. S2CID 47085927.
  9. ^ Kipnis, Alon; Eldar, Yonina; Goldsmith, Andrea (26 April 2018). "Analog-to-Digital Compression: A New Paradigm for Converting Signals to Bits". IEEE Signal Processing Magazine. 35 (3): 16–39. arXiv:1801.06718. Bibcode:2018ISPM...35...16K. doi:10.1109/MSP.2017.2774249. S2CID 13693437.
  10. ^ Nyquist, Harry (April 1928). "Certain topics in telegraph transmission theory". Trans. AIEE. 47 (2): 617–644. Bibcode:1928TAIEE..47..617N. doi:10.1109/t-aiee.1928.5055024. Reprint as classic paper in: Proc. IEEE, Vol. 90, No. 2 (Feb 2002). Archived 2013-09-26 at the Wayback Machine.
  11. ^ Küpfmüller, Karl (1928). "Über die Dynamik der selbsttätigen Verstärkungsregler". Elektrische Nachrichtentechnik (in German). 5 (11): 459–467. (English translation 2005).
  12. ^ Whittaker, E. T. (1915). "On the Functions Which are Represented by the Expansions of the Interpolation Theory". Proc. R. Soc. Edinburgh. 35: 181–194. doi:10.1017/s0370164600017806. ("Theorie der Kardinalfunktionen").
  13. ^ Whittaker, J. M. (1935). Interpolatory Function Theory. Cambridge, England: Cambridge Univ. Press..
  14. ^ Shannon, Claude E. (July 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x. hdl:11858/00-001M-0000-002C-4317-B..
  15. ^ Shannon, Claude E. (October 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (4): 623–666. doi:10.1002/j.1538-7305.1948.tb00917.x. hdl:11858/00-001M-0000-002C-4314-2.
  16. ^ Jerri, Abdul (November 1977). "The Shannon Sampling Theorem—Its Various Extensions and Applications: A Tutorial Review". Proceedings of the IEEE. 65 (11): 1565–1596. Bibcode:1977IEEEP..65.1565J. doi:10.1109/proc.1977.10771. S2CID 37036141. See also Jerri, Abdul (April 1979). "Correction to "The Shannon sampling theorem—Its various extensions and applications: A tutorial review"". Proceedings of the IEEE. 67 (4): 695. doi:10.1109/proc.1979.11307.
  17. ^ Lüke, Hans Dieter (April 1999). "The Origins of the Sampling Theorem" (PDF). IEEE Communications Magazine. 37 (4): 106–108. CiteSeerX 10.1.1.163.2887. doi:10.1109/35.755459.
  18. ^ a b Meijering, Erik (March 2002). "A Chronology of Interpolation From Ancient Astronomy to Modern Signal and Image Processing" (PDF). Proc. IEEE. 90 (3): 319–342. doi:10.1109/5.993400.
  19. ^ Members of the Technical Staff of Bell Telephone Laboratories (1959). Transmission Systems for Communications. AT&T. pp. 26–4 (Vol. 2).
  20. ^ Guillemin, Ernst Adolph (1963). Theory of Linear Physical Systems. Wiley. ISBN 9780471330707.
  21. ^ Roberts, Richard A.; Barton, Ben F. (1965). Theory of Signal Detectability: Composite Deferred Decision Theory.
  22. ^ Gray, Truman S. (1954). "Applied Electronics: A First Course in Electronics, Electron Tubes, and Associated Circuits". Physics Today. 7 (11): 17. Bibcode:1954PhT.....7k..17G. doi:10.1063/1.3061438. hdl:2027/mdp.39015002049487.
  23. ^ Blackman, R. B.; Tukey, J. W. (1958). The Measurement of Power Spectra: From the Point of View of Communications Engineering (PDF). New York: Dover. Archived (PDF) from the original on 2022-10-09.
  24. ^ Black, Harold S. (1953). Modulation Theory.

Further reading

  • Higgins, J.R. (1985). "Five short stories about the cardinal series". Bulletin of the AMS. 12 (1): 45–89. doi:10.1090/S0273-0979-1985-15293-0.
  • Küpfmüller, Karl (1931). "Utjämningsförlopp inom Telegraf- och Telefontekniken" [Transients in telegraph and telephone engineering]. Teknisk Tidskrift (9): 153–160. and (10): pp. 178–182
  • Marks, II, R.J. (1991). Introduction to Shannon Sampling and Interpolation Theory (PDF). Springer Texts in Electrical Engineering. Springer. doi:10.1007/978-1-4613-9708-3. ISBN 978-1-4613-9708-3. Archived (PDF) from the original on 2011-07-14.
  • Marks, II, R.J., ed. (1993). Advanced Topics in Shannon Sampling and Interpolation Theory (PDF). Springer Texts in Electrical Engineering. Springer. doi:10.1007/978-1-4613-9757-1. ISBN 978-1-4613-9757-1. Archived (PDF) from the original on 2011-10-06.
  • Marks, II, R.J. (2009). "Chapters 5–8". Handbook of Fourier Analysis and Its Applications. Oxford University Press. ISBN 978-0-19-804430-7.
  • Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "§13.11. Numerical Use of the Sampling Theorem", Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8
  • Unser, Michael (April 2000). "Sampling-50 Years after Shannon" (PDF). Proc. IEEE. 88 (4): 569–587. doi:10.1109/5.843002. S2CID 11657280. Archived (PDF) from the original on 2022-11-03.

External links

  • Learning by Simulations Interactive simulation of the effects of inadequate sampling
  • Institute of Telecommunications, University of Stuttgart
  • Undersampling and an application of it
  • Journal devoted to Sampling Theory
  • Huang, J.; Padmanabhan, K.; Collins, O.M. (June 2011). "The Sampling Theorem With Constant Amplitude Variable Width Pulses". IEEE Transactions on Circuits and Systems I: Regular Papers. 58 (6): 1178–90. doi:10.1109/TCSI.2010.2094350. S2CID 13590085.
  • Lüke, Hans Dieter (April 1999). "The Origins of the Sampling Theorem" (PDF). IEEE Communications Magazine. 37 (4): 106–108. CiteSeerX 10.1.1.163.2887. doi:10.1109/35.755459.

nyquist, shannon, sampling, theorem, theorem, field, signal, processing, which, serves, fundamental, bridge, between, continuous, time, signals, discrete, time, signals, establishes, sufficient, condition, sample, rate, that, permits, discrete, sequence, sampl. The Nyquist Shannon sampling theorem is a theorem in the field of signal processing which serves as a fundamental bridge between continuous time signals and discrete time signals It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous time signal of finite bandwidth Example of magnitude of the Fourier transform of a bandlimited function Strictly speaking the theorem only applies to a class of mathematical functions having a Fourier transform that is zero outside of a finite region of frequencies Intuitively we expect that when one reduces a continuous function to a discrete sequence and interpolates back to a continuous function the fidelity of the result depends on the density or sample rate of the original samples The sampling theorem introduces the concept of a sample rate that is sufficient for perfect fidelity for the class of functions that are band limited to a given bandwidth such that no actual information is lost in the sampling process It expresses the sufficient sample rate in terms of the bandwidth for the class of functions The theorem also leads to a formula for perfectly reconstructing the original continuous time function from the samples Perfect reconstruction may still be possible when the sample rate criterion is not satisfied provided other constraints on the signal are known see Sampling of non baseband signals below and compressed sensing In some cases when the sample rate criterion is not satisfied utilizing additional constraints allows for approximate reconstructions The fidelity of these reconstructions can be verified and quantified utilizing Bochner s theorem 1 The name Nyquist Shannon sampling theorem honours Harry Nyquist and Claude Shannon but the theorem was also previously discovered by E T Whittaker published in 1915 and Shannon cited Whittaker s paper in his work The theorem is thus also known by the names Whittaker Shannon sampling theorem Whittaker Shannon and Whittaker Nyquist Shannon and may also be referred to as the cardinal theorem of interpolation Contents 1 Introduction 2 Aliasing 3 Derivation as a special case of Poisson summation 4 Shannon s original proof 4 1 Notes 5 Application to multivariable signals and images 6 Critical frequency 7 Sampling of non baseband signals 8 Nonuniform sampling 9 Sampling below the Nyquist rate under additional restrictions 10 Historical background 10 1 Other discoverers 10 2 Why Nyquist 11 See also 12 Notes 13 References 14 Further reading 15 External linksIntroduction EditSampling is a process of converting a signal for example a function of continuous time or space into a sequence of values a function of discrete time or space Shannon s version of the theorem states 2 Theorem If a function x t displaystyle x t contains no frequencies higher than B hertz it is completely determined by giving its ordinates at a series of points spaced 1 2 B displaystyle 1 2B seconds apart A sufficient sample rate is therefore anything larger than 2 B displaystyle 2B samples per second Equivalently for a given sample rate f s displaystyle f s perfect reconstruction is guaranteed possible for a bandlimit B lt f s 2 displaystyle B lt f s 2 When the bandlimit is too high or there is 
no bandlimit the reconstruction exhibits imperfections known as aliasing Modern statements of the theorem are sometimes careful to explicitly state that x t displaystyle x t must contain no sinusoidal component at exactly frequency B displaystyle B or that B displaystyle B must be strictly less than the sample rate The threshold 2 B displaystyle 2B is called the Nyquist rate and is an attribute of the continuous time input x t displaystyle x t to be sampled The sample rate must exceed the Nyquist rate for the samples to suffice to represent x t displaystyle x t The threshold f s 2 displaystyle f s 2 is called the Nyquist frequency and is an attribute of the sampling equipment All meaningful frequency components of the properly sampled x t displaystyle x t exist below the Nyquist frequency The condition described by these inequalities is called the Nyquist criterion or sometimes the Raabe condition The theorem is also applicable to functions of other domains such as space in the case of a digitized image The only change in the case of other domains is the units of measure attributed to t displaystyle t f s displaystyle f s and B displaystyle B The normalized sinc function sin px px showing the central peak at x 0 and zero crossings at the other integer values of x The symbol T 1 f s displaystyle T triangleq 1 f s is customarily used to represent the interval between samples and is called the sample period or sampling interval The samples of function x t displaystyle x t are commonly denoted by x n x n T displaystyle x n triangleq x nT alternatively x n displaystyle x n in older signal processing literature for all integer values of n displaystyle n Another convenient definition is x n T x n T displaystyle x n triangleq T cdot x nT which preserves the energy of the signal as T displaystyle T varies 3 A mathematically ideal way to interpolate the sequence involves the use of sinc functions Each sample in the sequence is replaced by a sinc function centered on the time axis at the original location of the sample n T displaystyle nT with the amplitude of the sinc function scaled to the sample value x n displaystyle x n Subsequently the sinc functions are summed into a continuous function A mathematically equivalent method uses the Dirac comb and proceeds by convolving one sinc function with a series of Dirac delta pulses weighted by the sample values Neither method is numerically practical Instead some type of approximation of the sinc functions finite in length is used The imperfections attributable to the approximation are known as interpolation error Practical digital to analog converters produce neither scaled and delayed sinc functions nor ideal Dirac pulses Instead they produce a piecewise constant sequence of scaled and delayed rectangular pulses the zero order hold usually followed by a lowpass filter called an anti imaging filter to remove spurious high frequency replicas images of the original baseband signal Aliasing EditMain article Aliasing The samples of two sine waves can be identical when at least one of them is at a frequency above half the sample rate When x t displaystyle x t is a function with a Fourier transform X f displaystyle X f X f x t e i 2 p f t d t displaystyle X f triangleq int infty infty x t e i2 pi ft rm d t the Poisson summation formula indicates that the samples x n T displaystyle x nT of x t displaystyle x t are sufficient to create a periodic summation of X f displaystyle X f The result is X s f k X f k f s n T x n T e i 2 p n T f displaystyle X s f 
triangleq sum k infty infty X left f kf s right sum n infty infty T cdot x nT e i2 pi nTf Eq 1 X f top blue and XA f bottom blue are continuous Fourier transforms of two different functions x t displaystyle x t and x A t displaystyle x A t not shown When the functions are sampled at rate f s displaystyle f s the images green are added to the original transforms blue when one examines the discrete time Fourier transforms DTFT of the sequences In this hypothetical example the DTFTs are identical which means the sampled sequences are identical even though the original continuous pre sampled functions are not If these were audio signals x t displaystyle x t and x A t displaystyle x A t might not sound the same But their samples taken at rate fs are identical and would lead to identical reproduced sounds thus xA t is an alias of x t at this sample rate which is a periodic function and its equivalent representation as a Fourier series whose coefficients are T x n T displaystyle T cdot x nT This function is also known as the discrete time Fourier transform DTFT of the sample sequence As depicted copies of X f displaystyle X f are shifted by multiples of the sampling rate f s displaystyle f s and combined by addition For a band limited function X f 0 for all f B displaystyle X f 0 text for all f geq B and sufficiently large f s displaystyle f s it is possible for the copies to remain distinct from each other But if the Nyquist criterion is not satisfied adjacent copies overlap and it is not possible in general to discern an unambiguous X f displaystyle X f Any frequency component above f s 2 displaystyle f s 2 is indistinguishable from a lower frequency component called an alias associated with one of the copies In such cases the customary interpolation techniques produce the alias rather than the original component When the sample rate is pre determined by other considerations such as an industry standard x t displaystyle x t is usually filtered to reduce its high frequencies to acceptable levels before it is sampled The type of filter required is a lowpass filter and in this application it is called an anti aliasing filter Spectrum Xs f of a properly sampled bandlimited signal blue and the adjacent DTFT images green that do not overlap A brick wall low pass filter H f removes the images leaves the original spectrum X f and recovers the original signal from its samples The figure on the left shows a function in gray black being sampled and reconstructed in gold at steadily increasing sample densities while the figure on the right shows the frequency spectrum of the gray black function which does not change The highest frequency in the spectrum is the width of the entire spectrum The width of the steadily increasing pink shading is equal to the sample rate When it encompasses the entire frequency spectrum it is twice as large as the highest frequency and that is when the reconstructed waveform matches the sampled one Derivation as a special case of Poisson summation EditWhen there is no overlap of the copies also known as images of X f displaystyle X f the k 0 displaystyle k 0 term of Eq 1 can be recovered by the product X f H f X s f displaystyle X f H f cdot X s f where H f 1 f lt B 0 f gt f s B displaystyle H f triangleq begin cases 1 amp f lt B 0 amp f gt f s B end cases The sampling theorem is proved since X f displaystyle X f uniquely determines x t displaystyle x t All that remains is to derive the formula for reconstruction H f displaystyle H f need not be precisely defined in the region B 
f s B displaystyle B f s B because X s f displaystyle X s f is zero in that region However the worst case is when B f s 2 displaystyle B f s 2 the Nyquist frequency A function that is sufficient for that and all less severe cases is H f r e c t f f s 1 f lt f s 2 0 f gt f s 2 displaystyle H f mathrm rect left frac f f s right begin cases 1 amp f lt frac f s 2 0 amp f gt frac f s 2 end cases where rect is the rectangular function Therefore X f r e c t f f s X s f displaystyle X f mathrm rect left frac f f s right cdot X s f r e c t T f n T x n T e i 2 p n T f displaystyle mathrm rect Tf cdot sum n infty infty T cdot x nT e i2 pi nTf from Eq 1 above n x n T T r e c t T f e i 2 p n T f F s i n c t n T T displaystyle sum n infty infty x nT cdot underbrace T cdot mathrm rect Tf cdot e i2 pi nTf mathcal F left mathrm sinc left frac t nT T right right A dd dd The inverse transform of both sides produces the Whittaker Shannon interpolation formula x t n x n T s i n c t n T T displaystyle x t sum n infty infty x nT cdot mathrm sinc left frac t nT T right which shows how the samples x n T displaystyle x nT can be combined to reconstruct x t displaystyle x t Larger than necessary values of fs smaller values of T called oversampling have no effect on the outcome of the reconstruction and have the benefit of leaving room for a transition band in which H f is free to take intermediate values Undersampling which causes aliasing is not in general a reversible operation Theoretically the interpolation formula can be implemented as a low pass filter whose impulse response is sinc t T and whose input is n x n T d t n T displaystyle textstyle sum n infty infty x nT cdot delta t nT which is a Dirac comb function modulated by the signal samples Practical digital to analog converters DAC implement an approximation like the zero order hold In that case oversampling can reduce the approximation error Shannon s original proof EditPoisson shows that the Fourier series in Eq 1 produces the periodic summation of X f displaystyle X f regardless of f s displaystyle f s and B displaystyle B Shannon however only derives the series coefficients for the case f s 2 B displaystyle f s 2B Virtually quoting Shannon s original paper Let X w displaystyle X omega be the spectrum of x t displaystyle x t Thenx t 1 2 p X w e i w t d w 1 2 p 2 p B 2 p B X w e i w t d w displaystyle x t 1 over 2 pi int infty infty X omega e i omega t rm d omega 1 over 2 pi int 2 pi B 2 pi B X omega e i omega t rm d omega dd because X w displaystyle X omega is assumed to be zero outside the band w 2 p lt B displaystyle left tfrac omega 2 pi right lt B If we let t n 2 B displaystyle t tfrac n 2B where n displaystyle n is any positive or negative integer we obtain x n 2 B 1 2 p 2 p B 2 p B X w e i w n 2 B d w displaystyle x left tfrac n 2B right 1 over 2 pi int 2 pi B 2 pi B X omega e i omega n over 2B rm d omega Eq 2 dd On the left are values of x t displaystyle x t at the sampling points The integral on the right will be recognized as essentially a the nth coefficient in a Fourier series expansion of the function X w displaystyle X omega taking the interval B displaystyle B to B displaystyle B as a fundamental period This means that the values of the samples x n 2 B displaystyle x n 2B determine the Fourier coefficients in the series expansion of X w displaystyle X omega Thus they determine X w displaystyle X omega since X w displaystyle X omega is zero for frequencies greater than B and for lower frequencies X w displaystyle X omega is determined if 
its Fourier coefficients are determined But X w displaystyle X omega determines the original function x t displaystyle x t completely since a function is determined if its spectrum is known Therefore the original samples determine the function x t displaystyle x t completely Shannon s proof of the theorem is complete at that point but he goes on to discuss reconstruction via sinc functions what we now call the Whittaker Shannon interpolation formula as discussed above He does not derive or prove the properties of the sinc function but these would have been weasel words familiar to engineers reading his works at the time since the Fourier pair relationship between rect the rectangular function and sinc was well known Let x n displaystyle x n be the nth sample Then the function x t displaystyle x t is represented by x t n x n sin p 2 B t n p 2 B t n displaystyle x t sum n infty infty x n sin pi 2Bt n over pi 2Bt n dd As in the other proof the existence of the Fourier transform of the original signal is assumed so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes Notes Edit Multiplying both sides of Eq 2 by T 1 2 B displaystyle T 1 2B produces on the left the scaled sample values T x n T displaystyle T cdot x nT in Poisson s formula Eq 1 and on the right the actual formula for Fourier expansion coefficients Application to multivariable signals and images EditMain article Multidimensional sampling Subsampled image showing a Moire pattern Properly sampled image The sampling theorem is usually formulated for functions of a single variable Consequently the theorem is directly applicable to time dependent signals and is normally formulated in that context However the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables Grayscale images for example are often represented as two dimensional arrays or matrices of real numbers representing the relative intensities of pixels picture elements located at the intersections of row and column sample locations As a result images require two independent variables or indices to specify each pixel uniquely one for the row and one for the column Color images typically consist of a composite of three separate grayscale images one to represent each of the three primary colors red green and blue or RGB for short Other colorspaces using 3 vectors for colors include HSV CIELAB XYZ etc Some colorspaces such as cyan magenta yellow and black CMYK may represent color by four dimensions All of these are treated as vector valued functions over a two dimensional sampled domain Similar to one dimensional discrete time signals images can also suffer from aliasing if the sampling resolution or pixel density is inadequate For example a digital photograph of a striped shirt with high frequencies in other words the distance between the stripes is small can cause aliasing of the shirt when it is sampled by the camera s image sensor The aliasing appears as a moire pattern The solution to higher sampling in the spatial domain for this case would be to move closer to the shirt use a higher resolution sensor or to optically blur the image before acquiring it with the sensor using an optical low pass filter Another example is shown to the right in the brick patterns The top image shows the effects when the sampling theorem s condition is not satisfied When software rescales an image the same process that creates the thumbnail shown in the lower image it in effect runs the image through a 
low pass filter first and then downsamples the image to result in a smaller image that does not exhibit the moire pattern The top image is what happens when the image is downsampled without low pass filtering aliasing results The sampling theorem applies to camera systems where the scene and lens constitute an analog spatial signal source and the image sensor is a spatial sampling device Each of these components is characterized by a modulation transfer function MTF representing the precise resolution spatial bandwidth available in that component Effects of aliasing or blurring can occur when the lens MTF and sensor MTF are mismatched When the optical image which is sampled by the sensor device contains higher spatial frequencies than the sensor the under sampling acts as a low pass filter to reduce or eliminate aliasing When the area of the sampling spot the size of the pixel sensor is not large enough to provide sufficient spatial anti aliasing a separate anti aliasing filter optical low pass filter may be included in a camera system to reduce the MTF of the optical image Instead of requiring an optical filter the graphics processing unit of smartphone cameras performs digital signal processing to remove aliasing with a digital filter Digital filters also apply sharpening to amplify the contrast from the lens at high spatial frequencies which otherwise falls off rapidly at diffraction limits The sampling theorem also applies to post processing digital images such as to up or down sampling Effects of aliasing blurring and sharpening may be adjusted with digital filtering implemented in software which necessarily follows the theoretical principles Critical frequency EditTo illustrate the necessity of f s gt 2 B displaystyle f s gt 2B consider the family of sinusoids generated by different values of 8 displaystyle theta in this formula x t cos 2 p B t 8 cos 8 cos 2 p B t sin 2 p B t tan 8 p 2 lt 8 lt p 2 displaystyle x t frac cos 2 pi Bt theta cos theta cos 2 pi Bt sin 2 pi Bt tan theta quad pi 2 lt theta lt pi 2 A family of sinusoids at the critical frequency all having the same sample sequences of alternating 1 and 1 That is they all are aliases of each other even though their frequency is not above half the sample rate With f s 2 B displaystyle f s 2B or equivalently T 1 2 B displaystyle T 1 2B the samples are given by x n T cos p n sin p n 0 tan 8 1 n displaystyle x nT cos pi n underbrace sin pi n 0 tan theta 1 n regardless of the value of 8 displaystyle theta That sort of ambiguity is the reason for the strict inequality of the sampling theorem s condition Sampling of non baseband signals EditAs discussed by Shannon 2 A similar result is true if the band does not start at zero frequency but at some higher value and can be proved by a linear translation corresponding physically to single sideband modulation of the zero frequency case In this case the elementary pulse is obtained from sin x x by single side band modulation That is a sufficient no loss condition for sampling signals that do not have baseband components exists that involves the width of the non zero frequency interval as opposed to its highest frequency component See Sampling signal processing for more details and examples For example in order to sample the FM radio signals in the frequency range of 100 102 MHz it is not necessary to sample at 204 MHz twice the upper frequency but rather it is sufficient to sample at 4 MHz twice the width of the frequency interval A bandpass condition is that X f 0 for all nonnegative f 
Sampling of non-baseband signals

As discussed by Shannon:[2]

"A similar result is true if the band does not start at zero frequency but at some higher value, and can be proved by a linear translation (corresponding physically to single-sideband modulation) of the zero-frequency case. In this case the elementary pulse is obtained from sin(x)/x by single-sideband modulation."

That is, a sufficient no-loss condition for sampling signals that do not have baseband components exists that involves the width of the non-zero frequency interval, as opposed to its highest frequency component. See Sampling (signal processing) for more details and examples.

For example, in order to sample FM radio signals in the frequency range of 100–102 MHz, it is not necessary to sample at 204 MHz (twice the upper frequency), but rather it is sufficient to sample at 4 MHz (twice the width of the frequency interval).

A bandpass condition is that X(f) = 0 for all nonnegative f outside the open band of frequencies

\left( \frac{N}{2} f_\mathrm{s},\ \frac{N+1}{2} f_\mathrm{s} \right)

for some nonnegative integer N. This formulation includes the normal baseband condition as the case N = 0.

The corresponding interpolation function is the impulse response of an ideal brick-wall bandpass filter (as opposed to the ideal brick-wall lowpass filter used above), with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses:

(N+1)\,\operatorname{sinc}\left(\frac{(N+1)t}{T}\right) - N\,\operatorname{sinc}\left(\frac{Nt}{T}\right)

Other generalizations, for example to signals occupying multiple non-contiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied, then information will most likely be lost.
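The FM-band example can also be checked numerically. In this sketch (the particular tone frequency is an arbitrary choice inside the band), a 101.3 MHz tone sampled at 4 MHz produces exactly the same samples as its 1.3 MHz alias, which is the folding that bandpass reconstruction exploits:

```python
import numpy as np

fs = 4e6                         # sample rate: twice the 2 MHz band width
f_rf = 101.3e6                   # a tone inside the 100-102 MHz band
f_alias = f_rf - 25 * fs         # 1.3 MHz: the same tone after folding

t = np.arange(16) / fs
rf_samples = np.cos(2 * np.pi * f_rf * t)
alias_samples = np.cos(2 * np.pi * f_alias * t)

# Identical to floating-point precision: the 4 MHz samples carry the full
# information of the 2 MHz-wide bandpass signal.
print(np.allclose(rf_samples, alias_samples))
```

The folding is only information-preserving because the whole 100–102 MHz band lies in a single Nyquist zone of the 4 MHz sampler (N = 50 in the notation above); a band straddling a zone boundary would alias onto itself.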
Nonuniform sampling

The sampling theory of Shannon can be generalized for the case of nonuniform sampling, that is, samples not taken equally spaced in time. The Shannon sampling theory for nonuniform sampling states that a band-limited signal can be perfectly reconstructed from its samples if the average sampling rate satisfies the Nyquist condition.[4] Therefore, although uniformly spaced samples may allow easier reconstruction algorithms, uniform spacing is not a necessary condition for perfect reconstruction.

The general theory for non-baseband and nonuniform samples was developed in 1967 by Henry Landau.[5] He proved that the average sampling rate (uniform or otherwise) must be twice the occupied bandwidth of the signal, assuming it is a priori known what portion of the spectrum is occupied.

In the late 1990s, this work was partially extended to cover signals whose amount of occupied bandwidth is known but whose actual occupied portion of the spectrum is unknown.[6] In the 2000s, a complete theory was developed (see the section Sampling below the Nyquist rate under additional restrictions below) using compressed sensing. In particular, the theory, using signal-processing language, is described in a 2009 paper by Mishali and Eldar.[7] They show, among other things, that if the frequency locations are unknown, then it is necessary to sample at least at twice the Nyquist criterion; in other words, you must pay at least a factor of 2 for not knowing the location of the spectrum. Note that minimum sampling requirements do not necessarily guarantee stability.

Sampling below the Nyquist rate under additional restrictions

Main article: Undersampling

The Nyquist–Shannon sampling theorem provides a sufficient condition for the sampling and reconstruction of a band-limited signal. When reconstruction is done via the Whittaker–Shannon interpolation formula, the Nyquist criterion is also a necessary condition to avoid aliasing, in the sense that if samples are taken at a slower rate than twice the band limit, then there are some signals that will not be correctly reconstructed. However, if further restrictions are imposed on the signal, then the Nyquist criterion may no longer be a necessary condition.

A non-trivial example of exploiting extra assumptions about the signal is given by the recent field of compressed sensing, which allows for full reconstruction with a sub-Nyquist sampling rate. Specifically, this applies to signals that are sparse (or compressible) in some domain. As an example, compressed sensing deals with signals that may have a low overall bandwidth (say, the effective bandwidth EB), but whose frequency locations are unknown, rather than all together in a single band, so that the passband technique does not apply; in other words, the frequency spectrum is sparse. Traditionally, the necessary sampling rate is thus 2B. Using compressed-sensing techniques, the signal could be perfectly reconstructed if it is sampled at a rate slightly lower than 2EB. With this approach, reconstruction is no longer given by a formula, but instead by the solution to a linear optimization program.
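As a toy illustration of that last point (the sizes, the choice of a DCT sparsity basis, and the use of scipy.optimize.linprog are all assumptions for this demonstration, not the measurement schemes of the cited papers): a length-64 signal with only three nonzero DCT coefficients can typically be recovered from 24 random time samples by solving the l1-minimization ("basis pursuit") linear program:

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, k, m = 64, 3, 24                 # signal length, sparsity, sample count

# Build a signal with only k nonzero DCT coefficients (frequency-sparse).
coeffs = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
coeffs[support] = rng.normal(size=k)
Phi = idct(np.eye(N), axis=0, norm="ortho")   # signal = Phi @ coeffs
signal = Phi @ coeffs

# Keep only m random time samples -- far fewer than N.
picks = np.sort(rng.choice(N, size=m, replace=False))
A, y = Phi[picks, :], signal[picks]

# Basis pursuit: minimize ||c||_1 subject to A c = y, posed as a linear
# program via the split c = u - v with u, v >= 0.
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
c_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(c_hat - coeffs)))   # near zero when recovery succeeds
```

Whether recovery succeeds depends on the sparsity level and on where the samples land relative to the sparse support; the cited literature gives the precise guarantees.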
Another example where sub-Nyquist sampling is optimal arises under the additional constraint that the samples are quantized in an optimal manner, as in a combined system of sampling and optimal lossy compression.[8] This setting is relevant in cases where the joint effect of sampling and quantization is to be considered, and it can provide a lower bound for the minimal reconstruction error attainable in sampling and quantizing a random signal. For stationary Gaussian random signals, this lower bound is usually attained at a sub-Nyquist sampling rate, indicating that sub-Nyquist sampling is optimal for this signal model under optimal quantization.[9]

Historical background

The sampling theorem was implied by the work of Harry Nyquist in 1928,[10] in which he showed that up to 2B independent pulse samples could be sent through a system of bandwidth B, but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. About the same time, Karl Küpfmüller showed a similar result[11] and discussed the sinc-function impulse response of a band-limiting filter via its integral, the step-response sine integral; this bandlimiting and reconstruction filter, so central to the sampling theorem, is sometimes referred to as a Küpfmüller filter (but seldom so in English).

The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon.[2] The mathematician E. T. Whittaker published similar results in 1915,[12] J. M. Whittaker in 1935,[13] and Gabor in 1946 ("Theory of communication").

In 1948 and 1949, Claude E. Shannon published the two revolutionary articles in which he founded information theory.[14][15][2] In Shannon 1948, the sampling theorem is formulated as "Theorem 13": Let f(t) contain no frequencies over W. Then

f(t) = \sum_{n=-\infty}^{\infty} X_n \, \frac{\sin \pi(2Wt - n)}{\pi(2Wt - n)},

where X_n = f\left(\frac{n}{2W}\right).

It was not until these articles were published that the theorem known as "Shannon's sampling theorem" became common property among communication engineers, although Shannon himself writes that this is a fact which is common knowledge in the communication art.[B] A few lines further on, however, he adds: "but in spite of its evident importance, [it] seems not to have appeared explicitly in the literature of communication theory".

Other discoverers

Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example by Jerri[16] and by Lüke.[17] For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the term Raabe condition came to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth). Meijering[18] mentions several other discoverers and names in a paragraph and pair of footnotes:

As pointed out by Higgins [135], the sampling theorem should really be considered in two parts, as done above: the first stating the fact that a bandlimited function is completely determined by its samples, the second describing how to reconstruct the function using its samples. Both parts of the sampling theorem were given in a somewhat different form by J. M. Whittaker [350, 351, 353] and before him also by Ogura [241, 242]. They were probably not aware of the fact that the first part of the theorem had been stated as early as 1897 by Borel [25].[27] As we have seen, Borel also used around that time what became known as the cardinal series. However, he appears not to have made the link [135]. In later years it became known that the sampling theorem had been presented before Shannon to the Russian communication community by Kotel'nikov [173]. In more implicit, verbal form, it had also been described in the German literature by Raabe [257]. Several authors [33, 205] have mentioned that Someya [296] introduced the theorem in the Japanese literature parallel to Shannon. In the English literature, Weston [347] introduced it independently of Shannon around the same time.[28]

The pair of footnotes reads:

[27] Several authors, following Black [16], have claimed that this first part of the sampling theorem was stated even earlier by Cauchy, in a paper [41] published in 1841. However, the paper of Cauchy does not contain such a statement, as has been pointed out by Higgins [135].

[28] As a consequence of the discovery of the several independent introductions of the sampling theorem, people started to refer to the theorem by including the names of the aforementioned authors, resulting in such catchphrases as "the Whittaker–Kotel'nikov–Shannon (WKS) sampling theorem" [155] or even "the Whittaker–Kotel'nikov–Raabe–Shannon–Someya sampling theorem" [33]. To avoid confusion, perhaps the best thing to do is to refer to it as the sampling theorem, "rather than trying to find a title that does justice to all claimants" [136].

Why Nyquist?

Exactly how, when, or why Harry Nyquist had his name attached to the sampling theorem remains obscure. The term Nyquist Sampling Theorem (capitalized thus) appeared as early as 1959 in a book from his former employer, Bell Labs,[19] and appeared again in 1963,[20] and not capitalized in 1965.[21] It had been called the Shannon Sampling Theorem as early as 1954,[22] but also just "the sampling theorem" by several other books in the early 1950s.

In 1958, Blackman and Tukey cited Nyquist's 1928 article as a reference for the sampling theorem of information theory,[23] even though that article does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes these entries:

Sampling theorem (of information theory): Nyquist's result that equi-spaced data, with two or more points per cycle of highest frequency, allows reconstruction of band-limited functions. (See Cardinal theorem.)

Cardinal theorem (of interpolation theory): A precise statement of the conditions under which values given at a doubly infinite set of equally spaced points can be interpolated to yield a continuous band-limited function with the aid of the function

\frac{\sin(x - x_i)}{x - x_i}

Exactly what "Nyquist's result" they are referring to remains mysterious.

When Shannon stated and proved the sampling theorem in his 1949 article, according to Meijering,[18] "he referred to the critical sampling interval T = 1/(2W) as the Nyquist interval corresponding to the band W, in recognition of Nyquist's discovery of the fundamental importance of this interval in connection with telegraphy".
This explains Nyquist's name on the critical interval, but not on the theorem.

Similarly, Nyquist's name was attached to Nyquist rate in 1953 by Harold S. Black:

"If the essential frequency range is limited to B cycles per second, 2B was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less than half a quantum step. This rate is generally referred to as signaling at the Nyquist rate, and 1/(2B) has been termed a Nyquist interval."[24] (bold added for emphasis; italics as in the original)

According to the OED, this may be the origin of the term Nyquist rate. In Black's usage, it is not a sampling rate, but a signaling rate.

See also

44,100 Hz, a customary rate used to sample audible frequencies, based on the limits of human hearing and the sampling theorem
Balian–Low theorem, a similar theoretical lower bound on sampling rates, which applies to time–frequency transforms
Cheung–Marks theorem, which specifies conditions where restoration of a signal by the sampling theorem can become ill-posed
Shannon–Hartley theorem
Nyquist ISI criterion
Reconstruction from zero crossings
Zero-order hold
Dirac comb

Notes

A. The sinc function follows from rows 202 and 102 of the transform tables.
B. Shannon 1949, p. 448.

References

1. Nemirovsky, Jonathan; Shimron, Efrat (2015). "Utilizing Bochner's Theorem for Constrained Evaluation of Missing Fourier Data". arXiv:1506.03300 [physics.med-ph].
2. Shannon, Claude E. (January 1949). "Communication in the presence of noise". Proceedings of the Institute of Radio Engineers. 37 (1): 10–21. doi:10.1109/jrproc.1949.232969. Reprinted as a classic paper in Proc. IEEE, Vol. 86, No. 2, February 1998.
3. Ahmed, N.; Rao, K. R. (1975). Orthogonal Transforms for Digital Signal Processing (1st ed.). Berlin, Heidelberg, New York: Springer-Verlag. doi:10.1007/978-3-642-45450-9. ISBN 9783540065562.
4. Marvasti, F., ed. (2000). Nonuniform Sampling: Theory and Practice. New York: Kluwer Academic/Plenum Publishers.
5. Landau, H. J. (1967). "Necessary density conditions for sampling and interpolation of certain entire functions". Acta Math. 117 (1): 37–52. doi:10.1007/BF02395039.
6. See, e.g., Feng, P. (1997). Universal minimum-rate sampling and spectrum-blind reconstruction for multiband signals. Ph.D. dissertation, University of Illinois at Urbana-Champaign.
7. Mishali, Moshe; Eldar, Yonina C. (March 2009). "Blind Multiband Signal Reconstruction: Compressed Sensing for Analog Signals". IEEE Trans. Signal Process. 57 (3): 993–1009. doi:10.1109/TSP.2009.2012791.
8. Kipnis, Alon; Goldsmith, Andrea J.; Eldar, Yonina C.; Weissman, Tsachy (January 2016). "Distortion-rate function of sub-Nyquist sampled Gaussian sources". IEEE Transactions on Information Theory. 62: 401–429. arXiv:1405.5329. doi:10.1109/tit.2015.2485271.
9. Kipnis, Alon; Eldar, Yonina; Goldsmith, Andrea (26 April 2018). "Analog-to-Digital Compression: A New Paradigm for Converting Signals to Bits". IEEE Signal Processing Magazine. 35 (3): 16–39. arXiv:1801.06718. doi:10.1109/MSP.2017.2774249.
10. Nyquist, Harry (April 1928). "Certain topics in telegraph transmission theory". Trans. AIEE. 47 (2): 617–644. doi:10.1109/t-aiee.1928.5055024. Reprinted as a classic paper in Proc. IEEE, Vol. 90, No. 2, February 2002.
11. Küpfmüller, Karl (1928). "Über die Dynamik der selbsttätigen Verstärkungsregler". Elektrische Nachrichtentechnik (in German). 5 (11): 459–467. (English translation, 2005.)
12. Whittaker, E. T. (1915). "On the Functions Which are Represented by the Expansions of the Interpolation Theory". Proc. R. Soc. Edinburgh. 35: 181–194. doi:10.1017/s0370164600017806. ("Theorie der Kardinalfunktionen".)
13. Whittaker, J. M. (1935). Interpolatory Function Theory. Cambridge, England: Cambridge Univ. Press.
14. Shannon, Claude E. (July 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (3): 379–423. doi:10.1002/j.1538-7305.1948.tb01338.x.
15. Shannon, Claude E. (October 1948). "A Mathematical Theory of Communication". Bell System Technical Journal. 27 (4): 623–666. doi:10.1002/j.1538-7305.1948.tb00917.x.
16. Jerri, Abdul (November 1977). "The Shannon Sampling Theorem—Its Various Extensions and Applications: A Tutorial Review". Proceedings of the IEEE. 65 (11): 1565–1596. doi:10.1109/proc.1977.10771. See also Jerri, Abdul (April 1979). "Correction to 'The Shannon sampling theorem—Its various extensions and applications: A tutorial review'". Proceedings of the IEEE. 67 (4): 695. doi:10.1109/proc.1979.11307.
17. Lüke, Hans Dieter (April 1999). "The Origins of the Sampling Theorem" (PDF). IEEE Communications Magazine. 37 (4): 106–108. doi:10.1109/35.755459.
18. Meijering, Erik (March 2002). "A Chronology of Interpolation: From Ancient Astronomy to Modern Signal and Image Processing" (PDF). Proc. IEEE. 90 (3): 319–342. doi:10.1109/5.993400.
19. Members of the Technical Staff of Bell Telephone Laboratories (1959). Transmission Systems for Communications. AT&T. p. 26-4 (Vol. 2).
20. Guillemin, Ernst Adolph (1963). Theory of Linear Physical Systems. Wiley. ISBN 9780471330707.
21. Roberts, Richard A.; Barton, Ben F. (1965). Theory of Signal Detectability: Composite Deferred Decision Theory.
22. Gray, Truman S. (1954). "Applied Electronics: A First Course in Electronics, Electron Tubes and Associated Circuits". Physics Today. 7 (11): 17. doi:10.1063/1.3061438.
23. Blackman, R. B.; Tukey, J. W. (1958). The Measurement of Power Spectra: From the Point of View of Communications Engineering (PDF). New York: Dover.
24. Black, Harold S. (1953). Modulation Theory.

Further reading

Higgins, J. R. (1985). "Five short stories about the cardinal series". Bulletin of the AMS. 12 (1): 45–89. doi:10.1090/S0273-0979-1985-15293-0.
Küpfmüller, Karl (1931). "Utjämningsförlopp inom telegraf- och telefontekniken" ("Transients in telegraph and telephone engineering"). Teknisk Tidskrift. 9: 153–160, and 10: 178–182.
Marks II, R. J. (1991). Introduction to Shannon Sampling and Interpolation Theory (PDF). Springer Texts in Electrical Engineering. Springer. doi:10.1007/978-1-4613-9708-3. ISBN 978-1-4613-9708-3.
Marks II, R. J., ed. (1993). Advanced Topics in Shannon Sampling and Interpolation Theory (PDF). Springer Texts in Electrical Engineering. Springer. doi:10.1007/978-1-4613-9757-1. ISBN 978-1-4613-9757-1.
Marks II, R. J. (2009). "Chapters 5–8". Handbook of Fourier Analysis and Its Applications. Oxford University Press. ISBN 978-0-19-804430-7.
Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "13.11. Numerical Use of the Sampling Theorem". Numerical Recipes: The Art of Scientific Computing (3rd ed.). Cambridge University Press. ISBN 978-0-521-88068-8.
Unser, Michael (April 2000). "Sampling—50 Years after Shannon" (PDF). Proc. IEEE. 88 (4): 569–587. doi:10.1109/5.843002.

External links

Wikimedia Commons has media related to the Nyquist–Shannon theorem.
Learning by Simulations: interactive simulation of the effects of inadequate sampling.
Interactive presentation of the sampling and reconstruction in a web demo (Institute of Telecommunications, University of Stuttgart).
Undersampling and an application of it.
Sampling Theory For Digital Audio.
Journal devoted to Sampling Theory.
Huang, J.; Padmanabhan, K.; Collins, O. M. (June 2011). "The Sampling Theorem With Constant Amplitude Variable Width Pulses". IEEE Transactions on Circuits and Systems I: Regular Papers. 58 (6): 1178–1190. doi:10.1109/TCSI.2010.2094350.
Lüke, Hans Dieter (April 1999). "The Origins of the Sampling Theorem" (PDF). IEEE Communications Magazine. 37 (4): 106–108. doi:10.1109/35.755459.