
Savitzky–Golay filter

A Savitzky–Golay filter is a digital filter that can be applied to a set of digital data points for the purpose of smoothing the data, that is, to increase the precision of the data without distorting the signal tendency. This is achieved, in a process known as convolution, by fitting successive sub-sets of adjacent data points with a low-degree polynomial by the method of linear least squares. When the data points are equally spaced, an analytical solution to the least-squares equations can be found, in the form of a single set of "convolution coefficients" that can be applied to all data sub-sets, to give estimates of the smoothed signal, (or derivatives of the smoothed signal) at the central point of each sub-set. The method, based on established mathematical procedures,[1][2] was popularized by Abraham Savitzky and Marcel J. E. Golay, who published tables of convolution coefficients for various polynomials and sub-set sizes in 1964.[3][4] Some errors in the tables have been corrected.[5] The method has been extended for the treatment of 2- and 3-dimensional data.

Animation showing smoothing being applied, passing through the data from left to right. The red line represents the local polynomial being used to fit a sub-set of the data. The smoothed values are shown as circles.

Savitzky and Golay's paper is one of the most widely cited papers in the journal Analytical Chemistry[6] and is classed by that journal as one of its "10 seminal papers" saying "it can be argued that the dawn of the computer-controlled analytical instrument can be traced to this article".[7]

Applications

The data consists of a set of points {xj, yj}, j = 1, ..., n, where xj is an independent variable and yj is an observed value. They are treated with a set of m convolution coefficients, Ci, according to the expression

$$Y_j = \sum_{i=-(m-1)/2}^{(m-1)/2} C_i\, y_{j+i}, \qquad \frac{m+1}{2} \le j \le n - \frac{m-1}{2}.$$

Selected convolution coefficients are shown in the tables, below. For example, for smoothing by a 5-point quadratic polynomial, m = 5, i = −2, −1, 0, 1, 2 and the jth smoothed data point, Yj, is given by

$$Y_j = \frac{1}{35}\left(-3\,y_{j-2} + 12\,y_{j-1} + 17\,y_j + 12\,y_{j+1} - 3\,y_{j+2}\right),$$

where C−2 = −3/35, C−1 = 12/35, etc. There are numerous applications of smoothing, which is performed primarily to make the data appear to be less noisy than it really is. The following are applications of numerical differentiation of data.[8] Note: when calculating the nth derivative, an additional scaling factor of $n!/h^n$ may be applied to all calculated data points to obtain absolute values (see the expressions for the derivatives, below, for details).
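The smoothing expression above can be sketched as a short convolution in Python. This is a hedged example: the data array is purely illustrative, and the coefficients are the 5-point quadratic/cubic set quoted in the text.

```python
import numpy as np

# 5-point quadratic/cubic smoothing coefficients (-3, 12, 17, 12, -3)/35
c = np.array([-3, 12, 17, 12, -3]) / 35.0

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])  # illustrative data

# Slide the symmetric window across the data; the (m-1)/2 points at
# each end are left uncomputed, as described in the text.
Y = np.convolve(y, c, mode="valid")
```

Because a quadratic fit reproduces any straight line exactly, applying these coefficients to linear data returns the central values unchanged.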

(1) Synthetic Lorentzian + noise (blue) and 1st derivative (green)
(2) Titration curve (blue) for malonic acid and 2nd derivative (green). The part in the light blue box is magnified 10 times
(3) Lorentzian on exponential baseline (blue) and 2nd derivative (green)
(4) Sum of two Lorentzians (blue) and 2nd derivative (green)
(5) 4th derivative of the sum of two Lorentzians
  1. Location of maxima and minima in experimental data curves. This was the application that first motivated Savitzky.[4] The first derivative of a function is zero at a maximum or minimum. The diagram shows data points belonging to a synthetic Lorentzian curve, with added noise (blue diamonds). Data are plotted on a scale of half width, relative to the peak maximum at zero. The smoothed curve (red line) and 1st derivative (green) were calculated with 7-point cubic Savitzky–Golay filters. Linear interpolation of the first derivative values at positions either side of the zero-crossing gives the position of the peak maximum. 3rd derivatives can also be used for this purpose.
  2. Location of an end-point in a titration curve. An end-point is an inflection point where the second derivative of the function is zero.[9] The titration curve for malonic acid illustrates the power of the method. The first end-point at 4 ml is barely visible, but the second derivative allows its value to be easily determined by linear interpolation to find the zero crossing.
  3. Baseline flattening. In analytical chemistry it is sometimes necessary to measure the height of an absorption band against a curved baseline.[10] Because the curvature of the baseline is much less than the curvature of the absorption band, the second derivative effectively flattens the baseline. Three measures of the derivative height, which is proportional to the absorption band height, are the "peak-to-valley" distances h1 and h2, and the height from baseline, h3.[11]
  4. Resolution enhancement in spectroscopy. Bands in the second derivative of a spectroscopic curve are narrower than the bands in the spectrum: they have reduced half-width. This allows partially overlapping bands to be "resolved" into separate (negative) peaks.[12] The diagram illustrates how this may be used also for chemical analysis, using measurement of "peak-to-valley" distances. In this case the valleys are a property of the 2nd derivative of a Lorentzian. (x-axis position is relative to the position of the peak maximum on a scale of half width at half height).
  5. Resolution enhancement with 4th derivative (positive peaks). The minima are a property of the 4th derivative of a Lorentzian.

Moving average

A moving average filter is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles. It is often used in technical analysis of financial data, like stock prices, returns or trading volumes. It is also used in economics to examine gross domestic product, employment or other macroeconomic time series.

An unweighted moving average filter is the simplest convolution filter. Each subset of the data set is fitted by a straight horizontal line. It was not included in the Savitzky–Golay tables of convolution coefficients because all the coefficient values are identical, with the value 1/m.
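The difference between the two filters can be seen on data with curvature. In this sketch (data chosen for illustration), a moving average biases a parabola, while the 5-point quadratic Savitzky–Golay set reproduces it exactly:

```python
import numpy as np

z = np.arange(-3.0, 4.0)
y = z**2                                     # a curve with curvature

ma = np.full(5, 1.0 / 5.0)                   # unweighted moving average, all 1/m
sg = np.array([-3, 12, 17, 12, -3]) / 35.0   # quadratic/cubic smoothing

Y_ma = np.convolve(y, ma, mode="valid")      # biased: curvature flattened
Y_sg = np.convolve(y, sg, mode="valid")      # exact on a quadratic
```

The moving average overestimates each interior value of the parabola by a constant, whereas the quadratic fit returns the true values.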

Derivation of convolution coefficients

When the data points are equally spaced, an analytical solution to the least-squares equations can be found.[2] This solution forms the basis of the convolution method of numerical smoothing and differentiation. Suppose that the data consists of a set of n points (xj, yj) (j = 1, ..., n), where xj is an independent variable and yj is a datum value. A polynomial will be fitted by linear least squares to a set of m (an odd number) adjacent data points, each separated by an interval h. Firstly, a change of variable is made

$$z = \frac{x - \bar{x}}{h}$$

where $\bar{x}$ is the value of the central point. z takes the values $\tfrac{1-m}{2}, \dots, 0, \dots, \tfrac{m-1}{2}$ (e.g. m = 5 → z = −2, −1, 0, 1, 2).[note 1] The polynomial, of degree k, is defined as

$$Y = a_0 + a_1 z + a_2 z^2 + \cdots + a_k z^k$$ [note 2]

The coefficients a0, a1 etc. are obtained by solving the normal equations (bold a represents a vector, bold J represents a matrix).

$$\mathbf{a} = \left(\mathbf{J}^{\mathrm{T}} \mathbf{J}\right)^{-1} \mathbf{J}^{\mathrm{T}} \mathbf{y}$$

where $\mathbf{J}$ is a Vandermonde matrix, that is, the $i$-th row of $\mathbf{J}$ has the values $1, z_i, z_i^2, \dots, z_i^k$.

For example, for a cubic polynomial fitted to 5 points, z= −2, −1, 0, 1, 2 the normal equations are solved as follows.

$$\mathbf{J}^{\mathrm{T}}\mathbf{J} = \begin{pmatrix} m & \sum z_i & \sum z_i^2 & \sum z_i^3 \\ \sum z_i & \sum z_i^2 & \sum z_i^3 & \sum z_i^4 \\ \sum z_i^2 & \sum z_i^3 & \sum z_i^4 & \sum z_i^5 \\ \sum z_i^3 & \sum z_i^4 & \sum z_i^5 & \sum z_i^6 \end{pmatrix} = \begin{pmatrix} 5 & 0 & 10 & 0 \\ 0 & 10 & 0 & 34 \\ 10 & 0 & 34 & 0 \\ 0 & 34 & 0 & 130 \end{pmatrix}$$

Now, the normal equations can be factored into two separate sets of equations, by rearranging rows and columns, with

$$\begin{pmatrix} 5 & 10 \\ 10 & 34 \end{pmatrix} \text{ for } a_0, a_2 \qquad \text{and} \qquad \begin{pmatrix} 10 & 34 \\ 34 & 130 \end{pmatrix} \text{ for } a_1, a_3$$

Expressions for the inverse of each of these matrices can be obtained using Cramer's rule

$$\begin{pmatrix} 5 & 10 \\ 10 & 34 \end{pmatrix}^{-1} = \frac{1}{70}\begin{pmatrix} 34 & -10 \\ -10 & 5 \end{pmatrix}, \qquad \begin{pmatrix} 10 & 34 \\ 34 & 130 \end{pmatrix}^{-1} = \frac{1}{144}\begin{pmatrix} 130 & -34 \\ -34 & 10 \end{pmatrix}$$

The normal equations become

$$\begin{pmatrix} a_0 \\ a_2 \end{pmatrix} = \frac{1}{70}\begin{pmatrix} 34 & -10 \\ -10 & 5 \end{pmatrix}\begin{pmatrix} \sum y_i \\ \sum z_i^2 y_i \end{pmatrix}$$

and

$$\begin{pmatrix} a_1 \\ a_3 \end{pmatrix} = \frac{1}{144}\begin{pmatrix} 130 & -34 \\ -34 & 10 \end{pmatrix}\begin{pmatrix} \sum z_i y_i \\ \sum z_i^3 y_i \end{pmatrix}$$

Multiplying out and removing common factors,

$$a_0 = \frac{1}{35}\left(-3 y_{-2} + 12 y_{-1} + 17 y_0 + 12 y_1 - 3 y_2\right)$$
$$a_1 = \frac{1}{12}\left(y_{-2} - 8 y_{-1} + 8 y_1 - y_2\right)$$
$$a_2 = \frac{1}{14}\left(2 y_{-2} - y_{-1} - 2 y_0 - y_1 + 2 y_2\right)$$
$$a_3 = \frac{1}{12}\left(-y_{-2} + 2 y_{-1} - 2 y_1 + y_2\right)$$

The coefficients of y in these expressions are known as convolution coefficients. They are elements of the matrix

$$\mathbf{C} = \begin{pmatrix} -3/35 & 12/35 & 17/35 & 12/35 & -3/35 \\ 1/12 & -8/12 & 0 & 8/12 & -1/12 \\ 2/14 & -1/14 & -2/14 & -1/14 & 2/14 \\ -1/12 & 2/12 & 0 & -2/12 & 1/12 \end{pmatrix}$$

In general,

$$\mathbf{C} = \left(\mathbf{J}^{\mathrm{T}}\mathbf{J}\right)^{-1}\mathbf{J}^{\mathrm{T}}$$

In matrix notation this example is written as

$$\mathbf{a} = \mathbf{C}\,\mathbf{y}$$

Tables of convolution coefficients, calculated in the same way for m up to 25, were published for the Savitzky–Golay smoothing filter in 1964.[3][5] The value of the central point, z = 0, is obtained from a single set of coefficients, a0 for smoothing, a1 for the 1st derivative, etc. The numerical derivatives are obtained by differentiating Y. This means that the derivatives are calculated for the smoothed data curve. For a cubic polynomial

$$Y = a_0 + a_1 z + a_2 z^2 + a_3 z^3, \qquad \frac{dY}{dx} = \frac{1}{h}\left(a_1 + 2 a_2 z + 3 a_3 z^2\right), \qquad \frac{d^2Y}{dx^2} = \frac{1}{h^2}\left(2 a_2 + 6 a_3 z\right)$$

In general, polynomials of degree (0 and 1),[note 3] (2 and 3), (4 and 5) etc. give the same coefficients for smoothing and even derivatives. Polynomials of degree (1 and 2), (3 and 4) etc. give the same coefficients for odd derivatives.
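The derivation above can be reproduced numerically in a few lines. This sketch builds the Vandermonde matrix for a cubic fitted to 5 points and recovers the convolution coefficients as $(\mathbf{J}^{\mathrm{T}}\mathbf{J})^{-1}\mathbf{J}^{\mathrm{T}}$:

```python
import numpy as np

# Cubic polynomial fitted to 5 equally spaced points, z = -2..2
z = np.arange(-2, 3)
J = np.vander(z, 4, increasing=True)   # rows: 1, z, z^2, z^3

# Each row of C gives the convolution coefficients for one
# polynomial coefficient: row 0 -> a0 (smoothing), row 1 -> a1, etc.
C = np.linalg.inv(J.T @ J) @ J.T
```

Row 0 times 35 gives the familiar smoothing set (−3, 12, 17, 12, −3), and row 1 times 12 gives the first-derivative set (1, −8, 0, 8, −1).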

Algebraic expressions

It is not always necessary to use the Savitzky–Golay tables. The summations in the matrix JTJ can be evaluated in closed form,

$$\sum_{z} z^2 = \frac{m\left(m^2-1\right)}{12}, \qquad \sum_{z} z^4 = \frac{m\left(m^2-1\right)\left(3m^2-7\right)}{240}$$

so that algebraic formulae can be derived for the convolution coefficients.[13][note 4] Functions that are suitable for use with a curve that has an inflection point are:

Smoothing, polynomial degree 2,3: $C_i = \dfrac{\left(3m^2 - 7 - 20i^2\right)/4}{m\left(m^2-4\right)/3}$, for $-\dfrac{m-1}{2} \le i \le \dfrac{m-1}{2}$ (the range of values for i also applies to the expressions below)
1st derivative, polynomial degree 3,4: $C_i = \dfrac{5\left(3m^4 - 18m^2 + 31\right)i - 28\left(3m^2 - 7\right)i^3}{m\left(m^2-1\right)\left(3m^4 - 39m^2 + 108\right)/15}$
2nd derivative, polynomial degree 2,3: $C_i = \dfrac{12i^2 - m^2 + 1}{m\left(m^2-1\right)\left(m^2-4\right)/30}$
3rd derivative, polynomial degree 3,4: $C_i = \dfrac{20i^3 - \left(3m^2-7\right)i}{m\left(m^2-1\right)\left(m^2-4\right)\left(m^2-9\right)/840}$

Simpler expressions that can be used with curves that don't have an inflection point are:

Smoothing, polynomial degree 0,1 (moving average): $C_i = \dfrac{1}{m}$
1st derivative, polynomial degree 1,2: $C_i = \dfrac{i}{m\left(m^2-1\right)/12}$

Higher derivatives can be obtained. For example, a fourth derivative can be obtained by performing two passes of a second derivative function.[14]
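The closed-form route can be checked directly. This sketch implements the standard degree-2/3 smoothing expression and compares it with the tabulated 5-point set:

```python
# Closed-form smoothing coefficients for polynomial degree 2 or 3;
# no tables are needed once the summations are evaluated in closed form.
def smooth_coeff(i, m):
    return ((3 * m**2 - 7 - 20 * i**2) / 4.0) / (m * (m**2 - 4) / 3.0)

m = 5
coeffs = [smooth_coeff(i, m) for i in range(-(m - 1) // 2, (m + 1) // 2)]
```

For m = 5 the formula reproduces (−3, 12, 17, 12, −3)/35, and the coefficients sum to one, as a smoothing set must.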

Use of orthogonal polynomials

An alternative to fitting m data points by a simple polynomial in the subsidiary variable, z, is to use orthogonal polynomials.

$$Y = b_0 P_0(z) + b_1 P_1(z) + \cdots + b_k P_k(z)$$

where P0, ..., Pk is a set of mutually orthogonal polynomials of degree 0, ..., k. Full details on how to obtain expressions for the orthogonal polynomials and the relationship between the coefficients b and a are given by Guest.[2] Expressions for the convolution coefficients are easily obtained because the normal equations matrix, JTJ, is a diagonal matrix, as the product of any two orthogonal polynomials is zero by virtue of their mutual orthogonality. Therefore, each non-zero element of its inverse is simply the reciprocal of the corresponding element in the normal equations matrix. The calculation is further simplified by using recursion to build orthogonal Gram polynomials. The whole calculation can be coded in a few lines of PASCAL, a computer language well-adapted for calculations involving recursion.[15]

Treatment of first and last points

Savitzky–Golay filters are most commonly used to obtain the smoothed or derivative value at the central point, z = 0, using a single set of convolution coefficients. (m − 1)/2 points at the start and end of the series cannot be calculated using this process. Various strategies can be employed to avoid this inconvenience.

  • The data could be artificially extended by adding, in reverse order, copies of the first (m − 1)/2 points at the beginning and copies of the last (m − 1)/2 points at the end. For instance, with m = 5, two points are added at the start and end of the data y1, ..., yn.
y3,y2,y1, ... ,yn, yn−1, yn−2.
  • Looking again at the fitting polynomial, it is obvious that data can be calculated for all values of z by using all sets of convolution coefficients for a single polynomial, a0 .. ak.
For a cubic polynomial
$$Y = a_0 + a_1 z + a_2 z^2 + a_3 z^3$$
  • Convolution coefficients for the missing first and last points can also be easily obtained.[15] This is also equivalent to fitting the first (m + 1)/2 points with the same polynomial, and similarly for the last points.
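The first strategy above, reflecting points onto each end of the series, can be sketched as follows (the helper name `mirror_extend` is ours, chosen for illustration):

```python
import numpy as np

# Extend the series by reflected copies so that the end points can also
# be treated: for m = 5, two points are added at each end, giving
# y3, y2, y1, ..., yn, yn-1, yn-2.
def mirror_extend(y, m):
    half = (m - 1) // 2
    return np.concatenate([y[half:0:-1], y, y[-2:-half - 2:-1]])

y = np.arange(1.0, 8.0)        # y1..y7
ext = mirror_extend(y, 5)
```

After extension, the full-length convolution with an m-point filter yields smoothed values for every original point.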

Weighting the data

It is implicit in the above treatment that the data points are all given equal weight. Technically, the objective function

$$U = \sum_{i} w_i \left(Y_i - y_i\right)^2$$

being minimized in the least-squares process has unit weights, wi = 1. When weights are not all the same the normal equations become

$$\mathbf{a} = \left(\mathbf{J}^{\mathrm{T}}\mathbf{W}\mathbf{J}\right)^{-1}\mathbf{J}^{\mathrm{T}}\mathbf{W}\mathbf{y},$$

If the same set of diagonal weights, $\mathbf{W} = \operatorname{diag}(w_1, \dots, w_m)$, is used for all data subsets, an analytical solution to the normal equations can be written down. For example, with a quadratic polynomial,

$$\mathbf{J}^{\mathrm{T}}\mathbf{W}\mathbf{J} = \begin{pmatrix} \sum w_i & \sum w_i z_i & \sum w_i z_i^2 \\ \sum w_i z_i & \sum w_i z_i^2 & \sum w_i z_i^3 \\ \sum w_i z_i^2 & \sum w_i z_i^3 & \sum w_i z_i^4 \end{pmatrix}$$

An explicit expression for the inverse of this matrix can be obtained using Cramer's rule. A set of convolution coefficients may then be derived as

$$\mathbf{C} = \left(\mathbf{J}^{\mathrm{T}}\mathbf{W}\mathbf{J}\right)^{-1}\mathbf{J}^{\mathrm{T}}\mathbf{W}$$

Alternatively the coefficients, C, could be calculated in a spreadsheet, employing a built-in matrix inversion routine to obtain the inverse of the normal equations matrix. This set of coefficients, once calculated and stored, can be used with all calculations in which the same weighting scheme applies. A different set of coefficients is needed for each different weighting scheme.
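The matrix-inversion route is a one-liner in NumPy. In this sketch the weight values are an arbitrary illustrative choice, not a recommended scheme:

```python
import numpy as np

# Weighted convolution coefficients for a quadratic fitted to 5 points.
z = np.arange(-2, 3)
J = np.vander(z, 3, increasing=True)        # columns: 1, z, z^2
W = np.diag([1.0, 2.0, 4.0, 2.0, 1.0])      # hypothetical diagonal weights

# One row of C per polynomial coefficient; row 0 smooths.
C = np.linalg.inv(J.T @ W @ J) @ J.T @ W
```

With unit weights this reduces to the tabulated quadratic coefficients, and in either case the smoothing row sums to one so that constant data pass through unchanged.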

It was shown that Savitzky–Golay filter can be improved by introducing weights that decrease at the ends of the fitting interval.[16]

Two-dimensional convolution coefficients

Two-dimensional smoothing and differentiation can also be applied to tables of data values, such as intensity values in a photographic image which is composed of a rectangular grid of pixels.[17][18] Such a grid is referred to as a kernel, and the data points that constitute the kernel are referred to as nodes. The trick is to transform the rectangular kernel into a single row by a simple ordering of the indices of the nodes. Whereas the one-dimensional filter coefficients are found by fitting a polynomial in the subsidiary variable z to a set of m data points, the two-dimensional coefficients are found by fitting a polynomial in subsidiary variables v and w to a set of the values at the m × n kernel nodes. The following example, for a bivariate polynomial of total degree 3, m = 7, and n = 5, illustrates the process, which parallels the process for the one-dimensional case, above.[19]

$$Y = a_{00} + a_{10} v + a_{01} w + a_{20} v^2 + a_{11} v w + a_{02} w^2 + a_{30} v^3 + a_{21} v^2 w + a_{12} v w^2 + a_{03} w^3$$

The rectangular kernel of 35 data values, d1 − d35

            v = −3   −2   −1    0    1    2    3
w = −2:         d1   d2   d3   d4   d5   d6   d7
w = −1:         d8   d9  d10  d11  d12  d13  d14
w = 0:         d15  d16  d17  d18  d19  d20  d21
w = 1:         d22  d23  d24  d25  d26  d27  d28
w = 2:         d29  d30  d31  d32  d33  d34  d35

becomes a vector when the rows are placed one after another.

d = (d1 ... d35)T

The Jacobian has 10 columns, one for each of the parameters a00 − a03, and 35 rows, one for each pair of v and w values. Each row has the form

$$\left(1, \; v, \; w, \; v^2, \; vw, \; w^2, \; v^3, \; v^2 w, \; v w^2, \; w^3\right)$$

The convolution coefficients are calculated as

$$\mathbf{C} = \left(\mathbf{J}^{\mathrm{T}}\mathbf{J}\right)^{-1}\mathbf{J}^{\mathrm{T}}$$

The first row of C contains 35 convolution coefficients, which can be multiplied with the 35 data values, respectively, to obtain the polynomial coefficient  , which is the smoothed value at the central node of the kernel (i.e. at the 18th node of the above table). Similarly, other rows of C can be multiplied with the 35 values to obtain other polynomial coefficients, which, in turn, can be used to obtain smoothed values and different smoothed partial derivatives at different nodes.

Nikitas and Pappa-Louisi showed that depending on the format of the used polynomial, the quality of smoothing may vary significantly.[20] They recommend using the polynomial of the form

$$Y = \sum_{i=0}^{p} \sum_{j=0}^{q} a_{ij}\, v^i w^j$$

because such polynomials can achieve good smoothing both in the central and in the near-boundary regions of a kernel, and therefore they can be confidently used in smoothing both at the internal and at the near-boundary data points of a sampled domain. In order to avoid ill-conditioning when solving the least-squares problem, p < m and q < n. For software that calculates the two-dimensional coefficients and for a database of such C's, see the section on multi-dimensional convolution coefficients, below.
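The two-dimensional recipe can be sketched for the 7 × 5 kernel and total-degree-3 polynomial of the example above. The kernel is flattened row by row, a 35 × 10 Jacobian is built, and the coefficient matrix follows by inversion:

```python
import numpy as np
from itertools import product

# Kernel nodes: v = -3..3 (7 columns), w = -2..2 (5 rows)
v = np.arange(-3, 4)
w = np.arange(-2, 3)

# The 10 monomials of a bivariate polynomial of total degree 3
exps = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]

# One Jacobian row per node, flattened in row-major order (d1..d35)
J = np.array([[vi**i * wi**j for (i, j) in exps]
              for wi, vi in product(w, v)], dtype=float)

# Row 0 of C gives the smoothed value at the central node
C = np.linalg.inv(J.T @ J) @ J.T
```

As in one dimension, the smoothing row sums to one, so a constant data field is reproduced exactly.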

Multi-dimensional convolution coefficients

The idea of two-dimensional convolution coefficients can be extended to the higher spatial dimensions as well, in a straightforward manner,[17][21] by arranging multidimensional distribution of the kernel nodes in a single row. Following the aforementioned finding by Nikitas and Pappa-Louisi[20] in two-dimensional cases, usage of the following form of the polynomial is recommended in multidimensional cases:

 

where D is the dimension of the space, the $a$'s are the polynomial coefficients, and the u's are the coordinates in the different spatial directions. Algebraic expressions for partial derivatives of any order, be they mixed or otherwise, can be easily derived from the above expression.[21] Note that C depends on the manner in which the kernel nodes are arranged in a row and on the manner in which the different terms of the expanded form of the above polynomial are arranged, when preparing the Jacobian.

Accurate computation of C in multidimensional cases becomes challenging, as the precision of the standard floating-point numbers available in computer programming languages is no longer sufficient. The insufficient precision causes the floating-point truncation errors to become comparable to the magnitudes of some elements of C, which, in turn, severely degrades its accuracy and renders it useless. Chandra Shekhar has released two open-source programs, Advanced Convolution Coefficient Calculator (ACCC) and Precise Convolution Coefficient Calculator (PCCC), which handle these accuracy issues adequately. ACCC performs the computation iteratively, using floating-point numbers.[22] The precision of the floating-point numbers is gradually increased in each iteration, by using GNU MPFR. Once the C's obtained in two consecutive iterations agree to a pre-specified number of significant digits, convergence is assumed to have been reached. If that number is sufficiently large, the computation yields a highly accurate C. PCCC employs rational-number calculations, using the GNU Multiple Precision Arithmetic Library, and yields a fully accurate C in rational-number format.[23] In the end, these rational numbers are converted into floating-point numbers, to a pre-specified number of significant digits.

A database of C's that are calculated by using ACCC, for symmetric kernels and both symmetric and asymmetric polynomials, on unity-spaced kernel nodes, in the 1, 2, 3, and 4 dimensional spaces, is made available.[24] Chandra Shekhar has also laid out a mathematical framework that describes usage of C calculated on unity-spaced kernel nodes to perform filtering and partial differentiations (of various orders) on non-uniformly spaced kernel nodes,[21] allowing usage of C provided in the aforementioned database. Although this method yields approximate results only, they are acceptable in most engineering applications, provided that non-uniformity of the kernel nodes is weak.

Some properties of convolution

  1. The sum of convolution coefficients for smoothing is equal to one. The sum of coefficients for odd derivatives is zero.[25]
  2. The sum of squared convolution coefficients for smoothing is equal to the value of the central coefficient.[26]
  3. Smoothing of a function leaves the area under the function unchanged.[25]
  4. Convolution of a symmetric function with even-derivative coefficients conserves the centre of symmetry.[25]
  5. Properties of derivative filters.[27]

Signal distortion and noise reduction

It is inevitable that the signal will be distorted in the convolution process. From property 3 above, when data which has a peak is smoothed the peak height will be reduced and the half-width will be increased. Both the extent of the distortion and the improvement in S/N (signal-to-noise ratio):

  • decrease as the degree of the polynomial increases
  • increase as the width, m of the convolution function increases
 
Effect of smoothing on data points with uncorrelated noise of unit standard deviation

For example, if the noise in all data points is uncorrelated and has a constant standard deviation, σ, the standard deviation of the noise will be decreased by convolution with an m-point smoothing function to[26][note 5]

polynomial degree 0 or 1 (moving average): $\sigma \sqrt{\dfrac{1}{m}}$
polynomial degree 2 or 3: $\sigma \sqrt{\dfrac{3\left(3m^2-7\right)}{4m\left(m^2-4\right)}}$.

These functions are shown in the plot at the right. For example, with a 9-point linear function (moving average) two thirds of the noise is removed, and with a 9-point quadratic/cubic smoothing function only about half the noise is removed. Most of the remaining noise is low-frequency noise (see Frequency characteristics of convolution filters, below).

Although the moving average function gives the best noise reduction it is unsuitable for smoothing data which has curvature over m points. A quadratic filter function is unsuitable for getting a derivative of a data curve with an inflection point because a quadratic polynomial does not have one. The optimal choice of polynomial order and number of convolution coefficients will be a compromise between noise reduction and distortion.[28]
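The noise-reduction figures quoted above follow from the sum of the squared coefficients, which for a symmetric smoothing filter equals the central coefficient. A quick check for the two 9-point filters:

```python
import numpy as np

# For uncorrelated unit-variance noise, the output standard deviation
# scales by sqrt(sum of squared coefficients).
def noise_factor(c):
    return float(np.sqrt(np.sum(np.asarray(c) ** 2)))

ma9 = np.full(9, 1.0 / 9.0)                                     # 9-point moving average
sg9 = np.array([-21, 14, 39, 54, 59, 54, 39, 14, -21]) / 231.0  # 9-point quadratic/cubic

f_ma = noise_factor(ma9)   # 1/3: about two thirds of the noise removed
f_sg = noise_factor(sg9)   # ~0.5: about half the noise removed
```

Note that the sum of the squared quadratic/cubic coefficients indeed equals the central coefficient, 59/231, in accordance with property 2.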

Multipass filters

One way to mitigate distortion and improve noise removal is to use a filter of smaller width and perform more than one convolution with it. For two passes of the same filter this is equivalent to one pass of a filter obtained by convolution of the original filter with itself.[29] For example, 2 passes of the filter with coefficients (1/3, 1/3, 1/3) is equivalent to 1 pass of the filter with coefficients (1/9, 2/9, 3/9, 2/9, 1/9).

The disadvantage of multipassing is that the equivalent filter width for $n$ passes of an $m$-point function is $n(m-1)+1$, so multipassing is subject to greater end-effects. Nevertheless, multipassing has been used to great advantage. For instance, some 40–80 passes on data with a signal-to-noise ratio of only 5 gave useful results.[30] The noise-reduction formulae given above do not apply because correlation between calculated data points increases with each pass.
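The equivalence stated above is easy to verify numerically (the data array is illustrative):

```python
import numpy as np

# Two passes of a filter equal one pass of the filter convolved with
# itself: (1/3, 1/3, 1/3) applied twice is (1/9, 2/9, 3/9, 2/9, 1/9) once.
f = np.array([1.0, 1.0, 1.0]) / 3.0
f2 = np.convolve(f, f)                  # the equivalent 5-point filter

y = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0, 5.0])
two = np.convolve(np.convolve(y, f, mode="valid"), f, mode="valid")
one = np.convolve(y, f2, mode="valid")
```

Both routes produce identical interior values; the equivalent width grows from 3 to 5, as the formula n(m − 1) + 1 predicts for n = 2 passes.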

Frequency characteristics of convolution filters

 
Fourier transform of the 9-point quadratic/cubic smoothing function

Convolution maps to multiplication in the Fourier co-domain. The discrete Fourier transform of a convolution filter is a real-valued function which can be represented as

$$FT(\theta) = C_0 + 2 \sum_{i=1}^{(m-1)/2} C_i \cos(i\theta)$$

θ runs from 0 to 180 degrees, after which the function merely repeats itself. The plot for a 9-point quadratic/cubic smoothing function is typical. At very low angle, the plot is almost flat, meaning that low-frequency components of the data will be virtually unchanged by the smoothing operation. As the angle increases the value decreases so that higher frequency components are more and more attenuated. This shows that the convolution filter can be described as a low-pass filter: the noise that is removed is primarily high-frequency noise and low-frequency noise passes through the filter.[31] Some high-frequency noise components are attenuated more than others, as shown by undulations in the Fourier transform at large angles. This can give rise to small oscillations in the smoothed data[32] and phase reversal, i.e., high-frequency oscillations in the data get inverted by Savitzky–Golay filtering.[33]
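The frequency response of a symmetric filter, $C_0 + 2\sum_i C_i \cos(i\theta)$, can be evaluated directly. This sketch uses the 9-point quadratic/cubic smoothing set discussed in the text:

```python
import numpy as np

# Coefficients C0, C1, ..., C4 of the 9-point quadratic/cubic filter
c = np.array([59, 54, 39, 14, -21]) / 231.0
theta = np.linspace(0.0, np.pi, 181)        # 0 to 180 degrees

i = np.arange(1, 5)
ft = c[0] + 2.0 * np.sum(c[1:, None] * np.cos(i[:, None] * theta), axis=0)
# ft is 1 at theta = 0 (low frequencies pass unchanged) and small in
# magnitude at theta = 180 degrees (high frequencies attenuated).
```

The value 1 at θ = 0 is just the statement that the coefficients sum to one; the small negative value at θ = 180° corresponds to the phase reversal mentioned in the text.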

Convolution and correlation

Convolution affects the correlation between errors in the data. The effect of convolution can be expressed as a linear transformation.

$$\mathbf{Y} = \mathbf{C}\,\mathbf{y}$$

By the law of error propagation, the variance-covariance matrix of the data, A will be transformed into B according to

$$\mathbf{B} = \mathbf{C}\,\mathbf{A}\,\mathbf{C}^{\mathrm{T}}$$

To see how this applies in practice, consider the effect of a 3-point moving average on the first three calculated points, Y2 − Y4, assuming that the data points have equal variance and that there is no correlation between them. A will be an identity matrix multiplied by a constant, σ2, the variance at each point.

$$\mathbf{B} = \frac{\sigma^2}{9} \begin{pmatrix} 3 & 2 & 1 \\ 2 & 3 & 2 \\ 1 & 2 & 3 \end{pmatrix}$$

In this case the correlation coefficients,

$$\rho_{ij} = \frac{B_{ij}}{\sqrt{B_{ii} B_{jj}}}$$

between calculated points i and j will be

$$\rho_{i,\,i\pm1} = \frac{2}{3}, \qquad \rho_{i,\,i\pm2} = \frac{1}{3}$$

In general, the calculated values are correlated even when the observed values are not correlated. The correlation extends over m − 1 calculated points at a time.[34]
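The error-propagation step can be reproduced numerically for the 3-point moving-average example (σ set to 1 for simplicity):

```python
import numpy as np

# Linear transformation for a 3-point moving average acting on the
# first three calculated points, with uncorrelated unit-variance data.
C = np.array([[1, 1, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 1, 1, 1]]) / 3.0
A = np.eye(5)                 # sigma^2 = 1, no correlation between points

B = C @ A @ C.T               # variance-covariance of the smoothed points
rho = B / np.sqrt(np.outer(np.diag(B), np.diag(B)))
```

The off-diagonal correlation coefficients come out as 2/3 for adjacent calculated points and 1/3 for points two apart, after which the correlation vanishes, consistent with a span of m − 1 points.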

Multipass filters

To illustrate the effect of multipassing on the noise and correlation of a set of data, consider the effects of a second pass of a 3-point moving average filter. For the second pass[note 6]

$$Y'_j = \frac{1}{3}\left(Y_{j-1} + Y_j + Y_{j+1}\right)$$

After two passes, the standard deviation of the central point has decreased to $\frac{\sqrt{19}}{9}\sigma \approx 0.48\sigma$, compared to 0.58σ for one pass. The noise reduction is a little less than would be obtained with one pass of a 5-point moving average which, under the same conditions, would result in the smoothed points having the smaller standard deviation of 0.45σ.

Correlation now extends over a span of 4 sequential points with correlation coefficients

$$\rho_{i,\,i\pm1} = \frac{16}{19}, \quad \rho_{i,\,i\pm2} = \frac{10}{19}, \quad \rho_{i,\,i\pm3} = \frac{4}{19}, \quad \rho_{i,\,i\pm4} = \frac{1}{19}$$

The advantage obtained by performing two passes with the narrower smoothing function is that it introduces less distortion into the calculated data.

Comparison with other filters and alternatives

Compared with other smoothing filters, e.g. convolution with a Gaussian or multi-pass moving-average filtering, Savitzky–Golay filters have an initially flatter response and sharper cutoff in the frequency domain, especially for high orders of the fit polynomial (see frequency characteristics). For data with limited signal bandwidth, this means that Savitzky–Golay filtering can provide better signal-to-noise ratio than many other filters; e.g., peak heights of spectra are better preserved than for other filters with similar noise suppression. Disadvantages of the Savitzky–Golay filters are comparably poor suppression of some high frequencies (poor stopband suppression) and artifacts when using polynomial fits for the first and last points.[16]

Alternative smoothing methods that share the advantages of Savitzky–Golay filters and mitigate at least some of their disadvantages are Savitzky–Golay filters with properly chosen fitting weights, Whittaker–Henderson smoothing (a method closely related to smoothing splines), and convolution with a windowed sinc function.[16]

See also

Appendix

Tables of selected convolution coefficients

Consider a set of data points $(x_j, y_j)$, $j = 1, \dots, n$. The Savitzky–Golay tables refer to the case in which the step $x_j - x_{j-1}$ is constant, h. Examples of the use of the so-called convolution coefficients, with a cubic polynomial and a window size, m, of 5 points, are as follows.

Smoothing
$$Y_j = \frac{1}{35}\left(-3 y_{j-2} + 12 y_{j-1} + 17 y_j + 12 y_{j+1} - 3 y_{j+2}\right);$$
1st derivative
$$Y'_j = \frac{1}{12h}\left(y_{j-2} - 8 y_{j-1} + 8 y_{j+1} - y_{j+2}\right);$$
2nd derivative
$$Y''_j = \frac{1}{7h^2}\left(2 y_{j-2} - y_{j-1} - 2 y_j - y_{j+1} + 2 y_{j+2}\right).$$

Selected values of the convolution coefficients for polynomials of degree 1, 2, 3, 4 and 5 are given in the following tables.[note 7] The values were calculated using the PASCAL code provided in Gorry.[15]

Coefficients for smoothing

Polynomial degree      quadratic or cubic (2 or 3)     quartic or quintic (4 or 5)
Window size             5      7      9                  7      9
z = −4                               −21                        15
z = −3                        −2      14                  5    −55
z = −2                 −3      3      39                −30     30
z = −1                 12      6      54                 75    135
z = 0                  17      7      59                131    179
z = 1                  12      6      54                 75    135
z = 2                  −3      3      39                −30     30
z = 3                         −2      14                  5    −55
z = 4                                −21                        15
Normalisation          35     21     231                231    429

Coefficients for 1st derivative

Polynomial degree      linear or quadratic (1 or 2)          cubic or quartic (3 or 4)
Window size             3      5      7      9                 5      7      9
z = −4                                      −4                               86
z = −3                               −3     −3                       22    −142
z = −2                        −2     −2     −2                 1    −67    −193
z = −1                 −1     −1     −1     −1                −8    −58    −126
z = 0                   0      0      0      0                 0      0       0
z = 1                   1      1      1      1                 8     58     126
z = 2                          2      2      2                −1     67     193
z = 3                                 3      3               −22    142
z = 4                                        4                              −86
Normalisation           2     10     28     60                12    252   1,188

Coefficients for 2nd derivative

Polynomial degree      quadratic or cubic (2 or 3)     quartic or quintic (4 or 5)
Window size             5      7      9                  5      7      9
z = −4                                28                              −126
z = −3                         5       7                       −13     371
z = −2                  2      0      −8                 −1     67     151
z = −1                 −1     −3     −17                 16    −19    −211
z = 0                  −2     −4     −20                −30    −70    −370
z = 1                  −1     −3     −17                 16    −19    −211
z = 2                   2      0      −8                 −1     67     151
z = 3                          5       7                       −13     371
z = 4                                 28                              −126
Normalisation           7     42     462                 12    132   1,716

Coefficients for 3rd derivative

Polynomial degree      cubic or quartic (3 or 4)       quintic or sextic (5 or 6)
Window size             5      7      9                  7      9
z = −4                               −14                       100
z = −3                        −1       7                  1   −457
z = −2                 −1      1      13                 −8    256
z = −1                  2      1       9                 13    459
z = 0                   0      0       0                  0      0
z = 1                  −2     −1      −9                −13   −459
z = 2                   1     −1     −13                  8   −256
z = 3                          1      −7                 −1    457
z = 4                                 14                      −100
Normalisation           2      6     198                  8  1,144

Coefficients for 4th derivative

Polynomial degree      quartic or quintic (4 or 5)
Window size             7      9
z = −4                         14
z = −3                  3     −21
z = −2                 −7     −11
z = −1                  1       9
z = 0                   6      18
z = 1                   1       9
z = 2                  −7     −11
z = 3                   3     −21
z = 4                          14
Normalisation          11     143

Notes

  1. ^ With even values of m, z will run from 1 − m to m − 1 in steps of 2
  2. ^ The simple moving average is a special case with k = 0, Y = a0. In this case all convolution coefficients are equal to 1/m.
  3. ^ Smoothing using the moving average is equivalent, with equally spaced points, to local fitting with a (sloping) straight line
  4. ^ The expressions given here are different from those of Madden, which are given in terms of the variable m' = (m − 1)/2.
  5. ^ The expressions under the square root sign are the same as the expression for the convolution coefficient with z=0
  6. ^ The same result is obtained with one pass of the equivalent filter with coefficients (1/9, 2/9, 3/9, 2/9, 1/9) and an identity variance-covariance matrix
  7. ^ More extensive tables and the method to calculate additional coefficients were originally provided by Savitzky and Golay.[3]

References

  1. ^ Whittaker, E.T.; Robinson, G. (1924). The Calculus Of Observations. Blackie & Son. pp. 291–6. OCLC 1187948. "Graduation Formulae obtained by fitting a Polynomial."
  2. ^ a b c Guest, P.G. (2012) [1961]. "Ch. 7: Estimation of Polynomial Coefficients". Numerical Methods of Curve Fitting. Cambridge University Press. pp. 147–. ISBN 978-1-107-64695-7.
  3. ^ a b c Savitzky, A.; Golay, M.J.E. (1964). "Smoothing and Differentiation of Data by Simplified Least Squares Procedures". Analytical Chemistry. 36 (8): 1627–39. Bibcode:1964AnaCh..36.1627S. doi:10.1021/ac60214a047.
  4. ^ a b Savitzky, Abraham (1989). "A Historic Collaboration". Analytical Chemistry. 61 (15): 921A–3A. doi:10.1021/ac00190a744.
  5. ^ a b Steinier, Jean; Termonia, Yves; Deltour, Jules (1972). "Smoothing and differentiation of data by simplified least square procedure". Analytical Chemistry. 44 (11): 1906–9. doi:10.1021/ac60319a045. PMID 22324618.
  6. ^ Larive, Cynthia K.; Sweedler, Jonathan V. (2013). "Celebrating the 75th Anniversary of the ACS Division of Analytical Chemistry: A Special Collection of the Most Highly Cited Analytical Chemistry Papers Published between 1938 and 2012". Analytical Chemistry. 85 (9): 4201–2. doi:10.1021/ac401048d. PMID 23647149.
  7. ^ Riordon, James; Zubritsky, Elizabeth; Newman, Alan (2000). "Top 10 Articles". Analytical Chemistry. 72 (9): 24 A–329 A. doi:10.1021/ac002801q.
  8. ^ Talsky, Gerhard (1994-10-04). Derivative Spectrophotometry. Wiley. ISBN 978-3527282944.
  9. ^ Abbaspour, Abdolkarim; Khajehzadeha, Abdolreza (2012). "End point detection of precipitation titration by scanometry method without using indicator". Anal. Methods. 4 (4): 923–932. doi:10.1039/C2AY05492B.
  10. ^ Li, N; Li, XY; Zou, XZ; Lin, LR; Li, YQ (2011). "A novel baseline-correction method for standard addition based derivative spectra and its application to quantitative analysis of benzo(a)pyrene in vegetable oil samples". Analyst. 136 (13): 2802–10. Bibcode:2011Ana...136.2802L. doi:10.1039/c0an00751j. PMID 21594244.
  11. ^ Dixit, L.; Ram, S. (1985). "Quantitative Analysis by Derivative Electronic Spectroscopy". Applied Spectroscopy Reviews. 21 (4): 311–418. Bibcode:1985ApSRv..21..311D. doi:10.1080/05704928508060434.
  12. ^ Giese, Arthur T.; French, C. Stacey (1955). "The Analysis of Overlapping Spectral Absorption Bands by Derivative Spectrophotometry". Appl. Spectrosc. 9 (2): 78–96. Bibcode:1955ApSpe...9...78G. doi:10.1366/000370255774634089. S2CID 97784067.
  13. ^ Madden, Hannibal H. (1978). "Comments on the Savitzky–Golay convolution method for least-squares-fit smoothing and differentiation of digital data" (PDF). Anal. Chem. 50 (9): 1383–6. doi:10.1021/ac50031a048.
  14. ^ Gans 1992, pp. 153–7, "Repeated smoothing and differentiation"
  15. ^ a b c A., Gorry (1990). "General least-squares smoothing and differentiation by the convolution (Savitzky–Golay) method". Analytical Chemistry. 62 (6): 570–3. doi:10.1021/ac00205a007.
  16. ^ a b c Schmid, Michael; Rath, David; Diebold, Ulrike (2022). "Why and how Savitzky–Golay filters should be replaced". ACS Measurement Science Au. 2 (2): 185–196. doi:10.1021/acsmeasuresciau.1c00054.
  17. ^ a b Thornley, David J. Anisotropic Multidimensional Savitzky Golay kernels for Smoothing, Differentiation and Reconstruction (PDF) (Technical report). Imperial College Department of Computing. 2066/8.
  18. ^ Ratzlaff, Kenneth L.; Johnson, Jean T. (1989). "Computation of two-dimensional polynomial least-squares convolution smoothing integers". Anal. Chem. 61 (11): 1303–5. doi:10.1021/ac00186a026.
  19. ^ Krumm, John. "Savitzky–Golay filters for 2D Images". Microsoft Research, Redmond.
  20. ^ a b Nikitas and Pappa-Louisi (2000). "Comments on the two-dimensional smoothing of data". Analytica Chimica Acta. 415 (1–2): 117–125. doi:10.1016/s0003-2670(00)00861-8.
  21. ^ a b c Shekhar, Chandra (2015). On Simplified Application of Multidimensional Savitzky-Golay Filters and Differentiators. Progress in Applied Mathematics in Science and Engineering. AIP Conference Proceedings. Vol. 1705. p. 020014. Bibcode:2016AIPC.1705b0014S. doi:10.1063/1.4940262.
  22. ^ Chandra, Shekhar (2017-08-02). "Advanced Convolution Coefficient Calculator". Zenodo. doi:10.5281/zenodo.835283.
  23. ^ Chandra, Shekhar (2018-06-02). "Precise Convolution Coefficient Calculator". Zenodo. doi:10.5281/zenodo.1257898.
  24. ^ Shekhar, Chandra. "Convolution Coefficient Database for Multidimensional Least-Squares Filters".
  25. ^ a b c Gans 1992, Appendix 7
  26. ^ a b Ziegler, Horst (1981). "Properties of Digital Smoothing Polynomial (DISPO) Filters". Applied Spectroscopy. 35 (1): 88–92. Bibcode:1981ApSpe..35...88Z. doi:10.1366/0003702814731798. S2CID 97777604.
  27. ^ Luo, Jianwen; Ying, Kui; He, Ping; Bai, Jing (2005). "Properties of Savitzky–Golay digital differentiators" (PDF). Digital Signal Processing. 15 (2): 122–136. doi:10.1016/j.dsp.2004.09.008.
  28. ^ Gans, Peter; Gill, J. Bernard (1983). "Examination of the Convolution Method for Numerical Smoothing and Differentiation of Spectroscopic Data in Theory and in Practice". Applied Spectroscopy. 37 (6): 515–520. Bibcode:1983ApSpe..37..515G. doi:10.1366/0003702834634712. S2CID 97649068.
  29. ^ Gans 1992, p. 153
  30. ^ Procter, Andrew; Sherwood, Peter M.A. (1980). "Smoothing of digital x-ray photoelectron spectra by an extended sliding least-squares approach". Anal. Chem. 52 (14): 2315–21. doi:10.1021/ac50064a018.
  31. ^ Gans 1992, p. 207
  32. ^ Bromba, Manfred U.A; Ziegler, Horst (1981). "Application hints for Savitzky–Golay digital smoothing filters". Anal. Chem. 53 (11): 1583–6. doi:10.1021/ac00234a011.
  33. ^ Marchand, P.; Marmet, L. (1983). "Binomial smoothing filter: A way to avoid some pitfalls of least‐squares polynomial smoothing". Review of Scientific Instruments. 54 (8): 1034–1041. doi:10.1063/1.1137498.
  34. ^ Gans 1992, p. 157
  • Gans, Peter (1992). Data fitting in the chemical sciences: By the method of least squares. ISBN 9780471934127.

External links

  • Advanced Convolution Coefficient Calculator (ACCC) for multidimensional least-squares filters
  • Savitzky–Golay filter in Fundamentals of Statistics
  • A wider range of coefficients for a range of data set sizes, orders of fit, and offsets from the centre point

savitzky, golay, filter, digital, filter, that, applied, digital, data, points, purpose, smoothing, data, that, increase, precision, data, without, distorting, signal, tendency, this, achieved, process, known, convolution, fitting, successive, sets, adjacent, . A Savitzky Golay filter is a digital filter that can be applied to a set of digital data points for the purpose of smoothing the data that is to increase the precision of the data without distorting the signal tendency This is achieved in a process known as convolution by fitting successive sub sets of adjacent data points with a low degree polynomial by the method of linear least squares When the data points are equally spaced an analytical solution to the least squares equations can be found in the form of a single set of convolution coefficients that can be applied to all data sub sets to give estimates of the smoothed signal or derivatives of the smoothed signal at the central point of each sub set The method based on established mathematical procedures 1 2 was popularized by Abraham Savitzky and Marcel J E Golay who published tables of convolution coefficients for various polynomials and sub set sizes in 1964 3 4 Some errors in the tables have been corrected 5 The method has been extended for the treatment of 2 and 3 dimensional data Animation showing smoothing being applied passing through the data from left to right The red line represents the local polynomial being used to fit a sub set of the data The smoothed values are shown as circles Savitzky and Golay s paper is one of the most widely cited papers in the journal Analytical Chemistry 6 and is classed by that journal as one of its 10 seminal papers saying it can be argued that the dawn of the computer controlled analytical instrument can be traced to this article 7 Contents 1 Applications 1 1 Moving average 2 Derivation of convolution coefficients 2 1 Algebraic expressions 2 2 Use of orthogonal polynomials 2 3 Treatment of first and last points 2 
4 Weighting the data 3 Two dimensional convolution coefficients 4 Multi dimensional convolution coefficients 5 Some properties of convolution 5 1 Signal distortion and noise reduction 5 1 1 Multipass filters 5 2 Frequency characteristics of convolution filters 5 3 Convolution and correlation 5 3 1 Multipass filters 6 Comparison with other filters and alternatives 7 See also 8 Appendix 8 1 Tables of selected convolution coefficients 9 Notes 10 References 11 External linksApplications EditThe data consists of a set of points xj yj j 1 n where xj is an independent variable and yj is an observed value They are treated with a set of m convolution coefficients Ci according to the expression Y j i 1 m 2 m 1 2 C i y j i m 1 2 j n m 1 2 displaystyle Y j sum i tfrac 1 m 2 tfrac m 1 2 C i y j i qquad frac m 1 2 leq j leq n frac m 1 2 Selected convolution coefficients are shown in the tables below For example for smoothing by a 5 point quadratic polynomial m 5 i 2 1 0 1 2 and the jth smoothed data point Yj is given by Y j 1 35 3 y j 2 12 y j 1 17 y j 12 y j 1 3 y j 2 displaystyle Y j frac 1 35 3y j 2 12y j 1 17y j 12y j 1 3y j 2 where C 2 3 35 C 1 12 35 etc There are numerous applications of smoothing which is performed primarily to make the data appear to be less noisy than it really is The following are applications of numerical differentiation of data 8 Note When calculating the nth derivative an additional scaling factor of n h n displaystyle frac n h n may be applied to all calculated data points to obtain absolute values see expressions for d n Y d x n displaystyle frac d n Y dx n below for details 1 Synthetic Lorentzian noise blue and 1st derivative green 2 Titration curve blue for malonic acid and 2nd derivative green The part in the light blue box is magnified 10 times 3 Lorentzian on exponential baseline blue and 2nd derivative green 4 Sum of two Lorentzians blue and 2nd derivative green 5 4th derivative of the sum of two Lorentzians Location of maxima and minima in 
experimental data curves This was the application that first motivated Savitzky 4 The first derivative of a function is zero at a maximum or minimum The diagram shows data points belonging to a synthetic Lorentzian curve with added noise blue diamonds Data are plotted on a scale of half width relative to the peak maximum at zero The smoothed curve red line and 1st derivative green were calculated with 7 point cubic Savitzky Golay filters Linear interpolation of the first derivative values at positions either side of the zero crossing gives the position of the peak maximum 3rd derivatives can also be used for this purpose Location of an end point in a titration curve An end point is an inflection point where the second derivative of the function is zero 9 The titration curve for malonic acid illustrates the power of the method The first end point at 4 ml is barely visible but the second derivative allows its value to be easily determined by linear interpolation to find the zero crossing Baseline flattening In analytical chemistry it is sometimes necessary to measure the height of an absorption band against a curved baseline 10 Because the curvature of the baseline is much less than the curvature of the absorption band the second derivative effectively flattens the baseline Three measures of the derivative height which is proportional to the absorption band height are the peak to valley distances h1 and h2 and the height from baseline h3 11 Resolution enhancement in spectroscopy Bands in the second derivative of a spectroscopic curve are narrower than the bands in the spectrum they have reduced half width This allows partially overlapping bands to be resolved into separate negative peaks 12 The diagram illustrates how this may be used also for chemical analysis using measurement of peak to valley distances In this case the valleys are a property of the 2nd derivative of a Lorentzian x axis position is relative to the position of the peak maximum on a scale of half 
width at half height Resolution enhancement with 4th derivative positive peaks The minima are a property of the 4th derivative of a Lorentzian Moving average Edit Main article Moving average A moving average filter is commonly used with time series data to smooth out short term fluctuations and highlight longer term trends or cycles It is often used in technical analysis of financial data like stock prices returns or trading volumes It is also used in economics to examine gross domestic product employment or other macroeconomic time series An unweighted moving average filter is the simplest convolution filter Each subset of the data set is fitted by a straight horizontal line It was not included in the Savitzsky Golay tables of convolution coefficients as all the coefficient values are identical with the value 1 m Derivation of convolution coefficients EditWhen the data points are equally spaced an analytical solution to the least squares equations can be found 2 This solution forms the basis of the convolution method of numerical smoothing and differentiation Suppose that the data consists of a set of n points xj yj j 1 n where xj is an independent variable and yj is a datum value A polynomial will be fitted by linear least squares to a set of m an odd number adjacent data points each separated by an interval h Firstly a change of variable is made z x x h displaystyle z x bar x over h where x displaystyle bar x is the value of the central point z takes the values 1 m 2 0 m 1 2 displaystyle tfrac 1 m 2 cdots 0 cdots tfrac m 1 2 e g m 5 z 2 1 0 1 2 note 1 The polynomial of degree k is defined as Y a 0 a 1 z a 2 z 2 a k z k displaystyle Y a 0 a 1 z a 2 z 2 cdots a k z k note 2 The coefficients a0 a1 etc are obtained by solving the normal equations bold a represents a vector bold J represents a matrix a J T J 1 J T y displaystyle mathbf a left mathbf J mathbf T mathbf J right mathbf 1 mathbf J mathbf T mathbf y where J displaystyle mathbf J is a Vandermonde matrix 
that is i displaystyle i th row of J displaystyle mathbf J has values 1 z i z i 2 displaystyle 1 z i z i 2 dots For example for a cubic polynomial fitted to 5 points z 2 1 0 1 2 the normal equations are solved as follows J 1 2 4 8 1 1 1 1 1 0 0 0 1 1 1 1 1 2 4 8 displaystyle mathbf J begin pmatrix 1 amp 2 amp 4 amp 8 1 amp 1 amp 1 amp 1 1 amp 0 amp 0 amp 0 1 amp 1 amp 1 amp 1 1 amp 2 amp 4 amp 8 end pmatrix J T J m z z 2 z 3 z z 2 z 3 z 4 z 2 z 3 z 4 z 5 z 3 z 4 z 5 z 6 m 0 z 2 0 0 z 2 0 z 4 z 2 0 z 4 0 0 z 4 0 z 6 5 0 10 0 0 10 0 34 10 0 34 0 0 34 0 130 displaystyle mathbf J T J begin pmatrix m amp sum z amp sum z 2 amp sum z 3 sum z amp sum z 2 amp sum z 3 amp sum z 4 sum z 2 amp sum z 3 amp sum z 4 amp sum z 5 sum z 3 amp sum z 4 amp sum z 5 amp sum z 6 end pmatrix begin pmatrix m amp 0 amp sum z 2 amp 0 0 amp sum z 2 amp 0 amp sum z 4 sum z 2 amp 0 amp sum z 4 amp 0 0 amp sum z 4 amp 0 amp sum z 6 end pmatrix begin pmatrix 5 amp 0 amp 10 amp 0 0 amp 10 amp 0 amp 34 10 amp 0 amp 34 amp 0 0 amp 34 amp 0 amp 130 end pmatrix Now the normal equations can be factored into two separate sets of equations by rearranging rows and columns with J T J even 5 10 10 34 a n d J T J odd 10 34 34 130 displaystyle mathbf J T J text even begin pmatrix 5 amp 10 10 amp 34 end pmatrix quad mathrm and quad mathbf J T J text odd begin pmatrix 10 amp 34 34 amp 130 end pmatrix Expressions for the inverse of each of these matrices can be obtained using Cramer s rule J T J even 1 1 70 34 10 10 5 a n d J T J odd 1 1 144 130 34 34 10 displaystyle mathbf J T J text even 1 1 over 70 begin pmatrix 34 amp 10 10 amp 5 end pmatrix quad mathrm and quad mathbf J T J text odd 1 1 over 144 begin pmatrix 130 amp 34 34 amp 10 end pmatrix The normal equations become a 0 a 2 j 1 70 34 10 10 5 1 1 1 1 1 4 1 0 1 4 y j 2 y j 1 y j y j 1 y j 2 displaystyle begin pmatrix a 0 a 2 end pmatrix j 1 over 70 begin pmatrix 34 amp 10 10 amp 5 end pmatrix begin pmatrix 1 amp 1 amp 1 amp 1 amp 1 4 amp 1 amp 0 amp 1 amp 
4 end pmatrix begin pmatrix y j 2 y j 1 y j y j 1 y j 2 end pmatrix and a 1 a 3 j 1 144 130 34 34 10 2 1 0 1 2 8 1 0 1 8 y j 2 y j 1 y j y j 1 y j 2 displaystyle begin pmatrix a 1 a 3 end pmatrix j 1 over 144 begin pmatrix 130 amp 34 34 amp 10 end pmatrix begin pmatrix 2 amp 1 amp 0 amp 1 amp 2 8 amp 1 amp 0 amp 1 amp 8 end pmatrix begin pmatrix y j 2 y j 1 y j y j 1 y j 2 end pmatrix Multiplying out and removing common factors a 0 j 1 35 3 y j 2 12 y j 1 17 y j 12 y j 1 3 y j 2 a 1 j 1 12 y j 2 8 y j 1 8 y j 1 y j 2 a 2 j 1 14 2 y j 2 y j 1 2 y j y j 1 2 y j 2 a 3 j 1 12 y j 2 2 y j 1 2 y j 1 y j 2 displaystyle begin aligned a 0 j amp 1 over 35 3y j 2 12y j 1 17y j 12y j 1 3y j 2 a 1 j amp 1 over 12 y j 2 8y j 1 8y j 1 y j 2 a 2 j amp 1 over 14 2y j 2 y j 1 2y j y j 1 2y j 2 a 3 j amp 1 over 12 y j 2 2y j 1 2y j 1 y j 2 end aligned The coefficients of y in these expressions are known as convolution coefficients They are elements of the matrix C J T J 1 J T displaystyle mathbf C J T J 1 J T In general C y j Y j i m 1 2 m 1 2 C i y j i m 1 2 j n m 1 2 displaystyle C otimes y j Y j sum i tfrac m 1 2 tfrac m 1 2 C i y j i qquad frac m 1 2 leq j leq n frac m 1 2 In matrix notation this example is written as Y 3 Y 4 Y 5 1 35 3 12 17 12 3 0 0 0 3 12 17 12 3 0 0 0 3 12 17 12 3 y 1 y 2 y 3 y 4 y 5 y 6 y 7 displaystyle begin pmatrix Y 3 Y 4 Y 5 vdots end pmatrix 1 over 35 begin pmatrix 3 amp 12 amp 17 amp 12 amp 3 amp 0 amp 0 amp cdots 0 amp 3 amp 12 amp 17 amp 12 amp 3 amp 0 amp cdots 0 amp 0 amp 3 amp 12 amp 17 amp 12 amp 3 amp cdots vdots amp vdots amp vdots amp vdots amp vdots amp vdots amp vdots amp ddots end pmatrix begin pmatrix y 1 y 2 y 3 y 4 y 5 y 6 y 7 vdots end pmatrix Tables of convolution coefficients calculated in the same way for m up to 25 were published for the Savitzky Golay smoothing filter in 1964 3 5 The value of the central point z 0 is obtained from a single set of coefficients a0 for smoothing a1 for 1st derivative etc The numerical derivatives are 
obtained by differentiating Y This means that the derivatives are calculated for the smoothed data curve For a cubic polynomial Y a 0 a 1 z a 2 z 2 a 3 z 3 a 0 at z 0 x x d Y d x 1 h a 1 2 a 2 z 3 a 3 z 2 1 h a 1 at z 0 x x d 2 Y d x 2 1 h 2 2 a 2 6 a 3 z 2 h 2 a 2 at z 0 x x d 3 Y d x 3 6 h 3 a 3 displaystyle begin aligned Y amp a 0 a 1 z a 2 z 2 a 3 z 3 a 0 amp text at z 0 x bar x frac dY dx amp frac 1 h left a 1 2a 2 z 3a 3 z 2 right frac 1 h a 1 amp text at z 0 x bar x frac d 2 Y dx 2 amp frac 1 h 2 left 2a 2 6a 3 z right frac 2 h 2 a 2 amp text at z 0 x bar x frac d 3 Y dx 3 amp frac 6 h 3 a 3 end aligned In general polynomials of degree 0 and 1 note 3 2 and 3 4 and 5 etc give the same coefficients for smoothing and even derivatives Polynomials of degree 1 and 2 3 and 4 etc give the same coefficients for odd derivatives Algebraic expressions Edit It is not necessary always to use the Savitzky Golay tables The summations in the matrix JTJ can be evaluated in closed form z m 1 2 m 1 2 z 2 m m 2 1 12 z 4 m m 2 1 3 m 2 7 240 z 6 m m 2 1 3 m 4 18 m 2 31 1344 displaystyle begin aligned sum z frac m 1 2 frac m 1 2 z 2 amp m m 2 1 over 12 sum z 4 amp m m 2 1 3m 2 7 over 240 sum z 6 amp m m 2 1 3m 4 18m 2 31 over 1344 end aligned so that algebraic formulae can be derived for the convolution coefficients 13 note 4 Functions that are suitable for use with a curve that has an inflection point are Smoothing polynomial degree 2 3 C 0 i 3 m 2 7 20 i 2 4 m m 2 4 3 1 m 2 i m 1 2 displaystyle C 0i frac left 3m 2 7 20i 2 right 4 m left m 2 4 right 3 quad frac 1 m 2 leq i leq frac m 1 2 the range of values for i also applies to the expressions below 1st derivative polynomial degree 3 4 C 1 i 5 3 m 4 18 m 2 31 i 28 3 m 2 7 i 3 m m 2 1 3 m 4 39 m 2 108 15 displaystyle C 1i frac 5 left 3m 4 18m 2 31 right i 28 left 3m 2 7 right i 3 m left m 2 1 right left 3m 4 39m 2 108 right 15 2nd derivative polynomial degree 2 3 C 2 i 12 m i 2 m m 2 1 m 2 m 2 1 m 2 4 30 displaystyle C 2i frac 
12mi 2 m left m 2 1 right m 2 left m 2 1 right left m 2 4 right 30 3rd derivative polynomial degree 3 4 C 3 i 3 m 2 7 i 20 i 3 m m 2 1 3 m 4 39 m 2 108 420 displaystyle C 3i frac left 3m 2 7 right i 20i 3 m left m 2 1 right left 3m 4 39m 2 108 right 420 Simpler expressions that can be used with curves that don t have an inflection point are Smoothing polynomial degree 0 1 moving average C 0 i 1 m displaystyle C 0i frac 1 m 1st derivative polynomial degree 1 2 C 1 i i m m 2 1 12 displaystyle C 1i frac i m m 2 1 12 Higher derivatives can be obtained For example a fourth derivative can be obtained by performing two passes of a second derivative function 14 Use of orthogonal polynomials Edit An alternative to fitting m data points by a simple polynomial in the subsidiary variable z is to use orthogonal polynomials Y b 0 P 0 z b 1 P 1 z b k P k z displaystyle Y b 0 P 0 z b 1 P 1 z cdots b k P k z where P0 Pk is a set of mutually orthogonal polynomials of degree 0 k Full details on how to obtain expressions for the orthogonal polynomials and the relationship between the coefficients b and a are given by Guest 2 Expressions for the convolution coefficients are easily obtained because the normal equations matrix JTJ is a diagonal matrix as the product of any two orthogonal polynomials is zero by virtue of their mutual orthogonality Therefore each non zero element of its inverse is simply the reciprocal the corresponding element in the normal equation matrix The calculation is further simplified by using recursion to build orthogonal Gram polynomials The whole calculation can be coded in a few lines of PASCAL a computer language well adapted for calculations involving recursion 15 Treatment of first and last points Edit Savitzky Golay filters are most commonly used to obtain the smoothed or derivative value at the central point z 0 using a single set of convolution coefficients m 1 2 points at the start and end of the series cannot be calculated using this process Various 
strategies can be employed to avoid this inconvenience The data could be artificially extended by adding in reverse order copies of the first m 1 2 points at the beginning and copies of the last m 1 2 points at the end For instance with m 5 two points are added at the start and end of the data y1 yn y3 y2 y1 yn yn 1 yn 2 Looking again at the fitting polynomial it is obvious that data can be calculated for all values of z by using all sets of convolution coefficients for a single polynomial a0 ak For a cubic polynomial Y a 0 a 1 z a 2 z 2 a 3 z 3 d Y d x 1 h a 1 2 a 2 z 3 a 3 z 2 d 2 Y d x 2 1 h 2 2 a 2 6 a 3 z d 3 Y d x 3 6 h 3 a 3 displaystyle begin aligned Y amp a 0 a 1 z a 2 z 2 a 3 z 3 frac dY dx amp frac 1 h a 1 2a 2 z 3a 3 z 2 frac d 2 Y dx 2 amp frac 1 h 2 2a 2 6a 3 z frac d 3 Y dx 3 amp frac 6 h 3 a 3 end aligned Convolution coefficients for the missing first and last points can also be easily obtained 15 This is also equivalent to fitting the first m 1 2 points with the same polynomial and similarly for the last points Weighting the data Edit It is implicit in the above treatment that the data points are all given equal weight Technically the objective function U i w i Y i y i 2 displaystyle U sum i w i Y i y i 2 being minimized in the least squares process has unit weights wi 1 When weights are not all the same the normal equations become a J T W J 1 J T W y W i i 1 displaystyle mathbf a left mathbf J T W mathbf J right 1 mathbf J T W mathbf y qquad W i i neq 1 If the same set of diagonal weights is used for all data subsets W diag w 1 w 2 w m displaystyle W text diag w 1 w 2 w m an analytical solution to the normal equations can be written down For example with a quadratic polynomial J T W J m w i w i z i w i z i 2 w i z i w i z i 2 w i z i 3 w i z i 2 w i z i 3 w i z i 4 displaystyle mathbf J T WJ begin pmatrix m sum w i amp sum w i z i amp sum w i z i 2 sum w i z i amp sum w i z i 2 amp sum w i z i 3 sum w i z i 2 amp sum w i z i 3 amp sum w i z i 4 
end pmatrix An explicit expression for the inverse of this matrix can be obtained using Cramer s rule A set of convolution coefficients may then be derived as C J T W J 1 J T W displaystyle mathbf C left mathbf J T W mathbf J right 1 mathbf J T W Alternatively the coefficients C could be calculated in a spreadsheet employing a built in matrix inversion routine to obtain the inverse of the normal equations matrix This set of coefficients once calculated and stored can be used with all calculations in which the same weighting scheme applies A different set of coefficients is needed for each different weighting scheme It was shown that Savitzky Golay filter can be improved by introducing weights that decrease at the ends of the fitting interval 16 Two dimensional convolution coefficients EditTwo dimensional smoothing and differentiation can also be applied to tables of data values such as intensity values in a photographic image which is composed of a rectangular grid of pixels 17 18 Such a grid is referred as a kernel and the data points that constitute the kernel are referred as nodes The trick is to transform the rectangular kernel into a single row by a simple ordering of the indices of the nodes Whereas the one dimensional filter coefficients are found by fitting a polynomial in the subsidiary variable z to a set of m data points the two dimensional coefficients are found by fitting a polynomial in subsidiary variables v and w to a set of the values at the m n kernel nodes The following example for a bivariate polynomial of total degree 3 m 7 and n 5 illustrates the process which parallels the process for the one dimensional case above 19 v x x h x w y y h y displaystyle v frac x bar x h x w frac y bar y h y Y a 00 a 10 v a 01 w a 20 v 2 a 11 v w a 02 w 2 a 30 v 3 a 21 v 2 w a 12 v w 2 a 03 w 3 displaystyle Y a 00 a 10 v a 01 w a 20 v 2 a 11 vw a 02 w 2 a 30 v 3 a 21 v 2 w a 12 vw 2 a 03 w 3 The rectangular kernel of 35 data values d1 d35 vw 3 2 1 0 1 2 3 2 d1 d2 
d3 d4 d5 d6 d7 1 d8 d9 d10 d11 d12 d13 d140 d15 d16 d17 d18 d19 d20 d211 d22 d23 d24 d25 d26 d27 d282 d29 d30 d31 d32 d33 d34 d35becomes a vector when the rows are placed one after another d d1 d35 TThe Jacobian has 10 columns one for each of the parameters a00 a03 and 35 rows one for each pair of v and w values Each row has the form J row 1 v w v 2 v w w 2 v 3 v 2 w v w 2 w 3 displaystyle J text row 1 quad v quad w quad v 2 quad vw quad w 2 quad v 3 quad v 2 w quad vw 2 quad w 3 The convolution coefficients are calculated as C J T J 1 J T displaystyle mathbf C left mathbf J T mathbf J right 1 mathbf J T The first row of C contains 35 convolution coefficients which can be multiplied with the 35 data values respectively to obtain the polynomial coefficient a 00 displaystyle a 00 which is the smoothed value at the central node of the kernel i e at the 18th node of the above table Similarly other rows of C can be multiplied with the 35 values to obtain other polynomial coefficients which in turn can be used to obtain smoothed values and different smoothed partial derivatives at different nodes Nikitas and Pappa Louisi showed that depending on the format of the used polynomial the quality of smoothing may vary significantly 20 They recommend using the polynomial of the form Y i 0 p j 0 q a i j v i w j displaystyle Y sum i 0 p sum j 0 q a ij v i w j because such polynomials can achieve good smoothing both in the central and in the near boundary regions of a kernel and therefore they can be confidently used in smoothing both at the internal and at the near boundary data points of a sampled domain In order to avoid ill conditioning when solving the least squares problem p lt m and q lt n For software that calculates the two dimensional coefficients and for a database of such C s see the section on multi dimensional convolution coefficients below Multi dimensional convolution coefficients EditThe idea of two dimensional convolution coefficients can be extended to the 
higher spatial dimensions as well in a straightforward manner 17 21 by arranging multidimensional distribution of the kernel nodes in a single row Following the aforementioned finding by Nikitas and Pappa Louisi 20 in two dimensional cases usage of the following form of the polynomial is recommended in multidimensional cases Y i 1 0 p 1 i 2 0 p 2 i D 0 p D a i 1 i 2 i D u 1 i 1 u 2 i 2 u D i D displaystyle Y sum i 1 0 p 1 sum i 2 0 p 2 cdots sum i D 0 p D a i 1 i 2 cdots i D times u 1 i 1 u 2 i 2 cdots u D i D where D is the dimension of the space a displaystyle a s are the polynomial coefficients and u s are the coordinates in the different spatial directions Algebraic expressions for partial derivatives of any order be it mixed or otherwise can be easily derived from the above expression 21 Note that C depends on the manner in which the kernel nodes are arranged in a row and on the manner in which the different terms of the expanded form of the above polynomial is arranged when preparing the Jacobian Accurate computation of C in multidimensional cases becomes challenging as precision of standard floating point numbers available in computer programming languages no longer remain sufficient The insufficient precision causes the floating point truncation errors to become comparable to the magnitudes of some C elements which in turn severely degrades its accuracy and renders it useless Chandra Shekhar has brought forth two open source softwares Advanced Convolution Coefficient Calculator ACCC and Precise Convolution Coefficient Calculator PCCC which handle these accuracy issues adequately ACCC performs the computation by using floating point numbers in an iterative manner 22 The precision of the floating point numbers is gradually increased in each iteration by using GNU MPFR Once the obtained C s in two consecutive iterations start having same significant digits until a pre specified distance the convergence is assumed to have reached If the distance is sufficiently 
large the computation yields a highly accurate C PCCC employs rational number calculations by using GNU Multiple Precision Arithmetic Library and yields a fully accurate C in the rational number format 23 In the end these rational numbers are converted into floating point numbers until a pre specified number of significant digits A database of C s that are calculated by using ACCC for symmetric kernels and both symmetric and asymmetric polynomials on unity spaced kernel nodes in the 1 2 3 and 4 dimensional spaces is made available 24 Chandra Shekhar has also laid out a mathematical framework that describes usage of C calculated on unity spaced kernel nodes to perform filtering and partial differentiations of various orders on non uniformly spaced kernel nodes 21 allowing usage of C provided in the aforementioned database Although this method yields approximate results only they are acceptable in most engineering applications provided that non uniformity of the kernel nodes is weak Some properties of convolution EditThe sum of convolution coefficients for smoothing is equal to one The sum of coefficients for odd derivatives is zero 25 The sum of squared convolution coefficients for smoothing is equal to the value of the central coefficient 26 Smoothing of a function leaves the area under the function unchanged 25 Convolution of a symmetric function with even derivative coefficients conserves the centre of symmetry 25 Properties of derivative filters 27 Signal distortion and noise reduction Edit It is inevitable that the signal will be distorted in the convolution process From property 3 above when data which has a peak is smoothed the peak height will be reduced and the half width will be increased Both the extent of the distortion and S N signal to noise ratio improvement decrease as the degree of the polynomial increases increase as the width m of the convolution function increases Effect of smoothing on data points with uncorrelated noise of unit standard 
deviation For example If the noise in all data points is uncorrelated and has a constant standard deviation s the standard deviation on the noise will be decreased by convolution with an m point smoothing function to 26 note 5 polynomial degree 0 or 1 1 m s displaystyle sqrt 1 over m sigma moving average polynomial degree 2 or 3 3 3 m 2 7 4 m m 2 4 s displaystyle sqrt frac 3 3m 2 7 4m m 2 4 sigma These functions are shown in the plot at the right For example with a 9 point linear function moving average two thirds of the noise is removed and with a 9 point quadratic cubic smoothing function only about half the noise is removed Most of the noise remaining is low frequency noise see Frequency characteristics of convolution filters below Although the moving average function gives the best noise reduction it is unsuitable for smoothing data which has curvature over m points A quadratic filter function is unsuitable for getting a derivative of a data curve with an inflection point because a quadratic polynomial does not have one The optimal choice of polynomial order and number of convolution coefficients will be a compromise between noise reduction and distortion 28 Multipass filters Edit One way to mitigate distortion and improve noise removal is to use a filter of smaller width and perform more than one convolution with it For two passes of the same filter this is equivalent to one pass of a filter obtained by convolution of the original filter with itself 29 For example 2 passes of the filter with coefficients 1 3 1 3 1 3 is equivalent to 1 pass of the filter with coefficients 1 9 2 9 3 9 2 9 1 9 The disadvantage of multipassing is that the equivalent filter width for n displaystyle n passes of an m displaystyle m point function is n m 1 1 displaystyle n m 1 1 so multipassing is subject to greater end effects Nevertheless multipassing has been used to great advantage For instance some 40 80 passes on data with a signal to noise ratio of only 5 gave useful results 30 
The noise reduction formulae given above do not apply, because correlation between calculated data points increases with each pass.

Frequency characteristics of convolution filters

[Figure: Fourier transform of the 9-point quadratic/cubic smoothing function.]

Convolution maps to multiplication in the Fourier co-domain. The discrete Fourier transform of a convolution filter is a real-valued function which can be represented as

    FT(θ) = Σ_{j = −(m−1)/2}^{(m−1)/2} Cj cos(jθ)

θ runs from 0 to 180 degrees, after which the function merely repeats itself. The plot for a 9-point quadratic/cubic smoothing function is typical. At very low angles the plot is almost flat, meaning that low-frequency components of the data will be virtually unchanged by the smoothing operation. As the angle increases the value decreases, so that higher-frequency components are more and more attenuated. This shows that the convolution filter can be described as a low-pass filter: the noise that is removed is primarily high-frequency noise, while low-frequency noise passes through the filter.[31] Some high-frequency noise components are attenuated more than others, as shown by undulations in the Fourier transform at large angles. This can give rise to small oscillations in the smoothed data,[32] and to phase reversal, i.e. high-frequency oscillations in the data are inverted by Savitzky–Golay filtering.[33]

Convolution and correlation

Convolution affects the correlation between errors in the data. The effect of convolution can be expressed as a linear transformation,

    Yj = Σ_{i = −(m−1)/2}^{(m−1)/2} Ci y_{j+i}

By the law of error propagation, the variance–covariance matrix of the data, A, will be transformed into B according to

    B = C A Cᵀ

To see how this applies in practice, consider the effect of a 3-point moving average on the first three calculated points, Y2 to Y4, assuming that the data points have equal variance and that there is no correlation between them. A will then be an identity matrix multiplied by a constant, σ², the variance at each point, so that (with matrices written row by row, rows separated by semicolons)

    B = (σ²/9) [1 1 1 0 0; 0 1 1 1 0; 0 0 1 1 1] I₅ [1 1 1 0 0; 0 1 1 1 0; 0 0 1 1 1]ᵀ
      = (σ²/9) [3 2 1; 2 3 2; 1 2 3]

In this case the correlation coefficients,

    ρij = Bij / √(Bii Bjj),  i ≠ j,

between calculated points i and j will be

    ρ(i, i+1) = 2/3 ≈ 0.66,  ρ(i, i+2) = 1/3 ≈ 0.33

In general, the calculated values are correlated even when the observed values are not. The correlation extends over m − 1 calculated points at a time.[34]

Multipass filters

To illustrate the effect of multipassing on the noise and correlation of a set of data, consider the effects of a second pass of a 3-point moving-average filter. For the second pass,[note 6] the variance–covariance matrix of five successive calculated points becomes

    C B Cᵀ = (σ²/81) [19 16 10 4 1; 16 19 16 10 4; 10 16 19 16 10; 4 10 16 19 16; 1 4 10 16 19]

After two passes, the standard deviation of the central point has decreased to √(19/81) σ ≈ 0.48σ, compared with 0.58σ for one pass. The noise reduction is a little less than would be obtained with one pass of a 5-point moving average, which under the same conditions would leave the smoothed points with the smaller standard deviation of 0.45σ. Correlation now extends over a span of four sequential points, with correlation coefficients

    ρ(i, i+1) = 16/19 ≈ 0.84,  ρ(i, i+2) = 10/19 ≈ 0.53,  ρ(i, i+3) = 4/19 ≈ 0.21,  ρ(i, i+4) = 1/19 ≈ 0.05

The advantage obtained by performing two passes with the narrower smoothing function is that less distortion is introduced into the calculated data.

Comparison with other filters and alternatives

Compared with other smoothing filters, e.g. convolution with a Gaussian kernel or multi-pass moving-average filtering, Savitzky–Golay filters have an initially flatter response and a sharper cutoff in the frequency domain, especially for high orders of the fit polynomial (see the frequency characteristics above). For data with limited signal bandwidth, this means that Savitzky–Golay filtering can provide a better signal-to-noise ratio than many other filters; e.g., peak heights of spectra are better preserved than with other filters of similar noise suppression. Disadvantages of Savitzky–Golay filters are their comparably poor suppression of some high frequencies (poor stopband suppression) and artifacts arising from the polynomial fits used for the first and last points.[16] One alternative smoothing method that shares the advantages of Savitzky–Golay filters while mitigating at least some of their disadvantages is the Savitzky–Golay filter with properly chosen fitting weights.
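The frequency-domain and error-propagation statements above can be checked numerically. The following minimal NumPy sketch (illustrative only; it is not part of the article and uses no particular Savitzky–Golay library routine) evaluates the frequency response of the 9-point quadratic/cubic smoothing coefficients and propagates errors through a 3-point moving average:

```python
import numpy as np

# (a) Frequency response FT(theta) = sum_j C_j cos(j*theta) for the
#     9-point quadratic/cubic smoothing coefficients (normalisation 231).
C9 = np.array([-21, 14, 39, 54, 59, 54, 39, 14, -21]) / 231.0
j = np.arange(-4, 5)
theta = np.linspace(0.0, np.pi, 181)               # 0 to 180 degrees
FT = np.array([np.sum(C9 * np.cos(j * t)) for t in theta])
print(round(FT[0], 6))      # 1.0 -- low frequencies pass unchanged
print(abs(FT[-1]) < 0.2)    # True -- high frequencies strongly attenuated

# (b) Error propagation B = C A C^T for a 3-point moving average acting
#     on five uncorrelated points of unit variance (A = identity).
C = np.zeros((3, 5))
for r in range(3):
    C[r, r:r + 3] = 1.0 / 3.0
B = C @ np.eye(5) @ C.T
print(np.round(9 * B).astype(int))   # [[3 2 1] [2 3 2] [1 2 3]]
rho = B[0, 1] / np.sqrt(B[0, 0] * B[1, 1])
print(round(rho, 2))                 # 0.67 (i.e. 2/3)

# (c) Two passes of the 3-point average equal one pass of the filter
#     (1, 2, 3, 2, 1)/9; the central variance is the sum of squared
#     coefficients, 19/81, i.e. a standard deviation of about 0.48 sigma.
c2 = np.convolve(np.ones(3) / 3.0, np.ones(3) / 3.0)
print(round(81 * float(np.sum(c2 ** 2))))   # 19
```

The printed values reproduce the article's figures: a unit response at zero frequency, the correlation coefficient 2/3 between adjacent smoothed points, and the two-pass central variance 19σ²/81.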
Whittaker–Henderson smoothing, a method closely related to smoothing splines, and convolution with a windowed sinc function are further such alternatives.[16]

See also

- Kernel smoother – different terminology for many of the same processes, used in statistics
- Local regression – the LOESS and LOWESS methods
- Numerical differentiation – application to the differentiation of functions
- Smoothing spline
- Stencil (numerical analysis) – application to the solution of differential equations

Appendix

Tables of selected convolution coefficients

Consider a set of data points (xj, yj), 1 ≤ j ≤ n. The Savitzky–Golay tables refer to the case in which the step xj − xj−1 is constant, h. Examples of the use of the so-called convolution coefficients, with a cubic polynomial and a window size, m, of 5 points, are as follows.

Smoothing:
    Yj = (1/35)(−3 y_{j−2} + 12 y_{j−1} + 17 y_j + 12 y_{j+1} − 3 y_{j+2})

1st derivative:
    Y′j = (1/12h)(1 y_{j−2} − 8 y_{j−1} + 0 y_j + 8 y_{j+1} − 1 y_{j+2})

2nd derivative:
    Y″j = (1/7h²)(2 y_{j−2} − 1 y_{j−1} − 2 y_j − 1 y_{j+1} + 2 y_{j+2})

Selected values of the convolution coefficients for polynomials of degree 1, 2, 3, 4 and 5 are given in the following tables.[note 7] The values were calculated using the PASCAL code provided in Gorry.[15]

Coefficients for smoothing

        quadratic/cubic (2 or 3)   quartic/quintic (4 or 5)
  i     m=5     m=7     m=9        m=7     m=9
 −4                     −21                 15
 −3             −2       14          5     −55
 −2     −3       3       39        −30      30
 −1     12       6       54         75     135
  0     17       7       59        131     179
  1     12       6       54         75     135
  2     −3       3       39        −30      30
  3             −2       14          5     −55
  4                     −21                 15
Norm.   35      21      231        231     429

Coefficients for 1st derivative

        linear/quadratic (1 or 2)     cubic/quartic (3 or 4)
  i     m=3   m=5   m=7   m=9         m=5    m=7    m=9
 −4                       −4                         86
 −3                 −3    −3                 22    −142
 −2           −2    −2    −2           1    −67    −193
 −1     −1    −1    −1    −1          −8    −58    −126
  0      0     0     0     0           0      0       0
  1      1     1     1     1           8     58     126
  2            2     2     2          −1     67     193
  3                  3     3                −22     142
  4                        4                        −86
Norm.    2    10    28    60          12    252    1188

Coefficients for 2nd derivative

        quadratic/cubic (2 or 3)   quartic/quintic (4 or 5)
  i     m=5   m=7   m=9            m=5    m=7    m=9
 −4                  28                          −126
 −3            5      7                   −13     371
 −2      2     0     −8             −1     67     151
 −1     −1    −3    −17             16    −19    −211
  0     −2    −4    −20            −30    −70    −370
  1     −1    −3    −17             16    −19    −211
  2      2     0     −8             −1     67     151
  3            5      7                   −13     371
  4                  28                          −126
Norm.    7    42    462             12    132    1716

Coefficients for 3rd derivative

        cubic/quartic (3 or 4)     quintic/sextic (5 or 6)
  i     m=5   m=7   m=9            m=7    m=9
 −4                 −14                  −100
 −3           −1      7              1    457
 −2     −1     1     13             −8    256
 −1      2     1      9             13   −459
  0      0     0      0              0      0
  1     −2    −1     −9            −13    459
  2      1    −1    −13              8   −256
  3            1     −7             −1   −457
  4                  14                   100
Norm.    2     6    198              8   1144

Coefficients for 4th derivative

        quartic/quintic (4 or 5)
  i     m=7    m=9
 −4             14
 −3      3     −21
 −2     −7     −11
 −1      1       9
  0      6      18
  1      1       9
  2     −7     −11
  3      3     −21
  4             14
Norm.   11     143

Notes

1. With even values of m, z will run from 1 − m to m − 1 in steps of 2.
2. The simple moving average is a special case with k = 0, Y = a0. In this case all convolution coefficients are equal to 1/m.
3. Smoothing using the moving average is equivalent, with equally spaced points, to local fitting with a sloping straight line.
4. The expressions given here are different from those of Madden, which are given in terms of the variable m′ = (m − 1)/2.
5. The expressions under the square-root sign are the same as the expression for the convolution coefficient with z = 0.
6. The same result is obtained with one pass of the equivalent filter with coefficients (1/9, 2/9, 3/9, 2/9, 1/9) and an identity variance–covariance matrix.
7. More extensive tables, and the method to calculate additional coefficients, were originally provided by Savitzky and Golay.[3]

References

1. Whittaker, E. T.; Robinson, G. (1924). The Calculus of Observations. Blackie & Son. pp. 291–6. OCLC 1187948. "Graduation Formulae obtained by fitting a Polynomial."
2. Guest, P. G. (2012) [1961]. "Ch. 7: Estimation of Polynomial Coefficients". Numerical Methods of Curve Fitting. Cambridge University Press. p. 147. ISBN 978-1-107-64695-7.
3. Savitzky, A.; Golay, M. J. E. (1964). "Smoothing and Differentiation of Data by Simplified Least Squares Procedures".
Analytical Chemistry. 36 (8): 1627–39. Bibcode:1964AnaCh..36.1627S. doi:10.1021/ac60214a047.
4. Savitzky, Abraham (1989). "A Historic Collaboration". Analytical Chemistry. 61 (15): 921A–3A. doi:10.1021/ac00190a744.
5. Steinier, Jean; Termonia, Yves; Deltour, Jules (1972). "Smoothing and differentiation of data by simplified least square procedure". Analytical Chemistry. 44 (11): 1906–9. doi:10.1021/ac60319a045. PMID 22324618.
6. Larive, Cynthia K.; Sweedler, Jonathan V. (2013). "Celebrating the 75th Anniversary of the ACS Division of Analytical Chemistry: A Special Collection of the Most Highly Cited Analytical Chemistry Papers Published between 1938 and 2012". Analytical Chemistry. 85 (9): 4201–2. doi:10.1021/ac401048d. PMID 23647149.
7. Riordon, James; Zubritsky, Elizabeth; Newman, Alan (2000). "Top 10 Articles". Analytical Chemistry. 72 (9): 24 A–329 A. doi:10.1021/ac002801q.
8. Talsky, Gerhard (1994-10-04). Derivative Spectrophotometry. Wiley. ISBN 978-3527282944.
9. Abbaspour, Abdolkarim; Khajehzadeha, Abdolreza (2012). "End point detection of precipitation titration by scanometry method without using indicator". Anal. Methods. 4 (4): 923–932. doi:10.1039/C2AY05492B.
10. Li, N; Li, XY; Zou, XZ; Lin, LR; Li, YQ (2011). "A novel baseline correction method for standard addition based derivative spectra and its application to quantitative analysis of benzo(a)pyrene in vegetable oil samples". Analyst. 136 (13): 2802–10. Bibcode:2011Ana...136.2802L. doi:10.1039/c0an00751j. PMID 21594244.
11. Dixit, L.; Ram, S. (1985). "Quantitative Analysis by Derivative Electronic Spectroscopy". Applied Spectroscopy Reviews. 21 (4): 311–418. Bibcode:1985ApSRv..21..311D. doi:10.1080/05704928508060434.
12. Giese, Arthur T.; French, C. Stacey (1955). "The Analysis of Overlapping Spectral Absorption Bands by Derivative Spectrophotometry". Appl. Spectrosc. 9 (2): 78–96. Bibcode:1955ApSpe...9...78G. doi:10.1366/000370255774634089. S2CID 97784067.
13. Madden, Hannibal H. (1978). "Comments on the Savitzky–Golay convolution method for least-squares fit smoothing and differentiation of digital data" (PDF). Anal. Chem. 50 (9): 1383–6. doi:10.1021/ac50031a048.
14. Gans 1992, pp. 153–7, "Repeated smoothing and differentiation".
15. Gorry, A. (1990). "General least-squares smoothing and differentiation by the convolution (Savitzky–Golay) method". Analytical Chemistry. 62 (6): 570–3. doi:10.1021/ac00205a007.
16. Schmid, Michael; Rath, David; Diebold, Ulrike (2022). "Why and how Savitzky–Golay filters should be replaced". ACS Measurement Science Au. 2 (2): 185–196. doi:10.1021/acsmeasuresciau.1c00054.
17. Thornley, David J. "Anisotropic Multidimensional Savitzky Golay kernels for Smoothing, Differentiation and Reconstruction" (PDF). Technical report 2066/8. Imperial College, Department of Computing.
18. Ratzlaff, Kenneth L.; Johnson, Jean T. (1989). "Computation of two-dimensional polynomial least-squares convolution smoothing integers". Anal. Chem. 61 (11): 1303–5. doi:10.1021/ac00186a026.
19. Krumm, John. "Savitzky–Golay filters for 2D Images". Microsoft Research, Redmond.
20. Nikitas and Pappa-Louisi (2000). "Comments on the two-dimensional smoothing of data". Analytica Chimica Acta. 415 (1–2): 117–125. doi:10.1016/s0003-2670(00)00861-8.
21. Shekhar, Chandra (2015). "On Simplified Application of Multidimensional Savitzky–Golay Filters and Differentiators". Progress in Applied Mathematics in Science and Engineering. AIP Conference Proceedings. Vol. 1705. p. 020014. Bibcode:2016AIPC.1705b0014S. doi:10.1063/1.4940262.
22. Chandra, Shekhar (2017-08-02). "Advanced Convolution Coefficient Calculator". Zenodo. doi:10.5281/zenodo.835283.
23. Chandra, Shekhar (2018-06-02). "Precise Convolution Coefficient Calculator". Zenodo. doi:10.5281/zenodo.1257898.
24. Shekhar, Chandra. "Convolution Coefficient Database for Multidimensional Least-Squares Filters".
25. Gans 1992, Appendix 7.
26. Ziegler, Horst (1981). "Properties of Digital Smoothing Polynomial (DISPO) Filters". Applied Spectroscopy. 35 (1): 88–92. Bibcode:1981ApSpe..35...88Z. doi:10.1366/0003702814731798. S2CID 97777604.
27. Luo, Jianwen; Ying, Kui; He, Ping; Bai, Jing (2005). "Properties of Savitzky–Golay digital differentiators" (PDF). Digital Signal Processing. 15 (2): 122–136. doi:10.1016/j.dsp.2004.09.008.
28. Gans, Peter; Gill, J. Bernard (1983). "Examination of the Convolution Method for Numerical Smoothing and Differentiation of Spectroscopic Data in Theory and in Practice". Applied Spectroscopy. 37 (6): 515–520. Bibcode:1983ApSpe..37..515G. doi:10.1366/0003702834634712. S2CID 97649068.
29. Gans 1992, p. 153.
30. Procter, Andrew; Sherwood, Peter M. A. (1980). "Smoothing of digital x-ray photoelectron spectra by an extended sliding least-squares approach". Anal. Chem. 52 (14): 2315–21. doi:10.1021/ac50064a018.
31. Gans 1992, p. 207.
32. Bromba, Manfred U. A.; Ziegler, Horst (1981). "Application hints for Savitzky–Golay digital smoothing filters". Anal. Chem. 53 (11): 1583–6. doi:10.1021/ac00234a011.
33. Marchand, P.; Marmet, L. (1983). "Binomial smoothing filter: A way to avoid some pitfalls of least-squares polynomial smoothing". Review of Scientific Instruments. 54 (8): 1034–1041. doi:10.1063/1.1137498.
34. Gans 1992, p. 157.

Gans, Peter (1992). Data Fitting in the Chemical Sciences: By the Method of Least Squares. Wiley. ISBN 9780471934127.

External links

Wikimedia Commons has media related to Savitzky–Golay filter.

- Advanced Convolution Coefficient Calculator (ACCC) for multidimensional least-squares filters
- Savitzky–Golay filter in "Fundamentals of Statistics": a wider range of coefficients for a range of data-set sizes, orders of fit, and offsets from the centre point

Retrieved from "https://en.wikipedia.org/w/index.php?title=Savitzky–Golay_filter&oldid=1132105472"
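The worked 5-point cubic examples in the appendix above can be checked numerically. The sketch below (NumPy; the cubic test function is an arbitrary choice, not taken from the article) applies the tabulated coefficients to samples of a cubic with unit spacing h = 1, for which the smoothed value and the first two derivatives at the central point should be exact:

```python
import numpy as np

def f(x):                      # an arbitrary cubic test function
    return x**3 - 2.0 * x

x = np.arange(-2.0, 3.0)       # five equally spaced points, h = 1
y = f(x)

# Tabulated 5-point convolution coefficients for a cubic fit.
smooth = np.array([-3, 12, 17, 12, -3]) / 35.0   # smoothing
d1 = np.array([1, -8, 0, 8, -1]) / 12.0          # 1st derivative (h = 1)
d2 = np.array([2, -1, -2, -1, 2]) / 7.0          # 2nd derivative (h = 1)

print(np.isclose(np.dot(smooth, y), f(0.0)))   # True: value reproduced
print(np.isclose(np.dot(d1, y), -2.0))         # True: f'(0) = 3*0^2 - 2
print(np.isclose(np.dot(d2, y), 0.0))          # True: f''(0) = 6*0
```

Because the fitted polynomial has the same degree as the test function, the convolution reproduces the function and its derivatives exactly at the central point; for noisy data the same dot products give least-squares estimates instead.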