
Similarities between Wiener and LMS

The least mean squares (LMS) filter solution converges to the Wiener filter solution, assuming that the unknown system is LTI and the noise is stationary. Both filters can be used to identify the impulse response of an unknown system, knowing only the original input signal and the output of the unknown system. By relaxing the error criterion from minimizing the total error over all n to reducing only the error at the current sample, the LMS algorithm can be derived from the Wiener filter.

Derivation of the Wiener filter for system identification

Given a known input signal s[n], the output of an unknown LTI system x[n] can be expressed as:

x[n] = \sum_{k=0}^{N-1} h_k s[n-k] + w[n]

where h_k are the unknown filter tap coefficients and w[n] is noise.
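
For concreteness, here is a minimal NumPy sketch of this setup; the taps h, the noise level, and the signal length are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4                                       # filter order (number of taps), assumed
h = np.array([0.9, -0.5, 0.3, 0.1])         # hypothetical unknown taps h_k
n_samples = 10_000

s = rng.standard_normal(n_samples)          # known input signal s[n]
w = 0.01 * rng.standard_normal(n_samples)   # additive noise w[n]

# x[n] = sum_{k=0}^{N-1} h_k * s[n-k] + w[n]
x = np.convolve(s, h)[:n_samples] + w
```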

The model system \hat{x}[n], using a Wiener filter solution with an order N, can be expressed as:

\hat{x}[n] = \sum_{k=0}^{N-1} \hat{h}_k s[n-k]

where \hat{h}_k are the filter tap coefficients to be determined.

The error between the model and the unknown system can be expressed as:

e[n] = x[n] - \hat{x}[n]

The total squared error E can be expressed as:

E = \sum_{n=-\infty}^{\infty} e[n]^2

E = \sum_{n=-\infty}^{\infty} \left( x[n] - \hat{x}[n] \right)^2

E = \sum_{n=-\infty}^{\infty} \left( x[n]^2 - 2 x[n] \hat{x}[n] + \hat{x}[n]^2 \right)

Use the Minimum mean-square error criterion over all of n by setting its gradient to zero:

\nabla E = 0, which is \frac{\partial E}{\partial \hat{h}_i} = 0 for all i = 0, 1, 2, \ldots, N-1

\frac{\partial E}{\partial \hat{h}_i} = \frac{\partial}{\partial \hat{h}_i} \sum_{n=-\infty}^{\infty} \left[ x[n]^2 - 2 x[n] \hat{x}[n] + \hat{x}[n]^2 \right]

Substitute the definition of \hat{x}[n]:

\frac{\partial E}{\partial \hat{h}_i} = \frac{\partial}{\partial \hat{h}_i} \sum_{n=-\infty}^{\infty} \left[ x[n]^2 - 2 x[n] \sum_{k=0}^{N-1} \hat{h}_k s[n-k] + \left( \sum_{k=0}^{N-1} \hat{h}_k s[n-k] \right)^2 \right]

Distribute the partial derivative:

\frac{\partial E}{\partial \hat{h}_i} = \sum_{n=-\infty}^{\infty} \left[ -2 x[n] s[n-i] + 2 \sum_{k=0}^{N-1} \hat{h}_k s[n-k] s[n-i] \right]

Using the definition of discrete cross-correlation:

R_{xy}(i) = \sum_{n=-\infty}^{\infty} x[n] y[n-i]

\frac{\partial E}{\partial \hat{h}_i} = -2 R_{xs}(i) + 2 \sum_{k=0}^{N-1} \hat{h}_k R_{ss}(i-k) = 0
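
As an illustration, the finite-length version of this cross-correlation can be written directly; this is a sketch, and the function name and handling of signal edges are my own choices:

```python
import numpy as np

def cross_correlation(x, y, i):
    """R_xy(i) = sum_n x[n] * y[n-i], restricted to samples where both signals exist.

    Assumes x and y are 1-D arrays of equal length.
    """
    length = len(x)
    if i >= 0:
        return float(np.dot(x[i:], y[:length - i]))
    return float(np.dot(x[:length + i], y[-i:]))
```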

Rearrange the terms:

R_{xs}(i) = \sum_{k=0}^{N-1} \hat{h}_k R_{ss}(i-k) for all i = 0, 1, 2, \ldots, N-1

This is a system of N equations with N unknowns, from which the coefficients \hat{h}_k can be determined.

The resulting coefficients of the Wiener filter can be determined in matrix form by W = R_{ss}^{-1} P_{xs}, where R_{ss} is the autocorrelation matrix of the input s and P_{xs} is the cross-correlation vector between x and s.
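
Continuing the sketch above (reusing s, x, N and the cross_correlation helper), the normal equations can be assembled and solved numerically; this is an illustrative sketch, not code from the article:

```python
# Build R_ss(i-k) as an N x N matrix and R_xs(i) as an N-vector, then solve.
R_ss = np.array([[cross_correlation(s, s, i - k) for k in range(N)]
                 for i in range(N)])
P_xs = np.array([cross_correlation(x, s, i) for i in range(N)])

h_hat = np.linalg.solve(R_ss, P_xs)
print(h_hat)   # close to the hypothetical taps h used to generate x
```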

Derivation of the LMS algorithm

By relaxing the infinite sum of the Wiener filter to just the error at time n, the LMS algorithm can be derived. Note that the notation changes in this section: x[n] now denotes the filter input, d[n] the desired signal, y[n] the filter output, and w_k the adaptive tap weights.

The squared error can be expressed as:

E = \left( d[n] - y[n] \right)^2

Using the Minimum mean-square error criterion, take the gradient:

\frac{\partial E}{\partial w} = \frac{\partial}{\partial w} \left( d[n] - y[n] \right)^2

Apply the chain rule and substitute the definition of y[n]:

\frac{\partial E}{\partial w} = 2 \left( d[n] - y[n] \right) \frac{\partial}{\partial w} \left[ d[n] - \sum_{k=0}^{N-1} w_k x[n-k] \right]

\frac{\partial E}{\partial w_i} = -2 e[n] x[n-i]

where e[n] = d[n] - y[n].

Using gradient descent and a step size \mu:

w[n+1] = w[n] - \mu \frac{\partial E}{\partial w}

which becomes, for i = 0, 1, ..., N-1,

w_i[n+1] = w_i[n] + 2 \mu \, e[n] \, x[n-i]

This is the LMS update equation.
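
A minimal sketch of this update loop in NumPy follows; remember that in this section's notation x is the filter input and d the desired signal, and the function name and default step size are illustrative assumptions:

```python
import numpy as np

def lms(x, d, N, mu=0.01):
    """Adapt N tap weights so that y[n] = sum_k w_k x[n-k] tracks d[n]."""
    w = np.zeros(N)
    for n in range(N - 1, len(x)):
        x_window = x[n - N + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-N+1]
        y = np.dot(w, x_window)               # filter output y[n]
        e = d[n] - y                          # error e[n] = d[n] - y[n]
        w = w + 2 * mu * e * x_window         # w_i <- w_i + 2*mu*e[n]*x[n-i]
    return w
```

For the system-identification problem of the previous section, x would be the known input s and d the unknown system's output; with a sufficiently small step size the weights converge toward the Wiener solution above.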

See also

  • Wiener filter
  • Least mean squares filter

References

  • J.G. Proakis and D.G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, Prentice-Hall, 4th ed., 2007.
