
Kolmogorov's inequality

In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound.

Statement of the inequality

Let X1, ..., Xn : Ω → R be independent random variables defined on a common probability space (Ω, F, Pr), with expected value E[Xk] = 0 and variance Var[Xk] < +∞ for k = 1, ..., n. Then, for each λ > 0,

$$\Pr\left(\max_{1\le k\le n} |S_k| \ge \lambda\right) \le \frac{1}{\lambda^2}\operatorname{Var}(S_n) \equiv \frac{1}{\lambda^2}\sum_{k=1}^n \operatorname{Var}(X_k) = \frac{1}{\lambda^2}\sum_{k=1}^n \operatorname{E}[X_k^2],$$

where Sk = X1 + ... + Xk.

The usefulness of this result is that it bounds the worst-case deviation of a random walk at any intermediate time in terms of its variance at the end of the time interval.
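The bound is easy to check empirically. The following is a minimal Monte Carlo sketch, not part of the original article; the uniform increments, the horizon n = 50, and the threshold λ = 5 are arbitrary illustrative choices, and it assumes NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50            # number of increments
lam = 5.0         # threshold lambda
trials = 100_000  # Monte Carlo sample paths

# X_k ~ Uniform(-1, 1): mean 0, variance 1/3 each, independent
X = rng.uniform(-1.0, 1.0, size=(trials, n))
S = np.cumsum(X, axis=1)          # partial sums S_1, ..., S_n per path

max_abs = np.abs(S).max(axis=1)   # max_{1 <= k <= n} |S_k| per path
lhs = np.mean(max_abs >= lam)     # empirical probability of exceeding lambda
rhs = n * (1.0 / 3.0) / lam**2    # Var(S_n) / lambda^2

print(f"P(max |S_k| >= {lam}) ~ {lhs:.4f}  <=  bound {rhs:.4f}")
assert lhs <= rhs
```

With these parameters Var(S_n)/λ² ≈ 0.667, so the bound is loose here; Kolmogorov's inequality is notable for holding uniformly over the whole path, not for tightness in any one example.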

Proof

The following argument employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence $S_1, S_2, \dots, S_n$ is a martingale. Define $(Z_i)_{i=0}^n$ as follows. Let $Z_0 = 0$, and

$$Z_{i+1} = \begin{cases} S_{i+1} & \text{if } \displaystyle\max_{1\le j\le i} |S_j| < \lambda \\ Z_i & \text{otherwise} \end{cases}$$

for all $i$. Then $(Z_i)_{i=0}^n$ is also a martingale.
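The construction above simply freezes the path once its running maximum reaches λ. A small sketch of that idea (the function name `stopped_sequence` is illustrative, not from the article):

```python
def stopped_sequence(S, lam):
    """Given partial sums S_1, ..., S_n, return Z_1, ..., Z_n where
    Z_{i+1} = S_{i+1} as long as max_{1<=j<=i} |S_j| < lam,
    and Z_{i+1} = Z_i afterwards (with Z_0 = 0)."""
    Z = []
    z = 0.0              # Z_0 = 0
    running_max = 0.0    # max_{1<=j<=i} |S_j|; max over the empty set is 0
    for s in S:
        if running_max < lam:
            z = s        # not yet stopped: Z_{i+1} = S_{i+1}
        # else: stopped, z keeps its previous value (Z_{i+1} = Z_i)
        Z.append(z)
        running_max = max(running_max, abs(s))
    return Z

# Example: lam = 2; |S_j| first reaches 2 at step 2 (S_2 = 3),
# so Z still takes the value S_2 = 3 and freezes there afterwards.
print(stopped_sequence([1, 3, -2, 4], lam=2.0))   # [1, 3, 3, 3]
```

The point of the construction is that $|Z_n| \ge \lambda$ exactly when $\max_k |S_k| \ge \lambda$, which is what lets Chebyshev's inequality applied to $Z_n$ control the maximum of the whole path.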

For any martingale $(M_i)$ with $M_0 = 0$, we have that

$$\begin{aligned}
\sum_{i=1}^n \operatorname{E}\left[(M_i - M_{i-1})^2\right] &= \sum_{i=1}^n \operatorname{E}\left[M_i^2 - 2 M_i M_{i-1} + M_{i-1}^2\right] \\
&= \sum_{i=1}^n \operatorname{E}\left[M_i^2 - 2(M_{i-1} + M_i - M_{i-1}) M_{i-1} + M_{i-1}^2\right] \\
&= \sum_{i=1}^n \operatorname{E}\left[M_i^2 - M_{i-1}^2\right] - 2\operatorname{E}\left[M_{i-1}(M_i - M_{i-1})\right] \\
&= \operatorname{E}[M_n^2] - \operatorname{E}[M_0^2] = \operatorname{E}[M_n^2],
\end{aligned}$$

where the cross terms $\operatorname{E}[M_{i-1}(M_i - M_{i-1})]$ vanish by the martingale property.

Applying this identity to the martingales $(Z_i)_{i=0}^n$ and $(S_i)_{i=0}^n$, we have

$$\begin{aligned}
\Pr\left(\max_{1\le i\le n} |S_i| \ge \lambda\right) &= \Pr(|Z_n| \ge \lambda) \\
&\le \frac{1}{\lambda^2}\operatorname{E}[Z_n^2] = \frac{1}{\lambda^2}\sum_{i=1}^n \operatorname{E}\left[(Z_i - Z_{i-1})^2\right] \\
&\le \frac{1}{\lambda^2}\sum_{i=1}^n \operatorname{E}\left[(S_i - S_{i-1})^2\right] = \frac{1}{\lambda^2}\operatorname{E}[S_n^2] = \frac{1}{\lambda^2}\operatorname{Var}(S_n),
\end{aligned}$$

where the first inequality follows by Chebyshev's inequality, and the second from the fact that $Z_i - Z_{i-1}$ equals either $S_i - S_{i-1}$ or $0$.


This inequality was generalized by Hájek and Rényi in 1955.

See also

  • Chebyshev's inequality
  • Etemadi's inequality
  • Landau–Kolmogorov inequality
  • Markov's inequality
  • Bernstein inequalities (probability theory)

References

  • Billingsley, Patrick (1995). Probability and Measure. New York: John Wiley & Sons, Inc. ISBN 0-471-00710-2. (Theorem 22.4)
  • Feller, William (1968) [1950]. An Introduction to Probability Theory and its Applications, Vol. 1 (Third ed.). New York: John Wiley & Sons, Inc. xviii+509. ISBN 0-471-25708-7.
  • Kahane, Jean-Pierre (1985) [1968]. Some Random Series of Functions (Second ed.). Cambridge: Cambridge University Press. pp. 29–30.


This article incorporates material from Kolmogorov's inequality on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
