
Algorithmic information theory

Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory.[1] According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."[2]

Besides the formalization of a universal measure for irreducible information content of computably generated objects, some main achievements of AIT were to show that: in fact algorithmic complexity follows (in the self-delimited case) the same inequalities (except for a constant[3]) that entropy does, as in classical information theory;[1] randomness is incompressibility;[4] and, within the realm of randomly generated software, the probability of occurrence of any data structure is of the order of 2 raised to the negative length of the shortest program that generates it when run on a universal machine.[5]

AIT principally studies measures of irreducible information content of strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers. One of the main motivations behind AIT is the study of the information carried by mathematical objects, as in the field of metamathematics, e.g., as shown by the incompleteness results mentioned below. Other main motivations came from surpassing the limitations of classical information theory for single, fixed objects, formalizing the concept of randomness, and making meaningful probabilistic inferences without prior knowledge of the probability distribution (e.g., whether it is independent and identically distributed, Markovian, or even stationary). In this way, AIT is founded upon three main mathematical concepts and the relations between them: algorithmic complexity, algorithmic randomness, and algorithmic probability.[6][4]

Overview

Algorithmic information theory principally studies complexity measures on strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers.

Informally, from the point of view of algorithmic information theory, the information content of a string is equivalent to the length of the most-compressed possible self-contained representation of that string. A self-contained representation is essentially a program—in some fixed but otherwise irrelevant universal programming language—that, when run, outputs the original string.

From this point of view, a 3000-page encyclopedia actually contains less information than 3000 pages of completely random letters, despite the fact that the encyclopedia is much more useful. This is because to reconstruct the entire sequence of random letters, one must know what every single letter is. On the other hand, if every vowel were removed from the encyclopedia, someone with reasonable knowledge of the English language could reconstruct it, just as one could likely reconstruct the sentence "Ths sntnc hs lw nfrmtn cntnt" from the context and consonants present.
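This intuition can be illustrated, very roughly, with an ordinary compressor: a real compressor only gives an upper bound on the length of the shortest description, but it shows the same qualitative contrast between regular and random data. A minimal sketch in Python using the standard zlib module:

```python
import os
import zlib

# Highly regular data compresses to a small fraction of its length ...
regular = b"ab" * 5000                    # 10,000 bytes of repeated structure
print(len(zlib.compress(regular)))        # on the order of a few dozen bytes

# ... whereas random bytes are essentially incompressible.
random_data = os.urandom(10000)
print(len(zlib.compress(random_data)))    # close to (or slightly above) 10,000
```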

Unlike classical information theory, algorithmic information theory gives formal, rigorous definitions of a random string and a random infinite sequence that do not depend on physical or philosophical intuitions about nondeterminism or likelihood. (The set of random strings depends on the choice of the universal Turing machine used to define Kolmogorov complexity, but any choice gives identical asymptotic results because the Kolmogorov complexity of a string is invariant up to an additive constant depending only on the choice of universal Turing machine. For this reason the set of random infinite sequences is independent of the choice of universal machine.)

Some of the results of algorithmic information theory, such as Chaitin's incompleteness theorem, appear to challenge common mathematical and philosophical intuitions. Most notable among these is the construction of Chaitin's constant Ω, a real number that expresses the probability that a self-delimiting universal Turing machine will halt when its input is supplied by flips of a fair coin (sometimes thought of as the probability that a random computer program will eventually halt). Although Ω is easily defined, in any consistent axiomatizable theory one can only compute finitely many digits of Ω, so it is in some sense unknowable, providing an absolute limit on knowledge that is reminiscent of Gödel's incompleteness theorems. Although the digits of Ω cannot be determined, many properties of Ω are known; for example, it is an algorithmically random sequence and thus its binary digits are evenly distributed (in fact it is normal).
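For a prefix-free (self-delimiting) universal machine U, the halting probability is the sum, over all halting programs p, of the probability 2^{-|p|} that p is produced by fair coin flips:

    \Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}

Because no valid program is a proper prefix of another, Kraft's inequality guarantees that this sum converges to a value between 0 and 1.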

History

Algorithmic information theory was founded by Ray Solomonoff,[7] who published the basic ideas on which the field is based as part of his invention of algorithmic probability, a way to overcome serious problems associated with the application of Bayes' rules in statistics. He first described his results at a conference at Caltech in 1960,[8] and in a February 1960 report, "A Preliminary Report on a General Theory of Inductive Inference."[9] Algorithmic information theory was later developed independently by Andrey Kolmogorov in 1965 and by Gregory Chaitin around 1966.

There are several variants of Kolmogorov complexity or algorithmic information; the most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974). Per Martin-Löf also contributed significantly to the information theory of infinite sequences. An axiomatic approach to algorithmic information theory based on the Blum axioms (Blum 1967) was introduced by Mark Burgin in a paper presented for publication by Andrey Kolmogorov (Burgin 1982). The axiomatic approach encompasses other approaches to algorithmic information theory: different measures of algorithmic information can be treated as particular cases of axiomatically defined measures. Instead of proving similar theorems, such as the basic invariance theorem, for each particular measure, all such results can be deduced from one corresponding theorem proved in the axiomatic setting. This is a general advantage of the axiomatic approach in mathematics. The axiomatic approach to algorithmic information theory was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).

Precise definitions

A binary string is said to be random if the Kolmogorov complexity of the string is at least the length of the string. A simple counting argument shows that some strings of any given length are random, and almost all strings are very close to being random. Since Kolmogorov complexity depends on a fixed choice of universal Turing machine (informally, a fixed "description language" in which the "descriptions" are given), the collection of random strings does depend on the choice of fixed universal machine. Nevertheless, the collection of random strings, as a whole, has similar properties regardless of the fixed machine, so one can (and often does) talk about the properties of random strings as a group without having to first specify a universal machine.
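The counting argument is short: there are 2^n binary strings of length n, but strictly fewer descriptions (programs) of length less than n, so at least one string of each length is incompressible:

    \#\{p : |p| < n\} \le \sum_{k=0}^{n-1} 2^{k} = 2^{n} - 1 < 2^{n} = \#\{x : |x| = n\}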

An infinite binary sequence is said to be random if, for some constant c, for all n, the Kolmogorov complexity of the initial segment of length n of the sequence is at least n − c. It can be shown that almost every sequence (from the point of view of the standard measure, the "fair coin" or Lebesgue measure, on the space of infinite binary sequences) is random. Also, since the Kolmogorov complexities relative to two different universal machines differ by at most a constant, the collection of random infinite sequences does not depend on the choice of universal machine (in contrast to the collection of random finite strings). This definition of randomness is usually called Martin-Löf randomness, after Per Martin-Löf, to distinguish it from other similar notions of randomness. It is also sometimes called 1-randomness to distinguish it from other, stronger notions of randomness (2-randomness, 3-randomness, etc.). In addition to Martin-Löf randomness, there are also notions such as recursive randomness, Schnorr randomness, and Kurtz randomness. Yongge Wang showed[10] that all of these randomness concepts are different.
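In symbols, writing x↾n for the first n bits of the sequence x and using the prefix-free complexity K (under which this characterisation is the Levin–Schnorr theorem):

    x \text{ is Martin-Löf random} \iff \exists c\ \forall n:\ K(x \upharpoonright n) \ge n - c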

(Related definitions can be made for alphabets other than the set {0, 1}.)

Specific sequence

Algorithmic information theory (AIT) is the information theory of individual objects, using computer science, and concerns itself with the relationship between computation, information, and randomness.

The information content or complexity of an object can be measured by the length of its shortest description. For instance the string

"0101010101010101010101010101010101010101010101010101010101010101"

has the short description "32 repetitions of '01'", while

"1100100001100001110111101110110011111010010000100101011110010110"

presumably has no simple description other than writing down the string itself.
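The contrast can be made concrete with a toy "description language": a short generator suffices for the regular string, while the irregular one is, as far as anyone knows, only describable by quoting it verbatim. A minimal, purely illustrative sketch in Python:

```python
# A short program that outputs the regular 64-character string:
print("01" * 32)

# For the irregular string, no shorter generator is known; the obvious
# "program" simply quotes the data verbatim:
print("1100100001100001110111101110110011111010010000100101011110010110")
```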

More formally, the algorithmic complexity (AC) of a string x is defined as the length of the shortest program that computes or outputs x, where the program is run on some fixed reference universal computer.
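In standard notation, with U a fixed reference universal computer, this reads:

    K_U(x) = \min\{\, |p| : U(p) = x \,\}

By the invariance theorem, switching to another universal machine V changes K by at most an additive constant that depends only on U and V, not on x, which is why the choice of reference machine is usually left implicit.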

A closely related notion is the probability that a universal computer outputs some string x when fed with a program chosen at random. This algorithmic "Solomonoff" probability (AP) is key in addressing the old philosophical problem of induction in a formal way.
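For a prefix-free universal machine U, the algorithmic probability of a string x sums the coin-flip probabilities of all programs that output x:

    m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}

Levin's coding theorem connects the two measures: −log₂ m(x) = K(x) + O(1), so the most probable explanation of a string is, up to a constant, its shortest program.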

The major drawback of AC and AP is their incomputability. Time-bounded "Levin" complexity penalizes a slow program by adding the logarithm of its running time to its length. This leads to computable variants of AC and AP, and universal "Levin" search (US) solves all inversion problems in optimal time (apart from some unrealistically large multiplicative constant).
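One common form of Levin's time-bounded complexity Kt (notation varies slightly between sources) is:

    Kt(x) = \min_p \{\, |p| + \log_2 t(p) : U(p) = x \text{ in } t(p) \text{ steps} \,\}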

AC and AP also allow a formal and rigorous definition of the randomness of individual strings that does not depend on physical or philosophical intuitions about non-determinism or likelihood. Roughly, a string is algorithmically "Martin-Löf" random (AR) if it is incompressible in the sense that its algorithmic complexity is equal to its length.

AC, AP, and AR are the core sub-disciplines of AIT, but AIT branches into many other areas. It serves as the foundation of the Minimum Description Length (MDL) principle, can simplify proofs in computational complexity theory, has been used to define a universal similarity metric between objects, and resolves the Maxwell's demon paradox, among other applications.
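For instance, the universal similarity metric is based on the (uncomputable) normalized information distance; in practice it is approximated by the normalized compression distance, which replaces Kolmogorov complexity with the output length of a real compressor. A minimal sketch in Python using zlib (the function name ncd is illustrative, not from any particular library):

```python
import os
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    (uncomputable) normalized information distance, using zlib as the
    compressor C in  NCD = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar inputs yield a small distance; an unrelated random input a larger one.
print(ncd(b"the quick brown fox" * 50, b"the quick brown fox" * 49))
print(ncd(b"the quick brown fox" * 50, os.urandom(1000)))
```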

See also

  • Algorithmic probability – mathematical method of assigning a prior probability to a given observation
  • Algorithmically random sequence – sequence of binary digits that appears random to any algorithm running on a universal Turing machine
  • Chaitin's constant – halting probability of a random computer program
  • Computational indistinguishability – relationship between two families of distributions
  • Distribution ensemble – sequence of probability distributions or random variables
  • Epistemology – branch of philosophy concerning knowledge
  • Inductive reasoning – method of logical reasoning
  • Inductive probability – determining the probability of future events based on past events
  • Invariance theorem
  • Kolmogorov complexity – measure of algorithmic complexity
  • Minimum description length – model selection principle
  • Minimum message length – formal information-theoretic restatement of Occam's razor
  • Pseudorandom ensemble
  • Pseudorandom generator
  • Simplicity theory – cognitive theory
  • Shannon's source coding theorem – establishes the limits to possible data compression
  • Solomonoff's theory of inductive inference – mathematical formalization of Occam's razor using Bayesian inference

References

  1. ^ a b Chaitin 1975
  2. ^ "Algorithmic Information Theory". Archived from the original on January 23, 2016. Retrieved May 3, 2010.
  3. ^ or, for the mutual algorithmic information, informing the algorithmic complexity of the input along with the input itself.
  4. ^ a b Calude 2013
  5. ^ Downey, Rodney G.; Hirschfeldt, Denis R. (2010). Algorithmic Randomness and Complexity. Springer. ISBN 978-0-387-68441-3.
  6. ^ Li & Vitanyi 2013
  7. ^ Vitanyi, P. "Obituary: Ray Solomonoff, Founding Father of Algorithmic Information Theory"
  8. ^ Paper from conference on "Cerebral Systems and Computers", California Institute of Technology, February 8–11, 1960, cited in "A Formal Theory of Inductive Inference," Part 1, 1964, p. 1.
  9. ^ Solomonoff, R., "A Preliminary Report on a General Theory of Inductive Inference", Report V-131, Zator Co., Cambridge, Ma., (November Revision of February 4, 1960 report.)
  10. ^ Wang, Yongge (1996). Randomness and Complexity (PDF) (PhD). University of Heidelberg.

External links

  • Algorithmic Information Theory at Scholarpedia
  • Chaitin's account of the history of AIT

Further reading

  • Blum, M. (1967). "On the Size of Machines". Information and Control. 11 (3): 257–265. doi:10.1016/S0019-9958(67)90546-3.
  • Blum, M. (1967). "A Machine-independent Theory of Complexity of Recursive Functions". Journal of the ACM. 14 (2): 322–336. doi:10.1145/321386.321395. S2CID 15710280.
  • Burgin, M. (1982). "Generalized Kolmogorov complexity and duality in theory of computations". Soviet Math. Dokl. 25 (3): 19–23.
  • Burgin, M. (1990). "Generalized Kolmogorov Complexity and other Dual Complexity Measures". Cybernetics. 26 (4): 481–490. doi:10.1007/BF01068189. S2CID 121736453.
  • Burgin, M. (2005). Super-recursive algorithms. Monographs in computer science. Springer. ISBN 9780387955698.
  • Calude, C.S. (1996). "Algorithmic information theory: Open problems" (PDF). J. UCS. 2 (5): 439–441. Archived from the original (PDF) on November 28, 2021. Retrieved June 30, 2019.
  • Calude, C.S. (2013). Information and Randomness: An Algorithmic Perspective. Texts in Theoretical Computer Science. An EATCS Series (2nd ed.). Springer-Verlag. ISBN 9783662049785.
  • Chaitin, G.J. (1966). "On the Length of Programs for Computing Finite Binary Sequences". Journal of the Association for Computing Machinery. 13 (4): 547–569. doi:10.1145/321356.321363. S2CID 207698337.
  • Chaitin, G.J. (1969). "On the Simplicity and Speed of Programs for Computing Definite Sets of Natural Numbers". Journal of the Association for Computing Machinery. 16 (3): 407–412. doi:10.1145/321526.321530. S2CID 12584692.
  • Chaitin, G.J. (1975). "A Theory of Program Size Formally Identical to Information Theory". Journal of the Association for Computing Machinery. 22 (3): 329–340. doi:10.1145/321892.321894. S2CID 14133389.
  • Chaitin, G.J. (1977). "Algorithmic information theory". IBM Journal of Research and Development. 21 (4): 350–9. doi:10.1147/rd.214.0350.
  • Chaitin, G.J. (1987). Algorithmic Information Theory. Cambridge University Press. ISBN 9780521343060.
  • Kolmogorov, A.N. (1965). "Three approaches to the definition of the quantity of information". Problems of Information Transmission (1): 3–11.
  • Kolmogorov, A.N. (1968). "Logical basis for information theory and probability theory". IEEE Trans. Inf. Theory. IT-14 (5): 662–4. doi:10.1109/TIT.1968.1054210. S2CID 11402549.
  • Levin, L. A. (1974). "Laws of information (nongrowth) and aspects of the foundation of probability theory". Problems of Information Transmission. 10 (3): 206–210.
  • Levin, L.A. (1976). "Various Measures of Complexity for Finite Objects (Axiomatic Description)". Soviet Math. Dokl. 17: 522–526.
  • Li, M.; Vitanyi, P. (2013). An Introduction to Kolmogorov Complexity and its Applications (2nd ed.). Springer-Verlag. ISBN 9781475726060.
  • Solomonoff, R.J. (1960). A Preliminary Report on a General Theory of Inductive Inference (PDF) (Technical report). Cambridge, Mass: Zator Company. ZTB-138.
  • Solomonoff, R.J. (1964). "A Formal Theory of Inductive Inference". Information and Control. 7 (1): 1–22. doi:10.1016/S0019-9958(64)90223-2.
  • Solomonoff, R.J. (1964). "A Formal Theory of Inductive Inference". Information and Control. 7 (2): 224–254. doi:10.1016/S0019-9958(64)90131-7.
  • Solomonoff, R.J. (2009). Emmert-Streib, F.; Dehmer, M. (eds.). Algorithmic Probability: Theory and Applications, Information Theory and Statistical Learning. Springer. ISBN 978-0-387-84815-0.
  • van Lambalgen, M. (1989). "Algorithmic Information Theory" (PDF). Journal of Symbolic Logic. 54 (4): 1389–1400. doi:10.1017/S0022481200041153. S2CID 250348327.
  • Zurek, W.H. (2018) [1991]. "Algorithmic Information Content, Church–Turing Thesis, physical entropy, and Maxwell's demon". In Complexity, Entropy and the Physics of Information. Addison-Wesley. pp. 73–89. ISBN 9780429982514.
  • Zvonkin, A.K.; Levin, L.A. (1970). "The Complexity of Finite Objects and the Development of the Concepts of Information and Randomness by Means of the Theory of Algorithms". Russian Mathematical Surveys. 25 (6): 83–124. Bibcode:1970RuMaS..25...83Z. doi:10.1070/RM1970v025n06ABEH001269. S2CID 250850390.
