
Computational learning theory

In computer science, computational learning theory (or just learning theory) is a subfield of artificial intelligence devoted to studying the design and analysis of machine learning algorithms.[1]

Overview

Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples, including samples that have not been seen previously by the algorithm. The goal of the supervised learning algorithm is to optimize some measure of performance such as minimizing the number of mistakes made on new samples.
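To make this setting concrete, the following minimal Python sketch induces a classifier from labeled samples and then labels a sample it has never seen. The mushroom-style feature vectors and the nearest-neighbour rule are arbitrary illustrative choices made here, not details taken from this article.

    # Each sample is a feature vector describing a mushroom; each label says whether it is edible.
    labeled_samples = [
        ((1, 0, 3), "edible"),
        ((1, 1, 2), "edible"),
        ((0, 1, 7), "poisonous"),
        ((0, 0, 8), "poisonous"),
    ]

    def induce_classifier(samples):
        """Induce a classifier (a function from sample to label) from labeled data,
        here by the 1-nearest-neighbour rule under squared Euclidean distance."""
        def classify(x):
            def distance(sample):
                features, _label = sample
                return sum((a - b) ** 2 for a, b in zip(features, x))
            _features, label = min(samples, key=distance)
            return label
        return classify

    classifier = induce_classifier(labeled_samples)
    print(classifier((1, 0, 2)))  # a previously unseen sample; prints "edible"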

In addition to performance bounds, computational learning theory studies the time complexity and feasibility of learning.[citation needed] In computational learning theory, a computation is considered feasible if it can be done in polynomial time.[citation needed] There are two kinds of time complexity results:

  • Positive results – Showing that a certain class of functions is learnable in polynomial time (an illustrative sample-complexity bound is sketched after this list).
  • Negative results – Showing that certain classes cannot be learned in polynomial time.
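As an illustration of what a positive result can look like, a standard textbook bound (not stated in this article, and given here only as a hedged example for the realizable case with a finite hypothesis class H) says that any learner returning a hypothesis consistent with

    m \ge \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)

labeled samples outputs, with probability at least 1 - \delta, a hypothesis whose error is at most \varepsilon. When \ln|H| grows only polynomially in the problem size and each hypothesis can be evaluated in polynomial time, this yields learning in polynomial time.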

Negative results often rely on commonly believed but as yet unproven assumptions,[citation needed] such as:

  • Computational complexity – P ≠ NP (the P versus NP problem);
  • Cryptographic – One-way functions exist.

There are several different approaches to computational learning theory based on making different assumptions about the inference principles used to generalise from limited data. This includes different definitions of probability (see frequency probability, Bayesian probability) and different assumptions on the generation of samples.[citation needed] The different approaches include:

  • Exact learning, proposed by Dana Angluin;[citation needed]
  • Probably approximately correct learning (PAC learning), proposed by Leslie Valiant[2] (a standard formulation is sketched below);
  • VC theory, proposed by Vladimir Vapnik and Alexey Chervonenkis;[3]
  • Bayesian inference;[citation needed]
  • Algorithmic learning theory, from the work of E. Mark Gold;[4]
  • Online machine learning, from the work of Nick Littlestone.[citation needed]
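As a concrete example of one of these approaches, PAC learning[2] can be stated, in one standard formulation (paraphrased here rather than quoted from this article), as the requirement that for every target concept in the class, every distribution D over the samples, and every \varepsilon, \delta \in (0, 1), the learner A, given m labeled samples S drawn i.i.d. from D with m polynomial in 1/\varepsilon and 1/\delta, satisfies

    \Pr_{S \sim D^m}\left[\mathrm{err}_D\big(A(S)\big) \le \varepsilon\right] \ge 1 - \delta,

where \mathrm{err}_D(h) is the probability that hypothesis h mislabels a fresh sample drawn from D.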

While its primary goal is to understand learning abstractly, computational learning theory has led to the development of practical algorithms. For example, PAC theory inspired boosting, VC theory led to support vector machines, and Bayesian inference led to belief networks.
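The following short Python sketch illustrates that lineage with off-the-shelf implementations; scikit-learn, the synthetic dataset, and all parameter values are illustrative choices made here and are not mentioned in the article.

    # Boosting (inspired by PAC theory) and a support vector machine (from VC theory)
    # trained on the same synthetic classification task.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for name, clf in [("boosting (AdaBoost)", AdaBoostClassifier(random_state=0)),
                      ("support vector machine", SVC(kernel="rbf"))]:
        clf.fit(X_train, y_train)  # induce a classifier from labeled samples
        print(name, "test accuracy:", clf.score(X_test, y_test))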

See also

  • Grammar induction
  • Information theory
  • Stability (learning theory)
  • Error tolerance (PAC learning)

References

  1. ^ "ACL - Association for Computational Learning".
  2. ^ Valiant, Leslie (1984). "A Theory of the Learnable" (PDF). Communications of the ACM. 27 (11): 1134–1142.
  3. ^ Vapnik, V.; Chervonenkis, A. (1971). "On the uniform convergence of relative frequencies of events to their probabilities" (PDF). Theory of Probability and Its Applications. 16 (2): 264–280.
  4. ^ Gold, E. Mark (1967). "Language identification in the limit" (PDF). Information and Control. 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.

Surveys

  • Angluin, D. 1992. Computational learning theory: Survey and selected bibliography. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing (May 1992), pages 351–369. http://portal.acm.org/citation.cfm?id=129712.129746
  • D. Haussler. Probably approximately correct learning. In AAAI-90 Proceedings of the Eighth National Conference on Artificial Intelligence, Boston, MA, pages 1101–1108. American Association for Artificial Intelligence, 1990. http://citeseer.ist.psu.edu/haussler90probably.html

VC dimension

  • V. Vapnik and A. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, 16(2):264–280, 1971.

Feature selection

  • A. Dhagat and L. Hellerstein. PAC learning with irrelevant attributes. In Proceedings of the IEEE Symposium on Foundations of Computer Science, 1994. http://citeseer.ist.psu.edu/dhagat94pac.html

Inductive inference

  • Gold, E. Mark (1967). "Language identification in the limit" (PDF). Information and Control. 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.

Optimal O notation learning

  • Oded Goldreich, Dana Ron. On universal learning algorithms. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.47.2224

Negative results

  • M. Kearns and Leslie Valiant. 1989. Cryptographic limitations on learning boolean formulae and finite automata. In Proceedings of the 21st Annual ACM Symposium on Theory of Computing, pages 433–444, New York. ACM. http://citeseer.ist.psu.edu/kearns89cryptographic.html

Boosting (machine learning)

  • Robert E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990. http://citeseer.ist.psu.edu/schapire90strength.html

Occam learning

  • Blumer, A.; Ehrenfeucht, A.; Haussler, D.; Warmuth, M. K. Occam's razor. Inf. Proc. Lett. 24, 377–380, 1987.
  • Blumer, A.; Ehrenfeucht, A.; Haussler, D.; Warmuth, M. K. Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM, 36(4):929–965, 1989.

Probably approximately correct learning

  • L. Valiant. A Theory of the Learnable. Communications of the ACM, 27(11):1134–1142, 1984.

Error tolerance

  • Michael Kearns and Ming Li. Learning in the presence of malicious errors. SIAM Journal on Computing, 22(4):807–837, August 1993. http://citeseer.ist.psu.edu/kearns93learning.html
  • Kearns, M. (1993). Efficient noise-tolerant learning from statistical queries. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, pages 392–401. http://citeseer.ist.psu.edu/kearns93efficient.html

Equivalence

  • D. Haussler, M. Kearns, N. Littlestone, and M. Warmuth. Equivalence of models for polynomial learnability. Proc. 1st ACM Workshop on Computational Learning Theory (1988), pages 42–55.
  • Pitt, L.; Warmuth, M. K. (1990). "Prediction-Preserving Reducibility". Journal of Computer and System Sciences. 41 (3): 430–467. doi:10.1016/0022-0000(90)90028-J.

A description of some of these publications is given at important publications in machine learning.

Distribution learning theory

External links

  • Basics of Bayesian inference
