Theoretical computer science

Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, lambda calculus, and type theory.

An artistic representation of a Turing machine. Turing machines are used to model general computing devices.

It is difficult to circumscribe the theoretical areas precisely. The ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) provides the following description:[1]

TCS covers a wide variety of topics including algorithms, data structures, computational complexity, parallel and distributed computation, probabilistic computation, quantum computation, automata theory, information theory, cryptography, program semantics and verification, algorithmic game theory, machine learning, computational biology, computational economics, computational geometry, and computational number theory and algebra. Work in this field is often distinguished by its emphasis on mathematical technique and rigor.

History

While logical inference and mathematical proof had existed previously, in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limitations on what statements can be proved or disproved.

Information theory was added to the field with a 1948 mathematical theory of communication by Claude Shannon. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. As biological data accumulated in support of this hypothesis, with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin, proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory.[citation needed]

With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction. In other words, one could compute functions on multiple states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century that took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time, which, if implemented, would render some modern public key cryptography algorithms like RSA insecure.[citation needed]

Modern theoretical computer science research builds on these foundational developments, but includes many other mathematical and interdisciplinary problems that have been posed, as shown below:

[Image collage depicting subfields of theoretical computer science: mathematical logic (P → Q), automata theory, number theory, graph theory, computability theory, computational complexity theory (P = NP?), cryptography ("GNITIRW-TERCES", i.e. "SECRET-WRITING" reversed), type theory (Γ ⊢ x : Int), category theory, computational geometry, combinatorial optimization, and quantum computing theory.]

Topics

Algorithms

An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning.

An algorithm is an effective method expressed as a finite list[2] of well-defined instructions[3] for calculating a function.[4] Starting from an initial state and initial input (perhaps empty),[5] the instructions describe a computation that, when executed, proceeds through a finite[6] number of well-defined successive states, eventually producing "output"[7] and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.[8]
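
To make this definition concrete, here is a minimal illustrative sketch in Python (an editorial example, not drawn from the cited sources; the function names are our own). The first function is a deterministic algorithm with well-defined successive states; the second is a randomized algorithm whose state transitions incorporate random input:

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: greatest common divisor of non-negative integers."""
        while b != 0:            # each iteration is one well-defined state (a, b)
            a, b = b, a % b      # deterministic transition; b strictly decreases
        return a                 # termination at a final state yields the output

    import random

    def is_probably_prime(n: int, trials: int = 20) -> bool:
        """A randomized algorithm (Fermat test): transitions use random input."""
        if n < 4:
            return n in (2, 3)
        for _ in range(trials):
            a = random.randrange(2, n - 1)
            if pow(a, n - 1, n) != 1:
                return False     # a witness proves n composite
        return True              # no witness found: n is probably prime

    print(gcd(48, 18))            # -> 6
    print(is_probably_prime(97))  # -> True (with high probability)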

Automata theory

Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a theory in theoretical computer science, under discrete mathematics (a section of mathematics and also of computer science). Automata comes from the Greek word αὐτόματα meaning "self-acting".

Automata theory is also the study of self-operating virtual machines, used to aid the logical understanding of the process from input to output, with or without intermediate stages of computation.
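
As a minimal illustrative sketch (our own example, not from the sources above), the following Python fragment encodes the standard (Q, Σ, δ, q0, F) presentation of a deterministic finite automaton that accepts binary strings containing an even number of 1s:

    # Transition function delta of a two-state DFA over the alphabet {0, 1}.
    DELTA = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }

    def accepts(word: str, start: str = "even", accepting=("even",)) -> bool:
        state = start
        for symbol in word:                 # consume the input one symbol at a time
            state = DELTA[(state, symbol)]  # total, deterministic transition
        return state in accepting           # accept iff the machine halts in F

    print(accepts("1101"))   # -> False (three 1s)
    print(accepts("1001"))   # -> True  (two 1s)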

Coding theory

Coding theory is the study of the properties of codes and their fitness for a specific application. Codes are used for data compression, cryptography, error-correction and more recently also for network coding. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction (or detection) of errors in the transmitted data.
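
A minimal illustrative sketch of redundancy and error correction (our own example; practical systems use far more efficient codes, such as Hamming or Reed–Solomon codes) is the 3-repetition code, which corrects any single bit flip per block by majority vote:

    def encode(bits: str) -> str:
        return "".join(b * 3 for b in bits)        # add redundancy: 0 -> 000, 1 -> 111

    def decode(blocks: str) -> str:
        out = []
        for i in range(0, len(blocks), 3):
            block = blocks[i:i + 3]
            out.append("1" if block.count("1") >= 2 else "0")  # majority vote
        return "".join(out)

    sent = encode("101")        # -> '111000111'
    received = "110000111"      # the channel flips one bit in the first block
    print(decode(received))     # -> '101' (the error is corrected)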

Computational biology

Computational biology involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems.[9] The field is broadly defined and includes foundations in computer science, applied mathematics, animation, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, ecology, evolution, anatomy, neuroscience, and visualization.[10]

Computational biology is different from biological computation, which is a subfield of computer science and computer engineering using bioengineering and biology to build computers, but is similar to bioinformatics, which is an interdisciplinary science using computers to store and process biological data.

Computational complexity theory

Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm.

A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do.
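
As a small illustration of quantifying resources (an editorial sketch, with comparison counts standing in for "time"), the following Python fragment counts the comparisons two algorithms make to solve the same problem, membership in a sorted list; one cost grows linearly in n, the other logarithmically:

    def linear_search_steps(xs, target):
        steps = 0
        for x in xs:                     # scan until the target is found
            steps += 1
            if x == target:
                break
        return steps

    def binary_search_steps(xs, target):
        steps, lo, hi = 0, 0, len(xs) - 1
        while lo <= hi:                  # halve the search interval each step
            steps += 1
            mid = (lo + hi) // 2
            if xs[mid] == target:
                break
            elif xs[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return steps

    xs = list(range(1_000_000))
    print(linear_search_steps(xs, 999_999))  # -> 1,000,000 comparisons
    print(binary_search_steps(xs, 999_999))  # -> about 20 comparisons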

Computational geometry

Computational geometry is a branch of computer science devoted to the study of algorithms that can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry.

The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization.

Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), computer vision (3D reconstruction).
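
A minimal illustrative sketch of an algorithm "stated in terms of geometry" (our own example) is the orientation predicate, the primitive behind convex hulls, segment intersection and many motion-planning routines, used here in Andrew's monotone-chain convex hull:

    def orientation(p, q, r):
        """Sign of the cross product (q - p) x (r - p): > 0 for a left turn."""
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def convex_hull(points):
        """Andrew's monotone-chain convex hull, O(n log n) after sorting."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts

        def half_hull(seq):
            chain = []
            for pt in seq:
                while len(chain) >= 2 and orientation(chain[-2], chain[-1], pt) <= 0:
                    chain.pop()          # discard points making a non-left turn
                chain.append(pt)
            return chain

        lower, upper = half_hull(pts), half_hull(pts[::-1])
        return lower[:-1] + upper[:-1]   # join halves, dropping duplicate endpoints

    print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))
    # -> [(0, 0), (2, 0), (2, 2), (0, 2)]: the interior point is discarded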

Computational learning theory

Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. The algorithm takes these previously labeled samples and uses them to induce a classifier. This classifier is a function that assigns labels to samples including the samples that have never been previously seen by the algorithm. The goal of the supervised learning algorithm is to optimize some measure of performance such as minimizing the number of mistakes made on new samples.
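
A minimal illustrative sketch of this pipeline (our own example; the two-feature "mushroom" vectors are invented for illustration) induces a 1-nearest-neighbour classifier from labeled samples and applies it to samples it has never seen:

    def induce_classifier(samples):
        """samples: list of (feature_vector, label). Returns a classifier function."""
        def classify(x):
            # Label x like the closest training sample (squared Euclidean distance).
            _, label = min(samples,
                           key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))
            return label
        return classify

    training = [((1.0, 0.2), "edible"), ((0.9, 0.3), "edible"),
                ((0.1, 0.9), "poisonous"), ((0.2, 1.0), "poisonous")]
    classify = induce_classifier(training)
    print(classify((0.95, 0.25)))  # -> 'edible'
    print(classify((0.15, 0.95)))  # -> 'poisonous'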

Computational number theory

Computational number theory, also known as algorithmic number theory, is the study of algorithms for performing number theoretic computations. The best known problem in the field is integer factorization.
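
A minimal illustrative sketch (not a practical method for large inputs) is factorization by trial division; its running time grows exponentially in the bit length of n, which is exactly why factoring large integers is believed to be hard:

    def factor(n):
        """Return the prime factorization of an integer n >= 2 as a sorted list."""
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:    # divide out each prime factor completely
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)    # whatever remains is itself prime
        return factors

    print(factor(2023))  # -> [7, 17, 17]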

Cryptography

Cryptography is the practice and study of techniques for secure communication in the presence of third parties (called adversaries).[11] More generally, it is about constructing and analyzing protocols that overcome the influence of adversaries[12] and that are related to various aspects in information security such as data confidentiality, data integrity, authentication, and non-repudiation.[13] Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.

Modern cryptography is heavily based on mathematical theory and computer science practice; cryptographic algorithms are designed around computational hardness assumptions, making such algorithms hard to break in practice by any adversary. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. These schemes are therefore termed computationally secure; theoretical advances, e.g., improvements in integer factorization algorithms, and faster computing technology require these solutions to be continually adapted. There exist information-theoretically secure schemes that provably cannot be broken even with unlimited computing power—an example is the one-time pad—but these schemes are more difficult to implement than the best theoretically breakable but computationally secure mechanisms.
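
A minimal illustrative sketch of the information-theoretically secure scheme named above, the one-time pad, is given below (our own example). Security requires that the key be uniformly random, as long as the message, and never reused:

    import secrets

    def otp_xor(data: bytes, key: bytes) -> bytes:
        """XOR data with a one-time key; the same call encrypts and decrypts."""
        assert len(key) == len(data), "key must be as long as the message"
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"attack at dawn"
    key = secrets.token_bytes(len(message))  # fresh, uniformly random key
    ciphertext = otp_xor(message, key)
    print(otp_xor(ciphertext, key))          # -> b'attack at dawn' (XOR is an involution)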

Data structures

A data structure is a particular way of organizing data in a computer so that it can be used efficiently.[14][15]

Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, databases use B-tree indexes to retrieve small fractions of their data, while compilers and databases use dynamic hash tables as look-up tables.

Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and in secondary memory.
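
As a small illustration of how the choice of data structure governs efficiency (an editorial sketch; absolute timings vary by machine), the same membership queries are answered below with a list, which scans linearly per query, and with a hash-based set, which takes expected constant time per query:

    import time

    items = list(range(100_000))
    as_list, as_set = items, set(items)
    queries = range(99_800, 100_000)            # 200 lookups near the end

    t0 = time.perf_counter()
    hits = sum(q in as_list for q in queries)   # O(n) scan per query
    t1 = time.perf_counter()
    hits = sum(q in as_set for q in queries)    # O(1) expected per query
    t2 = time.perf_counter()

    print(f"list: {t1 - t0:.4f}s, set: {t2 - t1:.6f}s")  # the set is far faster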

Distributed computation

Distributed computing studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages.[16] The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components.[16] Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications, and blockchain networks like Bitcoin.

A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.[17] There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues. An important goal and challenge of distributed systems is location transparency.
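
A minimal illustrative sketch of components that coordinate purely by passing messages is given below (our own example; in-process threads and queues stand in for networked machines, which would communicate over sockets, RPC or a message broker):

    import queue
    import threading

    def worker(name, inbox, outbox):
        msg = inbox.get()                  # block until a message arrives
        print(f"{name} received: {msg}")
        outbox.put(f"ack from {name}")     # coordinate by replying

    a_inbox, b_inbox = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=("B", b_inbox, a_inbox))
    t.start()
    b_inbox.put("request from A")          # component A sends B a message
    print("A received:", a_inbox.get())    # -> ack from B
    t.join()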

Information-based complexity

Information-based complexity (IBC) studies optimal algorithms and computational complexity for continuous problems. IBC has studied continuous problems such as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration.
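
A minimal illustrative sketch of one such continuous problem (our own example) is Monte Carlo integration over a high-dimensional cube; its error decays like 1/sqrt(n) in the number n of samples, independently of the dimension d:

    import random

    def mc_integrate(f, d, n=20_000):
        """Estimate the integral of f over the unit cube [0, 1]^d from n samples."""
        total = 0.0
        for _ in range(n):
            x = [random.random() for _ in range(d)]  # uniform point in [0, 1]^d
            total += f(x)
        return total / n                             # sample mean ~ the integral

    # The integral of sum(x_i) over [0, 1]^100 is exactly 50.
    print(mc_integrate(sum, d=100))                  # -> approximately 50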

Formal methods

Formal methods are mathematically based techniques for the specification, development and verification of software and hardware systems.[18] The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.[19]

Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.[20]
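
The fragment below is only a toy illustration of the flavour of specification and verification (our own example): a formal postcondition for sorting, checked exhaustively over a finite space of small inputs. Real formal methods prove such properties for all inputs rather than testing a bounded domain:

    from itertools import product

    def sort_spec(inp, out):
        """Postcondition: out is ordered and is a permutation of inp."""
        ordered = all(a <= b for a, b in zip(out, out[1:]))
        return ordered and sorted(inp) == sorted(out)

    def implementation(xs):          # the program under verification
        return sorted(xs)

    for n in range(5):               # every input over {0, 1, 2} of length < 5
        for inp in product(range(3), repeat=n):
            assert sort_spec(list(inp), implementation(list(inp)))
    print("specification holds on all checked inputs")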

Information theory

Information theory is a branch of applied mathematics, electrical engineering, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Since its inception it has broadened to find applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology,[21] the evolution[22] and function[23] of molecular codes, model selection in statistics,[24] thermal physics,[25] quantum computing, linguistics, plagiarism detection,[26] pattern recognition, anomaly detection and other forms of data analysis.[27]

Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information.
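
The central quantity of the field can be shown in a few lines (an editorial sketch): the Shannon entropy H = -Σ p_i log2(p_i) of a source, the average number of bits per symbol, which lower-bounds lossless compression:

    from math import log2

    def entropy(probabilities):
        """Shannon entropy in bits of a discrete distribution."""
        return -sum(p * log2(p) for p in probabilities if p > 0)

    print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit per symbol
    print(entropy([0.9, 0.1]))   # biased coin: ~0.469 bits per symbol
    print(entropy([0.25] * 4))   # uniform over four symbols: 2.0 bits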

Machine learning

Machine learning is a scientific discipline that deals with the construction and study of algorithms that can learn from data.[28] Such algorithms operate by building a model based on inputs[29]: 2  and using that to make predictions or decisions, rather than following only explicitly programmed instructions.

Machine learning can be considered a subfield of computer science and statistics. It has strong ties to artificial intelligence and optimization, which deliver methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Example applications include spam filtering, optical character recognition (OCR),[30] search engines and computer vision. Machine learning is sometimes conflated with data mining,[31] although that focuses more on exploratory data analysis.[32] Machine learning and pattern recognition "can be viewed as two facets of the same field."[29]: vii 
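
A minimal illustrative sketch of "building a model from inputs and using it to make predictions" (our own example, with invented data) fits a line y = w·x + b by ordinary least squares and predicts at an unseen input:

    def fit_line(xs, ys):
        """Ordinary least squares for a single feature: returns (slope, intercept)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return w, my - w * mx

    xs, ys = [0, 1, 2, 3], [1.1, 2.9, 5.2, 6.8]   # data roughly on y = 2x + 1
    w, b = fit_line(xs, ys)
    print(round(w, 2), round(b, 2))    # -> 1.94 1.09 (the learned model)
    print(round(w * 10 + b, 1))        # -> 20.5, a prediction at unseen x = 10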

Parallel computation

Parallel computing is a form of computation in which many calculations are carried out simultaneously,[33] operating on the principle that large problems can often be divided into smaller ones, which are then solved "in parallel". There are several different forms of parallel computing: bit-level, instruction level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling.[34] As power consumption (and consequently heat generation) by computers has become a concern in recent years,[35] parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.[36]

Parallel computer programs are more difficult to write than sequential ones,[37] because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.

The maximum possible speed-up of a single program as a result of parallelization is given by Amdahl's law.
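
Amdahl's law itself is a one-line formula: if a fraction p of a program can be parallelized, the speed-up on n processors is S(n) = 1 / ((1 - p) + p/n), which approaches 1 / (1 - p) as n grows. A worked sketch:

    def amdahl_speedup(p: float, n: int) -> float:
        """Speed-up on n processors when a fraction p of the work is parallel."""
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 8, 64, 1024):
        print(n, round(amdahl_speedup(0.95, n), 2))
    # -> 1.9, 5.93, 15.42, 19.64: even with 95% parallel work,
    #    the speed-up is capped near 1 / 0.05 = 20.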

Program semantics

In programming language theory, semantics is the field concerned with the rigorous mathematical study of the meaning of programming languages. It does so by evaluating the meaning of syntactically legal strings defined by a specific programming language, showing the computation involved; evaluating a syntactically illegal string results in non-computation. Semantics describes the processes a computer follows when executing a program in that specific language. This can be shown by describing the relationship between the input and output of a program, or by explaining how the program will execute on a certain platform, hence creating a model of computation.
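
A minimal illustrative sketch of assigning meaning to syntax (our own toy language, with terms written as nested tuples) is the evaluator below, which maps every syntactically legal term to the number it denotes and rejects everything else:

    def evaluate(term):
        """The denotation function of a toy expression language."""
        op = term[0]
        if op == "num":
            return term[1]                       # a literal means itself
        if op == "add":
            return evaluate(term[1]) + evaluate(term[2])
        if op == "mul":
            return evaluate(term[1]) * evaluate(term[2])
        raise ValueError("syntactically illegal term")  # denotes nothing

    # The meaning of (2 + 3) * 4 under this semantics:
    program = ("mul", ("add", ("num", 2), ("num", 3)), ("num", 4))
    print(evaluate(program))                     # -> 20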

Quantum computation

A quantum computer is a computation system that makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data.[38] Quantum computers are different from digital computers based on transistors. Whereas digital computers require data to be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses qubits (quantum bits), which can be in superpositions of states. A theoretical model is the quantum Turing machine, also known as the universal quantum computer. Quantum computers share theoretical similarities with non-deterministic and probabilistic computers; one example is the ability to be in more than one state simultaneously. The field of quantum computing was first introduced by Yuri Manin in 1980[39] and Richard Feynman in 1982.[40][41] A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968.[42]
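
A single qubit can be simulated classically in a few lines, which makes the notion of superposition concrete (an editorial sketch): the state is a 2-vector of amplitudes, a gate is a unitary matrix, and measurement probabilities are the squared magnitudes of the amplitudes:

    from math import sqrt

    def apply_gate(gate, state):
        """Multiply a 2x2 gate matrix by a 2-vector of amplitudes."""
        return [gate[0][0] * state[0] + gate[0][1] * state[1],
                gate[1][0] * state[0] + gate[1][1] * state[1]]

    H = [[1 / sqrt(2),  1 / sqrt(2)],            # the Hadamard gate
         [1 / sqrt(2), -1 / sqrt(2)]]

    qubit = [1.0, 0.0]                           # the basis state |0>
    qubit = apply_gate(H, qubit)                 # now (|0> + |1>) / sqrt(2)
    print([round(abs(a) ** 2, 3) for a in qubit])  # -> [0.5, 0.5]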

As of 2014, quantum computing is still in its infancy but experiments have been carried out in which quantum computational operations were executed on a very small number of qubits.[43] Both practical and theoretical research continues, and many national governments and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes, such as cryptanalysis.[44]

Symbolic computation

Computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although, properly speaking, computer algebra should be a subfield of scientific computing, they are generally considered distinct fields, because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are thus manipulated as symbols (hence the name symbolic computation).

Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications. These include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, and a large set of routines to perform usual operations, such as simplification of expressions, differentiation using the chain rule, polynomial factorization, and indefinite integration.
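
A minimal illustrative sketch of exact symbolic manipulation (our own example, far simpler than a real computer algebra system) differentiates expression trees by the usual rules, keeping variables as symbols throughout; note the chain rule applied to the composed sine:

    def diff(e, x):
        """Differentiate an expression tree e with respect to the symbol x."""
        if isinstance(e, (int, float)):
            return 0                             # constants vanish
        if isinstance(e, str):
            return 1 if e == x else 0            # d x/dx = 1; other symbols are constant
        op = e[0]
        if op == "+":
            return ("+", diff(e[1], x), diff(e[2], x))
        if op == "*":                            # product rule
            return ("+", ("*", diff(e[1], x), e[2]),
                         ("*", e[1], diff(e[2], x)))
        if op == "sin":                          # chain rule
            return ("*", ("cos", e[1]), diff(e[1], x))
        raise ValueError("unknown operator: " + repr(op))

    # d/dx sin(x * x) -> cos(x * x) * (1*x + x*1), left unsimplified
    print(diff(("sin", ("*", "x", "x")), "x"))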

Very-large-scale integration

Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. Before the introduction of VLSI technology most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI allows IC makers to add all of these circuits into one chip.

Organizations

  • European Association for Theoretical Computer Science
  • SIGACT
  • Simons Institute for the Theory of Computing

Journals and newsletters

  • Discrete Mathematics and Theoretical Computer Science
  • Information and Computation
  • Theory of Computing (open access journal)
  • Formal Aspects of Computing
  • Journal of the ACM
  • SIAM Journal on Computing (SICOMP)
  • SIGACT News
  • Theoretical Computer Science
  • Theory of Computing Systems
  • TheoretiCS (open access journal)
  • International Journal of Foundations of Computer Science
  • Chicago Journal of Theoretical Computer Science (open access journal)
  • Foundations and Trends in Theoretical Computer Science
  • Journal of Automata, Languages and Combinatorics
  • Acta Informatica
  • Fundamenta Informaticae
  • ACM Transactions on Computation Theory
  • Computational Complexity
  • Journal of Complexity
  • ACM Transactions on Algorithms
  • Information Processing Letters
  • Open Computer Science (open access journal)

Conferences

  • Annual ACM Symposium on Theory of Computing (STOC)[45]
  • Annual IEEE Symposium on Foundations of Computer Science (FOCS)[45]
  • Innovations in Theoretical Computer Science (ITCS)
  • Mathematical Foundations of Computer Science (MFCS)[46]
  • International Computer Science Symposium in Russia (CSR)[47]
  • ACM–SIAM Symposium on Discrete Algorithms (SODA)[45]
  • IEEE Symposium on Logic in Computer Science (LICS)[45]
  • Computational Complexity Conference (CCC)[48]
  • International Colloquium on Automata, Languages and Programming (ICALP)[48]
  • Annual Symposium on Computational Geometry (SoCG)[48]
  • ACM Symposium on Principles of Distributed Computing (PODC)[45]
  • ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)[48]
  • Annual Conference on Learning Theory (COLT)[48]
  • Symposium on Theoretical Aspects of Computer Science (STACS)[48]
  • European Symposium on Algorithms (ESA)[48]
  • Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX)[48]
  • Workshop on Randomization and Computation (RANDOM)[48]
  • International Symposium on Algorithms and Computation (ISAAC)[48]
  • International Symposium on Fundamentals of Computation Theory (FCT)[49]
  • International Workshop on Graph-Theoretic Concepts in Computer Science (WG)

See also

  • Formal science
  • Unsolved problems in computer science
  • Sun–Ni law

Notes

  1. ^ "SIGACT". Retrieved 2017-01-19.
  2. ^ "Any classical mathematical algorithm, for example, can be described in a finite number of English words". Rogers, Hartley Jr. (1967). Theory of Recursive Functions and Effective Computability. McGraw-Hill. Page 2.
  3. ^ Well defined with respect to the agent that executes the algorithm: "There is a computing agent, usually human, which can react to the instructions and carry out the computations" (Rogers 1967, p. 2).
  4. ^ "an algorithm is a procedure for computing a function (with respect to some chosen notation for integers) ... this limitation (to numerical functions) results in no loss of generality", (Rogers 1967, p. 1).
  5. ^ "An algorithm has zero or more inputs, i.e., quantities which are given to it initially before the algorithm begins" (Knuth 1973:5).
  6. ^ "A procedure which has all the characteristics of an algorithm except that it possibly lacks finiteness may be called a 'computational method'" (Knuth 1973:5).
  7. ^ "An algorithm has one or more outputs, i.e. quantities which have a specified relation to the inputs" (Knuth 1973:5).
  8. ^ Whether or not a process with random interior processes (not including the input) is an algorithm is debatable. Rogers opines that: "a computation is carried out in a discrete stepwise fashion, without the use of continuous methods or analog devices . . . carried forward deterministically, without resort to random methods or devices, e.g., dice" (Rogers 1967, p. 2).
  9. ^ "NIH working definition of bioinformatics and computational biology" (PDF). Biomedical Information Science and Technology Initiative. 17 July 2000. Archived from the original (PDF) on 5 September 2012. Retrieved 18 August 2012.
  10. ^ "About the CCMB". Center for Computational Molecular Biology. Retrieved 18 August 2012.
  11. ^ Rivest, Ronald L. (1990). "Cryptology". In J. Van Leeuwen (ed.). Handbook of Theoretical Computer Science. Vol. 1. Elsevier.
  12. ^ Bellare, Mihir; Rogaway, Phillip (21 September 2005). "Introduction". Introduction to Modern Cryptography. p. 10.
  13. ^ Menezes, A. J.; van Oorschot, P. C.; Vanstone, S. A. (1997). Handbook of Applied Cryptography. ISBN 978-0-8493-8523-0.
  14. ^ Paul E. Black (ed.), entry for data structure in Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and Technology. 15 December 2004. Online version Accessed May 21, 2009.
  15. ^ Entry data structure in the Encyclopædia Britannica (2009) Online entry accessed on May 21, 2009.
  16. ^ a b Coulouris, George; Jean Dollimore; Tim Kindberg; Gordon Blair (2011). Distributed Systems: Concepts and Design (5th ed.). Boston: Addison-Wesley. ISBN 978-0-132-14301-1.
  17. ^ Andrews (2000). Dolev (2000). Ghosh (2007), p. 10.
  18. ^ R. W. Butler (2001-08-06). "What is Formal Methods?". Retrieved 2006-11-16.
  19. ^ C. Michael Holloway. "Why Engineers Should Consider Formal Methods" (PDF). 16th Digital Avionics Systems Conference (27–30 October 1997). Archived from the original (PDF) on 16 November 2006. Retrieved 2006-11-16.
  20. ^ Monin, pp.3–4
  21. ^ F. Rieke; D. Warland; R Ruyter van Steveninck; W Bialek (1997). Spikes: Exploring the Neural Code. The MIT press. ISBN 978-0262681087.
  22. ^ Huelsenbeck, J. P.; Ronquist, F.; Nielsen, R.; Bollback, J. P. (2001-12-14). "Bayesian Inference of Phylogeny and Its Impact on Evolutionary Biology". Science. American Association for the Advancement of Science (AAAS). 294 (5550): 2310–2314. Bibcode:2001Sci...294.2310H. doi:10.1126/science.1065889. ISSN 0036-8075. PMID 11743192. S2CID 2138288.
  23. ^ Rando Allikmets, Wyeth W. Wasserman, Amy Hutchinson, Philip Smallwood, Jeremy Nathans, Peter K. Rogan, Thomas D. Schneider, Michael Dean (1998) Organization of the ABCR gene: analysis of promoter and splice junction sequences, Gene 215:1, 111–122
  24. ^ Burnham, K. P. and Anderson D. R. (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Second Edition (Springer Science, New York) ISBN 978-0-387-95364-9.
  25. ^ Jaynes, E. T. (1957-05-15). "Information Theory and Statistical Mechanics". Physical Review. American Physical Society (APS). 106 (4): 620–630. Bibcode:1957PhRv..106..620J. doi:10.1103/physrev.106.620. ISSN 0031-899X.
  26. ^ Charles H. Bennett, Ming Li, and Bin Ma (2003) Chain Letters and Evolutionary Histories, Scientific American 288:6, 76–81
  27. ^ David R. Anderson (November 1, 2003). (PDF). Archived from the original (PDF) on July 23, 2011. Retrieved 2010-06-23.
  28. ^ Ron Kovahi; Foster Provost (1998). "Glossary of terms". Machine Learning. 30: 271–274. doi:10.1023/A:1007411609915.
  29. ^ a b C. M. Bishop (2006). Pattern Recognition and Machine Learning. Springer. ISBN 978-0-387-31073-2.
  30. ^ Wernick, Yang, Brankov, Yourganov and Strother, Machine Learning in Medical Imaging, IEEE Signal Processing Magazine, vol. 27, no. 4, July 2010, pp. 25–38
  31. ^ Mannila, Heikki (1996). Data mining: machine learning, statistics, and databases. Int'l Conf. Scientific and Statistical Database Management. IEEE Computer Society.
  32. ^ Friedman, Jerome H. (1998). "Data Mining and Statistics: What's the connection?". Computing Science and Statistics. 29 (1): 3–9.
  33. ^ Gottlieb, Allan; Almasi, George S. (1989). Highly parallel computing. Redwood City, Calif.: Benjamin/Cummings. ISBN 978-0-8053-0177-9.
  34. ^ S.V. Adve et al. (November 2008). "Parallel Computing Research at Illinois: The UPCRC Agenda" (PDF), archived 2008-12-09 at the Wayback Machine. Parallel@Illinois, University of Illinois at Urbana-Champaign. "The main techniques for these performance benefits – increased clock frequency and smarter but increasingly complex architectures – are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster."
  35. ^ Asanovic et al. Old [conventional wisdom]: Power is free, but transistors are expensive. New [conventional wisdom] is [that] power is expensive, but transistors are "free".
  36. ^ Asanovic, Krste et al. (December 18, 2006). "The Landscape of Parallel Computing Research: A View from Berkeley" (PDF). University of California, Berkeley. Technical Report No. UCB/EECS-2006-183. "Old [conventional wisdom]: Increasing clock frequency is the primary method of improving processor performance. New [conventional wisdom]: Increasing parallelism is the primary method of improving processor performance ... Even representatives from Intel, a company generally associated with the 'higher clock-speed is better' position, warned that traditional approaches to maximizing performance through maximizing clock speed have been pushed to their limit."
  37. ^ Hennessy, John L.; Patterson, David A.; Larus, James R. (1999). Computer organization and design : the hardware/software interface (2. ed., 3rd print. ed.). San Francisco: Kaufmann. ISBN 978-1-55860-428-5.
  38. ^ "Quantum Computing with Molecules" article in Scientific American by Neil Gershenfeld and Isaac L. Chuang
  39. ^ Manin, Yu. I. (1980). Vychislimoe i nevychislimoe [Computable and Noncomputable] (in Russian). Sov.Radio. pp. 13–15. Archived from the original on 10 May 2013. Retrieved 4 March 2013.
  40. ^ Feynman, R. P. (1982). "Simulating physics with computers". International Journal of Theoretical Physics. 21 (6): 467–488. Bibcode:1982IJTP...21..467F. CiteSeerX 10.1.1.45.9310. doi:10.1007/BF02650179. S2CID 124545445.
  41. ^ Deutsch, David (1992-01-06). "Quantum computation". Physics World. 5 (6): 57–61. doi:10.1088/2058-7058/5/6/38.
  42. ^ Finkelstein, David (1968). "Space-Time Structure in High Energy Interactions". In Gudehus, T.; Kaiser, G. (eds.). Fundamental Interactions at High Energy. New York: Gordon & Breach.
  43. ^ "New qubit control bodes well for future of quantum computing". Retrieved 26 October 2014.
  44. ^ Quantum Information Science and Technology Roadmap for a sense of where the research is heading.
  45. ^ a b c d e The 2007 Australian Ranking of ICT Conferences, archived 2009-10-02 at the Wayback Machine: tier A+.
  46. ^ MFCS 2017
  47. ^ CSR 2018
  48. ^ a b c d e f g h i j The 2007 Australian Ranking of ICT Conferences, archived 2009-10-02 at the Wayback Machine: tier A.
  49. ^ FCT 2011 (retrieved 2013-06-03)

Further reading

  • Martin Davis, Ron Sigal, Elaine J. Weyuker, Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science, 2nd ed., Academic Press, 1994, ISBN 0-12-206382-1. Covers theory of computation, but also program semantics and quantification theory. Aimed at graduate students.

External links

  • SIGACT directory of additional theory links
  • Theory Matters Wiki, a Theoretical Computer Science (TCS) Advocacy Wiki
  • List of academic conferences in the area of theoretical computer science at confsearch
  • Theoretical Computer Science – StackExchange, a Question and Answer site for researchers in theoretical computer science
  • Computer Science Animated
  • Theory of computation @ Massachusetts Institute of Technology
