Decision tree model

In computational complexity, the decision tree model is the model of computation in which an algorithm is considered to be, essentially, a decision tree, i.e., a sequence of queries or tests that are done adaptively, so the outcome of previous tests can influence the tests performed next.

Typically, these tests have a small number of outcomes (such as a yes–no question) and can be performed quickly (say, with unit computational cost), so the worst-case time complexity of an algorithm in the decision tree model corresponds to the depth of the corresponding decision tree. This notion of computational complexity of a problem or an algorithm in the decision tree model is called its decision tree complexity or query complexity.

Decision tree models are instrumental in establishing lower bounds in complexity theory for certain classes of computational problems and algorithms. Several variants of decision tree models have been introduced, depending on the computational model and the type of queries algorithms are allowed to perform.

For example, a decision tree argument is used to show that a comparison sort of n items must take Ω(n log n) comparisons. For comparison sorts, a query is a comparison of two items a, b, with two outcomes (assuming no items are equal): either a < b or a > b. Comparison sorts can be expressed as decision trees in this model, since such sorting algorithms only perform these types of queries.

Comparison trees and lower bounds for sorting

Decision trees are often employed to understand algorithms for sorting and other similar problems; this was first done by Ford and Johnson.[1]

For example, many sorting algorithms are comparison sorts, which means that they only gain information about an input sequence x_1, x_2, …, x_n via local comparisons: testing whether x_i < x_j, x_i = x_j, or x_i > x_j. Assuming that the items to be sorted are all distinct and comparable, this can be rephrased as a yes-or-no question: is x_i > x_j?

These algorithms can be modeled as binary decision trees, where the queries are comparisons: an internal node corresponds to a query, and the node's children correspond to the next query when the answer to the question is yes or no. For leaf nodes, the output is a permutation π that describes how the input sequence was scrambled from the fully ordered list of items. (The inverse of this permutation, π⁻¹, re-orders the input sequence.)

One can show that comparison sorts must use Ω(n log n) comparisons through a simple argument: for an algorithm to be correct, it must be able to output every possible permutation of n elements; otherwise, the algorithm would fail for that particular permutation as input. So, its corresponding decision tree must have at least as many leaves as permutations: n! leaves. Any binary tree with at least n! leaves has depth at least log₂(n!) = Ω(n log₂ n), so this is a lower bound on the run time of a comparison sorting algorithm. In this case, the existence of numerous comparison-sorting algorithms having this time complexity, such as mergesort and heapsort, demonstrates that the bound is tight.[2]: 91
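
To make the bound concrete, here is a minimal Python sketch (the helper names are illustrative, not from any source) that computes the information-theoretic lower bound ⌈log₂(n!)⌉ and compares it with the worst-case number of comparisons a straightforward top-down mergesort performs over all permutations of n items:

```python
import math
from itertools import permutations

def sorting_lower_bound(n):
    """Any binary comparison tree that sorts n items has depth >= ceil(log2(n!))."""
    return math.ceil(math.log2(math.factorial(n)))

def mergesort_comparisons(seq):
    """Top-down mergesort that counts the comparisons it performs."""
    count = 0

    def sort(a):
        nonlocal count
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = sort(a[:mid]), sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            count += 1                                   # one comparison query
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged

    sort(list(seq))
    return count

for n in (4, 6, 8):
    worst = max(mergesort_comparisons(p) for p in permutations(range(n)))
    print(n, sorting_lower_bound(n), worst)   # lower bound vs. mergesort's worst case
```

For n = 8, for instance, the lower bound is 16 while mergesort's worst case here is 17 comparisons, illustrating how close standard algorithms come to the bound.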

This argument does not use anything about the type of query, so it in fact proves a lower bound for any sorting algorithm that can be modeled as a binary decision tree. In essence, it is a rephrasing of the information-theoretic argument that a correct sorting algorithm must learn at least log₂(n!) bits of information about the input sequence. As a result, it works for randomized decision trees as well.

Other decision tree lower bounds do use the fact that the query is a comparison. For example, consider the task of using only comparisons to find the smallest number among n numbers. Before the smallest number can be determined, every number except the smallest must "lose" (compare greater) in at least one comparison. So, it takes at least n − 1 comparisons to find the minimum. (The information-theoretic argument here only gives a lower bound of log n.) A similar argument works for general lower bounds for computing order statistics.[2]: 214
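
As a sanity check on the n − 1 bound, here is a minimal Python sketch (illustrative only) that finds the minimum with a linear scan and counts its comparison queries; every element other than the final minimum loses exactly one comparison, so exactly n − 1 comparisons are made:

```python
def find_min_with_count(values):
    """Linear scan for the minimum, counting comparison queries."""
    values = list(values)
    comparisons = 0
    best = values[0]
    for v in values[1:]:
        comparisons += 1          # one comparison query: is v < best ?
        if v < best:
            best = v
    return best, comparisons

print(find_min_with_count([7, 3, 9, 1, 4]))   # (1, 4): n - 1 = 4 comparisons
```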

Linear and algebraic decision trees

Linear decision trees generalize the comparison decision trees above to computing functions that take real vectors x ∈ ℝ^n as input. The tests in linear decision trees are linear functions: for a particular choice of real numbers a_0, …, a_n, output the sign of a_0 + a_1 x_1 + … + a_n x_n. (Algorithms in this model can only depend on the sign of the output.) Comparison trees are linear decision trees, because the comparison between x_i and x_j corresponds to the linear function x_i − x_j. From its definition, linear decision trees can only specify functions f whose fibers can be constructed by taking unions and intersections of half-spaces.
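
The following small Python sketch (the names are my own, for illustration) shows a linear decision tree's query, and how an ordinary comparison is the special case whose coefficient vector puts +1 on one coordinate and −1 on another:

```python
def linear_query(a0, a, x):
    """Sign of a0 + a[0]*x[0] + ... + a[n-1]*x[n-1]: returns -1, 0, or +1."""
    value = a0 + sum(ai * xi for ai, xi in zip(a, x))
    return (value > 0) - (value < 0)

x = [2.5, 1.0, 4.0]
# The comparison "is x[0] < x[1]?" is the sign query for coefficients (+1, -1, 0),
# i.e. the sign of x[0] - x[1]. Here it returns +1, so x[0] > x[1] and the answer is no.
print(linear_query(0.0, [1, -1, 0], x))   # 1
```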

Algebraic decision trees are a generalization of linear decision trees that allows the test functions to be polynomials of degree d. Geometrically, the space is divided into semi-algebraic sets (a generalization of the cells cut out by hyperplanes in the linear case).

These decision tree models, defined by Rabin[3] and Reingold,[4] are often used for proving lower bounds in computational geometry.[5] For example, Ben-Or showed that element uniqueness (the task of computing f : ℝ^n → {0, 1}, where f(x) is 0 if and only if there exist distinct coordinates i ≠ j such that x_i = x_j) requires an algebraic decision tree of depth Ω(n log n).[6] This was first shown for linear decision models by Dobkin and Lipton.[7] They also showed an n² lower bound for linear decision trees on the knapsack problem, which was generalized to algebraic decision trees by Steele and Yao.[8]

Boolean decision tree complexities

For Boolean decision trees, the task is to compute the value of an n-bit Boolean function f : {0, 1}^n → {0, 1} for an input x ∈ {0, 1}^n. The queries correspond to reading a bit of the input, x_i, and the output is f(x). Each query may depend on previous queries. There are many types of computational models using decision trees that could be considered, admitting multiple complexity notions, called complexity measures.

Deterministic decision tree

If the output of a decision tree is f(x) for all x ∈ {0, 1}^n, the decision tree is said to "compute" f. The depth of a tree is the maximum number of queries that can happen before a leaf is reached and a result obtained. D(f), the deterministic decision tree complexity of f, is the smallest depth among all deterministic decision trees that compute f.
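
For very small n, D(f) can be computed directly from the definition: a deterministic tree either answers immediately (if f is constant on the inputs consistent with the answers so far) or queries some bit i and recurses on both restrictions. A brute-force Python sketch along these lines (illustrative, exponential time) is:

```python
from functools import lru_cache
from itertools import product

def decision_tree_depth(f, n):
    """D(f) for f: {0,1}^n -> {0,1}, given as a Python function on bit tuples."""

    @lru_cache(maxsize=None)
    def depth(restriction):
        # restriction: tuple with fixed bits (0/1) and None for still-free positions
        inputs = [x for x in product((0, 1), repeat=n)
                  if all(r is None or r == b for r, b in zip(restriction, x))]
        values = {f(x) for x in inputs}
        if len(values) <= 1:
            return 0                         # f is constant here: output a leaf
        best = n
        for i, r in enumerate(restriction):
            if r is None:                    # try querying bit i next
                children = []
                for b in (0, 1):
                    child = list(restriction); child[i] = b
                    children.append(depth(tuple(child)))
                best = min(best, 1 + max(children))
        return best

    return depth((None,) * n)

# Example: the 3-bit OR function needs all 3 queries in the worst case.
print(decision_tree_depth(lambda x: int(any(x)), 3))   # 3
```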

Randomized decision tree

One way to define a randomized decision tree is to add additional nodes to the tree, each controlled by a probability p_i. Another, equivalent, definition is as a distribution over deterministic decision trees. Based on this second definition, the complexity of the randomized tree is defined as the largest depth among all the trees in the support of the underlying distribution. R2(f) is defined as the complexity of the lowest-depth randomized decision tree whose result is f(x) with probability at least 2/3 for all x ∈ {0, 1}^n (i.e., with bounded two-sided error).
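
Under the second definition, a randomized decision tree is simply a probability distribution over deterministic trees. The Python sketch below (the tuple encoding of trees and all names are my own, for illustration) computes the two quantities appearing in the definition: the largest depth in the support of the distribution and the worst-case probability of outputting f(x):

```python
from itertools import product

def evaluate(tree, x):
    """Evaluate a deterministic tree: leaves are outputs, nodes are (i, t0, t1)."""
    while isinstance(tree, tuple):
        i, t0, t1 = tree
        tree = t1 if x[i] else t0
    return tree

def depth(tree):
    if not isinstance(tree, tuple):
        return 0
    return 1 + max(depth(tree[1]), depth(tree[2]))

def randomized_complexity(distribution, f, n):
    """distribution: list of (probability, deterministic tree) pairs."""
    worst_success = min(
        sum(p for p, t in distribution if evaluate(t, x) == f(x))
        for x in product((0, 1), repeat=n)
    )
    return max(depth(t) for _, t in distribution), worst_success

# Two-bit AND, queried in a random order; both trees in the support compute f exactly.
tree_01 = (0, 0, (1, 0, 1))      # ask bit 0 first
tree_10 = (1, 0, (0, 0, 1))      # ask bit 1 first
f_and = lambda x: x[0] & x[1]
print(randomized_complexity([(0.5, tree_01), (0.5, tree_10)], f_and, 2))
# (2, 1.0): depth 2 in the support, correct with probability 1 on every input
```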

R2(f) is known as the Monte Carlo randomized decision-tree complexity, because the result is allowed to be incorrect with bounded two-sided error. The Las Vegas decision-tree complexity R0(f) measures the expected depth of a decision tree that must be correct (i.e., has zero error). There is also a one-sided bounded-error version, denoted R1(f).

Nondeterministic decision tree

The nondeterministic decision tree complexity of a function is known more commonly as the certificate complexity of that function. It measures the number of input bits that a nondeterministic algorithm would need to look at in order to evaluate the function with certainty.

Formally, the certificate complexity of f at x is the size of the smallest subset of indices S ⊆ [n] such that, for all y ∈ {0, 1}^n, if y_i = x_i for all i ∈ S, then f(y) = f(x). The certificate complexity of f is the maximum certificate complexity over all x. The analogous notion, where one only requires the verifier to be correct with probability 2/3, is denoted RC(f).
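
A brute-force Python sketch of this definition (illustrative, exponential in n): for each input x it finds the smallest set of indices S that forces the value of f, and then takes the maximum over all x:

```python
from itertools import combinations, product

def certificate_complexity(f, n):
    """C(f) = max over x of the smallest certificate size at x."""
    def certificate_at(x):
        fx = f(x)
        for size in range(n + 1):
            for S in combinations(range(n), size):
                # S certifies x if every y agreeing with x on S has f(y) = f(x)
                if all(f(y) == fx
                       for y in product((0, 1), repeat=n)
                       if all(y[i] == x[i] for i in S)):
                    return size
        return n
    return max(certificate_at(x) for x in product((0, 1), repeat=n))

# Example: 3-bit OR. Any input containing a 1 is certified by that single bit,
# but the all-zeros input needs all 3 bits, so C(OR_3) = 3.
print(certificate_complexity(lambda x: int(any(x)), 3))   # 3
```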

Quantum decision tree

The quantum decision tree complexity Q2(f) is the depth of the lowest-depth quantum decision tree that gives the result f(x) with probability at least 2/3 for all x ∈ {0, 1}^n. Another quantity, QE(f), is defined as the depth of the lowest-depth quantum decision tree that gives the result f(x) with probability 1 in all cases (i.e., computes f exactly). Q2(f) and QE(f) are more commonly known as quantum query complexities, because the direct definition of a quantum decision tree is more complicated than in the classical case. Similarly to the randomized case, one defines Q0(f) and Q1(f).

These notions are typically bounded by the notions of degree and approximate degree. The degree of f, denoted deg(f), is the smallest degree of any polynomial p satisfying p(x) = f(x) for all x ∈ {0, 1}^n. The approximate degree of f, denoted ~deg(f), is the smallest degree of any polynomial p satisfying p(x) ∈ [0, 1/3] whenever f(x) = 0 and p(x) ∈ [2/3, 1] whenever f(x) = 1.
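
The exact degree can be read off the unique multilinear polynomial agreeing with f on {0, 1}^n, whose coefficients follow from Möbius (inclusion–exclusion) inversion over subsets of variables. A small Python sketch (illustrative only):

```python
from itertools import product

def degree(f, n):
    """Degree of the unique multilinear polynomial equal to f on {0,1}^n."""
    deg = 0
    for subset in product((0, 1), repeat=n):
        # Coefficient of the monomial prod_{i in S} x_i, by Mobius inversion:
        # c_S = sum over T subset of S of (-1)^{|S| - |T|} * f(indicator of T)
        coeff = 0
        for smaller in product((0, 1), repeat=n):
            if all(s <= b for s, b in zip(smaller, subset)):
                sign = (-1) ** (sum(subset) - sum(smaller))
                coeff += sign * f(smaller)
        if coeff != 0:
            deg = max(deg, sum(subset))
    return deg

# deg(AND_2) = 2 (x1*x2); deg(XOR_2) = 2 (x1 + x2 - 2*x1*x2)
print(degree(lambda x: x[0] & x[1], 2), degree(lambda x: x[0] ^ x[1], 2))   # 2 2
```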

Beals et al. established that Q0(f) ≥ deg(f)/2 and Q2(f) ≥ ~deg(f)/2.[9]

Relationships between Boolean function complexity measures

It follows immediately from the definitions that, for all n-bit Boolean functions f, Q2(f) ≤ R2(f) ≤ R1(f) ≤ R0(f) ≤ D(f) ≤ n and Q2(f) ≤ Q0(f) ≤ D(f) ≤ n. Finding the best upper bounds in the converse direction is a major goal in the field of query complexity.

All of these types of query complexity are polynomially related. Blum and Impagliazzo,[10] Hartmanis and Hemachandra,[11] and Tardos[12] independently discovered that D(f) ≤ R0(f)². Noam Nisan found that the Monte Carlo randomized decision tree complexity is also polynomially related to deterministic decision tree complexity: D(f) = O(R2(f)³).[13] (Nisan also showed that D(f) = O(R1(f)²).) A tighter relationship is known between the Monte Carlo and Las Vegas models: R0(f) = O(R2(f)² log R2(f)).[14] This relationship is optimal up to polylogarithmic factors.[15] As for quantum decision tree complexities, D(f) = O(Q2(f)⁴), and this bound is tight.[16][15] Midrijanis showed that D(f) = O(Q0(f)³),[17][18] improving a quartic bound due to Beals et al.[9]

It is important to note that these polynomial relationships are valid only for total Boolean functions. For partial Boolean functions, whose domain is a subset of {0, 1}^n, an exponential separation between Q0(f) and D(f) is possible; the first example of such a problem was discovered by Deutsch and Jozsa.

Sensitivity conjecture

For a Boolean function f : {0, 1}^n → {0, 1}, the sensitivity of f is defined to be the maximum sensitivity of f over all x, where the sensitivity of f at x is the number of single-bit changes in x that change the value of f(x). Sensitivity is related to the notion of total influence from the analysis of Boolean functions, which is equal to the average sensitivity over all x.
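
Sensitivity is easy to compute by brute force for small n, as in the following minimal Python sketch (illustrative only):

```python
from itertools import product

def sensitivity(f, n):
    """s(f): maximum over x of the number of sensitive bits at x."""
    def sensitive_bits(x):
        return sum(
            f(tuple(b ^ (i == j) for j, b in enumerate(x))) != f(x)   # flip bit i
            for i in range(n)
        )
    return max(sensitive_bits(x) for x in product((0, 1), repeat=n))

# s(OR_3) = 3: at the all-zeros input, flipping any single bit changes the value.
print(sensitivity(lambda x: int(any(x)), 3))   # 3
```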

The sensitivity conjecture is the conjecture that sensitivity is polynomially related to query complexity; that is, there exists an exponent c such that, for all f, D(f) = O(s(f)^c) and s(f) = O(D(f)^c). One can show through a simple argument that s(f) ≤ D(f), so the conjecture is specifically concerned with finding a lower bound for sensitivity. Since all of the previously discussed complexity measures are polynomially related, the precise type of complexity measure is not relevant. However, this is typically phrased as the question of relating sensitivity with block sensitivity.

The block sensitivity of f, denoted bs(f), is defined to be the maximum block sensitivity of f over all x. The block sensitivity of f at x is the maximum number t of disjoint subsets S_1, …, S_t ⊆ [n] such that, for each of the subsets S_i, flipping the bits of x corresponding to S_i changes the value of f(x).[13]
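
Block sensitivity can likewise be computed by brute force for tiny n. The Python sketch below (illustrative and very slow) recursively packs as many disjoint sensitive blocks as possible at each input and takes the maximum:

```python
from functools import lru_cache
from itertools import product

def block_sensitivity(f, n):
    """bs(f): maximum over x of the largest number of disjoint sensitive blocks."""
    def bs_at(x):
        fx = f(x)

        @lru_cache(maxsize=None)
        def pack(available):
            # available: frozenset of indices not yet used by any block
            best = 0
            free = sorted(available)
            for mask in range(1, 1 << len(free)):
                block = {free[i] for i in range(len(free)) if mask >> i & 1}
                y = tuple(b ^ (i in block) for i, b in enumerate(x))
                if f(y) != fx:               # this block is sensitive at x
                    best = max(best, 1 + pack(available - frozenset(block)))
            return best

        return pack(frozenset(range(n)))

    return max(bs_at(x) for x in product((0, 1), repeat=n))

# For OR_3, bs = s = 3; functions with bs much larger than s exist (Rubinstein).
print(block_sensitivity(lambda x: int(any(x)), 3))   # 3
```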

Since block sensitivity takes a maximum over more choices of subsets, s(f) ≤ bs(f). Further, block sensitivity is polynomially related to the previously discussed complexity measures; for example, Nisan's paper introducing block sensitivity showed that bs(f) ≤ D(f) = O(bs(f)⁴).[13] So, one could rephrase the sensitivity conjecture as showing that, for some c, bs(f) = O(s(f)^c). In 1992, Nisan and Szegedy conjectured that c = 2 suffices.[19] This would be tight, as Rubinstein in 1995 showed a quadratic separation between sensitivity and block sensitivity.[20]

In July 2019, 27 years after the conjecture was initially posed, Hao Huang of Emory University proved the sensitivity conjecture, showing that bs(f) = O(s(f)⁴).[21] The proof is notably succinct, establishing the statement in two pages, whereas prior progress towards the conjecture had been limited.[22][23]

Summary of known results

Best-known separations for complexity measures as of October 2020[16]

          D        R0       R        C        RC       bs       s        QE       deg      Q        ~deg
D         -        2        2, 3     2        2, 3     2, 3     3, 6     2, 3     2, 3     4        4
R0        1        -        2        2        2, 3     2, 3     3, 6     2, 3     2, 3     3, 4     4
R         1        1        -        2        2, 3     2, 3     3, 6     1.5, 3   2, 3     3, 4     4
C         1        1        1, 2     -        2        2        2.22, 5  1.15, 3  1.63, 3  2, 4     2, 4
RC        1        1        1        1        -        1.5, 2   2, 4     1.15, 2  1.63, 2  2        2
bs        1        1        1        1        1        -        2, 4     1.15, 2  1.63, 2  2        2
s         1        1        1        1        1        1        -        1.15, 2  1.63, 2  2        2
QE        1        1.33, 2  1.33, 3  2        2, 3     2, 3     3, 6     -        2, 3     2, 4     4
deg       1        1.33, 2  1.33, 2  2        2        2        2        1        -        2        2
Q         1        1        1        2        2, 3     2, 3     3, 6     1        2, 3     -        4
~deg      1        1        1        2        2        2        2        1        1        1        -

This table summarizes results on separations between Boolean function complexity measures. The complexity measures are, in order: deterministic (D), zero-error randomized (R0), two-sided-error randomized (R), certificate (C), randomized certificate (RC), block sensitivity (bs), sensitivity (s), exact quantum (QE), degree (deg), quantum (Q), and approximate degree (~deg) complexities.

The entry in row A and column B denotes bounds on the exponent c, which is the infimum of all k satisfying A(f) = O(B(f)^k) for all Boolean functions f. For example, the entry in row D and column s is "3, 6", so D(f) = O(s(f)^(6+o(1))) for all f, and there exists a function g such that D(g) = Ω(s(g)^(3−o(1))).

See also

  • Comparison sort
  • Decision tree
  • Aanderaa–Karp–Rosenberg conjecture
  • Minimum spanning tree § Decision trees

References

  1. ^ Ford, Lester R. Jr.; Johnson, Selmer M. (1959-05-01). "A Tournament Problem". The American Mathematical Monthly. 66 (5): 387–389. doi:10.1080/00029890.1959.11989306. ISSN 0002-9890.
  2. ^ a b Cormen, Thomas H.; et al. (2009). Introduction to Algorithms (Third ed.). Cambridge, Mass.: MIT Press. ISBN 978-0-262-27083-0. OCLC 676697295.
  3. ^ Rabin, Michael O. (1972-12-01). "Proving simultaneous positivity of linear forms". Journal of Computer and System Sciences. 6 (6): 639–650. doi:10.1016/S0022-0000(72)80034-5. ISSN 0022-0000.
  4. ^ Reingold, Edward M. (1972-10-01). "On the Optimality of Some Set Algorithms". Journal of the ACM. 19 (4): 649–659. doi:10.1145/321724.321730. ISSN 0004-5411. S2CID 18605212.
  5. ^ Preparata, Franco P. (1985). Computational geometry : an introduction. Shamos, Michael Ian. New York: Springer-Verlag. ISBN 0-387-96131-3. OCLC 11970840.
  6. ^ Ben-Or, Michael (1983-12-01). "Lower bounds for algebraic computation trees". Proceedings of the fifteenth annual ACM symposium on Theory of computing - STOC '83. STOC '83. New York, NY, USA: Association for Computing Machinery. pp. 80–86. doi:10.1145/800061.808735. ISBN 978-0-89791-099-6. S2CID 1499957.
  7. ^ Dobkin, David; Lipton, Richard J. (1976-06-01). "Multidimensional Searching Problems". SIAM Journal on Computing. 5 (2): 181–186. doi:10.1137/0205015. ISSN 0097-5397.
  8. ^ Michael Steele, J; Yao, Andrew C (1982-03-01). "Lower bounds for algebraic decision trees". Journal of Algorithms. 3 (1): 1–8. doi:10.1016/0196-6774(82)90002-5. ISSN 0196-6774.
  9. ^ a b Beals, R.; Buhrman, H.; Cleve, R.; Mosca, M.; de Wolf, R. (2001). "Quantum lower bounds by polynomials". Journal of the ACM. 48 (4): 778–797. arXiv:quant-ph/9802049. doi:10.1145/502090.502097. S2CID 1078168.
  10. ^ Blum, M.; Impagliazzo, R. (1987). "Generic oracles and oracle classes". Proceedings of 18th IEEE FOCS. pp. 118–126.
  11. ^ Hartmanis, J.; Hemachandra, L. (1987), "One-way functions, robustness, and non-isomorphism of NP-complete sets", Technical Report DCS TR86-796, Cornell University
  12. ^ Tardos, G. (1989). "Query complexity, or why is it difficult to separate NPA ∩ coNPA from PA by random oracles A?". Combinatorica. 9 (4): 385–392. doi:10.1007/BF02125350. S2CID 45372592.
  13. ^ a b c Nisan, N. (1989). "CREW PRAMs and decision trees". Proceedings of 21st ACM STOC. pp. 327–335.
  14. ^ Kulkarni, R. and Tal, A. On Fractional Block Sensitivity. Electronic Colloquium on Computational Complexity (ECCC). Vol. 20. 2013.
  15. ^ a b Ambainis, Andris; Balodis, Kaspars; Belovs, Aleksandrs; Lee, Troy; Santha, Miklos; Smotrovs, Juris (2017-09-04). "Separations in Query Complexity Based on Pointer Functions". Journal of the ACM. 64 (5): 32:1–32:24. arXiv:1506.04719. doi:10.1145/3106234. ISSN 0004-5411. S2CID 10214557.
  16. ^ a b Aaronson, Scott; Ben-David, Shalev; Kothari, Robin; Rao, Shravas; Tal, Avishay (2020-10-23). "Degree vs. Approximate Degree and Quantum Implications of Huang's Sensitivity Theorem". arXiv:2010.12629 [quant-ph].
  17. ^ Midrijanis, Gatis (2004), "Exact quantum query complexity for total Boolean functions", arXiv:quant-ph/0403168
  18. ^ Midrijanis, Gatis (2005), "On Randomized and Quantum Query Complexities", arXiv:quant-ph/0501142
  19. ^ Nisan, Noam; Szegedy, Mario (1992-07-01). "On the degree of Boolean functions as real polynomials". Proceedings of the twenty-fourth annual ACM symposium on Theory of computing - STOC '92. STOC '92. Victoria, British Columbia, Canada: Association for Computing Machinery. pp. 462–467. doi:10.1145/129712.129757. ISBN 978-0-89791-511-3. S2CID 6919144.
  20. ^ Rubinstein, David (1995-06-01). "Sensitivity vs. block sensitivity of Boolean functions". Combinatorica. 15 (2): 297–299. doi:10.1007/BF01200762. ISSN 1439-6912. S2CID 41010711.
  21. ^ Huang, Hao (2019). "Induced subgraphs of hypercubes and a proof of the Sensitivity Conjecture". Annals of Mathematics. 190 (3): 949–955. arXiv:1907.00847. doi:10.4007/annals.2019.190.3.6. ISSN 0003-486X. JSTOR 10.4007/annals.2019.190.3.6. S2CID 195767594.
  22. ^ Klarreich, Erica. "Decades-Old Computer Science Conjecture Solved in Two Pages". Quanta Magazine. Retrieved 2019-07-26.
  23. ^ Hatami, Pooya; Kulkarni, Raghav; Pankratov, Denis (2011-06-22). "Variations on the Sensitivity Conjecture". Theory of Computing. 1: 1–27. doi:10.4086/toc.gs.2011.004. ISSN 1557-2862. S2CID 6918061.

Surveys

  • Buhrman, Harry; de Wolf, Ronald (2002), "Complexity Measures and Decision Tree Complexity: A Survey" (PDF), Theoretical Computer Science, 288 (1): 21–43, doi:10.1016/S0304-3975(01)00144-X
