
Tensor rank decomposition

In multilinear algebra, the tensor rank decomposition[1] or the rank-$R$ decomposition of a tensor is the decomposition of a tensor in terms of a sum of a minimum number $R$ of rank-1 tensors. This is an open problem.[clarification needed]

Canonical polyadic decomposition (CPD) is a variant of the rank decomposition which computes the best fitting $K$ rank-1 terms for a user-specified $K$. The CP decomposition has found some applications in linguistics and chemometrics. The CP rank was introduced by Frank Lauren Hitchcock in 1927[2] and later rediscovered several times, notably in psychometrics.[3][4] The CP decomposition is referred to as CANDECOMP,[3] PARAFAC,[4] or CANDECOMP/PARAFAC (CP). Note that the PARAFAC2 rank decomposition is a variation of the CP decomposition.[5]

Another popular generalization of the matrix SVD known as the higher-order singular value decomposition computes orthonormal mode matrices and has found applications in econometrics, signal processing, computer vision, computer graphics, and psychometrics.

Notation

A scalar variable is denoted by lower case italic letters, $a$, and an upper bound scalar is denoted by an upper case italic letter, $A$.

Indices are denoted by a combination of lowercase and upper case italic letters, $1 \le i \le I$. Multiple indices that one might encounter when referring to the multiple modes of a tensor are conveniently denoted by $1 \le i_m \le I_m$ where $1 \le m \le M$.

A vector is denoted by a lower case bold Times Roman letter, $\mathbf{a}$, and a matrix is denoted by bold upper case letters, $\mathbf{A}$.

A higher order tensor is denoted by calligraphic letters, $\mathcal{A}$. An element of an $M$-order tensor $\mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_M}$ is denoted by $a_{i_1 i_2 \ldots i_M}$ or $\mathcal{A}_{i_1 i_2 \ldots i_M}$.

Definition

A data tensor $\mathcal{A} \in \mathbb{F}^{I_0 \times I_1 \times \cdots \times I_C}$ is a collection of multivariate observations organized into an $M$-way array where $M = C+1$. Every tensor may be represented with a suitably large $R$ as a linear combination of $R$ rank-1 tensors:

$\mathcal{A} = \sum_{r=1}^{R} \lambda_r \, \mathbf{a}_{0,r} \otimes \mathbf{a}_{1,r} \otimes \mathbf{a}_{2,r} \otimes \cdots \otimes \mathbf{a}_{C,r}$

where $\lambda_r \in \mathbb{R}$ and $\mathbf{a}_{m,r} \in \mathbb{F}^{I_m}$ where $1 \le m \le M$. When the number of terms $R$ is minimal in the above expression, then $R$ is called the rank of the tensor, and the decomposition is often referred to as a (tensor) rank decomposition, minimal CP decomposition, or Canonical Polyadic Decomposition (CPD). If the number of terms is not minimal, then the above decomposition is often referred to as CANDECOMP/PARAFAC or a polyadic decomposition.
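The sum-of-rank-1-terms definition above translates directly into numpy; the following is an illustrative sketch (the shapes, factors, and variable names are choices made here, not from the article) showing that the CP model is just a weighted sum of outer products.

```python
import numpy as np

# Build an order-3 tensor from R = 2 rank-1 terms lambda_r * a_r (x) b_r (x) c_r.
rng = np.random.default_rng(0)
R = 2
lam = rng.standard_normal(R)
A = rng.standard_normal((4, R))   # factor vectors a_r stored as columns
B = rng.standard_normal((5, R))
C = rng.standard_normal((6, R))

# The whole CP sum, written compactly with einsum.
T = np.einsum("r,ir,jr,kr->ijk", lam, A, B, C)

# The same sum written as an explicit loop over the R rank-1 terms.
T_loop = sum(
    lam[r] * np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
    for r in range(R)
)
assert np.allclose(T, T_loop)
print(T.shape)  # (4, 5, 6)
```

The `einsum` form and the explicit loop are interchangeable; the former is simply the vectorized way of accumulating the $R$ outer products.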

Tensor rank

Contrary to the case of matrices, computing the rank of a tensor is NP-hard.[6] The only notable well-understood case consists of tensors in $F^{I_m} \otimes F^{I_n} \otimes F^{2}$, whose rank can be obtained from the Kronecker–Weierstrass normal form of the linear matrix pencil that the tensor represents.[7] A simple polynomial-time algorithm exists for certifying that a tensor is of rank 1, namely the higher-order singular value decomposition.

The rank of the tensor of zeros is zero by convention. The rank of a tensor $\mathbf{a}_1 \otimes \cdots \otimes \mathbf{a}_M$ is one, provided that $\mathbf{a}_m \in F^{I_m} \setminus \{\mathbf{0}\}$.
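Certifying rank 1 can be sketched numerically: a nonzero tensor has rank 1 exactly when every mode unfolding has matrix rank 1 (equivalently, when each unfolding's second singular value vanishes, which is what the HOSVD mentioned above reveals). The helper below is a hypothetical illustration, not a reference implementation.

```python
import numpy as np

def is_rank_one(T, tol=1e-10):
    """Numerically test rank 1 by checking that every mode-m unfolding
    of T has (numerical) matrix rank 1."""
    for m in range(T.ndim):
        unfolding = np.moveaxis(T, m, 0).reshape(T.shape[m], -1)
        s = np.linalg.svd(unfolding, compute_uv=False)
        if len(s) > 1 and s[1] > tol * s[0]:
            return False
    return True

a, b, c = np.array([1., 2.]), np.array([3., 0., 1.]), np.array([2., 5.])
pure = np.einsum("i,j,k->ijk", a, b, c)        # a (x) b (x) c: rank 1

d, e, f = np.array([0., 1.]), np.array([1., 1., 0.]), np.array([1., -1.])
mixed = pure + np.einsum("i,j,k->ijk", d, e, f)  # sum of two independent terms

print(is_rank_one(pure), is_rank_one(mixed))  # True False
```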

Field dependence

The rank of a tensor depends on the field over which the tensor is decomposed. It is known that some real tensors may admit a complex decomposition whose rank is strictly less than the rank of a real decomposition of the same tensor. As an example,[8] consider the following real tensor

$\mathcal{A} = \mathbf{x}_1 \otimes \mathbf{x}_2 \otimes \mathbf{x}_3 + \mathbf{x}_1 \otimes \mathbf{y}_2 \otimes \mathbf{y}_3 - \mathbf{y}_1 \otimes \mathbf{x}_2 \otimes \mathbf{y}_3 + \mathbf{y}_1 \otimes \mathbf{y}_2 \otimes \mathbf{x}_3$

where $\mathbf{x}_i, \mathbf{y}_j \in \mathbb{R}^2$. The rank of this tensor over the reals is known to be 3, while its complex rank is only 2 because it is the sum of a complex rank-1 tensor with its complex conjugate, namely

$\mathcal{A} = \frac{1}{2}\left(\bar{\mathbf{z}}_1 \otimes \mathbf{z}_2 \otimes \bar{\mathbf{z}}_3 + \mathbf{z}_1 \otimes \bar{\mathbf{z}}_2 \otimes \mathbf{z}_3\right)$

where $\mathbf{z}_k = \mathbf{x}_k + i\mathbf{y}_k$.
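The conjugate-pair identity can be checked numerically. The concrete vectors below (standard basis vectors for every mode) and the sign placement in the real four-term expansion are choices made here for illustration; the check only confirms that the real sum of four rank-1 terms equals the real part of the complex rank-2 expression.

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
outer3 = lambda u, v, w: np.einsum("i,j,k->ijk", u, v, w)

# Real expression: four rank-1 terms (the real rank of this tensor is 3).
A = (outer3(x, x, x) + outer3(x, y, y)
     - outer3(y, x, y) + outer3(y, y, x))

# Complex expression: a rank-1 tensor plus its conjugate, z_k = x_k + i*y_k.
z = x + 1j * y
A_c = 0.5 * (outer3(z.conj(), z, z.conj()) + outer3(z, z.conj(), z))

assert np.allclose(A_c.imag, 0)      # the sum is a real tensor
assert np.allclose(A, A_c.real)      # and it matches the real expansion
```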

In contrast, the rank of real matrices will never decrease under a field extension to $\mathbb{C}$: real matrix rank and complex matrix rank coincide for real matrices.

Generic rank

The generic rank $r(I_1,\ldots,I_M)$ is defined as the least rank $r$ such that the closure in the Zariski topology of the set of tensors of rank at most $r$ is the entire space $F^{I_1} \otimes \cdots \otimes F^{I_M}$. In the case of complex tensors, tensors of rank at most $r(I_1,\ldots,I_M)$ form a dense set $S$: every tensor in the aforementioned space is either of rank less than the generic rank, or it is the limit in the Euclidean topology of a sequence of tensors from $S$. In the case of real tensors, the set of tensors of rank at most $r(I_1,\ldots,I_M)$ only forms an open set of positive measure in the Euclidean topology. There may exist Euclidean-open sets of tensors of rank strictly higher than the generic rank. All ranks appearing on open sets in the Euclidean topology are called typical ranks. The smallest typical rank is called the generic rank; this definition applies to both complex and real tensors. The generic rank of tensor spaces was initially studied in 1983 by Volker Strassen.[9]

As an illustration of the above concepts, it is known that both 2 and 3 are typical ranks of $\mathbb{R}^2 \otimes \mathbb{R}^2 \otimes \mathbb{R}^2$ while the generic rank of $\mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \mathbb{C}^2$ is 2. Practically, this means that a randomly sampled real tensor (from a continuous probability measure on the space of tensors) of size $2 \times 2 \times 2$ will be a rank-1 tensor with probability zero, a rank-2 tensor with positive probability, and rank-3 with positive probability. On the other hand, a randomly sampled complex tensor of the same size will be a rank-1 tensor with probability zero, a rank-2 tensor with probability one, and a rank-3 tensor with probability zero. It is even known that the generic rank-3 real tensor in $\mathbb{R}^2 \otimes \mathbb{R}^2 \otimes \mathbb{R}^2$ will be of complex rank equal to 2.
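The two typical real ranks of a $2\times2\times2$ tensor can be observed experimentally. It is a known classification that, generically, the sign of the discriminant of the pencil $\det(T_0 + x\,T_1)$ of the two frontal slices (Cayley's hyperdeterminant up to sign) separates the cases: positive sign means real rank 2, negative sign means real rank 3. The sampling experiment below is a sketch under that assumption.

```python
import numpy as np

def hyperdet_2x2x2(T):
    """Discriminant of det(T[0] + x*T[1]); for a generic real 2x2x2
    tensor, > 0 indicates real rank 2 and < 0 indicates real rank 3."""
    d0 = np.linalg.det(T[0])
    d1 = np.linalg.det(T[1])
    mixed = np.linalg.det(T[0] + T[1]) - d0 - d1
    return mixed**2 - 4 * d0 * d1

rng = np.random.default_rng(0)
signs = [np.sign(hyperdet_2x2x2(rng.standard_normal((2, 2, 2))))
         for _ in range(2000)]
frac_rank2 = signs.count(1.0) / len(signs)
# Both ranks occur on sets of positive measure, so the fraction is
# strictly between 0 and 1 for i.i.d. Gaussian entries.
print(frac_rank2)
```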

The generic rank of tensor spaces depends on the distinction between balanced and unbalanced tensor spaces. A tensor space $F^{I_1} \otimes \cdots \otimes F^{I_M}$, where $I_1 \ge I_2 \ge \cdots \ge I_M$, is called unbalanced whenever

$I_1 > 1 + \prod_{m=2}^{M} I_m - \sum_{m=2}^{M} \left(I_m - 1\right)$

and it is called balanced otherwise.
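The unbalancedness condition is a one-line computation; the helper below (a hypothetical utility, with names chosen here) applies it after sorting the mode sizes non-increasingly.

```python
from math import prod

def is_unbalanced(dims):
    """Test I_1 > 1 + prod_{m>=2} I_m - sum_{m>=2} (I_m - 1),
    with the mode sizes sorted non-increasingly."""
    dims = sorted(dims, reverse=True)
    I1, rest = dims[0], dims[1:]
    return I1 > 1 + prod(rest) - sum(d - 1 for d in rest)

# A long first mode dominates the product of the others: unbalanced.
print(is_unbalanced([100, 2, 2]), is_unbalanced([2, 2, 2]))  # True False
```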

Unbalanced tensor spaces

When the first factor is very large with respect to the other factors in the tensor product, then the tensor space essentially behaves as a matrix space. The generic rank of tensors living in an unbalanced tensor space is known to equal

$r(I_1,\ldots,I_M) = \min\left\{ I_1,\ \prod_{m=2}^{M} I_m \right\}$

almost everywhere. More precisely, the rank of every tensor in an unbalanced tensor space $F^{I_1 \times \cdots \times I_M} \setminus Z$, where $Z$ is some indeterminate closed set in the Zariski topology, equals the above value.[10]

Balanced tensor spaces

The expected generic rank of tensors living in a balanced tensor space is equal to

$r_E(I_1,\ldots,I_M) = \left\lceil \frac{\Pi}{\Sigma+1} \right\rceil$

almost everywhere for complex tensors and on a Euclidean-open set for real tensors, where

$\Pi = \prod_{m=1}^{M} I_m \quad\text{and}\quad \Sigma = \sum_{m=1}^{M} \left(I_m - 1\right).$

More precisely, the rank of every tensor in $\mathbb{C}^{I_1 \times \cdots \times I_M} \setminus Z$, where $Z$ is some indeterminate closed set in the Zariski topology, is expected to equal the above value.[11] For real tensors, $r_E(I_1,\ldots,I_M)$ is the least rank that is expected to occur on a set of positive Euclidean measure. The value $r_E(I_1,\ldots,I_M)$ is often referred to as the expected generic rank of the tensor space $F^{I_1 \times \cdots \times I_M}$ because it is only conjecturally correct. It is known that the true generic rank always satisfies

$r(I_1,\ldots,I_M) \ge r_E(I_1,\ldots,I_M).$
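The expected generic rank $\lceil \Pi/(\Sigma+1) \rceil$ is a simple dimension count: a rank-$R$ decomposition has roughly $R(\Sigma+1)$ free parameters, which must cover the $\Pi$ entries of the tensor. A small sketch (hypothetical helper, names chosen here):

```python
from math import prod

def expected_generic_rank(dims):
    """r_E = ceil(Pi / (Sigma + 1)) with Pi = prod I_m and
    Sigma = sum (I_m - 1); a lower bound on the true generic rank."""
    Pi = prod(dims)
    Sigma = sum(d - 1 for d in dims)
    return -(-Pi // (Sigma + 1))  # exact integer ceiling division

print(expected_generic_rank([2, 2, 2]))  # 2, equal to the generic rank here
print(expected_generic_rank([3, 3, 3]))  # 4, but the true generic rank is 5:
                                         # n = 3 is an exceptional case
```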

The Abo–Ottaviani–Peterson conjecture[11] states that equality is expected, i.e., $r(I_1,\ldots,I_M) = r_E(I_1,\ldots,I_M)$, with the following exceptional cases:

  • $F^{(2m+1) \times (2m+1) \times 3}$ with $m = 1, 2, \ldots$
  • $F^{(m+1) \times (m+1) \times 2 \times 2}$ with $m = 2, 3, \ldots$

In each of these exceptional cases, the generic rank is known to be $r(I_1,\ldots,I_M) = r_E(I_1,\ldots,I_M) + 1$. Note that while the set of tensors of rank 3 in $F^{2 \times 2 \times 2 \times 2}$ is defective (of dimension 13 and not the expected 14), the generic rank in that space is still the expected one, 4. Similarly, the set of tensors of rank 5 in $F^{4 \times 4 \times 3}$ is defective (of dimension 44 and not the expected 45), but the generic rank in that space is still the expected 6.

The AOP conjecture has been proved completely in a number of special cases. Lickteig showed already in 1985 that $r(n,n,n) = r_E(n,n,n)$, provided that $n \ne 3$.[12] In 2011, a major breakthrough was established by Catalisano, Geramita, and Gimigliano, who proved that the dimension of the set of rank-$s$ tensors of format $2 \times 2 \times \cdots \times 2$ is the expected one, except for rank-3 tensors in the 4-factor case, yet the expected rank in that case is still 4. As a consequence, $r(2,2,\ldots,2) = r_E(2,2,\ldots,2)$ for all binary tensors.[13]

Maximum rank

The maximum rank that can be admitted by any of the tensors in a tensor space is unknown in general; even a conjecture about this maximum rank is missing. Presently, the best general upper bound states that the maximum rank $R_{\max}$ of tensors in $F^{I_1} \otimes \cdots \otimes F^{I_M}$ satisfies

$R_{\max} \le 2\, r(I_1,\ldots,I_M),$

where $r(I_1,\ldots,I_M)$ is the (least) generic rank of $F^{I_1} \otimes \cdots \otimes F^{I_M}$.[14] It is well-known that the foregoing inequality may be strict. For instance, the generic rank of tensors in $F^{2} \otimes F^{2} \otimes F^{2}$ is two, so that the above bound yields $R_{\max} \le 4$, while it is known that the maximum rank equals 3.[8]

Border rank

A rank-$R$ tensor $\mathcal{A}$ is called a border tensor if there exists a sequence of tensors of rank at most $r < R$ whose limit is $\mathcal{A}$. If $r$ is the least value for which such a convergent sequence exists, then it is called the border rank of $\mathcal{A}$. For order-2 tensors, i.e., matrices, rank and border rank always coincide; however, for tensors of order $M \ge 3$ they may differ. Border tensors were first studied in the context of fast approximate matrix multiplication algorithms by Bini, Lotti, and Romani in 1980.[15]

A classic example of a border tensor is the rank-3 tensor

$\mathcal{A} = \mathbf{x} \otimes \mathbf{x} \otimes \mathbf{y} + \mathbf{x} \otimes \mathbf{y} \otimes \mathbf{x} + \mathbf{y} \otimes \mathbf{x} \otimes \mathbf{x}.$

It can be approximated arbitrarily well by the following sequence of rank-2 tensors

$\mathcal{A}_n = n\left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right) \otimes \left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right) \otimes \left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right) - n\, \mathbf{x} \otimes \mathbf{x} \otimes \mathbf{x}$

as $n \to \infty$. Therefore, its border rank is 2, which is strictly less than its rank. When the two vectors are orthogonal, this example is also known as a W state.
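The convergence of the rank-2 sequence $n(\mathbf{x} + \mathbf{y}/n)^{\otimes 3} - n\,\mathbf{x}^{\otimes 3}$ to the rank-3 W tensor can be observed numerically; the concrete orthonormal vectors below are a choice made here for illustration.

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
outer3 = lambda u, v, w: np.einsum("i,j,k->ijk", u, v, w)

# The rank-3 W tensor.
A = outer3(x, x, y) + outer3(x, y, x) + outer3(y, x, x)

def approx(n):
    """Rank-2 approximant n*(x + y/n)^(x3) - n*x^(x3)."""
    u = x + y / n
    return n * outer3(u, u, u) - n * outer3(x, x, x)

errors = [np.linalg.norm(A - approx(n)) for n in (1, 10, 100, 1000)]
print(errors)  # decays roughly like sqrt(3)/n
```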

Properties

Identifiability

It follows from the definition of a pure tensor that $\mathbf{a}_1 \otimes \cdots \otimes \mathbf{a}_M = \mathbf{b}_1 \otimes \cdots \otimes \mathbf{b}_M$ if and only if there exist scalars $\lambda_m$ such that $\prod_{m=1}^{M} \lambda_m = 1$ and $\mathbf{a}_m = \lambda_m \mathbf{b}_m$ for all $m$. For this reason, the parameters $\{\mathbf{a}_m\}_{m=1}^{M}$ of a rank-1 tensor are called identifiable or essentially unique. A rank-$R$ tensor $\mathcal{A}$ is called identifiable if each of its tensor rank decompositions is the sum of the same set of $R$ distinct tensors $\mathcal{A}_1, \ldots, \mathcal{A}_R$, where the $\mathcal{A}_i$'s are of rank 1. An identifiable rank-$R$ tensor thus has only one essentially unique decomposition

$\mathcal{A} = \sum_{i=1}^{R} \mathcal{A}_i,$
and all $R!$ tensor rank decompositions of $\mathcal{A}$ can be obtained by permuting the order of the summands. Observe that in a tensor rank decomposition all the $\mathcal{A}_i$'s are distinct, for otherwise the rank of $\mathcal{A}$ would be at most $R-1$.

Generic identifiability

Order-2 tensors in $F^{I_1} \otimes F^{I_2}$, i.e., matrices, are not identifiable for $R > 1$. This follows essentially from the observation

$\mathcal{A} = \sum_{i=1}^{R} \mathbf{a}_i \otimes \mathbf{b}_i = A B^T = (A X)\left(B X^{-T}\right)^T = \sum_{i=1}^{R} \hat{\mathbf{a}}_i \otimes \hat{\mathbf{b}}_i,$
where $X$ is an invertible $R \times R$ matrix, $A = [\mathbf{a}_1 \cdots \mathbf{a}_R]$, $B = [\mathbf{b}_1 \cdots \mathbf{b}_R]$, $\hat{A} = [\hat{\mathbf{a}}_1 \cdots \hat{\mathbf{a}}_R] = AX$ and $\hat{B} = [\hat{\mathbf{b}}_1 \cdots \hat{\mathbf{b}}_R] = B X^{-T}$. It can be shown[16] that for every $X$ outside of a closed set in the Zariski topology, the decomposition on the right-hand side is a sum of a different set of rank-1 tensors than the decomposition on the left-hand side, entailing that order-2 tensors of rank $R > 1$ are generically not identifiable.

The situation changes completely for higher-order tensors in $F^{I_1} \otimes F^{I_2} \otimes \cdots \otimes F^{I_M}$ with $M \ge 3$. For simplicity in notation, assume without loss of generality that the factors are ordered such that $I_1 \ge I_2 \ge \cdots \ge I_M$, and consider the set of tensors of rank bounded by $R$. Then, the following statement was proved to be correct using a computer-assisted proof for all spaces of bounded dimension,[17] and it is conjectured to be valid in general:[17][18][19]

There exists a closed set $Z$ in the Zariski topology such that every rank-$R$ tensor outside of $Z$ is identifiable (the set of rank-$R$ tensors is called generically identifiable in this case), unless one of the following exceptional cases holds:

  1. The rank is too large:  ;
  2. The space is identifiability-unbalanced, i.e.,  , and the rank is too large:  ;
  3. The space is the defective case   and the rank is  ;
  4. The space is the defective case  , where  , and the rank is  ;
  5. The space is   and the rank is  ;
  6. The space is   and the rank is  ; or
  7. The space is   and the rank is  .
  8. The space is perfect, i.e.,   is an integer, and the rank is  .

In these exceptional cases, the generic (and also minimum) number of complex decompositions is

  • proved to be infinite in the first 4 cases;
  • proved to be two in case 5;[20]
  • expected[21] to be six in case 6;
  • proved to be two in case 7;[22] and
  • expected[21] to be at least two in case 8 with exception of the two identifiable cases   and  .

In summary, the generic tensor of order   and rank   that is not identifiability-unbalanced is expected to be identifiable (modulo the exceptional cases in small spaces).

Ill-posedness of the standard approximation problem

The rank approximation problem asks for the rank-$r$ decomposition closest (in the usual Euclidean topology) to some rank-$R$ tensor $\mathcal{A}$, where $r < R$. That is, one seeks to solve

$\min_{\operatorname{rank}(\mathcal{B}) \le r} \left\| \mathcal{A} - \mathcal{B} \right\|_F,$

where $\|\cdot\|_F$ is the Frobenius norm.

It was shown in a 2008 paper by de Silva and Lim[8] that the above standard approximation problem may be ill-posed. A solution to the aforementioned problem may sometimes not exist because the set over which one optimizes is not closed. As such, a minimizer may not exist, even though an infimum would exist. In particular, it is known that certain so-called border tensors may be approximated arbitrarily well by a sequence of tensors of rank at most $r$, even though the limit of the sequence converges to a tensor of rank strictly higher than $r$. The rank-3 tensor

$\mathcal{A} = \mathbf{x} \otimes \mathbf{x} \otimes \mathbf{y} + \mathbf{x} \otimes \mathbf{y} \otimes \mathbf{x} + \mathbf{y} \otimes \mathbf{x} \otimes \mathbf{x}$

can be approximated arbitrarily well by the following sequence of rank-2 tensors

$\mathcal{A}_n = n\left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right) \otimes \left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right) \otimes \left(\mathbf{x} + \frac{1}{n}\mathbf{y}\right) - n\, \mathbf{x} \otimes \mathbf{x} \otimes \mathbf{x}$

as $n \to \infty$. This example neatly illustrates the general principle that a sequence of rank-$r$ tensors that converges to a tensor of strictly higher rank needs to admit at least two individual rank-1 terms whose norms become unbounded. Stated formally, whenever a sequence

$\mathcal{A}_n = \sum_{i=1}^{r} \mathbf{a}_{1,i}^{n} \otimes \cdots \otimes \mathbf{a}_{M,i}^{n}$

has the property that $\mathcal{A}_n \to \mathcal{A}$ (in the Euclidean topology) as $n \to \infty$, then there should exist at least $1 \le i \ne j \le r$ such that

$\left\| \mathbf{a}_{1,i}^{n} \otimes \cdots \otimes \mathbf{a}_{M,i}^{n} \right\| \to \infty \quad\text{and}\quad \left\| \mathbf{a}_{1,j}^{n} \otimes \cdots \otimes \mathbf{a}_{M,j}^{n} \right\| \to \infty$

as $n \to \infty$. This phenomenon is often encountered when attempting to approximate a tensor using numerical optimization algorithms. It is sometimes called the problem of diverging components. It was, in addition, shown that a random low-rank tensor over the reals may not admit a best rank-2 approximation with positive probability, leading to the understanding that the ill-posedness problem is an important consideration when employing the tensor rank decomposition.
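The diverging-components phenomenon can be made concrete with the standard rank-2 approximants $n(\mathbf{x} + \mathbf{y}/n)^{\otimes 3} - n\,\mathbf{x}^{\otimes 3}$ of the W tensor (an assumed concrete instance of the sequences discussed above): the approximation error shrinks while the norm of each individual rank-1 term grows without bound.

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
outer3 = lambda u, v, w: np.einsum("i,j,k->ijk", u, v, w)
W = outer3(x, x, y) + outer3(x, y, x) + outer3(y, x, x)

errs, term_norms = [], []
for n in (1, 10, 100):
    u = x + y / n
    term1 = n * outer3(u, u, u)    # first rank-1 term
    term2 = -n * outer3(x, x, x)   # second rank-1 term
    errs.append(np.linalg.norm(W - (term1 + term2)))
    term_norms.append(np.linalg.norm(term1))

# Approximation error decreases, yet each term's norm grows roughly like n.
print(errs, term_norms)
```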

A common partial solution to the ill-posedness problem consists of imposing an additional inequality constraint that bounds the norm of the individual rank-1 terms by some constant. Other constraints that result in a closed set, and thus a well-posed optimization problem, include imposing positivity or a bounded inner product strictly less than unity between the rank-1 terms appearing in the sought decomposition.

Calculating the CPD

Alternating algorithms:

  • alternating least squares (ALS)
  • alternating slice-wise diagonalisation (ASD)
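The alternating least squares idea can be sketched in a few lines of numpy: each factor matrix is updated in turn by solving a linear least-squares problem against the matching unfolding of the tensor, so the fit error never increases. This is a hypothetical minimal implementation for third-order tensors (the function names, unfolding conventions, and the absence of normalization are simplifications made here), not a reference algorithm from the article.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x R) and C (K x R) -> (J*K x R)."""
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def cp_als(T, R, iters=200, seed=0):
    """Alternating least squares for a rank-R CP model of a 3-way tensor T."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    T1 = T.reshape(I, J * K)                     # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    for _ in range(iters):
        # Each update solves a linear least-squares subproblem exactly.
        A = np.linalg.lstsq(khatri_rao(B, C), T1.T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T2.T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T3.T, rcond=None)[0].T
    return A, B, C

# Usage on an exactly rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (3, 4, 5))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
A, B, C = cp_als(T, R=2)
T_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(rel_err)  # typically small for this easy exact-rank case
```

Note that, per the ill-posedness discussion above, ALS on real data may exhibit diverging components or swamps; practical implementations add normalization of the factor columns and a convergence test.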

Direct algorithms:

General optimization algorithms:

General polynomial system solving algorithms:

Applications

In machine learning, the CP-decomposition is the central ingredient in learning probabilistic latent variable models via the technique of moment-matching. For example, consider the multi-view model,[32] which is a probabilistic latent variable model. In this model, the generation of samples is posited as follows: there exists a hidden random variable that is not observed directly, given which there are several conditionally independent random variables known as the different "views" of the hidden variable. For simplicity, assume there are three symmetrical views $\mathbf{x}$ of a $k$-state categorical hidden variable $h$. Then the empirical third moment of this latent variable model can be written as $T = \sum_{j=1}^{k} \Pr(h=j)\, \mathbb{E}[\mathbf{x} \mid h=j]^{\otimes 3}$.

In applications such as topic modeling, this can be interpreted as the co-occurrence of words in a document. Then the eigenvalues of this empirical moment tensor can be interpreted as the probability of choosing a specific topic, and each column of the factor matrix corresponds to the probabilities of words in the vocabulary for the corresponding topic.
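The moment tensor of the multi-view model is a symmetric CP decomposition, which is easy to construct and sanity-check in numpy. The weights `w` and the factor matrix `M` below are illustrative names and randomly generated stand-ins for topic probabilities and per-topic word distributions, not quantities from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 3, 10                             # 3 topics, 10-word vocabulary
w = rng.dirichlet(np.ones(k))            # topic probabilities, sum to 1
M = rng.dirichlet(np.ones(d), size=k).T  # d x k; each column is a word distribution

# Third moment T = sum_j w_j * mu_j (x) mu_j (x) mu_j, mu_j = j-th column of M.
T = np.einsum("j,ij,kj,lj->ikl", w, M, M, M)

# Sanity checks: T is symmetric in its three modes, and its entries sum
# to 1 (it is a joint distribution over word triples).
assert np.allclose(T, T.transpose(1, 0, 2))
assert np.allclose(T, T.transpose(0, 2, 1))
assert np.allclose(T.sum(), 1.0)
```

Recovering `w` and the columns of `M` from `T` is exactly the symmetric CP decomposition problem that the moment-matching technique solves.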

See also

References

  1. ^ Papalexakis, Evangelos. "Automatic Unsupervised Tensor Mining with Quality Assessment" (PDF).
  2. ^ F. L. Hitchcock (1927). "The expression of a tensor or a polyadic as a sum of products". Journal of Mathematics and Physics. 6 (1–4): 164–189. doi:10.1002/sapm192761164.
  3. ^ a b Carroll, J. D.; Chang, J. (1970). "Analysis of individual differences in multidimensional scaling via an n-way generalization of 'Eckart–Young' decomposition". Psychometrika. 35 (3): 283–319. doi:10.1007/BF02310791. S2CID 50364581.
  4. ^ a b Harshman, Richard A. (1970). "Foundations of the PARAFAC procedure: Models and conditions for an 'explanatory' multi-modal factor analysis" (PDF). UCLA Working Papers in Phonetics. 16: 84. No. 10,085. Archived from the original (PDF) on October 10, 2004.
  5. ^ Gujral, Ekta. "Aptera: Automatic PARAFAC2 Tensor Analysis" (PDF). ASONAM 2022.
  6. ^ Hillar, C. J.; Lim, L. (2013). "Most tensor problems are NP-Hard". Journal of the ACM. 60 (6): 1–39. arXiv:0911.1393. doi:10.1145/2512329. S2CID 1460452.
  7. ^ Landsberg, J. M. (2012). Tensors: Geometry and Applications. AMS.
  8. ^ a b c de Silva, V.; Lim, L. (2008). "Tensor Rank and the Ill-Posedness of the Best Low-Rank Approximation Problem". SIAM Journal on Matrix Analysis and Applications. 30 (3): 1084–1127. arXiv:math/0607647. doi:10.1137/06066518x. S2CID 7159193.
  9. ^ Strassen, V. (1983). "Rank and optimal computation of generic tensors". Linear Algebra and Its Applications. 52/53: 645–685. doi:10.1016/0024-3795(83)80041-x.
  10. ^ Catalisano, M. V.; Geramita, A. V.; Gimigliano, A. (2002). "Ranks of tensors, secant varieties of Segre varieties and fat points". Linear Algebra and Its Applications. 355 (1–3): 263–285. doi:10.1016/s0024-3795(02)00352-x.
  11. ^ a b Abo, H.; Ottaviani, G.; Peterson, C. (2009). "Induction for secant varieties of Segre varieties". Transactions of the American Mathematical Society. 361 (2): 767–792. arXiv:math/0607191. doi:10.1090/s0002-9947-08-04725-9. S2CID 59069541.
  12. ^ Lickteig, Thomas (1985). "Typical tensorial rank". Linear Algebra and Its Applications. 69: 95–120. doi:10.1016/0024-3795(85)90070-9.
  13. ^ Catalisano, M. V.; Geramita, A. V.; Gimigliano, A. (2011). "Secant varieties of ℙ¹ × ··· × ℙ¹ (n-times) are not defective for n ≥ 5". Journal of Algebraic Geometry. 20 (2): 295–327. doi:10.1090/s1056-3911-10-00537-0.
  14. ^ Blekherman, G.; Teitler, Z. (2015). "On maximum, typical and generic ranks". Mathematische Annalen. 362 (3–4): 1–11. arXiv:1402.2371. doi:10.1007/s00208-014-1150-3. S2CID 14309435.
  15. ^ Bini, D.; Lotti, G.; Romani, F. (1980). "Approximate solutions for the bilinear form computational problem". SIAM Journal on Scientific Computing. 9 (4): 692–697. doi:10.1137/0209053.
  16. ^ Harris, Joe (1992). Algebraic Geometry: A First Course. Graduate Texts in Mathematics. Vol. 133. doi:10.1007/978-1-4757-2189-8. ISBN 978-1-4419-3099-6.
  17. ^ a b Chiantini, L.; Ottaviani, G.; Vannieuwenhoven, N. (2014-01-01). "An Algorithm For Generic and Low-Rank Specific Identifiability of Complex Tensors". SIAM Journal on Matrix Analysis and Applications. 35 (4): 1265–1287. arXiv:1403.4157. doi:10.1137/140961389. ISSN 0895-4798. S2CID 28478606.
  18. ^ Bocci, Cristiano; Chiantini, Luca; Ottaviani, Giorgio (2014-12-01). "Refined methods for the identifiability of tensors". Annali di Matematica Pura ed Applicata. 193 (6): 1691–1702. arXiv:1303.6915. doi:10.1007/s10231-013-0352-8. ISSN 0373-3114. S2CID 119721371.
  19. ^ Chiantini, L.; Ottaviani, G.; Vannieuwenhoven, N. (2017-01-01). "Effective Criteria for Specific Identifiability of Tensors and Forms". SIAM Journal on Matrix Analysis and Applications. 38 (2): 656–681. arXiv:1609.00123. doi:10.1137/16m1090132. ISSN 0895-4798. S2CID 23983015.
  20. ^ Chiantini, L.; Ottaviani, G. (2012-01-01). "On Generic Identifiability of 3-Tensors of Small Rank". SIAM Journal on Matrix Analysis and Applications. 33 (3): 1018–1037. arXiv:1103.2696. doi:10.1137/110829180. ISSN 0895-4798. S2CID 43781880.
  21. ^ a b Hauenstein, J. D.; Oeding, L.; Ottaviani, G.; Sommese, A. J. (2016). "Homotopy techniques for tensor decomposition and perfect identifiability". J. Reine Angew. Math. 2019 (753): 1–22. arXiv:1501.00090. doi:10.1515/crelle-2016-0067. S2CID 16324593.
  22. ^ Bocci, Cristiano; Chiantini, Luca (2013). "On the identifiability of binary Segre products". Journal of Algebraic Geometry. 22 (1): 1–11. arXiv:1105.3643. doi:10.1090/s1056-3911-2011-00592-4. ISSN 1056-3911. S2CID 119671913.
  23. ^ Domanov, Ignat; Lathauwer, Lieven De (January 2014). "Canonical Polyadic Decomposition of Third-Order Tensors: Reduction to Generalized Eigenvalue Decomposition". SIAM Journal on Matrix Analysis and Applications. 35 (2): 636–660. arXiv:1312.2848. doi:10.1137/130916084. ISSN 0895-4798. S2CID 14851072.
  24. ^ Domanov, Ignat; De Lathauwer, Lieven (January 2017). "Canonical polyadic decomposition of third-order tensors: Relaxed uniqueness conditions and algebraic algorithm". Linear Algebra and Its Applications. 513: 342–375. arXiv:1501.07251. doi:10.1016/j.laa.2016.10.019. ISSN 0024-3795. S2CID 119729978.
  25. ^ Faber, Nicolaas (Klaas) M.; Ferré, Joan; Boqué, Ricard (January 2001). "Iteratively reweighted generalized rank annihilation method". Chemometrics and Intelligent Laboratory Systems. 55 (1–2): 67–90. doi:10.1016/s0169-7439(00)00117-9. ISSN 0169-7439.
  26. ^ Leurgans, S. E.; Ross, R. T.; Abel, R. B. (October 1993). "A Decomposition for Three-Way Arrays". SIAM Journal on Matrix Analysis and Applications. 14 (4): 1064–1083. doi:10.1137/0614071. ISSN 0895-4798.
  27. ^ Lorber, Avraham. (October 1985). "Features of quantifying chemical composition from two-dimensional data array by the rank annihilation factor analysis method". Analytical Chemistry. 57 (12): 2395–2397. doi:10.1021/ac00289a052. ISSN 0003-2700.
  28. ^ Sanchez, Eugenio; Kowalski, Bruce R. (January 1990). "Tensorial resolution: A direct trilinear decomposition". Journal of Chemometrics. 4 (1): 29–45. doi:10.1002/cem.1180040105. ISSN 0886-9383. S2CID 120459386.
  29. ^ Sands, Richard; Young, Forrest W. (March 1980). "Component models for three-way data: An alternating least squares algorithm with optimal scaling features". Psychometrika. 45 (1): 39–67. doi:10.1007/bf02293598. ISSN 0033-3123. S2CID 121003817.
  30. ^ Bernardi, A.; Brachat, J.; Comon, P.; Mourrain, B. (May 2013). "General tensor decomposition, moment matrices and applications". Journal of Symbolic Computation. 52: 51–71. arXiv:1105.1229. doi:10.1016/j.jsc.2012.05.012. ISSN 0747-7171. S2CID 14181289.
  31. ^ Bernardi, Alessandra; Daleo, Noah S.; Hauenstein, Jonathan D.; Mourrain, Bernard (December 2017). "Tensor decomposition and homotopy continuation". Differential Geometry and Its Applications. 55: 78–105. arXiv:1512.04312. doi:10.1016/j.difgeo.2017.07.009. ISSN 0926-2245. S2CID 119147635.
  32. ^ Anandkumar, Animashree; Ge, Rong; Hsu, Daniel; Kakade, Sham M; Telgarsky, Matus (2014). "Tensor decompositions for learning latent variable models". The Journal of Machine Learning Research. 15 (1): 2773–2832.

Further reading

  • Kolda, Tamara G.; Bader, Brett W. (2009). "Tensor Decompositions and Applications". SIAM Rev. 51 (3): 455–500. Bibcode:2009SIAMR..51..455K. CiteSeerX 10.1.1.153.2059. doi:10.1137/07070111X. S2CID 16074195.
  • Landsberg, Joseph M. (2012). Tensors: Geometry and Applications. AMS.

External links

  • PARAFAC Tutorial
  • Parallel Factor Analysis (PARAFAC)
  • FactoMineR (free exploratory multivariate data analysis software linked to R)

tensor, rank, decomposition, multilinear, algebra, tensor, rank, decomposition, displaystyle, rank, decomposition, tensor, decomposition, tensor, terms, minimum, displaystyle, displaystyle, rank, tensors, this, open, problem, clarification, needed, canonical, . In multilinear algebra the tensor rank decomposition 1 or the r a n k R displaystyle rank R decomposition of a tensor is the decomposition of a tensor in terms of a sum of minimum R displaystyle R r a n k 1 displaystyle rank 1 tensors This is an open problem clarification needed Canonical polyadic decomposition CPD is a variant of the rank decomposition which computes the best fitting K displaystyle K r a n k 1 displaystyle rank 1 terms for a user specified K displaystyle K The CP decomposition has found some applications in linguistics and chemometrics The CP rank was introduced by Frank Lauren Hitchcock in 1927 2 and later rediscovered several times notably in psychometrics 3 4 The CP decomposition is referred to as CANDECOMP 3 PARAFAC 4 or CANDECOMP PARAFAC CP Note that the PARAFAC2 rank decomposition is a variation of the CP decomposition 5 Another popular generalization of the matrix SVD known as the higher order singular value decomposition computes orthonormal mode matrices and has found applications in econometrics signal processing computer vision computer graphics and psychometrics Contents 1 Notation 2 Definition 3 Tensor rank 3 1 Field dependence 3 2 Generic rank 3 2 1 Unbalanced tensor spaces 3 2 2 Balanced tensor spaces 3 3 Maximum rank 3 4 Border rank 4 Properties 4 1 Identifiability 4 1 1 Generic identifiability 4 2 Ill posedness of the standard approximation problem 5 Calculating the CPD 6 Applications 7 See also 8 References 9 Further reading 10 External linksNotation editA scalar variable is denoted by lower case italic letters a displaystyle a nbsp and an upper bound scalar is denoted by an upper case italic letter A displaystyle A nbsp Indices are denoted by a combination of lowercase 
and upper case italic letters 1 i I displaystyle 1 leq i leq I nbsp Multiple indices that one might encounter when referring to the multiple modes of a tensor are conveniently denoted by 1 i m I m displaystyle 1 leq i m leq I m nbsp where 1 m M displaystyle 1 leq m leq M nbsp A vector is denoted by a lower case bold Times Roman a displaystyle mathbf a nbsp and a matrix is denoted by bold upper case letters A displaystyle mathbf A nbsp A higher order tensor is denoted by calligraphic letters A displaystyle mathcal A nbsp An element of an M displaystyle M nbsp order tensor A C I 1 I 2 I m I M displaystyle mathcal A in mathbb C I 1 times I 2 times dots I m times dots I M nbsp is denoted by a i 1 i 2 i m i M displaystyle a i 1 i 2 dots i m dots i M nbsp or A i 1 i 2 i m i M displaystyle mathcal A i 1 i 2 dots i m dots i M nbsp Definition editA data tensor A F I 0 I 1 I C displaystyle mathcal A in mathbb F I 0 times I 1 times ldots times I C nbsp is a collection of multivariate observations organized into a M way array where M C 1 Every tensor may be represented with a suitably large R displaystyle R nbsp as a linear combination of r displaystyle r nbsp rank 1 tensors A r 1 R l r a 0 r a 1 r a 2 r a c r a C r displaystyle mathcal A sum r 1 R lambda r mathbf a 0 r otimes mathbf a 1 r otimes mathbf a 2 r dots otimes mathbf a c r otimes cdots otimes mathbf a C r nbsp where l r R displaystyle lambda r in mathbb R nbsp and a m r F I m displaystyle mathbf a m r in mathbb F I m nbsp where 1 m M displaystyle 1 leq m leq M nbsp When the number of terms R displaystyle R nbsp is minimal in the above expression then R displaystyle R nbsp is called the rank of the tensor and the decomposition is often referred to as a tensor rank decomposition minimal CP decomposition or Canonical Polyadic Decomposition CPD If the number of terms is not minimal then the above decomposition is often referred to as CANDECOMP PARAFAC Polyadic decomposition Tensor rank editContrary to the case of 
matrices computing the rank of a tensor is NP hard 6 The only notable well understood case consists of tensors in F I m F I n F 2 displaystyle F I m otimes F I n otimes F 2 nbsp whose rank can be obtained from the Kronecker Weierstrass normal form of the linear matrix pencil that the tensor represents 7 A simple polynomial time algorithm exists for certifying that a tensor is of rank 1 namely the higher order singular value decomposition The rank of the tensor of zeros is zero by convention The rank of a tensor a 1 a M displaystyle mathbf a 1 otimes cdots otimes mathbf a M nbsp is one provided that a m F I m 0 displaystyle mathbf a m in F I m setminus 0 nbsp Field dependence edit The rank of a tensor depends on the field over which the tensor is decomposed It is known that some real tensors may admit a complex decomposition whose rank is strictly less than the rank of a real decomposition of the same tensor As an example 8 consider the following real tensor A x 1 x 2 x 3 x 1 y 2 y 3 y 1 x 2 y 3 y 1 y 2 x 3 displaystyle mathcal A mathbf x 1 otimes mathbf x 2 otimes mathbf x 3 mathbf x 1 otimes mathbf y 2 otimes mathbf y 3 mathbf y 1 otimes mathbf x 2 otimes mathbf y 3 mathbf y 1 otimes mathbf y 2 otimes mathbf x 3 nbsp where x i y j R 2 displaystyle mathbf x i mathbf y j in mathbb R 2 nbsp The rank of this tensor over the reals is known to be 3 while its complex rank is only 2 because it is the sum of a complex rank 1 tensor with its complex conjugate namely A 1 2 z 1 z 2 z 3 z 1 z 2 z 3 displaystyle mathcal A frac 1 2 bar mathbf z 1 otimes mathbf z 2 otimes bar mathbf z 3 mathbf z 1 otimes bar mathbf z 2 otimes mathbf z 3 nbsp where z k x k i y k displaystyle mathbf z k mathbf x k i mathbf y k nbsp In contrast the rank of real matrices will never decrease under a field extension to C displaystyle mathbb C nbsp real matrix rank and complex matrix rank coincide for real matrices Generic rank edit The generic rank r I 1 I M displaystyle r I 1 ldots I M nbsp is defined 
as the least rank r displaystyle r nbsp such that the closure in the Zariski topology of the set of tensors of rank at most r displaystyle r nbsp is the entire space F I 1 F I M displaystyle F I 1 otimes cdots otimes F I M nbsp In the case of complex tensors tensors of rank at most r I 1 I M displaystyle r I 1 ldots I M nbsp form a dense set S displaystyle S nbsp every tensor in the aforementioned space is either of rank less than the generic rank or it is the limit in the Euclidean topology of a sequence of tensors from S displaystyle S nbsp In the case of real tensors the set of tensors of rank at most r I 1 I M displaystyle r I 1 ldots I M nbsp only forms an open set of positive measure in the Euclidean topology There may exist Euclidean open sets of tensors of rank strictly higher than the generic rank All ranks appearing on open sets in the Euclidean topology are called typical ranks The smallest typical rank is called the generic rank this definition applies to both complex and real tensors The generic rank of tensor spaces was initially studied in 1983 by Volker Strassen 9 As an illustration of the above concepts it is known that both 2 and 3 are typical ranks of R 2 R 2 R 2 displaystyle mathbb R 2 otimes mathbb R 2 otimes mathbb R 2 nbsp while the generic rank of C 2 C 2 C 2 displaystyle mathbb C 2 otimes mathbb C 2 otimes mathbb C 2 nbsp is 2 Practically this means that a randomly sampled real tensor from a continuous probability measure on the space of tensors of size 2 2 2 displaystyle 2 times 2 times 2 nbsp will be a rank 1 tensor with probability zero a rank 2 tensor with positive probability and rank 3 with positive probability On the other hand a randomly sampled complex tensor of the same size will be a rank 1 tensor with probability zero a rank 2 tensor with probability one and a rank 3 tensor with probability zero It is even known that the generic rank 3 real tensor in R 2 R 2 R 2 displaystyle mathbb R 2 otimes mathbb R 2 otimes mathbb R 2 nbsp 
will be of complex rank equal to 2.

The generic rank of tensor spaces depends on the distinction between balanced and unbalanced tensor spaces. A tensor space $F^{I_1} \otimes \cdots \otimes F^{I_M}$, where $I_1 \ge I_2 \ge \cdots \ge I_M$, is called unbalanced whenever

$$I_1 > 1 + \prod_{m=2}^{M} I_m - \sum_{m=2}^{M} (I_m - 1),$$

and it is called balanced otherwise.

Unbalanced tensor spaces

When the first factor is very large with respect to the other factors in the tensor product, the tensor space essentially behaves as a matrix space. The generic rank of tensors living in an unbalanced tensor space is known to equal

$$r(I_1, \ldots, I_M) = \min\left\{ I_1,\ \prod_{m=2}^{M} I_m \right\}$$

almost everywhere. More precisely, the rank of every tensor in $F^{I_1 \times \cdots \times I_M} \setminus Z$, where $Z$ is some closed set in the Zariski topology, equals the above value.[10]

Balanced tensor spaces

The expected generic rank of tensors living in a balanced tensor space is equal to

$$r_E(I_1, \ldots, I_M) = \left\lceil \frac{\Pi}{\Sigma + 1} \right\rceil$$

almost everywhere for complex tensors, and on a Euclidean-open set for real tensors, where

$$\Pi = \prod_{m=1}^{M} I_m \quad\text{and}\quad \Sigma = \sum_{m=1}^{M} (I_m - 1).$$

More precisely, the rank of every tensor in $\mathbb{C}^{I_1 \times \cdots \times I_M} \setminus Z$, where $Z$ is some closed set in the Zariski topology, is expected to equal the above value.[11] For real tensors, $r_E(I_1, \ldots, I_M)$ is the least rank that is expected to occur on a set of positive Euclidean measure. The value $r_E(I_1, \ldots, I_M)$ is often referred to as the expected generic rank of the tensor space $F^{I_1 \times \cdots \times I_M}$ because it is only conjecturally correct. It is known that the true generic rank always satisfies

$$r(I_1, \ldots, I_M) \ge r_E(I_1, \ldots, I_M).$$

The Abo–Ottaviani–Peterson (AOP) conjecture[11] states that equality is expected, i.e. $r(I_1, \ldots, I_M) = r_E(I_1, \ldots, I_M)$, with the following exceptional cases:

- $F^{(2m+1) \times (2m+1) \times 3}$ with $m = 1, 2, \ldots$
- $F^{(m+1) \times (m+1) \times 2 \times 2}$ with $m = 2, 3, \ldots$

In each of these exceptional cases, the generic rank is known to be $r(I_1, \ldots, I_M) = r_E(I_1, \ldots, I_M) + 1$. Note that while the set of tensors of rank 3 in $F^{2 \times 2 \times 2 \times 2}$ is defective (it has dimension 13, rather than the expected 14), the generic rank in that space is still the expected one, 4. Similarly, the set of tensors of rank 5 in $F^{4 \times 4 \times 3}$ is defective (it has dimension 44, rather than the expected 45), but the generic rank in that space is still the expected 6.

The AOP conjecture has been proved completely in a number of special cases. Lickteig showed already in 1985 that $r(n, n, n) = r_E(n, n, n)$, provided that $n \ne 3$.[12] In 2011, a major breakthrough was established by Catalisano, Geramita, and Gimigliano, who proved that the dimension of the set of rank-$s$ tensors of format $2 \times 2 \times \cdots \times 2$ is the expected one, except for rank-3 tensors in the 4-factor case (yet the expected rank in that case is still 4). As a consequence, $r(2, 2, \ldots, 2) = r_E(2, 2, \ldots, 2)$ for all binary tensors.[13]
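The expected generic rank formula above is straightforward to evaluate. The following Python sketch (the function name `expected_generic_rank` is illustrative, not standard; only the standard library is used) computes $r_E(I_1, \ldots, I_M) = \lceil \Pi / (\Sigma + 1) \rceil$ and reproduces the two values quoted above for $F^{2 \times 2 \times 2 \times 2}$ and $F^{4 \times 4 \times 3}$:

```python
from math import ceil, prod

def expected_generic_rank(dims):
    """Expected generic rank r_E = ceil(Pi / (Sigma + 1)) of the tensor
    space F^{I_1 x ... x I_M}. (Illustrative helper, not a standard API.)"""
    Pi = prod(dims)                    # Pi: product of all mode sizes I_m
    Sigma = sum(I - 1 for I in dims)   # Sigma: sum of (I_m - 1)
    return ceil(Pi / (Sigma + 1))

print(expected_generic_rank((2, 2, 2, 2)))  # 4, as quoted for F^{2x2x2x2}
print(expected_generic_rank((4, 4, 3)))     # 6, as quoted for F^{4x4x3}
```

Keep in mind that, per the AOP conjecture and its exceptional cases, this is only the expected value; in the exceptional spaces listed above the true generic rank is one larger.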
Maximum rank

The maximum rank that can be admitted by any of the tensors in a tensor space is unknown in general; even a conjecture about this maximum rank is missing. Presently, the best general upper bound states that the maximum rank $r_{\max}(I_1, \ldots, I_M)$ of $F^{I_1} \otimes \cdots \otimes F^{I_M}$, where $I_1 \ge I_2 \ge \cdots \ge I_M$, satisfies

$$r_{\max}(I_1, \ldots, I_M) \le \min\left\{ \prod_{m=2}^{M} I_m,\ 2 \cdot r(I_1, \ldots, I_M) \right\},$$

where $r(I_1, \ldots, I_M)$ is the (least) generic rank of $F^{I_1} \otimes \cdots \otimes F^{I_M}$.[14] It is well known that the foregoing inequality may be strict. For instance, the generic rank of tensors in $\mathbb{R}^{2 \times 2 \times 2}$ is two, so that the above bound yields $r_{\max}(2, 2, 2) \le 4$, while it is known that the maximum rank equals 3.[8]

Border rank

A rank-$s$ tensor $\mathcal{A}$ is called a border tensor if there exists a sequence of tensors of rank at most $r < s$ whose limit is $\mathcal{A}$. If $r$ is the least value for which such a convergent sequence exists, then it is called the border rank of $\mathcal{A}$. For order-2 tensors, i.e., matrices, rank and border rank always coincide; however, for tensors of order $\ge 3$ they may differ. Border tensors were first studied in the context of fast approximate matrix multiplication algorithms by Bini, Lotti, and Romani in 1980.[15]

A classic example of a border tensor is the rank-3 tensor

$$\mathcal{A} = \mathbf{u} \otimes \mathbf{u} \otimes \mathbf{v} + \mathbf{u} \otimes \mathbf{v} \otimes \mathbf{u} + \mathbf{v} \otimes \mathbf{u} \otimes \mathbf{u}, \quad \text{with } \|\mathbf{u}\| = \|\mathbf{v}\| = 1 \text{ and } \langle \mathbf{u}, \mathbf{v} \rangle \ne 1.$$

It can be approximated arbitrarily well by the following sequence of rank-2 tensors:

$$\begin{aligned} \mathcal{A}_m &= m \left(\mathbf{u} + \tfrac{1}{m}\mathbf{v}\right) \otimes \left(\mathbf{u} + \tfrac{1}{m}\mathbf{v}\right) \otimes \left(\mathbf{u} + \tfrac{1}{m}\mathbf{v}\right) - m\, \mathbf{u} \otimes \mathbf{u} \otimes \mathbf{u} \\ &= \mathbf{u} \otimes \mathbf{u} \otimes \mathbf{v} + \mathbf{u} \otimes \mathbf{v} \otimes \mathbf{u} + \mathbf{v} \otimes \mathbf{u} \otimes \mathbf{u} + \tfrac{1}{m} \left( \mathbf{u} \otimes \mathbf{v} \otimes \mathbf{v} + \mathbf{v} \otimes \mathbf{u} \otimes \mathbf{v} + \mathbf{v} \otimes \mathbf{v} \otimes \mathbf{u} \right) + \tfrac{1}{m^2}\, \mathbf{v} \otimes \mathbf{v} \otimes \mathbf{v} \end{aligned}$$

as $m \to \infty$. Therefore, its border rank is 2, which is strictly less than its rank. When the two vectors are orthogonal, this example is also known as a W state.

Properties

Identifiability

It follows from the definition of a pure tensor that $\mathcal{A} = \mathbf{a}_1 \otimes \mathbf{a}_2 \otimes \cdots \otimes \mathbf{a}_M = \mathbf{b}_1 \otimes \mathbf{b}_2 \otimes \cdots \otimes \mathbf{b}_M$ if and only if there exist $\lambda_m$ such that $\lambda_1 \lambda_2 \cdots \lambda_M = 1$ and $\mathbf{a}_m = \lambda_m \mathbf{b}_m$ for all $m$. For this reason, the parameters $\{\mathbf{a}_m\}_{m=1}^{M}$ of a rank-1 tensor $\mathcal{A}$ are called identifiable, or essentially unique. A rank-$r$ tensor $\mathcal{A} \in F^{I_1} \otimes F^{I_2} \otimes \cdots \otimes F^{I_M}$ is called identifiable if every one of its tensor rank decompositions is the sum of the same set of $r$ distinct rank-1 tensors $\mathcal{A}_1, \mathcal{A}_2, \ldots, \mathcal{A}_r$. An identifiable rank-$r$ tensor thus has only one essentially unique decomposition, $\mathcal{A} = \sum_{i=1}^{r} \mathcal{A}_i$.
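The scaling indeterminacy of rank-1 factors described above is easy to verify numerically. A NumPy sketch (the vectors are drawn at random and the scalars $\lambda_1, \lambda_2, \lambda_3$ are an arbitrary choice satisfying $\lambda_1 \lambda_2 \lambda_3 = 1$):

```python
import numpy as np

rng = np.random.default_rng(0)
a1, a2, a3 = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

# Rank-1 tensor A = a1 (x) a2 (x) a3, built as a 3x3x3 array
A = np.einsum('i,j,k->ijk', a1, a2, a3)

# Rescale the factors by lambda_1, lambda_2, lambda_3 with product equal to 1
l1, l2, l3 = 2.0, -0.5, -1.0            # l1 * l2 * l3 == 1
B = np.einsum('i,j,k->ijk', l1 * a1, l2 * a2, l3 * a3)

print(np.allclose(A, B))  # True: both factorizations give the same tensor
```

The two factor sets differ, yet they represent the same rank-1 tensor, which is exactly the "essential uniqueness" (uniqueness up to such scalings) discussed above.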
All $r$ tensor rank decompositions of $\mathcal{A}$ can be obtained by permuting the order of the summands. Observe that in a tensor rank decomposition all the $\mathcal{A}_i$'s are distinct, for otherwise the rank of $\mathcal{A}$ would be at most $r - 1$.

Generic identifiability

Order-2 tensors in $F^{I_1} \otimes F^{I_2} \simeq F^{I_1 \times I_2}$, i.e., matrices, are not identifiable for $r > 1$. This follows essentially from the observation

$$\mathcal{A} = \sum_{i=1}^{r} \mathbf{a}_i \otimes \mathbf{b}_i = \sum_{i=1}^{r} \mathbf{a}_i \mathbf{b}_i^T = A B^T = (A X^{-1})(B X^T)^T = \sum_{i=1}^{r} \mathbf{c}_i \mathbf{d}_i^T = \sum_{i=1}^{r} \mathbf{c}_i \otimes \mathbf{d}_i,$$

where $X \in \mathrm{GL}_r(F)$ is an invertible $r \times r$ matrix, $A = [\mathbf{a}_i]_{i=1}^{r}$, $B = [\mathbf{b}_i]_{i=1}^{r}$, $A X^{-1} = [\mathbf{c}_i]_{i=1}^{r}$, and $B X^T = [\mathbf{d}_i]_{i=1}^{r}$. It can be shown[16] that for every $X \in \mathrm{GL}_r(F) \setminus Z$, where $Z$ is a closed set in the Zariski topology, the decomposition on the right-hand side is a sum of a different set of rank-1 tensors than the decomposition on the left-hand side, entailing that order-2 tensors of rank $r > 1$ are generically not identifiable.

The situation changes completely for higher-order tensors in $F^{I_1} \otimes F^{I_2} \otimes \cdots \otimes F^{I_M}$ with $M > 2$ and all $I_m \ge 2$. For simplicity in notation, assume without loss of generality that the factors are ordered such that $I_1 \ge I_2 \ge \cdots \ge I_M \ge 2$. Let $S_r \subset F^{I_1} \otimes \cdots \otimes F^{I_M}$ denote the set of tensors of rank bounded by $r$. Then the following statement was proved to be correct, using a computer-assisted proof, for all spaces of dimension $\Pi < 15000$,[17] and it is conjectured to be valid in general:[17][18][19] there exists a closed set $Z_r$ in the Zariski topology such that every tensor $\mathcal{A} \in S_r \setminus Z_r$ is identifiable ($S_r$ is called generically identifiable in this case), unless one of the following exceptional cases holds:

1. The rank is too large: $r > r_E(I_1, I_2, \ldots, I_M)$;
2. The space is identifiability-unbalanced, i.e., $I_1 > \prod_{m=2}^{M} I_m - \sum_{m=2}^{M} (I_m - 1)$, and the rank is too large: $r \ge \prod_{m=2}^{M} I_m - \sum_{m=2}^{M} (I_m - 1)$;
3. The space is the defective case $F^4 \otimes F^4 \otimes F^3$ and the rank is $r = 5$;
4. The space is the defective case $F^n \otimes F^n \otimes F^2 \otimes F^2$, where $n \ge 2$, and the rank is $r = 2n - 1$;
5. The space is $F^4 \otimes F^4 \otimes F^4$ and the rank is $r = 6$;
6. The space is $F^6 \otimes F^6 \otimes F^3$ and the rank is $r = 8$;
7. The space is $F^2 \otimes F^2 \otimes F^2 \otimes F^2 \otimes F^2$ and the rank is $r = 5$; or
8. The space is perfect, i.e., $r_E(I_1, I_2, \ldots, I_M) = \frac{\Pi}{\Sigma + 1}$ is an integer, and the rank is $r = r_E(I_1, I_2, \ldots, I_M)$.

In these exceptional cases, the generic (and also minimum) number of complex decompositions is proved to be $\infty$ in the first 4 cases, proved to be two in case 5,[20] expected[21] to be six in case 6, proved to be two in case 7,
[22] and expected[21] to be at least two in case 8, with the exception of the two identifiable cases $F^5 \otimes F^4 \otimes F^3$ and $F^3 \otimes F^2 \otimes F^2 \otimes F^2$.

In summary, the generic tensor of order $M > 2$ and rank $r < \frac{\Pi}{\Sigma + 1}$ that is not identifiability-unbalanced is expected to be identifiable, modulo the exceptional cases in small spaces.

Ill-posedness of the standard approximation problem

The rank approximation problem asks for the rank-$r$ decomposition closest, in the usual Euclidean topology, to some rank-$s$ tensor $\mathcal{A}$, where $r < s$. That is, one seeks to solve

$$\min_{\mathbf{a}_i^{(m)} \in F^{I_m}} \left\| \mathcal{A} - \sum_{i=1}^{r} \mathbf{a}_i^{(1)} \otimes \mathbf{a}_i^{(2)} \otimes \cdots \otimes \mathbf{a}_i^{(M)} \right\|_F,$$

where $\|\cdot\|_F$ is the Frobenius norm.

It was shown in a 2008 paper by de Silva and Lim[8] that the above standard approximation problem may be ill-posed. A solution to the aforementioned problem may sometimes not exist because the set over which one optimizes is not closed. As such, a minimizer may not exist, even though an infimum would exist. In particular, it is known that certain so-called border tensors may be approximated arbitrarily well by a sequence of tensors of rank at most $r$, even though the limit of the sequence converges to a tensor of rank strictly higher than $r$. The rank-3 tensor

$$\mathcal{A} = \mathbf{u} \otimes \mathbf{u} \otimes \mathbf{v} + \mathbf{u} \otimes \mathbf{v} \otimes \mathbf{u} + \mathbf{v} \otimes \mathbf{u} \otimes \mathbf{u}, \quad \text{with } \|\mathbf{u}\| = \|\mathbf{v}\| = 1 \text{ and } \langle \mathbf{u}, \mathbf{v} \rangle \ne 1,$$

can be approximated arbitrarily well by the following sequence of rank-2 tensors

$$\mathcal{A}_n = n \left(\mathbf{u} + \tfrac{1}{n}\mathbf{v}\right) \otimes \left(\mathbf{u} + \tfrac{1}{n}\mathbf{v}\right) \otimes \left(\mathbf{u} + \tfrac{1}{n}\mathbf{v}\right) - n\, \mathbf{u} \otimes \mathbf{u} \otimes \mathbf{u}$$

as $n \to \infty$. This example neatly illustrates the general principle that a sequence of rank-$r$ tensors converging to a tensor of strictly higher rank must admit at least two individual rank-1 terms whose norms become unbounded. Stated formally, whenever a sequence

$$\mathcal{A}_n = \sum_{i=1}^{r} \mathbf{a}_{i,n}^{(1)} \otimes \mathbf{a}_{i,n}^{(2)} \otimes \cdots \otimes \mathbf{a}_{i,n}^{(M)}$$

has the property that $\mathcal{A}_n \to \mathcal{A}$ in the Euclidean topology as $n \to \infty$, then there should exist at least $1 \le i \ne j \le r$ such that

$$\left\| \mathbf{a}_{i,n}^{(1)} \otimes \mathbf{a}_{i,n}^{(2)} \otimes \cdots \otimes \mathbf{a}_{i,n}^{(M)} \right\|_F \to \infty \quad\text{and}\quad \left\| \mathbf{a}_{j,n}^{(1)} \otimes \mathbf{a}_{j,n}^{(2)} \otimes \cdots \otimes \mathbf{a}_{j,n}^{(M)} \right\|_F \to \infty$$

as $n \to \infty$. This phenomenon is often encountered when attempting to approximate a tensor using numerical optimization algorithms; it is sometimes called the problem of diverging components. It was in addition shown that a random low-rank tensor over the reals may not admit a rank-2 approximation with positive probability, leading to the understanding that the ill-posedness problem is an important consideration when employing the tensor rank decomposition.

A common partial solution to the ill-posedness problem consists of imposing an additional inequality constraint that bounds the norm of the individual rank-1 terms by some constant. Other constraints that result in a closed set, and thus a well-posed optimization problem, include imposing positivity or a bounded inner product (strictly less than unity) between the rank-1 terms appearing in the sought decomposition.
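The diverging-components phenomenon in this example can be observed directly. A NumPy sketch (taking the concrete orthonormal choice $\mathbf{u} = e_1$, $\mathbf{v} = e_2$, so the limit is a W state): the approximation error decays like $1/n$ while the norm of each of the two rank-1 terms grows without bound.

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

def outer3(x, y, z):
    # x (x) y (x) z as a 2x2x2 array
    return np.einsum('i,j,k->ijk', x, y, z)

# The rank-3 border tensor (a W state for this orthonormal choice of u, v)
A = outer3(u, u, v) + outer3(u, v, u) + outer3(v, u, u)

for n in [1, 10, 100, 1000]:
    w = u + v / n
    A_n = n * outer3(w, w, w) - n * outer3(u, u, u)   # rank-2 approximant
    err = np.linalg.norm((A - A_n).ravel())           # ~ sqrt(3)/n
    term_norm = np.linalg.norm((n * outer3(w, w, w)).ravel())  # ~ n
    print(n, err, term_norm)
```

The error tends to 0 while the norms of the two rank-1 terms diverge, so no rank-2 minimizer of the approximation problem exists for this tensor, only an infimum.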
Calculating the CPD

- Alternating algorithms: alternating least squares (ALS); alternating slice-wise diagonalisation (ASD)
- Direct algorithms: pencil-based algorithms;[23][24][25][26][27][28][29] moment-based algorithms[30]
- General optimization algorithms: simultaneous diagonalization (SD); simultaneous generalized Schur decomposition (SGSD); Levenberg–Marquardt (LM); nonlinear conjugate gradient (NCG); limited-memory BFGS (L-BFGS)
- General polynomial system solving algorithms: homotopy continuation[31]

Applications

In machine learning, the CP decomposition is the central ingredient in learning probabilistic latent variable models via the technique of moment matching. For example, consider the multi-view model,[32] which is a probabilistic latent variable model. In this model, the generation of samples is posited as follows: there exists a hidden random variable that is not observed directly, given which there are several conditionally independent random variables known as the different "views" of the hidden variable. For simplicity, assume there are three symmetric views $x$ of a $k$-state categorical hidden variable $h$. Then the empirical third moment of this latent variable model can be written as

$$T = \sum_{i=1}^{k} \Pr(h = i)\, E[x \mid h = i]^{\otimes 3}.$$

In applications such as topic modeling, this can be interpreted as the co-occurrence of words in a document. Then the eigenvalues of this empirical moment tensor can be interpreted as the probabilities of choosing a specific topic, and each column of the factor matrix $E[x \mid h = k]$ corresponds to the probabilities of words in the vocabulary for the corresponding topic.

See also

- Latent class analysis
- Multilinear subspace learning
- Singular value decomposition
- Tucker decomposition
- Higher-order singular value decomposition
- Tensor decomposition

References

1. Papalexakis, Evangelos. "Automatic Unsupervised Tensor Mining with Quality Assessment" (PDF).
2. Hitchcock, F. L. (1927). "The expression of a tensor or a polyadic as a sum of products". Journal of Mathematics and Physics. 6 (1–4): 164–189. doi:10.1002/sapm192761164.
3. Carroll, J. D.; Chang, J. (1970). "Analysis of individual differences in multidimensional scaling via an n-way generalization of 'Eckart–Young' decomposition". Psychometrika. 35 (3): 283–319. doi:10.1007/BF02310791.
4. Harshman, Richard A. (1970). "Foundations of the PARAFAC procedure: Models and conditions for an 'explanatory' multi-modal factor analysis" (PDF). UCLA Working Papers in Phonetics. 16: 84. No. 10,085. Archived from the original (PDF) on October 10, 2004.
5. Gujral, Ekta. "Aptera: Automatic PARAFAC2 Tensor Analysis" (PDF). ASONAM 2022.
6. Hillar, C. J.; Lim, L. (2013). "Most tensor problems are NP-hard". Journal of the ACM. 60 (6): 1–39. arXiv:0911.1393. doi:10.1145/2512329.
7. Landsberg, J. M. (2012). Tensors: Geometry and Applications. AMS.
8. de Silva, V.; Lim, L. (2008). "Tensor rank and the ill-posedness of the best low-rank approximation problem". SIAM Journal on Matrix Analysis and Applications. 30 (3): 1084–1127. arXiv:math/0607647. doi:10.1137/06066518X.
9. Strassen, V. (1983). "Rank and optimal computation of generic tensors". Linear Algebra and Its Applications. 52–53: 645–685. doi:10.1016/0024-3795(83)80041-X.
10. Catalisano, M. V.; Geramita, A. V.; Gimigliano, A. (2002). "Ranks of tensors, secant varieties of Segre varieties and fat points". Linear Algebra and Its Applications. 355 (1–3): 263–285. doi:10.1016/S0024-3795(02)00352-X.
11. Abo, H.; Ottaviani, G.; Peterson, C. (2009). "Induction for secant varieties of Segre varieties". Transactions of the American Mathematical Society. 361 (2): 767–792. arXiv:math/0607191. doi:10.1090/S0002-9947-08-04725-9.
12. Lickteig, Thomas (1985). "Typical tensorial rank". Linear Algebra and Its Applications. 69: 95–120. doi:10.1016/0024-3795(85)90070-9.
13. Catalisano, M. V.; Geramita, A. V.; Gimigliano, A. (2011). "Secant varieties of $\mathbb{P}^1 \times \cdots \times \mathbb{P}^1$ (n times) are not defective for n ≥ 5". Journal of Algebraic Geometry. 20 (2): 295–327. doi:10.1090/S1056-3911-10-00537-0.
14. Blekherman, G.; Teitler, Z. (2015). "On maximum, typical and generic ranks". Mathematische Annalen. 362 (3–4): 1–11. arXiv:1402.2371. doi:10.1007/s00208-014-1150-3.
15. Bini, D.; Lotti, G.; Romani, F. (1980). "Approximate solutions for the bilinear form computational problem". SIAM Journal on Computing. 9 (4): 692–697. doi:10.1137/0209053.
16. Harris, Joe (1992). Algebraic Geometry: A First Course. Graduate Texts in Mathematics. Vol. 133. Springer. doi:10.1007/978-1-4757-2189-8. ISBN 978-1-4419-3099-6.
17. Chiantini, L.; Ottaviani, G.; Vannieuwenhoven, N. (2014). "An algorithm for generic and low-rank specific identifiability of complex tensors". SIAM Journal on Matrix Analysis and Applications. 35 (4): 1265–1287. arXiv:1403.4157. doi:10.1137/140961389.
18. Bocci, Cristiano; Chiantini, Luca; Ottaviani, Giorgio (2014). "Refined methods for the identifiability of tensors". Annali di Matematica Pura ed Applicata. 193 (6): 1691–1702. arXiv:1303.6915. doi:10.1007/s10231-013-0352-8.
19. Chiantini, L.; Ottaviani, G.; Vannieuwenhoven, N. (2017). "Effective criteria for specific identifiability of tensors and forms". SIAM Journal on Matrix Analysis and Applications. 38 (2): 656–681. arXiv:1609.00123. doi:10.1137/16M1090132.
20. Chiantini, L.; Ottaviani, G. (2012). "On generic identifiability of 3-tensors of small rank". SIAM Journal on Matrix Analysis and Applications. 33 (3): 1018–1037. arXiv:1103.2696. doi:10.1137/110829180.
21. Hauenstein, J. D.; Oeding, L.; Ottaviani, G.; Sommese, A. J. (2016). "Homotopy techniques for tensor decomposition and perfect identifiability". Journal für die reine und angewandte Mathematik. 2019 (753): 1–22. arXiv:1501.00090. doi:10.1515/crelle-2016-0067.
22. Bocci, Cristiano; Chiantini, Luca (2013). "On the identifiability of binary Segre products". Journal of Algebraic Geometry. 22 (1): 1–11. arXiv:1105.3643. doi:10.1090/S1056-3911-2011-00592-4.
23. Domanov, Ignat; De Lathauwer, Lieven (2014). "Canonical polyadic decomposition of third-order tensors: Reduction to generalized eigenvalue decomposition". SIAM Journal on Matrix Analysis and Applications. 35 (2): 636–660. arXiv:1312.2848. doi:10.1137/130916084.
24. Domanov, Ignat; De Lathauwer, Lieven (2017). "Canonical polyadic decomposition of third-order tensors: Relaxed uniqueness conditions and algebraic algorithm". Linear Algebra and Its Applications. 513: 342–375. arXiv:1501.07251. doi:10.1016/j.laa.2016.10.019.
25. Faber, Nicolaas Klaas M.; Ferré, Joan; Boqué, Ricard (2001). "Iteratively reweighted generalized rank annihilation method". Chemometrics and Intelligent Laboratory Systems. 55 (1–2): 67–90. doi:10.1016/S0169-7439(00)00117-9.
26. Leurgans, S. E.; Ross, R. T.; Abel, R. B. (1993). "A decomposition for three-way arrays". SIAM Journal on Matrix Analysis and Applications. 14 (4): 1064–1083. doi:10.1137/0614071.
27. Lorber, Avraham (1985). "Features of quantifying chemical composition from two-dimensional data array by the rank annihilation factor analysis method". Analytical Chemistry. 57 (12): 2395–2397. doi:10.1021/ac00289a052.
28. Sanchez, Eugenio; Kowalski, Bruce R. (1990). "Tensorial resolution: A direct trilinear decomposition". Journal of Chemometrics. 4 (1): 29–45. doi:10.1002/cem.1180040105.
29. Sands, Richard; Young, Forrest W. (1980). "Component models for three-way data: An alternating least squares algorithm with optimal scaling features". Psychometrika. 45 (1): 39–67. doi:10.1007/BF02293598.
30. Bernardi, A.; Brachat, J.; Comon, P.; Mourrain, B. (2013). "General tensor decomposition, moment matrices and applications". Journal of Symbolic Computation. 52: 51–71. arXiv:1105.1229. doi:10.1016/j.jsc.2012.05.012.
31. Bernardi, Alessandra; Daleo, Noah S.; Hauenstein, Jonathan D.; Mourrain, Bernard (2017). "Tensor decomposition and homotopy continuation". Differential Geometry and Its Applications. 55: 78–105. arXiv:1512.04312. doi:10.1016/j.difgeo.2017.07.009.
32. Anandkumar, Animashree; Ge, Rong; Hsu, Daniel; Kakade, Sham M.; Telgarsky, Matus (2014). "Tensor decompositions for learning latent variable models". Journal of Machine Learning Research. 15 (1): 2773–2832.

Further reading

- Kolda, Tamara G.; Bader, Brett W. (2009). "Tensor Decompositions and Applications". SIAM Review. 51 (3): 455–500. doi:10.1137/07070111X.
- Landsberg, Joseph M. (2012). Tensors: Geometry and Applications. AMS.

External links

- PARAFAC Tutorial
- Parallel Factor Analysis (PARAFAC)
- FactoMineR: free exploratory multivariate data analysis software linked to R
