
Matrix chain multiplication

Matrix chain multiplication (or the matrix chain ordering problem[1]) is an optimization problem concerning the most efficient way to multiply a given sequence of matrices. The problem is not actually to perform the multiplications, but merely to decide the sequence of the matrix multiplications involved. The problem may be solved using dynamic programming.

There are many options because matrix multiplication is associative. In other words, no matter how the product is parenthesized, the result obtained will remain the same. For example, for four matrices A, B, C, and D, there are five possible options:

((AB)C)D = (A(BC))D = (AB)(CD) = A((BC)D) = A(B(CD)).

Although it does not affect the product, the order in which the terms are parenthesized affects the number of simple arithmetic operations needed to compute the product, that is, the computational complexity. The straightforward multiplication of a matrix that is X × Y by a matrix that is Y × Z requires XYZ ordinary multiplications and X(Y − 1)Z ordinary additions. In this context, it is typical to use the number of ordinary multiplications as a measure of the runtime complexity.

If A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix, then

computing (AB)C needs (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations, while
computing A(BC) needs (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.
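
As a quick check of this arithmetic in Python (the helper mult_cost and the dimensions are just the example above, not part of any library):

def mult_cost(x: int, y: int, z: int) -> int:
    # Scalar multiplications to multiply an x-by-y matrix by a y-by-z matrix.
    return x * y * z

# (AB)C: form the 10x5 product AB first, then multiply by C.
assert mult_cost(10, 30, 5) + mult_cost(10, 5, 60) == 4500
# A(BC): form the 30x60 product BC first, then multiply by A.
assert mult_cost(30, 5, 60) + mult_cost(10, 30, 60) == 27000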

Clearly the first method is more efficient. With this information, the problem statement can be refined as "how to determine the optimal parenthesization of a product of n matrices?" Checking each possible parenthesization (brute force) would require a run-time that is exponential in the number of matrices, which is very slow and impractical for large n. A quicker solution to this problem can be achieved by breaking up the problem into a set of related subproblems.

A dynamic programming algorithm

To begin, let us assume that all we really want to know is the minimum cost, or minimum number of arithmetic operations needed to multiply out the matrices. If we are only multiplying two matrices, there is only one way to multiply them, so the minimum cost is the cost of doing this. In general, we can find the minimum cost using the following recursive algorithm:

  • Take the sequence of matrices and separate it into two subsequences.
  • Find the minimum cost of multiplying out each subsequence.
  • Add these costs together, and add in the cost of multiplying the two result matrices.
  • Do this for each possible position at which the sequence of matrices can be split, and take the minimum over all of them.

For example, if we have four matrices ABCD, we compute the cost required to find each of (A)(BCD), (AB)(CD), and (ABC)(D), making recursive calls to find the minimum cost to compute ABC, AB, CD, and BCD. We then choose the best one. Better still, this yields not only the minimum cost, but also demonstrates the best way of doing the multiplication: group it the way that yields the lowest total cost, and do the same for each factor.
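
A direct Python rendering of this recursion might look as follows (an illustrative sketch; as in the pseudocode further below, matrix k has dimensions dims[k−1] × dims[k]):

def min_cost(dims: list[int]) -> int:
    def cost(i: int, j: int) -> int:
        # Minimum cost to multiply out the matrices between split points i and j.
        if j - i == 1:
            return 0  # a single matrix needs no multiplications
        # Try every split point k, solve both halves recursively,
        # and add the cost of multiplying the two result matrices.
        return min(cost(i, k) + cost(k, j) + dims[i] * dims[k] * dims[j]
                   for k in range(i + 1, j))
    return cost(0, len(dims) - 1)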

However, this algorithm has exponential runtime complexity, making it as inefficient as the naive approach of trying all parenthesizations. The reason is that the algorithm does a lot of redundant work. For example, above we made a recursive call to find the best cost for computing both ABC and AB. But finding the best cost for computing ABC also requires finding the best cost for AB. As the recursion grows deeper, more and more of this type of unnecessary repetition occurs.

One simple solution is called memoization: each time we compute the minimum cost needed to multiply out a specific subsequence, we save it. If we are ever asked to compute it again, we simply give the saved answer, and do not recompute it. Since there are about n²/2 different subsequences, where n is the number of matrices, the space required to do this is reasonable. It can be shown that this simple trick brings the runtime down to O(n³) from O(2ⁿ), which is more than efficient enough for real applications. This is top-down dynamic programming.

The following bottom-up approach[2] computes, for each 2 ≤ k ≤ n, the minimum costs of all subsequences of length k using the costs of smaller subsequences already computed. It has the same asymptotic runtime and requires no recursion.

Pseudocode:

// Matrix A[i] has dimension dims[i-1] x dims[i] for i = 1..n
MatrixChainOrder(int dims[])
{
    // length[dims] = n + 1
    n = dims.length - 1;
    // m[i, j] = Minimum number of scalar multiplications (i.e., cost)
    // needed to compute the matrix A[i]A[i+1]...A[j] = A[i..j]
    // The cost is zero when multiplying one matrix
    for (i = 1; i <= n; i++)
        m[i, i] = 0;

    for (len = 2; len <= n; len++) { // Subsequence lengths
        for (i = 1; i <= n - len + 1; i++) {
            j = i + len - 1;
            m[i, j] = MAXINT;
            for (k = i; k <= j - 1; k++) {
                cost = m[i, k] + m[k+1, j] + dims[i-1]*dims[k]*dims[j];
                if (cost < m[i, j]) {
                    m[i, j] = cost;
                    s[i, j] = k; // Index of the subsequence split that achieved minimal cost
                }
            }
        }
    }
}
  • Note: The first index for dims is 0 and the first index for m and s is 1.

A Python implementation using the memoization decorator from the standard library:

from functools import cache

def matrixChainOrder(dims: list[int]) -> int:
    @cache
    def a(i, j):
        return min((a(i, k) + dims[i] * dims[k] * dims[j] + a(k, j)
                    for k in range(i + 1, j)), default=0)
    return a(0, len(dims) - 1)
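
For instance, with the dimensions from the example above (a hypothetical session):

>>> matrixChainOrder([10, 30, 5, 60])
4500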

A Java implementation using zero-based array indexes along with a convenience method for printing the solved order of operations:

public class MatrixOrderOptimization {
    protected int[][] m;
    protected int[][] s;

    public void matrixChainOrder(int[] dims) {
        int n = dims.length - 1;
        m = new int[n][n];
        s = new int[n][n];

        for (int lenMinusOne = 1; lenMinusOne < n; lenMinusOne++) {
            for (int i = 0; i < n - lenMinusOne; i++) {
                int j = i + lenMinusOne;
                m[i][j] = Integer.MAX_VALUE;
                for (int k = i; k < j; k++) {
                    int cost = m[i][k] + m[k+1][j] + dims[i]*dims[k+1]*dims[j+1];
                    if (cost < m[i][j]) {
                        m[i][j] = cost;
                        s[i][j] = k;
                    }
                }
            }
        }
    }

    public void printOptimalParenthesizations() {
        boolean[] inAResult = new boolean[s.length];
        printOptimalParenthesizations(s, 0, s.length - 1, inAResult);
    }

    void printOptimalParenthesizations(int[][] s, int i, int j, /* for pretty printing: */ boolean[] inAResult) {
        if (i != j) {
            printOptimalParenthesizations(s, i, s[i][j], inAResult);
            printOptimalParenthesizations(s, s[i][j] + 1, j, inAResult);
            String istr = inAResult[i] ? "_result " : " ";
            String jstr = inAResult[j] ? "_result " : " ";
            System.out.println(" A_" + i + istr + "* A_" + j + jstr);
            inAResult[i] = true;
            inAResult[j] = true;
        }
    }
}

At the end of this program, m[0][n−1] holds the minimum cost for the full sequence.

More efficient algorithms

There are algorithms that are more efficient than the O(n³) dynamic programming algorithm, though they are more complex.

Hu & Shing

An algorithm published by Hu and Shing achieves O(n log n) computational complexity.[3][4][5] They showed how the matrix chain multiplication problem can be transformed (or reduced) into the problem of triangulation of a regular polygon. The polygon is oriented such that there is a horizontal bottom side, called the base, which represents the final result. The other n sides of the polygon, in the clockwise direction, represent the matrices. The vertices on each end of a side are the dimensions of the matrix represented by that side. With n matrices in the multiplication chain there are n−1 binary operations and Cₙ₋₁ ways of placing parentheses, where Cₙ₋₁ is the (n−1)-th Catalan number. The algorithm exploits that there are also Cₙ₋₁ possible triangulations of a polygon with n+1 sides.
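
For illustration, the Catalan numbers themselves are easy to compute (a small Python sketch; catalan is just a helper name, not taken from the cited papers):

from math import comb

def catalan(n: int) -> int:
    # C_n = binomial(2n, n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

# A product of 5 matrices has C_4 = 14 parenthesizations, matching the
# 14 triangulations of a hexagon (6 sides: 5 matrices plus the base).
print(catalan(4))  # 14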

This image illustrates possible triangulations of a regular hexagon. These correspond to the different ways that parentheses can be placed to order the multiplications for a product of 5 matrices.

For the example below, there are four sides: A, B, C and the final result ABC. A is a 10×30 matrix, B is a 30×5 matrix, C is a 5×60 matrix, and the final result is a 10×60 matrix. The regular polygon for this example is a 4-gon, i.e. a square.

The matrix product AB is a 10×5 matrix and BC is a 30×60 matrix. The two possible triangulations in this example are the polygon representation of (AB)C and the polygon representation of A(BC).

The cost of a single triangle in terms of the number of multiplications needed is the product of its vertices. The total cost of a particular triangulation of the polygon is the sum of the costs of all its triangles:

(AB)C: (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 multiplications
A(BC): (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 multiplications

Hu & Shing developed an algorithm that finds an optimum solution for the minimum cost partition problem in O(n log n) time. Their proof of correctness of the algorithm relies on "Lemma 1" proved in a 1981 technical report and omitted from the published paper.[6][4] The technical report's proof of the lemma is incorrect, but Shing has presented a corrected proof.[1]

Other O(n log n) algorithms

Wang, Zhu and Tian have published a simplified O(n log m) algorithm, where n is the number of matrices in the chain and m is the number of local minima in the dimension sequence of the given matrix chain.[7]

Nimbark, Gohel, and Doshi have published a greedy O(n log n) algorithm,[8] but their proof of optimality is incorrect and their algorithm fails to produce the most efficient parentheses assignment for some matrix chains.[1]

Chin-Hu-Shing approximate solution

An algorithm created independently by Chin[9] and Hu & Shing[10] runs in O(n) and produces a parenthesization which is at most 15.47% worse than the optimal choice. In most cases the algorithm yields the optimal solution or a solution which is only 1-2 percent worse than the optimal one.[5]

The algorithm starts by translating the problem to the polygon partitioning problem. To each vertex V of the polygon is associated a weight w. Suppose we have three consecutive vertices V_{i−1}, V_i, V_{i+1}, and that V_{min} is the vertex with minimum weight w_{min}. We look at the quadrilateral with vertices V_{min}, V_{i−1}, V_i, V_{i+1} (in clockwise order). We can triangulate it in two ways:

  • (V_{min}, V_{i−1}, V_i) and (V_{min}, V_{i+1}, V_i), with cost w_{min}·w_{i−1}·w_i + w_{min}·w_{i+1}·w_i
  • (V_{min}, V_{i−1}, V_{i+1}) and (V_{i−1}, V_i, V_{i+1}), with cost w_{min}·w_{i−1}·w_{i+1} + w_{i−1}·w_i·w_{i+1}.

Therefore, if

w_{min}·w_{i−1}·w_{i+1} + w_{i−1}·w_i·w_{i+1} < w_{min}·w_{i−1}·w_i + w_{min}·w_{i+1}·w_i

or equivalently

1/w_i + 1/w_{min} < 1/w_{i−1} + 1/w_{i+1}

we remove the vertex V_i from the polygon and add the side V_{i−1}V_{i+1} to the triangulation. We repeat this process until no V_i satisfies the condition above. For each remaining vertex V_n, we add the side V_{min}V_n to the triangulation. This gives us a nearly optimal triangulation.
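
A direct rendering of this heuristic in Python might look as follows. This is a quadratic-time sketch of the removal rule plus the final fan step, not the linear-time formulation of the published algorithm; the function name and looping structure are illustrative assumptions:

def chin_hu_shing_cost(dims: list[int]) -> int:
    # Polygon vertex weights; rotate so the minimum-weight vertex comes first.
    w = list(dims)
    start = w.index(min(w))
    w = w[start:] + w[:start]
    w_min = w[0]
    cost = 0
    # Repeatedly cut off a vertex V_i (never V_min itself) whenever
    # 1/w_i + 1/w_min < 1/w_{i-1} + 1/w_{i+1}, adding the triangle
    # (V_{i-1}, V_i, V_{i+1}) to the triangulation.
    removed = True
    while removed and len(w) > 3:
        removed = False
        for i in range(1, len(w)):
            left, right = w[i - 1], w[(i + 1) % len(w)]
            if 1 / w[i] + 1 / w_min < 1 / left + 1 / right:
                cost += left * w[i] * right
                del w[i]
                removed = True
                break
    # Fan the remaining polygon out from the minimum-weight vertex.
    for i in range(1, len(w) - 1):
        cost += w_min * w[i] * w[i + 1]
    return cost

print(chin_hu_shing_cost([10, 30, 5, 60]))  # 4500, the optimum in this case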

Generalizations

The matrix chain multiplication problem generalizes to solving a more abstract problem: given a linear sequence of objects, an associative binary operation on those objects, and a way to compute the cost of performing that operation on any two given objects (as well as all partial results), compute the minimum cost way to group the objects to apply the operation over the sequence.[11] One somewhat contrived special case of this is string concatenation of a list of strings. In C, for example, the cost of concatenating two strings of length m and n using strcat is O(m + n), since we need O(m) time to find the end of the first string and O(n) time to copy the second string onto the end of it. Using this cost function, we can write a dynamic programming algorithm to find the fastest way to concatenate a sequence of strings. However, this optimization is rather useless because we can straightforwardly concatenate the strings in time proportional to the sum of their lengths. A similar problem exists for singly linked lists.
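
The abstract problem is the same dynamic program with the cost function factored out. A minimal Python sketch, where combine_cost and combine_size are assumed callbacks describing the associative operation (illustrative names, not a standard API):

from functools import cache

def min_combine_cost(sizes, combine_cost, combine_size):
    @cache
    def best(i, j):
        # Returns (minimum cost, size of the combined result) for sizes[i:j].
        if j - i == 1:
            return 0, sizes[i]
        candidates = []
        for k in range(i + 1, j):
            lcost, lsize = best(i, k)
            rcost, rsize = best(k, j)
            candidates.append((lcost + rcost + combine_cost(lsize, rsize),
                               combine_size(lsize, rsize)))
        return min(candidates)
    return best(0, len(sizes))[0]

# strcat-style concatenation: joining strings of lengths m and n costs m + n.
print(min_combine_cost((3, 5, 2, 8), lambda m, n: m + n, lambda m, n: m + n))  # 35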

Another generalization is to solve the problem when parallel processors are available. In this case, instead of adding the costs of computing each factor of a matrix product, we take the maximum because we can do them simultaneously. This can drastically affect both the minimum cost and the final optimal grouping; more "balanced" groupings that keep all the processors busy are favored. There are even more sophisticated approaches.[12]
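
Under that model (both factors computed simultaneously, then combined), the only change to the memoized recursion is replacing the sum of the subproblem costs with their maximum; a sketch under these assumptions:

from functools import cache

def parallel_chain_cost(dims: list[int]) -> int:
    @cache
    def cost(i: int, j: int) -> int:
        if j - i == 1:
            return 0
        # The two factors run in parallel, so pay for the slower of the two,
        # plus the final product itself.
        return min(max(cost(i, k), cost(k, j)) + dims[i] * dims[k] * dims[j]
                   for k in range(i + 1, j))
    return cost(0, len(dims) - 1)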

See also

  • Associahedron
  • Tamari lattice

References

  1. ^ a b c Schwartz, Oded; Weiss, Elad (January 2019). "Revisiting "Computation of Matrix Chain Products"". SIAM Journal on Computing. 48 (5): 1481–1486. doi:10.1137/18m1195401.
  2. ^ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "15.2: Matrix-chain multiplication". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 331–338. ISBN 978-0-262-03293-3.
  3. ^ Hu, TC; Shing, MT (1982). "Computation of Matrix Chain Products, Part I" (PDF). SIAM Journal on Computing. 11 (2): 362–373. CiteSeerX 10.1.1.695.2923. doi:10.1137/0211028. ISSN 0097-5397.
  4. ^ a b Hu, TC; Shing, MT (1984). "Computation of Matrix Chain Products, Part II" (PDF). SIAM Journal on Computing. 13 (2): 228–251. CiteSeerX 10.1.1.695.4875. doi:10.1137/0213017. ISSN 0097-5397.
  5. ^ a b Czumaj, Artur (1996). "Very Fast Approximation of the Matrix Chain Product Problem" (PDF). Journal of Algorithms. 21: 71–79. CiteSeerX 10.1.1.218.8168. doi:10.1006/jagm.1996.0037. Archived from the original (PDF) on 2018-07-27.
  6. ^ Hu, TC; Shing, MT (1981). Computation of Matrix Chain Products, Part I, Part II (PDF) (Technical report). Stanford University, Department of Computer Science. Part II, page 3. STAN-CS-TR-81-875.
  7. ^ Wang, Xiaodong; Zhu, Daxin; Tian, Jun (April 2013). "Efficient computation of matrix chain". 2013 8th International Conference on Computer Science Education: 703–707. doi:10.1109/ICCSE.2013.6553999.
  8. ^ Nimbark, Hitesh; Gohel, Shobhen; Doshi, Nishant (2011). "A Novel Approach for Matrix Chain Multiplication Using Greedy Technique for Packet Processing". Computer Networks and Information Technologies. 142: 318–321. doi:10.1007/978-3-642-19542-6_58.
  9. ^ Chin, Francis Y. (July 1978). "An O(n) algorithm for determining a near-optimal computation order of matrix chain products". Communications of the ACM. 21 (7): 544–549. doi:10.1145/359545.359556.
  10. ^ Hu, T.C; Shing, M.T (June 1981). "An O(n) algorithm to find a near-optimum partition of a convex polygon". Journal of Algorithms. 2 (2): 122–138. doi:10.1016/0196-6774(81)90014-6.
  11. ^ G. Baumgartner, D. Bernholdt, D. Cociorva, R. Harrison, M. Nooijen, J. Ramanujam and P. Sadayappan. A Performance Optimization Framework for Compilation of Tensor Contraction Expressions into Parallel Programs. 7th International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS '02). Fort Lauderdale, Florida. 2002 available at http://citeseer.ist.psu.edu/610463.html and at http://www.csc.lsu.edu/~gb/TCE/Publications/OptFramework-HIPS02.pdf
  12. ^ Heejo Lee, Jong Kim, Sungje Hong, and Sunggu Lee. Processor Allocation and Task Scheduling of Matrix Chain Products on Parallel Systems. Archived 2011-07-22 at the Wayback Machine. IEEE Trans. on Parallel and Distributed Systems, Vol. 14, No. 4, pp. 394–407, Apr. 2003.
