
In-place matrix transposition

In-place matrix transposition, also called in-situ matrix transposition, is the problem of transposing an N×M matrix in-place in computer memory, ideally with O(1) (bounded) additional storage, or at most with additional storage much less than NM. Typically, the matrix is assumed to be stored in row-major or column-major order (i.e., contiguous rows or columns, respectively, arranged consecutively).

Performing an in-place transpose (in-situ transpose) is most difficult when N ≠ M, i.e. for a non-square (rectangular) matrix, where it involves a complicated permutation of the data elements, with many cycles of length greater than 2. In contrast, for a square matrix (N = M), all of the cycles are of length 1 or 2, and the transpose can be achieved by a simple loop to swap the upper triangle of the matrix with the lower triangle. Further complications arise if one wishes to maximize memory locality in order to improve cache line utilization or to operate out-of-core (where the matrix does not fit into main memory), since transposes inherently involve non-consecutive memory accesses.

The problem of non-square in-place transposition has been studied since at least the late 1950s, and several algorithms are known, including several which attempt to optimize locality for cache, out-of-core, or similar memory-related contexts.

Background

On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid data movement.

However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm (e.g. Frigo & Johnson, 2005), transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality. Since these situations normally coincide with the case of very large matrices (which exceed the cache size), performing the transposition in-place with minimal additional storage becomes desirable.

Also, as a purely mathematical problem, in-place transposition involves a number of interesting number theory puzzles that have been worked out over the course of several decades.

Example

For example, consider the 2×4 matrix:

\begin{bmatrix} 11 & 12 & 13 & 14 \\ 21 & 22 & 23 & 24 \end{bmatrix}

In row-major format, this would be stored in computer memory as the sequence (11, 12, 13, 14, 21, 22, 23, 24), i.e. the two rows stored consecutively. If we transpose this, we obtain the 4×2 matrix:

\begin{bmatrix} 11 & 21 \\ 12 & 22 \\ 13 & 23 \\ 14 & 24 \end{bmatrix}

which is stored in computer memory as the sequence (11, 21, 12, 22, 13, 23, 14, 24).

Position             0   1   2   3   4   5   6   7
Original storage    11  12  13  14  21  22  23  24
Transposed storage  11  21  12  22  13  23  14  24

If we number the storage locations 0 to 7, from left to right, then this permutation consists of four cycles:

(0), (1 2 4), (3 6 5), (7)

That is, the value in position 0 goes to position 0 (a cycle of length 1, no data motion). Next, the value in position 1 (in the original storage: 11, 12, 13, 14, 21, 22, 23, 24) goes to position 2 (in the transposed storage 11, 21, 12, 22, 13, 23, 14, 24), while the value in position 2 (11, 12, 13, 14, 21, 22, 23, 24) goes to position 4 (11, 21, 12, 22, 13, 23, 14, 24), and position 4 (11, 12, 13, 14, 21, 22, 23, 24) goes back to position 1 (11, 21, 12, 22, 13, 23, 14, 24). Similarly for the values in position 7 and positions (3 6 5).

Properties of the permutation

In the following, we assume that the N×M matrix is stored in row-major order with zero-based indices. This means that the (n,m) element, for n = 0,...,N−1 and m = 0,...,M−1, is stored at an address a = Mn + m (plus some offset in memory, which we ignore). In the transposed M×N matrix, the corresponding (m,n) element is stored at the address a' = Nm + n, again in row-major order. We define the transposition permutation to be the function a' = P(a) such that:

Nm + n = P(Mn + m)   for all (n, m) ∈ [0, N − 1] × [0, M − 1]

This defines a permutation on the numbers a = 0, …, MN − 1.

It turns out that one can define simple formulas for P and its inverse (Cate & Twigg, 1977). First:

P(a) = \begin{cases} MN - 1 & \text{if } a = MN - 1, \\ Na \bmod (MN - 1) & \text{otherwise,} \end{cases}

where "mod" is the modulo operation.

Proof

If 0 ≤ a = Mn + m < MN − 1, then Na mod (MN−1) = (MNn + Nm) mod (MN − 1) = Nm + n. [ProofNote 1][ProofNote 2]

Second, the inverse permutation is given by:

P^{-1}(a) = \begin{cases} MN - 1 & \text{if } a = MN - 1, \\ Ma \bmod (MN - 1) & \text{otherwise.} \end{cases}

(This is just a consequence of the fact that the inverse of an N×M transpose is an M×N transpose, although it is also easy to show explicitly that P−1 composed with P gives the identity.)
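As an illustration, the following C sketch evaluates P and P⁻¹ directly from the formulas above and checks, for the 2×4 example, that they compose to the identity. The function names transpose_perm and transpose_perm_inv are chosen here for exposition and do not come from any of the cited papers.

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* P(a): destination address of the element at address a when an N x M
   row-major matrix is transposed in place (Cate & Twigg's formula). */
static size_t transpose_perm(size_t a, size_t N, size_t M)
{
    size_t last = N * M - 1;
    return (a == last) ? last : (N * a) % last;
}

/* P^{-1}(a): the same formula with M in place of N (an M x N transpose). */
static size_t transpose_perm_inv(size_t a, size_t N, size_t M)
{
    size_t last = N * M - 1;
    return (a == last) ? last : (M * a) % last;
}

int main(void)
{
    size_t N = 2, M = 4;                            /* the 2 x 4 example above */
    for (size_t a = 0; a < N * M; ++a) {
        size_t p = transpose_perm(a, N, M);
        assert(transpose_perm_inv(p, N, M) == a);   /* P^{-1}(P(a)) = a */
        printf("P(%zu) = %zu\n", a, p);
    }
    /* The output reproduces the cycles (0), (1 2 4), (3 6 5), (7). */
    return 0;
}

Running the sketch prints the destinations 0, 2, 4, 6, 1, 3, 5, 7 for a = 0, …, 7, which is exactly the permutation tabulated in the example above.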

As proved by Cate & Twigg (1977), the number of fixed points (cycles of length 1) of the permutation is precisely 1 + gcd(N−1,M−1), where gcd is the greatest common divisor. For example, with N = M the number of fixed points is simply N (the diagonal of the matrix). If N − 1 and M − 1 are coprime, on the other hand, the only two fixed points are the upper-left and lower-right corners of the matrix.

The number of cycles of any length k > 1 is given by (Cate & Twigg, 1977):

\frac{1}{k} \sum_{d \mid k} \mu(k/d) \gcd(N^d - 1, MN - 1)

where μ is the Möbius function and the sum is over the divisors d of k.
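For illustration, the following C sketch (all identifiers are invented for this example) evaluates the formula with a naive Möbius function and modular exponentiation, and prints the non-trivial cycle counts for a small matrix:

#include <stdio.h>

static long gcd(long a, long b) { while (b) { long t = a % b; a = b; b = t; } return a; }

/* Moebius function mu(k) by trial division (adequate for small k). */
static int moebius(long k)
{
    int nprimes = 0;
    for (long p = 2; p * p <= k; ++p) {
        if (k % p == 0) {
            k /= p;
            if (k % p == 0) return 0;      /* squared prime factor => mu = 0 */
            ++nprimes;
        }
    }
    if (k > 1) ++nprimes;
    return (nprimes % 2 == 0) ? 1 : -1;
}

/* (N^d - 1) reduced modulo MN-1; note gcd(x mod m, m) = gcd(x, m). */
static long pow_minus1_mod(long N, long d, long mod)
{
    long r = 1 % mod;
    for (long i = 0; i < d; ++i) r = (r * N) % mod;
    return (r + mod - 1) % mod;
}

/* Number of cycles of length k (k > 1) in the transposition permutation. */
static long cycles_of_length(long k, long N, long M)
{
    long mod = N * M - 1, sum = 0;
    for (long d = 1; d <= k; ++d)
        if (k % d == 0)
            sum += moebius(k / d) * gcd(pow_minus1_mod(N, d, mod), mod);
    return sum / k;
}

int main(void)
{
    long N = 2, M = 4;
    for (long k = 2; k < N * M; ++k) {
        long c = cycles_of_length(k, N, M);
        if (c != 0) printf("%ld cycle(s) of length %ld\n", c, k);
    }
    /* For the 2 x 4 example this prints "2 cycle(s) of length 3",
       matching the cycles (1 2 4) and (3 6 5) found above. */
    return 0;
}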

Furthermore, the cycle containing a=1 (i.e. the second element of the first row of the matrix) is always a cycle of maximum length L, and the lengths k of all other cycles must be divisors of L (Cate & Twigg, 1977).

For a given cycle C, every element x ∈ C has the same greatest common divisor d = gcd(x, MN − 1).

Proof (Brenner, 1973)

Let s be the smallest element of the cycle, and d = gcd(s, MN − 1). From the definition of the permutation P above, every other element x of the cycle is obtained by repeatedly multiplying s by N modulo MN − 1, and therefore every other element is divisible by d. But, since N and MN − 1 are coprime, x cannot be divisible by any factor of MN − 1 larger than d, and hence d = gcd(x, MN − 1).

This theorem is useful in searching for cycles of the permutation, since an efficient search can look only at multiples of divisors of MN−1 (Brenner, 1973).

Laflin & Brebner (1970) pointed out that the cycles often come in pairs, which is exploited by several algorithms that permute pairs of cycles at a time. In particular, let s be the smallest element of some cycle C of length k. It follows that MN−1−s is also an element of a cycle of length k (possibly the same cycle).

Proof by the definition of P above

The length k of the cycle containing s is the smallest k > 0 such that sN^k ≡ s (mod MN − 1). Clearly, this is the same as the smallest k > 0 such that (−s)N^k ≡ −s (mod MN − 1), since we are just multiplying both sides by −1, and MN − 1 − s ≡ −s (mod MN − 1).

Proof notes
  1. ^ MNx mod (MN−1) = ((MN − 1)x + x) mod (MN−1) = x for 0 ≤ x < MN − 1.
  2. ^ The first (a = 0) and last (a = MN−1) elements are always left invariant under transposition.

Algorithms

The following briefly summarizes the published algorithms to perform in-place matrix transposition. Source code implementing some of these algorithms can be found in the references, below.

Accessor transpose

Because physically transposing a matrix is computationally expensive, instead of moving values in memory, the access path may be transposed instead. It is trivial to perform this operation for CPU access, as the access paths of iterators must simply be exchanged;[1] however, hardware acceleration may require that the data still be physically realigned in memory.[2]
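In C, for example, the access-path idea can be sketched as follows (the helper names are illustrative only): reading element (i, j) of the transpose of a row-major N×M matrix amounts to exchanging the roles of the two indices in the address computation, with no data movement at all.

#include <stddef.h>
#include <stdio.h>

/* Element (i, j) of an N x M matrix A stored in row-major order. */
static double get(const double *A, size_t M, size_t i, size_t j)
{
    return A[i * M + j];
}

/* Element (i, j) of the transpose of A, read from the untouched storage:
   the two indices simply trade places in the address computation. */
static double get_transposed(const double *A, size_t M, size_t i, size_t j)
{
    return A[j * M + i];
}

int main(void)
{
    double A[] = {11, 12, 13, 14, 21, 22, 23, 24};   /* the 2 x 4 example */
    size_t N = 2, M = 4;

    /* Print A itself, then its 4 x 2 transpose, using only index arithmetic. */
    for (size_t i = 0; i < N; ++i) {
        for (size_t j = 0; j < M; ++j) printf("%5.0f", get(A, M, i, j));
        printf("\n");
    }
    printf("\n");
    for (size_t i = 0; i < M; ++i) {
        for (size_t j = 0; j < N; ++j) printf("%5.0f", get_transposed(A, M, i, j));
        printf("\n");
    }
    return 0;
}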

Square matrices

For a square N×N matrix A_{n,m} = A(n,m), in-place transposition is easy because all of the cycles have length 1 (the diagonal elements A_{n,n}) or length 2 (the upper triangle is swapped with the lower triangle). Pseudocode to accomplish this (assuming zero-based array indices) is:

for n = 0 to N − 2
    for m = n + 1 to N − 1
        swap A(n,m) with A(m,n)
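A direct C translation of this pseudocode might look as follows; this is a sketch assuming a row-major array of doubles, and the function name is illustrative.

#include <stddef.h>

/* In-place transpose of a square N x N row-major matrix:
   swap each upper-triangle element with its lower-triangle partner. */
void transpose_square(double *A, size_t N)
{
    for (size_t n = 0; n + 1 < N; ++n)
        for (size_t m = n + 1; m < N; ++m) {
            double tmp   = A[n * N + m];
            A[n * N + m] = A[m * N + n];
            A[m * N + n] = tmp;
        }
}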

This type of implementation, while simple, can exhibit poor performance due to poor cache-line utilization, especially when N is a power of two (due to cache-line conflicts in a CPU cache with limited associativity). The reason for this is that, as m is incremented in the inner loop, the memory address corresponding to A(n,m) or A(m,n) jumps discontiguously by N in memory (depending on whether the array is in column-major or row-major format, respectively). That is, the algorithm does not exploit locality of reference.

One solution to improve the cache utilization is to "block" the algorithm to operate on several numbers at once, in blocks given by the cache-line size; unfortunately, this means that the algorithm depends on the size of the cache line (it is "cache-aware"), and on a modern computer with multiple levels of cache it requires multiple levels of machine-dependent blocking. Instead, it has been suggested (Frigo et al., 1999) that better performance can be obtained by a recursive algorithm: divide the matrix into four submatrices of roughly equal size, recursively transpose the two submatrices along the diagonal, and transpose and swap the two submatrices above and below the diagonal. (When N is sufficiently small, the simple algorithm above is used as a base case, as naively recursing all the way down to N=1 would have excessive function-call overhead.) This is a cache-oblivious algorithm, in the sense that it can exploit the cache line without the cache-line size being an explicit parameter.
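The recursive idea can be sketched in C as follows. This is a simplified illustration, not the exact algorithm of Frigo et al. (1999); the BASE threshold and all function names are arbitrary choices for this example. The matrix is an n×n block of a row-major array with leading dimension lda, and the off-diagonal blocks are transposed-and-swapped by a second recursion that splits the longer dimension.

#include <stddef.h>

#define BASE 16   /* illustrative base-case size; any small value is correct */

/* Swap block B1 (rows x cols, starting at b1) with the transpose of block
   B2 (cols x rows, starting at b2); both live in a row-major array with
   leading dimension lda. */
static void swap_transpose(double *b1, double *b2,
                           size_t rows, size_t cols, size_t lda)
{
    if (rows <= BASE && cols <= BASE) {
        for (size_t i = 0; i < rows; ++i)
            for (size_t j = 0; j < cols; ++j) {
                double tmp      = b1[i * lda + j];
                b1[i * lda + j] = b2[j * lda + i];
                b2[j * lda + i] = tmp;
            }
    } else if (rows >= cols) {             /* split the longer dimension */
        size_t h = rows / 2;
        swap_transpose(b1,           b2,     h,        cols, lda);
        swap_transpose(b1 + h * lda, b2 + h, rows - h, cols, lda);
    } else {
        size_t h = cols / 2;
        swap_transpose(b1,     b2,           rows, h,        lda);
        swap_transpose(b1 + h, b2 + h * lda, rows, cols - h, lda);
    }
}

/* Cache-oblivious in-place transpose of the n x n block starting at A. */
void transpose_recursive(double *A, size_t n, size_t lda)
{
    if (n <= BASE) {                       /* base case: simple triangle swap */
        for (size_t i = 0; i + 1 < n; ++i)
            for (size_t j = i + 1; j < n; ++j) {
                double tmp     = A[i * lda + j];
                A[i * lda + j] = A[j * lda + i];
                A[j * lda + i] = tmp;
            }
    } else {
        size_t h = n / 2;
        transpose_recursive(A,               h,     lda);  /* top-left quadrant */
        transpose_recursive(A + h * lda + h, n - h, lda);  /* bottom-right quadrant */
        swap_transpose(A + h, A + h * lda, h, n - h, lda); /* off-diagonal blocks */
    }
}

For a full N×N matrix stored contiguously, one would call transpose_recursive(A, N, N).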

Non-square matrices: Following the cycles

For non-square matrices, the algorithms are more complicated. Many of the algorithms prior to 1980 could be described as "follow-the-cycles" algorithms. That is, they loop over the cycles, moving the data from one location to the next in the cycle. In pseudocode form:

for each length>1 cycle C of the permutation
    pick a starting address s in C
    let D = data at s
    let x = predecessor of s in the cycle
    while x ≠ s
        move data from x to successor of x
        let x = predecessor of x
    move data from D to successor of s

The differences between the algorithms lie mainly in how they locate the cycles, how they find the starting addresses in each cycle, and how they ensure that each cycle is moved exactly once. Typically, as discussed above, the cycles are moved in pairs, since s and MN−1−s are in cycles of the same length (possibly the same cycle). Sometimes, a small scratch array, typically of length M+N (e.g. Brenner, 1973; Cate & Twigg, 1977) is used to keep track of a subset of locations in the array that have been visited, to accelerate the algorithm.

In order to determine whether a given cycle has been moved already, the simplest scheme would be to use O(MN) auxiliary storage, one bit per element, to indicate whether a given element has been moved. To use only O(M+N) or even O(log MN) auxiliary storage, more complicated algorithms are required, and the known algorithms have a worst-case linearithmic computational cost of O(MN log MN) at best, as first proved by Knuth (Fich et al., 1995; Gustavson & Swirszcz, 2007).
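As a concrete (if naive) illustration of this simplest scheme, the following C sketch transposes an N×M row-major matrix by following the cycles of P, using one "already moved" flag per element; the O(M+N) and O(log MN) algorithms in the literature replace this array with cleverer bookkeeping. The function name and the use of a byte (rather than a bit) per flag are choices made for this example only.

#include <stddef.h>
#include <stdlib.h>

/* In-place conversion of an N x M row-major matrix into its M x N row-major
   transpose, following the cycles of P(a) = N*a mod (MN-1).
   (For very large matrices the product N*cur may need a wider integer type.)
   Returns 0 on success, -1 if the scratch flags cannot be allocated. */
int transpose_in_place(double *A, size_t N, size_t M)
{
    size_t total = N * M;
    if (total < 2) return 0;

    unsigned char *moved = calloc(total, 1);   /* one flag per element */
    if (!moved) return -1;

    size_t last = total - 1;                   /* addresses 0 and MN-1 are fixed points */
    for (size_t s = 1; s < last; ++s) {
        if (moved[s]) continue;
        double carried = A[s];                 /* value currently looking for its home */
        size_t cur = s;
        do {
            size_t dest = (N * cur) % last;    /* P(cur): where 'carried' belongs */
            double displaced = A[dest];
            A[dest] = carried;
            carried = displaced;
            moved[dest] = 1;
            cur = dest;
        } while (cur != s);                    /* cycle closed: back at the start */
    }
    free(moved);
    return 0;
}

Applied to the 2×4 example, the loop visits the cycles (1 2 4) and (3 6 5) once each, turning the stored sequence (11, 12, 13, 14, 21, 22, 23, 24) into (11, 21, 12, 22, 13, 23, 14, 24).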

Such algorithms are designed to move each data element exactly once. However, they also involve a considerable amount of arithmetic to compute the cycles, and require heavily non-consecutive memory accesses since the adjacent elements of the cycles differ by multiplicative factors of N, as discussed above.

Improving memory locality at the cost of greater total data movement

Several algorithms have been designed to achieve greater memory locality at the cost of greater data movement, as well as slightly greater storage requirements. That is, they may move each data element more than once, but they involve more consecutive memory access (greater spatial locality), which can improve performance on modern CPUs that rely on caches, as well as on SIMD architectures optimized for processing consecutive data blocks. The oldest context in which the spatial locality of transposition seems to have been studied is for out-of-core operation (by Alltop, 1975), where the matrix is too large to fit into main memory ("core").

For example, if d = gcd(N,M) is not small, one can perform the transposition using a small amount (NM/d) of additional storage, with at most three passes over the array (Alltop, 1975; Dow, 1995). Two of the passes involve a sequence of separate, small transpositions (which can be performed efficiently out of place using a small buffer) and one involves an in-place d×d square transposition of blocks of NM/d² elements each (which is efficient since the blocks being moved are large and consecutive, and the cycles are of length at most 2). This is further simplified if N is a multiple of M (or vice versa), since only one of the two out-of-place passes is required.

Another algorithm for non-coprime dimensions, involving multiple subsidiary transpositions, was described by Catanzaro et al. (2014). For the case where |N − M| is small, Dow (1995) describes another algorithm requiring |N − M| ⋅ min(N,M) additional storage, involving a min(N,M) × min(N,M) square transpose preceded or followed by a small out-of-place transpose. Frigo & Johnson (2005) describe the adaptation of these algorithms to use cache-oblivious techniques for general-purpose CPUs relying on cache lines to exploit spatial locality.

Work on out-of-core matrix transposition, where the matrix does not fit in main memory and must be stored largely on a hard disk, has focused largely on the N = M square-matrix case, with some exceptions (e.g. Alltop, 1975). Recent reviews of out-of-core algorithms, especially as applied to parallel computing, can be found in e.g. Suh & Prasanna (2002) and Krishnamoorthy et al. (2004).

References

  1. ^ "numpy.swapaxes — NumPy v1.15 Manual". docs.scipy.org. Retrieved 22 January 2019.
  2. ^ Harris, Mark (18 February 2013). "An Efficient Matrix Transpose in CUDA C/C++". NVIDIA Developer Blog.
  • P. F. Windley, "Transposing matrices in a digital computer," Computer Journal 2, p. 47-48 (1959).
  • G. Pall, and E. Seiden, "A problem in Abelian Groups, with application to the transposition of a matrix on an electronic computer," Math. Comp. 14, p. 189-192 (1960).
  • J. Boothroyd, "Algorithm 302: Transpose vector stored array," Communications of the ACM 10 (5), p. 292-293 (1967). doi:10.1145/363282.363304
  • Susan Laflin and M. A. Brebner, "Algorithm 380: in-situ transposition of a rectangular matrix," Communications of the ACM 13 (5), p. 324-326 (1970). doi:10.1145/362349.362368 Source code.
  • Norman Brenner, "Algorithm 467: matrix transposition in place," Communications of the ACM 16 (11), p. 692-694 (1973). doi:10.1145/355611.362542 Source code.
  • W. O. Alltop, "A computer algorithm for transposing nonsquare matrices," IEEE Trans. Comput. 24 (10), p. 1038-1040 (1975).
  • Esko G. Cate and David W. Twigg, "Algorithm 513: Analysis of In-Situ Transposition," ACM Transactions on Mathematical Software 3 (1), p. 104-110 (1977). doi:10.1145/355719.355729 Source code.
  • Bryan Catanzaro, Alexander Keller, and Michael Garland, "A decomposition for in-place matrix transposition," Proceedings of the 19th ACM SIGPLAN symposium on Principles and practice of parallel programming (PPoPP '14), pp. 193–206 (2014). doi:10.1145/2555243.2555253
  • Murray Dow, "Transposing a matrix on a vector computer," Parallel Computing 21 (12), p. 1997-2005 (1995).
  • Donald E. Knuth, The Art of Computer Programming Volume 1: Fundamental Algorithms, third edition, section 1.3.3 exercise 12 (Addison-Wesley: New York, 1997).
  • M. Frigo, C. E. Leiserson, H. Prokop, and S. Ramachandran, "Cache-oblivious algorithms," in Proceedings of the 40th IEEE Symposium on Foundations of Computer Science (FOCS 99), p. 285-297 (1999). doi:10.1109/SFFCS.1999.814600
  • J. Suh and V. K. Prasanna, "An efficient algorithm for out-of-core matrix transposition," IEEE Trans. Computers 51 (4), p. 420-438 (2002). doi:10.1109/12.995452
  • S. Krishnamoorthy, G. Baumgartner, D. Cociorva, C.-C. Lam, and P. Sadayappan, "Efficient parallel out-of-core matrix transposition," International Journal of High Performance Computing and Networking 2 (2-4), p. 110-119 (2004).
  • M. Frigo and S. G. Johnson, "The Design and Implementation of FFTW3," Proceedings of the IEEE 93 (2), 216–231 (2005). Source code of the FFTW library, which includes optimized serial and parallel square and non-square transposes, in addition to FFTs.
  • Faith E. Fich, J. Ian Munro, and Patricio V. Poblete, "Permuting in place," SIAM Journal on Computing 24 (2), p. 266-278 (1995).
  • Fred G. Gustavson and Tadeusz Swirszcz, "In-place transposition of rectangular matrices," Lecture Notes in Computer Science 4699, p. 560-569 (2007), from the Proceedings of the 2006 Workshop on State-of-the-Art [sic] in Scientific and Parallel Computing (PARA 2006) (Umeå, Sweden, June 2006).
  • Sloane, N. J. A. (ed.). "Sequence A093055 (Number of non-singleton cycles in the in-situ transposition of a rectangular j X k matrix)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
  • Sloane, N. J. A. (ed.). "Sequence A093056 (Length of the longest cycle in the in-situ transposition of a rectangular j X k matrix)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
  • Sloane, N. J. A. (ed.). "Sequence A093057 (Number of matrix elements remaining at fixed position in the in-situ transposition of a rectangular j X k matrix)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.

External links

Source code

  • OFFT - recursive block in-place transpose of square matrices, in Fortran
  • Jason Stratos Papadopoulos, blocked in-place transpose of square matrices, in C, sci.math.num-analysis newsgroup (April 7, 1998).
  • See "Source code" links in the references section above, for additional code to perform in-place transposes of both square and non-square matrices.
  • libmarshal Blocked in-place transpose of rectangular matrices for GPUs.
