Introduction
Matrix multiplication occupies a central role in scientific computing, with an extremely wide range of applications. Many numerical procedures in linear algebra (e.g., solving linear systems, matrix inversion, factorizations, determinants) can essentially be reduced to matrix multiplication [5, 3]. Hence, there is great interest in fast matrix multiplication algorithms, which in turn accelerate these other numerical procedures.
SuanShu was already the fastest library at matrix multiplication, and hence at linear algebra in general, according to our benchmark:
SuanShu v3.0.0 benchmark
Starting with version 3.3.0, SuanShu implements an advanced algorithm for even faster matrix multiplication. It makes some operations 100x faster than those of our competitors! The new benchmark can be found here:
SuanShu v3.3.0 benchmark
In this article, we briefly describe our implementation of a matrix multiplication algorithm that dramatically accelerates dense matrix-matrix multiplication compared to the classical IJK algorithm.
Parallel IJK
We first describe IJK, the method against which our new algorithm is compared. Here is the algorithm computing the product C = A × B, where A is m×p, B is p×n, and C is m×n:
for (int i = 0; i < m; i++) {
    for (int j = 0; j < p; j++) {
        for (int k = 0; k < n; k++) {
            C[i][k] += A[i][j] * B[j][k];
        }
    }
}
In SuanShu, this is implemented in parallel: the outermost loop is passed to a ParallelExecutor, so that different threads work on different rows of C.
As there are often more rows than threads available, the time complexity of this parallelized IJK is still roughly the same as that of serial IJK: O(mpn), i.e., cubic time O(n^3) when m = p = n. This cubic complexity leaves much to be desired.
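For illustration, here is a minimal sketch of the idea in plain Java. SuanShu's ParallelExecutor is its own internal facility; as a stand-in, this sketch distributes the outermost loop over threads with a standard Java parallel stream, which is safe because each row of C is written by exactly one thread. The class and method names are ours, not SuanShu's.

import java.util.stream.IntStream;

public final class ParallelIjkSketch {

    // C = A * B, where A is m-by-p, B is p-by-n, and C is m-by-n.
    public static double[][] multiply(double[][] A, double[][] B) {
        final int m = A.length, p = B.length, n = B[0].length;
        final double[][] C = new double[m][n];
        // Parallelize the outermost (row) loop; rows of C are independent.
        IntStream.range(0, m).parallel().forEach(i -> {
            for (int j = 0; j < p; j++) {
                for (int k = 0; k < n; k++) {
                    C[i][k] += A[i][j] * B[j][k];
                }
            }
        });
        return C;
    }
}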
The core of our new multiplication algorithm, the Strassen algorithm, reduces the time complexity to O(n^(log₂ 7)) ≈ O(n^2.807).
Strassen’s Algorithm
The Strassen algorithm [6] is based on the block matrix multiplication

[ C11 C12 ; C21 C22 ] = [ A11 A12 ; A21 A22 ] × [ B11 B12 ; B21 B22 ],

where each quadrant of the product is given by Cij = Ai1·B1j + Ai2·B2j.
Computing these quadrants naively involves 8 submatrix multiplications and 4 additions. The version of Strassen's algorithm that we use (Winograd's variant [7]) forgoes one submatrix multiplication (7 instead of 8) in exchange for eleven extra additions/subtractions, which is faster once the submatrices are large enough.
The algorithm runs as follows:
1. Split A into four equally-sized quadrants A11, A12, A21, A22. Do the same for B. (Assume for now that all dimensions are even.)
2. Form a number of factor matrices, each a sum or difference of these quadrants (one standard set of such formulas appears in the sketch after this list).
3. Obtain the seven submatrix products P1, …, P7 from the quadrants and factor matrices. Depending on the dimensions, each product is computed either with Parallel IJK or with a recursive call to Strassen's algorithm.
4. Assemble the final product C from P1, …, P7 using a few more additions and subtractions (via some temporary matrices Ui), as in the sketch below.
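To make the steps concrete, here is a minimal, single-level sketch of Winograd's variant for a square matrix whose dimension is even, written with plain double[][] arrays. The factor matrices (s1–s4, t1–t4), products (p1–p7), and temporaries (u1–u3) follow the standard published form of the variant; the naming and the helper methods are ours for illustration and are not taken from the SuanShu source, which also recurses on the products and handles rectangular and odd sizes.

public final class StrassenWinogradSketch {

    // One level of Winograd's variant of Strassen: C = A * B for n-by-n matrices, n even.
    public static double[][] multiply(double[][] A, double[][] B) {
        int h = A.length / 2;

        // Step 1: split A and B into four equally-sized quadrants.
        double[][] a11 = block(A, 0, 0, h), a12 = block(A, 0, h, h),
                   a21 = block(A, h, 0, h), a22 = block(A, h, h, h);
        double[][] b11 = block(B, 0, 0, h), b12 = block(B, 0, h, h),
                   b21 = block(B, h, 0, h), b22 = block(B, h, h, h);

        // Step 2: factor matrices (8 additions/subtractions).
        double[][] s1 = add(a21, a22), s2 = sub(s1, a11), s3 = sub(a11, a21), s4 = sub(a12, s2);
        double[][] t1 = sub(b12, b11), t2 = sub(b22, t1), t3 = sub(b22, b12), t4 = sub(t2, b21);

        // Step 3: the seven products. The real algorithm recurses here (or falls back
        // to Parallel IJK when the blocks are small); this sketch just uses naive IJK.
        double[][] p1 = naive(a11, b11), p2 = naive(a12, b21), p3 = naive(s4, b22),
                   p4 = naive(a22, t4), p5 = naive(s1, t1), p6 = naive(s2, t2), p7 = naive(s3, t3);

        // Step 4: recombination via temporaries (7 more additions/subtractions).
        double[][] u1 = add(p1, p6), u2 = add(u1, p7), u3 = add(u1, p5);
        double[][] c11 = add(p1, p2), c12 = add(u3, p3), c21 = sub(u2, p4), c22 = add(u2, p5);
        return assemble(c11, c12, c21, c22);
    }

    // The h-by-h sub-block of M whose top-left corner is (r, c).
    private static double[][] block(double[][] M, int r, int c, int h) {
        double[][] S = new double[h][h];
        for (int i = 0; i < h; i++) System.arraycopy(M[r + i], c, S[i], 0, h);
        return S;
    }

    private static double[][] add(double[][] X, double[][] Y) {
        int n = X.length;
        double[][] Z = new double[n][n];
        for (int i = 0; i < n; i++) for (int j = 0; j < n; j++) Z[i][j] = X[i][j] + Y[i][j];
        return Z;
    }

    private static double[][] sub(double[][] X, double[][] Y) {
        int n = X.length;
        double[][] Z = new double[n][n];
        for (int i = 0; i < n; i++) for (int j = 0; j < n; j++) Z[i][j] = X[i][j] - Y[i][j];
        return Z;
    }

    private static double[][] naive(double[][] X, double[][] Y) {
        int n = X.length;
        double[][] Z = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++)
                    Z[i][k] += X[i][j] * Y[j][k];
        return Z;
    }

    // Stitch the four quadrants of C back together.
    private static double[][] assemble(double[][] c11, double[][] c12, double[][] c21, double[][] c22) {
        int h = c11.length;
        double[][] C = new double[2 * h][2 * h];
        for (int i = 0; i < h; i++) {
            System.arraycopy(c11[i], 0, C[i], 0, h);
            System.arraycopy(c12[i], 0, C[i], h, h);
            System.arraycopy(c21[i], 0, C[i + h], 0, h);
            System.arraycopy(c22[i], 0, C[i + h], h, h);
        }
        return C;
    }
}

Counting the operations in the sketch: 8 additions/subtractions to form the factor matrices, 7 submatrix multiplications, and 7 more additions/subtractions to recombine, i.e., 15 additions versus the naive method's 4 (eleven extra) and 7 multiplications versus 8, matching the trade-off described above.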
Odd Dimensions
So far this algorithm has ignored the cases when and/or has an odd number of rows/columns. There are several methods of dealing with this [2, 4]. For example, one could pad the matrices statically so that the dimensions are always even until the recursion passes to IJK (static padding); or pad only when one of the dimensions is odd (dynamic padding).
Alternatively, one could disregard the extra rows/columns until the algorithm completes, and then take care of them afterwards (i.e., if A has an extra row or B has an extra column, use the appropriate vector-matrix or matrix-vector operation to calculate the remaining row/column of C; if A has an extra column and B has an extra row, their outer product can simply be added onto C afterwards). We chose this method, called dynamic peeling, for our implementation.
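Here is a small sketch of dynamic peeling under the same array conventions as before. The helper names and the pluggable core multiplier are our own illustrative choices, not SuanShu's API: the even-dimensioned core block is multiplied by whatever fast kernel is available (Strassen in the real implementation), and the peeled row/column contributions are then added back with ordinary inner-product loops.

import java.util.function.BinaryOperator;

public final class DynamicPeelingSketch {

    // C = A * B, where A is m-by-p and B is p-by-n; 'core' multiplies even-dimensioned blocks.
    public static double[][] multiply(double[][] A, double[][] B, BinaryOperator<double[][]> core) {
        int m = A.length, p = B.length, n = B[0].length;
        int m2 = m - m % 2, p2 = p - p % 2, n2 = n - n % 2;   // largest even dimensions
        double[][] C = new double[m][n];

        // Even-dimensioned core block: this is where Strassen is used in the real algorithm.
        if (m2 > 0 && p2 > 0 && n2 > 0) {
            double[][] coreC = core.apply(topLeft(A, m2, p2), topLeft(B, p2, n2));
            for (int i = 0; i < m2; i++) System.arraycopy(coreC[i], 0, C[i], 0, n2);
        }

        // Peeled contributions, added with plain loops:
        //  - if p is odd: the last column of A times the last row of B, added onto the core block;
        //  - if m is odd: the last row of C, computed as a vector-matrix product;
        //  - if n is odd: the last column of C, computed as a matrix-vector product.
        for (int i = 0; i < m; i++) {
            for (int k = 0; k < n; k++) {
                int jFrom = (i < m2 && k < n2) ? p2 : 0;   // the core already covers j < p2 here
                for (int j = jFrom; j < p; j++) {
                    C[i][k] += A[i][j] * B[j][k];
                }
            }
        }
        return C;
    }

    // The rows-by-cols sub-block at the top-left corner of M.
    private static double[][] topLeft(double[][] M, int rows, int cols) {
        double[][] S = new double[rows][cols];
        for (int i = 0; i < rows; i++) System.arraycopy(M[i], 0, S[i], 0, cols);
        return S;
    }
}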
Blocking and Tiling
Taken on its own, the above Strassen's algorithm works well provided both matrices are roughly square. In practice, we may encounter cases where either matrix is highly rectangular, e.g., far more rows than columns (too tall) or far more columns than rows (too long).
We solve this by slicing the matrices into blocks which are nearly square, then using Strassen’s algorithm on the submatrices. The blocking scheme is devised so that long or tall strips are avoided.
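The following sketch illustrates the blocking idea; the tiling rule here (using the smallest of the three dimensions as the tile side) and the class name are our own simplifications, not necessarily the scheme SuanShu uses. Each nearly square tile product would be handed to the Strassen routine; the sketch substitutes naive IJK for that kernel.

public final class BlockedMultiplySketch {

    // C = A * B, where A is m-by-p and B is p-by-n, computed tile by tile.
    public static double[][] multiply(double[][] A, double[][] B) {
        int m = A.length, p = B.length, n = B[0].length;
        int t = Math.min(m, Math.min(p, n));   // tile side: keeps every tile nearly square
        double[][] C = new double[m][n];

        for (int i0 = 0; i0 < m; i0 += t) {
            int iLen = Math.min(t, m - i0);
            for (int k0 = 0; k0 < n; k0 += t) {
                int kLen = Math.min(t, n - k0);
                for (int j0 = 0; j0 < p; j0 += t) {
                    int jLen = Math.min(t, p - j0);
                    // In the real algorithm this tile product is done by Strassen;
                    // here it is a plain accumulation: C-tile += A-tile * B-tile.
                    for (int i = 0; i < iLen; i++)
                        for (int j = 0; j < jLen; j++)
                            for (int k = 0; k < kLen; k++)
                                C[i0 + i][k0 + k] += A[i0 + i][j0 + j] * B[j0 + j][k0 + k];
                }
            }
        }
        return C;
    }
}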
Performance
The following charts show the performance of our hybrid Block-Strassen (HBS) algorithm versus Parallel IJK on an Intel® Core i5-3337U CPU @ 1.80 GHz with 6 GB RAM, running Java 1.8.0 update 40.
Tests are patterned after D'Alberto and Nicolau [1]: we computed C = A × B for random matrices A (m×p) and B (p×n), for every triple (m, p, n) drawn from a fixed set of test dimensions. Each multiplication was done three times using Parallel IJK and three times using Hybrid Block-Strassen; the average times for each method are compared and tabulated.
Figure 3 shows the multiplication time plotted against the product of the dimensions, mpn. The multiplication time for IJK is proportional to mpn, and our empirical results show that the multiplication times for both Parallel IJK and HBS are strongly linearly related to this product. HBS, however, has a significantly smaller gradient.
The gradients of the best-fit lines suggest that, as the problem size grows (ignoring memory constraints), HBS takes about 63.5% less time than Parallel IJK. Several data points show an even greater speedup.
Figures 4 and 5 show the time saving of HBS over Parallel IJK, in seconds (Figure 4) and as a percentage of the Parallel IJK time (Figure 5). Each table is for a specific value of p (the number of columns of A), and runs over values of m (the number of rows of A) and n (the number of columns of B).
Finally, an accuracy test was also run to address concerns regarding the numerical stability of Strassen's algorithm [3]. Figure 6 shows the maximum entry-wise relative error of the resulting product matrix,

max over (i, j) of |C_HBS[i][j] − C_ref[i][j]| / |C_ref[i][j]|,

where C_HBS is the product computed by HBS and C_ref is the reference product. The observed errors are all very small (note that we did not run Strassen's algorithm completely down to the scalar level; when the submatrices are small enough, IJK is used, which reduces the error), which suggests that HBS is surprisingly accurate and a strong candidate for use in general-purpose contexts where speed is a priority.
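For concreteness, this metric can be computed as in the short sketch below (the class and method names are ours); it assumes the reference entries are nonzero, which holds with probability one for the random test matrices used here.

public final class RelativeErrorSketch {

    // Maximum entry-wise relative error of 'computed' with respect to 'reference'.
    public static double maxEntrywiseRelativeError(double[][] computed, double[][] reference) {
        double maxErr = 0.0;
        for (int i = 0; i < reference.length; i++) {
            for (int j = 0; j < reference[0].length; j++) {
                double err = Math.abs(computed[i][j] - reference[i][j]) / Math.abs(reference[i][j]);
                maxErr = Math.max(maxErr, err);
            }
        }
        return maxErr;
    }
}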
References
[1] Paolo D’Alberto and Alexandru Nicolau. Adaptive Strassen’s matrix multiplication. In Proceedings of the 21st Annual International Conference on Supercomputing, ICS ’07, pages 284–292, New York, NY, USA, 2007. ACM.
[2] Hossam ElGindy and George Ferizis. On improving the memory access patterns during the execution of Strassen’s matrix multiplication algorithm. In Proceedings of the 27th Australasian Conference on Computer Science – Volume 26, ACSC ’04, pages 109–115, Darlinghurst, Australia, 2004. Australian Computer Society, Inc.
[3] Nicholas J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, 2002.
[4] Steven Huss-Lederman, Elaine M. Jacobson, Anna Tsao, Thomas Turnbull, and Jeremy R. Johnson. Implementation of Strassen’s algorithm for matrix multiplication. In Proceedings of the 1996 ACM/IEEE Conference on Supercomputing, Supercomputing ’96, Washington, DC, USA, 1996. IEEE Computer Society.
[5] Steven S. Skiena. The Algorithm Design Manual. Springer Publishing Company, Incorporated, 2nd edition, 2008.
[6] Volker Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13(4):354–356, 1969.
[7] Shmuel Winograd. On multiplication of 2×2 matrices. Linear Algebra and its Applications, 4(4):381–388, 1971.