Transforming a QP Problem to an SOCP Problem

A Quadratic Programming problem (QP) in the form of

\begin{aligned} & \underset{x}{\text{minimize}} & & \frac{1}{2} x^T H x + p^T x \\ & \text{subject to} \\ & & & A_{eq} x = b_{eq} \\ & & & A x \geq b \end{aligned}

where H \in \Re^{n \times n}, A \in \Re^{m \times n}, p, x \in \Re^n, b \in \Re^m , can be transformed to a Second-Order Cone Programming (SOCP) problem in the form of

\begin{aligned} & \underset{x}{\text{minimize}} & & f^T x \\ & \text{subject to} \\ & & & \left \| A_i x + b_i \right \|_2 \leq c_i^T x + d_i, \; i = 1, \ldots, m \end{aligned}

Consider y = \left \| F x - c \right \|, and

\begin{aligned} y^2 & = \left \| F x - c \right \|^2 \\ & = (F x - c)^T (F x - c) \\ & = x^T F^T F x - 2 c^T F x + c^T c \end{aligned}

As c^T c is a constant that does not depend on x, minimizing x^T F^T F x - 2 c^T F x is equivalent to minimizing y^2; and since y is non-negative, this is in turn equivalent to minimizing y.

If we choose F and c such that H = 2 F^T F and p = -2 F^T c, then the QP objective function \frac{1}{2} x^T H x + p^T x can be written as x^T F^T F x - 2 c^T F x. Minimizing the QP objective is therefore the same as minimizing y.

Thus, the QP problem can now be written as

\begin{aligned} & \underset{x,\, y}{\text{minimize}} & & y \\ & \text{subject to} \\ & & & \left \| F x - c \right \| \leq y \\ & & & A_{eq} x = b_{eq} \\ & & & A x \geq b \end{aligned}

As H is, by the definition of a QP, symmetric, and assuming the QP is convex so that H is positive semidefinite, a symmetric F can be found such that \frac{H}{2} = F^T F = F^2 (the matrix square root of \frac{H}{2}). Alternatively, applying the Cholesky factorization gives \frac{H}{2} = U^T U (or L L^T), in which case F = U (or L^T).
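
For a small concrete example, take

H = \begin{pmatrix} 2 & 0 \\ 0 & 8 \end{pmatrix}, \quad p = \begin{pmatrix} -2 \\ -8 \end{pmatrix}, \quad \frac{H}{2} = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}, \quad F = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \quad c = -\frac{1}{2} F^{-T} p = \begin{pmatrix} 1 \\ 2 \end{pmatrix}

One can check that \frac{1}{2} x^T H x + p^T x = \left \| F x - c \right \|^2 - 5, so the QP objective and \left \| F x - c \right \| attain their minima at the same x.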

Next, as \left \| \cdot \right \|_2 is always non-negative, the equality constraint A_{eq} x = b_{eq} can be written as

\left \| A_{eq} x - b_{eq} \right \| \leq 0

Finally, each row in the inequality constraint A x \geq b can be written as

a_i x - b_i \geq 0, \; i = 1, \ldots, m

where a_i is the i-th row of A, and b_i is the i-th element of b.

Therefore, a QP problem can be transformed into an equivalent SOCP problem in the following way. We first introduce a few variables:

  • y = x_{n+1}
  • c = -\frac{1}{2} F^{-T} p

\begin{aligned} & \underset{x}{\text{minimize}} & & f^T x \\ & \text{subject to} \\ & & & \left \| A_1 x + b_1 \right \|_2 \leq c_1^T x + d_1 \\ & & & \left \| A_2 x + b_2 \right \|_2 \leq c_2^T x + d_2 \\ & & & \left \| A_3 x + b_3 \right \|_2 \leq c_3^T x + d_3 \\ & & & \vdots \\ & & & \left \| A_{m+2} x + b_{m+2} \right \|_2 \leq c_{m+2}^T x + d_{m+2} \\ & \text{where} \\ & & f & = (0, \ldots, 0, 1)^T \\ & & x & = (x_1, \ldots, x_n, x_{n+1})^T \\ & & A_1 & = \left [ F, 0 \right ], \; b_1 = \frac{1}{2} F^{-T} p, \; c_1 = (0, \ldots, 0, 1)^T, \; d_1 = 0, \; \frac{H}{2} = F^T F \\ & & A_2 & = \left [ A_{eq}, 0 \right ], \; b_2 = -b_{eq}, \; c_2 = 0, \; d_2 = 0 \\ & & A_{i+2} & = 0, \; b_{i+2} = 0, \; c_{i+2} = (\text{the } i\text{-th row of } A, \, 0)^T, \; d_{i+2} = -(\text{the } i\text{-th element of } b), \; i = 1, \ldots, m \\ & & & f, x, c_1, \ldots, c_{m+2} \in \Re^{n+1} \end{aligned}

The sub-vector with the first n elements in the solution of the transformed SOCP problem is the solution of the original QP problem.
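
To make the bookkeeping above concrete, here is a plain-Java sketch that assembles the SOCP objective f and the m+2 cone constraints from the QP data. It is an illustration using bare arrays, not the SuanShu API; it assumes F is the upper-triangular Cholesky factor of \frac{H}{2} (so that b_1 = \frac{1}{2} F^{-T} p can be obtained by forward substitution), and the names QpToSocpSketch, Cone, cones, etc. are ours.

import java.util.ArrayList;
import java.util.List;

public final class QpToSocpSketch {

    /** One cone constraint ||A x + b||_2 <= c^T x + d over the augmented variable x in R^{n+1}. */
    public static final class Cone {
        final double[][] A; final double[] b; final double[] c; final double d;
        Cone(double[][] A, double[] b, double[] c, double d) {
            this.A = A; this.b = b; this.c = c; this.d = d;
        }
    }

    /** The SOCP objective f = (0, ..., 0, 1)^T in R^{n+1}. */
    public static double[] objective(int n) {
        double[] f = new double[n + 1];
        f[n] = 1.0;
        return f;
    }

    /** Builds the m+2 cone constraints from the QP data (F upper triangular, H/2 = F^T F). */
    public static List<Cone> cones(double[][] F, double[] p,
                                   double[][] Aeq, double[] beq,
                                   double[][] A, double[] b) {
        int n = p.length;
        List<Cone> cones = new ArrayList<>();

        // cone 1: ||F x - c|| <= x_{n+1}, i.e. A_1 = [F, 0], b_1 = (1/2) F^{-T} p, c_1 = e_{n+1}, d_1 = 0
        cones.add(new Cone(padRight(F, n + 1), halfFinvTp(F, p), objective(n), 0.0));

        // cone 2: ||A_eq x - b_eq|| <= 0, i.e. A_2 = [A_eq, 0], b_2 = -b_eq, c_2 = 0, d_2 = 0
        cones.add(new Cone(padRight(Aeq, n + 1), negate(beq), new double[n + 1], 0.0));

        // cones 3, ..., m+2: 0 <= a_i x - b_i, i.e. A_{i+2} = 0, b_{i+2} = 0, c_{i+2} = (a_i, 0), d_{i+2} = -b_i
        for (int i = 0; i < A.length; i++) {
            double[] ci = new double[n + 1];
            System.arraycopy(A[i], 0, ci, 0, n);
            cones.add(new Cone(new double[1][n + 1], new double[1], ci, -b[i]));
        }
        return cones;
    }

    // solves F^T y = p/2 by forward substitution (F is upper triangular, so F^T is lower triangular)
    private static double[] halfFinvTp(double[][] F, double[] p) {
        int n = p.length;
        double[] y = new double[n];
        for (int i = 0; i < n; i++) {
            double s = 0.5 * p[i];
            for (int j = 0; j < i; j++) s -= F[j][i] * y[j];
            y[i] = s / F[i][i];
        }
        return y;
    }

    // appends zero columns so that a k-by-n block becomes k-by-width
    private static double[][] padRight(double[][] M, int width) {
        double[][] out = new double[M.length][width];
        for (int i = 0; i < M.length; i++) {
            System.arraycopy(M[i], 0, out[i], 0, M[i].length);
        }
        return out;
    }

    private static double[] negate(double[] v) {
        double[] out = new double[v.length];
        for (int i = 0; i < v.length; i++) out[i] = -v[i];
        return out;
    }
}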

SuanShu has implementations to solve both SOCP and QP problems.

Fastest Java Matrix Multiplication

Introduction

Matrix multiplication occupies a central role in scientific computing with an extremely wide range of applications. Many numerical procedures in linear algebra (e.g. solving linear systems, matrix inversion, factorizations, determinants) can essentially be reduced to matrix multiplication [5, 3]. Hence, there is great interest in investigating fast matrix multiplication algorithms, to accelerate matrix multiplication (and other numerical procedures in turn).

SuanShu was already the fastest in matrix multiplication and hence linear algebra per our benchmark.
SuanShu v3.0.0 benchmark

Starting from version 3.3.0, SuanShu has implemented an advanced algorithm for even faster matrix multiplication. It makes some operations 100x faster than those of our competitors! The new benchmark can be found here:
SuanShu v3.3.0 benchmark

In this article, we briefly describe our implementation of a matrix multiplication algorithm that dramatically accelerates dense matrix-matrix multiplication compared to the classical IJK algorithm.

Parallel IJK

We first describe the method against which our new algorithm is compared: IJK. Here is the algorithm performing the multiplication C = A\times B, where A is m\times n, B is n\times p, and C is m\times p:

for (int i = 0; i < m; i++) {
    for (int j = 0; j < p; j++) {
        for (int k = 0; k < n; k++) {
            C[i][j] += A[i][k] * B[k][j];
        }
    }
}

In SuanShu, this is implemented in parallel; the outermost loop is passed to a ParallelExecutor.
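
As an illustration of the same idea, here is a sketch of ours that parallelizes the outermost loop with Java's standard parallel streams rather than SuanShu's ParallelExecutor (whose API is not shown here); each task fills a distinct row of C, so no synchronization is needed.

import java.util.stream.IntStream;

public final class ParallelIjkSketch {

    public static double[][] multiply(double[][] A, double[][] B) {
        int m = A.length, n = B.length, p = B[0].length;
        double[][] C = new double[m][p];
        IntStream.range(0, m).parallel().forEach(i -> {  // one task per row of C
            for (int k = 0; k < n; k++) {
                double aik = A[i][k];
                for (int j = 0; j < p; j++) {
                    C[i][j] += aik * B[k][j];
                }
            }
        });
        return C;
    }
}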

As there are often more rows than available threads, the time complexity of this parallelized IJK is still roughly the same as that of IJK: O\left(mnp\right), or cubic time O\left(n^3\right) for m = n = p. This complexity leaves much to be desired.

The core of our new multiplication algorithm, the Strassen algorithm, reduces the time complexity to O\left(n^{\log_27}\right).

Strassen’s Algorithm

The Strassen algorithm [6] is based on the following block matrix multiplication:

\displaystyle \begin{pmatrix}A_{11} & A_{12}\\ A_{21} & A_{22} \end{pmatrix}\begin{pmatrix}B_{11} & B_{12}\\ B_{21} & B_{22} \end{pmatrix}=\begin{pmatrix}A_{11}B_{11}+A_{12}B_{21} & A_{11}B_{12}+A_{12}B_{22}\\ A_{21}B_{11}+A_{22}B_{21} & A_{21}B_{12}+A_{22}B_{22} \end{pmatrix}

The naive method of completing this involves 8 submatrix multiplications and 4 additions. The version (Winograd’s variant [7]) of Strassen’s algorithm that we use forgoes one submatrix multiplication in exchange for eleven extra additions/subtractions, which is faster when the submatrices are large enough.
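
This trade is what drives the complexity down: for n\times n matrices, one level of recursion costs 7 half-sized multiplications plus a quadratic amount of addition work, so by the master theorem

\displaystyle T(n) = 7\,T\!\left(\frac{n}{2}\right) + \Theta\!\left(n^2\right) \quad\Longrightarrow\quad T(n) = \Theta\!\left(n^{\log_2 7}\right) \approx \Theta\!\left(n^{2.807}\right)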

Fig 1. Visualization of Strassen’s Algorithm (Winograd variant)

The algorithm runs as follows (visualization above):

1. Split A into four equally sized quadrants A_{11}, A_{12}, A_{21}, A_{22}; do the same for B. (Assume first that all dimensions are even.)

2. Obtain the following factor matrices:

\displaystyle \begin{array}{ccccccc} M_{1} & = & A_{11} & \qquad & N_{1} & = & B_{11}\\ M_{2} & = & A_{12} & \qquad & N_{2} & = & B_{21}\\ M_{3} & = & A_{21}+A_{22} & \qquad & N_{3} & = & B_{12}-B_{11}\\ M_{4} & = & M_{3}-A_{11} & \qquad & N_{4} & = & B_{22}-N_{3}\\ M_{5} & = & A_{11}-A_{21} & \qquad & N_{5} & = & B_{22}-B_{12}\\ M_{6} & = & A_{12}-M_{4} & \qquad & N_{6} & = & B_{22}\\ M_{7} & = & A_{22} & \qquad & N_{7} & = & B_{21}-N_{4} \end{array}

3. Obtain P_{i}=M_{i}\times N_{i} for i\in\left\{ 1,2,\ldots,7\right\}. Depending on the dimensions, we either use Parallel IJK or make a recursive call to Strassen’s algorithm.

4. The final product C=A\times B can then be obtained as follows (using some temporary matrices T_{i}):

\displaystyle \begin{array}{ccccccc} C_{11} & = & P_{1}+P_{2} & \qquad & C_{22} & = & T_{2}+P_{3}\\ T_{1} & = & P_{1}+P_{4} & \qquad & T_{3} & = & T_{1}+P_{3}\\ T_{2} & = & T_{1}+P_{5} & \qquad & C_{12} & = & T_{3}+P_{6}\\ C_{21} & = & T_{2}+P_{7} & \qquad \end{array}
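
To make steps 2–4 concrete, the following is a minimal, single-threaded Java sketch of one Strassen–Winograd recursion on square matrices. It is an illustration of the formulas above, not the SuanShu implementation (which is parallel, blocked, and uses dynamic peeling); it falls back to the classical triple loop for small or odd-sized blocks, and the names StrassenWinogradSketch and LEAF_SIZE are ours.

public final class StrassenWinogradSketch {

    private static final int LEAF_SIZE = 64; // below this size, fall back to the classical loop

    public static double[][] multiply(double[][] A, double[][] B) {
        int n = A.length;
        if (n <= LEAF_SIZE || n % 2 != 0) {
            return multiplyNaive(A, B);
        }
        int h = n / 2;
        // step 1: split A and B into quadrants
        double[][] A11 = block(A, 0, 0, h), A12 = block(A, 0, h, h);
        double[][] A21 = block(A, h, 0, h), A22 = block(A, h, h, h);
        double[][] B11 = block(B, 0, 0, h), B12 = block(B, 0, h, h);
        double[][] B21 = block(B, h, 0, h), B22 = block(B, h, h, h);

        // step 2: factor matrices (M1 = A11, M2 = A12, M7 = A22; N1 = B11, N2 = B21, N6 = B22)
        double[][] M3 = add(A21, A22), N3 = sub(B12, B11);
        double[][] M4 = sub(M3, A11),  N4 = sub(B22, N3);
        double[][] M5 = sub(A11, A21), N5 = sub(B22, B12);
        double[][] M6 = sub(A12, M4),  N7 = sub(B21, N4);

        // step 3: the 7 recursive products
        double[][] P1 = multiply(A11, B11);
        double[][] P2 = multiply(A12, B21);
        double[][] P3 = multiply(M3, N3);
        double[][] P4 = multiply(M4, N4);
        double[][] P5 = multiply(M5, N5);
        double[][] P6 = multiply(M6, B22);
        double[][] P7 = multiply(A22, N7);

        // step 4: recombination
        double[][] C11 = add(P1, P2);
        double[][] T1 = add(P1, P4);
        double[][] T2 = add(T1, P5);
        double[][] T3 = add(T1, P3);
        double[][] C21 = add(T2, P7);
        double[][] C22 = add(T2, P3);
        double[][] C12 = add(T3, P6);
        return assemble(C11, C12, C21, C22);
    }

    private static double[][] multiplyNaive(double[][] A, double[][] B) {
        int m = A.length, n = B.length, p = B[0].length;
        double[][] C = new double[m][p];
        for (int i = 0; i < m; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < p; j++)
                    C[i][j] += A[i][k] * B[k][j];
        return C;
    }

    // copies the h-by-h block of X whose top-left corner is (r, c)
    private static double[][] block(double[][] X, int r, int c, int h) {
        double[][] Y = new double[h][h];
        for (int i = 0; i < h; i++) System.arraycopy(X[r + i], c, Y[i], 0, h);
        return Y;
    }

    private static double[][] add(double[][] X, double[][] Y) { return combine(X, Y, 1.0); }

    private static double[][] sub(double[][] X, double[][] Y) { return combine(X, Y, -1.0); }

    private static double[][] combine(double[][] X, double[][] Y, double sign) {
        int h = X.length;
        double[][] Z = new double[h][h];
        for (int i = 0; i < h; i++)
            for (int j = 0; j < h; j++)
                Z[i][j] = X[i][j] + sign * Y[i][j];
        return Z;
    }

    // stitches the four quadrants back into the full product matrix
    private static double[][] assemble(double[][] C11, double[][] C12, double[][] C21, double[][] C22) {
        int h = C11.length;
        double[][] C = new double[2 * h][2 * h];
        for (int i = 0; i < h; i++) {
            System.arraycopy(C11[i], 0, C[i], 0, h);
            System.arraycopy(C12[i], 0, C[i], h, h);
            System.arraycopy(C21[i], 0, C[i + h], 0, h);
            System.arraycopy(C22[i], 0, C[i + h], h, h);
        }
        return C;
    }
}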

Odd Dimensions

So far this algorithm has ignored the cases when A and/or B has an odd number of rows/columns. There are several methods of dealing with this [2, 4]. For example, one could pad the matrices statically so that the dimensions are always even until the recursion passes to IJK (static padding); or pad only when one of the dimensions is odd (dynamic padding).

Alternatively one could disregard the extra rows/columns until after the algorithm completes, and then take care of them afterwards (i.e. if A has an extra row or B has an extra column, use the appropriate matrix-vector operation to calculate the remaining row/column of C. If A has an extra column and B has an extra row, their product can be added on to C afterwards.) We chose this method, called dynamic peeling, for our implementation.
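
For instance, if the inner dimension n is odd, write A = \begin{pmatrix}A' & a\end{pmatrix} and B = \begin{pmatrix}B' \\ b^T\end{pmatrix}, where a is the last column of A and b^T is the last row of B. Then

\displaystyle A B = \begin{pmatrix}A' & a\end{pmatrix}\begin{pmatrix}B' \\ b^T\end{pmatrix} = A' B' + a\, b^T

so A' B' (whose inner dimension is even) is handled by Strassen’s algorithm and the rank-one update a\, b^T is added on afterwards; an extra row of A or an extra column of B is similarly peeled off and handled by a matrix-vector product.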

Blocking and Tiling

Taken on its own, the above Strassen’s algorithm works well, provided both matrices are roughly square. In practice, we may encounter cases where either matrix is highly rectangular (e.g., very tall or very wide).

We solve this by slicing the matrices into blocks which are nearly square, then using Strassen’s algorithm on the submatrices. The blocking scheme is devised so that long or tall strips are avoided.

Fig 2. Left: A strict blocking scheme (strips!). Right: An efficient blocking scheme (our implementation).

Performance

The following charts show the performance of our hybrid Block-Strassen algorithm versus Parallel IJK on an Intel ® Core i5-3337U CPU @ 1.80 GHz with 6GB RAM, running Java 1.8.0 update 40.

Tests are patterned after D’Alberto and Nicolau [1]: we ran C = A \times B for random matrices A (m\times n) and B (n \times p), for every triple \left(m, n, p\right) in S^3, where S = \{100, 500, 1000, 2500, 5000, 7500, 10000\}. Each multiplication was done three times using Parallel IJK, then three times using Hybrid Block-Strassen (HBS). The average times for each method are compared and tabulated.

Fig 3. Multiplication Time (by Method) vs Complexity m×n×p

Figure 3 shows the multiplication time plotted against the product mnp of the dimensions. The multiplication time for IJK is O\left(mnp\right), and our empirical results show that the multiplication times for both Parallel IJK and HBS are strongly linearly related to this complexity. HBS, however, has a significantly smaller gradient.

The gradients of the best-fit lines suggest that as complexity approaches infinity (ignoring memory constraints), HBS will take 63.5% less time than IJK. Several data points (e.g., m, n, p \ge 5000), however, show an even greater speedup.

Fig 4. Time Savings (in seconds) using HBS versus Parallel IJK
Fig 5. Time Savings (as a percent of P.IJK Time) using HBS versus Parallel IJK

Figures 4 and 5 show the time saving of HBS over Parallel IJK, in seconds (Figure 4), and as a percentage of the Parallel IJK time (Figure 5). Each table is for a specific value of n (number of columns of A), and runs over values of m (number of rows of A) and p (number of columns of B).

Fig 6. Maximum entry-wise Relative Error of HBS versus PIJK (in units of 1e-15)

Finally, an accuracy test was also run to address concerns regarding the numerical stability of Strassen’s algorithm [3]. Figure 6 shows the maximum entry-wise relative error

\displaystyle\max_{i,j}\left|\frac{C_{i,j}^{{\rm IJK}}-C_{i,j}^{{\rm HBS}}}{C_{i,j}^{{\rm IJK}}}\right|

of the resulting product matrix. We see that none of the errors exceeds 2 \times 10^{-14} (note that we did not run Strassen’s algorithm all the way down to the scalar level; when the matrices M_i and N_i are small enough, IJK is used, which reduces the error). This suggests that HBS is remarkably accurate and a strong candidate for general-purpose contexts where speed is a priority.
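
For reference, the error measure above can be computed with a few lines of Java (a simple sketch of ours; it assumes no entry of the reference product is exactly zero):

// maximum entry-wise relative error between the reference (IJK) and HBS products
static double maxRelativeError(double[][] cIjk, double[][] cHbs) {
    double max = 0.0;
    for (int i = 0; i < cIjk.length; i++) {
        for (int j = 0; j < cIjk[i].length; j++) {
            max = Math.max(max, Math.abs((cIjk[i][j] - cHbs[i][j]) / cIjk[i][j]));
        }
    }
    return max;
}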

References

[1] Paolo D’Alberto and Alexandru Nicolau. Adaptive Strassen’s matrix multiplication. In Proceedings of the 21st Annual International Conference on Supercomputing, ICS ’07, pages 284–292, New York, NY, USA, 2007. ACM.
[2] Hossam ElGindy and George Ferizis. On improving the memory access patterns during the execution of Strassen’s matrix multiplication algorithm. In Proceedings of the 27th Australasian Conference on Computer Science – Volume 26, ACSC ’04, pages 109–115, Darlinghurst, Australia, 2004. Australian Computer Society, Inc.
[3] Nicholas J. Higham. Accuracy and Stability of Numerical Algorithms. SIAM, 2002.
[4] Steven Huss-Lederman, Elaine M. Jacobson, Anna Tsao, Thomas Turnbull, and Jeremy R. Johnson. Implementation of Strassen’s algorithm for matrix multiplication. In Proceedings of the 1996 ACM/IEEE Conference on Supercomputing, Supercomputing ’96, Washington, DC, USA, 1996. IEEE Computer Society.
[5] Steven S. Skiena. The Algorithm Design Manual. Springer, 2nd edition, 2008.
[6] Volker Strassen. Gaussian elimination is not optimal. Numerische Mathematik, 13(4):354–356, 1969.
[7] Shmuel Winograd. On multiplication of 2×2 matrices. Linear Algebra and its Applications, 4(4):381–388, 1971.

Using SuanShu on Amazon EC2

Cloud computing is very popular nowadays, and delegating your CPU-intensive computation (or simulation) to the cloud seems to be a smart choice. Many of our users have asked whether SuanShu can be run on Amazon’s Elastic Compute Cloud (EC2), because the SuanShu license requires a MAC address and they have no control over which machine is used when they launch an EC2 instance. Here comes good news! Amazon Web Services (AWS) now supports the Elastic Network Interface (ENI), by which you can bind your EC2 instance to a specified network interface. Therefore, you can license SuanShu against the MAC address of the ENI, and launch an instance with the same ENI and MAC address. For details, please visit the blog here. The user guide for ENI can also be found here.

SuanShu Introduction

Numerical Method Inc. publishes SuanShu, a Java numerical and statistical library. The objective of SuanShu is to make programming engineering applications very easy. Programmers are able to write mathematics in source code that is solidly object-oriented and individually testable. SuanShu source code adheres to the strictest coding standard so that it is readable, maintainable, and easily modified and extended.

SuanShu revolutionizes how numerical computing is traditionally done, e.g., Netlib and GSL. The repositories of these most popular and somewhat “standard” libraries are largely collections of ad-hoc source code in obsolete languages, e.g., FORTRAN and C. One of the biggest problems with this code is that it is not readable (for most modern programmers), and hence unmaintainable. For example, it is quite a challenge to understand AS 288, let alone improve it. Other problems include, but are not limited to, the lack of data structures, duplicated code, being entirely procedural, very poor variable naming, abuse of GOTO, the lack of test cases, insufficient documentation, the lack of IDE support, inconvenient linking to modern languages such as Java, and being unfriendly to parallel computing.

To address these problems, SuanShu provides a framework of reusable math components (not procedures) so that programmers can put components together like Lego bricks to build more complex algorithms. SuanShu is written anew so that it conforms to modern programming practices, such as sensible variable naming, code structuring, reusability, readability, maintainability, and sound software engineering process. To ensure very high code quality and very few bugs, SuanShu has a few thousand unit test cases that run daily.

The basics of SuanShu cover the following.

– numerical differentiation and integration
– polynomials and the Jenkins-Traub algorithm
– root finding
– unconstrained and constrained optimization for univariate and multivariate functions
– linear algebra: matrix operations and factorization
– sparse matrix
– descriptive statistics
– random sampling from distributions

Compared to competing products, SuanShu, we believe, has the most extensive coverage in statistics. SuanShu covers the following.

– Ordinary Least Square (OLS) regression
– Generalized Linear Model (GLM) regression
– a full suite of residual analysis
– Stochastic Differential Equation (SDE) simulation
– a comprehensive library of hypothesis testing: Kolmogorov-Smirnov, D’Agostino, Jarque-Bera, Lilliefors, Shapiro-Wilk, One-way ANOVA, T, Kruskal-Wallis, Siegel-Tukey, Van der Waerden, Wilcoxon rank sum, Wilcoxon signed rank, Breusch-Pagan, ADF, Bartlett, Brown-Forsythe, F, Levene, Pearson’s Chi-square, Portmanteau
– time series analysis, univariate and multivariate
– ARIMA, GARCH modelling, simulation, fitting, and prediction
– sample and theoretical auto-correlation
– cointegration
– hidden Markov chain
– Kalman filter
– and more

For the full article, please read “SuanShu Introduction”.

Class Parsimony

Hello World!

Today I have my first post ever on the Numerical Method blog. I’d like to discuss a design decision made when writing our library, to give our friends some insight into how the SuanShu library works. Please leave me feedback and comments so that we can continue to improve the product for you.

We have decided to make the class library as parsimonious as possible to avoid method pollution. This is inspired by the jMatrices white paper. The challenge is to organize the methods into minimal and correct packages.

I will illustrate this with SuanShu’s Matrix packages.

The Matrix class has only 26 methods, of which 9 are constructors and related methods, 3 are overrides for the AbstractMatrix interface, and 8 are overrides for the MatrixSpace interface. Only 6 are class-specific, provided to make calling these methods convenient for the user. The other dozens of matrix operations, such as the various factorizations, properties like rank, and transformations like inverse, are grouped into multiple classes and packages. In most cases, each of these operations is a class of its own.

For instance, the inverse operation is itself a class inheriting from Matrix; its constructor takes as input the Matrix to invert. For example, to find the inverse of

A = \begin{bmatrix} 1 & 2 & 3 \\ 6 & 5 & 4 \\ 8 & 7 & 9 \end{bmatrix},

we code


Matrix A = new Matrix(new double[][]{
    {1, 2, 3},
    {6, 5, 4},
    {8, 7, 9}
});

Matrix Ainv = new Inverse(A);

SuanShu computes

A^{-1} = \begin{bmatrix} -0.809524 & -0.142857 & 0.333333 \\ 1.047619 & 0.714286 & -0.666667 \\ -0.095238 & -0.428571 & 0.333333 \end{bmatrix}

It is important to note that Ainv is a Matrix, created by the keyword new, not by a method call.

In summary, we choose to have hundreds of classes rather than a class with hundreds of methods. Each class is kept deliberately short. This class-parsimony principle is a key design decision guiding the development of the whole library.