Sunday, December 2, 2012

LSQR tutorial

My collaborator Michael Mahoney taught me about LSQR: an efficient iterative procedure for solving least-squares problems. It is an oldie-but-goodie paper from 1982 by Paige and Saunders.
It is a conjugate-gradient-like iterative method whose heavy operations are the matrix-vector products with the sparse matrix A and its transpose.

LSQR solves one of the following problems:

1. A system of linear equations: A*x = b
2. Linear least squares: minimize ||A*x - b||_2
3. Damped (regularized) least squares: minimize ||A*x - b||_2^2 + lambda^2 * ||x||_2^2

This method is appealing for two reasons:
1) It applies to rectangular matrices - both overdetermined and underdetermined.
2) It can use a warm-start initial vector. This is useful when the system changes slightly over time and you want to keep tracking the solution without re-solving the system from scratch.

The C code is downloadable from here: http://www.stanford.edu/group/SOL/software/lsqr.html
(I strongly advise against using the C++ version, which is overly complicated).

Some tips:
0) It is advised to start with the matlab/octave version, which has surprisingly good performance. Only once you are happy with the algorithm's behavior should you switch to C.
1) You have to implement the matrix-vector product operator yourself (a sketch follows below):

/*
                  Compute the product
                If mode = 1, compute  y = y + A*x
                If mode = 2, compute  x = x + A(transpose)*y.
*/
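
For concreteness, here is a minimal sketch of such a routine, assuming the matrix is stored in compressed sparse row (CSR) format. The csr_matrix struct and its field names are my own illustration, not part of the LSQR distribution, and the exact aprod signature varies slightly between versions:

/* Illustrative CSR storage; not part of the LSQR distribution. */
typedef struct {
    int m, n;          /* matrix dimensions                   */
    int    *row_ptr;   /* length m+1: start of each row       */
    int    *col_ind;   /* length nnz: column index per entry  */
    double *val;       /* length nnz: nonzero values          */
} csr_matrix;

void aprod(int mode, int m, int n, double x[], double y[], void *UsrWrk)
{
    csr_matrix *A = (csr_matrix *)UsrWrk;
    int i, k;
    if (mode == 1) {                 /* y = y + A*x   */
        for (i = 0; i < m; i++)
            for (k = A->row_ptr[i]; k < A->row_ptr[i+1]; k++)
                y[i] += A->val[k] * x[A->col_ind[k]];
    } else {                         /* x = x + A'*y  */
        for (i = 0; i < m; i++)
            for (k = A->row_ptr[i]; k < A->row_ptr[i+1]; k++)
                x[A->col_ind[k]] += A->val[k] * y[i];
    }
}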
2) For a warm start, I use the approach suggested in the code's comments (a sketch in C appears after the comment below):


//     Note that x is not an input parameter.
//     If some initial estimate x0 is known 
//     one could proceed as follows:
//
//       1. Compute a residual vector     r0 = b - A*x0.
//       2. Use LSQR to solve the system  A*dx = r0.
//       3. Add the correction dx to obtain a final solution x = x0 + dx.
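
A minimal sketch of this recipe, reusing the csr_matrix and aprod from the previous snippet. Here solve_lsqr() is a hypothetical stand-in for the real lsqr() entry point, whose long argument list (tolerances, iteration limits, etc.) I omit for clarity:

#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the real lsqr() call: solves A*dx = r0. */
void solve_lsqr(csr_matrix *A, double r0[], double dx[]);

/* Warm start: steps 1-3 from the comment above. */
void lsqr_warm_start(csr_matrix *A, const double b[],
                     const double x0[], double x[])
{
    int m = A->m, n = A->n, i;
    double *r0 = malloc(m * sizeof(double));
    double *dx = malloc(n * sizeof(double));

    /* 1. r0 = b - A*x0.  aprod with mode 1 accumulates y += A*x,
          so compute A*x0 into a zeroed vector and subtract from b. */
    memset(r0, 0, m * sizeof(double));
    aprod(1, m, n, (double *)x0, r0, A);
    for (i = 0; i < m; i++) r0[i] = b[i] - r0[i];

    /* 2. Use LSQR to solve A*dx = r0. */
    solve_lsqr(A, r0, dx);

    /* 3. Final solution: x = x0 + dx. */
    for (i = 0; i < n; i++) x[i] = x0[i] + dx[i];

    free(r0);
    free(dx);
}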
3) It is advised to link against a CBLAS implementation for an additional performance gain. With GSL, use the
library flags: -lgsl -lgslcblas
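
For example, an illustrative compile line (the source file names here are hypothetical):

gcc -O2 lsqr.c aprod.c main.c -o lsqr_demo -lgsl -lgslcblas -lm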


For a system of size 500x5000 with 150K nonzeros, each iteration of LSQR takes 6 milliseconds!
(Measured on a single core of a 2.56GHz Intel CPU.)
