MoreLinear : linear algebra methods in Clojure
Preface
This project developed out of an initial effort to build a working linear algebra system in ELisp called elinear.
As before I am following the textbook Matrix Analysis and Applied Linear Algebra by Carl Meyer. I highly recommend it and I will reference it often, but this document won't depend on any information presented in elinear or the book.
Unlike elinear, this project doesn't explain Clojure or the details of how to represent matrices on the computer. For more details, see the notes at the end.
Linear systems : Ax=b
We start off with the most straightforward case: a square system Ax=b. We are given a known square matrix A and a known output b, and we are tasked with finding the corresponding input x if it exists. So far we have the following methods:
- The LU decomposition
- Also known as Gaussian reduction. Reduces our system A to two systems/matrices L and U, one lower triangular and the other upper triangular (see the sketch after this list)
- The LU decomposition with partial pivoting
- Similar to the previous method, but we swap rows so that the pivot is always the largest value in its column.
- The LU decomposition with complete pivoting
- Again similar to the previous method, but the columns are swapped as well.
- The Gram-Schmidt QR decomposition
- Reduces the matrix A to an orthogonal matrix Q and an upper triangular matrix R
- The Householder QR decomposition
- An alternate method to get a QR matrix pair through Householder reflectors
- The Givens QR decomposition
- An alternate method to get a QR pair through rotation matrices
Note: The LU and Gram-Schmidt methods have been implemented in ELisp if you'd like to see how it's done: elinear
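To make the first of these concrete, here is a minimal sketch of an LU decomposition without pivoting, written with core.matrix. The function name lu and its mutable-update style are my own for illustration, not necessarily how this project's code is organized; it assumes no zero pivots are encountered.

(require '[clojure.core.matrix :as m])

(defn lu
  "Doolittle-style LU decomposition without pivoting.
   Returns [L U] with L unit lower triangular and U upper triangular."
  [A]
  (let [n (first (m/shape A))
        L (m/mutable (m/identity-matrix n))
        U (m/mutable (m/matrix A))]
    (doseq [k (range n)
            i (range (inc k) n)]
      (let [factor (/ (m/mget U i k) (m/mget U k k))]
        (m/mset! L i k factor)              ;; record the multiplier in L
        (doseq [j (range k n)]              ;; eliminate row i below pivot k
          (m/mset! U i j (- (m/mget U i j)
                            (* factor (m/mget U k j)))))))
    [L U]))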
Notes on getting a good numerical solution:
Section 1.5 goes into a good amount of detail about why floating point arithmetic will introduce errors and how to mitigate the problem. It suggests using row-scaling and column-scaling.
Row-scaling will alter the magnitude of the output values of your linear system, while column-scaling will alter the scale of your inputs. Don't hesitate to have the inputs and outputs use different units. Page 28 suggests scaling the rows such that the maximum magnitude of each row is equal to 1 (i.e. divide each row by its largest coefficient).
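A row-scaling pass along those lines could look like this minimal sketch (the function name row-scale is hypothetical, and it assumes no all-zero rows):

(require '[clojure.core.matrix :as m])

(defn row-scale
  "Divide every row by its largest-magnitude entry,
   so each row's maximum magnitude becomes 1."
  [A]
  (m/matrix (mapv (fn [row]
                    (let [mx (apply max (map #(Math/abs (double %)) row))]
                      (mapv #(/ (double %) mx) row)))
                  (m/rows A))))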
The topic of residuals, sensitivity and conditioning will be revisited later.
Least Squares
The next step is extending the previous methods to non-square systems and systems with no solution. In these cases applying the previous methods directly will often fail because a precise solution doesn't exist. We need the Least Squares method to find a nearby solution.
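A minimal sketch of Least Squares via the normal equations AᵀAx=Aᵀb (the function name least-squares is my own; note that forming AᵀA squares the condition number, so this is fine for small well-behaved problems but not the numerically safest route):

(require '[clojure.core.matrix :as m])

(defn least-squares
  "Solve the normal equations (Aᵀ A) x = Aᵀ b for the least-squares x."
  [A b]
  (let [At (m/transpose A)]
    (m/mmul (m/inverse (m/mmul At A))
            (m/mmul At b))))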
- Vandermonde Matrices
- Are used to solve for a polynomial f(x) such that it fits a list of x/y coordinates. In combination with the Least Squares method we can fit polynomials to a list of points/measurements (see the example below).
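As an example, here is a sketch that builds a Vandermonde matrix and reuses the least-squares function from above to fit a parabola (vandermonde is a hypothetical helper name):

(defn vandermonde
  "Build a Vandermonde matrix with one row [1 x x^2 ... x^degree] per x."
  [xs degree]
  (m/matrix (mapv (fn [x] (mapv #(Math/pow x %) (range (inc degree)))) xs)))

;; these four points lie exactly on y = x^2 + 1,
;; so the fitted coefficients should come out near [1 0 1]
(least-squares (vandermonde [0.0 1.0 2.0 3.0] 2)
               (m/matrix [1.0 2.0 5.0 10.0]))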
Similarity transforms : R⁻¹BRx=b
Exploring the possibilities of transforming our system A into a different basis where it has a more convenient form S. We take our inputs x from the standard basis (e1, e2 .. en) into this new more convenient basis, then we pass them through the simpler system S, and finally take the output and transform it back to the standard basis to get the output b.
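A small numerical sketch of the idea (A, R and S here are arbitrary illustrative values; R just needs to be invertible):

(require '[clojure.core.matrix :as m])

(def A (m/matrix [[2 1] [1 2]]))
(def R (m/matrix [[1 1] [1 -1]]))               ;; change-of-basis matrix
(def S (m/mmul R (m/mmul A (m/inverse R))))     ;; the same system expressed in the new basis

(def x (m/matrix [3 4]))
(m/mmul A x)                                    ;; direct application: [10.0 11.0]
(m/mmul (m/inverse R)
        (m/mmul S (m/mmul R x)))                ;; R⁻¹SRx - the same output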
- Hessenberg Form
- Using Householder reflectors we transform our matrix A into a form that is almost upper triangular
- Discrete Fourier Transform
- If x in Ax=b is n equally spaced samples (in time), then we can look at the periodicity of input signals by going to an orthogonal basis F composed of complex sinusoids ( cos(x) + i·sin(x) ) whose periods are a whole number of samples (i.e. integer values from n to 1 - this ensures their orthogonality). F can be very simply normalized into a unitary matrix and then very easily inverted to get F⁻¹. So once we construct this basis F we can move inputs into this frequency space, have linear systems operate on frequencies, and then using F⁻¹ easily reconstruct the resulting output, i.e. F⁻¹SFx=b (a sketch follows)
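Since the default core.matrix backends don't handle complex entries, here is a minimal plain-Clojure sketch of what applying F amounts to, computing the DFT of a real signal straight from the definition (naive-dft is a hypothetical name; this is the O(n²) sum, not an FFT):

(defn naive-dft
  "Return the DFT of a real signal as a sequence of [real imag] pairs,
   from the definition X_k = sum_j x_j * e^(-2*pi*i*k*j/n)."
  [xs]
  (let [n (count xs)]
    (for [k (range n)]
      [(reduce + (map-indexed
                   (fn [j x] (* x (Math/cos (/ (* 2.0 Math/PI k j) n)))) xs))
       (reduce + (map-indexed
                   (fn [j x] (* x (- (Math/sin (/ (* 2.0 Math/PI k j) n))))) xs))])))

(naive-dft [1.0 0.0 -1.0 0.0])  ;; a cosine with period 4 samples: energy at k=1 (and its mirror k=3)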
Other tools..
- Cholesky Decomposition
- Given a positive definite matrix A (one that is symmetric and whose pivots are all positive), after we do Gaussian reduction we get A=LDU, where L and U have 1's on the diagonal and D is diagonal. Since A is symmetric, A=Aᵀ and therefore LDU=(LDU)ᵀ=UᵀDᵀLᵀ. Since D=Dᵀ and the LU factorization is unique, L must equal Uᵀ and U must equal Lᵀ, so we can write A=LDLᵀ. Then we can split D by taking the square root of its diagonal elements (that's why they need to be positive!) and get A=LD^(1/2)D^(1/2)Lᵀ. Set R=LD^(1/2) and you can write A=RRᵀ.
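A minimal sketch of the resulting algorithm (the function name cholesky and the mutable-update style are illustrative, not necessarily how the project implements it):

(require '[clojure.core.matrix :as m])

(defn cholesky
  "Return the lower-triangular R such that A = R Rᵀ,
   for a positive definite matrix A."
  [A]
  (let [n (first (m/shape A))
        R (m/mutable (m/zero-matrix n n))]
    (doseq [i (range n)
            j (range (inc i))]
      (let [s (reduce + (for [k (range j)]
                          (* (m/mget R i k) (m/mget R j k))))]
        (m/mset! R i j
                 (if (= i j)
                   (Math/sqrt (- (m/mget A i i) s))  ;; diagonal: square root of the pivot
                   (/ (- (m/mget A i j) s)           ;; below the diagonal
                      (m/mget R j j))))))
    R))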
Notes on Clojure
This new linear algebra project is written in Clojure - a language that runs on both the JVM and in JavaScript (I'm only really testing things on the JVM at the moment). There should be no language-specific funny business, and though everything is written in Clojure's "functional style" it shouldn't be difficult to translate to a different language and library.
core.matrix
In an effort to mitigate the issues I had in ELisp, I'm using the core.matrix library, which acts as a "front end" API for many different backends - some on the JVM, others in JS. It provides lots of helper functions so I can quickly write what I want. The library is generally very flexible and full-featured and uses a very generic N-dimensional array system.
That said, it's also not extremely performant and you can very easily end up doing operations that are very slow. There are a lot of cases that this library is simply not set up to handle in an intelligent way, and the N-dimensional array paradigm is in many practical scenarios a hindrance.
For instance, if you want to represent a convolution using matrices, you would take your input signal (which for, say, a short audio clip amounts to tens of thousands of data points) and multiply it by an N×N band matrix. In an N-dimensional dense matrix system such a multiplication is either impossible or exceedingly slow. In a more advanced matrix system you would have a special band matrix object and special matrix multiplication operators for it that would be very efficient.
A more full-featured performant library will start mixing in band matrices, symmetric matrices, upper/lower triangular matrices.. etc. and so the whole system becomes quite complicated and interdependent - and for the educational purposes of this project that's mostly just noise.
If you are concerned about getting as much as you can out of your system, then I suggest looking at the neanderthal library, which provides a thin wrapper around the Intel MKL (in effect, BLAS). It makes it very easy to work with BLAS, and if you're on an x64 system this is more or less the best CPU-based performance you can hope for.
Note: This project has a neanderthal branch where I started implementing a few of the first functions. It should give you a good taste of what working in the more constrained BLAS environment looks like. It's an interesting workflow and really forces you to think a lot harder about your algorithms. But there is more code noise, and the helper functions need to be written manually for each case (b/c each one has its own nuances). It also doesn't run on ARM or in the browser. It does have the ability to run in OpenCL, but this functionality also strangely requires an x64 system to back it up (for passing the matrices to and from the GPU).
Project management
Project management in Clojure is done through a top-level deps.edn file which specifies project dependencies. In our case that's core.matrix and the vectorz backend (without which some functionality is broken).
{:deps {net.mikera/core.matrix {:mvn/version "0.62.0"}
        net.mikera/vectorz-clj {:mvn/version "0.48.0"}}}
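Once those dependencies are on the classpath, a REPL session can look like this (a minimal sketch):

(require '[clojure.core.matrix :as m])
(m/set-current-implementation :vectorz)  ;; use the vectorz backend

(def A (m/matrix [[1 2] [3 4]]))
(def x (m/matrix [5 6]))
(m/mmul A x)  ;; => [17.0 39.0]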
TODOs
- Schur decomposition/complement
- Implement the Sherman-Morrison update formula
- Sensitivity/condition numbers need to be revisited and expanded on (page 126-128)
- Do exercise 3.8.8
- Tridiagonal matrices - 3.10.6
- Implement the Least Squares numerical stability comparison (and maybe speed tests as well)
- Maybe work out a motivational exercise to drive all this..
- Figure out why the Cholesky doesn't need a permutation: https://math.stackexchange.com/questions/621045/why-does-cholesky-decomposition-exist
End
This webpage is generated from an org-document (at ./index.org) that also generates all the files described. Once opened in Emacs:
- C-c C-e h h generates the webpage
- C-c C-v C-t exports the code blocks into the appropriate files
- C-c C-c runs org-babel-execute-src-block
- C-c C-v C-b runs org-babel-execute-buffer