Disclaimer: These notes are compiled from information available here and on Wikipedia.

Conjugate gradient is effective for solving systems of the form:

$$Ax = b$$

where:

- $x$ is an unknown vector
- $A$ is a known square, symmetric, positive-definite matrix
- $b$ is a known vector

The CG method is most effective in solving systems where $A$ is sparse. In the case where $A$ is dense, the solution is best found by factorizing $A$ and using back substitution. On the other hand, when $A$ is sparse, factorizing and back substitution become lengthy procedures, since the triangular factors generally fill in with nonzeros and lose the sparsity of $A$.

Now, say we are finding the optimal solution of a quadratic form $f(x)$:

$$f(x) = \frac{1}{2}x^T A x - b^T x + c$$

For a square, symmetric, positive-definite matrix $A$, the equation can be solved by finding the point where the gradient $f'(x) = Ax - b$ becomes $0$. Therefore the solution of $Ax = b$ gives the critical point (the minimizer) of $f(x)$.
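As a quick numerical check of this claim, here is a minimal sketch; the $2 \times 2$ system used is an illustrative assumption, not one from the notes:

```python
import numpy as np

# Illustrative symmetric positive-definite system (an assumption for demonstration)
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])

def f(x):
    # Quadratic form f(x) = 1/2 x^T A x - b^T x (the constant c is omitted)
    return 0.5 * x @ A @ x - b @ x

x_star = np.linalg.solve(A, b)   # solution of A x = b
grad = A @ x_star - b            # gradient f'(x) = A x - b; zero at the minimizer
```

The gradient vanishes at `x_star`, and nudging `x_star` in any direction only increases `f`, confirming it is the minimizer.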

So why use CG?

The idea is to solve the equation $Ax = b$ not by finding the intersection of hyperplanes but, rather, by iterative minimization of $f(x)$.

*Method of Steepest Descent*

The method of steepest descent entails starting from an arbitrary point $x_0$ and consequently taking steps along the direction opposite to the gradient, which is $-f'(x_i) = b - Ax_i$, to reach the optimal point of $f(x)$. It is convenient to write:

$$x_{i+1} = x_i + \alpha r_i$$

where $\alpha$ is the step length taken in the direction of the residual, which is $r_i = b - Ax_i$.

With the error defined as $e_i = x_i - x$, we have $r_i = -Ae_i$, and $r_i$ is essentially the direction of steepest descent, i.e. $r_i = -f'(x_i)$. Therefore the step length $\alpha$ can be found by line search.
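The update above can be sketched in a few lines of NumPy. The matrix, vector, tolerance, and iteration cap below are illustrative assumptions; the step length comes from an exact line search, $\alpha_i = r_i^T r_i / r_i^T A r_i$, obtained by minimizing $f$ along $r_i$:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=10_000):
    """Minimize f(x) = 1/2 x^T A x - b^T x by steepest descent."""
    x = x0.astype(float)
    for _ in range(max_iter):
        r = b - A @ x                      # residual = -f'(x), steepest-descent direction
        if np.linalg.norm(r) < tol:
            break
        alpha = (r @ r) / (r @ (A @ r))    # exact line search along r
        x = x + alpha * r
    return x

# Illustrative SPD system (an assumption, not from the notes)
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])
x = steepest_descent(A, b, np.zeros(2))
```

After convergence, `x` satisfies $Ax = b$ to within the tolerance.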

*Eigenvectors and Eigenvalues*

An eigenvector of a matrix $B$ is a nonzero vector $v$ which, when multiplied by $B$, doesn't rotate; rather, it changes in length or reverses its direction. Therefore, for some scalar $\lambda$, $Bv = \lambda v$. The scalar $\lambda$ is called the eigenvalue of the eigenvector $v$.

Spectral radius: $\rho(B) = \max_i |\lambda_i|$. If the spectral radius of an iteration matrix is less than one, the iteration converges. The lower the value of the spectral radius, the faster the convergence.
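For instance, with a small illustrative iteration matrix (an assumption for demonstration, not one from the notes):

```python
import numpy as np

# Illustrative iteration matrix (an assumption for demonstration)
B = np.array([[0.0, -2.0 / 3.0],
              [-1.0 / 3.0, 0.0]])

# Spectral radius = largest eigenvalue magnitude
rho = max(abs(np.linalg.eigvals(B)))
# Here rho is below 1, so an iteration of the form x_{i+1} = B x_i + z converges
```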

# Jacobi iterative method

In pursuit of solving $Ax = b$, we split $A$ into two matrices with its diagonal and non-diagonal elements:

$$A = D + E$$

where $D$ is the diagonal part of $A$ and $E$ contains the off-diagonal elements. Then $Ax = b$ becomes $Dx = -Ex + b$, i.e.

$$x = Bx + z, \quad \text{for } B = -D^{-1}E \;\;\&\;\; z = D^{-1}b$$

This suggests the iteration $x_{i+1} = Bx_i + z$, where the error is $e_i = x_i - x$.

Now, at the optimal point $x = Bx + z$; hence, subtracting this from the iteration,

$$e_{i+1} = x_{i+1} - x = Bx_i + z - (Bx + z) = Be_i$$

(Splitting $A$ into $D$ and $E$ is an arbitrary choice, made with the hope of getting a $B$ that has a smaller spectral radius.)

Now, splitting $e_0$ into a linear combination of the respective eigenvectors of $B$, we get corresponding eigenvalues which will eventually determine the rate of convergence of the error $e_i$: each eigenvector component is scaled by its eigenvalue at every iteration, so the lesser the eigenvalue magnitudes, the faster the convergence.
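The Jacobi split and iteration above can be sketched directly; the system below is an illustrative assumption:

```python
import numpy as np

# Illustrative SPD system (an assumption, not from the notes)
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])

D = np.diag(np.diag(A))            # diagonal part of A
E = A - D                          # off-diagonal part
B = -np.linalg.inv(D) @ E          # iteration matrix B = -D^{-1} E
z = np.linalg.inv(D) @ b           # z = D^{-1} b

x = np.zeros(2)
for _ in range(100):
    x = B @ x + z                  # Jacobi iteration x_{i+1} = B x_i + z

rho = max(abs(np.linalg.eigvals(B)))   # spectral radius governs convergence
```

Since the spectral radius of `B` is below 1 for this system, the iterate `x` converges to the solution of $Ax = b$.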

### Convergence

**Steepest descent:** if the error $e_i$ happens to be an eigenvector of $A$, the residual $r_i = -Ae_i$ points directly at the solution, and the optimal point is reached in one attempt. (The same happens when all eigenvalues of $A$ are equal.)
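This one-step behavior can be checked numerically; the system below is an illustrative assumption:

```python
import numpy as np

# Illustrative SPD system (an assumption, not from the notes)
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])
b = np.array([2.0, -8.0])
x_star = np.linalg.solve(A, b)

_, V = np.linalg.eigh(A)
x0 = x_star + V[:, 0]                # start so the error e_0 is an eigenvector of A

r = b - A @ x0                       # r = -A e_0 = -lambda e_0: points straight at x_star
alpha = (r @ r) / (r @ (A @ r))      # exact line search; here alpha = 1 / lambda
x1 = x0 + alpha * r                  # a single steepest-descent step
```

A single step lands exactly on `x_star`, since the line search removes the whole (one-component) error at once.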

### REVIEW

So, the Steepest Descent algorithm is "rougher": each iteration eliminates only the error component along the current search direction, which is not the most efficient approach. The Jacobi method, by contrast, is "smoother", because every eigenvector component of the error is reduced in every iteration.