
Singular Value Decomposition (SVD)

SVD and the Pseudoinverse

We are now in a position to investigate SVD mechanics in analogy to eigenvalue/eigenvector mechanics. A similar process of finding singular values (eigenvalues) and the corresponding singular vectors (eigenvectors) yields a more general and flexible factorization of matrix A, but the notion of expanding vectors on an eigenbasis remains intact. The terms ‘singular vector’ and ‘eigenvector’ will be used interchangeably.

Singular value decomposition of matrix A can be written as

A = U W V^T

where

1. U – The columns of U are the eigenvectors of AA^T. U is an m x m matrix containing an orthonormal basis of vectors for both the column space and the left null space of A. For orbit correction, the orbit vector will be expanded in terms of the basis vectors in U. In linear algebra, U contains the left singular vectors of A.

2. W – The ‘singular values’ or ‘principal gains’ of A lie on the diagonal of W and are the square roots of the eigenvalues of both AA^T and A^T A; that is, the eigenvectors in U and the eigenvectors in V share the same eigenvalues!

3. V – The rows of V^T (columns of V) are the eigenvectors of A^T A. V is an n x n matrix containing an orthonormal basis of vectors for both the row space and the null space of A. The vector of corrector magnet strengths will be expanded in terms of the bases in V. V contains the right singular vectors of A.
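A quick MATLAB check of the eigenvalue relation in item 2, using a random test matrix (not one of the lecture scripts):

A  = randn(5,3);
w  = svd(A);                        % singular values, descending
ev = sort(eig(A'*A), 'descend');    % eigenvalues of A'A
norm(w - sqrt(ev))                  % ~1e-15: singular values are sqrt(eigenvalues)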

For our applications, A^T A and AA^T are symmetric, real, positive-definite matrices so that

all singular values are real positive numbers. Solving for the two different eigenspaces (U

and V) corresponds to finding one diagonalizing transformation in the domain (V,

correctors) and another diagonalizing transformation in the range (U, orbit) such that the

original matrix becomes diagonal. In the square-matrix eigenvalue/eigenvector problem,

only one eigenbasis was required to diagonalize the matrix.

WHY IS SINGULAR VALUE DECOMPOSITION USEFUL IN ACCELERATOR PHYSICS?

As in many problems in mathematical physics, eigenvectors are used to form orthogonal

basis sets that can be used to expand the function of interest. Similar to expressing a

function in a Fourier series or expanding a wavefunction in quantum mechanics in terms

of energy eigenstates, it is useful to expand the beam orbit on a set of basis vectors for orbit

control.

The SVD factorization of a matrix A generates a set of eigenvectors for both the

correctors and the orbit. There is a 1:1 correspondence between the i-th eigenvector in V, the i-th singular value in W, and the i-th eigenvector in U. The ‘singular values’ (eigenvalues)

scale eigenvectors as they are transformed from the corrector eigenspace to the orbit

eigenspace (or vice-versa). The process is analogous to square-matrix eigenvector

mechanics but can be applied to non-square matrices. What’s more, SVD generates the

full set of four fundamental subspaces of matrix A.

INPUT/OUTPUT VIEWPOINT

One way to look at SVD is that the vi are the input vectors of A and ui are the output

vectors of A. The linear mapping y=Ax can be decomposed as

y = Ax = U W V^T x.

The action of the matrix goes like this:

1. compute coefficients of x along the input directions v1…vr: V^T x resolves the input vector x into the orthogonal basis of input vectors vi.

2. scale the vector coefficients by the singular values wi on the diagonal of W.

3. multiply by U: y = Σi ai ui (with ai = wi vi^T x) is a linear superposition of the singular vectors ui.

The difference from the eigenvalue decomposition for a symmetric matrix A is that the

input and output directions are different. Since the SVD returns the singular

value/eigenvector sets in descending order of the singular values,

v1 is the most sensitive (highest gain) input direction

u1 is the most sensitive (highest gain) output direction

Av1 = w1 u1

SVD gives a clear picture of the gain as a function of input/output directions.

Example: Consider a 4 x 4 matrix A with singular values W = diag(12, 10, 0.1, 0.05).

The input components along directions v1 and v2 are amplified by about a factor

of 10 and come out mostly along the plane spanned by u1 and u2 .

The input components along directions v3 and v4 are attenuated by ~10.

For some applications, you might say A is effectively rank 2!
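A minimal MATLAB sketch of this example (the bases here are random placeholders, not from any particular machine):

% Build a 4 x 4 matrix with prescribed singular values and check the
% gain along each input direction vi.
rng(0);                               % reproducible random bases
[U,~] = qr(randn(4));                 % random orthonormal output basis
[V,~] = qr(randn(4));                 % random orthonormal input basis
W = diag([12 10 0.1 0.05]);           % prescribed singular values
A = U*W*V';
for i = 1:4
    fprintf('gain along v%d: %g\n', i, norm(A*V(:,i)));   % equals W(i,i)
end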

EIGENVECTORS OF AA^T AND A^T A

Let’s look more closely at the basis vectors in U and V. For U the claim was that the columns are the eigenvectors of AA^T. Since AA^T is real symmetric, U is an orthonormal set with U^T = U^-1 and U^T U = I. Substituting the SVD of A into the outer product AA^T:

AA^T = (U W V^T)(U W V^T)^T = (U W V^T)(V W^T U^T) = U W^2 U^T

multiplying on the right by U,

AA^T U = U W^2

Hence, U contains the eigenvectors of AA^T. The eigenvalues are the diagonal elements of W^2 (the squared singular values). Similarly, it can be shown that V contains the eigenvectors of A^T A.

Another way of looking at the same thing:

Let v1…vr be a basis for the row space (correctors) and u1…ur be a basis for the column space (orbits). By definition,

A vi = wi ui

If the vi are the eigenvectors of A^T A,

A^T A vi = wi^2 vi

Pre-multiplying by A,

AA^T (A vi) = wi^2 (A vi)

so ui = A vi / wi is an eigenvector of AA^T.

MATLAB Demonstration of SVD – Forward multiplication

>>edit SVD_1
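The demo script itself isn’t reproduced here; a minimal stand-in for the forward multiplication (random test matrix) is:

% Verify A*vi = wi*ui column by column.
A = randn(5,3);
[U,W,V] = svd(A);
for i = 1:3
    norm(A*V(:,i) - W(i,i)*U(:,i))    % ~1e-15 for each i
end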

SUBSPACES OF A

The SVD factorization of an m x n matrix A with rank r is A = U W V^T, where W is a quasi-diagonal matrix with the singular values on the diagonal:

W = [ diag(w1, …, wr)  0 ]
    [        0         0 ]

The sparse matrix W also arranges the singular values in descending order:

w1 ≥ w2 ≥ … ≥ wr > 0

The columns of U and V are

U = [u1 u2 …um] left singular vectors

V = [v1 v2 …vn] right singular vectors

The fundamental subspaces of A are spanned by

Range(A) = span[u1 u2 … ur] basis to expand orbits (column space)

Range(A^T) = span[v1 v2 … vr] basis to expand correctors (row space)

Nullspace(A) = span[vr+1 vr+2 … vn] corrector patterns that do not perturb the orbit

Nullspace(A^T) = span[ur+1 ur+2 … um] orbit errors that no corrector pattern can produce

In its full glory, the SVD of a well-behaved least-squares problem looks like

A = [ u1 … ur | ur+1 … um ] [ diag(w1, …, wr)  0 ] [ v1 … vr | vr+1 … vn ]^T
                            [        0         0 ]

where the vertical bars separate the range-space columns from the null-space columns.

MATLAB Demonstration of SVD – Subspaces

>>edit SVD_2
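The demo script isn’t reproduced here; a sketch of pulling the four subspaces out of the factorization (random rank-deficient test matrix):

% Extract bases for the four fundamental subspaces from the SVD.
m = 6; n = 4; r = 2;
A = randn(m,r) * randn(r,n);      % random m x n matrix of rank r
[U,W,V] = svd(A);
colspace  = U(:, 1:r);            % Range(A): basis to expand orbits
leftnull  = U(:, r+1:m);          % Nullspace(A'): unreachable orbit errors
rowspace  = V(:, 1:r);            % Range(A'): basis to expand correctors
nullspace = V(:, r+1:n);          % Nullspace(A): corrector patterns that
norm(A * nullspace)               %   do not perturb the orbit; ~1e-15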

SINGULAR VALUE DECOMPOSITION – FORWARD SOLUTION

Use SVD to decompose the square response matrix: R = U W V^T (m = n).

As a demonstration of SVD matrix mechanics, follow through a simple calculation.

x = (U W V^T) θ

x = U W [ v1^T θ, …, vn^T θ ]^T      (project θ into V-space)

x = U [ w1 v1^T θ, …, wn vn^T θ ]^T      (W diagonal)

x = [ u1 | … | un ] [ w1 v1^T θ, …, wn vn^T θ ]^T      (multiply by ‘gains’ wi)

x = Σi wi (vi^T θ) ui      (expansion by eigenvectors)

The final equation expresses the orbit as a linear combination of orbit eigenvectors ui.

The coefficients of the expansion are the projections of the corrector vector θ onto the corrector eigenvectors vi, weighted by the singular values wi.
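A numerical check of this expansion, with random stand-ins for the response matrix and corrector vector:

% Verify x = sum_i wi*(vi'*theta)*ui equals R*theta for a square R.
n = 6;
R = randn(n);                  % stand-in response matrix
theta = randn(n,1);            % stand-in corrector vector
[U,W,V] = svd(R);
x = zeros(n,1);
for i = 1:n
    x = x + W(i,i) * (V(:,i)'*theta) * U(:,i);
end
norm(x - R*theta)              % ~1e-14: the expansion reproduces R*theta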

RESPONSE MATRIX EXAMPLE: INPUT IS AN EIGENVECTOR Vi

Suppose the corrector column vector θ is equal to corrector eigenvector vi. In this case,

x = (U W V^T) vi

x = U W [ v1^T vi, …, vn^T vi ]^T = U W [ 0, …, 0, 1, 0, …, 0 ]^T      (vj^T vi = δij: a 1 in the i-th slot)

x = U [ 0, …, 0, wi, 0, …, 0 ]^T

x = wi ui


In other words, corrector eigenvector vi produces orbit eigenvector ui scaled by the singular value wi.

An important point can be made here. Remember that we called the v-vectors the input vector basis and the u-vectors the output vector basis. If we ‘zero out’ small singular values in the w-matrix, then projections of the input vector (θ) onto the corresponding v-vectors will not pass through. In this sense, we are creating a ‘low-pass filter’ on the transformation from θ-space to x-space. This will be even more important in the next section (pseudoinverse), where we have the opportunity to low-pass the (noisy) measured orbit.
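A sketch of this forward low-pass filter; the 10% cutoff is an assumption for illustration:

% Zero out small singular values to low-pass filter the map
% from theta-space to x-space.
R = randn(6);                  % stand-in response matrix
theta = randn(6,1);            % stand-in corrector vector
[U,W,V] = svd(R);
w = diag(W);
w(w < 0.1*max(w)) = 0;         % assumed cutoff: drop gains below 10% of w1
x = U * diag(w) * V' * theta;  % components of theta along the dropped
                               %   vi no longer pass through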

MATLAB Demonstration of SVD – Vector expansion on a SVD eigenbasis

>>edit SVD_3

THE PSEUDOINVERSE

If a matrix A has the singular value decomposition

A = U W V^T

then the pseudoinverse, or Moore-Penrose inverse, of A is

A^+ = V W^-1 U^T

If A is ‘tall’ (m > n) and has full rank,

A^+ = (A^T A)^-1 A^T

(it gives the least-squares solution xlsq = A^+ b).

If A is ‘short’ (m < n) and has full rank,

A^+ = A^T (A A^T)^-1

(it gives the minimum-norm solution).

MATLAB Demonstration of SVD – Pseudoinverse

>>edit SVD_4
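A numerical check that the SVD construction matches both pinv and the tall-matrix formula (random test matrix):

% Build the pseudoinverse from the SVD and compare against pinv.
A = randn(8,4);                       % stand-in 'tall' full-rank matrix
[U,W,V] = svd(A);
w = diag(W);                          % singular values (all > 0 here)
Winv = zeros(size(A'));               % n x m pseudo-inverted W
Winv(1:numel(w),1:numel(w)) = diag(1./w);
Aplus = V * Winv * U';
norm(Aplus - pinv(A))                 % ~1e-15: matches Moore-Penrose inverse
norm(Aplus - (A'*A)\A')               % ~1e-15: matches the least-squares formula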

SINGULAR VALUE DECOMPOSITION – BACKWARD SOLUTION (INVERSE)

Again the response matrix R is decomposed using SVD:

R^-1 = V W^-1 U^T

where W^-1 has the inverse elements of W along the diagonal. If an element of W is zero, the inverse is set to zero.

We now repeat the matrix mechanics outlined above for the inverse problem:

θ = (V W^-1 U^T) x

θ = V W^-1 [ u1^T x, …, un^T x ]^T      (project x into U-space)

θ = V [ w1^-1 u1^T x, …, wn^-1 un^T x ]^T      (W diagonal)

θ = [ v1 | … | vn ] [ w1^-1 u1^T x, …, wn^-1 un^T x ]^T      (multiply by inverse gains wi^-1)

θ = Σi wi^-1 (ui^T x) vi      (expansion by eigenvectors)

The final equation expresses the corrector set θ as a linear combination of corrector eigenvectors vi. The coefficients of the expansion are the projections of the orbit vector x onto the orbit eigenvectors ui, weighted by the inverse singular values wi^-1. The process is analogous to the forward problem outlined above.
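A numerical check of the inverse expansion, with random stand-ins again:

% Recover theta from an orbit x via theta = sum_i (ui'*x)/wi * vi.
n = 6;
R = randn(n);                  % stand-in square response matrix
theta_true = randn(n,1);
x = R * theta_true;            % 'measured' orbit
[U,W,V] = svd(R);
theta = zeros(n,1);
for i = 1:n
    theta = theta + (U(:,i)'*x) / W(i,i) * V(:,i);
end
norm(theta - theta_true)       % ~1e-13: the expansion inverts the forward map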

ORBIT CORRECTION EXAMPLE (m=n)

As a simple example, suppose the orbit vector x is equal to orbit eigenvector ui. In this

case,

θ = (V W^-1 U^T) ui

θ = V W^-1 [ u1^T ui, …, un^T ui ]^T = V W^-1 [ 0, …, 0, 1, 0, …, 0 ]^T      (uj^T ui = δij: a 1 in the i-th slot)

θ = V [ 0, …, 0, wi^-1, 0, …, 0 ]^T

θ = wi^-1 vi

In other words, orbit eigenvector ui produces corrector eigenvector vi scaled by the inverse singular value wi^-1.

Again we point out that by zeroing out small singular values (large wi^-1) we can low-pass filter the input vector x. This gives the opportunity to filter out noise in the measurement.
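One way to do this in MATLAB is pinv’s tolerance argument, which zeros the inverse of any singular value below the cutoff (the 10% threshold here is an assumption for illustration):

% Low-pass filter the inversion by truncating small singular values.
R = randn(6);                  % stand-in response matrix
x = randn(6,1);                % stand-in measured (noisy) orbit
tol = 0.1 * max(svd(R));       % assumed cutoff at 10% of the largest gain
theta = pinv(R, tol) * x;      % noise along the small-wi directions is
                               %   filtered out of the computed correction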

MATLAB Demonstration of SVD – Backsubstitution

>>edit SVD_5
