Spectral analysis of nonlinear flows
Assume one has a linear dynamical system $x_{k+1} = A x_k$ with $A \in \mathbb{R}^{n \times n}$, where $n$ is so large that we cannot compute the eigenvalues of $A$ directly.
To estimate these eigenvalues, a standard approach is a Krylov method. One starts with a random vector $b$ and computes iterates of it until one has a collection of $m$ vectors spanning the Krylov subspace $\mathcal{K}_m(A, b) = \operatorname{span}\{b, Ab, \dots, A^{m-1}b\}$. To find approximate eigenvalues and eigenvectors, one projects $A$ onto this $m$-dimensional subspace and computes the eigenvalues and eigenvectors of the resulting low-rank operator. To project means to represent the action of $A$ by linear combinations of the (linearly independent) Krylov basis vectors.
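As a sketch of this projection step (the toy diagonal operator, the sizes $n$ and $m$, and all variable names are illustrative assumptions, not from the source):

```python
# Minimal Krylov + Rayleigh-Ritz sketch: estimate the dominant eigenvalue
# of a "large" matrix A by projecting it onto an m-dimensional Krylov subspace.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 20
# toy operator with one well-separated dominant eigenvalue (10.0)
A = np.diag(np.concatenate(([10.0], np.linspace(1.0, 2.0, n - 1))))
b = rng.standard_normal(n)

# Krylov matrix [b, Ab, ..., A^{m-1} b]; orthonormalize its columns
Kry = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(m)])
Q, _ = np.linalg.qr(Kry)

# project A onto the m-dimensional subspace and diagonalize the small matrix
H = Q.T @ A @ Q
ritz = np.sort(np.linalg.eigvals(H).real)
```

The largest Ritz value approximates the dominant eigenvalue (10.0 here) without ever diagonalizing the full $n \times n$ matrix.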
The Arnoldi algorithm is a type of Krylov method in which one orthonormalizes the iterates at each step; it therefore requires computing the action of $A$ on arbitrary vectors. Below is one variant following (Saad 1980) and (Schmid 2008). The logic: as the number of snapshots increases, the data sequence captures the dominant features of the underlying physical process, so it is reasonable to assume that, beyond a critical number of snapshots, adding further snapshots will not improve the spanned subspace. When that limit is reached, one can express the vector $x_m$ as a linear combination of the previous, linearly independent vectors: $x_m = c_0 x_0 + c_1 x_1 + \dots + c_{m-1} x_{m-1}$.
Expanded to matrix form this reads $AK = KC$, where $K = [x_0 \; x_1 \; \cdots \; x_{m-1}]$ and $C$ holds the coefficients $c_j$ in its last column.
The matrix $C$ is of companion type. The eigenvalues of $C$ then approximate some of the eigenvalues of $A$. (When $K$ is square and invertible, $C = K^{-1} A K$ and the two matrices are similar, so they have the same eigenvalues.)
It is easy to prove that if $a$ is an eigenvector of $C$ with eigenvalue $\lambda$, then $Ka$ is an eigenvector of $A$ with the same eigenvalue: $A K a = K C a = \lambda K a$.
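This identity is easy to verify numerically. The sketch below (random illustrative data, taking $m = n$ so the linear dependence of the iterates is exact) checks both $AK = KC$ and the eigenvector claim:

```python
# Verify A K = K C and that K a is an eigenvector of A when C a = lambda a.
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
x = [rng.standard_normal(n)]
for _ in range(n):
    x.append(A @ x[-1])                 # Krylov iterates x_0, ..., x_n
K = np.column_stack(x[:n])              # K = [x_0, ..., x_{m-1}] with m = n
c = np.linalg.solve(K, x[n])            # exact dependence: x_m = K c

C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)              # companion matrix: sub-diagonal of ones
C[:, -1] = c                            # coefficients c in the last column

lam, a = np.linalg.eig(C)
v = K @ a[:, 0]                         # candidate eigenvector of A
```

Here `np.allclose(A @ K, K @ C)` and `np.allclose(A @ v, lam[0] * v)` both hold.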
Now comes the real situation, where the $m$-th iterate is not a linear combination of the previous iterates, so a residual term is added to the equation: $x_m = \sum_{j=0}^{m-1} c_j x_j + r = Kc + r$.
The residual should be minimized by carefully choosing $c$ so that $r \perp \operatorname{span}\{x_0, \dots, x_{m-1}\}$.
It is clearly stated in (Budišić 2012) that $c$ is chosen to minimize the norm of $r$, corresponding to choosing the projection operator so that the new observables are the least-squares approximations to the original dynamical system. This implies that if the norm of $r$ is sufficiently small, then in the following equation $C$ can be thought of as an approximation of the action of the Koopman operator on the associated finite-dimensional space $\operatorname{span}\{x_0, \dots, x_{m-1}\}$. (Raak 2016)
In this case, we can expand the relation to $AK = KC + re^{T}$, where $e = (0, \dots, 0, 1)^T$ is the last unit vector.
Ritz values - the eigenvalues $\lambda_j$ of $C$, which are approximations to the eigenvalues of $A$
Ritz vectors - the corresponding approximate eigenvectors of $A$, given by $v_j = K a_j$, where $C a_j = \lambda_j a_j$
In the literature on DMD, the above procedure is sometimes called the Arnoldi algorithm, which, in our opinion, is a misnomer. The proper name should be the Krylov-Rayleigh-Ritz procedure, to point out that the procedure described here is just Rayleigh-Ritz in a subspace defined by the Krylov basis. The Arnoldi algorithm proper applies Gram-Schmidt orthogonalization (a QR factorization) implicitly to the sequence of iterates. A true Arnoldi scheme in the DMD framework was proposed only recently in "A parallel Dynamic Mode Decomposition algorithm using modified Full Orthogonalization Arnoldi for large sequential snapshots". (Remark 7.11)
(Note: there are the following variants of the Arnoldi method, eventually leading from Koopman mode calculations to the degenerate version, dynamic modes. The full Arnoldi method reduces $A$ to an upper Hessenberg matrix, while the simplified Arnoldi variants work with the companion matrix discussed above.)
- Arnoldi. -> Hessenberg, by simple QR-decomposition of the data sequence $K$. One successively orthonormalizes the vectors of $K$, resulting in a decomposition of the form $AQ = QH$ with $Q^*Q = I$ and $H$ an upper Hessenberg matrix. The eigenvalues of $H$ approximate some of the eigenvalues of $A$. (Schmid 2010)
- Classical Arnoldi. -> Hessenberg, by a sequence of projections onto successive Krylov subspaces. This is more stable, but the matrix $A$ has to be available explicitly. (Schmid 2010)
- Variant Arnoldi. -> Companion. From the observation $x_m = Kc + r$, $c = K^{\dagger} x_m$ is one solution, where the dagger denotes the Moore-Penrose pseudoinverse. This solution is unique if and only if $K$ has linearly independent columns, and in that case we can use the pseudoinverse expansion $K^{\dagger} = (K^*K)^{-1}K^*$. This is analytically correct but ill-conditioned in practice, especially when applied to noisy experimental data. [2]
- Variant Arnoldi: QR DMD. -> Companion. Let $K = QR$ denote the economy-size QR-decomposition of the data sequence; then the last column of the companion matrix is $c = R^{-1} Q^* x_m$. However, in practice this can be so ill-conditioned that it is often not capable of extracting more than the first one or two dominant dynamic modes. (Schmid 2010)
- Variant Arnoldi: Standard DMD. -> Companion. Introduce the SVD of the data sequence and project onto a POD basis. There is no explicit orthogonalization against the full operator, so the algorithmic stability and convergence properties are weaker, but it allows 'model-free' application. (Schmid 2010)
- Variant Arnoldi: Prony-type method, or Hankel DMD. It uses time delays and Krylov subspaces in observable space. Its advantage is that it prevents the rank deficiency observed in the original Arnoldi-type algorithm. [1]
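Two of the companion-type variants above compute the same last column $c$ by different routes. A small sketch (random illustrative data) showing that the pseudoinverse solution and the economy-size QR solution agree when $K$ has full column rank:

```python
# Compare c = pinv(K) x_m (pseudoinverse variant) against
# c = R^{-1} Q^* x_m (QR DMD variant) on well-conditioned data.
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 8
K = rng.standard_normal((n, m))      # data sequence [x_0, ..., x_{m-1}]
xm = rng.standard_normal(n)          # the m-th snapshot

c_pinv = np.linalg.pinv(K) @ xm      # Moore-Penrose pseudoinverse solution
Q, R = np.linalg.qr(K)               # economy-size QR (K is tall: n > m)
c_qr = np.linalg.solve(R, Q.T @ xm)  # R^{-1} Q^T x_m
```

The two coefficient vectors coincide; the practical difference between the variants is conditioning on noisy data, not the exact-arithmetic answer.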
(In conclusion, we may take the algorithm excerpted from (Rowley 2009). However, the paper details nothing about how to obtain the proper constants $c_j$.)
Input: snapshots $x_0, x_1, \dots, x_m$
Output: empirical Ritz values $\lambda_j$, empirical Ritz vectors $v_j$
- Define $K = [x_0 \; x_1 \; \cdots \; x_{m-1}]$
- Find constants $c_j$ such that $r = x_m - \sum_{j=0}^{m-1} c_j x_j$, with $r \perp \operatorname{span}\{x_0, \dots, x_{m-1}\}$
- Define the companion matrix: $C = \begin{pmatrix} 0 & 0 & \cdots & 0 & c_0 \\ 1 & 0 & \cdots & 0 & c_1 \\ 0 & 1 & \cdots & 0 & c_2 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & c_{m-1} \end{pmatrix}$
- Find its eigenvalues and eigenvectors by writing $C = T^{-1} \Lambda T$, where $T$ is the Vandermonde matrix $T_{jk} = \lambda_j^{k}$
- The empirical Ritz values $\lambda_j$ are the diagonal entries of $\Lambda$
- The empirical Ritz vectors $v_j$ are the columns of $V = K T^{-1}$
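The steps above can be sketched as follows (a minimal sketch; a dense eigensolver stands in for the Vandermonde factorization, which yields the same Ritz pairs up to scaling of the vectors, and all names are illustrative):

```python
# Companion-matrix DMD in the style of (Rowley 2009).
import numpy as np

def empirical_ritz(X):
    """Empirical Ritz values/vectors from snapshots X = [x_0, ..., x_m]."""
    K, xm = X[:, :-1], X[:, -1]                  # K = [x_0, ..., x_{m-1}]
    c, *_ = np.linalg.lstsq(K, xm, rcond=None)   # least squares: r orthogonal to span(K)
    m = K.shape[1]
    C = np.zeros((m, m))
    C[1:, :-1] = np.eye(m - 1)                   # companion matrix
    C[:, -1] = c
    lam, T = np.linalg.eig(C)                    # empirical Ritz values
    V = K @ T                                    # empirical Ritz vectors (columns)
    return lam, V

# usage on snapshots of a known linear system: Ritz values recover eig(A)
A = np.diag([0.9, 0.5, -0.3])
x0 = np.ones(3)
X = np.column_stack([np.linalg.matrix_power(A, k) @ x0 for k in range(4)])
lam, V = empirical_ritz(X)
print(np.sort(lam.real))                         # ≈ [-0.3, 0.5, 0.9]
```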
But how should one understand these values?
If the linear relationship holds exactly ($r = 0$), these empirical Ritz values are the usual Ritz values of $A$ after $m$ steps of the Arnoldi method, and the Ritz values and vectors are good approximations of the eigenvalues and eigenvectors of $A$.
If it does not hold, the procedure produces nothing but approximations of the Koopman modes and associated eigenvalues. Why is that?
In step 6 we defined $v_j$ as the columns of $V = K T^{-1}$, and through a basic transformation we get $K = VT$, i.e. $x_k = \sum_{j=1}^{m} \lambda_j^{k} v_j$ for $k = 0, \dots, m-1$.
Remember that the Koopman mode expansion reads $x_k = \sum_{j} \lambda_j^{k} \varphi_j(x_0) v_j$; comparing with the equations above, we see that the empirical Ritz values and vectors play exactly the roles of the Koopman eigenvalues and modes (with the constant $\varphi_j(x_0)$ absorbed into $v_j$).
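A quick numerical check of this reconstruction property (a sketch with illustrative sizes; it assumes the empirical Ritz values are distinct so the Vandermonde matrix is invertible, which holds generically):

```python
# Check that x_k = sum_j lam_j**k * v_j reproduces every snapshot exactly,
# even when the linear dependence x_m = K c does NOT hold (generic data).
import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 4
X = rng.standard_normal((n, m + 1))          # generic snapshots x_0, ..., x_m
K, xm = X[:, :-1], X[:, -1]
c = np.linalg.lstsq(K, xm, rcond=None)[0]    # residual orthogonal to span(K)
C = np.zeros((m, m))
C[1:, :-1] = np.eye(m - 1)
C[:, -1] = c
lam = np.linalg.eigvals(C)                   # empirical Ritz values
T = np.vander(lam, m, increasing=True)       # Vandermonde: T[j, k] = lam_j**k
V = K @ np.linalg.inv(T)                     # empirical Ritz vectors as columns
# each snapshot is reconstructed: x_k = V @ lam**k
ok = all(np.allclose(K[:, k], V @ lam**k) for k in range(m))
```

So the first $m$ snapshots are reconstructed exactly; only the prediction of $x_m$ carries the residual.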
The spatial & temporal attributes
space << time, $n \ll m$. The rank of $K$ is at most $n$; that is, rank deficiency is inevitable when computing $K^{\dagger}$, which implies non-uniqueness of $c$. (It is unique if and only if $K$ has linearly independent columns, i.e., is of full column rank.)
space >> time, $n \gg m$. Especially in fluid mechanics, $K$ may have a row dimension that is orders of magnitude larger than its column dimension. In this case $m$ is much smaller than $n$, so the DMD method by Schmid is much more efficient in memory cost. (This is another kind of rank deficiency, where one truncates the space dimensions more easily with the SVD.) [2] (The DMD is more suitable for local, short-time dynamics; Chen calls it the "snapshots approach".) The tacit assumption in a DMD analysis is that $m \ll n$: the number of snapshots $m$ is much smaller than the ambient space dimension $n$. In other words, the action of $A$ is empirically known only on a small-dimensional subspace of its domain $\mathbb{R}^n$. [1, p. 163]
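A sketch of the SVD-based (Schmid-style) route for this tall case $n \gg m$: project the action of $A$ onto the POD basis from the SVD and diagonalize the small projected matrix. The toy dynamics and all names are assumptions:

```python
# SVD-based DMD sketch for n >> m: eigenvalues come from an r x r matrix,
# never from an n x n one. Toy dynamics with known eigenvalue 0.95.
import numpy as np

rng = np.random.default_rng(4)
n, m = 500, 10
A = 0.95 * np.eye(n)                       # toy dynamics (rank-1 snapshot set)
x0 = rng.standard_normal(n)
X = np.column_stack([np.linalg.matrix_power(A, k) @ x0 for k in range(m + 1)])
X1, X2 = X[:, :-1], X[:, 1:]               # [x_0..x_{m-1}] and [x_1..x_m]

U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = int((s > 1e-10 * s[0]).sum())          # truncate rank-deficient directions
U, s, Vh = U[:, :r], s[:r], Vh[:r]
Atilde = U.T @ X2 @ Vh.T @ np.diag(1 / s)  # r x r projected operator
mu = np.linalg.eigvals(Atilde)             # DMD eigenvalues, here ≈ [0.95]
```

The SVD step both compresses the space dimension and handles the rank deficiency of the snapshot matrix.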
Examples (state dimension × number of snapshots):
- 28 × 241: Arnoldi for the KMD of conditioned room air temperature (Hiramatsu 2020)
- 20 × 24~120 and 141 × 48~132: Arnoldi vs. DMD, wind power farm (Raak 2016)
- 7 × 24~240: Arnoldi vs. DMD, power system (Raak 2016)
- 1024^2 × 150~250: DMD on high-speed flame images (Ghosal 2016)
- 256 × 201 × 144~251: Arnoldi-like DMD on horseshoe turbulence (Rowley 2009)
Tips
The eigenvalues of the $n \times n$ off-diagonal (cyclic permutation) matrix, with ones on the sub-diagonal and a single one in the top-right corner, are the $n$-th roots of unity $\lambda_k = e^{2\pi i k / n}$, because solving its characteristic polynomial always gives $\lambda^n = 1$.
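A quick check of this tip (illustrative $n = 6$):

```python
# Eigenvalues of the cyclic shift matrix are the n-th roots of unity.
import numpy as np

n = 6
P = np.zeros((n, n))
P[1:, :-1] = np.eye(n - 1)   # ones on the sub-diagonal
P[0, -1] = 1.0               # wrap-around entry closing the cycle
lam = np.linalg.eigvals(P)
# every eigenvalue satisfies lam**n == 1 and |lam| == 1
```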
SVD ![[SVD.jpg | 500 ]]
```matlab
[U,S,V] = svd(A)        % full SVD
[U,S,V] = svd(A,"econ") % thin/economy-size SVD
```
- Full SVD
- Thin/economy-sized SVD (remove columns of U not corresponding to rows of V*)
- Compact SVD (truncate to the nonzero singular values)
- Truncated SVD (keep only the $r$ largest singular values)
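The variants can be seen from the output shapes (an illustrative $4 \times 3$, rank-2 example; numpy shown here, mirroring the MATLAB calls above):

```python
# Full vs thin SVD shapes, and a rank-1 truncated SVD.
import numpy as np

A = np.arange(12.0).reshape(4, 3)                    # tall 4 x 3 example, rank 2
U, S, Vh = np.linalg.svd(A)                          # full SVD: U is 4 x 4
Ue, Se, Vhe = np.linalg.svd(A, full_matrices=False)  # thin/economy: Ue is 4 x 3
# the thin SVD already reconstructs A exactly
recon_ok = np.allclose(Ue @ np.diag(Se) @ Vhe, A)
# truncated SVD: keep only the r largest singular values (best rank-r approx.)
r = 1
Ar = Ue[:, :r] @ np.diag(Se[:r]) @ Vhe[:r]
err = np.linalg.norm(A - Ar)   # Frobenius error equals the first dropped singular value here
```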
QR decomposition. $Q$ - an orthogonal ($Q^T Q = I$) or unitary ($Q^* Q = I$) matrix; $R$ - an upper triangular matrix. The decomposition follows the Gram-Schmidt process. If we presume $A = [a_1 \; \cdots \; a_n] = QR$, there will be $a_j = \sum_{i=1}^{j} r_{ij} q_i$.
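A minimal classical Gram-Schmidt QR sketch (for illustration only; in practice one uses a stable Householder-based routine such as `numpy.linalg.qr`):

```python
# Classical Gram-Schmidt QR: orthonormalize columns, record coefficients in R.
import numpy as np

def gram_schmidt_qr(A):
    n, m = A.shape
    Q = np.zeros((n, m))
    R = np.zeros((m, m))
    for j in range(m):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # projection coefficient r_ij
            v -= R[i, j] * Q[:, i]        # remove component along q_i
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]             # normalize (assumes full column rank)
    return Q, R

A = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q, R = gram_schmidt_qr(A)
# Q has orthonormal columns, R is upper triangular, and Q @ R == A
```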
Reference