Low-rank-modified Galerkin methods for the Lyapunov equation

    Kathryn Lund, Davide Palitta

ETNA - Electronic Transactions on Numerical Analysis, pp. 1-21, 2024/04/18

doi: 10.1553/etna_vol62s1
Abstract

Of all the possible projection methods for solving large-scale Lyapunov matrix equations, Galerkin approaches remain much more popular than minimal residual ones. This is mainly due to the different nature of the projected problems stemming from these two families of methods. While a Galerkin approach leads to the solution of a low-dimensional matrix equation per iteration, a matrix least-squares problem needs to be solved per iteration in a minimal residual setting. The significant computational cost of these least-squares problems has steered researchers towards Galerkin methods in spite of the appealing properties of minimal residual schemes. In this paper we introduce a framework that allows for modifying the Galerkin approach by low-rank, additive corrections to the projected matrix equation problem, with the two-fold goal of attaining monotonic convergence rates similar to those of minimal residual schemes while maintaining essentially the same computational cost as the original Galerkin method. We analyze the well-posedness of our framework and determine possible scenarios where we expect the residual norm attained by two low-rank-modified variants to behave similarly to the one computed by a minimal residual technique. A panel of diverse numerical examples shows the behavior and potential of our new approach.
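For readers unfamiliar with the baseline the paper modifies, the standard (unmodified) Galerkin approach projects the Lyapunov equation AX + XAᵀ + BBᵀ = 0 onto a Krylov subspace and solves a small Lyapunov equation there. The sketch below illustrates that baseline only, not the low-rank-modified variants introduced in the paper; the matrix sizes, the random stable test matrix, and the plain monomial Krylov basis are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable test problem A X + X A^T + B B^T = 0
# (sizes and construction are assumptions, not from the paper).
rng = np.random.default_rng(0)
n, k = 200, 10                                   # problem size, Krylov dimension
A = -np.eye(n) + 0.02 * rng.standard_normal((n, n))  # field of values in C^-
B = rng.standard_normal((n, 1))

# Orthonormal basis V of the block Krylov subspace K_k(A, B),
# built from the monomial basis and a QR factorization (a sketch;
# in practice one would use a block Arnoldi recurrence).
W = B.copy()
blocks = []
for _ in range(k):
    blocks.append(W)
    W = A @ W
V, _ = np.linalg.qr(np.hstack(blocks))

# Galerkin condition: impose V^T R V = 0, which yields the small
# projected Lyapunov equation H Y + Y H^T + (V^T B)(V^T B)^T = 0.
H = V.T @ A @ V
Bp = V.T @ B
Y = solve_continuous_lyapunov(H, -Bp @ Bp.T)

# Low-rank approximate solution and its true residual norm.
X = V @ Y @ V.T
R = A @ X + X @ A.T + B @ B.T
res = np.linalg.norm(R)
print(res)
```

The cheap step the abstract alludes to is the small k-by-k Lyapunov solve; a minimal residual method would instead face a matrix least-squares problem at this point, and the paper's contribution is a low-rank additive correction to the projected problem that mimics the minimal residual behavior at essentially the Galerkin cost.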

Keywords: Lyapunov equation, matrix equation, block Krylov subspace, model order reduction