Uniformization and computing matrix exponentials
Last Updated: Feb 15, 2024
Suppose Q is a rate matrix, that is, Q has nonnegative off-diagonal entries and \(q_{ii} = -\sum_{j\neq i} q_{ij}\). It has long been observed that computing the matrix exponential \(e^{tQ}\) can be reduced to computing the matrix exponential of a stochastic matrix: write \(Q = -\gamma I + \gamma P\) with \(\gamma = \max_{i} (-q_{ii})\) and \(P = I + Q/\gamma\), and use \(e^{tQ} = e^{-t \gamma I} e^{t \gamma P} = e^{-t \gamma} e^{t \gamma P}\). This technique is known as uniformization. Since the Taylor series for \(e^{t \gamma P}\) contains only nonnegative terms, uniformization avoids numerical error arising from cancellation. It is also worth observing that this choice of γ attains the minimum of \(\min_{\mu \in \mathbb{R}} \|Q + \mu I\|_{\infty}\), halving the \(\|\cdot\|_\infty\) norm: \(\|Q\|_\infty = 2\gamma\), while \(\|Q + \gamma I\|_\infty = \|\gamma P\|_\infty = \gamma\).
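Below is a minimal NumPy sketch of the idea (the function name, tolerance handling, and termination rule are illustrative choices, not taken from any of the references): it forms \(P = I + Q/\gamma\) and accumulates the Poisson-weighted series \(e^{tQ} = \sum_k e^{-t\gamma}\frac{(t\gamma)^k}{k!} P^k\).

```python
import numpy as np


def expm_uniformization(Q, t, tol=1e-12, max_terms=10_000):
    """Approximate e^{tQ} for a rate matrix Q via uniformization (a sketch).

    With gamma = max_i(-q_ii) and P = I + Q/gamma (a stochastic matrix),
    e^{tQ} = e^{-t*gamma} e^{t*gamma*P} = sum_k w_k P^k, where the weights
    w_k = e^{-t*gamma} (t*gamma)^k / k! are Poisson probabilities. Every
    term is nonnegative, so no cancellation occurs in the accumulation.
    """
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    gamma = -Q.diagonal().min()
    if gamma == 0.0:               # Q is the zero matrix
        return np.eye(n)
    P = np.eye(n) + Q / gamma      # nonnegative entries, rows sum to 1

    tg = t * gamma
    weight = np.exp(-tg)           # w_0; underflows for very large t*gamma,
                                   # where one would step in time instead
    term = np.eye(n)               # P^0
    result = weight * term
    accumulated = weight           # Poisson probability mass used so far
    # Since ||P^k||_inf = 1 and the weights sum to 1, the truncation error
    # in the inf-norm is bounded by the neglected Poisson tail mass.
    k = 0
    while 1.0 - accumulated > tol and k < max_terms:
        k += 1
        weight *= tg / k
        term = term @ P
        result += weight * term
        accumulated += weight
    return result
```

For small dense examples the output can be sanity-checked against scipy.linalg.expm(t * Q). The papers listed under Rate matrices below treat the case that matters in practice: computing the action of \(e^{tQ}\) on a vector for large sparse Q, splitting large \(t\gamma\) into steps.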
Rate matrices
- Melloy and Bennett, Computing the exponential of an intensity matrix
- Sidje, Burrage, and MacNamara, Inexact uniformization method for computing transient distributions of Markov chains
- Sidje and Stewart, A numerical study of large sparse matrix exponentials arising in Markov chains
MatExp
- Classic: Moler and Van Loan (1978), Nineteen dubious ways to compute the exponential of a matrix
- Defez et al., A new efficient and accurate spline algorithm for the matrix exponential computation
- Higham, Functions of Matrices, chapter 10 (also contains discussion of rate matrices).
- Blog post: Numerically computing the exponential function with polynomial approximations
Elementary Functions