
# Dynamically learning the parameters of a chaotic system using partial observations

Motivated by recent progress in data assimilation, we develop an algorithm to dynamically learn the parameters of a chaotic system from partial observations. Under reasonable assumptions, we supply a rigorous analytical proof that guarantees the convergence of this algorithm to the true parameter values when the system in question is the classic three-dimensional Lorenz system. Such a result appears to be the first of its kind for dynamical parameter estimation of nonlinear systems. Computationally, we demonstrate the efficacy of this algorithm on the Lorenz system by recovering any proper subset of the three non-dimensional parameters of the system, so long as a corresponding subset of the state is observable. We moreover probe the limitations of the algorithm by identifying dynamical regimes under which certain parameters cannot be effectively inferred having only observed certain state variables. In such cases, modifications to the algorithm are proposed that ultimately result in recovery of the parameter. Lastly, computational evidence is provided that supports the efficacy of the algorithm well beyond the hypotheses specified by the theorem, including in the presence of noisy observations, stochastic forcing, and the case where the observations are discrete and sparse in time.

Mathematics Subject Classification: Primary: 34D06, 34A55, 37C50; Secondary: 35B30, 60H10.
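The nudging-based scheme described in the abstract can be sketched in a few lines. The parameter update below, which balances the nudged $x$-equation once the assimilated copy has approximately synchronized, is an illustrative reconstruction rather than the paper's exact formula; the settings ($\mu_1 = 10{,}000$, $T_R = 5$, initial guess $\sigma_0 = \sigma + 10$) mirror those reported for Figure 3, and the threshold guarding the division plays the role of the threshold (3.1).

```python
def lorenz(x, y, z, sigma, rho, beta):
    """Right-hand side of the classic three-dimensional Lorenz system."""
    return sigma * (y - x), rho * x - y - x * z, x * y - beta * z

def recover_sigma(sigma=10.0, rho=28.0, beta=8.0 / 3.0, sigma0=20.0,
                  mu=1.0e4, T_R=5.0, dt=5.0e-5, t_final=30.0, delta=1.0):
    """Estimate sigma from continuous observations of x alone (illustrative)."""
    x, y, z = -8.0, 8.0, 27.0       # true state, started near the attractor
    xt, yt, zt = x, 0.0, 0.0        # assimilated copy: only x(0) is observed
    s = sigma0                      # running parameter estimate
    n_relax = int(round(T_R / dt))  # time-steps per relaxation period
    est_sum, est_cnt = 0.0, 0
    for n in range(1, int(round(t_final / dt)) + 1):
        # Accumulate pointwise estimates from the balance of the nudged
        # x-equation, sigma ~ s - mu (xt - x)/(yt - xt), skipping the first
        # relaxation period (so the copy can synchronize) and any instant
        # where the denominator falls below the threshold delta.
        if n > n_relax and abs(yt - xt) > delta:
            est_sum += s - mu * (xt - x) / (yt - xt)
            est_cnt += 1
        dx, dy, dz = lorenz(x, y, z, sigma, rho, beta)
        dxt, dyt, dzt = lorenz(xt, yt, zt, s, rho, beta)
        dxt += mu * (x - xt)        # nudge the copy toward the observed x
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xt, yt, zt = xt + dt * dxt, yt + dt * dyt, zt + dt * dzt
        if n % n_relax == 0 and est_cnt > 0:
            s = est_sum / est_cnt   # relax the estimate every T_R time units
            est_sum, est_cnt = 0.0, 0
    return s
```

With these settings the estimate typically settles near the true value within a few relaxation periods; forward-Euler time-stepping is used purely for brevity, not as a recommendation.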

Figure 4.  Two parameters are recovered simultaneously; the remaining parameter, when applicable, is held fixed at its true value ($\sigma = 10$, $\rho = 300$, $\beta = 8/3$). The initial parameter estimate is always set to 10 above the true value. The two relaxation parameters, corresponding to the two unknown parameters, are set to $10,000$, and the relaxation period is $T_R = 5$. Each simulation is run out to $t = 50$ time units.

Figure 6.  Parameter recovery with observations every 500 time-steps and either (A) stochastic forcing (of amplitude $\epsilon$) or (B) noisy observations (of amplitude $\eta$). The case where neither is present is included for comparison (C). $\sigma = 10$, $\widetilde{\sigma} = 0.8\sigma$, $\rho = 28$, $\widetilde{\rho} = 0.8\rho$, $\beta = 8/3$, $\widetilde{\beta} = 0.8\beta$, $\Delta t = 0.0001$, $\mu^{\mathrm{AOT}} = 1.8/\Delta t$, $\mu^{\mathrm{param}} = 1.8$.

Figure 7.  Parameter recovery with observations every 500 time-steps and both stochastic forcing (of amplitude $\epsilon$) and noisy observations (of amplitude $\eta$). The initial estimates and algorithm parameters are the same as in Figure 6.
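The robustness to observational noise reported in Figures 6 and 7 can be probed with a small modification of a nudging sketch: corrupt each observation of $x$ with Gaussian noise of amplitude $\eta$ before it enters the nudging and estimation terms. Observations here are taken continuously in time rather than every 500 steps, and the gains $\mu$, $\eta$, and $T_R$ are illustrative choices, not the values used in the paper; the parameter update is the same assumed balance of the nudged $x$-equation as above, not the paper's exact formula.

```python
import random

def lorenz(x, y, z, sigma, rho, beta):
    """Right-hand side of the classic three-dimensional Lorenz system."""
    return sigma * (y - x), rho * x - y - x * z, x * y - beta * z

def recover_sigma_noisy(sigma=10.0, rho=28.0, beta=8.0 / 3.0, sigma0=20.0,
                        mu=2.0e3, T_R=5.0, dt=5.0e-5, t_final=40.0,
                        delta=1.0, eta=0.05, seed=1):
    """Estimate sigma when the observed x carries Gaussian noise (illustrative)."""
    rng = random.Random(seed)
    x, y, z = -8.0, 8.0, 27.0       # true state, started near the attractor
    xt, yt, zt = x, 0.0, 0.0        # assimilated copy: only x is observed
    s = sigma0                      # running parameter estimate
    n_relax = int(round(T_R / dt))  # time-steps per relaxation period
    est_sum, est_cnt = 0.0, 0
    for n in range(1, int(round(t_final / dt)) + 1):
        x_obs = x + eta * rng.gauss(0.0, 1.0)  # noisy observation of x
        # Averaging the pointwise estimates over each relaxation period
        # damps the noise injected through the nudging term.
        if n > n_relax and abs(yt - xt) > delta:
            est_sum += s - mu * (xt - x_obs) / (yt - xt)
            est_cnt += 1
        dx, dy, dz = lorenz(x, y, z, sigma, rho, beta)
        dxt, dyt, dzt = lorenz(xt, yt, zt, s, rho, beta)
        dxt += mu * (x_obs - xt)    # nudge toward the corrupted observation
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xt, yt, zt = xt + dt * dxt, yt + dt * dyt, zt + dt * dzt
        if n % n_relax == 0 and est_cnt > 0:
            s = est_sum / est_cnt   # relax the estimate every T_R time units
            est_sum, est_cnt = 0.0, 0
    return s
```

A smaller nudging gain is used here than in the noise-free sketch, since the nudging term amplifies observation noise by a factor of $\mu$; the window average then leaves a residual error on the order of the noise level rather than the initial parameter error.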

Figure 1.  The parameter learning algorithm is applied to the true parameters $\sigma = 10, \, \rho = 28, \, \beta = 8/3$ with $\rho, \beta$ known and $\sigma$ recovered from continuous observations of $x(t)$. The initial guess is $\sigma_0 = \sigma+100$, and the algorithm parameters are set to $\mu_1 = 500$, $T_R = 1$. The analytically derived upper bounds on position and velocity error from Corollary 4.2 and Corollary 4.4 are shown to hold remarkably well.

Figure 2.  Schematic of the threshold defined by (3.1).

Figure 3.  (Left) The parameter learning algorithm is used to recover $\sigma$ from 1,000 randomly sampled pairs $(\rho, \sigma) \in [0,150]^2$, with $\beta = 8/3$ fixed. The initial estimate is $\sigma_0 = \sigma+10$, and the algorithm parameters are fixed at $\mu_1 = 10,000$ and $T_R = 5$. Each simulation is run to $t = 75$ time units. The color corresponds to the resulting absolute parameter error $|\widetilde{\sigma}-\sigma|$: red signifies $|\widetilde{\sigma}-\sigma|>|\sigma_0-\sigma|$; white signifies $|\widetilde{\sigma}-\sigma| = |\sigma_0-\sigma|$; blue signifies $|\widetilde{\sigma}-\sigma|<|\sigma_0-\sigma|$. The period of each solution is computed using the Poincaré surface-of-section method described in . (Right) For each $(\rho, \sigma)$ where $P_\pm$ are stable, instead of observing $x$ we observe the translated variable $z_\tau = z-\sigma-\rho$ and use the alternate formula (3.2) to recover $\sigma$.

Figure 5.  (Log-linear plots) Parameter recovery with observations every 500 time-steps. $\sigma = 10$, $\rho = 28$, $\beta = 8/3$, $\Delta t = 0.0001$, $\mu^{\mathrm{AOT}} = 1.8/\Delta t = 18,000$, $\mu^{\mathrm{param}} = 1.8$. (A) Observations only on $x$; (B) observations only on $y$; (C) observations only on $y$ and $z$; (D) observations only on $x$ and $z$; (E) observations only on $x$ and $y$; (F) observations on $x$, $y$, and $z$. Note: observations only on $z$ with an unknown $\beta$ parameter did not converge and hence are not shown. In (A), the solution and the $\sigma$ parameter momentarily converged to the exact value for roughly $97.4\lesssim t\lesssim 109.7$; hence the gap in the plot.
