
Machine-learning construction of a model for a macroscopic fluid variable using the delay-coordinate of a scalar observable

Abstract
  • We construct a data-driven dynamical-system model for a macroscopic variable, the Reynolds number, of a high-dimensionally chaotic fluid flow by training on its scalar time-series data. We use a machine-learning approach, reservoir computing, to construct the model, and do not use knowledge of the physical processes of fluid dynamics in the procedure. We confirm that a time-series inferred from the model approximates the actual one and that some characteristics of the chaotic invariant set mimic the actual ones. We investigate the appropriate choice of the delay-coordinate, especially the delay-time and the dimension, which enables us to construct a model having a relatively high-dimensional attractor at low computational cost.

    Mathematics Subject Classification: Primary: 76F20; Secondary: 68T05, 65P20.

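The construction described in the abstract admits a compact summary in code. Below is a minimal sketch, assuming the standard echo-state-network formulation (a leaky-tanh reservoir driven by the delay-coordinate input, with the readout $ \mathbf{W}_{\text{out}}, \mathbf{c} $ fitted by ridge regression); parameter values follow the Sec. 4 column of Table 2, the interpretation of $ D $ as the average degree of $ \mathbf{A} $ is our assumption, and the authors' formulation may differ in detail.

```python
# Minimal reservoir-computing sketch (assumed standard ESN form; see lead-in).
import numpy as np

rng = np.random.default_rng(0)

M, N = 14, 3000               # delay-coordinate dimension, reservoir dimension (Sec. 4)
D, rho, sigma = 120, 0.7, 0.5 # sparsity of A, maximal eigenvalue, input-weight scale
alpha, beta = 0.6, 0.1        # nonlinearity degree, regularization parameter

# Sparse random adjacency matrix A, rescaled so its maximal eigenvalue is rho.
A = rng.uniform(-1.0, 1.0, (N, N)) * (rng.random((N, N)) < D / N)
A *= rho / np.abs(np.linalg.eigvals(A)).max()
W_in = rng.uniform(-sigma, sigma, (N, M))   # linear input weight

def step(r, u):
    """One reservoir update r(t) -> r(t + dt), driven by the input u(t)."""
    return (1.0 - alpha) * r + alpha * np.tanh(A @ r + W_in @ u)

def train(S):
    """Drive the reservoir with the training data S (rows s(t)) and fit
    W_out, c by ridge regression; in practice a transient of L_0 steps
    should be discarded before fitting."""
    R = np.zeros((len(S), N))
    r = np.zeros(N)
    for k in range(len(S) - 1):
        r = step(r, S[k])
        R[k + 1] = r
    X = np.hstack([R, np.ones((len(S), 1))])   # trailing column of ones yields c
    W = np.linalg.solve(X.T @ X + beta * np.eye(N + 1), X.T @ S)
    return W[:-1].T, W[-1]                     # W_out (M x N), c (M,)

def predict(r, W_out, c, n_steps):
    """Closed-loop inference: feed the inferred output back as the next input."""
    out = []
    for _ in range(n_steps):
        s_hat = W_out @ r + c
        out.append(s_hat)
        r = step(r, s_hat)
    return np.array(out)
```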
    Figure 1.  Inference of a time-series of the Reynolds number of a fluid flow. The time-series of $ s_1 = \tilde{R}_{\lambda} $ inferred by the reservoir model is compared with reference data obtained by direct numerical simulation of the Navier–Stokes equation (top left). The variable $ t^\prime \; ( = t-T>0) $ denotes the time after the training phase ends at $ t = T $. The inference errors $ \varepsilon_1, \varepsilon_2 $, defined by $ \varepsilon_1(t) = | \mathbf{s}(t)-\hat{ \mathbf{s}}(t)| $ and $ \varepsilon_2(t) = |s_1(t)-\hat{s}_1(t)| = |\tilde{R}_{\lambda}(t)-\hat{\tilde{R}}_{\lambda}(t)| $, grow exponentially due to the chaotic property (top right). In the bottom panel, switching between a laminar state with small-amplitude fluctuations and a bursting state with large-amplitude fluctuations appears in the inferred time-series of $ s_1 = \tilde{R}_{\lambda} $, as is observed in the actual time-series.

    Figure 2.  Reproduction of the delay property that must be satisfied by a successfully inferred time-series $ \hat{\mathbf s} $. We observe that $ \hat{s}_1(t^\prime)\approx\hat{s}_{m}(t^\prime+(m-1)\Delta\tau) $ for all $ m = 2, \cdots, 14 $ and for most $ t^{\prime} $, although only the time-series of $ \hat{s}_1(t^\prime) $ and $ \hat{s}_{14}(t^\prime+13\Delta\tau) $ $ (7000\le t^{\prime}\le 8000) $ are shown.
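The delay property checked in Fig. 2 follows from how the delay-coordinate vector is assembled from the scalar observable. Assuming the delays run backward in time (the opposite sign convention merely relabels the indices), the components satisfy
\begin{equation}
s_m(t) = \tilde{R}_{\lambda}\bigl(t-(m-1)\Delta\tau\bigr), \qquad m = 1, \cdots, M,
\end{equation}
so that $ s_1(t) = s_m(t+(m-1)\Delta\tau) $ holds exactly for the actual data; a successfully inferred $ \hat{\mathbf{s}} $ should therefore reproduce $ \hat{s}_1(t^\prime)\approx\hat{s}_{m}(t^\prime+(m-1)\Delta\tau) $, which is what the figure verifies.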

    Figure 3.  Poincaré points on the plane $ (s_2, s_3) $ along the trajectory $ \hat{ \mathbf{s}} $ obtained from the reservoir model (red) and $ \mathbf{s} $ from the Navier–Stokes equation (blue). The time length of each trajectory is $ 90000 $. The Poincaré section is defined by $ s_{1} = 0 $, $ ds_{1}/dt>0 $. The two sections are similar to each other, although the trajectory generated from the reservoir model does not cover some regions of the bursting states.
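A minimal sketch of how such Poincaré points can be extracted from a sampled trajectory $ \mathbf{s}(t) $ with columns $ (s_1, s_2, s_3, \cdots) $: upward crossings of the section $ s_1 = 0 $ with $ ds_1/dt > 0 $ are located by linear interpolation between consecutive samples. This is illustrative only, not the authors' code.

```python
import numpy as np

def poincare_points(s):
    """Return an array of (s_2, s_3) at upward crossings of s_1 = 0."""
    s1 = s[:, 0]
    pts = []
    for k in range(len(s1) - 1):
        if s1[k] < 0.0 <= s1[k + 1]:               # upward zero crossing
            w = -s1[k] / (s1[k + 1] - s1[k])       # interpolation weight in [0, 1]
            pts.append((1.0 - w) * s[k, 1:3] + w * s[k + 1, 1:3])
    return np.array(pts)
```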

    Figure 4.  Density distributions generated from trajectories of the variable $ s_1 $ obtained from the constructed reservoir model (reservoir output) and from direct numerical simulation of the Navier–Stokes equation (actual). Each trajectory has time length 50000 and a different initial condition. The distributions are similar to each other in the sense that the peak is at $ s_1\approx0.2 $ and the distribution has relatively long tails.

    Figure 5.  Inference of a time-series of the Reynolds number for $ t^{\prime}>T_{\text{out}} $ ($ T_{\text{out}} = 1000 $) using the reservoir model constructed from the training data for $ t^\prime\le 0 $ (see Fig. 1). We use the same $ \mathbf{W}_\text{in}, \mathbf{A}, \mathbf{W}^*_\text{out} $ and $ \mathbf{c}^* $ as for the model inferring the trajectory in Fig. 1, but we use the time-series $ s_1(t^\prime) $ for $ T_{\text{out}}- T_1<t^\prime<T_{\text{out}} $ as an initial condition, where $ T_1 $ is the transient time for the reservoir state vector $ \mathbf{r}(t) $ to converge. In the top panel, switching between laminar and bursting states is observed in the inferred trajectory. The bottom panel is an enlargement of the top panel and shows that the model has predictability for $ 1000<t^\prime<1080 $.
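A sketch of this re-initialization, under our reading of the procedure: the trained model ($ \mathbf{W}_\text{in}, \mathbf{A}, \mathbf{W}^*_\text{out}, \mathbf{c}^* $ fixed) is first driven open-loop by the observed data over the window $ T_{\text{out}}-T_1<t^\prime<T_{\text{out}} $ so that $ \mathbf{r}(t) $ converges, and inference then proceeds closed-loop. Here `step` is the update function from the sketch after the abstract; this is not the authors' code.

```python
import numpy as np

def reinitialize_and_predict(S_warm, step, W_out, c, N, n_steps):
    r = np.zeros(N)
    for u in S_warm:              # open loop: teacher forcing over the transient T_1
        r = step(r, u)
    out = []
    for _ in range(n_steps):      # closed loop: feed the inferred output back as input
        s_hat = W_out @ r + c
        out.append(s_hat)
        r = step(r, s_hat)
    return np.array(out)
```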

    Figure 6.  Inference of time-series of the Reynolds number in many time-intervals $ T_{\text{out}}<t^{\prime}<T_{\text{out}}+250 $ ($ T_{\text{out}} = 500, 1000, \cdots, 6000 $) using the same reservoir model constructed from the training data for $ t^\prime\le 0 $ (see Figs. 1 and 5). As in Fig. 5, we change only the initial condition for each case, while the model is fixed after the appropriate choice of $ \mathbf{W}_\text{in}, \mathbf{A}, \mathbf{W}^*_\text{out} $ and $ \mathbf{c}^* $ is determined from the training data for $ t^{\prime}<0 $.

    Figure 7.  Auto-correlation function $ C(x) $ of a trajectory $ \{R_{\lambda}(t)\} $ as a function of the time-delay $ x $ (left), and its enlargement (right). The auto-correlation function $ C(x) $ is shown together with the horizontal lines $ \pm0.3, \pm0.5 $ (left panel) and $ 0.3, 0.7 $ (right panel). Each color represents $ C(x) $ computed from a trajectory of time length 5000 starting from a different initial condition; the differences are mainly due to the intermittent property of the dynamics. In the left panel the envelope $ C_e(x) = \exp(-x/60) $ goes below 0.5 at $ x \approx 40 $ and below 0.3 at $ x \approx 75 $. The right panel shows that $ C(x) $ first goes below $ 0.7 $ at $ x \approx 3.0 $ and first goes below $ 0.3 $ at $ x \approx 5.0 $.
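A sketch of the diagnostic behind Fig. 7: the auto-correlation $ C(x) $ of a scalar trajectory and the first delay at which it falls below a given threshold, which guides the choice of the delay-time $ \Delta\tau $. This uses a naive biased estimator and is illustrative only, not the authors' code.

```python
import numpy as np

def autocorrelation(y, max_lag_steps):
    """C(x) for lags x = 0, dt, 2*dt, ..., returned per lag step."""
    y = y - y.mean()
    c0 = float(y @ y)
    return np.array([float(y[: len(y) - k] @ y[k:]) / c0
                     for k in range(max_lag_steps)])

def first_below(C, threshold, dt):
    """First delay x with C(x) < threshold, or None if no crossing occurs."""
    idx = int(np.argmax(C < threshold))
    return idx * dt if C[idx] < threshold else None
```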

    Table 1.  The list of variables and matrices used in the reservoir computing

    variable | description
    $ \mathbf{u}\; (\in \mathbf{R}^M) $ | input variable
    $ \mathbf{r}\; (\in \mathbf{R}^N) $ | reservoir state vector
    $ \mathbf{s}\; (\in \mathbf{R}^M) $ | actual output variable obtained from the Navier–Stokes equation
    $ \hat{ \mathbf{s}}\; (\in \mathbf{R}^M) $ | inferred output variable obtained from reservoir computing
    $ \mathbf{A}\; (\in \mathbf{R}^{N \times N}) $ | weighted adjacency matrix
    $ \mathbf{W}_{\text{in}}\; (\in \mathbf{R}^{N \times M}) $ | linear input weight
    $ \mathbf{W}_{\text{out}}\; (\in \mathbf{R}^{M \times N}) $ | matrix used for the translation from $ \mathbf{r} $ to the output variable $ \hat{ \mathbf{s}} $
    $ \mathbf{c}\; (\in \mathbf{R}^{M}) $ | vector used for the translation from $ \mathbf{r} $ to the output variable $ \hat{ \mathbf{s}} $
    $ \tilde{x} $ | normalized variable of $ {x} $
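The quantities in Table 1 combine in the reservoir update and readout. A common formulation, assumed here and consistent with the echo-state-network literature, reads
\begin{equation}
\mathbf{r}(t+\Delta t) = (1-\alpha)\, \mathbf{r}(t) + \alpha \tanh\bigl( \mathbf{A}\, \mathbf{r}(t) + \mathbf{W}_{\text{in}}\, \mathbf{u}(t) \bigr), \qquad \hat{ \mathbf{s}}(t) = \mathbf{W}_{\text{out}}\, \mathbf{r}(t) + \mathbf{c},
\end{equation}
where the optimal $ \mathbf{W}^*_{\text{out}} $ and $ \mathbf{c}^* $ minimize the ridge-regularized error $ \sum_{k} \| \mathbf{W}_{\text{out}}\, \mathbf{r}(k\Delta t) + \mathbf{c} - \mathbf{s}(k\Delta t) \|^2 + \beta \| \mathbf{W}_{\text{out}} \|^2 $ over the training interval.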

    Table 2.  The list of parameters and the values used in the reservoir computing in each section

    parameter | description | Sec. 4 | Sec. 5
    $ M $ | dimension of input and output variables | 14 | Table 3
    $ \Delta \tau $ | delay-time of the delay-coordinate | 4.0 | Table 3
    $ N $ | dimension of the reservoir state vector | 3000 | 2000
    $ D $ | parameter determining $ \mathbf{A} $ | 120 | 80
    $ \Delta t $ | time step for the reservoir dynamics | 0.5 | 0.5
    $ T_0 $ | transient time for $ \mathbf{r} $ to converge | 3750 | 3750
    $ T $ | training time | 40000 | 40000
    $ L_0 \; (=T_0/\Delta t) $ | number of iterations for the transient | 7500 | 7500
    $ L \; (=T/\Delta t) $ | number of iterations for the training | 80000 | 80000
    $ \rho $ | maximal eigenvalue of $ \mathbf{A} $ | 0.7 | 0.7
    $ \sigma $ | scale of input weights in $ \mathbf{W}_{\text{in}} $ | 0.5 | 0.5
    $ \alpha $ | nonlinearity degree of the reservoir dynamics | 0.6 | 0.6
    $ \beta $ | regularization parameter | 0.1 | 0.1
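For convenience, the Sec. 4 column of Table 2 can be packaged as a plain dictionary to drive the training sketch given after the abstract; the values are copied from the table (the reading of $ D $ as the average degree of $ \mathbf{A} $ is our assumption).

```python
params_sec4 = dict(
    M=14,            # dimension of input and output variables
    delta_tau=4.0,   # delay-time of the delay-coordinate
    N=3000,          # dimension of the reservoir state vector
    D=120,           # parameter determining A
    dt=0.5,          # time step for the reservoir dynamics
    T0=3750,         # transient time (L_0 = T0/dt = 7500 iterations)
    T=40000,         # training time (L = T/dt = 80000 iterations)
    rho=0.7,         # maximal eigenvalue of A
    sigma=0.5,       # scale of input weights in W_in
    alpha=0.6,       # nonlinearity degree of the reservoir dynamics
    beta=0.1,        # regularization parameter
)
```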

    Table 3.  The number of successful trials for each choice of the delay-time $ \Delta \tau $ and the dimension $ M $ of the delay-coordinate. The matrices $ \mathbf{A} $ and $ \mathbf{W}_{\text{in}} $ are chosen randomly, and the number of successful cases is counted. See Table 2 for the parameter values. We say the inference is successful if the three conditions (i), (ii), (iii) in (12) hold, where the criteria $ (e_{60}, e_{90}) $ are set as (a) $ (0.14, 0.30) $ and (b) $ (0.13, 0.17) $. For each set of values $ (\Delta \tau, M) $ we tried 8160 cases of $ \mathbf{A} $ and $ \mathbf{W}_\text{in} $. For each value of $ \Delta\tau $, the best choice of $ M $ is identified by the bold number(s) (blue), and the best under each criterion is identified by the underlined bold number(s) (red)

    (a) $ (e_{60}, e_{90}) = (0.14, 0.30) $
    $ \Delta \tau $ $ \backslash $ $ M $ | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
    3.0 | 0 | 0 | 0 | 0 | 0 | 1 | 19 | 24 | 43 | 37 | 27
    3.5 | 0 | 0 | 0 | 11 | 20 | 28 | 57 | 48 | 21 | 11 | 7
    4.0 | 0 | 3 | 18 | 43 | 107 | 59 | 21 | 14 | 2 | 4 | 5
    4.5 | 3 | 14 | 43 | 54 | 21 | 15 | 8 | 1 | 1 | 1 | 0
    5.0 | 10 | 24 | 26 | 19 | 9 | 1 | 1 | 1 | 0 | 0 | 0
    (b) $ (e_{60}, e_{90}) = (0.13, 0.17) $
    $ \Delta \tau $ $ \backslash $ $ M $ | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20
    3.0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 6 | 10 | 8 | 4
    3.5 | 0 | 0 | 0 | 2 | 3 | 5 | 6 | 4 | 1 | 3 | 1
    4.0 | 0 | 0 | 2 | 8 | 14 | 10 | 1 | 4 | 1 | 0 | 1
    4.5 | 1 | 1 | 8 | 14 | 1 | 0 | 1 | 0 | 0 | 0 | 0
    5.0 | 2 | 4 | 6 | 6 | 3 | 0 | 1 | 0 | 0 | 0 | 0
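A sketch of the experiment tabulated above: for each pair $ (\Delta\tau, M) $, many random draws of $ (\mathbf{A}, \mathbf{W}_\text{in}) $ are trained and the successes counted. The success test here is a stand-in: conditions (i)-(iii) of (12) are not reproduced in this excerpt, so we assume $ e_{60} $ and $ e_{90} $ bound the inference error at $ t^\prime = 60 $ and $ t^\prime = 90 $, and `run_trial` is a hypothetical helper that trains one model and returns the error curve.

```python
def count_successes(n_trials, e60, e90, run_trial):
    """Count trials passing the (assumed) error thresholds at t' = 60 and 90."""
    wins = 0
    for seed in range(n_trials):   # 8160 trials per (delta_tau, M) in Table 3
        eps = run_trial(seed)      # eps: mapping t' -> inference error (hypothetical)
        if eps[60] < e60 and eps[90] < e90:
            wins += 1
    return wins
```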
