Multi-fidelity generative deep learning turbulent flows

  • *Corresponding author: Nicholas Zabaras
Abstract
  • In computational fluid dynamics, there is an inevitable trade-off between accuracy and computational cost. In this work, a novel multi-fidelity deep generative model is introduced for the surrogate modeling of high-fidelity turbulent flow fields given the solution of a computationally inexpensive but inaccurate low-fidelity solver. The resulting surrogate is able to generate physically accurate turbulent realizations at a computational cost orders of magnitude lower than that of a high-fidelity simulation. The deep generative model is a conditional invertible neural network, built with normalizing flows, with recurrent LSTM connections that allow for stable training of transient systems with high predictive accuracy. The model is trained with a variational loss that combines both data-driven and physics-constrained learning. This deep generative model is applied to non-trivial high Reynolds number flows governed by the Navier-Stokes equations, including turbulent flow over a backward-facing step at different Reynolds numbers and the turbulent wake behind an array of bluff bodies. For both of these examples, the model is able to generate unique yet physically accurate turbulent fluid flows conditioned on an inexpensive low-fidelity solution.

    Mathematics Subject Classification: Primary: 68T07, 68T37; Secondary: 37N10.

  • Figure 1.  Comparison between traditional hybrid VLES-LES simulation (left) and the proposed multi-fidelity deep generative turbulence model (right) for studying the wake behind a wall-mounted cube

    Figure 2.  Comparison of the forward and backward passes of various INN structures including (left to right) the standard INN, conditional INN (CINN) [76] and transient multi-fidelity Glow (TM-Glow) introduced in Section 3.2

    Figure 3.  Unfolded computational graph of a recurrent neural network model for which the arrows show functional dependence
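    A minimal sketch of such an unrolled recurrence is shown below; the LSTM cell, feature sizes and data are illustrative stand-ins rather than TM-Glow components.

```python
import torch
import torch.nn as nn

# A recurrence h_t = f(h_{t-1}, x_t): iterating it over T time-steps builds the
# chain-structured computational graph sketched in Figure 3.
cell = nn.LSTMCell(input_size=8, hidden_size=16)   # stand-in recurrent cell
x = torch.randn(10, 4, 8)                          # T = 10 steps, batch of 4
h, c = torch.zeros(4, 16), torch.zeros(4, 16)      # initial hidden and cell states
outputs = []
for x_t in x:                                      # each step adds one node to the unrolled graph
    h, c = cell(x_t, (h, c))
    outputs.append(h)
```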

    Figure 4.  TM-Glow model. This model is comprised of a low-fidelity encoder that conditions a generative flow model to produce samples of high-fidelity field snapshots. LSTM affine blocks are introduced to pass information between time-steps using recurrent connections. Boxes with rounded corners in (a) indicate a stack of the elements inside and should not be confused with plate notation. Arrows illustrate the forward pass of the INN. (For interpretation of the colors in the figure(s), the reader is referred to the web version of this article.)

    Figure 5.  The unrolled computational graph of the TM-Glow model for a model depth of $ k_{d} = 3 $

    Figure 6.  The LSTM affine block used in TM-Glow consisting of $ k_{c} $ affine coupling layers including an unnormalized conditional affine block (UnNorm Block), a stack of conditional affine blocks (Conditional Block) and a conditional LSTM affine block (LSTM Block)

    Figure 7.  The two variants of affine coupling layers used in TM-Glow with an input and output denoted as $ \mathit{\boldsymbol{h}}_{k-1} = \left\{\mathit{\boldsymbol{h}}_{k-1}^{1}, \mathit{\boldsymbol{h}}_{k-1}^{2}\right\} $ and $ \mathit{\boldsymbol{h}}_{k} = \left\{\mathit{\boldsymbol{h}}_{k}^{1}, \mathit{\boldsymbol{h}}_{k}^{2}\right\} $, respectively. Time-step superscripts have been omitted for clarity of presentation

    Figure 8.  Squeeze and split forward operations used to manipulate the dimensionality of the features in TM-Glow. (Left) The squeeze operation compresses the input feature map $ \mathit{\boldsymbol{h}}_{k-1} $ using a checkerboard pattern, halving the spatial dimensionality and increasing the number of channels by a factor of four. (Right) The split operation factors out half of the input $ \mathit{\boldsymbol{h}}_{k-1} $, which is then taken to be the latent random variable $ \mathit{\boldsymbol{z}}^{(i)} $. The remaining features, $ \mathit{\boldsymbol{h}}_{k} $, are sent deeper into the network. Time-step superscripts have been omitted for clarity of presentation
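    As a rough illustration (not the authors' code), the squeeze operation and its inverse can be written as invertible reshapes; the sub-sampling pattern assumed below follows common normalizing-flow implementations such as RealNVP [7].

```python
import torch

def squeeze(h: torch.Tensor) -> torch.Tensor:
    """Halve the spatial dimensions of h [B, C, H, W] while quadrupling the channels."""
    b, c, height, width = h.shape
    h = h.view(b, c, height // 2, 2, width // 2, 2)
    h = h.permute(0, 1, 3, 5, 2, 4).contiguous()
    return h.view(b, c * 4, height // 2, width // 2)

def unsqueeze(h: torch.Tensor) -> torch.Tensor:
    """Exact inverse of squeeze, restoring the original layout."""
    b, c, height, width = h.shape
    h = h.view(b, c // 4, 2, 2, height, width)
    h = h.permute(0, 1, 4, 2, 5, 3).contiguous()
    return h.view(b, c // 4, height * 2, width * 2)
```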

    Figure 9.  Dense block with a growth rate and length of $ 2 $. Residual connections between convolutions progressively stack feature maps, resulting in $ 12 $ output channels in this schematic. Standard batch-normalization [25] and Rectified Linear Unit (ReLU) [11] activation functions are used in conjunction with the convolutional operations. Convolutions are denoted by the kernel size $ k $, stride $ s $ and padding $ p $
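    A minimal PyTorch sketch of such a dense block follows; the 3×3 kernel with stride 1 and padding 1, and the 8-channel input that yields the 12 output channels of the schematic, are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block sketch: each layer's new feature maps are stacked onto its input."""
    def __init__(self, in_channels: int, growth_rate: int = 2, length: int = 2):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(length):
            # BN -> ReLU -> Conv (k=3, s=1, p=1) producing `growth_rate` new feature maps
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(),
                nn.Conv2d(channels, growth_rate, kernel_size=3, stride=1, padding=1),
            ))
            channels += growth_rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # progressively stack feature maps
        return x

# An 8-channel input (an assumption) gives 8 + 2*2 = 12 output channels, as in Figure 9.
out = DenseBlock(8)(torch.randn(2, 8, 16, 16))
assert out.shape[1] == 12
```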

    Figure 10.  (Left to right) Velocity magnitude MSE and turbulent kinetic energy (TKE) test MSE for TM-Glow models containing $ k_{d}\cdot k_{c} $ affine coupling layers

    Figure 11.  Reliability diagrams of the x-velocity, y-velocity and pressure fields predicted with TM-Glow, evaluated over $ 12000 $ model predictions. The black dashed line indicates matching empirical distributions between the model's samples and the observed validation data
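    One plausible way to compute such a diagram from model samples is sketched below (assuming central credible intervals; the authors' exact binning conventions may differ). A calibrated model's curve tracks the diagonal.

```python
import numpy as np

def reliability_curve(samples, truth, levels=np.linspace(0.05, 0.95, 19)):
    """samples: [n_samples, ...] array of model draws; truth: observation of matching shape."""
    empirical = []
    for p in levels:
        lo = np.quantile(samples, 0.5 - p / 2, axis=0)  # central credible interval bounds
        hi = np.quantile(samples, 0.5 + p / 2, axis=0)
        empirical.append(np.mean((truth >= lo) & (truth <= hi)))
    return levels, np.array(empirical)
```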

    Figure 12.  Flow over a backwards step. The green region indicates the recirculation region that TM-Glow will be used to predict. All domain boundaries are no-slip, with the exception of the uniform inlet and the zero-gradient outlet. The simulation domain downstream of the prediction region is made twice the length of that region to negate effects of the outlet boundary condition on this zone

    Figure 13.  Computational mesh around the backwards step used for the low- and high-fidelity CFD simulations solved with OpenFOAM [27]

    Figure 14.  (Left to right) Velocity magnitude and turbulent kinetic energy (TKE) error for flow over the backwards step during training of TM-Glow on different data set sizes. Error values were averaged over five model samples

    Figure 15.  (Top to bottom) Velocity magnitude of the high-fidelity target, low-fidelity input, $ 3 $ TM-Glow samples and standard deviation for two test flows

    Figure 16.  (Top to bottom) Q-criterion of the high-fidelity target, low-fidelity input and three TM-Glow samples for two test flows
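    For reference, the Q-criterion [24] flags regions where rotation dominates strain, $ Q = \frac{1}{2}\left(\left\|\boldsymbol{\Omega}\right\|^{2} - \left\|\boldsymbol{S}\right\|^{2}\right) $. A small NumPy sketch for a 2-D velocity field on a uniform grid (array and spacing names assumed):

```python
import numpy as np

def q_criterion(u, v, dx, dy):
    """Q = 0.5*(||Omega||^2 - ||S||^2) for 2-D fields u[y, x] and v[y, x]."""
    du_dy, du_dx = np.gradient(u, dy, dx)  # gradients along rows (y), then columns (x)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    strain2 = du_dx**2 + dv_dy**2 + 0.5 * (du_dy + dv_dx)**2  # ||S||^2
    rotation2 = 0.5 * (du_dy - dv_dx)**2                      # ||Omega||^2
    return 0.5 * (rotation2 - strain2)  # Q > 0 marks rotation-dominated (vortical) regions
```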

    Figure 17.  TM-Glow time-series samples of $ x $-velocity, $ y $-velocity and pressure fields for a backwards step test case at $ Re = 7500 $. For each field (top to bottom) the high-fidelity ground truth, low-fidelity input, three TM-Glow samples and the resulting standard deviation are plotted

    Figure 18.  (Top to bottom) Time-averaged x-velocity, y-velocity and pressure profiles for two different test cases at (left to right) $ Re = 7500 $ and $ Re = 47500 $. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $ 2\sigma $) are computed using $ 20 $ time-series samples

    Figure 19.  (Top to bottom) Turbulent kinetic energy and Reynolds shear stress profiles for two different test cases at (left to right) $ Re = 7500 $ and $ Re = 47500 $. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $ 2\sigma $) are computed using $ 20 $ time-series samples
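    These statistics follow the standard Reynolds decomposition $ u = \overline{u} + u^{'} $; a minimal sketch of their computation from a series of snapshots (array names, shapes and the stand-in data are assumptions):

```python
import numpy as np

# u, v: snapshot series of shape [T, H, W]; random stand-in data for illustration
rng = np.random.default_rng(0)
u, v = rng.standard_normal((2, 100, 64, 128))

u_mean, v_mean = u.mean(axis=0), v.mean(axis=0)        # time averages
u_rms = np.sqrt(((u - u_mean) ** 2).mean(axis=0))      # sqrt(mean(u'^2))
v_rms = np.sqrt(((v - v_mean) ** 2).mean(axis=0))
shear = ((u - u_mean) * (v - v_mean)).mean(axis=0)     # Reynolds shear stress mean(u'v')
tke = 0.5 * (u_rms ** 2 + v_rms ** 2)                  # planar TKE estimate
```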

    Figure 20.  Flow around an array of bluff bodies. The red region indicates the area in which the bodies can be placed randomly. The green region indicates the wake zone in which TM-Glow is used to predict the high-fidelity response from a low-fidelity simulation

    Figure 21.  Velocity magnitude of the low-fidelity and high-fidelity simulations for two different cylinder arrays. (Left to right) Cylinder array configuration and the corresponding (top to bottom) high-fidelity and low-fidelity finite volume simulation results at several time-steps

    Figure 22.  Computational mesh around the cylinder array used for the low- and high-fidelity CFD simulations solved with OpenFOAM [27]

    Figure 23.  (Left to right) Cylinder array velocity magnitude and turbulent kinetic energy (TKE) error during training of TM-Glow on different data set sizes. Error values were averaged over five model samples

    Figure 24.  (Top to bottom) Velocity magnitude of the high-fidelity target, low-fidelity input, three TM-Glow samples and standard deviation for two test cases

    Figure 25.  TM-Glow time-series samples of $ x $-velocity, $ y $-velocity and pressure fields for a cylinder array test case. For each field (top to bottom) the high-fidelity ground truth, low-fidelity input, three TM-Glow samples and the resulting standard deviation are plotted

    Figure 26.  TM-Glow time-series samples of $ x $-velocity, $ y $-velocity and pressure fields for a second cylinder array test case. For each field (top to bottom) the high-fidelity ground truth, low-fidelity input, three TM-Glow samples and the resulting standard deviation are plotted

    Figure 27.  Time-averaged flow profiles for two test flows. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $ 2\sigma $) are computed using $ 20 $ time-series samples

    Figure 28.  Turbulent statistic profiles for two test flows. TM-Glow expectation (TM-Glow) and confidence interval (TM-Glow $ 2\sigma $) are computed using $ 20 $ time-series samples

    Figure 29.  Computational requirements for training TM-Glow given training data sets of various sizes. Computation is quantified using Service Units (SU), defined in Table 6

    Table 1.  Invertible operations used in the generative normalizing flow method of TM-Glow. Consistent with the notation in [31], we assume the inputs and outputs of each operation are of dimension $ \mathit{\boldsymbol{h}}_{k-1}, \mathit{\boldsymbol{h}}_{k} \in \mathbb{R}^{c\times h \times w} $ with $ c $ channels and a feature map size of $ \left[h \times w\right] $. Indexes over the spatial domain of the feature map are denoted by $ \mathit{\boldsymbol{h}}(x, y)\in \mathbb{R}^{c} $. The coupling neural network and convolutional LSTM are abbreviated as $ NN $ and $ LSTM $, respectively. Time-step superscripts have been omitted for clarity of presentation

    Operation Forward Inverse Log Jacobian
    Conditional Affine Layer $\begin{aligned} \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} = \boldsymbol{h}_{k-1}\\ (\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)})\\ \boldsymbol{h}_{k}^{2}=\exp\left(\log \boldsymbol{s}\right)\odot \boldsymbol{h}_{k-1}^{2} + \boldsymbol{t}\\ \boldsymbol{h}_{k}^{1} = \boldsymbol{h}_{k-1}^{1}\\ \boldsymbol{h}_{k} = \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} \end{aligned}$ $\begin{aligned} \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} = \boldsymbol{h}_{k} \\ (\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)})\\ \boldsymbol{h}_{k-1}^{2}= \left(\boldsymbol{h}_{k}^{2} - \boldsymbol{t}\right)/\exp\left(\log \boldsymbol{s}\right)\\ \boldsymbol{h}_{k-1}^{1} = \boldsymbol{h}_{k}^{1}\\ \boldsymbol{h}_{k-1} = \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} \end{aligned}$ $\textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$
    LSTM Affine Layer $\begin{aligned} \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} = \boldsymbol{h}_{k-1}\\ \boldsymbol{a}^{(i)}_{out}, \boldsymbol{c}^{(i)}_{out} = LSTM\left(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{in}, \boldsymbol{c}^{(i)}_{in}\right)\\ (\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k-1}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{out})\\ \boldsymbol{h}_{k}^{2}=\exp\left(\log \boldsymbol{s}\right)\odot \boldsymbol{h}_{k-1}^{2} + \boldsymbol{t}\\ \boldsymbol{h}_{k}^{1} = \boldsymbol{h}_{k-1}^{1}\\ \boldsymbol{h}_{k} = \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} \end{aligned}$ $\begin{aligned} \left\{\boldsymbol{h}_{k}^{1}, \boldsymbol{h}_{k}^{2}\right\} = \boldsymbol{h}_{k} \\ \boldsymbol{a}^{(i)}_{out}, \boldsymbol{c}^{(i)}_{out} = LSTM\left(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{in}, \boldsymbol{c}^{(i)}_{in}\right)\\ (\log \boldsymbol{s}, \boldsymbol{t}) = NN(\boldsymbol{h}_{k}^{1}, \boldsymbol{\xi}^{(i)}, \boldsymbol{a}^{(i)}_{out})\\ \boldsymbol{h}_{k-1}^{2}= \left(\boldsymbol{h}_{k}^{2} - \boldsymbol{t}\right)/\exp\left(\log \boldsymbol{s}\right)\\ \boldsymbol{h}_{k-1}^{1} = \boldsymbol{h}_{k}^{1}\\ \boldsymbol{h}_{k-1} = \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} \end{aligned}$ $\textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$
    ActNorm $\forall x,y\quad \boldsymbol{h}_{k}(x,y)=\boldsymbol{s}\odot \boldsymbol{h}_{k-1}(x,y) + \boldsymbol{b}$ $\forall x,y\quad \boldsymbol{h}_{k-1}(x,y)=(\boldsymbol{h}_{k}(x,y)-\boldsymbol{b})/\boldsymbol{s}$ $h\cdot w \cdot \textrm{sum}\left(\log \left|\boldsymbol{s}\right|\right)$
    $1\times 1$ Convolution $\forall x,y\quad \boldsymbol{h}_{k}(x,y)=\boldsymbol{W}\boldsymbol{h}_{k-1}(x,y) \quad \boldsymbol{W}\in\mathbb{R}^{c\times c}$ $\forall x,y\quad \boldsymbol{h}_{k-1}(x,y) =\boldsymbol{W}^{-1}\boldsymbol{h}_{k}(x,y)$ $h\cdot w \cdot \log\left|\det \boldsymbol{W}\right|$
    Split $\begin{aligned} \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} = \boldsymbol{h}_{k-1} \\ \left(\boldsymbol{\mu},\boldsymbol{\sigma}\right) = NN\left(\boldsymbol{h}_{k-1}^{1}\right) \\ p_{\boldsymbol{\theta}}(\boldsymbol{z}_{k}) = \mathcal{N}\left(\boldsymbol{h}_{k-1}^{2}| \boldsymbol{\mu}, \boldsymbol{\sigma} \right)\\ \boldsymbol{h}_{k} = \boldsymbol{h}_{k-1}^{1} \end{aligned}$ $\begin{aligned} \boldsymbol{h}_{k-1}^{1} = \boldsymbol{h}_{k} \\ \left(\boldsymbol{\mu},\boldsymbol{\sigma}\right) = NN\left(\boldsymbol{h}_{k-1}^{1}\right)\\ \boldsymbol{h}_{k-1}^{2} \sim \mathcal{N}\left(\boldsymbol{\mu},\boldsymbol{\sigma} \right)\\ \boldsymbol{h}_{k-1} = \left\{\boldsymbol{h}_{k-1}^{1}, \boldsymbol{h}_{k-1}^{2}\right\} \end{aligned}$ N/A
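    To make the conditional affine layer above concrete, a minimal PyTorch sketch with its log-Jacobian is given below; the coupling network architecture is an assumption for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Conditional affine layer of Table 1: transform h2 conditioned on (h1, xi)."""
    def __init__(self, channels: int, cond_channels: int, hidden: int = 64):
        super().__init__()
        # Coupling network NN(h1, xi) -> (log_s, t); a simple two-conv stand-in.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2 + cond_channels, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, h, xi):
        h1, h2 = h.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([h1, xi], dim=1)).chunk(2, dim=1)
        h2 = torch.exp(log_s) * h2 + t         # h2 <- exp(log s) ⊙ h2 + t
        log_jac = log_s.flatten(1).sum(dim=1)  # sum(log|s|), since s = exp(log s) > 0
        return torch.cat([h1, h2], dim=1), log_jac

    def inverse(self, h, xi):
        h1, h2 = h.chunk(2, dim=1)
        log_s, t = self.net(torch.cat([h1, xi], dim=1)).chunk(2, dim=1)
        h2 = (h2 - t) * torch.exp(-log_s)      # exact inverse of the forward map
        return torch.cat([h1, h2], dim=1)
```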

    Table 2.  TM-Glow model and training parameters used for both numerical test cases. For parameters that vary between test cases, the superscripts $ \dagger $ and $ \ddagger $ denote the numerical examples in Sections 5 and 6, respectively. Hyper-parameter differences are due to memory constraints imposed by the varying predictive domain sizes

    | TM-Glow | | Training | |
    | --- | --- | --- | --- |
    | Model Depth, $ k_{d} $ | $ 3 $ | Optimizer | ADAM [29] |
    | Conditional Features, $ \mathit{\boldsymbol{\xi}}^{(i)} $ | $ 32 $ | Weight Decay | $ 1\times 10^{-6} $ |
    | Recurrent Features, $ \mathit{\boldsymbol{a}}^{(i)}_{in}, \mathit{\boldsymbol{c}}^{(i)}_{in} $ | $ 64, 64 $ | Epochs | $ 400 $ |
    | Affine Coupling Layers, $ k_{c} $ | $ 16 $ | Mini-batch Size | $ 32^{\dagger}, 64^{\ddagger} $ |
    | Coupling NN Layers | $ 2 $ | BPTT | $ 10 $ time-steps |
    | Inverse Temp., $ \beta $ | $ 200 $ | | |

    Table 3.  Ablation study of the impact of different parts of the backward KL loss. As a baseline, we also train TM-Glow using the standard maximum likelihood estimation (MLE) approach. The mean square error (MSE) of various flow field quantities is listed for each loss formulation. The lowest value for each error is bolded

    | MLE | $ V_{Pres} $ | $ V_{Div} $ | $ V_{L2} $ | $ V_{RMS} $ | $ MSE\left(\overline{\mathit{\boldsymbol{u}}}_{x}\right) $ | $ MSE\left(\overline{\mathit{\boldsymbol{u}}}_{y}\right) $ | $ MSE\left(\overline{\mathit{\boldsymbol{p}}}\right) $ | $ MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{x}\right)^{2}}}\right) $ | $ MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{y}\right)^{2}}}\right) $ | $ MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{p}}^{'}\right)^{2}}}\right) $ | $ \overline{V_{Div}} $ | $ \overline{V_{Pres}} $ |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | | | | | | 0.0589 | 0.0085 | 0.0135 | 0.0204 | 0.0486 | 0.0137 | 0.0019 | 0.0615 |
    | | | | | | 0.0490 | 0.0115 | 0.0188 | 0.0168 | 0.0292 | 0.0125 | 0.0012 | 0.0192 |
    | | | | | | 0.0390 | 0.0078 | 0.0189 | 0.0162 | 0.0251 | 0.0106 | 0.0013 | 0.0402 |
    | | | | | | 0.0463 | 0.0113 | 0.0158 | 0.0166 | 0.0256 | 0.0129 | 0.0012 | 0.0424 |
    | | | | | | 0.0435 | 0.0089 | 0.0140 | 0.0168 | 0.0272 | 0.0131 | 0.0012 | 0.0366 |
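    As an illustration of the flavor of these physics-constrained terms, a divergence penalty in the spirit of $ V_{Div} $ might look as follows (a sketch only: uniform grid, finite differences via torch.gradient; the authors' exact discretization may differ):

```python
import torch

def divergence_penalty(u, v, dx, dy):
    """Mean-squared divergence of a 2-D velocity field u[y, x], v[y, x]."""
    du_dx = torch.gradient(u, spacing=[dy, dx])[1]  # gradient along dim 1 (x)
    dv_dy = torch.gradient(v, spacing=[dy, dx])[0]  # gradient along dim 0 (y)
    return ((du_dx + dv_dy) ** 2).mean()            # zero for an incompressible flow
```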

    Table 4.  Backwards step test error of normalized time-averaged flow field quantities for the low-fidelity solution interpolated to the high-fidelity mesh and for TM-Glow trained on various training data set sizes. Lower is better. TM-Glow errors were averaged over $ 20 $ samples from the model. The training wall-clock (WC) time for each data set size is also listed

    | | $ MSE\left(\overline{\mathit{\boldsymbol{u}}}_{x}/u_{0}\right) $ | $ MSE\left(\overline{\mathit{\boldsymbol{u}}}_{y}/u_{0}\right) $ | $ MSE\left(\overline{\mathit{\boldsymbol{p}}}/u^{2}_{0}\right) $ | $ MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{x}\right)^{2}}}/u_{0}\right) $ | $ MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{y}\right)^{2}}}/u_{0}\right) $ | $ MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{p}}^{'}\right)^{2}}}/u^{2}_{0}\right) $ | WC Hrs. |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | Low-Fidelity | 0.1212 | 0.0224 | 0.0199 | 0.0237 | 0.0177 | 0.0124 | - |
    | $ 8 $ Flows | 0.0182 | 0.0036 | 0.0023 | 0.0053 | 0.0059 | 0.0034 | 6.5 |
    | $ 16 $ Flows | 0.0185 | 0.0031 | 0.0021 | 0.0030 | 0.0033 | 0.0023 | 10.0 |
    | $ 32 $ Flows | 0.0091 | 0.0019 | 0.0014 | 0.0022 | 0.0022 | 0.0014 | 12.1 |
    | $ 48 $ Flows | 0.0074 | 0.0017 | 0.0014 | 0.0021 | 0.0022 | 0.0013 | 16.6 |

    Table 5.  Cylinder array test error of time-averaged flow field quantities for the low-fidelity solution interpolated to the high-fidelity mesh and for TM-Glow trained on different training data set sizes. Lower is better. TM-Glow errors were averaged over $ 20 $ samples from the model. The training wall-clock (WC) time for each data set size is also listed

    | | $ MSE\left(\overline{\mathit{\boldsymbol{u}}}_{x}\right) $ | $ MSE\left(\overline{\mathit{\boldsymbol{u}}}_{y}\right) $ | $ MSE\left(\overline{\mathit{\boldsymbol{p}}}\right) $ | $ MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{x}\right)^{2}}}\right) $ | $ MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{u}}^{'}_{y}\right)^{2}}}\right) $ | $ MSE\left(\sqrt{\overline{\left(\mathit{\boldsymbol{p}}^{'}\right)^{2}}}\right) $ | WC Hrs. |
    | --- | --- | --- | --- | --- | --- | --- | --- |
    | Low-Fidelity | 0.1033 | 0.0081 | 0.0179 | 0.0655 | 0.0981 | 0.02156 | - |
    | $ 16 $ Flows | 0.0461 | 0.0078 | 0.0292 | 0.0116 | 0.0191 | 0.00096 | 4.3 |
    | $ 32 $ Flows | 0.0461 | 0.0078 | 0.0166 | 0.0128 | 0.0185 | 0.0093 | 4.9 |
    | $ 64 $ Flows | 0.0409 | 0.0062 | 0.0118 | 0.0107 | 0.0172 | 0.0084 | 6.8 |
    | $ 96 $ Flows | 0.0386 | 0.0059 | 0.0128 | 0.0100 | 0.0152 | 0.0074 | 10.3 |

    Table 6.  Hardware used to run the low-fidelity and high-fidelity CFD simulations as well as the training and prediction of TM-Glow for both numerical examples

    | | CPU Cores | CPU Model | GPUs | GPU Model | SU/Hour |
    | --- | --- | --- | --- | --- | --- |
    | Low-Fidelity | 1 | Intel Xeon E5-2680 | - | - | 1 |
    | High-Fidelity | 8 | Intel Xeon E5-2680 | - | - | 8 |
    | TM-Glow | 1 | Intel Xeon Gold 6226 | 4 | NVIDIA Tesla V100 | 8 |

    Table 7.  Prediction cost of the surrogate compared to the high-fidelity simulator for flow over a backwards step (left) and flow around a cylinder array (right).

    | Backwards Step | SU Hours | Wall-clock (mins) |
    | --- | --- | --- |
    | Low-Fidelity | 0.06 | 4.5 |
    | TM-Glow 20 Samples | 0.03 | 0.75 |
    | Surrogate Prediction | 0.09 | 5.25 |
    | High-Fidelity Prediction | 5.6 | 42 |
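    As an illustrative consistency check on these units: the high-fidelity prediction takes 42 minutes of wall-clock time at 8 SU per hour (Table 6), i.e. $ 0.7\,\textrm{h} \times 8\,\textrm{SU/h} = 5.6 $ SU, matching the entry above.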
  • [1] F. Ahmed and N. Rajaratnam, Flow around bridge piers, Journal of Hydraulic Engineering, 124 (1998), 288-300.  doi: 10.1061/(ASCE)0733-9429(1998)124:3(288).
    [2] L. Ardizzone, C. Lüth, J. Kruse, C. Rother and U. Köthe, Guided image generation with conditional invertible neural networks, preprint, arXiv: 1907.02392.
    [3] K. Bieker, S. Peitz, S. L. Brunton, J. N. Kutz and M. Dellnitz, Deep model predictive control with online learning for complex physical systems, preprint, arXiv: 1905.10094.
    [4] L. Chen, K. Asai, T. Nonomura, G. Xi and T. Liu, A review of backward-facing step (BFS) flow mechanisms, heat transfer and control, Thermal Science and Engineering Progress, 6 (2018), 194-216. doi: 10.1016/j.tsep.2018.04.004.
    [5] J. Chung, S. Ahn and Y. Bengio, Hierarchical multiscale recurrent neural networks, preprint, arXiv: 1609.01704.
    [6] L. Dinh, D. Krueger and Y. Bengio, Nice: Non-linear independent components estimation, preprint, arXiv: 1410.8516.
    [7] L. Dinh, J. Sohl-Dickstein and S. Bengio, Density estimation using real nvp, preprint, arXiv: 1605.08803.
    [8] E. Erturk, Numerical solutions of 2-D steady incompressible flow over a backward-facing step, part I: High Reynolds number solutions, Computers and Fluids, 37 (2008), 633-655.  doi: 10.1016/j.compfluid.2007.09.003.
    [9] N. Geneva and N. Zabaras, Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks, Journal of Computational Physics, 403 (2020), 109056. doi: 10.1016/j.jcp.2019.109056.
    [10] N. Geneva and N. Zabaras, Quantifying model form uncertainty in Reynolds-averaged turbulence models with Bayesian deep neural networks, Journal of Computational Physics, 383 (2019), 125-147.  doi: 10.1016/j.jcp.2019.01.021.
    [11] X. Glorot, A. Bordes and Y. Bengio, Deep sparse rectifier neural networks, in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011, 315–323.
    [12] J. S. González, A. G. G. Rodriguez, J. C. Mora, J. R. Santos and M. B. Payan, Optimization of wind farm turbines layout using an evolutive algorithm, Renewable Energy, 35 (2010), 1671–1681. Available from: http://www.sciencedirect.com/science/article/pii/S0960148110000145.
    [13] I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, 2016.
    [14] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville and Y. Bengio, Generative adversarial nets, in Advances in Neural Information Processing Systems, 2014, 2672–2680.
    [15] W. Grathwohl, R. T. Chen, J. Betterncourt, I. Sutskever and D. Duvenaud, Ffjord: Free-form continuous dynamics for scalable reversible generative models, preprint, arXiv: 1810.01367.
    [16] X. Guo, W. Li and F. Iorio, Convolutional neural networks for steady flow approximation, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016. doi: 10.1145/2939672.2939738.
    [17] G. Haller, An objective definition of a vortex, Journal of Fluid Mechanics, 525 (2005), 1-26.  doi: 10.1017/S0022112004002526.
    [18] R. Han, Y. Wang, Y. Zhang and G. Chen, A novel spatial-temporal prediction method for unsteady wake flows based on hybrid deep neural network, Physics of Fluids, 31 (2019), 127101. doi: 10.1063/1.5127247.
    [19] O. Hennigh, Lat-net: Compressing lattice Boltzmann flow simulations using deep neural networks, preprint, arXiv: 1705.09036.
    [20] J. Hoffman and C. Johnson, A new approach to computational turbulence modeling, Computer Methods in Applied Mechanics and Engineering, 195 (2006), 2865-2880.  doi: 10.1016/j.cma.2004.09.015.
    [21] J. Holgate, A. Skillen, T. Craft and A. Revell, A review of embedded large eddy simulation for internal flows, Archives of Computational Methods in Engineering, 26 (2019), 865-882. doi: 10.1007/s11831-018-9272-5.
    [22] G. Huang, Z. Liu, L. van der Maaten and K. Q. Weinberger, Densely connected convolutional networks, in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. doi: 10.1109/CVPR.2017.243.
    [23] W. Huang, Q. Yang and H. Xiao, CFD modeling of scale effects on turbulence flow and scour around bridge piers, Computers and Fluids, 38 (2009), 1050-1058. doi: 10.1016/j.compfluid.2008.01.029.
    [24] J. C. Hunt, A. A. Wray and P. Moin, Eddies, streams, and convergence zones in turbulent flows, in Center for Turbulence Research Report, CTR-S88, 1988. Available from: https://ntrs.nasa.gov/search.jsp?R=19890015184.
    [25] S. Ioffe and C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, preprint, arXiv: 1502.03167.
    [26] J.-H. Jacobsen, A. Smeulders and E. Oyallon, i-revnet: Deep invertible networks, preprint, arXiv: 1802.07088.
    [27] H. Jasak, A. Jemcov, Z. Tukovic, et al., OpenFOAM: A C++ library for complex physics simulations, in International Workshop on Coupled Methods in Numerical Dynamics, 1000, IUC Dubrovnik, Croatia, 2007, 1–20.
    [28] B. Kim, V. C. Azevedo, N. Thuerey, T. Kim, M. Gross and B. Solenthaler, Deep fluids: A generative network for parameterized fluid simulations, Computer Graphics Forum, 38 (2019), 59-70. doi: 10.1111/cgf.13619.
    [29] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization, preprint, arXiv: 1412.6980.
    [30] D. P. Kingma and M. Welling, Auto-encoding variational bayes, arXiv: 1312.6114.
    [31] D. P. Kingma and P. Dhariwal, Glow: Generative flow with invertible 1x1 convolutions, in Advances in Neural Information Processing Systems, 2018, 10215–10224.
    [32] M. Kumar, M. Babaeizadeh, D. Erhan, C. Finn, S. Levine, L. Dinh and D. Kingma, Videoflow: A flow-based generative model for video, preprint, arXiv: 1903.01434.
    [33] R. Kumar, S. Ozair, A. Goyal, A. Courville and Y. Bengio, Maximum entropy generators for energy-based models, preprint, arXiv: 1901.08508.
    [34] C. J. Lapeyre, A. Misdariis, N. Cazard, D. Veynante and T. Poinsot, Training convolutional neural networks to estimate turbulent sub-grid scale reaction rates, Combustion and Flame, 203 (2019), 255-264. doi: 10.1016/j.combustflame.2019.02.019.
    [35] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato and F. Huang, A tutorial on energy-based learning, Predicting Structured Data, 1 (2006), 59 pp.
    [36] C. Li, J. Li, G. Wang and L. Carin, Learning to sample with adversarially learned likelihood-ratio, 2018. Available from: https://openreview.net/forum?id=S1eZGHkDM.
    [37] J. Ling, A. Kurzawski and J. Templeton, Reynolds averaged turbulence modelling using deep neural networks with embedded invariance, Journal of Fluid Mechanics, 807 (2016), 155-166. doi: 10.1017/jfm.2016.615.
    [38] P. Liu, X. Qiu, X. Chen, S. Wu and X.-J. Huang, Multi-timescale long short-term memory neural network for modelling sentences and documents, in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, 2326–2335. doi: 10.18653/v1/D15-1280.
    [39] R. Maulik, O. San, A. Rasheed and P. Vedula, Subgrid modelling for two-dimensional turbulence using neural networks, Journal of Fluid Mechanics, 858 (2019), 122-144. doi: 10.1017/jfm.2018.770.
    [40] S. M. Mitran, A Comparison of Adaptive Mesh Refinement Approaches for Large Eddy Simulation, Technical report, Washington University, Seattle, Department of Applied Mathematics, 2001.
    [41] S. Mo, Y. Zhu, N. Zabaras, X. Shi and J. Wu, Deep convolutional encoder-decoder networks for uncertainty quantification of dynamic multiphase flow in heterogeneous media, Water Resources Research, 55 (2019), 703-728. doi: 10.1029/2018WR023528.
    [42] A. Mohan, D. Daniel, M. Chertkov and D. Livescu, Compressed convolutional lstm: An efficient deep learning framework to model high fidelity 3d turbulence, preprint, arXiv: 1903.00033.
    [43] M. H. Patel, Dynamics of Offshore Structures, Butterworth-Heinemann, 2013.
    [44] S. B. Pope, Turbulent Flows, Cambridge University Press, Cambridge, 2000. doi: 10.1017/CBO9780511840531.
    [45] P. Quéméré and P. Sagaut, Zonal multi-domain RANS/LES simulations of turbulent flows, International Journal for Numerical Methods in Fluids, 40 (2002), 903-925. doi: 10.1002/fld.381.
    [46] J. Rabault, M. Kuchta, A. Jensen, U. Réglade and N. Cerardi, Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control, Journal of Fluid Mechanics, 865 (2019), 281-302. doi: 10.1017/jfm.2019.62.
    [47] M. Raissi, P. Perdikaris and G. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics, 378 (2019), 686-707. doi: 10.1016/j.jcp.2018.10.045.
    [48] M. Raissi, Z. Wang, M. S. Triantafyllou and G. E. Karniadakis, Deep learning of vortex-induced vibrations, Journal of Fluid Mechanics, 861 (2019), 119-137. doi: 10.1017/jfm.2018.872.
    [49] P. Sagaut, Multiscale and Multiresolution Approaches in Turbulence: LES, DES and Hybrid RANS/LES Methods: Applications and Guidelines, World Scientific, 2013. doi: 10.1142/p878.
    [50] M. Samorani, The wind farm layout optimization problem, in Handbook of Wind Power Systems (eds. P. M. Pardalos, S. Rebennack, M. V. F. Pereira, N. A. Iliadis and V. Pappu) doi: 10.1007/978-3-642-41080-2_2.
    [51] J. U. Schlüter, H. Pitsch and P. Moin, Large-eddy simulation inflow conditions for coupling with Reynolds-averaged flow solvers, AIAA Journal, 42 (2004), 478–484. doi: 10.2514/1.3488.
    [52] X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong and W.-C. Woo, Convolutional LSTM network: A machine learning approach for precipitation nowcasting, in Advances in Neural Information Processing Systems 28, Curran Associates, Inc., 2015, 802–810. Available from: http://papers.nips.cc/paper/5955-convolutional-lstm-network-a-machine-learningapproach-for-precipitation-nowcasting.pdf
    [53] J. Smagorinsky, General circulation experiments with the primitive equations: I. The basic experiment, Monthly Weather Review, 91 (1963), 99-164.  doi: 10.1175/1520-0493(1963)091<0099:GCEWTP>2.3.CO;2.
    [54] I. Sobel and G. Feldman, A 3x3 isotropic gradient operator for image processing, Presented at a talk at the Stanford Artificial Intelligence Project, 271–272.
    [55] C. G. Speziale, Computing non-equilibrium turbulent flows with time-dependent RANS and VLES, in Fifteenth International Conference on Numerical Methods in Fluid Dynamics, Springer, 1997, 123–129. doi: 10.1007/BFb0107089.
    [56] A. Subramaniam, M. L. Wong, R. D. Borker, S. Nimmagadda and S. K. Lele, Turbulence enrichment using generative adversarial networks, preprint, arXiv: 2003.01907.
    [57] L. Sun, H. Gao, S. Pan and J.-X. Wang, Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data, Computer Methods in Applied Mechanics and Engineering, 361 (2020), 112732. doi: 10.1016/j.cma.2019.112732.
    [58] E. G. Tabak and C. V. Turner, A family of nonparametric density estimation algorithms, Communications on Pure and Applied Mathematics, 66 (2013), 145-164.  doi: 10.1002/cpa.21423.
    [59] E. G. Tabak and E. Vanden-Eijnden, et al., Density estimation by dual ascent of the log-likelihood, Communications in Mathematical Sciences, 8 (2010), 217-233.  doi: 10.4310/CMS.2010.v8.n1.a11.
    [60] S. Taghizadeh, F. D. Witherden and S. S. Girimaji, Turbulence closure modeling with data-driven techniques: Physical compatibility and consistency considerations, preprint, arXiv: 2004.03031.
    [61] M. Terracol, E. Manoha, C. Herrero, E. Labourasse, S. Redonnet and P. Sagaut, Hybrid methods for airframe noise numerical prediction, Theoretical and Computational Fluid Dynamics, 19 (2005), 197-227. doi: 10.1007/s00162-005-0165-5.
    [62] M. Terracol, P. Sagaut and C. Basdevant, A multilevel algorithm for large-eddy simulation of turbulent compressible flows, Journal of Computational Physics, 167 (2001), 439-474. doi: 10.1016/S0021-9991(02)00017-7.
    [63] J. Tompson, K. Schlachter, P. Sprechmann and K. Perlin, Accelerating Eulerian fluid simulation with convolutional networks, in Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, JMLR.org, 2017, 3424–3433. Available from: http://dl.acm.org/citation.cfm?id=3305890.3306035.
    [64] A. Travin, M. Shur, M. Strelets and P. R. Spalart, Physical and numerical upgrades in the detached-eddy simulation of complex turbulent flows, in Advances in LES of Complex Flows (eds. R. Friedrich and W. Rodi), Springer Netherlands, Dordrecht, 2002, 239–254. doi: 10.1007/0-306-48383-1_16.
    [65] Y.-H. Tseng, C. Meneveau and M. B. Parlange, Modeling flow around bluff bodies and predicting urban dispersion using large eddy simulation, Environmental Science & Technology, 40 (2006), 2653-2662. doi: 10.1021/es051708m.
    [66] J.-X. Wang, J.-L. Wu and H. Xiao, Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data, Phys. Rev. Fluids, 2 (2017), 034603. doi: 10.1103/PhysRevFluids.2.034603.
    [67] Z. Wang, K. Luo, D. Li, J. Tan and J. Fan, Investigations of data-driven closure for subgrid-scale stress in large-eddy simulation, Physics of Fluids, 30 (2018), 125101. doi: 10.1063/1.5054835.
    [68] M. Werhahn, Y. Xie, M. Chu and N. Thuerey, A multi-pass GAN for fluid flow super-resolution, preprint, arXiv: 1906.01689. doi: 10.1145/3340251.
    [69] S. WiewelM. Becher and N. Thuerey, Latent space physics: Towards learning the temporal evolution of fluid flow, Computer Graphics Forum, 38 (2019), 71-82.  doi: 10.1111/cgf.13620.
    [70] J. Wu, H. Xiao, R. Sun and Q. Wang, Reynolds-averaged Navier-Stokes equations with explicit data-driven Reynolds stress closure can be ill-conditioned, Journal of Fluid Mechanics, 869 (2019), 553-586. doi: 10.1017/jfm.2019.205.
    [71] H. Xiao, J.-L. Wu, J.-X. Wang, R. Sun and C. Roy, Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier-Stokes simulations: A data-driven, physics-informed Bayesian approach, Journal of Computational Physics, 324 (2016), 115-136. doi: 10.1016/j.jcp.2016.07.038.
    [72] W. Xiong, W. Luo, L. Ma, W. Liu and J. Luo, Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, 2364–2373. doi: 10.1109/CVPR.2018.00251.
    [73] Y. Yang and P. Perdikaris, Adversarial uncertainty quantification in physics-informed neural networks, Journal of Computational Physics, 394 (2019), 136-152.  doi: 10.1016/j.jcp.2019.05.027.
    [74] L. Zhao, X. Peng, Y. Tian, M. Kapadia and D. Metaxas, Learning to forecast and refine residual motion for image-to-video generation, in Proceedings of the European Conference on Computer Vision (ECCV), 2018, 387–403. doi: 10.1007/978-3-030-01267-0_24.
    [75] Y. Zhu and N. Zabaras, Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification, Journal of Computational Physics, 366 (2018), 415-447.  doi: 10.1016/j.jcp.2018.04.018.
    [76] Y. Zhu, N. Zabaras, P.-S. Koutsourelakis and P. Perdikaris, Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data, Journal of Computational Physics, 394 (2019), 56-81. doi: 10.1016/j.jcp.2019.05.024.
