Discrete and Continuous Dynamical Systems - S

April 2022 , Volume 15 , Issue 4

Issue on stochastic computing in data science

Feng Bao
2022, 15(4): i-i doi: 10.3934/dcdss.2022057
A dictionary learning algorithm for compression and reconstruction of streaming data in preset order
Richard Archibald and Hoang Tran
2022, 15(4): 655-668 doi: 10.3934/dcdss.2021102

There has been emerging interest in developing and applying dictionary learning (DL) to process massive datasets over the last decade. Many of these efforts, however, focus on employing DL to compress data and extract a set of important features, while treating the restoration of the original data from this set as a secondary goal. On the other hand, although several methods can process streaming data by updating the dictionary incrementally as new snapshots pass by, most of those algorithms are designed for the setting where the snapshots are randomly drawn from a probability distribution. In this paper, we present a new DL approach to compress and denoise massive datasets in real time, in which the data are streamed through in a preset order (for instance, videos and temporal experimental data), so that at any time we can only observe a biased sample set of the whole dataset. Our approach builds up the dictionary incrementally in a relatively simple manner: if the new snapshot is adequately explained by the current dictionary, we perform sparse coding to find its sparse representation; otherwise, we add the new snapshot to the dictionary, with a Gram-Schmidt process to maintain orthogonality. To compress and denoise noisy datasets, we apply denoising to the snapshot directly before sparse coding, which deviates from the traditional dictionary learning approach of achieving denoising via sparse coding. Compared to full-batch matrix decomposition methods, where the whole dataset is kept in memory, and other mini-batch approaches, where unbiased sampling is often assumed, our approach has minimal requirements on data sampling and storage: i) each snapshot is seen only once and then discarded, and ii) the snapshots are drawn in a preset order, so they can be highly biased.
Through experiments on climate simulations and scanning transmission electron microscopy (STEM) data, we demonstrate that the proposed approach performs competitively with those methods in data reconstruction and denoising.
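As a hedged illustration of the incremental update described above (a minimal sketch, not the authors' implementation; the tolerance `tol` and sparsity level `k` are assumed parameters):

```python
import numpy as np

def stream_dictionary(snapshots, tol=1e-2, k=5):
    """One-pass dictionary building: each snapshot is seen once, then discarded.

    If the current orthonormal dictionary D explains a snapshot to relative
    tolerance `tol`, only its sparse code is kept; otherwise the normalized
    residual (already orthogonal to span(D)) is appended as a new atom,
    which is exactly a Gram-Schmidt step.
    """
    D, codes = None, []
    for y in snapshots:
        if D is None:
            D = (y / np.linalg.norm(y)).reshape(-1, 1)
        c = D.T @ y                      # full projection coefficients
        r = y - D @ c                    # residual, orthogonal to span(D)
        if np.linalg.norm(r) > tol * np.linalg.norm(y):
            D = np.hstack([D, (r / np.linalg.norm(r)).reshape(-1, 1)])
            c = np.append(c, np.linalg.norm(r))
        if len(c) > k:                   # sparse coding: keep k largest coefficients
            c[np.argsort(np.abs(c))[:-k]] = 0.0
        codes.append(c)
    return D, codes
```

The hard thresholding here is a stand-in for the paper's sparse-coding step; the point of the sketch is the project-or-append structure.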

Solving the linear transport equation by a deep neural network approach
Zheng Chen, Liu Liu and Lin Mu
2022, 15(4): 669-686 doi: 10.3934/dcdss.2021070

In this paper, we study the linear transport model by adopting a deep learning method, in particular a deep neural network (DNN) approach. While interest in using DNNs to study partial differential equations is rising, here we adapt the approach to kinetic models, in particular the linear transport model. Moreover, we provide a theoretical analysis of the convergence of the neural network and its approximated solution towards the analytic solution. We demonstrate the accuracy and effectiveness of the proposed DNN method in numerical experiments.

Stable numerical methods for a stochastic nonlinear Schrödinger equation with linear multiplicative noise
Xiaobing Feng and Shu Ma
2022, 15(4): 687-711 doi: 10.3934/dcdss.2021071

This paper is concerned with fully discrete finite element approximations of a stochastic nonlinear Schrödinger (sNLS) equation with linear multiplicative noise of the Stratonovich type. The goal of studying the sNLS equation is to understand the role played by the noise in a possible delay or prevention of the collapsing and/or blow-up of the solution to the sNLS equation. In the paper we first carry out a detailed analysis of the properties of the solution, which lays down a theoretical foundation and guidance for the numerical analysis; we then present a family of three-parameter fully discrete finite element methods which differ mainly in their time discretizations and contain many well-known schemes (such as the explicit and implicit Euler schemes and the Crank-Nicolson scheme) with different combinations of time discretization strategies. The prototypical $\theta$-schemes are analyzed in detail, and various stability properties are established for their numerical solutions. An extensive numerical study and performance comparison are also presented for the proposed fully discrete finite element schemes.
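For orientation, the $\theta$-scheme family can be sketched on a scalar linear test SDE with multiplicative noise (an illustrative time-discretization sketch only, not the paper's finite element method; the test equation and parameters are assumptions):

```python
import numpy as np

def theta_scheme(lam, mu, x0, N, h, theta, dW):
    """Drift-implicit theta-scheme for the scalar test SDE dX = lam*X dt + mu*X dW.

    theta = 0 gives the explicit Euler scheme, theta = 1 the implicit Euler
    scheme, and theta = 1/2 a Crank-Nicolson (trapezoidal) drift treatment.
    """
    x = np.full(dW.shape[0], float(x0))
    for n in range(N):
        # (1 - theta*lam*h) X_{n+1} = (1 + (1-theta)*lam*h + mu*dW_n) X_n
        x *= (1.0 + (1.0 - theta) * lam * h + mu * dW[:, n]) / (1.0 - theta * lam * h)
    return x

rng = np.random.default_rng(0)
M, N, h = 4000, 10, 0.1
dW = rng.normal(scale=np.sqrt(h), size=(M, N))     # Brownian increments, M paths
x_cn = theta_scheme(-2.0, 0.5, 1.0, N, h, 0.5, dW) # Crank-Nicolson run to T = 1
```

The drift-implicit update stays linear, so the "implicit solve" reduces to a scalar division; for stiff drift the theta = 1 scheme remains stable where the explicit scheme blows up.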

Stochastic quasi-subgradient method for stochastic quasi-convex feasibility problems
Gang Li, Minghua Li and Yaohua Hu
2022, 15(4): 713-725 doi: 10.3934/dcdss.2021127

The feasibility problem is at the core of modeling many problems in various disciplines of mathematics and the physical sciences, and quasi-convex functions are widely applied in fields such as economics, finance, and management science. In this paper, we consider the stochastic quasi-convex feasibility problem (SQFP), which is to find a common point of infinitely many sublevel sets of quasi-convex functions. Inspired by the idea of a stochastic index scheme, we propose a stochastic quasi-subgradient method to solve the SQFP, in which the quasi-subgradients of a random (and finite) index set of component quasi-convex functions at the current iterate are used to construct the descent direction at each iteration. Moreover, we introduce a notion of Hölder-type error bound property relative to the random control sequence for the SQFP, and use it to establish a global convergence theorem and convergence rate theory for the stochastic quasi-subgradient method. It is revealed in this paper that the stochastic quasi-subgradient method enjoys both a low computational cost and fast convergence.
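A minimal sketch of the random-index subgradient feasibility iteration, using affine component functions (affine functions are quasi-convex, so their sublevel sets fit the setting; the problem instance, step rule, and parameters below are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 5, 50
A = rng.normal(size=(m, d))
b = A @ rng.normal(size=d) + 0.5       # sublevel sets {x : a_i.x - b_i <= 0} share an interior point
f = lambda x: A @ x - b                # affine (hence quasi-convex) component functions

x = 5.0 * rng.normal(size=d)
for k in range(20000):
    idx = rng.choice(m, size=10, replace=False)    # random finite index set
    vals = f(x)[idx]
    if vals.max() > 0.0:                           # step only when a sampled set is violated
        i = idx[np.argmax(vals)]
        g = A[i]                                   # (quasi-)subgradient of f_i at x
        x = x - ((A[i] @ x - b[i]) / (g @ g)) * g  # Polyak-type step onto the sublevel set
max_violation = float(np.max(f(x)))
```

Each step projects the iterate onto the most violated sampled sublevel set; the iterates are Fejér monotone with respect to the feasible region.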

A drift homotopy implicit particle filter method for nonlinear filtering problems
Xin Li, Feng Bao and Kyle Gallivan
2022, 15(4): 727-746 doi: 10.3934/dcdss.2021097

In this paper, we develop a drift homotopy implicit particle filter method. The methodology of our approach is to adopt the concept of drift homotopy in the resampling procedure of the particle filter for solving the nonlinear filtering problem, and we introduce an implicit particle filter method to improve the efficiency of the drift homotopy resampling procedure. Numerical experiments are carried out to demonstrate the effectiveness and efficiency of our drift homotopy implicit particle filter.
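For context, the baseline that drift-homotopy resampling refines is the plain bootstrap particle filter with multinomial resampling; a sketch on a scalar linear-Gaussian model (the model and noise levels are illustrative assumptions, not the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(3)
T, M = 60, 500                         # time steps, particles
q, r = 0.5, 1.0                        # process / observation noise std

x_true = np.zeros(T)
for t in range(1, T):                  # AR(1) state: x_t = 0.9 x_{t-1} + q*xi_t
    x_true[t] = 0.9 * x_true[t - 1] + q * rng.normal()
y = x_true + r * rng.normal(size=T)    # noisy observations

p = rng.normal(size=M)                 # initial particle ensemble
est = np.zeros(T)
for t in range(1, T):
    p = 0.9 * p + q * rng.normal(size=M)          # propagate through the state model
    w = np.exp(-0.5 * ((y[t] - p) / r) ** 2)      # likelihood weights
    w /= w.sum()
    est[t] = w @ p                                 # posterior-mean estimate
    p = p[rng.choice(M, size=M, p=w)]              # multinomial resampling

rmse_filter = float(np.sqrt(np.mean((est[1:] - x_true[1:]) ** 2)))
rmse_obs = float(np.sqrt(np.mean((y[1:] - x_true[1:]) ** 2)))
```

The resampling line is where drift homotopy would intervene, moving resampled particles toward high-likelihood regions instead of duplicating them.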

ISALT: Inference-based schemes adaptive to large time-stepping for locally Lipschitz ergodic systems
Xingjie Helen Li, Fei Lu and Felix X.-F. Ye
2022, 15(4): 747-771 doi: 10.3934/dcdss.2021103

Efficient simulation of SDEs is essential in many applications, particularly for ergodic systems that demand efficient simulation of both short-time dynamics and large-time statistics. However, locally Lipschitz SDEs often require special treatments, such as implicit schemes with small time-steps, to accurately simulate the ergodic measures. We introduce a framework to construct inference-based schemes adaptive to large time-steps (ISALT) from data, achieving a reduction in time by several orders of magnitude. The key is the statistical learning of an approximation to the infinite-dimensional discrete-time flow map. We explore the use of numerical schemes (such as the Euler-Maruyama, the hybrid RK4, and an implicit scheme) to derive informed basis functions, leading to a parameter inference problem. We introduce a scalable algorithm to estimate the parameters by least squares, and we prove the convergence of the estimators as the data size increases.

We test ISALT on three non-globally Lipschitz SDEs: the 1D double-well potential, a 2D multiscale gradient system, and the 3D stochastic Lorenz equation with degenerate noise. Numerical results show that ISALT can tolerate time-steps far larger than plain numerical schemes allow. It reaches optimal accuracy in reproducing the invariant measure when the time-step is medium-large.
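The inference step can be sketched on the 1D double-well example with a single Euler-Maruyama-informed basis function (a deliberately minimal sketch under assumed parameters; the paper uses richer bases and a more careful estimator):

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: x - x**3                 # double-well drift
sigma, dt, gap = 0.5, 1e-3, 100        # fine step dt; coarse step H = gap*dt
N = 100_000
xi = rng.normal(size=N - 1)
x = np.empty(N); x[0] = 1.0
for n in range(N - 1):                 # fine-scale Euler-Maruyama "data"
    x[n + 1] = x[n] + f(x[n]) * dt + sigma * np.sqrt(dt) * xi[n]

H = gap * dt
X = x[::gap]                           # observe only every gap-th state (large time-step)
phi = f(X[:-1]) * H                    # informed basis: the Euler-Maruyama increment
dX = np.diff(X)
c = float((phi @ dX) / (phi @ phi))    # flow-map coefficient, by least squares
sig_hat = float(np.sqrt(np.mean((dX - c * phi) ** 2) / H))
```

The fitted coefficient `c` differs from 1 because the large time-step flow map is not the Euler-Maruyama map; that correction is what the inference supplies.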

Explicit multistep stochastic characteristic approximation methods for forward backward stochastic differential equations
Ying Liu, Yabing Sun and Weidong Zhao
2022, 15(4): 773-795 doi: 10.3934/dcdss.2021044

In this work, by combining with stochastic approximation methods, we propose a new explicit multistep scheme for solving forward backward stochastic differential equations. Compared with the scheme constructed using the derivative approximation method, the new one covers the approximation of the stochastic part and is more accurate and easier to implement. Several numerical tests are presented to show the stability and effectiveness of the proposed scheme.

Bayesian topological signal processing
Christopher Oballe, Alan Cherne, Dave Boothe, Scott Kerick, Piotr J. Franaszczuk and Vasileios Maroulas
2022, 15(4): 797-817 doi: 10.3934/dcdss.2021084

Topological data analysis encompasses a broad set of techniques that investigate the shape of data. One of the predominant tools in topological data analysis is persistent homology, which is used to create topological summaries of data called persistence diagrams. Persistent homology offers a novel method for signal analysis. Herein, we aid interpretation of the sublevel set persistence diagrams of signals by 1) showing the effect of frequency and instantaneous amplitude on the persistence diagrams for a family of deterministic signals, and 2) providing a general equation for the probability density of persistence diagrams of random signals via a pushforward measure. We also provide a topologically-motivated, efficiently computable statistical descriptor analogous to the power spectral density for signals based on a generalized Bayesian framework for persistence diagrams. This Bayesian descriptor is shown to be competitive with power spectral densities and continuous wavelet transforms at distinguishing signals with different dynamics in a classification problem with autoregressive signals.
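The sublevel-set persistence diagrams in item 1 can be computed for a sampled 1-D signal with a small union-find sweep (a generic 0-dimensional persistence sketch, not the paper's Bayesian machinery):

```python
def sublevel_persistence(f):
    """Birth-death pairs of the 0-dim sublevel-set filtration of a 1-D signal."""
    n = len(f)
    parent = [None] * n                              # None = sample not yet added
    birth = [None] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]            # path compression
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda i: f[i]):   # sweep values bottom-up
        parent[i] = i
        birth[i] = f[i]
        for j in (i - 1, i + 1):                     # merge with added neighbors
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:            # elder rule: older component survives
                    ri, rj = rj, ri
                if birth[rj] < f[i]:                 # skip zero-persistence pairs
                    pairs.append((birth[rj], f[i]))
                parent[rj] = ri
    pairs.append((min(f), float('inf')))             # the global minimum never dies
    return pairs
```

Each local minimum is born when the sweep reaches its value and dies when its component merges into an older one; the pair (birth, death) is one point of the persistence diagram.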

Numerical methods preserving multiple Hamiltonians for stochastic Poisson systems
Lijin Wang, Pengjun Wang and Yanzhao Cao
2022, 15(4): 819-836 doi: 10.3934/dcdss.2021095

In this paper, we propose a class of numerical schemes for stochastic Poisson systems with multiple invariant Hamiltonians. The method is based on the average vector field discrete gradient and an orthogonal projection technique. The proposed schemes preserve all the invariant Hamiltonians of the stochastic Poisson systems simultaneously, while allowing for high convergence orders. We also prove that our numerical schemes preserve the Casimir functions of the systems under certain conditions. Numerical experiments verify the theoretical results and illustrate the effectiveness of our schemes.
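For intuition on invariant preservation by discrete-gradient methods: on a linear Hamiltonian system the average vector field method reduces to the implicit midpoint rule, which conserves the Hamiltonian exactly (a deterministic sketch of the idea only; the paper treats stochastic Poisson systems with multiple Hamiltonians):

```python
import numpy as np

# Harmonic oscillator: z = (q, p), H(z) = (q^2 + p^2)/2, dz/dt = A z with A skew-symmetric
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
h, N = 0.1, 1000
I = np.eye(2)
# Implicit midpoint map: (I - hA/2) z_{n+1} = (I + hA/2) z_n  (a Cayley transform)
step = np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)

z = np.array([1.0, 0.0])
H0 = 0.5 * z @ z
for _ in range(N):
    z = step @ z
energy_drift = abs(0.5 * z @ z - H0)
```

The Cayley transform of a skew-symmetric matrix is orthogonal, so the quadratic invariant is preserved to rounding error over arbitrarily many steps.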

Resampled ensemble Kalman inversion for Bayesian parameter estimation with sequential data
Jiangqi Wu, Linjie Wen and Jinglai Li
2022, 15(4): 837-850 doi: 10.3934/dcdss.2021045

Many real-world problems require estimating parameters of interest in a Bayesian framework from data that are collected sequentially in time. Conventional methods for sampling the posterior distributions, such as Markov chain Monte Carlo, cannot deal with such problems efficiently, as they do not take advantage of the sequential structure. To this end, ensemble Kalman inversion (EnKI), which updates the particles whenever a new collection of data arrives, has become a popular tool for solving this type of problem. In this work we present a method to improve the performance of EnKI, which removes particles that significantly deviate from the posterior distribution via a resampling procedure. Specifically, we adopt an idea developed in the sequential Monte Carlo sampler and simplify it to compute an approximate weight function. Finally, we use the computed weights to identify and remove those particles seriously deviating from the target distribution. With numerical examples, we demonstrate that, without requiring any additional evaluations of the forward model, the proposed method can improve the performance of standard EnKI in a certain class of problems.
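A sketch of the EnKI update followed by a simplified weight-based resampling step, on a linear forward model (the model, weight function, and resampling rule here are illustrative simplifications, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(5)
G = np.array([[1.0, 0.5], [0.2, 1.0]])            # linear forward model
u_true = np.array([1.0, -1.0])
s = 0.1                                            # observation noise std
y = G @ u_true + s * rng.normal(size=2)
Gamma = s**2 * np.eye(2)

J = 200
u = 2.0 * rng.normal(size=(J, 2))                  # prior ensemble
for _ in range(20):                                # EnKI iterations
    g = u @ G.T                                    # forward-model evaluations
    du, dg = u - u.mean(0), g - g.mean(0)
    Cug, Cgg = du.T @ dg / J, dg.T @ dg / J        # empirical covariances
    yj = y + s * rng.normal(size=(J, 2))           # perturbed observations
    u = u + (yj - g) @ np.linalg.solve(Cgg + Gamma, Cug.T)

# approximate importance weights, used to flag particles far from the posterior
res = y - u @ G.T
misfit = np.einsum('ij,jk,ik->i', res, np.linalg.inv(Gamma), res)
w = np.exp(-0.5 * (misfit - misfit.min()))
w /= w.sum()
u = u[rng.choice(J, size=J, p=w)]                  # simplified resampling step
```

No additional forward-model evaluations are needed for the weights, since the residuals are already available from the last EnKI iteration.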

Highly accurate operator factorization methods for the integral fractional Laplacian and its generalization
Yixuan Wu and Yanzhi Zhang
2022, 15(4): 851-876 doi: 10.3934/dcdss.2022016

In this paper, we propose a new class of operator factorization methods to discretize the integral fractional Laplacian $(-\Delta)^{\alpha/2}$ for $\alpha \in (0, 2)$. One main advantage is that our method can easily increase the numerical accuracy by using high-degree Lagrange basis functions, while keeping its scheme structure and computer implementation unchanged. Moreover, it results in a symmetric (multilevel) Toeplitz differentiation matrix, enabling efficient computation via fast Fourier transforms. If constant or linear basis functions are used, our method has an accuracy of $\mathcal{O}(h^2)$, while $\mathcal{O}(h^4)$ for quadratic basis functions, with $h$ a small mesh size. This accuracy can be achieved for any $\alpha \in (0, 2)$ and can be further increased if higher-degree basis functions are chosen. Numerical experiments are provided to approximate the fractional Laplacian and solve fractional Poisson problems. They show that if the solution of the fractional Poisson problem satisfies $u \in C^{m,l}(\bar{\Omega})$ for $m \in \mathbb{N}$ and $0 < l < 1$, our method has an accuracy of $\mathcal{O}(h^{\min\{m+l,\,2\}})$ for constant and linear basis functions, and $\mathcal{O}(h^{\min\{m+l,\,4\}})$ for quadratic basis functions. Additionally, our method can be readily applied to approximate generalized fractional Laplacians with symmetric kernel functions, and a numerical study on the tempered fractional Poisson problem demonstrates its efficiency.
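The FFT-based application of a symmetric Toeplitz differentiation matrix works by circulant embedding; a generic sketch (illustrative of the linear-algebra trick only, not tied to the paper's discretization):

```python
import numpy as np

def toeplitz_matvec(c, x):
    """y = T x for a symmetric Toeplitz matrix T with first column c, in O(n log n).

    T is embedded into a 2n x 2n circulant matrix, which the FFT diagonalizes,
    so the product costs three FFTs instead of a dense O(n^2) matvec.
    """
    n = len(x)
    col = np.concatenate([c, [0.0], c[:0:-1]])        # circulant embedding (size 2n)
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real
```

For multilevel Toeplitz matrices the same embedding is applied dimension by dimension with multidimensional FFTs.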

Analytic continuation of noisy data using Adams Bashforth residual neural network
Xuping Xie, Feng Bao, Thomas Maier and Clayton Webster
2022, 15(4): 877-892 doi: 10.3934/dcdss.2021088

We propose a data-driven learning framework for the analytic continuation problem in numerical quantum many-body physics. Designing an accurate and efficient framework for the analytic continuation of imaginary-time computational data is a grand challenge that has hindered meaningful links with experimental data. The standard Maximum Entropy (MaxEnt)-based method is limited by the quality of the computational data and the availability of prior information. Moreover, MaxEnt cannot solve the inversion problem under high levels of noise in the data. Here we introduce a novel learning model for the analytic continuation problem using an Adams-Bashforth residual neural network (AB-ResNet). The advantage of this deep learning network is that it is model independent and, therefore, does not require prior information concerning the quantity of interest given by the spectral function. More importantly, the ResNet-based model achieves higher accuracy than MaxEnt for data with higher levels of noise. Finally, numerical examples show that the developed AB-ResNet is able to recover the spectral function with accuracy comparable to MaxEnt when the noise level is relatively small.
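The Adams-Bashforth residual block chains states with the two-step AB update; with the learned network replaced by a known vector field for illustration (a sketch of the block structure only, under the assumption that the residual map plays the role of a time derivative):

```python
import numpy as np

def ab2_step(x_prev, x_curr, f_prev, f_curr, h):
    """Two-step Adams-Bashforth update; in AB-ResNet, f is a learned network."""
    return x_curr + h * (1.5 * f_curr - 0.5 * f_prev)

f = lambda x: -x                        # stand-in for the learned residual map
h, N = 0.01, 100
x0 = 1.0
x1 = x0 + h * f(x0 + 0.5 * h * f(x0))   # one RK2 start-up step for the 2-step method
xp, xc = x0, x1
for _ in range(N - 1):
    xp, xc = xc, ab2_step(xp, xc, f(xp), f(xc), h)
```

The second-order multistep structure is what distinguishes the AB-ResNet block from a plain (forward-Euler-like) residual block.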

A stochastic collocation method based on sparse grids for a stochastic Stokes-Darcy model
Zhipeng Yang, Xuejian Li, Xiaoming He and Ju Ming
2022, 15(4): 893-912 doi: 10.3934/dcdss.2021104

In this paper, we develop a sparse grid stochastic collocation method to improve the computational efficiency in handling the steady Stokes-Darcy model with random hydraulic conductivity. To represent the random hydraulic conductivity, the truncated Karhunen-Loève expansion is used. For the discrete form in probability space, we adopt the stochastic collocation method and then use the Smolyak sparse grid method to improve the efficiency. For the uncoupled deterministic subproblems at collocation nodes, we apply the general coupled finite element method. Numerical experiment results are presented to illustrate the features of this method, such as the sample size, convergence, and randomness transmission through the interface.
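The truncated Karhunen-Loève step can be sketched as a discrete eigen-expansion of the covariance of the random field (the exponential covariance, grid, and truncation rank below are illustrative assumptions):

```python
import numpy as np

n, ell, r = 100, 0.5, 10               # grid size, correlation length, truncation rank
t = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)   # exponential covariance matrix

lam, phi = np.linalg.eigh(C)           # eigenpairs, ascending
lam, phi = lam[::-1], phi[:, ::-1]     # reorder descending

# one field sample: kappa(t) = sum_k sqrt(lam_k) * phi_k(t) * xi_k, xi_k ~ N(0,1)
rng = np.random.default_rng(7)
xi = rng.normal(size=r)
sample = phi[:, :r] @ (np.sqrt(lam[:r]) * xi)

captured = float(lam[:r].sum() / lam.sum())   # fraction of total variance retained
```

The rapid eigenvalue decay is what makes a modest truncation rank sufficient, and each retained mode becomes one stochastic dimension for the sparse-grid collocation.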

An out-of-distribution-aware autoencoder model for reduced chemical kinetics
Pei Zhang, Siyan Liu, Dan Lu, Ramanan Sankaran and Guannan Zhang
2022, 15(4): 913-930 doi: 10.3934/dcdss.2021138

While detailed chemical kinetic models have been successful in representing rates of chemical reactions in continuum-scale computational fluid dynamics (CFD) simulations, applying these models in simulations of engineering device conditions is computationally prohibitive. To reduce the cost, data-driven methods, e.g., autoencoders, have been used to construct reduced chemical kinetic models for CFD simulations. Despite their success, data-driven methods rely heavily on the training data sets and can be unreliable when used in out-of-distribution (OOD) regions (i.e., when extrapolating outside of the training set). In this paper, we present an enhanced autoencoder model for combustion chemical kinetics with uncertainty quantification that enables the detection of model usage in OOD regions, thereby creating an OOD-aware autoencoder model that contributes to more robust CFD simulations of reacting flows. We first demonstrate the effectiveness of the method for OOD detection on two well-known datasets, MNIST and Fashion-MNIST, in comparison with the deep ensemble method, and then present the OOD-aware autoencoder for a reduced chemistry model of syngas combustion.
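The OOD-detection idea, flagging inputs whose reconstruction error is large relative to the training distribution, can be illustrated with a linear autoencoder (PCA) as a stand-in for the paper's deep autoencoder with uncertainty quantification (the data, latent dimension, and threshold rule are assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)
# in-distribution data live near a 3-dim subspace of R^10
W = rng.normal(size=(10, 3))
X_train = rng.normal(size=(2000, 3)) @ W.T + 0.05 * rng.normal(size=(2000, 10))

mu = X_train.mean(0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
V = Vt[:3].T                                     # linear encoder/decoder: top-3 components

def recon_error(X):
    Z = (X - mu) @ V                             # encode
    return np.linalg.norm(X - (mu + Z @ V.T), axis=1)   # decode and compare

tau = np.quantile(recon_error(X_train), 0.99)    # OOD threshold from training data
X_ood = 3.0 * rng.normal(size=(500, 10))         # isotropic points off the subspace
ood_rate = float(np.mean(recon_error(X_ood) > tau))
```

In-distribution points reconstruct well through the low-dimensional bottleneck, while points off the training manifold do not, which is the signal the OOD-aware model exploits.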

Augmented Gaussian random field: Theory and computation
Sheng Zhang, Xiu Yang, Samy Tindel and Guang Lin
2022, 15(4): 931-957 doi: 10.3934/dcdss.2021098

We propose the novel augmented Gaussian random field (AGRF), a universal framework incorporating data on an observable and its derivatives of any order. Rigorous theory is established. We prove that under certain conditions, the observable and its derivatives of any order are governed by a single Gaussian random field, which is the aforementioned AGRF. As a corollary, the statement "the derivative of a Gaussian process remains a Gaussian process" is validated, since the derivative is represented by a part of the AGRF. Moreover, a computational method corresponding to the universal AGRF framework is constructed. Both noiseless and noisy scenarios are considered. Formulas for the posterior distributions are deduced in closed form. A significant advantage of our computational method is that the universal AGRF framework provides a natural way to incorporate derivatives of arbitrary order and deal with missing data. We use four numerical examples to demonstrate the effectiveness of the computational method: a composite function, a damped harmonic oscillator, the Korteweg-de Vries equation, and Burgers' equation.
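The idea of jointly modeling an observable and its derivative can be sketched with a squared-exponential kernel, whose derivative cross-covariances are available in closed form (a standard GP-with-derivatives construction under assumed kernel and observation points, not the paper's full AGRF framework):

```python
import numpy as np

ell = 1.0
k    = lambda x, y: np.exp(-(x - y)**2 / (2 * ell**2))             # cov(f(x), f(y))
k_d  = lambda x, y: ((x - y) / ell**2) * k(x, y)                   # cov(f(x), f'(y))
k_dd = lambda x, y: (1 / ell**2 - (x - y)**2 / ell**4) * k(x, y)   # cov(f'(x), f'(y))

def block(fun, a, b):
    return fun(a[:, None], b[None, :])

xf = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # where f = sin is observed
xd = np.array([0.5, 1.5, 2.5])                 # where f' = cos is observed
u = np.concatenate([np.sin(xf), np.cos(xd)])   # joint observation vector

K = np.block([[block(k, xf, xf),     block(k_d, xf, xd)],
              [block(k_d, xf, xd).T, block(k_dd, xd, xd)]])

xs = 1.5                                       # prediction point
ks = np.concatenate([block(k, np.array([xs]), xf).ravel(),
                     block(k_d, np.array([xs]), xd).ravel()])
mean = float(ks @ np.linalg.solve(K + 1e-8 * np.eye(len(u)), u))
```

Values and derivatives enter one joint covariance matrix, so a single posterior governs both, which is the "single Gaussian random field" point of the abstract.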

Effective Mori-Zwanzig equation for the reduced-order modeling of stochastic systems
Yuanran Zhu and Huan Lei
2022, 15(4): 959-982 doi: 10.3934/dcdss.2021096

Built upon the hypoelliptic analysis of the effective Mori-Zwanzig (EMZ) equation for observables of stochastic dynamical systems, we show that the obtained semigroup estimates for the EMZ equation can be used to derive prior estimates of the observable statistics for systems in equilibrium and nonequilibrium states. In addition, we introduce both first-principle and data-driven methods to approximate the EMZ memory kernel, and we prove the convergence of the data-driven parametrization schemes using the regularity estimate of the memory kernel. The analysis is validated numerically via Monte Carlo simulation of the Langevin dynamics for a Fermi-Pasta-Ulam chain model. With the same example, we also show the effectiveness of the proposed memory kernel approximation methods.
