American Institute of Mathematical Sciences

ISSN: 1930-8337
eISSN: 1930-8345


Inverse Problems & Imaging

August 2013, Volume 7, Issue 3

Special issue dedicated to Tony F. Chan on the occasion of his 60th birthday


2013, 7(3): i-ii doi: 10.3934/ipi.2013.7.3i
Abstract:
In 2012, there were two scientific conferences in honor of Professor Tony F. Chan's 60th birthday. The first one was "The International Conference on Scientific Computing", which took place in Hong Kong from January 4-7. The second one, "The International Conference on the Frontier of Computational and Applied Mathematics", was held at the Institute of Pure and Applied Mathematics (IPAM) of UCLA from June 8-10. Invitations were also sent to conference speakers, participants, and Professor Chan's former colleagues, collaborators, and students to solicit original research papers. After the standard peer-review process, we have collected 23 papers in this special issue dedicated to Professor Chan to celebrate his contributions and leadership in the areas of scientific computing and image processing.

2013, 7(3): 663-678 doi: 10.3934/ipi.2013.7.663
Abstract:
The anisotropic perfectly matched layer (PML) defines a continuous vector field outside a rectangular domain and performs the complex coordinate stretching along the direction of the vector field. In this paper we propose a new way of constructing the vector field which allows us to prove the exponential decay of the stretched Green function without any constraint on the thickness of the PML layer. We report numerical experiments to illustrate the competitive behavior of the proposed PML method.
2013, 7(3): 679-695 doi: 10.3934/ipi.2013.7.679
Abstract:
In this paper, we will investigate the first- and second-order implicit-explicit schemes with parameters for solving the Allen-Cahn equation. It is known that the Allen-Cahn equation satisfies a nonlinear stability property, i.e., the free-energy functional decreases in time. The goal of this paper is to consider implicit-explicit schemes that inherit the nonlinear stability of the continuous model, which will be achieved by properly choosing parameters associated with the implicit-explicit schemes. Theoretical justifications for the nonlinear stability of the schemes will be provided, and the theoretical results will be verified by several numerical examples.
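As a concrete illustration of the idea in this abstract, the following is a minimal sketch of a first-order stabilized implicit-explicit step for the 1D periodic Allen-Cahn equation $u_t = \varepsilon^2 u_{xx} - (u^3 - u)$; the Fourier-spectral discretization, the stabilization constant `S`, and all parameter values are illustrative assumptions, not the schemes analyzed in the paper.

```python
import numpy as np

def allen_cahn_imex(u0, eps=0.05, dt=1e-3, S=2.0, steps=200, L=2*np.pi):
    """First-order stabilized IMEX scheme for u_t = eps^2 u_xx - (u^3 - u),
    periodic on [0, L): diffusion and the stabilization term S*(u^{n+1}-u^n)
    are implicit, the cubic nonlinearity is explicit (illustrative setup)."""
    n = u0.size
    dx = L / n
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)       # spectral wavenumbers
    u = u0.copy()
    energies = []                                  # discrete free energy per step
    for _ in range(steps):
        f = u**3 - u                               # explicit nonlinear term
        rhs = np.fft.fft(u + dt * S * u - dt * f)
        u = np.real(np.fft.ifft(rhs / (1 + dt * S + dt * eps**2 * k**2)))
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
        energies.append(np.sum(0.5 * eps**2 * ux**2 + 0.25 * (u**2 - 1)**2) * dx)
    return u, energies
```

With a suitably large `S` the discrete free energy decreases along the iteration, which is the nonlinear stability property the abstract refers to.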
2013, 7(3): 697-716 doi: 10.3934/ipi.2013.7.697
Abstract:
This paper is devoted to exploring the effects of non-Gaussian fluctuations on the dynamical evolution of a tumor growth model with immunization, subject to non-Gaussian $\alpha$-stable type Lévy noise. The corresponding deterministic model has two meaningful states which represent the state of tumor extinction and the state of stable tumor, respectively. To characterize the time for different initial densities of tumor cells staying in the domain between these two states and the likelihood of crossing this domain, the mean exit time and the escape probability are quantified by numerically solving differential-integral equations with appropriate exterior boundary conditions. The relationships between the dynamical properties and the noise parameters are examined. It is found that at different stages of tumor development, the noise parameters have different influences on the time to, and the likelihood of, tumor extinction. These results are relevant for determining efficient therapeutic regimes to induce the extinction of tumor cells.
2013, 7(3): 717-736 doi: 10.3934/ipi.2013.7.717
Abstract:
We propose iterative thresholding algorithms based on the iterated Tikhonov method for image deblurring problems. Our method is similar in spirit to the modified linearized Bregman algorithm (MLBA) and is therefore easy to implement. In order to obtain good restorations, MLBA requires an accurate estimate of the regularization parameter $\alpha$, which is hard to obtain in real applications. Based on previous results on the iterated Tikhonov method, we design two nonstationary iterative thresholding algorithms which give near-optimal results without estimating $\alpha$. One of them is based on the iterative soft thresholding algorithm and the other is based on MLBA. We show that the nonstationary methods, if they converge, converge to the same minimizers as their stationary variants. Numerical results show that the accuracy and convergence of our nonstationary methods are very robust with respect to changes in the parameters, and the restoration results are comparable to those of MLBA with optimal $\alpha$.
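For reference, the iterative soft thresholding building block mentioned above can be sketched generically; this is the textbook ISTA for $\min_x \frac{1}{2}\|Ax-b\|^2 + \alpha\|x\|_1$, not the paper's nonstationary variants (the step size rule and iteration count are illustrative assumptions).

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise soft shrinkage, the proximal map of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, alpha, step=None, iters=500):
    """Iterative soft thresholding for min 0.5||Ax-b||^2 + alpha||x||_1.
    step defaults to 1/||A||^2, which guarantees convergence."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2)**2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * alpha)
    return x
```

For $A = I$ the iteration reaches the exact minimizer, the soft-thresholded data, in one step.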
2013, 7(3): 737-755 doi: 10.3934/ipi.2013.7.737
Abstract:
In this work, we introduce a numerical method to approximate differential operators and integrals on point clouds sampled from a two-dimensional manifold embedded in $\mathbb{R}^n$. A global mesh structure is usually hard to construct in this case; our method instead relies only on the local mesh structure at each data point, which is constructed through local triangulation in the tangent space obtained by local principal component analysis (PCA). Once the local mesh is available, we propose numerical schemes to approximate differential operators and define mass and stiffness matrices on point clouds, which are utilized to solve partial differential equations (PDEs) and variational problems on point clouds. As numerical examples, we use the proposed local mesh method and variational formulation to solve the Laplace-Beltrami eigenproblem and to solve the Eikonal equation for computing distance maps and tracing geodesics on point clouds.
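The local-PCA step described above, estimating a tangent plane at each data point from its nearest neighbors, can be sketched as follows; the neighborhood size `k` and the brute-force neighbor search are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tangent_basis(points, center_idx, k=8):
    """Estimate a local tangent plane at one point of a point cloud in R^3
    by PCA (SVD) on its k nearest neighbors; returns two orthonormal
    row vectors spanning the estimated tangent plane."""
    p = points[center_idx]
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[1:k+1]] - p      # k nearest neighbors, centered
    _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
    return vt[:2]                                 # top-2 principal directions
```

Projecting the neighbors onto this basis gives the 2D coordinates in which the local triangulation is then built.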
2013, 7(3): 757-775 doi: 10.3934/ipi.2013.7.757
Abstract:
This work is concerned with a direct sampling method (DSM) for inverse acoustic scattering problems using far-field data. Using one or a few incident waves, the DSM provides quite reasonable profiles of scatterers in time-harmonic inverse acoustic scattering without a priori knowledge of either the physical properties or the number of disconnected components of the scatterer. We shall first present a novel derivation of the DSM using far-field data, then carry out a systematic evaluation of the performances and distinctions of the DSM using both near-field and far-field data. A new interpretation from the physical perspective is provided based on some numerical observations. A variety of numerical experiments shows that the method has several interesting and promising features: a) the ability to identify not only medium scatterers, but also obstacles, and even cracks, using measurement data from one or a few incident directions; b) robustness with respect to large noise; and c) computational efficiency, with only inner products involved.
2013, 7(3): 777-794 doi: 10.3934/ipi.2013.7.777
Abstract:
Color image demosaicing consists in recovering full-resolution color information from color-filter-array (CFA) samples with 66.7% of the data missing. Most of the existing color demosaicing methods [14, 25, 16, 2, 26] are based on interpolation from inter-channel correlation and local geometry, which are not robust to highly saturated color images with small geometric features. In this paper, we introduce wavelet frame based methods that use a sparse wavelet [8, 22, 9, 23] approximation of individual color channels and color differences to recover both geometric features and color information. The proposed models can be efficiently solved by the Bregmanized operator splitting algorithm [27]. Numerical simulations on two datasets, McM and Kodak PhotoCD, show that our method outperforms other existing methods in terms of PSNR and visual quality.
2013, 7(3): 795-811 doi: 10.3934/ipi.2013.7.795
Abstract:
A short survey of lower bounds of eigenvalue problems by nonconforming finite element methods is given. The class of eigenvalue problems considered covers Laplace, Steklov, biharmonic and Stokes eigenvalue problems.
2013, 7(3): 813-838 doi: 10.3934/ipi.2013.7.813
Abstract:
Patches are small images (typically $8\times 8$ to $12\times 12$) extracted from natural images. The "patch space" is the set of all observable patches extracted from digital images in the world. This observable space is huge and should permit a sophisticated statistical analysis. In the past ten years, statistical inquiries and applications involving the "patch space" have tried to explore its structure on several levels. The first attempts have invalidated models based on PCA or Fourier analysis. Redundant bases (or patch dictionaries) obtained by independent component analysis (ICA) or related processes have tried to find a reduced set of patches on which every other patch obtains a sparse decomposition. Optimization algorithms such as EM have been used to explore the patch space as a Gaussian mixture. The goal of the present paper is to review this literature, and to extend its methodology to gain more insight on the independent components of the patch space.
The conclusion of our analysis is that the sophisticated ICA tools introduced to analyze the patch space require a previous geometric normalization step to yield nontrivial results. Indeed, we demonstrate by a simple experimental setup and by the analysis of the literature that, without this normalization, the patch space structure is actually hidden by the rotations, translations, and contrast changes. Thus, ICA models applied on a random set of patches boil down to segmenting the patch space along insignificant dimensions such as the patch orientation or the position of its gradient barycenter. When, instead of exploring the raw patches, one decides to explore the quotient of the set of patches under these group actions, a geometrically interpretable patch structure is revealed.
2013, 7(3): 839-861 doi: 10.3934/ipi.2013.7.839
Abstract:
We present a method to enhance the quality of a video sequence captured through a turbulent atmospheric medium, and give an estimate of the radiance of the distant scene, represented as a "latent image," which is assumed to be static throughout the video. Due to atmospheric turbulence, temporal averaging produces a blurred version of the scene's radiance. We propose a method combining Sobolev gradient and Laplacian to stabilize the video sequence, and a latent image is further found utilizing the "lucky region" method. The video sequence is stabilized while keeping sharp details, and the latent image shows more consistent straight edges. We analyze the well-posedness of the stabilizing PDE and the linear stability of the numerical scheme.
2013, 7(3): 863-884 doi: 10.3934/ipi.2013.7.863
Abstract:
We address the problem of surface inpainting, which aims to fill in holes or missing regions on a Riemann surface based on its surface geometry. In practical situations, surfaces obtained from range scanners often have holes or missing regions where the 3D models are incomplete. In order to analyze the 3D shapes effectively, restoring the incomplete shape by filling in the surface holes is necessary. In this paper, we propose a novel conformal approach to inpaint surface holes on a Riemann surface based on its surface geometry. The basic idea is to represent the Riemann surface using its conformal factor and mean curvature. According to Riemann surface theory, a Riemann surface can be uniquely determined by its conformal factor and mean curvature up to a rigid motion. Given a Riemann surface $S$, its mean curvature $H$ and conformal factor $\lambda$ can be computed easily through its conformal parameterization. Conversely, given $\lambda$ and $H$, a Riemann surface can be uniquely reconstructed by solving the Gauss-Codazzi equation on the conformal parameter domain. Hence, the conformal factor and the mean curvature are two geometric quantities fully describing the surface. With this $\lambda$-$H$ representation of the surface, the problem of surface inpainting can be reduced to the problem of image inpainting of $\lambda$ and $H$ on the conformal parameter domain. The inpainting of $\lambda$ and $H$ can be done by conventional image inpainting models. Once $\lambda$ and $H$ are inpainted, a Riemann surface can be reconstructed which effectively restores the 3D surface with missing holes. Since the inpainting model is based on the geometric quantities $\lambda$ and $H$, the restored surface follows the surface geometric pattern as much as possible. We test the proposed algorithm on synthetic data, 3D human face data and MRI-derived brain surfaces.
Experimental results show that our proposed method is an effective surface inpainting algorithm to fill in surface holes on incomplete 3D models based on their surface geometry.
2013, 7(3): 885-906 doi: 10.3934/ipi.2013.7.885
Abstract:
In this paper, we present a novel fuse-before-detect algorithm for multi-view foreground segmentation via fourth-order tensor learning. Using several camera views, most of the existing algorithms first detect the various object features for each view and then fuse the data together for foreground segmentation or tracking. However, such single-view foreground segmentation algorithms often suffer from environmental problems, such as reflections and shadows induced by shiny surfaces, especially floors and walls. These segmentation errors reduce the accuracy of the multi-view tracking algorithms. In the proposed algorithm, we first fuse multi-view camera data into a fourth-order tensor through projections onto multiple parallel planes. An incremental fourth-order tensor learning algorithm is then employed to perform foreground segmentation in the fused tensor data. By collecting all the information from the different views, this approach can suppress the view-specific environmental effects and give better segmentation results. Experimental results show that the performance of the proposed method is better than that of state-of-the-art methods in challenging environments.
2013, 7(3): 907-926 doi: 10.3934/ipi.2013.7.907
Abstract:
We consider the problem of establishing a statistical ranking for a set of alternatives from a dataset which consists of an (inconsistent and incomplete) set of quantitative pairwise comparisons of the alternatives. If we consider the directed graph where vertices represent the alternatives and the pairwise comparison data is a function on the arcs, then the statistical ranking problem is to find a potential function, defined on the vertices, such that the gradient of the potential optimally agrees with the pairwise comparisons. Potentials, optimal in the $l^{2}$-norm sense, can be found by solving a least-squares problem on the digraph and, recently, the residual has been interpreted using the Hodge decomposition (Jiang et al., 2010). In this work, we consider an $l^{1}$-norm formulation of the statistical ranking problem. We describe a fast graph-cut approach for finding $\epsilon$-optimal solutions, which has been used successfully in image processing and computer vision problems. Applying this method to several datasets, we demonstrate its efficacy at finding solutions with sparse residual.
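The $l^2$ baseline described above can be sketched directly: the potential is the least-squares solution of an incidence-matrix system (the paper's $l^1$ graph-cut method is not shown). The comparison format `(i, j, y_ij)`, meaning an estimate of $s_j - s_i$, and the zero-mean centering convention are illustrative assumptions.

```python
import numpy as np

def l2_ranking(n, comparisons):
    """Least-squares statistical ranking on a digraph with n vertices.
    comparisons is a list of (i, j, y_ij) with y_ij estimating s_j - s_i;
    solves min_s sum (s_j - s_i - y_ij)^2 via the incidence matrix."""
    B = np.zeros((len(comparisons), n))       # arc-vertex incidence matrix
    y = np.zeros(len(comparisons))
    for r, (i, j, yij) in enumerate(comparisons):
        B[r, i], B[r, j], y[r] = -1.0, 1.0, yij
    s, *_ = np.linalg.lstsq(B, y, rcond=None)
    return s - s.mean()                        # fix the additive gauge freedom
```

The residual $y - Bs$ is the quantity whose Hodge decomposition (and, in the paper, $l^1$ sparsification) is studied.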
2013, 7(3): 927-946 doi: 10.3934/ipi.2013.7.927
Abstract:
Cartoon-texture regularization is a technique for reconstructing fine-scale details in imaging problems that are often ill-posed because of a non-invertible convolution kernel. Here we propose a cartoon-texture regularization for deblurring problems with a semi-known kernel. The cartoon component is modeled by a function of bounded variation, while the texture component is measured by an approximate duality with Lipschitz functions ($W^{1,\infty}$). To approximate the dual Lipschitz norm, which is difficult to calculate, we propose an approach using concentration of measure. This provides an accurate and differentiable expression. We also present numerical results for our cartoon-texture decomposition, both in the case of a semi-known deblurring kernel and the case of a known kernel. The texture norm enables one to numerically reconstruct fine-scale details that are typically difficult to recover from blurred images, and for semi-known deblurring the method quickly leads to the correct kernel, even when that kernel contains noise.
2013, 7(3): 947-959 doi: 10.3934/ipi.2013.7.947
Abstract:
In this paper, we propose an adaptive finite element method for simulating moving contact line problems in three dimensions. The model that we use is the coupled Cahn-Hilliard Navier-Stokes equations with the generalized Navier boundary condition (GNBC) proposed in [18]. In our algorithm, to improve the efficiency of the simulation, we use the residual-type adaptive finite element algorithm. It is well known that the phase variable decays much faster away from the interface than the velocity variables. Therefore we use an adaptive strategy that takes this difference into account. Numerical experiments show that our algorithm is both efficient and reliable.
2013, 7(3): 961-966 doi: 10.3934/ipi.2013.7.961
Abstract:
This note describes three recent factorizations of banded invertible infinite matrices:
1. If $A$ has a banded inverse: $A = BC$ with block-diagonal factors $B$ and $C$.
2. Permutations factor into a shift times $N < 2w$ tridiagonal permutations.
3. $A = LPU$ with lower triangular $L$, permutation $P$, upper triangular $U$.
We include examples, references, and outlines of proofs.
2013, 7(3): 967-986 doi: 10.3934/ipi.2013.7.967
Abstract:
Wave propagation problems arise in a wide range of applications. The energy conserving property is one of the guiding principles for numerical algorithms, in order to minimize the phase or shape errors after long time integration. In this paper, we develop and analyze a local discontinuous Galerkin (LDG) method for solving the wave equation. We prove optimal error estimates, superconvergence toward a particular projection of the exact solution, and the energy conserving property for the semi-discrete formulation. The analysis is extended to the fully discrete LDG scheme, with the centered second-order time discretization (the leap-frog scheme). Our numerical experiments demonstrate optimal rates of convergence and superconvergence. We also show that the shape of the solution, after long time integration, is well preserved due to the energy conserving property.
2013, 7(3): 987-1005 doi: 10.3934/ipi.2013.7.987
Abstract:
In this paper, we propose the single-grid multilevel (SGML) method for large-scale linear systems discretized from partial differential equations. The SGML method combines the methodologies of both the geometric and the algebraic multigrid methods. It uses the underlying geometric information from the finest grid. A simple and isotropic coarsening strategy is applied to explicitly control the complexity of the hierarchical structure, and smoothers are chosen based on the property of the model problem and the underlying grid information to complement the coarsening and maintain overall efficiency. Additionally, the underlying grid is used to design an efficient parallel algorithm in order to parallelize the SGML method. We apply the SGML method on the Poisson problem and the convection diffusion problem as examples, and we present the numerical results to demonstrate the performance of the SGML method.
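To make the multigrid ingredients concrete, here is a textbook two-grid cycle for the 1D Poisson problem with weighted-Jacobi smoothing, full-weighting restriction, an exact coarse solve, and linear interpolation; this is a generic geometric-multigrid sketch of the methodology the SGML method builds on, not the SGML algorithm itself (grid sizes and sweep counts are illustrative).

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free 1D Laplacian (-u'') with zero Dirichlet boundary values."""
    return (2*u - np.r_[u[1:], 0.0] - np.r_[0.0, u[:-1]]) / h**2

def two_grid_poisson(f, sweeps=3, cycles=20):
    """Two-grid cycles for -u'' = f on (0,1), zero Dirichlet data; f holds
    values at n interior points with n odd, so the coarse grid has (n-1)/2
    interior points at spacing H = 2h."""
    n = f.size
    h = 1.0 / (n + 1)
    nc = (n - 1) // 2
    Ac = (np.diag(2.0*np.ones(nc)) - np.diag(np.ones(nc-1), 1)
          - np.diag(np.ones(nc-1), -1)) / (2*h)**2
    u = np.zeros(n)
    for _ in range(cycles):
        for _ in range(sweeps):                  # weighted Jacobi, omega = 2/3
            u = u + (2.0/3.0) * (h**2/2.0) * (f - apply_A(u, h))
        r = f - apply_A(u, h)
        rc = 0.25*r[0:-2:2] + 0.5*r[1:-1:2] + 0.25*r[2::2]   # full weighting
        ec = np.linalg.solve(Ac, rc)             # exact coarse-grid correction
        e = np.zeros(n)
        e[1::2] = ec                             # coarse points
        e[0::2] = 0.5*(np.r_[0.0, ec] + np.r_[ec, 0.0])      # linear interpolation
        u = u + e
    return u
```

The smoother damps oscillatory error components while the coarse-grid correction removes the smooth ones, which is the complementarity the abstract describes between smoother choice and coarsening.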
2013, 7(3): 1007-1029 doi: 10.3934/ipi.2013.7.1007
Abstract:
Obtaining high-quality images is very important in many areas of applied sciences, such as medical imaging, optical microscopy, and astronomy. Image reconstruction can be considered as solving the ill-posed inverse problem $y=Ax+n$, where $x$ is the image to be reconstructed and $n$ is the unknown noise. In this paper, we propose general robust expectation maximization (EM)-type algorithms for image reconstruction. Both Poisson noise and Gaussian noise types are considered. The EM-type algorithms are performed by iteratively applying EM (or SART for weighted Gaussian noise) and regularization in the image domain. The convergence of these algorithms is proved in several ways: EM with a priori information and alternating minimization methods. To show the efficiency of the EM-type algorithms, computerized tomography reconstruction is chosen as the application.
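For the Poisson case, the classical EM iteration (known in imaging as Richardson-Lucy) can be sketched as follows; this is the textbook EM step for $y \sim \mathrm{Poisson}(Ax)$ without the regularization step the paper adds, and the small floor in the denominator is an illustrative safeguard.

```python
import numpy as np

def em_poisson(A, y, iters=100):
    """Classical EM (Richardson-Lucy) iteration for y ~ Poisson(Ax):
    x_{k+1} = (x_k / A^T 1) * A^T (y / (A x_k)), elementwise."""
    x = np.ones(A.shape[1])                     # positive initial guess
    at1 = A.T @ np.ones(A.shape[0])             # column sums of A
    for _ in range(iters):
        x = x / at1 * (A.T @ (y / np.maximum(A @ x, 1e-12)))
    return x
```

The iteration preserves nonnegativity and monotonically increases the Poisson log-likelihood; the paper's EM-type algorithms interleave such steps with image-domain regularization.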
2013, 7(3): 1031-1050 doi: 10.3934/ipi.2013.7.1031
Abstract:
The primal-dual hybrid gradient (PDHG) algorithm has been successfully applied to a number of total variation (TV) based image reconstruction problems for fast numerical solutions. We show that PDHG can also effectively solve the computational problem of image inpainting in wavelet domain, where high quality images are to be recovered from incomplete wavelet coefficients due to lossy data transmission. In particular, as the original PDHG algorithm requires the orthogonality of encoding operators for optimal performance, we propose an approximated PDHG algorithm to tackle the non-orthogonality of Daubechies 7-9 wavelet which is widely used in practice. We show that this approximated version essentially alters the gradient descent direction in the original PDHG algorithm, but eliminates its orthogonality restriction and retains low computation complexity. Moreover, we prove that the sequences generated by the approximated PDHG algorithm always converge monotonically to an exact solution of the TV based image reconstruction problem starting from any initial guess. We demonstrate that the approximated PDHG algorithm also works on more general image reconstruction problems with total variation regularizations, and analyze the condition on the step sizes that guarantees the convergence.
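To fix ideas, here is the basic PDHG (Chambolle-Pock) iteration on a 1D TV denoising toy problem, $\min_x \frac{1}{2}\|x-b\|^2 + \lambda\|Dx\|_1$ with $D$ the forward difference operator; this is the original PDHG scheme on a simple model, not the approximated wavelet-domain variant proposed in the paper, and the step sizes are illustrative choices satisfying $\tau\sigma\|D\|^2 < 1$.

```python
import numpy as np

def pdhg_tv_denoise_1d(b, lam=0.5, tau=0.25, sigma=0.25, iters=300):
    """PDHG for min_x 0.5||x-b||^2 + lam*||Dx||_1 in 1D: alternate a
    projected dual ascent step with a proximal primal step, plus the
    usual extrapolation of the primal iterate."""
    n = b.size
    D = lambda x: x[1:] - x[:-1]                      # forward differences
    Dt = lambda p: np.r_[-p, 0.0] + np.r_[0.0, p]     # adjoint D^T p
    x, xbar, p = b.copy(), b.copy(), np.zeros(n - 1)
    for _ in range(iters):
        p = np.clip(p + sigma * D(xbar), -lam, lam)   # dual ascent + projection
        x_new = (x + tau * (b - Dt(p))) / (1.0 + tau) # proximal primal step
        xbar = 2 * x_new - x                          # extrapolation
        x = x_new
    return x
```

Since $\|D\|^2 \le 4$ for forward differences, $\tau\sigma = 1/16$ comfortably satisfies the convergence condition on the step sizes analyzed in the paper.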
2013, 7(3): 1051-1074 doi: 10.3934/ipi.2013.7.1051
Abstract:
For the Wigner equation with discontinuous potentials, a phase space Gaussian beam (PSGB) summation method is proposed in this paper. We first derive the equations satisfied by the parameters of the PSGBs and establish the relations between the parameters of the Gaussian beams in the physical space (GBs) and those in the phase space, which motivates an efficient initial data preparation and thus a reduced computational cost compared with previous methods in the literature. The method consists of three steps: 1) decompose the initial value of the wave function into a sum of GBs and use the parameter relations to prepare the initial values of the PSGBs; 2) solve the evolution equations for each PSGB; 3) sum all the PSGBs to construct the approximate solution of the Wigner equation. Additionally, in order to connect PSGBs at the discontinuous points of the potential, we provide interface conditions for a single phase space Gaussian beam. Numerical examples are given to verify the validity and accuracy of the method.
2013, 7(3): 1075-1097 doi: 10.3934/ipi.2013.7.1075
Abstract:
We propose a novel fast numerical method for denoising of 1D signals based on curvature minimization. Motivated by the primal-dual formulation for total variation minimization introduced by Chan, Golub, and Mulet, the proposed method makes use of some auxiliary variables to reformulate the stiff terms present in the Euler-Lagrange equation, which is a fourth-order differential equation. A direct application of Newton's method to the resulting system of equations often fails to converge. We propose a modified Newton's iteration which exhibits local superlinear convergence and global convergence in practical settings. The method is much faster than other existing methods for the model. Unlike all other existing methods, it also does not require tuning any additional parameter besides the model parameter. Numerical experiments are presented to demonstrate the effectiveness of the proposed method.
2013, 7(3): 1099-1113 doi: 10.3934/ipi.2013.7.1099
Abstract:
Image segmentation is an essential problem in imaging science. One of the most successful segmentation models is the piecewise constant Mumford-Shah minimization model. This minimization problem is, however, difficult to carry out, mainly due to the non-convexity of the energy. Recent advances based on convex relaxation methods are capable of estimating almost perfectly the geometry of the regions to be segmented when the mean intensity and the number of segmented regions are known a priori. The next important challenge is to provide a tight approximation of the optimal geometry, mean intensity and number of regions simultaneously while keeping the computational time and memory usage reasonable. In this work, we propose a new algorithm that combines convex relaxation methods with the four color theorem to deal with the unsupervised segmentation problem. More precisely, the proposed algorithm can segment any a priori unknown number of regions with only four intensity functions and four indicator ("labeling") functions. The number of regions in our segmentation model is decided by one parameter that controls the regularization strength of the geometry, i.e., the total length of the boundary of all the regions. The segmented image function can take as many constant values as needed.

2020 Impact Factor: 1.639
5 Year Impact Factor: 1.720
2020 CiteScore: 2.6