
ISSN: 1930-8337 | eISSN: 1930-8345
Inverse Problems and Imaging
February 2021, Volume 15, Issue 1
Special issue on mathematical/statistical approaches in data sciences
Image segmentation is the task of partitioning an image into individual objects, and it has important applications across a wide range of fields. The majority of segmentation methods rely on the image intensity gradient to define edges between objects; however, the intensity gradient fails to identify edges when the contrast between two objects is low. In this paper we introduce methods that make such weak edges more prominent in order to improve the segmentation of low-contrast objects. This is done for two kinds of segmentation models: global and local. We use a combination of a reproducing kernel Hilbert space and approximated Heaviside functions to decompose an image, and then show how this decomposition can be applied to a segmentation model. We present results demonstrating robustness to noise, and show that the reconstruction and segmentation models can be combined, allowing us to obtain both the decomposition and the segmentation simultaneously.
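As a minimal illustration of the approximated Heaviside functions mentioned above (the paper's RKHS-based decomposition itself is not reproduced here), the following sketch shows how a smoothed Heaviside turns a weak intensity edge into a soft region membership; the threshold and smoothing width are hypothetical choices:

```python
import numpy as np

def approx_heaviside(x, eps=0.1):
    """Smooth approximation of the Heaviside step function.

    H_eps(x) = 1/2 * (1 + (2/pi) * arctan(x / eps)); as eps -> 0 this
    tends to the sharp step, while larger eps spreads a weak edge over
    a wider transition band.
    """
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(x / eps))

# Toy image: two regions whose mean intensities differ only slightly
# (a low-contrast edge that a plain gradient detector would miss).
img = np.full((64, 64), 0.50)
img[:, 32:] = 0.55
img += 0.01 * np.random.default_rng(0).standard_normal(img.shape)

# Soft membership of each pixel in the brighter region, relative to a
# threshold between the two region means (0.525 here, chosen by hand).
membership = approx_heaviside(img - 0.525, eps=0.02)
print(membership[:, :32].mean(), membership[:, 32:].mean())  # ~0 vs ~1
```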
Pathological examination has traditionally been performed by manual visual inspection of hematoxylin and eosin (H&E)-stained images. However, this process is labor intensive, prone to large variations, and lacks reproducibility in tumor diagnosis. We aim to develop an automatic workflow to extract the different cell nuclei found in cancerous tumors portrayed in digital renderings of H&E-stained images. For a given image, we propose a semantic pixel-wise segmentation technique using dilated convolutions. The architecture of our dilated convolutional network (DCN) is based on SegNet, a deep convolutional encoder-decoder architecture. For the encoder, all the max pooling layers in SegNet are removed and the convolutional layers are replaced by dilated convolution layers with increasing dilation factors to preserve image resolution. For the decoder, all max unpooling layers are removed and the convolutional layers are replaced by dilated convolution layers with decreasing dilation factors to remove gridding artifacts. We show that dilated convolutions are superior in extracting information from textured images. We test our DCN on both synthetic data sets and a publicly available data set of H&E-stained images, and achieve better results than the state of the art.
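A minimal sketch of the architectural idea, with hypothetical layer counts and channel widths (the paper's exact SegNet-derived configuration is not reproduced here): 3x3 dilated convolutions with padding equal to the dilation factor keep the spatial resolution, with dilation growing in the encoder and shrinking in the decoder:

```python
import torch
import torch.nn as nn

class TinyDCN(nn.Module):
    """Sketch of a dilated encoder-decoder for pixel-wise segmentation.

    Hypothetical layer counts and widths; the idea is that SegNet's max
    pooling/unpooling is replaced by dilated convolutions, with the
    dilation factor increasing in the encoder and decreasing in the
    decoder. With padding == dilation (3x3 kernels), spatial size is kept.
    """
    def __init__(self, in_ch=3, n_classes=2, width=32):
        super().__init__()
        enc_dilations = (1, 2, 4)   # growing receptive field
        dec_dilations = (4, 2, 1)   # shrinking again to avoid gridding
        layers, ch = [], in_ch
        for d in enc_dilations + dec_dilations:
            layers += [nn.Conv2d(ch, width, 3, padding=d, dilation=d),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
            ch = width
        self.body = nn.Sequential(*layers)
        self.head = nn.Conv2d(width, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        return self.head(self.body(x))

x = torch.randn(1, 3, 128, 128)
print(TinyDCN()(x).shape)  # torch.Size([1, 2, 128, 128]): full resolution
```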
In this paper, we study the dynamics of gradient descent in learning neural networks for classification problems. Unlike existing works, we consider the linearly non-separable case, where the training data of different classes lie in orthogonal subspaces. We show that when the network has a sufficient (but not exceedingly large) number of neurons, (1) the corresponding minimization problem has a desirable landscape where all critical points are global minima with perfect classification, and (2) gradient descent is guaranteed to converge to a global minimum. Moreover, we discover a geometric condition on the network weights such that, when it is satisfied, the weight evolution transitions from a slow phase of weight-direction spreading to a fast phase of weight convergence. The geometric condition is that the convex hull of the weights projected onto the unit sphere contains the origin.
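The geometric condition can be tested numerically. A sketch using a feasibility linear program to decide whether the origin lies in the convex hull of the sphere-projected weight vectors (one standard way to perform the check, not necessarily the authors'):

```python
import numpy as np
from scipy.optimize import linprog

def origin_in_hull(W):
    """Check whether the convex hull of the rows of W, projected onto
    the unit sphere, contains the origin. Feasibility LP: find
    lambda >= 0 with sum(lambda) = 1 and lambda @ U = 0, where the
    rows of U are the unit vectors W_i / ||W_i||.
    """
    U = W / np.linalg.norm(W, axis=1, keepdims=True)  # project to sphere
    n, d = U.shape
    # Equality constraints: U^T lambda = 0 (d rows) and 1^T lambda = 1.
    A_eq = np.vstack([U.T, np.ones((1, n))])
    b_eq = np.concatenate([np.zeros(d), [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.success

rng = np.random.default_rng(0)
print(origin_in_hull(rng.standard_normal((50, 3))))          # spread: True
print(origin_in_hull(np.abs(rng.standard_normal((50, 3)))))  # one orthant: False
```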
In this paper we present worst-case datasets for deterministic first-order methods applied to large-scale binary logistic regression problems. Under the assumption that the number of algorithm iterations is much smaller than the problem dimension, our worst-case datasets show that such methods require at least
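The problem class itself is standard. Below is a minimal sketch of deterministic (full-batch) gradient descent on the binary logistic loss, the kind of first-order method the lower bound applies to; the bound itself is not reproduced, and the data and step size here are purely illustrative:

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

def logistic_loss_grad(w, X, y):
    """Binary logistic regression loss and gradient.

    X: (n, d) data matrix, y: (n,) labels in {-1, +1}.
    loss(w) = (1/n) * sum_i log(1 + exp(-y_i * x_i @ w)).
    """
    z = y * (X @ w)
    loss = np.mean(np.logaddexp(0.0, -z))
    grad = -(X.T @ (y * expit(-z))) / len(y)
    return loss, grad

# Deterministic (full-batch) gradient descent on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = np.sign(X @ rng.standard_normal(50))
w = np.zeros(50)
step = 4.0 * len(y) / np.linalg.norm(X, 2) ** 2  # 1/L for this loss
for _ in range(100):
    loss, grad = logistic_loss_grad(w, X, y)
    w -= step * grad
print(f"final loss: {loss:.4f}")
```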
Sparse representation of a single measurement vector (SMV) has been explored in a variety of compressive sensing applications. Recently, SMV models have been extended to solve multiple measurement vector (MMV) problems, where the underlying signal is assumed to have a joint sparse structure. To circumvent the NP-hardness of the
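A common convex surrogate for joint sparsity (not necessarily the relaxation used in this paper) is the ℓ2,1 mixed norm, whose proximal operator shrinks entire rows of the signal matrix. A minimal proximal-gradient sketch for the MMV problem under that assumption:

```python
import numpy as np

def prox_l21(X, t):
    """Row-wise soft thresholding: the prox of t * ||X||_{2,1}. It
    shrinks whole rows toward zero, promoting a shared support (joint
    sparsity) across the measurement vectors."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

def mmv_ista(A, B, lam=0.1, n_iter=500):
    """Proximal gradient for min_X 0.5*||A X - B||_F^2 + lam*||X||_{2,1}."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
    X = np.zeros((A.shape[1], B.shape[1]))
    for _ in range(n_iter):
        X = prox_l21(X - step * A.T @ (A @ X - B), step * lam)
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
X_true = np.zeros((100, 5))                       # 5 jointly sparse signals
X_true[rng.choice(100, 6, replace=False)] = rng.standard_normal((6, 5))
X_hat = mmv_ista(A, A @ X_true, lam=0.05)
print("recovered rows:", np.flatnonzero(np.linalg.norm(X_hat, axis=1) > 0.1))
```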
The robust principal component analysis (RPCA) decomposes a data matrix into a low-rank part and a sparse part. There are two main types of algorithms for RPCA. The first type applies regularization terms to the singular values of a matrix to obtain a low-rank matrix; however, calculating singular values can be very expensive for large matrices. The second type represents the low-rank matrix as the product of two small matrices. These algorithms are faster than the first type because no singular value decomposition (SVD) of the full matrix is required, but they need the rank of the low-rank matrix as input, and an accurate rank estimate is necessary to obtain a reasonable solution. In this paper, we propose algorithms that combine both types. Our proposed algorithms require only an upper bound on the rank and SVDs of small matrices. First, they are faster than the first type because the cost of SVD on small matrices is negligible. Second, they are more robust than the second type because only an upper bound on the rank, rather than the exact rank, is required. Furthermore, we apply the Gauss-Newton method to increase the speed of our algorithms. Numerical experiments demonstrate the improved performance of the proposed algorithms.
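A rough sketch of the combined idea, assuming a simple alternating scheme (the authors' algorithms, including the Gauss-Newton acceleration, are more refined): the low-rank part is updated through a randomized projection so the SVD is taken only on a small matrix, and only an upper bound r on the rank is needed:

```python
import numpy as np

def soft(X, t):
    """Entrywise soft-thresholding, the prox of t*||.||_1 (sparse part)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def rpca_sketch(D, r, lam, n_iter=50, seed=0):
    """Alternate between a rank-<=r update of L (via a randomized range
    finder, so the SVD is taken only on a small r x n matrix) and a
    soft-threshold update of the sparse part S in D ~ L + S."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    S = np.zeros_like(D)
    for _ in range(n_iter):
        M = D - S
        Q, _ = np.linalg.qr(M @ rng.standard_normal((n, r)))    # m x r basis
        U, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)  # small SVD
        L = Q @ (U * s) @ Vt       # rank-<=r approximation of D - S
        S = soft(D - L, lam)
    return L, S

rng = np.random.default_rng(1)
L0 = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80))
S0 = (rng.random((100, 80)) < 0.05) * 10.0
L, S = rpca_sketch(L0 + S0, r=5, lam=0.5)  # r is an upper bound (true rank 3)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))
```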
We improve the robustness of deep neural networks (DNNs) to adversarial attacks by using an interpolating function as the output activation. This data-dependent activation remarkably improves both the generalization and the robustness of the network. On the CIFAR10 benchmark, we raise the robust accuracy of the adversarially trained ResNet20 from
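As a simplified sketch of a data-dependent output activation (the paper's interpolating function is not specified here; this kernel interpolation over stored training features is only an illustration), class scores are interpolated from nearby training features instead of being produced by a learned linear-plus-softmax head:

```python
import numpy as np

def interpolating_output(z, Z_train, Y_train, sigma=1.0):
    """Data-dependent output activation (simplified sketch): class
    scores for a feature z are interpolated from stored training
    features with a Gaussian kernel.

    Z_train: (n, d) penultimate-layer features; Y_train: (n, k) one-hot.
    """
    d2 = np.sum((Z_train - z) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ Y_train) / np.maximum(w.sum(), 1e-12)  # soft class scores

rng = np.random.default_rng(0)
Z_train = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
Y_train = np.repeat(np.eye(2), 50, axis=0)
z_test = rng.normal(3, 1, 8)  # a point drawn from class 1's cluster
print(interpolating_output(z_test, Z_train, Y_train))  # ~[0, 1]
```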
Training deep neural networks can be difficult. For classical neural networks, the initialization method of Glorot and Bengio, later generalized by He, Zhang, Ren and Sun, can facilitate stable training. However, with the recent development of new layer types, we find that these initialization methods may fail to lead to successful training. Building on these two methods, we propose a new initialization scheme derived by studying the parameter space of a network. Our principle is to constrain the growth of parameters in different layers in a consistent way. To do so, we introduce a norm on the parameter space and use this norm to measure the growth of parameters. Our new method is suitable for a wide range of layer types, especially for layers with parameter-sharing weight matrices.
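For reference, minimal sketches of the two baseline schemes the abstract builds on, together with the kind of growth measurement it discusses; the paper's new norm-based initialization is not reproduced here:

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng):
    """Glorot & Bengio: keep activation variance stable by drawing
    weights uniformly with Var(W) = 2 / (fan_in + fan_out)."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

def he_normal(fan_in, fan_out, rng):
    """He et al.: for ReLU layers, Var(W) = 2 / fan_in compensates for
    the halving of variance caused by the rectifier."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

# The kind of growth the abstract measures: push a random input through
# many ReLU layers and track the activation norm under each scheme.
rng = np.random.default_rng(0)
x_g = x_h = rng.standard_normal(256)
for _ in range(20):
    x_g = np.maximum(glorot_uniform(256, 256, rng) @ x_g, 0.0)
    x_h = np.maximum(he_normal(256, 256, rng) @ x_h, 0.0)
print(f"Glorot: {np.linalg.norm(x_g):.3e}, He: {np.linalg.norm(x_h):.3e}")
```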
A fast non-convex low-rank matrix decomposition method for potential field data separation is presented. The singular value decomposition of the large-size trajectory matrix, which is also a block Hankel matrix, is obtained using a fast randomized singular value decomposition algorithm in which fast block Hankel matrix-vector multiplications are implemented with minimal memory storage. This fast block Hankel matrix randomized singular value decomposition algorithm is integrated into the
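The key primitive here is that a Hankel matrix-vector product is a convolution and can therefore be computed with the FFT without ever forming the matrix. A one-dimensional sketch of that primitive (the paper's block Hankel case is analogous):

```python
import numpy as np

def hankel_matvec_fft(h, x, m):
    """Multiply an m x n Hankel matrix H (H[i, j] = h[i + j], so h has
    length m + n - 1) by x in O((m + n) log(m + n)) time via the FFT,
    without forming H. This is the building block that makes the
    randomized SVD of a huge Hankel trajectory matrix fast.
    """
    n = len(x)
    L = m + n - 1
    # H @ x is a correlation of h with x, i.e. a convolution with x reversed.
    y = np.fft.irfft(np.fft.rfft(h, L) * np.fft.rfft(x[::-1], L), L)
    return y[n - 1:n - 1 + m]

# Check against the dense product on a small example.
rng = np.random.default_rng(0)
m, n = 6, 4
h = rng.standard_normal(m + n - 1)
H = np.array([[h[i + j] for j in range(n)] for i in range(m)])
x = rng.standard_normal(n)
print(np.allclose(H @ x, hankel_matvec_fft(h, x, m)))  # True
```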