December 2021, 15(6): 1347-1362. doi: 10.3934/ipi.2021017

Nonlocal latent low rank sparse representation for single image super resolution via self-similarity learning

Zhongyuan University of Technology, School of Science, 450007 Zhengzhou, China

* Corresponding author: Changming Song

Received: October 2019. Revised: September 2020. Early access: February 2021. Published: December 2021.

Fund Project: This work is partially supported by the National Natural Science Foundation of China under Grant No. 11671367.

In this paper, we propose a novel scheme for single image super resolution (SR) reconstruction. First, we construct a new self-similarity framework by regarding low resolution (LR) images as low rank versions of their corresponding high resolution (HR) images, and nuclear norm minimization (NNM) is employed to generate LR image pyramids from the HR ones. This framework makes LR feature extraction straightforward: we take the quotient image, computed between the HR image and the LR image at the same pyramid layer, as the LR feature. This quotient feature has the same dimension as the LR image, whereas the commonly used gradient feature is four times the dimension of the LR image. In addition, we use nonlocal similar patches, both within the same scale and across different scales, to build the HR and LR dictionaries. During encoding, codes for each LR patch are computed from both the rows and the columns of the LR dictionary, and joint low rank and sparse constraints on the code matrix help suppress coding noise. Finally, both quantitative and perceptual results demonstrate that the proposed method achieves good SR performance.
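To make the nuclear norm minimization step concrete, the following is a minimal sketch (not the authors' code) of singular value soft-thresholding, the proximal operator of the nuclear norm, used here to produce a low rank "LR" counterpart of an HR image together with the quotient feature described above. The threshold tau, the image size, and the floor eps are illustrative choices, not the paper's settings.

import numpy as np

def nuclear_norm_shrink(X, tau):
    """Singular value soft-thresholding: argmin_Z 0.5*||Z - X||_F^2 + tau*||Z||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # shrink the singular values toward zero
    return (U * s_shrunk) @ Vt               # rebuild a lower-rank approximation

def quotient_feature(hr, lr, eps=1e-6):
    """Element-wise quotient of the HR image and its low-rank LR counterpart.

    The feature has the same size as the LR image at that pyramid layer, unlike
    gradient features, which stack several filter responses."""
    return hr / np.maximum(lr, eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = rng.random((64, 64))                # stand-in for one HR pyramid layer
    lr = nuclear_norm_shrink(hr, tau=0.5)    # its low-rank LR counterpart via NNM
    feat = quotient_feature(hr, lr)          # LR feature used for dictionary coding
    print(lr.shape, feat.shape, np.linalg.matrix_rank(lr))

Shrinking the singular values reduces the effective rank of the image matrix, which is precisely the sense in which the LR layer is treated as a low rank version of its HR layer.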

Citation: Changming Song, Yun Wang. Nonlocal latent low rank sparse representation for single image super resolution via self-similarity learning. Inverse Problems & Imaging, 2021, 15 (6) : 1347-1362. doi: 10.3934/ipi.2021017
References:

[1] J. Allebach and P. W. Wong, Edge-directed interpolation, IEEE International Conference on Image Processing, 1996. doi: 10.1109/ICIP.1996.560768.

[2] S. Baker and T. Kanade, Limits on super-resolution and how to break them, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24 (2002), 1167-1183. doi: 10.1109/TPAMI.2002.1033210.

[3] T. Chan, S. Esedoglu and A. Yip, Recent developments in total variation image restoration, Mathematical Models of Computer Vision, 2011.

[4] W. Dong, L. Zhang, G. Shi and X. Li, Nonlocally centralized sparse representation for image restoration, IEEE Transactions on Image Processing, 22 (2013), 1620-1630. doi: 10.1109/TIP.2012.2235847.

[5] W. Dong, L. Zhang and G. Shi, Centralized sparse representation for image restoration, International Conference on Computer Vision, (2011), 1259-1266.

[6] W. Dong, L. Zhang, G. Shi and X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization, IEEE Transactions on Image Processing, 20 (2011), 1838-1857. doi: 10.1109/TIP.2011.2108306.

[7] W. T. Freeman, T. R. Jones and E. C. Pasztor, Example-based super-resolution, IEEE Computer Graphics and Applications, 22 (2002), 56-65. doi: 10.1109/38.988747.

[8] S. Gu, W. Zuo, Q. Xie, D. Meng, X. Feng and L. Zhang, Convolutional sparse coding for image super-resolution, International Conference on Computer Vision, (2015), 1823-1831. doi: 10.1109/ICCV.2015.212.

[9] S. Gu, Q. Xie, D. Meng, W. Zuo, X. Feng and L. Zhang, Weighted nuclear norm minimization and its applications to low level vision, International Journal of Computer Vision, 121 (2017), 183-208. doi: 10.1007/s11263-016-0930-5.

[10] H. Chang, D.-Y. Yeung and Y. Xiong, Super-resolution through neighbor embedding, Computer Vision and Pattern Recognition, (2004), 275-282. doi: 10.1109/CVPR.2004.1315043.

[11] R. G. Keys, Cubic convolution interpolation for digital image processing, IEEE Transactions on Acoustics, Speech, and Signal Processing, 29 (1981), 1153-1160. doi: 10.1109/TASSP.1981.1163711.

[12] X. Li and M. T. Orchard, New edge-directed interpolation, IEEE Transactions on Image Processing, 10 (2001), 1521-1527. doi: 10.1109/83.951537.

[13] Z. Lin and H.-Y. Shum, Fundamental limits of reconstruction-based superresolution algorithms under local translation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 26 (2004), 83-97. doi: 10.1109/TPAMI.2004.1261081.

[14] Z. Lin, M. Chen and Y. Ma, The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices, arXiv: 1009.5055.

[15] G. Liu and S. Yan, Latent low-rank representation for subspace segmentation and feature extraction, International Conference on Computer Vision, (2011), 1615-1622. doi: 10.1109/ICCV.2011.6126422.

[16] L. I. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D: Nonlinear Phenomena, 60 (1992), 259-268. doi: 10.1016/0167-2789(92)90242-F.

[17] J. Shi and C. Qi, Low-rank sparse representation for single image super-resolution via self-similarity learning, International Conference on Image Processing, (2016), 1424-1428. doi: 10.1109/ICIP.2016.7532593.

[18] J. A. Tropp and S. J. Wright, Computational methods for sparse solution of linear inverse problems, Proceedings of the IEEE, 98 (2010), 948-958.

[19] S. L. Wang, D. Zhang and L. Yan, Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis, Computer Vision and Pattern Recognition, (2012), 2216-2223.

[20] H. Wang, S. Z. Li and Y. Wang, Face recognition under varying lighting conditions using self quotient image, IEEE International Conference on Automatic Face and Gesture Recognition, (2004), 819-824.

[21] J. Yang, J. Wright, T. S. Huang and Y. Ma, Image super-resolution via sparse representation, IEEE Transactions on Image Processing, 19 (2010), 2861-2873. doi: 10.1109/TIP.2010.2050625.

[22] C.-Y. Yang, J.-B. Huang and M.-H. Yang, Exploiting self-similarities for single frame super-resolution, Asian Conference on Computer Vision, (2010), 497-510. doi: 10.1007/978-3-642-19318-7_39.

[23] G. Yu, G. Sapiro and S. Mallat, Solving inverse problems with piecewise linear estimators: From Gaussian mixture models to structured sparsity, IEEE Transactions on Image Processing, 21 (2012), 2481-2499. doi: 10.1109/TIP.2011.2176743.

[24] T. Zhang, B. Ghanem, S. Liu, C. Xu and N. Ahuja, Low-rank sparse coding for image classification, International Conference on Computer Vision, (2013), 281-288. doi: 10.1109/ICCV.2013.42.

Figure 1. Nuclear norm minimization
Figure 2. Our method for constructing the image pyramid
Figure 3. Visual comparison of results for the image "butterfly" ($\times 2$)
Figure 4. Visual comparison of results for the image "girl" ($\times 2$)
Figure 5. Visual comparison of results for the image "foreman" ($\times 3$)
Figure 6. Visual comparison of results for the image "parrots" ($\times 3$)
Figure 7. Visual comparison of results for the image "hat" ($\times 2$)
Table 1. Running time and reconstruction quality for different patch sizes ($\times 2$)
Patch size   5        6        7        8        9        10       11
Time         300      217      173      137      128      121      119
PSNR (dB)    30.236   30.228   30.226   30.199   30.199   30.198   30.194
SSIM         0.898    0.898    0.898    0.898    0.898    0.897    0.896
Table 2. Comparison among different methods ($\times 2$)
Method    Lena     Child    Butterfly Foreman  House    Hat      Bike     Parrots  Girl     Pepper
Bicubic   29.469   30.741   24.140    32.186   29.005   29.205   22.801   27.998   33.718   30.939
          0.908    0.909    0.824     0.907    0.840    0.833    0.705    0.883    0.846    0.941
ScSR      30.056   32.428   24.579    32.789   30.334   29.626   23.426   28.680   34.278   31.157
          0.840    0.844    0.704     0.660    0.472    0.525    0.653    0.621    0.605    0.853
LRSC      29.758   30.753   24.133    32.709   29.319   29.446   23.016   28.357   34.066   30.926
          0.912    0.904    0.831     0.913    0.845    0.842    0.718    0.891    0.851    0.942
Our       30.102   31.517   24.830    32.719   29.909   29.649   23.515   28.589   34.099   31.296
          0.913    0.910    0.832     0.914    0.845    0.842    0.718    0.892    0.853    0.942
The values in each cell are PSNR (dB) and SSIM, from top to bottom.
Table 3. Comparison among different methods ($\times 3$)
Method    Lena     Child    Butterfly Foreman  House    Hat      Bike     Parrots  Girl     Pepper
Bicubic   28.913   30.432   24.320    32.814   30.213   29.921   23.411   28.536   33.685   29.901
          0.933    0.933    0.894     0.947    0.912    0.896    0.804    0.927    0.900    0.963
ScSR      30.136   31.452   25.104    33.468   30.878   30.559   24.089   29.264   34.194   30.778
          0.672    0.702    0.574     0.590    0.415    0.440    0.557    0.548    0.489    0.683
LRSC      29.054   30.753   24.466    33.203   30.387   30.381   23.411   28.408   33.915   29.897
          0.938    0.904    0.894     0.940    0.915    0.902    0.801    0.931    0.901    0.963
Our       29.782   30.933   24.671    33.450   30.487   29.649   23.775   28.837   33.916   29.901
          0.939    0.936    0.897     0.950    0.916    0.842    0.810    0.931    0.904    0.967
The values in each cell are PSNR (dB) and SSIM, from top to bottom.
Table 4. Noisy case: comparison among different methods ($\times 2$)
Method    Lena     Child    Butterfly Foreman  House    Hat      Bike     Parrots  Girl     Pepper
Bicubic   25.059   26.557   22.790    26.795   25.980   25.726   22.918   25.262   27.138   26.470
          0.594    0.593    0.615     0.506    0.484    0.444    0.550    0.497    0.476    0.598
ScSR      25.094   25.422   20.334    25.416   24.784   24.660   21.567   24.309   25.564   25.393
          0.404    0.409    0.465     0.243    0.231    0.1778   0.428    0.234    0.214    0.404
LRSC      26.370   26.315   23.452    27.611   26.529   26.472   22.463   26.057   27.994   26.785
          0.675    0.667    0.671     0.585    0.555    0.532    0.614    0.585    0.558    0.671
WNNM      25.774   25.941   22.937    27.275   27.819   25.632   27.635   25.463   27.866   25.798
          0.621    0.653    0.658     0.576    0.569    0.523    0.564    0.534    0.545    0.579
Our       27.125   26.315   24.121    28.833   27.375   27.352   22.988   26.896   29.295   27.720
          0.727    0.724    0.717     0.659    0.619    0.598    0.661    0.653    0.632    0.732
The values in each cell are PSNR (dB) and SSIM, from top to bottom.
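For reference, the PSNR values in Tables 1-4 are reported in decibels. A minimal sketch of that computation for 8-bit grayscale images follows; SSIM would typically be obtained from a library routine such as skimage.metrics.structural_similarity, which is an assumption here rather than a statement about the authors' evaluation code.

import numpy as np

def psnr_db(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in decibels between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)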