
Side-information-induced reweighted sparse subspace clustering
1. School of Mathematics and Statistics, Xidian University, Xi'an 710171, Shaanxi, China
2. Key Laboratory of Group & Graph Theories and Applications, Chongqing University of Arts and Sciences, Chongqing 402160, China
Subspace clustering segments a collection of data points drawn from a union of several subspaces into clusters, with each cluster corresponding to one subspace. The geometric information of a dataset reflects its intrinsic structure and can be exploited to guide the segmentation. In this paper, we propose side-information-induced reweighted sparse subspace clustering (SRSSC) for high-dimensional data clustering. In our method, geometric information about the high-dimensional data points in a target space serves as side information that guides the subspace clustering. We solve the model by iterating a reweighted $ l_1 $-norm minimization to obtain the self-representation coefficients of the data, and then segment the data within the spectral clustering framework. We compare the performance of the proposed algorithm with several state-of-the-art algorithms on synthetic data and three widely used real datasets. Although SRSSC is the simplest of the compared algorithms, the experimental results show that it is also the most effective.
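As an illustration of the pipeline described above (iteratively reweighted $ l_1 $ self-representation followed by spectral clustering on the induced affinity), the sketch below chains the two steps. The exact SRSSC objective and the way the side information enters the weights are not specified on this page, so the code uses the classical reweighting $ w_{ij} = 1/(|c_{ij}|+\varepsilon) $ of Candès, Wakin and Boyd [5] together with a hypothetical distance-based side-information term; names such as `srssc_sketch` are placeholders rather than the authors' implementation, and NumPy/scikit-learn are assumed.

```python
# Minimal sketch of an SRSSC-style pipeline (illustration only, not the authors' code).
import numpy as np
from sklearn.cluster import SpectralClustering


def weighted_l1_selfrep(X, W, lam=0.1, n_iter=200, step=None):
    """min_C 0.5*||X - XC||_F^2 + lam*sum_ij W_ij*|C_ij|, diag(C)=0, solved by ISTA."""
    d, n = X.shape
    if step is None:
        step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)   # 1/L with L = ||X||_2^2
    C = np.zeros((n, n))
    for _ in range(n_iter):
        grad = X.T @ (X @ C - X)                            # gradient of the quadratic term
        Z = C - step * grad
        C = np.sign(Z) * np.maximum(np.abs(Z) - step * lam * W, 0.0)  # weighted soft-threshold
        np.fill_diagonal(C, 0.0)                            # forbid trivial self-representation
    return C


def srssc_sketch(X, n_clusters, side_dist=None, lam=0.1, n_reweight=3, eps=1e-3):
    """X: d x n data matrix (columns are points); side_dist: optional n x n pairwise distances."""
    n = X.shape[1]
    # Hypothetical side-information term: larger distance -> larger l1 penalty weight.
    W = np.ones((n, n)) if side_dist is None else 1.0 + side_dist / (side_dist.max() + 1e-12)
    C = None
    for _ in range(n_reweight):
        C = weighted_l1_selfrep(X, W, lam=lam)
        W = 1.0 / (np.abs(C) + eps)                         # reweighted l1 update
        if side_dist is not None:
            W *= 1.0 + side_dist / (side_dist.max() + 1e-12)
    A = np.abs(C) + np.abs(C).T                             # symmetric affinity matrix
    labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                assign_labels="discretize", random_state=0).fit_predict(A)
    return labels, C


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two random 3-dimensional subspaces in R^30, 40 points each.
    X = np.hstack([rng.standard_normal((30, 3)) @ rng.standard_normal((3, 40)) for _ in range(2)])
    X /= np.linalg.norm(X, axis=0, keepdims=True)
    labels, _ = srssc_sketch(X, n_clusters=2)
    print(labels)
```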
References:
[1] R. G. Baraniuk et al., Applications of sparse representation and compressive sensing [Scanning the issue], Proceedings of the IEEE, 98 (2010), 906-909.
[2] S. Boyd et al., Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning, 3 (2010), 1-122.
[3] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, 2004. doi: 10.1017/CBO9780511804441.
[4] M. Brbić and I. Kopriva, $\ell_0$-motivated low-rank sparse subspace clustering, IEEE Transactions on Cybernetics, 2018.
[5] E. J. Candès, M. B. Wakin and S. P. Boyd, Enhancing sparsity by reweighted $l_1$ minimization, Journal of Fourier Analysis and Applications, 14 (2008), 877-905. doi: 10.1007/s00041-008-9045-x.
[6] H. Z. Chen et al., Discriminative and coherent subspace clustering, Neurocomputing, 284 (2018), 177-186.
[7] H. Z. Chen, W. Wang and X. Feng, Structured sparse subspace clustering with grouping-effect-within-cluster, Pattern Recognition, 2018.
[8] E. Elhamifar and R. Vidal, Sparse subspace clustering: Algorithm, theory, and applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35 (2013), 2765-2781.
[9] M. Fazel, H. Hindi and S. P. Boyd, Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices, Proceedings of the 2003 American Control Conference, 2003.
[10] A. S. Georghiades, P. N. Belhumeur and D. J. Kriegman, From few to many: Illumination cone models for face recognition under variable lighting and pose, IEEE Transactions on Pattern Analysis and Machine Intelligence, 23 (2001), 643-660.
[11] I. F. Gorodnitsky and B. D. Rao, Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm, IEEE Transactions on Signal Processing, 45 (1997), 600-616.
[12] M. Grant, S. Boyd and Y. Ye, CVX: Matlab software for disciplined convex programming, 2008.
[13] H. Hu, Z. Lin, J. Feng et al., Smooth representation clustering, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2014), 3834-3841.
[14] J. J. Hull, A database for handwritten text recognition research, IEEE Transactions on Pattern Analysis and Machine Intelligence, 16 (1994).
[15] V. C. Klema and A. J. Laub, The singular value decomposition: Its computation and some applications, IEEE Transactions on Automatic Control, 25 (1980), 164-176. doi: 10.1109/TAC.1980.1102314.
[16] V. P. Kshirsagar, M. R. Baviskar and M. E. Gaikwad, Face recognition using eigenfaces, International Conference on Computer Research and Development, 2011.
[17] K. C. Lee, J. Ho and D. J. Kriegman, Acquiring linear subspaces for face recognition under variable lighting, IEEE Transactions on Pattern Analysis and Machine Intelligence, 27 (2005), 684-698.
[18] C.-G. Li, C. You and R. Vidal, Structured sparse subspace clustering: A joint affinity learning and subspace clustering framework, IEEE Transactions on Image Processing, 26 (2017), 2988-3001. doi: 10.1109/TIP.2017.2691557.
[19] C. L. Liu et al., Handwritten digit recognition: Benchmarking of state-of-the-art techniques, Pattern Recognition, 36 (2003), 2271-2285.
[20] G. C. Liu et al., Robust recovery of subspace structures by low-rank representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 35 (2013), 171-184.
[21] C. Y. Lu, H. Min, Z. Q. Zhao et al., Robust and efficient subspace segmentation via least squares regression, European Conference on Computer Vision, Springer, Berlin, Heidelberg, (2012), 347-360.
[22] U. von Luxburg, A tutorial on spectral clustering, Statistics and Computing, 17 (2007), 395-416. doi: 10.1007/s11222-007-9033-z.
[23] S. A. Nene, S. K. Nayar and H. Murase, Columbia Object Image Library (COIL-20), Technical Report CUCS-005-96, Department of Computer Science, Columbia University, 1996.
[24] N. R. Pal and S. K. Pal, A review on image segmentation techniques, Pattern Recognition, 26 (1993), 1277-1294.
[25] P. Wang et al., Structural reweight sparse subspace clustering, Neural Processing Letters, 2018.
[26] J. B. Shi and J. Malik, Normalized cuts and image segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 22 (2000), 888-905.
[27] R. Tibshirani, Regression shrinkage and selection via the lasso: A retrospective, Journal of the Royal Statistical Society, Series B, 73 (2011), 273-282. doi: 10.1111/j.1467-9868.2011.00771.x.
[28] C. Tomasi and T. Kanade, Shape and motion from image streams under orthography: A factorization method, International Journal of Computer Vision, 9 (1992), 137-154.
[29] R. Tron and R. Vidal, A benchmark for the comparison of 3-D motion segmentation algorithms, 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[30] R. Vidal, Subspace clustering, IEEE Signal Processing Magazine, 28 (2011), 52-68.
[31] W. W. Wang, C. Y. Chen, H. Z. Chen and X. C. Feng, Unified discriminative and coherent semi-supervised subspace clustering, IEEE Transactions on Image Processing, 27 (2018), 2461-2470. doi: 10.1109/TIP.2018.2806278.
[32] W. W. Wang and C. L. Wu, Image segmentation by correlation adaptive weighted regression, Neurocomputing, 267 (2017), 426-435.
[33] W. W. Wang, B. Zhang and X. Feng, Subspace segmentation by correlation adaptive regression, IEEE Transactions on Circuits and Systems for Video Technology, 2017.
[34] J. W. Wong, Signal-to-noise ratio (SNR), Encyclopedia of Radiation Oncology, 2013.
[35] J. Xu, K. Xu, K. Chen et al., Reweighted sparse subspace clustering, Computer Vision and Image Understanding, 138 (2015), 25-37.
[36] J. Y. Yan and M. Pollefeys, A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate, Computer Vision - ECCV 2006, 2006.
[37] Y. Yang, J. Feng, N. Jojic et al., $\ell^0$-sparse subspace clustering, European Conference on Computer Vision, Springer, Cham, (2016), 731-747.
[38] T. Zhang, A. Szlam, Y. Wang and G. Lerman, Hybrid linear modeling via local best-fit flats, International Journal of Computer Vision, 100 (2012), 217-240. doi: 10.1007/s11263-012-0535-6.
Algorithms | LSR | SMR | LRR | SSC | StrSSC | RSSC | Ours
Accuracy (Mean) | 93.57% | 94.35% | 92.46% | 94.04% | 94.08% | 96.60% | 97.00%
Accuracy (Median) | 92.85% | 94.07% | 92.13% | 93.60% | 93.70% | 96.10% | 96.90%
No. of subjects | Statistic | LSR | SMR | LRR | SSC | StrSSC | RSSC | Ours
2 Subjects | Mean | 7.35 | 1.75 | 2.13 | 1.87 | 1.23 | 0.57 | 0.48
2 Subjects | Median | 7.03 | 0.78 | 0.78 | 0 | 0 | 0 | 0
3 Subjects | Mean | 9.92 | 3.03 | 3.49 | 3.29 | 2.84 | 1.08 | 0.78
3 Subjects | Median | 10.41 | 2.08 | 2.08 | 0.52 | 0.52 | 0 | 0.52
4 Subjects | Mean | 13.66 | 3.25 | 4.86 | 3.80 | 2.95 | 1.65 | 1.07
4 Subjects | Median | 14.07 | 2.34 | 3.91 | 1.95 | 0.78 | 0.39 | 0.39
5 Subjects | Mean | 17.56 | 3.91 | 5.92 | 4.33 | 3.31 | 2.21 | 1.40
5 Subjects | Median | 17.81 | 2.50 | 4.99 | 2.50 | 1.25 | 0.62 | 0.62
6 Subjects | Mean | 20.95 | 5.28 | 6.83 | 4.87 | 3.73 | 2.79 | 1.68
6 Subjects | Median | 21.07 | 2.86 | 5.99 | 3.39 | 2.08 | 1.30 | 1.04
7 Subjects | Mean | 24.31 | 6.38 | 7.75 | 5.40 | 4.01 | 3.43 | 2.03
7 Subjects | Median | 24.10 | 3.13 | 7.14 | 4.46 | 2.68 | 1.79 | 1.34
8 Subjects | Mean | 27.52 | 6.83 | 11.05 | 5.92 | 4.38 | 3.98 | 2.59
8 Subjects | Median | 27.83 | 3.71 | 7.42 | 4.69 | 3.13 | 1.86 | 1.56
9 Subjects | Mean | 31.01 | 7.14 | 10.32 | 6.46 | 4.56 | 4.55 | 2.84
9 Subjects | Median | 31.42 | 4.51 | 7.81 | 4.77 | 3.65 | 2.43 | 1.65
10 Subjects | Mean | 33.49 | 7.81 | 16.95 | 7.40 | 4.74 | 4.90 | 3.33
10 Subjects | Median | 32.81 | 7.03 | 18.91 | 5.63 | 4.22 | 3.59 | 2.03
No. of subjects | Statistic | LSR | SMR | LRR | SSC | StrSSC | RSSC | Ours
2 Subjects | Mean | 15.05 | 13.75 | 14.86 | 8.13 | 0.76 | 1.35 | 0.43
2 Subjects | Median | 13.91 | 10.42 | 13.07 | 0 | 0 | 0 | 0
3 Subjects | Mean | 22.16 | 21.97 | 21.81 | 10.87 | 1.86 | 1.37 | 0.72
3 Subjects | Median | 20.22 | 19.44 | 19.65 | 1.85 | 0 | 0 | 0
20 Subjects | Mean (Median) | 25.49 | 24.72 | 24.67 | 20.07 | 15.73 | 16.32 | 7.78
No. of subjects | Statistic | LSR | SMR | LRR | SSC | StrSSC | RSSC | Ours
2 Subjects | Mean | 15.49 | 15.25 | 14.76 | 11.80 | 11.48 | 10.90 | 10.80
2 Subjects | Median | 13.91 | 13.42 | 13.07 | 9.00 | 8.50 | 9.50 | 7.00
3 Subjects | Mean | 29.13 | 28.91 | 28.81 | 27.59 | 27.88 | 27.44 | 26.67
3 Subjects | Median | 27.02 | 16.34 | 26.37 | 21.83 | 25.67 | 23.82 | 21.33
10 Subjects | Mean (Median) | 46.25 | 45.97 | 45.53 | 44.89 | 44.25 | 44.03 | 43.95
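The tables above report mean and median clustering performance over repeated trials. The paper's precise error definition is not reproduced on this page; the sketch below computes the standard subspace clustering error, i.e. the percentage of points that remain misclassified under the best one-to-one matching between predicted and ground-truth clusters, assuming NumPy and SciPy are available.

```python
# Standard subspace-clustering error (%) under the best label matching (illustrative sketch;
# assumed, not confirmed, to coincide with the metric used in the tables above).
import numpy as np
from scipy.optimize import linear_sum_assignment


def clustering_error(true_labels, pred_labels):
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    true_ids = np.unique(true_labels)
    pred_ids = np.unique(pred_labels)
    # overlap[i, j] = number of points with true cluster i and predicted cluster j
    overlap = np.zeros((len(true_ids), len(pred_ids)), dtype=int)
    for i, t in enumerate(true_ids):
        for j, p in enumerate(pred_ids):
            overlap[i, j] = np.sum((true_labels == t) & (pred_labels == p))
    # Hungarian matching that maximizes the number of correctly matched points.
    row_ind, col_ind = linear_sum_assignment(-overlap)
    matched = overlap[row_ind, col_ind].sum()
    return 100.0 * (1.0 - matched / true_labels.size)   # error in percent


if __name__ == "__main__":
    y_true = [0, 0, 0, 1, 1, 1, 2, 2]
    y_pred = [2, 2, 2, 0, 0, 1, 1, 1]   # permuted labels with one point misassigned
    print(clustering_error(y_true, y_pred))  # -> 12.5
```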