
Side-information-induced reweighted sparse subspace clustering
1. School of Mathematics and Statistics, Xidian University, Xi'an 710171, Shaanxi, China
2. Key Laboratory of Group & Graph Theories and Applications, Chongqing University of Arts and Sciences, Chongqing 402160, China
Subspace clustering segments a collection of data points drawn from a union of subspaces into clusters, with each cluster corresponding to one subspace. The geometric information of a dataset reflects its intrinsic structure and can be exploited to assist the segmentation. In this paper, we propose side-information-induced reweighted sparse subspace clustering (SRSSC) for high-dimensional data clustering. In our method, the geometric information of the high-dimensional data points in a target space serves as side information to guide the subspace clustering. We solve the resulting model by iteratively reweighted $l_1$-norm minimization to obtain the self-representation coefficients of the data, and then segment the data within the spectral clustering framework. We compare the proposed algorithm with several state-of-the-art algorithms on synthetic data and three well-known real datasets; the proposed SRSSC is the simplest of these methods yet the most effective, and the experimental results verify its effectiveness.
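The pipeline described above combines an iteratively reweighted $l_1$ self-representation step with spectral clustering on the resulting affinity matrix. The sketch below illustrates that generic pipeline in Python; it is not the authors' exact SRSSC model. In particular, the per-column Lasso solver, the reweighting rule, and the optional `side_weights` argument (standing in for the geometric side information) are illustrative assumptions.

```python
# Minimal sketch of reweighted-l1 self-representation followed by spectral
# clustering. NOT the authors' exact SRSSC formulation; `side_weights` is a
# hypothetical stand-in for the geometric side information in the abstract.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering


def reweighted_sparse_coefficients(X, lam=1e-3, n_iters=3, eps=1e-3, side_weights=None):
    """Express each column x_i as X c_i with c_ii = 0 by solving a weighted
    Lasso per column, updating the weights between outer iterations."""
    d, n = X.shape
    C = np.zeros((n, n))
    W = np.ones((n, n)) if side_weights is None else side_weights.copy()
    for _ in range(n_iters):
        for i in range(n):
            w = W[:, i].copy()
            w[i] = np.inf                  # forbid self-representation
            scale = 1.0 / w                # weighted l1 via column rescaling
            D = X * scale                  # D[:, j] = X[:, j] / w_j
            lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
            lasso.fit(D, X[:, i])
            C[:, i] = lasso.coef_ * scale  # undo the rescaling
        # Candes-Wakin-Boyd style reweighting, optionally blended with side info.
        W = 1.0 / (np.abs(C) + eps)
        if side_weights is not None:
            W = W * side_weights
    return C


def reweighted_ssc_clustering(X, n_clusters, **kwargs):
    C = reweighted_sparse_coefficients(X, **kwargs)
    A = np.abs(C) + np.abs(C).T            # symmetric nonnegative affinity
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed",
                              assign_labels="discretize",
                              random_state=0).fit_predict(A)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: two 3-dimensional subspaces of a 30-dimensional ambient space.
    X = np.hstack([rng.standard_normal((30, 3)) @ rng.standard_normal((3, 40))
                   for _ in range(2)])
    X /= np.linalg.norm(X, axis=0, keepdims=True)
    print(reweighted_ssc_clustering(X, n_clusters=2))
```

The regularization parameter `lam`, the number of reweighting passes, and `eps` are tunable hyperparameters in this sketch.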








| Algorithms | | LSR | SMR | LRR | SSC | StrSSC | RSSC | Ours |
|---|---|---|---|---|---|---|---|---|
| Accuracy | Mean | 93.57% | 94.35% | 92.46% | 94.04% | 94.08% | 96.60% | 97.00% |
| | Median | 92.85% | 94.07% | 92.13% | 93.60% | 93.70% | 96.10% | 96.90% |
| No. of subjects | Algorithms | LSR | SMR | LRR | SSC | StrSSC | RSSC | Ours |
|---|---|---|---|---|---|---|---|---|
| 2 Subjects | Mean | 7.35 | 1.75 | 2.13 | 1.87 | 1.23 | 0.57 | 0.48 |
| | Median | 7.03 | 0.78 | 0.78 | 0 | 0 | 0 | 0 |
| 3 Subjects | Mean | 9.92 | 3.03 | 3.49 | 3.29 | 2.84 | 1.08 | 0.78 |
| | Median | 10.41 | 2.08 | 2.08 | 0.52 | 0.52 | 0 | 0.52 |
| 4 Subjects | Mean | 13.66 | 3.25 | 4.86 | 3.80 | 2.95 | 1.65 | 1.07 |
| | Median | 14.07 | 2.34 | 3.91 | 1.95 | 0.78 | 0.39 | 0.39 |
| 5 Subjects | Mean | 17.56 | 3.91 | 5.92 | 4.33 | 3.31 | 2.21 | 1.40 |
| | Median | 17.81 | 2.50 | 4.99 | 2.50 | 1.25 | 0.62 | 0.62 |
| 6 Subjects | Mean | 20.95 | 5.28 | 6.83 | 4.87 | 3.73 | 2.79 | 1.68 |
| | Median | 21.07 | 2.86 | 5.99 | 3.39 | 2.08 | 1.30 | 1.04 |
| 7 Subjects | Mean | 24.31 | 6.38 | 7.75 | 5.40 | 4.01 | 3.43 | 2.03 |
| | Median | 24.10 | 3.13 | 7.14 | 4.46 | 2.68 | 1.79 | 1.34 |
| 8 Subjects | Mean | 27.52 | 6.83 | 11.05 | 5.92 | 4.38 | 3.98 | 2.59 |
| | Median | 27.83 | 3.71 | 7.42 | 4.69 | 3.13 | 1.86 | 1.56 |
| 9 Subjects | Mean | 31.01 | 7.14 | 10.32 | 6.46 | 4.56 | 4.55 | 2.84 |
| | Median | 31.42 | 4.51 | 7.81 | 4.77 | 3.65 | 2.43 | 1.65 |
| 10 Subjects | Mean | 33.49 | 7.81 | 16.95 | 7.40 | 4.74 | 4.90 | 3.33 |
| | Median | 32.81 | 7.03 | 18.91 | 5.63 | 4.22 | 3.59 | 2.03 |
| No. of subjects | Algorithms | LSR | SMR | LRR | SSC | StrSSC | RSSC | Ours |
|---|---|---|---|---|---|---|---|---|
| 2 Subjects | Mean | 15.05 | 13.75 | 14.86 | 8.13 | 0.76 | 1.35 | 0.43 |
| | Median | 13.91 | 10.42 | 13.07 | 0 | 0 | 0 | 0 |
| 3 Subjects | Mean | 22.16 | 21.97 | 21.81 | 10.87 | 1.86 | 1.37 | 0.72 |
| | Median | 20.22 | 19.44 | 19.65 | 1.85 | 0 | 0 | 0 |
| 20 Subjects | Mean (Median) | 25.49 | 24.72 | 24.67 | 20.07 | 15.73 | 16.32 | 7.78 |
| No. of subjects | Algorithms | LSR | SMR | LRR | SSC | StrSSC | RSSC | Ours |
|---|---|---|---|---|---|---|---|---|
| 2 Subjects | Mean | 15.49 | 15.25 | 14.76 | 11.80 | 11.48 | 10.90 | 10.80 |
| | Median | 13.91 | 13.42 | 13.07 | 9.00 | 8.50 | 9.50 | 7.00 |
| 3 Subjects | Mean | 29.13 | 28.91 | 28.81 | 27.59 | 27.88 | 27.44 | 26.67 |
| | Median | 27.02 | 16.34 | 26.37 | 21.83 | 25.67 | 23.82 | 21.33 |
| 10 Subjects | Mean (Median) | 46.25 | 45.97 | 45.53 | 44.89 | 44.25 | 44.03 | 43.95 |
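The accuracy and error figures in the tables above are conventionally measured against the ground-truth labels up to the best permutation of cluster labels. The snippet below is a generic sketch of that convention (Hungarian matching via SciPy); it is not code from the paper.

```python
# Hedged sketch of the standard clustering-error convention: match predicted
# clusters to ground-truth classes by the optimal permutation, then count
# mismatches. Assumes SciPy is available.
import numpy as np
from scipy.optimize import linear_sum_assignment


def clustering_error(true_labels, pred_labels):
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    classes = np.unique(true_labels)
    clusters = np.unique(pred_labels)
    # Cost matrix: negative overlap between each (cluster, class) pair.
    cost = np.zeros((clusters.size, classes.size))
    for i, k in enumerate(clusters):
        for j, c in enumerate(classes):
            cost[i, j] = -np.sum((pred_labels == k) & (true_labels == c))
    row, col = linear_sum_assignment(cost)
    matched = -cost[row, col].sum()
    return 100.0 * (1.0 - matched / true_labels.size)  # error in percent


# Example: after optimal matching, one of six points is mis-clustered,
# so this prints approximately 16.67.
print(clustering_error([0, 0, 0, 1, 1, 1], [1, 1, 0, 0, 0, 0]))
```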