# American Institute of Mathematical Sciences

July  2017, 13(3): 1587-1599. doi: 10.3934/jimo.2017008

## A fast continuous method for the extreme eigenvalue problem

1. College of Applied Sciences, Beijing University of Technology, Beijing 100124, China
2. School of Mathematics and Statistics, Central South University, Changsha, Hunan 410083, China

* Corresponding author: zbli@lsec.cc.ac.cn

Received: July 2015. Revised: April 2016. Published: December 2016.

Fund Project: This research was supported by the National Natural Science Foundation of China (61179033), the Collaborative Innovation Center on Beijing Society-building and Social Governance, and the Shandong Provincial Natural Science Foundation of China (ZR2013FL032).

Matrix eigenvalue problems play a significant role in many areas of computational science and engineering. In this paper, we propose a fast continuous method for the extreme eigenvalue problem. Building on the continuous methods developed by Golub and Liao, we first convert this nonconvex optimization problem into a minimization problem with a concave objective function and convex constraints. We then propose a continuous method for solving this minimization problem, and adopt a self-adaptive step length strategy to accelerate its convergence. Under mild conditions, we prove the global convergence of the method. Finally, some preliminary numerical results are presented to verify its effectiveness.
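To make the idea concrete, the following is a minimal, illustrative sketch — not the paper's exact algorithm. It discretizes a projected dynamical system for the concave reformulation min f(x) = −xᵀAx over the convex unit ball (whose solution attains the largest eigenvalue of a symmetric A), with a crude self-adaptive step rule standing in for the paper's strategy. The projection, step-update constants, and stopping test are all assumptions made for the sketch.

```python
import numpy as np

def proj_unit_ball(y):
    """Project y onto the closed unit ball {x : ||x|| <= 1}."""
    nrm = np.linalg.norm(y)
    return y if nrm <= 1.0 else y / nrm

def extreme_eigenvalue(A, h=0.5, tol=1e-8, max_iter=10000, seed=0):
    """Largest eigenvalue of symmetric A via a forward-Euler discretization
    of the projected system  dx/dt = P(x - grad f(x)) - x,  f(x) = -x^T A x."""
    rng = np.random.default_rng(seed)
    x = proj_unit_ball(rng.standard_normal(A.shape[0]))
    for _ in range(max_iter):
        g = -2.0 * A @ x                      # gradient of f(x) = -x^T A x
        d = proj_unit_ball(x - g) - x          # projected descent direction
        if np.linalg.norm(d) < tol:            # stationary point reached
            break
        x_new = x + h * d                      # Euler step along the trajectory
        # crude self-adaptive step length: grow on progress, shrink otherwise
        if -(x_new @ A @ x_new) < -(x @ A @ x):
            h = min(1.0, 1.1 * h)
        else:
            h = max(1e-3, 0.5 * h)
        x = x_new
    lam = (x @ A @ x) / (x @ x)                # Rayleigh quotient estimate
    return lam, x
```

With h = 1 and a positive definite A, each step reduces to a normalized power iteration on I + 2A, so convergence to the dominant eigenvector is the expected behavior; the adaptive rule only tunes how quickly the trajectory is traversed.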

Citation: Huan Gao, Zhibao Li, Haibin Zhang. A fast continuous method for the extreme eigenvalue problem. Journal of Industrial and Management Optimization, 2017, 13 (3) : 1587-1599. doi: 10.3934/jimo.2017008
##### References:

[1] A. Auslender, Optimisation: Méthodes Numériques, Masson, Paris, 1976.
[2] M. T. Chu, On the continuous realization of iterative processes, SIAM Review, 30 (1988), 375-387. doi: 10.1137/1030090.
[3] M. T. Chu, Matrix differential equations: A continuous realization process for linear algebra problems, Nonlinear Analysis, Theory, Methods and Applications, 18 (1992), 1125-1146. doi: 10.1016/0362-546X(92)90157-A.
[4] A. Cichocki and R. Unbehauen, Neural networks for computing eigenvalues and eigenvectors, Biological Cybernetics, 68 (1992), 155-164. doi: 10.1007/BF00201437.
[5] F. Facchinei and J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer-Verlag, New York, 2003.
[6] H. R. Fang and Y. Saad, A filtered Lanczos procedure for extreme and interior eigenvalue problems, SIAM Journal on Scientific Computing, 34 (2012), A2220-A2246. doi: 10.1137/110836535.
[7] M. Fukushima, Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems, Mathematical Programming, 53 (1992), 99-110. doi: 10.1007/BF01585696.
[8] H. Gao, Y. H. Dai and X. J. Tong, Barzilai-Borwein-like methods for the extreme eigenvalue problem, Journal of Industrial and Management Optimization, 11 (2015), 999-1019. doi: 10.3934/jimo.2015.11.999.
[9] X. B. Gao, G. H. Golub and L. Z. Liao, Continuous methods for symmetric generalized eigenvalue problems, Linear Algebra and its Applications, 428 (2008), 676-696. doi: 10.1016/j.laa.2007.08.034.
[10] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, 1996.
[11] G. H. Golub and L. Z. Liao, Continuous methods for extreme and interior eigenvalue problems, Linear Algebra and its Applications, 415 (2006), 31-51. doi: 10.1016/j.laa.2005.01.009.
[12] D. R. Han and H. K. Lo, Solving non-additive traffic assignment problems: A descent method for co-coercive variational inequalities, European Journal of Operational Research, 159 (2004), 529-544. doi: 10.1016/S0377-2217(03)00423-5.
[13] B. He, A projection and contraction method for a class of linear complementarity problems and its application in convex quadratic programming, Applied Mathematics and Optimization, 25 (1992), 247-262. doi: 10.1007/BF01182323.
[14] J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. National Academy of Sciences of the United States of America, 79 (1982), 2554-2558. doi: 10.1073/pnas.79.8.2554.
[15] J. J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. National Academy of Sciences of the United States of America, 81 (1984), 3088-3092. doi: 10.1073/pnas.81.10.3088.
[16] J. J. Hopfield and D. W. Tank, Neural computation of decisions in optimization problems, Biological Cybernetics, 52 (1985), 141-152.
[17] L. Z. Liao, A study of the dual affine scaling continuous trajectories for linear programming, Journal of Optimization Theory and Applications, 163 (2014), 548-568. doi: 10.1007/s10957-013-0495-1.
[18] E. E. Ovtchinnikov, Jacobi correction equation, line search, and conjugate gradients in Hermitian eigenvalue computation I: Computing an extreme eigenvalue, SIAM Journal on Numerical Analysis, 46 (2008), 2567-2592. doi: 10.1137/070688742.
[19] W. Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw-Hill Book Company, 1976.
[20] J. J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, New Jersey, 1991.
[21] H. A. van der Vorst and G. H. Golub, 150 years old and still alive: Eigenproblems, in: I. S. Duff (ed.), The State of the Art in Numerical Analysis, 63 (1997), 93-119.
[22] T. Zhu and Z. G. Yu, A simple proof for some important properties of the projection mapping, Mathematical Inequalities and Applications, 7 (2004), 453-456. doi: 10.7153/mia-07-45.

Figure: Sketch of three trajectories of two different orders.
Table 1. Results for Example 1

| n | 100 | 500 | 1000 | 2000 | 3000 | 4000 | 5000 |
|---|---|---|---|---|---|---|---|
| ODE45 (sec.) | 0.328 | 5.297 | 17.2199 | 86.188 | 130.016 | 389.016 | 642.234 |
| Iterations | 31 | 55 | 43 | 51 | 39 | 55 | 60 |
| Time (sec.) | 0.016 | 0.531 | 1.484 | 6.734 | 11.594 | 29.125 | 48.094 |
Table 2. Results for Example 2

| n | 100 | 500 | 1000 | 2000 | 3000 | 4000 | 5000 |
|---|---|---|---|---|---|---|---|
| ODE45 (sec.) | 0.234 | 5.359 | 44.343 | 80.469 | 149.234 | 314.516 | 452.859 |
| Iterations | 43 | 61 | 99 | 47 | 39 | 51 | 51 |
| Time (sec.) | 0.031 | 0.578 | 3.438 | 6.328 | 11.594 | 26.781 | 41.234 |
