Numerical Algebra, Control and Optimization, December 2020, 10(4): 451-461. doi: 10.3934/naco.2020044

An improved algorithm for generalized least squares estimation

Xiao-Wen Chang 1 and David Titley-Peloquin 2

1. School of Computer Science, McGill University, Montreal, Quebec, Canada H3A 0E9

2. Department of Bioresource Engineering, McGill University, Ste-Anne-de-Bellevue, Quebec, Canada H9X 3V9

* Corresponding author

Received: May 2020. Revised: September 2020. Published: September 2020.

Fund Project: This work was supported in part by NSERC of Canada Grant RGPIN-2017-0513

The textbook direct method for generalized least squares estimation was developed by Christopher C. Paige about 40 years ago. He proposed two algorithms. Suppose that the noise covariance matrix, rather than its factor, is available. Both of Paige's algorithms involve three matrix factorizations. The first does not exploit the matrix structure of the problem, but it can be implemented with blocking techniques to reduce data communication time on modern processors. The second takes advantage of the matrix structure, but its main part cannot be implemented with blocking techniques. In this paper, we propose an improved algorithm. The new algorithm involves only two matrix factorizations instead of three and can be implemented with blocking techniques. We show that, in terms of flop counts, the improved algorithm costs less than Paige's first algorithm in all cases and less than his second algorithm in some cases. Numerical tests show that, in terms of CPU running time, our improved algorithm is faster than both existing algorithms when blocking techniques are used.
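For orientation, the generalized least squares (GLS) estimation problem treated in the paper is, in its standard form (stated here for reference; the paper's own notation may differ slightly),
$ {\hat{\mathbf{x}}} = \arg\min_{ {\mathbf{x}} \in {\mathbb{R}}^{n}} \, ( {\mathbf{y}} - \mathbf{A} {\mathbf{x}})^{\mkern-1.5mu\mathsf{T}} {\boldsymbol{\Sigma}}^{-1} ( {\mathbf{y}} - \mathbf{A} {\mathbf{x}}), $
where $ \mathbf{A} \in {\mathbb{R}}^{m \times n} $ with $ m \ge n $ has full column rank, $ {\mathbf{y}} \in {\mathbb{R}}^{m} $ is the observation vector, and $ {\boldsymbol{\Sigma}} \in {\mathbb{R}}^{m \times m} $ is the symmetric positive definite noise covariance matrix. The algorithms compared in the paper compute $ {\hat{\mathbf{x}}} $ through orthogonal and triangular factorizations rather than by forming $ {\boldsymbol{\Sigma}}^{-1} $ explicitly.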

Citation: Xiao-Wen Chang, David Titley-Peloquin. An improved algorithm for generalized least squares estimation. Numerical Algebra, Control and Optimization, 2020, 10 (4) : 451-461. doi: 10.3934/naco.2020044
References:
[1] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney and D. Sorensen, LAPACK Users' Guide, 3rd edition, SIAM, Philadelphia, PA, 1999. doi: 10.1137/1.9780898718201.
[2] J. Bezanson, A. Edelman, S. Karpinski and V. B. Shah, Julia: A fresh approach to numerical computing, SIAM Review, 59 (2017), 65-98. doi: 10.1137/141000671.
[3] Å. Björck, Numerical Methods for Least Squares Problems, SIAM, Philadelphia, PA, 1996.
[4] Å. Björck, Numerical Methods in Matrix Computations, Springer, 2015.
[5] G. H. Golub and C. F. Van Loan, Matrix Computations, 4th edition, The Johns Hopkins University Press, Baltimore, MD, 2013.
[6] S. Kourouklis and C. C. Paige, A constrained least squares approach to the general Gauss-Markov linear model, Journal of the American Statistical Association, 76 (1981), 620-625.
[7] C. C. Paige, Computer solution and perturbation analysis of generalized linear least squares problems, Math. Comput., 33 (1979), 171-183. doi: 10.2307/2006034.
[8] C. C. Paige, Fast numerically stable computations for generalized linear least squares problems, SIAM J. Numer. Anal., 16 (1979), 165-171. doi: 10.1137/0716012.

[9] R. Schreiber and C. Van Loan, A storage-efficient WY representation for products of Householder transformations, SIAM J. Sci. Stat. Comput., 10 (1989), 53-57. doi: 10.1137/0910005.

Figure 6.1.  Average CPU running time per solve over $10$ solves for $ \mathbf{A}\in {\mathbb{R}}^{m \times n}$ with $n = 80$, $m = 100:200:3500$
Algorithm 1:

1: Compute the QL factorization of $ \mathbf{A} $: $ \mathbf{H}_n\cdots \mathbf{H}_1 \mathbf{A}= \begin{bmatrix} {\boldsymbol{0}} \\ \mathbf{L}_{ \mathbf{A}} \end{bmatrix} $
2: Compute $ \bar {\boldsymbol{\Sigma}}= \mathbf{H}_n\cdots \mathbf{H}_1 {\boldsymbol{\Sigma}} \mathbf{H}_1\cdots \mathbf{H}_n $
3: Compute $ \tilde {\mathbf{y}} = \mathbf{H}_n\cdots \mathbf{H}_1 {\mathbf{y}} $
4: Compute the Cholesky factorization of $ \bar {\boldsymbol{\Sigma}} $: $ \bar {\boldsymbol{\Sigma}}= \mathbf{L} \mathbf{L}^ {\mkern-1.5mu\mathsf{T}} $
5: Solve $ \mathbf{L}(1\!:\!m\!-\!n,1\!:\!m\!-\!n) {\mathbf{z}}_1 = \tilde {\mathbf{y}}(1\!:\!m\!-\!n) $ for $ {\mathbf{z}}_1 $
6: Solve $ \mathbf{L}_{ \mathbf{A}} {\hat{\mathbf{x}}}=\tilde {\mathbf{y}}(m\!-\!n\!+\!1\!:\!m)- \mathbf{L}(m\!-\!n\!+\!1\!:\!m,1\!:\!m\!-\!n) {\mathbf{z}}_1 $ for $ {\hat{\mathbf{x}}} $
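
For illustration, the following is a minimal Python/NumPy sketch of Algorithm 1; it is not the authors' implementation. The function name gls_estimate is ours, the orthogonal factor Q is formed explicitly rather than kept as a product of Householder transformations, and the QL factorization is obtained by reversing the rows and columns of a standard QR factorization, so the sketch does not reflect the factored-form updates, the blocking, or the flop counts discussed in the paper.

import numpy as np
from scipy.linalg import qr, cholesky, solve_triangular

def gls_estimate(A, Sigma, y):
    """GLS estimate x_hat for y = A x + v, where cov(v) is proportional to Sigma."""
    m, n = A.shape
    # Step 1: QL factorization Q^T A = [0; L_A]. Reversing the rows and columns of A,
    # taking a QR factorization, and reversing back yields the QL factors.
    Qb, Rb = qr(A[::-1, ::-1])          # A[::-1, ::-1] = Qb @ Rb, Rb upper triangular
    Q = Qb[::-1, ::-1]                  # orthogonal factor of the QL factorization
    L_A = Rb[::-1, ::-1][m - n:, :]     # n-by-n lower triangular bottom block
    # Steps 2-3: apply the same orthogonal transformation to Sigma and y.
    Sigma_bar = Q.T @ Sigma @ Q
    y_tilde = Q.T @ y
    # Step 4: Cholesky factorization Sigma_bar = L L^T.
    L = cholesky(Sigma_bar, lower=True)
    # Steps 5-6: two triangular solves give the estimate.
    p = m - n
    z1 = solve_triangular(L[:p, :p], y_tilde[:p], lower=True)
    x_hat = solve_triangular(L_A, y_tilde[p:] - L[p:, :p] @ z1, lower=True)
    return x_hat

On a small, well-conditioned test problem, the result of this sketch can be checked against the normal-equations formula np.linalg.solve(A.T @ Si @ A, A.T @ Si @ y) with Si = np.linalg.inv(Sigma); that formula is useful only for verification, not as a recommended way to compute the estimate.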