
lscov

Least-squares solution in presence of known covariance

Syntax

x = lscov(A,B)
x = lscov(A,B,w)
x = lscov(A,B,V)
x = lscov(A,B,V,alg)
[x,stdx] = lscov(...)
[x,stdx,mse] = lscov(...)
[x,stdx,mse,S] = lscov(...)

Description

x = lscov(A,B) returns the ordinary least squares solution to the linear system of equations A*x = B, i.e., x is the n-by-1 vector that minimizes the sum of squared errors (B - A*x)'*(B - A*x), where A is m-by-n and B is m-by-1. B can also be an m-by-k matrix, and lscov returns one solution for each column of B. When rank(A) < n, lscov sets the maximum possible number of elements of x to zero to obtain a "basic solution".
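
For example, a minimal sketch of the rank-deficient case (the data here are hypothetical):

% The third column of A is the sum of the first two, so rank(A) = 2 < n = 3.
A = [1 0 1; 0 1 1; 1 1 2; 2 1 3];
B = [1; 2; 3; 4];
x = lscov(A,B)                 % one element of x is set exactly to zero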

x = lscov(A,B,w), where w is a length-m vector of real positive weights, returns the weighted least squares solution to the linear system A*x = B, that is, x minimizes (B - A*x)'*diag(w)*(B - A*x). w typically contains either counts or inverse variances.
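
As a quick check with hypothetical data, the weighted solution agrees with the textbook weighted normal equations:

A = [ones(4,1) (1:4)'];          % hypothetical design matrix
B = [2.1; 3.9; 6.2; 8.1];
w = [1; 1; 1; 0.5];              % downweight the last observation

x  = lscov(A,B,w);
x2 = (A'*diag(w)*A) \ (A'*diag(w)*B);   % weighted normal equations; x2 matches x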

x = lscov(A,B,V), where V is an m-by-m real symmetric positive definite matrix, returns the generalized least squares solution to the linear system A*x = B with covariance matrix proportional to V, that is, x minimizes (B - A*x)'*inv(V)*(B - A*x).
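
Equivalently, since chol(V) returns an upper-triangular R with V = R'*R, the problem can be whitened by R' and solved as ordinary least squares; a sketch with made-up data:

A = [ones(4,1) (1:4)'];        % hypothetical design matrix
B = [2; 4; 6; 9];
V = 0.5*eye(4) + 0.5*ones(4);  % equicorrelated, positive definite

R = chol(V);                   % V = R'*R
x  = lscov(A,B,V);
x2 = (R'\A) \ (R'\B);          % same solution via explicit whitening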

More generally, V can be positive semidefinite, and lscov returns x that minimizes e'*e, subject to A*x + T*e = B, where the minimization is over x and e, and T*T' = V. When V is semidefinite, this problem has a solution only if B is consistent with A and V (that is, B is in the column space of [A T]); otherwise lscov returns an error.
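
For example, with a hypothetical singular V whose third diagonal entry is zero, the third equation is treated as noiseless and holds exactly in the solution:

A = [1 0; 0 1; 1 1];
B = [1; 2; 3.5];
V = diag([1 1 0]);             % positive semidefinite, rank 2

x = lscov(A,B,V);              % lscov switches to the orthogonal algorithm
A(3,:)*x - B(3)                % zero: the noise-free third row is fit exactly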

By default, lscov computes the Cholesky decomposition of V and, in effect, inverts that factor to transform the problem into ordinary least squares. However, if lscov determines that V is semidefinite, it uses an orthogonal decomposition algorithm that avoids inverting V.

x = lscov(A,B,V,alg) specifies the algorithm used to compute x when V is a matrix. alg can have the following values:

  • 'chol' uses the Cholesky decomposition of V.

  • 'orth' uses orthogonal decompositions, and is more appropriate when V is ill-conditioned or singular, but is computationally more expensive; see the sketch after this list.
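
The following sketch (the data and the near-singular V are made up for illustration) forces the orthogonal algorithm:

A = [ones(4,1) (1:4)'];
B = [1; 3; 5; 7.5];
V = diag([1 1 1 1e-12]);       % nearly singular, so 'chol' may be unreliable
x = lscov(A,B,V,'orth')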

[x,stdx] = lscov(...) returns the estimated standard errors of x. When A is rank deficient, stdx contains zeros in the elements corresponding to the necessarily zero elements of x.

[x,stdx,mse] = lscov(...) returns the mean squared error. If B is assumed to have covariance matrix σ²*V (or σ²*diag(1./w)), then mse is an estimate of σ².
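
In the ordinary least squares case (V implicitly the identity), mse reduces to the familiar unbiased residual variance; a sketch with hypothetical data:

A = [ones(4,1) (1:4)'];          % hypothetical data, m = 4, n = 2
B = [1.1; 1.9; 3.2; 3.8];

[x,stdx,mse] = lscov(A,B);
r = B - A*x;
mse2 = (r'*r) / (size(A,1) - size(A,2));   % equals mse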

[x,stdx,mse,S] = lscov(...) returns the estimated covariance matrix of x. When A is rank deficient, S contains zeros in the rows and columns corresponding to the necessarily zero elements of x. lscov cannot return S if it is called with multiple right-hand sides, that is, if size(B,2) > 1.

The standard formulas for these quantities, when A and V are full rank, are

  • x = inv(A'*inv(V)*A)*A'*inv(V)*B

  • mse = B'*(inv(V) - inv(V)*A*inv(A'*inv(V)*A)*A'*inv(V))*B./(m-n)

  • S = inv(A'*inv(V)*A)*mse

  • stdx = sqrt(diag(S))

However, lscov uses methods that are faster and more stable, and are applicable to rank deficient cases.
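
As a numerical sanity check, the textbook formulas above can be compared against lscov's outputs on a small full-rank problem (data made up for illustration):

m = 5; n = 2;
A = [ones(m,1) (1:m)'];
B = [1; 2.1; 2.9; 4.2; 5.1];
V = 0.5*eye(m) + 0.5*ones(m);
[x,stdx,mse,S] = lscov(A,B,V);

x2    = inv(A'*inv(V)*A)*A'*inv(V)*B;
mse2  = B'*(inv(V) - inv(V)*A*inv(A'*inv(V)*A)*A'*inv(V))*B ./ (m-n);
S2    = inv(A'*inv(V)*A)*mse2;
stdx2 = sqrt(diag(S2));
% each pair (x,x2), (mse,mse2), (S,S2), (stdx,stdx2) agrees to roundoff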

lscov assumes that the covariance matrix of B is known only up to a scale factor. mse is an estimate of that unknown scale factor, and lscov scales the outputs S and stdx appropriately. However, if V is known to be exactly the covariance matrix of B, then that scaling is unnecessary. To get the appropriate estimates in this case, rescale S and stdx by 1/mse and sqrt(1/mse), respectively.
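
A sketch of that rescaling, using hypothetical data and assuming V is the exact covariance of B:

A = [ones(5,1) (1:5)'];        % hypothetical data
B = [1; 2.1; 2.9; 4.2; 5.1];
V = eye(5);                    % assumed to be the exact covariance of B
[x,stdx,mse,S] = lscov(A,B,V);

S_exact    = S / mse;          % rescale S by 1/mse
stdx_exact = stdx / sqrt(mse); % rescale stdx by sqrt(1/mse)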

Examples

Example 1 — Computing Ordinary Least Squares

The MATLAB® backslash operator (\) enables you to perform linear regression by computing ordinary least-squares (OLS) estimates of the regression coefficients. You can also use lscov to compute the same OLS estimates. By using lscov, you can also compute estimates of the standard errors for those coefficients, and an estimate of the standard deviation of the regression error term:

x1 = [.2 .5 .6 .8 1.0 1.1]';
x2 = [.1 .3 .4 .9 1.1 1.4]';
X = [ones(size(x1)) x1 x2];
y = [.17 .26 .28 .23 .27 .34]';

a = X\y

a =
    0.1203
    0.3284
   -0.1312

[b,se_b,mse] = lscov(X,y)

b =
    0.1203
    0.3284
   -0.1312

se_b =
    0.0643
    0.2267
    0.1488

mse =
    0.0015

Example 2 — Computing Weighted Least Squares

Use lscov to compute a weighted least-squares (WLS) fit by providing a vector of relative observation weights. For example, you might want to downweight the influence of an unreliable observation on the fit:

w = [1 1 1 1 1 .1]';

[bw,sew_b,msew] = lscov(X,y,w)

bw =
    0.1046
    0.4614
   -0.2621

sew_b =
    0.0309
    0.1152
    0.0814

msew =
   3.4741e-004

Example 3 — Computing General Least Squares

Use lscov to compute a general least-squares (GLS) fit by providing an observation covariance matrix. For example, your data may not be independent:

V = .2*ones(length(x1)) + .8*diag(ones(size(x1)));

[bg,sew_b,mseg] = lscov(X,y,V)

bg =
    0.1203
    0.3284
   -0.1312

sew_b =
    0.0672
    0.2267
    0.1488

mseg =
    0.0019

Example 4 — Estimating the Coefficient Covariance Matrix

Compute an estimate of the coefficient covariance matrix for either OLS, WLS, or GLS fits. The coefficient standard errors are equal to the square roots of the values on the diagonal of this covariance matrix:

[b,se_b,mse,S] = lscov(X,y);
S

S =
    0.0041   -0.0130    0.0075
   -0.0130    0.0514   -0.0328
    0.0075   -0.0328    0.0221

[se_b sqrt(diag(S))]

ans =
    0.0643    0.0643
    0.2267    0.2267
    0.1488    0.1488

Algorithms

The vector x minimizes the quantity (A*x-B)'*inv(V)*(A*x-B). The classical linear algebra solution to this problem is

x = inv(A'*inv(V)*A)*A'*inv(V)*B

but the lscov function instead computes the QR decomposition of A and then modifies Q by V.
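
One numerically stable route with the same effect, sketched here with made-up data (this is an illustration, not lscov's exact internal code), whitens by the Cholesky factor of V and then solves by QR rather than forming the normal equations:

A = [ones(4,1) (1:4)'];        % hypothetical full-rank A
B = [1; 2; 3; 4.5];
V = 0.5*eye(4) + 0.5*ones(4);  % positive definite V

L = chol(V,'lower');           % V = L*L'
[Q,R] = qr(L\A, 0);            % economy-size QR of the whitened A
x = R \ (Q'*(L\B))             % matches lscov(A,B,V)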

References

[1] Strang, G.,Introduction to Applied Mathematics, Wellesley-Cambridge, 1986, p. 398.

Introduced before R2006a