lscov
Least-squares solution in presence of known covariance
Syntax
x = lscov(A,B)
x = lscov(A,B,w)
x = lscov(A,B,V)
x = lscov(A,B,V,alg)
[x,stdx] = lscov(...)
[x,stdx,mse] = lscov(...)
[x,stdx,mse,S] = lscov(...)
Description
x = lscov(A,B) returns the ordinary least squares solution to the linear system of equations A*x = B, i.e., x is the n-by-1 vector that minimizes the sum of squared errors (B - A*x)'*(B - A*x), where A is m-by-n and B is m-by-1. B can also be an m-by-k matrix, and lscov returns one solution for each column of B. When rank(A) < n, lscov sets the maximum possible number of elements of x to zero to obtain a "basic solution".
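As a quick illustration of the rank-deficient case (the matrices here are made up for the example), a design matrix whose third column duplicates its first has rank 2, so one element of the basic solution is forced to zero:

```matlab
% A is 4-by-3 but rank(A) = 2, since column 3 equals column 1.
A = [1 2 1; 1 3 1; 1 4 1; 1 5 1];
B = [2; 3; 4; 5];
x = lscov(A,B)   % basic solution: one element of x is set to zero
```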
x = lscov(A,B,w), where w is a vector of length m of real positive weights, returns the weighted least squares solution to the linear system A*x = B, that is, x minimizes (B - A*x)'*diag(w)*(B - A*x). w typically contains either counts or inverse variances.
x = lscov(A,B,V), where V is an m-by-m real symmetric positive definite matrix, returns the generalized least squares solution to the linear system A*x = B with covariance matrix proportional to V, that is, x minimizes (B - A*x)'*inv(V)*(B - A*x).
More generally, V can be positive semidefinite, and lscov returns x that minimizes e'*e, subject to A*x + T*e = B, where the minimization is over x and e, and T*T' = V. When V is semidefinite, this problem has a solution only if B is consistent with A and V (that is, B is in the column space of [A T]); otherwise lscov returns an error.
By default, lscov computes the Cholesky decomposition of V and, in effect, inverts that factor to transform the problem into ordinary least squares. However, if lscov determines that V is semidefinite, it uses an orthogonal decomposition algorithm that avoids inverting V.
x = lscov(A,B,V,alg) specifies the algorithm used to compute x when V is a matrix. alg can have the following values:

'chol' uses the Cholesky decomposition of V.

'orth' uses orthogonal decompositions, and is more appropriate when V is ill-conditioned or singular, but is computationally more expensive.
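A sketch of when 'orth' is needed (the matrices below are invented for illustration): a rank-1 covariance matrix has no Cholesky factor, but the orthogonal algorithm still handles it, provided B is consistent with A and V.

```matlab
% Singular (rank-1) covariance: V = T*T' with T a single column.
T = [1; 1; 2];
V = T*T';                  % positive semidefinite, chol(V) does not exist
A = [1 0; 0 1; 1 1];
B = A*[2; 3] + T*0.1;      % B lies in the column space of [A T] by construction
x = lscov(A,B,V,'orth')    % 'chol' would fail here
```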
[x,stdx] = lscov(...) returns the estimated standard errors of x. When A is rank deficient, stdx contains zeros in the elements corresponding to the necessarily zero elements of x.
[x,stdx,mse] = lscov(...) returns the mean squared error. If B is assumed to have covariance matrix σ²V (or σ²×diag(1./w)), then mse is an estimate of σ².
[x,stdx,mse,S] = lscov(...) returns the estimated covariance matrix of x. When A is rank deficient, S contains zeros in the rows and columns corresponding to the necessarily zero elements of x. lscov cannot return S if it is called with multiple right-hand sides, that is, if size(B,2) > 1.
The standard formulas for these quantities, when A and V are full rank, are

x = inv(A'*inv(V)*A)*A'*inv(V)*B
mse = B'*(inv(V) - inv(V)*A*inv(A'*inv(V)*A)*A'*inv(V))*B./(m-n)
S = inv(A'*inv(V)*A)*mse
stdx = sqrt(diag(S))
However, lscov uses methods that are faster and more stable, and are applicable to rank deficient cases.
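The equivalence of the closed-form formulas and the lscov outputs can be checked numerically on a small, well-conditioned problem (a sketch with made-up data, not how lscov actually computes the result):

```matlab
m = 5; n = 2;
A = [ones(m,1) (1:m)'];
B = [1.1; 1.9; 3.2; 3.9; 5.1];
V = eye(m) + 0.1*ones(m);                 % full-rank covariance matrix
x_formula = inv(A'*inv(V)*A)*A'*inv(V)*B; % textbook GLS formula
[x,stdx,mse,S] = lscov(A,B,V);
% x and x_formula agree up to rounding error
```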
lscov assumes that the covariance matrix of B is known only up to a scale factor. mse is an estimate of that unknown scale factor, and lscov scales the outputs S and stdx appropriately. However, if V is known to be exactly the covariance matrix of B, then that scaling is unnecessary. To get the appropriate estimates in this case, you should rescale S and stdx by 1/mse and sqrt(1/mse), respectively.
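For instance, assuming x, stdx, mse, and S come from a previous lscov call in which V was the exact (not merely proportional) covariance matrix of B, the rescaling looks like this:

```matlab
[x,stdx,mse,S] = lscov(A,B,V);
S_known    = S/mse;            % rescale S by 1/mse
stdx_known = stdx/sqrt(mse);   % rescale stdx by sqrt(1/mse)
```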
Examples
Example 1 — Computing Ordinary Least Squares
The MATLAB® backslash operator (\) enables you to perform linear regression by computing ordinary least-squares (OLS) estimates of the regression coefficients. You can also use lscov to compute the same OLS estimates. By using lscov, you can also compute estimates of the standard errors for those coefficients, and an estimate of the standard deviation of the regression error term:
x1 = [.2 .5 .6 .8 1.0 1.1]';
x2 = [.1 .3 .4 .9 1.1 1.4]';
X = [ones(size(x1)) x1 x2];
y = [.17 .26 .28 .23 .27 .34]';

a = X\y
a =
    0.1203
    0.3284
   -0.1312

[b,se_b,mse] = lscov(X,y)
b =
    0.1203
    0.3284
   -0.1312
se_b =
    0.0643
    0.2267
    0.1488
mse =
    0.0015
Example 2 — Computing Weighted Least Squares
Use lscov to compute a weighted least-squares (WLS) fit by providing a vector of relative observation weights. For example, you might want to downweight the influence of an unreliable observation on the fit:
w = [1 1 1 1 1 .1]';

[bw,sew_b,msew] = lscov(X,y,w)
bw =
    0.1046
    0.4614
   -0.2621
sew_b =
    0.0309
    0.1152
    0.0814
msew =
   3.4741e-004
Example 3 — Computing General Least Squares
Use lscov to compute a general least-squares (GLS) fit by providing an observation covariance matrix. For example, your data may not be independent:
V = .2*ones(length(x1)) + .8*diag(ones(size(x1)));

[bg,sew_b,mseg] = lscov(X,y,V)
bg =
    0.1203
    0.3284
   -0.1312
sew_b =
    0.0672
    0.2267
    0.1488
mseg =
    0.0019
Example 4 — Estimating the Coefficient Covariance Matrix
Compute an estimate of the coefficient covariance matrix for either OLS, WLS, or GLS fits. The coefficient standard errors are equal to the square roots of the values on the diagonal of this covariance matrix:
[b,se_b,mse,S] = lscov(X,y);
S
S =
    0.0041   -0.0130    0.0075
   -0.0130    0.0514   -0.0328
    0.0075   -0.0328    0.0221

[se_b sqrt(diag(S))]
ans =
    0.0643    0.0643
    0.2267    0.2267
    0.1488    0.1488
Algorithms
The vector x minimizes the quantity (A*x-B)'*inv(V)*(A*x-B). The classical linear algebra solution to this problem is

x = inv(A'*inv(V)*A)*A'*inv(V)*B

but the lscov function instead computes the QR decomposition of A and then modifies Q by V.
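For a positive definite V, the reduction to ordinary least squares can be sketched as follows. This illustrates the idea of whitening the problem with the Cholesky factor; lscov's internal implementation differs in detail:

```matlab
R  = chol(V);      % upper triangular factor, V = R'*R
At = R' \ A;       % "whitened" design matrix
Bt = R' \ B;       % "whitened" observations
x  = At \ Bt;      % ordinary least squares on the transformed problem
```

Minimizing (Bt - At*x)'*(Bt - At*x) is algebraically the same as minimizing (B - A*x)'*inv(V)*(B - A*x), since inv(V) = inv(R)*inv(R').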
References
[1] Strang, G., Introduction to Applied Mathematics, Wellesley-Cambridge, 1986, p. 398.