Is the sample covariance matrix always symmetric and positive definite?


33

When computing the sample covariance matrix, is one guaranteed to get a symmetric and positive-definite matrix?

Currently my problem involves a sample of 4600 observation vectors with 24 dimensions each.


For the sample covariance matrix I use the formula $$Q_n = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^\top,$$ where $n$ is the number of samples and $\bar{x}$ is the sample mean.
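This formula is straightforward to check numerically. A minimal numpy sketch, using random placeholder data of the stated shape (4600 observations, 24 dimensions); all variable names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4600, 24))  # placeholder data: n = 4600 observations, k = 24 dimensions

xbar = X.mean(axis=0)                # sample mean vector
Z = X - xbar                         # rows are (x_i - xbar)^T
Qn = Z.T @ Z / X.shape[0]            # Q_n = (1/n) * sum of outer products

symmetric = np.allclose(Qn, Qn.T)       # symmetry holds up to rounding
min_eig = np.linalg.eigvalsh(Qn).min()  # smallest eigenvalue, >= 0 up to rounding
```

With far more observations than dimensions, as here, the smallest eigenvalue is typically well above zero.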
Morten

4
That's usually called 'computing the sample covariance matrix' or 'estimating the covariance matrix', rather than 'sampling the covariance matrix'.
Glen_b -Reinstate Monica

1
A common situation in which the covariance matrix is not definite is when the 24 "dimensions" record the composition of a mixture that sums to 100%.
whuber
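A hedged sketch of the situation described in the comment above: when every observation is a composition summing to 100%, the centered data lie in a hyperplane, so the sample covariance matrix is singular and hence not positive definite (synthetic data; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.random((500, 24))
X = 100 * raw / raw.sum(axis=1, keepdims=True)  # each row is a composition summing to 100%

Z = X - X.mean(axis=0)       # centered rows also sum to zero
Q = Z.T @ Z / X.shape[0]     # sample covariance matrix

# The all-ones vector y satisfies Z @ y = 0, so y^T Q y = 0: Q is singular.
y = np.ones(24)
quad_form = y @ Q @ y
rank = np.linalg.matrix_rank(Q)
```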

Answers:


41

For a sample of vectors $x_i = (x_{i1}, \dots, x_{ik})^\top$, with $i = 1, \dots, n$, the sample mean vector is

$$\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i,$$
and the sample covariance matrix is
$$Q = \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^\top.$$
For a nonzero vector $y \in \mathbb{R}^k$, we have
$$y^\top Q y = y^\top \left( \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^\top \right) y$$
$$= \frac{1}{n} \sum_{i=1}^n y^\top (x_i - \bar{x})(x_i - \bar{x})^\top y$$
$$= \frac{1}{n} \sum_{i=1}^n \left( (x_i - \bar{x})^\top y \right)^2 \geq 0. \qquad (*)$$
Therefore, $Q$ is always positive semi-definite.

The additional condition for Q to be positive definite was given in whuber's comment below. It goes as follows.

Define $z_i = x_i - \bar{x}$, for $i = 1, \dots, n$. For any nonzero $y \in \mathbb{R}^k$, $(*)$ is zero if and only if $z_i^\top y = 0$ for each $i = 1, \dots, n$. Suppose the set $\{z_1, \dots, z_n\}$ spans $\mathbb{R}^k$. Then there are real numbers $\alpha_1, \dots, \alpha_n$ such that $y = \alpha_1 z_1 + \dots + \alpha_n z_n$, and so $y^\top y = \alpha_1 z_1^\top y + \dots + \alpha_n z_n^\top y = 0$, yielding that $y = 0$, a contradiction. Hence, if the $z_i$'s span $\mathbb{R}^k$, then $Q$ is positive definite. This condition is equivalent to $\operatorname{rank}[z_1 \dots z_n] = k$.
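As a sanity check on this argument, the identity $(*)$ and the rank condition can be verified numerically (a sketch with small random data; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 10, 4
X = rng.standard_normal((n, k))

Z = X - X.mean(axis=0)   # rows are z_i^T
Q = Z.T @ Z / n          # sample covariance matrix

# (*) computed two ways: y^T Q y equals the average of (z_i^T y)^2.
y = rng.standard_normal(k)
lhs = y @ Q @ y
rhs = np.mean((Z @ y) ** 2)

# Rank condition: the z_i span R^k iff rank [z_1 ... z_n] = k; then Q is positive definite.
rank = np.linalg.matrix_rank(Z)
min_eig = np.linalg.eigvalsh(Q).min()
```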


2
I like this approach, but would advise some care: Q is not necessarily positive definite. The (necessary and sufficient) conditions for it to be so are described in my comment to Konstantin's answer.
whuber

1
Since the rank of $[z_1, z_2, \dots, z_n]$ is less than or equal to $k$, the condition can be simplified to: the rank is equal to $k$.
an offer can't refuse

13

A correct covariance matrix is always symmetric and positive *semi*definite.

The covariance between two variables is defined as $\sigma(x, y) = E\left[(x - E(x))(y - E(y))\right]$.

This equation doesn't change if you switch the positions of x and y. Hence the matrix has to be symmetric.

It also has to be positive *semi-*definite because:

You can always find a transformation of your variables such that the covariance matrix becomes diagonal. On the diagonal you find the variances of your transformed variables, which are either zero or positive, so it is easy to see that the transformed matrix is positive semidefinite. Since definiteness is invariant under such a change of coordinates, it follows that the covariance matrix is positive semidefinite in any chosen coordinate system.

When you estimate your covariance matrix (that is, when you calculate your sample covariance) with the formula you stated above, it will obviously still be symmetric. It also has to be positive semidefinite (I think), because for each sample, the pdf that gives each sample point equal probability has the sample covariance as its covariance (somebody please verify this), so everything stated above still applies.
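The claim in the last paragraph is easy to verify numerically: the discrete distribution that puts probability $1/n$ on each sample point has exactly the $1/n$-version of the sample covariance as its covariance. A small sketch with made-up numbers:

```python
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 0.5], [2.0, 4.0], [0.0, 1.0]])  # 4 sample points in R^2
n = X.shape[0]
p = np.full(n, 1.0 / n)  # equal-probability pmf on the sample points

# Covariance of the discrete distribution: sum_i p_i (x_i - mu)(x_i - mu)^T
mu = p @ X
cov_dist = sum(p[i] * np.outer(X[i] - mu, X[i] - mu) for i in range(n))

# (1/n) sample covariance from the formula in the question
Z = X - X.mean(axis=0)
cov_sample = Z.T @ Z / n
```

The two matrices agree exactly, so any property of a true covariance matrix (symmetry, positive semi-definiteness) carries over to the sample covariance.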


1
PS: I am starting to think that this wasn't your question...
Konstantin Schubert

But if you want to know whether your sampling algorithm guarantees it, you will have to state how you are sampling.
Konstantin Schubert

1
Morten, the symmetry is immediate from the formula. To show semi-definiteness, you need to establish that $u^\top Q_n u \geq 0$ for any vector $u$. But $Q_n$ is $1/n$ times a sum of $v_i v_i^\top$ (where $v_i = x_i - \bar{x}$), whence $n\, u^\top Q_n u$ is a sum of $u^\top (v_i v_i^\top) u = (u^\top v_i)(v_i^\top u)$, which is the square of $u^\top v_i$. Because $n > 0$ and a sum of squares cannot ever be negative, $u^\top Q_n u \geq 0$, QED. This also shows that $u^\top Q_n u = 0$ precisely for those vectors $u$ which are orthogonal to all the $v_i$ (i.e., $u^\top v_i = 0$ for all $i$). When the $v_i$ span, then $u = 0$ and $Q_n$ is definite.
whuber

1
@Morten The transformation-invariance is pretty clear if you understand a matrix multiplication geometrically. Think of your vector as an arrow. The numbers that describe your vector change with the coordinate system, but the direction and length of your vector don't. Now, a multiplication with a matrix means that you change the length and direction of that arrow, but again the effect is geometrically the same in each coordinate system. The same holds for a scalar product: it is defined geometrically, and geometry is transformation-invariant. So your equation has the same result in all systems.
Konstantin Schubert

1
@Morten When you think in coordinates, the argument goes like this: when $A$ is your transformation matrix, then $v' = Av$, with $v'$ the transformed coordinate vector, and $M' = AMA^T$. So when you transform each element in the equation $v^T M v > 0$, you get $v'^T M' v' = (Av)^T AMA^T (Av)$, which equals $v^T A^T A M A^T A v$, and, because $A$ is orthogonal, $A^T A$ is the unit matrix and we again get $v^T M v > 0$. This means that the transformed and the untransformed equations have the same scalar as result, so they are either both greater than zero or both not.
Konstantin Schubert
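A quick numerical check of this coordinate computation, using a random orthogonal matrix obtained via QR decomposition (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
k = 5
B = rng.standard_normal((k, k))
M = B @ B.T                       # a symmetric positive semi-definite matrix
v = rng.standard_normal(k)

A, _ = np.linalg.qr(rng.standard_normal((k, k)))  # random orthogonal matrix A
v2 = A @ v                        # transformed vector v' = A v
M2 = A @ M @ A.T                  # transformed matrix M' = A M A^T

before = v @ M @ v
after = v2 @ M2 @ v2              # same scalar in the new coordinates
```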

0

Variance-covariance matrices are always symmetric, as can be proven from the equation used to calculate each term of said matrix.

Also, Variance-Covariance matrices are always square matrices of size n, where n is the number of variables in your experiment.

Eigenvectors of symmetric matrices are always orthogonal.

With PCA, you determine the eigenvalues of the matrix to see if you could reduce the number of variables used in your experiment.


1
Welcome Gen. Note that your username, identicon, & a link to your user page are automatically added to every post you make, so there is no need to sign your posts.
Antoine Vernet

3
This answer could be improved by addressing the issue of positive definiteness
Silverfish

This doesn't really answer the question: it's just a collection of unsupported assertions that might or might not be relevant. Could you reframe it in a way that shows how the question is answered and explains the reasoning?
whuber

0

I would add to the nice argument of Zen the following, which explains why we often say that the covariance matrix is positive definite if $n - 1 \geq k$.

If $x_1, x_2, \dots, x_n$ are a random sample from a continuous probability distribution, then $x_1, x_2, \dots, x_n$ are almost surely (in the probability-theory sense) linearly independent. Now, $z_1, z_2, \dots, z_n$ are not linearly independent, because $\sum_{i=1}^n z_i = 0$; but because the $x_1, x_2, \dots, x_n$ are a.s. linearly independent, the $z_1, z_2, \dots, z_n$ a.s. span a subspace of dimension $n - 1$. If $n - 1 \geq k$, they also span $\mathbb{R}^k$.

To conclude, if $x_1, x_2, \dots, x_n$ are a random sample from a continuous probability distribution and $n - 1 \geq k$, the covariance matrix is positive definite.
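This threshold shows up directly in simulation: with draws from a continuous distribution, the $1/n$ sample covariance is singular when $n - 1 < k$ and almost surely positive definite when $n - 1 \geq k$. A sketch with arbitrary sizes and seed:

```python
import numpy as np

rng = np.random.default_rng(4)

def min_cov_eigenvalue(n, k):
    """Smallest eigenvalue of the (1/n) sample covariance of n draws in R^k."""
    X = rng.standard_normal((n, k))
    Z = X - X.mean(axis=0)
    return np.linalg.eigvalsh(Z.T @ Z / n).min()

k = 6
small = min_cov_eigenvalue(k, k)      # n - 1 = k - 1 < k: singular, min eigenvalue ~ 0
large = min_cov_eigenvalue(k + 1, k)  # n - 1 = k: almost surely positive definite
```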


0

For those with a non-mathematical background like me who don't quickly grasp abstract mathematical formulae, here is a worked-out example in Excel for the most upvoted answer. The covariance matrix can also be derived in other ways.

[Excel screenshots not reproduced here]


Could you explain how this spreadsheet demonstrates positive-definiteness of the covariance matrix?
whuber

It does not. I had a hard time visualizing the covariance matrix in its notational form itself. So I created this sheet for myself and thought it could help someone.
Parikshit Bhinde

Please, then, edit it to include an answer to the question.
whuber

Done :) Thanks for suggesting.
Parikshit Bhinde

The question is "is one then guaranteed to get a symmetric and positive-definite matrix?" I am unable to perceive any element of your post that addresses this, because (1) it never identifies a covariance matrix; (2) it does not demonstrate positive-definiteness of anything.
whuber
Licensed under cc by-sa 3.0 with attribution required.