What does a negative R-squared mean?


17

Let's say I have some data, and then I fit the data with a model (non-linear regression). Then I calculate R-squared ($R^2$).

When R-squared is negative, what does that mean? Does it mean my model is bad? I know the range of $R^2$ can be $[-1, 1]$. When $R^2$ is 0, what does that mean as well?


4
This means you have made a mistake, because $R^2$ lies in $[0, 1]$ by definition. Adjusted $R^2$, on the other hand, can be negative, in which case you can safely assume that your model fits the data very poorly. When $R^2$ is exactly zero, it means that $\bar{y}$ predicts $y$ just as well as the least-squares regression line itself.
dsaxton

1
This is possible for regression without an intercept, e.g. Stats.stackexchange.com/questions/164586/…



@ung, I would suggest this is perhaps a duplicate of that question ... do you think they are sufficiently distinct? (If anything, this question seems better than the other one because there is no distracting SPSS syntax, but the answers in the other thread are very good and seem to cover this question too.)
Silverfish

Answers:


37

$R^2$ can be negative; it just means that:

  1. The model fits your data very poorly
  2. You did not set an intercept

For the people saying that $R^2$ is between 0 and 1, this is not the case. While a negative value for something with the word 'squared' in it might sound like it breaks the rules of math, it can happen in an $R^2$ model without an intercept. To understand why, we need to look at how $R^2$ is calculated.

This is a bit lengthy. If you want the answer without understanding it, then skip to the end. Otherwise, I have tried to write this in simple words.

First, let's define three variables: $RSS$, $TSS$, and $ESS$.

Calculating $RSS$:

For every independent variable $x$, we have a dependent variable $y$. We plot a linear line of best fit, which predicts the value of $y$ for each value of $x$. Let's call the $y$ values the line predicts $\hat{y}$. The error between what your line predicts and the actual $y$ value can be found by subtraction. All these differences are squared and added up, which gives the Residual Sum of Squares $RSS$.

Putting that into an equation, $RSS = \sum (y - \hat{y})^2$.

Calculating $TSS$:

We can calculate the mean value of $y$, which is called $\bar{y}$. If we plot $\bar{y}$, it is just a horizontal line through the data, because it is constant. What we can do with it is subtract $\bar{y}$ (the mean value of $y$) from every actual value of $y$. The results are squared and added together, which gives the Total Sum of Squares $TSS$.

Putting that into an equation, $TSS = \sum (y - \bar{y})^2$.

Calculating $ESS$:

The differences between $\hat{y}$ (the $y$ value predicted by the line) and the mean value $\bar{y}$ are squared and added. This is the Explained Sum of Squares, which equals $\sum (\hat{y} - \bar{y})^2$.
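To make those three quantities concrete, here is a minimal numeric sketch (the data values are made up purely for illustration) that fits an ordinary least-squares line with an intercept and computes $RSS$, $TSS$ and $ESS$ exactly as defined above:

```python
import numpy as np

# Made-up data, purely for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Ordinary least-squares line WITH an intercept: y_hat = slope * x + intercept
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = slope * x + intercept
y_bar = y.mean()

rss = np.sum((y - y_hat) ** 2)      # Residual Sum of Squares
tss = np.sum((y - y_bar) ** 2)      # Total Sum of Squares
ess = np.sum((y_hat - y_bar) ** 2)  # Explained Sum of Squares

print(rss, tss, ess)
```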

Remember, $TSS = \sum (y - \bar{y})^2$, but we can add $+\hat{y} - \hat{y}$ into it, because it cancels itself out. Therefore, $TSS = \sum (y - \hat{y} + \hat{y} - \bar{y})^2$. Expanding these brackets, we get $TSS = \sum (y - \hat{y})^2 + 2\sum (y - \hat{y})(\hat{y} - \bar{y}) + \sum (\hat{y} - \bar{y})^2$.

When, and only when, the line is plotted with an intercept, the following is always true: $2\sum (y - \hat{y})(\hat{y} - \bar{y}) = 0$. Therefore, $TSS = \sum (y - \hat{y})^2 + \sum (\hat{y} - \bar{y})^2$, which you may notice just means that $TSS = RSS + ESS$. If we divide all terms by $TSS$ and rearrange, we get $1 - \frac{RSS}{TSS} = \frac{ESS}{TSS}$.

Here's the important part:

$R^2$ is defined as how much of the variance is explained by your model (how good your model is). In equation form, that's $R^2 = 1 - \frac{RSS}{TSS}$. Look familiar? When the line is plotted with an intercept, we can substitute this to get $R^2 = \frac{ESS}{TSS}$. Since both the numerator and denominator are sums of squares, $R^2$ must be non-negative.

BUT

When we don't specify an intercept, $2\sum (y - \hat{y})(\hat{y} - \bar{y})$ does not necessarily equal 0. This means that $TSS = RSS + ESS + 2\sum (y - \hat{y})(\hat{y} - \bar{y})$.

Dividing all terms by $TSS$, we get $1 - \frac{RSS}{TSS} = \frac{ESS + 2\sum (y - \hat{y})(\hat{y} - \bar{y})}{TSS}$.

Finally, we substitute to get $R^2 = \frac{ESS + 2\sum (y - \hat{y})(\hat{y} - \bar{y})}{TSS}$. This time, the numerator has a term in it which is not a sum of squares, so it can be negative. This would make $R^2$ negative. When would this happen? $2\sum (y - \hat{y})(\hat{y} - \bar{y})$ would be negative when $y - \hat{y}$ is negative and $\hat{y} - \bar{y}$ is positive, or vice versa. This occurs when the horizontal line of $\bar{y}$ actually explains the data better than the line of best fit.
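As a hedged illustration of that last point, the sketch below uses made-up data that sits far above the origin and fits a line forced through the origin; the cross term no longer cancels, and $1 - \frac{RSS}{TSS}$ comes out negative:

```python
import numpy as np

# Made-up data that sits far above the origin
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([10.2, 9.8, 10.1, 9.9, 10.3])

# Least-squares slope for a line forced through the origin: y_hat = b * x
b = np.sum(x * y) / np.sum(x ** 2)
y_hat = b * x
y_bar = y.mean()

rss = np.sum((y - y_hat) ** 2)
tss = np.sum((y - y_bar) ** 2)
cross_term = 2 * np.sum((y - y_hat) * (y_hat - y_bar))

print(cross_term)      # far from zero, because there is no intercept
print(1 - rss / tss)   # negative: the horizontal mean line fits better than this line
```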

Here's an exaggerated example of when $R^2$ is negative (Source: University of Houston Clear Lake):

[Figure: an exaggerated example of a negative $R^2$ (Source: University of Houston Clear Lake)]

Put simply:

  • When $R^2 < 0$, a horizontal line explains the data better than your model.

You also asked about $R^2 = 0$.

  • When $R^2 = 0$, a horizontal line explains the data equally as well as your model (a quick numeric check of both cases follows below).
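Here is that check, a hedged sketch with made-up numbers: predicting $\bar{y}$ for every point gives $R^2 = 0$ exactly, and any prediction whose residual sum of squares exceeds that of the mean line gives $R^2 < 0$.

```python
import numpy as np

y = np.array([3.0, 5.0, 4.0, 6.0, 7.0])  # made-up observations

def r_squared(y, y_hat):
    rss = np.sum((y - y_hat) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    return 1 - rss / tss

print(r_squared(y, np.full_like(y, y.mean())))  # 0.0: exactly as good as the mean line
print(r_squared(y, np.full_like(y, 10.0)))      # negative: worse than the mean line
```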

I commend you for making it through that. If you found this helpful, you should also upvote fcop's answer here which I had to refer to, because it's been a while.


5
Seriously fantastic answer! The only thing missing for me is the intuition behind why $2\sum (y - \hat{y})(\hat{y} - \bar{y}) = 0$ when, and only when, there is an intercept set?
Owen

6

Neither answer so far is entirely correct, so I will try to give my understanding of R-Squared. I have given a more detailed explanation of this in my blog post here, "What is R-Squared".

Sum Squared Error

The objective of ordinary least squares regression is to get a line which minimizes the sum squared error. The default line with minimum sum squared error is a horizontal line through the mean. Basically, if you can't do better, you can just predict the mean value, and that will give you the minimum sum squared error.

[Figure: horizontal line through the mean]
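A small hedged check of that claim with toy numbers: among constant predictions, the mean gives the smallest sum squared error.

```python
import numpy as np

y = np.array([2.0, 4.0, 5.0, 4.0, 8.0])  # made-up observations

def sse_of_constant(c):
    """Sum squared error of predicting the constant c for every point."""
    return np.sum((y - c) ** 2)

print(sse_of_constant(y.mean()))       # smallest possible SSE for a constant prediction
print(sse_of_constant(y.mean() + 1))   # any other constant does worse
print(sse_of_constant(np.median(y)))   # the median also does worse here (it differs from the mean)
```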

R-Squared is a way of measuring how much better than the mean line you have done, based on the sum squared error. The equation for R-Squared is

$R^2 = 1 - \dfrac{SS_{\text{regression}}}{SS_{\text{total}}}$

Now SS Regression and SS Total are both sums of squared terms, so both of those are always non-negative. This means we are taking 1 and subtracting a non-negative value. So the maximum R-Squared value is positive 1, but the minimum is negative infinity. Yes, that is correct: the range of R-Squared is between negative infinity and 1, not -1 and 1, and not 0 and 1.
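To see that range numerically, here is a hedged sketch with made-up data: the worse a prediction does relative to the mean line, the more negative R-Squared becomes, with no lower bound.

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # made-up observations
ss_total = np.sum((y - y.mean()) ** 2)

# The first guess equals the mean (R-Squared of 0); worse guesses push R-Squared toward minus infinity
for guess in [3.0, 10.0, 100.0, 1000.0]:
    ss_regression = np.sum((y - guess) ** 2)
    print(guess, 1 - ss_regression / ss_total)
```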

What Is Sum Squared Error

Sum squared error is taking the error at every point, squaring it, and adding all the squares. For total error, it uses the horizontal line through the mean, because that gives the lowest sum squared error if you don't have any other information, i.e. can't do a regression.

[Figure: errors measured from the horizontal line through the mean]

As an equation it is this

$SS_{\text{total}} = \sum (y_i - \bar{y})^2$

Now with regression, our objective is to do better than the mean. For instance, this regression line will give a lower sum squared error than using the horizontal line.

[Figure: a regression line with lower sum squared error than the horizontal mean line]

The equation for regression sum squared error is this

$SS_{\text{regression}} = \sum (y_i - \hat{y}_i)^2$

Ideally, you would have zero regression error, i.e. your regression line would perfectly match the data. In that case, you would get an R-Squared value of 1.

[Figure: a regression line perfectly matching the data, giving an R-Squared value of 1]

Negative R Squared

All the information above is pretty standard. Now what about negative R-Squared?

Well, it turns out that there is no reason that your regression equation must give a lower sum squared error than the mean value. It is generally thought that if you can't make a better prediction than the mean value, you would just use the mean value, but there is nothing forcing that to be the case. You could, for instance, predict the median for everything.
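For example, here is a hedged sketch with skewed, made-up data: predicting the median for every point gives a larger sum squared error than the mean line, so R-Squared, computed as 1 minus the ratio of the two sums of squares, comes out negative.

```python
import numpy as np

# Skewed made-up data, so the median differs from the mean
y = np.array([1.0, 1.0, 2.0, 2.0, 14.0])
y_hat = np.full_like(y, np.median(y))  # predict the median everywhere

ss_regression = np.sum((y - y_hat) ** 2)
ss_total = np.sum((y - y.mean()) ** 2)
print(1 - ss_regression / ss_total)    # negative: the mean line has the smaller squared error
```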

In actual practice, with ordinary least squares regression, the most common time to get a negative R-Squared value is when you force the regression line to go through a specific point. This is typically done by setting the intercept, but you can force the regression line through any point.

When you do that, the regression line goes through that point and attempts to get the minimum sum squared error while still going through that point.

[Figure: a regression line forced through a fixed point]

By default, the regression equations use the average x and average y as the point that the regression line goes through. But if you force it through a point that is far away from where the regression line would normally be, you can get a sum squared error that is higher than using the horizontal line.

In the image below, both regression lines were forced to have a y intercept of 0. This caused a negative R-squared for the data that is far offset from the origin.

[Figure: two regression lines forced through the origin; the data set far offset from the origin gets a negative R-Squared]

For the top set of points, the red ones, the regression line is the best possible regression line that also passes through the origin. It just happens that that regression line is worse than using a horizontal line, and hence gives a negative R-Squared.
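Here is a hedged sketch of the same effect using scikit-learn (an assumption here is that scikit-learn is available; the data values are invented so the points sit far above the origin): forcing the intercept to zero makes the fitted line worse than the horizontal mean line, and the reported R-Squared is negative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data that sits far above the origin
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([20.5, 19.8, 20.2, 19.9, 20.6])

ordinary = LinearRegression().fit(X, y)                            # free to choose its intercept
through_origin = LinearRegression(fit_intercept=False).fit(X, y)   # forced through the origin

print(ordinary.score(X, y))        # non-negative R-Squared
print(through_origin.score(X, y))  # negative R-Squared: worse than the horizontal mean line
```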

Undefined R-Squared

There is one special case no one has mentioned, where you can get an undefined R-Squared. That is, if your data is completely horizontal, then your total sum squared error is zero. As a result, you would have zero divided by zero in the R-Squared equation, which is undefined.

[Figure: completely horizontal data, where the total sum squared error is zero and R-Squared is undefined]
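A brief hedged sketch with made-up constant data: when every y value equals the mean, the total sum squared error is zero, and the R-Squared formula divides zero by zero, which NumPy reports as nan.

```python
import numpy as np

y = np.array([5.0, 5.0, 5.0, 5.0])   # completely horizontal data
y_hat = np.full_like(y, 5.0)         # a model that also predicts 5 everywhere

ss_regression = np.sum((y - y_hat) ** 2)  # 0.0
ss_total = np.sum((y - y.mean()) ** 2)    # 0.0

with np.errstate(invalid="ignore"):
    print(1 - ss_regression / ss_total)   # nan: zero divided by zero is undefined
```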


A very vivid answer, I would like to see many more answers of this type!
Ben

0

As the previous commenter notes, r^2 is between [0,1], not [-1,+1], so it cannot be negative. You cannot square a value and get a negative number. Perhaps you are looking at r, the correlation? It can be between [-1,+1], where zero means there is no relationship between the variables, -1 means there is a perfect negative relationship (as one variable increases, the other decreases), and +1 is a perfect positive relationship (both variables go up or down concordantly).

If indeed you are looking at r^2, then, as the previous commenter describes, you are probably seeing the adjusted r^2, not the actual r^2. Consider what the statistic means: I teach behavioral science statistics, and the easiest way that I've learned to teach my students about the meaning of r^2 is " % variance explained." So if you have r^2=0.5, the model explains 50% of the variation of the dependent (outcome) variable. If you have a negative r^2, it would mean that the model explains a negative % of the outcome variable, which is not an intuitively reasonable suggestion. However, adjusted r^2 takes the sample size (n) and number of predictors (p) into consideration. A formula for calculating it is here. If you have a very low r^2, then it is reasonably easy to get negative values. Granted, a negative adjusted r^2 does not have any more intuitive meaning than regular r^2, but as the previous commenter says, it just means your model is very poor, if not just plain useless.
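As a hedged illustration (the formula the answer links to is not reproduced here; the sketch below uses one common form of adjusted r^2, namely 1 - (1 - r^2)(n - 1)/(n - p - 1)): with a small sample, several predictors, and a low r^2, the adjusted value drops below zero.

```python
def adjusted_r_squared(r2, n, p):
    """One common adjusted r^2 formula, penalising for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(adjusted_r_squared(r2=0.05, n=20, p=5))  # negative: low r^2 with few observations and several predictors
print(adjusted_r_squared(r2=0.60, n=20, p=5))  # positive: a decent r^2 survives the adjustment
```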


3
Regarding percentage of variance explained, perhaps if the model is so poor as to increase the variance (ESS > TSS), one may get a negative R2, where R2 is defined as % of variance explained rather than squared correlation between the actual and the fitted values. This might not happen in a regression with an intercept estimated by OLS, but it could happen in a regression without intercept or perhaps other cases.
Richard Hardy

4
R2 cannot be <0 in sample but can be negative when computed out of sample, i.e. on a holdout sample after fixing all the regression coefficients. As explained above this represents worse than random predictions.
Frank Harrell

@FrankHarrell, are you sure that it needs to be in sample? Granted, you'd have to ignore the data pretty strongly to generate a model which is worse than the mean, but I'm not seeing why you can't do this only with in-sample data.
Matt Krause

I assume "in sample" means the sample on which the coefficients were estimated. Then it can't be negative.
Frank Harrell

1
@FrankHarrell, Suppose the model is really atrocious--you fit some intercept-less function like sin(ωx+ϕ) to a diagonal line. Shouldn't the R2 be negative here too, even for the in-sample data? Matlab does give me a reasonably large negative number when I do that...
Matt Krause
Licensed under cc by-sa 3.0 with attribution required.