29

Say I fit a model in statsmodels

import statsmodels.formula.api as smf

mod = smf.ols('dependent ~ first_category + second_category + other', data=df).fit()

When I do mod.summary() I may see the following:

Warnings:
[1] The condition number is large, 1.59e+05. This might indicate that there are
strong multicollinearity or other numerical problems.

Sometimes the warning is different (e.g. based on eigenvalues of the design matrix). How can I capture high-multicollinearity conditions in a variable? Is this warning stored somewhere in the model object?

Also, where can I find a description of the fields in summary()?

1
  • Good news is that "Multicollinearity only affects the coefficients and p-values, but it does not influence the model’s ability to predict the dependent variable" (I hope that's true)
    – JeeyCi
    Commented Dec 5, 2023 at 13:54

2 Answers

59

You can detect high multicollinearity by inspecting the eigenvalues of the correlation matrix. A very low eigenvalue shows that the data are collinear, and the corresponding eigenvector shows which variables are collinear.

If there is no collinearity in the data, you would expect none of the eigenvalues to be close to zero:

>>> import numpy as np
>>> xs = np.random.randn(100, 5)      # independent variables
>>> corr = np.corrcoef(xs, rowvar=0)  # correlation matrix
>>> w, v = np.linalg.eig(corr)        # eigen values & eigen vectors
>>> w
array([ 1.256 ,  1.1937,  0.7273,  0.9516,  0.8714])

However, if, say, x[4] - 2 * x[0] - 3 * x[2] ≈ 0 (as constructed below with a little noise), then

>>> noise = np.random.randn(100)                      # white noise
>>> xs[:,4] = 2 * xs[:,0] + 3 * xs[:,2] + .5 * noise  # collinearity
>>> corr = np.corrcoef(xs, rowvar=0)
>>> w, v = np.linalg.eig(corr)
>>> w
array([ 0.0083,  1.9569,  1.1687,  0.8681,  0.9981])

one of the eigenvalues (here the very first one) is close to zero. The corresponding eigenvector is:

>>> v[:,0]
array([-0.4077,  0.0059, -0.5886,  0.0018,  0.6981])

Ignoring the near-zero coefficients, the above basically says that x[0], x[2] and x[4] are collinear (as expected). If you standardize the xs values and multiply by this eigenvector, the result will hover around zero with small variance:

>>> std_xs = (xs - xs.mean(axis=0)) / xs.std(axis=0)  # standardized values
>>> ys = std_xs.dot(v[:,0])
>>> ys.mean(), ys.var()
(0, 0.0083)

Note that ys.var() is essentially the eigenvalue that was close to zero.

So, in order to capture high multicollinearity, look at the eigenvalues of the correlation matrix.
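
To "capture" this in a variable, as the question asks, one way is to wrap the check into a small helper. A minimal sketch (the function name and both thresholds are arbitrary choices of mine, not anything statsmodels provides):

import numpy as np

def find_collinear_sets(X, eig_threshold=0.01, loading_threshold=0.1):
    # correlation matrix of the columns of X (observations in rows)
    corr = np.corrcoef(X, rowvar=0)
    # eigh: the matrix is symmetric, so eigenvalues come back real, in ascending order
    w, v = np.linalg.eigh(corr)
    collinear = []
    for i in np.where(w < eig_threshold)[0]:
        # columns with non-negligible loadings in this near-zero direction
        cols = np.where(np.abs(v[:, i]) > loading_threshold)[0]
        collinear.append((w[i], cols))
    return collinear

With the collinear xs constructed above, this returns something like [(0.0083, array([0, 2, 4]))].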

4
  • Thanks. This approach isn't mentioned in Wikipedia. Would you mind explaining, or do you know of any sources that explain why small eigen values of the correlation matrix are indicative of multi-collinearity? Commented Sep 14, 2014 at 13:42
  • 9
    @user815423426 the theory is longer than what would fit in here, but look into PCA. Basically, each eigen vector explains the variation in the data orthogonal to other eigen vectors, and the eigen value shows how much variation is in that direction. An almost zero eigen value shows a direction with zero variation, hence collinearity. Commented Sep 14, 2014 at 13:49
  • 1
    I'm confused. Eigenvalues close to zero indicate collinearity. The first eigenvalue is close to zero, so you then inspect the eigenvectors. From [-0.4077, 0.0059, -0.5886, 0.0018, 0.6981], you determined x[0], x[2] and x[4] were collinear. So in the eigenvectors, we're looking for numbers not close to zero?
    – Jarad
    Commented Nov 2, 2017 at 21:06
  • 3
    @Jarad Yes. If eigenvalues are close to zero, look at their corresponding eigenvectors for values that are not close to zero; the indices of those values represent the features that are collinear. As a side note, you can also standardize the data first (e.g. using StandardScaler in sklearn) and use the np.cov function (in place of np.corrcoef), which will return a covariance matrix (cov_mat = np.cov(xs_standardized)); you can perform eigen-decomposition on that in the same manner demonstrated above and follow the same procedure to determine collinear features (a short sketch of this follows the comment thread). Commented Feb 8, 2018 at 8:30
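
A short sketch of the covariance-matrix variant mentioned in the last comment, assuming the xs array from the answer above (note that np.cov, like np.corrcoef, must be told that observations are in rows, and that its default ddof of 1 differs slightly from the plain std used for standardizing):

import numpy as np

xs_std = (xs - xs.mean(axis=0)) / xs.std(axis=0)  # standardize the columns
cov_mat = np.cov(xs_std, rowvar=False)            # ~ correlation matrix of xs, up to an n/(n-1) factor
w, v = np.linalg.eig(cov_mat)
# proceed exactly as before: near-zero entries of w flag collinear directions,
# and the large entries of the corresponding columns of v name the variables involved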
5

Based on a similar question for R, there are some other options that may help people. I was looking for a single number that captured the collinearity, and options include the determinant and condition number of the correlation matrix.

According to one of the R answers, the determinant of the correlation matrix will "range from 0 (Perfect Collinearity) to 1 (No Collinearity)". I found the bounded range helpful.

Translated example for determinant:

import numpy as np
import pandas as pd

# Create a sample random dataframe
np.random.seed(321)
x1 = np.random.rand(100)
x2 = np.random.rand(100)
x3 = np.random.rand(100)
df = pd.DataFrame({'x1': x1, 'x2': x2, 'x3': x3})

# Now create a dataframe with multicollinearity
multicollinear_df = df.copy()
multicollinear_df['x3'] = multicollinear_df['x1'] + multicollinear_df['x2']

# Compute both correlation matrices
corr = np.corrcoef(df, rowvar=0)
multicollinear_corr = np.corrcoef(multicollinear_df, rowvar=0)

# Compare the determinants
print(np.linalg.det(corr))                 # 0.988532159861
print(np.linalg.det(multicollinear_corr))  # 2.97779797328e-16

And similarly, the condition number of the correlation matrix will approach infinity with perfect linear dependence.

print(np.linalg.cond(corr))                 # 1.23116253259
print(np.linalg.cond(multicollinear_corr))  # 6.19985218873e+15
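
Tying this back to the original question about the summary() warning: the condition number behind that message can also be read straight from the fitted model. A minimal sketch, assuming a fitted result mod as in the question (the attribute names and the 1000 cutoff reflect my understanding of current statsmodels and are worth checking against your version):

import numpy as np

# condition number of the design matrix, which is what the warning reports
cond_no = np.linalg.cond(mod.model.exog)

# statsmodels also exposes it (and the eigenvalues of X'X) on the results object
cond_no = mod.condition_number
eigvals = mod.eigenvals

# summary() prints the multicollinearity warning when this number is large
high_collinearity = cond_no > 1000
print(cond_no, high_collinearity)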
2
  • Why np.linalg.cond of corr? For the OLS fit it would be np.linalg.cond(results.model.exog), where exog is the model's exogenous (design) matrix
    – JeeyCi
    Commented Dec 5, 2023 at 13:48
  • An answer about np.linalg.cond(corr) is given here ("How do you interpret the condition number of a correlation matrix")
    – JeeyCi
    Commented Apr 9, 2024 at 15:38
