Ordinary (Linear) Least Squares.
Solves the equation X beta = y by computing a vector beta that minimizes ||y - X beta||^2, where ||.|| is the L^2 norm. This function uses numpy.linalg.lstsq().
X must be centered by columns.
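For concreteness, a minimal NumPy sketch of the computation described above: center the columns of x, solve the least squares problem with numpy.linalg.lstsq(), and recover the offset from the column means. The variable names here are illustrative and not the library's internals.
>>> import numpy as np
>>> np.random.seed(0)
>>> x = np.random.randn(50, 3)                                # N x P matrix of regressors
>>> y = np.dot(x, [1.0, 2.0, -0.5]) + np.random.randn(50)     # response
>>> xm = np.mean(x, axis=0)
>>> beta, _, rank, _ = np.linalg.lstsq(x - xm, y)             # x centered by columns
>>> beta0 = np.mean(y) - np.dot(xm, beta)                     # offset implied by the centering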
Ordinary (Linear) Least Squares Regression (OLS).
Initialization.
Return b1, ..., bp.
Return b0.
Learning method.
Compute the predicted response.
Rank of matrix x.
Example:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import mlpy
>>> np.random.seed(0)
>>> mean, cov, n = [1, 5], [[1,1],[1,2]], 200
>>> d = np.random.multivariate_normal(mean, cov, n)
>>> x, y = d[:, 0].reshape(-1, 1), d[:, 1]
>>> x.shape
(200, 1)
>>> ols = mlpy.OLS()
>>> ols.learn(x, y)
>>> xx = np.arange(np.min(x), np.max(x), 0.01).reshape(-1, 1)
>>> yy = ols.pred(xx)
>>> fig = plt.figure(1) # plot
>>> plot = plt.plot(x, y, 'o', xx, yy, '--k')
>>> plt.show()
See [Hoerl70]. Ridge regression is also known as regularized least squares. It avoids overfitting by controlling the size of the model vector beta, measured by its L^2-norm.
Solves the equation X beta = y by computing a vector beta that minimizes ||y - X beta||^2 + ||lambda beta||^2, where ||.|| is the L^2 norm (X is an N x P matrix). When N >= P the function solves the normal equations (primal solution); when N < P it computes the dual solution.
X must be centered by columns.
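A small NumPy sketch of the two cases, with lmb as the regularization parameter (the function names are illustrative, not the library's internals): when N >= P the P x P normal equations are solved directly; when N < P the equivalent N x N dual system is cheaper.
>>> import numpy as np
>>> def ridge_primal(x, y, lmb):
...     # normal equations: (X^T X + lmb * I_P) beta = X^T y
...     p = x.shape[1]
...     return np.linalg.solve(np.dot(x.T, x) + lmb * np.eye(p), np.dot(x.T, y))
>>> def ridge_dual(x, y, lmb):
...     # dual form: beta = X^T (X X^T + lmb * I_N)^{-1} y, convenient when N < P
...     n = x.shape[0]
...     return np.dot(x.T, np.linalg.solve(np.dot(x, x.T) + lmb * np.eye(n), y))
>>> x, y = np.random.randn(5, 8), np.random.randn(5)   # N < P example
>>> np.allclose(ridge_primal(x, y, 0.1), ridge_dual(x, y, 0.1))
True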
Ridge Regression.
Solves the equation X beta = y by computing a vector beta that minimizes ||y - X beta||^2 + ||lambda beta||^2, where ||.|| is the L^2 norm (X is an N x P matrix). When N >= P the function solves the normal equations (primal solution); when N < P it computes the dual solution.
Initialization.
Return b1, ..., bp.
Return b0.
Compute the regression coefficients.
Compute the predicted response.
Example:
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import mlpy
>>> np.random.seed(0)
>>> mean, cov, n = [1, 5], [[1,1],[1,2]], 200
>>> d = np.random.multivariate_normal(mean, cov, n)
>>> x, y = d[:, 0].reshape(-1, 1), d[:, 1]
>>> x.shape
(200, 1)
>>> ridge = mlpy.Ridge()
>>> ridge.learn(x, y)
>>> xx = np.arange(np.min(x), np.max(x), 0.01).reshape(-1, 1)
>>> yy = ridge.pred(xx)
>>> fig = plt.figure(1) # plot
>>> plot = plt.plot(x, y, 'o', xx, yy, '--k')
>>> plt.show()
Multivariate primal Partial Least Squares (PLS) algorithm as described in [Taylor04].
[Taylor04] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
Initialization.
Returns the regression coefficients.
beta is a (P) vector in the univariate case and a (P, M) matrix in the multivariate case, where M is the number of target outputs.
Returns offset(s).
beta0 is a float in the univariate case, and a (M) vector in the multivariate case, where M is the number of target outputs.
Compute the regression coefficients.
Compute the predicted response(s).
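Since this page does not show a PLS usage example, here is a sketch mirroring the OLS and Ridge examples above. The class name mlpy.PLS and its constructor argument are assumptions, not confirmed by this page; only the learn()/pred()/beta()/beta0() interface described above is relied upon.
>>> import numpy as np
>>> import mlpy
>>> np.random.seed(0)
>>> x = np.random.randn(100, 5)                                            # N x P regressors
>>> y = np.dot(x, [1.0, 0.5, 0.0, 0.0, 2.0]) + 0.1 * np.random.randn(100)  # univariate response
>>> pls = mlpy.PLS(2)            # class name and argument (number of components) are assumptions
>>> pls.learn(x, y)
>>> yy = pls.pred(x)             # predicted response(s)
>>> b, b0 = pls.beta(), pls.beta0()   # (P,) vector and float in the univariate case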
Least Angle Regression.
x should be centered and normalized by columns, and y should be centered.
Parameters:
- x : 2d array_like object (N x P)
  matrix of regressors
- y : 1d array_like object (N)
  response
- maxsteps : int (> 0) or None
  maximum number of steps. If maxsteps is None, the maximum number of steps is min(N-1, P), where N is the number of samples and P is the number of features.
Returns:
- active, est, steps : 1d numpy array, 2d numpy array, int
  active features, all LARS estimates, number of steps performed
Least Angle Regression.
Initialization.
Returns the active features.
Return b_1, ..., b_p.
Return b_0.
Returns all LARS estimates.
Compute the regression coefficients.
Compute the predicted response.
Return the number of steps performed.
This example replicates Figure 3 in [Efron04]. The diabetes data can be downloaded from http://www.stanford.edu/~hastie/Papers/LARS/diabetes.data
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import mlpy
>>> diabetes = np.loadtxt("diabetes.data", skiprows=1)
>>> x = diabetes[:, :-1]
>>> y = diabetes[:, -1]
>>> x -= np.mean(x, axis=0) # center x
>>> x /= np.sqrt(np.sum((x)**2, axis=0)) # normalize x
>>> y -= np.mean(y) # center y
>>> lars = mlpy.LARS()
>>> lars.learn(x, y)
>>> lars.steps() # number of steps performed
10
>>> lars.beta()
array([ -10.0098663 , -239.81564367, 519.84592005, 324.3846455 ,
-792.17563855, 476.73902101, 101.04326794, 177.06323767,
751.27369956, 67.62669218])
>>> lars.beta0()
4.7406304540474682e-14
>>> est = lars.est() # returns all LARS estimates
>>> beta_sum = np.sum(np.abs(est), axis=1)
>>> fig = plt.figure(1)
>>> plot1 = plt.plot(beta_sum, est)
>>> xl = plt.xlabel(r'$\sum{|\beta_j|}$')
>>> yl = plt.ylabel(r'$\beta_j$')
>>> plt.show()
Documentation and implementation are taken from http://web.mit.edu/lrosasco/www/contents/code/ENcode.html
Computes the coefficient vector which solves the elastic-net regularization problem.
Elastic Net Regularization is an algorithm for learning and variable selection. It is based on a regularized least squares procedure with a penalty that is the sum of an L1 penalty (as in the Lasso) and an L2 penalty (as in ridge regression). The first term enforces sparsity of the solution, whereas the second term ensures democracy among groups of correlated variables. The second term also has a smoothing effect that stabilizes the solution.
Elastic Net Regularization via Iterative Soft Thresholding.
x should be centered and normalized by columns, and y should be centered.
Computes the coefficient vector which solves the elastic-net regularization problem min {||X beta - Y||^2 + lambda (|beta|^2_2 + eps |beta|_1)}. The solution beta is computed via iterative soft-thresholding, with damping factor 1/(1 + eps*lambda), thresholding factor eps*lambda, null initialization vector, and step 1/(eig_max(XX^T)*1.1).
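A rough NumPy sketch of this kind of damped iterative soft-thresholding is given below. It uses the null initialization and the step 1/(eig_max(XX^T)*1.1) stated above, but the damping and thresholding constants follow the standard proximal-gradient scaling for the objective as written, so they may differ from the library's internal factors; the function name is illustrative only.
>>> import numpy as np
>>> def elastic_net_ista(x, y, lmb, eps, maxiters=10000, tol=1e-6):
...     # sketch: min ||X b - y||^2 + lmb * (|b|^2_2 + eps * |b|_1)
...     tau = 1.0 / (np.linalg.eigvalsh(np.dot(x, x.T)).max() * 1.1)   # step
...     b = np.zeros(x.shape[1])                                       # null initialization
...     for i in range(maxiters):
...         v = b + tau * np.dot(x.T, y - np.dot(x, b))                # gradient step
...         bn = np.sign(v) * np.maximum(np.abs(v) - 0.5 * tau * lmb * eps, 0.0)  # soft threshold
...         bn /= 1.0 + tau * lmb                                      # damping from the L2 term
...         if np.max(np.abs(bn - b)) < tol:
...             return bn, i + 1
...         b = bn
...     return b, maxiters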
Elastic Net Regularization via Iterative Soft Thresholding.
Computes the coefficient vector which solves the elastic-net regularization problem min {||X beta - Y||^2 + lambda (|beta|^2_2 + eps |beta|_1)}. The solution beta is computed via iterative soft-thresholding, with damping factor 1/(1 + eps*lambda), thresholding factor eps*lambda, null initialization vector, and step 1/(eig_max(XX^T)*1.1).
Initialization.
Return b_1, ..., b_p.
Return b_0.
Return the number of iterations performed.
Compute the regression coefficients.
Compute the predicted response.
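No usage example is given for the elastic net class on this page; a sketch in the style of the OLS, Ridge and LARS examples above might look as follows. The class name mlpy.ElasticNet and its constructor arguments are assumptions, not confirmed here; only the learn()/pred()/beta() interface described above is relied upon, with x and y centered/normalized as required.
>>> import numpy as np
>>> import mlpy
>>> np.random.seed(0)
>>> x = np.random.randn(100, 10)
>>> y = np.dot(x, [2.0, -1.0, 0.5] + [0.0] * 7) + 0.1 * np.random.randn(100)  # sparse truth
>>> x -= np.mean(x, axis=0)                      # center x
>>> x /= np.sqrt(np.sum(x**2, axis=0))           # normalize x
>>> y -= np.mean(y)                              # center y
>>> enet = mlpy.ElasticNet(lmb=0.01, eps=0.01)   # class name and arguments are assumptions
>>> enet.learn(x, y)
>>> yy = enet.pred(x)
>>> b = enet.beta()                              # regression coefficients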
[DeMol08] C. De Mol, E. De Vito and L. Rosasco. Elastic Net Regularization in Learning Theory. CBCL paper #273 / CSAIL Technical Report #TR-2008-046, Massachusetts Institute of Technology, Cambridge, MA, July 24, 2008. arXiv:0807.3423 (to appear in the Journal of Complexity).
[Efron04] B. Efron, T. Hastie, I. Johnstone and R. Tibshirani. Least Angle Regression. Annals of Statistics, 2004, volume 32, pages 407-499.
[Hoerl70] A. E. Hoerl and R. W. Kennard. Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics, Vol. 12, No. 1, 1970, pp. 55-67.