
De-sparsified lasso


The de-sparsified lasso is a method for constructing confidence intervals and statistical tests for single or low-dimensional components of a large parameter vector in high-dimensional models.


1 High-dimensional linear model

Consider the model $Y = X\beta^0 + \epsilon$ with an $n \times p$ design matrix $X =: [X_1, \ldots, X_p]$ (with $n \times 1$ column vectors $X_j$), noise $\epsilon \sim \mathcal{N}_n(0, \sigma_\epsilon^2 I)$ independent of $X$, and an unknown $p \times 1$ regression vector $\beta^0$.

The usual method to estimate the parameter is the Lasso: $\hat{\beta}_n(\lambda) = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \frac{1}{2n}\|Y - X\beta\|_2^2 + \lambda\|\beta\|_1$
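As a concrete illustration, the Lasso estimator above can be computed with scikit-learn, whose `Lasso` objective $\frac{1}{2n}\|Y - X\beta\|_2^2 + \alpha\|\beta\|_1$ matches the formula with $\alpha = \lambda$. This is a minimal sketch on simulated data; the dimensions, coefficients, and penalty level are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:3] = [2.0, -1.5, 1.0]          # sparse true coefficient vector
y = X @ beta0 + 0.5 * rng.standard_normal(n)

# scikit-learn's Lasso minimizes (1/2n)||y - X b||_2^2 + alpha ||b||_1,
# which is the objective above with alpha playing the role of lambda
lasso = Lasso(alpha=0.1, fit_intercept=False)
lasso.fit(X, y)
beta_hat = lasso.coef_
```

On such a design the active coefficients are recovered only up to a shrinkage bias, and most null coefficients are set exactly to zero; it is this bias that the de-sparsification step corrects.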

The de-sparsified lasso modifies the Lasso estimator, using its Karush-Kuhn-Tucker conditions, as follows:

$\hat{\beta}_n(\lambda, M) = \hat{\beta}_n(\lambda) + \frac{1}{n} M X^T \left(Y - X\hat{\beta}_n(\lambda)\right)$

where $M \in \mathbb{R}^{p \times p}$ is an arbitrary matrix. The matrix $M$ is generated using a surrogate inverse covariance matrix.
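A minimal numerical sketch of the de-sparsified estimator, assuming scikit-learn. For simplicity $M$ is taken here as the inverse of the sample covariance $\hat\Sigma = X^T X / n$, which exists because the simulation uses $n > p$; in truly high-dimensional settings $M$ would instead be built column-by-column with the nodewise lasso:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[0] = 1.0                        # single active coefficient
y = X @ beta0 + 0.5 * rng.standard_normal(n)

beta_lasso = Lasso(alpha=0.05, fit_intercept=False).fit(X, y).coef_

# Simplified choice of M: inverse sample covariance (valid since n > p).
# In high dimensions this inverse does not exist and M comes from a
# surrogate such as the nodewise lasso.
Sigma_hat = X.T @ X / n
M = np.linalg.inv(Sigma_hat)

# De-sparsified estimator: add back a one-step bias correction
beta_desparse = beta_lasso + M @ X.T @ (y - X @ beta_lasso) / n
```

The correction removes the Lasso's shrinkage bias on active coordinates, at the cost of making the estimate dense, which is what enables Gaussian confidence intervals per coordinate.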

2 Generalized linear model

Desparsifying $\ell_1$-norm penalized estimators and the corresponding theory can also be applied to models with convex loss functions, such as generalized linear models.

Consider $1 \times p$ vectors of covariates $x_i \in \mathcal{X} \subseteq \mathbb{R}^p$ and univariate responses $y_i \in \mathcal{Y} \subseteq \mathbb{R}$ for $i = 1, \ldots, n$.

We have a loss function $\rho_\beta(y, x) = \rho(y, x\beta)$ ($\beta \in \mathbb{R}^p$), which is assumed to be strictly convex in $\beta \in \mathbb{R}^p$.

The $\ell_1$-norm regularized estimator is $\hat{\beta} = \operatorname*{arg\,min}_{\beta} \left(P_n \rho_\beta + \lambda \|\beta\|_1\right)$
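For a concrete GLM instance, take logistic loss: $\rho_\beta(y, x)$ is the negative log-likelihood of a Bernoulli response. The $\ell_1$-penalized estimator can then be sketched with scikit-learn, which parameterizes the penalty through its inverse `C`; the sample size, coefficients, and penalty level here are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, p = 300, 10
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:2] = [1.5, -1.5]               # sparse true coefficients
prob = 1.0 / (1.0 + np.exp(-(X @ beta0)))
y = rng.binomial(1, prob)

# l1-penalized logistic regression: the loss rho is the negative
# log-likelihood; C is the inverse of the penalty level lambda
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0,
                         fit_intercept=False).fit(X, y)
beta_hat = clf.coef_.ravel()
```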

Similarly, the Lasso for nodewise regression with matrix input is defined as follows. Denote by $\hat{\Sigma}$ a matrix which we want to approximately invert using the nodewise lasso.

The nodewise lasso estimator for the $j$th node is: $\hat{\gamma}_j := \operatorname*{arg\,min}_{\gamma \in \mathbb{R}^{p-1}} \left(\hat{\Sigma}_{j,j} - 2\,\hat{\Sigma}_{j,\setminus j}\,\gamma + \gamma^T \hat{\Sigma}_{\setminus j,\setminus j}\,\gamma + 2\lambda_j \|\gamma\|_1\right)$

where $\hat{\Sigma}_{j,\setminus j}$ denotes the $j$th row of $\hat{\Sigma}$ without the diagonal element $(j, j)$, and $\hat{\Sigma}_{\setminus j,\setminus j}$ is the submatrix without the $j$th row and $j$th column.
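With $\hat\Sigma = X^T X / n$, each nodewise regression amounts to regressing column $X_j$ on the remaining columns with the lasso, and the $j$th row of the surrogate inverse $M$ is $(1, -\hat\gamma_j)/\hat\tau_j^2$ (with the 1 in position $j$), where $\hat\tau_j^2 = \hat\Sigma_{j,j} - \hat\Sigma_{j,\setminus j}\hat\gamma_j$. A hedged sketch, assuming scikit-learn; `nodewise_row` is a hypothetical helper name:

```python
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_row(X, j, lam):
    """One row of the surrogate inverse M via the nodewise lasso:
    regress column X_j on the remaining columns of X."""
    n, p = X.shape
    others = np.delete(np.arange(p), j)
    gamma = Lasso(alpha=lam, fit_intercept=False).fit(X[:, others],
                                                      X[:, j]).coef_
    resid = X[:, j] - X[:, others] @ gamma
    tau2 = resid @ X[:, j] / n    # tau_j^2 = Sigma_jj - Sigma_{j,-j} gamma
    row = np.zeros(p)
    row[j] = 1.0
    row[others] = -gamma
    return row / tau2

# Illustrative use: with independent columns, M should be close to the
# identity, so the first row is near the first standard basis vector
rng = np.random.default_rng(3)
X = rng.standard_normal((500, 5))
M_row0 = nodewise_row(X, 0, lam=0.05)
```

Stacking these rows for $j = 1, \ldots, p$ yields the matrix $M$ used in the de-sparsified estimator above.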
