Adjusting inverse regression for predictors with clustered distribution

Abstract

A major family of sufficient dimension reduction (SDR) methods, called inverse regression, commonly requires the distribution of the predictor X to have a linear E(X|\beta^\mathsf{T}X) and a degenerate \mathrm{var}(X|\beta^\mathsf{T}X) for the desired reduced predictor \beta^\mathsf{T}X. In this paper, we adjust the first- and second-order inverse regression methods by modeling E(X|\beta^\mathsf{T}X) and \mathrm{var}(X|\beta^\mathsf{T}X) under a mixture model assumption on X, which allows these terms to convey more complex patterns and is most suitable when X has a clustered sample distribution. The proposed SDR methods build a natural path between inverse regression and the localized SDR methods, and in particular inherit the advantages of both: they are \sqrt{n}-consistent, efficiently implementable, directly adjustable under high-dimensional settings, and fully recover the desired reduced predictor. These findings are illustrated by simulation studies and a real data example, which also suggest the effectiveness of the proposed methods for nonclustered data.
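For context, a minimal sketch of classical sliced inverse regression (SIR), the first-order inverse regression baseline that the abstract's adjustment builds on, is given below. This is not the paper's mixture-adjusted estimator; the function name `sir` and all parameter choices (number of slices, reduced dimension `d`) are illustrative assumptions.

```python
import numpy as np

def sir(X, y, n_slices=5, d=1):
    # Classical SIR sketch (not the paper's adjusted method):
    # estimate span(beta) from the inverse regression curve E(X | y).
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    # Whiten X: Z = (X - mu) Sigma^{-1/2}
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # Slice observations by the order of y and average Z within slices
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of the candidate matrix, mapped back
    # to the original scale, estimate the reduced-predictor directions.
    w, v = np.linalg.eigh(M)
    return Sigma_inv_sqrt @ v[:, ::-1][:, :d]
```

SIR relies exactly on the linearity condition on E(X|\beta^\mathsf{T}X) mentioned above; when X has a clustered (e.g. mixture) distribution, that condition fails, which is the situation the paper's adjustment targets.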
