2,564 research outputs found

    On non-asymptotic bounds for estimation in generalized linear models with highly correlated design

    We study a high-dimensional generalized linear model and penalized empirical risk minimization with an ℓ1 penalty. Our aim is to provide a non-trivial illustration that non-asymptotic bounds for the estimator can be obtained without relying on the chaining technique and/or the peeling device.
    Comment: Published at http://dx.doi.org/10.1214/074921707000000319 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org)

    The Smooth-Lasso and other ℓ1+ℓ2-penalized methods

    We consider a linear regression problem in a high-dimensional setting where the number of covariates p can be much larger than the sample size n. In such a situation, one often assumes sparsity of the regression vector, i.e., that the regression vector contains many zero components. We propose a Lasso-type estimator β̂^Quad (where 'Quad' stands for quadratic) which is based on two penalty terms. The first one is the ℓ1 norm of the regression coefficients, used to exploit the sparsity of the regression as done by the Lasso estimator, whereas the second is a quadratic penalty term introduced to capture additional information on the setting of the problem. We detail two special cases: the Elastic-Net β̂^EN, which deals with sparse problems where correlations between variables may exist; and the Smooth-Lasso β̂^SL, which addresses sparse problems where successive regression coefficients are known to vary slowly (in some situations, this can also be interpreted in terms of correlations between successive variables). From a theoretical point of view, we establish variable selection consistency results and show that β̂^Quad achieves a Sparsity Inequality, i.e., a bound in terms of the number of non-zero components of the 'true' regression vector. These results are provided under a weaker assumption on the Gram matrix than the one used by the Lasso. In some situations this guarantees a significant improvement over the Lasso. Furthermore, a simulation study is conducted and shows that the S-Lasso β̂^SL performs better than known methods such as the Lasso, the Elastic-Net β̂^EN, and the Fused-Lasso with respect to estimation accuracy. This is especially the case when the regression vector is 'smooth', i.e., when the variations between successive coefficients of the unknown regression parameter are small.
    The study also reveals that the theoretical calibration of the tuning parameters and calibration based on 10-fold cross-validation yield S-Lasso solutions with similar performance.
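The ℓ1+ℓ2 objective described above can be sketched with a standard proximal-gradient (ISTA) loop. This is a minimal illustration under assumed notation, not the authors' implementation: the smooth penalty is taken as the squared ℓ2 norm of successive differences (the Smooth-Lasso case), and all function names and default parameters are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the prox operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def smooth_lasso(X, y, lam1=0.5, lam2=0.5, n_iter=1000):
    """Proximal gradient (ISTA) for the Smooth-Lasso objective
        min_b ||y - X b||^2 + lam1 * ||b||_1 + lam2 * ||D b||^2,
    where D b lists the successive differences b_{j+1} - b_j."""
    n, p = X.shape
    D = np.diff(np.eye(p), axis=0)            # (p-1) x p first-difference matrix
    Q = X.T @ X + lam2 * (D.T @ D)            # curvature of the differentiable part
    step = 1.0 / (2.0 * np.linalg.eigvalsh(Q).max())  # 1 / Lipschitz constant
    Xty = X.T @ y
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = 2.0 * (Q @ b - Xty)            # gradient of the smooth terms
        b = soft_threshold(b - step * grad, step * lam1)
    return b
```

With lam2 = 0 this reduces to the plain Lasso; replacing the difference penalty by lam2 * ||b||^2 gives the Elastic-Net case mentioned in the abstract.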

    On the conditions used to prove oracle results for the Lasso

    Oracle inequalities and variable selection properties for the Lasso in linear models have been established under a variety of different assumptions on the design matrix. We show in this paper how the different conditions and concepts relate to each other. The restricted eigenvalue condition (Bickel et al., 2009) or the slightly weaker compatibility condition (van de Geer, 2007) are sufficient for oracle results. We argue that both these conditions allow for a fairly general class of design matrices. Hence, optimality of the Lasso for prediction and estimation holds in more general situations than coherence (Bunea et al., 2007b,c) or restricted isometry (Candes and Tao, 2005) assumptions suggest.
    Comment: 33 pages, 1 figure
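The compatibility condition above can be probed numerically. The sketch below is an illustrative, hypothetical helper (not from the paper): it samples vectors in the cone {||d_Sc||_1 ≤ 3 ||d_S||_1} and minimises the compatibility ratio s · dᵀΣd / ||d_S||_1² over the samples. Because sampling can only miss the true minimiser, the result is an upper bound on the compatibility constant φ²(S).

```python
import numpy as np

def compatibility_mc_bound(Sigma, S, n_samples=20000, seed=0):
    """Monte-Carlo upper bound on the compatibility constant
        phi^2(S) = min { s * d' Sigma d / ||d_S||_1^2 :
                         ||d_{S^c}||_1 <= 3 ||d_S||_1, d_S != 0 },
    with s = |S|. The cone constant 3 is the usual choice for the Lasso."""
    rng = np.random.default_rng(seed)
    p = Sigma.shape[0]
    S = np.asarray(S)
    Sc = np.setdiff1d(np.arange(p), S)
    s = len(S)
    best = np.inf
    for _ in range(n_samples):
        d = np.zeros(p)
        d[S] = rng.standard_normal(s)
        if len(Sc) > 0:
            u = rng.standard_normal(len(Sc))
            # rescale the off-support part so the cone constraint holds
            target = 3.0 * rng.random() * np.abs(d[S]).sum()
            d[Sc] = u * target / np.abs(u).sum()
        ratio = s * (d @ Sigma @ d) / np.abs(d[S]).sum() ** 2
        best = min(best, ratio)
    return best
```

For an orthogonal design (Σ = I) every ratio in the cone is at least 1, so the bound returned is never below 1, matching the fact that orthogonal designs satisfy the condition easily.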

    Line scan imagery interpretation


    Performance of the MIND detector at a Neutrino Factory using realistic muon reconstruction

    A Neutrino Factory producing an intense beam composed of nu_e(nubar_e) and nubar_mu(nu_mu) from muon decays has been shown to have the greatest sensitivity to the two currently unmeasured neutrino mixing parameters, theta_13 and delta_CP. Using the `wrong-sign muon' signal to measure nu_e to nu_mu (nubar_e to nubar_mu) oscillations in a 50 ktonne Magnetised Iron Neutrino Detector (MIND), sensitivity to delta_CP could be maintained down to small values of theta_13. However, the detector efficiencies used in previous studies were calculated assuming perfect pattern recognition. In this paper, MIND is re-assessed taking into account, for the first time, a realistic pattern recognition for the muon candidate. Reoptimisation of the analysis utilises a combination of methods, including a multivariate analysis similar to the one used in MINOS, to maintain high efficiency while suppressing backgrounds, ensuring that the signal selection efficiency and the background levels are comparable to, or better than, those in previous analyses.

    Matter profile effect in neutrino factory

    We point out that the matter profile effect (the effect of matter density fluctuations along the baseline) is very important for estimating the parameters in a neutrino factory with a very long baseline. To make this precise, we propose the method of Fourier series expansion of the matter profile. By using this method, we can take account of both the matter profile effect and its ambiguity. For a very long baseline experiment, such as L = 7332 km, the analysis of the oscillation phenomena requires introducing a new parameter a_1 (the first Fourier coefficient of the matter profile) as a theoretical parameter to deal with the matter profile effects.
    Comment: 21 pages, 15 figures
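The Fourier-series expansion of a density profile along the baseline can be sketched as below. The cosine-series convention rho(x) ≈ a_0 + Σ_{n≥1} a_n cos(2πn x / L) is an assumption for illustration; the paper's normalisation may differ, and the helper name is hypothetical.

```python
import numpy as np

def _trap(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def profile_fourier_coeffs(rho, L, n_max, n_grid=4096):
    """Cosine-series coefficients of a matter density profile rho(x) on [0, L]:
        a_0 = (1/L) * int_0^L rho(x) dx
        a_n = (2/L) * int_0^L rho(x) cos(2 pi n x / L) dx,  n >= 1.
    Returns [a_0, a_1, ..., a_n_max]."""
    x = np.linspace(0.0, L, n_grid)
    r = rho(x)
    coeffs = [_trap(r, x) / L]
    for n in range(1, n_max + 1):
        coeffs.append(2.0 / L * _trap(r * np.cos(2 * np.pi * n * x / L), x))
    return np.array(coeffs)
```

For a constant-density profile all coefficients with n ≥ 1 vanish, so a nonzero a_1 directly measures the lowest-frequency fluctuation of the profile along the baseline.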

    Including Limited Partners in the Diversity Jurisdiction Analysis

    This paper presents the results of the Dynamic Pricing Challenge, held on the occasion of the 17th INFORMS Revenue Management and Pricing Section Conference on June 29–30, 2017 in Amsterdam, The Netherlands. For this challenge, participants submitted algorithms for pricing and demand learning, whose numerical performance was analyzed in simulated market environments. This allows consideration of market dynamics that are not analytically tractable or cannot be analyzed empirically due to practical complications. Our findings indicate that the relative performance of algorithms varies substantially across different market dynamics, which confirms the intrinsic complexity of pricing and learning in the presence of competition.
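A minimal "pricing and demand learning" loop of the kind such challenge entries build on can be sketched as follows. This is a toy explore-then-exploit scheme under an assumed linear demand model d(p) = a - b·p; the function names and the model are illustrative assumptions, not the challenge's actual setup, which involved competition and stochastic demand.

```python
import numpy as np

def learn_and_price(observe_demand, price_grid):
    """Explore: observe demand at a grid of trial prices.
    Learn: fit d(p) = a - b * p by ordinary least squares.
    Exploit: charge the price maximising revenue p * (a - b p),
    i.e. p* = a / (2 b)."""
    prices = np.asarray(price_grid, dtype=float)
    demands = np.array([observe_demand(p) for p in prices])
    # OLS with design matrix [1, -p] so the solution is (a_hat, b_hat)
    A = np.column_stack([np.ones_like(prices), -prices])
    (a_hat, b_hat), *_ = np.linalg.lstsq(A, demands, rcond=None)
    return a_hat / (2.0 * b_hat)
```

The challenge's finding that relative performance depends on the market dynamics shows up even here: the closed-form price p* = a/(2b) is only optimal if the linear monopoly model is the true environment.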