
    Boosting performance for 2D linear discriminant analysis via regression

    Two Dimensional Linear Discriminant Analysis (2DLDA) has received much interest in recent years. However, 2DLDA can make the pairwise distances between classes significantly unbalanced, which may degrade its performance. Moreover, 2DLDA can also suffer from the small sample size problem. Based on these observations, we propose two novel algorithms called Regularized 2DLDA and Ridge Regression for 2DLDA (RR-2DLDA). Regularized 2DLDA extends 2DLDA with a regularization parameter to deal with the small sample size problem. RR-2DLDA integrates ridge regression into Regularized 2DLDA to balance the distances among different classes after the transformation. The proposed algorithms overcome these limitations of 2DLDA and boost recognition accuracy. Experimental results on the Yale, PIE, and FERET databases show that RR-2DLDA is superior not only to 2DLDA but also to other state-of-the-art algorithms.
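    As a rough illustration of the regularization idea only (not the paper's exact formulation), a right-sided 2DLDA can be made robust to small sample sizes by adding a ridge term to the within-class scatter before the eigen-decomposition. In the minimal sketch below, the function name, the lambda_reg parameter, and the eigen-solver route are our own illustrative choices, and the ridge-regression balancing step of RR-2DLDA is omitted.

```python
# Minimal sketch of right-sided 2DLDA with a ridge-style regularizer on the
# within-class scatter. Names and defaults are illustrative assumptions.
import numpy as np

def regularized_2dlda(images, labels, n_components, lambda_reg=1e-3):
    """images: array of shape (n_samples, rows, cols); labels: (n_samples,)."""
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    overall_mean = images.mean(axis=0)                      # rows x cols

    cols = images.shape[2]
    S_w = np.zeros((cols, cols))                            # within-class scatter
    S_b = np.zeros((cols, cols))                            # between-class scatter
    for k in np.unique(labels):
        X_k = images[labels == k]
        M_k = X_k.mean(axis=0)
        for X in X_k:
            D = X - M_k
            S_w += D.T @ D
        Dm = M_k - overall_mean
        S_b += len(X_k) * (Dm.T @ Dm)

    # The regularizer keeps S_w invertible when samples are scarce.
    S_w += lambda_reg * np.eye(cols)
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(S_w, S_b))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real            # cols x n_components
```

    The learned matrix projects each image as X @ W; a second, left-sided projection can be obtained analogously from row-wise scatter matrices.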

    Classifiers for centrality determination in proton-nucleus and nucleus-nucleus collisions

    Centrality, as a geometrical property of the collision, is crucial for the physical interpretation of nucleus-nucleus and proton-nucleus experimental data. However, it cannot be accessed directly in event-by-event data analysis. Common methods for centrality estimation in A-A and p-A collisions usually rely on a single detector (either the signal in zero-degree calorimeters or the multiplicity in some semi-central rapidity range). In the present work, we attempt to develop an approach to centrality determination that is based on machine-learning techniques and uses information from several detector subsystems simultaneously. Different event classifiers are suggested and evaluated for their selectivity power in terms of the number of participant nucleons and the impact parameter of the collision. Finer centrality resolution may help reduce the impact of so-called volume fluctuations on physical observables studied in heavy-ion experiments such as ALICE at the LHC and the fixed-target experiment NA61/SHINE at the SPS.
    Comment: To be published in the proceedings of the "XIIth Quark Confinement and the Hadron Spectrum" conference (Thessaloniki, 2016).
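    The abstract does not name the specific classifiers, but the general recipe of estimating centrality-related quantities from several subsystems at once can be sketched as below. The detector observables, the toy event generation, and the gradient-boosted model are assumptions made purely for illustration; in practice the regression target (impact parameter or number of participants) is supervised by Monte Carlo simulation.

```python
# Illustrative sketch: regress the impact parameter from several detector
# observables at once instead of slicing on a single estimator.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_events = 5000

# Toy "simulated" events: a true impact parameter plus correlated detector signals.
b_true = rng.uniform(0.0, 15.0, n_events)                                 # impact parameter [fm]
zdc_energy = 100.0 * b_true + rng.normal(0, 50, n_events)                 # zero-degree calorimeter signal
mult_mid = 2000.0 * np.exp(-b_true / 5.0) + rng.normal(0, 40, n_events)   # mid-rapidity multiplicity
mult_fwd = 800.0 * np.exp(-b_true / 6.0) + rng.normal(0, 25, n_events)    # forward multiplicity

X = np.column_stack([zdc_energy, mult_mid, mult_fwd])
X_train, X_test, y_train, y_test = train_test_split(X, b_true, test_size=0.2, random_state=0)

model = GradientBoostingRegressor()
model.fit(X_train, y_train)
print("mean |b_pred - b_true| [fm]:", np.mean(np.abs(model.predict(X_test) - y_test)))
```

    Binning the predicted impact parameter then yields centrality classes in the usual way; the gain over a single-detector estimator would show up as a narrower spread of the true quantity within each class.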

    An adaptive ensemble learner function via bagging and rank aggregation with applications to high dimensional data.

    An ensemble consists of a set of individual predictors whose predictions are combined. Generally, different classification and regression models tend to work well for different types of data, and it is usually not known in advance which algorithm will be optimal in a given application. In this thesis, an ensemble regression function adapted from Datta et al. (2010) is presented. The ensemble function is constructed by combining bagging and rank aggregation and is capable of adapting its behavior to the type of data being used. In the classification setting, the results can be optimized with respect to performance measures such as accuracy, sensitivity, specificity, and area under the curve (AUC), whereas in the regression setting, they can be optimized with respect to measures such as mean squared error and mean absolute error. The ensemble classifier and the ensemble regressor perform at the level of the best individual classifier or regression model. For complex high-dimensional datasets, it may be advisable to combine a number of classification or regression algorithms rather than relying on a single algorithm.
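    A minimal sketch of the bagging-plus-rank-aggregation idea for the regression case is given below. The candidate model list, the Borda-style aggregation over two losses, and all function names are illustrative assumptions, not the exact procedure of Datta et al. (2010).

```python
# Sketch: on each bootstrap sample, fit several candidate models, rank them on
# out-of-bag error under more than one loss, keep the per-bag winner, and
# average the winners' predictions.
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error

def bag_and_rank(X, y, candidates, n_bags=25, random_state=0):
    """Keep the per-bag winning model chosen by aggregating out-of-bag ranks."""
    rng = np.random.default_rng(random_state)
    n = len(y)
    winners = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, n)                      # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)            # out-of-bag indices
        if oob.size == 0:
            continue
        fitted, scores = [], []
        for model in candidates:
            m = clone(model).fit(X[idx], y[idx])
            pred = m.predict(X[oob])
            fitted.append(m)
            scores.append([mean_squared_error(y[oob], pred),
                           mean_absolute_error(y[oob], pred)])
        # Borda-style aggregation: rank models per metric, sum ranks, keep the best.
        ranks = np.argsort(np.argsort(np.array(scores), axis=0), axis=0).sum(axis=1)
        winners.append(fitted[int(np.argmin(ranks))])
    return winners

def ensemble_predict(winners, X_new):
    """Average the predictions of the per-bag winning models."""
    return np.mean([m.predict(X_new) for m in winners], axis=0)

# Toy usage on synthetic data
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(0, 0.1, 300)
candidates = [Ridge(), DecisionTreeRegressor(max_depth=5), KNeighborsRegressor()]
ensemble = bag_and_rank(X, y, candidates)
print(ensemble_predict(ensemble, X[:5]))
```

    Because a different candidate can win in different bags, the averaged ensemble tracks whichever base learner suits the data, which is the adaptivity the abstract describes.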