245 research outputs found

    Multi-Classifiers And Decision Fusion For Robust Statistical Pattern Recognition With Applications To Hyperspectral Classification

    In this dissertation, a multi-classifier, decision fusion framework is proposed for robust classification of high dimensional data in small-sample-size conditions. Such datasets present two key challenges. (1) High dimensional feature spaces compromise the classifiers’ generalization ability, in that the classifier tends to overfit decision boundaries to the training data. This phenomenon is commonly known as the Hughes phenomenon in the pattern classification community. (2) The small sample size of the training data results in ill-conditioned estimates of its statistics. Most classifiers rely on accurate estimation of these statistics for modeling training data and labeling test data, and ill-conditioned statistical estimates therefore result in poorer classification performance. This dissertation tests the efficacy of the proposed algorithms on classification of primarily remotely sensed hyperspectral data and secondarily diagnostic digital mammograms, since these applications naturally result in very high dimensional feature spaces and often lack training datasets large enough to support the dimensionality of the feature space. Conventional approaches, such as Stepwise LDA (S-LDA), are sub-optimal in that they utilize only a small subset of the rich spectral information provided by hyperspectral data. In contrast, the approach proposed in this dissertation utilizes the entire high dimensional feature space for classification by identifying a suitable partition of this space, employing a bank of classifiers to perform “local” classification over this partition, and then merging these local decisions using an appropriate decision fusion mechanism. Adaptive classifier weight assignment and nonlinear pre-processing (in kernel-induced spaces) are also proposed within this framework to improve its robustness over a wide range of fidelity conditions. Experimental results demonstrate that the proposed framework yields significant improvements in classification accuracy (as high as a 12% increase) over conventional approaches.
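The partition-and-fuse idea in the abstract can be sketched as follows. This is a hedged toy illustration, not the dissertation's actual pipeline: the contiguous-band partition, the diagonal-Gaussian local classifiers, the posterior-averaging fusion rule, the function names, and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def gaussian_classifier(X, y):
    """Fit per-class means/variances; return a posterior function (toy 'local' classifier)."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    vars_ = {c: X[y == c].var(axis=0) + 1e-6 for c in classes}
    def posterior(Xt):
        # log-likelihood under a diagonal Gaussian per class
        ll = np.stack([
            -0.5 * (((Xt - means[c]) ** 2) / vars_[c] + np.log(vars_[c])).sum(axis=1)
            for c in classes], axis=1)
        p = np.exp(ll - ll.max(axis=1, keepdims=True))
        return p / p.sum(axis=1, keepdims=True)
    return posterior, classes

def decision_fusion_fit_predict(Xtr, ytr, Xte, n_bands=4):
    """Partition features into bands, classify each band locally, fuse posteriors by averaging."""
    bands = np.array_split(np.arange(Xtr.shape[1]), n_bands)
    fused = None
    for idx in bands:
        post, classes = gaussian_classifier(Xtr[:, idx], ytr)
        p = post(Xte[:, idx])
        fused = p if fused is None else fused + p
    return classes[np.argmax(fused, axis=1)]

rng = np.random.default_rng(0)
# toy "high-dimensional, small-sample" data: 40 features, two classes, 30 samples each
Xtr = np.vstack([rng.normal(0.0, 1.0, (30, 40)), rng.normal(0.8, 1.0, (30, 40))])
ytr = np.array([0] * 30 + [1] * 30)
Xte = np.vstack([rng.normal(0.0, 1.0, (20, 40)), rng.normal(0.8, 1.0, (20, 40))])
yte = np.array([0] * 20 + [1] * 20)
pred = decision_fusion_fit_predict(Xtr, ytr, Xte)
accuracy = (pred == yte).mean()
```

Each local classifier only ever estimates statistics in a lower-dimensional subspace, which is the mechanism by which this style of framework sidesteps ill-conditioned full-dimensional covariance estimates.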

    Non-Linear and Linear Transformations of Features for Robust Speech Recognition and Speaker Identification

    Automatic speech recognizers perform poorly when training and test data are systematically different in terms of noise and channel characteristics. One manifestation of such differences is variation in the probability density functions (pdfs) between training and test features. Consequently, both automatic speech recognition and automatic speaker identification may be severely degraded. Previous attempts to minimize this problem include Cepstral Mean and Variance Normalization and transforming all speech features to a univariate Gaussian pdf. In this thesis, two techniques are presented for non-linearly scaling speech features to fit them to a target pdf: the first is based on the principles of histogram matching (a commonly employed algorithm in image contrast enhancement applications), and the second is based on principles of quantile-based Cumulative Distribution Function (CDF) matching for data drawn from different distributions. These methods can be used to compensate for the systematic marginal (i.e. each feature considered individually) differences between training and test features. For a more complete, multi-dimensional restoration of feature statistics, a linear (matrix) transformation is proposed, mapping the noisy feature space to the corresponding clean space. The matrix used for this global transformation is learned in a least squares sense from stereo training data, comprised of speech recorded simultaneously in clean and noisy conditions. We further propose a linear covariance normalization technique to compensate for differences in covariance properties between training and test data. Experimental results are given that illustrate the benefits of these algorithms for speech recognition and automatic speaker identification.
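The quantile-based CDF matching the abstract describes can be sketched in a few lines: map each test-feature value to the training-feature value at the same empirical quantile. This is a minimal sketch assuming equal-length samples and an empirical (sort-based) CDF; the function name and the synthetic clean/noisy data are illustrative, not the thesis's implementation.

```python
import numpy as np

def cdf_match(source, target):
    """Map each value in `source` to the `target` value at the same empirical quantile."""
    src_sorted = np.sort(source)
    tgt_sorted = np.sort(target)
    # empirical quantile (rank / n) of each source value
    q = np.searchsorted(src_sorted, source, side='right') / len(source)
    # look up the target order statistic at that quantile
    idx = np.clip((q * len(target)).astype(int) - 1, 0, len(target) - 1)
    return tgt_sorted[idx]

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, 5000)   # "training" feature distribution
noisy = rng.normal(2.0, 3.0, 5000)   # shifted and scaled "test" feature
matched = cdf_match(noisy, clean)    # matched now follows clean's marginal pdf
```

Because the mapping is monotone in the source value, it is exactly the kind of non-linear per-feature (marginal) scaling the abstract contrasts with the subsequent multi-dimensional matrix transformation.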

    Statistical process control approach to reduce the bullwhip effect

    Thesis (M. Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2007. Includes bibliographical references (leaves 66-68). The bullwhip effect is a pervasive problem in multi-echelon supply chains that results in inefficient production operations and higher inventory levels. The causes of the bullwhip effect are well understood in industry and academia, and quantitative and qualitative solutions to attenuate this effect have been proposed in various research studies. In this research, a quantitative solution in the form of a Statistical Process Control (SPC) based inventory management system is proposed that reduces the bullwhip effect while reducing inventory without compromising service level requirements for a variety of products. The strength of this methodology is its effectiveness in reducing bullwhip for fast-moving products in the mature phase of their lifecycles, where improving production efficiency and lowering inventory investment are critical. However, fill rate issues are observed for slow-moving products, and the methodology is therefore not recommended for such products. Finally, the application of this methodology to reduce the bullwhip effect is illustrated for a product family of a medical devices company, and the results for the different classes of products in this family are discussed. By Harikumar Iyer [and] Saurabh Prasad. M.Eng. in Logistics.
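One way an SPC-based ordering rule can damp the bullwhip effect is to treat demand like a monitored process: order a stable quantity while demand stays within control limits, and react only when demand breaches them. The sketch below is a hypothetical illustration of that idea under a 3-sigma rule; the function name, the control-limit choice, and the toy demand history are assumptions, not the thesis's actual policy.

```python
import statistics

def spc_order_policy(history, demand, k=3.0):
    """Return an order quantity: the historical mean while demand is
    in statistical control, the observed demand once it breaks out."""
    mean = statistics.mean(history)
    sigma = statistics.pstdev(history)
    ucl, lcl = mean + k * sigma, mean - k * sigma  # control limits
    if lcl <= demand <= ucl:
        return mean      # in control: place a smoothed, stable order
    return demand        # out of control: respond to the real shift

history = [100, 95, 105, 98, 102, 101, 99, 103]
# ordinary fluctuations produce identical orders; the spike passes through
orders = [spc_order_policy(history, d) for d in [97, 104, 150, 100]]
```

Because in-control fluctuations all map to the same order quantity, the variance of orders placed upstream is smaller than the variance of observed demand, which is precisely the amplification that the bullwhip effect names.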