2 research outputs found

    A principal component analysis-based feature dimensionality reduction scheme for content-based image retrieval system

    In a Content-Based Image Retrieval (CBIR) system, one approach to image representation is to cascade several low-level visual features into a single flat vector. While this yields a more descriptive representation, the resulting high dimensionality and the high computational cost of the feature extraction algorithms pose serious challenges to deploying CBIR on platforms (devices) with limited computational and storage resources. Hence, in this work a feature dimensionality reduction technique based on Principal Component Analysis (PCA) is implemented. Each image in a database is indexed using a 174-dimensional feature vector comprising 54-dimensional Colour Moments (CM54), a 32-bin HSV histogram (HIST32), 48-dimensional Gabor Wavelet features (GW48) and 40-dimensional Wavelet Moments (MW40). The PCA scheme was incorporated into a CBIR system that utilized the entire feature vector space. The k largest eigenvalues that yielded no more than a 5% degradation in mean precision were retained for dimensionality reduction. Three image databases (DB10, DB20 and DB100) were used for testing. The results obtained showed that, with an 80% reduction in feature dimensions, tolerable losses of 3.45%, 4.39% and 7.40% in mean precision were achieved on DB10, DB20 and DB100, respectively.
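
    As a rough illustration of the reduction step described above, the sketch below applies PCA to a matrix of cascaded 174-dimensional descriptors. The feature matrix, the choice of 35 retained components (roughly an 80% reduction of 174 dimensions) and the random stand-in data are assumptions for illustration, not the authors' implementation.

        # Minimal sketch, assuming the 174-dimensional cascaded descriptors
        # (CM54 + HIST32 + GW48 + MW40) are already extracted into a matrix
        # of shape (n_images, 174). Keeping 35 components is roughly the 80%
        # dimensionality reduction reported in the abstract.
        import numpy as np
        from sklearn.decomposition import PCA

        def reduce_features(features, n_components=35):
            """Project cascaded image descriptors onto the k largest principal components."""
            pca = PCA(n_components=n_components)
            reduced = pca.fit_transform(features)   # shape: (n_images, n_components)
            return reduced, pca

        # Illustrative usage with random placeholder data.
        rng = np.random.default_rng(0)
        db_features = rng.random((1000, 174))       # stand-in for real CM/HIST/GW/MW descriptors
        db_reduced, pca = reduce_features(db_features)

        query = rng.random((1, 174))
        query_reduced = pca.transform(query)        # query is projected with the same fitted PCA
        # Retrieval would then rank database images by distance to query_reduced.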

    A study of discriminative feature extraction for i-vector based acoustic sniffing in IVN acoustic model training

    Recently, we proposed an i-vector approach to acoustic sniffing for irrelevant variability normalization (IVN) based acoustic model training in large vocabulary continuous speech recognition (LVCSR). Its effectiveness has been confirmed by experimental results on the Switchboard-1 conversational telephone speech transcription task. In this paper, we study several discriminative feature extraction approaches in i-vector space to improve both recognition accuracy and run-time efficiency. New experimental results are reported on a much larger-scale LVCSR task with about 2,000 hours of training data.
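
    The abstract does not name the discriminative transforms that were studied; as one hedged example of a discriminative projection in i-vector space, the sketch below applies Linear Discriminant Analysis (LDA) to per-utterance i-vectors labelled by acoustic condition. The 400-dimensional i-vectors, the 32 condition labels and the random data are illustrative placeholders, not the paper's actual setup.

        # Minimal sketch of one possible discriminative projection in i-vector
        # space: LDA trained on per-utterance i-vectors with acoustic-condition
        # labels. Dimensions and labels are assumptions for illustration only.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        ivectors = rng.standard_normal((5000, 400))   # stand-in for extracted i-vectors
        labels = rng.integers(0, 32, size=5000)       # stand-in acoustic-condition labels

        lda = LinearDiscriminantAnalysis(n_components=31)  # at most n_classes - 1 components
        projected = lda.fit_transform(ivectors, labels)    # lower-dimensional, more separable space
        # At run time, each utterance's i-vector would be projected the same way
        # before the cluster decision used for acoustic sniffing.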