
    A new feature extraction approach based on non linear source separation

    A new feature extraction approach is proposed in this paper to improve classification performance on remotely sensed data. The method is based on a primary sources subset (PSS) obtained by a nonlinear transform that provides a lower-dimensional space for land pattern recognition. First, the underlying sources are approximated using multilayer neural networks. Bayesian inference then updates the knowledge of the unknown sources and the model parameters from the observed data. Next, a source dimension minimization technique is adopted to provide a more efficient land cover description, and a support vector machine (SVM) classifier is built on the extracted features. Experimental results on real multispectral imagery demonstrate that the proposed approach ensures efficient feature extraction, using several descriptors for texture identification and multiscale analysis. In a pixel-based approach, the reduced PSS space improves the overall classification accuracy by 13%, reaching 82%. Using texture and multiresolution descriptors, the overall accuracy is 75.87% for the original observations, while in the reduced source space it reaches 81.67% when wavelet and Gabor transforms are used jointly and 86.67% when only the Gabor transform is used. Thus, the source space enhances the feature extraction process and allows better land use discrimination than the raw multispectral observations.
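
    As an illustration of the pipeline sketched in this abstract, the following minimal Python example reduces multispectral pixel vectors to a lower-dimensional nonlinear feature space and classifies them with an SVM. It is a sketch only: KernelPCA stands in for the paper's Bayesian nonlinear source separation, and the data, shapes and class count are synthetic placeholders.

        # Hedged sketch: nonlinear feature reduction of multispectral pixels
        # followed by pixel-based SVM classification. KernelPCA is a generic
        # stand-in for the paper's source-separation step; data are synthetic.
        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.random((2000, 6))         # 2000 pixels, 6 spectral bands (synthetic)
        y = rng.integers(0, 4, 2000)      # 4 land-cover classes (synthetic labels)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        reducer = KernelPCA(n_components=3, kernel="rbf")   # reduced "source" space
        Z_tr = reducer.fit_transform(X_tr)
        Z_te = reducer.transform(X_te)

        clf = SVC(kernel="rbf").fit(Z_tr, y_tr)             # pixel-based SVM classifier
        print("overall accuracy:", clf.score(Z_te, y_te))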

    A feature extraction software tool for agricultural object-based image analysis

    A software application for automatic descriptive feature extraction from image objects, FETEX 2.0, is presented and described in this paper. The input data include a multispectral high-resolution digital image and a vector file in shapefile format containing the polygons or objects, usually extracted from a geospatial database. The design of the available descriptive features or attributes has been mainly focused on the description of agricultural parcels, providing a variety of information: spectral information from the different image bands; textural descriptors of the distribution of the intensity values based on the grey-level co-occurrence matrix, the wavelet transform, and a factor of edgeness; structural features describing the spatial arrangement of the elements inside the objects, based on the semivariogram curve and the Hough transform; and several descriptors of the object shape. The output file is a table that can be produced in four alternative formats, containing a vector of features for every object processed. This table of numeric values describing the objects from different points of view can be used externally as input data for any classification software. Additionally, several types of graphs and images describing the feature extraction procedure are produced, useful for interpreting and understanding the process. A test of the processing times is included, as well as an application of the program to a real parcel-based classification problem, providing some results and analyzing the applicability, the future improvement of the methodologies, and the use of additional types of data sets. This software is intended to be a dynamic tool, integrating further data and feature extraction algorithms for the progressive improvement of land use/land cover database classification and agricultural database updating processes. © 2011 Elsevier B.V. The authors appreciate the financial support provided by the Spanish Ministerio de Ciencia e Innovación and FEDER in the framework of projects CGL2009-14220 and CGL2010-19591/BTE, the Spanish Instituto Geográfico Nacional (IGN), the Institut Cartogràfic Valencià (ICV), the Instituto Murciano de Investigación y Desarrollo Agrario y Alimentario (IMIDA) and Banco de Terras de Galicia (Bantegal). Ruiz Fernández, LÁ.; Recio Recio, JA.; Fernández-Sarría, A.; Hermosilla, T. (2011). A feature extraction software tool for agricultural object-based image analysis. Computers and Electronics in Agriculture, 76(2):284-296. https://doi.org/10.1016/j.compag.2011.02.007
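
    The kind of per-object descriptors listed above can be approximated with standard libraries. The sketch below computes spectral statistics and grey-level co-occurrence texture measures for a single masked image object; it is illustrative only and does not reproduce FETEX 2.0's actual attribute set or interface (the function and variable names are invented).

        # Hedged sketch of per-object feature extraction: spectral mean/std per
        # band plus GLCM texture descriptors for one image object given as a
        # boolean mask over a multispectral array. Data are synthetic.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

        def object_features(image, mask):
            """image: (H, W, bands) uint8 array; mask: (H, W) boolean object mask."""
            feats = {}
            # Spectral descriptors: mean and standard deviation of each band.
            for b in range(image.shape[2]):
                band = image[:, :, b][mask]
                feats[f"mean_b{b}"] = float(band.mean())
                feats[f"std_b{b}"] = float(band.std())
            # Texture descriptors from the grey-level co-occurrence matrix of band 0.
            grey = np.where(mask, image[:, :, 0], 0).astype(np.uint8)
            glcm = graycomatrix(grey, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            for prop in ("contrast", "homogeneity", "energy", "correlation"):
                feats[f"glcm_{prop}"] = float(graycoprops(glcm, prop)[0, 0])
            return feats

        img = (np.random.rand(64, 64, 4) * 255).astype(np.uint8)   # synthetic 4-band parcel
        parcel_mask = np.zeros((64, 64), bool)
        parcel_mask[8:56, 8:56] = True
        print(object_features(img, parcel_mask))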

    Human object annotation for surveillance video forensics

    A system that can automatically annotate surveillance video in a manner useful for locating a person from a given description of their clothing is presented. Each human is annotated based on two appearance features: the primary colors of the clothes and the presence of text/logos on the clothes. The annotation occurs after a robust foreground extraction stage employing a modified Gaussian mixture model-based approach. The proposed pipeline includes a preprocessing stage in which the color appearance of the image is improved using a color constancy algorithm. To annotate color information for human clothes, we use the color histogram in HSV space and find local maxima to extract the dominant colors of different parts of a segmented human object. To detect text/logos on clothes, we begin with the extraction of connected components of enhanced horizontal, vertical, and diagonal edges in the frames. These candidate regions are classified as text or non-text on the basis of their local energy-based shape histogram features. Further, to detect humans, a novel technique is proposed that uses contourlet transform-based local binary pattern (CLBP) features. In the proposed method, we extract the uniform, direction-invariant LBP feature descriptor for contourlet-transformed high-pass subimages from the vertical and diagonal directional bands. In the final stage, the extracted CLBP descriptors are classified by a trained support vector machine. Experimental results illustrate the superiority of our method on large-scale surveillance video data.
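
    A simplified version of the color-annotation step can be sketched as follows: build a hue histogram in HSV space over the segmented person region and keep its local maxima as dominant clothing colors. The person crop and foreground mask are assumed inputs, and the smoothing and peak-picking details are illustrative rather than the paper's exact procedure.

        # Hedged sketch of dominant-color annotation: hue histogram in HSV space
        # over a foreground mask, with local maxima taken as dominant colors.
        import cv2
        import numpy as np

        def dominant_hues(bgr_crop, fg_mask, num_colors=3):
            """bgr_crop: person image patch (BGR); fg_mask: uint8 foreground mask."""
            hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0], fg_mask, [180], [0, 180]).ravel()
            hist = np.convolve(hist, np.ones(5) / 5, mode="same")   # light smoothing
            # Local maxima of the smoothed hue histogram.
            peaks = [h for h in range(1, 179)
                     if hist[h] > hist[h - 1] and hist[h] >= hist[h + 1]]
            peaks.sort(key=lambda h: hist[h], reverse=True)
            return peaks[:num_colors]        # dominant hue bins (0-179 in OpenCV)

        crop = (np.random.rand(128, 64, 3) * 255).astype(np.uint8)  # synthetic person crop
        mask = np.full((128, 64), 255, np.uint8)                    # whole crop as foreground
        print("dominant hue bins:", dominant_hues(crop, mask))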

    Facial Image Reconstruction from a Corrupted Image by Support Vector Data Description

    This paper proposes a method for automatic facial reconstruction from a facial image partially corrupted by noise or occlusion. The method has two key features: first, the automatic extraction of correspondences between the corrupted input face and a reference face without additional manual work; second, the reconstruction of the complete facial information from the corrupted facial information based on these correspondences. We propose a non-iterative approach that matches multiple feature points in order to obtain the correspondences between the input image and the reference face. Furthermore, the shape and texture of the whole face are reconstructed by SVDD (Support Vector Data Description) from the partial correspondences obtained by matching. The experimental results on facial image reconstruction show that the proposed SVDD-based method gives smaller reconstruction errors for a facial image corrupted by Gaussian noise and occlusion than the existing linear projection reconstruction method with a regularization factor. The proposed method also reduces the mean intensity error per pixel by an average of 35%, especially for facial images corrupted by Gaussian noise.
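
    For reference, the evaluation metric quoted in this abstract, the mean intensity error per pixel between a reconstructed face and its uncorrupted ground truth, can be computed as in the short sketch below; the images are synthetic placeholders and the SVDD reconstruction itself is not reproduced.

        # Hedged sketch of the quoted metric: mean absolute intensity error per
        # pixel between a reconstructed face and the clean ground truth.
        import numpy as np

        def mean_intensity_error(reconstructed, ground_truth):
            """Both arguments are greyscale face images of identical shape."""
            diff = np.abs(reconstructed.astype(float) - ground_truth.astype(float))
            return diff.mean()               # average absolute error per pixel

        rng = np.random.default_rng(1)
        truth = rng.integers(0, 256, (64, 64)).astype(np.uint8)         # clean face (synthetic)
        recon = np.clip(truth + rng.normal(0, 5, truth.shape), 0, 255)  # imperfect reconstruction
        print("mean intensity error per pixel:", mean_intensity_error(recon, truth))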

    Hand gesture recognition with jointly calibrated Leap Motion and depth sensor

    Novel 3D acquisition devices like depth cameras and the Leap Motion have recently reached the market. Depth cameras provide a complete 3D description of the framed scene, while the Leap Motion sensor is explicitly targeted at hand gesture recognition and provides only a limited set of relevant points. This paper shows how to jointly exploit the two types of sensors for accurate gesture recognition. An ad hoc solution for the joint calibration of the two devices is first presented. A set of novel feature descriptors is then introduced for both the Leap Motion and the depth data. Various schemes based on the distances of the hand samples from the centroid, on the curvature of the hand contour, and on the convex hull of the hand shape are employed, and the use of Leap Motion data to aid feature extraction is also considered. The proposed feature sets are fed to two different classifiers, one based on multi-class SVMs and one exploiting Random Forests. Different feature selection algorithms have also been tested to reduce the complexity of the approach. Experimental results show that the proposed method achieves very high accuracy, and the current implementation runs in real time.
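
    One of the descriptors mentioned above, the distribution of hand-contour distances from the centroid, can be sketched as shown below and fed to both a multi-class SVM and a Random Forest; the contour points, labels and histogram size are illustrative, and the curvature and convex-hull features are omitted.

        # Hedged sketch: histogram of contour distances from the centroid as a
        # gesture feature, classified with an SVM and a Random Forest.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC

        def centroid_distance_feature(points, bins=16):
            """points: (N, 2) array of 2D hand-contour samples."""
            centroid = points.mean(axis=0)
            d = np.linalg.norm(points - centroid, axis=1)
            d = d / (d.max() + 1e-9)                    # scale invariance
            hist, _ = np.histogram(d, bins=bins, range=(0, 1))
            return hist / hist.sum()

        rng = np.random.default_rng(2)
        X = np.array([centroid_distance_feature(rng.random((200, 2))) for _ in range(300)])
        y = rng.integers(0, 5, 300)                     # 5 gesture classes (synthetic)

        print("SVM accuracy:", SVC().fit(X[:200], y[:200]).score(X[200:], y[200:]))
        print("RF  accuracy:", RandomForestClassifier().fit(X[:200], y[:200]).score(X[200:], y[200:]))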

    Fault detection in operating helicopter drive train components based on support vector data description

    The objective of the paper is to develop a vibration-based automated procedure for the early detection of mechanical degradation of helicopter drive train components using Health and Usage Monitoring Systems (HUMS) data. An anomaly-detection method devoted to quantifying the degree of deviation of the mechanical state of a component from its nominal condition is developed. The method is based on an Anomaly Score (AS) formed by a combination of statistical features correlated with specific damage types, also known as Condition Indicators (CI); operational variability is thus implicitly included in the model through the CI correlation. The fault detection problem is then recast as a one-class classification problem in the space spanned by a set of CIs, with the aim of globally separating normal from anomalous observations, respectively related to healthy and supposedly faulty components. In this paper, a procedure based on an efficient one-class classification method that does not require any assumption on the data distribution is used. The core of this approach is the Support Vector Data Description (SVDD), which allows an efficient data description without the need for a large amount of statistical data. Several analyses have been carried out to validate the proposed procedure, using flight vibration data collected from an in-service H135 (formerly known as EC135) helicopter, for which micro-pitting damage on a gear was detected by HUMS and confirmed through visual inspection. The capability of the proposed approach to provide a better trade-off between false alarm and missed detection rates than individual CIs, and than the AS obtained assuming jointly Gaussian-distributed CIs, has also been analysed.
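
    The one-class setup described here can be sketched by fitting a data description on condition indicators from healthy flights and scoring new observations against it. The example below uses scikit-learn's OneClassSVM with an RBF kernel as a stand-in for SVDD (the two formulations coincide for a Gaussian kernel); the CI values and thresholding are synthetic and illustrative.

        # Hedged sketch: one-class data description on condition indicators (CI).
        # OneClassSVM with an RBF kernel stands in for SVDD; data are synthetic.
        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(3)
        ci_healthy = rng.normal(0.0, 1.0, (500, 6))     # CIs from nominal components
        ci_faulty = rng.normal(3.0, 1.0, (20, 6))       # CIs shifted by degradation

        scaler = StandardScaler().fit(ci_healthy)
        svdd = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
        svdd.fit(scaler.transform(ci_healthy))

        # Negative decision values lie outside the learned description: flag as anomalous.
        test = np.vstack([ci_healthy[:5], ci_faulty[:5]])
        scores = svdd.decision_function(scaler.transform(test))
        print("healthy scores:", np.round(scores[:5], 2))
        print("faulty  scores:", np.round(scores[5:], 2))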

    Emotion Recognition from Acted and Spontaneous Speech

    This doctoral thesis deals with emotion recognition from speech signals. The thesis is divided into two main parts. The first part describes the proposed approaches for emotion recognition using two different multilingual databases of acted emotional speech. The main contributions of this part are a detailed analysis of a large set of acoustic features, new classification schemes for vocal emotion recognition such as "emotion coupling", and a new method for mapping discrete emotions into a two-dimensional space. The second part is devoted to emotion recognition using multilingual databases of spontaneous emotional speech, based on telephone recordings obtained from real call centers. The knowledge gained from the experiments with acted speech was exploited to design a new approach for classifying seven spontaneous emotional states. The core of the proposed approach is a complex classification architecture based on the fusion of different systems. The thesis also examines the influence of the speaker's emotional state on gender recognition performance and proposes a system for the automatic identification of successful phone calls in call centers by means of dialogue features extracted from the conversation between the call participants.
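
    The fusion idea at the core of the spontaneous-speech system can be illustrated with a simple late-fusion sketch: several classifiers are trained on acoustic feature vectors and their class probabilities are averaged. The features, labels and classifier choices below are placeholders, not the thesis configuration.

        # Hedged sketch of late fusion: average the class probabilities of
        # several classifiers trained on acoustic feature vectors (synthetic).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC

        rng = np.random.default_rng(4)
        X = rng.random((700, 40))            # synthetic acoustic feature vectors
        y = rng.integers(0, 7, 700)          # seven emotional states
        X_tr, y_tr, X_te, y_te = X[:500], y[:500], X[500:], y[500:]

        models = [SVC(probability=True).fit(X_tr, y_tr),
                  RandomForestClassifier().fit(X_tr, y_tr),
                  LogisticRegression(max_iter=1000).fit(X_tr, y_tr)]

        # Late fusion: average the per-class probabilities of the individual systems.
        fused = np.mean([m.predict_proba(X_te) for m in models], axis=0)
        pred = fused.argmax(axis=1)
        print("fused accuracy:", (pred == y_te).mean())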