
    Inexpensive fusion methods for enhancing feature detection

    Recent successful approaches to high-level feature detection in image and video data have treated the problem as a pattern classification task. These typically leverage techniques from statistical machine learning, coupled with ensemble architectures that create multiple feature detection models. Once created, co-occurrence between learned features can be captured to further boost performance. At multiple stages throughout these frameworks, various pieces of evidence can be fused together to boost performance. These approaches, whilst very successful, are computationally expensive and, depending on the task, require significant computational resources. In this paper we propose two fusion methods that combine the output of an initial basic statistical machine learning approach with a lower-quality information source, in order to gain diversity in the classified results whilst requiring only modest computing resources. Our approaches, validated experimentally on TRECVid data, are designed to be complementary to existing frameworks and can be regarded as possible replacements for the more computationally expensive combination strategies used elsewhere.
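    The abstract does not detail the combination operators used; as a rough illustration of score-level late fusion of a main detector with a cheap auxiliary source, a minimal Python sketch might look as follows (the weight and all score values are hypothetical, not taken from the paper):

        import numpy as np

        def late_fusion(primary_scores, auxiliary_scores, w=0.8):
            """Weighted score-level fusion of a main detector with a cheaper, weaker source.
            w is the trust placed in the primary detector; in practice it would be tuned
            on a validation partition (the value here is hypothetical)."""
            primary_scores = np.asarray(primary_scores, dtype=float)
            auxiliary_scores = np.asarray(auxiliary_scores, dtype=float)
            return w * primary_scores + (1.0 - w) * auxiliary_scores

        # Toy per-shot confidence scores for one visual concept (values are made up).
        fused = late_fusion([0.91, 0.40, 0.75, 0.10], [0.60, 0.55, 0.20, 0.05])
        print(fused)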

    Video Event Recognition by Dempster-Shafer Theory

    Abstract. This paper presents an event recognition framework, based on Dempster-Shafer theory, that combines evidence of events from low-level computer vision analytics. The proposed method, employing evidential network modelling of composite events, is able to represent the uncertainty of event output from low-level video analysis and to infer high-level events with semantic meaning along with degrees of belief. The method has been evaluated on videos of subjects entering and leaving a seated area. This has relevance to a number of transport scenarios, such as onboard buses and trains, and also in train stations and airports. Recognition results of 78% and 100% for four composite events are encouraging.
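    For readers unfamiliar with the combination step, the following minimal Python sketch implements standard Dempster's rule over a small frame of discernment; the event names and mass values are illustrative only and are not taken from the paper:

        from itertools import product

        def dempster_combine(m1, m2):
            """Combine two mass functions (dict mapping frozenset -> mass) with Dempster's rule."""
            fused, conflict = {}, 0.0
            for (a, ma), (b, mb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb
            if conflict >= 1.0:
                raise ValueError("total conflict: the sources cannot be combined")
            return {hyp: mass / (1.0 - conflict) for hyp, mass in fused.items()}

        # Hypothetical masses from two low-level analytics for a toy "person sits down" event.
        SIT, STAND = frozenset({"sit"}), frozenset({"stand"})
        EITHER = SIT | STAND   # ignorance: mass not committed to either hypothesis
        m_tracker = {SIT: 0.6, STAND: 0.1, EITHER: 0.3}   # e.g. a person tracker
        m_zone    = {SIT: 0.5, STAND: 0.2, EITHER: 0.3}   # e.g. a seat-zone occupancy detector
        print(dempster_combine(m_tracker, m_zone))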

    AN INTELLIGENT CLASSIFIER FUSION TECHNIQUE FOR IMPROVED MULTIMODAL BIOMETRIC AUTHENTICATION USING MODIFIED DEMPSTER-SHAFER RULE OF COMBINATION

    Multimodal biometric technology is a technology developed to overcome the limitations imposed by unimodal biometric systems. The paradigm consolidates evidence from multiple biometric sources, offering considerable improvements in reliability with reasonable overall performance in many applications. Meanwhile, the efficient and effective fusion of the evidence obtained from different sources remains an open problem that attracts research attention. In this research paper, we consider a classical classifier fusion technique, Dempster's rule of combination proposed in the Dempster-Shafer Theory (DST) of evidence. DST provides a useful computational scheme for integrating accumulated evidence and has the potential to update the prior every time new data is added to the database. However, it has some shortcomings. Dempster-Shafer evidence combination fails to respond adequately to the fusion of different basic belief assignments (bbas), even when the level of conflict between sources is low. It also tends to ignore plausibility completely in its measure of belief. To solve these problems, this paper presents a modified Dempster's rule of combination for multimodal biometric authentication which integrates hyperbolic tangent (tanh) estimators to overcome the inadequate normalization steps in the original Dempster's rule of combination. We also adopt a multi-level decision threshold on its measure of belief to model the modified Dempster-Shafer rule of combination.
    Keywords: Information fusion, Multimodal Biometric Authentication, Normalization technique, Tanh Estimators
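    As a rough sketch of the tanh-estimator normalization mentioned above: the textbook form uses Hampel estimators of location and scale, which this sketch replaces with the median and MAD for brevity, and all matcher scores are purely hypothetical:

        import numpy as np

        def tanh_normalize(scores):
            """Map raw matcher scores into (0, 1) with a tanh-estimator style normalization.
            Median/MAD stand in here for the Hampel estimators of the standard formulation;
            both choices are robust to outlying scores."""
            scores = np.asarray(scores, dtype=float)
            loc = np.median(scores)
            mad = np.median(np.abs(scores - loc))
            scale = 1.4826 * mad if mad > 0 else 1.0
            return 0.5 * (np.tanh(0.01 * (scores - loc) / scale) + 1.0)

        # Hypothetical raw scores from two matchers operating on very different scales.
        face_scores = tanh_normalize([72.0, 35.5, 90.1, 12.3])
        iris_scores = tanh_normalize([0.81, 0.22, 0.95, 0.10])
        # The normalized scores can then be converted into basic belief assignments
        # before applying a (modified) rule of combination.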

    Development of an intelligent automated recognition system for real-time characterization of road surface conditions using a multi-sensor approach

    The role of a road weather service is to issue forecasts and warnings to users about the state of the roadway, making it possible to anticipate dangerous driving conditions, particularly in winter. It is therefore important to determine the road surface condition at all times. The objective of this project is to develop an automated multi-sensor detection system for the real-time characterization of road surface conditions (snow, ice, wet, dry). This thesis therefore focuses on the development of a deep-learning method for fusing image and sound data based on Dempster-Shafer theory. The direct measurements used to acquire the data for training the fusion model were made with two low-cost, commercially available sensors. The first sensor is a camera that records videos of the road surface. The second sensor is a microphone that records the tyre-road interaction noise characteristic of each surface condition. The system is ultimately intended to run on a single-board computer for real-time acquisition, processing and dissemination of information, in order to alert road maintenance services and road users. Specifically, the system is structured as follows: 1) a deep learning architecture classifying each surface condition from the video frames in the form of probabilities; 2) a deep learning architecture classifying each surface condition from the sound in the form of probabilities; 3) the probabilities from each architecture are then fed into the fusion model to obtain the final decision. To keep the system lightweight and inexpensive, it was built on architectures combining compactness and accuracy, namely SqueezeNet for the images and M5 for the sound. During validation, the system demonstrated good performance in detecting surface conditions, notably 87.9% for black ice and 97% for slush.
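    As an illustration of the final fusion step described in point 3, the sketch below combines two softmax outputs over the four surface classes with Dempster's rule; when all mass sits on singleton hypotheses the rule reduces to a normalized element-wise product. The class order and probability values are hypothetical, not taken from the thesis:

        import numpy as np

        CLASSES = ["dry", "wet", "ice", "snow"]   # the four surface states (order is arbitrary)

        def fuse_singleton_masses(p_image, p_audio):
            """Dempster's rule when every mass sits on a singleton hypothesis (softmax outputs):
            the combination reduces to a normalized element-wise product."""
            p_image = np.asarray(p_image, dtype=float)
            p_audio = np.asarray(p_audio, dtype=float)
            joint = p_image * p_audio
            conflict = 1.0 - joint.sum()          # mass given to incompatible class pairs
            return joint / joint.sum(), conflict

        # Hypothetical softmax outputs from the image branch and the audio branch.
        p_img = [0.10, 0.15, 0.60, 0.15]
        p_snd = [0.05, 0.10, 0.70, 0.15]
        fused, conflict = fuse_singleton_masses(p_img, p_snd)
        print(dict(zip(CLASSES, fused.round(3))), "conflict:", round(conflict, 3))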

    Ear Identification by Fusion of Segmented Slice Regions using Invariant Features: An Experimental Manifold with Dual Fusion Approach

    This paper proposes a robust ear identification system developed by fusing SIFT features of color-segmented slice regions of an ear. The proposed ear identification method uses a Gaussian mixture model (GMM) to build an ear model as a mixture of Gaussians via a vector quantization algorithm, and K-L divergence is applied to the GMM framework to record color similarity within the specified ranges by comparing a pair of reference and probe ears. SIFT features are then detected and extracted from each color slice region as part of invariant feature extraction. The extracted keypoints are then fused separately by two fusion approaches, namely concatenation and Dempster-Shafer theory. Finally, the fusion approaches generate two independent augmented feature vectors, which are used separately for identification of individuals. The proposed identification technique is tested on the IIT Kanpur ear database of 400 individuals and is found to achieve 98.25% identification accuracy when a top-5 match criterion is set for each subject.
    Comment: 12 pages, 3 figures
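    A minimal sketch of the per-slice SIFT extraction and the concatenation branch of the fusion, using OpenCV; the horizontal band split here stands in for the paper's color-based slice segmentation, which is not reproduced, and all names and parameters are illustrative:

        import cv2
        import numpy as np

        def slice_descriptors(image_bgr, n_slices=4):
            """Detect SIFT keypoints in horizontal bands of an ear image and return one
            descriptor matrix per band (simplified stand-in for color slice regions)."""
            sift = cv2.SIFT_create()
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            band_h = gray.shape[0] // n_slices
            per_slice = []
            for i in range(n_slices):
                band = gray[i * band_h:(i + 1) * band_h, :]
                _, desc = sift.detectAndCompute(band, None)
                per_slice.append(desc if desc is not None else np.empty((0, 128), np.float32))
            return per_slice

        def concatenation_fusion(per_slice):
            """Stack the per-slice descriptors into a single augmented feature matrix."""
            return np.vstack(per_slice)

        # augmented = concatenation_fusion(slice_descriptors(cv2.imread("ear.png")))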