1,218 research outputs found

    The DRIVE-SAFE project: signal processing and advanced information technologies for improving driving prudence and accidents

    In this paper we describe the DRIVE-SAFE project, whose aim is to create conditions for prudent driving on highways and roadways in order to reduce accidents caused by driver behavior. To achieve this goal, critical data are being collected from multimodal sensors (such as cameras, microphones, and other vehicle sensors) to build a unique databank on driver behavior. We are developing systems and technologies for analyzing these data and automatically detecting potentially dangerous situations (such as driver fatigue and distraction). Based on the findings from these studies, we will propose systems that warn the driver and take other precautionary measures to avoid accidents once a dangerous situation is detected. To address these issues, a national consortium has been formed including the Automotive Research Center (OTAM), Koç University, Istanbul Technical University, Sabancı University, Ford A.S., Renault A.S., and Fiat A.Ş.
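    The detection step described above can be illustrated with a minimal rule-based sketch that fuses a few driver-state signals into warnings. All channel names and thresholds below are illustrative assumptions, not values from the DRIVE-SAFE project.

```python
# Minimal sketch of multimodal danger detection as described in the abstract.
# The sensor channels and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class DriverState:
    eye_closure_ratio: float       # fraction of time eyes closed (camera), 0..1
    gaze_off_road_s: float         # seconds gaze has been off the road (camera)
    steering_reversal_rate: float  # steering reversals per minute (vehicle bus)

def assess_danger(state: DriverState) -> list[str]:
    """Return a list of warnings for potentially dangerous situations."""
    warnings = []
    if state.eye_closure_ratio > 0.15:        # assumed fatigue threshold
        warnings.append("fatigue: prolonged eye closure")
    if state.gaze_off_road_s > 2.0:           # assumed distraction threshold
        warnings.append("distraction: gaze off road")
    if state.steering_reversal_rate > 20.0:   # assumed erratic-steering threshold
        warnings.append("erratic steering")
    return warnings

if __name__ == "__main__":
    print(assess_danger(DriverState(0.22, 1.1, 8.0)))  # ['fatigue: prolonged eye closure']
```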

    Modeling of Performance Creative Evaluation Driven by Multimodal Affective Data

    Performance creative evaluation can be achieved through affective data, and the use of affective features to evaluate performance creative is a new research trend. This paper proposes a “Performance Creative—Multimodal Affective (PC-MulAff)” model based on multimodal affective features for performance creative evaluation. Multimedia data acquisition equipment is used to collect physiological data from the audience, including multimodal affective data such as facial expressions, heart rate, and eye movement. Affective features are computed from the multimodal data and combined with director annotations, and a “Performance Creative—Affective Acceptance (PC-Acc)” measure is defined on these features to evaluate the quality of the performance creative. The PC-MulAff model is verified on several performance data sets. The experimental results show that the model achieves high evaluation quality across different performance forms; in the creative evaluation of dance performances, its accuracy is 7.44% and 13.95% higher than that of single-textual and single-video evaluation, respectively.
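    As a rough illustration of how multimodal affective features might be aggregated into an acceptance score over director-annotated segments, the sketch below uses assumed feature definitions and weights; it is not the paper's actual PC-Acc formulation.

```python
# Illustrative aggregation of multimodal affective features into an acceptance
# score; the weights, feature definitions, and heart-rate normalisation are
# assumptions, not the paper's PC-Acc definition.
import numpy as np

def pc_acc(face_valence: np.ndarray,
           heart_rate: np.ndarray,
           gaze_on_stage: np.ndarray,
           director_mask: np.ndarray,
           weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted affective score averaged over director-annotated creative segments.

    All inputs are per-second arrays of equal length; director_mask is 1 where the
    director marked a creative segment and 0 elsewhere.
    """
    # Normalise heart rate to a 0..1 arousal proxy (assumed 60-120 bpm range).
    arousal = np.clip((heart_rate - 60.0) / 60.0, 0.0, 1.0)
    fused = weights[0] * face_valence + weights[1] * arousal + weights[2] * gaze_on_stage
    # Average the fused affect only over the annotated creative segments.
    return float(fused[director_mask.astype(bool)].mean())

if __name__ == "__main__":
    t = 10
    score = pc_acc(face_valence=np.full(t, 0.7),
                   heart_rate=np.full(t, 90.0),
                   gaze_on_stage=np.full(t, 0.9),
                   director_mask=np.array([0, 0, 1, 1, 1, 1, 1, 0, 0, 0]))
    print(round(score, 3))  # 0.68
```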

    Multimodal biometric system for ECG, ear and iris recognition based on local descriptors

    © 2019, Springer Science+Business Media, LLC, part of Springer Nature. Combining information extracted from different biometric modalities in a multimodal biometric recognition system aims to overcome the drawbacks encountered in unimodal biometric systems. Fusion of multiple biometrics, such as face, fingerprint, and iris, has been proposed. Recently, electrocardiograms (ECG) have been used as a new biometric technology in unimodal and multimodal recognition systems. ECG inherently provides a liveness characteristic, making it hard to spoof compared with other biometric traits. Ear biometrics provide a rich and stable source of information over an acceptable period of human life. Iris biometrics have been combined with other modalities such as fingerprint, face, and palm print because of their high accuracy and reliability. In this paper, a new multimodal biometric system based on ECG, ear, and iris biometrics fused at the feature level is proposed. Preprocessing techniques, including normalization and segmentation, are applied to the ECG, ear, and iris data. Local texture descriptors, namely 1D-LBP (one-dimensional Local Binary Patterns), Shifted-1D-LBP, and 1D-MR-LBP (Multi-Resolution LBP), are then used to extract the important features from the ECG signal and from the ear and iris images converted to 1D signals. KNN and RBF classifiers are used in the matching stage to classify an unknown user as genuine or impostor. The developed system is validated using the benchmark ID-ECG database, the USTB1, USTB2, and AMI ear databases, and the CASIA v1 iris database. The experimental results demonstrate that the proposed approach outperforms unimodal biometric systems, achieving a Correct Recognition Rate (CRR) of 100% with an Equal Error Rate (EER) of 0.5%.
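    The feature-extraction and fusion pipeline described above can be sketched as follows; the neighbourhood size, histogram binning, and classifier settings are assumptions for illustration rather than the paper's exact configuration.

```python
# Sketch of 1D-LBP feature extraction and feature-level fusion with a KNN
# matcher; parameters and the random stand-in data are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def lbp_1d(signal: np.ndarray, p: int = 8) -> np.ndarray:
    """Histogram of one-dimensional Local Binary Patterns (1D-LBP).

    Each sample is compared with its p neighbours (p//2 on each side); neighbours
    greater than or equal to the centre contribute a 1-bit to the code.
    """
    half = p // 2
    codes = []
    for i in range(half, len(signal) - half):
        neighbours = np.concatenate([signal[i - half:i], signal[i + 1:i + 1 + half]])
        bits = (neighbours >= signal[i]).astype(int)
        codes.append(int("".join(map(str, bits)), 2))
    hist, _ = np.histogram(codes, bins=2 ** p, range=(0, 2 ** p), density=True)
    return hist

def fuse_features(ecg: np.ndarray, ear_1d: np.ndarray, iris_1d: np.ndarray) -> np.ndarray:
    """Feature-level fusion: concatenate the per-modality 1D-LBP histograms."""
    return np.concatenate([lbp_1d(ecg), lbp_1d(ear_1d), lbp_1d(iris_1d)])

# Usage sketch with random stand-in signals (in the paper, the ear and iris
# images are first converted to 1D signals).
rng = np.random.default_rng(0)
X = np.stack([fuse_features(rng.normal(size=500), rng.normal(size=500), rng.normal(size=500))
              for _ in range(6)])
y = [0, 0, 0, 1, 1, 1]
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict(X[:1]))  # [0]
```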

    Design of a wearable sensor system for neonatal seizure monitoring

    A survey of wearable biometric recognition systems

    The growing popularity of wearable devices is leading to new ways to interact with the environment, with other smart devices, and with other people. Wearables equipped with an array of sensors are able to capture the owner's physiological and behavioural traits, and are thus well suited for biometric authentication to control other devices or access digital services. However, wearable biometrics have substantial differences from traditional biometrics for computer systems, such as fingerprints, eye features, or voice. In this article, we discuss these differences and analyse how researchers are approaching the wearable biometrics field. We review and provide a categorization of wearable sensors useful for capturing biometric signals. We analyse the computational cost of the different signal processing techniques, an important practical factor in constrained devices such as wearables. Finally, we review and classify the most recent proposals in the field of wearable biometrics in terms of the structure of the biometric system proposed, their experimental setup, and their results. We also present a critique of experimental issues such as evaluation and feasibility aspects, and offer some final thoughts on research directions that need attention in future work. This work was partially supported by the MINECO grant TIN2013-46469-R (SPINY) and the CAM grant S2013/ICE-3095 (CIBERDINE).
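    The verification structure common to many of the surveyed systems can be sketched as an enrolled template compared against probe features under a distance threshold; the feature choices and threshold below are placeholders rather than any specific system from the survey.

```python
# Minimal sketch of template-based biometric verification on a wearable:
# cheap time-domain features keep the computational cost low on the device.
# Feature set and threshold are illustrative placeholders.
import numpy as np

def extract_features(sensor_window: np.ndarray) -> np.ndarray:
    """Cheap time-domain features from one window of wearable sensor samples."""
    return np.array([sensor_window.mean(),
                     sensor_window.std(),
                     np.abs(np.diff(sensor_window)).mean()])

def verify(template: np.ndarray, probe_window: np.ndarray, threshold: float = 0.5) -> bool:
    """Accept the wearer if the probe features are close enough to the enrolled template."""
    return bool(np.linalg.norm(extract_features(probe_window) - template) < threshold)

rng = np.random.default_rng(1)
enrol_window = rng.normal(size=200)
template = extract_features(enrol_window)
# A genuine-like probe (enrolment window plus small sensor noise) should be accepted.
print(verify(template, enrol_window + rng.normal(scale=0.01, size=200)))  # True
```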

    FAF: A novel multimodal emotion recognition approach integrating face, body and text

    Multimodal emotion analysis achieves better emotion recognition by drawing on more comprehensive emotional cues and multimodal emotion datasets. In this paper, we develop a large multimodal emotion dataset, named the "HED" dataset, to facilitate the emotion recognition task, and accordingly propose a multimodal emotion recognition method. To improve recognition accuracy, a "Feature After Feature" framework is used to extract crucial emotional information from the aligned face, body, and text samples. We employ various benchmarks to evaluate the "HED" dataset and compare their performance with our method. The results show that the five-class classification accuracy of the proposed multimodal fusion method is about 83.75%, an improvement of 1.83%, 9.38%, and 21.62% over the individual modalities, respectively. The complementarity between the channels is effectively exploited to improve emotion recognition performance. We have also established a multimodal online emotion prediction platform, aiming to provide free emotion prediction to more users.
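    As a hedged sketch of the general multimodal fusion idea (not the paper's "Feature After Feature" architecture, whose details are not given in the abstract), face, body, and text feature vectors can be concatenated and fed to a shared classifier:

```python
# Simple feature-level fusion baseline for five-class emotion recognition.
# The feature dimensions and random stand-in data are assumptions; this is not
# the "Feature After Feature" framework itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d_face, d_body, d_text, n_classes = 200, 128, 64, 300, 5

# Stand-in features for aligned face, body, and text samples.
face = rng.normal(size=(n, d_face))
body = rng.normal(size=(n, d_body))
text = rng.normal(size=(n, d_text))
labels = rng.integers(0, n_classes, size=n)

fused = np.concatenate([face, body, text], axis=1)   # feature-level fusion
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(clf.predict(fused[:3]))                        # predicted emotion classes
```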