
    Robust Face Analysis using Convolutional Neural Networks

    Automatic face analysis has to cope with pose and lighting variations. Pose variations in particular are difficult to handle, and many face analysis methods require sophisticated normalization procedures. We propose a data-driven face analysis approach that not only extracts the features relevant to a given face analysis task, but is also robust to changes in face location and scale. This is achieved by deploying convolutional neural networks trained for either facial expression recognition or face identity recognition. Combining the outputs of these networks allows us to obtain subject-dependent, personalized recognition of facial expressions.
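
    A minimal sketch of the idea of combining two such networks, not the authors' code: one small CNN predicts the expression, another predicts the identity, and the identity output is used to bias the expression logits. Layer sizes, class counts and the combination rule are assumptions for illustration.

```python
# Sketch only: two small CNNs whose outputs are combined so the expression
# prediction can be conditioned on the recognised subject.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, n_classes)

    def forward(self, x):
        h = self.features(x)                 # conv/pool feature extraction
        return self.classifier(h.flatten(1)) # class logits

expression_net = SmallCNN(n_classes=7)   # 7 basic expressions (assumption)
identity_net = SmallCNN(n_classes=20)    # 20 known subjects (assumption)

# One simple way to "personalise" the expression decision: a learned
# per-subject bias on the expression logits, driven by the identity output.
subject_bias = nn.Linear(20, 7, bias=False)

face = torch.randn(1, 1, 64, 64)         # dummy 64x64 grayscale face crop
expr_logits = expression_net(face)
id_probs = torch.softmax(identity_net(face), dim=1)
personalised = expr_logits + subject_bias(id_probs)
print(personalised.argmax(dim=1))        # predicted expression index
```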

    Spike Events Processing for Vision Systems

    In this paper we briefly summarize the fundamental properties of spike event processing applied to artificial vision systems. This sensing and processing technology is capable of very high-speed throughput because it does not rely on sensing and processing sequences of frames, and because it allows for complex, hierarchically structured, cortical-like layers for sophisticated processing. The paper includes a few examples that demonstrate the potential of this technology for high-speed vision processing, such as a multilayer event processing network of 5 sequential cortical-like layers, and a recognition system capable of discriminating propellers of different shape rotating at 5000 revolutions per second (300000 revolutions per minute).
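
    To make the frame-free processing idea concrete, here is a minimal sketch, not the system described in the paper: events arrive as (timestamp, x, y, polarity) tuples and a single layer accumulates activity per pixel, emitting an output event once a threshold is crossed. Such layers can be chained to form a hierarchy. The threshold and time window are assumptions.

```python
# Sketch of event-driven (frame-free) processing of an address-event stream.
from collections import defaultdict

def event_layer(events, threshold=3, window_us=1000):
    """Emit an output event for a pixel once `threshold` input events
    arrive at that pixel within `window_us` microseconds."""
    recent = defaultdict(list)      # pixel -> timestamps of recent events
    for t, x, y, pol in events:     # events are processed one by one, no frames
        key = (x, y)
        recent[key] = [s for s in recent[key] if t - s <= window_us] + [t]
        if len(recent[key]) >= threshold:
            recent[key].clear()
            yield (t, x, y, pol)    # output spike; could feed the next layer

# Dummy input stream: a burst at pixel (10, 10) plus one stray event.
stream = [(0, 10, 10, 1), (200, 10, 10, 1), (400, 12, 30, 1), (450, 10, 10, 1)]
for out_event in event_layer(stream):
    print("output event:", out_event)
```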

    Emotion Based Music Player

    Listening to music affects human brain activity. An emotion-based music player with an automated playlist can help users maintain a particular emotional state. This research proposes an emotion-based music player that creates playlists from captured photos of the user. Manually sorting a playlist and annotating songs according to the current emotion is time consuming and tedious. Numerous algorithms have been implemented to automate this process, but existing algorithms are slow, increase the cost of the system by requiring additional hardware, and offer low accuracy. This paper presents an algorithm that not only automates the generation of an audio playlist, but also classifies newly added songs; its main task is to capture the user's current mood and play songs accordingly, making the system faster and more efficient. The main goals are to reduce the overall computational time and cost of the system and to increase its accuracy. A further important goal is to shift the user's mood when it is negative, such as sad or depressed. The model is validated by testing the system against user-dependent and user-independent datasets.
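
    A minimal sketch of the overall flow described above, not the paper's implementation: classify the user's mood from a captured photo, then pick a song from the matching playlist, redirecting negative moods to uplifting music. The classifier is a placeholder stub, and the mood labels, playlists and file names are assumptions.

```python
# Sketch: photo -> mood -> (possibly redirected) playlist -> song.
import random

PLAYLISTS = {
    "happy":   ["song_a.mp3", "song_b.mp3"],
    "neutral": ["song_c.mp3"],
    "sad":     ["song_d.mp3"],
}
UPLIFT = {"sad": "happy"}   # negative moods are steered to a positive playlist

def classify_mood(photo_path: str) -> str:
    """Placeholder for a real facial-expression classifier (e.g. a CNN)."""
    return random.choice(list(PLAYLISTS))

def pick_song(photo_path: str) -> str:
    mood = classify_mood(photo_path)
    target = UPLIFT.get(mood, mood)   # shift negative moods toward uplifting songs
    return random.choice(PLAYLISTS[target])

print(pick_song("captured_frame.jpg"))
```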

    IoT-Enabled flood severity prediction via ensemble machine learning models

    River flooding is a natural phenomenon that can have a devastating effect on human life and cause economic losses. There have been various approaches to studying river flooding; however, insufficient understanding and limited knowledge of flooding conditions hinder the development of prevention and control measures for this natural phenomenon. This paper presents a new approach for predicting water level, and the associated flood severity, using an ensemble model. Our approach leverages recent developments in the Internet of Things (IoT) and machine learning for the automated analysis of flood data that may help prevent natural disasters. The research outcomes indicate that ensemble learning provides a more reliable tool for predicting flood severity levels. The experimental results show that an ensemble of a Long Short-Term Memory (LSTM) model and a random forest outperformed the individual models, with a sensitivity, specificity and accuracy of 71.4%, 85.9% and 81.13%, respectively.
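
    A minimal sketch of one way such an ensemble could be assembled, not the authors' pipeline: a random forest is trained on flattened sensor features, a small LSTM runs over the same sequences, and their class probabilities are averaged to pick a severity level. The feature layout, sequence length, severity classes and synthetic data are assumptions; the LSTM training loop is omitted.

```python
# Sketch: ensemble a random forest and an LSTM by averaging class probabilities.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, seq_len, n_feat, n_classes = 200, 24, 3, 3   # e.g. 24 hourly IoT readings

X_seq = rng.normal(size=(n, seq_len, n_feat)).astype("float32")  # sensor sequences
y = rng.integers(0, n_classes, size=n)                           # severity labels

# Random forest works on flattened, tabular features.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_seq.reshape(n, -1), y)

# Small LSTM classifier over the same sequences (training loop omitted).
class LSTMClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, 16, batch_first=True)
        self.head = nn.Linear(16, n_classes)
    def forward(self, x):
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])

lstm = LSTMClassifier()

# Ensemble step: average the two models' probabilities and take the argmax.
rf_probs = rf.predict_proba(X_seq.reshape(n, -1))
with torch.no_grad():
    lstm_probs = torch.softmax(lstm(torch.from_numpy(X_seq)), dim=1).numpy()
severity = np.argmax((rf_probs + lstm_probs) / 2, axis=1)
print(severity[:10])   # predicted severity level per sample
```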