
    Using data visualization to deduce facial expressions

    International conference held in Turkey, 6-8 September 2018. Collecting and examining multimodal sensor data of a human face in real time is an important problem in computer vision, with applications in medical analysis and monitoring, entertainment, and security. Despite advances in the field, many issues in facial expression identification remain open. Different algorithms and approaches have been developed to find patterns and characteristics that can support automatic expression identification. One way to study such data is through data visualization, which turns numbers and letters into aesthetically pleasing visuals, making it easier to recognize patterns and spot exceptions. In this article, we use information visualization as a tool to analyse data points and uncover possible patterns in four different facial expressions.
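
    A minimal sketch of the kind of visualization such an analysis might use: projecting per-frame facial feature vectors to two dimensions with PCA and colouring the points by expression label. The feature dimensionality, expression names, and synthetic data are illustrative assumptions, not the article's actual dataset.

    # Hypothetical example: project facial feature vectors to 2-D and
    # colour points by expression to look for visual clusters/patterns.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    expressions = ["neutral", "happy", "sad", "surprised"]   # assumed four classes

    # Stand-in for real sensor data: 200 frames x 68 landmark-derived features per class.
    features, labels = [], []
    for i, name in enumerate(expressions):
        features.append(rng.normal(loc=i, scale=1.0, size=(200, 68)))
        labels += [name] * 200
    X = np.vstack(features)

    coords = PCA(n_components=2).fit_transform(X)            # 2-D projection

    for name in expressions:
        mask = np.array(labels) == name
        plt.scatter(coords[mask, 0], coords[mask, 1], s=8, label=name)
    plt.legend()
    plt.title("Facial feature vectors projected with PCA")
    plt.show()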

    Gait Velocity Estimation using time interleaved between Consecutive Passive IR Sensor Activations

    Gait velocity has consistently been shown to be an important indicator and predictor of health status, especially in older adults. It is often assessed clinically, but the assessments occur infrequently and do not allow optimal detection of key health changes when they occur. In this paper, we show that the time gap between activations of a pair of Passive Infrared (PIR) motion sensors installed in consecutively visited rooms carries rich latent information about a person's gait velocity. We name this time gap the transition time and show that, despite a six-second refractory period of the PIR sensors, transition time can be used to obtain an accurate representation of gait velocity. Using a Support Vector Regression (SVR) approach to model the relationship between transition time and gait velocity, we show that gait velocity can be estimated with an average error of less than 2.5 cm/sec. This is demonstrated with data collected over a 5-year period from 74 older adults monitored in their own homes. The method is simple and cost effective and has advantages over competing approaches, such as obtaining 20 to 100x more gait velocity measurements per day and offering the fusion of location-specific information with time-stamped gait estimates. These advantages allow stable estimates of gait parameters (maximum or average speed, variability) at shorter time scales than current approaches. This also provides a pervasive in-home method for context-aware gait velocity sensing that allows for monitoring of gait trajectories in space and time.
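
    A minimal sketch of the modelling step described above, using scikit-learn's SVR to map PIR transition times to gait velocity. The synthetic transition times, the inverse-time relationship used to generate them, and the RBF kernel settings are assumptions for illustration; the paper's actual features and hyperparameters are not specified here.

    # Hypothetical sketch: regress gait velocity (cm/s) on PIR transition time (s) with SVR.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(42)

    # Synthetic stand-in data: shorter transition times correspond to faster walking.
    transition_time = rng.uniform(2.0, 15.0, size=500)               # seconds between sensor firings
    true_velocity = 250.0 / transition_time + rng.normal(0, 3, 500)  # cm/s, with measurement noise

    X = transition_time.reshape(-1, 1)
    X_train, X_test, y_train, y_test = train_test_split(X, true_velocity, random_state=0)

    model = SVR(kernel="rbf", C=100.0, epsilon=1.0)   # assumed hyperparameters
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    print(f"mean absolute error: {mean_absolute_error(y_test, pred):.2f} cm/s")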

    Linguistically-driven framework for computationally efficient and scalable sign recognition

    We introduce a new general framework for sign recognition from monocular video using limited quantities of annotated data. The novelty of the hybrid framework we describe here is that we exploit state-of-the-art learning methods while also incorporating features based on what we know about the linguistic composition of lexical signs. In particular, we analyze hand shape, orientation, location, and motion trajectories, and then use CRFs to combine this linguistically significant information for purposes of sign recognition. Our robust modeling and recognition of these sub-components of sign production allow an efficient parameterization of the sign recognition problem as compared with purely data-driven methods. This parameterization enables a scalable and extendable time-series learning approach that advances the state of the art in sign recognition, as shown by the results reported here for recognition of isolated, citation-form, lexical signs from American Sign Language (ASL).
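
    A minimal sketch of how per-frame linguistic features (hand shape, orientation, location, motion) might be combined in a CRF sequence model, here using the sklearn-crfsuite package. The feature names, toy sequences, and labels are illustrative assumptions and do not reproduce the framework's actual feature extraction.

    # Hypothetical sketch: a linear-chain CRF over per-frame hand features.
    # Requires: pip install sklearn-crfsuite
    import sklearn_crfsuite

    def frame_features(handshape, orientation, location, motion):
        """Encode one video frame's linguistically motivated features as a dict."""
        return {
            "handshape": handshape,
            "orientation": orientation,
            "location": location,
            "motion": motion,
        }

    # Two toy frame sequences with per-frame sign labels (assumed data).
    X_train = [
        [frame_features("flat", "palm-down", "chest", "arc"),
         frame_features("flat", "palm-down", "chin", "hold")],
        [frame_features("fist", "palm-in", "shoulder", "straight"),
         frame_features("fist", "palm-in", "waist", "hold")],
    ]
    y_train = [["SIGN_A", "SIGN_A"], ["SIGN_B", "SIGN_B"]]

    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X_train, y_train)

    print(crf.predict(X_train))   # predicted label sequence for each input sequence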

    Motion-Based Learning for Preschool Mathematics via Kinect (Counting Numbers)

    Kids start learning numbers in preschool, and many of them have difficulty at first: they may not recognize the numbers or find them hard to remember. Teachers help them overcome this, but usually only after several stages and considerable time. A problem is that the education system currently in use does little to increase the speed of children's learning. With teaching that can be boring and that relies only on the whiteboard as a medium, kids cannot give their full commitment in class. In Malaysia, traditional and conventional teaching is still widely practiced. However, the advancement of technology can help overcome this problem and complement the current education system, making learning less boring and more interactive. One example of such technology is Kinect, a device that tracks the human body so the user can interact with an application; it can make learning active and interactive while involving the whole body. The objective of introducing Kinect in education is to develop a motion-based learning application for preschool mathematics. To achieve this objective, the author tested the application with preschool kids aged 3 to 6 years old, focusing on teaching simple, basic numbers. Since the time given to develop this project was very limited, the methodology chosen was throwaway prototyping, which improved the project and increased user involvement. From the data collected and analyzed, results showed that kids enjoy the Kinect technology in mathematics, and teachers found it to be an interactive way of teaching.

    A Framework for Student Profile Detection

    Some of the biggest problems facing Higher Education Institutions are student drop-out and academic disengagement. Physical or psychological disabilities, social-economic or academic marginalization, and emotional and affective problems are some of the factors that can lead to them. The problem is worsened by the shortage of educational resources that can bridge the communication gap between faculty staff and the affective needs of these students. This dissertation focuses on the development of a framework capable of collecting analytic data on an array of emotions, affects and behaviours, acquired either through human observation, such as by a teacher in a classroom or a psychologist, or through electronic sensors and automatic analysis software, such as eye-tracking devices, emotion detection through facial expression recognition software, automatic gait and posture detection, and others. The framework provides guidance for compiling the gathered data into an ontology, enabling the extraction of patterns and outliers via machine learning, which assists the profiling of students in critical situations such as disengagement, attention deficit, drop-out, and other sociological issues. Consequently, it is possible to set real-time alerts when these profile conditions are detected, so that appropriate experts can verify the situation and apply effective procedures. The goal is that, by providing insightful real-time cognitive data and facilitating the profiling of students' problems, a faster personalized response to help the student is enabled, allowing academic performance improvements.
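
    A minimal sketch of the pattern/outlier-extraction step such a framework could use: fitting an unsupervised outlier detector over per-student affect features and raising an alert for flagged students. The feature names, injected at-risk profiles, and the use of scikit-learn's IsolationForest are assumptions for illustration, not the dissertation's prescribed pipeline.

    # Hypothetical sketch: flag students whose affect/behaviour features look anomalous.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)

    # Assumed per-student features aggregated from sensors/observations:
    # [attention_score, negative_affect_rate, absence_rate]
    students = [f"student_{i:03d}" for i in range(100)]
    features = rng.normal(loc=[0.7, 0.1, 0.05], scale=[0.1, 0.05, 0.03], size=(100, 3))
    features[:3] = [[0.2, 0.6, 0.4], [0.25, 0.5, 0.35], [0.3, 0.55, 0.5]]  # injected at-risk profiles

    detector = IsolationForest(contamination=0.05, random_state=0).fit(features)
    flags = detector.predict(features)          # -1 = outlier, 1 = inlier

    for student, flag in zip(students, flags):
        if flag == -1:
            print(f"ALERT: {student} matches an at-risk profile; refer to an expert for review")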

    Object localization using the 3D optical device MS Windows Kinect and image depth maps

    The paper deals with the problem of object recognition for the needs of mobile robotic systems (MRS). The emphasis was placed on the segmentation of the depth image and on noise filtration. MS Kinect was used to evaluate the potential of object localization based on the depth image. This tool, an affordable alternative to expensive devices based on 3D laser scanning, was deployed in a series of experiments focused on locating objects in its field of vision. In our case, balls with a fixed diameter were used as the objects for 3D localization.
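
    A minimal sketch of one way to locate a fixed-diameter ball in a Kinect depth map with OpenCV: normalize the 16-bit depth frame, smooth it, and search for circles whose radius range is derived from the known ball diameter and an assumed working distance. The file name, camera intrinsics, and radius bounds are illustrative assumptions, not the paper's actual processing chain.

    # Hypothetical sketch: find a fixed-diameter ball in a Kinect depth frame.
    import cv2
    import numpy as np

    BALL_DIAMETER_M = 0.10      # known ball diameter (assumed 10 cm)
    FOCAL_LENGTH_PX = 580.0     # approximate Kinect depth-camera focal length
    DEPTH_RANGE_M = (0.8, 3.0)  # assumed working distance range

    depth = cv2.imread("depth_frame.png", cv2.IMREAD_UNCHANGED)  # 16-bit depth in mm (assumed file)
    depth_m = depth.astype(np.float32) / 1000.0

    # Suppress points outside the working range, then convert to 8-bit for circle detection.
    depth_m[(depth_m < DEPTH_RANGE_M[0]) | (depth_m > DEPTH_RANGE_M[1])] = 0
    gray = cv2.normalize(depth_m, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    gray = cv2.medianBlur(gray, 5)  # noise filtration

    # Expected apparent radius in pixels: r = f * (D/2) / Z at the near and far limits.
    r_max = int(FOCAL_LENGTH_PX * BALL_DIAMETER_M / 2 / DEPTH_RANGE_M[0])
    r_min = int(FOCAL_LENGTH_PX * BALL_DIAMETER_M / 2 / DEPTH_RANGE_M[1])

    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=100, param2=30, minRadius=r_min, maxRadius=r_max)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            print(f"ball candidate at pixel ({x}, {y}), radius {r}px, depth {depth_m[y, x]:.2f} m")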

    Facial analysis with depth maps and deep learning

    Doctoral thesis in Web Science and Technology, in association with the Universidade de Trás-os-Montes e Alto Douro, presented to the Universidade Aberta. Collecting and analyzing multimodal sensor data of a human face in real time is an important problem in computer vision, with applications in medical analysis and monitoring, entertainment, and security. However, due to the exigent nature of the problem, there is a lack of affordable and easy-to-use systems with real-time annotation capability, 3D analysis, replay capability, and a frame rate capable of detecting facial patterns in working-behavior environments. In the context of an ongoing effort to develop tools to support the monitoring and evaluation of human affective state in working environments, this research investigates the applicability of a facial analysis approach to map and evaluate human facial patterns. Our objective consists in investigating a set of systems and techniques that make it possible to answer the question of how to use multimodal sensor data to obtain a classification system that identifies facial patterns. With that in mind, tools will be developed to implement a real-time system able to recognize facial patterns from 3D data. The challenge is to interpret this multimodal sensor data, classify it with deep learning algorithms, and fulfill the following requirements: annotation capability, 3D analysis, and replay capability. In addition, the system will be able to continuously enhance the output of the classification model through a training process, in order to improve and evaluate different patterns of the human face. FACE ANALYSYS, a tool developed in the context of this doctoral thesis, will be complemented by several applications to investigate the relations of various sensor data with human affective states. This work is useful for developing an appropriate visualization system for better insight into large amounts of behavioral data.
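
    A minimal sketch of the kind of deep-learning classifier such a system might apply to depth-map frames: a small convolutional network in PyTorch that maps a single-channel depth image to one of a few facial-pattern classes. The input size, number of classes, and architecture are illustrative assumptions rather than the thesis's actual model.

    # Hypothetical sketch: CNN classifier for facial patterns from single-channel depth maps.
    import torch
    import torch.nn as nn

    NUM_CLASSES = 4   # assumed number of facial-pattern classes

    class DepthFaceNet(nn.Module):
        def __init__(self, num_classes: int = NUM_CLASSES):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)              # (N, 64, 1, 1)
            return self.classifier(x.flatten(1))

    model = DepthFaceNet()
    depth_batch = torch.randn(8, 1, 96, 96)  # stand-in for 8 normalized 96x96 depth frames
    logits = model(depth_batch)
    print(logits.shape)                       # torch.Size([8, 4])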