
    Multimodal analysis of synchronization data from patients with dementia

    Little is known about the ability of people with dementia to synchronize bodily movements to music. The lack of non-intrusive tools that do not hinder patients, and the absence of appropriate analysis methods, may explain why such investigations remain challenging. This paper discusses the development of an analysis framework for processing sensorimotor synchronization data obtained from multiple measuring devices. The data were collected during an exploratory study, carried out at the University Hospital of Reims (France), involving 16 individuals with dementia. The study aimed to test new methods and measurement tools developed to investigate sensorimotor synchronization capacities in people with dementia. An analysis framework was established for extracting quantity-of-motion and synchronization parameters from the multimodal dataset composed of sensor, audio, and video data. A user-friendly monitoring tool and analysis framework were established and tested that hold potential to meet the needs of complex movement-data handling. The study also led to improvements in the robustness of the hardware and software. It provides a strong framework for future experiments involving people with dementia interacting with music.
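
    As a rough illustration of the kind of processing such a framework performs, the sketch below derives a per-frame quantity of motion by frame differencing and a mean movement-to-beat asynchrony. The threshold, function names, and toy data are assumptions for illustration, not the study's actual pipeline.

    import numpy as np

    def quantity_of_motion(frames, threshold=15):
        """Per-frame quantity of motion: the fraction of pixels whose intensity
        changes by more than `threshold` between consecutive grayscale frames.
        (Illustrative definition, not the study's exact measure.)"""
        frames = np.asarray(frames, dtype=np.int16)   # avoid uint8 wrap-around
        diffs = np.abs(np.diff(frames, axis=0))       # (n-1, H, W) differences
        return (diffs > threshold).mean(axis=(1, 2))  # motion per frame pair

    def mean_asynchrony(event_times, beat_times):
        """Mean signed offset in seconds between each movement event and its
        nearest musical beat; negative values mean the movement anticipates."""
        beats = np.asarray(beat_times)
        offsets = [t - beats[np.argmin(np.abs(beats - t))] for t in event_times]
        return float(np.mean(offsets))

    # Toy usage: three noisy 4x4 "frames" and taps around a 0.5 s beat grid.
    rng = np.random.default_rng(0)
    print(quantity_of_motion(rng.integers(0, 255, size=(3, 4, 4))))
    print(mean_asynchrony([0.48, 1.03, 1.51], beat_times=np.arange(0, 3, 0.5)))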

    Mirror mirror on the wall... an unobtrusive intelligent multisensory mirror for well-being status self-assessment and visualization

    A person’s well-being status is reflected in their face through a combination of facial expressions and physical signs. The SEMEOTICONS project translates the semeiotic code of the human face into measurements and computational descriptors that are automatically extracted from images, videos, and 3D scans of the face. SEMEOTICONS developed a multisensory platform in the form of a smart mirror to identify signs related to cardio-metabolic risk. The aim was to enable users to self-monitor their well-being status over time and to guide them in improving their lifestyle. Significant scientific and technological challenges were addressed to build the multisensory mirror, from touchless data acquisition to real-time processing and integration of multimodal data.
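
    As a loose sketch of how per-modality facial descriptors could be tracked over time, the snippet below z-scores hypothetical descriptors against a user's own baseline and combines them into a weighted trend index. The descriptor names and weights are invented for illustration and are not the SEMEOTICONS feature set or risk model.

    import numpy as np

    # Hypothetical descriptors and weights -- illustration only.
    WEIGHTS = {"skin_redness": 0.2, "periorbital_puffiness": 0.3, "face_roundness": 0.5}

    def trend_index(descriptors, baselines):
        """Weighted sum of facial descriptors z-scored against the user's own
        running baseline (mean, std), so the index reflects change over time."""
        score = 0.0
        for name, weight in WEIGHTS.items():
            mu, sigma = baselines[name]
            score += weight * (descriptors[name] - mu) / sigma
        return score

    baselines = {name: (0.50, 0.10) for name in WEIGHTS}
    today = {"skin_redness": 0.55, "periorbital_puffiness": 0.62, "face_roundness": 0.49}
    print(round(trend_index(today, baselines), 3))  # positive = above baseline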

    The Multimodal Tutor: Adaptive Feedback from Multimodal Experiences

    This doctoral thesis describes the journey of ideation, prototyping, and empirical testing of the Multimodal Tutor, a system designed to provide digital feedback that supports psychomotor skills acquisition through multimodal data capture during learning. The feedback is given in real time, with machine-driven assessment of the learner's task execution. The predictions are produced by supervised machine-learning models trained on human-annotated samples. The main contributions of this thesis are: a literature survey on multimodal data for learning, a conceptual model (the Multimodal Learning Analytics Model), a technological framework (the Multimodal Pipeline), a data annotation tool (the Visual Inspection Tool), and a case study in cardiopulmonary resuscitation training (the CPR Tutor). The CPR Tutor generates real-time, adaptive feedback using kinematic and myographic data and neural networks, as sketched below.
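
    The sketch below suggests, under stated assumptions, how a per-compression classifier in the CPR Tutor's general style could be trained on annotated samples and queried in real time. The feature schema and values are hypothetical, and a small scikit-learn feed-forward network stands in for the thesis's actual models.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical per-compression features (not the CPR Tutor's schema):
    # [compression depth (mm), rate (compressions/min), EMG envelope RMS].
    rng = np.random.default_rng(42)
    good = rng.normal([55, 110, 0.6], [3, 5, 0.05], size=(200, 3))
    poor = rng.normal([40, 90, 0.4], [5, 10, 0.08], size=(200, 3))
    X = np.vstack([good, poor])
    y = np.array([1] * 200 + [0] * 200)  # 1 = correct execution (annotated)

    # Small feed-forward network standing in for the trained models.
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, y)

    # A real-time loop would score each detected compression as it arrives.
    print(clf.predict([[52, 105, 0.58]]))  # likely [1]: execution looks correct
    print(clf.predict([[38, 85, 0.35]]))   # likely [0]: prompt corrective feedback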

    A deep learning approach to monitoring workers stress at office

    Identifying stress in people is not a trivial or straightforward task, as several factors are involved in detecting the presence or absence of stress. Since there are few tools on the market that companies can use, new models have been created and developed to detect stress. In this study, we propose developing a stress-detection application that uses deep-learning models to analyze images obtained in the workplace. The application will provide the results of these analyses to the company for use in occupational health management. The proposed solution uses deep-learning algorithms to create prediction models and analyze images. The new non-invasive application is designed to help detect stress and to educate people to manage their health conditions. The trained model achieved an F1 score of 79.9% on a binary stress/non-stress dataset with an imbalance ratio of 0.49.
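
    To make the reported figures concrete, the snippet below builds a synthetic binary label set with a 0.49 minority-to-majority imbalance ratio (one assumed reading of the paper's figure) and computes the F1 score on the stress class with scikit-learn; the data and error rate are invented.

    import numpy as np
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(1)
    y_true = np.array([1] * 490 + [0] * 1000)  # stress=1; 490/1000 = 0.49
    y_pred = y_true.copy()
    flip = rng.choice(len(y_true), size=250, replace=False)
    y_pred[flip] = 1 - y_pred[flip]            # inject synthetic prediction errors

    imbalance = y_true.sum() / (len(y_true) - y_true.sum())
    print(f"imbalance ratio = {imbalance:.2f}")                   # 0.49
    print(f"F1 (stress class) = {f1_score(y_true, y_pred):.3f}")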

    A Survey of Multimodal Information Fusion for Smart Healthcare: Mapping the Journey from Data to Wisdom

    Multimodal medical data fusion has emerged as a transformative approach in smart healthcare, enabling a comprehensive understanding of patient health and personalized treatment plans. In this paper, a journey from data to information to knowledge to wisdom (DIKW) is explored through multimodal fusion for smart healthcare. We present a comprehensive review of multimodal medical data fusion focused on the integration of various data modalities. The review explores different approaches, such as feature selection, rule-based systems, machine learning, deep learning, and natural language processing, for fusing and analyzing multimodal data. This paper also highlights the challenges associated with multimodal fusion in healthcare. By synthesizing the reviewed frameworks and theories, it proposes a generic framework for multimodal medical data fusion that aligns with the DIKW model. Moreover, it discusses future directions related to the four pillars of healthcare: predictive, preventive, personalized, and participatory approaches. The components of the comprehensive survey presented in this paper form the foundation for more successful implementation of multimodal fusion in smart healthcare. Our findings can guide researchers and practitioners in leveraging the power of multimodal fusion with state-of-the-art approaches to revolutionize healthcare and improve patient outcomes.
    Comment: This work has been submitted to Elsevier for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
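
    As a minimal sketch of one family of approaches the survey covers, the snippet below performs feature-level (early) fusion: each modality's feature vector is normalized and concatenated into a single joint representation for a downstream model. The modalities and values are toy stand-ins, not a framework from the paper.

    import numpy as np

    def early_fusion(modalities):
        """Feature-level (early) fusion: z-normalize each modality's feature
        vector, then concatenate them into one joint representation."""
        parts = []
        for x in modalities:
            x = np.asarray(x, dtype=float)
            parts.append((x - x.mean()) / (x.std() + 1e-8))
        return np.concatenate(parts)

    # Hypothetical patient record: vitals, lab panel, note embedding (toy sizes).
    vitals = [72.0, 118.0, 36.8]                        # HR, systolic BP, temp
    labs = [5.4, 1.1, 0.9, 140.0]                       # illustrative lab values
    note_vec = np.random.default_rng(0).normal(size=8)  # text-embedding stand-in

    fused = early_fusion([vitals, labs, note_vec])
    print(fused.shape)  # (15,) -- input to a downstream risk or fusion model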