
    Designing a gamified social platform for people living with dementia and their live-in family caregivers

    This paper presents a gamified social platform for people living with dementia and their live-in family caregivers, integrating a broader diagnostic approach with interactive interventions. The CAREGIVERSPRO-MMD (C-MMD) platform constitutes a support tool for the patient and the informal caregiver - also referred to as the dyad - that strengthens self-care and builds community capacity and engagement at the point of care. The platform is designed to improve social collaboration, adherence to treatment guidelines through gamification, recognition of progress indicators and measures to guide the management of patients with dementia, and strategies and tools to improve treatment interventions and medication adherence. Moreover, particular attention was paid to the guidelines, considerations, and user requirements for designing the platform following a User-Centered Design (UCD) approach. The design is based on a deep understanding of users, tasks, and contexts in order to improve platform usability and to provide adaptive, intuitive, and highly accessible user interfaces. The architecture and services of the C-MMD platform are presented, with particular focus on the gamification aspects. © 2018 Association for Computing Machinery.

    Two-Dimensional Convolutional Recurrent Neural Networks for Speech Activity Detection

    Speech Activity Detection (SAD) plays an important role in mobile communications and automatic speech recognition (ASR). Developing efficient SAD systems for real-world applications is a challenging task due to the presence of noise. We propose a new approach to SAD in which we treat it as a two-dimensional, multilabel image classification problem. To classify the audio segments, we compute their Short-Time Fourier Transform spectrograms and classify them with a Convolutional Recurrent Neural Network (CRNN), an architecture traditionally used in image recognition. Our CRNN uses a sigmoid activation function, max-pooling in the frequency domain, and a convolutional operation as a moving-average filter to remove misclassified spikes. On the development set of Task 1 of the 2019 Fearless Steps Challenge, our system achieved a decision cost function (DCF) of 2.89%, a 66.4% improvement over the baseline. Moreover, it achieved a DCF score of 3.318% on the evaluation dataset of the challenge, ranking first among all submissions.
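    As a rough illustration of the idea, the sketch below builds a small 2D CRNN in PyTorch (the abstract does not name a framework) that pools only along the frequency axis, preserving time resolution, and emits per-frame sigmoid speech scores. The layer sizes are assumptions, and the moving-average smoothing step mentioned above is omitted.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Minimal 2D convolutional recurrent network for frame-level
    speech activity detection on STFT magnitude spectrograms."""
    def __init__(self, n_freq_bins=256, hidden=64):
        super().__init__()
        # 2D convolutions over (freq, time); max-pooling only along
        # the frequency axis so the time resolution is preserved.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(4, 1)),    # pool frequency only
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(4, 1)),
        )
        freq_after_pool = n_freq_bins // 16      # two 4x frequency pools
        self.rnn = nn.GRU(64 * freq_after_pool, hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)     # sigmoid speech score per frame

    def forward(self, spec):                     # spec: (batch, 1, freq, time)
        x = self.conv(spec)                      # (batch, ch, freq', time)
        x = x.permute(0, 3, 1, 2).flatten(2)     # (batch, time, ch * freq')
        x, _ = self.rnn(x)
        return torch.sigmoid(self.head(x)).squeeze(-1)  # (batch, time)

scores = CRNN()(torch.randn(2, 1, 256, 400))     # per-frame speech probabilities
```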

    Comparing CNN and Human Crafted Features for Human Activity Recognition

    Deep learning techniques such as Convolutional Neural Networks (CNNs) have shown good results in activity recognition. One of the advantages of these methods is their ability to generate features automatically. This ability greatly simplifies feature extraction, a task that usually requires domain-specific knowledge, especially with big data, where purely data-driven approaches can lead to anti-patterns. Despite this advantage, very little work has analyzed the quality of the extracted features, and more specifically how model architecture and parameters affect the ability of those features to separate activity classes in the final feature space. This work focuses on identifying the optimal parameters for recognizing simple activities, applying the approach to signals from both inertial and audio sensors. The paper provides the following contributions: (i) a comparison of automatically extracted CNN features with gold-standard Human Crafted Features (HCF), and (ii) a comprehensive analysis of how architecture and model parameters affect the separation of target classes in the feature space. Results are evaluated using publicly available datasets. In particular, we achieved a 93.38% F-Score on the UCI-HAR dataset, using 1D CNNs with 3 convolutional layers and a kernel size of 32, and a 90.5% F-Score on the DCASE 2017 development dataset, simplified to three classes (indoor, outdoor, and vehicle), using 2D CNNs with 2 convolutional layers and a 2x2 kernel size.
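    For illustration, a 1D CNN of the reported shape (three convolutional layers with a kernel size of 32) might look like the PyTorch sketch below for UCI-HAR-style windows of 128 samples by 9 inertial channels; the filter counts and global pooling are assumptions, as the abstract does not specify them.

```python
import torch
import torch.nn as nn

class HAR1DCNN(nn.Module):
    """1D CNN feature extractor and classifier for windowed inertial
    signals, e.g. UCI-HAR windows of 128 samples x 9 channels."""
    def __init__(self, in_channels=9, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            # three convolutional layers with a kernel size of 32,
            # matching the configuration reported in the abstract
            nn.Conv1d(in_channels, 64, kernel_size=32, padding=16), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=32, padding=16), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=32, padding=16), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),             # global pooling -> feature vector
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                        # x: (batch, 9, 128)
        feats = self.features(x).squeeze(-1)     # (batch, 64), the learned features
        return self.classifier(feats)

logits = HAR1DCNN()(torch.randn(8, 9, 128))      # one logit vector per window
```

    The 64-dimensional vector before the final linear layer is the automatically extracted feature space whose class separation the paper analyzes against HCF.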

    Audio Content Analysis for Unobtrusive Event Detection in Smart Homes

    Environmental sound signals are multi-source, heterogeneous, and time-varying. Many systems have been proposed to process such signals for event detection in ambient assisted living applications. Typically, these systems use feature extraction, selection, and classification. However, despite major advances, several important questions remain unanswered, especially in real-world settings. This paper contributes to the body of knowledge in the field by addressing the following problems for ambient sounds recorded in various real-world kitchen environments: 1) which features and which classifiers are most suitable in the presence of background noise? 2) what is the effect of signal duration on recognition accuracy? 3) how do the signal-to-noise ratio and the distance between the microphone and the audio source affect recognition accuracy in an environment in which the system was not trained? We show that for systems using traditional classifiers, it is beneficial to combine gammatone frequency cepstral coefficients and discrete wavelet transform coefficients and to use a gradient boosting classifier. For systems based on deep learning, we consider 1D and 2D Convolutional Neural Networks (CNNs) using mel-spectrogram energies and mel-spectrogram images as inputs, respectively, and show that the 2D CNN outperforms the 1D CNN. We obtained competitive classification results for two such systems. The first, which uses a gradient boosting classifier, achieved an F1-Score of 90.2% and a recognition accuracy of 91.7%. The second, which uses a 2D CNN with mel-spectrogram images, achieved an F1-Score of 92.7% and a recognition accuracy of 96%.
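    A minimal sketch of the traditional pipeline is given below, assuming PyWavelets for the DWT statistics and scikit-learn's GradientBoostingClassifier. Since the abstract does not name a gammatone implementation, random placeholder vectors stand in for the GFCC features, and the audio clips and labels are synthetic.

```python
import numpy as np
import pywt                                      # PyWavelets, for DWT coefficients
from sklearn.ensemble import GradientBoostingClassifier

def dwt_features(signal, wavelet="db4", level=4):
    """Mean/std/energy summary of each DWT sub-band of one clip."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([f(c) for c in coeffs
                     for f in (np.mean, np.std, lambda c: np.sum(c ** 2))])

# Synthetic stand-ins: 100 one-second clips at 16 kHz and, in place of
# real gammatone frequency cepstral coefficients, random 13-dim vectors.
rng = np.random.default_rng(0)
clips = rng.standard_normal((100, 16000))
gfccs = rng.standard_normal((100, 13))           # placeholder for GFCC extraction
labels = rng.integers(0, 4, size=100)            # four hypothetical sound classes

# Combine GFCC and DWT features per clip, then fit the boosted ensemble.
X = np.hstack([gfccs, np.array([dwt_features(c) for c in clips])])
clf = GradientBoostingClassifier().fit(X, labels)
print(clf.predict(X[:5]))
```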

    Image-based Text Classification using 2D Convolutional Neural Networks

    We propose a new approach to text classification in which we consider the input text as an image and apply 2D Convolutional Neural Networks to learn the local and global semantics of the sentences from the variations in the visual patterns of words. Our approach demonstrates that it is possible to obtain semantically meaningful features from images containing text without using optical character recognition or the sequential processing pipelines that traditional natural language processing algorithms require. To validate our approach, we present results for two applications: text classification and dialog modeling. Using a 2D Convolutional Neural Network, we were able to outperform the state-of-the-art accuracy results for a Chinese text classification task and achieved promising results for seven English text classification tasks. Furthermore, our approach outperformed memory networks without match types when using out-of-vocabulary entities from Task 4 of the bAbI dialog dataset.
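    To make the pipeline concrete, here is a small sketch that rasterizes a sentence with PIL and feeds it to a 2D CNN in PyTorch; the image size, font, and layer shapes are illustrative choices, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw

def text_to_image(text, size=(256, 32)):
    """Render a sentence as a grayscale image, the model's only input;
    no tokenization or OCR is involved."""
    img = Image.new("L", size, color=255)
    ImageDraw.Draw(img).text((2, 10), text, fill=0)   # default bitmap font
    return torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0)

class TextImageCNN(nn.Module):
    """Tiny 2D CNN classifying rendered text images (1 x 32 x 256)."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 64, n_classes),   # 32x256 halved twice -> 8x64
        )
    def forward(self, x):                        # x: (batch, 1, 32, 256)
        return self.net(x)

batch = torch.stack([text_to_image("a rendered sentence")]).unsqueeze(1)
logits = TextImageCNN()(batch)                   # class scores from pixels alone
```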

    Two examples of online eHealth platforms for supporting people living with cognitive impairments and their caregivers

    This paper compares two methodological approaches derived from the EU Horizon 2020 funded projects CAREGIVERSPRO-MMD (C-MMD) and ICT4LIFE. Both projects were initiated in 2016 with the ambition to provide new integrated care services to people living with cognitive impairments, including dementia, Alzheimer's disease, and Parkinson's disease, as well as to their home caregivers, towards a long-term increase in quality of life and autonomy at home. The disparities and similarities of the non-pharmacological interventions introduced by the two projects to foster treatment adherence are outlined. Both projects have developed software solutions, including social platforms, notifications, Serious Games, user monitoring, and support services, aimed at developing the concepts of self-care, active patients, and integrated care. Despite their differences, both projects can benefit from knowledge and technology exchange, sharing of pilot results, and, where possible, exchange of users in the near future.

    Audio-Based Event Detection at Different SNR Settings Using Two-Dimensional Spectrogram Magnitude Representations

    Audio-based event detection poses a number of challenges that are not encountered in other fields, such as image detection. Challenges such as ambient noise, low Signal-to-Noise Ratio (SNR), and microphone distance are not yet fully understood. If multimodal approaches are to improve across a range of fields of interest, audio analysis will have to play an integral part. Event recognition in autonomous vehicles (AVs) is one such field at a nascent stage, which can rely solely on audio or use audio as part of a multimodal approach. This manuscript presents an extensive analysis comparing different magnitude representations of the raw audio. The data on which the analysis is carried out is part of the publicly available MIVIA Audio Events dataset. Single-channel Short-Time Fourier Transform (STFT), mel-scale, and Mel-Frequency Cepstral Coefficient (MFCC) spectrogram representations are used. Furthermore, methods for aggregating the aforementioned spectrogram representations are examined: concatenating the features versus stacking them as separate channels. The effect of the SNR on recognition accuracy, and the generalization of the proposed methods to datasets both seen and unseen during training, are studied and reported.
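    The sketch below illustrates the two aggregation options on a single clip, using librosa as an assumed extraction tool (the abstract does not name one); the bin counts are chosen only so the three representations share a shape.

```python
import numpy as np
import librosa

# Any mono clip will do; librosa ships a short example recording.
y, sr = librosa.load(librosa.ex("trumpet"), sr=32000)

# Three magnitude representations of the same clip, each (128, frames).
# The STFT is truncated to its first 128 bins purely so the shapes match.
stft = np.abs(librosa.stft(y, n_fft=1024, hop_length=512))[:128]
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128, hop_length=512)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=128, hop_length=512)

# Aggregation option 1: concatenate along the feature axis,
# giving one tall single-channel image of shape (384, frames).
concatenated = np.concatenate([stft, mel, mfcc], axis=0)

# Aggregation option 2: stack as separate channels, giving a
# 3-channel image of shape (3, 128, frames), like an RGB input to a CNN.
stacked = np.stack([stft, mel, mfcc], axis=0)

print(concatenated.shape, stacked.shape)
```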