    A user-centered approach for detecting emotions with low-cost sensors

    Abstract: Detecting emotions is very useful in many fields, from healthcare to human-computer interaction. In this paper, we propose an iterative user-centered methodology for supporting the development of an emotion detection system based on low-cost sensors. Artificial Intelligence techniques have been adopted for emotion classification: different kinds of Machine Learning classifiers have been experimentally trained on the users' biometric data, such as heart rate, movement and audio. The system has been developed in two iterations and, at the end of each, the performance of the classifiers (MLP, CNN, LSTM, Bidirectional LSTM and Decision Tree) has been compared. After the experiment, the SAM questionnaire was administered to evaluate the user's affective state when using the system. In the first experiment we gathered data from 47 participants; in the second, an improved version of the system was trained and validated by 107 people. The emotional analysis conducted at the end of each iteration suggests that reducing the device invasiveness may affect user perceptions and also improve classification performance.
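
    As an illustration of the classifier comparison described above, the following minimal sketch trains two of the mentioned classifier families (Decision Tree and MLP) with scikit-learn on synthetic per-window features standing in for the real sensor data. The feature layout, number of emotion classes and classifier settings are assumptions, not the authors' pipeline.

```python
# Minimal, illustrative comparison of two classifier families on windowed
# biometric features. The feature layout (heart rate, movement and audio
# statistics per window) and the 3-class label set are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-window features extracted from the sensors:
# [hr_mean, hr_std, movement_mean, movement_std, audio_energy, audio_zcr]
X = rng.normal(size=(500, 6))
y = rng.integers(0, 3, size=500)  # e.g. 3 emotion classes (assumption)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

for name, clf in [
    ("Decision Tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("MLP", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=0)),
]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy = {acc:.2f}")
```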

    A mobile augmented reality application for supporting real-time skin lesion analysis based on deep learning

    Abstract: Melanoma is considered the deadliest skin cancer, and when it reaches an advanced state it is difficult to treat. Diagnoses are performed visually by dermatologists through naked-eye observation. This paper proposes an augmented reality smartphone application for supporting the dermatologist in the real-time analysis of a skin lesion. The app augments the camera view with information related to the lesion features generally measured by the dermatologist when formulating the diagnosis. The lesion is also classified by a deep learning approach to identify melanoma. The real-time process adopted for generating the augmented content is described, its performance is evaluated, and a user study is conducted. Results revealed that the real-time process can be entirely executed on the smartphone and that the support provided is well judged by the target users.
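
    As an illustration of the on-device classification step mentioned above, the sketch below builds a small convolutional classifier that maps a cropped camera frame to a melanoma probability. The MobileNetV2 backbone, 224x224 input size and binary sigmoid output are assumptions made for the example; the abstract does not specify the actual network.

```python
# Illustrative on-device lesion classifier: a CNN mapping a cropped camera
# frame to a melanoma probability. Architecture choices are assumptions.
import numpy as np
import tensorflow as tf

def build_lesion_classifier(input_size=224):
    backbone = tf.keras.applications.MobileNetV2(
        input_shape=(input_size, input_size, 3),
        include_top=False,
        weights=None,  # in practice, pretrained weights would be used
    )
    return tf.keras.Sequential([
        backbone,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(melanoma)
    ])

model = build_lesion_classifier()

# Dummy frame standing in for the cropped lesion region from the camera view.
frame = np.random.rand(1, 224, 224, 3).astype("float32")
prob = float(model.predict(frame, verbose=0)[0, 0])
print(f"melanoma probability (untrained model, illustrative only): {prob:.2f}")
```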

    Supporting Elderly People by Ad Hoc Generated Mobile Applications Based on Vocal Interaction

    Mobile devices can be exploited to enable people to interact with Internet of Things (IoT) services. The MicroApp Generator [1] is a service-composition tool for supporting the generation of mobile applications directly on the mobile device. The user interacts with the generated app through traditional touch-based interaction. This kind of interaction is often not suitable for elderly people and people with special needs who cannot see or touch the screen. In this paper, we extend the MicroApp Generator with an interaction approach that enables a user to operate the generated app by voice alone, which can be very useful for letting people with special needs live at home. To this aim, once the mobile app has been generated and executed, the system analyses and describes the user interface, listens to the user's speech and performs the associated actions. A preliminary analysis has been conducted to assess the user experience of the proposed approach with a sample of elderly users, using a questionnaire as the research instrument.
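
    A minimal sketch of the describe-listen-act loop outlined above follows. The speak and listen functions are placeholders for real text-to-speech and speech-recognition engines, and the widget labels and actions are invented for illustration; this is not the MicroApp Generator's actual implementation.

```python
# Illustrative vocal interaction loop: describe the interface aloud, listen
# for a command, and run the matching action. TTS/ASR are placeholders.
def speak(text):
    # Placeholder for a text-to-speech engine on the device.
    print(f"[TTS] {text}")

def listen():
    # Placeholder for an automatic speech recognition result.
    return input("[ASR] say a command> ").strip().lower()

def describe_interface(widgets):
    speak("The screen contains: " + ", ".join(w["label"] for w in widgets))

def run_vocal_loop(widgets):
    describe_interface(widgets)
    while True:
        command = listen()
        if command in ("quit", "exit"):
            speak("Closing the application.")
            break
        matches = [w for w in widgets if command in w["label"].lower()]
        if matches:
            speak(f"Activating {matches[0]['label']}.")
            matches[0]["action"]()
        else:
            speak("Sorry, I did not understand. Please repeat.")

if __name__ == "__main__":
    # Hypothetical widgets for a generated micro-app.
    demo_widgets = [
        {"label": "Turn on the lights", "action": lambda: print("lights on")},
        {"label": "Call my caregiver", "action": lambda: print("calling...")},
    ]
    run_vocal_loop(demo_widgets)
```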

    An augmented reality application to gather participant feedback during a meeting

    The new features offered by top-of-the-range mobile devices can be exploited to support face-to-face collaboration, providing collaborative interfaces that go beyond the "being there" perception. In particular, Augmented Reality interfaces appear to be a natural medium for Computer Supported Collaborative Work. In this paper, we present an Augmented Reality system, named Augmented Reality Mind Scanner System (ARMS), aimed at supporting the speaker of a meeting in receiving feedback from the audience concerning their agreement with, and the clarity of, the presentation. The ARMS system is one phase of deeper ongoing research into applying Augmented Reality technology on mobile devices, and it anticipates future technological developments that will bring more powerful devices to the consumer market.
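
    Purely as an illustration of how audience feedback on agreement and clarity might be aggregated for display to the speaker, the following sketch defines a minimal data model and a summary function. Field names and score scales are assumptions; the abstract does not describe ARMS's data model.

```python
# Illustrative aggregation of audience feedback into the figures a speaker
# might see. Score scales and field names are assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Feedback:
    participant: str
    agreement: int   # e.g. 1 (disagree) .. 5 (agree)
    clarity: int     # e.g. 1 (unclear)  .. 5 (clear)

def summarize(feedback):
    """Aggregate individual votes into per-metric averages."""
    return {
        "responses": len(feedback),
        "avg_agreement": mean(f.agreement for f in feedback),
        "avg_clarity": mean(f.clarity for f in feedback),
    }

if __name__ == "__main__":
    votes = [
        Feedback("p1", agreement=4, clarity=5),
        Feedback("p2", agreement=3, clarity=2),
        Feedback("p3", agreement=5, clarity=4),
    ]
    print(summarize(votes))
```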