
    Dheergayu: Clinical Depression Monitoring Assistant

    Depression is one of the most common mental health disorders in the world. It affects not only patients but also their families and relatives, and if not properly treated it can lead people into hazardous situations. Nonetheless, existing clinical diagnosis tools for monitoring illness trajectory are inadequate. Traditionally, psychiatrists diagnose depression levels through one-to-one assessment interviews, but these clinic-centered services pose several operational challenges. To have a clinical depressive disorder monitored, patients must travel regularly to a clinical center within its limited operating hours, and the procedures are highly resource-intensive because they require skilled clinicians and laboratories. To address these issues, we propose using personal and ubiquitous sensing technologies, such as fitness trackers and smartphones, to monitor human vitals in an unobtrusive manner.

    Predicting depression using deep learning and ensemble algorithms on raw twitter data

    Social networking and microblogging sites such as Twitter are widespread among all generations nowadays; people use them to connect and to share their feelings, emotions, pursuits, etc. Depression, one of the most common mental disorders, is an acute state of sadness in which a person loses interest in all activities. If not treated immediately, it can have dire consequences, including death. In this era of the virtual world, people are more comfortable expressing their emotions on such sites, which have become part and parcel of everyday life. The research put forth here therefore employs machine learning classifiers on a Twitter data set to detect whether or not a person's tweet shows signs of depression.
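    The basic text-classification step this abstract describes can be illustrated with a minimal sketch. The paper itself uses deep learning and ensemble algorithms; the plain bag-of-words Naive Bayes below, trained on a few invented toy tweets, is entirely our illustrative substitute for showing how tweets map to depression/non-depression labels:

    ```python
    import math
    from collections import Counter

    def tokenize(text):
        return text.lower().split()

    class TweetNaiveBayes:
        """Multinomial Naive Bayes with Laplace smoothing over tweet tokens."""

        def fit(self, tweets, labels):
            self.class_counts = Counter(labels)
            self.word_counts = {label: Counter() for label in self.class_counts}
            self.vocab = set()
            for text, label in zip(tweets, labels):
                for tok in tokenize(text):
                    self.word_counts[label][tok] += 1
                    self.vocab.add(tok)
            return self

        def predict(self, text):
            total = sum(self.class_counts.values())
            scores = {}
            for label, n in self.class_counts.items():
                wc = self.word_counts[label]
                denom = sum(wc.values()) + len(self.vocab)
                score = math.log(n / total)  # class prior
                for tok in tokenize(text):
                    score += math.log((wc[tok] + 1) / denom)  # Laplace-smoothed likelihood
                scores[label] = score
            return max(scores, key=scores.get)

    # Tiny invented training set; a real study would use thousands of labelled tweets.
    clf = TweetNaiveBayes().fit(
        ["i feel so hopeless and sad",
         "nothing matters anymore i am worthless",
         "great day out with friends so happy",
         "loving this sunny weather today"],
        ["depressed", "depressed", "not_depressed", "not_depressed"],
    )
    ```

    A deep or ensemble model would replace the token-count likelihoods with learned representations, but the supervised tweet-to-label framing is the same.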

    Logging Stress and Anxiety Using a Gamified Mobile-based EMA Application, and Emotion Recognition Using a Personalized Machine Learning Approach

    According to the American Psychological Association (APA), more than 9 in 10 (94 percent of) adults believe that stress can contribute to the development of major health problems, such as heart disease, depression, and obesity. Due to the subjective nature of stress and anxiety, it has been challenging to measure these psychological issues accurately by relying on objective means alone. In recent years, researchers have increasingly utilized computer vision techniques and machine learning algorithms to develop scalable and accessible solutions for remote mental health monitoring via web and mobile applications. To further enhance accuracy in the field of digital health and precision diagnostics, there is a need for personalized machine-learning approaches that recognize mental states based on individual characteristics rather than relying solely on general-purpose solutions. This thesis focuses on experiments aimed at recognizing and assessing levels of stress and anxiety in participants. In the initial phase of the study, a broadly applicable mobile application (compatible with both Android and iPhone platforms), which we call STAND, is introduced. This application serves the purpose of Ecological Momentary Assessment (EMA). Participants receive daily notifications through this smartphone-based app, which redirects them to a screen consisting of three components: a question that prompts participants to indicate their current levels of stress and anxiety, a rating scale ranging from 1 to 10 for quantifying their response, and the ability to capture a selfie. The responses to the stress and anxiety questions, along with the corresponding selfie photographs, are then analyzed on an individual basis.
This analysis explores the relationships between self-reported stress and anxiety levels and potential facial expressions indicative of stress and anxiety, eye features such as pupil-size variation and eye closure, and specific action units (AUs) observed in the frames over time. In addition to its primary functions, the mobile app also gathers sensor data, including accelerometer and gyroscope readings, on a daily basis; this data holds potential for further stress- and anxiety-related analysis. Furthermore, apart from capturing selfie photographs, participants have the option to upload video recordings of themselves while engaging in two neuropsychological games. These recorded videos are then analyzed to extract features that can be used for binary classification of stress and anxiety (i.e., stress and anxiety recognition). The participants selected for this phase will be students aged between 18 and 38 who have received recent clinical diagnoses indicating specific stress and anxiety levels. To enhance user engagement in the intervention, gamified elements - an emerging trend for influencing user behavior and lifestyle - have been utilized. Incorporating gamified elements into non-game contexts (e.g., health-related ones) has gained overwhelming popularity in recent years, making interventions more delightful, engaging, and motivating. In the subsequent phase of this research, we conducted an AI experiment employing a personalized machine learning approach to perform emotion recognition on an established dataset called Emognition. This experiment served as a simulation of the future analysis that will be conducted as part of a more comprehensive study focusing on stress and anxiety recognition.
The outcomes of the emotion recognition experiment highlight the effectiveness of personalized machine learning techniques and bear significance for future diagnostic endeavors. For training, we selected three models: KNN, Random Forest, and MLP. The preliminary accuracies for these models were 93%, 95%, and 87%, respectively.
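The "personalized" framing above means fitting one model per participant rather than one model for everyone. As a rough illustration, the sketch below trains a k-nearest-neighbour classifier (one of the three model families named) on a single hypothetical participant's feature vectors; the two-dimensional features and labels are invented stand-ins for the real extracted facial/AU features:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training samples."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical per-participant training data: (feature_vector, label).
# In the study the features would be e.g. normalised AU intensities for one subject;
# a separate model like this would be fit for each participant.
participant_model = [
    ((0.90, 0.80), "stressed"), ((0.80, 0.70), "stressed"), ((0.70, 0.90), "stressed"),
    ((0.10, 0.20), "calm"), ((0.20, 0.10), "calm"), ((0.15, 0.25), "calm"),
]
```

Because the neighbour set is restricted to one subject's own data, the decision boundary adapts to that individual's baseline expressions, which is the stated motivation for the personalized approach.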

    Training Effects of Adaptive Emotive Responses From Animated Agents in Simulated Environments

    Humans are distinct from machines in their capacity to emote, stimulate, and express emotions. Because emotions play such an important role in human interactions, human-like agents used in pedagogical roles for simulation-based training should properly reflect emotions. Currently, research concerning the development of this type of agent focuses on basic agent interface characteristics, as well as character-building qualities. However, human-like agents should provide emotion-like qualities that are clearly expressed, properly synchronized, and that simulate complex, real-time interactions through adaptive emotion systems. The research conducted for this dissertation was a quantitative investigation using a 3 (within) x 2 (between) x 3 (within) factorial design. A total of 56 paid participants consented to complete the study. Independent variables included emotion intensity (i.e., low, moderate, and high emotion), level of expertise (novice participant versus experienced participant), and number of trials. Dependent measures included visual attention, emotional response towards the animated agents, simulation performance score, and learners' perception of the pedagogical agent persona while participants interacted with a pain assessment and management simulation. While no relationships were found between the levels of emotion intensity portrayed by the animated agents and the participants' visual attention, emotional response towards the animated agent, or simulation performance score, there were significant relationships between the participants' level of expertise and their visual attention, emotional responses, and performance outcomes. The results indicated that nursing students had higher visual attention during their interaction with the animated agents. Additionally, nursing students expressed more neutral facial expressions, whereas experienced nurses expressed more emotional facial expressions towards the animated agents.
The simulation performance scores indicated that nursing students obtained higher scores on the pain assessment and management task than experienced nurses. Both groups of participants had a positive perception of the animated agents' persona.

    EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and Their Applications.

    Brain-computer interfaces (BCIs) enhance the capability of human brain activity to interact with the environment. Recent advancements in technology and machine learning algorithms have increased interest in electroencephalographic (EEG)-based BCI applications. EEG-based intelligent BCI systems can continuously monitor fluctuations in human cognitive state during monotonous tasks, which benefits both people in need of healthcare support and researchers across domains. In this review, we survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, filling a gap in systematic summaries of the past five years. Specifically, we first review the current status of BCIs and of the signal sensing technologies used to collect reliable EEG signals. Then, we present state-of-the-art computational intelligence techniques, including fuzzy models and transfer learning within machine learning and deep learning algorithms, used to detect, monitor, and maintain human cognitive states and task performance in prevalent applications. Finally, we present several innovative BCI-inspired healthcare applications and discuss future research directions in EEG-based BCI research.
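    A common first step in the cognitive-state-monitoring pipelines this survey covers is extracting spectral band-power features from windowed EEG (e.g., alpha-band power, often associated with relaxed wakefulness). The sketch below is our illustrative sketch only, with an assumed sampling rate and a synthetic 10 Hz signal standing in for a real EEG channel; it computes band power via a naive DFT:

    ```python
    import math

    def band_power(signal, fs, f_lo, f_hi):
        """Total power of `signal` in the [f_lo, f_hi] Hz band, via a naive DFT.

        O(n^2) on purpose for clarity; real pipelines would use an FFT/Welch PSD.
        """
        n = len(signal)
        power = 0.0
        for k in range(1, n // 2):
            freq = k * fs / n
            if f_lo <= freq <= f_hi:
                re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
                im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
                power += (re * re + im * im) / (n * n)
        return power

    fs = 128  # assumed sampling rate (Hz)
    # Synthetic 2-second window: a pure 10 Hz oscillation mimicking alpha activity.
    window = [math.sin(2 * math.pi * 10 * t / fs) for t in range(2 * fs)]
    alpha = band_power(window, fs, 8.0, 12.0)   # alpha band
    beta = band_power(window, fs, 13.0, 30.0)   # beta band
    ```

    Feeding such per-band features into the fuzzy, transfer-learning, or deep models surveyed here is what turns raw EEG into a cognitive-state classification problem.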

    Advances in Human Factors in Wearable Technologies and Game Design

