
    Pragmatic Evaluation of Health Monitoring & Analysis Models from an Empirical Perspective

    Designing health monitoring and analysis models requires implementing and deploying several linked modules that can perform real-time analysis of patient datasets and issue recommendations. These datasets include, but are not limited to, blood test results, computed tomography (CT) scans, MRI scans, PET scans, and other imaging tests. They are processed with a combination of signal processing and image processing methods, covering data collection, pre-processing, feature extraction and selection, classification, and context-specific post-processing. Researchers have put forward a variety of machine learning (ML) and deep learning (DL) techniques to carry out these tasks, which enable high-accuracy categorization of these datasets. However, these models differ in their internal operational features and in their quantitative and qualitative performance indicators. They also exhibit different functional subtleties, contextual benefits, application-specific constraints, and deployment-specific future research directions. This wide range of performance makes it difficult for researchers to pinpoint models that perform well for their application-specific use cases. To reduce this uncertainty, this paper presents a review of several health monitoring and analysis models in terms of their internal operational features and performance measurements. Based on this discussion, readers will be able to recognise models that are appropriate for their application-specific use cases. Convolutional Neural Networks (CNNs), Masked Region CNN (MRCNN), Recurrent Neural Networks (RNNs), Q-Learning, and reinforcement learning models were shown to have greater analytical performance than other models, and are hence suitable for clinical use cases; however, their increased complexity and higher implementation costs result in worse scaling performance. To analyse such trade-offs, this paper compares the evaluated models in terms of accuracy, computational latency, deployment complexity, scalability, and deployment cost metrics. This comparison will help users choose the best models for their performance-specific use cases. Finally, a new Health Monitoring Metric (HMM), which integrates multiple performance indicators to identify the best-performing models under various real-time patient settings, is reviewed to further ease model selection in real-time scenarios.
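    The abstract does not reproduce the exact formulation of the HMM, so the following is a hypothetical sketch of how such a composite metric could fuse the five compared indicators (accuracy, latency, deployment complexity, scalability, cost) into a single model-selection score. The weights, normalization, and latency budget are illustrative assumptions, not the paper's definition.

```python
# Hypothetical composite model-selection score in the spirit of the reviewed
# Health Monitoring Metric (HMM). Weights and fusion rule are assumptions
# for illustration; the paper's actual formulation may differ.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    accuracy: float           # fraction in [0, 1], higher is better
    latency_ms: float         # computational latency, lower is better
    deploy_complexity: float  # normalized to [0, 1], lower is better
    scalability: float        # normalized to [0, 1], higher is better
    cost: float               # normalized deployment cost in [0, 1], lower is better

def hmm_score(m: ModelProfile, w=(0.35, 0.20, 0.15, 0.15, 0.15),
              latency_budget_ms=100.0) -> float:
    """Weighted fusion: benefit metrics enter directly, cost metrics inverted."""
    latency_term = max(0.0, 1.0 - m.latency_ms / latency_budget_ms)
    terms = (m.accuracy, latency_term, 1.0 - m.deploy_complexity,
             m.scalability, 1.0 - m.cost)
    return sum(wi * ti for wi, ti in zip(w, terms))

# Toy comparison: pick the candidate with the highest composite score.
candidates = [
    ModelProfile("CNN", 0.94, 40.0, 0.7, 0.5, 0.6),
    ModelProfile("RNN", 0.91, 25.0, 0.6, 0.6, 0.5),
]
best = max(candidates, key=hmm_score)
print(best.name, round(hmm_score(best), 3))
```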

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications

    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential to enable remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, (v) activity and behaviour recognition, (vi) activity and personal assistance, (vii) gesture recognition, (viii) fall detection and prevention, (ix) mobility assessment and frailty recognition, and (x) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake in real-world settings of AAL technologies. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.
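    The report notes that video-based applications can estimate vital parameters such as heart rate from a camera alone. As a concrete illustration of the underlying idea, below is a minimal sketch of the classic remote-photoplethysmography (rPPG) approach: average the green channel over a skin region per frame, band-pass the resulting trace to the cardiac band, and read the heart rate off the dominant spectral peak. The function name and the assumption that a skin ROI has already been extracted by an upstream face detector are illustrative, not taken from the report.

```python
# Minimal sketch of camera-based heart-rate estimation (remote PPG).
# Frame data and the face/skin ROI are assumed to come from an upstream detector.
import numpy as np
from scipy.signal import butter, filtfilt

def heart_rate_bpm(green_means: np.ndarray, fps: float) -> float:
    """green_means: 1-D array, mean green intensity of the skin ROI per frame."""
    x = green_means - green_means.mean()
    # Band-pass to the plausible cardiac band (0.7-3.5 Hz, i.e. 42-210 bpm).
    b, a = butter(3, [0.7 / (fps / 2), 3.5 / (fps / 2)], btype="band")
    x = filtfilt(b, a, x)
    # Locate the dominant spectral peak inside the cardiac band.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.5)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```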

    A survey of wearable biometric recognition systems

    The growing popularity of wearable devices is leading to new ways to interact with the environment, with other smart devices, and with other people. Wearables equipped with an array of sensors are able to capture the owner's physiological and behavioural traits, and are thus well suited for biometric authentication to control other devices or access digital services. However, wearable biometrics have substantial differences from traditional biometrics for computer systems, such as fingerprints, eye features, or voice. In this article, we discuss these differences and analyse how researchers are approaching the wearable biometrics field. We review and provide a categorization of wearable sensors useful for capturing biometric signals. We analyse the computational cost of the different signal processing techniques, an important practical factor in constrained devices such as wearables. Finally, we review and classify the most recent proposals in the field of wearable biometrics in terms of the structure of the biometric system proposed, their experimental setup, and their results. We also present a critique of experimental issues such as evaluation and feasibility aspects, and offer some final thoughts on research directions that need attention in future work. This work was partially supported by the MINECO grant TIN2013-46469-R (SPINY) and the CAM grant S2013/ICE-3095 (CIBERDINE).
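    To make the survey's subject concrete, below is an illustrative sketch (not taken from any surveyed proposal) of a lightweight wearable biometric pipeline: windows of 3-axis accelerometer data are reduced to cheap statistical features, an enrolled template is built by averaging, and verification is a thresholded distance test. The window size, features, and threshold are assumptions, chosen only to reflect the low computational budget the survey analyses.

```python
# Illustrative wearable-biometrics pipeline: accelerometer windows ->
# cheap statistical features -> template matching. All parameters are
# assumptions for illustration, not from the surveyed systems.
import numpy as np

def features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 3) accelerometer window -> small feature vector."""
    mag = np.linalg.norm(window, axis=1)  # per-sample acceleration magnitude
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           [mag.mean(), mag.std()]])

def enroll(windows: list[np.ndarray]) -> np.ndarray:
    """The enrolled template is the mean feature vector over enrollment windows."""
    return np.mean([features(w) for w in windows], axis=0)

def verify(template: np.ndarray, window: np.ndarray,
           threshold: float = 1.5) -> bool:
    """Accept the identity claim if the feature distance is below the threshold."""
    return bool(np.linalg.norm(features(window) - template) < threshold)
```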

    Socio-Cognitive and Affective Computing

    Social cognition focuses on how people process, store, and apply information about other people and social situations, and on the role that cognitive processes play in social interactions. On the other hand, the term cognitive computing is generally used to refer to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. In this sense, it is a type of computing with the goal of discovering more accurate models of how the human brain/mind senses, reasons, and responds to stimuli. Socio-Cognitive Computing should be understood as a set of theoretical interdisciplinary frameworks, methodologies, methods and hardware/software tools to model how the human brain mediates social interactions. In addition, Affective Computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects, a fundamental aspect of socio-cognitive neuroscience. It is an interdisciplinary field spanning computer science, electrical engineering, psychology, and cognitive science. Physiological Computing is a category of technology in which electrophysiological data recorded directly from human activity are used to interface with a computing device. This technology becomes even more relevant when computing can be integrated pervasively into everyday life environments. Thus, Socio-Cognitive and Affective Computing systems should be able to adapt their behavior according to the Physiological Computing paradigm. This book integrates proposals from researchers who use signals from the brain and/or body to infer people's intentions and psychological state in smart computing systems. The design of this kind of system combines knowledge and methods of ubiquitous and pervasive computing, as well as physiological data measurement and processing, with those of socio-cognitive and affective computing.

    Recognising Complex Mental States from Naturalistic Human-Computer Interactions

    New advances in computer vision techniques will revolutionize the way we interact with computers, as, together with other improvements, they will help us build machines that understand us better. The face is the main non-verbal channel for human-human communication and contains valuable information about emotion, mood, and mental state. Affective computing researchers have investigated widely how facial expressions can be used for automatically recognizing affect and mental states. Nowadays, physiological signals can be measured by video-based techniques, which can also be utilised for emotion detection. Physiological signals are an important indicator of internal feelings and are more robust against social masking. This thesis focuses on computer vision techniques to detect facial expressions and physiological changes for recognizing non-basic and natural emotions during human-computer interaction. It covers all stages of the research process, from data acquisition to integration and application. Most previous studies focused on acquiring data from prototypic basic emotions acted out under laboratory conditions. To evaluate the proposed method under more practical conditions, two different scenarios were used for data collection. In the first scenario, a set of controlled stimuli was used to trigger the user's emotion. The second scenario aimed at capturing more naturalistic emotions that might occur during a writing activity. In this scenario, the engagement level of the participants, along with other affective states, was the target of the system. For the first time, this thesis explores how video-based physiological measures can be used in affect detection. Video-based measurement of physiological signals is a new technique that needs more improvement to be used in practical applications. A machine learning approach is proposed and evaluated to improve the accuracy of heart rate (HR) measurement using an ordinary camera during a naturalistic interaction with a computer.
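    The thesis's exact features and learning model are not reproduced in the abstract, so the following is a hedged sketch of one way a machine learning approach could refine camera-based heart rate estimates: regress the reference heart rate from the raw spectral-peak estimate plus simple signal-quality features, trained against a contact sensor. The features, the model choice, and the synthetic data (used here only so the snippet runs end to end) are all assumptions.

```python
# Hedged sketch: a regressor corrects raw camera-based (rPPG) heart-rate
# estimates using signal-quality features. Data below are synthetic stand-ins
# for (camera estimate, quality features, contact-sensor ground truth) triples.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
raw_hr = rng.uniform(55, 110, n)    # HR from the spectral-peak method (bpm)
snr = rng.uniform(0.1, 5.0, n)      # quality of the rPPG signal
motion = rng.uniform(0.0, 1.0, n)   # head-motion energy in the window
# Synthetic ground truth: the raw estimate degrades with motion and low SNR.
true_hr = raw_hr + 8 * motion - 2 / snr + rng.normal(0, 1.5, n)

X = np.column_stack([raw_hr, snr, motion])
model = GradientBoostingRegressor().fit(X[:400], true_hr[:400])
err_raw = np.abs(raw_hr[400:] - true_hr[400:]).mean()
err_ml = np.abs(model.predict(X[400:]) - true_hr[400:]).mean()
print(f"MAE raw: {err_raw:.2f} bpm, MAE corrected: {err_ml:.2f} bpm")
```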

    Characterization and processing of novel neck photoplethysmography signals for cardiorespiratory monitoring

    Epilepsy is a neurological disorder causing serious brain seizures that severely affect the patients' quality of life. Sudden unexpected death in epilepsy (SUDEP), for which no evident cause of death is found after post-mortem examination, is a common cause of mortality. The mechanisms leading to SUDEP are uncertain, but centrally mediated apneic respiratory dysfunction, inducing dangerous hypoxemia, plays a key role. Continuous physiological monitoring appears to be the only reliable solution for SUDEP prevention. However, current seizure-detection systems do not show enough sensitivity and present an intolerably high number of false alarms. A wearable system capable of measuring several physiological signals from the same body location could efficiently overcome these limitations. In this framework, a neck wearable apnea detection device (WADD), sensing airflow through tracheal sounds, was designed. Despite the promising performance, it is still necessary to integrate an oximeter sensor into the system, to measure oxygen saturation in blood (SpO2) from neck photoplethysmography (PPG) signals, and hence support the apnea detection decision. The neck is a novel PPG measurement site that has not yet been thoroughly explored, due to numerous challenges. This research work aims to characterize neck PPG signals in order to fully exploit this alternative pulse oximetry location for precise monitoring of cardiorespiratory biomarkers. In this thesis, neck PPG signals were recorded, for the first time in the literature, in a series of experiments under different artifact and respiratory conditions. Morphological and spectral characteristics were analyzed in order to identify potential singularities of the signals. The most common neck PPG artifacts critically corrupting the signal quality, and other breathing states of interest, were thoroughly characterized in terms of the most discriminative features. An algorithm was further developed to differentiate artifacts from clean PPG signals. Both the proposed characterization and the classification model can be useful tools for researchers to denoise neck PPG signals and exploit them in a variety of clinical contexts. In addition, it was demonstrated that, unlike other body parts, the neck also offers the possibility of extracting the Jugular Venous Pulse (JVP) non-invasively. Overall, the thesis showed how the neck could be an optimum location for multi-modal monitoring in the context of diseases affecting respiration, since it not only allows the sensing of airflow-related signals, but the breathing frequency component of the PPG also appeared more prominent than in the standard finger location. This property enabled the extraction of relevant features to develop a promising algorithm for apnea detection in near-real time. These findings could be of great importance for SUDEP prevention, facilitating the investigation of the mechanisms and risk factors associated with it, and ultimately reducing epilepsy mortality.
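    As a concrete illustration of the kind of respiratory feature the thesis exploits, below is a hedged sketch: the breathing component of a PPG window is isolated in the respiratory band and its power tracked over time, with a sustained collapse of that power serving as a crude apnea indicator. The band cut-offs, window length, and threshold are illustrative assumptions, not the thesis's algorithm.

```python
# Hedged sketch: track respiratory-band power of a PPG signal per window and
# flag windows where it collapses. All parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def resp_band_power(ppg: np.ndarray, fs: float) -> float:
    """Power of the 0.1-0.6 Hz (6-36 breaths/min) component of a PPG window."""
    b, a = butter(2, [0.1 / (fs / 2), 0.6 / (fs / 2)], btype="band")
    resp = filtfilt(b, a, ppg - ppg.mean())
    return float(np.mean(resp ** 2))

def apnea_flags(ppg: np.ndarray, fs: float, win_s: float = 10.0,
                rel_threshold: float = 0.2) -> list[bool]:
    """Flag windows whose respiratory power drops below a fraction of the median."""
    step = int(win_s * fs)
    powers = [resp_band_power(ppg[i:i + step], fs)
              for i in range(0, len(ppg) - step + 1, step)]
    baseline = np.median(powers)
    return [p < rel_threshold * baseline for p in powers]
```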