4,341 research outputs found

    Breathing analysis using thermal and depth imaging camera video records

    The paper is devoted to the study of facial region temperature changes using a simple thermal imaging camera and to the comparison of their time evolution with the pectoral area motion recorded by the MS Kinect depth sensor. The goal of this research is to propose the use of video records as an alternative diagnostic of breathing disorders, allowing their analysis in the home environment as well. The methods proposed include (i) specific image processing algorithms for detecting facial parts with periodic temperature changes; (ii) computational intelligence tools for analysing the associated video sequences; and (iii) digital filters and spectral estimation tools for processing the depth matrices. Machine learning applied to thermal imaging camera calibration allowed the recognition of its digital readouts with an accuracy close to 100% for the classification of individual temperature values. The proposed detection of breathing features was used for monitoring physical activity on a home exercise bike. The results include a decrease of breathing temperature and breathing frequency after a load, with mean values of −0.16 °C/min and −0.72 bpm respectively, for the given set of experiments. The proposed methods verify that thermal and depth cameras can be used as additional tools for multimodal detection of breathing patterns. © 2017 by the authors. Licensee MDPI, Basel, Switzerland.
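    As a rough illustration of step (iii), the breathing frequency can be recovered from a per-frame scalar signal (for example, the mean depth over the pectoral region, or the mean temperature of a facial patch) by spectral estimation. The following Python sketch is not the authors' implementation; the function name, the breathing band limits, and the frame rate are illustrative assumptions:

```python
import numpy as np

def breathing_rate_bpm(signal, fs):
    """Estimate the dominant breathing frequency (breaths per minute)
    from a 1-D periodic signal (e.g. mean chest depth or mean facial
    temperature per frame) sampled at fs Hz."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                           # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))          # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    # restrict the search to a plausible breathing band (0.1-0.7 Hz, i.e. 6-42 bpm)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq

# Synthetic example: a 0.25 Hz (15 bpm) breathing oscillation sampled at 30 fps
fs = 30.0
t = np.arange(0, 60, 1.0 / fs)
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(t.size)
print(breathing_rate_bpm(sig, fs))  # prints 15.0
```

    A simple FFT peak pick suffices for a clean periodic signal; for noisier camera-derived signals, an averaged periodogram (e.g. Welch's method) in the same band is the usual refinement.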

    A systematic review of physiological signals based driver drowsiness detection systems.

    Driving a vehicle is a complex, multidimensional, and potentially risky activity demanding full mobilization and utilization of physiological and cognitive abilities. Drowsiness, often caused by stress, fatigue, or illness, degrades the cognitive capabilities that underpin driving and causes many accidents. Drowsiness-related road accidents are associated with trauma, physical injuries, and fatalities, and are often accompanied by economic loss. Drowsy-driving crashes are most common among young people and night-shift workers. Real-time and accurate driver drowsiness detection is necessary to bring down the drowsy-driving accident rate. Many researchers have developed systems that detect drowsiness using features related to the vehicle, the driver's behaviour, and physiological measures. Keeping in view the rising trend in the use of physiological measures, this study presents a comprehensive and systematic review of recent techniques to detect driver drowsiness using physiological signals. Different sensors augmented with machine learning are utilized, which subsequently yield better results. These techniques are analyzed with respect to several aspects, such as the data collection sensor, the environment (controlled or dynamic), and the experimental setup (real traffic or driving simulators). Similarly, by investigating the types of sensors involved in the experiments, this study discusses the advantages and disadvantages of existing studies, points out the research gaps, and offers future research directions for drowsiness detection techniques based on physiological signals. [Abstract copyright: © The Author(s), under exclusive licence to Springer Nature B.V. 2022.]

    Thermal imaging developments for respiratory airflow measurement to diagnose apnoea

    Sleep-disordered breathing is a sleep disorder that manifests itself as intermittent pauses (apnoeas) in breathing during sleep. The condition disturbs sleep and can result in a variety of health problems. Its diagnosis is complex and involves multiple sensors attached to the person to measure the electroencephalogram (EEG), the electrocardiogram (ECG), and blood oxygen saturation (pulse oximetry, SpO2) …

    IoT Platform for COVID-19 Prevention and Control: A Survey

    As a result of the worldwide transmission of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), coronavirus disease 2019 (COVID-19) has evolved into an unprecedented pandemic. Currently, with pharmaceutical treatments and vaccines unavailable, this novel coronavirus has a great impact on public health, human society, and the global economy, which is likely to last for many years. One of the lessons learned from the COVID-19 pandemic is that a long-term system with non-pharmaceutical interventions for preventing and controlling new infectious diseases is desirable. The Internet of Things (IoT) platform is well suited to achieving this goal, thanks to its ubiquitous sensing ability and seamless connectivity. IoT technology is changing our lives through smart healthcare, smart homes, and smart cities, which aim to build a more convenient and intelligent community. This paper presents how the IoT could be incorporated into an epidemic prevention and control system. Specifically, we demonstrate a potential fog-cloud combined IoT platform that can be used for systematic and intelligent COVID-19 prevention and control, which involves five interventions: COVID-19 Symptom Diagnosis, Quarantine Monitoring, Contact Tracing & Social Distancing, COVID-19 Outbreak Forecasting, and SARS-CoV-2 Mutation Tracking. We investigate and review the state-of-the-art literature on these five interventions to present the capabilities of IoT in countering the current COVID-19 pandemic or future infectious disease epidemics.
    Comment: 12 pages; submitted to IEEE Internet of Things Journal

    Fur seals do, but sea lions don’t – cross taxa insights into exhalation during ascent from dives

    Many agencies provided funding and logistical support for the various research efforts resulting in the data presented here: the South African Department of Science and Technology, administered by the National Research Foundation, and the Department of Environmental Affairs for subantarctic fur seal work; the Australian Research Council (DP110102065), the Holsworth Wildlife Research Endowment and the Office of Naval Research (Marine Mammals and Biological Oceanography Program Award no. N00014-10-1-0385) for Australian fur seal work; the National Oceanic and Atmospheric Administration (NOAA), via grants to the Alaska SeaLife Center and the National Marine Mammal Laboratory, with additional funding and logistical support from North Pacific Wildlife Consulting, for Steller sea lion and northern fur seal (Russia) work; the National Marine Mammal Laboratory, National Marine Fisheries Service, NOAA for northern fur seal (Alaska) work. Research support for R.W. Davis was provided by the National Science Foundation.

    Management of gases during diving is not well understood across marine mammal species. Prior to diving, phocid (true) seals generally exhale, a behaviour thought to assist with the prevention of decompression sickness. Otariid seals (fur seals and sea lions) have a greater reliance on their lung oxygen stores, and inhale prior to diving. One otariid, the Antarctic fur seal (Arctocephalus gazella), then exhales during the final 50–85% of the return to the surface, which may prevent another gas management issue: shallow-water blackout. Here, we compare data collected from animal-attached tags (video cameras, hydrophones and conductivity sensors) deployed on a suite of otariid seal species to examine the ubiquity of ascent exhalations for this group. We find evidence for ascent exhalations across four fur seal species, but such exhalations are absent for three sea lion species. Fur seals and sea lions are no longer genetically separated into distinct subfamilies, but are morphologically distinguished by the thick underfur layer of fur seals. Together with their smaller size and energetic dives, we suggest their air-filled fur might underlie the need to perform these exhalations, although whether they serve to reduce buoyancy and ascent speed, to avoid shallow-water blackout, or to prevent other cardiovascular issues during diving remains unclear.

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications

    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives.

    In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.

    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    Ubiquitous Technologies for Emotion Recognition

    Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance to understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions, continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.