41 research outputs found

    Human Activity Recognition and Fall Detection Using Unobtrusive Technologies

    Full text link
    As the population ages, health issues like injurious falls demand more attention. Wearable devices can be used to detect falls; however, despite their commercial success, most are obtrusive, and patients generally do not like or may forget to wear them. In this thesis, a monitoring system consisting of two 24×32 thermal array sensors and a millimetre-wave (mmWave) radar sensor was developed to unobtrusively detect locations and recognise human activities such as sitting, standing, walking, lying, and falling. Data were collected by observing healthy young volunteers simulating ten different scenarios. The optimal installation position of the sensors was initially unknown, so the sensors were mounted on a side wall, in a corner, and on the ceiling of the experimental room to allow a performance comparison between these placements. Every thermal frame was converted into an image, and features were either extracted manually or extracted automatically using convolutional neural networks (CNNs). Applying a CNN model to the infrared stereo dataset to recognise five activities (falling plus lying on the floor, lying in bed, sitting on a chair, sitting in bed, standing plus walking), the overall average accuracy and F1-score were 97.6% and 0.935, respectively. For detecting falling plus lying on the floor versus the remaining activities, the scores were 97.9% and 0.945, respectively. For the radar technology, the generated point clouds were converted into an occupancy grid, and either a CNN model was used to extract features automatically or a set of features was extracted manually. Applying several classifiers to the manually extracted features to detect falling plus lying on the floor versus the remaining activities, the Random Forest (RF) classifier achieved the best results in the overhead position (an accuracy of 92.2%, a recall of 0.881, a precision of 0.805, and an F1-score of 0.841). The CNN model likewise achieved its best results in the overhead position (an accuracy of 92.3%, a recall of 0.891, a precision of 0.801, and an F1-score of 0.844), slightly outperforming the RF method. Data fusion combining the infrared and radar technologies was performed at the feature level; however, the benefit was not significant. The proposed system was efficient in terms of cost, processing time, and space. With further development, the system could be used as a real-time fall detection system in aged care facilities or in the homes of older people.
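    A minimal, illustrative sketch (not the thesis code) of one step described above: rasterising a mmWave radar point cloud into an occupancy grid that a CNN or a conventional classifier can consume. The grid size, coordinate bounds, and normalisation are assumed values chosen for illustration.

```python
import numpy as np

def point_cloud_to_occupancy_grid(points, x_range=(-3.0, 3.0),
                                  y_range=(0.0, 6.0), shape=(32, 32)):
    """Count radar points falling into each cell of a fixed 2-D grid.

    points : (N, 2) array of (x, y) coordinates in metres.
    Returns a normalised occupancy grid of the given shape.
    """
    grid = np.zeros(shape, dtype=np.float32)
    # Map metric coordinates to integer cell indices, clipping to the grid edges
    xs = np.clip(((points[:, 0] - x_range[0]) / (x_range[1] - x_range[0])
                  * shape[0]).astype(int), 0, shape[0] - 1)
    ys = np.clip(((points[:, 1] - y_range[0]) / (y_range[1] - y_range[0])
                  * shape[1]).astype(int), 0, shape[1] - 1)
    np.add.at(grid, (xs, ys), 1.0)          # accumulate point counts per cell
    return grid / max(grid.max(), 1.0)      # normalise to [0, 1] for a CNN input

# Example: 100 synthetic points in a 6 m x 6 m area mapped onto a 32x32 grid
points = np.random.rand(100, 2) * [6.0, 6.0] + [-3.0, 0.0]
frame = point_cloud_to_occupancy_grid(points)
```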

    State of the Art of Audio- and Video-Based Solutions for AAL

    Get PDF
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies offer a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones hinder one’s activities far less than other wearable sensors may. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters. Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals. Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings in which they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely lifelogging and self-monitoring, remote monitoring of vital signs, emotional state recognition, food intake monitoring, activity and behaviour recognition, activity and personal assistance, gesture recognition, fall detection and prevention, mobility assessment and frailty recognition, and cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential offered by the silver economy is overviewed.

    State of the art of audio- and video based solutions for AAL

    Get PDF
    Working Group 3. Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies offer a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their needs for autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones hinder one’s activities far less than other wearable sensors may. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings in which they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential offered by the silver economy is overviewed.

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    Get PDF
    This reprint provides a collection of papers illustrating the state of the art in smart processing of data coming from wearable, implantable or portable sensors. Each paper presents the design, databases used, methodological background, obtained results, and their interpretation for biomedical applications. Revealing examples include brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, thorough analysis of compressive sensing of ECG signals, development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master's students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.

    Cognitive Hyperconnected Digital Transformation

    Get PDF
    Cognitive Hyperconnected Digital Transformation provides an overview of the current Internet of Things (IoT) landscape, ranging from research, innovation and development priorities to enabling technologies in a global context. It is intended as a standalone book in a series that covers the Internet of Things activities of the IERC-Internet of Things European Research Cluster, including both research and technological innovation, validation and deployment. The book builds on the ideas put forward by the European Research Cluster, the IoT European Platform Initiative (IoT-EPI) and the IoT European Large-Scale Pilots Programme, presenting global views and state-of-the-art results regarding the challenges facing IoT research, innovation, development and deployment in the next years. Hyperconnected environments integrating industrial/business/consumer IoT technologies and applications require new IoT open systems architectures integrated with network architecture (a knowledge-centric network for IoT), IoT system design and open, horizontal and interoperable platforms managing things that are digital, automated and connected and that function in real-time with remote access and control based on Internet-enabled tools. The IoT is bridging the physical world with the virtual world by combining augmented reality (AR), virtual reality (VR), machine learning and artificial intelligence (AI) to support the physical-digital integrations in the Internet of mobile things based on sensors/actuators, communication, analytics technologies, cyber-physical systems, software, cognitive systems and IoT platforms with multiple functionalities. These IoT systems have the potential to understand, learn, predict, adapt and operate autonomously. They can change future behaviour, while the combination of extensive parallel processing power, advanced algorithms and data sets feed the cognitive algorithms that allow the IoT systems to develop new services and propose new solutions. IoT technologies are moving into the industrial space and enhancing traditional industrial platforms with solutions that break free of device-, operating system- and protocol-dependency. Secure edge computing solutions replace local networks, web services replace software, and devices with networked programmable logic controllers (NPLCs) based on Internet protocols replace devices that use proprietary protocols. Information captured by edge devices on the factory floor is secure and accessible from any location in real time, opening the communication gateway both vertically (connecting machines across the factory and enabling the instant availability of data to stakeholders within operational silos) and horizontally (with one framework for the entire supply chain, across departments, business units, global factory locations and other markets). End-to-end security and privacy solutions in IoT space require agile, context-aware and scalable components with mechanisms that are both fluid and adaptive. The convergence of IT (information technology) and OT (operational technology) makes security and privacy by default a new important element where security is addressed at the architecture level, across applications and domains, using multi-layered distributed security measures. Blockchain is transforming industry operating models by adding trust to untrusted environments, providing distributed security mechanisms and transparent access to the information in the chain. 
Digital technology platforms are evolving, with IoT platforms integrating complex information systems, customer experience, analytics and intelligence to enable new capabilities and business models for digital business.

    Sensing and Signal Processing in Smart Healthcare

    Get PDF
    In the last decade, we have witnessed the rapid development of electronic technologies that are transforming our daily lives. Such technologies are often integrated with various sensors that facilitate the collection of human motion and physiological data and are equipped with wireless communication modules such as Bluetooth, radio frequency identification, and near-field communication. In smart healthcare applications, designing ergonomic and intuitive human–computer interfaces is crucial because a system that is not easy to use will create a huge obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration in smart healthcare applications because it must ensure high accuracy with a high level of confidence in order for the applications to be useful for clinicians in making diagnostic and treatment decisions. This Special Issue is a collection of 10 articles selected from a total of 26 contributions. These contributions span the areas of signal processing and smart healthcare systems and were mostly contributed by authors from Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands. Authors from China, Korea, Taiwan, Indonesia, and Ecuador are also included.

    Personalized fall detection monitoring system based on learning from the user movements

    Get PDF
    A personalized fall detection system is shown to provide more benefits than current fall detection systems. The personalized model can also be applied to any problem where one class of data is hard to gather. The results show that adapting to the user's needs improves the overall accuracy of the system. Future work includes detecting the smartphone's position on the user so that the system can be placed anywhere on the body and still detect falls. Even though the accuracy is not 100%, the proof of concept of personalization can be used to achieve greater accuracy. The concept of personalization used in this paper can also be extended to other research in the medical field, or wherever data are hard to come by for a particular class. The feature extraction and feature selection modules warrant further investigation, in particular the selection of features based on one-class data. http://jit.ndhu.edu.tw
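    A minimal sketch of the personalisation idea described above, not the paper's implementation: when fall examples are hard to gather, a one-class model can be fitted on the user's own everyday movements, and windows it flags as outliers are treated as candidate falls. The feature choice (simple statistics over windows of the acceleration magnitude) and all parameter values are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def window_features(acc_magnitude, window=50):
    """Split a 1-D acceleration-magnitude stream into windows of simple statistics."""
    n = len(acc_magnitude) // window
    windows = acc_magnitude[:n * window].reshape(n, window)
    return np.column_stack([windows.mean(axis=1),
                            windows.std(axis=1),
                            windows.max(axis=1) - windows.min(axis=1)])

# Fit on the user's normal movements only (personalised, one-class training data)
normal_stream = np.abs(np.random.normal(1.0, 0.05, 5000))   # placeholder signal
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale")
model.fit(window_features(normal_stream))

# At run time, -1 marks windows that do not look like the user's usual motion
new_stream = np.abs(np.random.normal(1.0, 0.05, 500))
candidate_falls = model.predict(window_features(new_stream)) == -1
```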

    Tracking the Temporal-Evolution of Supernova Bubbles in Numerical Simulations

    Get PDF
    The study of low-dimensional, noisy manifolds embedded in a higher-dimensional space has been extremely useful in many applications, from the chemical analysis of multi-phase flows to simulations of galactic mergers. Building a probabilistic model of the manifolds has helped in describing their essential properties and how they vary in space. However, when the manifold is evolving through time, joint spatio-temporal modelling is needed in order to fully comprehend its nature. We propose a first-order Markovian process that propagates the spatial probabilistic model of a manifold at a fixed time to its adjacent temporal stages. The proposed methodology is demonstrated using a particle simulation of an interacting dwarf galaxy to describe the evolution of a cavity generated by a supernova.
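    A minimal sketch of the first-order Markovian idea described above, under assumed details: a Gaussian mixture is fitted to the particle positions of one snapshot, and its means are used to initialise the fit at the next snapshot, so each spatial model depends only on its immediate predecessor. The component count and the synthetic snapshots are placeholders, not the paper's data or method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def propagate_mixture(snapshots, n_components=5):
    """Fit a GMM to each snapshot, warm-starting from the previous snapshot's fit."""
    models, means = [], None
    for particles in snapshots:           # particles: (N, 3) positions at one time
        gmm = GaussianMixture(n_components=n_components, means_init=means)
        gmm.fit(particles)
        means = gmm.means_                # first-order dependence on the previous stage
        models.append(gmm)
    return models

# Two synthetic particle snapshots standing in for consecutive simulation outputs
snapshots = [np.random.randn(1000, 3), np.random.randn(1000, 3) + 0.1]
models = propagate_mixture(snapshots)
```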

    WOFEX 2021 : 19th annual workshop, Ostrava, 1st September 2021 : proceedings of papers

    Get PDF
    The workshop WOFEX 2021 (PhD workshop of the Faculty of Electrical Engineering and Computer Science) was held on 1st September 2021 at the VSB – Technical University of Ostrava. The workshop offers an opportunity for students to meet and share their research experiences, to discover commonalities in research and studentship, and to foster a collaborative environment for joint problem solving. PhD students are encouraged to attend in order to ensure a broad, unconfined discussion. In that view, this workshop is intended for students and researchers of this faculty, offering opportunities to meet new colleagues.