177 research outputs found
Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations
Automated methods of real-time, unobtrusive, human ambulation, activity, and wellness monitoring and data analysis using various algorithmic techniques have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have resulted in a large body of literature. This paper presents a holistic articulation of the research studies and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device framework and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and comprehensive review of the literature in the area in order to identify research gaps and prioritize future research directions.
TS-RGBD Dataset: a Novel Dataset for Theatre Scenes Description for People with Visual Impairments
Computer vision has long been used to help visually impaired people move
around their environment and avoid obstacles and falls. Existing solutions are
limited to either indoor or outdoor scenes, which restricts the kinds of places
visually disabled people can be in, including entertainment venues such as
theatres. Furthermore, most of the proposed computer-vision-based methods
rely on RGB benchmarks to train their models, resulting in limited performance
due to the absence of the depth modality.
In this paper, we propose a novel RGB-D dataset containing theatre scenes
with ground truth human actions and dense captions annotations for image
captioning and human action recognition: TS-RGBD dataset. It includes three
types of data: RGB, depth, and skeleton sequences, captured by Microsoft
Kinect.
We test image captioning models on our dataset, as well as some skeleton-based
human action recognition models, in order to extend the range of environments
where a visually disabled person can be, by detecting human actions and
textually describing the appearance of regions of interest in theatre scenes.
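The skeleton modality mentioned above lends itself to simple motion features. Here is a minimal NumPy sketch of turning a Kinect skeleton sequence into frame-to-frame velocity features; the array layout and the `velocity_features` helper are illustrative assumptions, not the dataset's actual file format or baseline.

```python
import numpy as np

# A Kinect (v1) skeleton frame tracks 20 joints, each with (x, y, z)
# coordinates, so a clip of T frames can be stored as a (T, 20, 3) array.
# How TS-RGBD serializes this on disk is not specified here.
N_JOINTS = 20

def velocity_features(skeleton_seq: np.ndarray) -> np.ndarray:
    """Frame-to-frame joint displacements, a common skeleton-based
    action-recognition feature: returns shape (T-1, n_joints * 3)."""
    diffs = np.diff(skeleton_seq, axis=0)     # (T-1, 20, 3) displacements
    return diffs.reshape(diffs.shape[0], -1)  # flatten joints per frame

# Synthetic 30-frame sequence standing in for one skeleton clip.
seq = np.random.default_rng(0).normal(size=(30, N_JOINTS, 3))
feats = velocity_features(seq)
print(feats.shape)  # (29, 60)
```

Such per-frame feature vectors are a typical input to sequence classifiers in skeleton-based action recognition.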
A discussion on the validation tests employed to compare human action recognition methods using the MSR Action3D dataset
This paper aims to determine which is the best human action recognition
method based on features extracted from RGB-D devices, such as the Microsoft
Kinect. A review of all the papers that reference MSR Action3D, the most
widely used dataset that includes depth information acquired from an RGB-D
device, has been performed. We found that the validation method used by each
work differs from the others, so a direct comparison among works cannot be made.
However, almost all the works present their results without taking this issue
into account. Therefore, we present different rankings according to the
validation methodology used, in order to clarify the existing confusion.
Comment: 16 pages and 7 tables
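The validation issue raised here can be made concrete: the common MSR Action3D protocols hold out whole subjects, whereas a clip-level random split lets clips from the same subject appear on both sides, inflating accuracy estimates. A small sketch with synthetic data (the subject counts and split sizes are illustrative, not the dataset's):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "action clips": 10 subjects with 20 clips each.
subjects = np.repeat(np.arange(10), 20)

def cross_subject_split(subjects, test_subjects):
    """MSR Action3D-style protocol: hold out whole subjects for testing."""
    test_mask = np.isin(subjects, test_subjects)
    return np.where(~test_mask)[0], np.where(test_mask)[0]

def random_split(n, test_frac, rng):
    """Clip-level random split: clips from the same subject can land in
    both train and test, which leaks subject-specific appearance."""
    idx = rng.permutation(n)
    n_test = int(n * test_frac)
    return idx[n_test:], idx[:n_test]

tr_cs, te_cs = cross_subject_split(subjects, test_subjects=[5, 6, 7, 8, 9])
tr_r, te_r = random_split(len(subjects), 0.5, rng)

# Under the cross-subject protocol, no test subject is seen in training...
print(set(subjects[tr_cs]) & set(subjects[te_cs]))  # set()
# ...while the random split shares subjects across the two sides.
print(len(set(subjects[tr_r]) & set(subjects[te_r])) > 0)  # True
```

This is why results reported under different protocols cannot be ranked against each other directly.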
Practical and Rich User Digitization
A long-standing vision in computer science has been to evolve computing
devices into proactive assistants that enhance our productivity, health and
wellness, and many other facets of our lives. User digitization is crucial in
achieving this vision as it allows computers to intimately understand their
users, capturing activity, pose, routine, and behavior. Today's consumer
devices - like smartphones and smartwatches - provide a glimpse of this
potential, offering coarse digital representations of users with metrics such
as step count, heart rate, and a handful of human activities like running and
biking. Even these very low-dimensional representations are already bringing
value to millions of people's lives, but there is significant potential for
improvement. At the other end, professional, high-fidelity, comprehensive user
digitization systems exist: motion capture suits and multi-camera rigs
digitize our full body and appearance, and scanning machines such as MRI
capture our detailed anatomy. However, these carry significant user
practicality burdens, such as financial, privacy, ergonomic, aesthetic, and
instrumentation considerations, that preclude consumer use. In general, the
higher the fidelity of capture, the lower the user's practicality. Most
conventional approaches strike a balance between user practicality and
digitization fidelity.
My research aims to break this trend, developing sensing systems that
increase user digitization fidelity to create new and powerful computing
experiences while retaining or even improving user practicality and
accessibility, allowing such technologies to have a societal impact. Armed with
such knowledge, our future devices could offer longitudinal health tracking,
more productive work environments, full body avatars in extended reality, and
embodied telepresence experiences, to name just a few domains.
Comment: PhD thesis
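A step count, as noted above, is about the coarsest user digitization a consumer device offers. The sketch below shows one naive way such a metric can be derived from accelerometer magnitude; the threshold, cadence, and sampling rate are illustrative assumptions, and real pedometers use filtering and adaptive thresholds.

```python
import numpy as np

def count_steps(accel_mag, fs=50.0, thresh=1.2, min_gap=0.3):
    """Naive step counter: count upward crossings of a fixed threshold
    (in g) on the acceleration magnitude, at least `min_gap` s apart."""
    steps, last_t = 0, -np.inf
    for i in range(1, len(accel_mag)):
        t = i / fs
        if accel_mag[i - 1] < thresh <= accel_mag[i] and t - last_t >= min_gap:
            steps += 1
            last_t = t
    return steps

# Synthetic 10 s walk at ~2 Hz cadence: 1 g gravity plus a sinusoidal bounce.
fs = 50.0
t = np.arange(0, 10, 1 / fs)
mag = 1.0 + 0.4 * np.sin(2 * np.pi * 2.0 * t)
print(count_steps(mag, fs=fs))  # 20: one crossing per simulated stride
```

Even this low-dimensional signal illustrates the practicality-fidelity trade-off: it needs only a wrist-worn IMU, but captures far less than a motion-capture rig would.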
A Survey on Human-aware Robot Navigation
Intelligent systems are increasingly part of our everyday lives and have been
integrated seamlessly to the point where it is difficult to imagine a world
without them. Physical manifestations of those systems, on the other hand, in
the form of embodied agents or robots, have so far been used only for specific
applications and are often limited to functional roles (e.g., in industry,
entertainment, and the military). Given the current growth and innovation in
the research communities concerned with the topics of robot navigation,
human-robot-interaction and human activity recognition, it seems like this
might soon change. Robots are increasingly easy to obtain and use and the
acceptance of them in general is growing. However, the design of a socially
compliant robot that can function as a companion needs to take various areas of
research into account. This paper is concerned with the navigation aspect of a
socially-compliant robot and provides a survey of existing solutions for the
relevant areas of research as well as an outlook on possible future directions.
Comment: Robotics and Autonomous Systems, 202
Data Distribution Dynamics in Real-World WiFi-Based Patient Activity Monitoring for Home Healthcare
This paper examines the application of WiFi signals for real-world monitoring
of daily activities in home healthcare scenarios. While state-of-the-art
WiFi-based activity recognition is promising in lab environments, challenges
arise in real-world settings due to environmental, subject, and system
configuration variables, affecting accuracy and adaptability. The research
involved deploying systems in various settings and analyzing data shifts. It
aims to guide realistic development of robust, context-aware WiFi sensing
systems for elderly care. The findings point to a shift in WiFi-based activity
sensing that bridges the gap between academic research and practical
applications, enhancing quality of life through technology.
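One way to make the data shifts described above measurable is to compare simple CSI feature statistics across deployments. A hedged sketch with synthetic amplitudes follows; the array shapes, scales, and the standardized-mean-difference gap measure are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical CSI amplitude matrices (packets x subcarriers) recorded for
# the same activity in two homes; the statistics here are synthetic.
home_a = rng.normal(loc=20.0, scale=2.0, size=(500, 30))
home_b = rng.normal(loc=26.0, scale=3.5, size=(500, 30))  # shifted deployment

def feature_summary(csi):
    """Per-subcarrier amplitude mean and std - a simple domain descriptor."""
    return csi.mean(axis=0), csi.std(axis=0)

def domain_gap(csi_a, csi_b):
    """Average standardized mean difference across subcarriers; large values
    suggest a model trained in one home may not transfer to the other."""
    mu_a, sd_a = feature_summary(csi_a)
    mu_b, sd_b = feature_summary(csi_b)
    pooled = np.sqrt((sd_a**2 + sd_b**2) / 2)
    return float(np.mean(np.abs(mu_a - mu_b) / pooled))

print(domain_gap(home_a, home_a))        # 0.0: identical environments
print(domain_gap(home_a, home_b) > 1.0)  # True: large cross-home shift
```

Quantifying such gaps before deployment is one practical route to the context-aware adaptation the paper calls for.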
Ambient Assisted Living: Scoping Review of Artificial Intelligence Models, Domains, Technology, and Concerns
Background: Ambient assisted living (AAL) is a common name for various artificial intelligence (AI)-infused applications and platforms that support their users in need in multiple activities, from health to daily living. These systems use different approaches to learn about their users and make automated decisions, known as AI models, for personalizing their services and improving outcomes. Given the numerous systems developed and deployed for people with different needs, health conditions, and dispositions toward the technology, it is critical to obtain clear and comprehensive insights concerning the AI models used, along with their domains, technology, and concerns, to identify promising directions for future work. Objective: This study aimed to provide a scoping review of the literature on AI models in AAL. In particular, we analyzed the specific AI models used in AAL systems, the target domains of the models, the technology using the models, and the major concerns from the end-user perspective. Our goal was to consolidate research on this topic and inform end users, health care professionals and providers, researchers, and practitioners in developing, deploying, and evaluating future intelligent AAL systems. Methods: This study was conducted as a scoping review to identify, analyze, and extract the relevant literature. It used a natural language processing toolkit to retrieve the article corpus for an efficient and comprehensive automated literature search. Relevant articles were then extracted from the corpus and analyzed manually. This review included 5 digital libraries: IEEE, PubMed, Springer, Elsevier, and MDPI. Results: We included a total of 108 articles. The annual distribution of relevant articles showed a growing trend for all categories from January 2010 to July 2022. The AI models mainly used unsupervised and semisupervised approaches. The leading models are deep learning, natural language processing, instance-based learning, and clustering.
Activity assistance and recognition were the most common target domains of the models. The models were mainly implemented in ambient sensing, mobile technology, and robotic devices. Older adults were the primary beneficiaries, followed by patients and frail persons of various ages. Availability was a top beneficiary concern. Conclusions: This study presents analytical evidence of AI models in AAL and their domains, technologies, beneficiaries, and concerns. Future research on intelligent AAL should involve health care professionals and caregivers as designers and users, comply with health-related regulations, improve transparency and privacy, integrate with health care technological infrastructure, explain its decisions to the users, and establish evaluation metrics and design guidelines. Trial Registration: PROSPERO (International Prospective Register of Systematic Reviews) CRD42022347590; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022347590. This work was part of and supported by GoodBrother, COST Action 19121, Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living.
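The automated literature search is described only at a high level; one plausible first step of such a pipeline is keyword-based corpus filtering, sketched below. The term lists and article titles are invented for illustration and are not the review's actual toolkit or queries.

```python
# Minimal sketch of keyword-based corpus filtering: keep an article only if
# it mentions both an AAL term and an AI-model term. The term sets and the
# toy corpus below are hypothetical.
AAL_TERMS = {"ambient assisted living", "aal"}
AI_TERMS = {"deep learning", "clustering", "natural language processing"}

corpus = [
    "Deep learning for ambient assisted living sensor data",
    "A survey of graph databases",
    "Clustering daily activities in AAL platforms for older adults",
]

def relevant(title: str) -> bool:
    text = title.lower()
    return any(t in text for t in AAL_TERMS) and any(t in text for t in AI_TERMS)

hits = [t for t in corpus if relevant(t)]
print(len(hits))  # 2: the first and third titles match both term sets
```

In a real scoping review, such an automated pass would only narrow the corpus; the relevant articles are still screened manually, as the Methods describe.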
Non-contact Multimodal Indoor Human Monitoring Systems: A Survey
Indoor human monitoring systems leverage a wide range of sensors, including
cameras, radio devices, and inertial measurement units, to collect extensive
data from users and the environment. These sensors contribute diverse data
modalities, such as video feeds from cameras, received signal strength
indicators and channel state information from WiFi devices, and three-axis
acceleration data from inertial measurement units. In this context, we present
a comprehensive survey of multimodal approaches for indoor human monitoring
systems, with a specific focus on their relevance in elderly care. Our survey
primarily highlights non-contact technologies, particularly cameras and radio
devices, as key components in the development of indoor human monitoring
systems. Throughout this article, we explore well-established techniques for
extracting features from multimodal data sources. Our exploration extends to
methodologies for fusing these features and harnessing multiple modalities to
improve the accuracy and robustness of machine learning models. Furthermore, we
conduct comparative analysis across different data modalities in diverse human
monitoring tasks and undertake a comprehensive examination of existing
multimodal datasets. This extensive survey not only highlights the significance
of indoor human monitoring systems but also affirms their versatile
applications. In particular, we emphasize their critical role in enhancing the
quality of elderly care, offering valuable insights into the development of
non-contact monitoring solutions applicable to the needs of aging populations.
Comment: 19 pages, 5 figures
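Feature-level fusion of the kind surveyed can be sketched minimally: standardize each modality's features and concatenate them for a downstream classifier. The feature shapes and modality names below are assumptions for illustration, not any surveyed system's design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-window features from two non-contact modalities:
# camera-based pose descriptors and WiFi channel state information (CSI).
cam_feats = rng.normal(size=(100, 64))  # e.g. pooled pose embeddings
csi_feats = rng.normal(size=(100, 30))  # e.g. per-subcarrier amplitudes

def zscore(x):
    """Per-feature standardization so no modality dominates by raw scale."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def early_fusion(*modalities):
    """Feature-level (early) fusion: normalize each modality and
    concatenate along the feature axis for a downstream classifier."""
    return np.concatenate([zscore(m) for m in modalities], axis=1)

fused = early_fusion(cam_feats, csi_feats)
print(fused.shape)  # (100, 94)
```

Early fusion is only one of the strategies such surveys compare; decision-level fusion instead trains one model per modality and combines their predictions.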