Use of Augmented Reality in Human Wayfinding: A Systematic Review
Augmented reality technology has emerged as a promising solution to assist
with wayfinding difficulties, bridging the gap between obtaining navigational
assistance and maintaining an awareness of one's real-world surroundings. This
article presents a systematic review of research literature related to AR
navigation technologies. An in-depth analysis of 65 salient studies was
conducted, addressing four main research topics: 1) current state-of-the-art of
AR navigational assistance technologies, 2) user experiences with these
technologies, 3) the effect of AR on human wayfinding performance, and 4)
impacts of AR on human navigational cognition. Notably, studies demonstrate
that AR can decrease cognitive load and improve cognitive map development
compared with traditional guidance modalities. However, findings regarding
wayfinding performance and user experience were mixed. Some studies suggest
little impact of AR on improving outdoor navigational performance, and certain
information modalities may be distracting and ineffective. This article
discusses these nuances in detail, supporting the conclusion that AR holds
great potential in enhancing wayfinding by providing enriched navigational
cues, interactive experiences, and improved situational awareness.
Comment: 52 pages
Non-contact Multimodal Indoor Human Monitoring Systems: A Survey
Indoor human monitoring systems leverage a wide range of sensors, including
cameras, radio devices, and inertial measurement units, to collect extensive
data from users and the environment. These sensors contribute diverse data
modalities, such as video feeds from cameras, received signal strength
indicators and channel state information from WiFi devices, and three-axis
acceleration data from inertial measurement units. In this context, we present
a comprehensive survey of multimodal approaches for indoor human monitoring
systems, with a specific focus on their relevance in elderly care. Our survey
primarily highlights non-contact technologies, particularly cameras and radio
devices, as key components in the development of indoor human monitoring
systems. Throughout this article, we explore well-established techniques for
extracting features from multimodal data sources. Our exploration extends to
methodologies for fusing these features and harnessing multiple modalities to
improve the accuracy and robustness of machine learning models. Furthermore, we
conduct a comparative analysis across different data modalities in diverse human
monitoring tasks and undertake a comprehensive examination of existing
multimodal datasets. This extensive survey not only highlights the significance
of indoor human monitoring systems but also affirms their versatile
applications. In particular, we emphasize their critical role in enhancing the
quality of elderly care, offering valuable insights into the development of
non-contact monitoring solutions applicable to the needs of aging populations.
Comment: 19 pages, 5 figures
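The feature-level fusion this survey describes can be sketched as follows. The modality names, feature dimensions, and random data below are purely illustrative assumptions, not taken from the survey; the point is only the normalise-then-concatenate pattern for combining modalities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample feature vectors from two modalities
# (names and dimensions are illustrative, not from the survey).
video_feat = rng.normal(size=(8, 16))  # e.g. pose embeddings from a camera
csi_feat = rng.normal(size=(8, 32))    # e.g. WiFi channel state information features

def zscore(x):
    """Normalise each feature column so modalities share a comparable scale."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# Early (feature-level) fusion: normalise each modality, then concatenate
# per sample; the fused vectors feed a downstream classifier or regressor.
fused = np.concatenate([zscore(video_feat), zscore(csi_feat)], axis=1)
print(fused.shape)  # (8, 48)
```

Normalising before concatenation matters because otherwise the modality with the larger numeric range dominates any distance- or gradient-based model trained on the fused features.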
SLAM for Visually Impaired People: A Survey
In recent decades, several assistive technologies for visually impaired and
blind (VIB) people have been developed to improve their ability to navigate
independently and safely. At the same time, simultaneous localization and
mapping (SLAM) techniques have become sufficiently robust and efficient to be
adopted in the development of assistive technologies. In this paper, we first
report the results of an anonymous survey conducted with VIB people to
understand their experience and needs; we focus on digital assistive
technologies that help them with indoor and outdoor navigation. Then, we
present a literature review of assistive technologies based on SLAM. We discuss
proposed approaches and indicate their pros and cons. We conclude by presenting
future opportunities and challenges in this domain.
Comment: 26 pages, 5 tables, 3 figures
Implementation of an Autonomous Impulse Response Measurement System
Data collection is crucial for researchers, as it can provide important insights for describing phenomena. In acoustics, the behaviour of sound in a room is characterized by its Room Impulse Responses (RIRs), which describe how sound propagates from a source to a receiver in that room. Room impulse responses are needed in vast quantities for various purposes, including the prediction of acoustical parameters and the rendering of virtual acoustical spaces. Recently, mobile robots navigating indoor spaces have increasingly been used to acquire information about their environments. However, little research has attempted to utilize robots for the collection of room acoustic data.
This thesis presents an adaptable automated system to measure room impulse responses in multi-room environments, using mobile and stationary measurement platforms. The system, known as the Autonomous Impulse Response Measurement System (AIRMS), is divided into two stages: data collection and post-processing. To automate data collection, a mobile robotic platform was developed to perform acoustic measurements within a room. The robot was equipped with spatial microphones, multiple loudspeakers, and an indoor localization system that reported the robot's location in real time. Additionally, stationary platforms were installed at specific locations inside and outside the room. The mobile and stationary platforms communicated wirelessly with one another to perform the acoustical tests systematically. Since a major requirement of the system is adaptability, researchers can define the elements of the system according to their needs, including the mounted equipment and the number of platforms. Post-processing comprises the extraction of sine sweeps and the calculation of impulse responses. Extraction refers to the process of framing every acoustical test signal from the raw recordings; these signals are then processed to calculate the room impulse responses. The automatically collected information was complemented with manually produced data, including a 3D model of the room and a panoramic picture.
The performance of the system was tested under two conditions: a single-room and a multi-room setting. Room impulse responses were calculated for each test condition, exhibiting typical characteristics of such signals and showing the effects of the proximity between sources and receivers, as well as the presence of boundaries. The prototype produces RIR measurements in a fast and reliable manner.
Although some shortcomings were noted in the compact loudspeakers used to produce the sine sweeps and in the accuracy of the indoor localization system, the proposed autonomous measurement system yielded reasonable results. Future work could expand the number of impulse response measurements to further refine the artificial intelligence algorithms.
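The sweep-based measurement idea described above, exciting the room with a sine sweep and deconvolving the recording to obtain the impulse response, can be illustrated with a minimal sketch. The sample rate, sweep parameters, and toy room response below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

fs = 8000            # sample rate in Hz (illustrative, not from the thesis)
dur = 1.0            # sweep duration in seconds
t = np.arange(int(fs * dur)) / fs
f0, f1 = 20.0, 3000.0

# Exponential sine sweep, a common excitation signal for RIR measurement.
rate = np.log(f1 / f0)
sweep = np.sin(2 * np.pi * f0 * dur / rate * (np.exp(t / dur * rate) - 1.0))

# Toy "room": a direct path plus two discrete reflections.
true_rir = np.zeros(512)
true_rir[0], true_rir[100], true_rir[300] = 1.0, 0.5, 0.25

# What a microphone would record: the sweep convolved with the room response.
recorded = np.convolve(sweep, true_rir)

# Recover the impulse response by regularised spectral division.
n = len(recorded) + len(sweep)
S = np.fft.rfft(sweep, n)
R = np.fft.rfft(recorded, n)
rir = np.fft.irfft(R * np.conj(S) / (np.abs(S) ** 2 + 1e-12), n)
# The recovered response shows peaks near samples 0, 100, and 300.
```

In a real measurement the recording also contains noise and loudspeaker distortion, which is where the regularisation term (here a tiny constant) and careful sweep design matter.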
Wayfinding and Navigation for People with Disabilities Using Social Navigation Networks
To achieve safe and independent mobility, people usually depend on published information, prior experience, the knowledge of others, and/or technology to navigate unfamiliar outdoor and indoor environments. Today, due to advances in various technologies, wayfinding and navigation systems and services are commonplace and are accessible on desktop, laptop, and mobile devices. However, despite their popularity and widespread use, current wayfinding and navigation solutions often fail to address the needs of people with disabilities (PWDs). We argue that these shortcomings stem primarily from the ubiquity of the compute-centric approach adopted in these systems and services, which do not benefit from an experience-centric approach. We propose that a hybrid approach combining experience-centric and compute-centric methods will overcome the shortcomings of current wayfinding and navigation solutions for PWDs.
An Outlook into the Future of Egocentric Vision
What will the future be? We wonder! In this survey, we explore the gap
between current research in egocentric vision and the ever-anticipated future,
where wearable computing, with outward-facing cameras and digital overlays, is
expected to be integrated into our everyday lives. To understand this gap, the
article starts by envisaging the future through character-based stories,
showcasing through examples the limitations of current technology. We then
provide a mapping between this future and previously defined research tasks.
For each task, we survey its seminal works, current state-of-the-art
methodologies and available datasets, then reflect on shortcomings that limit
its applicability to future research. Note that this survey focuses on software
models for egocentric vision, independent of any specific hardware. The paper
concludes with recommendations for areas of immediate exploration to unlock
our path to the future of always-on, personalised, and life-enhancing
egocentric vision.
Comment: We invite comments, suggestions and corrections here:
https://openreview.net/forum?id=V3974SUk1