Autonomous navigation for guide following in crowded indoor environments
The requirements for assisted living are changing rapidly as the number of patients over the age of 60 continues to increase. This rise places considerable stress on nurse practitioners, who must care for more patients than they are able to manage. As this trend is
expected to continue, new technology will be required to help care for patients. Mobile
robots present an opportunity to help alleviate the stress on nurse practitioners by
monitoring and performing remedial tasks for elderly patients. In order to produce
mobile robots with the ability to perform these tasks, however, many challenges must be
overcome.
The hospital environment requires a high level of safety to prevent patient injury. Any
facility that uses mobile robots, therefore, must be able to ensure that no harm will come
to patients whilst in a care environment. This requires the robot to build a high level of understanding about the environment and the people in close proximity to the robot.
Hitherto, most mobile robots have used vision-based sensors or 2D laser range finders.
3D time-of-flight sensors have recently been introduced and provide dense 3D point
clouds of the environment at real-time frame rates. This provides mobile robots with
previously unavailable dense information in real time. In this thesis, I investigate the use of time-of-flight cameras for mobile robot navigation in crowded environments. A
unified framework to allow the robot to follow a guide through an indoor environment
safely and efficiently is presented. Each component of the framework is analyzed in
detail, with real-world scenarios illustrating its practical use.
Time-of-flight cameras are relatively new sensors and therefore have inherent problems that must be overcome before consistent, accurate data can be obtained. In this thesis, I propose a novel and practical probabilistic framework to overcome many of these problems. The framework fuses multiple depth maps with color information, forming a reliable and consistent view of the world. For the robot to interact with the
environment, contextual information is required. To this end, I propose a region-growing
segmentation algorithm that groups points based on surface characteristics, namely surface normal and surface curvature. The segmentation process creates a distinct set of surfaces; however, these provide only limited contextual information for interaction. Therefore, a novel classifier using spherical harmonics is proposed to differentiate people from all other objects.
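The abstract does not reproduce the algorithm itself, but a region grower of the kind described, which merges neighboring points whose surface normals agree and lets only low-curvature (smooth) points keep growing a region, can be sketched as follows. The thresholds, the precomputed `normals` and `curvature` arrays, and the `neighbors` adjacency lists are illustrative assumptions, not the thesis's actual parameters:

```python
import numpy as np
from collections import deque

def region_grow(points, normals, curvature, neighbors,
                angle_thresh=np.deg2rad(10.0), curv_thresh=0.05):
    """Group points into surfaces by normal angle and curvature.
    `neighbors[i]` lists the indices adjacent to point i (e.g. from a k-d tree)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    order = np.argsort(curvature)            # flattest points seed first
    region = 0
    for seed in order:
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in neighbors[i]:
                if labels[j] != -1:
                    continue
                # angle between unit normals (sign-insensitive)
                cos_a = np.clip(abs(np.dot(normals[i], normals[j])), -1.0, 1.0)
                if np.arccos(cos_a) < angle_thresh:
                    labels[j] = region
                    if curvature[j] < curv_thresh:   # smooth points keep growing
                        queue.append(j)
        region += 1
    return labels

# toy cloud: three "floor" points (normal +z) chained to three "wall" points (normal +x)
normals = np.array([[0.0, 0.0, 1.0]] * 3 + [[1.0, 0.0, 0.0]] * 3)
points = np.zeros((6, 3))
curvature = np.full(6, 0.01)
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]
labels = region_grow(points, normals, curvature, neighbors)
```

On this toy cloud the 90° normal difference at the floor–wall boundary stops the growth, yielding two regions, one per plane.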
The added ability to identify people allows the robot to find potential candidates to
follow. However, for safe navigation, the robot must continuously track all visible
objects to obtain positional and velocity information. A multi-object tracking system is
investigated to track visible objects reliably using multiple cues: shape and color. The
tracking system allows the robot to react to the dynamic nature of people by building an
estimate of the motion flow. This flow provides the robot with the necessary information
to determine where and at what speeds it is safe to drive. In addition, a novel search
strategy is proposed to allow the robot to recover a guide who has left the field of view.
To achieve this, a search map is constructed with areas of the environment ranked
according to how likely they are to reveal the guide’s true location. Then, the robot can
approach the most likely search area to recover the guide. Finally, all components
presented are combined to follow a guide through an indoor environment. The results achieved demonstrate the efficacy of the proposed components.
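The search-map idea above can be illustrated with a minimal sketch, assuming a simple grid of cells ranked by a Gaussian centred on the guide's last pose extrapolated by its last velocity; the thesis's actual ranking (the abstract gives no details) may well use richer cues such as visibility and occlusion:

```python
import numpy as np

def build_search_map(shape, last_seen, velocity, dt=1.0, sigma=2.0):
    """Rank grid cells by how likely they are to reveal the lost guide.
    Assumption: likelihood falls off as a Gaussian around the guide's
    predicted position (last seen cell + last velocity * dt)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    pred_y = last_seen[0] + velocity[0] * dt
    pred_x = last_seen[1] + velocity[1] * dt
    d2 = (ys - pred_y) ** 2 + (xs - pred_x) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def most_likely_cell(search_map):
    """The cell the robot should approach first."""
    return np.unravel_index(np.argmax(search_map), search_map.shape)

# guide last seen at cell (4, 4) moving one cell per step in +y
smap = build_search_map((10, 10), last_seen=(4, 4), velocity=(1, 0))
goal = most_likely_cell(smap)
```

The robot would drive toward `goal`, re-rank the map as cells are observed and ruled out, and repeat until the guide reappears.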
Optimizing Lead Time in Fall Detection for a Planar Bipedal Robot
For legged robots to operate in complex terrains, they must be robust to the
disturbances and uncertainties they encounter. This paper contributes to
enhancing robustness through the design of fall detection/prediction algorithms
that will provide sufficient lead time for corrective motions to be taken.
Falls can be caused by abrupt (fast-acting), incipient (slow-acting), or
intermittent (non-continuous) faults. Early fall detection is a challenging
task due to the masking effects of controllers (through their disturbance
attenuation actions), the inverse relationship between lead time and false
positive rates, and the temporal behavior of the faults/underlying factors. In
this paper, we propose a fall detection algorithm that is capable of detecting
both incipient and abrupt faults while maximizing lead time and meeting desired
thresholds on the false positive and false negative rates.
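The paper's own algorithm is not given in the abstract; as an illustration of the lead-time versus false-positive trade-off it describes, a one-sided CUSUM test (a standard change-detection tool, not necessarily the paper's method) behaves as follows: a small drift term suppresses noise, and raising the alarm threshold lowers the false-positive rate at the cost of lead time.

```python
def cusum_detect(residuals, drift=0.05, threshold=1.0):
    """One-sided CUSUM over a fault residual signal.
    Accumulates evidence of a slow (incipient) drift while still firing
    quickly on an abrupt jump. Returns the first sample index at which
    the alarm fires, or None if it never does."""
    s = 0.0
    for k, r in enumerate(residuals):
        s = max(0.0, s + r - drift)   # drift term discounts normal noise
        if s > threshold:
            return k
    return None

# incipient fault: small 0.2 offset starting at sample 10 -> alarm a few samples later
slow_alarm = cusum_detect([0.0] * 10 + [0.2] * 20)
# abrupt fault: large 2.0 jump at sample 5 -> alarm immediately
fast_alarm = cusum_detect([0.0] * 5 + [2.0] * 5)
```

Here the incipient fault needs several samples of accumulated evidence before the alarm fires, while the abrupt fault trips it on the first faulty sample, mirroring the detection-delay behavior the paper discusses.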
Fall Prediction for Bipedal Robots: The Standing Phase
This paper presents a novel approach to fall prediction for bipedal robots,
specifically targeting the detection of potential falls while standing caused
by abrupt, incipient, and intermittent faults. Leveraging a 1D convolutional
neural network (CNN), our method aims to maximize lead time for fall prediction
while minimizing false positive rates. The proposed algorithm uniquely
integrates the detection of various fault types and estimates the lead time for
potential falls. Our contributions include the development of an algorithm
capable of detecting abrupt, incipient, and intermittent faults in full-sized
robots, its implementation using both simulation and hardware data for a
humanoid robot, and a method for estimating lead time. Evaluation metrics,
including false positive rate, lead time, and response time, demonstrate the
efficacy of our approach. In particular, our model achieves impressive lead times and response times across different fault scenarios with a false positive rate of zero. The findings of this study hold significant implications for enhancing the safety and reliability of bipedal robotic systems.
Comment: Submitted to ICRA 2024. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
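As a toy analogue of the 1D CNN front end the paper describes, a single hand-set difference filter over a balance signal (e.g. center-of-mass position), followed by a ReLU and a global max-pool, produces a scalar fall score; in the actual network the filters are learned and there are many layers, so this sketch only illustrates the structure, not the paper's model:

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation, as in CNN layers)."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def fall_score(com_signal):
    """Hypothetical scalar score: large when the signal jumps abruptly."""
    # a single hand-set difference filter standing in for a learned one
    edge = conv1d(np.asarray(com_signal, dtype=float), np.array([-1.0, 1.0]))
    act = np.maximum(edge, 0.0)          # ReLU
    return float(act.max())              # global max-pool -> fall score

score = fall_score([0.0, 0.0, 0.0, 1.0, 1.0])   # abrupt shift mid-signal
```

A threshold on such a score, tuned on labeled fall and non-fall windows, is what trades lead time against the false-positive rate in this family of detectors.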
Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations
Automated methods of real-time, unobtrusive, human ambulation, activity, and wellness monitoring and data analysis using various algorithmic techniques have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. The research studies have resulted in a large amount of literature. This paper presents a holistic articulation of the research studies and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device framework and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and comprehensive review of the literature in the area in order to identify research gaps and prioritize future research directions.
Advances in Human-Robot Interaction
Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.