Radar and RGB-depth sensors for fall detection: a review
This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of them falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users' acceptance and compliance, compared with other sensor technologies, such as video-cameras, or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors discussed in this paper.
Fast Graph-Based Object Segmentation for RGB-D Images
Object segmentation is an important capability for robotic systems, in
particular for grasping. We present a graph-based approach for the
segmentation of simple objects from RGB-D images. We are interested in
segmenting objects with large variety in appearance, from lack of texture to
strong textures, for the task of robotic grasping. The algorithm does not rely
on image features or machine learning. We propose a modified Canny edge
detector for extracting robust edges by using depth information and two simple
cost functions for combining color and depth cues. The cost functions are used
to build an undirected graph, which is partitioned using the concept of
internal and external differences between graph regions. The partitioning is
fast with O(N log N) complexity. We also discuss ways to deal with missing depth
information. We test the approach on different publicly available RGB-D object
datasets, such as the Rutgers APC RGB-D dataset and the RGB-D Object Dataset,
and compare the results with those of other existing methods.
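The partitioning described above follows the classic internal/external-difference criterion: sort the graph edges by weight and merge two regions only when the connecting edge is no heavier than the internal variation of either region plus a size-dependent tolerance. The sketch below illustrates that idea; the combined color/depth weight and the union-find implementation are illustrative assumptions, not the paper's actual cost functions.

```python
def combined_weight(color_diff, depth_diff, alpha=0.5):
    # Hypothetical cost combining color and depth cues; the paper
    # defines its own two cost functions.
    return alpha * color_diff + (1 - alpha) * depth_diff

def segment(num_pixels, edges, k=1.0):
    """Felzenszwalb-Huttenlocher style graph partitioning.
    edges: list of (weight, u, v) over a pixel grid.
    Regions merge when the external difference (edge weight) is
    below the internal difference of both regions plus k/size.
    The dominant cost is the edge sort, hence O(N log N)."""
    parent = list(range(num_pixels))
    size = [1] * num_pixels
    internal = [0.0] * num_pixels  # max internal edge weight per region

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if w <= min(internal[ru] + k / size[ru],
                    internal[rv] + k / size[rv]):
            parent[rv] = ru
            size[ru] += size[rv]
            internal[ru] = max(internal[ru], internal[rv], w)
    return [find(i) for i in range(num_pixels)]
```

On a toy 4-pixel chain with two cheap edges and one expensive edge, the cheap pairs merge and the expensive edge keeps the two regions apart.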
Spinning-Up the Envelope Before Entering a Common Envelope Phase
We calculate the orbital evolution of binary systems where the primary star
is an evolved red giant branch (RGB) star, while the secondary star is a
low-mass main sequence (MS) star or a brown dwarf. The evolution starts with a
tidal interaction that causes the secondary to spiral in. Then either a common
envelope (CE) is formed in a very short time, or alternatively the system
reaches synchronization and the spiraling-in process substantially slows down.
Some of the latter systems later enter a CE phase. We find that for a large
range of system parameters, binary systems reach stable synchronized orbits
before the onset of a CE phase. Such stable synchronized orbits allow the RGB
star to lose mass prior to the onset of the CE phase. Even after the secondary
enters the giant envelope, the rotational velocity is high enough to cause an
enhanced mass-loss rate. Our results imply that it is crucial to include the
pre-CE evolution when studying the outcome of the CE phase. We find that many
more systems survive the CE phase than would be the case if these preceding
spin-up and mass-loss phases had not been taken into account. Although we have
made the calculations for RGB stars, the results have implications for other
evolved stars that interact with close companions. Comment: New Astronomy, in press.
RGBDTAM: A Cost-Effective and Accurate RGB-D Tracking and Mapping System
Simultaneous Localization and Mapping using RGB-D cameras has been a fertile
research topic in the last decade, due to the suitability of such sensors for
indoor robotics. In this paper we propose a direct RGB-D SLAM algorithm with
state-of-the-art accuracy and robustness at a low cost. Our experiments in the
RGB-D TUM dataset [34] effectively show a better accuracy and robustness in CPU
real time than direct RGB-D SLAM systems that make use of the GPU. The key
ingredients of our approach are mainly two. Firstly, the combination of a
semi-dense photometric and dense geometric error for the pose tracking (see
Figure 1), which we demonstrate to be the most accurate alternative. And
secondly, a model of the multi-view constraints and their errors in the mapping
and tracking threads, which adds extra information over other approaches. We
release the open-source implementation of our approach. The reader is
referred to a video with our results for a more illustrative visualization of
its performance.
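The first key ingredient above, combining a semi-dense photometric error with a dense geometric (depth) error for pose tracking, can be sketched as a weighted sum of robustified residual terms. The Huber weighting, the thresholds, and the balance factor `lambda_geo` below are illustrative assumptions, not RGBDTAM's actual formulation.

```python
import numpy as np

def huber(r, delta):
    # Huber robust cost: quadratic near zero, linear in the tails,
    # which down-weights outlier residuals.
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def combined_error(photometric_res, geometric_res, lambda_geo=0.1):
    """Hypothetical combined tracking cost: a semi-dense photometric
    term (intensity differences at high-gradient pixels) plus a
    dense geometric term (depth differences), balanced by lambda_geo."""
    return (huber(photometric_res, 5.0).sum()
            + lambda_geo * huber(geometric_res, 0.05).sum())
```

In a tracking loop, the pose estimate would be the minimizer of this cost over the camera motion parameters; here we only evaluate the cost itself.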
Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras
Color-depth cameras (RGB-D cameras) have become the primary sensors in most
robotics systems, from service robotics to industrial robotics applications.
Typical consumer-grade RGB-D cameras are provided with a coarse intrinsic and
extrinsic calibration that generally does not meet the accuracy requirements
needed by many robotics applications (e.g., highly accurate 3D environment
reconstruction and mapping, high precision object recognition and localization,
...). In this paper, we propose a human-friendly, reliable and accurate
calibration framework that enables easy estimation of both the intrinsic and
extrinsic parameters of a general color-depth sensor couple. Our approach is
based on a novel two-component error model. This model unifies the error
sources of RGB-D pairs based on different technologies, such as
structured-light 3D cameras and time-of-flight cameras. Our method provides
some important advantages compared to other state-of-the-art systems: it is
general (i.e., well suited for different types of sensors), based on an easy
and stable calibration protocol, provides a greater calibration accuracy, and
has been implemented within the ROS robotics framework. We report detailed
experimental validations and performance comparisons to support our statements.
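A two-component depth error model can be pictured as a global parametric term (a systematic bias shared by all pixels) composed with a per-pixel local correction map. The linear global term and least-squares fit below are a minimal sketch under that assumption, not the paper's actual model.

```python
import numpy as np

def fit_global_term(z_raw, z_true):
    """Fit z_true ≈ a * z_raw + b by least squares: a hypothetical
    global (sensor-wide) systematic depth correction."""
    A = np.stack([z_raw, np.ones_like(z_raw)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, z_true, rcond=None)
    return a, b

def correct_depth(z_raw, a, b, local_map=1.0):
    # Apply the global correction, then an optional per-pixel local
    # undistortion map (second component of the model).
    return (a * z_raw + b) * local_map
```

Calibrating against ground-truth depths of a known target would yield `(a, b)`; consumer RGB-D sensors typically ship with only a coarse factory version of such parameters.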
Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks
RGB-D object recognition systems improve their predictive performances by
fusing color and depth information, outperforming neural network architectures
that rely solely on colors. While RGB-D systems are expected to be more robust
to adversarial examples than RGB-only systems, they have also been proven to be
highly vulnerable, and remain so even when the adversarial
examples are generated by altering only the original images' colors. Several
works have highlighted the vulnerability of RGB-D systems; however, technical
explanations for this weakness are lacking. Hence, in our work, we
bridge this gap by investigating the learned deep representation of RGB-D
systems, discovering that color features make the function learned by the
network more complex and, thus, more sensitive to small perturbations. To
mitigate this problem, we propose a defense based on a detection mechanism that
makes RGB-D systems more robust against adversarial examples. We empirically
show that this defense improves the performance of RGB-D systems against
adversarial examples even when they are computed ad-hoc to circumvent this
detection mechanism, and that it is also more effective than adversarial training. Comment: Accepted for publication in the Information Sciences journal.
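A detection-based defense of the kind described above typically scores an input by how far its deep-feature activations fall from the training distribution and rejects inputs above a threshold. The Mahalanobis-style distance with a diagonal covariance below is a generic sketch of that idea; it is not the paper's specific detector.

```python
import numpy as np

class FeatureDetector:
    """Hypothetical adversarial-input detector: flags inputs whose
    feature activations are far from the clean training statistics,
    using a diagonal-covariance Mahalanobis-style distance."""

    def fit(self, feats, quantile=0.99):
        # Estimate per-dimension mean/variance on clean features and
        # set the rejection threshold at a high quantile of clean scores.
        self.mu = feats.mean(axis=0)
        self.var = feats.var(axis=0) + 1e-8
        self.thresh = np.quantile(self._dist(feats), quantile)
        return self

    def _dist(self, feats):
        return (((feats - self.mu) ** 2) / self.var).sum(axis=1)

    def is_adversarial(self, feats):
        # True for inputs scoring above the clean-data threshold.
        return self._dist(feats) > self.thresh
```

An adaptive attacker aware of the detector would also have to keep its perturbed features near the clean statistics, which is the constraint the paper's evaluation against ad-hoc attacks probes.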
Monocular SLAM Supported Object Recognition
In this work, we develop a monocular SLAM-aware object recognition system
that is able to achieve considerably stronger recognition performance, as
compared to classical object recognition systems that function on a
frame-by-frame basis. By incorporating several key ideas including multi-view
object proposals and efficient feature encoding methods, our proposed system is
able to detect and robustly recognize objects in its environment using a single
RGB camera in near-constant time. Through experiments, we illustrate the
utility of using such a system to effectively detect and recognize objects,
incorporating multiple object viewpoint detections into a unified prediction
hypothesis. The performance of the proposed recognition system is evaluated on
the UW RGB-D Dataset, showing strong recognition performance and scalable
run-time performance compared to current state-of-the-art recognition systems. Comment: Accepted to appear at Robotics: Science and Systems 2015, Rome, Italy.
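Incorporating multiple object viewpoint detections into a unified prediction hypothesis can be sketched as naive-Bayes fusion: sum per-view class log-probabilities for the same tracked object and take the argmax. This fusion rule is an illustrative assumption, not the paper's exact aggregation scheme.

```python
import numpy as np

def fuse_view_predictions(view_logprobs):
    """Combine per-view class log-probabilities (shape: views x classes)
    for one SLAM-tracked object into a single class prediction.
    Summing log-likelihoods corresponds to a naive-Bayes product of
    per-view posteriors under a uniform class prior."""
    total = np.sum(view_logprobs, axis=0)
    return int(np.argmax(total))
```

A single ambiguous view can thus be outvoted by the evidence accumulated over the other viewpoints the SLAM map associates with the same object.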