Registration and Recognition in 3D
The simplest computer vision algorithm can tell you what color it sees when you point it at an object, but asking that computer what it is looking at is a much harder problem. Camera and LiDAR (Light Detection And Ranging) sensors generally provide streams of pixel values, and sophisticated algorithms must be engineered to recognize objects or the environment. Significant effort has been expended by the computer vision community on recognizing objects in color images; however, LiDAR sensors, which sense depth values for pixels instead of color, have been studied less. Recently we have seen a renewed interest in depth data with the democratization brought by consumer depth cameras. Detecting objects in depth data is more challenging in some ways because of the lack of texture and the increased complexity of processing unordered point sets. We present three systems that contribute to solving the object recognition problem from the LiDAR perspective: calibration, registration, and object recognition systems. We propose a novel calibration system that works with both line- and raster-based LiDAR sensors and calibrates them with respect to image cameras; our system can be extended to calibrate LiDAR sensors that do not provide intensity information. We demonstrate a novel system that produces registrations between different LiDAR scans by transforming the input point cloud into a Constellation Extended Gaussian Image (CEGI) and then using this CEGI to estimate the rotational alignment of the scans independently. Finally, we present a method for object recognition that uses local (Spin Images) and global (CEGI) information to recognize cars in a large urban dataset. We present real-world results from these three systems. Compelling experiments show that object recognition systems can gain much information using only 3D geometry. There are many object recognition and navigation algorithms that work on images; the work we propose in this thesis is complementary to those image-based methods rather than competitive with them. This is an important step along the way to more intelligent robots.
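The Extended Gaussian Image underlying CEGI is essentially a histogram of surface normals on the unit sphere, so two scans of the same rigid scene produce EGIs that differ only by the rotation between the scans. As a rough illustration of that core step (a minimal sketch, not the thesis's exact construction; the azimuth/elevation binning and normalization are assumptions):

```python
import numpy as np

def extended_gaussian_image(normals, n_bins=16):
    """Bin unit surface normals into a spherical histogram (a basic EGI).

    normals : (N, 3) array of unit normals, e.g. from local plane fits
              on a LiDAR scan. The azimuth/elevation binning here is a
              simple illustrative choice, not the CEGI construction itself.
    """
    # Spherical coordinates of each normal direction.
    azimuth = np.arctan2(normals[:, 1], normals[:, 0])        # [-pi, pi]
    elevation = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))  # [-pi/2, pi/2]

    # 2D histogram over the sphere's parameterization.
    hist, _, _ = np.histogram2d(
        azimuth, elevation,
        bins=n_bins,
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]],
    )
    return hist / hist.sum()  # normalize so scans of different sizes compare

# Rotational alignment can then be estimated by searching for the rotation
# that best correlates two scans' histograms, independently of translation.
```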
Multi-object Tracking Based on a Novel Feature Image with Multi-modal Information
Multi-object tracking technology plays a crucial role in many applications, such as autonomous vehicles and security monitoring. This paper proposes a multi-object tracking framework based on the multi-modal information of 3D point clouds and color images. At each sampling instant, the 3D point cloud and image acquired by a LiDAR and a camera are fused into a color point cloud, in which objects are detected by the Point-GNN method. Then, a novel height-intensity-density (HID) image is constructed from the bird's-eye view. The HID image faithfully reflects the shapes and materials of objects and effectively avoids the influence of object occlusion, which is helpful for object tracking. In two sequential HID images, a new rotation kernel correlation filter is proposed to predict the objects. Furthermore, an object retention module and an object re-recognition module are developed to overcome object-matching failures in intermediate frames. The proposed method takes full advantage of the multi-modal data and effectively achieves information complementation to improve the accuracy of multi-object tracking. Experiments on the KITTI dataset show that the proposed method has the best performance among existing traditional multi-object tracking methods.
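The HID construction itself is straightforward to sketch: project LiDAR points into a bird's-eye-view grid and fill three channels with height, intensity, and density statistics per cell. A minimal sketch of that idea (the cell size, ranges, and channel normalizations are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def hid_image(points, intensities, x_range=(0, 70), y_range=(-35, 35),
              cell=0.1):
    """Build a height-intensity-density (HID) bird's-eye-view image.

    points      : (N, 3) LiDAR points (x forward, y left, z up).
    intensities : (N,) reflectance values in [0, 1].
    """
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    img = np.zeros((h, w, 3), dtype=np.float32)

    # Keep points inside the grid and compute their cell indices.
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts, ints = points[keep], intensities[keep]
    col = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    row = ((pts[:, 1] - y_range[0]) / cell).astype(int)

    # H channel: max height per cell; I channel: max intensity;
    # D channel: log-scaled point count, a common density normalization.
    # (A real pipeline would offset z so ground points are non-negative.)
    np.maximum.at(img[:, :, 0], (row, col), pts[:, 2])
    np.maximum.at(img[:, :, 1], (row, col), ints)
    np.add.at(img[:, :, 2], (row, col), 1.0)
    img[:, :, 2] = np.log1p(img[:, :, 2]) / np.log(64.0)
    return img
```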
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as low-latency, high-speed, and high-dynamic-range applications. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle to the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
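To make the data format concrete: an event camera emits a sparse stream of (t, x, y, polarity) tuples, and many algorithms begin by aggregating a time window of events into a dense representation. A minimal sketch of that first step (the structured-array layout is an assumption; real drivers and libraries differ):

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate a window of events into a signed event frame.

    events : structured array with fields t (seconds), x, y (pixel
             coordinates), and p (polarity: +1 for a brightness
             increase, -1 for a decrease).
    Each pixel sums the polarities of its events in [t_start, t_end),
    so the frame encodes net brightness change, not absolute intensity.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    window = events[(events["t"] >= t_start) & (events["t"] < t_end)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])
    return frame
```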
On Recognizing Transparent Objects in Domestic Environments Using Fusion of Multiple Sensor Modalities
Current object recognition methods fail on object sets that include diffuse, reflective, and transparent materials, although such objects are very common in domestic scenarios. We show that a combination of cues from multiple sensor modalities, including specular reflectance and the absence of depth information, allows us to capture a larger subset of household objects by extending a state-of-the-art object recognition method. This leads to a significant increase in the robustness of recognition over a larger set of commonly used objects.
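One of the cues named above is concrete enough to sketch: consumer depth sensors typically return no measurement on transparent surfaces, so regions where the depth image has holes are candidate transparent objects. A hedged illustration of that cue alone (the area threshold and connected-components step are assumptions, not the paper's pipeline):

```python
import numpy as np
from scipy import ndimage

def transparent_candidates(depth, min_area=500):
    """Flag connected regions of missing depth as transparency candidates.

    depth : (H, W) depth image where invalid pixels are 0 or NaN, as is
            typical for consumer RGB-D sensors viewing transparent materials.
    Returns an integer label image of candidate regions; min_area is an
    illustrative threshold to suppress small sensor-noise holes.
    """
    missing = ~np.isfinite(depth) | (depth == 0)
    labels, n = ndimage.label(missing)
    # Drop regions smaller than min_area pixels.
    sizes = ndimage.sum(missing, labels, index=np.arange(1, n + 1))
    small = np.isin(labels, np.nonzero(sizes < min_area)[0] + 1)
    labels[small] = 0
    return labels
```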
Grounding semantics in robots for Visual Question Answering
In this thesis I describe an operational implementation of an object detection and description system that is incorporated into an end-to-end Visual Question Answering system, and evaluate it on two visual question answering datasets for compositional language and elementary visual reasoning.
Articulated Clinician Detection Using 3D Pictorial Structures on RGB-D Data
Reliable human pose estimation (HPE) is essential to many clinical applications, such as surgical workflow analysis, radiation safety monitoring, and human-robot cooperation. Proposed methods for the operating room (OR) rely either on foreground estimation using a multi-camera system, which is challenging in real ORs due to color similarities and frequent illumination changes, or on wearable sensors or markers, which are invasive and therefore difficult to introduce into the room. Instead, we propose a novel approach based on Pictorial Structures (PS) and on RGB-D data, which can be easily deployed in real ORs. We extend the PS framework in two ways. First, we build robust and discriminative part detectors using both color and depth images. We also present a novel descriptor for depth images, called histogram of depth differences (HDD). Second, we extend PS to 3D by proposing 3D pairwise constraints and a new method that makes exact inference tractable. Our approach is evaluated for pose estimation and clinician detection on a challenging RGB-D dataset recorded in a busy operating room during live surgeries. We conduct a series of experiments to study the different part detectors in conjunction with the various 2D or 3D pairwise constraints. Our comparisons demonstrate that 3D PS with RGB-D part detectors significantly improves the results in a visually challenging operating environment. The supplementary video is available at https://youtu.be/iabbGSqRSg
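The HDD descriptor named above suggests a simple structure: histogram the depth differences between a patch's center and its surrounding pixels, which makes the descriptor depend on local shape rather than on a body part's absolute distance from the camera. A minimal sketch under that reading (bin edges and the center-reference choice are assumptions, not the paper's exact design):

```python
import numpy as np

def hdd_descriptor(depth_patch, n_bins=16, max_diff=0.5):
    """Histogram of depth differences (HDD) over one image patch.

    depth_patch : (h, w) depth values in meters for a candidate part.
    Differencing against the patch center, rather than using raw depths,
    yields a descriptor that is largely invariant to camera distance.
    """
    center = depth_patch[depth_patch.shape[0] // 2,
                         depth_patch.shape[1] // 2]
    diffs = (depth_patch - center).ravel()
    diffs = diffs[np.isfinite(diffs)]  # drop invalid depth readings
    hist, _ = np.histogram(diffs, bins=n_bins, range=(-max_diff, max_diff))
    return hist / max(hist.sum(), 1)  # normalize for comparability
```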