Real-time automated road, lane and car detection for autonomous driving
In this paper, we discuss a vision-based system for autonomous guidance of vehicles. An autonomous intelligent vehicle has to perform a number of functions. Segmenting the road, determining the boundaries to drive within, and recognizing the vehicles and obstacles around are the main tasks in vision-guided vehicle navigation. In this article we propose a set of algorithms that solve road and vehicle segmentation using data from a color camera. The algorithms described here combine gray-value difference and texture-analysis techniques to segment the road from the image; several geometric transformations and contour-processing algorithms are used to segment lanes; and moving cars are extracted with the help of background modeling and estimation. The techniques developed have been tested on real road images, and the results are presented.
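The moving-car extraction step relies on background modeling and estimation. The abstract does not specify the estimator, so the following is only a minimal sketch of the general idea, assuming a hypothetical running-average background model and simple differencing in NumPy:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: blend the new frame in slowly."""
    return (1.0 - alpha) * background + alpha * frame

def moving_mask(background, frame, threshold=25.0):
    """Pixels that differ strongly from the background are flagged as moving."""
    return np.abs(frame - background) > threshold

# Toy demo: a static scene with one bright moving blob (the "car").
h, w = 64, 64
scene = np.full((h, w), 100.0)      # static road surface
bg = scene.copy()                   # assume the model has already converged
frame = scene.copy()
frame[20:30, 40:50] = 200.0         # a car entering the view

mask = moving_mask(bg, frame)
print(mask.sum())                   # area (in pixels) of the detected moving region

bg = update_background(bg, frame)   # slowly absorb the frame into the model
```

In a real system the threshold and blending rate would be tuned per camera, and a more robust per-pixel model (e.g. a Gaussian mixture) would replace the single running average.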
3D Point Capsule Networks
In this paper, we propose 3D point-capsule networks, an auto-encoder designed
to process sparse 3D point clouds while preserving spatial arrangements of the
input data. 3D capsule networks arise as a direct consequence of our novel
unified 3D auto-encoder formulation. Their dynamic routing scheme and the
peculiar 2D latent space deployed by our approach bring in improvements for
several common point cloud-related tasks, such as object classification, object
reconstruction and part segmentation as substantiated by our extensive
evaluations. Moreover, it enables new applications such as part interpolation
and replacement.
Comment: As published in CVPR 2019 (camera-ready version), with supplementary material.
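The "dynamic routing scheme" the abstract mentions builds on standard capsule routing-by-agreement (Sabour et al., 2017). The paper's point-capsule specifics are not reproduced here; the following is a NumPy sketch of the generic routing procedure only:

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(v):
    """Capsule non-linearity: shrinks short vectors toward zero, keeps direction."""
    n2 = (v * v).sum(-1, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + 1e-9)

def dynamic_routing(u_hat, num_iters=3):
    """Routing-by-agreement over prediction vectors.

    u_hat: (num_in, num_out, dim) predictions from input to output capsules.
    Returns (num_out, dim) output capsule vectors.
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))          # routing logits
    for _ in range(num_iters):
        c = softmax(b, axis=1)               # coupling coefficients per input capsule
        s = (c[..., None] * u_hat).sum(0)    # weighted sum into each output capsule
        v = squash(s)
        b = b + (u_hat * v[None]).sum(-1)    # agreement (dot product) updates logits
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(8, 4, 16))          # 8 input capsules, 4 outputs, dim 16
v = dynamic_routing(u_hat)
print(v.shape)                               # (4, 16)
```

Inputs that agree with an output capsule's current vector get larger coupling coefficients on the next iteration, which is what lets capsules preserve part-whole spatial arrangements.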
A Minimalist Approach to Type-Agnostic Detection of Quadrics in Point Clouds
This paper proposes a segmentation-free, automatic and efficient procedure to
detect general geometric quadric forms in point clouds, where clutter and
occlusions are inevitable. Our everyday world is dominated by man-made objects
which are designed using 3D primitives (such as planes, cones, spheres,
cylinders, etc.). These objects are also omnipresent in industrial
environments. This gives rise to the possibility of abstracting 3D scenes
through primitives, thereby positioning these geometric forms as an integral part of perception and high-level 3D scene understanding.
In contrast to the state of the art, where a tailored algorithm treats each primitive type separately, we propose to encapsulate all types in a single robust detection procedure. At the center of our approach lies a closed-form 3D quadric fit, operating in both primal and dual spaces and requiring as few as four oriented points. Around this fit, we design a novel local null-space voting strategy that reduces the four-point case to three. Voting is coupled with the well-known RANSAC paradigm and makes our algorithm orders of magnitude faster than its conventional counterparts. This is the first method capable of performing generic cross-type multi-object primitive detection in difficult scenes. Results on synthetic and real datasets support the validity of our method.

Comment: Accepted for publication at CVPR 201
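The paper's closed-form primal-dual fit from four oriented points is not reproduced here. To illustrate the overall pipeline only, the sketch below substitutes the classical nine-point algebraic quadric fit (null vector of a 10-column design matrix via SVD) inside a vanilla RANSAC loop:

```python
import numpy as np

def quadric_design_row(p):
    """Row of the design matrix for the 10-coefficient quadric x'Qx = 0."""
    x, y, z = p
    return np.array([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z, 1.0])

def fit_quadric(points):
    """Algebraic quadric fit: unit-norm null vector of the design matrix."""
    A = np.stack([quadric_design_row(p) for p in points])
    _, _, vt = np.linalg.svd(A)
    q = vt[-1]
    return q / np.linalg.norm(q)

def ransac_quadric(points, iters=200, eps=1e-3, rng=None):
    """Vanilla RANSAC: sample 9 points, fit, count algebraic inliers."""
    rng = np.random.default_rng(0) if rng is None else rng
    A_all = np.stack([quadric_design_row(p) for p in points])
    best_q, best_inliers = None, 0
    for _ in range(iters):
        sample = points[rng.choice(len(points), 9, replace=False)]
        q = fit_quadric(sample)
        inliers = int((np.abs(A_all @ q) < eps).sum())
        if inliers > best_inliers:
            best_q, best_inliers = q, inliers
    return best_q, best_inliers

# Toy demo: 200 points on the unit sphere (x²+y²+z²-1 = 0) plus 40 clutter points.
rng = np.random.default_rng(1)
sphere = rng.normal(size=(200, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
clutter = rng.uniform(-2, 2, size=(40, 3))
pts = np.vstack([sphere, clutter])

q, n_in = ransac_quadric(pts)
print(n_in >= 200)   # the sphere points should all be counted as inliers
```

The paper's contribution is precisely to shrink this minimal sample from nine plain points to four (then three) oriented points, which is what makes the voting-plus-RANSAC combination so much faster.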
SignAvatars: A Large-scale 3D Sign Language Holistic Motion Dataset and Benchmark
In this paper, we present SignAvatars, the first large-scale multi-prompt 3D
sign language (SL) motion dataset designed to bridge the communication gap for
hearing-impaired individuals. While research on digital communication has grown rapidly, the majority of existing communication technologies primarily cater to spoken or written languages rather than SL, the essential communication method for hearing-impaired communities. Existing SL datasets, dictionaries, and sign language production (SLP) methods are typically limited to 2D, since annotating 3D models and avatars for SL is usually an entirely manual and labor-intensive process conducted by SL experts, often resulting in unnatural avatars. In response to
these challenges, we compile and curate the SignAvatars dataset, which
comprises 70,000 videos from 153 signers, totaling 8.34 million frames,
covering both isolated signs and continuous, co-articulated signs, with
multiple prompts including HamNoSys, spoken language, and words. To yield 3D
holistic annotations, including meshes and biomechanically-valid poses of body,
hands, and face, as well as 2D and 3D keypoints, we introduce an automated
annotation pipeline operating on our large corpus of SL videos. SignAvatars
facilitates various tasks such as 3D sign language recognition (SLR) and the
novel 3D SL production (SLP) from diverse inputs like text scripts, individual
words, and HamNoSys notation. Hence, to evaluate the potential of SignAvatars,
we further propose a unified benchmark of 3D SL holistic motion production. We
believe that this work is a significant step towards bringing the digital world to hearing-impaired communities. Our project page is at https://signavatars.github.io/.

Comment: 9 pages; project page available at https://signavatars.github.io