Multimodal pose estimation: a robust approach using 2D and 3D lines
Camera pose estimation consists in determining the position and orientation of a camera with respect to a reference frame. In mobile robotics, multimodality, i.e. the use of several sensor types, is often a requirement for solving complex tasks. However, benefiting from multimodality generally requires knowing the orientation and position, i.e. the pose, of each sensor with respect to a common frame. The work presented in this PhD thesis focuses on robust pose estimation for multimodal setups combining camera and LiDAR sensors, and makes two major contributions. First, we introduce a pose estimation algorithm relying on 2D-3D line correspondences and a known vertical direction. Second, we present two outlier rejection and line-pairing methods based on the well-known RANSAC algorithm. Our methods use the vertical direction to reduce the number of line pairs required to two and one, i.e. RANSAC2 and RANSAC1, lowering the computational cost of the problem. A robustness evaluation of our contributions on simulated and real data shows state-of-the-art results.
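The known vertical direction is what allows the minimal number of line pairs to drop in the abstract above: once the vertical axes of the two frames are aligned, only a rotation about that axis (plus a translation) remains to be estimated. A minimal numpy sketch of that alignment step, not the thesis's actual solver, with all names ours:

```python
import numpy as np

def rotation_aligning_vertical(v_src, v_dst):
    """Rotation matrix taking unit vector v_src onto unit vector v_dst
    (Rodrigues formula). With the vertical known in both frames, applying
    this rotation leaves only a 1-DOF rotation about the vertical axis
    to estimate, which is why fewer line correspondences are needed."""
    v_src = v_src / np.linalg.norm(v_src)
    v_dst = v_dst / np.linalg.norm(v_dst)
    axis = np.cross(v_src, v_dst)
    s = np.linalg.norm(axis)            # sin of the angle between them
    c = float(np.dot(v_src, v_dst))     # cos of the angle between them
    if s < 1e-12:
        if c > 0:
            return np.eye(3)            # already aligned
        # antiparallel: 180-degree turn about any axis orthogonal to v_src
        p = np.array([1.0, 0.0, 0.0])
        if abs(v_src[0]) > 0.9:
            p = np.array([0.0, 1.0, 0.0])
        u = np.cross(v_src, p)
        u /= np.linalg.norm(u)
        return 2.0 * np.outer(u, u) - np.eye(3)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)
```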
Deep Learning-Based Object Detection, Localisation and Tracking for Smart Wheelchair Healthcare Mobility
This paper deals with the development of an Advanced Driver Assistance System (ADAS) for a smart electric wheelchair, aimed at improving the autonomy of disabled people. Our use case, built from a formal clinical study, is based on the detection, depth estimation, localization and tracking of objects in the wheelchair's indoor environment, namely doors and door handles. The aim of this work is to provide a perception layer to the wheelchair, thereby enabling the detection of these key objects in its immediate surroundings and the construction of a short-lifespan semantic map. First, we present an adaptation of the YOLOv3 object detection algorithm to our use case. Then, we present our depth estimation approach using an Intel RealSense camera. Finally, as the third and last step of our approach, we present our 3D object tracking approach based on the SORT algorithm. To validate these developments, we carried out several experiments in a controlled indoor environment. Detection, distance estimation and object tracking are evaluated on our own dataset, which includes doors and door handles.
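The tracking step above builds on SORT, whose core idea is to associate current detections with existing tracks frame by frame. A simplified, illustrative sketch of that association step (greedy IoU matching stands in for SORT's Kalman prediction and Hungarian assignment; all names are ours, not from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def associate(tracks, detections, iou_min=0.3):
    """Greedily match detections to tracks by decreasing IoU.
    Returns (matches, unmatched_track_ids, unmatched_detection_ids);
    unmatched detections would spawn new tracks, unmatched tracks age out."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)),
                   reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_min:
            break                      # remaining pairs overlap too little
        if ti in used_t or di in used_d:
            continue                   # track or detection already taken
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    unmatched_t = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_d = [i for i in range(len(detections)) if i not in used_d]
    return matches, unmatched_t, unmatched_d
```

The real SORT additionally predicts each track's box forward with a Kalman filter before matching, which this sketch omits for brevity.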
Vision based vehicle relocalization in 3D line-feature map using Perspective-n-Line with a known vertical direction
Common approaches to vehicle localization propose to match LiDAR data or 2D image features to a prior 3D LiDAR map. However, these methods require both heavy computational power, often provided by a GPU, and a first rough localization estimate via GNSS to run online. Moreover, storing and accessing dense 3D LiDAR maps can be challenging for city-wide coverage.

In this paper, we address the problem of global camera relocalization in a prior 3D line-feature map from a single image, in a GNSS-denied context and with no prior pose estimate. We propose a dual contribution. (1) We introduce a novel pose estimation method from lines (i.e. Perspective-n-Line, or PnL) with a known vertical direction. Our method benefits from a Gauss-Newton optimization scheme to compensate for sensor-induced vertical direction errors and refine the overall pose. Our algorithm requires at least 3 lines to output a pose (P3L) and needs no reformulation to operate with a higher number of lines. (2) We propose a RANSAC (RANdom SAmple Consensus) 2D-3D line matching and outlier removal algorithm requiring only one 2D-3D line pair to operate, i.e. RANSAC1. Our method reduces the number of iterations required to match features and can easily be modified to exhaustively test all feature combinations.

We evaluate the robustness of our algorithms on synthetic data and on a challenging sub-sequence of the KITTI dataset.
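The RANSAC1 idea above can be illustrated with a generic one-sample RANSAC skeleton: each hypothesis is built from a single candidate 2D-3D pair and scored by its consensus set. This is not the paper's algorithm; `solve_from_pair` and `residual` are placeholders standing in for the vertical-direction pose solver and a reprojection error, and scoring over all cross-pairs is a simplification:

```python
import random

def ransac1(feats_2d, feats_3d, solve_from_pair, residual,
            inlier_thresh, n_iters=200, seed=0):
    """One-sample RANSAC: hypothesize a model from a single 2D-3D
    candidate pair, then count the candidate pairs it explains.
    Returns the best model and its inlier pairs."""
    rng = random.Random(seed)
    # All cross-combinations are candidate matches (pairing is unknown).
    candidates = [(f2, f3) for f2 in feats_2d for f3 in feats_3d]
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        f2, f3 = rng.choice(candidates)
        model = solve_from_pair(f2, f3)    # minimal solver: one pair suffices
        if model is None:
            continue
        inliers = [(a, b) for a, b in candidates
                   if residual(model, a, b) < inlier_thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```

As a toy usage example, treating "features" as scalars related by an unknown offset shows the consensus mechanism: with `feats_2d = [1, 2, 3, 50]`, `feats_3d = [11, 12, 13, 99]`, `solve_from_pair = lambda a, b: b - a` and `residual = lambda m, a, b: abs((b - a) - m)`, the loop recovers the offset 10 supported by three inlier pairs, rejecting the outlier. Because the hypothesis needs only one sample, the loop needs far fewer iterations than multi-sample RANSAC to hit an all-inlier draw.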
Camera pose estimation based on PnL with a known vertical direction
AI-based Environment Perception for Autonomous Vehicle Parameters Estimation. In: State Estimation and Control of Autonomous Connected Vehicles: Advances and Challenges.
Self-Supervised Sidewalk Perception Using Fast Video Semantic Segmentation for Robotic Wheelchairs in Smart Mobility
The real-time segmentation of sidewalk environments is critical to achieving autonomous navigation for robotic wheelchairs in urban territories. Robust, real-time video semantic segmentation offers an apt solution for advanced visual perception in such complex domains. The key to this proposition is a method with lightweight flow estimation and reliable feature extraction. We address this by selecting an approach based on recent trends in video segmentation. Although these approaches demonstrate efficient and cost-effective segmentation performance in cross-domain implementations, they require additional procedures to put their striking characteristics into practical use. We use our method to develop a visual perception technique for urban sidewalk environments for the robotic wheelchair. We generate a collection of synthetic scenes in a blended target distribution to train and validate our approach. Experimental results show that our method improves prediction accuracy on our benchmark with a tolerable loss of speed and without additional overhead. Overall, our technique serves as a reference for transferring and developing perception algorithms for cross-domain visual perception applications with less downtime.