24 research outputs found

    Multimodality semantic segmentation based on polarization and color images

    Semantic segmentation assigns a meaningful class label to every pixel in an image. It enables intelligent devices to understand the scene and has received considerable attention in recent years. Traditional imaging systems typically apply such methods to RGB, RGB-D, or RGB combined with geometric information. However, in outdoor applications, strong reflections or poor illumination can hide the true shape or texture of objects, limiting the performance of semantic segmentation algorithms. To tackle this problem, this paper adopts polarization imaging, which provides complementary information by describing imperceptible light properties that vary across materials. For acceleration, SLIC superpixel segmentation is used to speed up the system. HOG and LBP features are extracted from both the color and the polarization images. After quantization using visual codebooks, a Joint Boosting classifier is trained to label each pixel from the quantized features. The proposed method was evaluated on both a Day-set and a Dusk-set. The experimental results show that the polarization setup provides complementary information that improves semantic segmentation accuracy. In particular, a large improvement on the Dusk-set demonstrates its potential for intelligent vehicle applications under dark illumination conditions.
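    As a rough illustration of the pipeline described in this abstract, the sketch below chains SLIC superpixels, HOG/LBP descriptors, a visual codebook and a boosting classifier using scikit-image and scikit-learn. It is a minimal sketch, not the authors' code: the Joint Boosting classifier is stood in for by scikit-learn's GradientBoostingClassifier, and the file names, codebook size and other parameters are hypothetical.

    import numpy as np
    from skimage import io, color
    from skimage.segmentation import slic
    from skimage.feature import hog, local_binary_pattern
    from sklearn.cluster import KMeans
    from sklearn.ensemble import GradientBoostingClassifier

    def superpixel_descriptors(gray, segments, n_bins=32):
        """One HOG-histogram + LBP-histogram descriptor per superpixel."""
        lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
        descs = []
        for sp in np.unique(segments):
            mask = segments == sp
            ys, xs = np.nonzero(mask)
            patch = gray[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            # HOG over the superpixel's bounding box, histogrammed to a fixed length
            h = (hog(patch, pixels_per_cell=(8, 8), cells_per_block=(1, 1))
                 if min(patch.shape) >= 8 else np.zeros(9))
            hog_hist, _ = np.histogram(h, bins=n_bins, range=(0, 1))
            # LBP histogram restricted to the superpixel's own pixels
            lbp_hist, _ = np.histogram(lbp[mask], bins=n_bins, range=(0, n_bins))
            descs.append(np.hstack([hog_hist, lbp_hist]))
        return np.asarray(descs, dtype=float)

    rgb = io.imread("day_color.png")        # hypothetical color frame
    pol = io.imread("day_polar.png")        # hypothetical polarization frame (RGB-encoded)
    labels = np.load("day_labels.npy")      # hypothetical per-pixel ground-truth classes

    segments = slic(rgb, n_segments=400, compactness=10)
    X = np.hstack([superpixel_descriptors(color.rgb2gray(rgb), segments),
                   superpixel_descriptors(color.rgb2gray(pol), segments)])
    y = np.array([np.bincount(labels[segments == sp]).argmax()
                  for sp in np.unique(segments)])        # majority label per superpixel

    codebook = KMeans(n_clusters=64, n_init=10).fit(X)   # visual codebook
    X_q = codebook.transform(X)             # quantize: distances to the 64 visual words
    clf = GradientBoostingClassifier().fit(X_q, y)

    pred = clf.predict(codebook.transform(X))            # one class per superpixel
    lut = np.zeros(segments.max() + 1, dtype=pred.dtype)
    lut[np.unique(segments)] = pred
    semantic_map = lut[segments]            # dense per-pixel label map

    Labelling superpixels rather than raw pixels is what gives the speed-up mentioned above: the classifier runs once per superpixel, and its decision is painted back onto all pixels of that superpixel.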

    Geometric-based segmentation of polarization-encoded images


    Vers une plate-forme de réalité mixte pour les robots mobiles autonomes

    RJCIA 2022. Training mobile robots for autonomous navigation requires simulating many varied scenarios that the robot is not used to. Consequently, transferring algorithms from simulation to reality can be risky because of the gap between reality and simulation. In this article, we develop a first version of a mixed-reality platform for mobile robots whose perception is vision-based. Initial tests let a mobile robot visualize the fusion of two synchronized environments through an RGB-D camera during navigation.

    Image filtering in catadioptric plane


    Building a vision-based mixed-reality framework for autonomous driving navigation

    Testing autonomous driving algorithms on mobile systems in simulation is an essential step to validate the model and prepare the vehicle for a wide range of (potentially unexpected and critical) conditions. Transferring the model from simulation to reality can be challenging because of the reality gap. Mixed-reality environments enable the evaluation of models on actual vehicles with limited financial and safety risks. Additionally, by allowing quicker testing and debugging of mobile robots, they can reduce the system's development costs. This paper presents preliminary work towards an autonomous navigation framework based on RGB-D cameras. We use an augmentation approach to represent objects from two contexts in a single environment. The first experiments use the KITTI dataset; the capabilities of our system were then tested on real data by extracting depth maps from a ZED2 camera. Finally, we assess our fusion process using a pre-trained object detection model.
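    The abstract above does not include the fusion code itself; as one plausible reading of the augmentation approach it describes, the sketch below composites a real and a virtual RGB-D frame by keeping, at each pixel, whichever source is closer to the camera. The array layout, the metre-scaled depth values and the zero-means-invalid convention are assumptions for illustration, not the paper's actual interface.

    import numpy as np

    def fuse_rgbd(real_rgb, real_depth, virt_rgb, virt_depth):
        """Depth-ordered compositing: keep the color of whichever source is closer."""
        real_depth = real_depth.astype(float).copy()
        virt_depth = virt_depth.astype(float).copy()
        # treat missing depth (zero or negative) as infinitely far so the other source wins
        real_depth[real_depth <= 0] = np.inf
        virt_depth[virt_depth <= 0] = np.inf
        virt_wins = virt_depth < real_depth                  # H x W occlusion mask
        fused_rgb = np.where(virt_wins[..., None], virt_rgb, real_rgb)
        fused_depth = np.minimum(real_depth, virt_depth)
        return fused_rgb, fused_depth

    # usage with hypothetical 480x640 frames, both already registered to the same camera
    h, w = 480, 640
    real_rgb, real_depth = np.zeros((h, w, 3), np.uint8), np.full((h, w), 5.0)      # e.g. ZED2 depth in metres
    virt_rgb, virt_depth = np.full((h, w, 3), 255, np.uint8), np.full((h, w), 2.0)  # simulated object 2 m away
    fused_rgb, fused_depth = fuse_rgbd(real_rgb, real_depth, virt_rgb, virt_depth)

    The per-pixel depth comparison is what keeps the augmentation consistent: a simulated object placed behind a real obstacle is occluded by it instead of being pasted on top.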

    Towards a Mixed-Reality framework for autonomous driving*

    Testing autonomous driving algorithms on mobile systems in simulation is an essential step to validate the models and train the system for a large set of (possibly unpredictable and critical) situations. Yet, transferring the model from simulation to reality is challenging due to the reality gap (i.e., discrepancies between reality and simulation models). Mixed-reality environments enable testing models on real vehicles without incurring financial or safety risks. Additionally, they can reduce the system's development costs by providing faster testing and debugging for mobile robots. This paper proposes preliminary work towards a mixed-reality framework for autonomous navigation based on RGB-D cameras. The aim is to represent the objects of two environments within a single display using an augmentation strategy. We tested a first prototype by introducing a differential-drive robot able to navigate in its environment, visualize augmented objects and detect them correctly using a pre-trained model based on Faster R-CNN.
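    To illustrate the detection step on augmented views, the sketch below runs a COCO-pretrained Faster R-CNN from torchvision on a captured frame. This is a stand-in, not the authors' model: the weights, the input file name and the score threshold are assumptions.

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")         # COCO-pretrained detector
    model.eval()

    image = Image.open("augmented_frame.png").convert("RGB")   # hypothetical fused view
    with torch.no_grad():
        detections = model([to_tensor(image)])[0]              # boxes, labels, scores

    keep = detections["scores"] > 0.5                          # keep confident detections only
    for box, label, score in zip(detections["boxes"][keep],
                                 detections["labels"][keep],
                                 detections["scores"][keep]):
        print(f"class {int(label)}: score {score:.2f}, box {[round(v, 1) for v in box.tolist()]}")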

    Mirror-based matching of catadioptric images
