Kitting in the Wild through Online Domain Adaptation
Technological developments call for increasing perception and action capabilities of robots. Among other skills, vision systems that can adapt to any possible change in the working conditions are needed. Since these conditions are unpredictable, we need benchmarks that allow us to assess the generalization and robustness of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Unlike standard object recognition datasets, it provides images of the same objects acquired under varying camera, illumination and background conditions. This novel dataset allows testing the robustness of robot visual recognition algorithms to a series of different domain shifts, both in isolation and combined. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which continuously adapts a model to the current working conditions. Unlike standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the algorithm on the proposed dataset, showing its ability to close the gap between the performance of a standard architecture and that of its counterpart adapted offline to the given target domain.
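A minimal sketch of the general idea behind batch-normalization-based online adaptation follows: learned weights stay frozen while the BatchNorm running statistics keep updating on incoming target-domain batches. The network, momentum value and image stream below are illustrative assumptions, not the authors' exact update rule.

```python
import torch
import torch.nn as nn

def enable_online_bn_adaptation(model: nn.Module, momentum: float = 0.1) -> nn.Module:
    """Freeze learned weights but keep BatchNorm running statistics updating."""
    for p in model.parameters():
        p.requires_grad_(False)
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.train()              # use and update batch statistics at test time
            m.momentum = momentum  # how fast the stats track the new conditions
    return model

# Toy network standing in for the recognition backbone (assumption).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
model.eval()                        # everything in inference mode...
enable_online_bn_adaptation(model)  # ...except the BN statistics

with torch.no_grad():
    for _ in range(5):                     # stand-in for a camera stream
        batch = torch.randn(8, 3, 64, 64)  # images under current conditions
        logits = model(batch)              # predictions adapt as stats update
```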
SemanticFusion: Dense 3D Semantic Mapping with Convolutional Neural Networks
Ever more robust, accurate and detailed mapping using visual sensing has proven to be an enabling factor for mobile robots across a wide variety of applications. For the next level of robot intelligence and intuitive user interaction, maps need to extend beyond geometry and appearance: they need to contain semantics. We address this challenge by combining Convolutional Neural Networks (CNNs) with a state-of-the-art dense Simultaneous Localisation and Mapping (SLAM) system, ElasticFusion, which provides long-term dense correspondences between frames of indoor RGB-D video even during loopy scanning trajectories. These correspondences allow the CNN's semantic predictions from multiple viewpoints to be probabilistically fused into a map. This not only produces a useful semantic 3D map; we also show on the NYUv2 dataset that fusing multiple predictions improves even the 2D semantic labelling over baseline single-frame predictions. We also show that for a smaller reconstruction dataset with larger variation in prediction viewpoint, the improvement over single-frame segmentation increases. Our system is efficient enough to allow real-time interactive use at frame rates of approximately 25 Hz.
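The probabilistic fusion step can be illustrated with a recursive Bayesian update: each map element (surfel) keeps a categorical distribution over classes that is multiplied by every new per-pixel CNN prediction associated with it and renormalised. The sketch below is a simplified illustration under that assumption; names, class count and shapes are placeholders, not the actual SemanticFusion data structures.

```python
import numpy as np

NUM_CLASSES = 13  # illustrative class count

def init_surfel_probs(num_surfels: int) -> np.ndarray:
    """Start every surfel with a uniform class distribution."""
    return np.full((num_surfels, NUM_CLASSES), 1.0 / NUM_CLASSES)

def fuse_prediction(surfel_probs: np.ndarray,
                    surfel_ids: np.ndarray,
                    cnn_probs: np.ndarray) -> np.ndarray:
    """Bayesian update: multiply the prior by the new CNN likelihood, renormalise.

    surfel_ids: surfel index each pixel projects onto (provided by the SLAM
                correspondences in the real system).
    cnn_probs:  per-pixel softmax output of the CNN, shape (num_pixels, C).
    """
    posterior = surfel_probs[surfel_ids] * cnn_probs
    posterior /= posterior.sum(axis=1, keepdims=True)
    surfel_probs[surfel_ids] = posterior
    return surfel_probs

# Toy usage: two frames voting on the same three surfels.
probs = init_surfel_probs(num_surfels=3)
ids = np.array([0, 1, 2])
frame1 = np.random.dirichlet(np.ones(NUM_CLASSES), size=3)
frame2 = np.random.dirichlet(np.ones(NUM_CLASSES), size=3)
probs = fuse_prediction(probs, ids, frame1)
probs = fuse_prediction(probs, ids, frame2)
print(probs.argmax(axis=1))  # fused per-surfel labels
```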
Unsupervised Domain Adaptation through Inter-Modal Rotation for RGB-D Object Recognition
Unsupervised Domain Adaptation (DA) exploits the supervision of a label-rich source dataset to make predictions on an unlabeled target dataset by aligning the two data distributions. In robotics, DA is used to take advantage of automatically generated synthetic data, which come with 'free' annotation, to make effective predictions on real data. However, existing DA methods are not designed to cope with the multi-modal nature of RGB-D data, which are widely used in robotic vision. We propose a novel RGB-D DA method that reduces the synthetic-to-real domain shift by exploiting the inter-modal relation between the RGB and depth images. Our method consists of training a convolutional neural network to solve, in addition to the main recognition task, the pretext task of predicting the relative rotation between the RGB and depth images. To evaluate our method and encourage further research in this area, we define two benchmark datasets for object categorization and instance recognition. With extensive experiments, we show the benefits of leveraging the inter-modal relations for RGB-D DA. The code is available at: https://github.com/MRLoghmani/relative-rotation
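A minimal sketch of the relative-rotation pretext task as described above: RGB and depth are rotated independently by multiples of 90 degrees, and an auxiliary head predicts their relative rotation as a 4-way classification alongside the main task. The toy two-stream network and loss weighting below are assumptions for illustration, not the authors' architecture (see their repository for the real implementation).

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_rotation_pair(rgb: torch.Tensor, depth: torch.Tensor):
    """Rotate each modality by a random multiple of 90 degrees; the label is
    the relative rotation (depth minus RGB, modulo 4)."""
    k_rgb, k_depth = random.randrange(4), random.randrange(4)
    rgb_rot = torch.rot90(rgb, k_rgb, dims=(-2, -1))
    depth_rot = torch.rot90(depth, k_depth, dims=(-2, -1))
    label = (k_depth - k_rgb) % 4
    return rgb_rot, depth_rot, label

class RelativeRotationHead(nn.Module):
    """Toy two-stream feature extractor with a 4-way relative-rotation classifier."""
    def __init__(self):
        super().__init__()
        self.rgb_net = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.depth_net = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(32, 4)

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_net(rgb), self.depth_net(depth)], dim=1)
        return self.classifier(feats)

# Toy usage: this pretext loss would be added to the main recognition loss.
rgb = torch.randn(2, 3, 64, 64)
depth = torch.randn(2, 1, 64, 64)
pairs = [make_rotation_pair(r, d) for r, d in zip(rgb, depth)]
rgb_b = torch.stack([p[0] for p in pairs])
depth_b = torch.stack([p[1] for p in pairs])
labels = torch.tensor([p[2] for p in pairs])
model = RelativeRotationHead()
pretext_loss = F.cross_entropy(model(rgb_b, depth_b), labels)
```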
Distilling Inductive Bias: Knowledge Distillation Beyond Model Compression
With the rapid development of computer vision, Vision Transformers (ViTs) offer the tantalizing prospect of unified information processing across visual and textual domains. However, because ViTs lack inherent inductive biases, they require enormous amounts of training data. To make their application practical, we introduce an ensemble-based distillation approach that distills inductive bias from complementary lightweight teacher models. Prior systems relied solely on convolution-based teachers; our method instead instructs the student transformer with an ensemble of light teachers with different architectural tendencies, such as convolution and involution. Because of these distinct inductive biases, the teachers accumulate a wide range of knowledge, even from readily identifiable stored datasets, which leads to enhanced student performance. Our proposed framework also precomputes and stores the logits, essentially the unnormalized predictions of the teacher models, in advance. This optimization accelerates distillation by eliminating repeated forward passes during knowledge distillation, significantly reducing the computational burden and enhancing efficiency.
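A minimal sketch of distilling from precomputed teacher logits, under the assumptions that the ensemble is averaged and combined with a standard temperature-scaled soft-target loss; the averaging scheme, temperature and loss weights are illustrative, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def precompute_teacher_logits(teachers, loader, device="cpu"):
    """Run every teacher once over the dataset and cache the averaged logits,
    so no teacher forward passes are needed during student training."""
    cached = []
    with torch.no_grad():
        for images, _ in loader:
            images = images.to(device)
            logits = torch.stack([t(images) for t in teachers])  # (T, B, C)
            cached.append(logits.mean(dim=0).cpu())              # ensemble average
    return torch.cat(cached)                                     # (N, C)

def distillation_loss(student_logits, stored_teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """KL divergence to the stored soft targets plus standard cross-entropy."""
    soft_targets = F.softmax(stored_teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage with random logits standing in for a ViT student batch.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)   # would come from precompute_teacher_logits
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```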