
    Effortless Deep Training for Traffic Sign Detection Using Templates and Arbitrary Natural Images

    Deep learning has been successfully applied to several problems related to autonomous driving. Often, these solutions rely on large networks that require databases of real image samples of the problem (i.e., the real world) for proper training. Acquiring such real-world datasets is not always possible in the autonomous driving context, and their annotation is sometimes infeasible (e.g., it takes too long or is too expensive). Moreover, many tasks exhibit an intrinsic data imbalance that most learning-based methods struggle to cope with. Traffic sign detection is a problem in which all three of these issues appear together. In this work, we propose a novel database generation method that requires only (i) arbitrary natural images, i.e., no real image from the domain of interest, and (ii) templates of the traffic signs, i.e., synthetic templates created to illustrate the appearance of each traffic sign category. The effortlessly generated training database is shown to be effective for training a deep detector (such as Faster R-CNN) on German traffic signs, achieving an average mAP of 95.66%. In addition, the proposed method detects traffic signs with an average precision, recall, and F1-score of about 94%, 91%, and 93%, respectively. The experiments surprisingly show that detectors can be trained with simple data generation methods and without problem-domain data for the background, which runs contrary to common sense in deep learning.
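    The core idea of the abstract above can be sketched as compositing a sign template onto an arbitrary background image and recording the resulting bounding box as the annotation. The function and parameter names below are illustrative assumptions, not the paper's actual pipeline (which would add augmentations such as scaling, blur, and color jitter):

    ```python
    import random
    import numpy as np

    def paste_template(background, template, rng=None):
        """Paste a sign template at a random position in a background image.

        Returns the composited image and an (x, y, w, h) bounding box.
        A minimal sketch of the template-on-natural-image idea; real
        pipelines would also randomize scale, lighting, and distortion.
        """
        rng = rng or random.Random(0)
        bh, bw = background.shape[:2]
        th, tw = template.shape[:2]
        x = rng.randint(0, bw - tw)  # inclusive bounds keep the template inside
        y = rng.randint(0, bh - th)
        out = background.copy()
        out[y:y + th, x:x + tw] = template
        return out, (x, y, tw, th)

    # Toy "effortless" dataset: noise backgrounds stand in for arbitrary
    # natural images, a flat patch stands in for a sign template.
    backgrounds = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
                   for _ in range(3)]
    template = np.full((16, 16, 3), 200, dtype=np.uint8)
    dataset = [paste_template(bg, template) for bg in backgrounds]
    ```

    Each generated pair (image, box) could then be fed directly to a detector's training loop, since the annotation is known by construction.
    
    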

    Cross-Domain Car Detection Using Unsupervised Image-to-Image Translation: From Day to Night

    Deep learning techniques have enabled state-of-the-art models for object detection tasks. However, these techniques are data-driven, so accuracy depends on a training dataset that resembles the images in the target task. Acquiring a dataset involves annotating images, an arduous and expensive process that generally requires time and manual effort. A challenging scenario therefore arises when the target application domain has no annotated dataset available, forcing such tasks to rely on a training dataset from a different domain. Object detection shares this issue: it is a vital task for autonomous vehicles, where the large number of driving scenarios yields several application domains, each requiring annotated data for training. In this work, a method is presented for training a car detection system with annotated data from a source domain (day images) without requiring image annotations from the target domain (night images). To this end, a model based on Generative Adversarial Networks (GANs) is explored to generate an artificial dataset with its respective annotations. The artificial (fake) dataset is created by translating images from the day-time domain to the night-time domain. The fake dataset, which comprises annotated images of only the target domain (night images), is then used to train the car detector model. Experimental results showed that the proposed method achieved significant and consistent improvements, including an increase of more than 10% in detection performance compared to training with only the available annotated data (i.e., day images).

    Comment: 8 pages, 8 figures, https://github.com/viniciusarruda/cross-domain-car-detection, accepted at IJCNN 201
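    The key property exploited above is that image-to-image translation preserves object geometry, so source-domain boxes can be carried over to the translated images unchanged. The sketch below uses a simple darkening function as a hypothetical stand-in for the GAN translator; the function names and the darkening factor are assumptions for illustration only:

    ```python
    import numpy as np

    def fake_night(day_image):
        """Stand-in for the day-to-night translator.

        Assumption: uniform darkening; the paper instead uses an
        unsupervised image-to-image GAN for this step.
        """
        return (day_image * 0.3).astype(np.uint8)

    def build_fake_dataset(day_samples):
        """Translate each annotated day image to the night domain.

        Bounding boxes carry over unchanged because the translation
        does not move or reshape objects.
        """
        return [(fake_night(img), boxes) for img, boxes in day_samples]

    # One toy annotated day image with a single (x, y, w, h) car box.
    day = np.full((32, 32, 3), 200, dtype=np.uint8)
    fake = build_fake_dataset([(day, [(4, 4, 10, 10)])])
    ```

    The resulting fake night dataset, annotations included, is what would be handed to the car detector for training.
    
    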

    Drosophila local search emerges from iterative odometry of consecutive run lengths

    The ability to keep track of one's location in space is a critical behavior for animals navigating to and from a salient location, but its computational basis remains unknown. Here, we tracked flies in a ring-shaped channel as they executed bouts of search triggered by optogenetic activation of sugar receptors. Flies centered their back-and-forth local search excursions near fictive food locations by closely matching the lengths of consecutive runs. We tested a set of agent-based models that incorporate iterative odometry to store and retrieve the distance walked between consecutive events, such as reversals in walking direction. In contrast to memoryless models such as Lévy flight, simulations employing reversal-to-reversal integration recapitulated the flies' centered search behavior, even during epochs when the food stimulus was withheld or in experiments with multiple food sites. However, experiments in which flies reinitiated local search after circumnavigating the arena suggest that flies can also integrate azimuthal heading to perform path integration. Together, this work provides a concrete theoretical framework and experimental system to advance investigations of the neural basis of path integration.
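    The reversal-to-reversal integration idea described above can be sketched as an agent that stores the length of its last run and reproduces it after each reversal, which keeps the excursion centered on the trigger point. This is a minimal, noise-free sketch with illustrative names, not the paper's actual model (which includes stochastic run-length variation):

    ```python
    def local_search(start, first_run, n_runs):
        """Agent with iterative odometry: after each reversal it walks the
        stored length of the previous run in the opposite direction, so
        the back-and-forth excursion stays centered near `start`."""
        pos, direction, run = start, 1, first_run
        positions = [pos]
        for _ in range(n_runs):
            pos += direction * run      # execute the stored run length
            positions.append(pos)
            direction = -direction      # reversal event; odometer resets
        return positions

    # A memoryless walker would drift; this agent keeps returning to 0.
    path = local_search(start=0, first_run=5, n_runs=4)
    ```

    With exactly matched run lengths the agent revisits the fictive food site on every other reversal; adding per-run noise would reproduce the "closely matching" (rather than identical) runs reported for real flies.
    
    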