Large Scale Labelled Video Data Augmentation for Semantic Segmentation in Driving Scenarios
In this paper we present an analysis of the effect of large scale video data augmentation for semantic segmentation in driving scenarios. Our work is motivated by a strong correlation between the high performance of most recent deep learning based methods and the availability of large volumes of ground truth labels. To generate additional labelled data, we make use of an occlusion-aware and uncertainty-enabled label propagation algorithm. As a result we increase the availability of high-resolution labelled frames by a factor of 20, yielding a 6.8% to 10.8% rise in average classification accuracy and/or IoU scores for several semantic segmentation networks.
Our key contributions include: (a) augmented CityScapes and CamVid datasets providing 56.2K and 6.5K additional labelled frames of object classes respectively, (b) a detailed empirical analysis of the effect of using augmented data, as well as (c) an extension of the proposed framework to instance segmentation.
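The abstract above describes propagating ground-truth labels from annotated frames to neighbouring video frames. A minimal sketch of the idea, assuming a precomputed backward optical flow field; the function name `propagate_labels` and the shift-based flow representation are illustrative, and the paper's actual occlusion-aware, uncertainty-enabled algorithm is approximated here by marking pixels without a valid in-frame source as void:

```python
# Hypothetical sketch of flow-based label propagation with a crude
# occlusion check. Pixels whose backward-warped source falls outside
# the image are assigned the VOID (ignore) label.

VOID = 255  # conventional ignore label in CityScapes-style datasets

def propagate_labels(labels, flow):
    """Warp a per-pixel label map from frame t to frame t+1.

    labels: H x W list of lists of int class ids for frame t
    flow:   H x W list of lists of (dy, dx) int offsets mapping each
            pixel of frame t+1 back to its source pixel in frame t
    Returns an H x W label map for frame t+1, VOID where no valid
    source pixel exists.
    """
    h, w = len(labels), len(labels[0])
    out = [[VOID] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y][x]
            sy, sx = y + dy, x + dx
            if 0 <= sy < h and 0 <= sx < w:  # out-of-frame => occluded/void
                out[y][x] = labels[sy][sx]
    return out
```

A real implementation would additionally carry per-pixel uncertainty and suppress propagation across occlusion boundaries, which is what makes the generated labels usable at scale.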
PanDA: Panoptic Data Augmentation
The recently proposed panoptic segmentation task presents a significant image-understanding challenge for computer vision by unifying the semantic segmentation and instance segmentation tasks. In this paper we present an efficient and novel panoptic data augmentation (PanDA) method which operates exclusively in pixel space, requires no additional data or training, and is computationally cheap to implement. By retraining original state-of-the-art models on PanDA augmented datasets generated with a single frozen set of parameters, we show robust performance gains in panoptic segmentation, instance segmentation, as well as detection across models, backbones, dataset domains, and scales. Finally, the effectiveness of unrealistic-looking training images synthesized by PanDA suggests that one should rethink the need for image realism for efficient data augmentation.
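The abstract describes augmentation that operates purely in pixel space on panoptic annotations. A toy sketch of one such operation, assuming segment masks are available; the function `paste_shifted`, the shift-only transform, and the zero background fill are illustrative simplifications, not the paper's actual method (which also rescales segments and handles the vacated region more carefully):

```python
# Toy pixel-space augmentation: cut an instance out by its mask,
# shift it, and paste it back, updating the image and the instance
# segmentation map together so labels stay consistent.

BACKGROUND = 0  # placeholder fill; a real method would inpaint

def paste_shifted(image, seg, instance_id, dy, dx):
    """Return new (image, seg) with the given instance shifted by (dy, dx).

    image: H x W list of lists of pixel values
    seg:   H x W list of lists of instance ids
    """
    h, w = len(seg), len(seg[0])
    new_image = [row[:] for row in image]
    new_seg = [row[:] for row in seg]
    # Remove the instance from its original location (crude background fill).
    for y in range(h):
        for x in range(w):
            if seg[y][x] == instance_id:
                new_image[y][x] = BACKGROUND
                new_seg[y][x] = BACKGROUND
    # Paste the instance at the shifted location, clipping at the border.
    for y in range(h):
        for x in range(w):
            if seg[y][x] == instance_id:
                ty, tx = y + dy, x + dx
                if 0 <= ty < h and 0 <= tx < w:
                    new_image[ty][tx] = image[y][x]
                    new_seg[ty][tx] = instance_id
    return new_image, new_seg
```

Because both the image and its annotation are transformed by the same operation, no extra labelling or retraining of the augmentation itself is needed, which is the property the abstract emphasises.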
Reducing DNN Labelling Cost using Surprise Adequacy: An Industrial Case Study for Autonomous Driving
Deep Neural Networks (DNNs) are rapidly being adopted by the automotive industry, due to their impressive performance in tasks that are essential for autonomous driving. Object segmentation is one such task: its aim is to precisely locate boundaries of objects and classify the identified objects, helping autonomous cars to recognise the road environment and the traffic situation. Not only is this task safety critical, but developing a DNN based object segmentation module presents a set of challenges that are significantly different from traditional development of safety critical software. The development process in use consists of multiple iterations of data collection, labelling, training, and evaluation. Among these stages, training and evaluation are computation intensive while data collection and labelling are manual labour intensive. This paper shows how development of DNN based object segmentation can be improved by exploiting the correlation between Surprise Adequacy (SA) and model performance. The correlation allows us to predict model performance for inputs without manually labelling them. This, in turn, enables understanding of model performance, more guided data collection, and informed decisions about further training. In our industrial case study the technique allows cost savings of up to 50% with negligible evaluation inaccuracy. Furthermore, engineers can trade off cost savings versus the tolerable level of inaccuracy depending on different development phases and scenarios.
Comment: to be published in Proceedings of the 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering
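The abstract relies on Surprise Adequacy (SA), a measure of how novel an input is relative to the training data, to decide which inputs are worth the cost of manual labelling. A minimal sketch, assuming a distance-based variant of SA simplified to the distance between an input's activation trace and its nearest training trace; the function names and the fixed threshold are illustrative:

```python
# Simplified distance-based Surprise Adequacy: inputs whose activation
# traces lie close to the training data are "unsurprising", so model
# performance on them can be estimated without manual labels.

import math

def surprise(activation, train_activations):
    """Euclidean distance to the nearest training activation trace."""
    return min(math.dist(activation, t) for t in train_activations)

def needs_labelling(activation, train_activations, threshold):
    """Flag only surprising inputs for (costly) manual labelling."""
    return surprise(activation, train_activations) > threshold
```

Under this scheme the labelling budget is spent only on high-surprise inputs, which is how the case study obtains its cost savings; the threshold is the knob engineers would tune to trade savings against tolerable evaluation inaccuracy.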