Graph-guided Architecture Search for Real-time Semantic Segmentation
Designing a lightweight semantic segmentation network often requires researchers to find a trade-off between performance and speed, a search that remains largely empirical due to the limited interpretability of neural networks. To free researchers from these tedious mechanical trials, we propose a Graph-guided Architecture Search (GAS) pipeline that automatically searches for real-time semantic segmentation networks. Unlike previous works that use a simplified search space and stack a repeatable cell to form a network, we introduce a novel search mechanism with a new search space in which a lightweight model can be effectively explored through cell-level diversity and a latency-oriented constraint. Specifically, to produce cell-level diversity, the cell-sharing constraint is eliminated in favour of a cell-independent design. A graph convolutional network (GCN) is then seamlessly integrated as a communication mechanism between cells. Finally, a latency-oriented constraint is imposed on the search process to balance speed and performance. Extensive experiments on the Cityscapes and CamVid datasets demonstrate that GAS achieves a new state-of-the-art trade-off between accuracy and speed. In particular, on the Cityscapes dataset, GAS achieves a new best performance of 73.5% mIoU at 108.4 FPS on a Titan Xp.
Comment: CVPR202
Local-to-Global Information Communication for Real-Time Semantic Segmentation Network Search
Neural Architecture Search (NAS) has shown great potential for automatically designing neural network architectures for real-time semantic segmentation. Unlike previous works that utilize a simplified search space with cell sharing, we introduce a new search space in which a lightweight model can be searched more effectively by replacing the cell-sharing manner with a cell-independent one. On this basis, local-to-global information communication is achieved through two well-designed modules. For local information exchange, a graph convolutional network (GCN)-guided module is seamlessly integrated as a communication bridge between cells. For global information aggregation, we propose a novel densely connected fusion module (cell) that automatically aggregates long-range multi-level features in the network. In addition, a latency-oriented constraint is imposed on the search process to balance accuracy and latency. We name the proposed framework Local-to-Global Information Communication Network Search (LGCNet). Extensive experiments on the Cityscapes and CamVid datasets demonstrate that LGCNet achieves a new state-of-the-art trade-off between accuracy and speed. In particular, on the Cityscapes dataset, LGCNet achieves a new best performance of 74.0% mIoU at 115.2 FPS on a Titan Xp.
Comment: arXiv admin note: text overlap with arXiv:1909.0679
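The GCN-guided communication between cells can be sketched as one round of first-order message passing over per-cell architecture embeddings. The code below is a schematic stand-in under assumed names (`gcn_layer`, plain mean aggregation with self-loops), not the paper's exact module:

```python
def gcn_layer(features, adj):
    """One graph-convolution step over per-cell embeddings (sketch).

    features: list of n vectors (lists of floats), one per cell.
    adj: n x n adjacency; adj[i][j] != 0 means cell j sends
    information to cell i.
    Each cell's new embedding is the mean of its neighbours'
    embeddings and its own (implicit self-loop) -- a plain
    first-order GCN, assumed for illustration.
    """
    n = len(features)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j] or j == i]
        agg = [sum(features[j][k] for j in neigh) / len(neigh)
               for k in range(len(features[i]))]
        out.append(agg)
    return out
```

Stacking such layers lets information about one cell's structure influence the choices made in other cells, which is the role the GCN-guided module plays between otherwise independent cells.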
Object segmentation in depth maps with one user click and a synthetically trained fully convolutional network
With more and more household objects built on planned obsolescence and consumed by a fast-growing population, hazardous waste recycling has become a critical challenge. Given the large variability of household waste, current recycling platforms mostly rely on human operators to analyze the scene, typically composed of many object instances piled up in bulk. Helping them by robotizing the unitary extraction is a key challenge for speeding up this tedious process. Whereas supervised deep learning has proven very efficient for such object-level scene understanding, e.g., generic object detection and segmentation in everyday scenes, it requires large sets of per-pixel labeled images, which are hardly available in many application contexts, including industrial robotics. We therefore propose a step towards a practical interactive application for generating an object-oriented robotic grasp, requiring as inputs only one depth map of the scene and one user click on the next object to extract. More precisely, we address in this paper the intermediate problem of object segmentation in top views of piles of bulk objects, given a pixel location, namely a seed, provided interactively by a human operator. We propose a twofold framework for generating edge-driven instance segments. First, we repurpose a state-of-the-art fully convolutional object contour detector for seed-based instance segmentation by introducing the notion of edge-mask duality with a novel patch-free, contour-oriented loss function. Second, we train the model using only synthetic scenes instead of manually labeled training data. Our experimental results show that exploiting edge-mask duality to train an encoder-decoder network, as we suggest, outperforms a state-of-the-art patch-based network in the present application context.
Comment: This is a pre-print of an article published in Human Friendly Robotics, 10th International Workshop, Springer Proceedings in Advanced Robotics, vol. 7, Siciliano Bruno and Khatib Oussama (eds.), Springer, in press. The final authenticated version is available online at: https://doi.org/10.1007/978-3-319-89327-3_16
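The edge-driven, seed-based idea can be illustrated with a deliberately simple stand-in: grow a mask outward from the operator's click and stop at contour pixels. This toy flood fill (all names assumed; the paper instead learns the edge-to-mask step) shows how an edge map plus a single seed determines an instance segment:

```python
from collections import deque


def seed_segment(edges, seed):
    """Toy seed-based segmentation bounded by a contour map.

    edges: 2-D grid of 0/1, where 1 marks a predicted contour pixel.
    seed: (row, col) of the operator's click.
    A BFS flood fill that expands 4-connected from the seed and
    stops at contour pixels -- an illustrative stand-in for the
    learned edge-to-mask step, not the paper's method.
    """
    h, w = len(edges), len(edges[0])
    mask = [[0] * w for _ in range(h)]
    queue = deque([seed])
    mask[seed[0]][seed[1]] = 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w
                    and not mask[nr][nc] and not edges[nr][nc]):
                mask[nr][nc] = 1
                queue.append((nr, nc))
    return mask
```

In this picture, "edge-mask duality" amounts to the fact that a closed contour and the region it encloses carry the same information: given good edges, one click suffices to recover the whole instance.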
Attentive Single-Tasking of Multiple Tasks
In this work we address task interference in universal networks by considering a network that is trained on multiple tasks but performs one task at a time, an approach we refer to as "single-tasking multiple tasks". The network thus modifies its behaviour through task-dependent feature adaptation, or task attention. This gives the network the ability to accentuate the features that are adapted to a task while shunning irrelevant ones. We further reduce task interference by forcing the task gradients to be statistically indistinguishable through adversarial training, ensuring that the common backbone architecture serving all tasks is not dominated by any of the task-specific gradients. Results on three multi-task dense labelling problems consistently show: (i) a large reduction in the number of parameters while preserving, or even improving, performance and (ii) a smooth trade-off between computation and multi-task accuracy. We provide our system's code and pre-trained models at http://vision.ee.ethz.ch/~kmaninis/astmt/.
Comment: CVPR 2019 Camera Read
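Task-dependent feature adaptation can be sketched as lightweight per-task modulation of shared backbone features: every task owns a small set of scale-and-shift parameters, and only the active task's set is applied. This is a schematic sketch under assumed names, not the paper's exact attention blocks:

```python
def task_attention(features, task_params, task):
    """Sketch of task-dependent feature adaptation.

    features: list of floats from the shared backbone.
    task_params: dict mapping task name -> (scale, shift) lists,
    one pair of modulation vectors per task.
    The same backbone output is rescaled differently depending on
    which task is active, so each task can accentuate the channels
    it needs while shunning irrelevant ones. Names and the
    scale-and-shift form are illustrative assumptions.
    """
    scale, shift = task_params[task]
    return [s * f + b for f, s, b in zip(features, scale, shift)]
```

Because only the small per-task vectors differ between tasks, the parameter overhead of serving many tasks stays low, which matches the parameter-reduction result reported above.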