Fireground location understanding by semantic linking of visual objects and building information models
This paper presents an outline for improved localization and situational awareness in fire emergency situations, based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety and fire incident management are addressed in this paper. Furthermore, our data management tools match higher-level semantic metadata descriptors of BIM with deep-learning-based visual object recognition and classification networks. Based on these matches, estimates of camera, object, and event positions can be generated in the BIM model, transforming it from a static source of information into a rich, dynamic data provider. Previous work has already investigated the possibilities of linking BIM and low-cost point sensors for fireground understanding, but these approaches did not take into account the benefits of video analysis and recent developments in semantics and feature-learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, its generic and modular setup, and its multi-modal strategy, which automatically creates situational awareness, improves localization, and facilitates overall fire understanding.
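The semantic linking described above can be illustrated with a minimal sketch: detected object labels are matched against a toy BIM index, and the camera's position is coarsely estimated by voting over the rooms of the matched elements. All names here (`BIM_ELEMENTS`, `locate_camera`, the labels and room identifiers) are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch: match visual detections to BIM elements by label,
# then infer a coarse camera location by voting over the matched rooms.
from collections import Counter

# Toy BIM index: element type -> list of (element_id, room) pairs
BIM_ELEMENTS = {
    "fire_extinguisher": [("ext-01", "corridor-2"), ("ext-02", "lobby")],
    "door": [("door-11", "corridor-2"), ("door-12", "stairwell-B")],
    "exit_sign": [("sign-03", "corridor-2")],
}

def locate_camera(detected_labels):
    """Vote over the rooms of BIM elements whose type matches a detection."""
    votes = Counter()
    for label in detected_labels:
        for _elem_id, room in BIM_ELEMENTS.get(label, []):
            votes[room] += 1
    return votes.most_common(1)[0][0] if votes else None

print(locate_camera(["fire_extinguisher", "exit_sign", "door"]))  # corridor-2
```

A real system would of course weight votes by detection confidence and element geometry; the point of the sketch is only the label-level matching between recognition output and BIM metadata.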
FastDeepIoT: Towards Understanding and Optimizing Neural Network Execution Time on Mobile and Embedded Devices
Deep neural networks show great potential as solutions to many sensing
application problems, but their excessive resource demand slows down execution
time, posing a serious impediment to deployment on low-end devices. To address
this challenge, recent literature focused on compressing neural network size to
improve performance. We show that changing neural network size does not
proportionally affect performance attributes of interest, such as execution
time. Rather, extreme run-time nonlinearities exist over the network
configuration space. Hence, we propose a novel framework, called FastDeepIoT,
that uncovers the non-linear relation between neural network structure and
execution time, then exploits that understanding to find network configurations
that significantly improve the trade-off between execution time and accuracy on
mobile and embedded devices. FastDeepIoT makes two key contributions. First,
FastDeepIoT automatically learns an accurate and highly interpretable execution
time model for deep neural networks on the target device. This is done without
prior knowledge of either the hardware specifications or the detailed
implementation of the used deep learning library. Second, FastDeepIoT informs a
compression algorithm how to minimize execution time on the profiled device
without impacting accuracy. We evaluate FastDeepIoT using three different
sensing-related tasks on two mobile devices: Nexus 5 and Galaxy Nexus.
FastDeepIoT further reduces neural network execution time and energy consumption compared with state-of-the-art compression algorithms.

Comment: Accepted by SenSys '1
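The first contribution above, a learned and interpretable per-device execution-time model, can be sketched as a simple least-squares fit from profiled layer features to measured latency. The feature choice (MACs and output size), the toy measurements, and the function names are assumptions for illustration; the paper's actual model is more sophisticated.

```python
# Illustrative sketch (not the paper's actual model): fit an interpretable
# linear execution-time model from profiled per-layer measurements.
import numpy as np

# Toy profile: features = (MACs in millions, output elements in thousands)
X = np.array([[10, 50], [20, 80], [40, 150], [80, 400]], dtype=float)
t = np.array([6.0, 10.2, 19.0, 41.0])  # measured latency in ms

# Least-squares with intercept: t ~ w0 + w1*MACs + w2*outputs
A = np.hstack([np.ones((len(X), 1)), X])
w, *_ = np.linalg.lstsq(A, t, rcond=None)

def predict_latency(macs, outputs):
    """Predicted latency (ms) for a layer with the given features."""
    return w[0] + w[1] * macs + w[2] * outputs
```

Because the coefficients directly weight human-readable features, such a model stays interpretable, which is what lets a compression algorithm reason about which structural changes actually reduce execution time.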
Object segmentation in depth maps with one user click and a synthetically trained fully convolutional network
With more and more household objects built on planned obsolescence and
consumed by a fast-growing population, hazardous waste recycling has become a
critical challenge. Given the large variability of household waste, current
recycling platforms mostly rely on human operators to analyze the scene,
typically composed of many object instances piled up in bulk. Helping them by
robotizing the unitary extraction is a key challenge to speed up this tedious
process. Whereas supervised deep learning has proven very efficient for such
object-level scene understanding, e.g., generic object detection and
segmentation in everyday scenes, it however requires large sets of per-pixel
labeled images, that are hardly available for numerous application contexts,
including industrial robotics. We thus propose a step towards a practical
interactive application for generating an object-oriented robotic grasp,
requiring as inputs only one depth map of the scene and one user click on the
next object to extract. More precisely, we address in this paper the middle
issue of object segmentation in top views of piles of bulk objects given a
pixel location, namely seed, provided interactively by a human operator. We
propose a twofold framework for generating edge-driven instance segments.
First, we repurpose a state-of-the-art fully convolutional object contour
detector for seed-based instance segmentation by introducing the notion of
edge-mask duality with a novel patch-free and contour-oriented loss function.
Second, we train one model using only synthetic scenes, instead of manually
labeled training data. Our experimental results show that considering edge-mask
duality for training an encoder-decoder network, as we suggest, outperforms a
state-of-the-art patch-based network in the present application context.

Comment: This is a pre-print of an article published in Human Friendly Robotics, 10th International Workshop, Springer Proceedings in Advanced Robotics, vol. 7, Siciliano Bruno, Khatib Oussama (eds.), Springer, in press. The final authenticated version is available online at: https://doi.org/10.1007/978-3-319-89327-3_16
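The edge-mask duality described above can be illustrated with a minimal sketch: given a predicted contour map and one user click (the seed), the instance mask is recovered by flood-filling the region containing the seed and stopping at contour pixels. The contour detector itself is not shown; `edges` stands in for its binarized output, and all names are illustrative assumptions.

```python
# Hedged sketch: recover an instance mask from a contour map and a seed pixel
# via 4-connected flood fill. edges: 2D list of 0/1 (1 = contour).
from collections import deque

def seed_to_mask(edges, seed):
    """Flood-fill the region containing `seed`, stopping at edge pixels."""
    h, w = len(edges), len(edges[0])
    mask = [[0] * w for _ in range(h)]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < h and 0 <= c < w) or mask[r][c] or edges[r][c]:
            continue
        mask[r][c] = 1  # pixel belongs to the clicked instance
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask

# Toy 4x4 contour map: a vertical edge separating two regions
edges = [
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
]
mask = seed_to_mask(edges, (0, 0))  # click in the left region
```

This is why contour quality matters in such a pipeline: a single gap in the predicted edge lets the fill leak into a neighboring instance, which motivates training the network with a contour-oriented loss.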