The RGB-D Triathlon: Towards Agile Visual Toolboxes for Robots
Deep networks have brought significant advances in robot perception, improving
the capabilities of robots across several visual tasks, ranging from
object detection and recognition to pose estimation, semantic scene
segmentation and many others. Still, most approaches typically address visual
tasks in isolation, resulting in overspecialized models which achieve strong
performance in specific applications but work poorly in other (often related)
tasks. This is clearly sub-optimal for a robot, which is often required to
perform multiple visual recognition tasks simultaneously in order to properly
act and interact with the environment. This problem is exacerbated by the
limited computational and memory resources typically available onboard a
robotic platform. The problem of learning flexible models which can handle
multiple tasks in a lightweight manner has recently gained attention in the
computer vision community and benchmarks supporting this research have been
proposed. In this work we study this problem in the robot vision context,
proposing a new benchmark, the RGB-D Triathlon, and evaluating state of the art
algorithms in this novel challenging scenario. We also define a new evaluation
protocol, better suited to the robot vision setting. Results shed light on the
strengths and weaknesses of existing approaches and on open issues, suggesting
directions for future research.
Comment: This work has been submitted to IROS/RAL 201
Unsupervised classification for landslide detection from airborne laser scanning
Landslides are natural disasters that cause extensive environmental, infrastructure and socioeconomic damage worldwide. Since they are difficult to identify, it is imperative to evaluate innovative approaches to detect early-warning signs and assess their susceptibility, hazard and risk. The increasing availability of airborne laser-scanning data provides an opportunity for modern landslide mapping techniques to analyze topographic signature patterns of landslide, landslide-prone and landslide-scarred areas over large swaths of terrain. In this study, a methodology based on several feature extractors and unsupervised classification, specifically k-means clustering and the Gaussian mixture model (GMM), was tested at the Carlyon Beach Peninsula in the state of Washington to map slide and non-slide terrain. When compared with the detailed, independently compiled landslide inventory map, the unsupervised methods correctly classify up to 87% of the terrain in the study area. These results suggest that (1) landslide scars associated with past deep-seated landslides may be identified using digital elevation models (DEMs) with unsupervised classification models; (2) feature extractors allow for individual analysis of specific topographic signatures; (3) unsupervised classification can be performed on each topographic signature using multiple numbers of clusters; (4) comparison of documented landslide-prone regions to algorithm-mapped regions shows that algorithmic classification can accurately identify areas where deep-seated landslides have occurred. In summary, unsupervised classification mapping methods and airborne light detection and ranging (LiDAR)-derived DEMs can offer important surface information that can serve as an effective tool for digital terrain analysis to support landslide detection.
Authors: Tran, Caitlin J. (California State Polytechnic University, United States); Mora, Omar E. (California State Polytechnic University, United States); Fayne, Jessica V. (University of California, Los Angeles, United States); Lenzano, María Gabriela (CONICET, Instituto Argentino de Nivología, Glaciología y Ciencias Ambientales, Universidad Nacional de Cuyo, Mendoza, Argentina)
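The clustering step described above (DEM-derived topographic features partitioned by k-means and a GMM into slide and non-slide terrain) can be sketched as follows. The synthetic features, their scales and the scikit-learn usage are illustrative assumptions, not the authors' actual pipeline or data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for DEM-derived topographic features per grid cell,
# e.g. [slope (deg), surface roughness]; values are illustrative only.
rng = np.random.default_rng(0)
smooth = rng.normal([5.0, 0.2], 0.5, size=(200, 2))    # gentle, smooth terrain
scarred = rng.normal([25.0, 2.0], 0.5, size=(200, 2))  # steep, rough (slide-like)
features = np.vstack([smooth, scarred])

# Two clusters: slide vs non-slide terrain, with both models from the study
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
gmm_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(features)
```

Comparing the resulting label map against an independent landslide inventory then yields the kind of per-cell agreement percentage the study reports.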
Sonification of Network Traffic Flow for Monitoring and Situational Awareness
Maintaining situational awareness of what is happening within a network is
challenging, not least because the behaviour occurs inside computers and
communications networks, but also because data traffic speeds and volumes are
beyond human ability to process. Visualisation is widely used to present
information about the dynamics of network traffic. Although it
provides operators with an overall view and specific information about
particular traffic or attacks on the network, it often fails to represent the
events in an understandable way. Visualisations require visual attention and so
are not well suited to continuous monitoring scenarios in which network
administrators must carry out other tasks. Situational awareness is critical
and essential for decision-making in the domain of computer network monitoring
where it is vital to be able to identify and recognize network environment
behaviours. Here we present SoNSTAR (Sonification of Networks for SiTuational
AwaReness), a real-time sonification system to be used in the monitoring of
computer networks to support the situational awareness of network
administrators. SoNSTAR provides an auditory representation of all the TCP/IP
protocol traffic within a network based on the different traffic flows
between network hosts. SoNSTAR raises situational awareness levels for computer
network defence by allowing operators to achieve better understanding and
performance while imposing less workload compared to visual techniques. SoNSTAR
identifies the features of network traffic flows by inspecting the status flags
of TCP/IP packet headers and mapping traffic events to recorded sounds to
generate a soundscape representing the real-time status of the network traffic
environment. Listening to the soundscape allows the administrator to recognise
anomalous behaviour quickly and without having to continuously watch a computer
screen.
Comment: 17 pages, 7 figures plus supplemental material in GitHub repository
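The core idea above, inspecting TCP header status flags and mapping traffic events to recorded sounds, can be sketched as a lookup from flag combinations to sound files. The event names and mappings below are hypothetical illustrations, not SoNSTAR's actual tables:

```python
# TCP flag bit positions, per RFC 793
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

# Hypothetical mapping from flag combinations to soundscape events
SOUND_EVENTS = {
    SYN:       "connection_request.wav",
    SYN | ACK: "connection_accepted.wav",
    FIN | ACK: "connection_closing.wav",
    RST:       "connection_reset.wav",
}

def flags_to_event(flag_byte: int) -> str:
    """Map the flags byte of a TCP header to a sound event, if any."""
    return SOUND_EVENTS.get(flag_byte, "background_ambience.wav")

# A burst of lone SYNs (port-scan-like traffic) would repeatedly trigger
# the same distinctive sound, standing out against the ambient soundscape.
print(flags_to_event(SYN))
```

In a real deployment each matched event would be rendered into a continuously playing soundscape rather than printed, so anomalies are heard rather than watched.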
Galaxy Luminosity Functions from Deep Spectroscopic Samples of Rich Clusters
Using a new spectroscopic sample and methods accounting for spectroscopic
sampling fractions that vary in magnitude and surface brightness, we present
R-band galaxy luminosity functions (GLFs) for six nearby galaxy clusters with
redshifts 4000 < cz < 20000 km/s and velocity dispersions 700 < sigma < 1250
km/s. In the case of the nearest cluster, Abell 1060, our sample extends to
M_R=-14 (7 magnitudes below M*), making this the deepest spectroscopic
determination of the cluster GLF to date. Our methods also yield composite GLFs
for cluster and field galaxies to M_R=-17 (M*+4), including the GLFs of
subsamples of star forming and quiescent galaxies. The composite GLFs are
consistent with Schechter functions (M*_R=-21.14^{+0.17}_{-0.17},
alpha=-1.21^{+0.08}_{-0.07} for the clusters, M*_R=-21.15^{+0.16}_{-0.16},
alpha=-1.28^{+0.12}_{-0.11} for the field). All six cluster samples are
individually consistent with the composite GLF down to their respective
absolute magnitude limits, but the GLF of the quiescent population in clusters
is not universal. There are also significant variations in the GLF of quiescent
galaxies between the field and clusters that can be described as a steepening
of the faint end slope. The overall GLF in clusters is consistent with that of
field galaxies, except for the most luminous tip, which is enhanced in clusters
versus the field. The star formation properties of giant galaxies are more
strongly correlated with the environment than those of fainter galaxies.
Comment: 53 pages, 8 figures, 1 ASCII table; accepted for publication in Ap
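For reference, the Schechter form quoted above can be evaluated directly in absolute magnitudes. The normalization phi* is left arbitrary here, since the abstract quotes only the shape parameters (M*_R and alpha); this is a sketch, not the authors' fitting code:

```python
import math

def schechter_mag(M, M_star, alpha, phi_star=1.0):
    """Schechter luminosity function in absolute magnitudes:
    phi(M) = 0.4 ln(10) * phi* * x**(alpha + 1) * exp(-x),
    with x = 10**(0.4 * (M* - M)).
    """
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)

# Composite cluster fit from the abstract: M*_R = -21.14, alpha = -1.21.
# With alpha < -1 the counts rise toward the faint end, as -16 vs -18 shows.
for M in (-22, -20, -18, -16):
    print(M, schechter_mag(M, M_star=-21.14, alpha=-1.21))
```

Swapping in the field parameters (M*_R = -21.15, alpha = -1.28) steepens the faint-end slope slightly, matching the field/cluster comparison in the abstract.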
SNIFF: Reverse Engineering of Neural Networks with Fault Attacks
Neural networks have been shown to be vulnerable to fault injection
attacks. These attacks change the physical behavior of the device during the
computation, resulting in a change to the value currently being computed.
They can be realized by various fault injection techniques, ranging from
clock/voltage glitching and laser injection to rowhammer. In this paper we
explore the possibility of reverse engineering neural networks by means of
fault attacks. SNIFF stands for sign bit flip fault, which enables the reverse
engineering by changing the sign of intermediate values. We develop the first
exact extraction method on deep-layer feature extractor networks that provably
allows the recovery of the model parameters. Our experiments with the Keras library
show that the precision error for the parameter recovery for the tested
networks is less than with the usage of 64-bit floats, which
improves the current state of the art by 6 orders of magnitude. Additionally,
we discuss the protection techniques against fault injection attacks that can
be applied to enhance the fault resistance.
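The sign-bit-flip fault at the heart of SNIFF can be modelled in software by flipping the top bit of an IEEE-754 double. This is only a model of the induced fault's effect on an intermediate value, not the physical attack or the authors' extraction procedure:

```python
import struct

def flip_sign_bit(x: float) -> float:
    """Flip the IEEE-754 sign bit of a 64-bit float, modelling the
    sign-bit-flip fault SNIFF induces on an intermediate value."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    bits ^= 1 << 63  # the sign bit is the most significant bit
    (faulty,) = struct.unpack("<d", struct.pack("<Q", bits))
    return faulty

# The faulted intermediate keeps its magnitude but changes sign; comparing
# the network's output with and without the fault leaks information about
# the parameters that produced that intermediate value.
print(flip_sign_bit(0.731))   # -0.731
```

Because the fault is deterministic and exact, repeating it across chosen intermediates is what makes an exact, rather than approximate, parameter recovery possible.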
FPGA-based module for SURF extraction
We present a complete hardware and software solution: an FPGA-based embedded computer vision module capable of carrying out the SURF image feature extraction algorithm. Aside from image analysis, the module embeds a Linux distribution that allows running programs specifically tailored for particular applications. The module is based on a Virtex-5 FXT FPGA, which features powerful configurable logic and an embedded PowerPC processor. We describe the module hardware as well as the custom FPGA image processing cores that implement the algorithm's most computationally expensive stage, the interest point detection. The module's overall performance is evaluated and compared to CPU- and GPU-based solutions. Results show that the embedded module achieves distinctiveness comparable to the SURF software implementation running on a standard CPU while being faster and consuming significantly less power and space. It therefore makes the SURF algorithm usable in applications with power and space constraints, such as autonomous navigation of small mobile robots.
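The interest point detection stage that the FPGA cores accelerate is based on the determinant of the Hessian. As a sketch of that principle only: real SURF approximates Gaussian second derivatives with box filters over an integral image for speed, whereas this illustration uses exact Gaussian derivatives via SciPy:

```python
import numpy as np
from scipy import ndimage

def hessian_response(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Determinant-of-Hessian blob response at scale sigma.

    SURF's detector thresholds this response (computed with box-filter
    approximations, and with the Lxy term down-weighted by ~0.9) and takes
    local maxima over space and scale as interest points.
    """
    Lyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))  # d2/dy2
    Lxx = ndimage.gaussian_filter(image, sigma, order=(0, 2))  # d2/dx2
    Lxy = ndimage.gaussian_filter(image, sigma, order=(1, 1))  # d2/dxdy
    return Lxx * Lyy - Lxy ** 2

# A small bright blob produces a strong response at its centre.
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0
resp = hessian_response(img)
```

Because each response pixel depends only on a fixed local neighbourhood, the computation parallelizes naturally, which is what makes it a good fit for dedicated FPGA cores.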