Fast and Efficient Zero-Learning Image Fusion
We propose a real-time image fusion method using pre-trained neural networks.
Our method generates a single image containing features from multiple sources.
We first decompose images into a base layer representing large scale intensity
variations, and a detail layer containing small scale changes. We use visual
saliency to fuse the base layers, and deep feature maps extracted from a
pre-trained neural network to fuse the detail layers. We conduct ablation
studies to analyze our method's parameters such as decomposition filters,
weight construction methods, and network depth and architecture. Then, we
validate its effectiveness and speed on thermal, medical, and multi-focus
fusion. We also apply it to multiple image inputs such as multi-exposure
sequences. The experimental results demonstrate that our technique achieves
state-of-the-art performance in visual quality, objective assessment, and
runtime efficiency. Comment: 13 pages, 10 figures
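The two-scale pipeline this abstract describes can be sketched in a few lines. In this simplified stand-in, a box filter plays the role of the decomposition filter, plain averaging replaces the visual-saliency base weights, and per-pixel max-absolute selection replaces the deep-feature detail weights; the real method's choices are the subject of the paper's ablations.

```python
import numpy as np

def box_blur(img, k=5):
    """Box filter as a stand-in decomposition filter (base layer)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(images):
    """Two-scale fusion sketch: average the base layers, pick the
    strongest detail coefficient per pixel (crude substitutes for the
    paper's saliency and deep-feature weight maps)."""
    bases = [box_blur(im) for im in images]
    details = [im - b for im, b in zip(images, bases)]
    fused_base = np.mean(bases, axis=0)
    stacked = np.stack(details)
    idx = np.argmax(np.abs(stacked), axis=0)
    fused_detail = np.take_along_axis(stacked, idx[None], axis=0)[0]
    return fused_base + fused_detail
```

Fusing two flat images of intensity 0 and 1 yields a flat image of intensity 0.5: the detail layers vanish and only the averaged base survives.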
VERIFAI: A Toolkit for the Design and Analysis of Artificial Intelligence-Based Systems
We present VERIFAI, a software toolkit for the formal design and analysis of
systems that include artificial intelligence (AI) and machine learning (ML)
components. VERIFAI particularly seeks to address challenges with applying
formal methods to perception and ML components, including those based on neural
networks, and to model and analyze system behavior in the presence of
environment uncertainty. We describe the initial version of VERIFAI which
centers on simulation guided by formal models and specifications. Several use
cases are illustrated with examples, including temporal-logic falsification,
model-based systematic fuzz testing, parameter synthesis, counterexample
analysis, and data set augmentation.
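The simulation-guided falsification use case can be illustrated with a minimal loop. This is not the VERIFAI API; the `simulate`, `robustness`, and `sample_env` callables are hypothetical placeholders for a simulator, a quantitative specification monitor, and an environment sampler.

```python
import random

def falsify(simulate, robustness, sample_env, budget=100, seed=0):
    """Illustrative falsification loop (not the VERIFAI interface):
    sample environment parameters, run the simulator, and record any
    run whose specification robustness is negative (i.e. violated)."""
    rng = random.Random(seed)
    counterexamples = []
    for _ in range(budget):
        env = sample_env(rng)
        trace = simulate(env)
        if robustness(trace) < 0:  # negative robustness = spec violation
            counterexamples.append(env)
    return counterexamples
```

For a toy system whose trace is just a speed value and whose spec requires speed <= 1, the loop returns exactly the sampled environments that exceed the limit.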
Adaptive O-CNN: A Patch-based Deep Representation of 3D Shapes
We present an Adaptive Octree-based Convolutional Neural Network (Adaptive
O-CNN) for efficient 3D shape encoding and decoding. Different from
volumetric-based or octree-based CNN methods that represent a 3D shape with
voxels in the same resolution, our method represents a 3D shape adaptively with
octants at different levels and models the 3D shape within each octant with a
planar patch. Based on this adaptive patch-based representation, we propose an
Adaptive O-CNN encoder and decoder for encoding and decoding 3D shapes. The
Adaptive O-CNN encoder takes the planar patch normal and displacement as input
and performs 3D convolutions only at the octants at each level, while the
Adaptive O-CNN decoder infers the shape occupancy and subdivision status of
octants at each level and estimates the best plane normal and displacement for
each leaf octant. As a general framework for 3D shape analysis and generation,
the Adaptive O-CNN not only reduces the memory and computational cost, but also
offers better shape generation capability than the existing 3D-CNN approaches.
We validate Adaptive O-CNN in terms of efficiency and effectiveness on
different shape analysis and generation tasks, including shape classification,
3D autoencoding, shape prediction from a single image, and shape completion for
noisy and incomplete point clouds.
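The adaptive rule at the heart of the representation, namely keeping an octant as a single planar patch only when a plane fits its points well, can be sketched with a least-squares plane fit. The tolerance value and helper names here are illustrative, not taken from the paper.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set via PCA: the normal is
    the direction of smallest variance; also return mean residual."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residual = np.abs(centered @ normal).mean()
    return normal, residual

def needs_subdivision(points, tol=1e-3):
    """Adaptive-octree criterion sketched from the abstract: subdivide
    an octant only if a single planar patch fits its points poorly."""
    if len(points) < 3:
        return False
    _, residual = fit_plane(points)
    return residual > tol
```

A grid of coplanar points stays a single patch, while points on a curved surface trigger subdivision.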
Physics-guided Neural Networks (PGNN): An Application in Lake Temperature Modeling
This paper introduces a novel framework for combining scientific knowledge of
physics-based models with neural networks to advance scientific discovery. This
framework, termed the physics-guided neural network (PGNN), leverages the output
of physics-based model simulations along with observational features to
generate predictions using a neural network architecture. Further, this paper
presents a novel framework for using physics-based loss functions in the
learning objective of neural networks, to ensure that the model predictions not
only show lower errors on the training set but are also scientifically
consistent with the known physics on the unlabeled set. We illustrate the
effectiveness of PGNN for the problem of lake temperature modeling, where
physical relationships between the temperature, density, and depth of water are
used to design a physics-based loss function. By using scientific knowledge to
guide the construction and learning of neural networks, we are able to show
that the proposed framework ensures better generalizability as well as
scientific consistency of results. Comment: submitted to ACM SIGKDD 201
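A physics-based loss of the kind the abstract describes can be sketched for the lake-temperature setting: alongside the empirical error, penalize predictions whose implied water density decreases with depth, since denser water should lie below. The density formula is a common empirical temperature–density relation; the loss weight and array layout are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def density(temp_c):
    """Water density (kg/m^3) from temperature via a standard
    empirical formula; maximum density is near 4 degrees C."""
    return 1000.0 * (1 - (temp_c + 288.9414) * (temp_c - 3.9863) ** 2
                     / (508929.2 * (temp_c + 68.12963)))

def pgnn_loss(pred_temp, true_temp, lam=1.0):
    """Empirical MSE plus a physics term: predicted density should be
    non-decreasing with depth (entries ordered shallow -> deep)."""
    mse = np.mean((pred_temp - true_temp) ** 2)
    rho = density(pred_temp)
    violation = np.maximum(rho[:-1] - rho[1:], 0.0)  # shallower denser than deeper
    return mse + lam * np.mean(violation)
```

A physically consistent profile (warm at the surface, cooling toward 4 degrees C at depth) incurs no physics penalty, while the inverted profile does even when it matches the labels exactly; this is how the unlabeled-set consistency term can reject scientifically implausible fits.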
DeepFault: Fault Localization for Deep Neural Networks
Deep Neural Networks (DNNs) are increasingly deployed in safety-critical
applications including autonomous vehicles and medical diagnostics. To reduce
the residual risk for unexpected DNN behaviour and provide evidence for their
trustworthy operation, DNNs should be thoroughly tested. The DeepFault whitebox
DNN testing approach presented in our paper addresses this challenge by
employing suspiciousness measures inspired by fault localization to establish
the hit spectrum of neurons and identify suspicious neurons whose weights have
not been calibrated correctly and thus are considered responsible for
inadequate DNN performance. DeepFault also uses a suspiciousness-guided
algorithm to synthesize new inputs, from correctly classified inputs, that
increase the activation values of suspicious neurons. Our empirical evaluation
on several DNN instances trained on MNIST and CIFAR-10 datasets shows that
DeepFault is effective in identifying suspicious neurons. Also, the inputs
synthesized by DeepFault closely resemble the original inputs, exercise the
identified suspicious neurons and are highly adversarial. Comment: 15 pages
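A suspiciousness measure over a neuron's hit spectrum can be sketched with the classic Tarantula formula from fault localization (one of the family of measures this line of work draws on): a neuron active mostly in failing runs scores near 1.

```python
def suspiciousness(a_cf, a_uf, a_cs, a_us):
    """Tarantula suspiciousness from a neuron's hit spectrum:
    a_cf = active in failing runs, a_uf = inactive in failing runs,
    a_cs = active in passing runs, a_us = inactive in passing runs."""
    fail_ratio = a_cf / (a_cf + a_uf) if (a_cf + a_uf) else 0.0
    pass_ratio = a_cs / (a_cs + a_us) if (a_cs + a_us) else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0
```

A neuron active in every failing run and no passing run scores 1.0; one active equally often in both scores 0.5.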
Octree guided CNN with Spherical Kernels for 3D Point Clouds
We propose an octree guided neural network architecture and spherical
convolutional kernel for machine learning from arbitrary 3D point clouds. The
network architecture capitalizes on the sparse nature of irregular point
clouds, and hierarchically coarsens the data representation with space
partitioning. At the same time, the proposed spherical kernels systematically
quantize point neighborhoods to identify local geometric structures in the
data, while maintaining the properties of translation-invariance and asymmetry.
We specify spherical kernels with the help of network neurons that in turn are
associated with spatial locations. We exploit this association to avert dynamic
kernel generation during network training that enables efficient learning with
high resolution point clouds. The effectiveness of the proposed technique is
established on the benchmark tasks of 3D object classification and
segmentation, achieving new state-of-the-art on ShapeNet and RueMonge2014
datasets. Comment: Accepted in IEEE CVPR 2019. arXiv admin note: substantial text overlap with arXiv:1805.0787
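The systematic quantization of point neighborhoods that the abstract describes can be sketched as a binning function: each neighbor's offset is assigned to a spherical bin indexed by radial shell, azimuth, and elevation. The bin counts and function name here are illustrative, not the paper's configuration.

```python
import math

def spherical_bin(dx, dy, dz, radius=1.0, n_r=2, n_theta=4, n_phi=4):
    """Quantize a neighbor offset into one of n_r * n_theta * n_phi
    spherical bins (radial shell, azimuth, elevation) -- the kind of
    partitioning a spherical kernel uses to assign shared weights."""
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    if r == 0 or r > radius:
        return None  # self-point or outside the kernel support
    i_r = min(int(n_r * r / radius), n_r - 1)
    theta = math.atan2(dy, dx) + math.pi           # azimuth in [0, 2*pi]
    i_t = min(int(n_theta * theta / (2 * math.pi)), n_theta - 1)
    phi = math.acos(max(-1.0, min(1.0, dz / r)))   # elevation in [0, pi]
    i_p = min(int(n_phi * phi / math.pi), n_phi - 1)
    return (i_r * n_theta + i_t) * n_phi + i_p
```

Because opposite offsets land in different azimuth bins, the binning is asymmetric, which is one of the kernel properties the abstract highlights.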
How to Learn a Model Checker
We show how machine-learning techniques, particularly neural networks, offer
a very effective and highly efficient solution to the approximate
model-checking problem for continuous and hybrid systems, a solution where the
general-purpose model checker is replaced by a model-specific classifier
trained by sampling model trajectories. To the best of our knowledge, we are
the first to establish this link from machine learning to model checking. Our
method comprises a pipeline of analysis techniques for estimating and obtaining
statistical guarantees on the classifier's prediction performance, as well as
tuning techniques to improve such performance. Our experimental evaluation
considers the time-bounded reachability problem for three well-established
benchmarks in the hybrid systems community. On these examples, we achieve an
accuracy of 99.82% to 100% and a false-negative rate (incorrectly predicting
that unsafe states are not reachable from a given state) of 0.0007 to 0. We
believe that this level of accuracy is acceptable in many practical
applications and we show how the approximate model checker can be made more
conservative by tuning the classifier through further training and selection of
the classification threshold. Comment: 16 pages, 13 figures, short version submitted to HSCC201
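The pipeline the abstract describes, training a model-specific classifier on labeled sample trajectories, can be sketched end to end. Here a 1-nearest-neighbour rule stands in for the paper's neural classifier, and the simulation oracle labels each sampled state by time-bounded reachability; all names are illustrative.

```python
import random

def label_by_simulation(state, step, unsafe, horizon):
    """Time-bounded reachability oracle: simulate from `state` and
    report whether an unsafe state is reached within `horizon` steps."""
    for _ in range(horizon):
        if unsafe(state):
            return 1
        state = step(state)
    return 1 if unsafe(state) else 0

def train_checker(step, unsafe, sample_state, horizon=20, n=200, seed=0):
    """Approximate model checker: label sampled states by simulation,
    then answer queries with 1-nearest-neighbour classification (a
    stand-in for the trained neural classifier)."""
    rng = random.Random(seed)
    data = [(s, label_by_simulation(s, step, unsafe, horizon))
            for s in (sample_state(rng) for _ in range(n))]
    def check(state):
        _, label = min(data, key=lambda d: abs(d[0] - state))
        return label
    return check
```

For the toy dynamics x -> x + 0.1 with unsafe region x >= 1, states above roughly -1 reach the unsafe set within 20 steps, and the trained checker reproduces that boundary on queries away from it. Making the classifier more conservative, as the abstract discusses, would correspond to biasing this decision toward predicting "reachable".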
Towards Proof Synthesis Guided by Neural Machine Translation for Intuitionistic Propositional Logic
Inspired by the recent evolution of deep neural networks (DNNs) in machine
learning, we explore their application to PL-related topics. This paper is the
first step towards this goal; we propose a proof-synthesis method for the
negation-free propositional logic in which we use a DNN to obtain a guide of
proof search. The idea is to view the proof-synthesis problem as a translation
from a proposition to its proof. We train seq2seq, which is a popular network
in neural machine translation, so that it generates a proof encoded as a
λ-term of a given proposition. We implement the whole framework and
empirically observe that a generated proof term is close to a correct proof in
terms of the tree edit distance of ASTs. This observation justifies using the
output from a trained seq2seq model as a guide for proof search.
3D Human Body Reconstruction from a Single Image via Volumetric Regression
This paper proposes the use of an end-to-end Convolutional Neural Network for
direct reconstruction of the 3D geometry of humans via volumetric regression.
The proposed method does not require the fitting of a shape model and can be
trained to work from a variety of input types, whether it be landmarks, images
or segmentation masks. Additionally, non-visible parts, either self-occluded or
otherwise, are still reconstructed, which is not the case with depth map
regression. We present results that show that our method can handle both pose
variation and detailed reconstruction given appropriate datasets for training. Comment: Accepted to ECCV Workshops (PeopleCap) 201
Verification for Machine Learning, Autonomy, and Neural Networks Survey
This survey presents an overview of verification techniques for autonomous
systems, with a focus on safety-critical autonomous cyber-physical systems
(CPS) and subcomponents thereof. Autonomy in CPS is enabled by recent advances
in artificial intelligence (AI) and machine learning (ML) through approaches
such as deep neural networks (DNNs), embedded in so-called learning enabled
components (LECs) that accomplish tasks from classification to control.
Recently, the formal methods and formal verification community has developed
methods to characterize behaviors in these LECs with eventual goals of formally
verifying specifications for LECs, and this article presents a survey of many
of these recent approaches.