Updating Distance Maps When Objects Move
Using a discrete distance transform, one can quickly build a map of the distance from a goal to every point in a digital map. Using this map, one can easily solve the shortest path problem from any point by simply following the gradient of the distance map. This technique can be used in any number of dimensions and can incorporate obstacles of arbitrary shape (represented in the digital map), including pseudo-obstacles caused by unattainable configurations of a robotic system. This paper further extends the usefulness of the digital distance transform technique by providing an efficient means for dealing with objects that undergo motion. In particular, an algorithm is presented that allows one to update only those portions of the distance map that will potentially change as an object moves. The technique is based on an analysis of the distance transform as a problem in wave propagation. The regions that must be checked for possible update when an object moves are those that are in its "shadow", or in the shadow of objects that are partially in the shadow of the moving object. The technique can handle multiple goals, and multiple objects moving and interacting in an arbitrary fashion. The algorithm is demonstrated on a number of synthetic two-dimensional examples.
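The wavefront view of the distance transform described above can be illustrated with a minimal sketch (function and grid names are hypothetical, not from the paper): a breadth-first sweep from the goal cells builds the distance map, and steepest descent on that map recovers a shortest path.

```python
from collections import deque

def distance_map(grid, goals):
    """Brute-force wavefront (breadth-first) distance transform on a
    4-connected grid. grid: 2-D list with 1 = obstacle, 0 = free;
    goals: list of (row, col) cells. Returns hop counts from the
    nearest goal (None = obstacle or unreachable)."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    frontier = deque()
    for r, c in goals:                      # seed the wavefront at every goal
        dist[r][c] = 0
        frontier.append((r, c))
    while frontier:                         # expand one wavefront ring at a time
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                frontier.append((nr, nc))
    return dist

def shortest_path(dist, start):
    """Follow the discrete gradient of the distance map down to a goal."""
    path = [start]
    r, c = start
    while dist[r][c] != 0:
        r, c = min(((r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < len(dist) and 0 <= c + dc < len(dist[0])
                    and dist[r + dr][c + dc] is not None),
                   key=lambda p: dist[p[0]][p[1]])
        path.append((r, c))
    return path
```

The paper's contribution is to avoid recomputing this map from scratch when an obstacle moves; the sketch above is only the baseline full recomputation against which that incremental update is measured.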
Smoothness assumptions in human and machine vision, and their implications for optimal surface interpolation
In this paper we shall examine what smoothness assumptions are made about object surfaces, object motion, and image intensities. We begin by looking into the physiological limits of vision and how these might influence our perception of smoothness. We then look at a sampling of the computer vision and psychology literature, inferring smoothness constraints from the mathematical assumptions tacitly presumed by researchers. This look at computer vision and psychology of vision is not meant to be an inclusive study, but rather representative of the assumptions made, and in part representative of the mathematical models used therein. We shall conclude that prevalent assumptions are that surfaces, motion, and intensity images are functions in C2, C1, and C2, respectively. In the latter portion of this paper we examine one use of explicit smoothness assumptions in the definition of an existing method for obtaining "optimal" surface interpolation. We briefly introduce the nomenclature of information-based complexity, originated by Traub, Wozniakowski, and their colleagues, which is the mathematical machinery used in obtaining these "optimal" surfaces. This theory requires that we know the class of functions from which our desired surface comes, and part of the definition of a class is its degree of smoothness. We then survey many possible classes for the visual interpolation problem of two-dimensional surfaces, and state formulas from which one can obtain the optimal surface interpolating given depth data.
CALIPER: Continuous Authentication Layered with Integrated PKI Encoding Recognition
Architectures relying on continuous authentication require a secure way to challenge the user's identity without trusting that the Continuous Authentication Subsystem (CAS) has not been compromised, i.e., that the response to the layer which manages service/application access is not fake. In this paper, we introduce the CALIPER protocol, in which a separate Continuous Access Verification Entity (CAVE) directly challenges the user's identity in a continuous authentication regime. Instead of simply returning authentication probabilities or confidence scores, CALIPER's CAS uses live hard and soft biometric samples from the user to extract a cryptographic private key embedded in a challenge posed by the CAVE. The CAS then uses this key to sign a response to the CAVE. CALIPER supports multiple modalities, key lengths, and security levels and can be applied in two scenarios: one where the CAS must authenticate its user to a CAVE running on a remote server (device-server) for access to remote application data, and another where the CAS must authenticate its user to a locally running trusted computing module (TCM) for access to local application data (device-TCM). We further demonstrate that CALIPER can leverage device hardware resources to enable privacy and security even when the device's kernel is compromised, and we show how this authentication protocol can even be expanded to obfuscate direct kernel object manipulation (DKOM) malware.
Comment: Accepted to CVPR 2016 Biometrics Workshop
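The challenge-response flow described above can be caricatured in a few lines. This is a toy sketch, not the CALIPER protocol itself: real CALIPER uses public-key signatures and extracts the key from noisy biometric samples, whereas this stand-in hashes an exact sample and signs with an HMAC; all function names are hypothetical.

```python
import hashlib
import hmac
import os

def cave_issue_challenge(enrolled_biometric):
    """CAVE side (toy): derive the expected key from the enrolled template
    (stand-in for CALIPER's key embedded in the challenge) and pose a
    random nonce for the CAS to sign."""
    key = hashlib.sha256(enrolled_biometric).digest()
    nonce = os.urandom(16)
    return nonce, key

def cas_sign_response(live_biometric, nonce):
    """CAS side: re-derive the key from a live biometric sample and sign
    the challenge. A real system needs a fuzzy extractor so that noisy
    readings of the same user still yield the same key."""
    key = hashlib.sha256(live_biometric).digest()
    return hmac.new(key, nonce, hashlib.sha256).digest()

def cave_verify(response, nonce, key):
    """CAVE checks the signed response itself, never trusting the CAS's
    own authentication verdict."""
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)
```

The point the abstract makes survives even in this caricature: the CAVE's check succeeds only if the CAS could reproduce the user-derived key, so a compromised CAS cannot simply report "authenticated".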
Visual Surface Interpolation: A Comparison of Two Methods
We critically compare two different methods for visual surface interpolation. One method uses the reproducing kernels of Hilbert spaces to construct a spline interpolating the data, such that this spline is of minimal norm. The other method, presented in Grimson (1981), recovers the surface of minimal norm by direct minimization of the norm with a gradient projection algorithm. We present the problem that each algorithm is attempting to solve, then briefly introduce both methods. The main contribution is an analysis of each algorithm in terms of worst-case running time (serial processor), space complexity, and rough estimates of the running time and space costs for massively parallel implementations. We then conclude with a discussion of the differences in the internal representation of the surface in both algorithms.
Reproducing Kernels for Visual Surface Interpolation.
We examine the details of two related methods for the recovery of visual surfaces from sparse depth data. The methods use the reproducing kernels of Hilbert spaces to construct a spline interpolating the data, such that this spline is of minimal norm. We discuss the numerical properties of the two methods presented, and give example interpolations.
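As a loose illustration of the reproducing-kernel construction mentioned in both abstracts (a sketch with an assumed Gaussian kernel, not the specific surface kernels the papers derive): the minimal-norm interpolant in the kernel's Hilbert space is a kernel expansion over the data sites, with coefficients obtained by solving a linear system on the Gram matrix.

```python
import numpy as np

def rkhs_interpolate(points, values, query, sigma=1.0):
    """Minimal-norm interpolant in the RKHS of a Gaussian kernel:
    s(x) = sum_i a_i * k(x, x_i), where the coefficients a solve
    K a = y (K the Gram matrix on the data), so s interpolates the
    depth values exactly."""
    pts = np.asarray(points, dtype=float)
    qry = np.asarray(query, dtype=float)

    def k(a, b):
        # Gaussian kernel between every row of a and every row of b.
        sq = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq / (2.0 * sigma ** 2))

    K = k(pts, pts)                                    # Gram matrix
    coef = np.linalg.solve(K, np.asarray(values, dtype=float))
    return k(qry, pts) @ coef                          # evaluate spline
```

This is the "direct" of the two strategies the comparison paper discusses: one dense linear solve in the number of data points, as opposed to iterative minimization of the norm.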
Adversarial Robustness: Softmax versus Openmax
Deep neural networks (DNNs) provide state-of-the-art results on various tasks and are widely used in real-world applications. However, it was discovered that machine learning models, including the best-performing DNNs, suffer from a fundamental problem: they can unexpectedly and confidently misclassify examples formed by slightly perturbing otherwise correctly recognized inputs. Various approaches have been developed for efficiently generating these so-called adversarial examples, but those mostly rely on ascending the gradient of loss. In this paper, we introduce the novel logits optimized targeting system (LOTS) to directly manipulate deep features captured at the penultimate layer. Using LOTS, we analyze and compare the adversarial robustness of DNNs using the traditional Softmax layer with Openmax, which was designed to provide open set recognition by defining classes derived from deep representations, and is claimed to be more robust to adversarial perturbations. We demonstrate that Openmax provides less vulnerable systems than Softmax to traditional attacks; however, we show that it can be equally susceptible to more sophisticated adversarial generation techniques that directly work on deep representations.
Comment: Accepted to the British Machine Vision Conference (BMVC) 2017
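The core of a LOTS-style attack on deep representations can be sketched on a toy linear feature extractor (an assumption for illustration; the paper works on the penultimate layer of a trained DNN): perturb the input to pull its deep features toward a chosen target representation by descending the squared feature distance.

```python
import numpy as np

def lots_step(x, W, target_feats, step=0.1):
    """One LOTS-style update on a toy linear feature extractor
    f(x) = W @ x (a stand-in for a DNN's penultimate layer, where the
    gradient would come from backpropagation). Takes a gradient-descent
    step on ||f(x) - t||^2, moving the input's features toward the
    target representation t."""
    feats = W @ x
    grad = 2.0 * W.T @ (feats - target_feats)   # d/dx of ||W x - t||^2
    return x - step * grad
```

Iterating this step until the features are close enough to a target class's representation is what makes the attack operate "directly on deep representations", independently of whether a Softmax or Openmax layer sits on top.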
Are Accuracy and Robustness Correlated?
Machine learning models are vulnerable to adversarial examples formed by applying small, carefully chosen perturbations to inputs that cause unexpected classification errors. In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks, including Residual Networks, the best performing models on the ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the adversarial example generation techniques with respect to the quality of the produced images, and measure the robustness of the tested machine learning models to adversarial examples. Finally, we conduct large-scale experiments on cross-model adversarial portability. We find that adversarial examples are mostly transferable across similar network topologies, and we demonstrate that better machine learning models are less vulnerable to adversarial examples.
Comment: Accepted for publication at ICMLA 2016
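A minimal example of the kind of gradient-based generation approach compared above, written for a toy logistic-regression model (the experiments in the text use deep CNNs; the principle of stepping along the sign of the loss gradient with respect to the input is the same):

```python
import numpy as np

def fgsm(x, w, b, y, eps=0.1):
    """Fast-gradient-sign style perturbation of input x for a logistic
    model p = sigmoid(w @ x + b) with true label y in {0, 1}: step in
    the sign of the cross-entropy gradient w.r.t. the input, which
    pushes the model's score away from the correct label."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # predicted P(class = 1)
    grad = (p - y) * w                        # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad)
```

Because the perturbation depends only on the gradient direction, not on any one model's internals beyond that, the same adversarial input often transfers between models with similar decision surfaces, which is the cross-model portability effect the abstract measures.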