133 research outputs found

    Updating Distance Maps When Objects Move

    Using a discrete distance transform, one can quickly build a map of the distance from a goal to every point in a digital map. Using this map, one can easily solve the shortest path problem from any point by simply following the gradient of the distance map. This technique can be used in any number of dimensions and can incorporate obstacles of arbitrary shape (represented in the digital map), including pseudo-obstacles caused by unattainable configurations of a robotic system. This paper further extends the usefulness of the digital distance transform technique by providing an efficient means for dealing with objects that undergo motion. In particular, an algorithm is presented that allows one to update only those portions of the distance map that can potentially change as an object moves. The technique is based on an analysis of the distance transform as a problem in wave propagation. The regions that must be checked for possible update when an object moves are those that are in its "shadow", or in the shadow of objects that are partially in the shadow of the moving object. The technique can handle multiple goals, and multiple objects moving and interacting in an arbitrary fashion. The algorithm is demonstrated on a number of synthetic two-dimensional examples.
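    The core idea of building a distance map by wavefront propagation and then following its gradient can be sketched as follows. This is a minimal illustration on a 4-connected 2-D grid, not the paper's incremental-update algorithm; all names are illustrative.

```python
from collections import deque

def distance_map(grid, goals):
    """Discrete distance transform on a 4-connected grid.

    grid:  2-D list of bools, True marks an obstacle cell.
    goals: iterable of (row, col) goal cells (multiple goals supported).
    Returns a grid of shortest obstacle-avoiding distances to the nearest
    goal; unreachable and obstacle cells stay None.
    """
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    frontier = deque()
    for r, c in goals:
        dist[r][c] = 0
        frontier.append((r, c))
    # Breadth-first search acts as the expanding wavefront of the transform.
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc] and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                frontier.append((nr, nc))
    return dist

def follow_gradient(dist, start):
    """Walk downhill in the distance map from start to the nearest goal."""
    rows, cols = len(dist), len(dist[0])
    path = [start]
    r, c = start
    while dist[r][c] != 0:
        candidates = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols
                      and dist[r + dr][c + dc] is not None]
        r, c = min(candidates, key=lambda p: dist[p[0]][p[1]])
        path.append((r, c))
    return path
```

    The paper's contribution is to avoid recomputing this whole map when an obstacle moves, re-propagating only inside the moving object's "shadow" region.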

    CALIPER: Continuous Authentication Layered with Integrated PKI Encoding Recognition

    Architectures relying on continuous authentication require a secure way to challenge the user's identity without trusting that the Continuous Authentication Subsystem (CAS) has not been compromised, i.e., that the response to the layer which manages service/application access is not fake. In this paper, we introduce the CALIPER protocol, in which a separate Continuous Access Verification Entity (CAVE) directly challenges the user's identity in a continuous authentication regime. Instead of simply returning authentication probabilities or confidence scores, CALIPER's CAS uses live hard and soft biometric samples from the user to extract a cryptographic private key embedded in a challenge posed by the CAVE. The CAS then uses this key to sign a response to the CAVE. CALIPER supports multiple modalities, key lengths, and security levels and can be applied in two scenarios: one where the CAS must authenticate its user to a CAVE running on a remote server (device-server) for access to remote application data, and another where the CAS must authenticate its user to a locally running trusted computing module (TCM) for access to local application data (device-TCM). We further demonstrate that CALIPER can leverage device hardware resources to enable privacy and security even when the device's kernel is compromised, and we show how this authentication protocol can even be expanded to obfuscate direct kernel object manipulation (DKOM) malware. Comment: Accepted to the CVPR 2016 Biometrics Workshop.
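    The challenge-response structure described above can be sketched as follows. This is a hypothetical illustration only: the class and function names are invented here, and the real protocol binds the key to biometric samples via cryptographic key-embedding (not the plain hash used as a stand-in below).

```python
import hashlib
import hmac
import os

def derive_key(biometric_sample: bytes, helper_data: bytes) -> bytes:
    """Stand-in for recovering a private key from live biometric samples.

    In CALIPER the key is embedded in the CAVE's challenge and is only
    recoverable with a matching biometric; a keyed hash merely models
    that dependency for illustration.
    """
    return hashlib.sha256(helper_data + biometric_sample).digest()

class CAVE:
    """Continuous Access Verification Entity: poses and checks challenges."""

    def __init__(self, enrolled_sample: bytes):
        self.helper_data = os.urandom(16)      # published with the challenge
        self.expected_key = derive_key(enrolled_sample, self.helper_data)
        self.nonce = b""

    def challenge(self) -> bytes:
        self.nonce = os.urandom(16)            # fresh nonce per challenge
        return self.nonce

    def verify(self, response: bytes) -> bool:
        expected = hmac.new(self.expected_key, self.nonce,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

class CAS:
    """Continuous Authentication Subsystem on the user's device."""

    def respond(self, live_sample: bytes, helper_data: bytes,
                nonce: bytes) -> bytes:
        key = derive_key(live_sample, helper_data)   # fails for an impostor
        return hmac.new(key, nonce, hashlib.sha256).digest()
```

    The point of the design is that the CAS never returns a bare confidence score the access layer must trust; a valid signature is only producible when the live biometric actually recovers the key.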

    Adversarial Robustness: Softmax versus Openmax

    Deep neural networks (DNNs) provide state-of-the-art results on various tasks and are widely used in real-world applications. However, it was discovered that machine learning models, including the best performing DNNs, suffer from a fundamental problem: they can unexpectedly and confidently misclassify examples formed by slightly perturbing otherwise correctly recognized inputs. Various approaches have been developed for efficiently generating these so-called adversarial examples, but those mostly rely on ascending the gradient of the loss. In this paper, we introduce the novel logits optimized targeting system (LOTS) to directly manipulate deep features captured at the penultimate layer. Using LOTS, we analyze and compare the adversarial robustness of DNNs using the traditional Softmax layer with Openmax, which was designed to provide open set recognition by defining classes derived from deep representations, and is claimed to be more robust to adversarial perturbations. We demonstrate that Openmax provides less vulnerable systems than Softmax to traditional attacks; however, we show that it can be equally susceptible to more sophisticated adversarial generation techniques that directly work on deep representations. Comment: Accepted to the British Machine Vision Conference (BMVC) 2017.
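    The "ascending the gradient of the loss" family of attacks the abstract contrasts with LOTS can be illustrated with a fast-gradient-sign step. This is a toy sketch on a logistic-regression "model" rather than a real DNN, and it is not the paper's LOTS method; all values are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """Fast-gradient-sign perturbation that increases the model's loss.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + epsilon * np.sign(grad_x)

# A model that confidently classifies x as class 1 ...
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.5, -0.5])          # sigmoid(3.5) ~ 0.97, class 1
x_adv = fgsm(x, y=1.0, w=w, b=b, epsilon=2.0)
# ... is pushed across the decision boundary by one sign-based step.
```

    Attacks like LOTS differ in that they optimize a target in the deep feature space (the penultimate layer) instead of ascending the classification loss, which is why defenses calibrated against loss-gradient attacks can remain vulnerable.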

    Are Accuracy and Robustness Correlated?

    Machine learning models are vulnerable to adversarial examples formed by applying small, carefully chosen perturbations to inputs that cause unexpected classification errors. In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks, including Residual Networks, the best performing models on the ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the adversarial example generation techniques with respect to the quality of the produced images, and measure the robustness of the tested machine learning models to adversarial examples. Finally, we conduct large-scale experiments on cross-model adversarial portability. We find that adversarial examples are mostly transferable across similar network topologies, and we demonstrate that better machine learning models are less vulnerable to adversarial examples. Comment: Accepted for publication at ICMLA 2016.