
    Interpretable Probabilistic Password Strength Meters via Deep Learning

    Probabilistic password strength meters have proven to be the most accurate tools for measuring password strength. Unfortunately, by construction, they produce only an opaque security estimate that fails to fully support the user during password composition. In the present work, we take the first steps towards breaking the intelligibility barrier of this compelling class of meters. We show that probabilistic password meters inherently possess the capability of describing the latent relation between password strength and password structure. In our approach, the security contribution of each character composing a password is disentangled and used to provide explicit, fine-grained feedback to the user. Furthermore, unlike existing heuristic constructions, our method is free from human bias and, more importantly, its feedback has a clear probabilistic interpretation. In our contribution: (1) we formulate the theoretical foundations of interpretable probabilistic password strength meters; (2) we describe how they can be implemented via an efficient and lightweight deep learning framework suitable for client-side operability.
    Comment: An abridged version of this paper appears in the proceedings of the 25th European Symposium on Research in Computer Security (ESORICS) 2020.
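    The core idea lends itself to a compact illustration. Below is a minimal, hypothetical sketch of how a probabilistic meter can surface per-character feedback: a character-level model factorises the password probability into conditional character probabilities, and each character's negative log-probability (its surprisal) becomes its strength contribution. A toy Laplace-smoothed bigram model stands in for the paper's deep learning model; the function names, the tiny training corpus, and the example password are illustrative only.

```python
# Hypothetical sketch: per-character strength contributions from a
# probabilistic meter. A character-level model factorises
# P(password) = prod_i P(c_i | context); the per-character surprisal
# (negative log-probability) serves as fine-grained feedback.
import math
from collections import defaultdict

def train_bigram(passwords, alphabet):
    """Toy Laplace-smoothed bigram model standing in for a deep model."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        prev = "^"                      # start-of-password symbol
        for ch in pw:
            counts[prev][ch] += 1
            prev = ch
    def prob(prev, ch):                 # smoothed conditional probability
        total = sum(counts[prev].values()) + len(alphabet)
        return (counts[prev][ch] + 1) / total
    return prob

def per_char_feedback(pw, prob):
    """Return (char, surprisal in bits) per position; higher = stronger."""
    prev, out = "^", []
    for ch in pw:
        out.append((ch, -math.log2(prob(prev, ch))))
        prev = ch
    return out

if __name__ == "__main__":
    leaked = ["password", "passw0rd", "letmein", "dragon", "qwerty"]  # toy corpus
    alphabet = set("".join(leaked)) | {"!", "7", "X"}
    prob = train_bigram(leaked, alphabet)
    for ch, bits in per_char_feedback("passw7rX!", prob):
        print(f"{ch}: {bits:.2f} bits")
```

    In this framing, highly predictable continuations receive low surprisal and can be flagged to the user as weak spots, which is the kind of fine-grained, probabilistically grounded feedback the abstract describes.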

    Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks

    Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters and construct inexpensive untargeted black-box attacks. We demonstrate that this can achieve an average of fewer than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models against adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level, class-agnostic features. These findings give insight into the nature of some universal adversarial perturbations and how they could be generated in other applications.
    Comment: 16 pages, 10 figures. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS '19).
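    To make the setting concrete, here is a hedged sketch of the procedural-noise idea: a handful of parameters define one input-agnostic perturbation that is clipped to an L-infinity budget, added to every image, and scored by how many black-box predictions change. A simple oriented sine grating stands in for the Perlin/Gabor noise functions used in the paper, and `query_model`, the pixel range [0, 1], and the 8/255 budget are assumptions for illustration; a Bayesian optimizer would search over the few noise parameters.

```python
# Hedged sketch of a procedural-noise UAP: a few parameters define a single
# perturbation applied to every input. A sine grating is a stand-in for the
# paper's Perlin/Gabor noise; `query_model` is a hypothetical black-box
# predictor returning labels only.
import numpy as np

def procedural_pattern(h, w, freq, angle, phase):
    """Low-dimensional procedural noise: an oriented sine grating in [-1, 1]."""
    ys, xs = np.mgrid[0:h, 0:w]
    proj = xs * np.cos(angle) + ys * np.sin(angle)
    return np.sin(2 * np.pi * freq * proj / max(h, w) + phase)

def make_uap(shape, params, eps=8 / 255):
    """Tile the pattern across channels and clip to an L-infinity budget eps."""
    h, w, c = shape
    pattern = procedural_pattern(h, w, *params)
    return np.clip(eps * pattern, -eps, eps)[..., None].repeat(c, axis=-1)

def universal_evasion_rate(images, labels, params, query_model, eps=8 / 255):
    """Fraction of inputs whose prediction changes under one shared perturbation."""
    uap = make_uap(images.shape[1:], params, eps)
    adv = np.clip(images + uap, 0.0, 1.0)   # keep pixels in the valid range
    preds = query_model(adv)                # black-box: labels only, no gradients
    return float(np.mean(preds != labels))
```

    In this framing, the attack surface is just the noise parameters (frequency, orientation, phase here), which is what makes a query-efficient black-box search such as Bayesian optimization feasible.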

    Learning Universal Adversarial Perturbations with Generative Models

    Neural networks are known to be vulnerable to adversarial examples: inputs that have been intentionally perturbed to remain visually similar to the source input but cause a misclassification. It was recently shown that, given a dataset and classifier, there exist so-called universal adversarial perturbations: a single perturbation that causes a misclassification when applied to any input. In this work, we introduce universal adversarial networks, a generative network that is capable of fooling a target classifier when its generated output is added to a clean sample from a dataset. We show that this technique improves on known universal adversarial attacks.
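    A minimal sketch of the idea, not the authors' architecture: a small generator maps a fixed latent vector to a single bounded perturbation and is trained so that, added to any clean sample, it flips a frozen target classifier's predictions. The toy MLP target, random data, epsilon bound, and training schedule below are illustrative stand-ins.

```python
# Minimal sketch (not the authors' architecture) of a universal adversarial
# network: a generator produces one perturbation that should fool a frozen
# target classifier on any clean sample it is added to.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n_classes, eps = 32, 5, 0.1

target = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, n_classes))
target.requires_grad_(False)            # frozen target classifier

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, d), nn.Tanh())
z = torch.randn(1, 16)                  # fixed latent -> one universal perturbation
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

x = torch.randn(512, d)                 # toy "clean" dataset
y = target(x).argmax(dim=1)             # original predictions to move away from

for step in range(200):
    delta = eps * generator(z)          # single perturbation, bounded by tanh * eps
    logits = target(x + delta)
    loss = -nn.functional.cross_entropy(logits, y)   # push predictions off y
    opt.zero_grad()
    loss.backward()
    opt.step()

fool_rate = (target(x + eps * generator(z)).argmax(dim=1) != y).float().mean()
print(f"universal fooling rate on toy data: {fool_rate:.2%}")
```

    The key design choice the abstract highlights is that the perturbation is produced by a trainable generator rather than optimized directly, so one network can emit a whole family of universal perturbations.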

    Automated Measurement of Heavy Equipment Greenhouse Gas Emission: The case of Road/Bridge Construction and Maintenance

    Road/bridge construction and maintenance projects are major contributors to greenhouse gas (GHG) emissions such as carbon dioxide (CO2), mainly due to extensive use of heavy-duty diesel construction equipment and large-scale earthworks and earthmoving operations. Heavy equipment is a costly resource, and its underutilization could result in significant budget overruns. A practical way to cut emissions is to reduce the time equipment spends doing non-value-added activities and/or idling. Recent research into automated equipment monitoring with sensors and Internet-of-Things (IoT) frameworks has leveraged machine learning algorithms to predict the behavior of tracked entities. In this project, end-to-end deep learning models were developed that learn to accurately classify the activities of construction equipment from the vibration patterns picked up by accelerometers attached to the equipment. Data was collected from two types of real-world construction equipment, both used extensively in road/bridge construction and maintenance projects: excavators and vibratory rollers. Three different deep learning models were developed and their validation accuracies compared: a baseline convolutional neural network (CNN); a hybrid convolutional and recurrent long short-term memory neural network (LSTM); and a temporal convolutional network (TCN). Results indicated that the TCN model performed best, the LSTM model second best, and the CNN model worst; the TCN model achieved over 83% validation accuracy in recognizing activities. Using deep learning methodologies can significantly increase emission estimation accuracy for heavy equipment and help decision-makers reliably evaluate the environmental impact of heavy civil and infrastructure projects. Reducing the carbon footprint and fuel use of heavy equipment in road/bridge projects has direct and indirect impacts on health and the economy. Public infrastructure projects can leverage the proposed system to reduce the environmental cost of infrastructure projects.
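    As a rough illustration of the modeling task, the sketch below shows a baseline-style 1D CNN that classifies fixed-length windows of 3-axis accelerometer data into equipment activities. The window length, layer sizes, and activity labels are assumptions for illustration, not the report's actual configuration.

```python
# Hedged sketch of the baseline CNN idea: classify fixed-length windows of
# 3-axis accelerometer data into equipment activities. Shapes, labels, and
# hyperparameters are illustrative stand-ins.
import torch
import torch.nn as nn

ACTIVITIES = ["idling", "moving", "digging"]       # hypothetical label set
WINDOW = 128                                       # samples per window (assumed)

class VibrationCNN(nn.Module):
    def __init__(self, n_channels=3, n_classes=len(ACTIVITIES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),               # pool over the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                          # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

if __name__ == "__main__":
    model = VibrationCNN()
    batch = torch.randn(8, 3, WINDOW)              # 8 random accelerometer windows
    print(model(batch).shape)                      # -> torch.Size([8, 3])
```

    The LSTM and TCN variants mentioned in the abstract would replace or follow the convolutional feature extractor with recurrent or dilated-convolution layers, respectively, while keeping the same windowed-accelerometer input and activity-label output.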