
    PassGAN: A Deep Learning Approach for Password Guessing

    State-of-the-art password guessing tools, such as HashCat and John the Ripper, enable users to check billions of passwords per second against password hashes. In addition to performing straightforward dictionary attacks, these tools can expand password dictionaries using password generation rules, such as concatenation of words (e.g., "password123456") and leet speak (e.g., "password" becomes "p4s5w0rd"). Although these rules work well in practice, expanding them to model further passwords is a laborious task that requires specialized expertise. To address this issue, in this paper we introduce PassGAN, a novel approach that replaces human-generated password rules with theory-grounded machine learning algorithms. Instead of relying on manual password analysis, PassGAN uses a Generative Adversarial Network (GAN) to autonomously learn the distribution of real passwords from actual password leaks, and to generate high-quality password guesses. Our experiments show that this approach is very promising. When we evaluated PassGAN on two large password datasets, we were able to surpass rule-based and state-of-the-art machine learning password guessing tools. However, in contrast with the other tools, PassGAN achieved this result without any a priori knowledge of passwords or common password structures. Additionally, when we combined the output of PassGAN with the output of HashCat, we were able to match 51%-73% more passwords than with HashCat alone. This is remarkable because it shows that PassGAN can autonomously extract a considerable number of password properties that current state-of-the-art rules do not encode.
    Comment: This is an extended version of the paper which appeared in NeurIPS 2018 Workshop on Security in Machine Learning (SecML'18), see https://github.com/secml2018/secml2018.github.io/raw/master/PASSGAN_SECML2018.pd
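
    A minimal sketch of the idea follows, assuming a fixed-length password over a restricted charset and simple MLP networks; the paper's own architecture and GAN objective differ from this vanilla setup, so every name and hyperparameter below is a placeholder rather than the authors' implementation.

```python
# Minimal character-level GAN sketch for password guessing (illustrative only;
# not the paper's architecture or training objective).
import torch
import torch.nn as nn

CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"   # assumption: restricted charset
MAX_LEN, NOISE_DIM = 10, 128
V = len(CHARSET)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, MAX_LEN * V))
    def forward(self, z):
        logits = self.net(z).view(-1, MAX_LEN, V)
        return torch.softmax(logits, dim=-1)        # soft one-hot per character position

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(MAX_LEN * V, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))
    def forward(self, x):                           # x: (batch, MAX_LEN, V)
        return self.net(x.view(x.size(0), -1))

def train_step(G, D, real_onehot, opt_g, opt_d, loss=nn.BCEWithLogitsLoss()):
    b = real_onehot.size(0)
    fake = G(torch.randn(b, NOISE_DIM))
    # Discriminator: leaked passwords (one-hot) vs. generated soft samples.
    opt_d.zero_grad()
    d_loss = loss(D(real_onehot), torch.ones(b, 1)) + loss(D(fake.detach()), torch.zeros(b, 1))
    d_loss.backward(); opt_d.step()
    # Generator: try to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss(D(fake), torch.ones(b, 1))
    g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

def sample_guesses(G, n=5):
    with torch.no_grad():
        probs = G(torch.randn(n, NOISE_DIM))
    return ["".join(CHARSET[i] for i in row) for row in probs.argmax(-1)]
```

    After training on a leaked-password corpus encoded as one-hot matrices, sample_guesses would emit candidate strings that could be appended to a conventional wordlist, in the spirit of the HashCat-combination experiment described above.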

    A Generative-Discriminative Basis Learning Framework to Predict Clinical Severity from Resting State Functional MRI Data

    We propose a matrix factorization technique that decomposes the resting state fMRI (rs-fMRI) correlation matrices for a patient population into a sparse set of representative subnetworks, as modeled by rank-one outer products. The subnetworks are combined using patient-specific non-negative coefficients; these coefficients are also used to model, and subsequently predict, the clinical severity of a given patient via a linear regression. Our generative-discriminative framework is able to exploit the structure of rs-fMRI correlation matrices to capture group-level effects, while simultaneously accounting for patient variability. We employ ten-fold cross-validation to demonstrate the predictive power of our model on a cohort of fifty-eight patients diagnosed with Autism Spectrum Disorder. Our method outperforms classical semi-supervised frameworks, which perform dimensionality reduction on the correlation features followed by non-linear regression to predict the clinical scores.
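
    To make the generative-discriminative coupling concrete, here is a small sketch of a joint objective in the same spirit: rank-one reconstruction of the correlation matrices plus a linear severity regression on non-negative coefficients, optimized naively with gradient descent. The optimizer, sparsity penalty, and softplus constraint are illustrative assumptions, not the paper's algorithm.

```python
# Joint generative-discriminative factorization sketch (simplified; the paper's
# exact optimization scheme and constraints differ).
import torch

def fit(corr, severity, K=8, lam=1e-2, gamma=1.0, steps=2000, lr=1e-2):
    # corr: (N, P, P) rs-fMRI correlation matrices, severity: (N,) clinical scores.
    N, P, _ = corr.shape
    B = torch.randn(K, P, requires_grad=True)          # subnetwork basis vectors
    C_raw = torch.randn(N, K, requires_grad=True)      # unconstrained coefficients
    w = torch.zeros(K, requires_grad=True)             # severity regression weights
    opt = torch.optim.Adam([B, C_raw, w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        C = torch.nn.functional.softplus(C_raw)        # enforce non-negative coefficients
        # Reconstruct each correlation matrix as a weighted sum of rank-one outer products.
        recon = torch.einsum('nk,kp,kq->npq', C, B, B)
        gen_loss = ((corr - recon) ** 2).sum()
        disc_loss = ((severity - C @ w) ** 2).sum()    # linear severity regression
        loss = gen_loss + gamma * disc_loss + lam * B.abs().sum()
        loss.backward(); opt.step()
    return B.detach(), torch.nn.functional.softplus(C_raw).detach(), w.detach()
```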

    A Novel Method for Epileptic Seizure Detection Using Coupled Hidden Markov Models

    We propose a novel Coupled Hidden Markov Model to detect epileptic seizures in multichannel electroencephalography (EEG) data. Our model defines a network of seizure propagation paths to capture both the temporal and spatial evolution of epileptic activity. To address the intractability introduced by the coupled interactions, we derive a variational inference procedure to efficiently infer the seizure evolution from spectral patterns in the EEG data. We validate our model on EEG acquired under clinical conditions in the Epilepsy Monitoring Unit of the Johns Hopkins Hospital. Using 5-fold cross-validation, we demonstrate that our model outperforms three baseline approaches which rely on a classical detection framework. Our model also demonstrates the potential to localize seizure onset zones in focal epilepsy.
    Comment: To appear in MICCAI 2018 Proceedings
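
    The coupling structure can be illustrated with a toy transition model in which each channel's chance of entering a seizure state grows with the number of neighbouring channels already seizing. The three-state labelling and all probabilities below are assumptions for illustration only; the variational inference procedure the paper derives is not reproduced here.

```python
# Structural sketch of a coupled HMM over EEG channels (illustrative only).
import numpy as np

# States per channel: 0 = baseline, 1 = seizure onset, 2 = seizure spread (assumption).
N_STATES = 3

def coupled_transition(prev_state, prev_neighbor_states, base_onset=0.01, spread=0.2):
    """Transition distribution for one channel, coupled to its neighbours:
    seizure onset becomes more likely as more neighbouring channels are seizing."""
    n_active = sum(s > 0 for s in prev_neighbor_states)
    p_onset = min(1.0, base_onset + spread * n_active)
    T = np.array([[1 - p_onset, p_onset, 0.00],   # baseline -> onset
                  [0.05,        0.80,    0.15],   # onset persists or spreads
                  [0.10,        0.00,    0.90]])  # spread persists or remits
    return T[prev_state]

def joint_log_prob(states, neighbors, log_emission):
    """log p(states, observations) for one state configuration.
    states: (T, C) ints; neighbors: dict channel -> list of coupled channels;
    log_emission: (T, C, N_STATES) per-channel observation log-likelihoods."""
    T_len, C = states.shape
    lp = 0.0
    for t in range(1, T_len):
        for c in range(C):
            nbr = [states[t - 1, j] for j in neighbors[c]]
            lp += np.log(coupled_transition(states[t - 1, c], nbr)[states[t, c]])
    lp += sum(log_emission[t, c, states[t, c]] for t in range(T_len) for c in range(C))
    return lp
```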

    Changes in Dopamine Signalling Do Not Underlie Aberrant Hippocampal Plasticity in a Mouse Model of Huntington's Disease

    Altered dopamine receptor labelling has been demonstrated in presymptomatic and symptomatic Huntington's disease (HD) gene carriers, indicating that alterations in dopaminergic signalling are an early event in HD. We have previously described early alterations in synaptic transmission and plasticity in both the cortex and hippocampus of the R6/1 mouse model of Huntington's disease. Deficits in cortical synaptic plasticity were associated with altered dopaminergic signalling and could be reversed by D1- or D2-like dopamine receptor activation. In light of these findings, we here investigated whether defects in dopamine signalling could also contribute to the marked alteration in hippocampal synaptic function. To this end, we performed dopamine receptor labelling and pharmacology in the R6/1 hippocampus and report a marked, age-dependent elevation of D1 and D2 receptor labelling in R6/1 hippocampal subfields. Yet, pharmacological inhibition or activation of D1- or D2-like receptors did not modify the aberrant synaptic plasticity observed in R6/1 mice. These findings demonstrate that global perturbations to dopamine receptor expression do occur in HD transgenic mice, as they do in HD gene carriers and patients. However, the direction of change and the lack of effect of dopaminergic pharmacological agents on synaptic function demonstrate that the perturbations are heterogeneous and region-specific, a finding that may explain the mixed results of dopamine therapy in HD.

    Hidden Markov Models and their Application for Predicting Failure Events

    We show how Markov mixed membership models (MMMM) can be used to predict the degradation of assets. We model the degradation path of individual assets to predict overall failure rates. Instead of a separate distribution for each hidden state, we use hierarchical mixtures of distributions in the exponential family. In our approach, the observation distribution of the states is a finite mixture distribution of a small set of (simpler) distributions shared across all states. Using tied-mixture observation distributions offers several advantages: the mixtures act as a regularization for typically very sparse problems, and they reduce the computational effort of the learning algorithm since there are fewer distributions to be found. Using shared mixtures also enables sharing of statistical strength between the Markov states and thus transfer learning. We determine for individual assets the trade-off between the risk of failure and extended operating hours by combining an MMMM with a partially observable Markov decision process (POMDP) to dynamically optimize the policy for when and how to maintain the asset.
    Comment: Will be published in the proceedings of ICCS 2020; @Booklet{EasyChair:3183, author = {Paul Hofmann and Zaid Tashman}, title = {Hidden Markov Models and their Application for Predicting Failure Events}, howpublished = {EasyChair Preprint no. 3183}, year = {EasyChair, 2020}}
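
    The tied-mixture idea is easy to state in code: every hidden state scores an observation against the same shared pool of component distributions and differs only in its mixture weights. The Gaussian components and the example numbers below are placeholders; the paper works with hierarchical mixtures of exponential-family distributions.

```python
# Tied-mixture emission sketch: all hidden states share one small pool of
# component distributions and differ only in their mixture weights.
import numpy as np
from scipy.stats import norm

def emission_likelihoods(obs, comp_means, comp_stds, state_weights):
    """obs: (T,) observations; comp_means/comp_stds: (M,) shared Gaussian components;
    state_weights: (S, M) per-state mixture weights, rows sum to 1.
    Returns a (T, S) matrix of p(obs_t | state_s)."""
    comp_lik = norm.pdf(obs[:, None], loc=comp_means[None, :], scale=comp_stds[None, :])  # (T, M)
    return comp_lik @ state_weights.T                                                     # (T, S)

# Example: 4 hypothetical degradation states sharing 3 sensor-reading components.
obs = np.array([0.1, 0.3, 1.2, 2.5])
means, stds = np.array([0.0, 1.0, 2.5]), np.array([0.3, 0.4, 0.5])
weights = np.array([[0.9, 0.1, 0.0],
                    [0.5, 0.5, 0.0],
                    [0.1, 0.7, 0.2],
                    [0.0, 0.3, 0.7]])
B = emission_likelihoods(obs, means, stds, weights)   # feeds a standard forward-backward pass
```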

    The color of smiling: computational synaesthesia of facial expressions

    This note gives a preliminary account of the transcoding, or rechanneling, problem between different stimuli as it is of interest for the natural interaction and affective computing fields. By considering a simple example, namely the color response of an affective lamp to a sensed facial expression, we frame the problem within an information-theoretic perspective. A full justification in terms of the Information Bottleneck principle promotes a latent affective space, hitherto surmised as an appealing and intuitive solution, as a suitable mediator between the different stimuli.
    Comment: Submitted to: 18th International Conference on Image Analysis and Processing (ICIAP 2015), 7-11 September 2015, Genova, Italy
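
    As a rough illustration of the Information Bottleneck machinery invoked here, the sketch below runs the standard iterative IB updates on a discrete joint distribution, e.g. a toy discretization of facial-expression categories and colour responses. The note itself stays at the level of the principle, so the cardinalities and variable names are assumptions.

```python
# Iterative Information Bottleneck sketch for a discrete joint p(x, y):
# compress X into T while preserving information about Y.
import numpy as np

def information_bottleneck(p_xy, n_t=3, beta=5.0, iters=200, seed=0):
    """Returns the encoder q(t|x), shape (X, T)."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                       # (X,)
    p_y_given_x = p_xy / p_x[:, None]            # (X, Y)
    q_t_given_x = rng.dirichlet(np.ones(n_t), size=p_xy.shape[0])  # random init, (X, T)
    for _ in range(iters):
        q_t = q_t_given_x.T @ p_x                                              # (T,)
        q_y_given_t = (q_t_given_x * p_x[:, None]).T @ p_y_given_x / q_t[:, None]  # (T, Y)
        # KL(p(y|x) || q(y|t)) for every (x, t) pair.
        kl = np.array([[np.sum(p_y_given_x[x] *
                               np.log((p_y_given_x[x] + 1e-12) / (q_y_given_t[t] + 1e-12)))
                        for t in range(n_t)] for x in range(p_xy.shape[0])])
        q_t_given_x = q_t[None, :] * np.exp(-beta * kl)   # self-consistent encoder update
        q_t_given_x /= q_t_given_x.sum(axis=1, keepdims=True)
    return q_t_given_x
```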

    Quantum machine learning: a classical perspective

    Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning techniques to impressive results in regression, classification, data-generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication, alongside the increasing size of datasets, is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical machine learning algorithms. Here we review the literature in quantum machine learning and discuss perspectives for a mixed readership of classical machine learning and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts, and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in machine learning are identified as promising directions for the field. Practical questions, like how to upload classical data into quantum form, will also be addressed.
    Comment: v3 33 pages; typos corrected and references added
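
    One of the practical questions mentioned, loading classical data into quantum form, can be illustrated with amplitude encoding: a length-2^n vector is normalized into the amplitude vector of an n-qubit state. This is only a classical-side sketch; it deliberately ignores the state-preparation cost that the review discusses.

```python
# Amplitude-encoding sketch: a classical vector becomes the amplitudes of an n-qubit state.
import numpy as np

def amplitude_encode(x):
    x = np.asarray(x, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(x))))
    padded = np.zeros(2 ** n_qubits)       # pad up to the next power of two
    padded[:len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n_qubits          # amplitudes satisfy sum |a_i|^2 = 1

amps, n = amplitude_encode([3.0, 1.0, 2.0])   # 2 qubits, 4 amplitudes
```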

    Pseudorehearsal in value function approximation

    Catastrophic forgetting is of special importance in reinforcement learning, as the data distribution is generally non-stationary over time. We study and compare several pseudorehearsal approaches for Q-learning with function approximation in a pole balancing task. We find that pseudorehearsal seems to assist learning even in such a simple problem, provided the rehearsal parameters are properly initialized.
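
    A sketch of what a pseudorehearsal update can look like for Q-learning with a neural approximator is given below: random pseudo-states are labelled with the network's own current outputs and replayed alongside the TD target, so new updates are discouraged from overwriting old behaviour. The network shape, pseudo-item count, and loss weighting are assumptions, not the paper's exact configuration.

```python
# Pseudorehearsal sketch for Q-learning with a neural function approximator.
import torch
import torch.nn as nn

def pseudorehearsal_update(q_net, opt, s, a, r, s_next, gamma=0.99,
                           n_pseudo=32, state_low=-1.0, state_high=1.0, rehearsal_weight=1.0):
    state_dim = s.shape[-1]
    # Pseudo-items: random states labelled with the *current* network's Q-values.
    pseudo_states = torch.empty(n_pseudo, state_dim).uniform_(state_low, state_high)
    with torch.no_grad():
        pseudo_targets = q_net(pseudo_states)
        td_target = r + gamma * q_net(s_next).max(dim=-1).values
    opt.zero_grad()
    q_sa = q_net(s).gather(-1, a.unsqueeze(-1)).squeeze(-1)
    td_loss = nn.functional.mse_loss(q_sa, td_target)
    # Rehearsing the pseudo-items penalizes drift away from previously learned outputs.
    rehearsal_loss = nn.functional.mse_loss(q_net(pseudo_states), pseudo_targets)
    (td_loss + rehearsal_weight * rehearsal_loss).backward()
    opt.step()

# q_net could be e.g. nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
# for a pole-balancing task with 4 state variables and 2 actions (hypothetical sizes).
```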

    Predicting Fluid Intelligence of Children using T1-weighted MR Images and a StackNet

    In this work, we utilize T1-weighted MR images and a StackNet to predict fluid intelligence in adolescents. Our framework includes feature extraction, feature normalization, feature denoising, feature selection, training a StackNet, and predicting fluid intelligence. The extracted features are the distributions of different brain tissues across brain parcellation regions. The proposed StackNet consists of three layers and 11 models. Each layer uses the predictions from all previous layers, including the input layer. The proposed StackNet is tested on the public benchmark of the Adolescent Brain Cognitive Development Neurocognitive Prediction Challenge 2019 and achieves a mean squared error of 82.42 on the combined training and validation set with 10-fold cross-validation. In addition, the proposed StackNet also achieves a mean squared error of 94.25 on the testing data. The source code is available on GitHub.
    Comment: 8 pages, 2 figures, 3 tables, Accepted by MICCAI ABCD-NP Challenge 2019; Added ND
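
    The restacking scheme, in which each layer sees the input features plus out-of-fold predictions from all earlier layers, can be sketched with scikit-learn as below; the specific models, their number, and the preprocessing steps are placeholders rather than the paper's 3-layer, 11-model configuration.

```python
# Restacking sketch in the spirit of a StackNet: each layer is trained on the
# input features plus out-of-fold predictions from every earlier layer.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

def fit_predict_stacknet(X_train, y_train, X_test, cv=10):
    layers = [
        [Ridge(alpha=1.0), RandomForestRegressor(n_estimators=200, random_state=0)],
        [GradientBoostingRegressor(random_state=0)],
        [Ridge(alpha=10.0)],                      # final layer -> single prediction
    ]
    F_train, F_test = X_train, X_test
    for layer in layers:
        preds_train, preds_test = [], []
        for model in layer:
            # Out-of-fold predictions keep the next layer from seeing leaked targets.
            preds_train.append(cross_val_predict(model, F_train, y_train, cv=cv))
            model.fit(F_train, y_train)
            preds_test.append(model.predict(F_test))
        # Restacking: carry forward the original features and all predictions so far.
        F_train = np.column_stack([F_train] + preds_train)
        F_test = np.column_stack([F_test] + preds_test)
    return F_test[:, -1]                          # output of the last model in the last layer
```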

    Robots that can adapt like animals

    As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: this map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
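
    The two phases the abstract describes can be caricatured in a few lines: a MAP-Elites-style loop fills a behavior-performance map before deployment, and a greedy trial-and-error loop (a simplified stand-in for the paper's Bayesian-optimization step) tests the most promising behaviors on the damaged robot. All function names and thresholds below are hypothetical.

```python
# Sketch of (1) building a behavior-performance map in simulation and
# (2) trial-and-error adaptation on the damaged robot (greatly simplified).
import random

def map_elites(evaluate, descriptor, random_genome, mutate, n_iters=10000, n_init=200):
    """archive: descriptor cell -> (genome, predicted_performance)."""
    archive = {}
    for i in range(n_iters):
        g = random_genome() if i < n_init else mutate(random.choice(list(archive.values()))[0])
        perf = evaluate(g)                       # performance of the intact robot in simulation
        cell = descriptor(g)                     # low-dimensional behavior descriptor
        if cell not in archive or perf > archive[cell][1]:
            archive[cell] = (g, perf)            # keep only the best genome per cell
    return archive

def adapt_after_damage(archive, try_on_robot, stop_ratio=0.9, max_trials=20):
    """Test the behavior with the best remaining expectation, measure it on the
    damaged robot, and stop once a trial performs close to its prediction."""
    expectations = {cell: perf for cell, (_, perf) in archive.items()}
    best_g, best_perf = None, float("-inf")
    for _ in range(max_trials):
        if not expectations:
            break
        cell = max(expectations, key=expectations.get)
        genome, predicted = archive[cell]
        measured = try_on_robot(genome)          # real-world trial on the damaged robot
        del expectations[cell]                   # each behavior is tried at most once here
        if measured > best_perf:
            best_g, best_perf = genome, measured
        if measured >= stop_ratio * predicted:   # good enough: stop searching
            break
    return best_g, best_perf
```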