
    Neural-Augmented Static Analysis of Android Communication

    We address the problem of discovering communication links between applications in the popular Android mobile operating system, an important problem for security and privacy in Android. Any scalable static analysis in this complex setting is bound to produce an excessive number of false positives, rendering it impractical. To improve precision, we propose to augment static analysis with a trained neural-network model that estimates the probability that a communication link truly exists. We describe a neural-network architecture that encodes abstractions of communicating objects in two applications and estimates the probability with which a link indeed exists. At the heart of our architecture are type-directed encoders (TDE), a general framework for elegantly constructing encoders of a compound data type by recursively composing encoders for its constituent types. We evaluate our approach on a large corpus of Android applications and demonstrate that it achieves very high accuracy. Further, we conduct thorough interpretability studies to understand the internals of the learned neural networks. Comment: Appears in Proceedings of the 2018 ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE).
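    To make the type-directed encoder idea concrete, here is a minimal sketch in which the encoder for a compound value is built by recursively composing encoders for its constituent types. The dimensions, the hashing trick for primitives, and the placeholder combining matrix are all illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of a type-directed encoder (TDE): an encoder for a compound
# type is composed recursively from encoders for its constituent types.
# All names and the toy combiner below are illustrative, not the paper's model.
import numpy as np

DIM = 8  # illustrative embedding dimension

def encode_primitive(value):
    """Encode a primitive (here: a string) as a fixed-size vector via hashing."""
    rng = np.random.default_rng(abs(hash(value)) % (2**32))
    return rng.standard_normal(DIM)

def encode_tuple(values, encoders):
    """Encode a tuple by encoding each field, then combining with a linear map."""
    parts = np.concatenate([enc(v) for enc, v in zip(encoders, values)])
    W = np.ones((DIM, parts.size)) / parts.size  # placeholder for a learned matrix
    return np.tanh(W @ parts)

def encode_list(values, elem_encoder):
    """Encode a variable-length list by pooling element encodings."""
    if not values:
        return np.zeros(DIM)
    return np.mean([elem_encoder(v) for v in values], axis=0)

# Example: a toy Android Intent abstraction (action, categories) -> one vector.
intent = ("android.intent.action.VIEW", ["android.intent.category.DEFAULT"])
vec = encode_tuple(
    intent,
    [encode_primitive, lambda xs: encode_list(xs, encode_primitive)],
)
print(vec.shape)  # (8,)
```

    The point of the composition is that adding a new compound type only requires wiring up the encoders of its parts; in a learned system the placeholder matrix would be trained end to end.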

    Application of neural networks and sensitivity analysis to improved prediction of trauma survival


    Learning hard quantum distributions with variational autoencoders

    Studying general quantum many-body systems is one of the major challenges in modern physics because it requires an amount of computational resources that scales exponentially with the size of the system. Simulating the evolution of a state, or even storing its description, rapidly becomes intractable for exact classical algorithms. Recently, machine learning techniques, in the form of restricted Boltzmann machines, have been proposed as a way to efficiently represent certain quantum states, with applications in state tomography and ground-state estimation. Here, we introduce a new representation of states based on variational autoencoders, a type of generative model in the form of a neural network. We probe the power of this representation by encoding probability distributions associated with states from different classes. Our simulations show that deep networks give a better representation for states that are hard to sample from, while providing no benefit for random states. This suggests that the probability distributions associated with hard quantum states might have a compositional structure that can be exploited by layered neural networks. Specifically, we consider the learnability of a class of quantum states introduced by Fefferman and Umans. Such states are provably hard to sample for classical computers, but not for quantum ones, under plausible computational-complexity assumptions. The good level of compression achieved for hard states suggests these methods can be suitable for characterising states of the size expected in first-generation quantum hardware. Comment: v2: 9 pages, 3 figures, journal version with major edits with respect to v1 (rewriting of section "hard and easy quantum states", extended discussion of the comparison with tensor networks).
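    As a rough illustration of the representation, the following is a minimal variational autoencoder in PyTorch that models a distribution over bit strings by maximizing the evidence lower bound (ELBO). The layer sizes, Bernoulli decoder, and toy data are assumptions for the sketch, not the paper's setup.

```python
# Minimal VAE sketch: encoder -> Gaussian latent (reparameterization trick)
# -> decoder emitting Bernoulli logits over bit strings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, n_bits=8, latent=4, hidden=32):
        super().__init__()
        self.enc = nn.Linear(n_bits, hidden)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_bits))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def loss_fn(logits, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard-normal prior (the ELBO).
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Toy training loop; the random bit strings stand in for samples from a target state.
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randint(0, 2, (256, 8)).float()
for _ in range(100):
    opt.zero_grad()
    logits, mu, logvar = model(x)
    loss = loss_fn(logits, x, mu, logvar)
    loss.backward()
    opt.step()
```

    The degree of compression in the latent layer, relative to how faithfully samples from the decoder match the target distribution, is the kind of quantity the abstract's easy-versus-hard comparison turns on.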

    Examples of Artificial Perceptions in Optical Character Recognition and Iris Recognition

    This paper assumes the hypothesis that human learning is perception-based and, consequently, that the learning process and perceptions should not be represented and investigated independently or modeled in different simulation spaces. In order to keep the analogy between artificial and human learning, the former is assumed here to be based on artificial perception. Hence, instead of choosing to apply or develop a Computational Theory of (human) Perceptions, we choose to mirror the human perceptions in a numeric (computational) space as artificial perceptions and to analyze the interdependence between artificial learning and artificial perception in the same numeric space, using one of the simplest tools of Artificial Intelligence and Soft Computing, namely perceptrons. As practical applications, we work through two examples: Optical Character Recognition and Iris Recognition. In both cases, a simple Turing test shows that artificial perceptions of the difference between two characters and between two irides are fuzzy, whereas the corresponding human perceptions are, in fact, crisp. Comment: 5th Int. Conf. on Soft Computing and Applications (Szeged, HU), 22-24 Aug 201
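    For readers unfamiliar with the tool in question, here is a minimal sketch of a single perceptron trained with the classic perceptron learning rule to separate two character bitmaps. The toy 3x3 glyphs are illustrative, not the paper's data.

```python
# Classic perceptron: a linear threshold unit updated only on mistakes.
import numpy as np

def train_perceptron(X, y, epochs=50, lr=1.0):
    """Perceptron learning rule on {0,1}-labelled data."""
    w = np.zeros(X.shape[1] + 1)          # weights plus a bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if w @ xi > 0 else 0
            w += lr * (yi - pred) * xi    # no change when the prediction is right
    return w

# Toy 3x3 bitmaps: a crude "I" versus a crude "O".
I = np.array([0, 1, 0,  0, 1, 0,  0, 1, 0])
O = np.array([1, 1, 1,  1, 0, 1,  1, 1, 1])
X = np.stack([I, O])
y = np.array([0, 1])

w = train_perceptron(X, y)
print([1 if w @ np.append(x, 1.0) > 0 else 0 for x in X])  # [0, 1]
```

    The signed distance w @ x of an input from the learned decision boundary is one natural numeric stand-in for a graded "artificial perception" of how different two patterns are.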

    Towards a learning-theoretic analysis of spike-timing dependent plasticity

    This paper suggests a learning-theoretic perspective on how synaptic plasticity benefits global brain functioning. We introduce a model, the selectron, that (i) arises as the fast-time-constant limit of leaky integrate-and-fire neurons equipped with spike-timing dependent plasticity (STDP) and (ii) is amenable to theoretical analysis. We show that the selectron encodes reward estimates into spikes and that an error bound on spikes is controlled by a spiking margin and the sum of synaptic weights. Moreover, the efficacy of spikes (their usefulness to other reward-maximizing selectrons) also depends on total synaptic strength. Finally, based on our analysis, we propose a regularized version of STDP and show that the regularization improves the robustness of neuronal learning when faced with multiple stimuli. Comment: To appear in Adv. Neural Inf. Proc. Systems
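    To illustrate the flavor of a regularized STDP rule, here is a minimal sketch: a standard pairwise exponential STDP update followed by a decay term that keeps the sum of synaptic weights (the quantity the abstract's error bound depends on) from growing without bound. The specific regularizer and all constants are assumptions for the sketch, not the paper's rule.

```python
# Pairwise STDP with an added shrinkage step penalizing total synaptic strength.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes (assumed)
TAU = 20.0                      # STDP time constant in ms (assumed)
LAMBDA = 1e-3                   # strength of the regularizer (assumed)

def stdp_update(w, dt):
    """Pairwise STDP: dt = t_post - t_pre in ms; positive dt potentiates."""
    if dt > 0:
        return w + A_PLUS * np.exp(-dt / TAU)
    return w - A_MINUS * np.exp(dt / TAU)

def regularized_stdp(weights, pre_times, post_time):
    """Apply STDP per synapse, then shrink all weights toward zero so the
    total synaptic strength stays bounded."""
    new = np.array([stdp_update(w, post_time - t)
                    for w, t in zip(weights, pre_times)])
    return new - LAMBDA * new   # L2-style decay on every synapse

w = np.full(5, 0.5)
pre = np.array([95.0, 98.0, 102.0, 110.0, 80.0])  # presynaptic spike times (ms)
print(regularized_stdp(w, pre, post_time=100.0))
```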

    Redistribution of Synaptic Efficacy Supports Stable Pattern Learning in Neural Networks

    Markram and Tsodyks, by showing that the elevated synaptic efficacy observed with single-pulse LTP measurements disappears with higher-frequency test pulses, have critically challenged the conventional assumption that LTP reflects a general gain increase. Redistribution of synaptic efficacy (RSE) is here seen as the local realization of a global design principle in a neural network for pattern coding. As is typical of many coding systems, the network learns by dynamically balancing a pattern-independent increase in strength against a pattern-specific increase in selectivity. This computation is implemented by a monotonic long-term memory process which has a bidirectional effect on the postsynaptic potential via functionally complementary signal components. These frequency-dependent and frequency-independent components realize the balance between specific and nonspecific functions at each synapse. This synaptic balance suggests a functional purpose for RSE which, by dynamically bounding total memory change, implements a distributed coding scheme that is stable with fast as well as slow learning. Although RSE would seem to make it impossible to code high-frequency input features, a network preprocessing step called complement coding symmetrizes the input representation, which allows the system to encode high-frequency as well as low-frequency features in an input pattern. A possible physical model interprets the two synaptic signal components in terms of ligand-gated and voltage-gated receptors, where learning converts channels from one type to another. Office of Naval Research and the Defense Advanced Research Projects Agency (N00014-95-1-0409, N00014-1-95-0657).
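    The complement-coding preprocessing step mentioned above has a very short concrete form: each input vector a in [0,1]^n is paired with its complement 1 - a, so the coded vector has constant total activity and high- and low-activity features are represented symmetrically. A minimal sketch:

```python
# Complement coding: map a in [0,1]^n to [a, 1-a] in [0,1]^(2n).
# The coded vector always has L1 norm n, regardless of the input pattern.
import numpy as np

def complement_code(a):
    a = np.asarray(a, dtype=float)
    return np.concatenate([a, 1.0 - a])

x = np.array([0.9, 0.1, 0.0])
cc = complement_code(x)
print(cc)        # [0.9 0.1 0.  0.1 0.9 1. ]
print(cc.sum())  # always 3.0 here, i.e. n
```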