
    Learning Dilation Factors for Semantic Segmentation of Street Scenes

    Contextual information is crucial for semantic segmentation. However, finding the optimal trade-off between keeping desired fine details and providing sufficiently large receptive fields is non-trivial, all the more so when the objects or classes present in an image vary significantly in size. Dilated convolutions have proven valuable for semantic segmentation because they allow the receptive field to grow without sacrificing image resolution. However, in current state-of-the-art methods, dilation parameters are hand-tuned and fixed. In this paper, we present an approach for learning dilation parameters adaptively per channel, consistently improving semantic segmentation results on street-scene datasets such as Cityscapes and CamVid. Comment: GCPR 2017
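
    A minimal PyTorch sketch of the general idea (not the paper's exact formulation): a convolution whose effective dilation is learned per output channel by softly blending outputs computed at a set of candidate integer dilation rates. The module and parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftDilatedConv(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.dilations = dilations
        # One shared 3x3 kernel applied at several dilation rates.
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.02)
        # Per output channel, logits over the candidate dilation rates.
        self.dilation_logits = nn.Parameter(torch.zeros(out_ch, len(dilations)))

    def forward(self, x):
        outs = [F.conv2d(x, self.weight, padding=d, dilation=d)
                for d in self.dilations]                     # each: (N, out_ch, H, W)
        stacked = torch.stack(outs, dim=-1)                   # (N, out_ch, H, W, D)
        alpha = torch.softmax(self.dilation_logits, dim=-1)   # (out_ch, D)
        return (stacked * alpha.view(1, -1, 1, 1, len(self.dilations))).sum(-1)

x = torch.randn(2, 8, 64, 64)
layer = SoftDilatedConv(8, 16)
print(layer(x).shape)  # torch.Size([2, 16, 64, 64])
```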

    Connectionist Temporal Modeling for Weakly Supervised Action Labeling

    We propose a weakly supervised framework for action labeling in video, where only the order of the occurring actions is required during training. The key challenge is that the per-frame alignments between the input (video) and label (action) sequences are unknown during training. We address this by introducing the Extended Connectionist Temporal Classification (ECTC) framework, which efficiently evaluates all possible alignments via dynamic programming and explicitly enforces their consistency with frame-to-frame visual similarities. This protects the model from being distracted by visually inconsistent or degenerate alignments without the need for temporal supervision. We further extend our framework to the semi-supervised case, in which a few frames are sparsely annotated in a video. With less than 1% of frames labeled per video, our method outperforms existing semi-supervised approaches and achieves performance comparable to that of fully supervised approaches. Comment: To appear in ECCV 2016
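
    For reference, a plain NumPy sketch of the standard CTC forward pass, i.e. the dynamic program that ECTC extends (the paper additionally weights frame transitions by visual similarity, which is omitted here). Variable names are illustrative.

```python
import numpy as np

def ctc_forward(log_probs, labels, blank=0):
    """log_probs: (T, C) per-frame log posteriors; labels: target label sequence."""
    T, C = log_probs.shape
    # Interleave blanks: l' = [blank, l1, blank, l2, ..., blank]
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S = len(ext)
    alpha = np.full((T, S), -np.inf)
    alpha[0, 0] = log_probs[0, ext[0]]
    if S > 1:
        alpha[0, 1] = log_probs[0, ext[1]]
    for t in range(1, T):
        for s in range(S):
            candidates = [alpha[t - 1, s]]
            if s > 0:
                candidates.append(alpha[t - 1, s - 1])
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                candidates.append(alpha[t - 1, s - 2])
            alpha[t, s] = np.logaddexp.reduce(candidates) + log_probs[t, ext[s]]
    # Total log-likelihood over all alignments ending in the last label or a blank.
    return np.logaddexp(alpha[-1, -1], alpha[-1, -2])

probs = np.log(np.random.dirichlet(np.ones(5), size=10))  # 10 frames, 5 classes
print(ctc_forward(probs, labels=[1, 2, 3]))
```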

    CAD of Stacked Patch Antennas Through Multipurpose Admittance Matrices From FEM and Neural Networks

    In this work, a novel computer-aided design methodology for probe-fed, cavity-backed, stacked microstrip patch antennas is proposed. The methodology incorporates the rigor of a numerical technique such as the finite element method, which in turn makes use of a newly developed procedure (multipurpose admittance matrices) to carry out a full-wave analysis of a given structure even though certain physical shapes and dimensions have not yet been established. With the aid of this technique, we build a training set for a neural network whose output is the desired response of the antenna as a function of the design parameters. Finally, taking advantage of this neural network, we perform a global optimization through a genetic algorithm or simulated annealing to obtain a final design. The proposed methodology is validated through a real design whose numerical results are compared with measurements, with good agreement.
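
    An illustrative Python sketch of this surrogate-based design loop: an expensive full-wave solver is replaced here by a toy function, a neural network is fitted to sampled (geometry, response) pairs, and simulated annealing then searches the cheap surrogate. The names and the toy objective are assumptions, not the paper's actual FEM model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def fem_solver_stub(x):                      # stand-in for the expensive FEM analysis
    return np.sin(3 * x[0]) + 0.5 * np.cos(5 * x[1])

# 1) Build a training set from the "solver".
X = rng.uniform(0, 1, size=(200, 2))
y = np.array([fem_solver_stub(x) for x in X])
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000).fit(X, y)

# 2) Simulated annealing on the surrogate (minimise the predicted response).
x = rng.uniform(0, 1, size=2)
cur_val = surrogate.predict(x[None])[0]
best, best_val = x.copy(), cur_val
T = 1.0
for step in range(2000):
    cand = np.clip(x + rng.normal(scale=0.05, size=2), 0, 1)
    val = surrogate.predict(cand[None])[0]
    if val < cur_val or rng.random() < np.exp((cur_val - val) / T):
        x, cur_val = cand, val
        if val < best_val:
            best, best_val = cand.copy(), val
    T *= 0.995                               # cooling schedule

print("best design:", best, "predicted response:", best_val)
```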

    PassGAN: A Deep Learning Approach for Password Guessing

    State-of-the-art password guessing tools, such as HashCat and John the Ripper, enable users to check billions of passwords per second against password hashes. In addition to performing straightforward dictionary attacks, these tools can expand password dictionaries using password generation rules, such as concatenation of words (e.g., "password123456") and leet speak (e.g., "password" becomes "p4s5w0rd"). Although these rules work well in practice, expanding them to model further passwords is a laborious task that requires specialized expertise. To address this issue, in this paper we introduce PassGAN, a novel approach that replaces human-generated password rules with theory-grounded machine learning algorithms. Instead of relying on manual password analysis, PassGAN uses a Generative Adversarial Network (GAN) to autonomously learn the distribution of real passwords from actual password leaks and to generate high-quality password guesses. Our experiments show that this approach is very promising: when we evaluated PassGAN on two large password datasets, we were able to surpass rule-based and state-of-the-art machine-learning password guessing tools. In contrast with the other tools, however, PassGAN achieved this result without any a priori knowledge of passwords or common password structures. Additionally, when we combined the output of PassGAN with the output of HashCat, we were able to match 51%-73% more passwords than with HashCat alone. This is remarkable because it shows that PassGAN can autonomously extract a considerable number of password properties that current state-of-the-art rules do not encode. Comment: This is an extended version of the paper which appeared in the NeurIPS 2018 Workshop on Security in Machine Learning (SecML'18); see https://github.com/secml2018/secml2018.github.io/raw/master/PASSGAN_SECML2018.pd
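
    A toy PyTorch sketch of the core idea: a GAN whose generator emits fixed-length character sequences (as per-position softmax distributions) and whose discriminator scores them against real passwords. This is illustrative only, not the paper's architecture (the paper builds on a Wasserstein GAN with residual 1-D convolutional blocks); all sizes and the sample passwords are assumptions.

```python
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789"
L, V, Z = 8, len(CHARS), 64          # password length, vocabulary size, noise dimension

def encode(pw):                       # password string -> one-hot tensor (L, V)
    x = torch.zeros(L, V)
    for i, c in enumerate(pw[:L].ljust(L, "a")):
        x[i, CHARS.index(c)] = 1.0
    return x

G = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(), nn.Linear(256, L * V))
D = nn.Sequential(nn.Linear(L * V, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.stack([encode(p) for p in ["password", "123456aa", "qwerty12"]]).view(3, -1)
for step in range(200):
    # Discriminator update: real passwords vs. detached generator samples.
    z = torch.randn(3, Z)
    fake = torch.softmax(G(z).view(3, L, V), dim=-1).view(3, -1)
    d_loss = bce(D(real), torch.ones(3, 1)) + bce(D(fake.detach()), torch.zeros(3, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update: fool the discriminator.
    g_loss = bce(D(fake), torch.ones(3, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Sample a guess by taking the most likely character at each position.
sample = torch.softmax(G(torch.randn(1, Z)).view(L, V), dim=-1).argmax(-1)
print("".join(CHARS[i] for i in sample))
```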

    End to End Deep Neural Network Frequency Demodulation of Speech Signals

    Frequency modulation (FM) is a form of radio broadcasting that has been in wide use for almost a century. We propose a software-defined radio (SDR) receiver for FM demodulation that adopts an end-to-end learning-based approach and exploits prior information about the transmitted speech message in the demodulation process. The receiver detects and enhances speech from the in-phase and quadrature components of its baseband version. The new system yields high-performance detection under both acoustic disturbances and communication channel noise, and is expected to outperform established methods in low signal-to-noise ratio (SNR) conditions in terms of both mean square error and perceptual evaluation of speech quality (PESQ) score.
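
    A toy NumPy sketch of the setup: a speech-like message is FM-modulated, the receiver observes noisy baseband I/Q samples, and a learned demodulator would map (I, Q) back to the message. Here the conventional phase-difference demodulator is shown as the reference that a neural network would replace; all signal parameters are illustrative assumptions.

```python
import numpy as np

fs, dur, kf = 8000, 1.0, 75.0                 # sample rate, duration (s), frequency deviation gain
t = np.arange(int(fs * dur)) / fs
message = np.sin(2 * np.pi * 3 * t) * np.hanning(t.size)   # stand-in for a speech message

# Baseband FM modulation: complex envelope with phase = integral of the message.
phase = 2 * np.pi * kf * np.cumsum(message) / fs
iq = np.exp(1j * phase)
iq += 0.05 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))  # channel noise

# Conventional demodulation: derivative of the unwrapped phase of I + jQ.
recovered = np.diff(np.unwrap(np.angle(iq))) * fs / (2 * np.pi * kf)
print("demodulation MSE:", np.mean((recovered - message[1:]) ** 2))
```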

    A Deep Cascade of Convolutional Neural Networks for MR Image Reconstruction

    The acquisition of Magnetic Resonance Imaging (MRI) data is inherently slow. Inspired by recent advances in deep learning, we propose a framework for reconstructing MR images from undersampled data using a deep cascade of convolutional neural networks, thereby accelerating the data acquisition process. We show that, for Cartesian undersampling of 2D cardiac MR images, the proposed method outperforms state-of-the-art compressed sensing approaches such as dictionary-learning-based MRI (DLMRI) reconstruction in terms of reconstruction error, perceptual quality, and reconstruction speed for both 3-fold and 6-fold undersampling. Compared to DLMRI, the error produced by the proposed method is approximately half, allowing anatomical structures to be preserved more faithfully. Using our method, each image can be reconstructed in 23 ms, which is fast enough to enable real-time applications.
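
    A NumPy sketch of the kind of data-consistency step such a cascade can interleave with its CNN denoising blocks: wherever k-space was actually sampled, the network's current estimate is replaced by the measured values (the paper's exact formulation may differ). The mask and image here are synthetic placeholders.

```python
import numpy as np

def data_consistency(cnn_output, measured_kspace, mask):
    """Enforce agreement with the acquired k-space samples (hard replacement)."""
    k_est = np.fft.fft2(cnn_output)
    k_corrected = np.where(mask, measured_kspace, k_est)
    return np.fft.ifft2(k_corrected).real

rng = np.random.default_rng(0)
image = rng.random((64, 64))                      # stand-in for a cardiac frame
mask = rng.random((64, 64)) < 1 / 3               # roughly 3-fold undersampling
measured = np.fft.fft2(image) * mask
zero_filled = np.fft.ifft2(measured).real         # aliased starting point
print(np.abs(data_consistency(zero_filled, measured, mask) - image).mean())
```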

    Reinforcement learning in populations of spiking neurons

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral responses in the face of neuronal variability. But in standard reinforcement learning a flip side becomes apparent: learning slows down with increasing population size, since the global reinforcement becomes less and less related to the performance of any single neuron. We show that, in contrast, learning speeds up with increasing population size if feedback about the population response modulates synaptic plasticity in addition to the global reinforcement. The two feedback signals (reinforcement and population-response signal) can be encoded by ambient neurotransmitter concentrations which vary slowly, yielding a fully online plasticity rule in which the learning of one stimulus is interleaved with the processing of the subsequent one. The assumption of a single additional feedback mechanism therefore reconciles biological plausibility with efficient learning.
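
    A toy NumPy sketch of the idea: stochastic binary neurons vote on a binary action, and each synapse is updated by the global reward times a term comparing the neuron's own spike with the population response, so the update does not wash out as the population grows. This is an illustrative rule under assumed dynamics, not the paper's plasticity model.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, lr = 50, 10, 0.1                    # population size, input dimension, learning rate
W = rng.normal(0, 0.1, size=(N, D))

for trial in range(2000):
    x = rng.choice([0.0, 1.0], size=D)
    target = float(x.sum() > D / 2)                      # toy task: majority of inputs active
    p = 1 / (1 + np.exp(-W @ x))                         # per-neuron spike probability
    spikes = (rng.random(N) < p).astype(float)
    pop = spikes.mean()                                  # population-response signal
    action = float(pop > 0.5)
    reward = 1.0 if action == target else -1.0           # global reinforcement
    # Reward-modulated update, additionally gated by the population response.
    W += lr * reward * np.outer(spikes - pop, x)

correct = 0
for _ in range(200):
    x = rng.choice([0.0, 1.0], size=D)
    p = 1 / (1 + np.exp(-W @ x))
    correct += float((p.mean() > 0.5) == (x.sum() > D / 2))
print("accuracy on fresh inputs:", correct / 200)
```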

    JET ANALYSIS BY NEURAL NETWORKS IN HIGH ENERGY HADRON-HADRON COLLISIONS

    We study the possibility of employing neural networks to simulate jet clustering procedures in high-energy hadron-hadron collisions. We concentrate our analysis on the Fermilab Tevatron energy and on the $k_\bot$ algorithm. We consider both a supervised multilayer feed-forward network trained by the backpropagation algorithm and unsupervised learning, in which the neural network autonomously organizes the events into clusters. Comment: 9 pages, LaTeX, 2 figures not included
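
    For context, a NumPy sketch of the classical inclusive $k_\bot$ clustering that such networks are trained to emulate: pseudojets are merged by the smallest pairwise distance d_ij = min(kt_i^2, kt_j^2) dR_ij^2 / R^2 until each is promoted to a jet via its beam distance d_iB = kt_i^2. The toy particles and the simple pt-weighted recombination scheme are assumptions for illustration.

```python
import numpy as np

def kt_cluster(particles, R=0.7):
    """particles: list of (kt, rapidity, phi). Returns the list of clustered jets."""
    jets = []
    parts = [list(p) for p in particles]
    while parts:
        n = len(parts)
        diB = [p[0] ** 2 for p in parts]                     # beam distances
        best = ("beam", min(range(n), key=lambda i: diB[i]))
        best_d = diB[best[1]]
        for i in range(n):
            for j in range(i + 1, n):                        # pairwise distances
                dphi = np.pi - abs(abs(parts[i][2] - parts[j][2]) - np.pi)
                dR2 = (parts[i][1] - parts[j][1]) ** 2 + dphi ** 2
                dij = min(parts[i][0], parts[j][0]) ** 2 * dR2 / R ** 2
                if dij < best_d:
                    best, best_d = ("pair", i, j), dij
        if best[0] == "beam":
            jets.append(parts.pop(best[1]))                  # promote to a jet
        else:
            _, i, j = best                                    # merge the closest pair
            kt_i, y_i, phi_i = parts[i]
            kt_j, y_j, phi_j = parts[j]
            merged = [kt_i + kt_j,
                      (kt_i * y_i + kt_j * y_j) / (kt_i + kt_j),
                      (kt_i * phi_i + kt_j * phi_j) / (kt_i + kt_j)]
            parts = [p for k, p in enumerate(parts) if k not in (i, j)] + [merged]
    return jets

toy = [(30.0, 0.1, 0.2), (25.0, 0.15, 0.25), (5.0, -1.0, 2.0)]
print(kt_cluster(toy))
```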

    BioMetricNet: deep unconstrained face verification through learning of metrics regularized onto Gaussian distributions

    We present BioMetricNet, a novel framework for deep unconstrained face verification that learns a regularized metric for comparing facial features. Unlike popular methods such as FaceNet, the proposed approach does not impose any specific metric on the facial features; instead, it shapes the decision space by learning a latent representation in which matching and non-matching pairs are mapped onto clearly separated and well-behaved target distributions. In particular, the network jointly learns the best feature representation and the best metric that follows the target distributions, which is then used to discriminate face images. In this paper we present this general framework, the first of its kind for face verification, and tailor it to Gaussian distributions. This choice enables the use of a simple linear decision boundary that can be tuned to achieve the desired trade-off between false alarm and genuine acceptance rate, and leads to a loss function that can be written in closed form. Extensive analysis and experimentation on publicly available datasets such as Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and Celebrities in Frontal-Profile in the Wild (CFP), and on challenging datasets such as cross-age LFW (CALFW), cross-pose LFW (CPLFW), and the In-the-Wild Age Database (AgeDB), shows a significant performance improvement and confirms the effectiveness and superiority of BioMetricNet over existing state-of-the-art methods. Comment: Accepted at ECCV 2020
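
    A PyTorch sketch of the training idea described above: a small network maps a pair of face embeddings to a scalar metric, and the loss pushes matching pairs' metrics toward one target Gaussian and non-matching pairs' toward another, so a simple threshold separates them at test time. The target means, standard deviation, and tiny architecture are illustrative assumptions, not the paper's values or closed-form loss.

```python
import torch
import torch.nn as nn

embed_dim = 128
metric_net = nn.Sequential(nn.Linear(2 * embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

MU_MATCH, MU_NONMATCH, SIGMA = -2.0, 2.0, 0.5   # assumed target distributions

def gaussian_nll(z, mu, sigma):
    # Negative log-likelihood of z under N(mu, sigma^2), up to a constant.
    return 0.5 * ((z - mu) / sigma) ** 2 + torch.log(torch.tensor(sigma))

def biometric_style_loss(emb_a, emb_b, is_match):
    z = metric_net(torch.cat([emb_a, emb_b], dim=1)).squeeze(1)
    nll_match = gaussian_nll(z, MU_MATCH, SIGMA)
    nll_non = gaussian_nll(z, MU_NONMATCH, SIGMA)
    return torch.where(is_match, nll_match, nll_non).mean()

# Toy batch: 4 pairs of embeddings (in practice these would come from a face CNN).
a, b = torch.randn(4, embed_dim), torch.randn(4, embed_dim)
labels = torch.tensor([True, False, True, False])
loss = biometric_style_loss(a, b, labels)
loss.backward()
print(float(loss))
# At test time, declare "same person" when the metric falls below a threshold
# between MU_MATCH and MU_NONMATCH (e.g., 0).
```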

    High Resolution Image Reconstruction of Polymer Composite Materials Using Neural Networks

    A neural network is an artificial intelligence technique inspired by a simplistic model of biological neurons and their connectivity. A neural network has the ability to learn an input-output function without a priori knowledge of the relationship between input and output. Typically, a neural network consists of layers of neurons, with each neuron in a given layer fully connected to the neurons in adjacent layers. Figure 1 shows such an arrangement with three layers, called the input, hidden, and output layers. The connection strengths between neurons, often referred to as weights, are modified during a training phase. The training phase used here employs an error back-propagation algorithm [1]. During training, the neural network is presented with an input, which propagates through the network producing a corresponding output. A comparison of the actual output with the desired or target output yields an error, which is used to adjust the neural network's weights according to an error gradient descent technique [2]. This procedure is repeated for many different input and desired-output pairs, allowing the neural network to learn the input-output function.
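
    A minimal NumPy sketch of the training procedure described above: a three-layer network (input, hidden, output) whose weights are adjusted by gradient descent on the output error, repeated over many input/target pairs. The layer sizes and toy target function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.5, size=(2, 8))       # input -> hidden weights
W2 = rng.normal(0, 0.5, size=(8, 1))       # hidden -> output weights
lr = 0.5

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

X = rng.random((200, 2))
target = ((X[:, 0] + X[:, 1]) > 1.0).astype(float)[:, None]   # toy target function

for epoch in range(2000):
    h = sigmoid(X @ W1)                    # forward pass: hidden activations
    y = sigmoid(h @ W2)                    # forward pass: network output
    err = y - target                       # output error
    # Back-propagate the error gradient and descend.
    delta_out = err * y * (1 - y)
    grad_W2 = h.T @ delta_out
    grad_W1 = X.T @ ((delta_out @ W2.T) * h * (1 - h))
    W2 -= lr * grad_W2 / len(X)
    W1 -= lr * grad_W1 / len(X)

pred = sigmoid(sigmoid(X @ W1) @ W2) > 0.5
print("training accuracy:", (pred == (target > 0.5)).mean())
```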