
    Gaussian Process Regression for Estimating EM Ducting Within the Marine Atmospheric Boundary Layer

    We show that Gaussian process regression (GPR) can be used to infer the electromagnetic (EM) duct height within the marine atmospheric boundary layer (MABL) from sparsely sampled propagation factors in the context of bistatic radars. We use GPR to calculate the posterior predictive distribution on the labels (i.e., duct height) from both noise-free and noise-contaminated arrays of propagation factors. For duct-height inference from noise-contaminated propagation factors, we compare a naive approach, which uses one random sample from the input distribution (i.e., disregarding the input noise), with an inverse-variance weighted approach, which uses a few random samples to estimate the true predictive distribution. The resulting posterior predictive distributions from these two approaches are compared to a "ground truth" distribution, which is approximated using a large number of Monte Carlo samples. The ability of GPR to yield accurate and fast duct-height predictions from a few training examples indicates the suitability of the proposed method for real-time applications. Comment: 15 pages, 6 figures
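As a minimal illustration of the core machinery (not the paper's implementation, and with a generic RBF kernel and function names chosen here for clarity), the GPR posterior predictive mean and variance can be computed as:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential kernel between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gpr_posterior(X_train, y_train, X_test, noise_var=1e-4):
    # Posterior predictive mean and variance of a zero-mean GP,
    # computed stably via the Cholesky factorization of the train kernel.
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                      # predictive mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v**2, 0)   # predictive variance
    return mean, var
```

Here the "labels" would be duct heights and the inputs sampled propagation factors; the predictive variance is what an inverse-variance weighted combination of noisy samples would use.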

    A similarity-based inference engine for non-singleton fuzzy logic systems

    In non-singleton fuzzy logic systems (NSFLSs), input uncertainty such as sensor noise is modelled with input fuzzy sets. The performance of NSFLSs in handling such uncertainty depends both on the actual input fuzzy sets (and their inherent model of uncertainty) and on the way they affect the inference process. This paper proposes a novel type of NSFLS by replacing the composition-based inference method of type-1 fuzzy relations with a similarity-based inference method that makes NSFLSs more sensitive to changes in the input's uncertainty characteristics. The proposed approach uses the Jaccard ratio to measure the similarity between input and antecedent fuzzy sets, then uses the measured similarity to determine the firing strength of each individual fuzzy rule. The standard and novel approaches to NSFLSs are experimentally compared on the well-known problem of Mackey-Glass time-series prediction, where the NSFLS's inputs have been perturbed with different levels of Gaussian noise. The experiments are repeated for system training under both noisy and noise-free conditions. Analyses of the results show that the new method outperforms the standard approach by substantially reducing the prediction errors.
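A rough sketch of the Jaccard-based firing strength described in the abstract, for fuzzy sets discretized on a shared domain (the membership functions, centers, and widths below are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    # Gaussian membership function sampled on the domain x.
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def jaccard_similarity(mu_a, mu_b):
    # Jaccard ratio |A ∩ B| / |A ∪ B| for discretized fuzzy sets,
    # with min as intersection and max as union.
    return np.sum(np.minimum(mu_a, mu_b)) / np.sum(np.maximum(mu_a, mu_b))

# Firing strength of one rule: similarity between the (noisy) input set
# and the rule's antecedent set, replacing composition-based inference.
x = np.linspace(0.0, 10.0, 1001)
input_set = gaussian_mf(x, 4.0, 0.5)   # non-singleton input (noise model)
antecedent = gaussian_mf(x, 5.0, 1.0)  # rule antecedent
firing_strength = jaccard_similarity(input_set, antecedent)
```

Widening the input set (larger sigma, i.e., more input uncertainty) changes the overlap and hence the firing strength, which is the sensitivity the similarity-based method aims for.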

    Matching matched filtering with deep networks in gravitational-wave astronomy

    We report on the construction of a deep convolutional neural network that can reproduce the sensitivity of a matched-filtering search for binary black hole gravitational-wave signals. The standard method for the detection of well-modeled transient gravitational-wave signals is matched filtering. However, the computational cost of such searches in low latency will grow dramatically as the low-frequency sensitivity of gravitational-wave detectors improves. Convolutional neural networks provide a highly computationally efficient method for signal identification in which the majority of calculations are performed prior to data taking, during a training process. We use only whitened time series of measured gravitational-wave strain as input, and we train and test on simulated binary black hole signals in synthetic Gaussian noise representative of Advanced LIGO sensitivity. We show that our network can classify signal from noise with a performance that emulates that of matched filtering applied to the same datasets, when sensitivity is defined by receiver operating characteristic (ROC) curves. Comment: 5 pages, 3 figures, submitted to PR
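For reference, the matched-filtering baseline the network is compared against can be sketched as a cross-correlation of whitened data with a unit-norm template (a toy version, assuming already-whitened, unit-variance noise; real pipelines work with frequency-domain noise weighting):

```python
import numpy as np

def matched_filter(data, template):
    # Cross-correlate whitened data with a unit-norm template via FFT.
    # For white noise of unit variance the output is an SNR time series;
    # its peak marks the most likely signal arrival time.
    n = len(data)
    h = np.zeros(n)
    h[:len(template)] = template / np.sqrt(np.sum(template**2))
    return np.fft.irfft(np.fft.rfft(data) * np.conj(np.fft.rfft(h)), n)
```

The trade-off the abstract describes: this correlation must be recomputed against a large template bank for every stretch of data, whereas a trained CNN amortizes that cost into the training stage.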

    Distortion Robust Image Classification using Deep Convolutional Neural Network with Discrete Cosine Transform

    Convolutional neural networks (CNNs) perform well at image classification but are vulnerable to image-quality degradation: even a small amount of distortion such as noise or blur can severely hamper their performance. Most work in the literature mitigates this problem by fine-tuning a pre-trained CNN on mutually exclusive or combined sets of distorted training data. This iterative fine-tuning over all known distortion types is exhaustive, and the network still struggles to handle unseen distortions. In this work, we propose the distortion-robust DCT-Net, a Discrete Cosine Transform based module integrated into a deep network built on top of VGG16. Unlike other works in the literature, DCT-Net is "blind" to the distortion type and level in an image during both training and testing. As part of the training process, the proposed DCT module discards input information that mostly represents the contribution of high frequencies. DCT-Net is trained "blindly" only once and applied in generic situations without further retraining. We also extend the idea of traditional dropout and present a training-adaptive version of it. We evaluate the proposed method against Gaussian blur, motion blur, salt-and-pepper noise, Gaussian noise, and speckle noise added to the CIFAR-10/100 and ImageNet test sets. Experimental results demonstrate that, once trained, DCT-Net not only generalizes well to a variety of unseen image distortions but also outperforms other methods in the literature.
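The frequency-discarding idea can be illustrated with a crude, fixed low-pass stand-in for the paper's learned DCT module (the block size `keep` and the hard mask are assumptions made here for the sketch; the paper's module is trained within the network):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_lowpass(image, keep=8):
    # Keep only the top-left (low-frequency) keep x keep block of 2D DCT
    # coefficients and reconstruct the image. Distortions like noise live
    # mostly in the discarded high-frequency coefficients.
    coeffs = dctn(image, norm='ortho')
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return idctn(coeffs * mask, norm='ortho')
```

Because the filtering depends only on frequency content, not on the distortion type, a module like this stays "blind" to whether the corruption was blur, Gaussian noise, or speckle.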

    Learning with regularizers in multilayer neural networks

    We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labelled by a two-layer teacher network with an arbitrary number of hidden units; the labels may be corrupted by Gaussian output noise. We examine the effect of weight-decay regularization on the dynamical evolution of the order parameters and the generalization error in various phases of the learning process, in both noiseless and noisy scenarios.
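The on-line update being analyzed has this shape; as a hedged sketch, the snippet below reduces the paper's two-layer student to a linear one so that only the weight-decay mechanics remain visible (step size, decay strength, and function name are illustrative):

```python
import numpy as np

def sgd_step(w, x, y, eta=0.05, lam=0.01):
    # One on-line gradient-descent step for a linear student on a single
    # example (x, y), with an L2 weight-decay penalty of strength lam:
    #   w <- w - eta * (gradient of squared error + lam * w)
    grad = (w @ x - y) * x
    return w - eta * (grad + lam * w)
```

With noisy teacher outputs, the decay term biases the student slightly toward the origin (shrinking the learned weights by roughly a factor 1/(1+lam) at the fixed point) in exchange for suppressing noise-driven fluctuations, which is the trade-off the order-parameter analysis quantifies.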

    Score-based Diffusion Models in Function Space

    Diffusion models have recently emerged as a powerful framework for generative modeling. They consist of a forward process that perturbs input data with Gaussian white noise and a reverse process that learns a score function to generate samples by denoising. Despite their tremendous success, they are mostly formulated on finite-dimensional spaces, e.g., Euclidean space, limiting their application in many domains where the data has a functional form, such as scientific computing and 3D geometric data analysis. In this work, we introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space. In DDOs, the forward process perturbs input functions gradually using a Gaussian process. The generative process is formulated by integrating a function-valued Langevin dynamic. Our approach requires an appropriate notion of the score for the perturbed data distribution, which we obtain by generalizing denoising score matching to function spaces that can be infinite-dimensional. We show that the corresponding discretized algorithm generates accurate samples at a fixed cost that is independent of the data resolution. We theoretically and numerically verify the applicability of our approach on a set of problems, including generating solutions to the Navier-Stokes equation viewed as the push-forward distribution of forcings from a Gaussian Random Field (GRF). Comment: 26 pages, 7 figures
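To make the forward-process and score ideas concrete, here is a finite-dimensional stand-in (i.e., the standard setting the paper generalizes, not the function-space DDO construction; function and variable names are chosen here):

```python
import numpy as np

def perturb(x0, sigma, rng):
    # Forward process: corrupt a clean sample with Gaussian white noise of
    # standard deviation sigma. In DDOs this noise would instead be drawn
    # from a Gaussian process so the perturbation lives in function space.
    noise = rng.standard_normal(x0.shape)
    return x0 + sigma * noise, noise

def dsm_target(x_t, x0, sigma):
    # Denoising score matching target: the score of the Gaussian
    # perturbation kernel N(x_t; x0, sigma^2 I),
    #   grad_x log p(x_t | x0) = -(x_t - x0) / sigma^2.
    # A score network is regressed onto this quantity during training.
    return -(x_t - x0) / sigma ** 2
```

Sampling then runs Langevin-type dynamics driven by the learned score; the paper's contribution is making both the target and the dynamics well-defined when `x0` is a function rather than a finite vector.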