
    A power comparison between nonparametric regression tests

    In this paper, we consider three major types of nonparametric regression tests that are based on kernel and local polynomial smoothing techniques. Their asymptotic power comparisons are established systematically under the fixed and contiguous alternatives, and are also illustrated through non-asymptotic investigations and finite-sample simulation studies.
    Keywords: goodness-of-fit, local alternative, local polynomial regression, power, smoothing parameter
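The local polynomial smoothing these tests build on is easy to prototype. Below is a minimal local linear fit with a Gaussian kernel; the kernel choice, function names, and the toy sine-curve check are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def local_linear_fit(x, y, x0, h):
    """Local linear regression estimate of m(x0) with a Gaussian kernel.

    Solves a kernel-weighted least-squares problem around x0; the
    intercept of the fitted local line estimates the regression
    function at x0.
    """
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local design matrix
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)      # weighted normal equations
    return beta[0]                                  # intercept = m_hat(x0)

# Toy check on a known curve (illustrative, not from the paper).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)
est = local_linear_fit(x, y, x0=0.5, h=0.05)  # true value is sin(pi) = 0
```

The bandwidth h plays the role of the smoothing parameter whose choice the power comparisons in the paper depend on.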

    Multiple testing via FDR_L for large-scale imaging data

    The multiple testing procedure plays an important role in detecting the presence of spatial signals for large-scale imaging data. Typically, the spatial signals are sparse but clustered. This paper provides empirical evidence that for a range of commonly used control levels, the conventional FDR procedure can lack the ability to detect statistical significance, even if the p-values under the true null hypotheses are independent and uniformly distributed; more generally, ignoring the neighboring information of spatially structured data will tend to diminish the detection effectiveness of the FDR procedure. This paper first introduces a scalar quantity to characterize the extent to which the "lack of identification phenomenon" (LIP) of the FDR procedure occurs. Second, we propose a new multiple comparison procedure, called FDR_L, to accommodate the spatial information of neighboring p-values, via a local aggregation of p-values. Theoretical properties of the FDR_L procedure are investigated under weak dependence of p-values. It is shown that the FDR_L procedure alleviates the LIP of the FDR procedure, thus substantially facilitating the selection of more stringent control levels. Simulation evaluations indicate that the FDR_L procedure improves the detection sensitivity of the FDR procedure with little loss in detection specificity. The computational simplicity and detection effectiveness of the FDR_L procedure are illustrated through a real brain fMRI dataset. Comment: Published at http://dx.doi.org/10.1214/10-AOS848 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
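The local-aggregation idea can be sketched in a few lines on a 1-D grid: replace each p-value by the median of its neighbourhood, then run a step-up procedure on the aggregated values. Note this is a simplification for illustration; the paper works on images and derives the appropriate null distribution and threshold for the aggregated p-values, whereas the plain Benjamini–Hochberg step and the synthetic data below are assumptions of this sketch:

```python
import numpy as np

def bh_reject(p, q):
    """Benjamini-Hochberg step-up procedure at level q."""
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True  # reject the k smallest p-values
    return reject

def fdr_l_1d(p, q, radius=1):
    """Aggregate neighbouring p-values by a local median, then test."""
    m = len(p)
    p_med = np.array([np.median(p[max(0, i - radius):i + radius + 1])
                      for i in range(m)])
    return bh_reject(p_med, q)

# Sparse-but-clustered signal among uniform nulls (synthetic).
rng = np.random.default_rng(1)
p = rng.uniform(size=100)
p[40:50] = rng.uniform(0.0, 0.001, size=10)  # a cluster of true signals
hits = fdr_l_1d(p, q=0.1)
```

Because the median of a neighbourhood of uniform nulls is rarely small, isolated small p-values are suppressed while clustered ones survive, which is the intuition behind the gain in detection sensitivity.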

    Bump hunting with non-Gaussian kernels

    It is well known that the number of modes of a kernel density estimator is monotone nonincreasing in the bandwidth if the kernel is a Gaussian density. There is numerical evidence of nonmonotonicity in the case of some non-Gaussian kernels, but little additional information is available. The present paper provides theoretical and numerical descriptions of the extent to which the number of modes is a nonmonotone function of bandwidth in the case of general compactly supported densities. Our results address popular kernels used in practice, for example, the Epanechnikov, biweight and triweight kernels, and show that in such cases nonmonotonicity is present with strictly positive probability for all sample sizes n ≥ 3. In the Epanechnikov and biweight cases the probability of nonmonotonicity equals 1 for all n ≥ 2. Nevertheless, in spite of the prevalence of lack of monotonicity revealed by these results, it is shown that the notion of a critical bandwidth (the smallest bandwidth above which the number of modes is guaranteed to be monotone) is still well defined. Moreover, just as in the Gaussian case, the critical bandwidth is of the same size as the bandwidth that minimises mean squared error of the density estimator. These theoretical results, and new numerical evidence, show that the main effects of nonmonotonicity occur for relatively small bandwidths, and have negligible impact on many aspects of bump hunting.Comment: Published at http://dx.doi.org/10.1214/009053604000000715 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
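The mode-counting at the heart of bump hunting is straightforward to prototype. A sketch with the Epanechnikov kernel follows; the two-point toy dataset, grid, and bandwidths are illustrative choices, and scanning n_modes over a fine bandwidth grid is how the nonmonotonicity discussed above would be probed numerically:

```python
import numpy as np

def epanechnikov_kde(data, grid, h):
    """Kernel density estimate with the Epanechnikov kernel."""
    u = (grid[:, None] - data[None, :]) / h
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return k.sum(axis=1) / (len(data) * h)

def n_modes(density):
    """Count strict local maxima of a density sampled on a grid."""
    d = np.diff(density)
    return int(np.sum((d[:-1] > 0) & (d[1:] <= 0)))

grid = np.linspace(-1.0, 2.0, 601)
data = np.array([0.0, 1.0])
small_h = n_modes(epanechnikov_kde(data, grid, h=0.2))  # separated bumps
large_h = n_modes(epanechnikov_kde(data, grid, h=2.0))  # bumps merged
```

For a Gaussian kernel the count would decrease monotonically as h grows; the paper shows that for compactly supported kernels like this one the count can temporarily increase again below the critical bandwidth.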

    Adversarial Defense via Neural Oscillation inspired Gradient Masking

    Spiking neural networks (SNNs) attract great attention due to their low power consumption, low latency, and biological plausibility. As they are widely deployed in neuromorphic devices for low-power brain-inspired computing, security issues become increasingly important. However, compared to deep neural networks (DNNs), SNNs currently lack specifically designed defense methods against adversarial attacks. Inspired by neural membrane potential oscillation, we propose a novel neural model that incorporates the bio-inspired oscillation mechanism to enhance the security of SNNs. Our experiments show that SNNs with neural oscillation neurons resist adversarial attacks better than ordinary SNNs with LIF neurons across a range of architectures and datasets. Furthermore, we propose a defense method that changes the model's gradients by replacing the form of the oscillation, which hides the original training gradients and confuses the attacker into using the gradients of 'fake' neurons to generate invalid adversarial samples. Our experiments suggest that the proposed defense method can effectively resist both single-step and iterative attacks, with defense effectiveness comparable to, and computational cost much lower than, adversarial training methods on DNNs. To the best of our knowledge, this is the first work that establishes adversarial defense through masking surrogate gradients on SNNs.
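The surrogate-gradient mechanism this defense manipulates can be sketched minimally: the spike function is non-differentiable, so backpropagation (and any gradient-based attacker) differentiates through a smooth stand-in instead. The fast-sigmoid surrogate and parameter values below are common choices assumed for illustration, not the paper's oscillation neuron:

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    """Heaviside spike function: the non-differentiable forward pass."""
    return (np.asarray(v) >= v_th).astype(np.float64)

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """A fast-sigmoid surrogate for dS/dv used during backprop.

    Gradient-based attackers differentiate through this stand-in,
    which is why swapping it for another shape (as oscillation-based
    masking does) can feed them misleading gradients.
    """
    return 1.0 / (1.0 + alpha * np.abs(v - v_th)) ** 2

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One leaky integrate-and-fire update with hard reset."""
    v = v + (x - v) / tau       # leaky integration of input current x
    s = spike_forward(v, v_th)  # emit spikes where threshold is crossed
    return v * (1.0 - s), s     # reset membrane potential after a spike
```

An attacker crafting adversarial inputs sees only surrogate_grad; replacing it changes the attack direction without changing the forward behavior, which is the gradient-masking idea described above.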

    Spiking sampling network for image sparse representation and dynamic vision sensor data compression

    Sparse representation has attracted great attention because it can greatly save storage resources and find representative features of data in a low-dimensional space. As a result, it is widely applied in engineering domains including feature extraction, compressed sensing, signal denoising, image clustering, and dictionary learning, to name a few. In this paper, we propose a spiking sampling network. This network is composed of spiking neurons, and it can dynamically decide which pixels should be retained and which should be masked according to the input. Our experiments demonstrate that this approach enables better sparse representation of the original image and facilitates image reconstruction compared to random sampling. We thus use this approach to compress massive data from the dynamic vision sensor, which greatly reduces the storage requirements for event data.

    A noise based novel strategy for faster SNN training

    Spiking neural networks (SNNs) are receiving increasing attention due to their low power consumption and strong bio-plausibility. Optimization of SNNs is a challenging task. Two main methods, artificial neural network (ANN)-to-SNN conversion and spike-based backpropagation (BP), both have their advantages and limitations. ANN-to-SNN conversion requires a long inference time to approximate the accuracy of the ANN, which diminishes the benefits of SNNs. With spike-based BP, training high-precision SNNs typically consumes dozens of times more computational resources and time than their ANN counterparts. In this paper, we propose a novel SNN training approach that combines the benefits of the two methods. We first train a single-step SNN (T=1) by approximating the neural potential distribution with random noise, then convert the single-step SNN (T=1) to a multi-step SNN (T=N) losslessly. The introduction of Gaussian-distributed noise leads to a significant gain in accuracy after conversion. The results show that our method considerably reduces the training and inference times of SNNs while maintaining their high accuracy. Compared to the previous two methods, ours reduces training time by 65%-75% and achieves more than 100 times faster inference. We also argue that the neuron model augmented with noise makes it more bio-plausible.
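The role of the injected noise can be illustrated directly: with Gaussian noise on the membrane potential, the expected single-step firing rate becomes a smooth function of the potential rather than a hard step, which is what makes a T=1 network amenable to gradient training before conversion. The sigma, threshold, and sample count below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def noisy_spike_rate(v, sigma=0.2, v_th=1.0, n=20000, seed=0):
    """Empirical firing probability of a single-step threshold neuron
    whose membrane potential v is perturbed by Gaussian noise.

    Averaging over the noise turns the hard threshold into a smooth
    sigmoid-like response in v (the Gaussian CDF of (v - v_th)/sigma).
    """
    rng = np.random.default_rng(seed)
    return float(np.mean(v + sigma * rng.standard_normal(n) >= v_th))
```

At the threshold the rate is 1/2, and it rises smoothly from 0 to 1 as the potential crosses it; without noise the same neuron would have a zero gradient almost everywhere.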

    Semiparametric detection of significant activation for brain fMRI

    Functional magnetic resonance imaging (fMRI) aims to locate activated regions in human brains when specific tasks are performed. The conventional tool for analyzing fMRI data applies some variant of the linear model, which is restrictive in modeling assumptions. To yield more accurate prediction of the time-course behavior of neuronal responses, the semiparametric inference for the underlying hemodynamic response function is developed to identify significantly activated voxels. Under mild regularity conditions, we demonstrate that a class of the proposed semiparametric test statistics, based on the local linear estimation technique, follow χ² distributions under null hypotheses for a number of useful hypotheses. Furthermore, the asymptotic power functions of the constructed tests are derived under the fixed and contiguous alternatives. Simulation evaluations and real fMRI data application suggest that the semiparametric inference procedure provides more efficient detection of activated brain areas than the popular imaging analysis tools AFNI and FSL.Comment: Published at http://dx.doi.org/10.1214/07-AOS519 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).