5 research outputs found

    Imperfect chimera and synchronization in a hybrid adaptive conductance based exponential integrate and fire neuron model

    In this study, the hybrid conductance-based adaptive exponential integrate-and-fire (CadEx) neuron model is proposed to determine the effect of magnetic flux on conductance-based neurons. To begin with, bifurcation analysis is carried out with respect to the input current, the resetting parameter, and the adaptation time constant in order to understand the dynamical transitions. We show that the existence of period-1, period-2, and period-4 cycles depends on the magnitude of the input current via period-doubling and period-halving bifurcations. Furthermore, chaotic behavior is discovered by varying the adaptation time constant via the period-doubling route. Following that, we examine the network behavior of CadEx neurons and discover a variety of dynamical behaviors such as desynchronization, traveling chimera, traveling wave, imperfect chimera, and synchronization. The appearance of synchronization is especially noticeable when the magnitude of the magnetic flux coefficient or the coupling strength is increased. As a result, achieving synchronization in CadEx networks is essential for neuron activity, which can aid in the realization of such behavior during many cognitive processes.
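
    The abstract does not give the CadEx equations themselves, so the sketch below is only an assumption-laden illustration: a standard adaptive exponential integrate-and-fire (AdEx) neuron integrated with the Euler method, extended with a hypothetical magnetic-flux feedback current of the memductance form rho(phi) = alpha + 3*beta*phi^2 that is common in flux-coupled neuron models. All parameter values, names, and the coupling form are assumptions, not the paper's.

        import numpy as np

        def simulate_cadex_like(I_ext_pA=500.0, T_ms=1000.0, dt=0.01):
            """Euler integration of a standard AdEx neuron plus a hypothetical
            magnetic-flux feedback current (all values purely illustrative)."""
            # Classic AdEx parameters (pF, nS, mV, ms)
            C, g_L, E_L = 200.0, 10.0, -70.0            # capacitance, leak conductance, leak reversal
            V_T, Delta_T = -50.0, 2.0                    # rheobase threshold, slope factor
            a, tau_w, b, V_r = 2.0, 120.0, 60.0, -58.0   # adaptation coupling, time constant, spike increment, reset
            # Hypothetical flux coupling: memductance rho(phi) = alpha + 3*beta*phi^2
            k_flux, alpha, beta, k1, k2 = 1.0, 0.1, 0.02, 0.01, 0.5

            n_steps = int(T_ms / dt)
            V, w, phi = E_L, 0.0, 0.0
            spike_times, trace = [], np.empty(n_steps)
            for i in range(n_steps):
                I_exp = g_L * Delta_T * np.exp((V - V_T) / Delta_T)     # exponential spike-initiation current
                I_flux = -k_flux * (alpha + 3.0 * beta * phi ** 2) * V  # flux-induced feedback current (assumed form)
                dV = (-g_L * (V - E_L) + I_exp - w + I_ext_pA + I_flux) / C
                dw = (a * (V - E_L) - w) / tau_w
                dphi = k1 * V - k2 * phi                                # flux driven by the membrane potential
                V, w, phi = V + dt * dV, w + dt * dw, phi + dt * dphi
                if V >= 0.0:                                            # spike detection and reset
                    spike_times.append(i * dt)
                    V, w = V_r, w + b
                trace[i] = V
            return np.array(spike_times), trace

    Sweeping I_ext_pA (or, in the hybrid setting, the flux coefficient) over such a model is the kind of experiment from which period-doubling and period-halving transitions can be read off the inter-spike intervals.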

    NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips

    Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems. More specifically, Spiking Neural Networks (SNNs) have emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems. While these systems are going mainstream, they have inherent security and reliability issues. In this paper, we propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues through a high-level attack. In particular, we trigger a stealthy, fault-injection-based hardware backdoor through carefully crafted adversarial input noise. Our results on Deep Neural Networks (DNNs) and SNNs show a serious integrity threat to state-of-the-art machine-learning techniques. Comment: Accepted for publication at the 2020 International Joint Conference on Neural Networks (IJCNN).
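
    The abstract does not spell out the fault-injection mechanism, but attacks of this family rest on the fact that flipping a single stored bit of a quantized weight can change its value drastically. The sketch below shows that low-level primitive in isolation; the function name and the int8 weight representation are assumptions, not the paper's implementation.

        import numpy as np

        def flip_bit(weights_int8: np.ndarray, index: int, bit: int) -> np.ndarray:
            """Return a copy of an int8 weight tensor with one stored bit flipped.

            Flipping a high-order bit (e.g. bit 7, the two's-complement sign bit)
            moves the weight by up to 128 quantization steps, which is why a
            handful of injected faults can already corrupt a model's behavior."""
            flipped = weights_int8.copy()
            # Reinterpret the bytes as uint8 so the XOR is a pure bit operation.
            flipped.reshape(-1).view(np.uint8)[index] ^= np.uint8(1 << bit)
            return flipped

        # Example: a small positive weight becomes a large negative one after a sign-bit flip.
        w = np.array([3, -7, 120], dtype=np.int8)
        w_faulty = flip_bit(w, index=0, bit=7)
        print(int(w[0]), "->", int(w_faulty[0]))   # 3 -> -125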

    A Multiple-Plasticity Spiking Neural Network Embedded in a Closed-Loop Control System to Model Cerebellar Pathologies

    The cerebellum plays a crucial role in sensorimotor control, and cerebellar disorders compromise adaptation and learning of motor responses. However, the link between alterations at the network level and cerebellar dysfunction is still unclear. In principle, this understanding would benefit from the development of an artificial system embedding the salient neuronal and plastic properties of the cerebellum and operating in closed loop. To this end, we have exploited a realistic spiking computational model of the cerebellum to analyze the network correlates of cerebellar impairment. The model was modified to reproduce three different damages of the cerebellar cortex: (i) a loss of the main output neurons (Purkinje Cells), (ii) a lesion to the main cerebellar afferents (Mossy Fibers), and (iii) a damage to a major mechanism of synaptic plasticity (Long Term Depression). The modified network models were challenged with an Eye-Blink Classical Conditioning test, a standard learning paradigm used to evaluate cerebellar impairment, and the outcome was compared to reference results obtained in human or animal experiments. In all cases, the model reproduced the partial and delayed conditioning typical of the pathologies, indicating that an intact cerebellar cortex is required to accelerate learning by transferring acquired information to the cerebellar nuclei. Interestingly, depending on the type of lesion, the redistribution of synaptic plasticity and response timing varied greatly, generating specific adaptation patterns. Thus, the present work not only extends the generalization capabilities of the cerebellar spiking model to pathological cases, but also predicts how changes at the neuronal level are distributed across the network, making it usable to infer cerebellar circuit alterations occurring in cerebellar pathologies.
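
    The paper's model is a detailed spiking cerebellar network, which the abstract does not describe at the code level; purely as a schematic of the three manipulations it lists, the toy sketch below applies each lesion type to a made-up network description. Every structure and field name here is hypothetical and not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def apply_lesion(net: dict, kind: str, severity: float = 0.5) -> dict:
            """Schematically apply one of the three lesions named in the abstract
            to a toy network description (boolean "active" masks plus a plasticity flag).

            kind: 'purkinje_loss' -> silence a fraction of Purkinje cells
                  'mossy_lesion'  -> silence a fraction of mossy fiber afferents
                  'ltd_block'     -> disable long-term depression at PF-PC synapses"""
            lesioned = {k: (v.copy() if isinstance(v, np.ndarray) else v) for k, v in net.items()}
            if kind == "purkinje_loss":
                pc = lesioned["purkinje_active"]
                pc[rng.random(pc.size) < severity] = False
            elif kind == "mossy_lesion":
                mf = lesioned["mossy_active"]
                mf[rng.random(mf.size) < severity] = False
            elif kind == "ltd_block":
                lesioned["pf_pc_ltd_enabled"] = False
            else:
                raise ValueError(f"unknown lesion type: {kind}")
            return lesioned

        # Toy network description, then one lesion applied before a (hypothetical) conditioning run.
        network = {
            "purkinje_active": np.ones(100, dtype=bool),
            "mossy_active": np.ones(1000, dtype=bool),
            "pf_pc_ltd_enabled": True,
        }
        impaired = apply_lesion(network, "purkinje_loss", severity=0.3)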

    Hardware and Software Optimizations for Accelerating Deep Neural Networks: Survey of Current Trends, Challenges, and the Road Ahead

    Currently, Machine Learning (ML) is becoming ubiquitous in everyday life. Deep Learning (DL) is already present in many applications, ranging from computer vision for medicine to autonomous driving of modern cars, as well as other sectors in security, healthcare, and finance. However, to achieve impressive performance, these algorithms employ very deep networks, requiring significant computational power both at training and at inference time. A single inference of a DL model may require billions of multiply-and-accumulate operations, making DL extremely compute- and energy-hungry. In a scenario where several sophisticated algorithms need to be executed with limited energy and low latency, the need arises for cost-effective hardware platforms capable of implementing energy-efficient DL execution. This paper first introduces the key properties of two brain-inspired models, the Deep Neural Network (DNN) and the Spiking Neural Network (SNN), and then analyzes techniques to produce efficient and high-performance designs. This work summarizes and compares the state-of-the-art solutions on the four leading platforms for executing these algorithms, namely CPU, GPU, FPGA, and ASIC, giving particular prominence to the last two since they offer greater design flexibility and the potential for high energy efficiency, especially for the inference process. In addition to hardware solutions, this paper discusses some of the important security issues that these DNN and SNN models may face during their execution, and offers a comprehensive section on benchmarking, explaining how to assess the quality of different networks and of the hardware systems designed for them.
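
    To make the "billions of multiply-and-accumulate operations" figure concrete, the small helper below (names are mine, not the survey's) counts the MACs of one 2D convolution layer using the standard formula MACs = H_out * W_out * C_out * K_h * K_w * C_in.

        def conv2d_macs(h_out: int, w_out: int, c_in: int, c_out: int,
                        k_h: int, k_w: int) -> int:
            """Multiply-accumulate count of one 2D convolution layer:
            each of the H_out*W_out*C_out outputs needs K_h*K_w*C_in MACs."""
            return h_out * w_out * c_out * k_h * k_w * c_in

        # Example: a 3x3 convolution with 256 input and 256 output channels on a
        # 56x56 feature map already costs about 1.85 billion MACs, before counting
        # the rest of the network.
        print(conv2d_macs(56, 56, 256, 256, 3, 3))  # 1849688064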

    Effects of intracranial stimulation and the involvement of the human parahippocampal cortex in perception

    How the human brain translates photons hitting the retina into conscious perception remains an open question. Throughout the medial temporal lobe (MTL), there are neurons (called concept cells) that change their firing rate when that neuron's preferred concept, e.g., a specific person or object, is seen. The firing rate of concept cells is correlated with perception. Nevertheless, it remains unclear whether or to what extent concept cells are involved in perceptogenesis, i.e., the creation of conscious percepts. Inferring from studies in monkeys, concept-specific neurons involved in perceptogenesis would be expected along the ventral and dorsal streams of visual processing (also called the what and where pathways, respectively). Various regions that are part of the dorsal stream are connected to the parahippocampal cortex (PHC), a region within the MTL. Compared to other MTL regions, the PHC's lower selectivity, absence of multimodal responses, and especially its shorter response latencies do not exclude an involvement in perceptogenesis. In fact, damage to the parahippocampal place area (PPA, a part of the PHC) results in topographical disorientation. The goal of this thesis is to test the involvement of the PHC in perception by using electrical stimulation during a forced-choice categorization task involving landscapes versus animals. First, we determined effective parameters for intracranial stimulation of brain tissue in epilepsy patients implanted with depth electrodes for seizure monitoring. We investigated the effects of amplitude, phase width, frequency, and pulse-train duration on neuronal firing, the local field potential (LFP), and behavioral responses to evoked percepts. Frequency and charge per phase were the most influential parameters on all three signals. Both parameters showed a positive effect on event-related potentials (ERPs) in the LFP. Higher frequencies (especially around 200 Hz) led to a short-term inhibition of neuronal firing, while higher charge per phase could have either an inhibitory or an excitatory effect on neuronal firing. All parameters had a positive effect on reports of evoked percepts: on reports of phosphenes in response to stimulation close to the optic radiation, as well as on reports of auditory verbal hallucinations in response to stimulation of Heschl's gyrus. Using functional magnetic resonance imaging (fMRI), we found that the PPA, i.e., the part of the PHC that is most selective for images of landscapes, is rather small (up to 1‰ of total brain volume per hemisphere), with varying degrees of hemispheric laterality. Stimulating the PHC outside of the PPA, using a 100 ms high-frequency pulse train delivered at the natural response latency of the PHC, had no effect on categorizing landscapes. However, stimulating inside the PPA, close to the peak activation of the fMRI cluster, resulted in a 7% to 10% increase in landscape responses to ambiguous stimuli. Furthermore, stimulating the PPA also led to an increase in behavioral response time, especially for images with a predominant landscape component. None of our patients reported visual hallucinations of places or scenes in response to our stimulation protocols. Our data suggest that the PPA is involved in the perceptogenesis of landscapes at a stage that does not reach awareness, while the rest of the PHC is unlikely to be involved in perceptogenesis, at least not as it pertains to the perception of landscapes or animals. We also developed an online spike sorting algorithm and an adaptive screening procedure for concept cells to pave the way for new paradigms involving informed feedback.
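
    The stimulation parameters varied in the thesis are related by simple charge arithmetic: the charge per phase of a rectangular pulse is its current amplitude times its phase width, and the number of pulses in a train is frequency times duration. The sketch below works through that arithmetic with illustrative values (2 mA, 200 microsecond phases, a 100 ms train at 200 Hz), which are example numbers and not the thesis' actual settings.

        def charge_per_phase_uC(amplitude_mA: float, phase_width_us: float) -> float:
            """Charge per phase of a rectangular stimulation pulse, Q = I * t,
            returned in microcoulombs."""
            return amplitude_mA * 1e-3 * phase_width_us * 1e-6 * 1e6  # mA * us -> uC

        def pulses_in_train(frequency_hz: float, duration_ms: float) -> int:
            """Number of pulses delivered by a train of the given rate and length."""
            return int(frequency_hz * duration_ms * 1e-3)

        # Illustrative example: 2 mA pulses with 200 us phases in a 100 ms train at 200 Hz.
        q = charge_per_phase_uC(2.0, 200.0)   # 0.4 uC per phase
        n = pulses_in_train(200.0, 100.0)     # 20 pulses in the train
        print(q, n, q * n)                    # 0.4  20  8.0 uC total per phase polarity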