
    Analysis of the neural hypercolumn in parallel PCSIM simulations

    Large and sudden changes in pitch or loudness occur statistically less frequently than gradual fluctuations, which means that natural sounds typically exhibit 1/f spectra. Experiments on human subjects showed that listeners indeed prefer 1/f-distributed melodies to melodies with faster or slower dynamics. It was recently demonstrated in animal models that neurons in the primary auditory cortex of anesthetized ferrets exhibit a pronounced preference for stimuli with 1/f statistics. In the visual modality, it was shown that neurons in the primary visual cortex of macaque monkeys exhibit tuning to sinusoidal gratings featuring 1/f dynamics. One might therefore suspect that neurons in the mammalian cortex exhibit self-organized criticality (SOC); indeed, we have found SOC-like phenomena in neurophysiological data collected in rat primary somatosensory cortex.
    In this paper we concentrate on the dynamics of a cortical hypercolumn consisting of about 128 thousand simulated neurons: a set of 128 Liquid State Machines, each consisting of 1024 neurons, was simulated on a simple cluster built of two dual quad-core machines (16 cores). PCSIM was designed as a tool for simulating artificial, biologically inspired neural networks composed of different neuron models and different types of synapses. The simulator is written in C++ with a primary interface for the Python programming language. According to its authors, it is intended to simulate networks containing up to millions of neurons and on the order of billions of synapses; this is achieved by distributing the network over the nodes of a computing cluster using the Message Passing Interface (MPI).
    The results obtained for the Leaky Integrate-and-Fire (LIF) neuron model used to construct the hypercolumn, with varying density of inter-column connections, will be discussed, along with benchmarking results for PCSIM on the cluster and predictions for grid computing. The research presented here is a good starting point for simulations of very large parts of the mammalian cortex, leading toward a better understanding of the functionality of the human brain.
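    As a reference point for the neuron model named above, here is a minimal Leaky Integrate-and-Fire sketch in plain Python. This is not the PCSIM API, and all parameter values are illustrative assumptions rather than those of the hypercolumn model:

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau_m=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Membrane trace and spike times for an input current array (amperes).

    Illustrative parameters: 20 ms membrane time constant, -65 mV
    rest/reset, -50 mV threshold, 10 Mohm membrane resistance.
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_t in enumerate(i_input):
        # Forward Euler step of dV/dt = (-(V - V_rest) + R_m * I) / tau_m
        v += dt * (-(v - v_rest) + r_m * i_t) / tau_m
        if v >= v_thresh:            # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset              # instantaneous reset
        trace.append(v)
    return np.array(trace), spikes

# Example: 200 ms of constant 2 nA drive produces regular spiking.
trace, spikes = simulate_lif(np.full(2000, 2e-9))
print(len(spikes), "spikes")
```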

    Most Popular Signal Processing Methods in Motor-Imagery BCI: A Review and Meta-Analysis

    Brain-Computer Interfaces (BCI) constitute an alternative channel of communication between humans and their environment. There are a number of technologies that enable the recording of brain activity; one of these is electroencephalography (EEG). The most common EEG-based approaches include interfaces whose operation relies on changes in the activity of Sensorimotor Rhythms (SMR) during imagined movement, so-called Motor Imagery BCI (MIBCI). The present article reviews 131 articles published from 1997 to 2017 discussing various data-processing procedures in MIBCI. The experiments described in these publications were compared in terms of the methods used for data registration and analysis. A subset of the studies (76 reports) was subjected to meta-analysis, which showed a corrected average classification accuracy of 51.96%, a high degree of heterogeneity of results (Q = 1806577.61; df = 486; p < 0.001; I² = 99.97%), as well as significant effects of the number of channels, the number of mental images, and the method of spatial filtering. On the other hand, the meta-regression failed to provide evidence of an increase in the effectiveness of the solutions proposed in articles published in recent years. The authors propose a newly developed standard for presenting results acquired during MIBCI experiments, designed to facilitate communication and comparison of essential information regarding the observed effects. Based on the findings of the descriptive analysis and meta-analysis, the authors also formulate recommendations regarding practices applied in research on signal processing in MIBCI.
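    The heterogeneity statistics quoted above (Q, df, and I²) follow the standard fixed-effect meta-analysis formulas; a minimal sketch with made-up per-study values (the actual study data are not reproduced here):

```python
import numpy as np

def cochran_q_i2(effects, variances):
    """Cochran's Q, its degrees of freedom, and Higgins' I^2 (percent)."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (effects - pooled) ** 2)
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0            # share of variance from heterogeneity
    return q, df, i2

# Made-up per-study classification accuracies and variances, for illustration:
q, df, i2 = cochran_q_i2([0.52, 0.71, 0.64], [0.001, 0.002, 0.0015])
print(f"Q = {q:.2f}, df = {df}, I2 = {i2:.1f}%")
```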

    Mapping the Human Brain in Frequency Band Analysis of Brain Cortex Electroencephalographic Activity for Selected Psychiatric Disorders

    There are still no good quantitative methods for psychiatric diagnosis; the interview remains the main and most important tool in the psychiatrist's work. This paper presents the results of electroencephalographic research on a group of 30 patients with psychiatric disorders compared to a control group of healthy volunteers. All subjects solved a working memory task. The digit-span working memory test was chosen as one of the most popular tasks given to subjects with cognitive dysfunctions, especially patients with panic disorders, depression (including the depressive phase of bipolar disorder), phobias, and schizophrenia. Given this cohort, some results for subjects with insomnia and Asperger syndrome are also presented. The cortical activity of the subjects' brains was registered with a dense-array EEG amplifier. Source localization using a photogrammetry station and the sLORETA algorithm was then performed in five EEG frequency bands, and the most active Brodmann Areas are indicated. The methodology for mapping the brain and the research protocol are presented. The first results indicate that the presented technique can be useful in finding neurophysiological biomarkers of psychiatric disorders. First attempts were made to associate hyperactivity of selected Brodmann Areas with particular disorders.
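    For context, band-limited cortical activity of the kind analyzed above is typically quantified as power within conventional frequency bands; a minimal sketch using Welch's method, in which the band edges and sampling rate are common conventions rather than values taken from the paper:

```python
import numpy as np
from scipy.signal import welch

# Conventional band edges in Hz (an assumption; the paper's exact
# definitions may differ).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(channel, fs=250.0):
    """Absolute power per frequency band for one EEG channel."""
    freqs, psd = welch(channel, fs=fs, nperseg=int(2 * fs))  # 2 s windows
    return {name: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                           freqs[(freqs >= lo) & (freqs < hi)])
            for name, (lo, hi) in BANDS.items()}

# Synthetic check: a 10 Hz oscillation should dominate the alpha band.
t = np.arange(0, 10, 1 / 250.0)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
print(band_powers(signal))
```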

    A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation

    Analytical ultracentrifugation (AUC) is a first-principles method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including the accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold, with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
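    The combined correction described above amounts, schematically, to multiplying each measured s-value by per-instrument scale factors; a sketch in which the factor names and values are hypothetical stand-ins, for illustration only:

```python
import numpy as np

def correct_s_value(s_measured, f_time, f_temperature, f_radial):
    """Apply multiplicative external-calibration corrections to an s-value.

    The factors are hypothetical stand-ins for corrections derived from
    the elapsed-time/scan-velocity, temperature, and radial-magnification
    references described in the study.
    """
    return s_measured * f_time * f_temperature * f_radial

# Illustrative uncorrected s-values (in svedbergs) from three instruments,
# each with its own (made-up) calibration factors:
raw = [3.9, 4.5, 4.7]
factors = [(1.02, 1.01, 1.00), (0.99, 1.00, 0.98), (0.97, 0.99, 0.97)]
corrected = [correct_s_value(s, *f) for s, f in zip(raw, factors)]
print(np.mean(corrected), np.std(corrected))
```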


    Modeling of GERDA Phase II data

    The GERmanium Detector Array (GERDA) experiment at the Gran Sasso underground laboratory (LNGS) of INFN is searching for neutrinoless double-beta ($0\nu\beta\beta$) decay of $^{76}$Ge. The technological challenge of GERDA is to operate in a "background-free" regime in the region of interest (ROI) after analysis cuts for the full 100 kg·yr target exposure of the experiment. A careful modeling and decomposition of the full-range energy spectrum is essential to predict the shape and composition of events in the ROI around $Q_{\beta\beta}$ for the $0\nu\beta\beta$ search, to extract a precise measurement of the half-life of the double-beta decay mode with neutrinos ($2\nu\beta\beta$), and to identify the location of residual impurities. The latter will permit future experiments to devise strategies to further lower the background and achieve even better sensitivities. In this article the background decomposition prior to analysis cuts is presented for GERDA Phase II. The background model fit yields a flat spectrum in the ROI with a background index (BI) of $16.04^{+0.78}_{-0.85} \cdot 10^{-3}$ cts/(kg·keV·yr) for the enriched BEGe data set and $14.68^{+0.47}_{-0.52} \cdot 10^{-3}$ cts/(kg·keV·yr) for the enriched coaxial data set. These values are similar to those of GERDA Phase I, despite a much larger number of detectors and hence of radioactive hardware components.
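    A background index in cts/(kg·keV·yr) translates directly into an expected count in an analysis window as N = BI × exposure × window width; a small arithmetic sketch using the BEGe central value quoted above (the window width is an illustrative assumption):

```python
# Expected background counts in an analysis window:
#   N = BI * exposure * window_width
bi = 16.04e-3      # cts / (kg keV yr), BEGe central value quoted above
exposure = 100.0   # kg yr, the target exposure of the experiment
window = 10.0      # keV, illustrative ROI width (an assumption)

expected_counts = bi * exposure * window
print(f"~{expected_counts:.1f} background counts expected in the window")
```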

    Bray-Curtis Metrics as Measure of Liquid State Machine Separation Ability in Function of Connections Density

    Separation ability is one of the two most important properties of Liquid State Machines used in Liquid Computing theory. To measure the so-called distance between the states a Liquid State Machine can exist in, different norms and metrics can be applied. Until now we have used the Euclidean distance to measure the distance between states representing different stimulations of simulated cortical microcircuits. In this paper we compare our previously used methods with an approach based on the Bray-Curtis measure of dissimilarity. A systematic analysis of efficiency, compared across different numbers of simulated synapses present in the model, will be discussed.
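    For reference, the Bray-Curtis dissimilarity between two non-negative vectors u and v is Σ|uᵢ − vᵢ| / Σ(uᵢ + vᵢ), bounded in [0, 1], whereas the Euclidean distance is unbounded; a short sketch with made-up state vectors:

```python
import numpy as np
from scipy.spatial.distance import braycurtis, euclidean

# Two hypothetical liquid-state vectors, e.g. filtered spike counts
# per neuron (values are made up for illustration).
state_a = np.array([0.0, 3.0, 1.0, 4.0])
state_b = np.array([1.0, 1.0, 2.0, 4.0])

# Bray-Curtis: sum|a_i - b_i| / sum(a_i + b_i) = 4/16 here.
print("Bray-Curtis:", braycurtis(state_a, state_b))  # 0.25
print("Euclidean:  ", euclidean(state_a, state_b))   # sqrt(6) ~ 2.449
```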

    Performance comparison of parallel fastICA algorithm in the PLGrid structures

    When processing EEG signals, methods for removing artifacts play an important role. One of the most commonly used is ICA (independent component analysis) [1-3]. However, algorithms of this type are computationally expensive, and known implementations of ICA-type algorithms rarely support parallel computing or exploit the capabilities of the underlying architecture. This paper presents a parallel implementation of the fastICA algorithm using available libraries and extensions for Intel processors (such as BLAS, MKL, and Cilk Plus) and compares execution times on two selected architectures in the PLGrid infrastructure (Zeus and Prometheus).
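    For context, the core unmixing step of an ICA-based cleaning pipeline looks as follows; this sketch uses scikit-learn's FastICA rather than the paper's MKL/Cilk Plus implementation, and the three-channel signal is synthetic:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic 3-channel "recording": linear mixtures of three sources.
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t),                    # smooth oscillation
                np.sign(np.sin(3 * t)),           # square wave
                rng.laplace(size=t.size)]         # spiky, artifact-like
observed = sources @ rng.normal(size=(3, 3)).T    # what electrodes would see

# Unmix; an artifact component could then be zeroed before projecting back.
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(observed)          # estimated sources (n, 3)
restored = ica.inverse_transform(components)      # back to channel space
print(components.shape, np.abs(observed - restored).max())
```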