Using spike train distances to identify the most discriminative neuronal subpopulation
Background: Spike trains of multiple neurons can be analyzed following either
the summed population (SP) or the labeled line (LL) hypothesis: responses to
external stimuli are generated either by the neuronal population as a whole or
by individual neurons with encoding capacities of their own. The SPIKE-distance,
estimated either for a single spike train pooled over the population or for
each neuron separately, can serve to quantify these responses.
New Method: For the SP case we compare three algorithms that search for the
most discriminative subpopulation over all stimulus pairs. For the LL case we
introduce a new algorithm that combines neurons that individually separate
different pairs of stimuli best.
Results: The best approach for SP is a brute-force search over all possible
subpopulations; however, this is feasible only for small populations. For more
realistic settings, simulated annealing clearly outperforms gradient algorithms
with only a limited increase in computational load. Our novel LL approach can
handle very involved coding scenarios despite its computational ease.
Comparison with Existing Methods: Spike train distances have been extended to
the analysis of neural populations by interpolating between SP and LL coding,
which includes parametrizing the importance of distinguishing spikes fired in
different neurons. Yet these approaches only consider the population as a
whole. The explicit focus on subpopulations renders our algorithms
complementary.
Conclusions: The spectrum of encoding possibilities in neural populations is
broad. The SP and LL cases are two extremes for which our algorithms provide
correct identification results.
Comment: 14 pages, 9 figures
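The simulated-annealing search over subpopulations described in the Results can be sketched as below. This is an illustrative sketch only: `score_fn` is a hypothetical placeholder for a SPIKE-distance-based discriminability measure, and the move set (toggling one neuron's membership) and cooling schedule are assumptions, not the paper's exact algorithm:

```python
import math
import random

def anneal_subpopulation(neurons, score_fn, steps=2000, t0=1.0, cooling=0.995, seed=0):
    """Simulated-annealing search for the most discriminative subpopulation.

    `score_fn` maps a frozenset of neuron ids to a discriminability score
    (here a hypothetical stand-in for a SPIKE-distance-based measure).
    A move toggles one neuron's membership; worse moves are accepted with
    a Boltzmann probability that shrinks as the temperature cools.
    """
    rng = random.Random(seed)
    current = {n for n in neurons if rng.random() < 0.5} or {neurons[0]}
    cur_score = score_fn(frozenset(current))
    best, best_score = set(current), cur_score
    temp = t0
    for _ in range(steps):
        candidate = set(current)
        n = rng.choice(neurons)
        if n in candidate and len(candidate) > 1:
            candidate.remove(n)  # never shrink to the empty set
        else:
            candidate.add(n)
        cand_score = score_fn(frozenset(candidate))
        # Always accept improvements; accept worse moves with prob exp(delta/T).
        if cand_score >= cur_score or rng.random() < math.exp((cand_score - cur_score) / temp):
            current, cur_score = candidate, cand_score
            if cur_score > best_score:
                best, best_score = set(current), cur_score
        temp *= cooling
    return best, best_score
```

With a separable toy score the search recovers the planted subpopulation quickly; for realistic population sizes the appeal, per the abstract, is that only a limited number of score evaluations is needed compared to the exponential brute-force search.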
Surrogate time series
Before we apply nonlinear techniques, for example those inspired by chaos
theory, to dynamical phenomena occurring in nature, it is necessary to first
ask if the use of such advanced techniques is justified "by the data". While
many processes in nature seem a priori very unlikely to be linear, their
possibly nonlinear nature might not be evident in specific aspects of their dynamics.
The method of surrogate data has become a very popular tool to address such a
question. However, while it was meant to provide a statistically rigorous,
foolproof framework, some limitations and caveats have shown up in its
practical use. In this paper, recent efforts to understand the caveats, avoid
the pitfalls, and to overcome some of the limitations, are reviewed and
augmented by new material. In particular, we will discuss specific as well as
more general approaches to constrained randomisation, providing a full range of
examples. New algorithms will be introduced for unevenly sampled and
multivariate data and for surrogate spike trains. The main limitation, which
lies in the interpretability of the test results, will be illustrated through
instructive case studies. We will also discuss some implementational aspects of
the realisation of these methods in the TISEAN
(http://www.mpipks-dresden.mpg.de/~tisean) software package.
Comment: 28 pages, 23 figures, software at http://www.mpipks-dresden.mpg.de/~tisea
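As a minimal illustration of the surrogate-data idea (not of the constrained-randomisation schemes the paper develops), the sketch below tests against the simplest null hypothesis, temporally independent noise: shuffle surrogates preserve the amplitude distribution exactly while destroying all temporal correlations, and the original value of a discriminating statistic is ranked against the surrogate ensemble. The choice of lag-one autocorrelation as the statistic is an arbitrary assumption for illustration:

```python
import random
import statistics

def lag1_autocorr(x):
    """Discriminating statistic: lag-one autocorrelation of the series."""
    mu = statistics.fmean(x)
    num = sum((a - mu) * (b - mu) for a, b in zip(x, x[1:]))
    den = sum((a - mu) ** 2 for a in x)
    return num / den

def surrogate_test(x, n_surrogates=99, seed=0):
    """Rank-order surrogate test against the null of independent noise.

    Each surrogate is a random shuffle of the data, which exactly preserves
    the amplitude distribution but destroys all correlations. Returns the
    original statistic and a one-sided p-value estimate that includes the
    original series in the ranking.
    """
    rng = random.Random(seed)
    t0 = lag1_autocorr(x)
    exceed = 0
    for _ in range(n_surrogates):
        s = list(x)
        rng.shuffle(s)
        if abs(lag1_autocorr(s)) >= abs(t0):
            exceed += 1
    return t0, (exceed + 1) / (n_surrogates + 1)
```

With 99 surrogates the smallest attainable p-value is 1/100, which is the usual reason for choosing an ensemble size of the form 1/alpha - 1; the constrained-randomisation methods reviewed in the paper replace the shuffle with randomisations that preserve richer null-hypothesis properties.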
Unconventional machine learning of genome-wide human cancer data
Recent advances in high-throughput genomic technologies coupled with
exponential increases in computer processing and memory have allowed us to
interrogate the complex aberrant molecular underpinnings of human disease from
a genome-wide perspective. While the deluge of genomic information is expected
to increase, a bottleneck in conventional high-performance computing is rapidly
approaching. Inspired in part by recent advances in physical quantum
processors, we evaluated several unconventional machine learning (ML)
strategies on actual human tumor data. Here we show for the first time the
efficacy of multiple annealing-based ML algorithms for classification of
high-dimensional, multi-omics human cancer data from the Cancer Genome Atlas.
To assess algorithm performance, we compared these classifiers to a variety of
standard ML methods. Our results indicate the feasibility of using
annealing-based ML to provide competitive classification of human cancer types
and associated molecular subtypes and superior performance with smaller
training datasets, thus providing compelling empirical evidence for the
potential future application of unconventional computing architectures in the
biomedical sciences.
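The evaluation setting — classifiers compared on high-dimensional data as the training set shrinks — can be illustrated with a toy harness. This sketch is not the annealing-based methods themselves: it uses a simple nearest-centroid classifier on synthetic two-class data standing in for multi-omics feature vectors, and all names and parameters are illustrative placeholders:

```python
import math
import random

def make_data(n, d=50, seed=0):
    """Synthetic stand-in for multi-omics feature vectors: two classes whose
    means differ slightly in every dimension (illustrative only)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.randrange(2)
        shift = 0.4 if label else -0.4
        data.append(([rng.gauss(shift, 1.0) for _ in range(d)], label))
    return data

def nearest_centroid_fit(train):
    """Compute the per-class mean feature vector."""
    sums, counts = {}, {}
    for x, y in train:
        if y not in sums:
            sums[y], counts[y] = list(x), 1
        else:
            sums[y] = [a + b for a, b in zip(sums[y], x)]
            counts[y] += 1
    return {y: [v / counts[y] for v in sums[y]] for y in sums}

def nearest_centroid_predict(centroids, x):
    """Assign x to the class with the closest centroid (Euclidean)."""
    return min(centroids, key=lambda y: math.dist(x, centroids[y]))

def accuracy(train, test):
    cent = nearest_centroid_fit(train)
    return sum(nearest_centroid_predict(cent, x) == y for x, y in test) / len(test)
```

Sweeping the training-set size in such a harness is the kind of comparison the abstract describes; the paper's claim is that annealing-based classifiers retain competitive accuracy where conventional methods degrade with few training samples.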
SPI/INTEGRAL in-flight performance
The SPI instrument was launched on board the INTEGRAL observatory on
October 17, 2002. SPI is a spectrometer devoted to observations of the sky in
the 20 keV-8 MeV energy range using 19 germanium detectors. The performance of
the cryogenic system is nominal, allowing the 19 kg of germanium to be cooled
down to 85 K with a comfortable margin. The energy resolution of the whole camera is
2.5 keV at 1.1 MeV. This resolution degrades with time due to particle
irradiation in space. We show that the annealing process allows the recovery of
the initial performance. The anticoincidence shield works as expected, with a
low threshold at 75 keV, reducing the GeD background by a factor of 20. The
digital front-end electronics system allows perfect alignment in time of
all the signals as well as the optimisation of the dead time (12%). We
demonstrate that SPI is able to map regions as complex as the galactic plane.
The obtained spectrum of the Crab nebula validates the present version of our
response matrix. The 3 sigma sensitivity of the instrument at 1 MeV is
8 x 10^-7 ph cm^-2 s^-1 keV^-1 for the continuum and 3 x 10^-5 ph cm^-2 s^-1
for narrow lines.
Comment: 10 pages, 18 figures, accepted for publication in A&A (special
INTEGRAL volume)