EEG analytics for early detection of autism spectrum disorder: a data-driven approach
Autism spectrum disorder (ASD) is a complex and heterogeneous disorder, diagnosed on the basis of behavioral symptoms during the second year of life or later. Finding scalable biomarkers for early detection is challenging because of the variability in the presentation of the disorder and the need for simple measurements that could be implemented routinely during well-baby checkups. EEG is a relatively easy-to-use, low-cost brain measurement tool that is being increasingly explored as a potential clinical tool for monitoring atypical brain development. EEG measurements were collected from 99 infants with an older sibling diagnosed with ASD and 89 low-risk controls, beginning at 3 months of age and continuing until 36 months of age. Nonlinear features were computed from the EEG signals and used as input to statistical learning methods. Prediction of the clinical diagnostic outcome (ASD or not ASD) was highly accurate when using EEG measurements from as early as 3 months of age. Specificity, sensitivity, and positive predictive value (PPV) were high, exceeding 95% at some ages. Predictions of ADOS calibrated severity scores for all infants in the study, using only EEG data taken as early as 3 months of age, were strongly correlated with the actual measured scores. This suggests that useful digital biomarkers might be extracted from EEG measurements.

This research was supported by National Institute of Mental Health (NIMH) grant R21 MH 093753 (to WJB), National Institute on Deafness and Other Communication Disorders (NIDCD) grant R21 DC08647 (to HTF), NIDCD grant R01 DC 10290 (to HTF and CAN), and a grant from the Simons Foundation (to CAN, HTF, and WJB). We are especially grateful to the staff and students who worked on the study and to the families who participated.
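The pipeline described here (nonlinear features extracted from EEG signals, fed to a statistical learner) can be sketched in miniature. Sample entropy is one nonlinear feature commonly used in EEG analysis; the abstract does not name the study's actual feature set, so the feature choice, signals, and parameters below are illustrative assumptions only.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts pairs of length-m templates
    matching within tolerance r*std(x) (Chebyshev distance), and A counts
    the same for templates of length m+1. Lower values = more regular."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    n = len(x)

    def count_matches(length):
        # all overlapping templates of the given length
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to all later templates
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d < tol)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(0)
noise = rng.standard_normal(500)                    # irregular signal
sine = np.sin(np.linspace(0, 20 * np.pi, 500))      # regular signal
print(sample_entropy(noise), sample_entropy(sine))  # noise scores higher
```

A vector of such features per recording would then be the input to an ordinary classifier; the irregular signal yields the higher entropy, which is the kind of discriminative signal a downstream learner can exploit.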
Concrete resource analysis of the quantum linear system algorithm used to compute the electromagnetic scattering cross section of a 2D target
We provide a detailed estimate for the logical resource requirements of the
quantum linear system algorithm (QLSA) [Phys. Rev. Lett. 103, 150502 (2009)]
including the recently described elaborations [Phys. Rev. Lett. 110, 250504
(2013)]. Our resource estimates are based on the standard quantum-circuit model
of quantum computation; they comprise circuit width, circuit depth, the number
of qubits and ancilla qubits employed, and the overall number of elementary
quantum gate operations as well as more specific gate counts for each
elementary fault-tolerant gate from the standard set {X, Y, Z, H, S, T, CNOT}.
To perform these estimates, we used an approach that combines manual analysis
with automated estimates generated via the Quipper quantum programming language
and compiler. Our estimates pertain to the example problem size N=332,020,680
beyond which, according to a crude big-O complexity comparison, QLSA is
expected to run faster than the best known classical linear-system solving
algorithm. For this problem size, a desired calculation accuracy of 0.01
requires an approximate circuit width of 340 and a circuit depth of order
10^25 if oracle costs are excluded, and a circuit width and depth of order
10^8 and 10^29, respectively, if oracle costs are included, indicating that the
commonly ignored oracle resources are considerable. In addition to providing
detailed logical resource estimates, it is also the purpose of this paper to
demonstrate explicitly how these impressively large numbers arise with an
actual circuit implementation of a quantum algorithm. While our estimates may
prove to be conservative as more efficient advanced quantum-computation
techniques are developed, they nevertheless provide a valid baseline for
research targeting a reduction of the resource requirements, implying that a
reduction by many orders of magnitude is necessary for the algorithm to become
practical.

Comment: 37 pages, 40 figures
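The "crude big-O complexity comparison" that produces a crossover problem size can be mimicked with toy cost models: double N until the polylog-in-N quantum cost undercuts the linear-in-N classical cost. The constants below (condition number, accuracy, scalings) are invented for this sketch and are not the paper's, which derives N = 332,020,680 from the actual algorithmic constants.

```python
import math

# Hypothetical cost models for a crude big-O crossover estimate.
# KAPPA (condition number) and the scalings are assumptions for
# illustration, NOT the constants derived in the paper.
KAPPA, EPS = 1e4, 0.01

def classical_cost(n):
    # e.g. an iterative classical solver scaling like N * kappa
    return n * KAPPA

def quantum_cost(n):
    # a polylog-in-N quantum model, ~ log2(N) * kappa^2 / eps
    return math.log2(n) * KAPPA**2 / EPS

n = 2
while quantum_cost(n) >= classical_cost(n):  # double until quantum wins
    n *= 2
print(f"crossover near N = {n:,}")
```

Even with made-up constants the shape of the argument is visible: the crossover lands in the tens of millions, i.e. the asymptotic advantage only pays off at very large problem sizes, which is exactly why the concrete resource counts at that scale matter.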
Stochastic finite differences and multilevel Monte Carlo for a class of SPDEs in finance
In this article, we propose a Milstein finite difference scheme for a
stochastic partial differential equation (SPDE) describing a large particle
system. We show, by means of Fourier analysis, that the discretisation on an
unbounded domain is convergent of first order in the timestep and second order
in the spatial grid size, and that the discretisation is stable with respect to
boundary data. Numerical experiments clearly indicate that the same convergence
order also holds for boundary-value problems. Multilevel path simulation,
previously used for SDEs, is shown to give substantial complexity gains
compared to a standard discretisation of the SPDE or direct simulation of the
particle system. We derive complexity bounds and illustrate the results by an
application to basket credit derivatives.
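The multilevel idea, telescoping the expectation over a hierarchy of time grids so that most paths are sampled cheaply on coarse levels, can be sketched for a plain SDE. The toy example below uses geometric Brownian motion with Euler time stepping and invented parameters, not the article's SPDE or its Milstein scheme; the key ingredient is the coupling of fine and coarse paths through shared Brownian increments.

```python
import numpy as np

# Minimal multilevel Monte Carlo sketch for E[S_T] under geometric
# Brownian motion dS = mu*S dt + sigma*S dW, Euler scheme on nested grids.
# All parameters are illustrative assumptions.
rng = np.random.default_rng(1)
mu, sigma, S0, T = 0.05, 0.2, 1.0, 1.0

def level_estimator(level, n_paths, m0=4):
    """Estimate E[P_l - P_{l-1}] with coupled fine/coarse Euler paths."""
    n_fine = m0 * 2**level
    dt = T / n_fine
    dW = rng.standard_normal((n_paths, n_fine)) * np.sqrt(dt)
    s_fine = np.full(n_paths, S0)
    for k in range(n_fine):
        s_fine = s_fine * (1 + mu * dt + sigma * dW[:, k])
    if level == 0:
        return s_fine.mean()           # base level: no correction
    s_coarse = np.full(n_paths, S0)
    dt_c = 2 * dt
    for k in range(n_fine // 2):
        # coarse path reuses the fine Brownian increments, pairwise summed
        dW_c = dW[:, 2 * k] + dW[:, 2 * k + 1]
        s_coarse = s_coarse * (1 + mu * dt_c + sigma * dW_c)
    return (s_fine - s_coarse).mean()  # level correction, small variance

estimate = sum(level_estimator(l, 20000) for l in range(5))
print(estimate)  # close to S0 * exp(mu * T) ≈ 1.0513
```

Because the coupled corrections have small variance, the upper levels need far fewer paths than a single-level estimator at the finest grid would, which is the source of the complexity gains the article quantifies.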
Prediction of claims in export credit finance: a comparison of four machine learning techniques
This study evaluates four machine learning (ML) techniques (Decision Trees (DT), Random Forests (RF), Neural Networks (NN), and Probabilistic Neural Networks (PNN)) on their ability to accurately predict export credit insurance claims. Additionally, we compare the performance of the ML techniques against a simple benchmark (BM) heuristic. The analysis is based on a dataset provided by the Berne Union, which is the most comprehensive collection of export credit insurance data and has been used in only two scientific studies so far. All ML techniques performed relatively well in predicting whether or not claims would be incurred and, with limitations, in predicting the order of magnitude of the claims. No satisfactory results were achieved in predicting actual claim ratios. RF performed significantly better than DT, NN, and PNN on all prediction tasks, and most reliably carried its validation performance forward to test performance.
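The study's core experimental design, a learned model measured against a simple benchmark heuristic, can be sketched with synthetic data. The features, class balance, and nearest-centroid "model" below are stand-ins invented for illustration; they are not the Berne Union data or the four ML techniques evaluated in the study.

```python
import numpy as np

# Synthetic "claim / no claim" data: 25% of contracts incur a claim.
rng = np.random.default_rng(42)
n = 1000
X_claim = rng.normal(loc=1.0, size=(n // 4, 3))      # claim cases
X_ok = rng.normal(loc=-1.0, size=(3 * n // 4, 3))    # no-claim cases
X = np.vstack([X_claim, X_ok])
y = np.array([1] * (n // 4) + [0] * (3 * n // 4))

# Benchmark heuristic: always predict the majority class (no claim).
bm_acc = np.mean(y == 0)

# A simple learned model: nearest-centroid classification.
c1, c0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1)
        < np.linalg.norm(X - c0, axis=1)).astype(int)
model_acc = np.mean(pred == y)
print(bm_acc, model_acc)
```

The point of the comparison is visible even in this toy setup: with imbalanced classes the majority-class heuristic already scores 75%, so a model only demonstrates value when it clears that bar, which is why the study reports ML performance relative to the BM heuristic rather than in isolation.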
Contributions to High-Dimensional Pattern Recognition
This thesis gathers some contributions to statistical pattern recognition particularly targeted
at problems in which the feature vectors are high-dimensional. Three pattern recognition
scenarios are addressed, namely pattern classification, regression analysis and score fusion.
For each of these, an algorithm for learning a statistical model is presented. In order to
address the difficulty that is encountered when the feature vectors are high-dimensional,
adequate models and objective functions are defined. The strategy of learning simultaneously
a dimensionality reduction function and the pattern recognition model parameters is shown to
be quite effective, making it possible to learn the model without discarding any discriminative
information. Another topic that is addressed in the thesis is the use of tangent vectors as
a way to take better advantage of the available training data. Using this idea, two popular
discriminative dimensionality reduction techniques are shown to be effectively improved. For
each of the algorithms proposed throughout the thesis, several data sets are used to illustrate
the properties and the performance of the approaches. The empirical results show that the
proposed techniques perform remarkably well, and furthermore the models learned tend to
be very computationally efficient.

Villegas Santamaría, M. (2011). Contributions to High-Dimensional Pattern Recognition [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/10939
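The thesis's central idea of learning a dimensionality reduction function simultaneously with the recognition model can be sketched as a rank-constrained logistic model: a projection matrix B and classifier weights w receive gradient updates from the same loss, so the projection is shaped by the classification objective rather than fixed beforehand. This is a simplified stand-in for the algorithms actually proposed, with invented dimensions and synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 50, 2, 400
X = rng.standard_normal((n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # labels live in a 2-D subspace

B = 0.1 * rng.standard_normal((d, r))  # dimensionality reduction, learned
w = np.zeros(r)                        # classifier in the reduced space

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

lr = 0.1
for _ in range(2000):
    z = X @ B                          # project to r dimensions
    p = sigmoid(z @ w)                 # classify in the reduced space
    g = p - y                          # d(logistic loss)/d(logit)
    grad_w = z.T @ g / n
    grad_B = np.outer(X.T @ g, w) / n  # chain rule through the projection
    w -= lr * grad_w
    B -= lr * grad_B                   # projection and classifier co-adapt

acc = np.mean((sigmoid(X @ B @ w) > 0.5) == (y == 1))
print(acc)
```

Because the projection is optimized for the same objective as the classifier, the discriminative subspace is found rather than hoped for, which is the sense in which joint learning avoids discarding discriminative information.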