
    A CASE STUDY ON SUPPORT VECTOR MACHINES VERSUS ARTIFICIAL NEURAL NETWORKS

    The capability of artificial neural networks for pattern recognition of real-world problems is well known. In recent years, the support vector machine has been advocated for its structural risk minimization, which leads to tolerance margins around decision boundaries. The structures and performances of these pattern classifiers depend on the feature dimension and the training data size. The objective of this research is to compare these pattern recognition systems in a case study: the classification of hypertensive and normotensive right ventricle (RV) shapes obtained from Magnetic Resonance Image (MRI) sequences. In this case the feature dimension is reasonable and the available training data set is small, but the decision surface is highly nonlinear. For diagnosis of congenital heart defects, especially those associated with pressure and volume overload problems, a reliable pattern classifier for determining right ventricle function is needed. The RV's global and regional surface-to-volume ratios are assessed from an individual's MRI heart images and used as features for the pattern classifiers. We first considered two linear classification methods: the Fisher linear discriminant and a linear classifier trained by the Ho-Kashyap algorithm. Because the data are not linearly separable, we then considered artificial neural networks with back-propagation training and radial basis function networks, which provide nonlinear decision surfaces. Third, a support vector machine was trained, which gives tolerance margins on both sides of the decision surface. We found in this case study that back-propagation training of an artificial neural network depends heavily on the selection of initial weights, even when they are randomized. A support vector machine with radial basis function kernels is easily trained and provides decision tolerance margins, albeit small ones.
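    The Fisher linear discriminant mentioned above is simple enough to sketch directly. The surface-to-volume features, class means, and sample sizes below are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical features: (global, regional) surface-to-volume ratios
# per subject, drawn around invented class means.
normo = rng.normal([1.0, 1.0], 0.1, size=(20, 2))   # normotensive
hyper = rng.normal([1.5, 1.6], 0.1, size=(20, 2))   # hypertensive

m0, m1 = normo.mean(axis=0), hyper.mean(axis=0)
# Within-class scatter matrix: sum of the per-class scatter matrices.
Sw = np.cov(normo, rowvar=False) * (len(normo) - 1) \
   + np.cov(hyper, rowvar=False) * (len(hyper) - 1)
w = np.linalg.solve(Sw, m1 - m0)        # Fisher projection direction
threshold = w @ (m0 + m1) / 2           # midpoint between projected means

def classify(x):
    """1 = hypertensive, 0 = normotensive."""
    return int(w @ x > threshold)
```

Since Sw is positive definite, the projected class means always fall on opposite sides of the midpoint threshold, which is why the rule is well defined even for a small training set.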

    PkANN - I. Non-linear matter power spectrum interpolation through artificial neural networks

    We investigate the interpolation of power spectra of matter fluctuations using an artificial neural network (PkANN). We present a new approach to confront small-scale non-linearities in the power spectrum of matter fluctuations. This ever-present and pernicious uncertainty is often the Achilles' heel in cosmological studies and must be reduced if we are to see the advent of precision cosmology in the late-time Universe. We show that an optimally trained artificial neural network (ANN), when presented with a set of cosmological parameters (Omega_m h^2, Omega_b h^2, n_s, w_0, sigma_8, m_nu and redshift z), can fit the non-linear matter power spectrum deduced through N-body simulations with a worst-case error of <=1 per cent (for z<=2), for modes up to k<=0.7 h/Mpc. Our power spectrum interpolator is accurate over the entire parameter space. This is a significant improvement over some of the current matter power spectrum calculators. In this paper, we detail how an accurate interpolation of the matter power spectrum is achievable with only a sparsely sampled grid of cosmological parameters. Unlike large-scale N-body simulations, which are computationally expensive and/or infeasible, a well-trained ANN can be an extremely quick and reliable tool for interpreting cosmological observations and for parameter estimation. This paper is the first in a series. In this method paper, we generate the non-linear matter power spectra using HaloFit and use them as mock observations to train the ANN. This work sets the foundation for Paper II, where a suite of N-body simulations will be used to compute the non-linear matter power spectra at sub-per cent accuracy, in the quasi-non-linear regime 0.1 h/Mpc <= k <= 0.9 h/Mpc. A trained ANN based on this N-body suite will be released for the scientific community.Comment: 12 pages, 9 figures, 2 tables, updated to match version accepted by MNRAS.
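    As a toy stand-in for this kind of emulation, the sketch below trains a one-hidden-layer network by plain gradient descent to interpolate a smooth 1-D function (sin) from sparse samples. The architecture, learning rate, and target function are illustrative assumptions, not PkANN's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Sparse samples of a smooth function, standing in for P(k) values
# tabulated over a sparse grid of cosmological parameters.
x = np.linspace(0, np.pi, 32).reshape(-1, 1)
y = np.sin(x)

# One hidden tanh layer; weights are illustrative, not tuned.
W1 = rng.normal(0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1)); b2 = np.zeros(1)

lr = 0.05
losses = []
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation through the two layers (mean-squared-error loss).
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # tanh' = 1 - tanh^2
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

Once trained, evaluating the network is a handful of matrix products, which is what makes an emulator so much cheaper than re-running a simulation per parameter point.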

    The High Time Resolution Universe Survey VI: An Artificial Neural Network and Timing of 75 Pulsars

    We present 75 pulsars discovered in the mid-latitude portion of the High Time Resolution Universe survey, 54 of which have full timing solutions. All the pulsars have spin periods greater than 100 ms, and none of those with timing solutions are in binaries. Two display particularly interesting behaviour; PSR J1054-5944 is found to be an intermittent pulsar, and PSR J1809-0119 has glitched twice since its discovery. In the second half of the paper we discuss the development and application of an artificial neural network in the data-processing pipeline for the survey. We discuss the tests that were used to generate scores and find that our neural network was able to reject over 99% of the candidates produced in the data processing, and able to blindly detect 85% of pulsars. We suggest that improvements to the accuracy should be possible if further care is taken when training an artificial neural network; for example, ensuring that a representative sample of the pulsar population is used during the training process, or using different artificial neural networks for the detection of different types of pulsars.Comment: 15 pages, 8 figures
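    The trade-off between candidate rejection and blind detection comes down to thresholding the network's per-candidate scores. The scores and threshold below are invented for illustration, not the survey's actual outputs:

```python
# Hypothetical network scores in [0, 1]: higher means more pulsar-like.
pulsar_scores = [0.9, 0.8, 0.95, 0.4, 0.85]   # candidates that are known pulsars
noise_scores = [0.1, 0.2, 0.05, 0.3, 0.6]     # non-pulsar candidates (RFI, noise)

threshold = 0.5
# Fraction of real pulsars kept above the threshold (blind detection rate).
recall = sum(s > threshold for s in pulsar_scores) / len(pulsar_scores)
# Fraction of non-pulsar candidates discarded (rejection rate).
rejection = sum(s <= threshold for s in noise_scores) / len(noise_scores)
```

Raising the threshold rejects more junk candidates but also drops more weak or unusual pulsars, which is why a training set representative of the whole pulsar population matters.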

    Determination of the CMSSM Parameters using Neural Networks

    In most (weakly interacting) extensions of the Standard Model the relation mapping the parameter values onto experimentally measurable quantities can be computed (with some uncertainties), but the inverse relation is usually not known. In this paper we demonstrate the ability of artificial neural networks to find this unknown relation, by determining the unknown parameters of the constrained minimal supersymmetric extension of the Standard Model (CMSSM) from quantities that can be measured at the LHC. We expect that the method also works for many other new physics models. We compare its performance with the results of a straightforward \chi^2 minimization. We simulate LHC signals at a center of mass energy of 14 TeV at the hadron level. In this proof-of-concept study we do not explicitly simulate Standard Model backgrounds, but apply cuts that have been shown to enhance the signal-to-background ratio. We analyze four different benchmark points that lie just beyond current lower limits on superparticle masses, each of which leads to around 1000 events after cuts for an integrated luminosity of 10 fb^{-1}. We use up to 84 observables, most of which are counting observables; we do not attempt to directly reconstruct (differences of) masses from kinematic edges or kinks of distributions. We nevertheless find that m_0 and m_{1/2} can be determined reliably, with errors as small as 1% in some cases. With 500 fb^{-1} of data tan\beta as well as A_0 can also be determined quite accurately. For comparable computational effort the \chi^2 minimization yielded much worse results.Comment: 46 pages, 10 figures, 4 tables; added short paragraph in Section 5 about the goodness of the fit, version to appear in Phys. Rev.
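    The baseline \chi^2 minimization can be sketched with a toy forward model. The model, parameter names, grid, and error bars below are all illustrative assumptions, not the CMSSM likelihood:

```python
import numpy as np

# Toy forward map from two parameters (stand-ins for m_0, m_1/2)
# to three "observables"; the real map runs spectrum calculators
# and event simulation and has no known closed-form inverse.
def forward(m0, m12):
    return np.array([m0 + 2.0 * m12, m0 * m12, m12 - 0.5 * m0])

true = (300.0, 500.0)
obs = forward(*true)                      # pretend these were measured
sigma = np.abs(obs) * 0.01 + 1.0          # assumed measurement errors

# Brute-force chi^2 scan over a parameter grid; a neural network
# learns the inverse map instead of scanning like this.
grid = np.linspace(100, 700, 61)
best, best_chi2 = None, np.inf
for m0 in grid:
    for m12 in grid:
        chi2 = float((((forward(m0, m12) - obs) / sigma) ** 2).sum())
        if chi2 < best_chi2:
            best, best_chi2 = (m0, m12), chi2
```

The scan cost grows exponentially with the number of parameters, which is one reason a trained network can win at comparable computational effort.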

    Modeling Financial Time Series with Artificial Neural Networks

    Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regressive models have been developed over the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis. Supported by CELEST, a National Science Foundation Science of Learning Center (SBE-0354378), and the SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001).
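    A common first step in feeding a financial series to a feed-forward network is to build sliding-window (autoregressive) input-target pairs. The series values and lag below are invented for illustration:

```python
# Toy price series; a real application would use returns or
# normalized prices rather than raw values.
series = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0, 1.4]

def windows(xs, lag):
    """Return (previous `lag` values, next value) training pairs."""
    return [(xs[i:i + lag], xs[i + lag]) for i in range(len(xs) - lag)]

pairs = windows(series, 3)
# pairs[0] is ([1.0, 1.2, 0.9], 1.1): three past values predict the next.
```

The choice of lag encodes an assumption about how far back the dynamics reach, which is one place where the non-stationarity the review discusses bites: a window length tuned on one regime may fail on the next.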