
    Artificial neural networks for selection of pulsar candidates from the radio continuum surveys

    Pulsar searches with time-domain observations are very computationally expensive, and the data volume will be enormous with next-generation telescopes such as the Square Kilometre Array. We apply artificial neural networks (ANNs), a machine learning method, to the efficient selection of pulsar candidates from radio continuum surveys, which are much cheaper than time-domain observations. With observed quantities such as radio fluxes, sky position and compactness as inputs, our ANNs output a "score" that indicates how likely an object is to be a pulsar. We build ANNs from existing survey data from the TIFR GMRT Sky Survey (TGSS) and the NRAO VLA Sky Survey (NVSS) and test their performance. Precision, the ratio of the number of pulsars correctly classified as pulsars to the number of all objects classified as pulsars, is about 96%. Finally, we apply the trained ANNs to unidentified radio sources: our fiducial ANN with five inputs (the galactic longitude and latitude, the TGSS and NVSS fluxes, and compactness) yields 2,436 pulsar candidates from 456,866 unidentified radio sources. Time-domain observations are needed to confirm whether these candidates are truly pulsars. Additional information such as polarization will narrow the candidates down further. Comment: 11 pages, 13 figures, 3 tables; accepted for publication in MNRAS
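    To make the candidate-scoring setup concrete, here is a minimal sketch (not the paper's trained network; the weights, inputs and architecture are hypothetical) of a one-hidden-layer scorer over five input features, together with the precision metric as defined in the abstract:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ann_score(x, W1, b1, W2, b2):
        """Forward pass of a one-hidden-layer network; the sigmoid output is the 'score'."""
        h = np.tanh(x @ W1 + b1)
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    # Hypothetical five inputs per source: gal. longitude, gal. latitude,
    # TGSS flux, NVSS flux, compactness (random placeholders here).
    W1 = rng.normal(size=(5, 8)); b1 = np.zeros(8)
    W2 = rng.normal(size=(8,));   b2 = 0.0
    x = rng.normal(size=(3, 5))            # three example sources
    scores = ann_score(x, W1, b1, W2, b2)  # each score lies in (0, 1)

    def precision(y_true, y_pred):
        """Fraction of objects classified as pulsars that really are pulsars."""
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        return tp / (tp + fp)
    ```

    A score threshold (e.g. 0.5) turns the continuous outputs into candidate/non-candidate labels, after which precision can be evaluated on a labelled test set.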

    Noise tailoring, noise annealing and external noise injection strategies in memristive Hopfield neural networks

    The commercial introduction of a novel electronic device is often preceded by a lengthy material optimization phase devoted to suppressing device noise as much as possible. The emergence of novel computing architectures, however, triggers a paradigm change in noise engineering, demonstrating that non-suppressed but properly tailored noise can be harvested as a computational resource in probabilistic computing schemes. Such a strategy was recently realized at the hardware level in memristive Hopfield neural networks, delivering fast and highly energy-efficient optimization performance. Inspired by these achievements, we perform a thorough analysis of simulated memristive Hopfield neural networks relying on realistic noise characteristics acquired on various memristive devices. These characteristics highlight the possibility of orders-of-magnitude variations in the noise level depending on the material choice as well as on the resistance state (and the corresponding active region volume) of the devices. Our simulations separate the effects of various device non-idealities on the operation of the Hopfield neural network by investigating the roles of the programming accuracy as well as the noise type and noise amplitude of the ON and OFF states. Relying on these results, we propose optimized noise tailoring, noise annealing, and external noise injection strategies. Comment: 13 pages, 7 figures
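    The idea of noise annealing in a Hopfield network can be illustrated with a minimal simulation sketch (a generic software Hopfield model with injected Gaussian noise and a simple linear annealing schedule, both assumptions, not the paper's device-level model):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def hopfield_anneal(W, s, steps=200, sigma0=1.0):
        """Asynchronous Hopfield updates with Gaussian noise injected into the
        local field; the noise amplitude is annealed linearly to zero."""
        n = len(s)
        for t in range(steps):
            sigma = sigma0 * (1 - t / steps)       # hypothetical annealing schedule
            i = rng.integers(n)                    # pick one neuron at random
            field = W[i] @ s + sigma * rng.normal()
            s[i] = 1 if field >= 0 else -1
        return s

    def energy(W, s):
        return -0.5 * s @ W @ s

    # Store one pattern via the Hebb rule and start from a corrupted copy.
    p = np.array([1, -1, 1, 1, -1, 1, -1, -1])
    W = np.outer(p, p).astype(float); np.fill_diagonal(W, 0)
    s = p.copy(); s[:2] *= -1                      # flip two bits
    s = hopfield_anneal(W, s)
    ```

    Early on, the large noise lets the state escape poor local minima; as the amplitude anneals away, the dynamics reduce to deterministic energy descent.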

    Metaheuristic Algorithms for Convolution Neural Network

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have solved optimization problems across science, engineering, and industry. However, implementation strategies of metaheuristics for improving the accuracy of convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning is a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task a human can carry out. In this paper, we propose implementation strategies for three popular metaheuristic approaches, namely simulated annealing, differential evolution, and harmony search, to optimize CNN. The performance of these metaheuristic methods in optimizing CNN on classifying the MNIST and CIFAR datasets was evaluated and compared. Furthermore, the proposed methods were also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent). Comment: Article ID 1537325, 13 pages. Received 29 January 2016; Revised 15 April 2016; Accepted 10 May 2016. Academic Editor: Martin Hagan. Hindawi Publishing, Computational Intelligence and Neuroscience, Volume 2016 (2016)
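    Of the three metaheuristics named, simulated annealing is the simplest to sketch. The following is a generic SA loop over a parameter vector with a toy surrogate loss (the CNN weights, loss surface, and cooling schedule in the paper are not reproduced here; everything below is an illustrative stand-in):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulated_annealing(loss, x0, steps=2000, t0=1.0, t_end=1e-3, step=0.1):
        """Generic SA: accept worse moves with probability exp(-delta/T),
        with the temperature T decaying geometrically from t0 to t_end."""
        x, fx = x0.copy(), loss(x0)
        best_x, best_f = x.copy(), fx
        decay = (t_end / t0) ** (1 / steps)
        T = t0
        for _ in range(steps):
            cand = x + step * rng.normal(size=x.shape)   # random perturbation
            fc = loss(cand)
            if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
            T *= decay
        return best_x, best_f

    # Toy quadratic loss with known minimum at (1, -2), standing in for a
    # network's training error as a function of its weight vector.
    loss = lambda w: np.sum((w - np.array([1.0, -2.0])) ** 2)
    w_best, f_best = simulated_annealing(loss, np.zeros(2))
    ```

    Applied to a CNN, `loss` would evaluate the network's classification error for a candidate weight setting, which is why such methods trade extra computation time for accuracy gains.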

    Artificial Neural Networks in Stock Return Prediction: Testing Model Specification in a Global Context

    This research investigates whether artificial neural networks that use firm-specific fundamental and technical factors can accurately predict the returns of a sample of large-cap stocks from various markets across the globe. This study also explores which hidden-layer configuration leads to the best network predictive performance. Furthermore, this research identifies which firm-specific factors predominantly influence the predictions made by the artificial neural networks. Five artificial neural networks are designed, trained and tested on a sample of 161 stocks from the Russell 1000 and the S&P International 700 stock indices. The investigation period extends over 166 months from January 2001 to October 2014, with a 70:30 split between the training and testing subsamples. Eighteen firm-specific factors, based on prior research on style effects and anomalies in the cross-section of global equity returns, are used as the input variables of the artificial neural networks to forecast one-month-forward returns of all the stocks in the sample. The five artificial neural networks differ in hidden-layer size: the numbers of hidden neurons examined were 3, 9, 13, 18 and 30. All five networks train well, with each network's training error indicating a good model fit. Each network also achieves the desirable information coefficient of 0.1 between its predicted returns and the actual returns in the training sample. Interestingly, network performance generally improves as the number of hidden neurons increases up to a point, after which it weakens. In the context of avoiding overfitting, the best-trained network in this research is the one with 13 neurons in its hidden layer; this is the primary network used for the out-of-sample testing analysis.
This network achieves an average prediction error magnitude of approximately 7% and an information coefficient of 0.05 during out-of-sample testing, moderately underperforming the respective benchmarks. Further analyses of the network's performance suggest an overall poor out-of-sample predictive ability, illustrated by a significant bias and a considerably weak relationship between the network's predicted returns and the actual returns in the testing sample. Global sensitivity analysis reveals that growth style effects, particularly the capital expenditure ratio, return on equity, sales growth, the 12-month percentage change in non-current assets and the six-month percentage change in asset turnover, were the most persistent factors across all the ANN models. Other significant factors include the 12-month percentage change in monthly volume traded, the three-month cumulative prior return and the one-month prior return. An unconventional result of this analysis is the relative insignificance of the size and value style effects.
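    The information coefficient used to judge the networks above can be sketched as follows. A common convention (assumed here; the study's exact definition may differ) is the Spearman rank correlation between predicted and realised one-month-forward returns:

    ```python
    import numpy as np

    def information_coefficient(pred, actual):
        """IC as the Spearman rank correlation between predicted and realised
        returns: Pearson correlation of the ranks (assumes distinct values)."""
        rp = np.argsort(np.argsort(pred)).astype(float)    # ranks 0..n-1
        ra = np.argsort(np.argsort(actual)).astype(float)
        rp -= rp.mean(); ra -= ra.mean()
        return float(rp @ ra / np.sqrt((rp @ rp) * (ra @ ra)))

    # Hypothetical predicted vs. realised monthly returns for five stocks
    pred   = np.array([0.02, -0.01, 0.05, 0.00, -0.03])
    actual = np.array([0.01, -0.02, 0.04, -0.01, 0.00])
    ic = information_coefficient(pred, actual)   # 0.6 for this toy data
    ```

    An IC of 0.1, as reached in training, is conventionally regarded as good for monthly stock selection, which is why the out-of-sample value of 0.05 reads as a weak result.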

    Establishing the relationship between cortical atrophy and semantic deficits in Alzheimer's disease and Mild Cognitive Impairment patients through Voxel-Based Morphometry

    The aim of this study was to determine the brain areas responsible for the semantic impairment observed in Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) patients. Thirteen AD patients, 14 MCI patients, and 13 matched healthy older adults were assessed with a test battery designed to study their semantic competence. Different subtasks were designed to probe their semantic knowledge of objects and faces in the context of semantic retrieval- and semantic association-dependent tasks. Aggregate scores obtained in the different tests were entered into voxel-based regression analyses with grey matter volume values obtained from three-dimensional brain MRI scans. Areas of significant correlation between volume loss and poor semantic scores were restricted to the temporal lobe in the AD group, while in the MCI and control groups significant associations were found with lower grey matter volume values in a widely distributed network of bilateral fronto-temporo-parietal regions. Our results suggest that degradation of partially overlapping and widely distributed neural networks, mainly including temporal regions, subserves semantic deficits related to objects and faces in AD and MCI patients.
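    The voxel-based regression step can be sketched in its simplest mass-univariate form, correlating each voxel's grey matter volume with a behavioural score across subjects (synthetic data and a plain Pearson correlation here, not the study's VBM pipeline, which would also include smoothing, covariates and cluster-level inference):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def voxelwise_correlation(gm, scores):
        """Pearson correlation between each voxel's grey matter volume and the
        semantic score, computed across subjects (mass-univariate VBM sketch)."""
        gm_c = gm - gm.mean(axis=0)            # centre each voxel across subjects
        sc_c = scores - scores.mean()
        num = gm_c.T @ sc_c
        den = np.sqrt((gm_c ** 2).sum(axis=0)) * np.sqrt((sc_c ** 2).sum())
        return num / den

    # Synthetic example: 13 subjects, 1000 voxels; voxel 0 drives the score.
    n_subj, n_vox = 13, 1000
    gm = rng.normal(size=(n_subj, n_vox))
    scores = 2.0 * gm[:, 0] + rng.normal(scale=0.1, size=n_subj)
    r = voxelwise_correlation(gm, scores)      # one correlation per voxel
    ```

    Voxels whose correlation survives a multiple-comparison threshold would then be reported as regions where volume loss tracks the semantic deficit.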

    Large Scale Functional Connectivity Networks of Resting State Magnetoencephalography

    Understanding relationships between cortical neural activity is an important area of research. Investigations of the neural dynamics associated with healthy and disordered brains could lead to new insights about disease models. Functional connectivity is a promising method for investigating these neural dynamics by observing intrinsic neural activity arising during spontaneous cortical activations recorded via magnetoencephalography (MEG). MEG is a non-invasive measure of the magnetic fields produced during neural activity and provides information regarding neural synchrony within the brain. Phase locking is a time-frequency analysis method that provides frequency-band-specific measures of neural communication. Leveraging multiple computers operating in a cluster extends the scale of these investigations to whole-brain functional connectivity. Quantification of these large-scale networks would allow for the characterization of healthy connectivity in a mathematically rigorous manner. However, the volume of data required to characterize these networks creates a multiple comparison problem (MCP) in which upward of 33 million simultaneous hypotheses are tested. Conservative approaches such as Bonferroni can eliminate most of the results, while more liberal methods may under-correct, leading to an inflated type I error rate. Here we used a combination of functionally defined cortical-surface clustering methods followed by a non-parametric permutation-testing paradigm to control the family-wise error rate and provide robust statistical networks. These methods were validated with simulation studies to characterize limitations in inferences from the resultant whole-brain networks. We then examined healthy subjects' resting-state MEG recordings to characterize intrinsic network activity across four physiological frequency bands: theta (4-8 Hz), alpha (8-13 Hz), beta-low (13-20 Hz) and beta-high (20-30 Hz).
Quantifying large-scale functional connectivity networks allowed for the investigation of healthy electrophysiological networks within specific frequency bands. Understanding these intrinsic network connections would allow for a better understanding of the electrophysiological processes underlying brain function. Quantification of these networks would also allow future studies to explore the ability of network aberrations to predict disordered brain states.
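    The family-wise error control described above can be sketched with a max-statistic permutation test. The version below sign-flips paired differences and compares each connection's observed t-statistic against the permutation distribution of the maximum |t| (a standard construction; the data, group sizes and permutation count are illustrative, not the thesis's actual pipeline):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def maxstat_permutation_pvals(a, b, n_perm=2000):
        """FWER-corrected p-values across many connections via the max-statistic
        permutation method (sign-flipping of paired differences)."""
        d = a - b                                   # subjects x connections
        n = len(d)
        t_obs = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n))
        max_null = np.empty(n_perm)
        for i in range(n_perm):
            signs = rng.choice([-1.0, 1.0], size=(n, 1))   # random sign flips
            dp = d * signs
            t = dp.mean(axis=0) / (dp.std(axis=0, ddof=1) / np.sqrt(n))
            max_null[i] = np.abs(t).max()           # max over all connections
        # Corrected p: fraction of permutations whose max |t| reaches |t_obs|
        return (np.abs(t_obs)[None, :] <= max_null[:, None]).mean(axis=0)

    # Synthetic example: 12 subjects, 50 connections, one real effect.
    a = rng.normal(size=(12, 50))
    b = rng.normal(size=(12, 50))
    a[:, 0] += 3.0                                  # connection 0 truly differs
    p = maxstat_permutation_pvals(a, b)
    ```

    Because each permutation keeps only the maximum statistic over all connections, a single threshold controls the family-wise error rate without the severity of a Bonferroni correction over millions of tests.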