
    InfoMax Bayesian learning of the Furuta pendulum

    We have studied InfoMax (D-optimality) learning for the two-link Furuta pendulum and compared it with random learning. InfoMax won by a large margin: it visited a larger domain and provided a better approximation within the same time interval. The advantages and the limitations of the InfoMax solution are also discussed.
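    As a rough illustration of the D-optimality (InfoMax) criterion named above, the Python sketch below greedily selects the next query input that maximises the determinant of the information matrix of a linear-in-features model. The feature map and candidate pool are hypothetical; this is not the authors' Furuta-pendulum implementation, only the selection rule that InfoMax-style exploration builds on.

        import numpy as np

        def phi(x):
            # Hypothetical polynomial feature map for a scalar input.
            return np.array([1.0, x, x**2, x**3])

        def next_d_optimal_input(candidates, X_phi, ridge=1e-6):
            """Pick the candidate that maximises det of the information matrix."""
            M = X_phi.T @ X_phi + ridge * np.eye(X_phi.shape[1])
            best_x, best_logdet = None, -np.inf
            for x in candidates:
                f = phi(x)
                _, logdet = np.linalg.slogdet(M + np.outer(f, f))
                if logdet > best_logdet:
                    best_x, best_logdet = x, logdet
            return best_x

        # Usage: grow a 10-point D-optimal design from a candidate grid.
        candidates = np.linspace(-2.0, 2.0, 201)
        X_phi = np.empty((0, 4))
        for _ in range(10):
            x_star = next_d_optimal_input(candidates, X_phi)
            X_phi = np.vstack([X_phi, phi(x_star)])
        print("selected inputs:", np.round(X_phi[:, 1], 2))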

    Closed-form approximations of first-passage distributions for a stochastic decision making model

    In free-response choice tasks, decision making is often modeled as a first-passage problem for a stochastic differential equation. In particular, drift-diffusion processes with constant or time-varying drift rates and noise can reproduce behavioral data (accuracy and response-time distributions) and neuronal firing rates. However, no exact solutions are known for the first-passage problem with time-varying drift. Recognizing the importance of simple closed-form expressions for modeling and inference, we show that an interrogation or cued-response protocol, appropriately interpreted, can yield approximate first-passage (response-time) distributions for a specific class of time-varying processes used to model evidence accumulation. We test these against exact expressions for the constant-drift case and against data for a class of sigmoidal drift functions. We find that both the direct interrogation approximation and an error-minimizing interrogation approximation can capture a variety of distribution shapes and mode numbers, but that the direct approximation, in particular, is systematically biased away from the correct free-response distribution.
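    As a point of reference for the kind of response-time distributions being approximated, the sketch below (Python, with assumed parameters) estimates the first-passage distribution of a drift-diffusion process with a sigmoidal time-varying drift by brute-force Monte-Carlo simulation; it is not the closed-form interrogation approximation derived in the paper.

        import numpy as np

        def simulate_rts(n_trials=20000, dt=1e-3, t_max=3.0,
                         a=1.0, z=0.5, sigma=1.0, seed=0):
            """Response times/choices for dx = mu(t) dt + sigma dW, start at z,
            absorbed at 0 (lower) or a (upper)."""
            rng = np.random.default_rng(seed)
            mu = lambda t: 2.0 / (1.0 + np.exp(-5.0 * (t - 0.5)))  # illustrative sigmoidal drift
            x = np.full(n_trials, z)
            rts = np.full(n_trials, np.nan)
            choice = np.zeros(n_trials, dtype=int)
            active = np.ones(n_trials, dtype=bool)
            for i in range(int(t_max / dt)):
                t = i * dt
                x[active] += mu(t) * dt + sigma * np.sqrt(dt) * rng.standard_normal(active.sum())
                hit_up, hit_lo = active & (x >= a), active & (x <= 0.0)
                rts[hit_up | hit_lo] = t + dt
                choice[hit_up] = 1
                active &= ~(hit_up | hit_lo)
                if not active.any():
                    break
            return rts, choice

        rts, choice = simulate_rts()
        print("mean RT to the upper boundary:", np.nanmean(rts[choice == 1]))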

    Application and Challenges of Signal Processing Techniques for Lamb Waves Structural Integrity Evaluation: Part B-Defects Imaging and Recognition Techniques

    Lamb waves propagate as guided wavefields in plate-like structures, and many defect imaging techniques and intelligent recognition algorithms have been developed for defect location, sizing, and recognition by analyzing parameters of the received Lamb-wave signals, including arrival time, attenuation, amplitude, and phase. In this chapter, we give a brief review of these defect imaging techniques and intelligent recognition algorithms. Considering the available parameters of Lamb-wave signals and the configuration of detection/monitoring systems, we roughly divide defect location and sizing techniques into four categories, namely sparse array imaging, tomography, compact array, and full wavefield imaging techniques, and introduce the principle of each. We also review intelligent recognition techniques that have been widely applied to Lamb-wave signals for defect recognition, including the support vector machine, Bayesian methods, and neural networks.
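    For concreteness, the sketch below (Python, with illustrative geometry and signals) shows a minimal delay-and-sum image formation step, one common realisation of the sparse array imaging category reviewed above; practical pipelines add baseline subtraction, envelope detection and dispersion compensation, which are omitted here.

        import numpy as np

        def delay_and_sum(signals, pairs, sensors, grid_x, grid_y, c, fs):
            """signals[k, n]: residual signal of transmit-receive pair k at sample n.
            pairs[k] = (i_tx, i_rx) index rows of sensors (x, y positions in m).
            c: group velocity (m/s), fs: sampling rate (Hz)."""
            img = np.zeros((grid_y.size, grid_x.size))
            for k, (i_tx, i_rx) in enumerate(pairs):
                tx, rx = sensors[i_tx], sensors[i_rx]
                for iy, y in enumerate(grid_y):
                    for ix, x in enumerate(grid_x):
                        p = np.array([x, y])
                        tof = (np.linalg.norm(p - tx) + np.linalg.norm(p - rx)) / c
                        n = int(round(tof * fs))
                        if n < signals.shape[1]:
                            img[iy, ix] += abs(signals[k, n])
            return img

        # Tiny illustrative usage with random "residual" signals.
        sensors = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])
        pairs = [(0, 1), (0, 2), (1, 3), (2, 3)]
        signals = np.random.default_rng(0).standard_normal((len(pairs), 2000))
        img = delay_and_sum(signals, pairs, sensors,
                            np.linspace(0, 0.5, 40), np.linspace(0, 0.5, 40),
                            c=3000.0, fs=1e6)
        print("image shape:", img.shape)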

    Machine learning for flow field measurements: a perspective

    Advancements in machine-learning (ML) techniques are driving a paradigm shift in image processing, and flow diagnostics with optical techniques is no exception. Considering the existing and foreseeable disruptive developments in flow field measurement techniques, we elaborate this perspective with a particular focus on particle image velocimetry. The driving forces behind the advancements in ML methods for flow field measurements in recent years are reviewed in terms of image preprocessing, data treatment, and conditioning. Finally, possible routes for further developments are highlighted. Stefano Discetti acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 949085). Yingzheng Liu acknowledges financial support from the National Natural Science Foundation of China (11725209).

    Development of Integrated Machine Learning and Data Science Approaches for the Prediction of Cancer Mutation and Autonomous Drug Discovery of Anti-Cancer Therapeutic Agents

    Few technological ideas have captivated the minds of biochemical researchers to the degree that machine learning (ML) and artificial intelligence (AI) have. Over the last few years, advances in the ML field have driven the design of new computational systems that improve with experience and are able to model increasingly complex chemical and biological phenomena. In this dissertation, we capitalize on these achievements and use machine learning to study drug receptor sites and design drugs that target them. First, we analyze the significance of various single-nucleotide variations and assess their rate of contribution to cancer. We then use a portfolio of machine learning and data science approaches to design new protein kinase inhibitors. We show that these techniques exhibit strong promise in aiding cancer research and drug discovery.

    Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity

    Dynamics and function of neuronal networks are determined by their synaptic connectivity. Current experimental methods to analyze synaptic network structure on the cellular level, however, cover only small fractions of functional neuronal circuits, typically without a simultaneous record of neuronal spiking activity. Here we present a method for the reconstruction of large recurrent neuronal networks from thousands of parallel spike train recordings. We employ maximum likelihood estimation of a generalized linear model of the spiking activity in continuous time. For this model the point process likelihood is concave, such that a global optimum of the parameters can be obtained by gradient ascent. Previous methods, including those of the same class, did not allow recurrent networks of that order of magnitude to be reconstructed due to prohibitive computational cost and numerical instabilities. We describe a minimal model that is optimized for large networks and an efficient scheme for its parallelized numerical optimization on generic computing clusters. For a simulated balanced random network of 1000 neurons, synaptic connectivity is recovered with a misclassification error rate of less than 1% under ideal conditions. We show that the error rate remains low in a series of example cases under progressively less ideal conditions. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings. Comment: This is the final version of the manuscript from the publisher, which supersedes our original pre-print version. The spike data used in this paper and the code that implements our connectivity reconstruction method are publicly available for download at http://dx.doi.org/10.5281/zenodo.17662 and http://dx.doi.org/10.5281/zenodo.17663, respectively.
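    To make the estimation idea concrete, the sketch below fits a discrete-time Poisson GLM for a single target neuron by gradient ascent on its (concave) log-likelihood, using synthetic spike trains. The paper's method works with a continuous-time point-process likelihood and a parallelised solver for thousands of neurons; this simplified single-neuron version only illustrates why concavity makes plain gradient ascent sufficient.

        import numpy as np

        rng = np.random.default_rng(1)
        T, N, dt = 20000, 50, 1e-3                       # time bins, presynaptic neurons, bin width (s)
        X = (rng.random((T, N)) < 0.02).astype(float)    # presynaptic spike trains (0/1)
        w_true = rng.normal(0.0, 1.0, N)                 # "synaptic" weights to recover
        b_true = np.log(5.0 * dt)                        # baseline rate of about 5 Hz
        y = rng.poisson(np.exp(X @ w_true + b_true))     # target neuron spike counts

        def fit_poisson_glm(X, y, lr=5e-3, n_iter=2000):
            w, b = np.zeros(X.shape[1]), np.log(max(y.mean(), 1e-6))
            for _ in range(n_iter):
                lam = np.exp(X @ w + b)                  # conditional intensity per bin
                w += lr * (X.T @ (y - lam))              # gradient of the Poisson log-likelihood
                b += lr * np.sum(y - lam)
            return w, b

        w_hat, _ = fit_poisson_glm(X, y)
        print("correlation with true weights:", np.corrcoef(w_hat, w_true)[0, 1])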

    Dynamical structure in neural population activity

    The question of how the collective activity of neural populations in the brain gives rise to complex behaviour is fundamental to neuroscience. At the core of this question lie considerations about how neural circuits can perform computations that enable sensory perception, motor control, and decision making. It is thought that such computations are implemented by the dynamical evolution of distributed activity in recurrent circuits. Thus, identifying and interpreting dynamical structure in neural population activity is a key challenge towards a better understanding of neural computation. In this thesis, I make several contributions in addressing this challenge. First, I develop two novel methods for neural data analysis. Both methods aim to extract trajectories of low-dimensional computational state variables directly from the unbinned spike-times of simultaneously recorded neurons on single trials. The first method separates inter-trial variability in the low-dimensional trajectory from variability in the timing of progression along its path, and thus offers a quantification of inter-trial variability in the underlying computational process. The second method simultaneously learns a low-dimensional portrait of the underlying nonlinear dynamics of the circuit, as well as the system's fixed points and locally linearised dynamics around them. This approach facilitates extracting interpretable low-dimensional hypotheses about computation directly from data. Second, I turn to the question of how low-dimensional dynamical structure may be embedded within a high-dimensional neurobiological circuit with excitatory and inhibitory cell-types. I analyse how such circuit-level features shape population activity, with particular focus on responses to targeted optogenetic perturbations of the circuit. Third, I consider the problem of implementing multiple computations in a single dynamical system. I address this in the framework of multi-task learning in recurrently connected networks and demonstrate that a careful organisation of low-dimensional, activity-defined subspaces within the network can help to avoid interference across tasks.

    Diagnosis of Arrhythmia Using Neural Networks

    This dissertation presents an intelligent framework for the classification of heart arrhythmias. The framework cascades the discrete wavelet transform and the Fourier transform as preprocessing stages for a neural network. It exploits the information about heart activity contained in the ECG signal, the power of the wavelet and Fourier transforms in characterizing the signal, and the learning power of neural networks. First, the ECG signals are decomposed with a four-level discrete wavelet transform using a filter bank and the mother wavelet 'db2'. All detail coefficients are then discarded, retaining only the approximation coefficients at the fourth level. The retained approximation coefficients are Fourier transformed using a 16-point FFT. Because the FFT of a real signal is symmetric, only the first 8 points are sufficient to characterize the spectrum; the last 8 points resulting from the FFT are discarded during feature selection. The 8-point feature vector is then used to train a feedforward neural network with one hidden layer of 20 units and three outputs. The network is trained with the scaled conjugate gradient backpropagation algorithm (SCG), implemented in a MATLAB environment using the Neural Network Toolbox GUI. This approach yields an accuracy of 94.66% over three arrhythmia classes, namely ventricular flutter (VFL), ventricular tachycardia (VT), and supraventricular tachyarrhythmia (SVTA). We conclude that, for the amount of information retained and the number of features used, the performance is fairly competitive.
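    A compact Python re-sketch of the described feature pipeline is given below for illustration. The original system was built in MATLAB with the Neural Network Toolbox and SCG training; here pywt provides the 'db2' wavelet decomposition and scikit-learn's MLPClassifier (with a generic solver, not SCG) stands in for the network, and the ECG segments and labels are assumed to be available.

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def ecg_features(segment):
            """4-level 'db2' DWT -> keep level-4 approximation -> 16-point FFT ->
            magnitudes of the first 8 (non-redundant) bins."""
            cA4 = pywt.wavedec(segment, 'db2', level=4)[0]   # approximation coefficients only
            spectrum = np.fft.fft(cA4, n=16)                 # 16-point FFT
            return np.abs(spectrum[:8])                      # real input: first 8 bins suffice

        # Hypothetical usage: `segments` is (n_beats, n_samples); `labels` holds
        # the VFL / VT / SVTA class labels. Neither is defined here.
        # X = np.array([ecg_features(s) for s in segments])
        # clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, labels)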

    Multi-modal and multi-model interrogation of large-scale functional brain networks

    Existing whole-brain models are generally tailored to the modelling of a particular data modality (e.g., fMRI or MEG/EEG). We propose that, despite the differing aspects of neural activity each modality captures, they originate from shared network dynamics. Building on the universal principles of self-organising delay-coupled nonlinear systems, we aim to link distinct features of brain activity, captured across modalities, to the dynamics unfolding on a macroscopic structural connectome. To jointly predict connectivity, spatiotemporal and transient features of distinct signal modalities, we consider two large-scale models, the Stuart-Landau (SL) and Wilson-Cowan (WC) models, which generate short-lived 40 Hz oscillations with varying levels of realism. To this end, we measure features of functional connectivity and metastable oscillatory modes (MOMs) in fMRI and MEG signals and compare them against simulated data. We show that both models can represent MEG functional connectivity (FC), functional connectivity dynamics (FCD) and generate MOMs to a comparable degree. This is achieved by adjusting the global coupling and mean conduction time delay and, in the WC model, through the inclusion of a balance between excitation and inhibition. For both models, omitting the delays dramatically decreased performance. For fMRI, the SL model performed worse for FCD and MOMs, highlighting the importance of balanced dynamics for the emergence of spatiotemporal and transient patterns of ultra-slow dynamics. Notably, optimal working points varied across modalities, and no model was able to achieve a correlation with empirical FC higher than 0.4 across modalities for the same set of parameters. Nonetheless, both displayed the emergence of FC patterns that extend beyond the constraints of the anatomical structure. Finally, we show that both models can generate MOMs with empirical-like properties such as size (the number of brain regions engaging in a mode) and duration (the continuous time interval during which a mode appears). Our results demonstrate the emergence of static and dynamic properties of neural activity at different timescales from networks of delay-coupled oscillators at 40 Hz. Given the higher dependence of simulated FC on the underlying structural connectivity, we suggest that mesoscale heterogeneities in neural circuitry may be critical for the emergence of parallel cross-modal functional networks and should be accounted for in future modelling endeavours.
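    To give a flavour of the model class, the sketch below simulates a small network of delay-coupled Stuart-Landau oscillators with a 40 Hz natural frequency using simple Euler integration. The connectome, conduction delays, and parameters are random and illustrative; the models, parameter optimisation and feature extraction (FC, FCD, MOMs) used in the study are substantially more involved.

        import numpy as np

        rng = np.random.default_rng(2)
        N, fs, t_max = 20, 10000.0, 2.0                  # nodes, integration rate (Hz), seconds
        dt = 1.0 / fs
        C = rng.random((N, N)); np.fill_diagonal(C, 0.0)
        C /= C.sum(axis=1, keepdims=True)                # illustrative "structural" coupling
        delay_steps = rng.integers(50, 200, size=(N, N)) # 5-20 ms conduction delays

        omega = 2.0 * np.pi * 40.0                       # 40 Hz natural frequency
        a, K = -5.0, 10.0                                # bifurcation parameter, global coupling
        n_steps, max_d = int(t_max * fs), 200
        z = np.zeros((n_steps + max_d, N), dtype=complex)
        z[:max_d] = 0.1 * (rng.standard_normal((max_d, N)) + 1j * rng.standard_normal((max_d, N)))
        cols = np.arange(N)

        for t in range(max_d, n_steps + max_d):
            zd = z[t - delay_steps, cols]                # zd[i, j] = z_j(t - tau_ij)
            coupling = K * (C * zd).sum(axis=1)
            zp = z[t - 1]
            z[t] = zp + dt * ((a + 1j * omega) * zp - np.abs(zp) ** 2 * zp + coupling)

        print("simulated node signals:", z[max_d:].real.shape)   # real part ~ regional activity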