
    Log-domain implementation of complex dynamics reaction-diffusion neural networks

    In this paper, we have identified a second-order reaction-diffusion differential equation able to reproduce, through parameter setting, different complex spatio-temporal behaviors. We have designed log-domain hardware that implements the spatially discretized version of the selected reaction-diffusion equation. The logarithmic compression of the state variables allows them to span several decades while the MOS transistors remain in subthreshold operation. Furthermore, as all the equation parameters are implemented as currents, they can be adjusted over several decades. As a demonstrator, we have designed a chip containing a linear array of ten coupled cells with second-order dynamics. Using this hardware, we have experimentally reproduced complex spatio-temporal phenomena: the propagation of travelling waves and of trigger waves, as well as isolated oscillatory cells.
    Funding: Gobierno de España TIC1999-0446-C02-02; Office of Naval Research (USA)
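
A rough sketch of the kind of system described (a spatially discretized chain of ten coupled second-order cells supporting travelling waves) is given below. The FitzHugh-Nagumo-style dynamics, parameter values, and boundary handling are illustrative assumptions, not the paper's circuit equations.

```python
import numpy as np

def simulate_rd_chain(n_cells=10, steps=20000, dt=0.01,
                      D=0.5, eps=0.08, a=0.7, b=0.8, I=0.5):
    """Euler integration of a FitzHugh-Nagumo-style reaction-diffusion
    chain: each cell has second-order (two-state) dynamics and diffusive
    coupling to its neighbours. Illustrative stand-in model only."""
    u = np.zeros(n_cells)   # fast (activator) state per cell
    v = np.zeros(n_cells)   # slow (recovery) state per cell
    u[0] = 2.0              # perturb one end to launch a travelling wave
    for _ in range(steps):
        # discrete 1-D Laplacian with zero-flux boundaries
        lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u
        lap[0] = u[1] - u[0]
        lap[-1] = u[-2] - u[-1]
        du = u - u**3 / 3.0 - v + I + D * lap
        dv = eps * (u + a - b * v)
        u, v = u + dt * du, v + dt * dv
    return u, v

u, v = simulate_rd_chain()   # final states of the ten cells
```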

    Artificial intelligence detects awareness of functional relation with the environment in 3 month old babies

    A recent experiment probed how purposeful action emerges in early life by manipulating infants’ functional connection to an object in the environment (i.e., tethering an infant’s foot to a colorful mobile). Vicon motion capture data from multiple infant joints were used here to create Histograms of Joint Displacements (HJDs) to generate pose-based descriptors for 3D infant spatial trajectories. Using HJDs as inputs, machine and deep learning systems were tasked with classifying the experimental state from which snippets of movement data were sampled. The architectures tested included k-Nearest Neighbour (kNN), Linear Discriminant Analysis (LDA), a fully connected network (FCNet), a 1D convolutional neural network (1D-Conv), a 1D capsule network (1D-CapsNet), 2D-Conv, and 2D-CapsNet. Sliding-window scenarios were used for temporal analysis to search for topological changes in infant movement related to functional context. kNN and LDA achieved higher classification accuracy with single-joint features, while deep learning approaches, particularly 2D-CapsNet, achieved higher accuracy on full-body features. For each AI architecture tested, measures of foot activity displayed the most distinct and coherent pattern alterations across different experimental stages (reflected in the highest classification accuracy rate), indicating that interaction with the world impacts infant behaviour most at the site of the organism–world connection.
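
A minimal sketch of the descriptor-plus-classifier pipeline is given below; the displacement binning and the kNN distance are illustrative assumptions, not the study's exact HJD construction.

```python
import numpy as np
from collections import Counter

def hjd_descriptor(traj, bins=8, r_max=1.0):
    """Simplified Histogram of Joint Displacements for one joint.
    traj: (T, 3) array of 3D positions over time. Returns a normalized
    histogram of frame-to-frame displacement magnitudes."""
    disp = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    hist, _ = np.histogram(disp, bins=bins, range=(0.0, r_max))
    return hist / max(hist.sum(), 1)

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-nearest-neighbour vote in descriptor space."""
    dists = np.linalg.norm(X_train - x, axis=1)
    votes = [y_train[i] for i in np.argsort(dists)[:k]]
    return Counter(votes).most_common(1)[0][0]

# toy usage: low-amplitude vs high-amplitude random-walk "joints"
rng = np.random.default_rng(0)
make = lambda s: hjd_descriptor(rng.normal(0.0, s, (50, 3)).cumsum(axis=0))
X_train = np.array([make(0.01) for _ in range(5)] + [make(0.1) for _ in range(5)])
y_train = [0] * 5 + [1] * 5
```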

    Modeling multiple object scenarios for feature recognition and classification using cellular neural networks

    Cellular neural networks (CNNs) have been adopted in the spatio-temporal processing research field as a paradigm of complexity, owing to the ease with which these networks can be designed for complex spatio-temporal tasks. This has led to an increase in the adoption of CNNs for on-chip VLSI implementations. This dissertation proposes the use of a cellular neural network to model, detect and classify objects appearing in multiple-object scenes. The algorithm proposed is based on image scene enhancement through anisotropic diffusion; object detection and extraction through binary edge detection and boundary tracing; and object classification through genetically optimised associative networks and texture histograms. The first classification method is based on optimizing the space-invariant feedback template of the zero-input network through genetic operators, while the second method is based on computing diffusion-filtered and modified histograms for object classes to generate decision boundaries that can be used to classify the objects. The primary goal is to design analogic algorithms that can be used to perform these tasks. While the use of genetically optimized associative networks for object learning yields an efficiency of over 95%, the use of texture histograms has also been found to be very accurate, although a better technique for histogram comparison still needs to be developed. The results obtained with these analogic algorithms affirm that CNNs are well suited for image processing tasks.

    A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications

    Auditory models are commonly used as feature extractors for automatic speech-recognition systems or as front-ends for robotics, machine-hearing and hearing-aid applications. Although auditory models can capture the biophysical and nonlinear properties of human hearing in great detail, such biophysical models are computationally expensive and cannot be used in real-time applications. We present a hybrid approach in which convolutional neural networks are combined with computational neuroscience to yield a real-time end-to-end model of human cochlear mechanics, including level-dependent filter tuning (CoNNear). The CoNNear model was trained on acoustic speech material, and its performance and applicability were evaluated using (unseen) sound stimuli commonly employed in cochlear mechanics research. The CoNNear model accurately simulates human cochlear frequency selectivity and its dependence on sound intensity, an essential quality for robust speech intelligibility at negative speech-to-background-noise ratios. The CoNNear architecture is based on parallel and differentiable computations and is able to achieve real-time human performance. These unique CoNNear features will enable the next generation of human-like machine-hearing applications.
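
As a shape-level illustration of the building block involved, the sketch below implements one strided 1-D convolution bank with a tanh nonlinearity; the layer sizes and kernels here are arbitrary assumptions, and the published CoNNear architecture is not reproduced.

```python
import numpy as np

def conv1d_bank(x, kernels, stride=2):
    """Strided 1-D convolution bank with tanh activation: each row of
    `kernels` is one filter slid along the signal x. Illustrative only."""
    n_filt, K = kernels.shape
    n_out = (len(x) - K) // stride + 1
    out = np.empty((n_filt, n_out))
    for f in range(n_filt):
        for i in range(n_out):
            out[f, i] = x[i * stride : i * stride + K] @ kernels[f]
    return np.tanh(out)

# toy usage: one audio frame through eight random filters
rng = np.random.default_rng(2)
frame = rng.standard_normal(2048)
kernels = 0.1 * rng.standard_normal((8, 16))
channels = conv1d_bank(frame, kernels, stride=2)
```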

    Pattern Formation in a RD-MCNN with Locally Active Memristors

    This chapter presents a mathematical investigation of the emergence of static patterns in a Reaction–Diffusion Memristor Cellular Nonlinear Network (RD-MCNN) structure via the application of the theory of local activity. The proposed RD-MCNN has a planar grid structure consisting of identical memristive cells with purely resistive couplings. The single cell has a compact design, being composed of a locally active memristor in parallel with a capacitor, besides the bias circuitry, namely a DC voltage source and its series resistor. We first introduce the mathematical model of the locally active memristor and then study the main characteristics of its AC equivalent circuit. We then perform a stability analysis to obtain the stability criteria for the single cell. Subsequently, we apply the theory of local activity to extract the parameter space associated with the locally active, edge-of-chaos, and sharp-edge-of-chaos domains, performing all the necessary calculations parametrically. The corresponding parameter-space domains are represented in terms of intrinsic cell characteristics such as the DC operating point, the capacitance, and the coupling resistance. Finally, we simulate the proposed RD-MCNN structure and demonstrate the emergence of pattern formation for various values of the design parameters.
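
Chua's local-activity test asks whether the cell's small-signal admittance Y(s) satisfies Re Y(jω) < 0 at some frequency, so that the one-port can supply small-signal power. A minimal numeric check is sketched below for a generic first-order admittance Y(s) = a + b/(s + c); the chapter's actual memristor model and coefficients are not reproduced here.

```python
import numpy as np

def locally_active(a, b, c, omegas):
    """Return True if Re Y(jw) < 0 at some tested frequency, for the
    generic first-order small-signal admittance Y(s) = a + b/(s + c)."""
    Y = a + b / (1j * omegas + c)
    return bool(np.any(Y.real < 0.0))

omegas = np.linspace(0.0, 10.0, 200)
# a negative low-frequency residue (b < 0) can make the cell locally active
active = locally_active(0.1, -1.0, 1.0, omegas)
passive = locally_active(1.0, 1.0, 1.0, omegas)
```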

    Physics-based Machine Learning Methods for Control and Sensing in Fish-like Robots

    Underwater robots are important for the construction and maintenance of underwater infrastructure, underwater resource extraction, and defense. However, they currently fall far behind biological swimmers such as fish in agility, efficiency, and sensing capabilities. As a result, mimicking the capabilities of biological swimmers has become an area of significant research interest. In this work, we focus specifically on improving the control and sensing capabilities of fish-like robots. Our control work focuses on using the Chaplygin sleigh, a two-dimensional nonholonomic system which has been used to model fish-like swimming, as part of a curriculum to train a reinforcement learning agent to control a fish-like robot to track a prescribed path. The agent is first trained on the Chaplygin sleigh model, which is not an accurate model of the swimming robot but crucially has similar physics; having learned these physics, the agent is then trained on a simulated swimming robot, resulting in faster convergence compared to only training on the simulated swimming robot. Our sensing work separately considers using kinematic data (proprioceptive sensing) and using surface pressure sensors. The effect of a swimming body's internal dynamics on proprioceptive sensing is investigated by collecting time series of kinematic data of both a flexible and a rigid body in a water tunnel behind a moving obstacle performing different motions, and using machine learning to classify the motion of the upstream obstacle. This revealed that the flexible body could more effectively classify the motion of the obstacle, even if only one of its internal states is used. We also consider the problem of using time series data from a 'lateral line' of pressure sensors on a fish-like body to estimate the position of an upstream obstacle.
    Feature extraction from the pressure data is attempted with a state-of-the-art convolutional neural network (CNN), and this is compared with using the dominant modes of a Koopman operator constructed on the data as features. It is found that both sets of features achieve similar estimation performance using a dense neural network to perform the estimation. This highlights the potential of the Koopman modes as an interpretable alternative to CNNs for high-dimensional time series. This problem is also extended to inferring the time evolution of the flow field surrounding the body using the same surface measurements, which is performed by first estimating the dominant Koopman modes of the surrounding flow, and using those modes to perform a flow reconstruction. This strategy of mapping from surface to field modes is more interpretable than directly constructing a mapping of unsteady fluid states, and is found to be effective at reconstructing the flow. The sensing frameworks developed as a result of this work allow better awareness of obstacles and flow patterns, knowledge which can inform the generation of paths through the fluid that the developed controller can track, contributing to the autonomy of swimming robots in challenging environments.
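
Koopman-mode features of this kind are typically obtained via dynamic mode decomposition (DMD), the standard finite-dimensional approximation of the Koopman operator. A minimal sketch (illustrative, not the dissertation's exact pipeline) is:

```python
import numpy as np

def dmd(X, r):
    """Exact DMD on a snapshot matrix X whose columns are successive
    measurement vectors; returns approximate Koopman eigenvalues/modes."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]           # rank-r truncation
    Atil = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atil)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# sanity check on a known linear system (true eigenvalues 0.9 and 0.8)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
snaps = [np.array([1.0, 1.0])]
for _ in range(20):
    snaps.append(A @ snaps[-1])
X = np.stack(snaps, axis=1)
eigvals, modes = dmd(X, r=2)
```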

    Bringing Lunar LiDAR Back Down to Earth: Mapping Our Industrial Heritage through Deep Transfer Learning

    This is the final version, available on open access from MDPI via the DOI in this record. This article presents a novel deep learning method for semi-automated detection of historic mining pits using aerial LiDAR data. The recent emergence of national-scale remotely sensed datasets has created the potential to greatly increase the rate of analysis and recording of cultural heritage sites. However, the time and resources required to process these datasets in traditional desktop surveys present a near-insurmountable challenge. The use of artificial intelligence to carry out preliminary processing of vast areas could enable experts to prioritize their prospection focus; however, success so far has been hindered by the lack of large training datasets in this field. This study develops an innovative transfer learning approach, utilizing a deep convolutional neural network initially trained on Lunar LiDAR datasets and reapplied here in an archaeological context. Recall rates of 80% and 83% were obtained on the 0.5 m and 0.25 m resolution datasets, respectively, with false-positive rates maintained below 20%. These results are state of the art and demonstrate that this model is an efficient, effective tool for semi-automated detection of this type of archaeological object. Further tests indicated strong potential for detection of other types of archaeological objects when trained accordingly.
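
For reference, the recall and false-positive rates quoted above follow the standard detection definitions; a generic helper (not code from the study) is:

```python
import numpy as np

def recall_fpr(y_true, y_pred):
    """Recall = TP / (TP + FN); false-positive rate = FP / (FP + TN),
    for binary labels where 1 marks a mining-pit detection."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return tp / (tp + fn), fp / (fp + tn)

# toy usage: 4 true pits, 6 true negatives
recall, fpr = recall_fpr([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                         [1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
```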

    Goal-driven, neurobiological-inspired convolutional neural network models of human spatial hearing

    The human brain effortlessly solves the complex computational task of sound localization using a mixture of spatial cues. How the brain performs this task in naturalistic listening environments (e.g., with reverberation) is not well understood. In the present paper, we build on the success of deep neural networks at solving complex and high-dimensional problems [1] to develop goal-driven, neurobiological-inspired convolutional neural network (CNN) models of human spatial hearing. After training, we visualize and quantify feature representations in intermediate layers to gain insights into the representational mechanisms underlying sound location encoding in CNNs. Our results show that neurobiological-inspired CNN models trained on real-life sounds spatialized with human binaural hearing characteristics can accurately predict sound location in the horizontal plane. CNN localization acuity across the azimuth resembles human sound localization acuity, but CNN models outperform human sound localization in the back. Training models with different objective functions, that is, minimizing either Euclidean or angular distance, modulates localization acuity in particular ways. Moreover, different implementations of binaural integration result in unique patterns of localization errors that resemble behavioral observations in humans. Finally, feature representations reveal a gradient of spatial selectivity across network layers, starting with broad spatial representations in early layers and progressing to sparse, highly selective spatial representations in deeper layers. In sum, our results show that neurobiological-inspired CNNs are a valid approach to modeling human spatial hearing. This work paves the way for future studies combining neural network models with empirical measurements of neural activity to unravel the complex computational mechanisms underlying neural sound location encoding in the human auditory pathway.
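
The choice between the two objective functions matters because azimuth wraps around; the toy comparison below (an illustration here, not the paper's loss implementation) shows how angular and Euclidean error diverge near the wraparound point.

```python
def angular_error(az_true, az_pred):
    """Smallest angle in degrees between two azimuths, respecting wraparound."""
    d = abs(az_pred - az_true) % 360.0
    return min(d, 360.0 - d)

# a source at 350 degrees predicted at 10 degrees is only 20 degrees off
# angularly, while the raw (Euclidean) difference is 340 degrees
ang = angular_error(350.0, 10.0)
euc = abs(350.0 - 10.0)
```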
