
    Medical imaging analysis with artificial neural networks

    Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with a fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. The concluding section highlights comparisons among many neural network applications to provide a global view on computational intelligence with neural networks in medical imaging.
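    As a purely illustrative aside (not drawn from the survey), the sketch below shows pattern (i) in miniature: a fixed-structure feed-forward network trained by plain gradient descent on synthetic "diagnosis" feature vectors. The feature dimension, hidden-layer size, and data are all assumptions made for the example.

```python
# Illustrative sketch (not from the survey): a fixed-structure feed-forward
# classifier applied to a synthetic computer-aided-diagnosis style task, where
# each image has already been reduced to a small feature vector.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 cases, 8 assumed image features, binary label.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

# One hidden layer of fixed size, trained with a fixed procedure
# (plain gradient descent on the cross-entropy loss).
W1 = rng.normal(scale=0.1, size=(8, 16))
W2 = rng.normal(scale=0.1, size=(16, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(500):
    H = np.tanh(X @ W1)                  # hidden activations
    p = sigmoid(H @ W2).ravel()          # predicted probability of disease
    grad_out = (p - y)[:, None] / len(y) # gradient at the output layer
    W2 -= lr * H.T @ grad_out
    W1 -= lr * X.T @ ((grad_out @ W2.T) * (1 - H**2))

print("training accuracy:", np.mean((p > 0.5) == y))
```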

    Machine Learning for Fluid Mechanics

    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments, and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that can be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications. Comment: To appear in the Annual Reviews of Fluid Mechanics, 2020
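    As one concrete instance of the kind of data-driven technique surveyed in this area (chosen here for illustration, not taken from the article), the sketch below extracts dominant flow structures from a synthetic snapshot matrix via proper orthogonal decomposition, computed with the SVD.

```python
# Illustrative sketch (a canonical data-driven technique in fluid mechanics,
# not necessarily the article's): proper orthogonal decomposition (POD) of a
# synthetic flow data set via the SVD, extracting dominant spatial modes.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix: 500 spatial points, 100 time snapshots,
# built from two travelling-wave structures plus noise.
x = np.linspace(0, 2 * np.pi, 500)[:, None]
t = np.linspace(0, 10, 100)[None, :]
snapshots = (np.sin(x - t) + 0.5 * np.sin(3 * x + 2 * t)
             + 0.05 * rng.normal(size=(500, 100)))

# Subtract the temporal mean, then compute the thin SVD.
mean_flow = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

# Columns of U are spatial modes; the singular values rank their energy.
energy = s**2 / np.sum(s**2)
print("energy captured by first 4 modes:", energy[:4].round(3))
```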

    Memristors for the Curious Outsiders

    We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a 2-terminal passive component with a dynamic resistance that depends on an internal parameter. We provide a brief historical introduction, as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide nonpractitioners in the field of memristive circuits and their connection to machine learning and neural computation. Comment: Perspective paper for MDPI Technologies; 43 pages
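    To make the definition concrete, the sketch below simulates a simple, assumed linear-drift memristor model (not one taken from the review): the resistance interpolates between R_on and R_off according to an internal state x that drifts with the current, which produces the pinched current-voltage hysteresis characteristic of memristive devices. The constants R_on, R_off, and k are illustrative values.

```python
# Illustrative sketch (an assumed linear-drift memristor model, not from the
# review): a 2-terminal device whose resistance depends on an internal state
# variable x in [0, 1], driven by a sinusoidal voltage.
import numpy as np

R_on, R_off = 100.0, 16e3        # limiting resistances (ohms), assumed values
k = 1e5                          # assumed constant coupling current to dx/dt
dt, T = 1e-4, 0.2                # time step and total simulation time (s)

x = 0.1                          # internal state (normalized dopant position)
ts = np.arange(0.0, T, dt)
v = np.sin(2 * np.pi * 10 * ts)  # 10 Hz sinusoidal drive (volts)
i = np.zeros_like(ts)
M_trace = np.zeros_like(ts)

for n, vn in enumerate(v):
    M = R_on * x + R_off * (1.0 - x)   # state-dependent resistance
    M_trace[n] = M
    i[n] = vn / M                      # Ohm's law at this instant
    x += k * i[n] * dt                 # linear drift of the internal state
    x = min(max(x, 0.0), 1.0)          # keep the state in its physical range

# Plotting (v, i) would trace the pinched hysteresis loop characteristic
# of memristive behavior; here we just report the resistance swing.
print("resistance swing: %.0f to %.0f ohms" % (M_trace.min(), M_trace.max()))
```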

    A feed forward neural network approach for matrix computations

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A new neural network approach for performing matrix computations is presented. The idea of this approach is to construct a feed-forward neural network (FNN) and then train it by matching a desired set of patterns. The solution of the problem is the converged weight of the FNN. Accordingly, unlike conventional FNN research that concentrates on the external properties (mappings) of the networks, this study concentrates on the internal properties (weights) of the network. The present network is linear and its weights are usually strongly constrained; hence, complicated overlapped networks need to be constructed. It should be noted, however, that the present approach depends highly on the training algorithm of the FNN. Unfortunately, the available training methods, such as the original back-propagation (BP) algorithm, encounter many deficiencies when applied to matrix algebra problems, e.g., slow convergence due to an improper choice of learning rate (LR). Thus, this study focuses on the development of new efficient and accurate FNN training methods. One improvement suggested to alleviate the problem of LR choice is the use of a line search with the steepest descent method, namely bracketing with the golden-section method. This provides an optimal LR as training progresses. Another improvement proposed in this study is the use of conjugate gradient (CG) methods to speed up the training process of the neural network. The computational feasibility of these methods is assessed on two matrix problems: the LU-decomposition of both band and square ill-conditioned unsymmetric matrices, and the inversion of square ill-conditioned unsymmetric matrices. In this study, two performance indexes have been considered, namely learning speed and convergence accuracy. Extensive computer simulations have been carried out using the following training methods: the steepest descent with line search (SDLS) method, the conventional back-propagation (BP) algorithm, and CG methods, specifically the Fletcher-Reeves conjugate gradient (CGFR) method and the Polak-Ribière conjugate gradient (CGPR) method. The performance comparisons between these minimization methods have demonstrated that the CG training methods give better convergence accuracy and are by far superior with respect to learning time; they offer speed-ups of between 3 and 4 over SDLS, depending on the severity of the error goal chosen and the size of the problem. Furthermore, when Powell's restart criteria are used with the CG methods, the problem of wrong convergence directions usually encountered in pure CG learning methods is alleviated. In general, CG methods with restarts have shown the best performance among all other methods in training the FNN for LU-decomposition and matrix inversion. Consequently, it is concluded that CG methods are good candidates for training FNNs for matrix computations, in particular the Polak-Ribière conjugate gradient method with Powell's restart criteria.
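    A minimal sketch of the core idea, under assumptions and not reproducing the thesis code: a linear "network" whose weight matrix W is trained so that A W matches the identity, so that the converged weights approximate the inverse of A. The training loop uses Polak-Ribière conjugate gradient directions with an exact line search (available here because the loss is quadratic) and a simple Powell-style restart to steepest descent whenever the CG coefficient turns negative.

```python
# Sketch under assumptions (a simplified reconstruction, not the thesis code):
# train a linear network W so that A @ W approximates the identity; the
# converged weights are then an approximate inverse of A.
import numpy as np

rng = np.random.default_rng(2)
n = 20
A = rng.normal(size=(n, n)) + n * np.eye(n)   # reasonably conditioned test matrix
I = np.eye(n)

W = np.zeros((n, n))                          # network weights (the unknown inverse)
G = A.T @ (A @ W - I)                         # gradient of 0.5 * ||A W - I||_F^2
D = -G                                        # initial search direction

for it in range(200):
    AD = A @ D
    alpha = -np.sum(G * D) / np.sum(AD * AD)  # exact line search along D
    W = W + alpha * D
    G_new = A.T @ (A @ W - I)
    beta = np.sum(G_new * (G_new - G)) / np.sum(G * G)   # Polak-Ribiere coefficient
    if beta < 0:                              # simple Powell-style restart
        beta = 0.0
    D = -G_new + beta * D
    G = G_new
    if np.linalg.norm(G) < 1e-10:
        break

print("iterations:", it + 1)
print("inverse error ||A W - I||:", np.linalg.norm(A @ W - I))
```

    Plain gradient descent with a hand-picked learning rate would also converge on this quadratic loss, but far more slowly on ill-conditioned matrices, which is the gap the CG training methods in the thesis are meant to close.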

    Wireless Channel Equalization in Digital Communication Systems

    Our modern society has transformed into an information-demanding system, seeking voice, video, and data in quantities that could not be imagined even a decade ago. The mobility of communicators has added more challenges. One of the new challenges is to conceive a highly reliable and fast communication system unaffected by the problems caused in multipath fading wireless channels. Our quest is to remove one of the obstacles in the way of achieving ultimately fast and reliable wireless digital communication, namely inter-symbol interference (ISI), whose intensity makes the channel noise inconsequential. The theoretical background for wireless channel modeling and adaptive signal processing is covered in the first two chapters of the dissertation. The approach of this thesis is not based on one methodology; rather, several algorithms and configurations are proposed and examined to fight the ISI problem. There are two main categories of channel equalization techniques: supervised (training) and unsupervised (blind) modes. We have studied the application of a new and specially modified neural network requiring a very short training period for proper channel equalization in the supervised mode; its promising performance is presented in the graphs of Chapter 4. For blind modes, two distinctive methodologies are presented and studied. Chapter 3 covers the concept of multiple cooperative algorithms for the cases of two and three cooperative algorithms; the select-absolutely-larger-equalized-signal and majority-vote methods are used in the two- and three-algorithm systems, respectively. Many of the demonstrated results are encouraging for further research. Chapter 5 involves the application of the general concept of simulated annealing to blind-mode equalization. A limited strategy of constant annealing noise is tested with the simple algorithms used in the multiple-algorithm systems. Convergence to local stationary points of the cost function in parameter space is clearly demonstrated, which justifies the use of additional noise. The capability of the added random noise to release the algorithm from local traps is established in several cases.
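    For orientation only, the sketch below shows a standard supervised equalizer in the training mode, a linear transversal filter adapted with LMS, rather than the dissertation's modified neural network: a BPSK sequence is distorted by an assumed 3-tap ISI channel plus noise, and the equalizer taps are trained against the known symbols. The channel taps, filter length, and step size are assumptions made for the example.

```python
# Illustrative sketch (standard LMS equalizer, not the dissertation's modified
# neural network): supervised training of a linear transversal equalizer on a
# BPSK signal distorted by a 3-tap ISI channel plus noise.
import numpy as np

rng = np.random.default_rng(3)
N = 5000
symbols = rng.choice([-1.0, 1.0], size=N)          # BPSK training symbols

channel = np.array([0.3, 1.0, 0.3])                # assumed ISI channel taps
received = np.convolve(symbols, channel, mode="same")
received += 0.05 * rng.normal(size=N)              # additive channel noise

taps = 11                                          # equalizer length (assumed)
w = np.zeros(taps)
mu = 0.01                                          # LMS step size
delay = taps // 2                                  # decision delay

for n in range(taps - 1, N):
    x = received[n - taps + 1:n + 1][::-1]         # most recent samples first
    y = w @ x                                      # equalizer output
    d = symbols[n - delay]                         # desired (training) symbol
    w += mu * (d - y) * x                          # LMS weight update

# Evaluate with hard decisions after training.
errors = 0
for n in range(taps - 1, N):
    x = received[n - taps + 1:n + 1][::-1]
    errors += (np.sign(w @ x) != symbols[n - delay])
print("symbol error rate after training:", errors / (N - taps + 1))
```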

    Solutions of linear equations and a class of nonlinear equations using recurrent neural networks

    Artificial neural networks are computational paradigms inspired by biological neural networks (the human brain). Recurrent neural networks (RNNs) are characterized by neuron connections that include feedback paths. This dissertation uses the dynamics of RNN architectures for solving linear and certain nonlinear equations. Neural networks with linear dynamics (variants of the well-known Hopfield network) are used to solve systems of linear equations, where the network structure is adapted to match the properties of the linear system in question. Nonlinear equations, in turn, are solved using the dynamics of nonlinear RNNs, which are based on feedforward multilayer perceptrons. Neural networks are well suited for implementation on special parallel hardware, due to their intrinsic parallelism. The RNNs developed here are implemented on a neural network processor (NNP) designed specifically for fast neural-type processing, and are applied to the inverse kinematics problem in robotics, demonstrating their superior performance over alternative approaches.
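    A minimal sketch of the linear-dynamics idea, under an assumed Hopfield-style gradient-flow formulation that may differ from the dissertation's: the network state x(t) evolves by dx/dt = -A^T (A x - b), and its equilibrium is the solution of A x = b; the example integrates these dynamics with an explicit Euler step.

```python
# Minimal sketch (assumed gradient-flow formulation, not necessarily the
# dissertation's): a recurrent network with linear dynamics whose equilibrium
# state solves the linear system A x = b.
import numpy as np

rng = np.random.default_rng(4)
n = 10
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned test system
b = rng.normal(size=n)

x = np.zeros(n)                               # network state (initial condition)
dt = 1.0 / np.linalg.norm(A.T @ A, 2)         # step small enough for stability

for _ in range(5000):
    x += dt * (-A.T @ (A @ x - b))            # Euler step of the RNN dynamics

print("residual ||A x - b||:", np.linalg.norm(A @ x - b))
```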