
    Design of large polyphase filters in the Quadratic Residue Number System


    Constructivist Artificial Intelligence With Genetic Programming

    Learning is an essential attribute of an intelligent system. A proper understanding of the process of learning in terms of knowledge acquisition, processing, and effective use has been one of the main goals of artificial intelligence (AI). To achieve the desired flexibility, performance levels, and wide applicability, AI should explore and exploit a variety of learning techniques and representations. In recent years, evolutionary algorithms have emerged as powerful learning methods that employ task-independent approaches to problem solving and are potential candidates for implementing adaptive computational models. Owing to attractive features such as implicit and explicit parallelism, these algorithms can also be powerful meta-learning tools for other learning systems such as connectionist networks. These networks, also known as artificial neural networks, offer a paradigm for learning at an individual level and provide an extremely rich landscape of learning mechanisms which AI should exploit. The research proposed in this thesis investigates the role of genetic programming (GP) in connectionism, a learning paradigm that, despite being extremely powerful, has a number of limitations. By systematically identifying the reasons for these limitations, the thesis argues why connectionism should be approached with a new perspective in order to realize its true potential. With genetic-based designs the key issue has been the encoding strategy: how to encode a neural network within a genotype so as to achieve an optimal network structure and/or efficient learning that can best solve a given problem. This in turn raises a number of key questions:

    1. Is the representation (that is, the genotype) that the algorithms employ sufficient to express and explore the vast space of network architectures and learning mechanisms?
    2. Is the representation capable of capturing the concepts of hierarchy and modularity that are vital and so naturally employed by humans in problem solving?
    3. Are some representations better at expressing these? If so, how can the strengths inherent to these representations be exploited?
    4. If the aim is really to automate the design process, what strategies should be employed to minimize the involvement of a designer in the design loop?
    5. Is the methodology or approach able to overcome at least some of the limitations commonly seen in connectionist networks?
    6. Most importantly, how effective is the approach in problem solving?

    These issues are investigated through a novel approach that combines genetic programming with a self-organizing neural network, which provides a framework for the simulations. Through the powerful notions of constructivism and micro-macro dynamics, the approach provides a way of exploiting the potential features (such as hierarchy and modularity) that are inherent to the representation that GP employs. By providing a general definition for learning and by imposing a single potential constraint within the representation, the approach demonstrates that genetic programming, if used for construction and optimization, can be extremely creative. The method also combines the bottom-up and top-down strategies that are key to evolving ALife-like systems. A comparison with earlier methods is drawn to identify the merits of the proposed approach, and a pattern recognition task is considered for illustration. Simulations suggest that genetic programming can be a powerful meta-learning tool for implementing useful network architectures and flexible learning mechanisms for self-organizing neural networks while interacting with a given task environment. It appears possible to extend the novel approach to other types of networks. Finally, the role of flexible learning in implementing adaptive AI systems is discussed.
A number of potential application domains are identified.
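
The genotype question above can be made concrete with a toy sketch. The following is an illustrative genetic-programming loop (not the thesis's actual system): genotypes are expression trees, fitness is error against a hidden target, and evolution proceeds by selection plus subtree mutation. The same tree-shaped genotype idea underlies encoding a network-construction program.

```python
import random

random.seed(0)
FUNCS = ['+', '*']

def random_tree(depth=3):
    # A genotype is either a leaf ('x' or a small constant) or an
    # operator node with two subtrees.
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(0, 3)
    return (random.choice(FUNCS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == '+' else a * b

def fitness(tree):
    # Squared error against a hidden target f(x) = x*x + 1 (lower is better).
    return sum((evaluate(tree, x) - (x * x + 1)) ** 2 for x in range(-5, 6))

def mutate(tree):
    # Replace a randomly chosen subtree with a freshly grown one.
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_tree() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness)
    survivors = population[:25]               # keep the fitter half
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = min(population, key=fitness)
print(fitness(best))
```

In a GP-for-connectionism setting, the leaves and operators would instead be network-construction primitives (add node, connect, set learning rule), so hierarchy and modularity fall out of the tree structure itself.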

    Automated Analysis of Retinal and Choroidal OCT and OCTA Images in AMD

    Age-related macular degeneration (AMD) is a progressive eye disease which manifests primarily at the outer retina and choroid. The research project aimed to determine whether measures obtained from optical coherence tomography (OCT) and OCT angiography (OCTA) images could be used to provide novel AMD biomarker insight and an early disease detection method. To that end, an OCT- and OCTA-enabled device was used to image early and intermediate AMD subjects and controls. At the selected device scan size, each scan of one eye provides a volume of data constructed of 300 cross-sectional images termed B-scans. In total, scans of 10 eyes from subjects with early and intermediate AMD (3,000 B-scan images) and a case of neovascular AMD, 12 eyes from subjects over the age of 50 (3,600 B-scan images), and 11 eyes from subjects under the age of 50 (3,300 B-scan images) were obtained. Five feature extraction methods were either reproduced or developed to determine whether significant differences could be observed between the early and intermediate AMD subjects and the control subjects at the eye level. Through non-parametric testing, it was established that two AMD biomarker extraction methods (choriocapillaris flow-void analysis and a drusen segmentation method) produced measures which showed significant differences between groups, and which were also uniformly represented across the frontal plane of the eye. It was then desired to leverage these measures and generate an interpretable, B-scan-level, machine-learning-based AMD classification model.
Frequency spectra resulting from the fast Fourier transforms of spatial series derived from measures believed to be representative of the two biomarkers were obtained and used as features to train a random forest and a deep forest classifier. Principal component analysis (PCA) was used to reduce the dimensionality of the feature space, and model performance and predictor importance were assessed. A new method was devised which allows automated 3D reconstruction and quantitative evaluation of retinal flow signal patterns, and incidentally of the retinal microvasculature. Measures representative of drusen and the choriocapillaris were leveraged to create interpretable models for the classification of early and intermediate AMD. As the worldwide prevalence of AMD increases and OCT devices become more available, a greater number of highly trained personnel is needed to interpret medical information and provide the appropriate clinical care. Expert analysis and grading of AMD through OCT images are expensive and time consuming. The models proposed could serve to automate AMD detection, even when it is asymptomatic, and signal to an ophthalmologist the need to monitor and treat the condition before the occurrence of severe visual loss. The models are transparent and provide classification from single cross-sectional images. Therefore, the automated diagnosis tool could also be used in situations where only partial medical data are available, or where access to health care resources is limited.
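
The pipeline described (spatial series per B-scan → FFT magnitude spectrum → PCA → forest classifier) can be sketched on synthetic data. This is a minimal illustration, not the study's actual measures: the simulated "AMD" series simply carry an added low-frequency component as a stand-in for drusen-related structure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def spectrum_features(series):
    # One-sided FFT magnitude spectrum of a 1D spatial series.
    return np.abs(np.fft.rfft(series))

# Synthetic data: controls are noise; "AMD-like" scans add a slow
# sinusoidal component (hypothetical stand-in for a biomarker signal).
n, length = 200, 128
labels = rng.integers(0, 2, size=n)          # 0 = control, 1 = AMD-like
x = np.arange(length)
series = rng.normal(size=(n, length))
series[labels == 1] += 2.0 * np.sin(2 * np.pi * 3 * x / length)

features = np.array([spectrum_features(s) for s in series])

# PCA for dimensionality reduction, then a random forest on the
# reduced features (the abstract also uses a deep forest, omitted here).
pca = PCA(n_components=10)
reduced = pca.fit_transform(features)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(reduced[:150], labels[:150])
accuracy = clf.score(reduced[150:], labels[150:])
print(accuracy)
```

Feature importances from the fitted forest (`clf.feature_importances_`) are what make this family of models interpretable: each principal component can be traced back to the frequency bands that drive a classification.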

    Computational models of object motion detectors accelerated using FPGA technology

    The detection of moving objects is a trivial task when performed by vertebrate retinas, yet a complex computer vision task. This PhD research programme has made three key contributions, namely: 1) a multi-hierarchical spiking neural network (MHSNN) architecture for detecting horizontal and vertical movements, 2) a Hybrid Sensitive Motion Detector (HSMD) algorithm for detecting object motion, and 3) the Neuromorphic Hybrid Sensitive Motion Detector (NeuroHSMD), a real-time neuromorphic implementation of the HSMD algorithm. The MHSNN is a customised 4-layer Spiking Neural Network (SNN) architecture designed to reflect the basic connectivity and canonical behaviours found in the majority of vertebrate retinas (including human retinas). The architecture was trained using images from a custom dataset generated in laboratory settings. Simulation results revealed that each cell model is sensitive to vertical and horizontal movements, with a detection error of 6.75% against the teaching signals (expected output signals) used to train the MHSNN. The experimental evaluation showed that the MHSNN was not scalable because of the overall number of neurons and synapses, which led to the development of the HSMD. The HSMD algorithm enhances an existing Dynamic Background Subtraction (DBS) algorithm with a customised 3-layer SNN, which stabilises the foreground information of moving objects in the scene and thereby improves object motion detection. The algorithm was compared against existing background subtraction approaches available in the Open Computer Vision (OpenCV) library, specifically on the 2012 Change Detection (CDnet2012) and 2014 Change Detection (CDnet2014) benchmark datasets. The accuracy results show that the HSMD ranked first overall and performed better than all the other benchmarked algorithms in four of the categories, across all eight test metrics.
Furthermore, the HSMD is the first algorithm to use an SNN to enhance an existing dynamic background subtraction algorithm without substantial degradation of the frame rate, processing 720 × 480 images at 13.82 frames per second (fps) (CDnet2014) and 13.92 fps (CDnet2012) on a high-performance computer (96 cores and 756 GB of RAM). Although the HSMD analysis shows a good Percentage of Correct Classifications (PCC) on CDnet2012 and CDnet2014, the customised 3-layer SNN was identified as the speed bottleneck, which could be improved with dedicated hardware. The NeuroHSMD is thus an adaptation of the HSMD algorithm in which the SNN component has been fully implemented on dedicated hardware [a Terasic DE10-Pro Field-Programmable Gate Array (FPGA) board]. Open Computing Language (OpenCL) was used to simplify the FPGA design flow and to allow code portability to other devices such as FPGAs and Graphics Processing Units (GPUs). The NeuroHSMD was also tested against the CDnet2012 and CDnet2014 datasets, achieving an 82% acceleration over the HSMD algorithm and processing 720 × 480 images at 28.06 fps (CDnet2012) and 28.71 fps (CDnet2014).
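
The dynamic background subtraction idea that the HSMD builds on can be illustrated with a minimal running-average sketch. This is not the HSMD or its SNN layer; it only shows the baseline DBS mechanism: maintain a per-pixel background estimate and flag pixels that deviate from it as foreground.

```python
import numpy as np

class RunningAverageDBS:
    """Toy dynamic background subtraction: per-pixel running average."""

    def __init__(self, alpha=0.05, threshold=30.0):
        self.background = None
        self.alpha = alpha            # adaptation rate of the background model
        self.threshold = threshold    # intensity difference flagged as motion

    def apply(self, frame):
        frame = frame.astype(np.float64)
        if self.background is None:
            # Bootstrap the model from the first frame.
            self.background = frame.copy()
            return np.zeros(frame.shape, dtype=bool)
        mask = np.abs(frame - self.background) > self.threshold
        # Update the model only where the scene looks static, so moving
        # objects do not bleed into the background estimate.
        self.background[~mask] += self.alpha * (frame - self.background)[~mask]
        return mask

# Synthetic demo at the abstract's 720 × 480 resolution: a static grey
# scene, then a frame with a bright 40 × 40 "moving object".
dbs = RunningAverageDBS()
static = np.full((480, 720), 100.0)
for _ in range(50):                   # let the model settle on the background
    dbs.apply(static)
frame = static.copy()
frame[200:240, 300:340] = 255.0       # the moving object
mask = dbs.apply(frame)
print(mask.sum())                     # number of pixels flagged as foreground
```

Per the abstract, the HSMD's contribution is a 3-layer SNN that stabilises exactly this kind of foreground mask, and the NeuroHSMD moves that SNN onto an FPGA to remove the speed bottleneck.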