93 research outputs found

    A Unified Cognitive Model of Visual Filling-In Based on an Emergic Network Architecture

    The Emergic Cognitive Model (ECM) is a unified computational model of visual filling-in based on the Emergic Network architecture. The Emergic Network was designed to help realize systems undergoing continuous change. In this thesis, eight different filling-in phenomena are demonstrated under a regime of continuous eye movement (and under static eye conditions as well). ECM indirectly demonstrates the power of unification inherent in Emergic Networks when cognition is decomposed into finer-grained functions supporting change. These can interact to give rise to additional emergent behaviours via cognitive re-use, hence the Emergic prefix throughout. Nevertheless, the model is robust and parameter-free; differential re-use arises only from how the model interacts with a particular testing paradigm. ECM has a novel decomposition driven by the requirements of handling motion and of supporting unified modelling via finer functional grains, and the breadth of phenomenal behaviour covered serves largely to lend credence to this novel decomposition. The Emergic Network architecture is a hybrid between classical connectionism and classical computationalism that facilitates the construction of unified cognitive models. It helps cut functionality into finer grains distributed over space (by harnessing massive recurrence) and over time (by harnessing continuous change), yet simplifies by using standard computer code to focus on the interaction of information flows. Thus, while the structure of the network looks neurocentric, the dynamics are best understood in flow-centric terms. Surprisingly, dynamical systems analysis (as usually understood) is not involved; an Emergic Network is engineered much like straightforward software or hardware systems that deal with continuously varying inputs. Ultimately, this thesis addresses the problem of reduction and induction over complex systems, and the Emergic Network architecture is merely a tool to assist in this epistemic endeavour. ECM is strictly a sensory model, set apart from perception, yet it is informed by phenomenology. It addresses the attribution problem of how much of a phenomenon is best explained at a sensory level of analysis rather than at a perceptual one. As the causal information flows are stable under eye movement, we hypothesize that they are the locus of consciousness, howsoever it is ultimately realized.
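    The flow-centric engineering style described above can be illustrated with a toy sketch. The node names, the flows dictionary and the tick semantics below are illustrative assumptions, not the thesis's actual API; the point is only that each unit is ordinary code transforming continuously varying input flows into output flows under recurrence.

```python
# Hypothetical sketch of a flow-centric update loop; names and semantics
# are assumptions for exposition, not the Emergic Network's real API.

class Node:
    """A unit that turns incoming flows into an outgoing flow each tick."""
    def __init__(self, fn, inputs, output):
        self.fn = fn            # ordinary code, per the architecture's design
        self.inputs = inputs    # names of the flows this node reads
        self.output = output    # name of the flow this node writes

def tick(nodes, flows):
    """Advance every node one step of continuous change.

    All nodes read the previous tick's flow values (double buffering),
    so massive recurrence cannot create read/write races within a tick.
    """
    snapshot = dict(flows)
    for node in nodes:
        flows[node.output] = node.fn(*(snapshot[n] for n in node.inputs))
    return flows

# Example: a two-node recurrent loop tracking a continuously varying input.
nodes = [
    Node(lambda x, fb: 0.9 * x + 0.1 * fb, ["sensor", "estimate"], "estimate"),
    Node(lambda est: est, ["estimate"], "percept"),
]
flows = {"sensor": 0.0, "estimate": 0.0, "percept": 0.0}
for t in range(5):
    flows["sensor"] = 0.5 * t      # continuously varying input
    tick(nodes, flows)
    print(flows["percept"])
```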

    Fundus image analysis for automatic screening of ophthalmic pathologies

    In recent years, the number of blindness cases has fallen significantly. Despite this promising news, the World Health Organisation estimates that 80% of cases of visual impairment (285 million in 2010) could be avoided if diagnosed and treated early. To this end, eye care services need to be established in primary health care, and screening campaigns should become routine in centres serving people at risk. However, these solutions entail a high workload for experts trained in the analysis of the anomalous patterns of each eye disease. The development of algorithms for automatic screening systems therefore plays a vital role in this field. This thesis focuses on the automatic identification of the retinal damage caused by two of the most common pathologies in today's society: diabetic retinopathy (DR) and age-related macular degeneration (AMD). Specifically, the final goal of this work is to develop novel methods, based on fundus image description and classification, to characterise healthy and abnormal tissue in the retina background. In addition, pre-processing algorithms are proposed with the aim of normalising the high variability of fundus images and removing the contribution of retinal structures that could hinder retinal damage detection. In contrast to most state-of-the-art work on damage detection in fundus images, the methods proposed throughout this manuscript avoid the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns, granulometric profiles and fractal dimension are computed locally to extract texture, morphological and roughness information from retinal images. Different combinations of this information feed advanced classification algorithms formulated to optimally discriminate between exudates, microaneurysms, haemorrhages and healthy tissue. Through several experiments, the ability of the proposed system to identify DR and AMD signs is validated on public databases with a large degree of variability and without excluding any image. Moreover, this thesis covers the basics of the deep learning paradigm. In particular, a novel approach based on convolutional neural networks (CNNs) is explored: the transfer learning technique is applied to fine-tune the most important state-of-the-art CNN architectures. Exudate detection and localisation using neural networks are carried out in the last two experiments of this thesis, and an objective comparison is established between the hand-crafted feature extraction and classification pipeline and the prediction models based on CNNs. The promising results of this PhD thesis, together with the affordable cost and portability of retinal cameras, could facilitate the incorporation of the developed algorithms into a computer-aided diagnosis (CAD) system that helps specialists detect the anomalous patterns characteristic of the two diseases under study: DR and AMD.
Colomer Granero, A. (2018). Fundus image analysis for automatic screening of ophthalmic pathologies [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/99745
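    A minimal sketch of the segmentation-free pipeline the abstract describes: local binary pattern (LBP) texture descriptors computed over fundus image patches and fed directly to a classifier, with no lesion segmentation or candidate map. The patch size, LBP parameters and SVM choice are illustrative assumptions, not the thesis's exact configuration (which also uses granulometric profiles and fractal dimension).

```python
# Sketch: patch-level LBP histograms -> SVM, with assumed parameters.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1                  # LBP neighbours and radius (assumed values)
N_BINS = P + 2               # the 'uniform' method yields P + 2 codes

def lbp_histogram(patch):
    """Describe a grey-level patch by its normalised LBP histogram."""
    img = (patch * 255).astype(np.uint8)   # LBP expects integer grey levels
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

# Toy data: random patches standing in for healthy/pathological fundus
# regions; in the thesis, labels come from annotated public databases.
rng = np.random.default_rng(0)
patches = rng.random((40, 32, 32))
labels = rng.integers(0, 2, size=40)       # 0 = healthy, 1 = lesion

X = np.array([lbp_histogram(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```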

    Persistent Homology Tools for Image Analysis

    Topological Data Analysis (TDA) is a field of mathematics that has emerged rapidly since the first decade of this century from various works in algebraic topology and geometry. The goal of TDA, and of its main tool, persistent homology (PH), is to provide topological insight into complex and high-dimensional datasets. We take this premise on board to gain more topological insight in digital image analysis and to quantify tiny low-level distortions that are undetectable except possibly by highly trained persons. Such image distortion may be introduced intentionally (e.g. by morphing and steganography) or arise naturally in scan images of abnormal human tissues/organs as a result of the onset of cancer or other diseases. The main objective of this thesis is to design new image analysis tools based on persistent homological invariants representing simplicial complexes on sets of pixel landmarks over a sequence of distance resolutions. We first propose innovative automatic techniques to select image pixel landmarks and build a variety of simplicial topologies from a single image. The effectiveness of each landmark selection scheme is demonstrated on different image tampering problems such as morphed face detection, steganalysis and breast tumour detection. Vietoris-Rips simplicial complexes are constructed on the image landmarks at increasing distance thresholds, and topological (homological) features are computed at each threshold and summarised in a form known as persistent barcodes. We vectorise the space of persistent barcodes using a technique known as persistent binning, whose strength we demonstrate for various image analysis purposes. Different machine learning approaches are adopted to develop automatic detection of tiny texture distortions in many image analysis applications. The homological invariants used in this thesis are the 0- and 1-dimensional Betti numbers. We developed an innovative approach to designing PH-based algorithms for automatic detection of the types of image distortion described above. In particular, we developed the first PH detector of morphing attacks on passport face biometric images. We demonstrate the significant accuracy of two such morph detection algorithms with four types of automatically extracted image landmarks: local binary patterns (LBP), 8-neighbour super-pixels (8NSP), radial LBP (R-LBP) and centre-symmetric LBP (CS-LBP). Any of these techniques yields several persistent barcodes summarising persistent topological features that help gain insight into complex hidden structures not accessible to other image analysis methods. We also demonstrate the significant success of a similarly developed PH-based universal steganalysis tool capable of detecting secret messages hidden inside digital images, and we argue through a pilot study that building PH records from digital images can differentiate malignant from benign breast tumours in digital mammographic images. The research presented in this thesis creates new opportunities to build real applications based on TDA and highlights many research challenges in a variety of image processing/analysis tasks. For example, we describe a TDA-based exemplar image inpainting technique (TEBI), superior to an existing exemplar algorithm, for the reconstruction of missing image regions.
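    A hedged sketch of the barcode pipeline just described: landmarks, Vietoris-Rips persistence (Betti 0 and 1), then a binned feature vector. The ripser.py library is used here as a stand-in PH engine; the landmark selection step (LBP, 8NSP, R-LBP, CS-LBP) is replaced by random points, and the binning grid is an assumption rather than the thesis's persistent-binning parameters.

```python
# Landmarks -> Vietoris-Rips barcodes -> binned (vectorised) features.
import numpy as np
from ripser import ripser

def persistence_features(landmarks, n_bins=16, max_dist=2.0):
    """Vectorise H0/H1 barcodes by histogramming persistence (death - birth)."""
    dgms = ripser(landmarks, maxdim=1)["dgms"]   # dgms[0] = H0, dgms[1] = H1
    feats = []
    for dgm in dgms:
        pers = dgm[:, 1] - dgm[:, 0]
        pers = pers[np.isfinite(pers)]           # drop the infinite H0 bar
        hist, _ = np.histogram(pers, bins=n_bins, range=(0.0, max_dist))
        feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(1)
landmarks = rng.random((100, 2))    # stand-in for image pixel landmarks
print(persistence_features(landmarks).shape)    # (32,)
```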

    The Affine Uncertainty Principle, Associated Frames and Applications in Signal Processing

    Uncertainty relations play a prominent role in signal processing, stating that a signal cannot be simultaneously concentrated in the two related domains of the corresponding phase space. In particular, a new uncertainty principle for the affine group, which is directly related to the wavelet transform, has led to a new minimizing waveform. In this thesis, a frame construction is proposed that leads to approximately tight frames based on this minimizing waveform. Frame properties such as the diagonality of the frame operator, as well as lower and upper frame bounds, are analyzed. Additionally, three applications of such frame constructions are introduced: inpainting of missing audio data, detection of neuronal spikes in extracellularly recorded data, and peak detection in MALDI imaging data.
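    To make the frame construction concrete, here is a minimal sketch of a wavelet-frame analysis: sample the affine group on a scale/translation grid and take inner products with dilated, translated copies of a mother waveform. The Morlet-like window below is only a stand-in for the thesis's uncertainty-minimizing waveform, and the grid spacing is an assumed choice (in practice it governs how tight the frame is).

```python
# Discrete affine-group sampling: coefficients <f, psi_{a,b}>.
import numpy as np

def waveform(t):
    """Stand-in mother wavelet (Morlet-like), not the minimizing waveform."""
    return np.exp(-t**2 / 2) * np.cos(5 * t)

def frame_coefficients(signal, scales, step):
    """Inner products on a grid of scales a and translations b = 0, step, ..."""
    t = np.arange(len(signal))
    shifts = t[::step]
    coeffs = np.empty((len(scales), len(shifts)))
    for i, a in enumerate(scales):
        for j, b in enumerate(shifts):
            atom = waveform((t - b) / a) / np.sqrt(a)  # L2-normalised dilation
            coeffs[i, j] = signal @ atom
    return coeffs

sig = np.sin(2 * np.pi * np.arange(256) / 32)
C = frame_coefficients(sig, scales=2.0 ** np.arange(1, 5), step=8)
print(C.shape)   # (4, 32)
```

    Reconstruction from such coefficients (e.g. for audio inpainting) works when the grid is dense enough for the frame bounds to be finite and close, which is what the approximate tightness result guarantees for the minimizing waveform.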

    Medical image synthesis using generative adversarial networks: towards photo-realistic image synthesis

    This work addresses photo-realism in synthetic images. We introduce a modified generative adversarial network, StencilGAN: a perceptually-aware generative adversarial network that synthesizes images based on overlaid labelled masks. This technique can be a prominent solution to the scarcity of resources in the healthcare sector.
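    The StencilGAN architecture itself is not specified in this abstract; the sketch below is a generic mask-conditioned generator (pix2pix-style) intended only to make "synthesis from overlaid labelled masks" concrete. All layer sizes and the single-channel output are illustrative assumptions.

```python
# Generic mask-conditioned generator: a one-hot label mask in, an image out.
import torch
import torch.nn as nn

class MaskConditionedGenerator(nn.Module):
    """Maps a one-hot label mask (e.g. tissue/lesion classes) to an image."""
    def __init__(self, n_classes=4, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh(),  # grey-level output
        )

    def forward(self, mask):
        return self.net(mask)

mask = torch.zeros(1, 4, 64, 64)
mask[:, 1, 16:48, 16:48] = 1.0     # a labelled region to synthesise
img = MaskConditionedGenerator()(mask)
print(img.shape)                   # torch.Size([1, 1, 64, 64])
```

    In a full GAN, this generator would be trained against a discriminator (plus, for a perceptually-aware variant, a perceptual loss) so that synthesized content respects the overlaid mask labels.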

    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection, that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. The corresponding tools have subsequently been widely adopted by several scientific communities such as neuroscience, bioinformatics, and computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages; to appear in Foundations and Trends in Computer Graphics and Vision.
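    A short sketch of the two ingredients the monograph covers: learning a dictionary adapted to the data, then sparse-coding new samples against it. The patch size, dictionary size and sparsity level below are illustrative assumptions; scikit-learn is used as a stand-in toolbox.

```python
# Dictionary learning + sparse coding on toy "image patch" data.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
patches = rng.random((200, 64))        # e.g. flattened 8x8 image patches

# Learn a dictionary adapted to the data (the "learned dictionary" setting).
dico = DictionaryLearning(n_components=32, transform_n_nonzero_coefs=5,
                          max_iter=10, random_state=0).fit(patches)

# Represent a new sample as a linear combination of a few dictionary atoms
# (orthogonal matching pursuit enforces the sparsity constraint).
codes = sparse_encode(patches[:1], dico.components_,
                      algorithm="omp", n_nonzero_coefs=5)
print(np.count_nonzero(codes))         # at most 5 active atoms
```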

    Deep learning for small object detection

    Small object detection has become increasingly relevant because the performance of common object detectors falls significantly as objects become smaller. Many computer vision applications require the analysis of the entire set of objects in an image, including extremely small ones. Moreover, detecting small objects makes it possible to perceive objects at a greater distance, giving more time to adapt to any situation or unforeseen event.

    Beyond the pixels: learning and utilising video compression features for localisation of digital tampering.

    Video compression is pervasive in digital society. With the rising use of deep convolutional neural networks (CNNs) in computer vision, video analysis and video tampering detection, it is important to investigate how patterns invisible to human eyes may influence modern computer vision techniques and how they can be used advantageously. This work thoroughly explores how video compression influences the accuracy of CNNs and shows that optimal performance is achieved when compression levels in the training set closely match those of the test set. A novel method is then developed, using CNNs, to derive compression features directly from the pixels of video frames. It is shown that these features can readily be used to detect inauthentic video content with good accuracy across multiple different video tampering techniques. Moreover, the ability to explain these features allows predictions to be made about their effectiveness against future tampering methods. The problem is motivated with a novel investigation into recent video manipulation methods, which shows that there is a consistent drive to produce convincing, photorealistic, manipulated or synthetic video. Humans, blind to the presence of video tampering, are also blind to the type of tampering. New detection techniques are therefore required and, to compensate for human limitations, they should be broadly applicable to multiple tampering types. This thesis details the steps necessary to develop and evaluate such techniques.
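    The finding that CNN accuracy is best when training and test compression levels match suggests preparing training data at controlled compression levels. The sketch below uses ffmpeg's real libx264 constant-rate-factor (CRF) option to re-encode a clip at several quality levels; the file names and the chosen CRF values are placeholders, not the thesis's protocol.

```python
# Build matched-compression training variants of a clip with ffmpeg.
import subprocess

def reencode_at_crf(src, dst, crf):
    """Re-encode a video at a chosen constant-rate-factor (quality) level."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), dst],
        check=True,
    )

# CRF values bracketing the compression expected in the test data
# (lower CRF = higher quality; placeholder file names).
for crf in (18, 23, 28, 35):
    reencode_at_crf("train_clip.mp4", f"train_clip_crf{crf}.mp4", crf)
```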

    Digital Painting Analysis: Authentication and Artistic Style from Digital Reproductions
