
    A graph-based mathematical morphology reader

    This survey paper aims to provide a "literary" anthology of mathematical morphology on graphs. It describes in the English language many ideas stemming from a large number of different papers, hence providing a unified view of an active and diverse field of research.

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

    The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic, human-interpretable objects (roads, forest and water boundaries) extracted from onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlation of the detected features with the reference features via a series of robust data association steps allows a localisation solution to be achieved with a finite absolute precision bound defined by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrated the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position 'fix'; they have also been used in reverse for accurate mapping of vehicles detected on the roads into an inertial space with improved precision. Combined correction of geo-coding errors and improved aircraft localisation formed a robust solution to the defence mapping application. A system of the proposed design will provide a complete, independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
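
    As a rough illustration of the data-association idea (matching detected features to the a-priori reference database), the sketch below performs nearest-neighbour association with a simple gating radius. The function name, the use of 2-D feature positions and the gate value are assumptions for illustration; the thesis's robust association pipeline involves additional steps not shown here.

```python
import numpy as np
from scipy.spatial import cKDTree

def associate_features(detected_xy, reference_xy, gate_radius=30.0):
    """Match each detected feature to its nearest reference feature.

    Minimal nearest-neighbour data association with a gating radius;
    matches farther than `gate_radius` (same units as the coordinates)
    are rejected as outliers.  Names and the gate value are illustrative.
    """
    tree = cKDTree(reference_xy)             # reference features built a priori
    dist, ref_idx = tree.query(detected_xy)  # nearest reference for each detection
    keep = dist <= gate_radius               # simple outlier gate
    # pairs of (detected index, matched reference index)
    return np.column_stack([np.flatnonzero(keep), ref_idx[keep]])
```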

    DeepACSON automated segmentation of white matter in 3D electron microscopy

    Tracing the entirety of ultrastructures in large three-dimensional electron microscopy (3D-EM) images of brain tissue requires automated segmentation techniques. Current segmentation techniques use deep convolutional neural networks (DCNNs) and rely on high-contrast cellular membranes and high-resolution EM volumes. Segmenting low-resolution, large EM volumes, on the other hand, requires methods that can account for the severe membrane discontinuities that are inescapable in such data. We therefore developed DeepACSON, which performs DCNN-based semantic segmentation and shape-decomposition-based instance segmentation. DeepACSON instance segmentation exploits the tubularity of myelinated axons and decomposes under-segmented myelinated axons into their constituent axons. We applied DeepACSON to ten EM volumes of rats after sham operation or traumatic brain injury, segmenting hundreds of thousands of long-span myelinated axons, thousands of cell nuclei, and millions of mitochondria with excellent evaluation scores. DeepACSON quantified the morphology and spatial aspects of white matter ultrastructures, capturing nanoscopic morphological alterations five months after the injury. With DeepACSON, Abdollahzadeh et al. combine existing deep-learning-based methods for semantic segmentation with a novel shape decomposition technique for instance segmentation. The pipeline is used to segment low-resolution 3D-EM datasets, allowing quantification of white matter morphology in large fields of view.
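
    To place the instance-segmentation stage in context, the sketch below shows the generic step of turning a binary semantic mask into labelled instances by connected-component labelling with a minimum-size filter. It is only a stand-in showing where DeepACSON's shape-decomposition stage operates; it does not implement the paper's tubularity-based decomposition, and the function name and min_voxels threshold are illustrative.

```python
import numpy as np
from scipy import ndimage

def label_instances(semantic_mask, min_voxels=100):
    """Generic semantic-to-instance step: connected-component labelling.

    Stand-in for illustration only; DeepACSON replaces this step with a
    tubularity-aware shape decomposition of under-segmented axons.
    """
    labels, _ = ndimage.label(semantic_mask.astype(bool))
    counts = np.bincount(labels.ravel())          # voxels per label (0 = background)
    small = np.flatnonzero(counts < min_voxels)   # labels too small to keep
    labels[np.isin(labels, small)] = 0            # discard them as background
    return labels
```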

    Gap Filling of 3-D Microvascular Networks by Tensor Voting

    We present a new algorithm that merges discontinuities in 3-D images of tubular structures presenting undesirable gaps. The proposed method is mainly intended for large 3-D images of microvascular networks. In order to recover the real network topology, we need to fill the gaps between the closest discontinuous vessels. The algorithm presented in this paper aims at achieving this goal. It is based on the skeletonization of the segmented network followed by a tensor voting method. It makes it possible to merge the most common kinds of discontinuities found in microvascular networks. It is robust, easy to use, and relatively fast. The microvascular network images were obtained using synchrotron tomography imaging at the European Synchrotron Radiation Facility. These images show samples of intracortical networks. Representative results are illustrated.
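
    As a rough sketch of the gap-filling idea, the snippet below detects skeleton endpoints and bridges nearby endpoint pairs with straight segments. It is a simplified, proximity-based stand-in for the paper's tensor-voting step (which also exploits orientation information); the function name and the max_gap parameter are assumptions.

```python
import numpy as np
from scipy import ndimage

def bridge_skeleton_gaps(skeleton, max_gap=5):
    """Link nearby endpoints of a 3-D binary skeleton with straight segments.

    Endpoints are skeleton voxels with exactly one 26-neighbour; endpoint
    pairs closer than `max_gap` voxels are joined by a rasterised line.
    """
    skel = skeleton.astype(bool)
    kernel = np.ones((3, 3, 3))
    kernel[1, 1, 1] = 0                                   # exclude the centre voxel
    neighbours = ndimage.convolve(skel.astype(int), kernel, mode="constant")
    endpoints = np.argwhere(skel & (neighbours == 1))     # (N, 3) voxel coordinates

    out = skel.copy()
    for i in range(len(endpoints)):                       # brute-force pairing is fine for a sketch
        for j in range(i + 1, len(endpoints)):
            p, q = endpoints[i], endpoints[j]
            dist = np.linalg.norm(p - q)
            if 0 < dist <= max_gap:
                steps = int(np.ceil(dist)) + 1
                line = np.linspace(p, q, steps).round().astype(int)
                out[tuple(line.T)] = True                 # draw the bridging segment
    return out
```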

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
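
    The first workflow step above relies on models being expressed in simulator-independent PyNN. The sketch below is a minimal PyNN network written against the standard PyNN 0.8+ API; the choice of backend module (pyNN.nest), the population sizes and all neuron and synapse parameters are illustrative assumptions, not values from the paper. Retargeting such a model to a hardware backend would, in principle, amount to swapping the backend import.

```python
# Minimal simulator-independent PyNN description (sketch).
import pyNN.nest as sim  # backend choice is an assumption

sim.setup(timestep=0.1)  # ms

# Two populations of conductance-based integrate-and-fire neurons (sizes illustrative).
exc = sim.Population(80, sim.IF_cond_exp(tau_m=20.0), label="exc")
inh = sim.Population(20, sim.IF_cond_exp(tau_m=20.0), label="inh")

# Poisson background drive onto the excitatory population.
noise = sim.Population(80, sim.SpikeSourcePoisson(rate=10.0))
sim.Projection(noise, exc, sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.01, delay=1.0),
               receptor_type="excitatory")

# Random excitatory -> inhibitory and inhibitory -> excitatory coupling.
sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.005, delay=1.0),
               receptor_type="excitatory")
sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
               sim.StaticSynapse(weight=0.02, delay=1.0),
               receptor_type="inhibitory")

exc.record("spikes")
sim.run(1000.0)                      # simulate one biological second
spikes = exc.get_data("spikes")      # retrieve recorded spike trains
sim.end()
```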

    BAL image analysis for their automatic quantification

    Bronchoalveolar lavage (BAL) images are a medical test from which different pathologies can be identified based on their cellular distribution. At the Vall d'Hebron hospital, technicians perform a manual count to determine this cell distribution. We did not have any labelled BAL images, because this was the first contact with the problem. In this project, a maxtree-based solution is used to produce a first segmentation, and then a correction process is carried out in order to correctly label 56 BAL images. With this small dataset, and taking into account that the maxtree results were, in some respects, not as good as we expected, we decided to train and test a CNN based on the U-Net architecture. We applied techniques specific to small datasets and tested different parametrizations of the network. Finally, we obtain approximately 70% global IoU score in the validation of the CNN.
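
    The validation figure quoted above is a global intersection-over-union (IoU) score. A minimal way to compute it for binary masks is sketched below; the function and argument names are illustrative.

```python
import numpy as np

def global_iou(pred, target):
    """Global intersection-over-union between two binary masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # If both masks are empty, the masks agree perfectly.
    return float(intersection) / union if union else 1.0
```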

    Medical Image Segmentation: Thresholding and Minimum Spanning Trees

    In image segmentation, an image is divided into separate objects or regions. It is an essential step in image processing to define areas of interest for further processing or analysis. The segmentation process reduces the complexity of an image to simplify the analysis of the attributes obtained after segmentation. It changes the representation of the information in the original image and presents the pixels in a way that is more meaningful and easier to understand. Image segmentation has various applications. For medical images, the segmentation process aims to extract from the image dataset the areas of the anatomy relevant to a particular study or diagnosis of the patient; for example, one can locate affected or abnormal parts of the body. Segmentation of follow-up data and baseline lesion segmentation are also very important for assessing treatment response. There are different methods used for image segmentation. They can be classified based on how they are formulated and how the segmentation process is performed. The methods include those based on threshold values, edge-based, cluster-based, model-based and hybrid methods, and methods based on machine learning and deep learning. Other methods are based on growing, splitting and merging regions, finding discontinuities in edges, watershed segmentation, active contours and graph-based methods. In this thesis, we have developed methods for segmenting different types of medical images. We tested the methods on datasets of white blood cells (WBCs) and magnetic resonance images (MRI). The developed methods and the analysis performed on the image datasets are presented in three papers.

    In Paper A we proposed a method for segmenting nuclei and cytoplasm from white blood cell images. The method estimates the threshold for segmentation of nuclei automatically based on local minima. The method segments the WBCs before segmenting the cytoplasm, depending on the complexity of the objects in the image. For images where the WBCs are well separated from red blood cells (RBCs), the WBCs are segmented by taking the average of n images that were already filtered with a threshold value. For images where RBCs overlap the WBCs, the entire WBCs are segmented using simple linear iterative clustering (SLIC) and watershed methods. The cytoplasm is obtained by subtracting the segmented nucleus from the segmented WBC. The method is tested on two different publicly available datasets, and the results are compared with state-of-the-art methods.

    In Paper B, we proposed a method for segmenting brain tumors based on minimum spanning tree (MST) concepts. The method performs interactive segmentation based on the MST. The image is loaded into an interactive window for segmenting the tumor. The region of interest and the background are selected by clicking, which splits the MST into two trees: one of these trees represents the region of interest and the other represents the background. The proposed method was tested by segmenting two different 2D brain T1-weighted magnetic resonance image datasets. The method is simple to implement and the results indicate that it is accurate and efficient.

    In Paper C, we propose a method that processes a 3D MRI volume and partitions it into brain, non-brain tissue, and background segments. It is a graph-based method that uses the MST to separate the 3D MRI into these three region types. The graph is built from a preprocessed 3D MRI volume, followed by construction of the MST. The segmentation process produces three labeled connected components, which are reshaped back to the shape of the 3D MRI; the labels are used to segment the brain, non-brain tissues, and the background. The method was tested on three different publicly available datasets and the results were compared to different state-of-the-art methods.
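
    A minimal sketch of the MST idea behind Papers B and C is given below for the 2-D, two-region case: build a 4-connected grid graph weighted by intensity differences, compute its minimum spanning tree, and cut the heaviest edge on the unique MST path between two user-selected seeds so that each seed ends up in its own tree. All names, the edge weights and the seed interface are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_two_region_split(image, seed_fg, seed_bg):
    """Split a 2-D image into two regions by cutting a single MST edge."""
    h, w = image.shape
    idx = np.arange(h * w).reshape(h, w)
    flat = image.astype(float).ravel()

    # 4-connected grid edges weighted by absolute intensity difference
    # (a small epsilon keeps zero-difference edges in the sparse matrix).
    rows = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    cols = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    weights = np.abs(flat[rows] - flat[cols]) + 1e-6

    graph = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))
    mst = minimum_spanning_tree(graph).tocoo()

    # Adjacency list of the MST for a depth-first search between the seeds.
    adj = {}
    for u, v, wgt in zip(mst.row, mst.col, mst.data):
        adj.setdefault(u, []).append((v, wgt))
        adj.setdefault(v, []).append((u, wgt))

    s, t = idx[seed_fg], idx[seed_bg]
    parent = {s: (None, 0.0)}
    stack = [s]
    while stack:                                  # the MST path between s and t is unique
        u = stack.pop()
        if u == t:
            break
        for v, wgt in adj.get(u, []):
            if v not in parent:
                parent[v] = (u, wgt)
                stack.append(v)

    # Walk back from t to s, remembering the heaviest edge on the path.
    node, heaviest = t, (t, t, -1.0)
    while parent[node][0] is not None:
        prev, wgt = parent[node]
        if wgt > heaviest[2]:
            heaviest = (prev, node, wgt)
        node = prev

    # Cut that edge; the two remaining trees are the two regions.
    keep = ~(((mst.row == heaviest[0]) & (mst.col == heaviest[1])) |
             ((mst.row == heaviest[1]) & (mst.col == heaviest[0])))
    cut = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=mst.shape)
    _, labels = connected_components(cut, directed=False)
    return labels.reshape(h, w) == labels[s]      # True = region containing seed_fg
```

    In practice one would preprocess the image (for example smoothing or bias-field correction) before building the graph, as the thesis pipeline does for 3-D volumes.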