146 research outputs found

    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter, based on random single and mixed noise distributions, that attenuates the effect of noise while preserving edge details. Only then can robust, integrated and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise and speckle noise. In the first step, methods are evaluated through an exhaustive review of the different types of denoising methods, focusing on impulse noise, Gaussian noise and their related denoising filters. These include spatial filters (linear, non-linear and combinations thereof), transform-domain filters, neural-network-based filters, numerical filters, fuzzy filters, morphological filters, statistical filters, and supervised-learning-based filters. In the second step, the switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced in order to detect and remove impulse noise. A robust edge detection method is then applied, relying on an integrated process that includes non-maximum suppression, maximum sequence, thresholding and morphological operations; results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) with total variation is introduced in order to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise. A robust edge detection step is then applied to track the true edges; results are obtained on medical ultrasound and natural images. In the fourth step, a smoothing filter based on a deep feed-forward convolutional neural network (CNN) is introduced, supported by a specific learning algorithm, l2 loss-function minimization, a regularization method and batch normalization, all integrated in order to detect and remove impulse noise as well as mixed impulse and Gaussian noise. A robust edge detection step is again applied to track the true edges; results are obtained on natural images for both specific and non-specific noise levels.
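    To make the switching idea concrete, the sketch below shows a minimal impulse-noise switching filter: a pixel is replaced by its local median only when it deviates strongly from that median, so uncorrupted pixels (and hence edges) are left untouched. This illustrates the general principle only, not the dissertation's SAMFWMF; the window size and detection threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

def switching_impulse_filter(img, window=3, threshold=40):
    """Minimal switching filter for impulse noise (illustrative sketch).

    Pixels deviating strongly from their local median are flagged as
    impulses and replaced by that median; all other pixels are kept,
    which preserves edge detail.
    """
    med = median_filter(img, size=window)
    impulses = np.abs(img.astype(float) - med.astype(float)) > threshold
    out = img.copy()
    out[impulses] = med[impulses]
    return out
```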

    Study of Computational Image Matching Techniques: Improving Our View of Biomedical Image Data

    Image matching techniques have proven necessary in various fields of science and engineering, with many new methods and applications introduced over the years. In this PhD thesis, several computational image matching methods are introduced and investigated for improving the analysis of various biomedical image data. These improvements include the use of matching techniques for enhancing visualization of cross-sectional imaging modalities such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), denoising of retinal Optical Coherence Tomography (OCT), and high-quality 3D reconstruction of surfaces from Scanning Electron Microscope (SEM) images. This work greatly improves the interpretation of image data, with far-reaching consequences for basic sciences research. The thesis starts with a general notion of the image matching problem, followed by an overview of the topics covered. Several applications of image matching/registration in biomedical image processing are then introduced and investigated: a) registration-based slice interpolation, b) fast mesh-based deformable image registration, and c) simultaneous rigid registration and Robust Principal Component Analysis (RPCA) for speckle-noise reduction in retinal OCT images. Moving towards a different notion of image matching/correspondence, the problem of view synthesis and 3D reconstruction is considered next, with a focus on 3D reconstruction of microscopic samples from 2D images captured by SEM. Starting from sparse feature-based matching, an extensive analysis is provided of several well-known feature detector/descriptor techniques, namely ORB, BRIEF, SURF and SIFT, for multi-view 3D reconstruction, with qualitative and quantitative comparisons that reveal the shortcomings of sparse feature-based techniques. A novel framework using sparse-dense matching/correspondence is then introduced for high-quality 3D reconstruction of SEM images; as will be shown, it yields better reconstructions than state-of-the-art sparse-feature-based techniques. Even though the proposed framework produces satisfactory results, there is room for improvement, particularly for higher-complexity microscopic samples and for large displacements between corresponding points in micrographs. Therefore, based on the proposed framework, a new approach is proposed for high-quality 3D reconstruction of microscopic samples. While the two proposed techniques perform comparably on simpler samples, the new technique reconstructs highly complex samples more faithfully. The thesis concludes with an overview and pointers to future research directions using both multi-view and photometric techniques for 3D reconstruction of SEM images.
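    As an illustration of the sparse feature-based matching stage evaluated in the thesis, the sketch below detects and matches ORB keypoints between two views. The use of OpenCV is my assumption (the thesis does not prescribe a library), and the parameters are placeholders.

```python
import cv2

def match_orb(img1, img2, n_features=2000):
    """Sketch: sparse ORB feature matching between two views,
    e.g. a pair of SEM micrographs (grayscale uint8 arrays)."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # Hamming distance suits ORB's binary descriptors; cross-checking
    # keeps only mutually consistent matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```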

    Geodesic Active Fields: A Geometric Framework for Image Registration

    Image registration is the concept of mapping homologous points in a pair of images. In other words, one is looking for an underlying deformation field that matches one image to a target image. The spectrum of applications of image registration is extremely large: it ranges from bio-medical imaging and computer vision to remote sensing and geographic information systems, and even involves consumer electronics. Mathematically, image registration is an ill-posed inverse problem, which means that the exact solution might not exist or might not be unique. In order to render the problem tractable, it is usual to write it as an energy minimization and to introduce additional regularity constraints on the unknown data. In the case of image registration, one often minimizes an image mismatch energy and adds a penalty on the deformation field regularity as a smoothness prior. Here, we focus on the registration of the human cerebral cortex. Precise cortical registration is required, for example, in statistical group studies in functional MR imaging, or in the analysis of brain connectivity. In particular, we work with spherical inflations of the extracted hemispherical surface and associated features, such as cortical mean curvature. Spatial mapping between cortical surfaces can then be achieved by registering the respective spherical feature maps. Despite the simplified spherical geometry, inter-subject registration remains a challenging task, mainly due to the complexity and inter-subject variability of the involved brain structures. In this thesis, we therefore present a registration scheme which takes the peculiarities of the spherical feature maps into particular consideration. First, we realize that we need an appropriate hierarchical representation, so as to coarsely align based on the important structures with greater inter-subject stability before taking smaller and more variable details into account. Based on arguments from brain morphogenesis, we propose an anisotropic scale-space of mean-curvature maps, built around the Beltrami framework. Second, inspired by vision-related elements of psycho-physical Gestalt theory, we hypothesize that anisotropic Beltrami regularization better suits the requirements of image registration than traditional Gaussian filtering: different objects in an image should be allowed to move separately, and regularization should be limited to within the individual Gestalts. We render the regularization feature-preserving by limiting diffusion across edges in the deformation field, in clear contrast to indifferent linear smoothing. We do so by embedding the deformation field as a manifold in higher-dimensional space and minimizing the associated Beltrami energy, which represents the hyperarea of this embedded manifold as a measure of deformation field regularity. Further, instead of simply adding this regularity penalty to the image mismatch in lieu of the standard penalty, we propose to incorporate the local image mismatch as a weighting function into the Beltrami energy. The image registration problem is thus reformulated as a weighted minimal surface problem. This approach has several appealing aspects, including (1) invariance to re-parametrization and the ability to work with images defined on non-flat, Riemannian domains (e.g., curved surfaces, scale-spaces), and (2) intrinsic modulation of the local regularization strength as a function of the local image mismatch and/or noise level.
On a side note, we show that the proposed scheme can easily keep up with recent trends in image registration towards diffeomorphic and inverse-consistent deformation models. The proposed registration scheme, called Geodesic Active Fields (GAF), is non-linear and non-convex. Therefore, we propose an efficient optimization scheme based on splitting: data mismatch and deformation field regularity are optimized over two different deformation fields, which are constrained to be equal. The constraint is addressed using an augmented Lagrangian scheme, and the resulting optimization problem is solved efficiently by alternating minimization of simpler sub-problems. In particular, we show that the proposed method can easily compete with state-of-the-art registration methods, such as Demons. Finally, we provide an implementation of the fast GAF method on the sphere, so as to register the triangulated cortical feature maps. We build an automatic parcellation algorithm for the human cerebral cortex, which combines the delineations available on a set of atlas brains in a Bayesian approach, so as to automatically delineate the corresponding regions on a subject brain given its feature map. In a leave-one-out cross-validation study on 39 brain surfaces with 35 manually delineated gyral regions, we show that pairwise subject-atlas registration with the proposed spherical registration scheme significantly improves the individual alignment of cortical labels between subject and atlas brains and, consequently, that the estimated automatic parcellations after label fusion are of better quality.
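    Schematically (in my notation, consistent with the description above, not a formula quoted from the thesis), the weighted minimal surface reformulation replaces the usual sum of mismatch and penalty by a mismatch-weighted Beltrami energy of the embedded deformation field u:

```latex
E_{\mathrm{GAF}}[u] \;=\; \int_{\Omega} w\bigl(e(x,u(x))\bigr)\,
\sqrt{\det g(x,u(x))}\;\mathrm{d}x
```

Here \sqrt{\det g} is the hyperarea element of the deformation field embedded as a manifold in higher-dimensional space, e is the local image mismatch, and the weight w grows with e, so that minimizing the weighted hyperarea simultaneously drives down the mismatch and regularizes the field, more strongly where the images still disagree.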

    Joint methods in imaging based on diffuse image representations

    This thesis deals with the application and the analysis of different variants of the Mumford-Shah model in the context of image processing. In models of this kind, a given function is approximated in a piecewise smooth or piecewise constant manner. In particular, the numerical treatment of the discontinuities requires additional models, which are also outlined in this work. The main part of this thesis is concerned with four different topics. Simultaneous edge detection and registration of two images: the image edges are detected with the Ambrosio-Tortorelli model, an approximation of the Mumford-Shah model that represents the discontinuity set with a phase field, and the registration is based on these edges. The registration obtained by this model is fully symmetric, in the sense that the same matching is obtained if the roles of the two input images are swapped. Detection of grain boundaries from atomic-scale images of metals or metal alloys: this is an image processing problem from materials science, where atomic-scale images are obtained either experimentally, for instance by transmission electron microscopy, or by numerical simulation tools. Grains are homogeneous material regions whose atomic lattice orientation differs from their surroundings. Based on a Mumford-Shah type functional, the grain boundaries are modeled as the discontinuity set of the lattice orientation. In addition to the grain boundaries, the model incorporates the extraction of a global elastic deformation of the atomic lattice. Numerically, the discontinuity set is modeled by a level set function following the approach by Chan and Vese. Joint motion estimation and restoration of motion-blurred video: a variational model for joint object detection, motion estimation and deblurring of consecutive video frames is proposed. For this purpose, a new motion blur model is developed that accurately describes the blur also close to the boundary of a moving object; the video is assumed to consist of an object moving in front of a static background, and the segmentation into object and background is handled by a Mumford-Shah type aspect of the proposed model. Convexification of the binary Mumford-Shah segmentation model: after applying Mumford-Shah type models to the specific image processing problems above, the Mumford-Shah model itself is studied more closely. Inspired by the work of Nikolova, Esedoglu and Chan, a method is developed that allows global minimization of the binary Mumford-Shah segmentation model by solving a convex, unconstrained optimization problem. In an outlook, segmentation of flow fields into piecewise affine regions using this convexification method is briefly discussed.
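    For reference, a standard form of the Ambrosio-Tortorelli approximation mentioned above reads as follows (u is the piecewise smooth approximation of the input image g, and v is the phase field, close to 0 on edges and close to 1 elsewhere; the constants are generic):

```latex
E_{\varepsilon}[u,v] \;=\; \int_{\Omega} (u-g)^2 \,\mathrm{d}x
\;+\; \mu \int_{\Omega} v^2\,|\nabla u|^2 \,\mathrm{d}x
\;+\; \nu \int_{\Omega} \Bigl( \varepsilon\,|\nabla v|^2
+ \frac{(1-v)^2}{4\varepsilon} \Bigr) \mathrm{d}x
```

As \varepsilon \to 0, this family of elliptic functionals Gamma-converges to the Mumford-Shah functional, with the phase field v concentrating around the discontinuity set.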

    Ultrasound image processing in the evaluation of labor induction failure risk

    Labor induction is defined as the artificial stimulation of uterine contractions for the purpose of vaginal birth, and it is prescribed for both medical and elective reasons. Success in labor induction is defined as achieving vaginal delivery; cesarean section is one of the potential risks of labor induction, occurring in about 20% of inductions. A ripe cervix (soft and distensible) is needed for a successful labor. During ripening, cervical tissues undergo microstructural changes: collagen becomes disorganized and water content increases. These changes affect the interaction between cervical tissue and sound waves during transvaginal ultrasound scanning and are perceived as gray-level intensity variations in the echographic image. Texture analysis can be used to analyze these variations and thus provides a means to evaluate cervical ripening non-invasively.
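    One common way to quantify such gray-level variations is via gray-level co-occurrence matrix (GLCM) descriptors; the sketch below is a typical choice shown for illustration, not necessarily the exact features used in this work, and the distances, angles and properties are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def cervical_texture_features(roi):
    """Sketch: GLCM texture descriptors for an 8-bit grayscale
    region of interest from a transvaginal ultrasound image."""
    glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the chosen distances and angles.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "correlation", "energy")}
```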

    Algorithms for super-resolution of images based on Sparse Representation and Manifolds

    Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single- and multi-image methods. This thesis focuses on developing algorithms based on mathematical theories for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: i.e., we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although the existing methods already perform well, they do not take into account the geometry of the data when regularizing the solution, clustering data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric), or learning dictionaries (often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS, a structure-tensor-based regularization term, in order to improve the sharpness of edges; SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data; AGNN and GOC outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size; aSOB outperforms both PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR. Our proposed G2SR algorithm shows better quantitative results (PSNR, SSIM, FSIM) and visual quality when compared with state-of-the-art methods.
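    To illustrate the kind of patch-based, sparsity-constrained reconstruction these methods build on, here is a minimal sketch using coupled low/high-resolution dictionaries; D_l and D_h are assumed to have been learned offline (e.g. with K-SVD), and the solver choice is mine, not the thesis's.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sr_patch(lr_patch, D_l, D_h, n_nonzero=5):
    """Sketch: sparse-coding super-resolution of one flattened patch.

    D_l, D_h are coupled dictionaries (columns are atoms) for the low-
    and high-resolution domains; the sparse code of the LR patch over
    D_l is reused to synthesize the HR patch from D_h.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D_l, lr_patch)      # solve D_l @ alpha ~ lr_patch, alpha sparse
    alpha = omp.coef_
    return D_h @ alpha          # reconstruct the high-resolution patch
```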

    Composite Finite Elements for Trabecular Bone Microstructures

    In many medical and technical applications, numerical simulations need to be performed for objects with interfaces of geometrically complex shape. We focus on the biomechanical problem of elasticity simulations for trabecular bone microstructures. The goal of this dissertation is to develop and implement an efficient simulation tool for finite element simulations on such structures, so-called composite finite elements. We will deal with both the case of material/void interfaces (complicated domains) and the case of interfaces between different materials (discontinuous coefficients). In classical finite element simulations, geometric complexity is encoded in tetrahedral and typically unstructured meshes. Composite finite elements, in contrast, encode geometric complexity in specialized basis functions on a uniform mesh of hexahedral structure. In contrast to alternative approaches (e.g., fictitious domain methods, generalized finite element methods, immersed interface methods, partition of unity methods, unfitted meshes, and extended finite element methods), the composite finite elements are tailored to geometry descriptions by 3D voxel image data and use the corresponding voxel grid as computational mesh, without introducing additional degrees of freedom, thus making use of efficient data structures for uniformly structured meshes. The composite finite element method for complicated domains goes back to Wolfgang Hackbusch and Stefan Sauter and restricts standard affine finite element basis functions on the uniformly structured tetrahedral grid (obtained by subdivision of each cube into six tetrahedra) to an approximation of the interior. This can be implemented as a composition of standard finite element basis functions on a local, auxiliary and purely virtual grid by which we approximate the interface. In the case of discontinuous coefficients, the same local auxiliary composition approach is used. Composition weights are obtained by solving local interpolation problems, for which coupling conditions across the interface need to be determined. These depend both on the local interface geometry and on the (scalar or tensor-valued) material coefficients on both sides of the interface. We consider heat diffusion as a scalar model problem and linear elasticity as a vector-valued model problem to develop and implement the composite finite elements. Uniform cubic meshes contain a natural hierarchy of coarsened grids, which allows us to implement a multigrid solver for the case of complicated domains. Besides simulations of single loading cases, we also apply the composite finite element method to the problem of determining effective material properties, e.g. for multiscale simulations. For periodic microstructures, this is achieved by solving corrector problems on the fundamental cells using affine-periodic boundary conditions corresponding to uniaxial compression and shearing. For statistically periodic trabecular structures, representative fundamental cells can be identified but do not permit the periodic approach. Instead, macroscopic displacements are imposed using the same set of affine-periodic boundary conditions as before, now applied as Dirichlet conditions on all faces. The stress response of the material is subsequently computed only on an interior subdomain to prevent artificial stiffening near the boundary.
We finally check for orthotropy of the macroscopic elasticity tensor and identify its axes.
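    Schematically (in standard corrector notation of my choosing, consistent with the description above), the effective tensor for a periodic microstructure is obtained by solving, for each macroscopic unit strain \bar{\varepsilon}^{(kl)}, a cell problem on the fundamental cell Y and averaging the resulting stress:

```latex
\operatorname{div}\Bigl( C(y)\bigl( \bar{\varepsilon}^{(kl)}
+ \varepsilon(w^{kl}(y)) \bigr) \Bigr) = 0 \quad \text{in } Y,
\qquad
C^{\mathrm{eff}}_{ijkl} \;=\; \frac{1}{|Y|} \int_{Y}
\Bigl[ C(y)\bigl( \bar{\varepsilon}^{(kl)}
+ \varepsilon(w^{kl}) \bigr) \Bigr]_{ij} \,\mathrm{d}y
```

Here C(y) is the microscopic elasticity tensor, \varepsilon(\cdot) the symmetric gradient, and w^{kl} the corrector displacement under affine-periodic boundary conditions; for the statistically periodic trabecular structures, the same macroscopic strains are instead imposed as Dirichlet data and the averaging is restricted to an interior subdomain.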

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    The development of third-generation synchrotron light sources has laid the foundation for studying the 3D structure of opaque samples at micrometer resolution and beyond. This led to the development of X-ray synchrotron micro-computed tomography, which fostered the creation of imaging facilities for studying samples of the most diverse kinds, e.g., model organisms, in order to better understand the physiology of complex living systems. The development of modern control systems and robotics enabled the full automation of X-ray imaging experiments and the calibration of the experimental setup parameters during operation. Advances in digital detector systems brought improvements in resolution, dynamic range, sensitivity and other essential properties. These improvements considerably increased the throughput of the imaging process, but on the other hand the experiments began to generate substantially larger amounts of data, up to tens of terabytes, which were subsequently processed manually. These technical advances thus paved the way for more efficient high-throughput experiments that study large numbers of samples and produce datasets of better quality. There is therefore a strong demand in the scientific community for an efficient, automated workflow for X-ray data analysis that can cope with such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments, since they were developed for ad-hoc scenarios in medical imaging; they are neither optimized for high-throughput data streams nor able to exploit the hierarchical nature of the samples. The main contribution of the present work is a new automated analysis workflow suited to the efficient processing of heterogeneous X-ray datasets of hierarchical nature. The developed workflow is based on improved methods for data pre-processing, registration, localization and segmentation. Every workflow stage that involves a training phase can be fine-tuned automatically to find the best hyperparameters for the specific dataset. For the analysis of fiber structures in samples, a new, highly parallelizable 3D orientation analysis method was developed, based on a novel concept of emitted rays, which enables more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets in order to quantitatively assess their applicability under different imaging conditions. The workflow was shown to be capable of processing a series of datasets of similar kind. Furthermore, efficient CPU/GPU implementations of the developed workflow and methods are presented and made available to the community as modules for the Python language. The developed automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in developmental biology and materials science. In particular, it was applied to the analysis of medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head kidney and heart. Moreover, the developed 3D orientation analysis method was employed in the morphological analysis of polymer scaffold datasets in order to steer a fabrication process towards desirable properties.
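    The per-stage hyperparameter fine-tuning described above can be pictured as a plain search loop over a parameter grid; the sketch below is generic and hypothetical (stage_fn, score_fn and the grid are placeholders, not the API of the published Python modules).

```python
from itertools import product

def tune_stage(stage_fn, param_grid, train_pairs, score_fn):
    """Sketch: grid-search fine-tuning of one trainable workflow stage.

    stage_fn(volume, **params) -> prediction; score_fn(prediction,
    reference) -> float; train_pairs is a list of (volume, reference
    annotation) pairs used to rate each parameter combination.
    """
    best_params, best_score = None, float("-inf")
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid, combo))
        score = sum(score_fn(stage_fn(vol, **params), ref)
                    for vol, ref in train_pairs) / len(train_pairs)
        if score > best_score:
            best_params, best_score = params, score
    return best_params
```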

    On motion in dynamic magnetic resonance imaging: Applications in cardiac function and abdominal diffusion

    Magnetic resonance imaging (MRI) is nowadays a powerful tool for clinical diagnosis owing to its flexibility and its sensitivity to a wide range of tissue properties. Its main advantages are its outstanding versatility and its ability to provide high contrast between soft tissues. Thanks to this versatility, MRI can be used to observe different physical phenomena within the human body by combining different types of pulses within the sequence, which has enabled distinct modalities with multiple biological and clinical applications. MR acquisition is, however, a slow process, which entails a trade-off between resolution and acquisition time (Lima da Cruz, 2016; Royuela-del Val, 2017). Because of this, the presence of physiological motion during acquisition can severely degrade image quality and lengthen the acquisition, thereby also increasing patient discomfort. This practical limitation represents a major obstacle to the clinical viability of MRI. This doctoral thesis addresses two problems of interest in the field of MRI in which physiological motion plays a leading role: on the one hand, the robust estimation of myocardial rotation and strain parameters from dynamic MR tagging images for the diagnosis and classification of cardiomyopathies; on the other, the reconstruction of high-resolution, high signal-to-noise-ratio (SNR) apparent diffusion coefficient (ADC) maps from multiparametric diffusion-weighted imaging (DWI) acquisitions in the liver.
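    For context, ADC mapping rests on the standard mono-exponential diffusion-decay model; with two diffusion weightings b_1 < b_2, the voxel-wise estimate follows directly:

```latex
S(b) \;=\; S_0\, e^{-b\,\mathrm{ADC}}
\quad\Longrightarrow\quad
\widehat{\mathrm{ADC}} \;=\; \frac{\ln S(b_1) - \ln S(b_2)}{\,b_2 - b_1\,}
```

This two-point formula is the simplest instance of the model; the high-SNR, high-resolution maps targeted in the thesis instead fit it jointly across multiple acquisitions, which is where motion compensation becomes critical.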