211 research outputs found

    Automatisation du traitement d'images acquises par IRM de diffusion et techniques d'acquisition avancées avec application sur le primate

    Diffusion magnetic resonance imaging (dMRI) is a non-invasive medical imaging technology that maps the axonal structure of the brain and extracts measures of white-matter orientation and integrity. Despite nearly 40 years of research interest in dMRI, only a small percentage of the modern techniques developed are used in clinical and hospital settings. This is largely because the community faces a major problem of variability and validation, which makes deploying these technologies difficult and risky. Since no gold-standard measurement exists in dMRI, validating the execution of an algorithm or the validity of a theory usually means reproducing results observed in humans in the brains of similar animals. Primates are particularly interesting for this purpose, since their brain morphology is very close to that of humans. However, few of the automated dMRI processing tools developed for humans run correctly on small-animal or primate images. These images are acquired at finer spatial resolutions and richer angular resolutions, and generally suffer from more intense artifacts, requiring more iterations to converge and fine tuning of execution parameters. In this thesis, we present a new tool for automating the processing of dMRI data that can be used to produce diffusion models and measures. We describe its modular implementation, which allows simple maintenance of dependencies, modules, and algorithms, as well as extensive configuration of the processing steps. We demonstrate the robustness and reproducibility of its execution on high-resolution dMRI data. We also present a study of the variability of the primate diffusion data contained in the PRIME-DE database.
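The modular, configurable pipeline design described above can be illustrated with a minimal sketch. All names here (the step registry, the step functions, the `Pipeline` class) are hypothetical and do not reflect the actual API of the tool the thesis presents; the point is only the pattern of swappable, individually configured processing steps.

```python
# Hypothetical sketch of a modular dMRI processing pipeline: each step is
# registered under a stable name and configured with its own parameters,
# so algorithms can be swapped without touching the pipeline driver.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Tuple

REGISTRY: Dict[str, Callable[..., Any]] = {}

def register(name: str):
    """Decorator: expose a processing step under a stable name."""
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@register("denoise")
def denoise(data, strength=1.0):
    # Placeholder: a real step would operate on image volumes.
    return f"denoised({data}, strength={strength})"

@register("eddy_correct")
def eddy_correct(data, iterations=5):
    return f"eddy_corrected({data}, iters={iterations})"

@dataclass
class Pipeline:
    steps: List[Tuple[str, dict]] = field(default_factory=list)

    def run(self, data):
        for name, kwargs in self.steps:
            data = REGISTRY[name](data, **kwargs)
        return data

# Configuration is plain data, so it is easy to tune per dataset
# (e.g. more iterations for artifact-heavy primate images).
pipe = Pipeline(steps=[("denoise", {"strength": 2.0}),
                       ("eddy_correct", {"iterations": 20})])
print(pipe.run("dwi.nii"))
```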

    Scene-Dependency of Spatial Image Quality Metrics

    This thesis is concerned with the measurement of spatial imaging performance and the modelling of spatial image quality in digital capturing systems. Spatial imaging performance and image quality relate to the objective and subjective reproduction of luminance contrast signals by the system, respectively; they are critical to overall perceived image quality. The Modulation Transfer Function (MTF) and Noise Power Spectrum (NPS) describe the signal (contrast) transfer and noise characteristics of a system, respectively, with respect to spatial frequency. They are both, strictly speaking, only applicable to linear systems since they are founded upon linear system theory. Many contemporary capture systems use adaptive image signal processing, such as denoising and sharpening, to optimise output image quality. These non-linear processes change their behaviour according to characteristics of the input signal (i.e. the scene being captured). This behaviour renders system performance “scene-dependent” and difficult to measure accurately. The MTF and NPS are traditionally measured from test charts containing suitable predefined signals (e.g. edges, sinusoidal exposures, noise or uniform luminance patches). These signals trigger adaptive processes at uncharacteristic levels since they are unrepresentative of natural scene content. Thus, for systems using adaptive processes, the resultant MTFs and NPSs are not representative of performance “in the field” (i.e. capturing real scenes). Spatial image quality metrics for capturing systems aim to predict the relationship between MTF and NPS measurements and subjective ratings of image quality. They cascade both measures with contrast sensitivity functions that describe human visual sensitivity with respect to spatial frequency. The most recent metrics designed for adaptive systems use MTFs measured using the dead leaves test chart that is more representative of natural scene content than the abovementioned test charts. 
This marks a step toward modelling image quality with respect to real scene signals. This thesis presents novel scene-and-process-dependent MTFs (SPD-MTFs) and NPSs (SPD-NPSs). They are measured from imaged pictorial scene (or dead leaves target) signals to account for system scene-dependency. Further, a number of spatial image quality metrics are revised to account for capture-system and visual scene-dependency: their MTF and NPS parameters are replaced with SPD-MTFs and SPD-NPSs, and their standard visual functions are replaced with contextual detection (cCSF) or contextual discrimination (cVPF) functions. In addition, two novel spatial image quality metrics are presented (the log Noise Equivalent Quanta (NEQ) and the Visual log NEQ) that implement SPD-MTFs and SPD-NPSs. The metrics, SPD-MTFs, and SPD-NPSs were validated by analysing measurements from simulated image capture pipelines that applied either linear or adaptive image signal processing. The SPD-NPS measures displayed little evidence of measurement error, and the metrics performed most accurately when they used SPD-NPSs measured from images of scenes. The benefit of deriving SPD-MTFs from images of scenes was, however, traded off against measurement bias; most metrics performed most accurately with SPD-MTFs derived from dead leaves signals. Implementing the cCSF or cVPF did not increase metric accuracy. The log NEQ and Visual log NEQ metrics proposed in this thesis were highly competitive, outperforming metrics of the same genre. They were also more consistent than the IEEE P1858 Camera Phone Image Quality (CPIQ) metric when their input parameters were modified. The advantages and limitations of all performance measures and metrics are discussed, together with their practical implementation and relevant applications.
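The classical linear-systems MTF measurement that this work builds on can be sketched briefly: differentiate an edge profile to obtain the line-spread function (LSF) and take its normalized Fourier magnitude. This is a simplified illustration, not the thesis' SPD-MTF procedure; a real ISO 12233 slanted-edge analysis adds angle estimation, projection binning, and oversampling.

```python
# Sketch: 1-D MTF from a simulated edge. We blur an ideal step with a
# Gaussian PSF, differentiate the edge-spread function (ESF) to get the
# LSF, and normalize the FFT magnitude so that MTF(0) = 1.
import numpy as np
from math import erf, sqrt

def mtf_from_edge(esf: np.ndarray) -> np.ndarray:
    lsf = np.diff(esf)                 # edge-spread -> line-spread
    lsf = lsf * np.hanning(lsf.size)   # taper to reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                # normalize: MTF at DC is 1

# Simulated capture: ideal step edge blurred by a Gaussian PSF (sigma in px).
sigma = 2.0
x = np.arange(-64, 64)
esf = np.array([0.5 * (1 + erf(t / (sigma * sqrt(2)))) for t in x])

mtf = mtf_from_edge(esf)
# For a Gaussian blur the MTF falls off monotonically from 1.0.
print(mtf[:5])
```

For an adaptive (scene-dependent) system, the same computation applied to predefined chart signals versus natural-scene content would yield different curves, which is exactly the measurement problem the thesis addresses.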

    Denoising and enhancement of digital images : variational methods, integrodifferential equations, and wavelets

    The topics of this thesis are methods for denoising, enhancement, and simplification of digital image data. Special emphasis lies on the relations and structural similarities between several classes of methods that are motivated from different contexts. In particular, the methods treated in this thesis fall into three classes: For variational approaches and partial differential equations, the notion of the derivative is the tool of choice for modelling regularity of the data and of the desired result. A general framework for such approaches is proposed that involves all partial derivatives of a prescribed order and experimentally is capable of producing piecewise polynomial approximations of the given data. The second class of methods uses wavelets to represent the data, which makes it possible to understand the filtering as a very simple pointwise application of a nonlinear function. Viewing these wavelets as derivatives of smoothing kernels is the basis for relating these methods to the integrodifferential equations investigated here. In the third case, values of the image in a neighbourhood are averaged, where the weights of this averaging can be adapted according to different criteria. By refining the pixel grid and passing to scaling limits, connections to partial differential equations become visible here, too; they are described in the framework explained before. Numerical aspects of the simplification of images are presented with respect to the NDS energy function, a unifying approach that allows many of the aforementioned methods to be modelled. The behaviour of the filtering methods is documented with numerical examples.
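The second class of methods above, filtering as a pointwise nonlinear function applied to wavelet coefficients, can be sketched with a one-level Haar transform and soft shrinkage. This is a generic textbook illustration of the principle, not the specific scheme analysed in the thesis; signal, noise level, and threshold are all chosen for demonstration.

```python
# Sketch: wavelet denoising = transform, apply a pointwise nonlinearity
# (soft shrinkage) to the detail coefficients, transform back.
import numpy as np

def haar_forward(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (smoothing)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (derivative-like)
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    """Pointwise nonlinear shrinkage applied to each coefficient."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 64)          # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(clean.size)

a, d = haar_forward(noisy)
denoised = haar_inverse(a, soft(d, 0.2))        # shrink only the details

print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

Because the signal is piecewise constant, its Haar detail coefficients are (almost) zero, so thresholding removes mostly noise; this is the simplest instance of the nonlinearity-on-coefficients viewpoint the thesis relates to integrodifferential equations.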

    Doctor of Philosophy

    Diffusion tensor MRI (DT-MRI or DTI) has been proven useful for characterizing biological tissue microstructure, with the majority of DTI studies having previously been performed in the brain. Other studies have shown that changes in DTI parameters are detectable in the presence of cardiac pathology, recovery, and development, and provide insight into the microstructural mechanisms of these processes. However, the technical challenges of implementing cardiac DTI in vivo, including the prohibitive scan times inherent to DTI and the difficulty of measuring small-scale diffusion in the beating heart, have limited its widespread usage. This research aims to address these technical challenges by: (1) formulating a model-based reconstruction algorithm to accurately estimate DTI parameters directly from fewer MRI measurements and (2) designing novel diffusion-encoding MRI pulse sequences that compensate for the higher-order motion of the beating heart. The model-based reconstruction method was tested on undersampled DTI data and its performance was compared against other state-of-the-art reconstruction algorithms. Model-based reconstruction was shown to produce DTI parameter maps with less blurring and noise and to estimate global DTI parameters more accurately than alternative methods. Through numerical simulations and experimental demonstrations in live rats, higher-order motion-compensated diffusion encoding was shown to successfully eliminate signal loss due to motion, which in turn produced data of sufficient quality to accurately estimate DTI parameters such as fiber helix angle. Ultimately, the model-based reconstruction and higher-order motion compensation methods were combined to characterize changes in cardiac microstructure in a rat model with inducible arterial hypertension, demonstrating the ability of cardiac DTI to detect pathological changes in living myocardium.

    Time-fractional Cahn-Hilliard equation: Well-posedness, degeneracy, and numerical solutions

    In this paper, we derive the time-fractional Cahn-Hilliard equation from continuum mixture theory with a modification of Fick's law of diffusion. This model describes the process of phase separation with nonlocal memory effects. We analyze the existence, uniqueness, and regularity of weak solutions of the time-fractional Cahn-Hilliard equation. In this regard, we consider degenerate mobility functions and free energies of Landau, Flory-Huggins, and double-obstacle type. We apply the Faedo-Galerkin method to the system, derive energy estimates, and use compactness theorems to pass to the limit in the discrete form. In order to compensate for the missing chain rule of fractional derivatives, we prove a fractional chain inequality for semiconvex functions. The work concludes with numerical simulations and a sensitivity analysis showing the influence of the fractional power. Here, we consider a convolution quadrature scheme for the time-fractional component, and use a mixed finite element method for the space discretization.
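The convolution-quadrature idea mentioned above can be illustrated in its simplest form: the Grünwald-Letnikov weights w_k = (-1)^k C(α, k) discretize a fractional derivative of order α as a discrete convolution over the solution history, which is the source of the nonlocal memory effect. A minimal sanity check on f(t) = t, whose order-α Riemann-Liouville derivative is t^(1-α)/Γ(2-α); this is an illustration of the quadrature principle, not the paper's scheme for the full Cahn-Hilliard system.

```python
# Sketch: Grünwald-Letnikov convolution quadrature for D^alpha f.
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    """First n+1 Grünwald-Letnikov weights via the stable recurrence
    w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def frac_derivative(f_vals, alpha, h):
    """D^alpha f at the last grid point (assumes f(0) = 0), as the
    discrete convolution h^{-alpha} * sum_k w_k f(t_{n-k})."""
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    return h ** (-alpha) * np.dot(w, f_vals[::-1])

h, alpha = 1e-3, 0.5
t = np.arange(0, 1 + h / 2, h)                    # grid on [0, 1]
approx = frac_derivative(t, alpha, h)             # f(t) = t
exact = t[-1] ** (1 - alpha) / gamma(2 - alpha)   # = 2/sqrt(pi) at t = 1
print(approx, exact)
```

Note that, unlike an integer-order time stepper, every new time step convolves against the entire history through the weights w_k; this memory is what makes time-fractional solvers expensive and motivates the quadrature structure.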