11 research outputs found

    Exposure Fusion Using Boosting Laplacian Pyramid

    Abstract: This paper proposes a new exposure fusion approach for producing a high-quality image from multiple exposure images. A novel hybrid exposure weight measurement is developed, combining local and global weights that account for the exposure quality across the differently exposed images with a saliency weight based on just-noticeable distortion. This hybrid weight is guided not only by a single image's exposure level but also by the relative exposure levels between the different images. The core of the approach is a novel boosting Laplacian pyramid, which boosts the detail and base signals separately under the guidance of the proposed exposure weight. The approach effectively blends multiple exposure images of static scenes while preserving both color appearance and texture structure. Experimental results demonstrate that it produces visually pleasing fusion results with better color appearance and more texture detail than existing exposure fusion techniques and tone mapping operators. Index Terms: boosting Laplacian pyramid, exposure fusion, global and local exposure weight, gradient vector
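The core mechanics can be illustrated with a plain Laplacian-pyramid fusion sketch. This is a generic, simplified stand-in: the weight below is a basic well-exposedness term, not the paper's hybrid local/global/JND-saliency weight, and the pyramid operators are crude (2x average pooling and nearest-neighbour upsampling); all function names are invented for illustration.

```python
# Minimal sketch of exposure fusion via Laplacian-pyramid blending.
# NOT the paper's boosting scheme; a hedged toy in the same spirit.
import numpy as np

def down(img):
    """Downsample by 2 with average pooling (assumes even dimensions)."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):
    """Nearest-neighbour upsampling by 2."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def pyramids(img, levels):
    """Gaussian and Laplacian pyramids of a 2-D image."""
    gauss = [img]
    for _ in range(levels - 1):
        gauss.append(down(gauss[-1]))
    lap = [g - up(gn) for g, gn in zip(gauss[:-1], gauss[1:])] + [gauss[-1]]
    return gauss, lap

def fuse(exposures, levels=3):
    """Blend exposures with well-exposedness weights in the Laplacian domain."""
    # Simple per-pixel weight: prefer mid-grey pixels (a common proxy;
    # the paper uses a hybrid local/global/saliency weight instead).
    ws = [np.exp(-((e - 0.5) ** 2) / 0.08) for e in exposures]
    norm = np.sum(ws, axis=0) + 1e-12
    ws = [w / norm for w in ws]
    fused_lap = None
    for e, w in zip(exposures, ws):
        wg, _ = pyramids(w, levels)       # Gaussian pyramid of the weight map
        _, el = pyramids(e, levels)       # Laplacian pyramid of the exposure
        contrib = [wl * ll for wl, ll in zip(wg, el)]
        fused_lap = contrib if fused_lap is None else [a + b for a, b in zip(fused_lap, contrib)]
    # Collapse the fused pyramid from coarse to fine.
    out = fused_lap[-1]
    for lvl in reversed(fused_lap[:-1]):
        out = up(out) + lvl
    return out

under = np.full((8, 8), 0.2)   # toy "under-exposed" frame
over = np.full((8, 8), 0.8)    # toy "over-exposed" frame
result = fuse([under, over])
print(result.shape)  # (8, 8)
```

For the two constant toy frames the weights are equal, so the fused result is their average (0.5 everywhere); real inputs would blend per pixel and per scale.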

    Weighted Least Squares Based Detail Enhanced Exposure Fusion


    Deep Learning Enabling Accurate Imaging Beyond Device Limitations

    Tohoku University, Doctor of Philosophy (Information Sciences) thesis

    Image fusion for the novelty rotating synthetic aperture system based on vision transformer

    Rotating synthetic aperture (RSA) technology offers a promising route to large-aperture, lightweight optical remote-sensing systems. It employs a rectangular primary mirror, which produces a noncircularly symmetric point-spread function that changes over time as the mirror rotates. Consequently, an appropriate image-fusion method is needed to merge the high-resolution information that is intermittently captured from different directions in the image sequence as the mirror rotates. However, existing image-fusion methods struggle to address the unique imaging mechanism of this system and the characteristics of the geostationary orbit in which it operates. To address this challenge, we model the imaging process of a noncircular rotating pupil and analyse its on-orbit imaging characteristics. Based on this analysis, we propose an image-fusion network based on a vision transformer. The network incorporates inter-frame mutual attention and intra-frame self-attention mechanisms, enabling more effective extraction of temporal and spatial information from the image sequence. Specifically, mutual attention models the correlation between pixels that are close to each other in the spatial and temporal dimensions, whereas long-range spatial dependencies are captured by intra-frame self-attention in a rotated variable-size attention block. The fusion of spatiotemporal information is then enhanced using video swin transformer blocks. Extensive digital simulations and semi-physical imaging experiments on remote-sensing images from the WorldView-3 satellite demonstrate that our method outperforms both image-fusion methods designed for the RSA system and state-of-the-art general deep-learning-based methods.
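The inter-frame mutual-attention idea can be sketched as generic cross-attention between token features of two frames: queries come from one frame, keys and values from the other. This is a hedged illustration, not the paper's rotated variable-size attention block; the shapes, names, and single-head formulation are assumptions.

```python
# Generic single-head cross-attention ("mutual attention" flavour) in NumPy.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def mutual_attention(frame_a, frame_b):
    """Cross-attention: queries from frame_a, keys/values from frame_b.

    frame_a, frame_b: (N, d) arrays of N pixel/token features of dim d.
    Returns (N, d) features of frame_a enriched with frame_b information.
    """
    d = frame_a.shape[1]
    scores = frame_a @ frame_b.T / np.sqrt(d)   # (N, N) affinities
    attn = softmax(scores, axis=-1)             # each row sums to 1
    return attn @ frame_b                       # weighted mix of frame_b tokens

rng = np.random.default_rng(0)
a = rng.standard_normal((16, 8))   # 16 tokens from frame t
b = rng.standard_normal((16, 8))   # 16 tokens from frame t+1
out = mutual_attention(a, b)
print(out.shape)  # (16, 8)
```

A real implementation would add learned query/key/value projections, multiple heads, and the rotated windowing the paper describes.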

    Image Registration Using a Feature-Blending Network, with Applications to High Dynamic Range Imaging and Video Super-Resolution

    Doctoral dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University Graduate School, August 2020 (advisor: Nam Ik Cho). This dissertation presents a deep end-to-end network for high dynamic range (HDR) imaging of dynamic scenes with background and foreground motions. Generating an HDR image from a sequence of multi-exposure images is challenging when the images are misaligned because they were taken in a dynamic situation. Hence, recent methods first align the multi-exposure images to the reference before merging, using patch matching, optical flow, homography transformation, or an attention module. Because explicitly aligning photos with different exposures is inherently difficult, this dissertation proposes a deep network that synthesizes aligned images by blending the information from the multi-exposure inputs. Specifically, the proposed network generates under- and over-exposed images that are structurally aligned to the reference by blending all the information from the dynamic multi-exposure images. The primary idea is that blending two images in the deep-feature domain is effective for synthesizing multi-exposure images that are structurally aligned to the reference, resulting in better-aligned images than pixel-domain blending or geometric transformation methods. The proposed alignment network consists of a two-way encoder that extracts features from two images separately, several convolution layers that blend the deep features, and a decoder that constructs the aligned images. The network is shown to generate well-aligned images across a wide range of exposure differences and can thus be used effectively for HDR imaging of dynamic scenes. Moreover, by adding a simple merging network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained. This dissertation also presents a deep end-to-end network for video super-resolution (VSR) of frames with motion.
Reconstructing a high-resolution (HR) frame from a sequence of adjacent frames is challenging when the frames are misaligned. Hence, recent methods first align the adjacent frames to the reference using optical flow or a spatial transformer network (STN). Because explicitly aligning frames is inherently difficult, this dissertation proposes a deep network that synthesizes aligned frames by blending the information from adjacent frames. Specifically, the proposed network generates adjacent frames that are structurally aligned to the reference by blending all the information from the neighboring frames. The primary idea is again that blending two images in the deep-feature domain is effective for synthesizing frames that are structurally aligned to the reference, resulting in better-aligned images than pixel-domain blending or geometric transformation methods. The alignment network consists of a two-way encoder that extracts features from two images separately, several convolution layers that blend the deep features, and a decoder that constructs the aligned images. The network is shown to align adjacent frames very well and can thus be used effectively for VSR. Moreover, by adding a simple reconstruction network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained. In addition to the individual HDR imaging and VSR networks, this dissertation presents a deep end-to-end network for joint HDR-SR of dynamic scenes with background and foreground motions. The proposed HDR imaging and VSR networks enhance the dynamic range and the resolution of images, respectively; however, both can be enhanced simultaneously by a single network. To this end, a network with the same structure as the proposed VSR network is proposed.
The network is shown to reconstruct final results with a higher dynamic range and resolution. It is compared with several methods designed with existing HDR imaging and VSR networks, and shows both qualitatively and quantitatively better results.
Contents: 1 Introduction. 2 Related Work (2.1 High Dynamic Range Imaging: 2.1.1 Rejecting Regions with Motions, 2.1.2 Alignment Before Merging, 2.1.3 Patch-based Reconstruction, 2.1.4 Deep-learning-based Methods, 2.1.5 Single-Image HDRI; 2.2 Video Super-resolution: 2.2.1 Deep Single Image Super-resolution, 2.2.2 Deep Video Super-resolution). 3 High Dynamic Range Imaging (3.1 Motivation; 3.2 Proposed Method: 3.2.1 Overall Pipeline, 3.2.2 Alignment Network, 3.2.3 Merging Network, 3.2.4 Integrated HDR imaging network; 3.3 Datasets: 3.3.1 Kalantari Dataset and Ground Truth Aligned Images, 3.3.2 Preprocessing, 3.3.3 Patch Generation; 3.4 Experimental Results: 3.4.1 Evaluation Metrics, 3.4.2 Ablation Studies, 3.4.3 Comparisons with State-of-the-Art Methods, 3.4.4 Application to the Case of More Numbers of Exposures, 3.4.5 Pre-processing for other HDR imaging methods). 4 Video Super-resolution (4.1 Motivation; 4.2 Proposed Method: 4.2.1 Overall Pipeline, 4.2.2 Alignment Network, 4.2.3 Reconstruction Network, 4.2.4 Integrated VSR network; 4.3 Experimental Results: 4.3.1 Dataset, 4.3.2 Ablation Study, 4.3.3 Capability of DSBN for alignment, 4.3.4 Comparisons with State-of-the-Art Methods). 5 Joint HDR and SR (5.1 Proposed Method: 5.1.1 Feature Blending Network, 5.1.2 Joint HDR-SR Network, 5.1.3 Existing VSR Network, 5.1.4 Existing HDR Network; 5.2 Experimental Results). 6 Conclusion. Abstract (In Korean).
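The dissertation's feature-domain blending can be caricatured with a toy, untrained pipeline: two separate encoders, a blending stage, and a decoder. Everything here (single linear layers, dimensions, tanh activations, fixed random weights) is an invented simplification that only demonstrates the data flow, not the actual convolutional architecture.

```python
# Toy sketch of "two-way encoder -> feature blending -> decoder".
# Hedged illustration only; real networks are convolutional and trained.
import numpy as np

rng = np.random.default_rng(42)
D_IMG, D_FEAT = 64, 32   # flattened image size and feature size (assumed)

enc_ref = rng.standard_normal((D_FEAT, D_IMG)) * 0.1     # encoder for reference
enc_src = rng.standard_normal((D_FEAT, D_IMG)) * 0.1     # encoder for source
blend = rng.standard_normal((D_FEAT, 2 * D_FEAT)) * 0.1  # blending layer
dec = rng.standard_normal((D_IMG, D_FEAT)) * 0.1         # decoder

def align(reference, source):
    """Blend two images in feature space and decode an 'aligned' image."""
    f_ref = np.tanh(enc_ref @ reference)                   # two-way encoder
    f_src = np.tanh(enc_src @ source)
    f = np.tanh(blend @ np.concatenate([f_ref, f_src]))    # feature blending
    return dec @ f                                         # decode aligned image

reference = rng.standard_normal(D_IMG)   # e.g. mid-exposure image (flattened)
source = rng.standard_normal(D_IMG)      # e.g. over-exposed image (flattened)
aligned = align(reference, source)
print(aligned.shape)  # (64,)
```

The point of the sketch is the topology: misalignment is resolved implicitly by mixing deep features, not by warping pixels.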

    The state of the art in HDR image deghosting, and an objective deghosting quality metric for HDR images.

    Despite the emergence of new HDR acquisition methods, the multiple exposure technique (MET) is still the most popular one. Applying MET to dynamic scenes is challenging due to the diversity of motion patterns and uncontrollable factors such as sensor noise, scene occlusion, and performance constraints on platforms with limited computational capability. More than 50 deghosting algorithms have already been proposed for artifact-free HDR imaging of dynamic scenes, and this number is expected to grow. Given so many algorithms, conducting subjective experiments to benchmark newly proposed ones is difficult and time-consuming. In this thesis, first, a taxonomy of HDR deghosting methods and the key characteristics of each group of algorithms are introduced. Next, the artifacts frequently observed in the outputs of HDR deghosting algorithms are defined, and an objective HDR image deghosting quality metric is presented. The proposed metric is found to correlate well with human preferences and may serve as a reference for benchmarking current and future HDR image deghosting algorithms. Ph.D. - Doctoral Program.
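One simple ingredient behind objective deghosting measures is quantifying how much exposure-normalized frames disagree per pixel: in static regions the normalized frames should coincide, so residual variance hints at motion/ghost regions. The sketch below is an assumption-laden illustration of that single idea, not the thesis's metric.

```python
# Hedged sketch: per-pixel disagreement across exposure-normalized frames
# as a crude ghost indicator. Names and the normalization are assumptions.
import numpy as np

def ghost_map(frames):
    """Per-pixel disagreement across exposure-normalized frames.

    frames: list of 2-D arrays (same scene, different exposures).
    Returns a 2-D map; large values suggest motion/ghosting.
    """
    # Crude exposure normalization: zero mean, unit std per frame.
    normed = [(f - f.mean()) / (f.std() + 1e-12) for f in frames]
    return np.var(np.stack(normed), axis=0)

rng = np.random.default_rng(1)
static = rng.standard_normal((8, 8))
f1 = static * 0.5 + 0.1           # darker rendition of the same scene
f2 = static * 2.0 - 0.3           # brighter rendition
f3 = f2.copy()
f3[2:4, 2:4] += 5.0               # an object that moved in one frame: ghost
gmap = ghost_map([f1, f2, f3])
# disagreement concentrates where the object moved
print(gmap[2:4, 2:4].mean() > gmap.mean())
```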

    Variational image fusion

    The main goal of this work is the fusion of multiple images into a single composite that offers more information than the individual input images. We approach these fusion tasks within a variational framework. First, we present iterative schemes that are well suited for such variational problems and related tasks. They lead to efficient algorithms that are simple to implement and parallelise well. Next, we design a general fusion technique that aims for an image with optimal local contrast. This is the key to a versatile method that performs well in many application areas, such as multispectral imaging, decolourisation, and exposure fusion. To handle motion within an exposure set, we present a two-step approach: first, we introduce the complete rank transform to design an optic flow approach that is robust against severe illumination changes; second, we eliminate remaining misalignments by means of brightness transfer functions that relate the brightness values between frames. Additional knowledge about the exposure set enables us to propose the first fully coupled method that jointly computes an aligned high dynamic range image and dense displacement fields. Finally, we present a technique that infers depth information from differently focused images. In this context, we additionally introduce a novel second-order regulariser that adapts to the image structure in an anisotropic way.
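The gradient-flavoured part of such variational fusion can be sketched in one dimension: choose a target gradient field g pointwise from the input with the strongest local contrast, then minimise E(u) = Σ_i ((u_{i+1} - u_i) - g_i)² by gradient descent. This toy energy and solver are stand-ins assumed for illustration; the thesis works in 2-D with more refined energies and iterative schemes.

```python
# Hedged 1-D sketch of gradient-domain variational fusion.
import numpy as np

def fuse_gradients(signals, iters=2000, lr=0.2):
    sig = np.stack(signals)                  # (K, N) input "images"
    grads = np.diff(sig, axis=1)             # per-input gradients
    pick = np.abs(grads).argmax(axis=0)      # strongest local contrast wins
    g = grads[pick, np.arange(grads.shape[1])]
    u = sig.mean(axis=0).copy()              # initialise with the average
    for _ in range(iters):
        r = np.diff(u) - g                   # residual of gradient constraint
        du = np.zeros_like(u)
        du[:-1] -= 2 * r                     # dE/du[i] gets -2*r[i] ...
        du[1:] += 2 * r                      # ... and +2*r[i-1]
        u -= lr * du                         # gradient-descent step
    return u, g

s1 = np.array([0.0, 0.0, 1.0, 1.0, 1.0])    # sharp edge
s2 = np.array([0.4, 0.45, 0.5, 0.55, 0.6])  # low-contrast ramp
u, g = fuse_gradients([s1, s2])
print(np.allclose(np.diff(u), g, atol=1e-3))  # True: fused gradients match target
```

The fused signal keeps the sharp edge of s1 where s1 has the stronger contrast and the ramp of s2 elsewhere; the absolute offset of u is a free constant, as only gradients are constrained.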


    Advanced editing methods for image and video sequences

    In the context of image and video editing, this thesis proposes methods for modifying the semantic content of a recorded scene. Two different editing problems are approached: first, the removal of ghosting artifacts from high dynamic range (HDR) images recovered from exposure sequences, and second, the removal of objects from video sequences recorded with and without camera motion. These edits must be performed so that the result looks plausible to humans, but without having to recover detailed models of the scene content, e.g. its geometry, reflectance, or illumination. The proposed editing methods add new key ingredients, such as camera noise models and global optimization frameworks, that help achieve results surpassing the capabilities of state-of-the-art methods. Using these ingredients, each proposed method defines local visual properties that approximate well the specific editing requirements of each task. These properties are then encoded into an energy function that, when globally minimized, produces the required editing results. The optimization of such energy functions corresponds to Bayesian inference problems that are solved efficiently using graph cuts. The proposed methods are demonstrated to outperform other state-of-the-art methods. Furthermore, they are demonstrated to work well on complex real-world scenarios that have not been previously addressed in the literature, i.e., highly cluttered scenes for HDR deghosting, and highly dynamic scenes and unconstrained camera motion for object removal from videos.
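Energies of the kind described above typically have the MRF form E(x) = Σ_i D_i(x_i) + λ Σ_(i,j) [x_i ≠ x_j], which the thesis minimises with graph cuts. As a hedged stand-in (a full graph-cut solver is longer), here is iterated conditional modes (ICM) on a tiny 1-D binary labeling; ICM only finds a local optimum, whereas graph cuts solve such submodular binary energies exactly.

```python
# Hedged sketch: ICM minimisation of a data term + Potts smoothness energy.
# The thesis uses graph cuts; ICM is a simpler illustrative stand-in.
import numpy as np

def icm(data_cost, lam=1.0, sweeps=10):
    """data_cost: (N, 2) cost of assigning label 0/1 to each site."""
    n = data_cost.shape[0]
    labels = data_cost.argmin(axis=1)        # greedy initialisation
    for _ in range(sweeps):
        for i in range(n):
            best = None
            for l in (0, 1):
                e = data_cost[i, l]
                if i > 0:
                    e += lam * (l != labels[i - 1])   # Potts smoothness
                if i < n - 1:
                    e += lam * (l != labels[i + 1])
                if best is None or e < best[0]:
                    best = (e, l)
            labels[i] = best[1]              # take the locally best label
    return labels

# Noisy evidence for a 0-block then a 1-block; site 3 weakly prefers 0,
# but the smoothness term pulls it to agree with its neighbours.
costs = np.array([[0., 3.], [0., 3.], [3., 0.], [0., 0.5], [3., 0.], [3., 0.]])
print(icm(costs, lam=1.0))  # [0 0 1 1 1 1]
```

The example shows the regularising effect: the weak data preference at site 3 is overridden to keep the labeling piecewise constant, which is exactly what the smoothness term of such editing energies enforces.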

    Anatomical Modeling of Cerebral Microvascular Structures: Application to Identify Biomarkers of Microstrokes

    Cortical microvascular networks are responsible for carrying the necessary oxygen and energy substrates to our neurons. These networks react to the dynamic energy demands during neuronal activation through the process of neurovascular coupling. A key element in elucidating the role of the microvascular component in the brain is computational modeling. However, the lack of fully automated computational frameworks to model and characterize these microvascular networks remains one of the main obstacles. Developing a fully automated solution is thus essential for further exploration, especially to quantify the impact of the cerebrovascular malformations associated with many cerebrovascular diseases. A common pathogenic outcome in a set of neurovascular disorders is the formation of microstrokes, i.e., micro-occlusions in penetrating arterioles descending from the pial surface. Recent experiments have demonstrated the impact of these microscopic events on brain function. Hence, it is of vital importance to develop a non-invasive and translatable approach to identify their presence in a clinical setting. In this thesis, a fully automatic processing pipeline to address the problem of microvascular anatomical modeling is proposed. The modeling scheme consists of a fully convolutional neural network to segment microvessels, a 3D surface model generator, and a geometry contraction algorithm to produce vascular graphical models with a single connected component. An improvement on this pipeline is later developed to relax the requirement of water-tight surface meshes as inputs to the graphing phase. The novel graphing scheme works with relaxed input requirements and intrinsically captures vessel radius information, based on deforming geometric graphs constructed within vascular boundaries instead of surface meshes. A mechanism to decimate the initial graph structure at each run is formulated, with a convergence criterion to stop the process, and a refinement phase is introduced to obtain the final vascular models. The developed computational modeling is then applied to simulate potential MRI signatures of microstrokes, combining arterial spin labeling (ASL) and multi-directional diffusion-weighted imaging (DWI). The hypothesis is driven by recent observations demonstrating a radial reorientation of the microvasculature around the micro-infarction locus during recovery in mice. Synthetic capillary beds, randomly and radially oriented, and optical coherence tomography (OCT) angiograms, acquired in the barrel cortex of mice (n=5) before and after inducing targeted photothrombosis, are analyzed. The computational vascular graphs are exploited within a 3D Monte-Carlo simulator to characterize the magnetic resonance (MR) response, encompassing the effects of magnetic field perturbations caused by deoxyhemoglobin, and the advection and diffusion of the nuclear spins. The proposed graphing pipeline is validated on both synthetic and real angiograms acquired with different imaging modalities. Compared to other efficient and state-of-the-art graphing schemes, the experiments indicate that the proposed scheme produces the lowest geometric and topological error rates on various angiograms. The evaluation also confirms the efficiency of the proposed scheme in providing representative models that capture all anatomical aspects of vascular structures. Next, searching for MRI-based signatures of microstrokes, the proposed vascular modeling is exploited to quantify the minimal intravoxel signal-loss ratio when applying multiple gradient directions, at varying sequence parameters with and without ASL. With ASL, the results demonstrate a significant difference (p<0.05) between the signal ratios computed at baseline and 3 weeks after photothrombosis. The statistical power further increased (p<0.005) using angiograms captured at week 4. Without ASL, no reliable signal change is found. Higher ratios with improved significance are achieved at low magnetic field strengths (e.g., at 3 Tesla) and shorter readout TE (<16 ms). This study suggests that microstrokes might be characterized through ASL-DWI sequences, and it provides the necessary insights for subsequent experimental validations and, ultimately, future translational trials.
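The Monte-Carlo MR idea (spins random-walking through field perturbations while accumulating phase, with the voxel signal being the magnitude of the ensemble average) can be sketched minimally as follows. All parameter values and the linear perturbation field are arbitrary illustrations, not those of the study, which uses vascular-graph-derived deoxyhemoglobin fields.

```python
# Hedged Monte-Carlo sketch: spin diffusion + phase accrual -> signal loss.
import numpy as np

GAMMA = 2.675e8          # proton gyromagnetic ratio [rad/s/T]

def mc_signal(n_spins=5000, n_steps=500, dt=1e-3, diff_step=1e-5,
              grad=1e-3, seed=0):
    """Signal magnitude after diffusion through a linear field perturbation
    dB(x) = grad * x (a toy stand-in for deoxyhemoglobin-induced fields)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_spins)                    # spin positions [m]
    phase = np.zeros(n_spins)                # accumulated phase [rad]
    for _ in range(n_steps):
        x += rng.choice((-diff_step, diff_step), size=n_spins)  # random walk
        phase += GAMMA * grad * x * dt       # dphi = gamma * dB(x) * dt
    # Ensemble average of the transverse magnetisation.
    return np.abs(np.mean(np.exp(1j * phase)))

# Stronger field perturbations dephase the ensemble and lower the signal.
s_weak = mc_signal(grad=1e-5)
s_strong = mc_signal(grad=1e-3)
print(s_weak > s_strong)  # True: more perturbation, more intravoxel signal loss
```

This monotone signal-loss behaviour is the quantity the study's intravoxel signal-loss ratio builds on, there computed over realistic vascular geometries and multiple gradient directions.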