
    DANAE++: A smart approach for denoising underwater attitude estimation

    One of the main issues in the navigation of underwater robots is accurate vehicle positioning, which heavily depends on the orientation estimation phase. The systems employed to this end are affected by different types of noise, mainly related to the sensors and to the irregular noise of the underwater environment. Filtering algorithms can reduce their effect if properly configured, but this configuration usually requires sophisticated techniques and time. This paper presents DANAE++, an improved denoising autoencoder based on DANAE (deep Denoising AutoeNcoder for Attitude Estimation), which is able to recover Kalman Filter (KF) IMU/AHRS orientation estimates from any kind of noise, independently of its nature. This deep learning-based architecture had already proved to be robust and reliable, but the enhanced implementation achieves significant improvements in terms of both results and performance. In fact, DANAE++ is able to denoise the three attitude angles at the same time, and this is also verified using the estimates provided by an Extended KF. Further tests could make this method suitable for real-time applications in navigation tasks.
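As a rough illustration of the approach, the sketch below shows a minimal denoising autoencoder that maps a window of noisy roll/pitch/yaw estimates (e.g. Kalman-filter output) to denoised angles. The window length, layer sizes, and training loop are illustrative assumptions, not the DANAE++ architecture from the paper.

```python
# Minimal sketch of a DANAE-style denoising autoencoder: it maps a window of
# noisy roll/pitch/yaw estimates (e.g. from a Kalman filter) to denoised angles.
# Layer sizes, window length, and the training setup are illustrative
# assumptions, not the DANAE++ architecture itself.
import torch
import torch.nn as nn

WINDOW = 32          # number of consecutive attitude samples fed to the network
N_ANGLES = 3         # roll, pitch, yaw denoised jointly

class AttitudeDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(WINDOW * N_ANGLES, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, WINDOW * N_ANGLES),
        )

    def forward(self, x):                       # x: (batch, WINDOW * N_ANGLES)
        return self.decoder(self.encoder(x))

# Training-loop sketch: the input is the filter output corrupted by
# sensor/environment noise, the target is the clean attitude window.
model = AttitudeDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy = torch.randn(64, WINDOW * N_ANGLES)      # placeholder noisy KF estimates
clean = torch.randn(64, WINDOW * N_ANGLES)      # placeholder ground truth
loss = nn.functional.mse_loss(model(noisy), clean)
opt.zero_grad(); loss.backward(); opt.step()
```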

    Deep into the Eyes: Applying Machine Learning to improve Eye-Tracking

    Eye-tracking has been an active research area with applications in personal and behavioral studies, medical diagnosis, virtual reality, and mixed reality applications. Improving the robustness, generalizability, accuracy, and precision of eye-trackers while maintaining privacy is crucial. Unfortunately, many existing low-cost portable commercial eye trackers suffer from signal artifacts and a low signal-to-noise ratio. These trackers are highly dependent on low-level features such as pupil edges or diffused bright spots in order to precisely localize the pupil and corneal reflection. As a result, they are not reliable for studying eye movements that require high precision, such as microsaccades, smooth pursuit, and vergence. Additionally, these methods suffer from reflective artifacts, occlusion of the pupil boundary by the eyelid, and often require a manual update of person-dependent parameters to identify the pupil region. In this dissertation, I demonstrate (I) a new method to improve precision while maintaining the accuracy of head-fixed eye trackers by combining velocity information from iris textures across frames with position information, (II) a generalized semantic segmentation framework for identifying eye regions with a further extension to identify ellipse fits on the pupil and iris, (III) a data-driven rendering pipeline to generate a temporally contiguous synthetic dataset for use in many eye-tracking applications, and (IV) a novel strategy to preserve privacy in eye videos captured as part of the eye-tracking process. My work also provides the foundation for future research by addressing critical questions like the suitability of using synthetic datasets to improve eye-tracking performance in real-world applications, and ways to improve the precision of future commercial eye trackers with improved camera specifications.
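The ellipse-fitting step mentioned in (II) can be sketched as follows: given a binary pupil mask produced by a segmentation network, contour extraction followed by an ellipse fit recovers the pupil center, axes, and orientation. The synthetic mask and the OpenCV-based pipeline here are illustrative placeholders, not the dissertation's actual implementation.

```python
# Sketch of "ellipse fit on the segmented pupil": given a binary pupil mask
# from a semantic-segmentation network, recover center, axes and angle.
# The mask below is synthetic; in practice it would be the network's pupil class.
import cv2
import numpy as np

mask = np.zeros((240, 320), dtype=np.uint8)
cv2.ellipse(mask, (160, 120), (40, 28), 15, 0, 360, 255, -1)    # fake pupil blob

# [-2] works for both the OpenCV 3 and OpenCV 4 findContours return conventions
contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]
pupil = max(contours, key=cv2.contourArea)                      # largest blob
if len(pupil) >= 5:                                             # fitEllipse needs >= 5 points
    (cx, cy), (major, minor), angle = cv2.fitEllipse(pupil)
    print(f"pupil center=({cx:.1f},{cy:.1f}) axes=({major:.1f},{minor:.1f}) angle={angle:.1f}")
```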

    Energy-Efficient Artificial Neural Network Design

    Doctoral dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, February 2019 (advisor: Kiyoung Choi). Recently, deep learning has shown astounding performance on specific tasks such as image classification, speech recognition, and reinforcement learning. Some of the state-of-the-art deep neural networks have already surpassed human ability. However, neural networks involve a tremendous number of high-precision computations and frequent off-chip memory accesses for their millions of parameters. This incurs large chip area and exploding energy consumption, which hinders neural networks from being deployed in embedded systems. To cope with this problem, this dissertation proposes techniques for designing energy-efficient neural networks. The first part addresses the design of spiking neural networks with weighted spikes, which have the advantages of shorter inference latency and smaller energy consumption compared to conventional spiking neural networks. Spiking neural networks are regarded as one of the promising alternatives for overcoming the high energy costs of artificial neural networks, and many studies have shown that a deep convolutional neural network can be converted into a spiking neural network with near-zero accuracy loss. However, the energy advantage of spiking neural networks comes at the cost of long classification latency due to the use of Poisson-distributed spike trains (rate coding), especially in deep networks. We propose to use weighted spikes, which can greatly reduce the latency by assigning a different weight to a spike depending on the time phase to which it belongs. Experimental results on MNIST, SVHN, CIFAR-10, and CIFAR-100 show that the proposed spiking neural networks with weighted spikes achieve a significant reduction in classification latency and number of spikes, which leads to faster and more energy-efficient operation than conventional spiking neural networks with rate coding. We also show that a state-of-the-art network, the deep residual network, can be converted into a spiking neural network without accuracy loss.
The second part of this dissertation focuses on the design of highly energy-efficient analog neural networks in the presence of variations. Analog hardware accelerators for deep neural networks have taken center stage owing to their high parallelism and energy efficiency. However, a critical weakness of analog hardware systems is their vulnerability to noise, and one of the biggest noise sources is process variation: it shifts various parameters of analog circuits away from their correct operating points, which causes severe performance degradation or even malfunction. To achieve high energy efficiency with analog neural networks, we propose a resistive random access memory (ReRAM) based analog implementation of binarized neural networks (BNNs) with a novel variation compensation technique through activation matching (VCAM). The proposed architecture consists of 1-transistor-1-resistor (1T1R) structured ReRAM synaptic arrays and differential-amplifier-based neurons, which leads to high-density integration and energy efficiency. To cope with the vulnerability of analog neurons to process variation, the biases of all neurons are adjusted so that their average output activation matches that of ideal neurons without variation. This technique effectively restores the classification accuracy degraded by the variation. Experimental results on 32 nm technology show that the proposed architecture achieves a classification accuracy of 98.55% on MNIST and 89.63% on CIFAR-10 in the presence of 50% threshold-voltage variation and 15% resistance variation at the 3-sigma point, and reaches an energy efficiency of 970 TOPS/W with an MLP on MNIST.
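A minimal sketch of the weighted-spike coding idea from the first part is given below: within one period of K phases, a spike in phase p carries weight 2^-(p+1), so an activation can be transmitted with at most K spikes instead of a long rate-coded train. The value of K and the comparison numbers are illustrative, not the dissertation's exact settings.

```python
# Sketch of weighted-spike coding: a spike in phase p carries weight 2**-(p+1),
# so an activation in [0, 1) is carried by at most K spikes per period, whereas
# rate coding needs on the order of 2**K timesteps for the same resolution.
# K is an illustrative choice, not the dissertation's setting.
import numpy as np

K = 8  # phases per period

def encode_weighted_spikes(a):
    """Binary expansion of activation a in [0, 1) into K phase-indexed spikes."""
    spikes, residue = np.zeros(K, dtype=np.uint8), float(a)
    for p in range(K):
        w = 2.0 ** -(p + 1)
        if residue >= w:
            spikes[p], residue = 1, residue - w
    return spikes

def decode_weighted_spikes(spikes):
    """Weighted sum of the spikes recovers the activation (up to 2**-K resolution)."""
    return sum(s * 2.0 ** -(p + 1) for p, s in enumerate(spikes))

a = 0.8125                                     # 0.1101 in binary
spikes = encode_weighted_spikes(a)             # spikes in phases 0, 1, 3
print(spikes, decode_weighted_spikes(spikes))  # -> [1 1 0 1 0 0 0 0] 0.8125
# Rate coding at the same 2**-K resolution would need ~2**K = 256 timesteps and
# about 208 spikes for a = 0.8125, versus 3 weighted spikes here.
```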

    Hardware Considerations for Signal Processing Systems: A Step Toward the Unconventional.

    As we progress into the future, signal processing algorithms are becoming more computationally intensive and power-hungry, while the desire for mobile products and low-power devices is also increasing. An integrated ASIC solution is one of the primary ways chip developers can improve performance and add functionality while keeping the power budget low. This work discusses ASIC hardware for both conventional and unconventional signal processing systems, and how integration, error resilience, emerging devices, and new algorithms can be leveraged by signal processing systems to further improve performance and enable new applications. Specifically, this work presents three case studies: 1) a conventional and highly parallel mixed-signal cross-correlator ASIC for a weather satellite performing real-time synthetic aperture imaging, 2) an unconventional native stochastic computing architecture enabled by memristors, and 3) two unconventional sparse neural network ASICs for feature extraction and object classification. As improvements from technology scaling alone slow down, and the demand for energy-efficient mobile electronics increases, such optimization techniques at the device, circuit, and system level will become more critical to advancing signal processing capabilities in the future. PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116685/1/knagphil_1.pd
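For readers unfamiliar with the second case study, the core stochastic-computing idea can be sketched in a few lines: a value in [0, 1] is represented as the probability of a 1 in a random bitstream, so multiplication reduces to a bitwise AND of two independent streams. The stream length and software RNG below are illustrative assumptions; the actual ASIC generates streams with memristor devices rather than in software.

```python
# Sketch of unipolar stochastic computing: encode values as bitstream
# probabilities, multiply with a bitwise AND of independent streams.
import numpy as np

rng = np.random.default_rng(0)
N = 4096                                   # bitstream length (error shrinks ~ 1/sqrt(N))

def to_stream(p):
    """Unipolar stochastic encoding: P(bit = 1) = p."""
    return (rng.random(N) < p).astype(np.uint8)

def from_stream(bits):
    return bits.mean()

a, b = 0.6, 0.7
prod_stream = to_stream(a) & to_stream(b)  # AND gate multiplies the two probabilities
print(from_stream(prod_stream))            # ~0.42, within ~1/sqrt(N) of a*b
```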

    Approximation Opportunities in Edge Computing Hardware : A Systematic Literature Review

    With the increasing popularity of the Internet of Things and massive Machine Type Communication technologies, the number of connected devices is rising. However, while these devices bring valuable benefits to our lives, bandwidth and latency constraints challenge Cloud processing of the data volumes they generate. A promising solution to these challenges is the combination of Edge and approximate computing techniques, which allows data to be processed nearer to the user. This paper aims to survey the potential benefits of the intersection of these paradigms. We provide a state-of-the-art review of circuit-level and architecture-level hardware techniques and popular applications. We also outline essential future research directions.

    Deep Multimodality Image-Guided System for Assisting Neurosurgery

    Intracranial brain tumors are among the ten most common malignant cancers and are responsible for substantial morbidity and mortality. The largest histological category of primary brain tumors are the gliomas, which have an extremely heterogeneous appearance and are radiologically difficult to distinguish from other brain lesions. Neurosurgery is usually the standard treatment for newly diagnosed glioma patients and may be followed by radiotherapy and adjuvant temozolomide chemotherapy. However, brain tumor surgery faces major challenges in achieving maximal tumor resection while avoiding postoperative neurological deficits. Two of these neurosurgical challenges are addressed here. First, manual delineation of the glioma, including its sub-regions, is difficult due to its infiltrative nature and the presence of heterogeneous contrast enhancement. Second, the brain deforms its shape, the so-called "brain shift", in response to surgical manipulation, swelling caused by osmotic drugs, and anesthesia, which limits the usefulness of preoperative image data for guiding the procedure. Image-guided systems provide physicians with invaluable insight into anatomical or pathological targets based on modern imaging modalities such as magnetic resonance imaging (MRI) and ultrasound (US). Image-guided instruments are mainly computer-assisted systems that use computer-vision methods to facilitate perioperative surgical procedures. However, surgeons must still mentally fuse the surgical plan derived from preoperative images with real-time information while manipulating the surgical instruments inside the body and monitoring progress toward the target. The need for image guidance during neurosurgical procedures has therefore always been an important concern for physicians. The goal of this research is the development of a novel system for perioperative image-guided neurosurgery (IGN), namely DeepIGN, with which the expected outcomes of brain tumor surgery can be achieved, thereby maximizing the overall survival rate and minimizing postoperative neurological morbidity. In this work, novel methods are first proposed for the core components of the DeepIGN system, brain tumor segmentation in MRI and multimodal registration of preoperative MRI to intraoperative ultrasound (iUS) images, using recent developments in deep learning. The predictions of the deep-learning networks used are then further interpreted and examined by producing human-understandable, explainable maps. Finally, open-source packages were developed and integrated into widely recognized software, which is responsible for integrating information from tracking systems, image visualization and fusion, and displaying real-time updates of the instruments relative to the patient space. The components of DeepIGN were validated in the laboratory and evaluated in a simulated operating room.
For the segmentation module, DeepSeg, a generic decoupled deep-learning framework for automatic glioma delineation in brain MRI, achieved an accuracy of 0.84 in terms of the Dice coefficient for the gross tumor volume. Performance improvements were observed when applying advanced deep-learning approaches such as 3D convolutions across all layers, region-based training, on-the-fly data augmentation techniques, and ensemble methods. To compensate for brain shift, an automated, fast, and accurate deformable approach, iRegNet, is proposed for registering preoperative MRI to iUS volumes as part of the multimodal registration module. Extensive experiments were conducted on two multi-location databases: BITE and RESECT. Two experienced neurosurgeons performed an additional qualitative validation of this study by overlaying MRI-iUS pairs before and after deformable registration. The experimental results show that the proposed iRegNet is fast and achieves the best accuracies. Moreover, iRegNet can deliver competitive results even on unseen images, demonstrating its generality, and may therefore be useful for intraoperative neurosurgical guidance. For the explainability module, the NeuroXAI framework is proposed to increase the trust of medical experts in the application of AI techniques and deep neural networks. NeuroXAI comprises seven explanation methods that provide visualization maps to make deep-learning models transparent. The experimental results show that the proposed XAI framework performs well in extracting local and global contexts and in generating explainable saliency maps to understand the deep network's prediction. In addition, visualization maps are generated to examine the information flow in the internal layers of the encoder-decoder network and to understand the contribution of the MRI modalities to the final prediction. The explanation process could provide medical professionals with additional information about the tumor segmentation results and thus help them understand how the deep-learning model processes MRI data successfully. Furthermore, an interactive neurosurgical display for procedure guidance was developed that supports available commercial hardware such as iUS navigation devices and instrument tracking systems. The clinical environment and technical requirements of the integrated multimodal DeepIGN system were established with the ability to integrate (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time iUS data, and (3) positional instrument tracking. The accuracy of this system was tested on a custom agar phantom model, and its use in a preclinical operating room was simulated. The results of the clinical simulation confirmed that the system setup is straightforward, can be completed within a clinically acceptable time of 15 minutes, and achieves clinically acceptable accuracy.
In this work, a multimodal IGN system was developed that exploits recent advances in deep learning to guide neurosurgeons precisely and to incorporate pre- and intraoperative patient image data as well as interventional devices into the surgical procedure. DeepIGN was developed as open-source research software to accelerate research in this field, to facilitate sharing among multiple research groups, and to enable continuous further development by the community. The experimental results are very promising for the application of deep-learning models to support interventional procedures, a crucial step toward improving the surgical treatment of brain tumors and the corresponding long-term postoperative outcomes.
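For reference, the Dice coefficient used to report the DeepSeg result (0.84 for the gross tumor volume) measures the overlap between a predicted binary tumor mask and the expert annotation; a minimal sketch is shown below, with random placeholder masks instead of actual MRI segmentations.

```python
# Sketch of the Dice-coefficient metric for binary segmentation masks:
# Dice = 2 * |P ∩ T| / (|P| + |T|). The masks below are random placeholders.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Overlap between a predicted binary mask and the ground-truth mask."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.random.default_rng(1).random((128, 128, 64)) > 0.5   # placeholder prediction
gt = np.random.default_rng(2).random((128, 128, 64)) > 0.5     # placeholder ground truth
print(dice_coefficient(pred, gt))
```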

    Compressive MRI with deep convolutional and attentive models

    Since its advent in the last century, Magnetic Resonance Imaging (MRI) has had a significant impact on modern medicine and spectroscopy and has seen widespread use in medical imaging and clinical practice, owing to its flexibility and excellent ability to depict anatomical structures. Although it provides a non-invasive, ionizing-radiation-free tool for imaging the anatomy of the human body, the long data acquisition process hinders its growth and development in time-critical applications. To shorten the scanning time and reduce patient discomfort, the sampling process can be accelerated by omitting a portion of the sampling steps and performing image reconstruction from a subset of measurements. However, images created from under-sampled signals can suffer from strong aliasing artifacts, which adversely affect the quality of diagnosis and treatment. Compressed sensing (CS) methods were introduced to alleviate the aliasing artifacts by reconstructing an image from the observed measurements via model-based optimization algorithms. Despite their success, the sparsity priors assumed by CS methods can be difficult to satisfy in real-world practice and struggle to capture complex anatomical structures, and the iterative optimization algorithms are often computationally expensive and time-consuming, which runs counter to the speed demands of modern MRI. These factors limit the quality of reconstructed images and restrict the achievable acceleration rates. This thesis mainly focuses on developing deep learning-based methods, specifically modern over-parametrized models, for MRI reconstruction, leveraging the powerful learning ability and representation capacity of such models. Firstly, we introduce an attentive selection generative adversarial network to achieve fine-grained reconstruction by performing large-field contextual information integration and an attention selection mechanism. To incorporate domain-specific knowledge into the reconstruction procedure, an optimization-inspired deep cascaded framework is proposed with a novel deep data consistency block to leverage domain-specific knowledge and an adaptive spatial attention selection module to capture the correlations among high-resolution features, aiming to enhance the quality of recovered images. To efficiently utilize the contextual information hidden in the spatial dimensions, a novel region-guided channel-wise attention network is introduced to incorporate spatial semantics into a channel-based attention mechanism, demonstrating a lightweight and flexible design that attains improved reconstruction performance. Secondly, a coil-agnostic reconstruction framework is introduced to address the unknown forward process in parallel MRI reconstruction. To avoid the estimation of sensitivity maps, a novel data aggregation consistency block is proposed to approximately enforce data consistency without resorting to coil sensitivity information. A locality-aware spatial attention module is devised and embedded into the reconstruction pipeline to enhance model performance by capturing spatial contextual information via data-adaptive kernel prediction. Experiments demonstrate that the proposed coil-agnostic method is robust and resilient to different machine configurations and outperforms other sensitivity estimation-based methods. Finally, the research work on dynamic MRI reconstruction is presented.
We introduce an optimization-inspired deep cascaded framework to recover a sequence of MRI images, which utilizes a novel mask-guided motion feature incorporation method to explicitly extract and incorporate motion information into the reconstruction iterations, and is shown to better preserve the dynamic content. A spatio-temporal Fourier neural block is proposed and embedded into the network to improve model performance by efficiently retrieving useful information in both the spatial and temporal domains. It is demonstrated that the devised framework surpasses other competing methods and generalizes well to other reconstruction models and unseen data, validating its transferability and generalization capacity.
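The data-consistency idea that cascaded CS-MRI reconstruction networks are built around can be sketched as follows: wherever a k-space location was actually measured, the network prediction is overwritten by the measurement before the next cascade. The single-coil setup, hard replacement rule, array sizes, and random mask below are illustrative assumptions, not the thesis's specific data consistency block.

```python
# Sketch of a k-space data-consistency step for under-sampled single-coil MRI:
# keep measured k-space samples, fill the rest from the network's prediction.
import numpy as np

def data_consistency(pred_image, measured_kspace, mask):
    """Enforce measured k-space samples on a predicted (reconstructed) image."""
    pred_k = np.fft.fft2(pred_image)
    mixed_k = np.where(mask, measured_kspace, pred_k)  # keep measurements where sampled
    return np.fft.ifft2(mixed_k)

rng = np.random.default_rng(0)
ground_truth = rng.random((256, 256))                  # placeholder anatomy
mask = rng.random((256, 256)) < 0.3                    # ~30% of k-space sampled (illustrative)
measured = np.fft.fft2(ground_truth) * mask            # under-sampled measurements
network_output = rng.random((256, 256))                # stand-in for a CNN's intermediate output
refined = data_consistency(network_output, measured, mask)
print(refined.shape, refined.dtype)                    # complex image; take abs/real downstream
```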