154 research outputs found

    Access Control in Wireless Sensor Networks

    Full text link
    Wireless sensor networks consist of a large number of sensor nodes: small, low-cost wireless computing devices equipped with different sensors. Sensor networks collect and process environmental data and can be used for habitat monitoring, precision agriculture, wildfire detection, structural health monitoring and many other applications. Securing sensor networks calls for novel solutions, especially because of their unattended deployment and strong resource limitations. Moreover, security solutions cannot be developed without knowing precisely which threats the system should be protected against. Thus, the first task in securing sensor networks is to define a realistic adversary model. We systematically investigate vulnerabilities in sensor networks, focusing specifically on physical attacks on sensor node hardware, that is, attacks that require direct physical access to the sensor nodes. The most severe attacks of this kind are also known as node capture, or node compromise. Based on the vulnerability analysis, we present a novel general adversary model for sensor networks. If the data collected within a sensor network is valuable or should be kept confidential, it should be protected from unauthorized access. We identify security issues in the context of access control in sensor networks in the presence of node capture attacks and develop protocols for broadcast authentication that constitute the core of our solutions for access control. We develop broadcast authentication protocols for the case where the adversary can capture up to some threshold t of sensor nodes. These protocols offer absolute protection as long as no more than t nodes are captured, but their security breaks down completely otherwise. Moreover, security in this case comes at a high cost, as the resource requirements of the protocols grow rapidly with t. One of the most popular ways to overcome impossibility or inefficiency of solutions in distributed systems is to make the protocol goals probabilistic. We therefore develop efficient probabilistic protocols for broadcast authentication whose security degrades gracefully with the increasing number of captured nodes. We conclude that perfect threshold security is less appropriate for sensor networks than the probabilistic approach. Gracefully degrading security offers better scalability, saves resources, and should be considered a promising security paradigm for sensor networks.
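    To make the notion of gracefully degrading security concrete, the sketch below estimates how the compromise probability of a single authentication key grows with the number of captured nodes under random key predistribution (an Eschenauer-Gligor-style scheme used here purely for illustration, not the dissertation's actual protocols); the pool and key-ring sizes are assumed values.

```python
# Toy model of gracefully degrading security under node capture.
# Assumes random key predistribution: each node stores `ring` keys drawn
# from a global pool of `pool` keys. Pool/ring sizes are illustrative.

def key_compromise_probability(captured: int, pool: int = 10_000, ring: int = 100) -> float:
    """Probability that one specific key is exposed after `captured`
    nodes have been physically compromised (node capture)."""
    return 1.0 - (1.0 - ring / pool) ** captured

for c in (1, 10, 50, 100, 300):
    print(f"{c:3d} captured nodes -> key exposed with p = {key_compromise_probability(c):.3f}")
```

    Unlike a threshold scheme, there is no sharp break at a fixed t: security erodes smoothly as more nodes are captured.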

    Robust degradation and enhancement of robot mission behaviour in unpredictable environments

    Get PDF
    © 2015 ACM. Temporal-logic-based approaches that automatically generate controllers have been shown to be useful for mission-level planning of motion, surveillance and navigation, among others. These approaches critically rely on the validity of the environment models used for synthesis. Yet simplifying assumptions are inevitable to reduce complexity and provide mission-level guarantees; no plan can guarantee results in a model of a world in which everything can go wrong. In this paper, we show how our approach, which reduces reliance on a single model by introducing a stack of models, can endow systems with incremental guarantees based on increasingly strengthened assumptions, supporting graceful degradation when the environment does not behave as expected, and progressive enhancement when it does.
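    A minimal sketch of the stack-of-models idea, with hypothetical assumption checks and controllers: each level pairs an environment model (checked at run time) with a controller synthesized for it, and the system executes the strongest level whose assumptions currently hold.

```python
# Sketch of a stack of models: strongest assumptions (best guarantees)
# first, with a trivially-true fallback last. All predicates/behaviours
# below are illustrative placeholders, not synthesized controllers.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Level:
    name: str
    assumption_holds: Callable[[dict], bool]  # run-time validity check of the model
    controller: Callable[[dict], str]         # behaviour guaranteed under that model

STACK = [
    Level("doors-always-open", lambda env: env["door_open"],
          lambda env: "deliver via corridor A"),
    Level("battery-sufficient", lambda env: env["battery"] > 0.2,
          lambda env: "deliver via longer corridor B"),
    Level("fallback", lambda env: True,
          lambda env: "stop and request help"),
]

def act(env: dict) -> str:
    # Graceful degradation / progressive enhancement: fall down (or climb
    # back up) the stack as environment assumptions fail or hold again.
    for level in STACK:
        if level.assumption_holds(env):
            return level.controller(env)
    raise AssertionError("unreachable: fallback always holds")

print(act({"door_open": False, "battery": 0.8}))  # -> deliver via longer corridor B
```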

    Analytical models of a fault-tolerant multiple module microprocessor system

    Get PDF

    Bamboo ECC: Strong, safe, and flexible codes for reliable computer memory

    Full text link
    Growing computer system sizes and levels of integration have made memory reliability a primary concern, necessitating strong memory error protection. As such, large-scale systems typically employ error checking and correcting codes to trade redundant storage and bandwidth for increased reliability. While stronger memory protection will be needed to meet reliability targets in the future, it is undesirable to further increase the amount of storage and bandwidth spent on redundancy. We propose a novel family of single-tier ECC mechanisms called Bamboo ECC to simultaneously address the conflicting requirements of increasing reliability while maintaining or decreasing error protection overheads. Relative to the state-of-the-art single-tier error protection, Bamboo ECC codes have superior correction capabilities, all but eliminate the risk of silent data corruption, and can also increase redundancy at a fine granularity, enabling more adaptive graceful downgrade schemes. These strength, safety, and flexibility advantages translate to a significantly more reliable memory system. To demonstrate this, we evaluate a family of Bamboo ECC organizations in the context of conventional 72b and 144b DRAM channels and show the significant error coverage and memory lifespan improvements of Bamboo ECC relative to existing SEC-DED, chipkill-correct and double-chipkill-correct schemes.
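    To make the redundancy-versus-strength trade-off concrete, the sketch below computes the fraction of codewords left uncorrectable by a symbol-correcting code as its correction budget t grows, assuming independent per-symbol errors. The channel width, t values and error rate are illustrative assumptions, not Bamboo ECC's actual construction.

```python
# Coverage of a code correcting up to t symbol errors out of n symbols,
# assuming each symbol fails independently with probability p. Growing t
# (bought with extra check symbols) is the sense in which fine-grained
# added redundancy enables graceful downgrade. Parameters are illustrative.
from math import comb

def uncorrectable_fraction(n: int, t: int, p: float) -> float:
    correctable = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))
    return 1.0 - correctable

P_ERR = 1e-4  # assumed per-symbol error probability
for t in (1, 2, 4):
    print(f"t = {t}: uncorrectable fraction = {uncorrectable_fraction(72, t, P_ERR):.3e}")
```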

    Rubik: fast analytical power management for latency-critical systems

    Get PDF
    Latency-critical workloads (e.g., web search), common in datacenters, require stable tail (e.g., 95th percentile) latencies of a few milliseconds. Servers running these workloads are kept lightly loaded to meet these stringent latency targets. This low utilization wastes billions of dollars in energy and equipment annually. Applying dynamic power management to latency-critical workloads is challenging. The fundamental issue is coping with their inherent short-term variability: requests arrive at unpredictable times and have variable lengths. Without knowledge of the future, prior techniques either adapt slowly and conservatively or rely on application-specific heuristics to maintain tail latency. We propose Rubik, a fine-grain DVFS scheme for latency-critical workloads. Rubik copes with variability through a novel, general, and efficient statistical performance model. This model allows Rubik to adjust frequencies at sub-millisecond granularity to save power while meeting the target tail latency. Rubik saves up to 66% of core power, widely outperforms prior techniques, and requires no application-specific tuning. Beyond saving core power, Rubik robustly adapts to sudden changes in load and system performance. We use this capability to design RubikColoc, a colocation scheme that uses Rubik to allow batch and latency-critical work to share hardware resources more aggressively than prior techniques. RubikColoc reduces datacenter power by up to 31% while using 41% fewer servers than a datacenter that segregates latency-critical and batch work, and achieves 100% core utilization.
    National Science Foundation (U.S.) (Grant CCF-1318384)
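    The core mechanism can be pictured as follows: given a statistical model of request service demand and the current queue backlog, pick the lowest frequency whose predicted tail latency still meets the target. The sketch below is a deliberately simplified stand-in for Rubik's statistical performance model; the frequency list, latency target and work distribution are assumed values.

```python
# Pick the lowest DVFS state whose predicted 95th-percentile latency meets
# the target. Work is measured in Mcycles, so service time for work w at
# frequency f GHz is w / f milliseconds. All parameters are illustrative.
import random

FREQS_GHZ = [1.0, 1.5, 2.0, 2.5, 3.0]  # assumed available DVFS states
TARGET_P95_MS = 5.0                    # assumed tail-latency target

def predicted_p95_ms(work_samples_mcycles, backlog_mcycles, freq_ghz):
    # Next request waits for the backlog, then runs: (backlog + w) / f ms.
    lats = sorted((backlog_mcycles + w) / freq_ghz for w in work_samples_mcycles)
    return lats[int(0.95 * (len(lats) - 1))]

random.seed(0)
history = [random.lognormvariate(1.0, 0.6) for _ in range(1000)]  # past request sizes
backlog = 4.0  # Mcycles of queued work right now

for f in FREQS_GHZ:  # lowest frequency first, i.e. least power
    if predicted_p95_ms(history, backlog, f) <= TARGET_P95_MS:
        print(f"run at {f} GHz")
        break
else:
    print("run at maximum frequency")
```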

    A Fault Tolerant Core for Parallel Execution of Ultra Reduced Instruction Set (URISC) and MIPS Instructions

    Get PDF
    Modern safety-critical systems require the ability to detect and handle situations where an error has occurred. Efficient coding and protection schemes are widely used to protect the communication links and memories of such systems. The remaining system components, and the focus of this work, are primarily computation units, where most protection schemes involve a high cost by fully duplicating the computation unit. Previous work presented the Ultra Reduced Instruction Set Co-processor (URISC), a core that provides a low-area-overhead approach to detect and recover from errors in any core computation unit (it is Turing complete). It executes URISC or MIPS instructions in order, at most one instruction per cycle. This thesis analyses the overhead introduced by the previous core design to identify opportunities to accelerate the computation. We design and build an out-of-order core supporting both MIPS and URISC instructions. This new core effectively exploits the parallelism available in MIPS-URISC programs and significantly reduces the overhead introduced when checking or substituting URISC instructions for faulted MIPS instructions.
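    URISC cores are typically built around a single subleq-style instruction (subtract and branch if the result is less than or equal to zero), which is Turing complete on its own. The interpreter below is a minimal sketch of such an instruction set, not the thesis's actual core.

```python
# Minimal subleq interpreter: each instruction is three memory cells
# (a, b, c) meaning mem[b] -= mem[a]; if mem[b] <= 0, jump to c.
# A negative program counter halts the machine.

def subleq(mem: list[int], pc: int = 0) -> None:
    while 0 <= pc <= len(mem) - 3:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3

# Example: mem[10] = x + y using only subleq (addition via two subtractions).
prog = [
    9, 11, 3,    # tmp -= x   -> tmp = -5, <= 0, jump to 3
    11, 10, 6,   # y   -= tmp -> y = 12, > 0, fall through to 6
    11, 11, -1,  # tmp -= tmp -> 0, <= 0, jump to -1: halt
    5,           # mem[9]:  x
    7,           # mem[10]: y
    0,           # mem[11]: tmp (scratch)
]
subleq(prog)
print(prog[10])  # -> 12
```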

    System-on-Chip design for reliability

    Get PDF

    Toward Reliable, Secure, and Energy-Efficient Multi-Core System Design

    Get PDF
    Computer hardware researchers have perennially focused on improving the performance of computers while keeping energy consumption within a strict budget. While several innovations over the years have led to high-performance and energy-efficient computers, new challenges have also emerged as a fallout. For example, smaller transistor devices in modern multi-core systems are afflicted with several reliability and security concerns that were inconceivable even a decade ago, and tackling these bottlenecks tends to negatively impact the power and performance of the computers. This dissertation explores novel techniques to gracefully solve some of the pressing challenges of modern computer design. Specifically, the proposed techniques improve the reliability of the on-chip communication fabric under high power-supply noise, increase the energy efficiency of low-power graphics processing units, and demonstrate an unprecedented security loophole of the low-power computing paradigm through rigorous hardware-based experiments.

    Circuit Aging Compensation and Energy-Efficient Neural Network Implementation Using Approximate Computing

    Get PDF
    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, Department of Electrical and Computer Engineering, August 2020 (advisor: 이혁재). Approximate computing reduces the cost (energy and/or latency) of computations by relaxing their correctness (i.e., precision) to a level that depends on the type of application. Moreover, it can be realized at various levels of the computing-system design hierarchy, from circuits to applications. This dissertation presents methodologies that apply approximate computing across this hierarchy: compensating aging-induced delay in logic circuits by dynamic computation approximation (Chapter 1), designing energy-efficient neural networks by combining low-power and low-latency approximate neuron models (Chapter 2), and co-designing an in-memory gradient-descent module with a neural processing unit to address the memory bottleneck incurred by memory I/O for high-precision data (Chapter 3). The first chapter presents a novel design methodology that turns aging-induced timing violations into computation approximation errors, without a reliability guardband or an increased supply voltage. This is realized by accurately monitoring the critical-path delay at run time. The proposal is evaluated at two levels: RTL component level and system level. The experimental results at the RTL component level show a significant reduction in the (normalized) mean squared error caused by timing violations, and at the system level they show that the proposed approach successfully transforms aging-induced timing violation errors into much less harmful computation approximation errors, recovering image quality to perceptually acceptable levels. It reduces dynamic and static power consumption by 21.45% and 10.78%, respectively, with 0.8% area overhead compared to the conventional approach. The second chapter presents an energy-efficient neural network built from two approximate neuron models: Stochastic-Computing (SC) and Spiking (SP) neurons. SC has been adopted in various fields to improve the power efficiency of systems by performing arithmetic computations stochastically, which approximates the binary computation of conventional computing systems. Moreover, recent work showed that deep neural networks (DNNs) can be implemented in the manner of stochastic computing, greatly reducing power consumption. However, a stochastic DNN (SC-DNN) suffers from high latency because it processes only one bit per cycle. To address this problem, a spiking DNN (SP-DNN) is adopted as the input interface for the SC-DNN, since SP neurons effectively process more bits per cycle than SC neurons. Moreover, this chapter resolves the encoding mismatch between the two neuron models without hardware cost by compensating for it through synapse-weight calibration. The resulting hybrid DNN (SPSC-DNN) uses SP-DNN as its bottom layers and SC-DNN as its top layers. Exploiting the reduced latency of the SP-DNN and the low power consumption of the SC-DNN, the proposed SPSC-DNN achieves improved energy efficiency with a lower error rate than SC-DNN and SP-DNN in the same network configuration. The third chapter proposes the GradPIM architecture, which accelerates parameter updates by in-memory processing, co-designed with 8-bit floating-point training in a Neural Processing Unit (NPU) for deep neural networks.
By keeping the high-precision processing, such as the parameter update that incorporates high-precision weights in its computation, in memory, the GradPIM architecture can achieve high computational efficiency using 8-bit floating point in the NPU and also gain power efficiency by eliminating massive high-precision data transfers between the NPU and off-chip memory. A simple extension of DDR4 SDRAM utilizing bank-group parallelism makes the operation designs of the processing-in-memory (PIM) module efficient in terms of hardware cost and performance. The experimental results show that the proposed architecture can improve the performance of the parameter-update phase of training by up to 40% and greatly reduce the memory-bandwidth requirement while posing only a minimal amount of overhead to the protocol and the DRAM area.
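    Chapter 2's starting point, stochastic computing, encodes a value p in [0,1] as the density of 1s in a random bitstream, so that multiplication reduces to a bitwise AND of two independent streams. The sketch below illustrates this encoding and its latency cost (one bit per cycle); the stream length is an assumed value.

```python
# Unipolar stochastic computing: encode p as a Bernoulli(p) bitstream;
# AND of two independent streams decodes to (approximately) the product.
# One output bit per cycle means n cycles of latency, the SC drawback the
# SPSC hybrid targets. Stream length is illustrative.
import random

def encode(p: float, n: int) -> list[int]:
    return [1 if random.random() < p else 0 for _ in range(n)]

def decode(stream: list[int]) -> float:
    return sum(stream) / len(stream)

random.seed(42)
n = 4096
a, b = 0.75, 0.40
product_stream = [x & y for x, y in zip(encode(a, n), encode(b, n))]
print(f"{decode(product_stream):.3f} ~ {a * b:.3f}")  # stochastic AND vs exact product
```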

    Development of methods for time efficient scatter correction and improved attenuation correction in time-of-flight PET/MR

    Get PDF
    The present work addresses two persistent issues of image reconstruction for time-of-flight (TOF) PET: acceleration of TOF scatter correction and improvement of emission-based attenuation correction. Because photon attenuation cannot be measured directly, improving attenuation correction by jointly reconstructing the activity and attenuation-coefficient distributions using the MLAA technique is of special relevance for PET/MR, while accelerating TOF scatter correction is of equal importance for TOF-capable PET/CT systems as well. To achieve the stated goals, in a first step the high-resolution PET image reconstruction THOR, previously developed in our group, was adapted to take advantage of the TOF information delivered by state-of-the-art PET systems. TOF-aware image reconstruction reduces image noise and improves convergence rate, both of which are highly desirable. Based on these adaptations, this thesis describes new developments for improving TOF scatter correction and MLAA reconstruction, and reports results obtained with the new algorithms on the Philips Ingenuity PET/MR jointly operated by the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and the University Hospital. A crucial requirement for quantitative TOF image reconstruction is TOF-aware scatter correction. The currently accepted reference method, the TOF extension of the single scatter simulation approach (TOF-SSS), was implemented as part of the TOF-related modifications of THOR. The major drawback of TOF-SSS is a 3-7-fold increase in the computation time required for the scatter estimation, compared to regular SSS, which in turn leads to a considerable slowdown of image reconstruction. This problem was addressed by developing and implementing a novel accelerated TOF scatter correction algorithm called ISA. This new algorithm proved to be a viable alternative to TOF-SSS and speeds up scatter correction by a factor of up to five. Images reconstructed using ISA are in excellent quantitative agreement with those obtained using TOF-SSS, while overall reconstruction time is reduced by a factor of two in whole-body investigations. This can be considered a major achievement, especially with regard to the use of advanced image reconstruction in a clinical context.
The second major topic of this thesis is a contribution to improved attenuation correction in PET/MR by means of MLAA reconstruction. First of all, knowledge of the actual time resolution in the considered PET scan is mandatory for a viable MLAA implementation. Since vendor-provided figures for the time resolution are not necessarily reliable and do not cover count-rate-dependent effects at all, a new algorithm was developed and implemented to determine the time resolution as a function of count rate. This algorithm (MLRES) is based on the maximum likelihood principle and makes it possible to determine the functional dependence of the time resolution of the Philips Ingenuity PET/MR on the given count rate and to integrate this information into THOR. Notably, the present work proves that the time resolution of the Ingenuity PET/MR can degrade by more than 250 ps over the clinically relevant range of count rates, compared to the vendor-provided figure of 550 ps, which is only reached in the limit of extremely low count rates. Based on the previously described developments, MLAA could be integrated into THOR. The resulting list-mode MLAA implementation is capable of deriving realistic, patient-specific attenuation maps. In particular, correct identification of osseous structures and air cavities was demonstrated, which is very difficult or even impossible with MR-based approaches to attenuation correction. Moreover, we confirmed that MLAA is capable of reducing the metal-induced artifacts that are otherwise present in MR-based attenuation maps. However, a detailed analysis of the obtained MLAA results revealed remaining problems regarding the stability of the global scaling as well as local cross-talk between the activity and attenuation estimates. Therefore, further work beyond the scope of this thesis will be necessary to address these remaining issues.
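    The practical impact of the count-rate-dependent time resolution can be seen from the TOF localization relation: a coincidence timing resolution of dt (FWHM) constrains the annihilation position along the line of response to a FWHM of c * dt / 2. The sketch below uses the 550 ps vendor figure quoted in the abstract and an assumed value about 250 ps worse.

```python
# TOF localization: timing resolution dt (FWHM) -> positional FWHM of
# c * dt / 2 along the line of response. 550 ps is the vendor figure
# cited in the abstract; 800 ps reflects the >250 ps degradation observed
# at clinically relevant count rates.
C_MM_PER_PS = 0.299792458  # speed of light in mm/ps

def tof_position_fwhm_mm(timing_fwhm_ps: float) -> float:
    return C_MM_PER_PS * timing_fwhm_ps / 2.0

for dt_ps in (550.0, 800.0):
    print(f"CTR {dt_ps:5.0f} ps -> localization FWHM {tof_position_fwhm_mm(dt_ps):6.1f} mm")
```

    A mismatched resolution value would distort the TOF weighting assumed during reconstruction, which is why MLRES estimates it as a function of count rate.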