27 research outputs found

    Multilevel comparison of deep learning models for function quantification in cardiovascular magnetic resonance: On the redundancy of architectural variations

    Background: Cardiac function quantification in cardiovascular magnetic resonance requires precise contouring of the heart chambers. This time-consuming task is increasingly being addressed by a plethora of ever more complex deep learning methods, yet only a small fraction of these have made their way from academia into clinical practice. In the quality assessment and control of medical artificial intelligence, the opaque reasoning and distinctive error patterns of neural networks meet an extraordinarily low tolerance for failure.
    Aim: The aim of this study is a multilevel analysis and comparison of the performance of three popular convolutional neural network (CNN) models for cardiac function quantification.
    Methods: U-Net, FCN, and MultiResUNet were trained for the segmentation of the left and right ventricles on short-axis cine images of 119 patients from clinical routine. The training pipeline and hyperparameters were kept constant to isolate the influence of network architecture. CNN performance was evaluated against expert segmentations for 29 test cases at the contour level and in terms of quantitative clinical parameters. The multilevel analysis included a breakdown of results by slice position, visualization of segmentation deviations, and linkage of volume differences to segmentation metrics via correlation plots for qualitative analysis.
    Results: All models correlated strongly with the expert on quantitative clinical parameters (r_z' = 0.978, 0.977, and 0.978 for U-Net, FCN, and MultiResUNet, respectively). The MultiResUNet significantly underestimated ventricular volumes and left ventricular myocardial mass. Segmentation difficulties and failures clustered in basal and apical slices for all CNNs, with the largest volume differences in the basal slices (mean absolute error per slice: 4.2 ± 4.5 ml for basal, 0.9 ± 1.3 ml for midventricular, 0.9 ± 0.9 ml for apical slices). Results for the right ventricle showed higher variance and more outliers than for the left ventricle. Intraclass correlation for clinical parameters was excellent (≥ 0.91) among the CNNs.
    Conclusion: Modifications to the CNN architecture were not critical to the quality of errors on our dataset. Despite good overall agreement with the expert, errors accumulated in basal and apical slices for all models.
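
    The per-slice breakdown above rests on simple bookkeeping: each short-axis mask contributes its segmented in-plane area times the slice thickness to the ventricular volume, and the slice-wise error is the difference between CNN and expert volumes. A minimal NumPy sketch of this computation (not the study's code; the mask sizes, pixel spacing, slice thickness, and the basal/mid/apical split are illustrative assumptions):

        import numpy as np

        def slice_volumes_ml(masks, px_mm=1.4, slice_mm=8.0):
            # masks: (n_slices, H, W) binary segmentation of one ventricle
            voxel_ml = (px_mm * px_mm * slice_mm) / 1000.0   # mm^3 per voxel -> ml
            return masks.reshape(masks.shape[0], -1).sum(axis=1) * voxel_ml

        rng = np.random.default_rng(0)
        expert = (rng.random((9, 64, 64)) > 0.7).astype(np.uint8)   # stand-in masks
        cnn = (rng.random((9, 64, 64)) > 0.7).astype(np.uint8)

        err = np.abs(slice_volumes_ml(cnn) - slice_volumes_ml(expert))
        # report the mean absolute error per slice position, as in the study
        for name, sl in [("basal", slice(0, 3)), ("mid", slice(3, 6)), ("apical", slice(6, 9))]:
            print(f"{name}: {err[sl].mean():.2f} +/- {err[sl].std():.2f} ml")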

    Learning Regularization Parameter-Maps for Variational Image Reconstruction using Deep Neural Networks and Algorithm Unrolling

    We introduce a method for the fast estimation of data-adapted, spatio-temporally dependent regularization parameter-maps for variational image reconstruction, focusing on total variation (TV) minimization. Our approach is inspired by recent developments in algorithm unrolling with deep neural networks (NNs) and relies on two distinct sub-networks. The first sub-network estimates the regularization parameter-map from the input data. The second sub-network unrolls T iterations of an iterative algorithm that approximately solves the corresponding TV-minimization problem with the previously estimated regularization parameter-map. The overall network is trained end-to-end in a supervised fashion on pairs of clean and corrupted data, crucially without access to labels for the optimal regularization parameter-maps. We prove consistency of the unrolled scheme by showing that the unrolled energy functional used for supervised learning Γ-converges, as T tends to infinity, to the corresponding functional that incorporates the exact solution map of the TV-minimization problem. We apply and evaluate our method on a variety of large-scale and dynamic imaging problems for which the automatic computation of such parameters has so far been challenging: 2D dynamic cardiac MRI reconstruction, quantitative brain MRI reconstruction, low-dose CT, and dynamic image denoising. The proposed method consistently improves on TV reconstructions with scalar regularization parameters, and the obtained parameter-maps adapt well to each imaging problem and dataset, preserving detailed features. Although the choice of the regularization parameter-maps is data-driven and based on NNs, the proposed algorithm is entirely interpretable, since it inherits the properties of the iterative reconstruction method from which the network is implicitly defined.
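
    To illustrate the two-sub-network pattern, here is a minimal PyTorch sketch (not the authors' implementation): a small CNN predicts a pixelwise parameter-map lam, and T unrolled iterations of a standard primal-dual (PDHG) scheme, which stands in here for whichever iterative solver one unrolls, address the weighted anisotropic TV denoising problem min_x 0.5*||x - y||^2 + ||lam * grad(x)||_1. The network sizes and step sizes are assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def grad(x):   # forward differences; x: (B,1,H,W) -> (B,2,H,W)
            gx = F.pad(x[..., 1:] - x[..., :-1], (0, 1))
            gy = F.pad(x[:, :, 1:] - x[:, :, :-1], (0, 0, 0, 1))
            return torch.cat([gx, gy], dim=1)

        def div(p):    # negative adjoint of grad; p: (B,2,H,W) -> (B,1,H,W)
            px, py = p[:, :1], p[:, 1:]
            return (px - F.pad(px[..., :-1], (1, 0))) + (py - F.pad(py[:, :, :-1], (0, 0, 1, 0)))

        class UnrolledTV(nn.Module):
            def __init__(self, T=10):
                super().__init__()
                self.T = T
                self.lam_net = nn.Sequential(      # sub-network 1: parameter-map
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1), nn.Softplus())

            def forward(self, y):                  # sub-network 2: T unrolled iterations
                lam = self.lam_net(y)              # pixelwise regularization strength
                x, xb = y.clone(), y.clone()
                p = torch.zeros_like(grad(y))
                sigma = tau = 0.35                 # sigma * tau * ||grad||^2 <= 1
                for _ in range(self.T):
                    p = torch.clamp(p + sigma * grad(xb), min=-lam, max=lam)  # dual step + projection
                    x_new = (x + tau * div(p) + tau * y) / (1 + tau)          # prox of 0.5||x - y||^2
                    xb, x = 2 * x_new - x, x_new                              # extrapolation
                return x

        model = UnrolledTV(T=10)
        y = torch.rand(2, 1, 64, 64)               # corrupted input (stand-in data)
        target = torch.rand(2, 1, 64, 64)          # paired clean image
        loss = F.mse_loss(model(y), target)        # supervised end-to-end training step
        loss.backward()

    The loss compares only the reconstruction with the clean target, which is exactly why no ground-truth parameter-maps are needed.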

    Unrolled three-operator splitting for parameter-map learning in low dose X-ray CT reconstruction

    We propose a method for the fast and automatic estimation of spatially dependent regularization maps for total variation (TV)-based tomography reconstruction. The estimation relies on two distinct sub-networks: the first estimates the regularization parameter-map from the input data, while the second unrolls T iterations of the Primal-Dual Three-Operator Splitting (PD3O) algorithm, which approximately solves the corresponding TV-minimization problem with the previously estimated regularization parameter-map. The overall network is then trained end-to-end in a supervised fashion on pairs of clean and corrupted data, crucially without access to labels for the optimal regularization parameter-maps.
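
    In the same spirit, a sketch of T unrolled iterations of one common form of the PD3O update for min_x 0.5*||R x - y||^2 + (indicator of x >= 0) + ||lam * grad(x)||_1 (again not the paper's code): a fixed blur stands in for the CT projector R, the data term is handled by its gradient, the nonnegativity constraint by its prox, and the weighted TV term through its dual variable. All operator and step-size choices are illustrative assumptions; grad/div are the finite-difference helpers from the previous sketch, repeated for completeness.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def grad(x):   # forward differences; x: (B,1,H,W) -> (B,2,H,W)
            return torch.cat([F.pad(x[..., 1:] - x[..., :-1], (0, 1)),
                              F.pad(x[:, :, 1:] - x[:, :, :-1], (0, 0, 0, 1))], dim=1)

        def div(p):    # negative adjoint of grad; p: (B,2,H,W) -> (B,1,H,W)
            px, py = p[:, :1], p[:, 1:]
            return (px - F.pad(px[..., :-1], (1, 0))) + (py - F.pad(py[:, :, :-1], (0, 0, 1, 0)))

        class UnrolledPD3O(nn.Module):
            def __init__(self, T=8):
                super().__init__()
                self.T = T
                self.lam_net = nn.Sequential(        # sub-network 1: parameter-map
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1), nn.Softplus())
                self.k = torch.ones(1, 1, 5, 5) / 25.0   # blur kernel: stand-in for R

            def R(self, x):                          # symmetric kernel, so R^T = R
                return F.conv2d(x, self.k, padding=2)

            def forward(self, y):                    # sub-network 2: T PD3O iterations
                lam = self.lam_net(y)
                x = self.R(y)                        # crude initial reconstruction
                s = torch.zeros_like(grad(x))        # dual variable for the TV term
                gamma, delta = 0.5, 0.25             # gamma < 2/L, gamma*delta*||grad||^2 <= 1
                df = lambda u: self.R(self.R(u) - y)       # gradient of the data term
                g_old = df(x)
                for _ in range(self.T):
                    x_new = torch.clamp(x - gamma * g_old + gamma * div(s), min=0.0)  # prox of x >= 0
                    g_new = df(x_new)
                    s = s + delta * grad(2 * x_new - x + gamma * (g_old - g_new))
                    s = torch.clamp(s, min=-lam, max=lam)  # prox of h*: project onto |s| <= lam
                    x, g_old = x_new, g_new
                return x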

    Hypertrophic cardiomyopathy is characterized by alterations of the mitochondrial calcium uniporter complex proteins: insights from patients with aortic valve stenosis versus hypertrophic obstructive cardiomyopathy

    Introduction: Hypertrophy of the cardiac septum is caused either by aortic valve stenosis (AVS) or by congenital hypertrophic obstructive cardiomyopathy (HOCM). Because they induce cardiac remodeling, these pathologies may promote an arrhythmogenic substrate with associated malignant ventricular arrhythmias and may lead to heart failure. While altered calcium (Ca2+) handling appears to be a key player in the pathogenesis, the role of mitochondrial calcium handling had not been investigated in these patients to date.
    Methods: Cardiac septal samples were collected from patients undergoing myectomy during cardiac surgery for excessive septal hypertrophy and/or aortic valve replacement caused by AVS or HOCM. Septal specimens were matched with cardiac tissue obtained from post-mortem controls without cardiac disease (Ctrl).
    Results and discussion: Patient characteristics and most echocardiographic parameters did not differ between AVS and HOCM; the diastolic interventricular septum thickness (IVSd) was greatest in HOCM patients. Histological and molecular analyses showed a trend towards a higher fibrotic burden in both pathologies compared with Ctrl. Most notably, proteins associated with the mitochondrial Ca2+ uniporter (MCU) complex were altered in both forms of left ventricular hypertrophy (LVH): expression of the MCU complex subunits MCU and MICU1 was markedly increased, especially in AVS, whereas PRMT-1, UCP-2, and UCP-3 declined with hypertrophy. These changes were accompanied by increased expression of the Ca2+-uptaking SERCA2a in AVS (p = 0.0013), but not in HOCM, compared with healthy tissue. Our data from human AVS and HOCM specimens indicate major alterations in the expression of the mitochondrial calcium uniporter complex and associated proteins. Thus, in cardiac septal hypertrophies, besides modifications of cytosolic calcium handling, impaired mitochondrial Ca2+ uptake might be a key player in disease progression.

    Inspector Gadget: automated extraction of proprietary gadgets from malware binaries

    No full text

    Extending Mondrian memory protection

    No full text
    Most modern operating systems implement some form of memory protection for user processes: access permissions determine whether a region of memory allocated to a process can be read, written, or executed. Mondrian memory protection extends this traditional scheme and allows fine-grained permission settings. Instead of setting access permissions at page granularity, it supports different access permissions for individual words. However, this protection scheme is still limited to two permission bits with pre-defined semantics, which is not sufficient to implement more complex security techniques such as a race condition detection system.
    This thesis proposes an extension to the simple Mondrian protection scheme that provides more flexibility to user programs and the operating system. Based on our extended architecture, we implement mechanisms to protect sensitive data structures on the heap and on the stack. Moreover, we present the implementation of a technique to detect race conditions. Our experiments demonstrate that the system provides the expected protection and the ability to detect races with reasonable overhead. Furthermore, our results show that even large systems such as the GNU C Library and the Apache web server contain problems related to race conditions.
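
    The core idea, word-granular permission bits whose meaning software may define, can be mimicked in a toy Python simulation (the thesis describes a hardware/OS architecture, not Python code; the bit names and addresses below are invented for illustration):

        READ, WRITE, EXEC, LOCKED = 1, 2, 4, 8    # LOCKED: an extra, freely definable bit

        class PermissionTable:
            def __init__(self, default=READ | WRITE):
                self.default = default
                self.words = {}                   # word-aligned address -> permission bits

            def set(self, addr, bits):
                self.words[addr & ~0x3] = bits    # permissions per 4-byte word, not per page

            def check(self, addr, required):
                bits = self.words.get(addr & ~0x3, self.default)
                if bits & required != required:
                    raise PermissionError(f"access at 0x{addr:x} needs {required:#x}, has {bits:#x}")

        table = PermissionTable()
        table.set(0x1000, READ)                   # e.g. guard a sensitive heap metadata word
        table.check(0x1002, READ)                 # load from the same word: allowed
        try:
            table.check(0x1000, WRITE)            # store into the guarded word: trapped
        except PermissionError as e:
            print("trap:", e)

    A freely definable bit such as LOCKED is what the fixed two-bit scheme lacks: a race condition detector can require that words tagged with it are only touched while the owning lock is held.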

    Behavior based malware analysis and detection

    No full text
    Malware is one of the most serious security threats on the Internet today. In fact, most Internet problems such as spam emails and denial-of-service attacks have malware as their underlying cause: computers infected with malware are networked together to form botnets, and many attacks are launched using these malicious, attacker-controlled networks. With the increasing significance of malware in Internet attacks, much research has concentrated on developing techniques to mitigate malicious code. Unfortunately, current host-based detection approaches (i.e., anti-virus software) suffer from ineffective detection models. These models concentrate on the features of a specific malware instance and are often easily evaded by obfuscation or polymorphism. Likewise, detectors that check for the presence of a sequence of system calls exhibited by a malware instance can be evaded by system call reordering. To address these shortcomings, several dynamic detection approaches have been proposed that aim to identify the behavior exhibited by a malware family. Although promising, these approaches are too slow to be used as real-time detectors on the end host, and they often require cumbersome virtual machine technology.
    In the first part of this thesis, we propose a novel malware detection approach that is both effective and efficient, and thus can be used to replace or complement traditional anti-virus software at the end host. Our approach first analyzes a malware program in a controlled environment to build a model that characterizes its behavior. Such models describe the information flows between the system calls essential to the malware's mission and therefore cannot be easily evaded by simple obfuscation or polymorphic techniques. We then extract the program slices responsible for these information flows. For detection, we execute the slices to match our models against the runtime behavior of an unknown program. Our experiments show that this approach can effectively detect running malicious code on an end user's host with small overhead.
    Another important component in the fight against malicious software is the analysis of malware samples: only if an analyst understands the behavior of a given sample can she design appropriate countermeasures. Today, key algorithms, such as the downloading of encoded updates or the generation of new DNS domains for command-and-control purposes, are frequently analyzed by hand. In the second part, we present a novel approach to automatically extract, from a given binary executable, the algorithm related to a certain activity of the sample. We isolate and extract the relevant instructions and generate a so-called gadget, i.e., a stand-alone component that encapsulates a specific behavior. By including all relevant code and data, a gadget can perform its task autonomously, executed in a self-contained fashion independently of the original program. Gadgets are thus useful entities for practitioners, as understanding a certain activity embedded in a binary sample (e.g., its update function) is still largely a manual and complex task. Our evaluation with several real-world samples demonstrates that the approach is versatile and useful in practice.
    Both systems, the malware detection technique and the gadget extraction alike, rely heavily on dynamic analysis of a sample. However, the past has shown that whenever an anti-malware solution becomes popular, malware authors promptly react and modify their programs to evade it; recently, they have increasingly started to create malicious code that can evade dynamic analysis. The last part of this thesis therefore concentrates on such evasion techniques. One recent form of evasion is stalling code, which is typically executed before any malicious behavior: the attacker's aim is to delay the execution of the malicious activity long enough that an automated dynamic analysis system fails to observe it. This work presents the first approach to detect and mitigate malicious stalling code, by ignoring or skipping the blocking code regions, and to ensure forward progress within the amount of time allocated for the analysis of a sample. We built a prototype implementation, HASTEN, for our dynamic analysis system ANUBIS. Experimental results show that the system works well in practice and is able to detect additional malicious behavior in real-world malware samples.
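
    The detection idea of the first part, matching data flows between system calls rather than raw call sequences, reduces to a small containment check. A toy Python sketch (not the thesis' scanner; the system call names and the flat edge representation are invented for illustration):

        # a behavior model: data-flow edges between the system calls essential
        # to the malware's mission (here, a dropper writing and launching a payload)
        model = {("NtCreateFile", "NtWriteFile"),      # file handle flows from create to write
                 ("NtWriteFile", "NtCreateProcess")}   # written path flows into execution

        def matches(model_edges, observed_edges):
            # only the data dependencies must be present; reordering independent
            # calls or inserting unrelated ones does not remove these edges
            return model_edges <= observed_edges

        observed = {("NtOpenKey", "NtSetValueKey"),    # unrelated registry noise
                    ("NtCreateFile", "NtWriteFile"),
                    ("NtWriteFile", "NtCreateProcess")}

        print(matches(model, observed))                # True -> behavior model matched

    Because only the flows must be present, the system call reordering that defeats sequence-based detectors leaves such a match unaffected.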