    Spin-orbit torque induced electrical switching of antiferromagnetic MnN

    Full text link
    Electrical switching and readout of antiferromagnets make it possible to exploit the unique properties of antiferromagnetic materials in nanoscopic electronic devices. Here we report experiments on the spin-orbit torque induced electrical switching of a polycrystalline, metallic antiferromagnet with low anisotropy and a high NĂ©el temperature. We demonstrate the switching in a Ta / MnN / Pt trilayer system deposited by (reactive) magnetron sputtering. The dependence of the switching amplitude, efficiency, and relaxation on the MnN film thickness, sample temperature, and current density is studied. Our findings are consistent with a thermal activation model and largely resemble previous measurements on CuMnAs and Mnâ‚‚Au, which exhibit similar switching characteristics due to an intrinsic spin-orbit torque. Comment: 7 pages, 5 figures
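    As context for the thermal activation model mentioned above, a standard NĂ©el-Arrhenius sketch in generic notation (not necessarily the paper's exact formulation): the mean time for a domain to overcome an energy barrier E_B at temperature T is

        \tau = \tau_0 \exp\!\left( \frac{E_B}{k_B T} \right)

    where \tau_0 is an attempt time and k_B the Boltzmann constant; a current-induced spin-orbit torque effectively lowers E_B, so the switching probability rises with current density and sample temperature.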

    "I must become something else": IdentitÀtskonstruktion und die Superheldenrolle in der Fernsehserie Arrow

    Get PDF
    This thesis deals with how the protagonist of the CW superhero TV series Arrow constructs his identity. Arrow tells the story of Oliver Queen, who returns home after being shipwrecked and assumed dead to start working as the vigilante superhero Arrow. As a character leading a life split in two – a private and a superhero life – Oliver Queen has difficulties answering the question of who he really is, especially after his time as a castaway and his connection to various, more or less criminal organizations for which he had to take on different identities. The existing research on superheroes shows that there are several explanations for how and why superheroes choose their identity in certain situations, but there is no consensus on how to determine a real identity. The first part of the thesis examines the theoretical background on identity construction, the role of the mask, and the definition of a superhero, first in general terms and then applied to superheroes and Arrow, in order to point out the essential factors in the identity work of superheroes. Using JĂŒrgen Straub's theory of personal identity, which emphasizes the importance of narration, the following chapters show how the protagonist manages to construct a coherent identity for himself. In the second part, I analyze the protagonist's use of narration in the form of monologues and flashbacks in the context of his identity construction over the course of several years, as well as his costume, which separates Oliver Queen from the superhero (Green) Arrow, the different roles and names he takes on, and his relationships with other people. By telling his stories, remembering past mistakes, and then acting differently in the present, he is able to combine his private role with his superhero role.

    The ethnic diversity and collective action survey (EDCAS): technical report

    Full text link
    "The EDCA-Survey is a large scale CATI telephone survey conducted in three countries: Germany, France and the Netherlands. The survey was designed to test theoretical arguments on the effects of ethnic diversity on social capital and civic engagement. This aim demands for a sophisticated design. The survey is not representative for the entire populations of Germany, France or the Netherlands. Instead, the basic population is the population over the age of 18 in 74 selected regions in Germany, France and the Netherlands that have sufficient language skills to conduct an interview in the language of their country of residence, or in the case of the oversample of people with Turkish migration background to conduct the interview in Turkish. The aim of the survey is to enable the comparison of these 74 regions, which vary on contextual characteristics of interest. In addition, the EDCA-Survey includes one oversample of migrants in general (24%) and an additional second oversample of Turkish migrants in particular (14%). The oversampling is the same within each of the 74 regions, each of which has about 100 observations and seven specially chosen cities even 500. This survey design is an important characteristic of the EDCA-Survey and distinguishes it from other available data. This is important since one aim of the EDCA-Survey is to enable the aggregation of contextual characteristics from the survey itself. Overall, 10.200 completed interviews were conducted - 7500 in Germany, 1400 in France and 1300 in the Netherlands." (author's abstract)"Der EDCA-Survey ist eine CATI gestĂŒtzte Telefonumfrage, die in Deutschland, Frankreich und den Niederlanden durchgefĂŒhrt wurde. Die Umfrage wurde mit dem Ziel erhoben, Effekte ethnischer DiversitĂ€t auf Sozialkapital und Zivilengagement zu untersuchen. Dieses Vorhaben setzt ein komplexes Surveydesign voraus. So ist die Umfrage nicht reprĂ€sentativ fĂŒr die Bevölkerungen von Deutschland, Frankreich und den Niederlanden. Stattdessen bildet die Grundgesamtheit die Bevölkerung von 74 ausgewĂ€hlten Regionen der drei LĂ€nder, die ĂŒber die Sprachfertigkeit verfĂŒgen, ein Interview in der Landessprache oder gegebenenfalls auf TĂŒrkisch zu fĂŒhren. Ziel ist der Vergleich dieser 74 Regionen, die sich hinsichtlich verschiedener Charakteristika unterscheiden. DarĂŒber hinaus weist der EDCA-Survey eine ĂŒberproportionale Stichprobe von Personen mit Migrationshintergrund (24%) und eine zweite ĂŒberproportionale Stichprobe von Personen mit tĂŒrkischem Migrationshintergrund (14%) auf. Diese ĂŒberproportionale Stichprobe wurde in jeder der 74 Regionen gezogen, in denen jeweils ca. 100 Interviews durchgefĂŒhrt wurden. In sieben speziell ausgesuchten Regionen wurden 500 Interviews gefĂŒhrt. Dieses Surveydesign ist ein zentrales Charakteristikum des EDCA-Surveys und ermöglicht die Aggregation von Kontextmerkmalen aus dem Survey. Insgesamt wurden 10.200 vollstĂ€ndige Interviews erhoben – 7500 in Deutschland, 1400 in Frankreich und 1300 in den Niederlanden." (Autorenreferat

    Trainable Joint Bilateral Filters for Enhanced Prediction Stability in Low-dose CT

    Get PDF
    Low-dose computed tomography (CT) denoising algorithms aim to enable reduced patient dose in routine CT acquisitions while maintaining high image quality. Recently, deep learning (DL)-based methods were introduced, outperforming conventional denoising algorithms on this task due to their high model capacity. However, for the transition of DL-based denoising to clinical practice, these data-driven approaches must generalize robustly beyond the seen training data. We therefore propose a hybrid denoising approach consisting of a set of trainable joint bilateral filters (JBFs) combined with a convolutional DL-based denoising network that predicts the guidance image. Our proposed denoising pipeline combines the high model capacity enabled by DL-based feature extraction with the reliability of the conventional JBF. The pipeline's ability to generalize is demonstrated by training on abdomen CT scans without metal implants and testing on abdomen scans with metal implants as well as on head CT data. When embedding two well-established DL-based denoisers (RED-CNN/QAE) in our pipeline, the denoising performance is improved by 10%/82% (RMSE) and 3%/81% (PSNR) in regions containing metal, and by 6%/78% (RMSE) and 2%/4% (PSNR) on head CT data, compared to the respective vanilla model. In conclusion, the proposed trainable JBFs limit the error bound of deep neural networks and thereby facilitate the applicability of DL-based denoisers in low-dose CT pipelines.
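    To make the core operation concrete, below is a minimal NumPy sketch of a joint bilateral filter: spatial weights depend on pixel distance, while range weights are computed on a separate guidance image (in the setting above, the CNN prediction). The function and parameter names are illustrative, not the authors' implementation, and the paper's version is additionally trainable through its hyperparameters.

        import numpy as np

        def joint_bilateral_filter(img, guide, sigma_spatial=2.0, sigma_range=0.1, radius=3):
            """Smooth `img` while preserving edges found in `guide`."""
            h, w = img.shape
            out = np.empty_like(img, dtype=float)
            # Spatial Gaussian over the (2*radius+1)^2 filter window.
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_spatial ** 2))
            pad_img = np.pad(img, radius, mode="reflect")
            pad_guide = np.pad(guide, radius, mode="reflect")
            for i in range(h):
                for j in range(w):
                    patch = pad_img[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    gpatch = pad_guide[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    # Range weights come from the guidance image, not the noisy input.
                    rng = np.exp(-((gpatch - guide[i, j]) ** 2) / (2 * sigma_range ** 2))
                    weights = spatial * rng
                    out[i, j] = np.sum(weights * patch) / np.sum(weights)
            return out

    The nested loops keep the sketch readable; a practical implementation would vectorize the window computation or run on GPU.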

    A gradient-based approach to fast and accurate head motion compensation in cone-beam CT

    Full text link
    Cone-beam computed tomography (CBCT) systems, with their portability, present a promising avenue for direct point-of-care medical imaging, particularly in critical scenarios such as acute stroke assessment. However, the integration of CBCT into clinical workflows faces challenges, primarily linked to long scan times, which result in patient motion during scanning and degrade image quality in the reconstructed volumes. This paper introduces a novel approach to CBCT motion estimation using a gradient-based optimization algorithm, which leverages generalized derivatives of the backprojection operator for cone-beam CT geometries. Building on that, a fully differentiable target function is formulated which grades the quality of the current motion estimate in reconstruction space. We drastically accelerate motion estimation, yielding a 19-fold speed-up compared to existing methods. Additionally, we investigate the architecture of networks used for quality metric regression and propose predicting voxel-wise quality maps, favoring autoencoder-like architectures over contracting ones. This modification improves gradient flow, leading to more accurate motion estimation. The presented method is evaluated through realistic experiments on head anatomy. It achieves a reduction in reprojection error from an initial average of 3 mm to 0.61 mm after motion compensation and consistently demonstrates superior performance compared to existing approaches. The analytic Jacobian for the backprojection operation, which is at the core of the proposed method, is made publicly available. In summary, this paper contributes to the advancement of CBCT integration into clinical workflows by proposing a robust motion estimation approach that enhances efficiency and accuracy, addressing critical challenges in time-sensitive scenarios. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
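    Schematically, the estimation reduces to a first-order optimization loop over per-view rigid motion parameters. In the sketch below, `backproject` and `quality_net` are hypothetical stand-ins for a differentiable cone-beam backprojector and the voxel-wise quality-map regressor; neither name is taken from the paper's code.

        import torch

        def estimate_motion(projections, geometry, backproject, quality_net, steps=100):
            """Fit one rigid 6-DoF correction (3 rotations, 3 translations) per view."""
            n_views = projections.shape[0]
            motion = torch.zeros(n_views, 6, requires_grad=True)
            opt = torch.optim.Adam([motion], lr=1e-3)
            for _ in range(steps):
                opt.zero_grad()
                # Differentiable backprojection under the current motion estimate.
                volume = backproject(projections, geometry, motion)
                quality = quality_net(volume)  # predicted voxel-wise quality map
                loss = -quality.mean()         # maximize mean predicted quality
                loss.backward()                # gradients flow through the backprojector
                opt.step()
            return motion.detach()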

    Calibration by differentiation – Self-supervised calibration for X-ray microscopy using a differentiable cone-beam reconstruction operator

    Get PDF
    High-resolution X-ray microscopy (XRM) is gaining interest for biological investigations of extremely small-scale structures. XRM imaging of bones in living mice could provide new insights into the emergence and treatment of osteoporosis by observing osteocyte lacunae, which are holes in the bone a few micrometres in size. Imaging living animals at that resolution, however, is extremely challenging and requires very sophisticated data processing to convert the raw XRM detector output into reconstructed images. This paper presents an open-source, differentiable reconstruction pipeline for XRM data which analytically computes the final image from the raw measurements. In contrast to most proprietary reconstruction software, it offers the user full control over each processing step and, additionally, makes the entire pipeline deep learning compatible by ensuring differentiability. This allows fitting trainable modules both before and after the actual reconstruction step in a purely data-driven way using the gradient-based optimizers of common deep learning frameworks. The value of such differentiability is demonstrated by calibrating the parameters of a simple cupping correction module operating on the raw projection images, using only a self-supervisory quality metric based on the reconstructed volume and no further calibration measurements. The retrospective calibration directly improves image quality, as it avoids cupping artefacts and decreases the difference in grey values between outer and inner bone by 68–94%. Furthermore, it makes the reconstruction process entirely independent of the XRM manufacturer and paves the way to explore modern deep learning reconstruction methods for arbitrary XRM and, potentially, other flat-panel computed tomography systems. This exemplifies how differentiable reconstruction can be leveraged in the context of XRM and is, hence, an important step towards the goal of reducing the resolution limit of in vivo bone imaging to the single-micrometre domain.
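    A schematic sketch of the self-supervised calibration idea, assuming a differentiable reconstruction operator `recon` and a scalar self-supervisory metric `quality_loss` computed on the volume; both are hypothetical stand-ins, not the published pipeline:

        import torch

        class CuppingCorrection(torch.nn.Module):
            """Simple polynomial remapping of projection intensities."""
            def __init__(self, degree=3):
                super().__init__()
                # One coefficient per polynomial order >= 2; zeros = identity mapping.
                self.coeffs = torch.nn.Parameter(torch.zeros(degree - 1))

            def forward(self, proj):
                out = proj
                for k, c in enumerate(self.coeffs):
                    out = out + c * proj ** (k + 2)  # add higher-order terms
                return out

        def calibrate(projections, recon, quality_loss, steps=200):
            correction = CuppingCorrection()
            opt = torch.optim.Adam(correction.parameters(), lr=1e-4)
            for _ in range(steps):
                opt.zero_grad()
                volume = recon(correction(projections))  # differentiable reconstruction
                loss = quality_loss(volume)  # self-supervised metric, no ground truth
                loss.backward()
                opt.step()
            return correction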

    Ultralow-parameter denoising: trainable bilateral filter layers in computed tomography

    Get PDF
    Background: Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT resolution can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms. Purpose: Most data-driven denoising techniques are based on deep neural networks and therefore contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms that achieve state-of-the-art performance helps to minimize radiation dose while maintaining data integrity. Methods: This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains, such as raw detector data and reconstructed volume, using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design. Results: Although only using three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on X-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets. Conclusions: Due to the extremely low number of trainable parameters with well-defined effect, prediction reliability and data integrity are guaranteed at any time in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
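    For reference, a filter layer of this kind restricts itself to the classical bilateral filter (standard textbook form, not the paper's exact notation); the three spatial widths \sigma_x, \sigma_y, \sigma_z and the intensity range width \sigma_r are exactly the four trainable parameters:

        \hat{I}(p) = \frac{1}{w(p)} \sum_{q \in \mathcal{N}(p)} \exp\!\left( -\sum_{d \in \{x,y,z\}} \frac{(p_d - q_d)^2}{2 \sigma_d^2} \right) \exp\!\left( -\frac{(I(p) - I(q))^2}{2 \sigma_r^2} \right) I(q)

    where w(p) is the sum of the same weights over the neighbourhood \mathcal{N}(p), ensuring normalization.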

    On the Benefit of Dual-domain Denoising in a Self-supervised Low-dose CT Setting

    Full text link
    Computed tomography (CT) is routinely used for three-dimensional non-invasive imaging. Numerous data-driven image denoising algorithms were proposed to restore image quality in low-dose acquisitions. However, considerably less research investigates methods that already intervene in the raw detector data, due to limited access to suitable projection data or correct reconstruction algorithms. In this work, we present an end-to-end trainable CT reconstruction pipeline that contains denoising operators in both the projection and the image domain, which are optimized simultaneously without requiring ground-truth high-dose CT data. Our experiments demonstrate that including an additional projection denoising operator improves the overall denoising performance by 82.4-94.1%/12.5-41.7% (PSNR/SSIM) on abdomen CT and 1.5-2.9%/0.4-0.5% (PSNR/SSIM) on XRM data relative to the low-dose baseline. We make our entire helical CT reconstruction framework publicly available; it contains a raw projection rebinning step to render helical projection data suitable for differentiable fan-beam reconstruction operators and end-to-end learning. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
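    The dual-domain structure can be sketched as a module chaining the two denoisers around a differentiable reconstruction operator; all names below are illustrative placeholders rather than the released framework's API.

        import torch

        class DualDomainPipeline(torch.nn.Module):
            """Projection-domain denoiser -> differentiable recon -> image-domain denoiser."""
            def __init__(self, proj_denoiser, recon_op, img_denoiser):
                super().__init__()
                self.proj_denoiser = proj_denoiser  # operates on raw/rebinned projections
                self.recon_op = recon_op            # differentiable fan-beam reconstruction
                self.img_denoiser = img_denoiser    # operates on the reconstructed volume

            def forward(self, projections):
                proj_clean = self.proj_denoiser(projections)
                volume = self.recon_op(proj_clean)  # gradients flow through the recon step
                return self.img_denoiser(volume)

    Because the reconstruction operator is differentiable, a single image-domain loss can train both denoisers jointly, which is what allows the projection operator to be optimized without projection-domain ground truth.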

    Exploring Epipolar Consistency Conditions for Rigid Motion Compensation in In-vivo X-ray Microscopy

    Full text link
    Intravital X-ray microscopy (XRM) in preclinical mouse models is of vital importance for the identification of microscopic structural pathological changes in the bone which are characteristic of osteoporosis. The complexity of this method stems from the requirement for high-quality 3D reconstructions of the murine bones. However, respiratory motion and muscle relaxation lead to inconsistencies in the projection data which result in artifacts in uncompensated reconstructions. Motion compensation using epipolar consistency conditions (ECC) has previously shown good performance in clinical CT settings. Here, we explore whether such algorithms are suitable for correcting motion-corrupted XRM data. Different rigid motion patterns are simulated and the quality of the motion-compensated reconstructions is assessed. The method is able to restore microscopic features for out-of-plane motion, but artifacts remain for more realistic motion patterns including all six degrees of freedom of rigid motion. Therefore, ECC is valuable for the initial alignment of the projection data, followed by further fine-tuning of motion parameters using a reconstruction-based method.
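    For background, the Grangeat-based epipolar consistency condition from the CT literature (standard form, not the paper's specific notation) states that two projections a and b must assign the same derivative to every epipolar plane \kappa they share:

        \frac{\partial}{\partial s} \rho_a(\kappa) = \frac{\partial}{\partial s} \rho_b(\kappa)

    where \rho_i(\kappa) is the 2D Radon intermediate value of projection i for plane \kappa and s parametrizes the plane's signed distance; a consistency metric sums the squared differences of these derivatives over all projection pairs and sampled planes, and rigid motion parameters are optimized to minimize it.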
    • 

    corecore