    Irreversible transformation of ferromagnetic ordered stripe domains in single-shot IR pump - resonant X-ray scattering probe experiments

    The evolution of a magnetic domain structure upon excitation by an intense femtosecond infrared (IR) laser pulse has been investigated using single-shot, time-resolved resonant X-ray scattering at the X-ray free-electron laser LCLS. A well-ordered stripe domain pattern in a thin CoPd alloy film served as the prototype magnetic domain structure for this study. The fluence of the IR pump pulse was sufficient to almost completely quench the magnetization during the ultrafast demagnetization process, which takes place within the first few hundred femtoseconds after excitation. On longer time scales, this excitation gave rise to irreversible transformations of the magnetic domain structure. Under our specific experimental conditions, it took about 2 nanoseconds before the magnetization started to recover. After about 5 nanoseconds, the previously ordered stripe domain structure had evolved into a disordered labyrinth domain structure. Surprisingly, after about 7 nanoseconds we observe a partially ordered stripe domain structure reoriented along a new direction. It is in this domain structure that the sample's magnetization stabilizes, as revealed by scattering patterns recorded long after the initial pump-probe cycle. Using micromagnetic simulations, we explain this observation by changes of the magnetic anisotropy that accompany heat dissipation in the film.
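    As a rough illustration of the mechanism the simulations point to, the sketch below integrates the Landau-Lifshitz-Gilbert (LLG) equation for a single macrospin whose uniaxial anisotropy recovers as the film cools. The exponential anisotropy recovery and all parameter values are assumptions made for illustration, not taken from the paper.

    ```python
    # Macrospin LLG sketch: a moment relaxing under a uniaxial anisotropy
    # whose strength recovers as the film cools. Illustrative values only.
    import numpy as np

    GAMMA = 1.76e11  # gyromagnetic ratio (rad s^-1 T^-1)
    ALPHA = 0.5      # Gilbert damping, large so relaxation is fast
    EASY_AXIS = np.array([0.0, 0.0, 1.0])

    def anisotropy_field(m, t, h_k0=0.5, tau_cool=2e-9):
        """Uniaxial anisotropy field (T), assumed quenched by laser heating
        and recovering exponentially on a ~2 ns cooling timescale."""
        h_k = h_k0 * (1.0 - np.exp(-t / tau_cool))
        return h_k * np.dot(m, EASY_AXIS) * EASY_AXIS

    def llg_step(m, t, dt):
        """One explicit Euler step of the LLG equation; renormalize |m| = 1."""
        h = anisotropy_field(m, t)
        torque = -GAMMA / (1.0 + ALPHA**2) * (
            np.cross(m, h) + ALPHA * np.cross(m, np.cross(m, h)))
        m = m + dt * torque
        return m / np.linalg.norm(m)

    m = np.array([0.9, 0.0, 0.1])  # mostly in-plane right after the quench
    m /= np.linalg.norm(m)
    t, dt = 0.0, 1e-13
    for _ in range(100_000):       # roughly 10 ns of evolution
        m = llg_step(m, t, dt)
        t += dt
    print("final m:", m)           # relaxes toward the easy axis as H_k recovers
    ```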

    Laser-induced ultrafast demagnetization in the presence of a nanoscale magnetic domain network

    Femtosecond magnetization phenomena have been challenging our understanding for over a decade. Most experiments have relied on infrared femtosecond lasers, limiting the spatial resolution to a few micrometres. With the advent of femtosecond X-ray sources, nanometric resolution can now be reached, which matches key length scales in femtomagnetism such as the travelling length of excited 'hot' electrons on a femtosecond timescale. Here we study laser-induced ultrafast demagnetization in [Co/Pd]30 multilayer films and, for the first time, achieve a spatial resolution better than 100 nm by using femtosecond soft X-ray pulses. This allows us to follow the femtosecond demagnetization process in a magnetic system consisting of alternating nanometric domains of opposite magnetization. No modification of the magnetic structure is observed, but, in comparison with uniformly magnetized systems of similar composition, we find a significantly faster demagnetization time. We argue that this may be caused by direct transfer of spin angular momentum between neighbouring domains.
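    A demagnetization time such as the one compared here is commonly extracted by fitting an exponential quench to the pump-probe trace. The sketch below fits such a model to synthetic data with scipy; the model form, time constants, and noise level are illustrative assumptions, not the paper's actual analysis.

    ```python
    # Fit a single-exponential demagnetization model to a synthetic
    # pump-probe trace to extract tau_M. Illustrative data only.
    import numpy as np
    from scipy.optimize import curve_fit

    def demag_model(t, m_inf, tau_m):
        """M(t)/M0: flat before time zero, exponential decay to m_inf after."""
        return np.where(t < 0, 1.0, m_inf + (1.0 - m_inf) * np.exp(-t / tau_m))

    t = np.linspace(-500, 2000, 60)            # pump-probe delays (fs)
    rng = np.random.default_rng(0)
    trace = demag_model(t, 0.2, 300.0) + rng.normal(0, 0.02, t.size)

    (m_inf, tau_m), _ = curve_fit(demag_model, t, trace, p0=(0.5, 200.0))
    print(f"fitted demagnetization time: {tau_m:.0f} fs, residual M/M0: {m_inf:.2f}")
    ```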

    Kinetics of 13C-DHA before and during fish-oil supplementation in healthy older individuals

    Background: Docosahexaenoic acid (DHA) kinetics appear to change with intake, an effect that we studied in an older population by using uniformly carbon-13-labeled DHA (13C-DHA). Objective: We evaluated the influence of a fish-oil supplement over 5 mo on the kinetics of 13C-DHA in older persons. Design: Thirty-four healthy, cognitively normal participants (12 men, 22 women) aged between 52 and 90 y were recruited. Two identical kinetic studies were performed, each with the use of a single oral dose of 40 mg 13C-DHA. The first kinetic study was performed before participants started taking a 5-mo supplementation that provided 1.4 g DHA/d plus 1.8 g eicosapentaenoic acid (EPA)/d (baseline); the second study was performed during the final month of supplementation (supplement). In both kinetic studies, blood and breath samples were collected over the first 8 h and then weekly over 4 wk to analyze 13C enrichment. Results: The time × supplement interaction for 13C-DHA in the plasma was not significant, but there were separate time and supplement effects (P < 0.0001). The area under the curve for plasma 13C-DHA was 60% lower while participants were taking the supplement than at baseline (P < 0.0001). One day post-tracer, the uniformly carbon-13-labeled EPA concentration was 2.6 times as high during supplementation as at baseline. The mean (±SEM) plasma 13C-DHA half-life was 4.5 ± 0.4 d at baseline compared with 3.0 ± 0.2 d during supplementation (P < 0.0001). Compared with baseline, the mean whole-body half-life was 61% lower during supplementation. The loss of 13C-DHA through ÎČ-oxidation to carbon dioxide labeled with carbon-13 increased from 0.085% of dose/h at baseline to 0.208% of dose/h during supplementation. Conclusions: In older persons, a supplement of 3.2 g EPA + DHA/d increased ÎČ-oxidation of 13C-DHA and shortened the plasma 13C-DHA half-life. Therefore, when circulating concentrations of EPA and DHA are increased, more DHA is available for ÎČ-oxidation. This trial was registered at clinicaltrials.gov as NCT01577004.
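    For readers unfamiliar with the reported quantities, the sketch below computes a plasma area under the curve (trapezoidal rule) and a terminal half-life (log-linear fit) from an invented 13C-DHA time course; the sampling times and enrichment values are made up for illustration.

    ```python
    # AUC and terminal half-life from a (made-up) plasma 13C-DHA time course.
    import numpy as np

    days = np.array([0.33, 1, 2, 7, 14, 21, 28])             # sampling times (d)
    conc = np.array([9.0, 6.5, 4.8, 2.2, 0.75, 0.28, 0.11])  # 13C-DHA (a.u.)

    # Area under the curve by the trapezoidal rule (np.trapz on NumPy < 2.0).
    auc = np.trapezoid(conc, days)

    # Terminal half-life: fit ln C = ln C0 - k*t over the tail (t >= 2 d here).
    tail = days >= 2
    k, _ = np.polyfit(days[tail], np.log(conc[tail]), 1)
    half_life = np.log(2) / -k
    print(f"AUC = {auc:.1f} a.u.*d, plasma half-life = {half_life:.1f} d")
    ```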

    Improving Anchor-based Explanations

    Rule-based explanations are a popular method to understand the rationale behind the answers of complex machine learning (ML) classifiers. Recent approaches, such as Anchors, focus on local explanations based on if-then rules that are applicable in the vicinity of a target instance. This has proved effective at producing faithful explanations, yet anchor-based explanations are not free of limitations, including overly long, overly specific rules and explanations of low fidelity. This work presents two simple methods that mitigate these issues on tabular and textual data. The first is a careful selection of the discretization method for numerical attributes in tabular datasets. The second applies the notion of pertinent negatives to explanations on textual data. Our experimental evaluation shows the positive impact of both contributions on the quality of anchor-based explanations.
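    To make the first contribution concrete, the sketch below contrasts equal-width and quantile discretization of a skewed numerical feature, using scikit-learn's KBinsDiscretizer as a stand-in; the paper's exact pipeline and datasets may differ.

    ```python
    # Compare discretization strategies for a skewed numerical attribute.
    import numpy as np
    from sklearn.preprocessing import KBinsDiscretizer

    rng = np.random.default_rng(0)
    age = rng.lognormal(mean=3.5, sigma=0.4, size=(1000, 1))  # skewed "age"

    for strategy in ("uniform", "quantile"):
        disc = KBinsDiscretizer(n_bins=4, encode="ordinal", strategy=strategy)
        edges = disc.fit(age).bin_edges_[0]
        print(strategy, "bin edges:", np.round(edges, 1))
    ```
    Equal-width bins waste resolution on the sparse tail of a skewed feature, so quantile bins tend to yield tighter, higher-precision anchor predicates such as "30 < age <= 41".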

    When Should We Use Linear Explanations?

    The increasing interest in transparent and fair AI systems has propelled research in explainable AI (XAI). One of the main research lines in XAI is post-hoc explainability: the task of explaining the logic of an already deployed black-box model. This is usually achieved by learning an interpretable surrogate function that approximates the black box. Among the existing explanation paradigms, local linear explanations are among the most popular due to their simplicity and fidelity. Despite their advantages, linear surrogates may not always be the best-suited method to produce reliable, i.e., unambiguous and faithful, explanations. Hence, this paper introduces Adapted Post-hoc Explanations (APE), a novel method that characterizes the decision boundary of a black-box classifier and identifies when a linear model constitutes a reliable explanation. Moreover, characterizing the black-box frontier allows us to provide complementary counterfactual explanations. Our experimental evaluation shows that APE accurately identifies the situations where linear surrogates are suitable while also providing meaningful counterfactual explanations.
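    The sketch below illustrates, in a much simplified form, the core question APE asks: is a linear surrogate a faithful local approximation of the black box around the target instance? The dataset, sampling scheme, and decision criterion are illustrative assumptions, not APE's actual algorithm.

    ```python
    # Fit a local linear surrogate around x0 and measure its fidelity to
    # the black box; a low score signals a non-linear local boundary.
    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    x0 = X[0]
    rng = np.random.default_rng(0)
    neighborhood = x0 + rng.normal(scale=0.5, size=(500, 2))  # local samples
    labels = black_box.predict(neighborhood)                  # black-box answers

    surrogate = LogisticRegression().fit(neighborhood, labels)
    fidelity = surrogate.score(neighborhood, labels)          # agreement rate
    print(f"local fidelity: {fidelity:.2f}")
    # High fidelity: report the surrogate's weights as the explanation.
    # Low fidelity: the boundary is locally non-linear, so a counterfactual
    # (e.g., the nearest neighborhood sample with the opposite label) is
    # more informative.
    ```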

    Adaptation of AI Explanations to Users' Roles

    Surrogate explanations approximate a complex model by training a simpler model over an interpretable space. Among these simpler models, we identify three kinds of surrogate methods: (a) feature-attribution, (b) example-based, and (c) rule-based explanations. Each surrogate approximates the complex model differently, and we hypothesise that this can impact how users interpret the explanation. Despite numerous calls for introducing explanations for all, no prior work has compared the impact of these surrogates on specific user roles (e.g., domain expert, developer). In this article, we outline a study design to assess the impact of these three surrogate techniques across different user roles.
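    As a concrete example of one of the three kinds, the sketch below trains a rule-based surrogate: a shallow decision tree fitted to a black-box model's predictions rather than to the true labels. The dataset, models, and tree depth are illustrative choices.

    ```python
    # Rule-based surrogate: a shallow decision tree that mimics a black box.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    black_box = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

    # Train the surrogate on the black box's outputs, not the true labels.
    surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
    surrogate.fit(data.data, black_box.predict(data.data))
    print(export_text(surrogate, feature_names=data.feature_names))
    # A feature-attribution surrogate would instead report linear weights,
    # and an example-based one would return prototypical instances.
    ```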
