
    Competing interactions in semiconductor quantum dots

    We introduce an integrability-based method enabling the study of semiconductor quantum dot models that incorporate both the full hyperfine interaction and a mean-field treatment of dipole-dipole interactions in the nuclear spin bath. By performing free induction decay and spin echo simulations, we characterize the combined effect of both types of interactions on the decoherence of the electron spin, for external fields ranging from low to high values. We show that in the spin echo simulations the hyperfine interaction is the dominant source of decoherence at short times for low fields, and competes with the dipole-dipole interactions at longer times. In contrast, at high fields the main source of decay is the dipole-dipole interactions, and in this regime an asymmetry in the echo is observed. Furthermore, the non-decaying fraction previously observed in zero-field free induction decay simulations of quantum dots with only hyperfine interactions is destroyed at longer times by the mean-field treatment of the dipolar interactions.
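    As an illustration of the class of models involved (a schematic sketch under standard assumptions, not the paper's exact Hamiltonian), the hyperfine part of such quantum dot models is typically a central spin (Gaudin-type) Hamiltonian in an external field:

    ```latex
    H = B\, S^z + \sum_{k=1}^{N} A_k \, \vec{S} \cdot \vec{I}_k
    ```

    Here S is the central electron spin, the I_k are the N nuclear bath spins, the A_k are the hyperfine couplings, and B is the external field (with g-factors absorbed into the units). A mean-field treatment of the dipole-dipole interactions would then add effective local fields acting on the nuclear spins.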

    NFFA-Europe Pilot - D9.1 - Deployment of the VA service prototypes

    This Deliverable presents the prototype of the first Virtual Access service: an instance of the MetaStore, named MetaRepo. The service has been installed as a virtual machine hosted at KIT and has been connected to the Single Sign-On. It can be accessed through a Graphical User Interface. A monitoring service on the MetaRepo side calls the API of the database test instance for each action performed in the MetaRepo by logged-in users, in order to keep track of the activities of the Virtual Access service.
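    The per-action monitoring described above can be sketched as a small hook that notifies a tracking endpoint on every user action. This is a hypothetical illustration: the class and function names (ActivityMonitor, track_action) and the event fields are illustrative assumptions, not the actual NEP implementation.

    ```python
    from datetime import datetime, timezone

    class ActivityMonitor:
        """Records each logged-in user's action by notifying a tracking endpoint."""

        def __init__(self, post_fn):
            # post_fn stands in for the HTTP call to the database test
            # instance's API; injecting it keeps the sketch self-contained.
            self.post_fn = post_fn

        def track_action(self, user, action):
            # Build one activity event per user action and forward it.
            event = {
                "user": user,
                "action": action,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            }
            self.post_fn(event)
            return event

    # Usage: collect events in a list instead of performing a real HTTP POST.
    events = []
    monitor = ActivityMonitor(events.append)
    monitor.track_action("alice", "upload_metadata_record")
    monitor.track_action("alice", "edit_schema")
    ```

    In a deployment, post_fn would wrap an authenticated HTTP request to the tracking API; decoupling it here makes the hook trivially testable.
    
    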

    NFFA-Europe Pilot - D16.3 - Identification of good practices for data provenance

    Here we elaborate and implement FAIR-oriented procedures and recommendations to enforce data provenance in the NFFA scientific experiment workflow, from data creation to data usage. The set of procedures is developed by taking into account the needs of various communities within NEP. Close attention is paid to identifying and tailoring existing electronic lab notebook (ELN) and laboratory information management system (LIMS) solutions for describing sample-processing workflows and (semi-)automated metadata recording during experiments, as initial steps towards implementing FAIR-by-design datasets.

    D16.1 - Design of the service platform

    This deliverable presents the initial design of the infrastructure for the NFFA-Europe Pilot (NEP). The infrastructure is planned to consist of diverse elements for Data and Metadata Management, as well as different services (in the frontend, in the backend, and for Virtual Access) which will be gradually developed and integrated in a seamless way. We distinguish between the basic elements, which are essential parts of the infrastructure planned in the NEP proposal, and additional elements, which were not initially planned but might improve the interconnections and assist the Research Users, should they be developed as output of the scouting activities of Task 16.4 of Joint Activity 6 (Work Package 16). The elements of the infrastructure will be connected to each other and will be accessible to users and to other services through interfaces.

    NFFA-Europe Pilot - D16.2 - Report on the first data services

    This document describes the initial set of data services available in the NFFA-Europe Pilot.

    Truncated Conformal Space Approach for 2D Landau-Ginzburg Theories

    We study the spectrum of Landau-Ginzburg theories in 1+1 dimensions using the truncated conformal space approach with a compactified boson. We study these theories in both their broken and unbroken phases. We first demonstrate that we can reproduce the expected spectrum of a Φ² theory (i.e. a free massive boson) in this framework. We then turn to Φ⁴ in its unbroken phase and compare our numerical results with the predictions of two-loop perturbation theory, finding excellent agreement. We then analyze the broken phase of Φ⁴, where kink excitations together with their bound states are present. We confirm the semiclassical predictions for the number of stable kink-antikink bound states in this model. We also test the semiclassics in the double-well phase of the Φ⁶ Landau-Ginzburg theory, again finding agreement.
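    For context, a 1+1 dimensional Landau-Ginzburg theory of the kind studied here has a Euclidean action of the schematic form (the normalizations are illustrative, not taken from the paper):

    ```latex
    S = \int d^2x \left[ \frac{1}{2}\,(\partial_\mu \phi)^2 + V(\phi) \right],
    \qquad
    V(\phi) = \frac{1}{2}\, m^2 \phi^2 + \sum_{n \geq 2} g_{2n}\, \phi^{2n}
    ```

    The Φ², Φ⁴, and Φ⁶ cases correspond to truncating the polynomial potential V(φ) at the quadratic, quartic, and sextic terms, respectively; the broken phases arise when the potential develops degenerate minima.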

    ExaNeSt - Holistic Evaluation

    This deliverable describes the holistic evaluation of the platform. Such an evaluation is needed because the complexity of an exascale HPC system requires more than a simple integration of different components: it is a complete and complex merging of different technologies, with the aim of achieving the maximum efficiency of the entire system within the resource limitations of ExaNeSt. Applications also play a crucial role in this process, as the system must be usable in real cases, by real users, and not just ready for benchmarks. Consequently, the evaluation of performance cannot be limited to single parts (network, storage, FPGA, etc.); instead, the entire set of hardware and software components must be considered as an integrated entity. In this document we highlight how the different software and hardware components of ExaNeSt integrate to provide a single HPC platform (the ExaNeSt testbed) with maximum performance for applications, converged virtualization and data analytics, and high energy efficiency.

    Next generation of Exascale-class systems:ExaNeSt project and the status of its interconnect and storage development

    The ExaNeSt project started in December 2015 and is funded by the EU H2020 research framework (call H2020-FETHPC-2014, n. 671553) to study the adoption of clusters of low-cost, Linux-based, power-efficient 64-bit ARM processors for Exascale-class systems. The ExaNeSt consortium pools partners with industrial and academic research expertise in storage, interconnects, and applications who share the vision of a European Exascale-class supercomputer. The common goal is to design and implement a physical rack prototype together with its cooling system, the non-volatile memory (NVM) architecture, and a unified low-latency interconnect able to test different options for network and storage. Furthermore, the consortium aims to provide real HPC applications to validate the system. In this paper we describe the unified data and storage network architecture, report on the status of development of the different testbeds, and highlight preliminary benchmark results obtained by executing scalable scientific, engineering, and data-analytics application kernels.