
    Quantum device fine-tuning using unsupervised embedding learning

    Quantum devices with a large number of gate electrodes allow for precise control of device parameters. This capability is hard to fully exploit due to the complex dependence of these parameters on the applied gate voltages. We experimentally demonstrate an algorithm capable of fine-tuning several device parameters at once. The algorithm acquires a measurement and assigns it a score using a variational auto-encoder. Gate voltages are adjusted in real time to optimise this score in an unsupervised fashion. We report fine-tuning times for a double quantum dot device of approximately 40 minutes.
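
    A minimal sketch of the score-and-optimise loop described above, assuming a pre-trained encoder/decoder pair and an acquire function that returns a measurement for a given gate-voltage vector; all names here are illustrative, not taken from the paper.

```python
import numpy as np

def vae_score(measurement, encoder, decoder):
    """Score a measurement by (negative) VAE reconstruction error.

    Measurements resembling the training distribution of 'good' data
    reconstruct well and therefore score highly.
    """
    z = encoder(measurement)                      # embed the measurement
    recon = decoder(z)                            # reconstruct from latent code
    return -np.mean((measurement - recon) ** 2)   # higher = more ideal-looking

def fine_tune(acquire, encoder, decoder, v0, n_iters=200, step=5e-3, rng=None):
    """Stochastic hill-climb over gate voltages, maximising the VAE score."""
    rng = rng if rng is not None else np.random.default_rng(0)
    v = np.asarray(v0, dtype=float)
    best = vae_score(acquire(v), encoder, decoder)
    for _ in range(n_iters):
        cand = v + step * rng.standard_normal(v.shape)   # perturb voltages
        s = vae_score(acquire(cand), encoder, decoder)   # measure, then score
        if s > best:
            v, best = cand, s                            # keep the improvement
    return v, best
```

    The hill-climb here stands in for whatever real-time optimiser the authors use; the essential idea is only that the VAE score, not a hand-crafted heuristic, drives the voltage updates.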

    Sensitive radio-frequency read-out of quantum dots using an ultra-low-noise SQUID amplifier

    Fault-tolerant spin-based quantum computers will require fast and accurate qubit readout. This can be achieved using radio-frequency reflectometry given sufficient sensitivity to the change in quantum capacitance associated with the qubit states. Here, we demonstrate a 23-fold improvement in capacitance sensitivity by supplementing a cryogenic semiconductor amplifier with a SQUID preamplifier. The SQUID amplifier operates at a frequency near 200 MHz and achieves a noise temperature below 600 mK when integrated into a reflectometry circuit, which is within a factor of 120 of the quantum limit. It enables a record capacitance sensitivity of $0.07~\mathrm{aF}/\sqrt{\mathrm{Hz}}$. The setup is used to acquire charge stability diagrams of a gate-defined double quantum dot in a short time, with a signal-to-noise ratio of about 38 in 1 μs of integration time.
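
    A back-of-envelope consistency check, assuming an effective noise bandwidth $B \approx 1/\tau$ for integration time $\tau$ (the bandwidth convention is an assumption, not stated in the abstract): the smallest resolvable capacitance change is

    $$\delta C_{\min} \approx S_C \sqrt{B} = 0.07~\mathrm{aF}/\sqrt{\mathrm{Hz}} \times \sqrt{10^{6}~\mathrm{Hz}} = 70~\mathrm{aF},$$

    so a signal-to-noise ratio of about 38 in $\tau = 1~\mu\mathrm{s}$ corresponds, under these assumptions, to a capacitance signal of order $38 \times 70~\mathrm{aF} \approx 2.7~\mathrm{fF}$, and the SNR improves as $\sqrt{\tau}$ with longer integration.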

    Deep Reinforcement Learning for Efficient Measurement of Quantum Devices

    Deep reinforcement learning is an emerging machine learning approach which can teach a computer to learn from its actions and rewards, similar to the way humans learn from experience. It offers many advantages in automating decision processes to navigate large parameter spaces. This paper proposes a novel approach to the efficient measurement of quantum devices based on deep reinforcement learning. We focus on double quantum dot devices, demonstrating the fully automatic identification of specific transport features called bias triangles. Measurements targeting these features are difficult to automate, since bias triangles are found in otherwise featureless regions of the parameter space. Our algorithm identifies bias triangles in a mean time of less than 30 minutes, and sometimes as little as 1 minute. This approach, based on dueling deep Q-networks, can be adapted to a broad range of devices and target transport features. This is a crucial demonstration of the utility of deep reinforcement learning for decision making in the measurement and operation of quantum devices.
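
    For readers unfamiliar with the dueling architecture mentioned above, a minimal PyTorch sketch follows; the input size, hidden width, and action set are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Dueling decomposition: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""

    def __init__(self, n_inputs, n_actions, hidden=128):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(n_inputs, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage A(s, a)

    def forward(self, x):
        h = self.features(x)
        a = self.advantage(h)
        # Subtracting the mean advantage makes V and A identifiable.
        return self.value(h) + a - a.mean(dim=-1, keepdim=True)

# E.g. the state could be a flattened current map and the actions the
# neighbouring voltage windows to measure next (hypothetical framing).
q_net = DuelingDQN(n_inputs=32 * 32, n_actions=8)
q_values = q_net(torch.randn(1, 32 * 32))
next_window = q_values.argmax(dim=-1)  # greedy choice of next measurement
```

    The dueling split helps precisely in the situation the abstract describes: in featureless regions most actions have similar value, and learning V(s) separately from A(s,a) stabilises the estimates there.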

    Machine learning enables completely automatic tuning of a quantum device faster than human experts

    Variability is a problem for the scalability of semiconductor quantum devices: the parameter space is large, and the operating range is small. Our statistical tuning algorithm searches for specific electron transport features in gate-defined quantum dot devices with a gate voltage space of up to eight dimensions. Starting from the full range of each gate voltage, our machine learning algorithm can tune each device to optimal performance in a median time of under 70 minutes. This performance surpassed our best human benchmark (although both human and machine performance can be improved). The algorithm is approximately 180 times faster than an automated random search of the parameter space, and is suitable for different material systems and device architectures. Our results yield a quantitative measurement of device variability, from one device to another and after thermal cycling. Our machine learning algorithm can be extended to higher dimensions and other technologies.
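
    The random-search baseline that the 180-fold speed-up is measured against can be pictured with a short sketch; has_feature stands in for whatever classifier scores a measurement, and the gate ranges are made up for illustration.

```python
import numpy as np

def random_search(has_feature, bounds, max_measurements=10_000, seed=0):
    """Draw uniform gate-voltage settings until a transport feature is found.

    bounds is an (n_gates, 2) array of [low, high] voltages per gate;
    returns the successful setting and the number of measurements used.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    for n in range(1, max_measurements + 1):
        v = rng.uniform(lo, hi)     # one candidate point in the 8-D hypercube
        if has_feature(v):          # measure at v and classify the result
            return v, n
    return None, max_measurements

bounds = np.array([[-2.0, 0.0]] * 8)  # illustrative ranges for eight gates
```

    Because the volume of the hypercube grows exponentially with the number of gates while the operating region stays small, uniform sampling of this kind wastes almost all measurements, which is what a statistical, model-guided search avoids.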

    Red Clump Morphology as Evidence Against a New Intervening Stellar Population as the Primary Source of Microlensing Toward the LMC

    We examine the morphology of the color-magnitude diagram (CMD) for core helium-burning (red clump) stars to test the recent suggestion by Zaritsky & Lin (1997) that an extension of the red clump in the Large Magellanic Cloud (LMC) toward brighter magnitudes is due to an intervening population of stars that is responsible for a significant fraction of the observed microlensing toward the LMC. Using our own CCD photometry of several fields across the LMC, we confirm the presence of this additional red clump feature, but conclude that it is caused by stellar evolution rather than a foreground population. We do this by demonstrating that the feature (1) is present in all our LMC fields, (2) is in precise agreement with the location of the blue loops in the isochrones of intermediate-age red clump stars with the metallicity and age of the LMC, (3) has a relative density consistent with stellar evolution and the LMC star formation history, and (4) is present in the Hipparcos CMD for the solar neighborhood, where an intervening population cannot be invoked. Assuming there is no systematic shift in the model isochrones, which fit the Hipparcos data in detail, a distance modulus of $\mu_{\mathrm{LMC}} = 18.3$ provides the best fit to our dereddened CMD.
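
    For reference, a distance modulus converts to a physical distance via $d = 10^{1 + \mu/5}~\mathrm{pc}$, so $\mu_{\mathrm{LMC}} = 18.3$ corresponds to $d \approx 10^{4.66}~\mathrm{pc} \approx 45.7~\mathrm{kpc}$.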

    Report of the Working Group on Commercial Catches (WGCATCH)

    The Working Group on Commercial Catches (WGCATCH), chaired by Hans Gerritsen (Ireland) and Nuno Prista (Sweden), met in Lisbon, Portugal, 9–13 November 2015. WGCATCH is responsible for documenting national fishery sampling schemes, establishing best practice and guidelines on sampling and estimation procedures, and providing advice on other uses of fishery data. The meeting was attended by 30 participants from 15 countries. The group addressed a large number of terms of reference, and the meeting was conducted through presentations, discussions and analysis of questionnaires. The main terms of reference were addressed in subgroups. The report is structured directly along the terms of reference, and the main outcomes are listed below.

    Data collection schemes for small-scale fisheries
    WGCATCH compiled descriptions of national small-scale fisheries through questionnaires, yielding an overview of current data collection methods. Two major approaches were identified, census methods (e.g., sales, logbooks) and sampling methods (e.g., catch surveys), and their main pros and cons were discussed. In most cases, specific sampling approaches are needed for these fisheries. The group developed a work plan to establish good-practice guidelines.

    Analysis of case studies of commercial fishery sampling designs and estimation
    Case studies of sampling designs and estimation involving megrim in divisions 7-8 were presented. A common theme is that issues with the practical implementation of probability-based sampling remain. WGCATCH summarised the main issues and provided a set of possible solutions. The group also provided guidance on dealing with previous data collected under métier-based sampling designs.

    Simulation models to investigate survey designs
    Several simulation studies were presented, most of them outlining the work of the fishPi project (funded under MARE/2014/19) in evaluating regional sampling designs. A critical review was carried out, and WGCATCH produced general considerations and guidelines. WGCATCH recommends that these are taken into account when analysing the results of simulations of regional sampling designs at RCM level.

    The effect of the landing obligation on catch sampling opportunities
    The effects on sampling and data quality of the current implementation of the landing obligation in the Baltic were reviewed. The group found that refusal rates for observer trips have increased to nearly 100% in at least one country, while in many other countries on-board observer programmes did not suffer noticeable changes. WGCATCH established that catches below the minimum size cannot be accurately estimated by sampling the landings below the minimum size, because an unknown proportion of the catches may be discarded. The group also reiterated that it is important that logbooks distinguish landings below and above the minimum size.

    Publication on statistically sound sampling schemes
    WGCATCH drafted detailed plans to produce a peer-reviewed paper in 2016. The paper will provide a synthesis of the evolution of sampling design towards best practice, illustrated with a number of concise case studies.

    Estimation procedures in the Regional Database (RDB)
    The work of WKRDB 2015 was presented alongside existing and planned estimation procedures in the RDB; a textbook design-based estimator is sketched after this summary. Current work by Norway on a software package that will allow design-based estimation and optimisation for stock assessment purposes was also presented. The advantages of ensuring compatibility of this new software with the developments currently planned for RDB-FishFrame were underscored.

    Repository of resources relevant to catch sampling
    WGCATCH initiated a repository of key resources, putting them into context with a brief description or review of each report, paper, book, website, software package, etc. The intention is for this repository to be made available online by ICES.

    Sampling of incidental bycatches
    WGCATCH agreed to start routine documentation of sampling practices for bycatches of protected, endangered and threatened species (PETS) and rare fish species, as well as routine evaluation of the limitations of current methods for their collection and analysis.

    Training course on design and analysis of statistically sound catch sampling programmes
    WGCATCH considers continuous training and expertise in sampling design, estimation and simulation to be the basis for successful implementation of statistically sound catch sampling programmes. A new ICES training course in Design and Analysis of Statistically Sound Catch Sampling Programmes will take place at ICES HQ in Copenhagen, 12–16 September 2016. WGCATCH recommends that RCMs promote attendance of these courses among all MS involved.
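
    For context on the design-based estimation referred to in the Regional Database paragraph above, the canonical estimator underpinning probability-based catch sampling is of the Horvitz-Thompson type; this form is standard sampling theory, not taken from the report. For a sample $s$ drawn with known inclusion probabilities $\pi_i > 0$, the population total $Y$ is estimated by

    $$\hat{Y}_{\mathrm{HT}} = \sum_{i \in s} \frac{y_i}{\pi_i},$$

    which is design-unbiased for any probability sampling design in which every unit has a known, non-zero $\pi_i$, one reason the group stresses the practical implementation of probability-based selection.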

    Report of the Regional Co-ordination Meeting for the North Sea and Eastern Arctic (RCM NS&EA) 2013

    The Regional Coordination Meeting for the North Sea & Eastern Arctic (RCM NS&EA) was held 9–13 September 2013 at the European Fisheries Control Agency (EFCA) in Vigo, Spain. The main task of the RCMs is to coordinate the National Programmes (NP), which propose the national data collection to be carried out by the Member States (MS) under the EU Data Collection Framework (DCF). It was envisaged that, from 2014 onwards, data collection by the MS would be carried out under a new framework (DC-MAP). However, the legislation for this framework is not yet ready. The Commission has therefore decided to extend the present DCF for the time being, and the most recent NPs have been adopted for 2014. Since these NPs have been adopted without any changes, there is no need for major coordination.