
    Quantum device fine-tuning using unsupervised embedding learning

    Quantum devices with a large number of gate electrodes allow for precise control of device parameters. This capability is hard to fully exploit due to the complex dependence of these parameters on the applied gate voltages. We experimentally demonstrate an algorithm capable of fine-tuning several device parameters at once. The algorithm acquires a measurement and assigns it a score using a variational auto-encoder. Gate voltages are adjusted in real time to optimise this score in an unsupervised fashion. We report fine-tuning times for a double quantum dot device of approximately 40 minutes.
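    Read as an optimisation loop, the procedure repeats measure → score → update. A minimal pure-Python sketch of such a loop, where `acquire_measurement` is a synthetic stand-in for a hardware measurement and a simple quadratic proxy replaces the paper's VAE-derived score (both are illustrative assumptions, not the authors' implementation):

    ```python
    import random

    TARGET = [0.35, -0.42, 0.10]  # synthetic "ideal" operating point

    def acquire_measurement(voltages):
        # Stand-in for a hardware measurement at the given gate voltages;
        # here just the deviation from a synthetic target response.
        return [v - t for v, t in zip(voltages, TARGET)]

    def score(measurement):
        # Stand-in for the VAE-based score: the paper scores measurements
        # with a variational auto-encoder; a quadratic proxy suffices to
        # illustrate the loop structure.
        return sum(m * m for m in measurement)

    def fine_tune(voltages, iters=2000, step=0.05, seed=0):
        # Greedy random search: keep a candidate voltage setting only if
        # it improves the score, mirroring real-time unsupervised tuning.
        rng = random.Random(seed)
        best = score(acquire_measurement(voltages))
        for _ in range(iters):
            candidate = [v + rng.uniform(-step, step) for v in voltages]
            s = score(acquire_measurement(candidate))
            if s < best:
                voltages, best = candidate, s
        return voltages, best

    tuned, final_score = fine_tune([0.0, 0.0, 0.0])
    ```

    In a real setup the inner loop talks to hardware, and the score comes from the trained auto-encoder rather than any closed-form target.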

    Quantum-Assisted Learning of Hardware-Embedded Probabilistic Graphical Models

    Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation on constructing powerful generative unsupervised machine-learning models. Here we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
    Comment: 17 pages, 8 figures. Minor further revisions. As published in Phys. Rev.
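    The embedding idea — replicating each logical variable over a chain of physical qubits bound by strong ferromagnetic couplings so the copies agree — can be sketched generically. This is a textbook minor-embedding construction, not the authors' specific procedure; `embed_with_redundancy`, the field-splitting rule, and the coupling placement are illustrative assumptions:

    ```python
    def embed_with_redundancy(logical_h, logical_J, chains, chain_strength=2.0):
        # Map a logical Ising model (fields h, couplings J) onto physical
        # qubits. Each logical variable is replicated over a chain of
        # physical qubits; strong ferromagnetic intra-chain couplings
        # (negative J) encourage all chain members to take the same value.
        physical_h, physical_J = {}, {}
        for var, chain in chains.items():
            share = logical_h.get(var, 0.0) / len(chain)  # split the field
            for q in chain:
                physical_h[q] = physical_h.get(q, 0.0) + share
            for a, b in zip(chain, chain[1:]):
                physical_J[(a, b)] = -chain_strength
        for (u, v), j in logical_J.items():
            # Place each logical coupling between one member of each chain.
            edge = (chains[u][0], chains[v][0])
            physical_J[edge] = physical_J.get(edge, 0.0) + j
        return physical_h, physical_J

    # Two logical spins: spin 0 embedded as chain [0, 1], spin 1 as [2].
    h, J = embed_with_redundancy({0: 1.0}, {(0, 1): 0.5}, {0: [0, 1], 1: [2]})
    ```

    On real annealer hardware the chains must also respect the device's connectivity graph, which is what makes sparse connectivity the binding constraint.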

    Evaluation of synthetic and experimental training data in supervised machine learning applied to charge state detection of quantum dots

    Automated tuning of gate-defined quantum dots is a requirement for large-scale semiconductor-based qubit initialisation. An essential step of these tuning procedures is charge state detection based on charge stability diagrams. Using supervised machine learning to perform this task requires a large dataset for models to train on. In order to avoid hand labelling experimental data, synthetic data has been explored as an alternative. While synthetic data provides a much larger training dataset than experimental data alone, it means that classifiers are trained on data drawn from a different distribution than the experimental data that is part of the tuning process. Here we evaluate the prediction accuracy of a range of machine learning models trained on simulated and experimental data and their ability to generalise to experimental charge stability diagrams in two-dimensional electron gas and nanowire devices. We find that classifiers perform best on either purely experimental or a combination of synthetic and experimental training data, and that adding common experimental noise signatures to the synthetic data does not dramatically improve the classification accuracy. These results suggest that experimental training data, as well as realistic quantum dot simulations and noise models, are essential for charge state detection using supervised machine learning.
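    The evaluation setup — train a classifier on a mix of synthetic and experimental data, then test on experimental diagrams — can be sketched with a toy nearest-centroid classifier. All names and the data generator here are hypothetical: real inputs are measured charge stability diagrams, not this synthetic feature vector, and the paper's models are more expressive than centroids:

    ```python
    import random

    def diagram(charge_state, rng, experimental=False):
        # Hypothetical feature vector for a flattened charge stability
        # diagram; "experimental" data carries more noise than simulation.
        sigma = 0.3 if experimental else 0.1
        return [charge_state + rng.gauss(0.0, sigma) for _ in range(4)]

    def train_centroids(samples):
        # samples: list of (features, label); fit a nearest-centroid model.
        sums, counts = {}, {}
        for x, y in samples:
            acc = sums.setdefault(y, [0.0] * len(x))
            for i, xi in enumerate(x):
                acc[i] += xi
            counts[y] = counts.get(y, 0) + 1
        return {y: [s / counts[y] for s in acc] for y, acc in sums.items()}

    def predict(centroids, x):
        # Assign the label of the nearest class centroid (squared L2).
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(centroids, key=lambda y: dist(centroids[y]))

    rng = random.Random(0)
    labels = [0, 1]  # toy charge states
    synthetic = [(diagram(y, rng), y) for y in labels for _ in range(50)]
    experimental = [(diagram(y, rng, True), y) for y in labels for _ in range(20)]
    model = train_centroids(synthetic + experimental)  # mixed training set

    test = [(diagram(y, rng, True), y) for y in labels for _ in range(50)]
    accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
    ```

    The same harness, retrained on `synthetic` alone versus the mixed set, is the shape of the comparison the paper reports across model families.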

    A general-purpose material property data extraction pipeline from large polymer corpora using Natural Language Processing

    The ever-increasing number of materials science articles makes it hard to infer chemistry-structure-property relations from published literature. We used natural language processing (NLP) methods to automatically extract material property data from the abstracts of polymer literature. As a component of our pipeline, we trained MaterialsBERT, a language model, using 2.4 million materials science abstracts, which outperforms other baseline models in three out of five named entity recognition datasets when used as the encoder for text. Using this pipeline, we obtained ~300,000 material property records from ~130,000 abstracts in 60 hours. The extracted data was analyzed for a diverse range of applications such as fuel cells, supercapacitors, and polymer solar cells to recover non-trivial insights. The data extracted through our pipeline is made available through a web platform at https://polymerscholar.org, which can be used to conveniently locate material property data recorded in abstracts. This work demonstrates the feasibility of an automatic pipeline that starts from published literature and ends with a complete set of extracted material property information.
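    As a much-simplified stand-in for the extraction stage of such a pipeline, property records of the form "property of value unit" can be pulled from abstract text with a regular expression. The property and unit lists and `extract_records` are illustrative assumptions; the actual pipeline uses a trained NER model (MaterialsBERT) rather than patterns:

    ```python
    import re

    # Toy pattern over a small closed set of property names and units;
    # a learned NER model generalises far beyond fixed vocabularies.
    PATTERN = re.compile(
        r"(?P<property>glass transition temperature|band gap|tensile strength)"
        r"\s+of\s+(?P<value>-?\d+(?:\.\d+)?)\s*(?P<unit>°C|eV|MPa)"
    )

    def extract_records(text):
        # Return one dict per matched material property record.
        return [m.groupdict() for m in PATTERN.finditer(text)]

    abstract = ("The copolymer exhibits a glass transition temperature of "
                "105 °C and a band gap of 1.8 eV.")
    records = extract_records(abstract)
    ```

    Each record is a (property, value, unit) triple, the same shape as the ~300,000 records the paper aggregates from abstracts.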