11 research outputs found

    Bridging the reality gap in quantum devices with physics-aware machine learning

    The discrepancies between reality and simulation impede the optimization and scalability of solid-state quantum devices. Disorder induced by the unpredictable distribution of material defects is one of the major contributors to the reality gap. We bridge this gap using physics-aware machine learning, in particular an approach combining a physical model, deep learning, a Gaussian random field, and Bayesian inference. This approach enables us to infer the disorder potential of a nanoscale electronic device from electron-transport data. This inference is validated by verifying the algorithm's predictions of the gate-voltage values required for a laterally defined quantum-dot device in AlGaAs/GaAs to produce current features corresponding to a double-quantum-dot regime.
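
    As a rough illustration of the approach described in the abstract, the sketch below draws Gaussian-random-field realisations of a disorder potential and weights them by how well a device simulation reproduces measured current features. The function simulate_current, the parameter choices, and the importance-sampling framing are hypothetical placeholders, not the paper's implementation.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def sample_disorder_potential(shape=(64, 64), correlation_length=5.0, amplitude=1.0, rng=None):
            # One Gaussian-random-field realisation: smoothed white noise, rescaled to unit variance
            rng = np.random.default_rng() if rng is None else rng
            field = gaussian_filter(rng.standard_normal(shape), sigma=correlation_length)
            return amplitude * field / field.std()

        def log_likelihood(simulated_features, measured_features, noise_sigma=0.05):
            # Gaussian measurement model comparing simulated and measured current features
            residual = (simulated_features - measured_features) / noise_sigma
            return -0.5 * np.sum(residual ** 2)

        # Importance-sampling view of the posterior over disorder realisations
        # (simulate_current is a hypothetical stand-in for the physical device model):
        # log_w = [log_likelihood(simulate_current(v, gate_voltages), measured) for v in potentials]
        # weights = np.exp(log_w - logsumexp(log_w))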

    Random Fourier signature features

    Tensor algebras give rise to one of the most powerful measures of similarity for sequences of arbitrary length, the signature kernel, which comes with attractive theoretical guarantees from stochastic analysis. Previous algorithms to compute the signature kernel scale quadratically in both the length and the number of the sequences. To mitigate this severe computational bottleneck, we develop a random Fourier feature-based acceleration of the signature kernel acting on the inherently non-Euclidean domain of sequences. We show uniform approximation guarantees for the proposed unbiased estimator of the signature kernel, while keeping its computation linear in the sequence length and number. In addition, combined with recent advances on tensor projections, we derive two even more scalable time series features with favourable concentration properties and computational complexity in both time and memory. Our empirical results show that the reduction in computational cost comes at a negligible price in terms of accuracy on moderate-sized datasets, and that it enables scaling to large datasets of up to a million time series.
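
    The core trick, random Fourier features, can be illustrated on a plain RBF kernel over vectors; the paper extends the same idea to the signature kernel on sequences. A minimal sketch, with function name and parameters chosen for illustration rather than taken from the authors' code:

        import numpy as np

        def random_fourier_features(X, num_features=256, lengthscale=1.0, rng=None):
            # Map inputs X of shape (n, d) to features whose inner products approximate
            # the RBF kernel exp(-||x - y||^2 / (2 * lengthscale^2))
            rng = np.random.default_rng() if rng is None else rng
            n, d = X.shape
            W = rng.standard_normal((d, num_features)) / lengthscale
            b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
            return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

        Phi = random_fourier_features(np.random.default_rng(0).standard_normal((100, 8)))
        K_approx = Phi @ Phi.T  # unbiased kernel estimate; cost is linear in the number of samples per feature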

    Inference of transport phenomena in quantum devices

    This thesis is concerned with charge transport in electrostatically defined quantum dot devices. Such devices display a wide range of transport phenomena in both open and closed configurations. The transport regime can be tuned experimentally by controlling the voltages applied to gate electrodes, but the precise electrostatic landscape which determines the transport regime is unknown. This uncertainty arises from variations in device fabrication, material defects, and sources of electrostatic disorder. The research chapters of this thesis consider a range of transport regimes in quantum dot devices, and infer properties of the device using both experimental and theoretical techniques. The first research chapter considers the detection of single charge transport events through a double quantum dot. By fitting an open quantum systems model to the sub-attoampere currents measured, tunnel rates are inferred. The second results chapter considers an electrostatic simulation of a quantum dot device and how it can be accelerated using deep learning. This accelerated model is then used in the third results chapter, along with experimental measurements of the transport regime, to inform a Bayesian inference algorithm and produce a set of disorder potentials that narrow the gap between simulation and reality. The final results chapter develops a differentiable quantum master equation solver, which is used for parameter estimation in a theoretical study of transport in single and double quantum dots.
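
    The sub-attoampere regime mentioned in the first research chapter can be illustrated with a textbook rate-equation model of sequential tunnelling through a single level. This is a generic sketch under that simplifying assumption, not the open-quantum-systems model fitted in the thesis, and the example rates are invented purely to show the scale.

        E_CHARGE = 1.602176634e-19  # elementary charge in coulombs

        def steady_state_current(gamma_in, gamma_out):
            # Rate equation for a single level: dp/dt = gamma_in * (1 - p) - gamma_out * p = 0
            p = gamma_in / (gamma_in + gamma_out)  # steady-state occupation probability
            return E_CHARGE * gamma_out * p        # average current in amperes

        # Tunnel rates of a few hertz give sub-attoampere currents, roughly 2.3e-19 A here
        print(steady_state_current(gamma_in=2.0, gamma_out=5.0))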

    Rethinking Attention with Performers

    We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can also be used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and to investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing the effectiveness of the novel attention-learning paradigm leveraged by Performers.
    Comment: Published as a conference paper + oral presentation at ICLR 2021. 38 pages. See https://github.com/google-research/google-research/tree/master/protein_lm for protein language model code, and https://github.com/google-research/google-research/tree/master/performer for Performer code. See https://ai.googleblog.com/2020/10/rethinking-attention-with-performers.html for the Google AI Blog post.
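
    The FAVOR+ idea can be sketched in a few lines: map queries and keys through positive random features so that attention factorises and the cost becomes linear in sequence length. The sketch below omits the orthogonalisation of the random projection and all batching, and its names and shapes are illustrative rather than taken from the released Performer code.

        import numpy as np

        def positive_random_features(x, projection):
            # phi(x) = exp(W x - ||x||^2 / 2) / sqrt(m): a positive-feature estimator
            # whose inner products approximate the softmax kernel exp(q . k)
            m = projection.shape[0]
            return np.exp(x @ projection.T - 0.5 * np.sum(x ** 2, axis=-1, keepdims=True)) / np.sqrt(m)

        def favor_attention(Q, K, V, num_features=256, rng=None):
            # Approximate softmax attention as normalise((phi(Q) phi(K)^T) V),
            # evaluated right-to-left so the cost is linear in the sequence length n
            rng = np.random.default_rng() if rng is None else rng
            d = Q.shape[-1]
            W = rng.standard_normal((num_features, d))      # orthogonalisation omitted in this sketch
            Qp = positive_random_features(Q / d ** 0.25, W)
            Kp = positive_random_features(K / d ** 0.25, W)
            numerator = Qp @ (Kp.T @ V)                     # (n, m) @ ((m, n) @ (n, d))
            denominator = Qp @ Kp.sum(axis=0, keepdims=True).T
            return numerator / denominator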