
    Product states optimize quantum p-spin models for large p

    We consider the problem of estimating the maximal energy of quantum p-local spin glass random Hamiltonians, the quantum analogues of widely studied classical spin glass models. Denoting by E^*(p) the (appropriately normalized) maximal energy in the limit of a large number of qubits n, we show that E^*(p) approaches \sqrt{2\log 6} as p increases. This value is interpreted as the maximal energy of a much simpler so-called Random Energy Model, widely studied in the setting of classical spin glasses. Our most notable and (arguably) surprising result proves the existence of near-maximal energy states which are product states, and thus not entangled. Specifically, we prove that with high probability as n\to\infty, for any E < E^*(p) there exists a product state with energy \geq E at sufficiently large constant p. Even more surprisingly, this remains true even when restricting to tensor products of Pauli eigenstates. Our approximations go beyond what is known from monogamy-of-entanglement style arguments -- the best of which, in this normalization, achieve approximation error growing with n. Our results not only challenge prevailing beliefs in physics that extremely low-temperature states of random local Hamiltonians should exhibit non-negligible entanglement, but they also imply that classical algorithms can be just as effective as quantum algorithms in optimizing Hamiltonians with large locality -- though performing such optimization is still likely a hard problem. Our results are robust with respect to the choice of the randomness (disorder) and apply to the case of sparse random Hamiltonians using Lindeberg's interpolation method. The proof of the main result is obtained by estimating the expected trace of the associated partition function, and then matching its asymptotics with the extremal energy of product states using the second moment method.
    Comment: Added a disclaimer about an error in the current draft
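The limiting constant \sqrt{2\log 6}, and the flavor of the Random Energy Model comparison, can be sketched numerically. This is a toy classical REM with 6^n i.i.d. Gaussian levels, not the quantum model itself; the variable names and the choice n = 8 are illustrative only:

```python
import math
import numpy as np

# Limiting value sqrt(2 log 6) that E*(p) approaches for large p,
# interpreted as the maximal energy of a Random Energy Model with 6^n levels.
e_star = math.sqrt(2 * math.log(6))

# Crude Monte Carlo check on a toy classical REM: draw 6^n i.i.d. Gaussians
# of variance n and normalize the maximum by n.  Finite-size corrections make
# the estimate undershoot the limit slightly.
rng = np.random.default_rng(0)
n = 8
levels = rng.normal(scale=math.sqrt(n), size=6 ** n)
e_max = levels.max() / n

print(f"sqrt(2 log 6) = {e_star:.4f}, toy REM estimate = {e_max:.4f}")
```

The gap between the two numbers shrinks as n grows, consistent with the max of N i.i.d. Gaussians of variance n concentrating around n\sqrt{2\log 6} when N = 6^n.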

    Adversarial Robustness Guarantees for Random Deep Neural Networks

    The reliability of deep learning algorithms is fundamentally challenged by the existence of adversarial examples, which are incorrectly classified inputs that are extremely close to a correctly classified input. We explore the properties of adversarial examples for deep neural networks with random weights and biases, and prove that for any p \ge 1, the \ell^p distance of any given input from the classification boundary scales as the \ell^p norm of the input divided by the square root of the input dimension. The results are based on the recently proved equivalence between Gaussian processes and deep neural networks in the limit of infinite width of the hidden layers, and are validated with experiments on both random deep neural networks and deep neural networks trained on the MNIST and CIFAR10 datasets. The results constitute a fundamental advance in the theoretical understanding of adversarial examples, and open the way to a thorough theoretical characterization of the relation between network architecture and robustness to adversarial perturbations.
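The claimed scaling can be sketched in a few lines. The function and names below are illustrative, not the paper's notation; the point is that for inputs with i.i.d. O(1) entries, \ell^2 norm grows like \sqrt{d}, so the predicted boundary distance stays roughly constant as the input dimension grows:

```python
import numpy as np

# Hedged sketch of the scaling result: the l^p distance of an input x
# (dimension d) to the classification boundary of a random deep network
# scales as ||x||_p / sqrt(d).
def predicted_margin(x: np.ndarray, p: float = 2.0) -> float:
    d = x.size
    return float(np.linalg.norm(x.ravel(), ord=p) / np.sqrt(d))

rng = np.random.default_rng(0)
# ||x||_2 ~ sqrt(d) for i.i.d. standard Gaussian entries, so the predicted
# margin hovers near 1 across three orders of magnitude in dimension.
margins = [predicted_margin(rng.normal(size=d)) for d in (10**2, 10**3, 10**4)]
print(margins)
```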

    Efficient classical algorithms for simulating symmetric quantum systems

    In light of recently proposed quantum algorithms that incorporate symmetries in the hope of quantum advantage, we show that with symmetries that are restrictive enough, classical algorithms can efficiently emulate their quantum counterparts given certain classical descriptions of the input. Specifically, we give classical algorithms that calculate ground states and time-evolved expectation values for permutation-invariant Hamiltonians specified in the symmetrized Pauli basis with runtimes polynomial in the system size. We use tensor-network methods to transform symmetry-equivariant operators to the block-diagonal Schur basis that is of polynomial size, and then perform exact matrix multiplication or diagonalization in this basis. These methods are adaptable to a wide range of input and output states, including those prescribed in the Schur basis, as matrix product states, or as arbitrary quantum states when given the power to apply low-depth circuits and single-qubit measurements.
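A minimal classical sketch of why this is tractable, assuming we restrict to the fully symmetric spin sector (the largest block of the Schur basis, of dimension n + 1 rather than 2^n). The collective operators and the example Hamiltonian H = J_x^2 + J_z are illustrative, not the paper's construction:

```python
import numpy as np

# Collective spin operators J_x, J_z in the fully symmetric sector of n
# qubits (total spin j = n/2), an (n+1)-dimensional block of the Schur basis.
def symmetric_sector_ops(n: int):
    j = n / 2
    m = np.arange(j, -j - 1, -1)            # m = j, j-1, ..., -j
    jz = np.diag(m)
    # J_+ |j, m> = sqrt(j(j+1) - m(m+1)) |j, m+1>
    off = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    jp = np.diag(off, k=1)
    jx = (jp + jp.T) / 2
    return jx, jz

# Illustrative permutation-invariant Hamiltonian, diagonalized in an
# (n+1)-dimensional block instead of the full 2^n-dimensional space.
n = 100
jx, jz = symmetric_sector_ops(n)
h = jx @ jx + jz
ground_energy = float(np.linalg.eigvalsh(h)[0])
print(ground_energy)
```

Diagonalizing a 101 x 101 block here replaces a 2^100-dimensional problem, which is the basic economy the Schur-basis methods exploit across all blocks.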

    Quantum algorithms for group convolution, cross-correlation, and equivariant transformations

    Group convolutions and cross-correlations, which are equivariant to the actions of group elements, are commonly used to analyze or take advantage of symmetries inherent in a given problem setting. Here, we provide efficient quantum algorithms for performing linear group convolutions and cross-correlations on data stored as quantum states. Runtimes for our algorithms are poly-logarithmic in the dimension of the group and the desired error of the operation. Motivated by the rich literature on quantum algorithms for solving algebraic problems, our theoretical framework opens a path for quantizing many algorithms in machine learning and numerical methods that employ group operations.
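The classical counterpart is easy to sketch for the simplest case: on the cyclic group Z_N, group convolution is circular convolution, which the Fourier transform diagonalizes. This is the same algebraic structure (diagonalization in a Fourier basis) that the quantum algorithms exploit; the code is illustrative only:

```python
import numpy as np

# Group convolution on the cyclic group Z_N:
#   (f * g)[x] = sum_y f[y] g[(x - y) mod N]
# computed via the convolution theorem: FFT, pointwise multiply, inverse FFT.
def group_convolve(f: np.ndarray, g: np.ndarray) -> np.ndarray:
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

N = 8
rng = np.random.default_rng(1)
f, g = rng.normal(size=N), rng.normal(size=N)

# Direct O(N^2) evaluation of the same sum, for comparison.
direct = np.array([sum(f[y] * g[(x - y) % N] for y in range(N)) for x in range(N)])
print(np.allclose(group_convolve(f, g), direct))  # → True
```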

    Block-encoding dense and full-rank kernels using hierarchical matrices: applications in quantum numerical linear algebra

    Many quantum algorithms for numerical linear algebra assume black-box access to a block-encoding of the matrix of interest, which is a strong assumption when the matrix is not sparse. Kernel matrices, which arise from discretizing a kernel function k(x, x'), have a variety of applications in mathematics and engineering. They are generally dense and full-rank. Classically, the celebrated fast multipole method performs matrix multiplication on kernel matrices of dimension N in time almost linear in N by using the linear algebraic framework of hierarchical matrices. In light of this success, we propose a block-encoding scheme of the hierarchical matrix structure on a quantum computer. When applied to many physical kernel matrices, our method can improve the runtime of solving quantum linear systems of dimension N to O(\kappa \operatorname{polylog}(N/\varepsilon)), where \kappa and \varepsilon are the condition number and error bound of the matrix operation. This runtime is near-optimal and, in terms of N, exponentially improves over prior quantum linear systems algorithms in the case of dense and full-rank kernel matrices. We discuss possible applications of our methodology in solving integral equations and accelerating computations in N-body problems.
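The structural fact behind hierarchical matrices (and hence behind the proposed block-encoding) can be sketched directly: off-diagonal blocks of a smooth kernel between well-separated point clusters are numerically low rank. The kernel, point sets, and tolerance below are illustrative choices, not the paper's:

```python
import numpy as np

# Off-diagonal block of the kernel k(x, x') = 1/(1 + |x - x'|) between two
# well-separated clusters of points; smoothness of k away from the diagonal
# makes this block compressible, which hierarchical matrices exploit.
n = 200
x = np.linspace(0.0, 1.0, n)        # source cluster
y = np.linspace(2.0, 3.0, n)        # well-separated target cluster
block = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

# Numerical rank at a relative tolerance of 1e-8.
s = np.linalg.svd(block, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))
print(numerical_rank, "<<", n)
```

The same block stored densely costs n^2 entries; the low-rank factorization costs roughly 2 n r for rank r, which is the source of the almost-linear classical runtimes cited above.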

    Equivariant Polynomials for Graph Neural Networks

    Graph Neural Networks (GNNs) are inherently limited in their expressive power. Recent seminal works (Xu et al., 2019; Morris et al., 2019b) introduced the Weisfeiler-Lehman (WL) hierarchy as a measure of expressive power. Although this hierarchy has propelled significant advances in GNN analysis and architecture developments, it suffers from several significant limitations. These include a complex definition that lacks direct guidance for model improvement and a WL hierarchy that is too coarse to study current GNNs. This paper introduces an alternative expressive power hierarchy based on the ability of GNNs to calculate equivariant polynomials of a certain degree. As a first step, we provide a full characterization of all equivariant graph polynomials by introducing a concrete basis, significantly generalizing previous results. Each basis element corresponds to a specific multi-graph, and its computation over some graph data input corresponds to a tensor contraction problem. Second, we propose algorithmic tools for evaluating the expressiveness of GNNs using tensor contraction sequences, and calculate the expressive power of popular GNNs. Finally, we enhance the expressivity of common GNN architectures by adding polynomial features or additional operations/aggregations inspired by our theory. These enhanced GNNs demonstrate state-of-the-art results in experiments across multiple graph learning benchmarks.
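A concrete instance of the multigraph/tensor-contraction correspondence: the triangle multigraph corresponds to contracting three copies of the adjacency matrix A into the invariant trace(A^3), which counts each triangle six times (once per ordered traversal). This is a standard example of such a contraction, not code from the paper:

```python
import numpy as np

# Evaluate the invariant polynomial trace(A^3) / 6 on an adjacency matrix A:
# a tensor contraction of three copies of A, counting triangles in the graph.
def triangle_count(A: np.ndarray) -> int:
    return int(round(np.trace(A @ A @ A) / 6))

# K3 (a single triangle) has exactly one triangle.
A = np.ones((3, 3)) - np.eye(3)
print(triangle_count(A))  # → 1
```

Graphs that standard message-passing GNNs cannot distinguish can differ in exactly such polynomial invariants, which is what makes them a finer expressiveness yardstick.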

    The SSL Interplay: Augmentations, Inductive Bias, and Generalization

    Self-supervised learning (SSL) has emerged as a powerful framework to learn representations from raw data without supervision. Yet in practice, engineers face issues such as instability in tuning optimizers and collapse of representations during training. Such challenges motivate the need for a theory to shed light on the complex interplay between the choice of data augmentation, network architecture, and training algorithm. We study such an interplay with a precise analysis of generalization performance on both pretraining and downstream tasks in a theory-friendly setup, and highlight several insights for SSL practitioners that arise from our theory.

    A systematic review of assessment approaches to predict opioid misuse in people with cancer.

    CONTEXT: Cancer prevalence is increasing, with many patients requiring opioid analgesia. Clinicians need to ensure patients receive adequate pain relief. However, opioid misuse is widespread, and cancer patients are at risk. OBJECTIVES: This study aims (1) to identify screening approaches that have been used to assess and monitor risk of opioid misuse in patients with cancer; (2) to compare the prevalence of risk estimated by each of these screening approaches; and (3) to compare risk factors among demographic and clinical variables associated with a positive screen on each of the approaches. METHODS: Medline, Cochrane Controlled Trial Register, PubMed, PsycINFO, and Embase databases were searched for articles reporting opioid misuse screening in cancer patients, along with handsearching the reference lists of included articles. Bias was assessed using tools from the Joanna Briggs Suite. RESULTS: Eighteen studies met the eligibility criteria, evaluating seven approaches: Urine Drug Test (UDT) (n = 8); the Screener and Opioid Assessment for Patients with Pain (SOAPP) and two variants, Revised and Short Form (n = 6); the Cut-down, Annoyed, Guilty, Eye-opener (CAGE) tool and one variant, Adapted to Include Drugs (n = 6); the Opioid Risk Tool (ORT) (n = 4); Prescription Monitoring Program (PMP) (n = 3); the Screen for Opioid-Associated Aberrant Behavior Risk (SOABR) (n = 1); and structured/specialist interviews (n = 1). Eight studies compared two or more approaches. The rates of risk of opioid misuse in the studied populations ranged from 6% to 65%, acknowledging that estimates are likely to have varied partly because of how specific to opioids the screening approaches were and whether a single- or multi-step approach was used. UDT prompted by an intervention or by observation of aberrant opioid behaviors (AOBs) was conclusive of actual opioid misuse, found in 6.5-24% of patients. Younger age, found in 8/10 studies; personal or family history of anxiety or other mental ill health, found in 6/8 studies; and history of illicit drug use, found in 4/6 studies, were associated with an increased risk of misuse. CONCLUSIONS: Younger age, personal or familial mental health history, and history of illicit drug use consistently showed an increased risk of opioid misuse. Clinical suspicion of opioid misuse may be raised by data from a PMP or any of the standardized list of AOBs. Clinicians may use the SOAPP-R, CAGE-AID, or ORT to screen for increased risk and may use UDT to confirm suspicion of opioid misuse or monitor adherence. More research into this important area is required. SIGNIFICANCE OF RESULTS: This systematic review summarized the literature on the use of opioid misuse risk approaches in people with cancer. The rates of reported risk range from 6% to 65%; however, the true rate may be closer to 6.5-24%. Younger age, personal or familial mental health history, and history of illicit drug use consistently showed an increased risk of opioid misuse. Clinicians may choose from several approaches. Limited data are available on feasibility and patient experience. PROSPERO registration number: CRD42020163385.

    Self-Supervised Learning with Lie Symmetries for Partial Differential Equations

    Machine learning for differential equations paves the way for computationally efficient alternatives to numerical solvers, with potentially broad impacts in science and engineering. Though current algorithms typically require simulated training data tailored to a given setting, one may instead wish to learn useful information from heterogeneous sources, or from real dynamical systems observations that are messy or incomplete. In this work, we learn general-purpose representations of PDEs from heterogeneous data by implementing joint embedding methods for self-supervised learning (SSL), a framework for unsupervised representation learning that has had notable success in computer vision. Our representation outperforms baseline approaches to invariant tasks, such as regressing the coefficients of a PDE, while also improving the time-stepping performance of neural solvers. We hope that our proposed methodology will prove useful in the eventual development of general-purpose foundation models for PDEs.
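A joint-embedding objective of the kind used here can be sketched with an InfoNCE-style contrastive loss between two augmented "views" of the same sample. The paper's exact loss and augmentations (e.g. Lie symmetries of the PDE) may differ; this is a generic illustration with made-up embeddings:

```python
import numpy as np

# InfoNCE-style loss: normalize embeddings, treat each row of the cosine
# similarity matrix as a classification problem whose correct answer is the
# matching (diagonal) view.
def info_nce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.1) -> float:
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 8))
# Views of the same sample (small perturbation) vs. unrelated embeddings.
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))
mismatched = info_nce(z, rng.normal(size=z.shape))
print(aligned < mismatched)  # aligned views incur lower loss
```

Training the encoder to drive this loss down pulls augmented views of one PDE trajectory together while pushing different trajectories apart, which is what yields representations useful for coefficient regression.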

    GENETIC ANALYSIS FOR GRAIN YIELD AND VARIOUS MORPHOLOGICAL TRAITS IN MAIZE (ZEA MAYS L.) UNDER NORMAL AND WATER STRESS ENVIRONMENTS

    ABSTRACT A genetic analysis study was carried out for various morphological traits in a complete 8 × 8 diallel cross of maize inbred lines under normal irrigation and drought conditions. Estimation of genetic components of variation and graphical analysis indicated that most of the traits, such as days to pollen shed, anthesis-silking interval, ear height, kernel rows per ear, 100-kernel weight, shelling percentage, and grain yield per plant, showed an over-dominance type of inheritance under both normal and drought conditions, unlike leaf rolling, which showed partial dominance under normal but an over-dominance type of inheritance under drought conditions. It can be inferred that, because of the over-dominance nature of inheritance of most of the yield-related traits, heterosis breeding can be pursued to develop high-yielding hybrids with considerable drought tolerance.