
    Cosmological constraints from Radial Baryon Acoustic Oscillation measurements and Observational Hubble data

    We use the Radial Baryon Acoustic Oscillation (RBAO) measurements, distant type Ia supernovae (SNe Ia), the observational $H(z)$ data (OHD) and the Cosmic Microwave Background (CMB) shift parameter data to constrain the cosmological parameters of the $\Lambda$CDM and XCDM cosmologies, and further examine the role of the OHD and SNe Ia data in cosmological constraints. We marginalize the likelihood function over $h$ by integrating the probability density $P \propto e^{-\chi^{2}/2}$ to obtain the best-fit results and the confidence regions in the $\Omega_{m}$-$\Omega_{\Lambda}$ plane. In the combined analysis of both the $\Lambda$CDM and XCDM models, we find that the 68.3%, 95.4% and 99.7% confidence regions from the OHD+RBAO+CMB data are in good agreement with those from the SNe Ia+RBAO+CMB data, consistent with the result of Lin et al.'s work. As more OHD become available, we may be able to constrain the cosmological parameters with OHD in place of SNe Ia data in the future.
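
    The marginalization step is simple to reproduce numerically. The sketch below, in Python, uses an illustrative quadratic chi-square and hypothetical parameter grids in place of the paper's actual OHD+RBAO+CMB likelihood:

    import numpy as np

    # Toy chi-square for illustration only; in the paper, chi^2 combines the
    # OHD, RBAO and CMB shift-parameter terms for a given cosmological model.
    def chi2(omega_m, omega_l, h):
        return (((omega_m - 0.27) / 0.03) ** 2
                + ((omega_l - 0.73) / 0.05) ** 2
                + ((h - 0.70) / 0.02) ** 2)

    # Parameter grids (ranges are illustrative assumptions).
    om = np.linspace(0.1, 0.5, 200)
    ol = np.linspace(0.4, 1.0, 200)
    hs = np.linspace(0.5, 0.9, 100)
    OM, OL, H = np.meshgrid(om, ol, hs, indexing="ij")

    # Marginalize over h by integrating P ~ exp(-chi^2/2) along the h axis,
    # leaving a 2D posterior on the Omega_m-Omega_Lambda plane.
    P = np.exp(-0.5 * chi2(OM, OL, H))
    P_marg = np.trapz(P, hs, axis=2)

    # Best fit; the 68.3%/95.4%/99.7% contours are level sets of P_marg.
    i, j = np.unravel_index(np.argmax(P_marg), P_marg.shape)
    print(f"best fit: Omega_m = {om[i]:.3f}, Omega_Lambda = {ol[j]:.3f}")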

    Information Scrambling in Quantum Neural Networks

    The quantum neural network is one of the promising applications for near-term noisy intermediate-scale quantum computers. A quantum neural network distills the information from the input wave function into the output qubits. In this Letter, we show that this process can also be viewed from the opposite direction: the quantum information in the output qubits is scrambled into the input. This observation motivates us to use the tripartite information, a quantity recently developed to characterize information scrambling, to diagnose the training dynamics of quantum neural networks. We empirically find a strong correlation between the dynamical behavior of the tripartite information and the loss function in the training process, from which we identify two stages in the training of randomly initialized networks. In the early stage, the network performance improves rapidly and the tripartite information increases linearly with a universal slope, meaning that the neural network becomes less scrambled than a random unitary. In the later stage, the network performance improves slowly while the tripartite information decreases. We present evidence that the network constructs local correlations in the early stage and learns large-scale structures in the later stage. We believe this two-stage training dynamics is universal and applicable to a wide range of problems. Our work builds a bridge between two research subjects, quantum neural networks and information scrambling, and opens up a new perspective for understanding quantum neural networks.
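
    The diagnostic itself is straightforward to evaluate for small circuits. A minimal sketch, assuming the standard Choi-state definition I3(A:C:D) = I(A:C) + I(A:D) - I(A:CD), where A, B label the input references and C, D the outputs of the unitary; here a random two-qubit unitary stands in for the trained network (for scrambling unitaries I3 is negative):

    import numpy as np
    from scipy.stats import unitary_group

    def entropy(rho):
        # von Neumann entropy in bits
        vals = np.linalg.eigvalsh(rho)
        vals = vals[vals > 1e-12]
        return float(-np.sum(vals * np.log2(vals)))

    def reduced(psi, keep, n_qubits):
        # density matrix of the qubits in `keep`, tracing out the rest
        rest = [q for q in range(n_qubits) if q not in keep]
        psi = psi.reshape([2] * n_qubits).transpose(keep + rest)
        psi = psi.reshape(2 ** len(keep), -1)
        return psi @ psi.conj().T

    def mutual_info(psi, x, y, n):
        return (entropy(reduced(psi, x, n)) + entropy(reduced(psi, y, n))
                - entropy(reduced(psi, x + y, n)))

    # Choi state of a random two-qubit unitary: qubits 0, 1 are the input
    # references (A, B) and qubits 2, 3 the outputs (C, D); the amplitude
    # tensor indexed by (in, out) is U[out, in] / 2.
    U = unitary_group.rvs(4)
    choi = (U.T / 2).flatten()

    A, C, D = [0], [2], [3]
    I3 = (mutual_info(choi, A, C, 4) + mutual_info(choi, A, D, 4)
          - mutual_info(choi, A, C + D, 4))
    print(f"I3(A:C:D) = {I3:.4f}")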

    A MAN AHEAD OF HIS TIME: LEE KUAN YEW’S IRON FIST BENEATH THE VELVET GLOVE

    This study examines the role of Lee Kuan Yew, the founding father of Singapore, in the country's development from a third-world nation to a first-world economic powerhouse. Lee Kuan Yew was the Prime Minister of Singapore for over three decades and was responsible for implementing policies that transformed Singapore's economy, infrastructure, education, and social systems. This paper analyzes the various policies and strategies, as well as the personal values and ideologies, adopted and implemented by Lee Kuan Yew that were instrumental in Singapore's growth. Additionally, the paper discusses the challenges faced by Lee Kuan Yew during his leadership in his public and private life. It concludes that Lee Kuan Yew's leadership and vision, marked by his single-minded focus on growth, efficiency and order, were pivotal to Singapore's transformation into a prosperous and modern state, and that his legacy continues to shape the identity and character of the nation to this day.

    Deep Learning Topological Invariants of Band Insulators

    In this work we design and train deep neural networks to predict topological invariants for one-dimensional four-band insulators in the AIII class, whose topological invariant is the winding number, and two-dimensional two-band insulators in the A class, whose topological invariant is the Chern number. Given Hamiltonians in momentum space as the input, the neural networks predict the topological invariants for both classes with accuracy close to or higher than 90%, even for Hamiltonians whose invariants lie beyond the training data set. Despite the complexity of the neural network, we find that the output of certain intermediate hidden layers resembles either the winding angle for models in the AIII class or the solid angle (Berry curvature) for models in the A class, indicating that the neural networks essentially capture the mathematical formula of the topological invariants. Our work demonstrates the ability of neural networks to predict topological invariants for complicated models with local Hamiltonians as the only input, and offers an example that even a deep neural network can be understood.
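
    The target label itself can be generated directly from the Hamiltonian. A minimal sketch, assuming a chiral (AIII) Hamiltonian specified by its off-diagonal block q(k), computes the winding number as the phase winding of det q(k) across the Brillouin zone; the SSH-like q below is an illustrative stand-in (the paper's models are four-band):

    import numpy as np

    def winding_number(q_of_k, n_k=1000):
        # Winding of det q(k) around the origin as k sweeps [0, 2*pi).
        ks = np.linspace(0.0, 2.0 * np.pi, n_k, endpoint=False)
        phases = np.angle([np.linalg.det(q_of_k(k)) for k in ks])
        # Close the loop, wrap each phase step into [-pi, pi), then sum.
        dphi = np.diff(np.concatenate([phases, phases[:1]]))
        dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi
        return int(np.round(dphi.sum() / (2.0 * np.pi)))

    # SSH-like block q(k) = t1 + t2*exp(ik): winding 1 for t2 > t1, else 0.
    t1, t2 = 0.5, 1.0
    print(winding_number(lambda k: np.array([[t1 + t2 * np.exp(1j * k)]])))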

    Estimating price impact via deep reinforcement learning
