
    Efficient Variational Bayesian Structure Learning of Dynamic Graphical Models

    Estimating time-varying graphical models is of paramount importance in various social, financial, biological, and engineering systems, since the evolution of such networks can be utilized, for example, to spot trends, detect anomalies, predict vulnerability, and evaluate the impact of interventions. Existing methods require extensive tuning of parameters that control the graph sparsity and temporal smoothness. Furthermore, these methods are computationally burdensome, with time complexity O(NP^3) for P variables and N time points. As a remedy, we propose a low-complexity, tuning-free Bayesian approach named BADGE. Specifically, we impose temporally-dependent spike-and-slab priors on the graphs such that they are sparse and vary smoothly across time. A variational inference algorithm is then derived to learn the graph structures from the data automatically. Owing to the pseudo-likelihood and the mean-field approximation, the time complexity of BADGE is only O(NP^2). Additionally, by identifying the frequency-domain resemblance to time-varying graphical models, we show that BADGE can be extended to learn frequency-varying inverse spectral density matrices, yielding graphical models for multivariate stationary time series. Numerical results on both synthetic and real data show that BADGE recovers the underlying true graphs more accurately while being more efficient than existing methods, especially in high-dimensional cases.
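    The temporally-dependent spike-and-slab prior can be illustrated with a small sketch. This is a generic construction, not the paper's exact model: edge indicators are assumed to follow a sticky two-state Markov chain (temporal smoothness), slab weights are assumed Gaussian, and the parameter names `stay`, `p_on`, and `slab_sd` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

P, N = 5, 50      # variables, time points
stay = 0.95       # probability an edge indicator persists (temporal smoothness)
p_on = 0.2        # marginal edge probability (sparsity)
slab_sd = 1.0     # standard deviation of the slab (nonzero-edge) component

n_edges = P * (P - 1) // 2

# Binary edge indicators evolve as a sticky Markov chain over time:
# with probability `stay` an edge keeps its state, otherwise it is redrawn.
z = np.zeros((N, n_edges), dtype=bool)
z[0] = rng.random(n_edges) < p_on
for t in range(1, N):
    redraw = rng.random(n_edges) >= stay
    z[t] = np.where(redraw, rng.random(n_edges) < p_on, z[t - 1])

# Spike-and-slab weights: exactly zero when the indicator is off (spike),
# Gaussian when it is on (slab).
w = np.where(z, rng.normal(0.0, slab_sd, size=z.shape), 0.0)

print(w.shape)    # one weight per edge per time point
print(z.mean())   # empirical edge density, roughly p_on
```

    Sampling forward like this shows why the prior encodes both sparsity (most indicators off) and smoothness (indicators rarely flip between consecutive time points); BADGE's variational algorithm inverts this generative picture to infer the indicators from data.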

    Large-Scale Multi-Label Learning with Incomplete Label Assignments

    Multi-label learning deals with classification problems where each instance can be assigned multiple labels simultaneously. Conventional multi-label learning approaches mainly focus on exploiting label correlations. It is usually assumed, explicitly or implicitly, that the label sets for training instances are fully labeled, without any missing labels. However, in many real-world multi-label datasets, the label assignments for training instances can be incomplete: some ground-truth labels can be missed by the labeler. This problem is especially typical when the number of instances is very large and the labeling cost is very high, which makes it almost impossible to obtain a fully labeled training set. In this paper, we study the problem of large-scale multi-label learning with incomplete label assignments. We propose an approach, called MPU, based upon positive-and-unlabeled stochastic gradient descent and stacked models. Unlike prior works, our method can effectively and efficiently handle missing labels and label correlations simultaneously, and it is very scalable, with time complexity linear in the size of the data. Extensive experiments on two real-world multi-label datasets show that our MPU model consistently outperforms other commonly-used baselines.
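    The positive-and-unlabeled ingredient can be sketched with the standard unbiased PU risk estimator for the logistic loss, a generic construction from the PU-learning literature and not necessarily the exact MPU formulation; the single-label toy task, the known class prior `pi`, and all parameter values below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Synthetic single-label task: only some positives carry a label, the rest
# of the data (unlabeled positives + all negatives) is unlabeled.
d, n = 10, 4000
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true > 0
labeled = y & (rng.random(n) < 0.3)   # incomplete label assignments
pi = y.mean()                         # class prior, assumed known here

Xp, Xu = X[labeled], X[~labeled]
w = np.zeros(d)
lr = 0.5
for _ in range(200):
    sp, su = sigmoid(Xp @ w), sigmoid(Xu @ w)
    # Unbiased PU gradient of the logistic risk:
    #   pi * E_P[grad l(+1)] - pi * E_P[grad l(-1)] + E_U[grad l(-1)]
    g_pos = pi * ((sp - 1.0)[:, None] * Xp).mean(axis=0)
    g_pos_as_neg = pi * (sp[:, None] * Xp).mean(axis=0)
    g_unl = (su[:, None] * Xu).mean(axis=0)
    w -= lr * (g_pos - g_pos_as_neg + g_unl)

acc = ((X @ w > 0) == y).mean()
print(round(acc, 2))
```

    The key idea is that the negative-class risk is never observed directly; it is estimated from the unlabeled data by subtracting the (prior-weighted) contribution of the positives, so the classifier can be trained with stochastic or full-batch gradients even though no negative labels exist.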

    Higher-order solutions to non-Markovian quantum dynamics via hierarchical functional derivative

    Solving realistic quantum systems coupled to an environment is a challenging task. Here we develop a hierarchical functional derivative (HFD) approach for efficiently solving the non-Markovian quantum trajectories of an open quantum system embedded in a bosonic bath. An explicit expression for the arbitrary-order HFD equation is derived systematically. Moreover, it is found that for an analytically solvable model, this hierarchical equation naturally terminates at a given order and thus becomes exactly solvable. This HFD approach provides a systematic method to study the non-Markovian quantum dynamics of an open system coupled to a bosonic environment.
    Comment: 5 pages, 2 figures

    Building quantum neural networks based on swap test

    An artificial neural network, consisting of many neurons arranged in layers, is an important method for simulating the human brain. Usually, a neuron performs two operations: one linear and one nonlinear. The linear operation is an inner product, and the nonlinear operation is represented by an activation function. In this work, we introduce a kind of quantum neuron whose inputs and outputs are quantum states. The inner product and activation operator of the quantum neurons can be realized by quantum circuits. Based on the quantum neuron, we propose a quantum neural network model in which the weights between neurons are all quantum states. We also construct a quantum circuit to realize this quantum neural network model, and we propose a learning algorithm. We show the validity of the learning algorithm theoretically and demonstrate the potential of the quantum neural network numerically.
    Comment: 10 pages, 13 figures
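    The swap test that underlies such a quantum inner product can be simulated directly on statevectors: a Hadamard on an ancilla, a controlled-SWAP of the two data registers, and a second Hadamard, after which the ancilla reads |0⟩ with probability (1 + |⟨ψ|φ⟩|²)/2. A minimal sketch (the helper names are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_state(dim):
    """A Haar-ish random normalized statevector for testing."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def swap_test_p0(psi, phi):
    """Probability of measuring the ancilla in |0> after a swap test.
    Simulated exactly on statevectors; should equal (1 + |<psi|phi>|^2) / 2."""
    dim = len(psi)
    # Full register: ancilla (most significant) ⊗ psi ⊗ phi, ancilla in |0>.
    state = np.kron(np.array([1.0, 0.0]), np.kron(psi, phi))
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    I = np.eye(dim * dim)
    # SWAP exchanges the two data registers.
    SWAP = np.zeros((dim * dim, dim * dim))
    for i in range(dim):
        for j in range(dim):
            SWAP[i * dim + j, j * dim + i] = 1.0
    # Controlled-SWAP: identity on the ancilla-0 block, SWAP on ancilla-1.
    CSWAP = np.block([[I, np.zeros_like(I)], [np.zeros_like(I), SWAP]])
    state = np.kron(H, I) @ state   # Hadamard on the ancilla
    state = CSWAP @ state           # swap data registers if ancilla is |1>
    state = np.kron(H, I) @ state   # second Hadamard
    # Probability mass on the ancilla-0 half of the register.
    return float(np.sum(np.abs(state[: dim * dim]) ** 2))

psi, phi = random_state(2), random_state(2)
p0 = swap_test_p0(psi, phi)
overlap = abs(np.vdot(psi, phi)) ** 2
print(p0, (1 + overlap) / 2)   # the two values agree
```

    Because P(0) depends only on |⟨ψ|φ⟩|², repeating the measurement estimates the squared inner product between two unknown quantum states, which is exactly the linear operation a quantum neuron needs.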

    Dynamical invariants in non-Markovian quantum state diffusion equation

    We find dynamical invariants for open quantum systems described by the non-Markovian quantum state diffusion (QSD) equation. In stark contrast to closed systems, where the dynamical invariant can be identical to the system density operator, these dynamical invariants no longer share the equation of motion for the density operator. Moreover, the invariants obtained from a bi-orthonormal basis can be used to render an exact solution to the QSD equation and the corresponding non-Markovian dynamics without using master equations or numerical simulations. Significantly, we show that these dynamical invariants can be applied to reverse-engineer a Hamiltonian that drives the system to a target state, providing a novel way to design control strategies for open quantum systems.
    Comment: 6 pages, 2 figures
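    For context, the linear non-Markovian QSD equation the abstract refers to is commonly written in the Diósi–Strunz form; the notation below is the standard one from the literature, not taken from this paper (H is the system Hamiltonian, L the system operator coupling to the bath, z_t^* a colored Gaussian noise process with correlation function α(t, s)):

```latex
% Standard linear non-Markovian QSD equation (Diosi--Strunz form).
% Assumed notation: H system Hamiltonian, L coupling operator,
% z_t^* Gaussian colored noise with bath correlation alpha(t,s).
\partial_t \psi_t(z^*) =
  \left[ -\,i H + L\, z_t^*
         - L^\dagger \int_0^t \mathrm{d}s\; \alpha(t,s)\,
           \frac{\delta}{\delta z_s^*} \right] \psi_t(z^*)
```

    The functional derivative under the memory integral is what makes the equation non-Markovian, and it is the object that exact-solution and invariant-based methods must handle without resorting to a master equation.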