
    Computation in Finitary Stochastic and Quantum Processes

    We introduce stochastic and quantum finite-state transducers as computation-theoretic models of classical stochastic and quantum finitary processes. Formal process languages, representing the distribution over a process's behaviors, are recognized and generated by suitable specializations. We characterize and compare deterministic and nondeterministic versions, summarizing their relative computational power in a hierarchy of finitary process languages. Quantum finite-state transducers and generators are a first step toward a computation-theoretic analysis of individual, repeatedly measured quantum dynamical systems. They are explored via several physical systems, including an iterated beam splitter, an atom in a magnetic field, and atoms in an ion trap -- a special case of which implements the Deutsch quantum algorithm. We show that these systems' behaviors, and so their information-processing capacity, depend sensitively on the measurement protocol.
    Comment: 25 pages, 16 figures, 1 table; http://cse.ucdavis.edu/~cmg; numerous corrections and updates
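
    As a hedged illustration of the measurement-protocol dependence described in this abstract (not code from the paper), the sketch below simulates an iterated beam splitter as a repeatedly measured two-state quantum system. The beam-splitter unitary is taken to be the Hadamard transform, and the only design choice is when the photon's path is measured: measuring after every pass yields a fair-coin output process, while measuring only after every second pass yields a deterministic one, since two Hadamard applications compose to the identity.

        import numpy as np

        rng = np.random.default_rng(0)

        # Beam-splitter unitary, modeled here as the Hadamard transform (an assumption).
        H = np.array([[1, 1],
                      [1, -1]]) / np.sqrt(2)

        def measure(state):
            """Projective measurement in the path basis {|0>, |1>}; collapses the state."""
            p0 = abs(state[0]) ** 2
            outcome = 0 if rng.random() < p0 else 1
            collapsed = np.zeros(2, dtype=complex)
            collapsed[outcome] = 1.0
            return outcome, collapsed

        def run(passes, measure_every):
            """Send one photon through `passes` beam splitters, measuring every
            `measure_every` passes; return the sequence of observed outcomes."""
            state = np.array([1.0, 0.0], dtype=complex)  # photon enters in path |0>
            outcomes = []
            for t in range(1, passes + 1):
                state = H @ state
                if t % measure_every == 0:
                    o, state = measure(state)
                    outcomes.append(o)
            return outcomes

        # Protocol 1: measure after every pass -> outcomes look like fair coin flips.
        print(run(16, measure_every=1))
        # Protocol 2: measure after every second pass -> H @ H = I, so the photon is
        # always found back in path |0>: a deterministic (zero-entropy) process.
        print(run(16, measure_every=2))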

    Random projections: data perturbation for classification problems

    Random projections offer an appealing and flexible approach to a wide range of large-scale statistical problems. They are particularly useful in high-dimensional settings, where we have many covariates recorded for each observation. In classification problems there are two general techniques using random projections. The first involves many projections in an ensemble -- the idea here is to aggregate the results after applying different random projections, with the aim of achieving superior statistical accuracy. The second class of methods includes hashing and sketching techniques, which are straightforward ways to reduce the complexity of a problem, often at a large computational saving, while approximately preserving statistical efficiency.
    Comment: 24 pages, 4 figures
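
    As a hedged toy sketch of the first technique mentioned above (an ensemble of random projections with aggregated results), and not the authors' specific method, the code below projects high-dimensional data onto several independent Gaussian random projections, fits a simple nearest-centroid classifier in each projected space (an arbitrary choice of base classifier), and aggregates the predictions by majority vote.

        import numpy as np

        rng = np.random.default_rng(1)

        def nearest_centroid_fit(X, y):
            """Base classifier: class centroids in the projected space."""
            classes = np.unique(y)
            centroids = np.vstack([X[y == c].mean(axis=0) for c in classes])
            return classes, centroids

        def nearest_centroid_predict(model, X):
            classes, centroids = model
            d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
            return classes[d.argmin(axis=1)]

        def rp_ensemble_predict(X_train, y_train, X_test, n_proj=25, dim=5):
            """Aggregate (majority-vote) predictions over n_proj random projections."""
            p = X_train.shape[1]
            votes = []
            for _ in range(n_proj):
                A = rng.standard_normal((p, dim)) / np.sqrt(dim)  # Gaussian projection
                model = nearest_centroid_fit(X_train @ A, y_train)
                votes.append(nearest_centroid_predict(model, X_test @ A))
            votes = np.stack(votes)  # shape (n_proj, n_test)
            # Majority vote across projections for each test point.
            return np.array([np.bincount(col).argmax() for col in votes.T])

        # Synthetic high-dimensional two-class data; classes differ in the first few coordinates.
        n, p = 200, 100
        X = rng.standard_normal((n, p))
        y = rng.integers(0, 2, size=n)
        X[y == 1, :3] += 2.0
        pred = rp_ensemble_predict(X[:150], y[:150], X[150:])
        print("test accuracy:", (pred == y[150:]).mean())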

    Ensemble deep learning: A review

    Ensemble learning combines several individual models to obtain better generalization performance. Currently, deep learning models with multilayer processing architectures show better performance than shallow or traditional classification models. Deep ensemble learning models combine the advantages of both deep learning and ensemble learning, so that the final model has better generalization performance. This paper reviews state-of-the-art deep ensemble models and hence serves as an extensive summary for researchers. The ensemble models are broadly categorised into bagging, boosting and stacking ensembles; negative-correlation-based deep ensemble models; explicit/implicit ensembles; homogeneous/heterogeneous ensembles; decision fusion strategies; and unsupervised, semi-supervised, reinforcement learning and online/incremental, multilabel-based deep ensemble models. Applications of deep ensemble models in different domains are also briefly discussed. Finally, we conclude the paper with some future recommendations and research directions.
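
    As a small, hedged illustration of two ideas named in this abstract -- bagging and a decision-fusion strategy (soft voting, i.e. averaging class probabilities across independently trained base models) -- the sketch below uses tiny NumPy logistic-regression models trained on bootstrap samples as stand-ins for deep networks; the base learner and training details are assumptions for illustration, not prescriptions from the review.

        import numpy as np

        rng = np.random.default_rng(2)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def train_logreg(X, y, lr=0.1, epochs=300):
            """Minimal logistic-regression base learner (a stand-in for a deep network)."""
            w = np.zeros(X.shape[1])
            b = 0.0
            for _ in range(epochs):
                p = sigmoid(X @ w + b)
                w -= lr * (X.T @ (p - y)) / len(y)
                b -= lr * (p - y).mean()
            return w, b

        def bagged_soft_vote(X_train, y_train, X_test, n_models=10):
            """Bagging + decision fusion: train each model on a bootstrap sample and
            average the predicted class probabilities (soft voting)."""
            probs = np.zeros(len(X_test))
            for _ in range(n_models):
                idx = rng.integers(0, len(X_train), size=len(X_train))  # bootstrap sample
                w, b = train_logreg(X_train[idx], y_train[idx])
                probs += sigmoid(X_test @ w + b)
            probs /= n_models
            return (probs >= 0.5).astype(int)

        # Synthetic two-class data.
        n, d = 300, 10
        X = rng.standard_normal((n, d))
        y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(n) > 0).astype(int)
        pred = bagged_soft_vote(X[:200], y[:200], X[200:])
        print("test accuracy:", (pred == y[200:]).mean())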