    Collective effects enhancing power and efficiency

    Energy conversion is most efficient for micro or nano machines with tight coupling between input and output power. To reach meaningful amounts of power, ensembles of N such machines must be considered. We use a model system to demonstrate that interactions between N tightly coupled nanomachines can enhance the power output per machine. Furthermore, while interactions break tight coupling and thus lower efficiency in finite ensembles, the macroscopic limit (N → ∞) restores it and enhances both the efficiency and the output power per nanomachine. Comment: 5 pages, 3 figures

    Quantum Turing Machines Computations and Measurements

    Contrary to the classical case, the relation between quantum programming languages and quantum Turing Machines (QTM) has not been fully investigated. In particular, there are features of QTMs that have not been exploited, a notable example being the intrinsic infinite nature of any quantum computation. In this paper we propose a definition of QTM which extends and unifies the notions of Deutsch and of Bernstein and Vazirani. In particular, we allow both arbitrary quantum inputs and meaningful superpositions of computations, where some of them are "terminated" with an "output" while others are not. For some infinite computations an "output" is obtained as a limit of finite portions of the computation. We propose a natural and robust observation protocol for our QTMs that does not modify the probability of the possible outcomes of the machines. Finally, we use QTMs to define a class of quantum computable functions---any such function is a mapping from a general quantum state to a probability distribution of natural numbers. We expect that our class of functions, when restricted to classical input-output, will be no different from the set of recursive functions. Comment: arXiv admin note: substantial text overlap with arXiv:1504.02817. To appear in MDPI Applied Sciences, 202

    Transductions Computed by One-Dimensional Cellular Automata

    Cellular automata are investigated with respect to their ability to compute transductions, that is, to transform inputs into outputs. The families of transductions computed are classified with regard to the time allowed to process the input and to compute the output. Since there is particular interest in fast transductions, we mainly focus on the time complexities real time and linear time. We first investigate the computational capabilities of cellular automaton transducers by comparing them with iterative array transducers, that is, we compare the parallel input/output mode with the sequential input/output mode of massively parallel machines. By direct simulations, it turns out that the parallel mode is not weaker than the sequential one. Moreover, with regard to certain time complexities, cellular automaton transducers are even more powerful than iterative arrays. In the second part of the paper, the model in question is compared with two sequential devices, single-valued finite state transducers and deterministic pushdown transducers. It turns out that both models can be simulated by cellular automaton transducers faster than by iterative array transducers. Comment: In Proceedings AUTOMATA&JAC 2012, arXiv:1208.249
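
    As a rough illustration of the parallel input/output mode discussed above, the sketch below (added here for orientation, not code from the paper) runs a one-dimensional cellular automaton as a transducer: the whole input is loaded into the cells at once, all cells update synchronously under a local rule, and after a fixed number of steps the configuration is read off in parallel as the output. The binary alphabet, the XOR-with-left-neighbour rule, and the choice to read the output after |input| steps are arbitrary assumptions made only for the example.

        # Toy one-dimensional cellular automaton used as a transducer.
        # Local rule (illustrative only): each cell becomes the XOR of its
        # left neighbour and itself; the missing boundary neighbour is 0.
        def ca_transduce(cells, steps):
            cells = list(cells)
            for _ in range(steps):
                cells = [(0 if i == 0 else cells[i - 1]) ^ cells[i]
                         for i in range(len(cells))]
            return cells

        if __name__ == "__main__":
            inp = [1, 0, 1, 1, 0, 0, 1, 0]
            # Read the output in parallel after len(inp) synchronous steps.
            print(ca_transduce(inp, len(inp)))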

    Optimal N-to-M Cloning of Quantum Coherent States

    The cloning of continuous quantum variables is analyzed based on the concept of Gaussian cloning machines, i.e., transformations that yield copies that are Gaussian mixtures centered on the state to be copied. The optimality of Gaussian cloning machines that transform N identical input states into M output states is investigated, and bounds on the fidelity of the process are derived via a connection with quantum estimation theory. In particular, the optimal N-to-M cloning fidelity for coherent states is found to be equal to MN/(MN+M-N). Comment: 3 pages, RevTeX
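
    The closed-form expression above is easy to evaluate; the snippet below (an added illustration, not part of the paper) checks a few cases, including the familiar 1-to-2 value of 2/3.

        # Optimal Gaussian N-to-M cloning fidelity for coherent states,
        # F = M*N / (M*N + M - N), as quoted in the abstract above.
        def cloning_fidelity(n_in: int, m_out: int) -> float:
            return (m_out * n_in) / (m_out * n_in + m_out - n_in)

        print(cloning_fidelity(1, 2))   # 2/3: the well-known 1-to-2 case
        print(cloning_fidelity(2, 3))   # 6/7
        # For fixed N and M -> infinity the expression tends to N/(N+1),
        # i.e. the fidelity of measuring N copies and re-preparing the state.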

    Correlation of internal representations in feed-forward neural networks

    Feed-forward multilayer neural networks implementing random input-output mappings develop characteristic correlations between the activity of their hidden nodes which are important for understanding the storage and generalization performance of the network. It is shown how these correlations can be calculated from the joint probability distribution of the aligning fields at the hidden units for an arbitrary decoder function between the hidden layer and the output. Explicit results are given for the parity, AND, and committee machines with an arbitrary number of hidden nodes near saturation. Comment: 6 pages, latex, 1 figure
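
    For orientation, here is a small numerical sketch (added here, not taken from the paper) of the objects involved: in a committee machine, each hidden unit outputs the sign of its local field and the network output is the sign of their sum, so pairwise correlations of the hidden activations can be estimated by sampling. With unstructured random weights, as assumed below, the off-diagonal correlations come out near zero; the nontrivial correlations discussed in the abstract arise when the network implements stored random input-output mappings near saturation, which this sketch does not attempt to reproduce.

        # Empirical estimate of hidden-unit correlations in a committee machine.
        import numpy as np

        rng = np.random.default_rng(0)
        N, K, samples = 200, 3, 20000           # inputs, hidden units, samples

        W = rng.standard_normal((K, N))         # random hidden-layer weights
        X = rng.standard_normal((samples, N))   # random inputs

        H = np.sign(X @ W.T)                    # hidden activations in {-1, +1}
        output = np.sign(H.sum(axis=1))         # committee (majority) output

        corr = (H.T @ H) / samples              # pairwise correlation matrix
        print(np.round(corr, 3))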

    Geometry and Expressive Power of Conditional Restricted Boltzmann Machines

    Conditional restricted Boltzmann machines are undirected stochastic neural networks with a layer of input and output units connected bipartitely to a layer of hidden units. These networks define models of conditional probability distributions on the states of the output units given the states of the input units, parametrized by interaction weights and biases. We address the representational power of these models, proving results on their ability to represent conditional Markov random fields and conditional distributions with restricted supports, on the minimal size of universal approximators, on the maximal model approximation errors, and on the dimension of the set of representable conditional distributions. We contribute new tools for investigating conditional probability models, which allow us to improve the results that can be derived from existing work on restricted Boltzmann machine probability models. Comment: 30 pages, 5 figures, 1 algorithm
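
    To make the model concrete, the following brute-force sketch (added here; the exact parametrization is an assumption, since conventions differ between papers) enumerates the hidden and output states of a tiny conditional RBM and normalizes to obtain the conditional distribution p(output | input) defined by the weights and biases.

        # Brute-force conditional distribution of a tiny conditional RBM:
        # input x and output y are each connected bipartitely to hidden h.
        # Assumed energy (conventions vary across papers):
        #   E(y, h; x) = -(x.U.h + y.W.h + b.y + c.h)
        # and p(y | x) is proportional to sum_h exp(-E(y, h; x)).
        import itertools
        import numpy as np

        rng = np.random.default_rng(1)
        n_in, n_out, n_hid = 2, 2, 3
        U = rng.standard_normal((n_in, n_hid))
        W = rng.standard_normal((n_out, n_hid))
        b = rng.standard_normal(n_out)
        c = rng.standard_normal(n_hid)

        def p_out_given_in(x):
            x = np.asarray(x, dtype=float)
            weights = {}
            for y_bits in itertools.product([0, 1], repeat=n_out):
                y = np.asarray(y_bits, dtype=float)
                total = 0.0
                for h_bits in itertools.product([0, 1], repeat=n_hid):
                    h = np.asarray(h_bits, dtype=float)
                    energy = -(x @ U @ h + y @ W @ h + b @ y + c @ h)
                    total += np.exp(-energy)
                weights[y_bits] = total
            z = sum(weights.values())
            return {y_bits: w / z for y_bits, w in weights.items()}

        print(p_out_given_in([1, 0]))   # distribution over the 4 binary outputs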

    Monoidal computer III: A coalgebraic view of computability and complexity

    Monoidal computer is a categorical model of intensional computation, where many different programs correspond to the same input-output behavior. The upshot of yet another model of computation is that a categorical formalism should provide a much-needed high-level language for the theory of computation, flexible enough to allow abstracting away the low-level implementation details when they are irrelevant, or taking them into account when they are genuinely needed. A salient feature of the approach through monoidal categories is the formal graphical language of string diagrams, which supports visual reasoning about programs and computations. In the present paper, we provide a coalgebraic characterization of monoidal computer. It turns out that the availability of interpreters and specializers, which make a monoidal category into a monoidal computer, is equivalent to the existence of a *universal state space*, which carries a weakly final state machine for any pair of input and output types. Being able to program state machines in monoidal computers allows us to represent Turing machines, to capture their execution, and to count their steps as well as, e.g., the memory cells that they use. The coalgebraic view of monoidal computer thus provides a convenient diagrammatic language for studying computability and complexity. Comment: 34 pages, 24 figures; in this version: added the Appendix
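
    Read very loosely and without any of the categorical structure, a "state machine for a pair of input and output types" can be pictured as a transition function from (state, input) to (state, output); the sketch below (a plain-code gloss added here, not the paper's construction) simply runs such a machine and counts its steps, echoing the step-counting use mentioned above.

        # Plain-code gloss of a state machine over input type A and output
        # type B: step maps (state, a) to (next_state, b).  Running it and
        # counting steps mirrors the complexity bookkeeping described in the
        # abstract; the coalgebraic and string-diagram structure is not
        # captured here.
        from typing import Callable, Iterable, List, Tuple, TypeVar

        S = TypeVar("S")
        A = TypeVar("A")
        B = TypeVar("B")

        def run(step: Callable[[S, A], Tuple[S, B]],
                state: S, inputs: Iterable[A]) -> Tuple[S, List[B], int]:
            outputs, steps = [], 0
            for a in inputs:
                state, b = step(state, a)
                outputs.append(b)
                steps += 1
            return state, outputs, steps

        # Example machine: running sum as the state, parity of the sum as output.
        final, outs, n = run(lambda s, a: (s + a, (s + a) % 2), 0, [3, 1, 4, 1, 5])
        print(final, outs, n)   # 14 [1, 0, 0, 1, 0] 5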