
    Non-locality in theories without the no-restriction hypothesis

    The framework of generalized probabilistic theories (GPT) is a widely used approach for studying the physical foundations of quantum theory. The standard GPT framework assumes the no-restriction hypothesis, in which the state space of a physical theory determines the set of measurements. However, this assumption is not physically motivated. In Janotta and Lal [Phys. Rev. A 87, 052131 (2013)], it was shown how this assumption can be relaxed, and how such an approach can be used to describe new classes of probabilistic theories. This involves introducing a new, more general, definition of maximal joint state spaces, which we call the generalised maximal tensor product. Here we show that the generalised maximal tensor product recovers the standard maximal tensor product when at least one of the systems in a bipartite scenario obeys the no-restriction hypothesis. We also show that, under certain conditions, relaxing the no-restriction hypothesis for a given state space does not allow for stronger non-locality, although the generalised maximal tensor product may allow new joint states. Comment: In Proceedings QPL 2013, arXiv:1412.791

    Ångström-scale chemically powered motors

    Like their larger micron-scale counterparts, Ångström-scale chemically self-propelled motors use asymmetric catalytic activity to produce self-generated concentration gradients that lead to directed motion. Unlike their micron-scale counterparts, the sizes of Ångström-scale motors are comparable to those of the solvent molecules in which they move, they are dominated by fluctuations, and they operate on very different time scales. These new features are studied using molecular dynamics simulations of small sphere dimer motors. We show that the ballistic regime is dominated by the thermal speed, but the diffusion coefficients of these motors are orders of magnitude larger than those of inactive dimers. Such small motors may find applications in nano-confined systems, or perhaps eventually in the cell. Comment: 6 pages, 8 figures
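
    Diffusion coefficients like those compared above are commonly estimated from the mean-squared displacement via the Einstein relation, MSD(t) = 2·d·D·t in d dimensions. A minimal sketch on synthetic Brownian trajectories (the ensemble, step statistics, and parameters are hypothetical stand-ins, not the paper's molecular dynamics data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in data: an ensemble of 2D random-walk trajectories
    # with unit-variance displacement per step per dimension.
    n_traj, n_steps, dim, dt = 200, 1000, 2, 1.0
    steps = rng.normal(0.0, 1.0, size=(n_traj, n_steps, dim))
    trajs = np.cumsum(steps, axis=1)

    def diffusion_coefficient(trajs, dt):
        """Estimate D from the Einstein relation MSD(t) = 2 * dim * D * t."""
        disp = trajs - trajs[:, :1, :]                   # displacement from start
        msd = np.mean(np.sum(disp**2, axis=2), axis=0)   # ensemble-averaged MSD
        t = np.arange(trajs.shape[1]) * dt
        slope = np.polyfit(t, msd, 1)[0]                 # linear fit MSD ~ slope * t
        return slope / (2 * trajs.shape[2])

    D = diffusion_coefficient(trajs, dt)  # analytic value for these steps is 0.5
    ```

    For real trajectories one would also average over time origins and fit only the diffusive (late-time) regime, since the abstract notes an initial ballistic regime.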

    Random Information Spread in Networks

    Let G=(V,E) be an undirected loopless graph with possible parallel edges, and let s and t be two vertices of G. Assume that vertex s is labelled at the initial time step, and that every labelled vertex copies its labelling to neighbouring vertices along edges with one labelled endpoint, independently with probability p, in one time step. In this paper, we establish the equivalence between the expected s-t first arrival time of the above spread process and the notion of the stochastic shortest s-t path. Moreover, we give a short discussion of analytical results on special graphs, including the complete graph and s-t series-parallel graphs. Finally, we propose some lower bounds for the expected s-t first arrival time. Comment: 17 pages, 1 figure
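
    The spread process described above is easy to simulate directly. A minimal Monte Carlo sketch (the three-vertex path graph and p = 0.5 are illustrative choices, not from the paper; on a path s-a-t each edge takes a Geometric(p) number of steps to cross, so E[T] = 2/p = 4 here):

    ```python
    import random

    def first_arrival_time(adj, s, t, p, rng):
        """Simulate the spread: each labelled vertex copies its label to each
        unlabelled neighbour independently with probability p per time step."""
        labelled = {s}
        steps = 0
        while t not in labelled:
            steps += 1
            new = set()
            for u in labelled:
                for v in adj[u]:
                    if v not in labelled and rng.random() < p:
                        new.add(v)
            labelled |= new
        return steps

    # Hypothetical example graph: the path s - a - t.
    adj = {"s": ["a"], "a": ["s", "t"], "t": ["a"]}
    rng = random.Random(0)
    n = 20_000
    est = sum(first_arrival_time(adj, "s", "t", 0.5, rng) for _ in range(n)) / n
    ```

    Parallel edges are handled by listing a neighbour once per edge in the adjacency list, so each edge gets an independent chance per step.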

    A Behavioral and Neural Evaluation of Prospective Decision-Making under Risk

    Making the best choice when faced with a chain of decisions requires a person to judge both anticipated outcomes and future actions. Although economic decision-making models account for both risk and reward in single-choice contexts, there is a dearth of similar knowledge about sequential choice. Classical utility-based models assume that decision-makers select and follow an optimal predetermined strategy, regardless of the particular order in which options are presented. An alternative model involves continuously reevaluating decision utilities, without prescribing a specific future set of choices. Here, using behavioral and functional magnetic resonance imaging (fMRI) data, we studied human subjects in a sequential choice task and used these data to compare alternative decision models of valuation and strategy selection. We provide evidence that subjects adopt a model of reevaluating decision utilities, in which available strategies are continuously updated and combined in assessing action values. We validate this model by using simultaneously acquired fMRI data to show that sequential choice evokes a pattern of neural response consistent with a tracking of the anticipated distribution of future reward, as expected in such a model. Thus, brain activity evoked at each decision point reflects the expected mean, variance, and skewness of possible payoffs, consistent with the idea that sequential choice evokes a prospective evaluation of both available strategies and possible outcomes.
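
    The quantities tracked at each decision point (expected mean, variance, and skewness of possible payoffs) are the first three standardized moments of the payoff distribution. A minimal sketch with a hypothetical discrete payoff distribution (the values and probabilities are invented for illustration, not from the study):

    ```python
    import numpy as np

    # Hypothetical payoff distribution at one decision point.
    payoffs = np.array([0.0, 10.0, 25.0])
    probs   = np.array([0.5, 0.3, 0.2])

    mean = np.sum(probs * payoffs)                          # expected payoff
    var  = np.sum(probs * (payoffs - mean) ** 2)            # variance (risk)
    skew = np.sum(probs * (payoffs - mean) ** 3) / var**1.5 # standardized skewness
    ```

    Positive skewness here reflects a small chance of a large payoff, which the abstract suggests is tracked alongside mean and variance.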

    Unexpected properties of bandwidth choice when smoothing discrete data for constructing a functional data classifier

    The data functions that are studied in the course of functional data analysis are assembled from discrete data, and the level of smoothing that is used is generally that which is appropriate for accurate approximation of the conceptually smooth functions that were not actually observed. Existing literature shows that this approach is effective, and even optimal, when using functional data methods for prediction or hypothesis testing. However, in the present paper we show that this approach is not effective in classification problems. There, a useful rule of thumb is that undersmoothing is often desirable, but there are several surprising qualifications to that approach. First, the effect of smoothing the training data can be more significant than that of smoothing the new data set to be classified; second, undersmoothing is not always the right approach, and in fact in some cases using a relatively large bandwidth can be more effective; and third, these perverse results are the consequence of very unusual properties of error rates, expressed as functions of smoothing parameters. For example, the orders of magnitude of optimal smoothing parameter choices depend on the signs and sizes of terms in an expansion of the error rate, and those signs and sizes can vary dramatically from one setting to another, even for the same classifier. Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/13-AOS1158 by the Institute of Mathematical Statistics (http://www.imstat.org)
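
    To illustrate how a bandwidth controls under- versus oversmoothing when discrete data are turned into functions, here is a minimal Nadaraya-Watson kernel-smoothing sketch (the sine signal, noise level, and bandwidths are hypothetical, and this generic smoother stands in for, rather than reproduces, the paper's procedure):

    ```python
    import numpy as np

    def kernel_smooth(x_grid, x_obs, y_obs, h):
        """Nadaraya-Watson estimator with a Gaussian kernel of bandwidth h."""
        w = np.exp(-0.5 * ((x_grid[:, None] - x_obs[None, :]) / h) ** 2)
        return (w @ y_obs) / w.sum(axis=1)

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 100)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy discrete data

    grid = np.linspace(0, 1, 200)
    undersmoothed = kernel_smooth(grid, x, y, h=0.01)  # small h: follows the noise
    oversmoothed  = kernel_smooth(grid, x, y, h=0.5)   # large h: flattens the signal
    ```

    The paper's point is that the best h for classification can sit at either extreme, depending on signs and sizes of error-rate expansion terms, rather than at the approximation-optimal middle.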

    Learning a Policy for Opportunistic Active Learning

    Active learning identifies data points to label that are expected to be the most useful in improving a supervised model. Opportunistic active learning incorporates active learning into interactive tasks that constrain the possible queries during interactions. Prior work has shown that opportunistic active learning can be used to improve the grounding of natural language descriptions in an interactive object retrieval task. In this work, we use reinforcement learning for such an object retrieval task, to learn a policy that effectively trades off task completion with model improvement that would benefit future tasks. Comment: EMNLP 2018 Camera Ready
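
    The paper learns its querying policy with reinforcement learning; as a much simpler point of reference for what a query-selection rule looks like, here is a margin-based uncertainty-sampling sketch (the class probabilities and budget handling are hypothetical, not the paper's method):

    ```python
    import numpy as np

    def pick_query(probs, budget_left):
        """Pick the unlabelled point whose prediction is least certain
        (margin sampling); return None if no query budget remains."""
        if budget_left <= 0:
            return None
        sorted_p = np.sort(probs, axis=1)
        margin = sorted_p[:, -1] - sorted_p[:, -2]  # small margin = uncertain
        return int(np.argmin(margin))

    # Hypothetical class probabilities for three unlabelled candidates.
    probs = np.array([[0.90, 0.10],
                      [0.55, 0.45],   # smallest margin, so this one is queried
                      [0.70, 0.30]])
    idx = pick_query(probs, budget_left=1)
    ```

    A learned policy replaces this fixed heuristic with one that also weighs the cost of interrupting the current interactive task.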