
    An Adaptive Entanglement Distillation Scheme Using Quantum Low Density Parity Check Codes

    Quantum low density parity check (QLDPC) codes are useful primitives for quantum information processing because they can be encoded and decoded efficiently. Moreover, the error-correcting capability of a few QLDPC codes exceeds the quantum Gilbert-Varshamov bound. Here, we report a numerical performance analysis of an adaptive entanglement distillation scheme using QLDPC codes. In particular, we find that the expected yield of our adaptive distillation scheme against depolarization errors exceeds that of Leung and Shor whenever the error probability is less than about 0.07 or greater than about 0.28. This finding illustrates the effectiveness of using QLDPC codes in entanglement distillation.
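
    The adaptive QLDPC protocol itself is not spelled out in the abstract, but a standard reference point for distillation under depolarization is the hashing bound, whose yield is easy to reproduce. A minimal sketch of that baseline only (not the paper's scheme), assuming the usual Bell-diagonal weights (1-p, p/3, p/3, p/3) for depolarization error probability p:

        import numpy as np

        def hashing_yield(p):
            """Hashing-bound yield for depolarization with error probability p;
            the shared pairs are Bell-diagonal with weights (1-p, p/3, p/3, p/3)."""
            if p <= 0:
                return 1.0
            probs = np.array([1 - p, p / 3, p / 3, p / 3])
            entropy = -np.sum(probs * np.log2(probs))   # von Neumann entropy S(rho)
            return max(0.0, 1.0 - entropy)              # yield = max(0, 1 - S)

        for p in (0.05, 0.07, 0.15, 0.28):
            print(f"p = {p:.2f}: hashing yield = {hashing_yield(p):.4f}")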

    Minority Game With Peer Pressure

    To study the interplay between global market choice and local peer pressure, we construct a minority-game-like econophysical model. In this so-called networked minority game, every selfish player uses both the historical minority choice of the population and the historical choices of their neighbors, in an unbiased manner, to make decisions. Numerical simulations show that the level of cooperation in the networked minority game differs remarkably from both the original minority game and the prediction of the crowd-anticrowd theory. We argue that this deviation arises because the crowd-anticrowd theory neglects a four-point correlation function in the effective Hamiltonian of the system.
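
    The networked variant is specific to this paper, but the original minority game it is compared against is standard. A minimal sketch of that baseline (parameter values are illustrative); the volatility of the attendance is the usual proxy for the level of cooperation:

        import numpy as np

        rng = np.random.default_rng(0)
        N, M, S, T = 101, 3, 2, 2000        # agents, memory, strategies per agent, rounds
        P = 2 ** M                          # number of distinct m-bit histories
        strategies = rng.integers(0, 2, size=(N, S, P))   # each maps history -> {0, 1}
        scores = np.zeros((N, S))           # virtual scores of the strategies
        history = rng.integers(0, P)        # current m-bit history, integer-encoded
        attendance = []

        for _ in range(T):
            best = scores.argmax(axis=1)                        # each agent's best strategy
            actions = strategies[np.arange(N), best, history]   # sides chosen this round
            n_ones = actions.sum()
            minority = 0 if n_ones > N / 2 else 1               # the winning (minority) side
            scores += (strategies[:, :, history] == minority)   # reward correct strategies
            history = ((history << 1) | minority) % P           # slide the history window
            attendance.append(n_ones)

        print("std of attendance:", np.std(attendance))         # volatility ~ cooperation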

    Expert system development for probabilistic load simulation

    A knowledge-based system, LDEXPT, built on the intelligent database paradigm was developed for the Composite Load Spectra (CLS) project to simulate the probabilistic loads of a space propulsion system. The knowledge-base approach provides a systematic framework for organizing load information and facilitates the coupling of numerical processing with symbolic (information) processing. It offers an incremental development environment for building generic probabilistic load models and for bookkeeping the associated load information. A large volume of load data is stored in the database and can be retrieved and updated by a built-in database management system, which standardizes the storage and retrieval procedures, helps maintain data integrity, and avoids data redundancy. The intelligent database paradigm also supports expert-system rules for shallow and deep reasoning, supplying expert knowledge that helps users obtain the required probabilistic load spectra.
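
    LDEXPT itself is not reproduced in this text, so the following is only a hypothetical sketch of the intelligent-database pattern the abstract describes: load records held in a database, with a rule layer that selects and adjusts them for a query. All names here (LoadRecord, select_loads, the sample records) are illustrative, not the LDEXPT API:

        from dataclasses import dataclass

        @dataclass
        class LoadRecord:
            component: str        # e.g. "turbine blade"
            load_type: str        # e.g. "thrust", "vibration"
            mean: float           # nominal load level
            cov: float            # coefficient of variation

        DATABASE = [
            LoadRecord("turbine blade", "vibration", 1.2e4, 0.15),
            LoadRecord("transfer duct", "thrust", 3.0e5, 0.08),
        ]

        def select_loads(component, mission_phase):
            """Shallow-reasoning rule: pick the records relevant to a query and
            adjust the nominal level for the mission phase (factor is invented)."""
            scale = 1.1 if mission_phase == "liftoff" else 1.0
            return [(r.load_type, r.mean * scale, r.cov)
                    for r in DATABASE if r.component == component]

        print(select_loads("turbine blade", "liftoff"))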

    Probabilistic load simulation: Code development status

    The objective of the Composite Load Spectra (CLS) project is to develop generic load models that simulate the composite load spectra induced in space propulsion system components. The probabilistic loads thus generated feed into the probabilistic design analysis (PDA) of a space propulsion system, which also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to couple a knowledge-based system with a probabilistic load simulation module: the knowledge base manages and furnishes load information and expertise and sets up the simulation runs, while the simulation module performs the numerical computation that generates the probabilistic loads from the information supplied by the CLS knowledge base.
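
    The CLS simulation module is not reproduced here; the sketch below only illustrates the general Monte Carlo idea of generating a probabilistic composite load by superposing individual load components drawn from assumed distributions. The distributions and magnitudes are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_composite_load(n_samples=10_000):
            """Illustrative Monte Carlo sketch (not the CLS code): draw individual
            load components from assumed distributions and superpose them."""
            thrust    = rng.normal(3.0e5, 0.08 * 3.0e5, n_samples)    # quasi-static load
            vibration = rng.normal(0.0,   1.5e3,        n_samples)    # zero-mean dynamic load
            thermal   = rng.lognormal(np.log(2.0e4), 0.2, n_samples)  # skewed thermal load
            return thrust + vibration + thermal

        loads = simulate_composite_load()
        print(f"mean = {loads.mean():.3e}, 99th percentile = {np.percentile(loads, 99):.3e}")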

    Modeling the Psychology of Consumer and Firm Behavior with Behavioral Economics

    Marketing is an applied science that tries to explain and influence how firms and consumers actually behave in markets. Marketing models are usually applications of economic theories. These theories are general and produce precise predictions, but they rely on strong assumptions about the rationality of consumers and firms. Theories based on rationality limits could prove similarly general and precise, while grounding theories in psychological plausibility and explaining facts that are puzzles for the standard approach. Behavioral economics explores the implications of such limits on rationality, with the goal of making economic theories more plausible while maintaining formal power and accurate prediction of field data. This review focuses selectively on six types of models used in behavioral economics that can be applied to marketing. Three of the models generalize consumer preference to allow (1) sensitivity to reference points (and loss aversion); (2) social preferences toward the outcomes of others; and (3) a preference for instant gratification (quasi-hyperbolic discounting). These are applied to industrial channel bargaining, salesforce compensation, and the pricing of virtuous goods such as gym memberships. The other three models generalize the concept of game-theoretic equilibrium, allowing decision makers to make mistakes (quantal response equilibrium), to encounter limits on the depth of strategic thinking (cognitive hierarchy), and to equilibrate by learning from feedback (self-tuning EWA). These are applied to marketing strategy problems involving differentiated products, competitive entry into large and small markets, and low-price guarantees. The main goal of this selective review is to encourage marketing researchers of all kinds to apply these tools to marketing. Understanding and applying the models is a technical challenge for marketing modelers, one that also requires thoughtful input from psychologists studying the details of consumer behavior. As a result, models like these could create a common language for modelers who prize formality and psychologists who prize realism.
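
    Two of the preference models named above have compact textbook forms: reference-dependent value with loss aversion, and beta-delta (quasi-hyperbolic) discounting. A minimal sketch, using a piecewise-linear value function and Tversky and Kahneman's loss-aversion estimate of 2.25; all parameter values are illustrative:

        def value(x, reference=0.0, loss_aversion=2.25):
            """Reference-dependent value: losses relative to the reference point
            loom larger than gains (2.25 is Tversky & Kahneman's estimate)."""
            gain = x - reference
            return gain if gain >= 0 else loss_aversion * gain

        def beta_delta_utility(payoffs, beta=0.7, delta=0.95):
            """Quasi-hyperbolic discounting: full weight on today's payoff, an
            extra present-bias factor beta on every future payoff."""
            return payoffs[0] + sum(beta * delta ** t * u
                                    for t, u in enumerate(payoffs[1:], 1))

        # A gym-membership trade-off: pay a cost now, reap health benefits later.
        print(beta_delta_utility([-10, 4, 4, 4]))   # present bias shrinks future benefits
        print(value(-5), value(5))                  # a loss hurts ~2.25x an equal gain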

    Individual Differences in EWA Learning with Partial Payoff Information

    We extend experience-weighted attraction (EWA) learning to games in which only the set of possible foregone payoffs from unchosen strategies is known, and we estimate parameters separately for each player to study heterogeneity. We assume players estimate an unknown foregone payoff from a strategy by substituting the last payoff actually received from that strategy, by clairvoyantly guessing the actual foregone payoff, or by averaging the set of possible foregone payoffs conditional on the actual outcomes. All three assumptions improve the predictive accuracy of EWA. Individual parameter estimates suggest that players cluster into two distinct subgroups (which differ from traditional reinforcement and belief learning).
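
    The core EWA update (in Camerer and Ho's form) is standard; the sketch below implements one attraction update plus a logit choice rule. Under partial payoff information, the foregone entries of the payoff vector would be filled in by one of the three substitution assumptions above; parameter values here are illustrative:

        import numpy as np

        def ewa_update(A, N, payoffs, chosen, phi=0.9, rho=0.85, delta=0.5):
            """One EWA attraction update. `payoffs[j]` is the payoff strategy j
            earned (if chosen) or would have earned (foregone) this round."""
            N_new = rho * N + 1
            weight = delta + (1 - delta) * (np.arange(len(A)) == chosen)
            A_new = (phi * N * A + weight * payoffs) / N_new
            return A_new, N_new

        def choice_probs(A, lam=1.0):
            """Logit response to attractions."""
            e = np.exp(lam * (A - A.max()))   # subtract max for numerical stability
            return e / e.sum()

        A, N = np.zeros(3), 1.0
        A, N = ewa_update(A, N, payoffs=np.array([2.0, 5.0, 1.0]), chosen=1)
        print(choice_probs(A))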