206 research outputs found

    Volatile Decision Dynamics: Experiments, Stochastic Description, Intermittency Control, and Traffic Optimization

    The coordinated and efficient distribution of limited resources through individual decisions is a fundamental, unsolved problem. When individuals compete for road capacity, time, space, money, goods, etc., they normally base their decisions on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed volatile decision dynamics and far-from-optimal payoff distributions. We have also identified ways of presenting information that can considerably improve the overall performance of the system. To determine optimal strategies of decision guidance by means of user-specific recommendations, we develop a stochastic behavioural description. These strategies increase the adaptability to changing conditions and reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions via decision support systems, and for information service providers. One promising field of application is traffic optimization.
    Comment: For related work see http://www.helbing.or
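    The overreaction mechanism described in this abstract can be caricatured in a few lines: when every agent best-responds to the same aggregate signal, the system oscillates, while recommending the switch to only a fraction of the agents stabilizes it. This is a toy illustration, not the paper's stochastic behavioural description; all parameters here are invented.

```python
def simulate(n=100, rounds=50, adapt=1.0, start=80):
    """Toy two-route congestion game (an illustration, not the authors'
    model).  `on_a` agents use road A; the more crowded road pays less.
    Each round the naive responders on the worse road switch: with
    adapt=1.0 they all react to the same aggregate signal and overshoot,
    producing volatile oscillation.  A guidance strategy that recommends
    switching to only a fraction `adapt` of them damps the dynamics
    toward the 50/50 user equilibrium."""
    on_a = start
    trace = []
    for _ in range(rounds):
        trace.append(on_a)
        overload = abs(on_a - n // 2)          # excess on the crowded road
        movers = round(2 * overload * adapt)   # naive response overshoots by 2x
        on_a += -movers if on_a > n // 2 else movers
        on_a = max(0, min(n, on_a))            # clip to the population size
    return trace

volatile = simulate(adapt=1.0)   # oscillates 80 <-> 20 forever
guided = simulate(adapt=0.25)    # settles near the 50/50 equilibrium
```

    With full adaptation the population never converges; with partial, user-specific guidance it approaches the equilibrium within a few rounds.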

    Occasional errors can benefit coordination

    The chances of solving a problem that involves coordination between people are increased by introducing robotic players that sometimes make mistakes. This finding has implications for real-world coordination problems.

    Geometric representations for minimalist grammars

    We reformulate minimalist grammars as partial functions on term algebras for strings and trees. Using filler/role bindings and tensor product representations, we construct homomorphisms for these data structures into geometric vector spaces. We prove that the structure-building functions as well as simple processors for minimalist languages can be realized by piecewise linear operators in representation space. We also propose harmony, i.e. the distance of an intermediate processing step from the final well-formed state in representation space, as a measure of processing complexity. Finally, we illustrate our findings by means of two particular arithmetic and fractal representations.
    Comment: 43 pages, 4 figures
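    The filler/role binding this abstract relies on can be sketched with Smolensky-style tensor product representations: each filler (symbol) vector is bound to a positional role vector via an outer product, the bindings are superposed, and a filler is recovered by contracting with its role. A minimal sketch with toy vectors and orthonormal roles, not the paper's full construction for trees:

```python
def bind(pairs):
    """Tensor product representation: superpose the outer products
    f ⊗ r over all (filler, role) pairs."""
    rows, cols = len(pairs[0][0]), len(pairs[0][1])
    T = [[0.0] * cols for _ in range(rows)]
    for f, r in pairs:
        for i in range(rows):
            for j in range(cols):
                T[i][j] += f[i] * r[j]
    return T

def unbind(T, role):
    """Recover the filler bound to `role` by contracting T with it
    (exact when the role vectors are orthonormal)."""
    return [sum(T[i][j] * role[j] for j in range(len(role))) for i in range(len(T))]

# bind the string "ab": positional roles r0, r1 are orthonormal basis vectors
a, b = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]   # arbitrary toy filler vectors
r0, r1 = [1.0, 0.0], [0.0, 1.0]
T = bind([(a, r0), (b, r1)])
assert unbind(T, r0) == a and unbind(T, r1) == b
```

    Because the roles are orthonormal, unbinding is lossless here; with merely linearly independent roles one would contract with the dual basis instead.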

    Technology Adoption in Critical Mass Games: Theory and Experimental Evidence

    We analyze the choice between two technologies, A and B, that both exhibit network effects. We introduce a critical mass game in which coordination on either standard constitutes a Nash equilibrium outcome, while coordination on standard B is assumed to be payoff-dominant. We present a heuristic definition of a critical mass and show that the critical mass is inversely related to the mixed-strategy equilibrium. We show that the critical mass is closely related to the risk dominance criterion, global game theory, and the maximin criterion. We present experimental evidence that both the relative degree of payoff dominance and risk dominance explain players' choices. We finally show that users' adoption behavior induces firms to select a relatively low-risk technology, which minimizes the problem of coordination failure to the benefit of consumers.
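    The relation between the mixed-strategy equilibrium and a critical mass can be illustrated in a simple 2x2 pure coordination game: payoff a when both adopt A, payoff b > a when both adopt B, and 0 on miscoordination. This parameterization is an assumption for illustration, not necessarily the paper's exact game.

```python
def mixed_eq_prob_A(a, b):
    """Probability of playing A in the mixed-strategy equilibrium.
    A player is indifferent when q*a == (1-q)*b, where q is the
    opponent's probability of A, so q = b/(a+b)."""
    return b / (a + b)

def critical_mass_B(a, b):
    """Smallest fraction p of the population on B that makes adopting B
    a best response: p*b >= (1-p)*a  =>  p >= a/(a+b)."""
    return a / (a + b)

# the critical mass for B is the complement of the mixed-equilibrium
# weight on A: the more attractive B is, the smaller the critical mass
assert abs(critical_mass_B(2, 3) - (1 - mixed_eq_prob_A(2, 3))) < 1e-12
assert critical_mass_B(2, 6) < critical_mass_B(2, 3)
```

    This makes the inverse relation in the abstract concrete: raising the payoff advantage of the payoff-dominant standard B shifts the mixed equilibrium toward B and shrinks the critical mass needed to tip the population.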

    The Effects of Social Ties on Coordination: Conceptual Foundations for an Empirical Analysis

    This paper investigates the influence that social ties can have on behavior. After defining the concept of social ties that we consider, we introduce an original model of social ties. The impact of such ties on social preferences is studied in a coordination game with an outside option. We provide a detailed game-theoretical analysis of this game while considering various types of players, i.e., self-interest maximizing, inequity-averse, and fair agents. In addition to these approaches, which require strategic reasoning to reach an equilibrium, we also present an alternative hypothesis that relies on the concept of team reasoning. After discussing the differences between the latter and our model of social ties, we show how an experiment can be designed so as to discriminate among the models presented in the paper.
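    One simple way to see how a social tie can change behaviour in a coordination game with an outside option is to let a player weight the partner's payoff in their utility. The linear utility form and all numbers below are hypothetical assumptions for illustration; the paper's own tie model may differ.

```python
def enters(own, other, outside, tie=0.0):
    """Social-tie sketch: the player weights the partner's payoff by
    `tie` (0 = pure self-interest).  Entering the coordination game
    beats the outside option when own + tie*other >= outside (the
    partner earns nothing if this player stays out).  Hypothetical
    utility form, not the paper's model."""
    return own + tie * other >= outside

# a tie can tip the decision toward coordination
assert not enters(own=5, other=5, outside=6)        # selfish: stays out
assert enters(own=5, other=5, outside=6, tie=0.5)   # tied: 7.5 >= 6, enters
```

    Even a modest weight on the partner's payoff can flip the choice from the safe outside option to the coordination attempt, which is the kind of behavioural difference the proposed experiment is meant to detect.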

    Validation of the LUMIPULSE automated immunoassay for the measurement of core AD biomarkers in cerebrospinal fluid

    OBJECTIVES: The core cerebrospinal fluid (CSF) biomarkers total tau (tTau), phospho-tau (pTau), amyloid β 1-42 (Aβ 1-42), and the Aβ 1-42/Aβ 1-40 ratio have transformed Alzheimer's disease (AD) research and are today increasingly used as diagnostic tools in clinical routine laboratories. Fully automated immunoassay instruments with ready-to-use assay kits and calibrators have simplified their analysis and improved the reproducibility of measurements. We evaluated the analytical performance of the fully automated immunoassay instrument LUMIPULSE G (Fujirebio) for measurement of the four core AD CSF biomarkers and determined cutpoints for AD diagnosis. METHODS: The LUMIPULSE G assays were compared with the established INNOTEST ELISAs (Fujirebio) for hTau Ag, pTau 181, and β-amyloid 1-42, with the V-PLEX Plus Aβ Peptide Panel 1 (6E10) (Meso Scale Discovery) for Aβ 1-42/Aβ 1-40, and with an LC-MS reference method for Aβ 1-42. Intra- and inter-laboratory reproducibility was evaluated for all assays. Clinical cutpoints for Aβ 1-42, tTau, and pTau were determined by analysis of three cohorts of clinically diagnosed patients, comprising 651 CSF samples. For the Aβ 1-42/Aβ 1-40 ratio, the cutpoint was determined by mixture model analysis of 2,782 CSF samples. RESULTS: The LUMIPULSE G assays showed strong correlation with all other immunoassays (r > 0.93 for all assays). The repeatability (intra-laboratory) CVs ranged between 2.0 and 5.6%, with the highest variation observed for β-amyloid 1-40. The reproducibility (inter-laboratory) CVs ranged between 2.1 and 6.5%, with the highest variation observed for β-amyloid 1-42. The clinical cutpoints for AD were determined to be 409 ng/L for total tau, 50.2 ng/L for pTau 181, 526 ng/L for β-amyloid 1-42, and 0.072 for the Aβ 1-42/Aβ 1-40 ratio. CONCLUSIONS: Our results suggest that the LUMIPULSE G assays for the CSF AD biomarkers are fit for purpose in clinical laboratory practice. Further, they corroborate previously reported reference limits for these biomarkers.
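    The cutpoints reported in the abstract can be applied mechanically as follows. This is only an illustration of how the four thresholds partition a measurement, with invented sample values; real interpretation of the profile is a clinical matter.

```python
def ad_profile(ab42, ab40, ttau, ptau):
    """Flag which markers fall on the AD side of the cutpoints reported
    in the abstract (ng/L, except the dimensionless Aβ 1-42/Aβ 1-40
    ratio): Aβ 1-42 < 526, ratio < 0.072, tTau > 409, pTau 181 > 50.2.
    Illustrative only, not a diagnostic tool."""
    return {
        "ab42_low": ab42 < 526,
        "ratio_low": ab42 / ab40 < 0.072,
        "ttau_high": ttau > 409,
        "ptau_high": ptau > 50.2,
    }

# hypothetical AD-like sample: all four markers cross their cutpoints
flags = ad_profile(ab42=450, ab40=9000, ttau=600, ptau=80)
assert all(flags.values())
```

    Note that the ratio can remain normal when Aβ 1-42 alone is borderline, which is one reason the ratio cutpoint was derived separately by mixture model analysis.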

    A brain-inspired cognitive system that mimics the dynamics of human thought

    In recent years, some impressive AI systems have been built that can play games and answer questions about large quantities of data. However, we are still a very long way from AI systems that can think and learn in a human-like way. We have a great deal of information about how the brain works and can simulate networks of hundreds of millions of neurons, so it seems likely that we could use our neuroscientific knowledge to build brain-inspired artificial intelligence that acts like humans on similar timescales. This paper describes an AI system that we have built using a brain-inspired network of artificial spiking neurons. On a word recognition and colour naming task, our system behaves like human subjects on a similar timescale. In the longer term, this type of AI technology could lead to more flexible general-purpose artificial intelligence and to more natural human-computer interaction.
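    The generic building block of spiking-network systems like the one described is the leaky integrate-and-fire neuron, sketched below. This is the textbook LIF model with invented parameters, not the authors' actual architecture.

```python
def lif(input_current, threshold=1.0, leak=0.9, steps=50):
    """Leaky integrate-and-fire neuron: membrane potential v decays by
    `leak` each step, integrates the input, and emits a spike (then
    resets) when it crosses `threshold`.  Returns the spike times."""
    v, spikes = 0.0, []
    for t in range(steps):
        v = leak * v + input_current
        if v >= threshold:
            spikes.append(t)
            v = 0.0   # reset after spike
    return spikes

strong = lif(0.3)    # suprathreshold drive: fires periodically
weak = lif(0.05)     # subthreshold drive: never fires
```

    The interval between spikes shrinks as the drive grows, which is how rate-coded information and human-like response latencies can emerge from populations of such units.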

    The nature of the memory trace and its neurocomputational implications

    The brain processes underlying cognitive tasks must be very robust. Disruptions such as the destruction of large numbers of neurons, alcohol, or lack of sleep have no negative effects unless they occur in an extreme form. This robustness implies that the parameters determining the functioning of networks of individual neurons either have large ranges or that stabilizing mechanisms keep the functioning of a network within narrow bounds. We describe the simulation of a minimal neuronal architecture necessary for studying cognitive tasks, consisting of a loop of three cell-assemblies. A crucial factor in this architecture is the critical threshold of a cell-assembly. When a cell-assembly is activated above the critical threshold, its activation grows autonomously, which leads to an oscillation in the loop. When activated below the critical threshold, excitation gradually extinguishes. To circumvent the large parameter space of spiking neurons, a rate-dependent model of neuronal firing was chosen. The resulting space of 12 parameters was explored by means of a genetic algorithm. The ranges of the parameters for which the architecture produced the required oscillations and extinctions turned out to be relatively narrow, and they remained narrow when a stabilizing mechanism controlling the total amount of activation was introduced. The architecture thus shows chaotic behaviour. Given the overall stability of the operation of the brain, we conclude that other mechanisms must make the network robust. Three candidate mechanisms are discussed: synaptic scaling, synaptic homeostasis, and the synchronization of neural spikes.
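    The critical-threshold behaviour of a cell-assembly (autonomous growth above threshold, gradual extinction below) can be caricatured by a one-unit rate model. This is a deliberately minimal illustration with invented parameters, not the paper's 12-parameter, three-assembly architecture.

```python
def assembly(act0, threshold=0.3, gain=1.25, decay=0.8, steps=20, cap=1.0):
    """One-unit rate-model sketch of a cell-assembly's critical
    threshold: activation above `threshold` grows autonomously by
    `gain` until it saturates at `cap` (ignition); activation below
    it shrinks by `decay` until it extinguishes."""
    a = act0
    trace = [a]
    for _ in range(steps):
        a = min(cap, a * gain) if a >= threshold else a * decay
        trace.append(a)
    return trace

ignited = assembly(0.35)   # just above threshold: grows to saturation
faded = assembly(0.25)     # just below threshold: dies out
```

    The sharp divergence of two nearby starting activations illustrates why the viable parameter ranges found by the genetic algorithm were narrow: small parameter changes move trajectories across the critical threshold.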