
    Illusory versus Genuine Control in Agent-Based Games

    In the Minority, Majority and Dollar Games (MG, MAJG, $G), synthetic agents compete for rewards, at each time-step acting in accord with the previously best-performing of their limited sets of strategies. Different components and/or aspects of real-world financial markets are modelled by these games. In the MG, agents compete for scarce resources; in the MAJG, agents imitate the group in the hope of exploiting a trend; in the $G, agents attempt to successfully predict and benefit from trends as well as changes in the direction of a market. It has been previously shown that in the MG, for a reasonable number of preliminary time steps preceding equilibrium (Time Horizon MG, THMG), agents' attempt to optimize their gains by active strategy selection is "illusory": the calculated hypothetical gains of their individual strategies are greater on average than agents' actual average gains. Furthermore, if a small proportion of agents deliberately choose and act in accord with their seemingly worst-performing strategy, these outperform all other agents on average, and even attain mean positive gain, otherwise rare for agents in the MG. This latter phenomenon raises the question as to how well the optimization procedure works in the MAJG and $G. We demonstrate that the illusion of control is absent in the MAJG and $G. In other words, low-entropy (more informative) strategies under-perform high-entropy (or random) strategies in the MG but outperform them in the MAJG and $G.
    This provides further clarification of the kinds of situations subject to genuine control, and those not, in set-ups a priori defined to emphasize the importance of optimization. Comment: 22 pages
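The strategy-selection mechanism these games share can be illustrated with a minimal Minority Game simulation. This is a toy sketch with hypothetical parameter values, not the authors' implementation: each agent holds a few fixed lookup-table strategies, keeps a "virtual" score for each as if it had always been played, and acts with the currently best-scoring one.

```python
import random

# Toy Minority Game sketch; N agents, memory M, S strategies each.
# All parameter values here are illustrative, not taken from the paper.
N, M, S, T = 101, 3, 2, 500
random.seed(0)

def new_strategy():
    # A strategy maps each of the 2**M possible histories to an action +/-1.
    return [random.choice((-1, 1)) for _ in range(2 ** M)]

agents = [{"strats": [new_strategy() for _ in range(S)],
           "virtual": [0] * S,   # hypothetical score per strategy
           "real": 0}            # actual realized gain
          for _ in range(N)]

history = random.randrange(2 ** M)  # last M outcomes encoded as an int

for _ in range(T):
    # Each agent acts with its currently best-performing strategy.
    actions = [a["strats"][max(range(S), key=lambda s: a["virtual"][s])][history]
               for a in agents]
    A = sum(actions)                 # aggregate move; the minority side wins
    winning = -1 if A > 0 else 1
    for a, act in zip(agents, actions):
        a["real"] += 1 if act == winning else -1
        for s in range(S):           # virtual scores: as if always played
            a["virtual"][s] += 1 if a["strats"][s][history] == winning else -1
    history = ((history << 1) | (1 if winning == 1 else 0)) % (2 ** M)

mean_real = sum(a["real"] for a in agents) / N
mean_virtual = sum(max(a["virtual"]) for a in agents) / N
print(mean_real, mean_virtual)
```

Comparing `mean_real` with `mean_virtual` exposes the gap the abstract calls "illusory": in the MG the hypothetical score of an agent's best strategy typically exceeds the gain the agent actually realizes by playing it.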

    Reverse Engineering Financial Markets with Majority and Minority Games Using Genetic Algorithms

    Using virtual stock markets with artificial interacting software investors, aka agent-based models, we present a method to reverse engineer real-world financial time series. We model financial markets as made of a large number of interacting boundedly rational agents. By optimizing the similarity between the actual data and that generated by the reconstructed virtual stock market, we obtain parameters and strategies which reveal some of the inner workings of the target stock market. We validate our approach by out-of-sample predictions of directional moves of the Nasdaq Composite Index.
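The optimization loop the abstract outlines can be sketched as a generic genetic algorithm. Here `simulate` is a trivial stand-in for the paper's agent-based virtual market, and every name, genome encoding, and parameter value is hypothetical; only the loop structure (evaluate directional similarity, select, recombine, mutate) reflects the described method.

```python
import random

random.seed(1)
# Stand-in for the signs of real returns; the paper uses actual market data.
target = [random.choice((-1, 1)) for _ in range(50)]

def simulate(params, n):
    # Placeholder virtual market: a biased +/-1 series driven by the genome.
    rng = random.Random(sum(params))
    bias = (sum(params) % 7) / 7.0
    return [1 if rng.random() < 0.5 + 0.1 * (bias - 0.5) else -1
            for _ in range(n)]

def fitness(params):
    # Similarity = fraction of directional moves reproduced.
    sim = simulate(params, len(target))
    return sum(s == t for s, t in zip(sim, target)) / len(target)

def evolve(pop_size=30, genome_len=8, gens=40):
    pop = [[random.randrange(16) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]       # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]        # one-point crossover
            if random.random() < 0.2:        # point mutation
                child[random.randrange(genome_len)] = random.randrange(16)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The fittest genome would then be read off as the reconstructed market's parameters and strategies, and validated out-of-sample as the abstract describes.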

    Illusory versus genuine control in agent-based games

    In the Minority, Majority and Dollar Games (MG, MAJG, $G) agents compete for rewards, acting in accord with the previously best-performing of their strategies. Different aspects/kinds of real-world markets are modelled by these games. In the MG, agents compete for scarce resources; in the MAJG agents imitate the group to exploit a trend; in the $G agents attempt to predict and benefit both from trends and changes in the direction of a market. It has been previously shown that in the MG, for a reasonable number of preliminary time steps preceding equilibrium (Time Horizon MG, THMG), agents' attempt to optimize their gains by active strategy selection is "illusory": the hypothetical gains of their strategies are greater on average than agents' actual average gains. Furthermore, if a small proportion of agents deliberately choose and act in accord with their seemingly worst-performing strategy, these outperform all other agents on average, and even attain mean positive gain, otherwise rare for agents in the MG. This latter phenomenon raises the question as to how well the optimization procedure works in the THMAJG and TH$G. We demonstrate that the illusion of control is absent in the THMAJG and TH$G. This provides further clarification of the kinds of situations subject to genuine control, and those not, in set-ups a priori defined to emphasize the importance of optimization.

    "Illusion of control" in Time-Horizon Minority and Parrondo Games

    Human beings like to believe they are in control of their destiny. This ubiquitous trait seems to increase motivation and persistence, and is probably evolutionarily adaptive [S.E. Taylor, J.D. Brown, Psych. Bull. 103, 193 (1988); A. Bandura, Self-efficacy: the exercise of control (WH Freeman, New York, 1997)]. But how good really is our ability to control? How successful is our track record in these areas? There is little understanding of when and under what circumstances we may over-estimate [E. Langer, J. Pers. Soc. Psych. 7, 185 (1975)] or even lose our ability to control and optimize outcomes, especially when they are the result of aggregations of individual optimization processes. Here, we demonstrate analytically, using the theory of Markov chains, and by numerical simulations in two classes of games, the Time-Horizon Minority Game [M.L. Hart, P. Jefferies, N.F. Johnson, Phys. A 311, 275 (2002)] and the Parrondo Game [J.M.R. Parrondo, G.P. Harmer, D. Abbott, Phys. Rev. Lett. 85, 5226 (2000); J.M.R. Parrondo, How to cheat a bad mathematician (ISI, Italy, 1996)], that agents who optimize their strategy based on past information may actually perform worse than non-optimizing agents. In other words, low-entropy (more informative) strategies under-perform high-entropy (or random) strategies. This provides a precise definition of the "illusion of control" in certain set-ups a priori defined to emphasize the importance of optimization.

    "Illusion of control" in Minority and Parrondo Games

    Human beings like to believe they are in control of their destiny. This ubiquitous trait seems to increase motivation and persistence, and is probably evolutionarily adaptive. But how good really is our ability to control? How successful is our track record in these areas? There is little understanding of when and under what circumstances we may over-estimate or even lose our ability to control and optimize outcomes, especially when they are the result of aggregations of individual optimization processes. Here, we demonstrate analytically, using the theory of Markov chains, and by numerical simulations in two classes of games, the Minority Game and the Parrondo Games, that agents who optimize their strategy based on past information actually perform worse than non-optimizing agents. In other words, low-entropy (more informative) strategies under-perform high-entropy (or random) strategies. This provides a precise definition of the "illusion of control" in set-ups a priori defined to emphasize the importance of optimization. Comment: 17 pages, four figures, 1 table
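The Parrondo effect studied alongside the Minority Game can be reproduced in a few lines of simulation. This sketch uses the standard textbook game parameters from the Parrondo literature, not values taken from this paper:

```python
import random

EPS = 0.005  # small losing bias; a standard choice in the Parrondo literature

def play_A(capital, rng):
    # Game A: a plain coin flip with a slight losing bias.
    return capital + (1 if rng.random() < 0.5 - EPS else -1)

def play_B(capital, rng):
    # Game B: capital-dependent coin, unfavourable when capital % 3 == 0.
    p = 0.10 - EPS if capital % 3 == 0 else 0.75 - EPS
    return capital + (1 if rng.random() < p else -1)

def mean_final_capital(policy, trials=2000, steps=300):
    rng = random.Random(2)
    total = 0
    for _ in range(trials):
        c = 0
        for _ in range(steps):
            c = policy(c, rng)
        total += c
    return total / trials

only_A = mean_final_capital(play_A)
only_B = mean_final_capital(play_B)
mixed = mean_final_capital(
    lambda c, rng: play_A(c, rng) if rng.random() < 0.5 else play_B(c, rng))
print(only_A, only_B, mixed)
```

Played on its own, each game drifts downward on average, yet randomly alternating between them produces a positive mean capital. This is the paradoxical aggregation effect that makes the Parrondo Games a natural companion to the Minority Game in studies of control.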

    Two TPX2-Dependent Switches Control the Activity of Aurora A

    Aurora A is an important oncogenic kinase for mitotic spindle assembly and a potentially attractive target for human cancers. Its activation could be regulated by the ATP cycle and its activator TPX2. To understand the activation mechanism of Aurora A, a series of 20 ns molecular dynamics (MD) simulations were performed on both the wild-type kinase and its mutants. Analyzing the three dynamic trajectories (Aurora A-ATP, Aurora A-ADP, and Aurora A-ADP-TPX2) at the residue level, for the first time we find two TPX2-dependent switches, i.e., switch-1 (Lys-143) and switch-2 (Arg-180), which are tightly associated with Aurora A activation. In the absence of TPX2, Lys-143 exhibits a “closed” state, and becomes hydrogen-bonded to ADP. Once TPX2 binding occurs, switch-1 is forced to “open” the binding site, thus pulling ADP away from Aurora A. Without facilitation of TPX2, switch-2 exists in an “open” conformation which accompanies the outward-flipping movement of P·Thr288 (in an inactive conformation), leaving the crucial phosphothreonine exposed and accessible for deactivation. However, with the binding of TPX2, switch-2 is forced to undergo a “closed” movement, thus capturing P·Thr288 into a buried position and locking its active conformation. Analysis of two Aurora A (K143A and R180A) mutants for the two switches further verifies their functionality and reliability in controlling Aurora activity. Our simulations therefore suggest two switches determining Aurora A activation, which are important for the development of Aurora kinase inhibitors.

    The STAR-RICH Detector

    The STAR-RICH detector extends the particle identification capabilities of the STAR spectrometer for charged hadrons at mid-rapidity. It allows identification of pions and kaons up to ~3 GeV/c and protons up to ~5 GeV/c. The characteristics and performance of the device in the inaugural RHIC run are described.

    Identification of High p⊥ Particles with the STAR-RICH Detector

    The STAR-RICH detector extends the particle identification capabilities of the STAR experiment for charged hadrons at mid-rapidity. This detector represents the first use of a proximity-focusing CsI-based RICH detector in a collider experiment. It provides identification of pions and kaons up to 3 GeV/c and protons up to 5 GeV/c. The characteristics and performance of the device in the inaugural RHIC run are described. Comment: 6 pages, 6 figures
