Illusory versus Genuine Control in Agent-Based Games
In the Minority, Majority and Dollar Games (MG, MAJG, $G), agents attempt to
successfully predict and benefit from trends as well as from changes in the
direction of a market. It has been previously shown that in the MG, for a
reasonable number of preliminary time steps preceding equilibrium (Time
Horizon MG, THMG), agents' attempt to optimize their gains by active strategy
selection is "illusory": the calculated hypothetical gains of their individual
strategies are greater on average than the agents' actual average gains.
Furthermore, if a small proportion of agents deliberately choose and act in
accord with their seemingly worst-performing strategy, these outperform all
other agents on average, and even attain mean positive gain, otherwise rare
for agents in the MG. This latter phenomenon raises the question of how well
the optimization procedure works in the MAJG and $G. We find that low-entropy
(more informative) strategies under-perform high-entropy (or random)
strategies in the MG but outperform them in the MAJG and $G. This provides
further clarification of the kinds of situations subject to genuine control,
and those not, in set-ups a priori defined to emphasize the importance of
optimization. Comment: 22 pages.
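The strategy-selection mechanism described above can be sketched in a few lines. This is not the authors' THMG implementation; it is a minimal standard Minority Game with illustrative parameters (N = 101 agents, memory M = 3, S = 2 strategies per agent, ±1 payoffs), meant only to show how "virtual" strategy scores can diverge from realized gains:

```python
import random

random.seed(0)

# Illustrative parameters (not taken from the paper):
N, M, S, T = 101, 3, 2, 5000   # agents, memory, strategies per agent, rounds

def random_strategy():
    # A strategy maps each of the 2**M possible histories to an action in {-1, +1}.
    return [random.choice((-1, 1)) for _ in range(2 ** M)]

agents = [[random_strategy() for _ in range(S)] for _ in range(N)]
virtual = [[0] * S for _ in range(N)]  # hypothetical ("virtual") strategy scores
real = [0] * N                         # actual realized agent gains
history = random.randrange(2 ** M)     # last M outcomes encoded as an integer

for _ in range(T):
    # Each agent plays its currently best-scoring strategy (active selection).
    picks = [max(range(S), key=lambda s: virtual[i][s]) for i in range(N)]
    actions = [agents[i][picks[i]][history] for i in range(N)]
    winning = -1 if sum(actions) > 0 else 1   # the minority action wins (N odd)
    for i in range(N):
        real[i] += 1 if actions[i] == winning else -1
        # Virtual scores reward every strategy as if it had been played.
        for s in range(S):
            virtual[i][s] += 1 if agents[i][s][history] == winning else -1
    history = (history * 2 + (1 if winning == 1 else 0)) % (2 ** M)

mean_virtual = sum(max(v) for v in virtual) / N
mean_real = sum(real) / N
print(mean_virtual, mean_real)
```

Because the minority side is always smaller than the majority, the realized mean gain is negative every round, while the best virtual score per agent is typically far higher; the gap between the two printed numbers is the "illusory" component of control discussed above.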
"Illusion of control" in Time-Horizon Minority and ParrondoGames
Human beings like to believe they are in control of their destiny. This ubiquitous trait seems to increase motivation and persistence, and is probably evolutionarily adaptive [S.E. Taylor, J.D. Brown, Psych. Bull. 103, 193 (1988); A. Bandura, Self-efficacy: the exercise of control (WH Freeman, New York, 1997)]. But how good really is our ability to control? How successful is our track record in these areas? There is little understanding of when and under what circumstances we may over-estimate [E. Langer, J. Pers. Soc. Psych. 7, 185 (1975)] or even lose our ability to control and optimize outcomes, especially when they are the result of aggregations of individual optimization processes. Here, we demonstrate analytically, using the theory of Markov chains, and by numerical simulations in two classes of games, the Time-Horizon Minority Game [M.L. Hart, P. Jefferies, N.F. Johnson, Phys. A 311, 275 (2002)] and the Parrondo Game [J.M.R. Parrondo, G.P. Harmer, D. Abbott, Phys. Rev. Lett. 85, 5226 (2000); J.M.R. Parrondo, How to cheat a bad mathematician (ISI, Italy, 1996)], that agents who optimize their strategy based on past information may actually perform worse than non-optimizing agents. In other words, low-entropy (more informative) strategies under-perform high-entropy (or random) strategies. This provides a precise definition of the "illusion of control" in certain set-ups a priori defined to emphasize the importance of optimization.
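The Markov-chain side of the argument can be illustrated on the Parrondo pair. Assuming the textbook parametrization (Game A wins with probability 1/2 − ε; Game B wins with probability 1/10 − ε when capital is a multiple of 3 and 3/4 − ε otherwise; these are the standard values, not parameters quoted from this paper), the drift of each game follows from the stationary distribution of capital mod 3:

```python
# Stationary analysis of Parrondo's Game B and of the 50/50 A/B mixture,
# under the standard textbook parametrization (an illustrative sketch).
EPS = 0.005

def drift(p0, p1):
    """Expected gain per round for a capital-mod-3 chain that wins with
    probability p0 in state 0 and p1 in states 1 and 2."""
    # Transitions: from 0, win -> 1, lose -> 2; from 1, win -> 2, lose -> 0;
    # from 2, win -> 0, lose -> 1. Solve pi = pi P with pi0 normalized to 1.
    pi2 = ((1 - p0) + p0 * p1) / (1 - p1 * (1 - p1))
    pi1 = p0 + (1 - p1) * pi2
    z = 1 + pi1 + pi2
    win = (p0 + (pi1 + pi2) * p1) / z   # stationary win probability
    return 2 * win - 1

b_alone = drift(0.10 - EPS, 0.75 - EPS)                         # Game B alone: losing
mixed = drift((0.5 + 0.10) / 2 - EPS, (0.5 + 0.75) / 2 - EPS)   # random A/B mix: winning
print(b_alone, mixed)
```

Game A alone drifts at exactly −2ε per round; mixing a losing A with a losing B flips the sign of the drift because the mixture reshapes the occupation probabilities of the capital-mod-3 states.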
"Illusion of control" in Minority and Parrondo Games
Human beings like to believe they are in control of their destiny. This
ubiquitous trait seems to increase motivation and persistence, and is probably
evolutionarily adaptive. But how good really is our ability to control? How
successful is our track record in these areas? There is little understanding of
when and under what circumstances we may over-estimate or even lose our ability
to control and optimize outcomes, especially when they are the result of
aggregations of individual optimization processes. Here, we demonstrate
analytically using the theory of Markov Chains and by numerical simulations in
two classes of games, the Minority Game and the Parrondo Game, that agents who
optimize their strategy based on past information actually perform worse than
non-optimizing agents. In other words, low-entropy (more informative)
strategies under-perform high-entropy (or random) strategies. This provides a
precise definition of the "illusion of control" in set-ups a priori defined to
emphasize the importance of optimization. Comment: 17 pages, 4 figures, 1 table.
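The simulation side of the argument can be reproduced with a short Monte Carlo sketch. The game pair below is the textbook Parrondo construction (win probabilities 1/2 − ε for Game A; 1/10 − ε or 3/4 − ε for Game B depending on capital mod 3), with an illustrative ε and round/trial counts chosen for speed rather than taken from the paper:

```python
import random

random.seed(1)

EPS = 0.005              # small bias making both games individually losing
ROUNDS, TRIALS = 5000, 200

def play_A():
    # Game A: a simple biased coin flip.
    return 1 if random.random() < 0.5 - EPS else -1

def play_B(capital):
    # Game B: win probability depends on capital mod 3.
    p = 0.1 - EPS if capital % 3 == 0 else 0.75 - EPS
    return 1 if random.random() < p else -1

def average_final_capital(policy):
    """policy(round, capital) -> 'A' or 'B'; returns mean capital over TRIALS runs."""
    total = 0
    for _ in range(TRIALS):
        capital = 0
        for t in range(ROUNDS):
            capital += play_A() if policy(t, capital) == 'A' else play_B(capital)
        total += capital
    return total / TRIALS

only_a = average_final_capital(lambda t, c: 'A')                  # losing
only_b = average_final_capital(lambda t, c: 'B')                  # losing
mixed = average_final_capital(lambda t, c: random.choice('AB'))   # winning
print(only_a, only_b, mixed)
```

Either game played on its own loses on average, while randomly switching between the two wins: the paradox that makes the Parrondo Game a natural companion to the Minority Game in studying when optimization over past information helps or hurts.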
Two TPX2-Dependent Switches Control the Activity of Aurora A
Aurora A is an important oncogenic kinase for mitotic spindle assembly and a potentially attractive target in human cancers. Its activation can be regulated by the ATP cycle and by its activator TPX2. To understand the activation mechanism of Aurora A, a series of 20 ns molecular dynamics (MD) simulations were performed on both the wild-type kinase and its mutants. Analyzing the three dynamic trajectories (Aurora A-ATP, Aurora A-ADP, and Aurora A-ADP-TPX2) at the residue level, we find for the first time two TPX2-dependent switches, i.e., switch-1 (Lys-143) and switch-2 (Arg-180), which are tightly associated with Aurora A activation. In the absence of TPX2, Lys-143 exhibits a “closed” state and becomes hydrogen-bonded to ADP. Once TPX2 binding occurs, switch-1 is forced to “open” the binding site, thus pulling ADP away from Aurora A. Without the facilitation of TPX2, switch-2 exists in an “open” conformation which accompanies the outward-flipping movement of P·Thr288 (in an inactive conformation), leaving the crucial phosphothreonine exposed and accessible for deactivation. However, with the binding of TPX2, switch-2 is forced to undergo a “closed” movement, thus capturing P·Thr288 in a buried position and locking its active conformation. Analysis of two Aurora A mutants (K143A and R180A) for the two switches further verifies their functionality and reliability in controlling Aurora A activity. Our simulations therefore suggest two switches that determine Aurora A activation, which are important for the development of Aurora kinase inhibitors.
The STAR-RICH Detector
The STAR-RICH detector extends the particle identification capabilities of the STAR spectrometer for charged hadrons at mid-rapidity. It allows identification of pions and kaons up to ~3 GeV/c and protons up to ~5 GeV/c. The characteristics and performance of the device in the inaugural RHIC run are described.
Identification of High p_T Particles with the STAR-RICH Detector
The STAR-RICH detector extends the particle identification capabilities of
the STAR experiment for charged hadrons at mid-rapidity. This detector
represents the first use of a proximity-focusing CsI-based RICH detector in a
collider experiment. It provides identification of pions and kaons up to 3
GeV/c and protons up to 5 GeV/c. The characteristics and performance of the
device in the inaugural RHIC run are described. Comment: 6 pages, 6 figures.
The nucleoporin ALADIN regulates Aurora A localization to ensure robust mitotic spindle formation
The formation of the mitotic spindle is a complex process that requires massive cellular reorganization. Regulation by mitotic kinases controls this entire process. One of these mitotic controllers is Aurora A kinase, which is itself highly regulated. In this study, we show that the nuclear pore protein ALADIN is a novel spatial regulator of Aurora A. Without ALADIN, Aurora A spreads from centrosomes onto spindle microtubules, which affects the distribution of a subset of microtubule regulators and slows spindle assembly and chromosome alignment. ALADIN interacts with inactive Aurora A and is recruited to the spindle pole after Aurora A inhibition. Of interest, mutations in ALADIN cause triple A syndrome. We find that some of the mitotic phenotypes that we observe after ALADIN depletion also occur in cells from triple A syndrome patients, which raises the possibility that mitotic errors may underlie part of the etiology of this syndrome.