4,663 research outputs found

    Is It Real, or Is It Randomized?: A Financial Turing Test

    We construct a financial "Turing test" to determine whether human subjects can differentiate between actual vs. randomized financial returns. The experiment consists of an online video-game (http://arora.ccs.neu.edu) where players are challenged to distinguish actual financial market returns from random temporal permutations of those returns. We find overwhelming statistical evidence (p-values no greater than 0.5%) that subjects can consistently distinguish between the two types of time series, thereby refuting the widespread belief that financial markets "look random." A key feature of the experiment is that subjects are given immediate feedback regarding the validity of their choices, allowing them to learn and adapt. We suggest that such novel interfaces can harness human capabilities to process and extract information from financial data in ways that computers cannot. Comment: 12 pages, 6 figures
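    The "randomized" series in this experiment are temporal permutations of real returns. A minimal sketch of that construction (function name and sample values are mine, not from the paper): shuffling preserves every individual return, and hence the marginal distribution, while destroying any temporal structure a subject might detect.

    ```python
    import random

    def temporal_permutation(returns, seed=0):
        """Randomly shuffle the temporal order of a return series.

        The permuted series contains exactly the same returns as the
        original (same mean, variance, marginal distribution); only the
        temporal ordering -- and any autocorrelation -- is destroyed.
        """
        rng = random.Random(seed)
        permuted = list(returns)
        rng.shuffle(permuted)
        return permuted

    # Hypothetical daily returns (illustrative values, not real market data).
    actual = [0.012, -0.004, 0.007, -0.021, 0.015, 0.003, -0.009, 0.011]
    shuffled = temporal_permutation(actual)

    # Same multiset of returns, so marginal statistics are identical.
    assert sorted(shuffled) == sorted(actual)
    ```

    Any ability to tell the two series apart must therefore come from temporal structure, which is exactly what the test isolates.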

    A Computational View of Market Efficiency

    We propose to study market efficiency from a computational viewpoint. Borrowing from theoretical computer science, we define a market to be \emph{efficient with respect to resources S} (e.g., time, memory) if no strategy using resources S can make a profit. As a first step, we consider memory-m strategies whose action at time t depends only on the m previous observations at times t-m, ..., t-1. We introduce and study a simple model of market evolution, where strategies impact the market by their decision to buy or sell. We show that the effect of optimal strategies using memory m can lead to "market conditions" that were not present initially, such as (1) market bubbles and (2) the possibility for a strategy using memory m' > m to make a bigger profit than was initially possible. We suggest ours as a framework to rationalize the technological arms race of quantitative trading firms.
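    A toy version of such a model can make the memory-m restriction and the bubble phenomenon concrete. This is my own illustrative sketch, not the paper's model: a strategy sees only the last m prices, and its buy/sell decision moves the next price.

    ```python
    def run_market(initial, strategy, m, steps, impact=1):
        """Toy market evolution: at each step a memory-m strategy observes
        only the last m prices, and its buy (+1) / sell (-1) / hold (0)
        decision moves the next price by `impact` in that direction."""
        prices = list(initial)
        for _ in range(steps):
            window = tuple(prices[-m:])   # the only information available
            action = strategy(window)     # +1 buy, -1 sell, 0 hold
            prices.append(prices[-1] + impact * action)
        return prices

    # A memory-2 momentum strategy: buy if the last move was up, sell if down.
    def momentum(window):
        if len(window) < 2:
            return 0
        if window[-1] > window[-2]:
            return 1
        return -1 if window[-1] < window[-2] else 0

    prices = run_market([100, 101], momentum, m=2, steps=5)
    # The strategy's own buying pushes the price up every step: a toy "bubble".
    assert prices == [100, 101, 102, 103, 104, 105, 106]
    ```

    Because the strategy's trades feed back into the price, it manufactures the very trend it conditions on — the feedback loop behind "market conditions that were not present initially."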

    Child Abuse Investigations: How CPS and Law Enforcement Engage in Collaboration

    Child welfare social workers (CPS) and law enforcement professionals are the sole professional groups in California assigned the task of investigating allegations of child physical and sexual abuse. Both professional groups report that child well-being is the ultimate outcome desired when addressing the needs of vulnerable and “at risk” children. Despite this shared vision, CPS and law enforcement professionals also describe competing outcomes that are often contradictory, particularly in how each group characterizes its professional responsibilities in achieving child well-being. For example, CPS describes the dual responsibilities of protecting children from further harm while at the same time identifying the factors that led to the abuse and providing non-punitive services aimed at preserving and strengthening family ties, including maintaining children safely in their homes whenever possible. Law enforcement, on the other hand, views child abuse as a crime, and that view shapes how its investigations are handled: law enforcement has the responsibility for collecting criminal evidence, which frequently results in the offending parent being prosecuted and jailed, possibly dismantling the family unit. Understanding how these two professional groups collaborate to execute their conflicting professional responsibilities forms the overall focus of this study. Child welfare social workers and law enforcement professionals were recruited from Riverside and San Bernardino Counties to participate in the study. Theoretical sampling, snowball sampling, and convenience sampling techniques were used to ensure that data were collected from a minimum of 20 participants identified as subject matter experts. Data were collected through face-to-face interviews using semi-structured interview guides. Transcribed interviews were entered into the QSR*NVIVO 8 software program for data management and to provide an audit trail.
Seven major themes emerged from the data. Findings revealed that CPS and law enforcement professionals do not collaborate; they cooperate and coordinate on an inconsistent basis. Overall, dissimilar professional standards engendered conflict and negative perceptions of each other, producing poor working relationships. However, the research revealed that the working relationship between the two entities seems to improve when they are co-located and share the same physical workplace. More research is recommended to determine whether such working arrangements impact collaboration.

    Dynamical Generation of Noiseless Quantum Subsystems

    We present control schemes for open quantum systems that combine decoupling and universal control methods with coding procedures. By exploiting a general algebraic approach, we show how appropriate encodings of quantum states result in obtaining universal control over dynamically-generated noise-protected subsystems with limited control resources. In particular, we provide an efficient scheme for performing universal encoded quantum computation in a wide class of systems subjected to linear non-Markovian quantum noise and supporting Heisenberg-type internal Hamiltonians. Comment: 4 pages, no figures; REVTeX style

    Faculty Recital:Lawrence W. Kinney, Viola

    Centennial Lecture Hall, March 8, 1968, 8:15 p.m.

    Introduction to Quantum Error Correction

    In this introduction we motivate and explain the ``decoding'' and ``subsystems'' view of quantum error correction. We explain how quantum noise in QIP can be described and classified, and summarize the requirements that need to be satisfied for fault tolerance. Considering the capabilities of currently available quantum technology, the requirements appear daunting. But the idea of ``subsystems'' shows that these requirements can be met in many different, and often unexpected, ways. Comment: 44 pages, to appear in LA Science. Hyperlinked PDF at http://www.c3.lanl.gov/~knill/qip/ecprhtml/ecprpdf.pdf, HTML at http://www.c3.lanl.gov/~knill/qip/ecprhtm

    Partial Information Decomposition as a Unified Approach to the Specification of Neural Goal Functions

    In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of six-layered neo-cortex, which is repeated across cortical areas, and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of \textit{multiple} inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information that several inputs provide individually (unique information), redundantly (shared information), or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information theoretic neural goal functions (predictive coding, infomax, coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy'. [...] Comment: 21 pages, 4 figures, appendix
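    The synergy notion at the heart of PID can be illustrated without the full decomposition machinery. In the XOR case below (my own minimal sketch, using plain Shannon mutual information) each input alone carries zero information about the output, yet the two inputs jointly determine it completely — in PID terms, all the information is synergistic.

    ```python
    from collections import Counter
    from math import log2

    def mutual_information(pairs):
        """I(X;Y) in bits, estimated from a list of (x, y) samples
        treated as equiprobable observations."""
        n = len(pairs)
        pxy = Counter(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    # XOR truth table: (input1, input2, output), each row equally likely.
    table = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    i1 = mutual_information([(a, y) for a, b, y in table])          # I(X1;Y)
    i2 = mutual_information([(b, y) for a, b, y in table])          # I(X2;Y)
    ijoint = mutual_information([((a, b), y) for a, b, y in table]) # I(X1,X2;Y)

    # Each input alone is useless; together they fully determine the output.
    assert abs(i1) < 1e-9 and abs(i2) < 1e-9 and abs(ijoint - 1.0) < 1e-9
    ```

    Quantifying unique and shared information in general requires one of the dedicated PID measures; the XOR example is simply the unambiguous corner case where classical mutual information already tells the whole story.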

    Mixing in Non-Quasirandom Groups

    We initiate a systematic study of mixing in non-quasirandom groups. Let A and B be two independent, high-entropy distributions over a group G. We show that the product distribution AB is statistically close to the distribution F(AB) for several choices of G and F, including: 1) G is the affine group of 2x2 matrices, and F sets the top-right matrix entry to a uniform value, 2) G is the lamplighter group, that is the wreath product of Z_2 and Z_n, and F is multiplication by a certain subgroup, 3) G is H^n where H is non-abelian, and F selects a uniform coordinate and takes a uniform conjugate of it. The obtained bounds for (1) and (2) are tight. This work is motivated by and applied to problems in communication complexity. We consider the 3-party communication problem of deciding if the product of three group elements equals the identity. We prove lower bounds for the groups above, which are tight for the affine and the lamplighter groups.
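    For case (1), the affine group can be represented as maps x -> a*x + b over a finite field, i.e. matrices [[a, b], [0, 1]], whose product has top-right entry a1*b2 + b1. The sketch below (my own illustration, not the paper's proof; it samples fully uniform elements rather than general high-entropy distributions) checks empirically that the top-right entry of a product of two random elements is close to uniform, which is the sense in which AB resembles F(AB).

    ```python
    import random

    P = 101  # a prime modulus; illustrative choice of field size

    def aff_mul(g, h, p=P):
        """Multiply two affine maps x -> a*x + b over F_p, stored as (a, b).
        Matrix form: [[a1,b1],[0,1]] @ [[a2,b2],[0,1]] = [[a1*a2, a1*b2+b1],[0,1]]."""
        a1, b1 = g
        a2, b2 = h
        return (a1 * a2 % p, (a1 * b2 + b1) % p)

    def sample_affine(rng, p=P):
        return (rng.randrange(1, p), rng.randrange(p))  # a != 0

    rng = random.Random(1)
    n = 50000
    counts = [0] * P
    for _ in range(n):
        g, h = sample_affine(rng), sample_affine(rng)
        counts[aff_mul(g, h)[1]] += 1   # top-right entry of the product

    # Statistical (total variation) distance of the b-coordinate from uniform.
    dist = 0.5 * sum(abs(c / n - 1 / P) for c in counts)
    assert dist < 0.05  # empirically close to uniform
    ```

    The interesting content of the theorem is that this closeness survives for much weaker, merely high-entropy input distributions, where the uniform case above is trivial.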

    Need for cognitive closure modulates how perceptual decisions are affected by task difficulty and outcome relevance

    The aim of this study was to assess the extent to which Need for Cognitive Closure (NCC), an individual-level epistemic motivation, can explain inter-individual variability in the cognitive effort invested in a perceptual decision making task (the random motion task). High levels of NCC are manifested in a preference for clarity, order, and structure and a desire for firm and stable knowledge. The study evaluated how NCC moderates the impact of two variables known to increase the amount of cognitive effort invested in a task, namely task ambiguity (i.e., the difficulty of the perceptual discrimination) and outcome relevance (i.e., the monetary gain associated with a correct discrimination). Based on previous work and the current design, we assumed that reaction times (RTs) on our motion discrimination task represent a valid index of effort investment. Task ambiguity was associated with increased cognitive effort in participants with low or medium NCC but, interestingly, it did not affect the RTs of participants with high NCC. A different pattern of association was observed for outcome relevance; high outcome relevance increased cognitive effort in participants with moderate or high NCC, but did not affect the performance of low NCC participants. In summary, the performance of individuals with low NCC was affected by task difficulty but not by outcome relevance, whereas individuals with high NCC were influenced by outcome relevance but not by task difficulty; only participants with medium NCC were affected by both task difficulty and outcome relevance. These results suggest that perceptual decision making is influenced by the interaction between context and NCC.

    Advances in decoherence control

    I address the current status of dynamical decoupling techniques in terms of required control resources and feasibility. Based on recent advances in both improving the theoretical design and assessing the control performance for specific noise models, I argue that significant progress may still be possible on the road of implementing decoupling under realistic constraints. Comment: 14 pages, 3 encapsulated eps figures. To appear in Journal of Modern Optics, Special Proceedings Volume of the XXXIV Winter Colloquium on the Physics of Quantum Electronics, Snowbird, Jan 2004