
    Reconciling intuitive physics and Newtonian mechanics for colliding objects

    People have strong intuitions about the influence objects exert upon one another when they collide. Because people's judgments appear to deviate from Newtonian mechanics, psychologists have suggested that people depend on a variety of task-specific heuristics. This leaves open the question of how these heuristics could be chosen, and how to integrate them into a unified model that can explain human judgments across a wide range of physical reasoning tasks. We propose an alternative framework, in which people's judgments are based on optimal statistical inference over a Newtonian physical model that incorporates sensory noise and intrinsic uncertainty about the physical properties of the objects being viewed. This noisy Newton framework can be applied to a multitude of judgments, with people's answers determined by their uncertainty about physical variables and the constraints of Newtonian mechanics. We investigate a range of effects in mass judgments that have been taken as strong evidence for heuristic use and show that they are well explained by the interplay between Newtonian constraints and sensory uncertainty. We also consider an extended model that handles causality judgments, and obtain good quantitative agreement with human judgments across tasks that involve different judgment types with a single consistent set of parameters.
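
    The following is a minimal sketch, in Python, of the kind of inference the noisy Newton framework describes; it is not the authors' implementation, and the priors, noise level, and example velocities are assumptions chosen for illustration. It infers which of two colliding objects is heavier from noisy velocity observations by weighting Newtonian collision outcomes (with an unknown coefficient of restitution) against those observations.

        # A minimal sketch (not the authors' implementation) of noisy-Newton-style mass
        # inference for a 1-D collision: sample physical scenarios from an assumed prior,
        # weight them by how well the noisy velocity observations match, and read off the
        # posterior probability that object B is heavier than object A.
        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_collision(m_a, m_b, u_a, u_b, e):
            """Outgoing velocities for a 1-D collision with coefficient of restitution e."""
            p = m_a * u_a + m_b * u_b                       # total momentum is conserved
            v_a = (p + m_b * e * (u_b - u_a)) / (m_a + m_b)
            v_b = (p + m_a * e * (u_a - u_b)) / (m_a + m_b)
            return v_a, v_b

        def posterior_b_heavier(obs, noise_sd=0.5, n=200_000):
            """obs = (u_a, u_b, v_a, v_b) as seen through sensory noise."""
            # Assumed priors: log mass ratio, incoming velocities, and elasticity.
            m_a = np.ones(n)
            m_b = np.exp(rng.normal(0.0, 1.0, n))           # log(m_b / m_a) ~ N(0, 1)
            u_a = rng.normal(0.0, 3.0, n)
            u_b = rng.normal(0.0, 3.0, n)
            e = rng.uniform(0.0, 1.0, n)
            v_a, v_b = simulate_collision(m_a, m_b, u_a, u_b, e)
            # Importance weights: likelihood of the four noisy velocity observations.
            predicted = np.stack([u_a, u_b, v_a, v_b], axis=1)
            w = np.exp(-0.5 * ((np.asarray(obs) - predicted) / noise_sd) ** 2).prod(axis=1)
            return np.sum(w * (m_b > m_a)) / np.sum(w)

        # Example: A comes in fast and bounces back while B moves off slowly,
        # so B is probably the heavier object.
        print(posterior_b_heavier(obs=(4.0, 0.0, -1.0, 1.5)))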

    Breakdown of the Slowly Varying Amplitude Approximation: Generation of Backward Traveling Second Harmonic Light

    By numerically solving the nonlinear field equations, we simulate second-harmonic generation by laser pulses within a nonlinear medium without making the usual slowly-varying-amplitude approximation, an approximation that may fail when laser pulses of moderate intensity or ultrashort duration are used to drive a nonlinear process. Under these conditions we show that a backward-traveling, second-harmonic wave is created, and that the magnitude of this wave is indicative of the breakdown of the slowly-varying-amplitude approximation. Conditions necessary for experimental detection of this wave are discussed.
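
    For orientation, the standard textbook forms below contrast the full second-order wave equation, which admits both forward- and backward-traveling second-harmonic solutions, with its slowly-varying-amplitude reduction, which keeps only the forward wave; the coupling constant kappa and the notation are illustrative and not taken from the paper.

        % Standard textbook forms (notation is illustrative, not the paper's).
        \documentclass{article}
        \usepackage{amsmath}
        \begin{document}
        Full wave equation for the second-harmonic field $E_2$ driven by the nonlinear
        polarization, which supports both forward- and backward-traveling solutions:
        \begin{equation}
          \frac{\partial^2 E_2}{\partial z^2}
            - \frac{n_2^2}{c^2}\,\frac{\partial^2 E_2}{\partial t^2}
            = \mu_0\,\frac{\partial^2 P_2^{\mathrm{NL}}}{\partial t^2},
          \qquad
          P_2^{\mathrm{NL}} = \varepsilon_0\,\chi^{(2)} E_1^2 .
        \end{equation}
        Writing $E_2 = A_2(z,t)\,e^{i(k_2 z - \omega_2 t)} + \mathrm{c.c.}$ and discarding
        $\partial^2 A_2/\partial z^2$ (assumed small next to $k_2\,\partial A_2/\partial z$)
        leaves only a forward-traveling envelope equation, with $\Delta k = 2k_1 - k_2$ and
        $\kappa$ a coupling constant collecting the material parameters:
        \begin{equation}
          \frac{\partial A_2}{\partial z} + \frac{1}{v_g}\frac{\partial A_2}{\partial t}
            = i\,\kappa\,A_1^{2}\,e^{\,i\Delta k\,z}.
        \end{equation}
        \end{document}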

    Recombination elevates the effective evolutionary rate and facilitates the establishment of HIV-1 infection in infants after mother-to-child transmission

    BACKGROUND: Previous studies have demonstrated that single HIV-1 genotypes are commonly transmitted from mother to child, but such analyses primarily used single samples from mother and child. It is possible that in a single sample, obtained early after infection, only the most replication competent virus is detected even when other forms may have been transmitted. Such forms may have advantages later in infection, and may thus be detected in follow-up samples. Because HIV-1 frequently recombines, phylogenetic analyses that ignore recombination may miss transmission of multiple forms if they recombine after transmission. Moreover, recombination may facilitate adaptation, thus providing an advantage in establishing infection. The effect of recombination on viral evolution in HIV-1 infected children has not been well defined. RESULTS: We analyzed full-length env sequences after single genome amplification from the plasma of four subtype B HIV-1 infected women (11-67 env clones from 1 time point within a month prior to delivery) and their non-breastfed, intrapartum-infected children (3-6 longitudinal time points per child starting at the time of HIV-1 diagnosis). To address the potential beneficial or detrimental effects of recombination, we used a recently developed hierarchical recombination detection method based on the pairwise homoplasy index (PHI) test. Recombination was observed in 9-67% of the maternal sequences and in 25-60% of the child sequences. In the child, recombination only occurred between variants that had evolved after transmission; taking recombination into account, we identified transmission of only 1 or 2 phylogenetic lineages from mother to child. Effective HIV-1 evolutionary rates were initially high in the child and slowed over time (after 1000 days). Recombination was associated with elevated evolutionary rates. CONCLUSIONS: Our results confirm that 1-2 variants are typically transmitted from mothers to their newborns. They also demonstrate that early abundant recombination elevates the effective evolutionary rate, suggesting that recombination increases the rate of adaptation in HIV-1 evolution.
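
    As a simple illustration of what an effective evolutionary rate means operationally (not the study's actual phylogenetic pipeline, which accounted for recombination), the sketch below regresses each sequence's per-site divergence from the earliest sample against its sampling time; the toy sequences and sampling times are invented.

        # A minimal sketch (not the study's pipeline) of estimating an effective
        # evolutionary rate from longitudinal sequences: regress each sequence's per-site
        # divergence from the earliest (founder-like) sequence against its sampling time;
        # the slope approximates substitutions per site per unit time.
        import numpy as np

        def per_site_divergence(seq, ref):
            """Fraction of ungapped, aligned positions that differ from the reference."""
            pairs = [(a, b) for a, b in zip(seq, ref) if a != '-' and b != '-']
            return sum(a != b for a, b in pairs) / len(pairs)

        def evolutionary_rate(seqs, times_days):
            """Least-squares slope of divergence versus time (substitutions/site/day)."""
            ref = seqs[0]                     # earliest sample as the reference
            divergence = np.array([per_site_divergence(s, ref) for s in seqs])
            slope, intercept = np.polyfit(np.asarray(times_days, float), divergence, 1)
            return slope

        # Toy example: hypothetical aligned env fragments sampled at 0, 300, and 900 days.
        seqs = ["ATGGCAAGAA", "ATGGCGAGAA", "ACGGCGAGAT"]
        print(evolutionary_rate(seqs, [0, 300, 900]))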

    Reply to Rouder (2014): good frequentist properties raise confidence

    Established psychological results have been called into question by demonstrations that statistical significance is easy to achieve, even in the absence of an effect. One often-warned-against practice, choosing when to stop the experiment on the basis of the results, is guaranteed to produce significant results. In response to these demonstrations, Bayes factors have been proposed as an antidote to this practice, because they are invariant with respect to how an experiment was stopped. Should researchers only care about the resulting Bayes factor, without concern for how it was produced? Yu, Sprenger, Thomas, and Dougherty (2014) and Sanborn and Hills (2014) demonstrated that Bayes factors are sometimes strongly influenced by the stopping rules used. However, Rouder (2014) has provided a compelling demonstration that despite this influence, the evidence supplied by Bayes factors remains correct. Here we address why the ability to influence Bayes factors should still matter to researchers, despite the correctness of the evidence. We argue that good frequentist properties mean that results will more often agree with researchers' statistical intuitions, and good frequentist properties control the number of studies that will later be refuted. Both help raise confidence in psychological results.
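
    The sketch below illustrates the frequentist property at issue: sampling under the null hypothesis until a Bayes factor threshold is crossed or a maximum sample size is reached, and counting how often the threshold is crossed. The normal model with known standard deviation, the N(0, tau^2) prior on the mean, and the threshold of 3 are illustrative assumptions, not choices taken from the papers discussed.

        # A minimal sketch (illustrative, not taken from the papers under discussion) of how
        # a stopping rule shapes a frequentist property of Bayes factors: sample from the
        # null until either BF10 >= 3 or a maximum n is reached, and count how often the
        # threshold is crossed.  Uses a normal model with known sd = 1 and a N(0, tau^2)
        # prior on the mean under H1, for which the Bayes factor has a closed form.
        import numpy as np

        rng = np.random.default_rng(1)

        def normal_pdf(x, sd):
            return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

        def bf10(xbar, n, tau=1.0):
            """BF10 for H1: mu ~ N(0, tau^2) versus H0: mu = 0, with known sd = 1."""
            return normal_pdf(xbar, np.sqrt(tau**2 + 1/n)) / normal_pdf(xbar, np.sqrt(1/n))

        def run_experiment(n_min=10, n_max=300, threshold=3.0):
            """Optional stopping under H0: keep sampling until BF10 crosses the threshold."""
            x = list(rng.normal(0.0, 1.0, n_min))
            while len(x) < n_max:
                if bf10(np.mean(x), len(x)) >= threshold:
                    return True          # stopped early with "evidence" for H1
                x.append(rng.normal(0.0, 1.0))
            return bf10(np.mean(x), len(x)) >= threshold

        runs = 1000
        hits = sum(run_experiment() for _ in range(runs))
        print(f"Crossed BF10 >= 3 under the null in {hits / runs:.1%} of simulated studies")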

    Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation, appendix A

    A generic computer simulation for manipulator systems (ROBSIM) was implemented and the specific technologies necessary to increase the role of automation in various missions were developed. The specific items developed were: (1) Capability for definition of a manipulator system consisting of multiple arms, load objects, and an environment; (2) Capability for kinematic analysis, requirements analysis, and response simulation of manipulator motion; (3) Postprocessing options such as graphic replay of simulated motion and manipulator parameter plotting; (4) Investigation and simulation of various control methods including manual force/torque and active compliance control; (5) Evaluation and implementation of three obstacle avoidance methods; (6) Video simulation and edge detection; and (7) Software simulation validation. This appendix is the user's guide and includes examples of program runs and outputs as well as instructions for program use.
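
    As a toy illustration of the kind of kinematic analysis such a simulation performs (ROBSIM's own code is not reproduced here, and this example is unrelated to it), the sketch below computes the forward kinematics and Jacobian of a planar two-link arm.

        # A toy sketch (unrelated to ROBSIM's own code) of basic manipulator kinematics:
        # forward kinematics and the Jacobian of a planar two-link arm, mapping joint
        # angles to the position and velocity of the end effector.
        import numpy as np

        def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
            """End-effector (x, y) for joint angles in radians and link lengths l1, l2."""
            x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
            y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
            return x, y

        def jacobian(theta1, theta2, l1=1.0, l2=0.8):
            """2x2 Jacobian relating joint velocities to end-effector velocity."""
            s1, c1 = np.sin(theta1), np.cos(theta1)
            s12, c12 = np.sin(theta1 + theta2), np.cos(theta1 + theta2)
            return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                             [ l1 * c1 + l2 * c12,  l2 * c12]])

        # Example: joint angles of 30 and 45 degrees.
        print(forward_kinematics(np.radians(30), np.radians(45)))
        print(jacobian(np.radians(30), np.radians(45)))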

    Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation

    A generic computer simulation for manipulator systems (ROBSIM) was implemented and the specific technologies necessary to increase the role of automation in various missions were developed. The specific items developed are: (1) capability for definition of a manipulator system consisting of multiple arms, load objects, and an environment; (2) capability for kinematic analysis, requirements analysis, and response simulation of manipulator motion; (3) postprocessing options such as graphic replay of simulated motion and manipulator parameter plotting; (4) investigation and simulation of various control methods including manual force/torque and active compliance control; (5) evaluation and implementation of three obstacle avoidance methods; (6) video simulation and edge detection; and (7) software simulation validation.

    The Future of the Joint Warfighting Headquarters: An Alternative Approach to the Joint Task Force

    The US military must create standing, numbered, and regionally aligned Joint warfighting headquarters, termed American Expeditionary Forces (AEFs), built around a command council and a staff organized into Joint centers and cells. Calls for standing Joint force headquarters are not new, but the demonstrated military effectiveness of the Joint Task Force (JTF) model, coupled with increasing service-specific resource requirements and tightening fiscal constraints, has resulted in little evolution in Joint force headquarters construction since the end of World War II. Analysis of the historical record has shown that Joint warfighting is best conducted with a Joint warfighting command subordinate to the geographic combatant commands. However, the Joint Task Force model is problematic because the ad hoc, post-crisis activation of JTFs, along with their antiquated command and control structure, inherently puts the United States at a strategic and operational disadvantage. In the future, the US military will maintain its competitive advantage, especially in great-power competition, primarily by being a superior and sustainable Joint force sooner than its adversaries. The proposed AEFs draw on generations of hard-earned experience to maintain and grow American supremacy in Joint warfighting in an increasingly dangerous world.

    Exploring the hierarchical structure of human plans via program generation

    Human behavior is inherently hierarchical, resulting from the decomposition of a task into subtasks or an abstract action into concrete actions. However, behavior is typically measured as a sequence of actions, which makes it difficult to infer its hierarchical structure. In this paper, we explore how people form hierarchically structured plans, using an experimental paradigm that makes hierarchical representations observable: participants create programs that produce sequences of actions in a language with explicit hierarchical structure. This task lets us test two well-established principles of human behavior: utility maximization (i.e., using fewer actions) and minimum description length (MDL; i.e., having a shorter program). We find that humans are sensitive to both metrics, but that both accounts fail to predict a qualitative feature of human-created programs, namely that people prefer programs with reuse over and above the predictions of MDL. We formalize this preference for reuse by extending the MDL account into a generative model over programs, modeling hierarchy choice as the induction of a grammar over actions. Our account can explain the preference for reuse and provides the best prediction of human behavior, going beyond simple accounts of compressibility to highlight a principle that guides hierarchical planning.
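
    The toy sketch below illustrates the MDL intuition behind the preference for reuse: replacing repeated sub-sequences of actions with calls to a subroutine shortens the program whenever the saved call sites outweigh the cost of the definition. The symbol costs and the example sequence are illustrative assumptions, not the paper's encoding.

        # A toy sketch (illustrative, not the authors' generative model) of the MDL
        # intuition for reuse: replacing repeated sub-sequences with calls to a
        # subroutine shortens the program whenever the saved call sites outweigh the
        # cost of defining the subroutine.
        def flat_length(actions):
            """Description length, in symbols, of writing the actions out directly."""
            return len(actions)

        def reuse_length(actions, subroutine):
            """Length when each occurrence of `subroutine` becomes a single call symbol,
            plus the cost of the definition (one name symbol plus its body)."""
            calls = actions.replace(subroutine, "F")   # 'F' stands for one call symbol
            definition = 1 + len(subroutine)
            return definition + len(calls)

        sequence = "ABAB" * 3                          # twelve primitive actions
        print(flat_length(sequence))                   # 12 symbols written flat
        print(reuse_length(sequence, "ABAB"))          # 5 for the definition + 3 calls = 8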