826 research outputs found

    A New Perspective on the Nonextremal Enhancon Solution

    We discuss the nonextremal generalisation of the enhancon mechanism. We find that the nonextremal shell branch solution does not violate the Weak Energy Condition when the nonextremality parameter is small, in contrast to earlier discussions of this subject. We show that this physical shell branch solution fills the mass gap between the extremal enhancon solution and the nonextremal horizon branch solution.
    Comment: 10 pages, 3 figures, reference added

    Aspects of D-Branes as BPS monopoles

    We investigate some of the properties of D-brane configurations which behave as BPS monopoles. The two D-brane configurations we will study are the enhançon and D-strings attached to D3-branes. We will start by investigating D3-branes wrapped on a K3 manifold, which are known as enhançons. They look like regions of enhanced gauge symmetry in the directions transverse to the branes, and therefore behave as BPS monopoles. We calculate the metric on moduli space for n enhançons, following the methods used by Ferrell and Eardley for black holes. We expect the result to be the higher-dimensional generalisation of the Taub-NUT metric, which is the metric on moduli space for n BPS monopoles. Next we will study D-strings attached to D3-branes; the ends of the D-strings behave as BPS monopoles of the world-volume gauge theory living on the D3-branes. In fact, the D-string/D3-brane system is a physical realisation of the ADHMN construction for BPS monopoles. We aim to test this correspondence by calculating the energy radiated during D-string scattering, working with the non-Abelian Born-Infeld action for D-strings. We will then compare our result to the equivalent monopole calculation of Manton and Samols.

    Travellers' diarrhoea.


    The ‘strength of weak ties’ among female baboons : fitness-related benefits of social bonds

    Thanks to Cape Nature Conservation for permission to work at De Hoop, and to all the graduate students and field assistants who contributed to our long-term database. LB was supported by NSERC Canada Research Chair and Discovery Programs; SPH was supported by the NRF (South Africa) and NSERC Discovery Grants during the writing of this manuscript. We are grateful to one anonymous reviewer and, in particular, Lauren Brent for invaluable feedback on earlier drafts of our manuscript.
    Peer reviewed. Postprint.

    Athletic Training and Physical Therapy Junior Faculty Member Preparation: Perceptions of Doctoral Programs and Clinical Practice

    Background: Institutions of higher education suffer from a shortage of appropriately prepared faculty members in athletic training and physical therapy programs. Both professional programs have recently undergone curricular reform and degree change. We sought to gain an understanding of the preparation mechanisms experienced by athletic training and physical therapy practitioners for their junior faculty positions. Method: Twenty-six athletic trainers and physical therapists participated in this phenomenological study. Data from one-on-one phone interviews were analyzed following the inductive process of interpretive phenomenological analysis. Content experts, pilot interviews, multiple analysts and member checking ensured trustworthiness. Results: Findings indicate two primary mechanisms prepared the practitioners to become junior faculty members: doctoral degree programs and clinical practice. Doctoral degree programs did not provide experiences for all future faculty roles. Hands-on patient care practice provided participants the context for their teaching and confidence in knowledge aptitude. Conclusion: Doctoral institutions should provide a variety of hands-on, active learning experiences to doctoral students. Future faculty members can maximize the amount of time they provide clinical care to patients following the attainment of their professional credential. Clinical competence and proficiency will serve as the foundational basis for their future teaching endeavors and may increase credibility and respect.

    Landmark models for optimizing the use of repeated measurements of risk factors in electronic health records to predict future disease risk

    The benefits of using electronic health records for disease risk screening and personalized healthcare decisions are becoming increasingly recognized. We present a computationally feasible statistical approach to address the methodological challenges in utilizing historical repeat measures of multiple risk factors recorded in electronic health records to systematically identify patients at high risk of future disease. The approach is principally based on a two-stage dynamic landmark model. The first stage estimates current risk factor values from all available historical repeat risk factor measurements by landmark-age-specific multivariate linear mixed-effects models with correlated random intercepts, which account for sporadically recorded repeat measures, unobserved data and measurement errors. The second stage predicts future disease risk from a sex-stratified Cox proportional hazards model, using the estimated current risk factor values from the first stage. Methods are exemplified by developing and validating a dynamic 10-year cardiovascular disease risk prediction model using electronic primary care records for age, diabetes status, hypertension treatment, smoking status, systolic blood pressure, total and high-density lipoprotein cholesterol from 41,373 individuals in 10 primary care practices in England and Wales contributing to The Health Improvement Network (1997-2016). Using cross-validation, the model was well-calibrated (Brier score = 0.041 [95% CI: 0.039, 0.042]) and had good discrimination (C-index = 0.768 [95% CI: 0.759, 0.777]).
    This work was funded by the Medical Research Council (MRC) (grant MR/K014811/1). J.B. was supported by an MRC fellowship (grant G0902100) and the MRC Unit Program (grant MC_UU_00002/5). R.H.K. was supported by an MRC Methodology Fellowship (grant MR/M014827/1).
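    The two-stage structure of such a landmark model can be sketched in miniature. The following Python sketch is illustrative only, not the authors' implementation: stage one stands in for the landmark-age-specific multivariate mixed-effects model with a simple per-patient least-squares trend through the repeat measurements, and stage two applies the Cox-type relation risk = 1 − S0(t)^exp(lp). The coefficient, baseline survival and patient measurements are hypothetical placeholders.

    ```python
    import math

    def stage1_current_value(ages, values, landmark_age):
        """Stage 1 (stand-in): least-squares trend through one patient's
        sporadic repeat measurements, evaluated at the landmark age.
        The paper's method is a multivariate linear mixed-effects model;
        a per-patient trend is only a crude proxy for illustration."""
        n = len(ages)
        if n == 1:
            return values[0]
        mean_a = sum(ages) / n
        mean_v = sum(values) / n
        sxx = sum((a - mean_a) ** 2 for a in ages)
        sxy = sum((a - mean_a) * (v - mean_v) for a, v in zip(ages, values))
        slope = sxy / sxx if sxx else 0.0
        return mean_v + slope * (landmark_age - mean_a)

    def stage2_future_risk(current_values, coefs, baseline_surv, means):
        """Stage 2: Cox proportional hazards prediction,
        risk = 1 - S0(t)^exp(linear predictor), with the linear
        predictor centred at cohort means (all numbers illustrative)."""
        lp = sum(c * (x - m) for c, x, m in zip(coefs, current_values, means))
        return 1.0 - baseline_surv ** math.exp(lp)

    # Hypothetical patient: systolic blood pressure recorded at ages 55-59,
    # prediction made at the landmark age of 60.
    sbp_now = stage1_current_value([55, 57, 59], [138.0, 142.0, 145.0], 60)
    risk = stage2_future_risk(
        current_values=[sbp_now],
        coefs=[0.017],        # hypothetical log-hazard ratio per mmHg
        baseline_surv=0.95,   # hypothetical 10-year baseline survival
        means=[130.0],        # hypothetical cohort mean SBP
    )
    ```

    Centring at cohort means keeps the baseline survival interpretable as the risk of an average patient; in the real model, stage one would also borrow strength across patients and correlated risk factors.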

    Actionable Guidance for High-Consequence AI Risk Management: Towards Standards Addressing AI Catastrophic Risks

    Artificial intelligence (AI) systems can provide many beneficial capabilities, but they also carry risks of adverse events. Some AI systems could present risks of events with very high or catastrophic consequences at societal scale. The US National Institute of Standards and Technology (NIST) is developing the NIST Artificial Intelligence Risk Management Framework (AI RMF) as voluntary guidance on AI risk assessment and management for AI developers and others. For addressing risks of events with catastrophic consequences, NIST indicated a need to translate from high-level principles to actionable risk management guidance. In this document, we provide detailed actionable-guidance recommendations focused on identifying and managing risks of events with very high or catastrophic consequences, intended as a risk management practices resource for NIST for AI RMF version 1.0 (scheduled for release in early 2023), for AI RMF users, or for other AI risk management guidance and standards as appropriate. We also provide the methodology behind our recommendations. We provide actionable-guidance recommendations for AI RMF 1.0 on: identifying risks from potential unintended uses and misuses of AI systems; including catastrophic-risk factors within the scope of risk assessments and impact assessments; identifying and mitigating human rights harms; and reporting information on AI risk factors, including catastrophic-risk factors. In addition, we provide recommendations on additional issues for a roadmap for later versions of the AI RMF or supplementary publications. These include providing an AI RMF Profile with supplementary guidance for cutting-edge, increasingly multi-purpose or general-purpose AI. We aim for this work to be a concrete risk-management practices contribution, and to stimulate constructive dialogue on how to address catastrophic risks and associated issues in AI standards.
    Comment: 55 pages; updated throughout for general consistency with NIST AI RMF 2nd Draft, minor revisions to section numbering and language, typo fixes, additions to acknowledgments and references

    Sample Size Estimation using a Latent Variable Model for Mixed Outcome Co-Primary, Multiple Primary and Composite Endpoints

    Mixed outcome endpoints that combine multiple continuous and discrete components to form co-primary, multiple primary or composite endpoints are often employed as primary outcome measures in clinical trials. There are many advantages to jointly modelling the individual outcomes using a latent variable framework; however, to make use of the model in practice we require techniques for sample size estimation. In this paper we show how the latent variable model can be applied to the three types of joint endpoints and propose appropriate hypotheses, power and sample size estimation methods for each. We illustrate the techniques using a numerical example based on the four-dimensional endpoint in the MUSE trial and find that the sample size required for the co-primary endpoint is larger than that required for the individual endpoint with the smallest effect size. Conversely, the sample size required for the multiple primary endpoint is reduced from that required for the individual outcome with the largest effect size. We show that the analytical technique agrees with the empirical power from simulation studies. We further illustrate, through a simulation study, the reduction in required sample size that may be achieved in trials of mixed outcome composite endpoints, and find that the sample size depends primarily on the components driving response and the correlation structure, and much less on the treatment effect structure in the individual endpoints.
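    The qualitative contrast between co-primary and multiple primary endpoints can be illustrated with a toy power calculation. The sketch below is not the paper's latent-variable method: it assumes independent, normally distributed endpoints and a one-sided two-sample z-test per endpoint, with made-up standardised effect sizes. A co-primary claim needs every test to succeed (conjunctive power, the product of marginal powers), so it needs more patients than the weakest endpoint alone; a multiple primary claim needs at least one success (disjunctive power, with a Bonferroni-adjusted alpha), so it needs fewer patients than the strongest endpoint alone.

    ```python
    import math
    from statistics import NormalDist

    Z = NormalDist()  # standard normal

    def marginal_power(n_per_arm, delta, alpha=0.025):
        """Power of a one-sided two-sample z-test with standardised
        effect size delta, unit variance and n patients per arm."""
        z_alpha = Z.inv_cdf(1 - alpha)
        return 1 - Z.cdf(z_alpha - delta * math.sqrt(n_per_arm / 2))

    def sample_size(power_fn, target=0.8):
        """Smallest per-arm n whose power reaches the target."""
        n = 1
        while power_fn(n) < target:
            n += 1
        return n

    deltas = [0.25, 0.30, 0.40]  # hypothetical standardised effect sizes

    # Individual endpoints analysed alone.
    n_small = sample_size(lambda n: marginal_power(n, min(deltas)))
    n_large = sample_size(lambda n: marginal_power(n, max(deltas)))

    # Co-primary: all endpoints must succeed. Conjunctive power is the
    # product of the marginal powers (independence assumed); no alpha
    # adjustment is needed for an intersection-union test.
    n_co = sample_size(
        lambda n: math.prod(marginal_power(n, d) for d in deltas))

    # Multiple primary: any endpoint may succeed. Disjunctive power with a
    # Bonferroni-adjusted alpha to control the familywise type I error.
    k = len(deltas)
    n_mult = sample_size(
        lambda n: 1 - math.prod(1 - marginal_power(n, d, alpha=0.025 / k)
                                for d in deltas))
    ```

    Under these toy assumptions the ordering matches the abstract's finding: the co-primary design needs more patients than the smallest-effect endpoint on its own, while the multiple primary design needs fewer than the largest-effect endpoint on its own. The paper's latent variable model additionally exploits the correlation between components, which this independence sketch ignores.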