    Estimating the Location and Spatial Extent of a Covert Anthrax Release

    Rapidly identifying the features of a covert release of an agent such as anthrax could help to inform the planning of public health mitigation strategies. Previous studies have sought to estimate the time and size of a bioterror attack based on the symptomatic onset dates of early cases. We extend these methods by proposing an approach for characterizing the time, strength, and location of an aerosolized pathogen release. A back-calculation method is developed that characterizes the release using data on the first few observed cases of the subsequent outbreak, meteorological data, population densities, and data on population travel patterns. We evaluate this method on small simulated anthrax outbreaks (about 25–35 cases) and show that it could date and localize a release after a few cases have been observed, although misspecification of the spore dispersion model or the within-host dynamics model on which the method relies can bias the estimates. Our method could also provide an estimate of the outbreak’s geographical extent and could therefore help to identify populations at risk and requiring prophylactic treatment. Our analysis demonstrates that while estimates based on the first 10 or 15 observed cases were more accurate and less sensitive to model misspecifications than those based on five cases, overall mortality is minimized by targeting prophylactic treatment early on the basis of estimates made using data on the first five cases. The method we propose could provide early estimates of the time, strength, and location of an aerosolized anthrax release and the geographical extent of the subsequent outbreak. In addition, estimates of release features could be used to parameterize more detailed models allowing the simulation of control strategies and intervention logistics.
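
    As a rough illustration of the back-calculation idea described above (not the authors' actual model), the sketch below scores a candidate release day and dose against observed symptom-onset days by combining a toy dose-response term with an assumed log-normal incubation-period distribution. All parameter values, the dose-response form, and the omission of the spatial dispersion and travel components are simplifying assumptions.

```python
# Illustrative back-calculation sketch, not the paper's model: score a candidate
# release day t0 and log dose log_q against observed symptom-onset days.
import numpy as np
from scipy.stats import lognorm

INCUBATION = lognorm(s=0.7, scale=11.0)   # assumed ~11-day median incubation for inhalational anthrax

def release_log_likelihood(t0, log_q, onset_days, exposed_population=1e5):
    onsets = np.asarray(onset_days, dtype=float)
    if np.any(onsets <= t0):
        return -np.inf                    # observed cases cannot predate the release
    attack_prob = 1.0 - np.exp(-np.exp(log_q) / exposed_population)   # toy dose-response term
    # each case contributes an infection term and an incubation-delay term
    return np.sum(np.log(attack_prob) + INCUBATION.logpdf(onsets - t0))

# crude grid search over candidate release days; a full analysis would sample the
# posterior and also fit release location using dispersion and travel data
onsets = [12, 13, 15, 16, 18]             # hypothetical onset days
best_t0 = max(np.arange(0, 12, 0.5),
              key=lambda t0: release_log_likelihood(t0, np.log(1e9), onsets))
```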

    Enzymatic capacities of metabolic fuel use in cuttlefish (Sepia officinalis) and responses to food deprivation: insight into the metabolic organization and starvation survival strategy of cephalopods

    Food limitation is a common challenge for animals. Cephalopods are sensitive to starvation because of the high metabolic and growth rates associated with their "live fast, die young" life history. We investigated how enzymatic capacities of key metabolic pathways are modulated during starvation in the common cuttlefish (Sepia officinalis) to gain insight into the metabolic organization of cephalopods and their strategies for coping with food limitation. In particular, lipids have traditionally been considered unimportant fuels in cephalopods, yet, puzzlingly, many species (including cuttlefish) mobilize the lipid stores in their digestive gland during starvation. Using a comprehensive multi-tissue assay of enzymatic capacities for energy metabolism, we show that, during long-term starvation (12 days), glycolytic capacity for glucose use is decreased in cuttlefish tissues, while capacities for use of lipid-based fuels (fatty acids and ketone bodies) and amino acid fuels are retained or increased. Specifically, the capacity to use the ketone body acetoacetate as fuel is widespread across tissues and gill has a previously unrecognized capacity for fatty acid catabolism, albeit at low rates. The capacity for de novo glucose synthesis (gluconeogenesis), important for glucose homeostasis, likely is restricted to the digestive gland, contrary to previous reports of widespread gluconeogenesis among cephalopod tissues. Short-term starvation (3–5 days) had few effects on enzymatic capacities. Similar to vertebrates, lipid-based fuels, putatively mobilized from fat stores in the digestive gland, appear to be important energy sources for cephalopods, especially during starvation, when glycolytic capacity is decreased, perhaps to conserve available glucose.

    Strict evolutionary conservation followed rapid gene loss on human and rhesus Y chromosomes

    The human X and Y chromosomes evolved from an ordinary pair of autosomes during the past 200–300 million years [1–3]. The human MSY (male-specific region of Y chromosome) retains only three percent of the ancestral autosomes’ genes owing to genetic decay [4, 5]. This evolutionary decay was driven by a series of five ‘stratification’ events. Each event suppressed X–Y crossing over within a chromosome segment or ‘stratum’, incorporated that segment into the MSY and subjected its genes to the erosive forces that attend the absence of crossing over [2, 6]. The last of these events occurred 30 million years ago, 5 million years before the human and Old World monkey lineages diverged. Although speculation abounds regarding ongoing decay and looming extinction of the human Y chromosome [7–10], remarkably little is known about how many MSY genes were lost in the human lineage in the 25 million years that have followed its separation from the Old World monkey lineage. To investigate this question, we sequenced the MSY of the rhesus macaque, an Old World monkey, and compared it to the human MSY. We discovered that during the last 25 million years MSY gene loss in the human lineage was limited to the youngest stratum (stratum 5), which comprises three percent of the human MSY. In the older strata, which collectively comprise the bulk of the human MSY, gene loss evidently ceased more than 25 million years ago. Likewise, the rhesus MSY has not lost any older genes (from strata 1–4) during the past 25 million years, despite its major structural differences from the human MSY. The rhesus MSY is simpler, with few amplified gene families or palindromes that might enable intrachromosomal recombination and repair. We present an empirical reconstruction of human MSY evolution in which each stratum transitioned from rapid, exponential loss of ancestral genes to strict conservation through purifying selection.

    A pragmatic randomised controlled trial of the Welsh National Exercise Referral Scheme: protocol for trial and integrated economic and process evaluation

    Background: The benefits to health of a physically active lifestyle are well established and there is evidence that a sedentary lifestyle plays a significant role in the onset and progression of chronic disease. Despite a recognised need for effective public health interventions encouraging sedentary people with a medical condition to become more active, there are few rigorous evaluations of their effectiveness. Following NICE guidance, the Welsh national exercise referral scheme was implemented within the context of a pragmatic randomised controlled trial. Methods/Design: The randomised controlled trial, with nested economic and process evaluations, recruited 2,104 inactive men and women aged 16+ with coronary heart disease (CHD) risk factors and/or mild to moderate depression, anxiety or stress. Participants were recruited from 12 local health boards in Wales and referred directly by health professionals working in a range of health care settings. Consenting participants were randomised either to a 16-week tailored exercise programme run by qualified exercise professionals at community sports centres (intervention) or to receive an information booklet on physical activity (control). A range of validated measures assessing physical activity, mental health, psycho-social processes and health economics were administered at 6 and 12 months, with the primary 12-month outcome measure being 7-day Physical Activity Recall. The process evaluation explored factors determining the effectiveness or otherwise of the scheme, whilst the economic evaluation determined the relative cost-effectiveness of the scheme in terms of public spending. Discussion: Evaluation of such a large-scale national public health intervention presents methodological challenges in terms of trial design and implementation. This study was facilitated by early collaboration with social research and policy colleagues to develop a rigorous design which included an innovative approach to patient referral and trial recruitment, a comprehensive process evaluation examining intervention delivery and an integrated economic evaluation. This will allow a unique insight into the feasibility, effectiveness and cost-effectiveness of a national exercise referral scheme for participants with CHD risk factors or mild to moderate anxiety, depression, or stress and provides a potential model for future policy evaluations. Trial registration: Current Controlled Trials ISRCTN4768044.

    Re-examining the Unified Theory of Acceptance and Use of Technology (UTAUT): Towards a Revised Theoretical Model

    Based on a critical review of the Unified Theory of Acceptance and Use of Technology (UTAUT), this study first formalized an alternative theoretical model for explaining the acceptance and use of information system (IS) and information technology (IT) innovations. The revised theoretical model was then empirically examined using a combination of meta-analysis and structural equation modelling (MASEM) techniques. The meta-analysis was based on 1600 observations on 21 relationships coded from 162 prior studies on IS/IT acceptance and use. The SEM analysis showed that attitude was central to behavioural intentions and usage behaviours, partially mediated the effects of exogenous constructs on behavioural intentions, and had a direct influence on usage behaviours. A number of implications for theory and practice are derived based on the findings.
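
    A minimal sketch of the meta-analytic half of the MASEM approach mentioned above (not the authors' exact procedure): study-level correlations for a given relationship, say attitude and behavioural intention, are pooled with a sample-size-weighted Fisher z-transform, and a pooled correlation matrix built this way would then be fitted with a structural equation model. The correlations and sample sizes below are hypothetical.

```python
# Pool study-level correlations for one relationship (e.g. attitude -> intention)
# via a sample-size-weighted Fisher z-transform; inputs are hypothetical.
import numpy as np

def pooled_correlation(rs, ns):
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)            # Fisher r-to-z transform
    w = ns - 3                    # inverse-variance weights under the z transform
    return np.tanh(np.sum(w * z) / np.sum(w))   # back-transform the weighted mean

print(pooled_correlation([0.45, 0.52, 0.38], [210, 150, 320]))   # pooled r ~ 0.43
```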

    Developmental malformation of the corpus callosum: a review of typical callosal development and examples of developmental disorders with callosal involvement

    This review provides an overview of the involvement of the corpus callosum (CC) in a variety of developmental disorders that are currently defined exclusively by genetics, developmental insult, and/or behavior. I begin with a general review of CC development, connectivity, and function, followed by discussion of the research methods typically utilized to study the callosum. The bulk of the review concentrates on specific developmental disorders, beginning with agenesis of the corpus callosum (AgCC)—the only condition diagnosed exclusively by callosal anatomy. This is followed by a review of several genetic disorders that commonly result in social impairments and/or psychopathology similar to AgCC (neurofibromatosis-1, Turner syndrome, 22q11.2 deletion syndrome, Williams syndrome, and fragile X) and two forms of prenatal injury (premature birth, fetal alcohol syndrome) known to impact callosal development. Finally, I examine callosal involvement in several common developmental disorders defined exclusively by behavioral patterns (developmental language delay, dyslexia, attention-deficit/hyperactivity disorder, autism spectrum disorders, and Tourette syndrome).

    Evidence-based Kernels: Fundamental Units of Behavioral Influence

    This paper describes evidence-based kernels, fundamental units of behavioral influence that appear to underlie effective prevention and treatment for children, adults, and families. A kernel is a behavior–influence procedure shown through experimental analysis to affect a specific behavior and that is indivisible in the sense that removing any of its components would render it inert. Existing evidence shows that a variety of kernels can influence behavior in context, and some evidence suggests that frequent use or sufficient use of some kernels may produce longer lasting behavioral shifts. The analysis of kernels could contribute to an empirically based theory of behavioral influence, augment existing prevention or treatment efforts, facilitate the dissemination of effective prevention and treatment practices, clarify the active ingredients in existing interventions, and contribute to efficiently developing interventions that are more effective. Kernels involve one or more of the following mechanisms of behavior influence: reinforcement, altering antecedents, changing verbal relational responding, or changing physiological states directly. The paper describes 52 of these kernels, and details practical, theoretical, and research implications, including calling for a national database of kernels that influence human behavior.

    Measurement of the tt̄tt̄ production cross section in pp collisions at √s = 13 TeV with the ATLAS detector

    A measurement of four-top-quark production using proton-proton collision data at a centre-of-mass energy of 13 TeV collected by the ATLAS detector at the Large Hadron Collider corresponding to an integrated luminosity of 139 fb⁻¹ is presented. Events are selected if they contain a single lepton (electron or muon) or an opposite-sign lepton pair, in association with multiple jets. The events are categorised according to the number of jets and how likely these are to contain b-hadrons. A multivariate technique is then used to discriminate between signal and background events. The measured four-top-quark production cross section is found to be 26 +17/−15 fb, with a corresponding observed (expected) significance of 1.9 (1.0) standard deviations over the background-only hypothesis. The result is combined with the previous measurement performed by the ATLAS Collaboration in the multilepton final state. The combined four-top-quark production cross section is measured to be 24 +7/−6 fb, with a corresponding observed (expected) signal significance of 4.7 (2.6) standard deviations over the background-only predictions. It is consistent within 2.0 standard deviations with the Standard Model expectation of 12.0 ± 2.4 fb.
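
    As a back-of-envelope check on the quoted compatibility with the Standard Model (the analysis itself uses a full profile-likelihood fit, so this is only an approximation), one can compare the combined cross section of 24 +7/−6 fb with the predicted 12.0 ± 2.4 fb by adding the relevant uncertainties in quadrature:

```python
# Rough consistency check, not the profile-likelihood treatment used in the paper:
# compare the combined measurement (24 +7/-6 fb) with the SM prediction
# (12.0 +/- 2.4 fb), using the downward measurement uncertainty since the
# prediction lies below the central value.
from math import hypot

measured, sigma_meas = 24.0, 6.0
predicted, sigma_pred = 12.0, 2.4
pull = (measured - predicted) / hypot(sigma_meas, sigma_pred)
print(f"deviation ~ {pull:.1f} standard deviations")   # ~1.9, i.e. within the quoted 2.0
```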

    Measurements of Higgs bosons decaying to bottom quarks from vector boson fusion production with the ATLAS experiment at √s = 13 TeV

    The paper presents a measurement of the Standard Model Higgs boson decaying to b-quark pairs in the vector boson fusion (VBF) production mode. A sample corresponding to 126 fb⁻¹ of √s = 13 TeV proton–proton collision data, collected with the ATLAS experiment at the Large Hadron Collider, is analyzed utilizing an adversarial neural network for event classification. The signal strength, defined as the ratio of the measured signal yield to that predicted by the Standard Model for VBF Higgs production, is measured to be 0.95 +0.38/−0.36, corresponding to an observed (expected) significance of 2.6 (2.8) standard deviations from the background-only hypothesis. The results are additionally combined with an analysis of Higgs bosons decaying to b-quarks, produced via VBF in association with a photon.
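
    The adversarial neural network mentioned above is, in general terms, a classifier trained jointly against an adversary that tries to infer a nuisance quantity from the classifier output, so that penalising the adversary's success decorrelates the classifier from that quantity. The PyTorch sketch below illustrates the generic technique only; the layer sizes, the choice of dijet mass as the nuisance, and the weighting factor are assumptions, not the configuration used in the ATLAS analysis.

```python
# Generic adversarial-training sketch (illustrative only, not the ATLAS network):
# a classifier separates signal from background while an adversary tries to
# predict a nuisance variable (e.g. the dijet mass) from the classifier score.
import torch
import torch.nn as nn

clf = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
adv = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce, mse, lam = nn.BCELoss(), nn.MSELoss(), 10.0   # lam sets the decorrelation strength

def train_step(x, y, nuisance):
    # 1) train the adversary to predict the nuisance from the (frozen) classifier score
    opt_adv.zero_grad()
    mse(adv(clf(x).detach()), nuisance).backward()
    opt_adv.step()
    # 2) train the classifier to separate classes while making the adversary fail
    opt_clf.zero_grad()
    score = clf(x)
    loss = bce(score, y) - lam * mse(adv(score), nuisance)
    loss.backward()
    opt_clf.step()
    return loss.item()
```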

    Muon reconstruction and identification efficiency in ATLAS using the full Run 2 pp collision data set at √s = 13 TeV

    This article documents the muon reconstruction and identification efficiency obtained by the ATLAS experiment for 139 fb⁻¹ of pp collision data at √s = 13 TeV collected between 2015 and 2018 during Run 2 of the LHC. The increased instantaneous luminosity delivered by the LHC over this period required a reoptimisation of the criteria for the identification of prompt muons. Improved and newly developed algorithms were deployed to preserve high muon identification efficiency with a low misidentification rate and good momentum resolution. The availability of large samples of Z → μμ and J/ψ → μμ decays, and the minimisation of systematic uncertainties, allows the efficiencies of criteria for muon identification, primary vertex association, and isolation to be measured with an accuracy at the per-mille level in the bulk of the phase space, and up to the percent level in complex kinematic configurations. Excellent performance is achieved over a range of transverse momenta from 3 GeV to several hundred GeV, and across the full muon detector acceptance of |η| < 2.7.
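
    The per-mille precision quoted above comes from efficiency measurements in very large dimuon samples; the snippet below shows the basic tag-and-probe style counting estimate with a simple binomial uncertainty (the actual analysis additionally subtracts backgrounds and propagates systematic uncertainties). The probe counts are illustrative only.

```python
# Basic efficiency estimate from a tag-and-probe style count of Z -> mumu probes
# (illustrative; the ATLAS measurement also handles backgrounds and systematics).
from math import sqrt

def efficiency(n_pass, n_probes):
    eff = n_pass / n_probes
    stat = sqrt(eff * (1.0 - eff) / n_probes)   # binomial statistical uncertainty
    return eff, stat

eff, stat = efficiency(986_400, 1_000_000)       # hypothetical probe counts
print(f"identification efficiency = {eff:.4f} +/- {stat:.4f}")   # per-mille-level precision
```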