
    Leopard Panthera pardus density and survival in an ecosystem with depressed abundance of prey and dominant competitors

    The leopard Panthera pardus is in range-wide decline, and many populations are highly threatened. Prey depletion is a major cause of global carnivore declines, but the response of leopard survival and density to this threat is unclear: by reducing the density of a dominant competitor (the lion Panthera leo), prey depletion could create both costs and benefits for subordinate competitors. We used capture-recapture models fitted to data from a 7-year camera-trap study in Kafue National Park, Zambia, to obtain baseline estimates of leopard population density and sex-specific apparent survival rates. Kafue is affected by prey depletion, and densities of large herbivores preferred by lions have declined more than the densities of smaller herbivores preferred by leopards. Lion density is consequently low. Estimates of leopard density were comparable to those from ecosystems with more intensive protection and favourable prey densities. However, our study site is located in an area with good ecological conditions and high levels of protection relative to other portions of the ecosystem, so extrapolating our estimates across the Park or into adjacent Game Management Areas would not be valid. Our results show that leopard density and survival within north-central Kafue remain high despite prey depletion, perhaps because (1) prey depletion has had weaker effects on the smaller prey preferred by leopards than on the larger prey preferred by lions, and (2) the density of dominant competitors is consequently low. More broadly, our results show that the effects of prey depletion can be more complex than a uniform decline of all large carnivore species, and warrant further investigation.
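
    As a rough illustration of the capture-recapture idea behind such density estimates (a minimal sketch, not the multi-year camera-trap model fitted in the study; all counts and the survey area below are hypothetical), the classic Chapman-corrected Lincoln-Petersen estimator converts two detection sessions into an abundance and density estimate:

```python
# Minimal sketch of the capture-recapture idea behind density estimation.
# This is the Chapman-corrected Lincoln-Petersen estimator, NOT the
# spatially explicit model used in the study; all numbers are hypothetical.

def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate.

    n1: animals detected in the first sampling occasion
    n2: animals detected in the second sampling occasion
    m2: animals detected on both occasions ("recaptures")
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical camera-trap counts: 14 leopards identified in session 1,
# 12 in session 2, 6 seen in both sessions.
n_hat = chapman_estimate(14, 12, 6)
area_km2 = 400.0  # hypothetical effective survey area
print(f"estimated abundance: {n_hat:.1f}")
print(f"density: {100 * n_hat / area_km2:.2f} leopards per 100 km^2")
```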

    Crossbow Volume 1

    Student Integrated Project. Includes supplementary material. Distributing naval combat power into many small ships and unmanned air vehicles that capitalize on emerging technology offers a transformational way to think about naval combat in the littorals in the 2020 time frame. Project CROSSBOW is an engineered system of systems that proposes to use such distributed forces to provide forward presence to gain and maintain access, to provide sea control, and to project combat power in the littoral regions of the world. Project CROSSBOW is the result of a yearlong, campus-wide, integrated research systems engineering effort involving 40 student researchers and 15 supervising faculty members. This report (Volume I) summarizes the CROSSBOW project. It catalogs the major features of each of the components, and includes by reference a separate volume for each of the major systems (ships, aircraft, and logistics). It also presents the results of the mission and campaign analysis that informed the trade-offs between these components. It describes certain functions of CROSSBOW in detail through specialized supporting studies. The student work presented here is technologically feasible, integrated, and imaginative. The student project cannot by itself provide definitive designs or analyses covering such a broad topic. It does strongly suggest that the underlying concepts have merit and deserve further serious study by the Navy as it transforms itself.

    Severe Asthma Standard-of-Care Background Medication Reduction With Benralizumab: ANDHI in Practice Substudy

    Background: The phase IIIb, randomized, parallel-group, placebo-controlled ANDHI double-blind (DB) study extended understanding of the efficacy of benralizumab for patients with severe eosinophilic asthma. Patients from ANDHI DB could join the 56-week ANDHI in Practice (IP) single-arm, open-label extension substudy. Objective: Assess the potential for standard-of-care background medication reductions while maintaining asthma control with benralizumab. Methods: Following ANDHI DB completion, eligible adults were enrolled in ANDHI IP. After an 8-week run-in with benralizumab, there were 5 visits to potentially reduce background asthma medications for patients achieving and maintaining protocol-defined asthma control with benralizumab. Main outcome measures for non-oral corticosteroid (OCS)-dependent patients were the proportions with at least 1 background medication reduction (ie, a lower inhaled corticosteroid dose or discontinuation of a background medication) and the number of adapted Global Initiative for Asthma (GINA) step reductions at end of treatment (EOT). Main outcomes for OCS-dependent patients were reductions in daily OCS dosage and the proportion achieving an OCS dosage of 5 mg or lower at EOT. Results: For non-OCS-dependent patients, 53.3% (n = 208 of 390) achieved at least 1 background medication reduction, increasing to 72.6% (n = 130 of 179) for patients who maintained protocol-defined asthma control at EOT. A total of 41.9% (n = 163 of 389) achieved at least 1 adapted GINA step reduction, increasing to 61.8% (n = 110 of 178) for patients with protocol-defined EOT asthma control. At ANDHI IP baseline, OCS dosages were 5 mg or lower for 40.4% (n = 40 of 99) of OCS-dependent patients. Of OCS-dependent patients, 50.5% (n = 50 of 99) eliminated OCS and 74.7% (n = 74 of 99) achieved dosages of 5 mg or lower at EOT. Conclusions: These findings demonstrate benralizumab's ability to improve asthma control, thereby allowing background medication reduction.

    Multi-messenger observations of a binary neutron star merger

    On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg² at a luminosity distance of 40 ± 8 Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M⊙. An extensive observing campaign was launched across the electromagnetic spectrum, leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification of AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two-Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient's position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993, followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.
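
    A quick back-of-envelope calculation, using only the ~1.7 s delay and ~40 Mpc distance quoted above, shows why this joint detection constrains the speed of gravitational waves so tightly (an illustrative sketch, not the analysis from the paper):

```python
# Back-of-envelope bound implied by the numbers in the abstract: if the
# gamma rays and gravitational waves left the source within seconds of
# each other, a ~1.7 s arrival difference over ~40 Mpc limits any
# fractional speed difference to roughly c * dt / D. Illustration only.

C = 2.998e8          # speed of light, m/s
MPC = 3.086e22       # one megaparsec in metres

dt = 1.7             # observed GRB delay after the merger, s
distance = 40 * MPC  # luminosity distance from the abstract, m

fractional_diff = C * dt / distance
print(f"|v_gw - c| / c  <~  {fractional_diff:.1e}")   # ~ 4e-16
```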

    Multimessenger Search for Sources of Gravitational Waves and High-Energy Neutrinos: Results for Initial LIGO-Virgo and IceCube

    We report the results of a multimessenger search for coincident signals from the LIGO and Virgo gravitational-wave observatories and the partially completed IceCube high-energy neutrino detector, including periods of joint operation between 2007 and 2010. These include parts of the 2005-2007 run and the 2009-2010 run for LIGO-Virgo, and IceCube's observation periods with 22, 59, and 79 strings. We find no significant coincident events, and use the search results to derive upper limits on the rate of joint sources for a range of source emission parameters. For the optimistic assumption of gravitational-wave emission energy of 10⁻² M⊙c² at ~150 Hz with ~60 ms duration, and high-energy neutrino emission of 10⁔¹ erg, comparable to the isotropic gamma-ray energy of gamma-ray bursts, we limit the source rate below 1.6 × 10⁻² Mpc⁻³ yr⁻¹. We also examine how combining information from gravitational waves and neutrinos will aid discovery in the advanced gravitational-wave detector era.
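
    For scale, the two assumed emission energies can be put in the same units (a hedged unit-conversion sketch; the physical constants are standard values, not taken from the paper):

```python
# Unit-conversion sketch for the emission energies quoted in the abstract:
# 1e-2 solar masses * c^2 (gravitational waves) versus 1e51 erg (neutrinos).

C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
J_PER_ERG = 1e-7

e_gw_J = 1e-2 * M_SUN * C**2          # assumed GW emission energy, joules
e_gw_erg = e_gw_J / J_PER_ERG         # ~1.8e52 erg
e_nu_erg = 1e51                       # assumed neutrino emission energy

print(f"E_gw  ~ {e_gw_erg:.1e} erg")
print(f"E_nu  ~ {e_nu_erg:.1e} erg")
print(f"ratio ~ {e_gw_erg / e_nu_erg:.0f}")  # GW channel ~18x more energetic
```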

    Searching for stochastic gravitational waves using data from the two colocated LIGO Hanford detectors

    Searches for a stochastic gravitational-wave background (SGWB) using terrestrial detectors typically involve cross-correlating data from pairs of detectors. The sensitivity of such cross-correlation analyses depends, among other things, on the separation between the two detectors: the smaller the separation, the better the sensitivity. Hence, a colocated detector pair is more sensitive to a gravitational-wave background than a noncolocated detector pair. However, colocated detectors are also expected to suffer from correlated noise from instrumental and environmental effects that could contaminate the measurement of the background. Hence, methods to identify and mitigate the effects of correlated noise are necessary to achieve the potential increase in sensitivity of colocated detectors. Here we report on the first SGWB analysis using the two LIGO Hanford detectors and address the complications arising from correlated environmental noise. We apply correlated noise identification and mitigation techniques to data taken by the two LIGO Hanford detectors, H1 and H2, during LIGO's fifth science run. At low frequencies, 40-460 Hz, we are unable to sufficiently mitigate the correlated noise to a level where we may confidently measure or bound the stochastic gravitational-wave signal. However, at high frequencies, 460-1000 Hz, these techniques are sufficient to set a 95% confidence level upper limit on the gravitational-wave energy density of Ω(f) < 7.7 × 10⁻⁎ (f/900 Hz)³, which improves on the previous upper limit by a factor of ~180. In doing so, we demonstrate techniques that will be useful for future searches using advanced detectors, where correlated noise (e.g., from global magnetic fields) may affect even widely separated detectors.
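
    The cross-correlation principle described above can be illustrated with a toy example (synthetic data only; real searches use frequency-domain, optimally filtered statistics): a weak common signal accumulates in the averaged product of two data streams while each detector's independent noise averages away, and correlated environmental noise would enter exactly like the signal term, which is why colocated detectors are problematic.

```python
# Toy illustration of cross-correlating two detector data streams: a weak
# common "background" buried in independent detector noise survives the
# averaged product, while the independent noise averages away. Correlated
# environmental noise would enter exactly like the signal term here.
# Synthetic data, illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
signal = 0.05 * rng.standard_normal(n)   # weak common stochastic signal
h1 = signal + rng.standard_normal(n)     # detector 1: signal + its own noise
h2 = signal + rng.standard_normal(n)     # detector 2: signal + its own noise

cross = np.mean(h1 * h2)                 # estimates <s^2> = 0.0025
auto = np.mean(h1 * h1)                  # dominated by detector noise, ~1.0025
print(f"cross-correlation: {cross:.4f}")
print(f"auto-correlation:  {auto:.4f}")
```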

    Constraints on cosmic strings using data from the first Advanced LIGO observing run

    Cosmic strings are topological defects which can be formed in grand unified theory scale phase transitions in the early universe. They are also predicted to form in the context of string theory. The main mechanism for a network of Nambu-Goto cosmic strings to lose energy is through the production of loops and the subsequent emission of gravitational waves, thus offering an experimental signature for the existence of cosmic strings. Here we report on the analysis conducted to specifically search for gravitational-wave bursts from cosmic string loops in data from the Advanced LIGO 2015-2016 observing run (O1). No evidence of such signals was found in the data, and as a result we set upper limits on the cosmic string parameters for three recent loop distribution models. In this paper, we initially derive constraints on the string tension GΌ and the intercommutation probability, using not only the burst analysis performed on the O1 data set but also results from the previously published LIGO stochastic O1 analysis, pulsar timing arrays, cosmic microwave background, and big-bang nucleosynthesis experiments. We show that these data sets are complementary in that they probe gravitational waves produced by cosmic string loops during very different epochs. Finally, we show that the data sets exclude large parts of the parameter space of the three loop distribution models we consider.

    Search for High-energy Neutrinos from Binary Neutron Star Merger GW170817 with ANTARES, IceCube, and the Pierre Auger Observatory


    Comparing methods for clinical investigator site inspection selection: a comparison of site selection methods of investigators in clinical trials

    Background: During the past two decades, the number and complexity of clinical trials have risen dramatically, increasing the difficulty of choosing sites for inspection. FDA's resources are limited, so sites should be chosen with care. Purpose: To determine whether data mining techniques and/or unsupervised statistical monitoring can assist with the process of identifying potential clinical sites for inspection. Methods: Five summary-level clinical site datasets from four new drug applications (NDA) and one biologics license application (BLA), where the FDA had performed or had planned site inspections, were used. The number of sites inspected and the results of the inspections were blinded to the researchers. Five supervised learning models from the previous two years (2016-2017) of an ongoing research project were used to predict site inspection results, i.e., No Action Indicated (NAI), Voluntary Action Indicated (VAI), or Official Action Indicated (OAI). Statistical Monitoring Applied to Research Trials (SMART™) software for unsupervised statistical monitoring, developed by CluePoints (Mont-Saint-Guibert, Belgium), was utilized to identify atypical centers (via a p-value approach) within a study. Finally, the Clinical Investigator Site Selection Tool (CISST), developed by the Center for Drug Evaluation and Research (CDER), was used to calculate the total risk of each site, thereby providing a framework for site selection. The agreement between the predictions of these methods was compared, and the overall accuracy and sensitivity of the methods were graphically compared. Results: Spearman's rank-order correlation was used to examine the agreement between the SMART™ analysis (CluePoints' software) and the CISST analysis. The average aggregated correlation between the p-values (SMART™) and total risk scores (CISST) for all five studies was 0.21, ranging from −0.41 to 0.50. The Random Forest models for 2016 and 2017 showed the highest aggregated mean agreement (65.1%) among outcomes (NAI, VAI, OAI) for the three available studies. While there does not appear to be a single most accurate approach, the performance of the methods under certain circumstances is discussed later in this paper. Limitations: Classifier models based on data mining techniques require historical data (i.e., training data) to develop the model. There is a possibility that sites in the five summary-level datasets were included in the training datasets for the models from the previous years' research, which could result in spurious confirmation of predictive ability. Additionally, the CISST was utilized in three of the five site selection processes, possibly biasing the data. Conclusion: The agreement between methods was lower than expected, and no single method emerged as the most accurate.
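
    The agreement measure described in the Results can be sketched in a few lines (the per-site numbers below are hypothetical; spearmanr is SciPy's standard rank-correlation routine):

```python
# Sketch of the agreement measure used above: Spearman's rank-order
# correlation between per-site SMART(TM) p-values and CISST total risk
# scores. The values below are made up; the study reported an average
# correlation of 0.21 across the five datasets.
from scipy.stats import spearmanr

# hypothetical per-site results for one study
smart_p_values = [0.01, 0.40, 0.22, 0.90, 0.05, 0.63, 0.33]
cisst_risk =     [8.2,  3.1,  5.5,  1.0,  7.8,  2.2,  4.9]

rho, p = spearmanr(smart_p_values, cisst_risk)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# A strongly negative rho is expected when the methods agree, since
# atypical sites get LOW p-values from SMART but HIGH risk scores
# from CISST.
```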