
    Paper Session II-B - Application of Information Technology to the National Launch System

    The information needs of the National Launch System (NLS) program had their beginnings with the Advanced Launch System (ALS). The Technical Reference Document for ALS called for a Unified Information System (UNIS) to provide, in a timely manner, all the information required to manage, design, manufacture, integrate, test, launch, operate, and support the ALS. UNIS was to provide the link between the distributed, heterogeneous workstations which were to make up both the ground and flight information systems. In addition, there was to be an Advanced Launch System Model (ALSYM), a set of computerized submodels, or tools, which would work together to simulate all aspects of the ALS. These conceptual requirements were transitioned to the NLS program, and UNIS and the system simulation exist today. The current version of the NLS UNIS links geographically dispersed users to databases, analysis tools, program management tools, and communications devices. UNIS development is continuing toward the ultimate capabilities described in the ALS Technical Reference Document. The approach to that development, as well as the current and planned capabilities, are described. The ALSYM requirement transitioned into a requirement for a large-scale, end-to-end simulation of the Space Transportation Main Engine (STME) development program, named STESYM. The approach being used to satisfy that requirement incorporates object-oriented programming, discrete-event simulation, and knowledge-based techniques to produce a simulation that captures the technical characteristics of the hardware, the processing flows, and the scheduling requirements. The outputs of the simulation will include subsystem and system reliabilities, process infrastructure statistics, schedule performance statistics, and costs.
Together, UNIS and STESYM will provide program managers, engineers, logisticians, and other program participants with communications connectivity and the information to support STME program analysis
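The discrete-event approach the abstract describes can be sketched in a few lines. The following is a hypothetical illustration only, not the actual STESYM implementation: engine units flow through a sequence of processing steps with stochastic durations, and the event queue drives the simulated clock. The step names and mean durations are invented for the example.

```python
import heapq
import random

# Illustrative processing flow: (step name, mean duration in days).
# These steps and values are assumptions, not STESYM data.
STEPS = [("assembly", 10.0), ("hot-fire test", 4.0), ("inspection", 2.0)]

def simulate_flow(n_units, seed=0):
    """Discrete-event simulation: pop the earliest event, schedule the next step."""
    rng = random.Random(seed)
    events = []  # priority queue of (time, unit, next step index)
    for unit in range(n_units):
        heapq.heappush(events, (0.0, unit, 0))
    completion = {}
    while events:
        time, unit, step = heapq.heappop(events)
        if step == len(STEPS):
            completion[unit] = time  # unit has finished all steps
            continue
        _, mean_days = STEPS[step]
        duration = rng.expovariate(1.0 / mean_days)  # stochastic step time
        heapq.heappush(events, (time + duration, unit, step + 1))
    return completion

durations = simulate_flow(100)
avg = sum(durations.values()) / len(durations)
```

Schedule-performance statistics of the kind the abstract mentions (e.g. mean and spread of completion times) fall out of the `completion` map; resource contention and knowledge-based scheduling rules would layer on top of this core loop.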

    A field and video-annotation guide for baited remote underwater stereo-video surveys of demersal fish assemblages

    Researchers TL, BG, JW, NB and JM were supported by the Marine Biodiversity Hub through funding from the Australian Government's National Environmental Science Program. Data validation scripts and GlobalArchive.org were supported by the Australian Research Data Commons, the Gorgon-Barrow Island Net Conservation Benefits Fund (administered by the Government of Western Australia), and the BHP/UWA Biodiversity and Societal Benefits of Restricted Access Areas collaboration. 1. Baited remote underwater stereo-video systems (stereo-BRUVs) are a popular tool to sample demersal fish assemblages and gather data on their relative abundance and body-size structure in a robust, cost-effective, and non-invasive manner. Given the rapid uptake of the method, subtle differences have emerged in the way stereo-BRUVs are deployed and how the resulting imagery is annotated. These disparities limit the interoperability of datasets obtained across studies, preventing broad-scale insights into the dynamics of ecological systems. 2. We provide the first globally accepted guide for using stereo-BRUVs to survey demersal fish assemblages and associated benthic habitats. 3. Information on stereo-BRUV design, camera settings, field operations, and image annotation is outlined. Additionally, we provide links to protocols for data validation, archiving, and sharing. 4. Globally, the use of stereo-BRUVs is spreading rapidly. We provide a standardised protocol that will reduce methodological variation among researchers and encourage the use of Findable, Accessible, Interoperable, and Reusable (FAIR) workflows to increase the ability to synthesise global datasets and answer a broad suite of ecological questions.

    A family of process-based models to simulate landscape use by multiple taxa

    Context: Land-use change is a key driver of biodiversity loss. Models that accurately predict how biodiversity might be affected by land-use changes are urgently needed, to help avoid further negative impacts and inform landscape-scale restoration projects. To be effective, such models must balance model realism with computational tractability and must represent the different habitat and connectivity requirements of multiple species. Objectives: We explored the extent to which process-based modelling might fulfil this role, examining feasibility for different taxa and potential for informing real-world decision-making. Methods: We developed a family of process-based models (*4pop) that simulate landscape use by birds, bats, reptiles and amphibians, derived from the well-established poll4pop model (designed to simulate bee populations). Given landcover data, the models predict spatially-explicit relative abundance by simulating optimal home-range foraging, reproduction, dispersal of offspring and mortality. The models were co-developed by researchers, conservation NGOs and volunteer surveyors, parameterised using literature data and expert opinion, and validated against observational datasets collected across Great Britain. Results: The models were able to simulate habitat specialists, generalists, and species requiring access to multiple habitats for different types of resources (e.g. breeding vs foraging). We identified model refinements required for some taxa and considerations for modelling further species/groups. Conclusions: We suggest process-based models that integrate multiple forms of knowledge can assist biodiversity-inclusive decision-making by predicting habitat use throughout the year, expanding the range of species that can be modelled, and enabling decision-makers to better account for landscape context and habitat configuration effects on population persistence
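The process chain the abstract outlines (landcover → reproduction, dispersal, mortality → spatially explicit relative abundance) can be illustrated with a toy grid model. This is a minimal sketch under invented assumptions, not the published *4pop parameterisation: all function names, parameter values, and the toroidal dispersal kernel are for illustration only.

```python
import numpy as np

def simulate(quality, steps=50, growth=0.3, dispersal=0.2, mortality=0.1):
    """Toy process-based model: iterate growth, dispersal, and mortality
    over a habitat-quality grid to get relative abundance per cell."""
    pop = np.full(quality.shape, 0.1)        # small uniform founding population
    for _ in range(steps):
        pop += growth * quality * pop        # reproduction scaled by habitat quality
        moved = dispersal * pop              # fraction of each cell that disperses
        pop -= moved
        # offspring disperse equally to the 4 neighbours (toroidal edges, for simplicity)
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            pop += np.roll(moved, shift, axis=axis) / 4
        pop *= (1 - mortality)               # background mortality everywhere
    return pop

quality = np.zeros((20, 20))
quality[5:15, 5:15] = 1.0                    # a single square habitat patch
abundance = simulate(quality)
```

Even this toy version reproduces the qualitative behaviour the abstract relies on: predicted abundance concentrates in and around the habitat patch, so landscape context and habitat configuration directly shape the output map. The real models add optimal home-range foraging and taxon-specific life histories on top of this loop.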

    The development and validation of a scoring tool to predict the operative duration of elective laparoscopic cholecystectomy

    Background: The ability to accurately predict operative duration has the potential to optimise theatre efficiency and utilisation, thus reducing costs and increasing staff and patient satisfaction. With laparoscopic cholecystectomy being one of the most commonly performed procedures worldwide, a tool to predict operative duration could be extremely beneficial to healthcare organisations. Methods: Data collected from the CholeS study on patients undergoing cholecystectomy in UK and Irish hospitals between 04/2014 and 05/2014 were used to study operative duration. A multivariable binary logistic regression model was produced in order to identify significant independent predictors of long (> 90 min) operations. The resulting model was converted to a risk score, which was subsequently validated on a second cohort of patients using ROC curves. Results: After exclusions, data were available for 7227 patients in the derivation (CholeS) cohort. The median operative duration was 60 min (interquartile range 45–85), with 17.7% of operations lasting longer than 90 min. Ten factors were found to be significant independent predictors of operative durations > 90 min, including ASA grade, age, previous surgical admissions, BMI, gallbladder wall thickness and CBD diameter. A risk score was then produced from these factors and applied to a cohort of 2405 patients from a tertiary centre for external validation. This returned an area under the ROC curve of 0.708 (SE = 0.013, p < 0.001), with the risk of an operation lasting > 90 min increasing more than eightfold, from 5.1% to 41.8%, between the extremes of the score. Conclusion: The scoring tool produced in this study was found to be significantly predictive of long operative durations on validation in an external cohort. As such, the tool may have the potential to enable organisations to better organise theatre lists and deliver greater efficiencies in care
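The general technique the abstract describes, converting logistic-regression coefficients into an additive point score and then validating it with the area under the ROC curve, can be sketched as follows. The predictors and point weights below are invented for illustration; they are not the CholeS coefficients.

```python
# Hypothetical point weights: in practice each regression coefficient is
# divided by the smallest coefficient and rounded, so the score stays
# easy to compute at the bedside.
POINTS = {"age_over_60": 1, "bmi_over_30": 2, "thick_gallbladder_wall": 3}

def risk_score(patient):
    """Additive risk score: sum the points for each risk factor present."""
    return sum(pts for factor, pts in POINTS.items() if patient.get(factor))

def auc(scores_pos, scores_neg):
    """Rank-based AUC: the probability that a long case outscores a short
    one, counting ties as 1/2 (equivalent to the Mann-Whitney U statistic)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

External validation then amounts to computing `auc` on the score distributions of long (> 90 min) versus short operations in an independent cohort; an AUC of 0.708, as reported, means a randomly chosen long case outscores a randomly chosen short one about 71% of the time.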

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb−1 of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements

    Search for new particles in events with energetic jets and large missing transverse momentum in proton-proton collisions at √s = 13 TeV

    A search is presented for new particles produced at the LHC in proton-proton collisions at √s = 13 TeV, using events with energetic jets and large missing transverse momentum. The analysis is based on a data sample corresponding to an integrated luminosity of 101 fb−1, collected in 2017–2018 with the CMS detector. Machine learning techniques are used to define separate categories for events with narrow jets from initial-state radiation and events with large-radius jets consistent with a hadronic decay of a W or Z boson. A statistical combination is made with an earlier search based on a data sample of 36 fb−1, collected in 2016. No significant excess of events is observed with respect to the standard model background expectation determined from control samples in data. The results are interpreted in terms of limits on the branching fraction of an invisible decay of the Higgs boson, as well as constraints on simplified models of dark matter, on first-generation scalar leptoquarks decaying to quarks and neutrinos, and on models with large extra dimensions. Several of the new limits, specifically for spin-1 dark matter mediators, pseudoscalar mediators, colored mediators, and leptoquarks, are the most restrictive to date.

    Combined searches for the production of supersymmetric top quark partners in proton-proton collisions at √s = 13 TeV

    A combination of searches for top squark pair production using proton-proton collision data at a center-of-mass energy of 13 TeV at the CERN LHC, corresponding to an integrated luminosity of 137 fb−1 collected by the CMS experiment, is presented. Signatures with at least 2 jets and large missing transverse momentum are categorized into events with 0, 1, or 2 leptons. New results for regions of parameter space where the kinematical properties of top squark pair production and top quark pair production are very similar are presented. Depending on the model, the combined result excludes a top squark mass up to 1325 GeV for a massless neutralino, and a neutralino mass up to 700 GeV for a top squark mass of 1150 GeV. Top squarks with masses from 145 to 295 GeV, for neutralino masses from 0 to 100 GeV, with a mass difference between the top squark and the neutralino in a window of 30 GeV around the mass of the top quark, are excluded for the first time with CMS data. The results of these searches are also interpreted in an alternative signal model of dark matter production via a spin-0 mediator in association with a top quark pair. Upper limits are set on the cross section for mediator particle masses of up to 420 GeV

    Impact of opioid-free analgesia on pain severity and patient satisfaction after discharge from surgery: multispecialty, prospective cohort study in 25 countries

    Background: Balancing opioid stewardship and the need for adequate analgesia following discharge after surgery is challenging. This study aimed to compare the outcomes for patients discharged with opioid versus opioid-free analgesia after common surgical procedures. Methods: This international, multicentre, prospective cohort study collected data from patients undergoing common acute and elective general surgical, urological, gynaecological, and orthopaedic procedures. The primary outcomes were patient-reported time in severe pain measured on a numerical analogue scale from 0 to 100% and patient-reported satisfaction with pain relief during the first week following discharge. Data were collected by in-hospital chart review and patient telephone interview 1 week after discharge. Results: The study recruited 4273 patients from 144 centres in 25 countries; 1311 patients (30.7%) were prescribed opioid analgesia at discharge. Patients reported being in severe pain for 10 (i.q.r. 1-30)% of the first week after discharge and rated satisfaction with analgesia as 90 (i.q.r. 80-100) of 100. After adjustment for confounders, opioid analgesia on discharge was independently associated with increased pain severity (risk ratio 1.52, 95% c.i. 1.31 to 1.76; P < 0.001) and re-presentation to healthcare providers owing to side-effects of medication (OR 2.38, 95% c.i. 1.36 to 4.17; P = 0.004), but not with satisfaction with analgesia (beta coefficient 0.92, 95% c.i. -1.52 to 3.36; P = 0.468) compared with opioid-free analgesia. Although opioid prescribing varied greatly between high-income and low- and middle-income countries, patient-reported outcomes did not. Conclusion: Opioid analgesia prescription on surgical discharge is associated with a higher risk of re-presentation owing to side-effects of medication and increased patient-reported pain, but not with changes in patient-reported satisfaction. Opioid-free discharge analgesia should be adopted routinely