
    Erratum to: Benchmarks for ethically credible partnerships between industry and academic health centers: beyond disclosure of financial conflicts of interest.

    Relationships between industry and university-based researchers have been commonplace for decades and have received notable attention concerning the conflicts of interest these relationships may harbor. While new efforts are being made to update conflict-of-interest policies and make industry relationships with academia more transparent, the development of broader institutional partnerships between industry and academic health centers challenges the efficacy of current policy to manage these innovative partnerships. In this paper, we argue that existing strategies to reduce conflicts of interest are not sufficient to address the emerging models of industry-academic partnerships because they focus too narrowly on financial matters and are not comprehensive enough to mitigate all ethical risk. Moreover, conflict-of-interest strategies are not designed to promote best practices or the scientific and social benefits of academic-industry collaboration. We propose a framework of principles and benchmarks for "ethically credible partnerships" between industry and academic health centers and describe how this framework may provide a practical and comprehensive approach for designing and evaluating such partnerships.

    Results from the NASA Capability Roadmap Team for In-Situ Resource Utilization (ISRU)

    On January 14, 2004, the President of the United States unveiled a new vision for robotic and human exploration of space entitled "A Renewed Spirit of Discovery". As stated by the President in the Vision for Space Exploration (VSE), NASA must "...implement a sustained and affordable human and robotic program to explore the solar system and beyond" and "...develop new technologies and harness the moon's abundant resources to allow manned exploration of more challenging environments." A key to fulfilling the goal of sustained and affordable human and robotic exploration will be the ability to use resources available at the site of exploration to "live off the land" instead of bringing everything from Earth, an approach known as In-Situ Resource Utilization (ISRU). ISRU can significantly reduce the mass, cost, and risk of exploration through capabilities such as: mission consumable production (propellants, fuel cell reagents, life support consumables, and feedstock for manufacturing and construction); surface construction (radiation shields, landing pads, walls, habitats, etc.); manufacturing and repair with in-situ resources (spare parts, wires, trusses, integrated systems, etc.); and space utilities and power from space resources. On January 27, 2004, the President's Commission on Implementation of U.S. Space Exploration Policy (Aldridge Committee) was created, and its final report was released in June 2004. One of the report's recommendations was to establish special project teams to evaluate enabling technologies, one of which was "Planetary in situ resource utilization". Based on the VSE and the commission's final report, NASA established fifteen Capability Roadmap teams, of which ISRU was one. From Oct. 2004 to May 2005 the ISRU Capability Roadmap team examined the capabilities, benefits, architecture and mission implementation strategy, critical decisions, current state-of-the-art (SOA), challenges, technology gaps, and risks of ISRU for future human Moon and Mars exploration. This presentation will provide an overview of the ISRU capability, architecture, and implementation strategy examined by the team, along with a top-level review of ISRU benefits, resources and products of interest, and the current SOA in ISRU processes and systems. The presentation will also highlight the challenges of incorporating ISRU into future missions and the gaps in technologies and capabilities that must be filled to enable ISRU.
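    The leverage ISRU offers can be illustrated with the Tsiolkovsky rocket equation. The sketch below is not from the roadmap; the delta-v and specific-impulse values are assumed round figures for a chemical stage delivering mass to the lunar surface.

```python
import math

def gear_ratio(delta_v_m_s, isp_s, g0=9.80665):
    """Tsiolkovsky rocket equation: initial mass required per unit of
    final mass delivered, exp(delta_v / (Isp * g0))."""
    return math.exp(delta_v_m_s / (isp_s * g0))

# Assumed round numbers: ~6 km/s of delta-v from low Earth orbit to the
# lunar surface, a 450 s chemical engine. Every kilogram of propellant
# produced in situ is a kilogram that need not be multiplied by this
# ratio at Earth departure.
ratio = gear_ratio(6000.0, 450.0)   # roughly 3.9
```

    Under these assumptions, each kilogram of consumables made from local resources saves several kilograms of launch mass, which is the core of the mass, cost, and risk argument above.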

    MLPerf Inference Benchmark

    Machine-learning (ML) hardware and software system demand is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and five orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call. In this paper, we present our benchmarking method for evaluating ML inference systems. Driven by more than 30 organizations as well as more than 200 ML engineers and practitioners, MLPerf prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities. The submissions attest to the benchmark's flexibility and adaptability.
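    MLPerf's actual load generator is far more involved, but the core idea in its single-stream scenario, issuing queries back-to-back and reporting a tail-latency percentile rather than a mean, can be sketched as follows (the `infer` callable is a hypothetical stand-in for a real model):

```python
import math
import time

def measure_latencies(infer, n_queries=1000):
    """Issue queries back-to-back (single-stream style) and record
    per-query latency in seconds."""
    latencies = []
    for _ in range(n_queries):
        t0 = time.perf_counter()
        infer()
        latencies.append(time.perf_counter() - t0)
    return latencies

def percentile(samples, p):
    """Nearest-rank percentile; tail percentiles (e.g. p90, p99) matter
    more than the mean when comparing inference systems."""
    ordered = sorted(samples)
    k = max(0, math.ceil(p * len(ordered) / 100.0) - 1)
    return ordered[k]

# Hypothetical stand-in for a model's inference call.
latencies = measure_latencies(lambda: None, n_queries=200)
p90 = percentile(latencies, 90)
```

    Reporting a fixed percentile under a prescribed query pattern is one way such a benchmark keeps results comparable across systems whose raw throughputs differ by orders of magnitude.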

    Macrofilaricides and onchocerciasis control, mathematical modelling of the prospects for elimination

    BACKGROUND: In most endemic parts of the world, onchocerciasis (river blindness) control relies, or will soon rely, exclusively on mass treatment with the microfilaricide ivermectin. Worldwide eradication of the parasite by means of this drug is unlikely. Macrofilaricidal drugs are currently being developed for human use. METHODS: We used ONCHOSIM, a microsimulation mathematical model of the dynamics of onchocerciasis transmission, to explore the potential of a hypothetical macrofilaricidal drug for the elimination of onchocerciasis under different epidemiological conditions, as characterized by previous intervention strategies, vectorial capacity and levels of coverage. RESULTS: With a high vector biting rate and poor coverage, a very effective macrofilaricide would appear to have a substantially higher potential for achieving elimination of the parasite than does ivermectin. CONCLUSIONS: Macrofilaricides have a substantially higher potential for achieving onchocerciasis elimination than ivermectin, but high coverage levels are still key. When these drugs become available, onchocerciasis elimination strategies should be reconsidered. In view of the impact of control efforts preceding the introduction of macrofilaricides on the success of elimination, it is important to sustain current control efforts.
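    ONCHOSIM itself is a stochastic individual-based microsimulation; a much simpler deterministic sketch can still show the comparison at the heart of the abstract, how adult-worm kill fraction and treatment coverage interact. All parameter values below are illustrative, not fitted:

```python
def simulate_worm_burden(years, coverage, kill_fraction,
                         recruitment=10.0, death_rate=0.1, w0=100.0):
    """Toy deterministic model (not ONCHOSIM): mean adult worm burden
    per host under annual mass treatment. Each year worms turn over
    naturally, then the drug kills `kill_fraction` of adult worms in
    the treated (`coverage`) share of the population."""
    w = w0
    history = [w]
    for _ in range(years):
        w = w + recruitment - death_rate * w   # natural turnover
        w *= 1.0 - coverage * kill_fraction    # annual mass treatment
        history.append(w)
    return history

# Macrofilaricide-like (high adult-worm kill) vs ivermectin-like
# (low direct adult-worm kill), both with illustrative parameters.
macro = simulate_worm_burden(15, coverage=0.65, kill_fraction=0.90)
micro = simulate_worm_burden(15, coverage=0.65, kill_fraction=0.30)
```

    Even this toy version reproduces the qualitative conclusion: at equal coverage, the high-kill drug drives the burden far lower, while coverage multiplies the effective kill fraction and so remains decisive for both drugs.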

    Association mapping of malting quality traits in UK spring and winter barley cultivar collections

    Key Message: Historical malting quality data was collated from UK national and recommended list trial data and used in a GWAS. 25 QTL were identified, with the majority from spring barley cultivar sets. Abstract: In Europe, the most economically significant use of barley is the production of malt for use in the brewing and distilling industries. As such, selection for traits related to malting quality is of great commercial interest. In order to study the genetic basis of variation for malting quality traits in UK cultivars, a historical set of trial data was collated from national and recommended list trials from the period 1988 to 2016. This data was used to estimate variety means for 20 quality-related traits in 451 spring barley cultivars and 407 winter cultivars. Genotypes for these cultivars were generated using the iSelect 9k and 50k genotyping platforms, and a genome-wide association scan was performed to identify malting quality quantitative trait loci (QTL). 24 QTL were identified in spring barley cultivars, and 2 from the winter set. A number of these correspond to known malting quality related genes, but the remainder represent novel genetic variation that is accessible to breeders for the genetic improvement of new cultivars.
    Mark E. Looseley, Luke Ramsay, Hazel Bull, J. Stuart Swanston, Paul D. Shaw, Malcolm Macaulay, Allan Booth, Joanne R. Russell, Robbie Waugh, on behalf of the IMPROMALT Consortium, William T.B. Thoma
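    The per-marker test at the heart of such a scan can be sketched as a dosage-trait correlation. Real GWAS pipelines add covariates, population-structure correction, and a genome-wide significance model; the function and data here are purely illustrative:

```python
def marker_trait_association(dosages, phenotypes):
    """Squared Pearson correlation (r^2) between allele dosage
    (0/1/2 copies of an allele) and a quantitative trait value.
    A genome-wide scan repeats a test like this at every marker."""
    n = len(dosages)
    mx = sum(dosages) / n
    my = sum(phenotypes) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(dosages, phenotypes))
    sxx = sum((x - mx) ** 2 for x in dosages)
    syy = sum((y - my) ** 2 for y in phenotypes)
    return (sxy * sxy) / (sxx * syy)

# A marker whose dosage tracks the trait perfectly gives r^2 = 1;
# an unlinked marker gives r^2 near 0.
r2 = marker_trait_association([0, 1, 2, 0, 1, 2],
                              [1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
```

    A QTL is then declared where markers exceed the chosen significance threshold, which is how the 9k/50k genotypes and the historical trait means combine in the study above.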

    Are biological systems poised at criticality?

    Many of life's most fascinating phenomena emerge from interactions among many elements--many amino acids determine the structure of a single protein, many genes determine the fate of a cell, many neurons are involved in shaping our thoughts and memories. Physicists have long hoped that these collective behaviors could be described using the ideas and methods of statistical mechanics. In the past few years, new, larger scale experiments have made it possible to construct statistical mechanics models of biological systems directly from real data. We review the surprising successes of this "inverse" approach, using examples from families of proteins, networks of neurons, and flocks of birds. Remarkably, in all these cases the models that emerge from the data are poised at a very special point in their parameter space--a critical point. This suggests there may be some deeper theoretical principle behind the behavior of these diverse systems.
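    One standard diagnostic for proximity to a critical point is a peak in the heat capacity, i.e. in the variance of the energy, as a control parameter is tuned. The toy below computes this exactly for a small Ising ring by brute-force enumeration; it is a finite-size physics illustration of the diagnostic, not one of the biological models in the review:

```python
import math
from itertools import product

def heat_capacity(n_spins, T, J=1.0):
    """Exact heat capacity of a 1D Ising ring by enumerating all
    2**n_spins states: C = (<E^2> - <E>^2) / T^2, i.e. the energy
    fluctuations, which are largest near a critical point."""
    z = e1 = e2 = 0.0
    for spins in product((-1, 1), repeat=n_spins):
        e = -J * sum(spins[i] * spins[(i + 1) % n_spins]
                     for i in range(n_spins))
        w = math.exp(-e / T)       # Boltzmann weight
        z += w
        e1 += w * e
        e2 += w * e * e
    mean_e, mean_e2 = e1 / z, e2 / z
    return (mean_e2 - mean_e ** 2) / T ** 2

# Scan temperature and look for the fluctuation peak.
temps = [0.2 * k for k in range(1, 16)]
cs = [heat_capacity(8, t) for t in temps]
```

    In the inverse problems the review describes, the analogous question is whether the maximum-entropy model inferred from data sits near such a fluctuation maximum in its own parameter space.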

    Photography-based taxonomy is inadequate, unnecessary, and potentially harmful for biological sciences

    The question of whether taxonomic descriptions naming new animal species without type specimen(s) deposited in collections should be accepted for publication by scientific journals and allowed by the Code has already been discussed in Zootaxa (Dubois & Nemésio 2007; Donegan 2008, 2009; Nemésio 2009a–b; Dubois 2009; Gentile & Snell 2009; Minelli 2009; Cianferoni & Bartolozzi 2016; Amorim et al. 2016). This question was again raised in a letter supported by 35 signatories published in the journal Nature (Pape et al. 2016) on 15 September 2016. On 25 September 2016, the following rebuttal (strictly limited to 300 words as per the editorial rules of Nature) was submitted to Nature, which on 18 October 2016 refused to publish it. As we think this problem is a very important one for zoological taxonomy, this text is published here exactly as submitted to Nature, followed by the list of the 493 taxonomists and collection-based researchers who signed it in the short time span from 20 September to 6 October 2016.

    The development and validation of a scoring tool to predict the operative duration of elective laparoscopic cholecystectomy

    Background: The ability to accurately predict operative duration has the potential to optimise theatre efficiency and utilisation, thus reducing costs and increasing staff and patient satisfaction. With laparoscopic cholecystectomy being one of the most commonly performed procedures worldwide, a tool to predict operative duration could be extremely beneficial to healthcare organisations. Methods: Data collected from the CholeS study on patients undergoing cholecystectomy in UK and Irish hospitals between 04/2014 and 05/2014 were used to study operative duration. A multivariable binary logistic regression model was produced in order to identify significant independent predictors of long (> 90 min) operations. The resulting model was converted to a risk score, which was subsequently validated on a second cohort of patients using ROC curves. Results: After exclusions, data were available for 7227 patients in the derivation (CholeS) cohort. The median operative duration was 60 min (interquartile range 45–85), with 17.7% of operations lasting longer than 90 min. Ten factors were found to be significant independent predictors of operative durations > 90 min, including ASA, age, previous surgical admissions, BMI, gallbladder wall thickness and CBD diameter. A risk score was then produced from these factors and applied to a cohort of 2405 patients from a tertiary centre for external validation. This returned an area under the ROC curve of 0.708 (SE = 0.013), with the proportion of operations lasting > 90 min increasing more than eightfold, from 5.1 to 41.8%, between the extremes of the score. Conclusion: The scoring tool produced in this study was found to be significantly predictive of long operative durations on validation in an external cohort. As such, the tool may enable organisations to better organise theatre lists and deliver greater efficiencies in care.
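    The two steps described above, turning a fitted logistic model into an integer risk score and validating it with the area under the ROC curve, can be sketched as follows. The factor names and coefficients here are hypothetical illustrations, not the CholeS-derived values, which are given in the paper itself:

```python
def points_from_coefficients(coefs, scale=1.0):
    """Round each logistic-regression coefficient (log-odds per unit)
    to an integer number of score points, a common way of turning a
    fitted model into a bedside scoring tool."""
    return {name: round(beta / scale) for name, beta in coefs.items()}

def risk_score(points, patient):
    """Sum the points for the factors a patient has."""
    return sum(points[name] * patient.get(name, 0) for name in points)

def auc(scores_long, scores_short):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a long-operation case outscores a short-operation
    case, counting ties as half."""
    wins = sum((a > b) + 0.5 * (a == b)
               for a in scores_long for b in scores_short)
    return wins / (len(scores_long) * len(scores_short))

# Hypothetical factors and coefficients for illustration only.
coefs = {"bmi_over_30": 0.62, "gallbladder_wall_thickened": 1.18}
points = points_from_coefficients(coefs, scale=0.3)
```

    External validation then amounts to computing the score for every patient in an independent cohort and checking, via the AUC, how well it separates long from short operations, which is where the 0.708 figure above comes from.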