
    Performance of the distributed central analysis in BaBar

    The total dataset produced by the BaBar experiment at the Stanford Linear Accelerator Center (SLAC) currently comprises roughly 3 × 10^9 data events and an equal number of simulated events, corresponding to 23 Tbytes of real data and 51 Tbytes of simulated events. Since individual analyses typically select a very small fraction of all events, it would be extremely inefficient if each analysis had to process the full dataset. A first, centrally managed analysis step is therefore a common pre-selection (‘skimming’) of all data according to very loose, inclusive criteria to facilitate data access for later analysis. Usually, there are common selection criteria for several analyses; however, these may change over time, e.g., when new analyses are developed
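The skimming step described above amounts to routing every event into each pre-selection stream whose loose, inclusive criteria it satisfies. A minimal sketch, with entirely hypothetical event fields, skim names, and cut values (none are from BaBar):

```python
# Hypothetical sketch of a centrally managed "skimming" pre-selection.
# Event structure, skim names, and cut values are invented for illustration.

def make_skims(events, skim_criteria):
    """Route each event into every skim whose loose, inclusive
    selection it passes; one event may enter several skims."""
    skims = {name: [] for name in skim_criteria}
    for event in events:
        for name, passes in skim_criteria.items():
            if passes(event):
                skims[name].append(event)
    return skims

# Hypothetical loose criteria: each selects a broad superset of the
# events any individual analysis will later need.
criteria = {
    "two_tracks": lambda e: e["n_tracks"] >= 2,
    "high_energy": lambda e: e["total_energy"] > 5.0,
}

events = [
    {"n_tracks": 3, "total_energy": 9.1},
    {"n_tracks": 1, "total_energy": 6.2},
    {"n_tracks": 4, "total_energy": 2.0},
]
skims = make_skims(events, criteria)
print({k: len(v) for k, v in skims.items()})  # → {'two_tracks': 2, 'high_energy': 2}
```

Because the criteria are inclusive rather than exclusive, a later analysis only reads the (much smaller) skim it needs instead of the full dataset.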

    Integrated electronic prescribing and robotic dispensing: a case study

    INTRODUCTION: To quantify the benefits of electronic prescribing directly linked to a robotic dispensing machine. CASE DESCRIPTION: Quantitative case study analysis is used on a single case. Hospital A (1,000 beds) has used an integrated electronic prescribing system for 10 years, and in 2009 linked two robotic dispensing machines to the system. The impact on dispensing error rates (quality) and efficiency (costs) was assessed. EVALUATION AND DISCUSSION: The implementation delivered staff efficiencies above expectation. For the out-patient department, this was 16% more than the business case had suggested. For the in-patient dispensary, four staff were released for re-deployment. Additionally, £500,000 in stockholding efficiency above that suggested by the business case was identified. Overall dispensing error rates were not adversely affected, and products dispensed by the electronic prescribing–robot system produced zero dispensing errors. The speed of dispensing also increased, as the electronic prescribing–robot combination permitted almost instantaneous dispensing from the point of a doctor entering a prescription. CONCLUSION: It was significant that the combination of electronic prescribing and a robot eliminated dispensing errors. Any errors that did occur did not arise from the electronic prescribing–robot system (i.e. the product was not stocked within the robot). Directly linking electronic prescribing and robots as a dispensing system produces efficiencies and improves the quality of the dispensing process

    Health system performance assessment in small countries: The case study of Latvia

    Managing the complexity that characterizes health systems requires sophisticated performance assessment information to support the decision‐making processes of healthcare stakeholders at various levels. Accordingly, in the past few decades, many countries have designed and implemented health system performance assessment (HSPA) programmes. Literature and practice agree on the key features that performance measurement in health should have, namely, multidimensionality, evidence‐based data collection, systematic benchmarking of results, shared design, transparent disclosure, and timeliness. Nevertheless, the specific characteristics of different countries may pose challenges in the implementation of such programmes. In the case of small countries, many of these challenges are common and related to their inherent characteristics, e.g., small populations, small volumes of activity for certain treatments, and lack of benchmarks. Through the development of the case study of Latvia, this paper aims to discuss the challenges and opportunities for assessing health system performance in a small country. As a result, for each of the performance measurement features identified by the literature, the authors discuss the issues emerging when adopting them in Latvia and set out the potential solutions that have been designed during the development of the case study

    To test or to treat? An analysis of influenza testing and antiviral treatment strategies using economic computer modeling

    Background: Due to the unpredictable burden of pandemic influenza, the best strategy to manage testing, such as rapid or polymerase chain reaction (PCR), and antiviral medications for patients who present with influenza-like illness (ILI) is unknown. Methodology/Principal Findings: We developed a set of computer simulation models to evaluate the potential economic value of seven strategies under seasonal and pandemic influenza conditions: (1) using clinical judgment alone to guide antiviral use, (2) using PCR to determine whether to initiate antivirals, (3) using a rapid (point-of-care) test to determine antiviral use, (4) using a combination of a point-of-care test and clinical judgment, (5) using clinical judgment and confirming the diagnosis with PCR testing, (6) treating all with antivirals, and (7) not treating anyone with antivirals. For healthy younger adults (<65 years old) presenting with ILI in a seasonal influenza scenario, strategies were only cost-effective from the societal perspective. Clinical judgment, followed by PCR and point-of-care testing, was found to be cost-effective given a high influenza probability. Doubling hospitalization risk and mortality (representing either higher risk individuals or more virulent strains) made using clinical judgment to guide antiviral decision-making cost-effective, as well as PCR testing, point-of-care testing, and point-of-care testing used in conjunction with clinical judgment. For older adults (≄65 years old), in both seasonal and pandemic influenza scenarios, employing PCR was the most cost-effective option, with the closest competitor being clinical judgment (when judgment accuracy ≄50%). Point-of-care testing plus clinical judgment was cost-effective with higher probabilities of influenza. Treating all symptomatic ILI patients with antivirals was cost-effective only in older adults. 
Conclusions/Significance: Our study delineated the conditions under which different testing and antiviral strategies may be cost-effective, showing the importance of accuracy, as seen with PCR or highly sensitive clinical judgment. © 2010 Lee et al
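The kind of comparison these models perform can be illustrated with a toy expected-cost calculation for a "test, then treat positives" strategy versus treating everyone. All probabilities and costs below are invented placeholders, not values from the study:

```python
# Toy expected-value sketch of a "test, then treat positives" strategy.
# All probabilities and costs are invented placeholders, not study values.

def expected_cost(p_flu, sens, spec, test_cost, antiviral_cost, untreated_flu_cost):
    """Expected cost per patient: everyone is tested; positives are treated."""
    # Fraction treated = true positives + false positives.
    treated = p_flu * sens + (1 - p_flu) * (1 - spec)
    # False negatives are flu cases left untreated.
    missed = p_flu * (1 - sens)
    return test_cost + treated * antiviral_cost + missed * untreated_flu_cost

def treat_all_cost(antiviral_cost):
    """'Treat all' strategy: every patient gets the antiviral, no test."""
    return antiviral_cost

# Hypothetical numbers: PCR (sensitive, expensive) vs point-of-care (cheaper, less sensitive)
pcr = expected_cost(0.3, 0.98, 0.99, 100, 60, 400)
poc = expected_cost(0.3, 0.70, 0.95, 20, 60, 400)
print(round(pcr, 2), round(poc, 2), treat_all_cost(60))
```

A full cost-effectiveness model would also weigh health outcomes (e.g. QALYs) and hospitalization risk, which is why the preferred strategy in the study shifts with age group, influenza probability, and virulence.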

    Boundary-layer effects on acoustic transmission through narrow slit cavities

    We explore the slit-width dependence of the resonant transmission of sound in air through both a slit array formed of aluminum slats and a single open-ended slit cavity in an aluminum plate. Our experimental results accord well with Lord Rayleigh's theory concerning how thin viscous and thermal boundary layers at a slit's walls affect the acoustic wave across the whole slit cavity. By measuring accurately the frequencies of the Fabry-Perot-like cavity resonances, we find a significant 5% reduction in the effective speed of sound through the slits when an individual viscous boundary layer occupies only 5% of the total slit width. Importantly, this effect is true for any airborne slit cavity, with the reduction being achieved despite the slit width being on a far larger scale than an individual boundary layer's thickness. This work demonstrates that the recent prevalent loss-free treatment of narrow slit cavities within acoustic metamaterials is unrealistic. The authors would like to thank DSTL for their financial support
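The reported 5% reduction in effective sound speed translates directly into a measurable downward shift of the Fabry-Perot-like resonance frequencies. A rough numeric illustration with a hypothetical cavity length (only the 5% speed reduction is taken from the abstract):

```python
# Numeric illustration: a reduced effective sound speed shifts the
# Fabry-Perot-like cavity resonances downward. The cavity length is a
# hypothetical choice; only the 5% reduction comes from the abstract.

c_free = 343.0          # free-space speed of sound in air, m/s
c_eff = 0.95 * c_free   # 5% reduction reported when a single viscous
                        # boundary layer spans ~5% of the slit width
L = 0.05                # hypothetical slit-cavity length (plate thickness), m

def fabry_perot_freqs(c, length, n_max=3):
    """Open-ended cavity resonances f_n ≈ n*c/(2L), ignoring end corrections."""
    return [n * c / (2 * length) for n in range(1, n_max + 1)]

for f0, f in zip(fabry_perot_freqs(c_free, L), fabry_perot_freqs(c_eff, L)):
    print(f"{f0:7.1f} Hz -> {f:7.1f} Hz")
```

Each resonance shifts down by the same 5% fraction, which is why precise measurement of the resonance frequencies reveals the boundary-layer effect even though the layers themselves are far thinner than the slit.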

    Molecular characterisation and epidemiological investigation of an outbreak of blaOXA-181 carbapenemase-producing isolates of Klebsiella pneumoniae in South Africa

    Background. Klebsiella pneumoniae is an opportunistic pathogen often associated with nosocomial infections. A suspected outbreak of K. pneumoniae isolates exhibiting reduced susceptibility to carbapenem antibiotics was detected during May 2012 among patients admitted to a haematology unit of a tertiary academic hospital in Cape Town, South Africa (SA). Objectives. An investigation was done to determine possible epidemiological links between the case patients and to describe the mechanisms of carbapenem resistance of these bacterial isolates. Methods. Relevant demographic, clinical and laboratory information was extracted from hospital records, and an observational review of infection prevention and control practices in the affected unit was performed. Antimicrobial susceptibility testing, including phenotypic testing and genotypic detection of the most commonly described carbapenemase genes, was done. The phylogenetic relationship of all isolates containing the blaOXA-181 carbapenemase gene was determined by pulsed-field gel electrophoresis (PFGE) and multilocus sequence typing. Results. Polymerase chain reaction analysis identified a total of seven blaOXA-181-positive, carbapenem-resistant K. pneumoniae isolates obtained from seven patients, all from a single unit. These isolates were indistinguishable by PFGE analysis and belonged to sequence type ST-14. No other carbapenemase enzymes were detected. Conclusion. This is the first documented laboratory-confirmed outbreak of OXA-181-producing K. pneumoniae in SA, and highlights the importance of enforcing strict adherence to infection control procedures and the need for ongoing surveillance of antibiotic-resistant pathogens in local hospitals

    Improved model identification for non-linear systems using a random subsampling and multifold modelling (RSMM) approach

    In non-linear system identification, the available observed data are conventionally partitioned into two parts: the training data that are used for model identification and the test data that are used for model performance testing. This sort of 'hold-out' or 'split-sample' data partitioning method is convenient, and the associated model identification procedure is in general easy to implement. The resultant model obtained from such a once-partitioned single training dataset, however, may occasionally lack robustness and generalisation to represent future unseen data, because the performance of the identified model may be highly dependent on how the data partition is made. To overcome the drawback of the hold-out data partitioning method, this study presents a new random subsampling and multifold modelling (RSMM) approach to produce less biased or preferably unbiased models. The basic idea and the associated procedure are as follows. First, generate K training datasets (and also K validation datasets), using a K-fold random subsampling method. Secondly, detect significant model terms and identify a common model structure that fits all the K datasets using a newly proposed common model selection approach, called the multiple orthogonal search algorithm. Finally, estimate and refine the model parameters for the identified common-structured model using a multifold parameter estimation method. The proposed method can produce robust models with better generalisation performance
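The first step of the RSMM procedure, generating K training/validation dataset pairs by random subsampling, can be sketched as follows; the split ratio and K are illustrative choices, not values prescribed by the paper:

```python
# Sketch of RSMM step 1: K training/validation pairs by random subsampling.
# Unlike classic cross-validation, each pair comes from an independent
# random shuffle of the full dataset, so validation sets may overlap.
# The split ratio and K below are illustrative choices.

import random

def random_subsample(data, k=5, train_frac=0.8, seed=0):
    """Return K (training, validation) pairs drawn by random subsampling."""
    rng = random.Random(seed)  # seeded for reproducibility
    n_train = int(len(data) * train_frac)
    pairs = []
    for _ in range(k):
        shuffled = data[:]          # copy, keep the original intact
        rng.shuffle(shuffled)
        pairs.append((shuffled[:n_train], shuffled[n_train:]))
    return pairs

data = list(range(20))
pairs = random_subsample(data, k=3)
print(len(pairs), len(pairs[0][0]), len(pairs[0][1]))  # → 3 16 4
```

Steps 2 and 3 of the paper then search for one model structure that fits all K training sets and pool the K parameter estimates, which is what makes the final model less sensitive to any single partition.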
