Hospital deaths and adverse events in Brazil
Abstract

Background: Adverse events are considered a major international problem related to the performance of health systems. Evaluating the occurrence of adverse events involves, as with any other outcome measure, determining the extent to which observed differences can be attributed to the patient's risk factors or to variations in the treatment process, which in turn highlights the importance of measuring differences in case severity. The current study aims to evaluate the association between deaths and adverse events, adjusted for patient risk factors.

Methods: The study is based on a random sample of 1103 patient charts from hospitalizations in the year 2003 in 3 teaching hospitals in the state of Rio de Janeiro, Brazil. The methodology involved a retrospective review of patient charts in two stages: a screening phase and an evaluation phase. Logistic regression was used to evaluate the relationship between hospital deaths and adverse events.

Results: The overall mortality rate was 8.5%, while the rate related to the occurrence of an adverse event was 2.9% (32/1103) and that related to preventable adverse events was 2.3% (25/1103). Among the 94 deaths analyzed, 34% were related to cases involving adverse events, and 26.6% of deaths occurred in cases whose adverse events were considered preventable. The models tested showed good discriminatory capacity. Both the unadjusted odds ratio (OR 11.43) and the odds ratio adjusted for patient risk factors (OR 8.23) between death and preventable adverse events were high.

Conclusions: Despite discussions in the literature regarding the limitations of evaluating preventable adverse events through peer review, the results presented here emphasize that adverse events are not only prevalent, but are associated with serious harm and even death. These results also highlight the importance of risk adjustment and multivariate models in the study of adverse events.
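The unadjusted odds ratio reported above can be computed directly from a 2×2 contingency table of outcome (death) by exposure (preventable adverse event). A minimal sketch, using entirely hypothetical counts since the abstract does not report the full table:

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio for a 2x2 table (outcome x exposure)."""
    return (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)

# Hypothetical counts, NOT the study's data: 20 deaths and 30 survivors
# among patients with a preventable adverse event; 10 deaths and 140
# survivors among patients without one.
or_unadjusted = odds_ratio(20, 30, 10, 140)
print(round(or_unadjusted, 2))  # 9.33
```

The adjusted odds ratio in the abstract additionally conditions on patient risk factors, which in practice means exponentiating the exposure coefficient of a multivariate logistic regression rather than using the raw table.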
Server-side parallel data reduction and analysis
Geoscience analysis is currently limited by cumbersome access and manipulation of large datasets from remote sources. Due to their data-heavy and compute-light nature, these analysis workloads represent a class of applications unsuited to a computational grid optimized for compute-intensive applications. We present the Script Workflow Analysis for Multiprocessing (SWAMP) system, which relocates data-intensive workflows from scientists' workstations to the hosting datacenters in order to reduce data transfer and exploit locality. Our colocation of computation and data leverages the typically reductive characteristics of these workflows, allowing SWAMP to complete workflows in a fraction of the time and with much less data transfer. We describe SWAMP's implementation and interface, which is designed to leverage scientists' existing script-based workflows. Tests with a production geoscience workflow show drastic improvements not only in overall execution time, but in computation time as well. SWAMP's workflow analysis capability allows it to detect dependencies, optimize I/O, and dynamically parallelize execution. Benchmarks quantify the drastic reduction in transfer time, computation time, and end-to-end execution time. © Springer-Verlag Berlin Heidelberg 2007
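The dependency detection that underlies SWAMP's dynamic parallelization can be illustrated with a small sketch: model each script command by the files it reads and writes, and derive a dependency graph by matching each command's inputs to the most recent earlier writer. This is an illustrative reconstruction under an assumed command/file model, not SWAMP's actual implementation:

```python
from collections import namedtuple

# A script command modeled purely by its file inputs and outputs
# (an assumed simplification; SWAMP analyzes real shell scripts).
Cmd = namedtuple("Cmd", ["name", "reads", "writes"])

def build_dependencies(cmds):
    """Map each command index to the earlier commands producing its inputs."""
    producer = {}                               # file -> index of last writer
    deps = {i: set() for i in range(len(cmds))}
    for i, cmd in enumerate(cmds):
        for f in cmd.reads:
            if f in producer:
                deps[i].add(producer[f])        # read-after-write dependency
        for f in cmd.writes:
            producer[f] = i
    return deps

# Hypothetical three-command workflow: two independent subsets, one merge.
script = [
    Cmd("subset_a", reads={"input.nc"}, writes={"a.nc"}),
    Cmd("subset_b", reads={"input.nc"}, writes={"b.nc"}),
    Cmd("merge",    reads={"a.nc", "b.nc"}, writes={"out.nc"}),
]
deps = build_dependencies(script)
print(deps)  # {0: set(), 1: set(), 2: {0, 1}}
```

Commands with disjoint dependency sets (here the two subset commands) can be scheduled concurrently, which is the basis of the I/O optimization and parallel execution the abstract describes.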
Clustered Workflow Execution of Retargeted Data Analysis Scripts
Supercomputing advances have enabled computational science data volumes to grow at ever-increasing rates, commonly resulting in more data produced than can be practically analyzed. Whole-dataset download costs have grown to impractical heights, even with multi-Gbps networks, forcing scientists to rely on server-side subsetting and limiting the scope of data they can analyze on a workstation. Our system supplements existing scientific data services with lightweight computational capability, providing a means of safely relocating analysis from the desktop to the server, where clustered execution can be coordinated, exploiting data locality, reducing unnecessary data transfer, and delivering results to end-users several times faster. We show how dataflow and other compiler-inspired analyses of shell scripts written with scientists' most common analysis tools enable parallelization and optimization of disk and network I/O bandwidth. We benchmark using an actual geoscience analysis script, illustrating the crucial performance gains of extracting workflows defined in scripts and optimizing their execution. Current results quantify significant improvements in performance, showing the promise of bringing transparent high-performance analysis to the scientist's desktop. © 2008 IEEE
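The coordinated clustered execution described above reduces to scheduling stages concurrently once all of their inputs exist. A minimal sketch of level-by-level parallel execution with a thread pool; the stage names and dependency map are hypothetical, and the level-based scheduler is an assumed simplification of whatever scheduling the real system uses:

```python
from concurrent.futures import ThreadPoolExecutor

def run_in_levels(deps, run):
    """Execute tasks level by level: a task runs once all of its
    dependencies have completed; tasks in the same level run in parallel."""
    done = set()
    order = []
    with ThreadPoolExecutor() as pool:
        while len(done) < len(deps):
            ready = [t for t in deps if t not in done and deps[t] <= done]
            if not ready:
                raise ValueError("cycle in dependency graph")
            list(pool.map(run, ready))   # this level runs concurrently
            done |= set(ready)
            order.append(sorted(ready))
    return order

# Hypothetical 4-stage workflow: two independent subsets, a merge, a plot.
deps = {"subset_a": set(), "subset_b": set(),
        "merge": {"subset_a", "subset_b"}, "plot": {"merge"}}
levels = run_in_levels(deps, run=lambda task: None)
print(levels)  # [['subset_a', 'subset_b'], ['merge'], ['plot']]
```

Running whole levels at a time is simpler than launching each task the instant its last dependency finishes, at the cost of some idle workers; it is enough to show where the parallelism in a reductive analysis script comes from.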