39 research outputs found
Influences on Patient Satisfaction in Healthcare Centers: A Semi-Quantitative Study Over 5 Years
BACKGROUND: Knowledge of ambulatory patients' satisfaction with clinic visits helps improve communication and delivery of healthcare. The goal was to examine patient satisfaction in a primary care setting, identify how selected patient, physician, and setting characteristics affected satisfaction, and determine whether feedback provided to medical directors over time impacted patient satisfaction.
METHODS: A three-phase, semi-quantitative analysis was performed using anonymous, validated patient satisfaction surveys collected from 889 ambulatory outpatients in 6 healthcare centers over 5 years. Patients' responses to 21 questions were analyzed by principal components factor analysis with varimax rotation. Three classifiable components emerged: Satisfaction with Physician, Availability/Convenience, and Orderly/Time. To study the effects of several independent variables (location of the clinics and the patients' and physicians' age, education level, and duration at the clinic), data were subjected to multivariate analysis of variance (MANOVA).
RESULTS: Changes in the healthcare centers over time were not significantly related to patient satisfaction. However, location of the center did affect satisfaction. Urban patients were more satisfied with their physicians than rural patients, and inner-city patients were less satisfied than urban or rural patients on Availability/Convenience and less satisfied than urban patients on Orderly/Time. How long a patient had attended a center most affected satisfaction, with patients attending >10 years more satisfied in all three components than those attending […] 60 years old. Patients were significantly more satisfied with their 30-40-year-old physicians compared with those over 60. On Orderly/Time, patients were more satisfied with physicians in their 50s than with physicians >60.
CONCLUSIONS: Improving patient satisfaction requires immediate, specific feedback. Although medical directors received feedback yearly, we found no significant changes in patient satisfaction over time. Our results suggest that, to increase satisfaction, patients with lower education, those who are sicker, and those who are new to the center would likely benefit from additional high-quality interactions with their physicians.
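The analysis pipeline in the methods above (21 survey items reduced to three varimax-rotated principal components) can be sketched as follows. This is a hypothetical illustration only: the responses are random placeholders, not the study's data, and only the shapes (889 patients, 21 questions, 3 components) match the abstract.

```python
import numpy as np

# Placeholder Likert-style answers: 889 patients x 21 questions (values 1..5).
rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(889, 21)).astype(float)

# Principal components: eigenvectors of the item correlation matrix.
Xc = (X - X.mean(0)) / X.std(0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Xc, rowvar=False))
loadings = eigvecs[:, ::-1][:, :3] * np.sqrt(eigvals[::-1][:3])  # top 3 components

def varimax(L, n_iter=100):
    """Orthogonally rotate loadings L to maximize the variance of squared loadings."""
    p, k = L.shape
    R = np.eye(k)
    for _ in range(n_iter):
        LR = L @ R
        u, _, vt = np.linalg.svd(L.T @ (LR**3 - LR @ np.diag((LR**2).sum(0)) / p))
        R = u @ vt
    return L @ R

rotated = varimax(loadings)  # 21 items x 3 components; items loading heavily on a
print(rotated.shape)         # component would then be inspected and the component named
```

With real survey data, the items loading strongly on each rotated component are what justify labels such as "Satisfaction with Physician" or "Orderly/Time".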
Green Queue: Customized large-scale clock frequency scaling
Abstract: We examine the scalability of a set of techniques related to Dynamic Voltage-Frequency Scaling (DVFS) on HPC systems to reduce the energy consumption of scientific applications through an application-aware analysis and runtime framework, Green Queue. Green Queue supports making CPU clock frequency changes in response to intra-node and inter-node observations about application behavior. Our intra-node approach reduces CPU clock frequencies, and therefore power consumption, while CPUs lack computational work due to inefficient data movement. Our inter-node approach reduces clock frequencies for MPI ranks that lack computational work. We investigate these techniques on a set of large scientific applications on 1024 cores of Gordon, an Intel Sandy Bridge-based supercomputer at the San Diego Supercomputer Center. Our optimal intra-node technique showed an average measured energy savings of 10.6% and a maximum of 21.0% over regular application runs. Our optimal inter-node technique showed an average of 17.4% and a maximum of 31.7% energy savings.
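The intra-node idea above (run the CPU slower when it is starved for data, at full clock when it is compute-bound) can be sketched with a simple frequency-selection policy. Everything here is an illustrative assumption: the frequency table, the `flops_per_byte` intensity metric, and the threshold are not values from the paper.

```python
# Hypothetical P-state table (MHz); real tables come from the platform's cpufreq driver.
AVAILABLE_FREQS_MHZ = [1200, 1600, 2100, 2600]

def choose_frequency(flops_per_byte: float, threshold: float = 0.5) -> int:
    """Pick a target clock: scale down in proportion to how memory-bound the region is."""
    if flops_per_byte >= threshold:
        return AVAILABLE_FREQS_MHZ[-1]  # compute-bound: full speed
    # Memory-bound: pick a proportionally lower P-state, never below the minimum.
    idx = int(len(AVAILABLE_FREQS_MHZ) * flops_per_byte / threshold)
    return AVAILABLE_FREQS_MHZ[max(0, min(idx, len(AVAILABLE_FREQS_MHZ) - 1))]

# A strongly memory-bound region drops to the lowest clock; a compute-bound one does not.
print(choose_frequency(0.05), choose_frequency(2.0))
```

On Linux the chosen value would typically be applied per core through the cpufreq sysfs interface (e.g. the `userspace` governor's `scaling_setspeed`), which requires elevated privileges; a runtime like Green Queue would make such changes at phase boundaries identified by its application analysis.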
High-frequency simulations of global seismic wave propagation using SPECFEM3D_GLOBE on 62K processors
SPECFEM3D_GLOBE is a spectral element application enabling the simulation of global seismic wave propagation in 3D anelastic, anisotropic, rotating, and self-gravitating Earth models at unprecedented resolution. A fundamental challenge in global seismology is to model the propagation of waves with periods between 1 and 2 seconds, the highest-frequency signals that can propagate clear across the Earth. These waves help reveal the 3D structure of the Earth's deep interior and can be compared to seismographic recordings. We broke the 2-second barrier using the 62K-processor Ranger system at TACC. Indeed, we broke the barrier using just half of Ranger, reaching a period of 1.84 seconds with a sustained 28.7 Tflops on 32K processors. We obtained similar results on the XT4 Franklin system at NERSC and the XT4 Kraken system at the University of Tennessee, Knoxville, while a similar run on the 28K-processor Jaguar system at ORNL, which has better memory bandwidth per processor, sustained 35.7 Tflops (a higher flops rate) with a shortest period of 1.94 seconds.
Thus we have enabled a powerful new tool for seismic wave simulation, one that operates in the same frequency regimes as nature; in seismology there is no need to pursue periods much smaller because higher frequency signals do not propagate across the entire globe.
We employed performance modeling methods to identify performance bottlenecks and worked through issues of parallel I/O and scalability. Improved mesh design and numbering result in excellent load balancing and few cache misses. The primary achievements are not just the scalability and the high teraflops number, but a historic step towards understanding the physics and chemistry of the Earth's interior at unprecedented resolution.
Success of Endoscopic Pharyngoesophageal Dilation after Head and Neck Cancer Treatment
To assess the clinical success and safety of endoscopic pharyngoesophageal dilation after chemoradiation or radiation for head and neck cancer, and to identify variables associated with dilation failure.
Från jord till bord 2011/2012 : From farm to fork 2011/2012
Comparisons overall and by time, location, and individual HCCs vs. the 3 components. (DOC 41 kb)
Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team
The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort at optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten-year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH, and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to fully exploit machines in future facilities.
A framework for performance modeling and prediction
Cycle-accurate simulation is far too slow for modeling the expected performance of full parallel applications on large HPC systems, while simply running an application on a system and observing wall-clock time reveals nothing about why the application performs as it does (and is impossible on yet-to-be-built systems). Here we present a framework for performance modeling and prediction that is faster than cycle-accurate simulation, more informative than simple benchmarking, and shown to be useful for performance investigations in several dimensions.
A performance prediction framework for scientific applications
Abstract. This work presents a performance modeling framework, developed by the Performance Modeling and Characterization (PMaC) Lab at the San Diego Supercomputer Center, that is faster than traditional cycle-accurate simulation, more sophisticated than performance estimation based on system peak-performance metrics, and is shown to be effective on the LINPACK benchmark and a synthetic version of an ocean modeling application (NLOM). The LINPACK benchmark is further used to investigate methods to reduce the time required to make accurate performance predictions with the framework. These methods are applied to the predictions of the synthetic NLOM application.
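The core idea of such a framework, as described above, is to combine a machine profile (rates measured by synthetic probes) with an application signature (operation counts gathered by tracing) to project a time-to-solution. The sketch below illustrates that convolution with made-up placeholder numbers; the field names and the no-overlap assumption are illustrative, not the PMaC framework's actual model.

```python
# Machine profile: achievable rates, as a memory/network probe might measure them.
machine = {
    "flops_per_sec": 8e9,
    "mem_bytes_per_sec": 4e9,
    "net_bytes_per_sec": 1e9,
    "net_latency_sec": 2e-6,
}

# Application signature: counted work, as an instruction/MPI trace might report it.
signature = {
    "flops": 1.2e12,
    "mem_bytes": 6e11,
    "messages": 5e4,
    "net_bytes": 2e10,
}

def predict_time(machine, sig):
    """Project time-to-solution by dividing counted work by achievable rates."""
    compute = sig["flops"] / machine["flops_per_sec"]
    memory = sig["mem_bytes"] / machine["mem_bytes_per_sec"]
    comm = (sig["messages"] * machine["net_latency_sec"]
            + sig["net_bytes"] / machine["net_bytes_per_sec"])
    # Simple upper-bound model: no overlap of compute, memory, and communication.
    return compute + memory + comm

print(round(predict_time(machine, signature), 2))  # seconds
```

The value of the separation is that the same application signature can be convolved with profiles of machines that do not yet exist, which is exactly what cycle-accurate simulation is too slow to do at full-application scale.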
Performance sensitivity studies for strategic applications
This paper applies modeling and simulation to key HPCMP systems and applications to determine the degree to which fundamental system attributes affect application performance. Synthetic probes are used to ascertain target system capabilities, while application tracing is used to uncover the memory and communication usage characteristics of target codes. A predictive model subsequently melds system and application data in order to project a time-to-solution for each application and system pair. System attributes are then systematically modified, and the predictive model is again applied, to determine the sensitivity of application performance to key system attributes. Time-to-solution predictions for five application test cases…
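The sensitivity procedure described above (perturb one system attribute at a time and re-apply the predictive model) can be sketched as below. The attribute names, rates, and the simple additive time model are illustrative assumptions, not the paper's actual model or measurements.

```python
# Baseline system attributes and a made-up application signature.
base = {"flops_per_sec": 8e9, "mem_bytes_per_sec": 4e9, "net_bytes_per_sec": 1e9}
app = {"flops": 1.2e12, "mem_bytes": 6e11, "net_bytes": 2e10}

def time_to_solution(sys):
    """Additive time model: counted work divided by each achievable rate."""
    return (app["flops"] / sys["flops_per_sec"]
            + app["mem_bytes"] / sys["mem_bytes_per_sec"]
            + app["net_bytes"] / sys["net_bytes_per_sec"])

baseline = time_to_solution(base)
for attr in base:
    faster = dict(base, **{attr: base[attr] * 2})  # double one capability at a time
    speedup = baseline / time_to_solution(faster)
    print(f"2x {attr}: speedup {speedup:.2f}")
```

For this synthetic application, doubling either the flop rate or the memory bandwidth helps far more than doubling network bandwidth; the paper performs this kind of sweep with measured probes and traced applications to rank the attributes that actually matter for each code.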