A structured framework for improving outbreak investigation audits
Outbreak investigation is a core function of public health agencies. Suboptimal outbreak investigation endangers both public health and agency reputations. While audits of clinical medical and nursing practice are conducted as part of continuous quality improvement, public health agencies rarely make systematic use of structured audits to ensure best practice in outbreak responses, and there is limited guidance or policy to guide outbreak audits. A framework for prioritising which outbreak investigations to audit, an approach for conducting a successful audit, and a template of audit trigger questions were developed and trialled in four foodborne outbreaks and a respiratory disease outbreak in Australia. The following issues were identified across several structured audits: the need for clear definitions of roles and responsibilities both within and between agencies, improved communication between agencies and with external stakeholders involved in outbreaks, and the need to develop performance standards for outbreak investigations, particularly in relation to timeliness of response. Participants considered the audit process and methodology to be clear, useful, and non-threatening. Most audits can be conducted within two to three hours; however, some participants felt this limited the scope of the audit. The framework was acceptable to participants, provided an opportunity for clarifying perceptions and enhancing partnership approaches, and produced useful recommendations for approaching future outbreaks. Future challenges include incorporating feedback from broader stakeholder groups (for example, affected cases, institutions and businesses); assessing the quality of a specific audit; developing training for both participants and facilitators; and building a central capacity to support jurisdictions embarking on an audit. The incorporation of measurable performance criteria, or the sharing of benchmark performance criteria, will assist in the standardisation of outbreak investigation audits and further quality improvement.
Achieving a desired training intensity through the prescription of external training load variables in youth sport; more pieces to the puzzle required
Identifying the external training load variables which influence subjective internal response will help reduce the mismatch between coach-intended and athlete-perceived training intensity. Therefore, this study aimed to reduce external training load measures into distinct principal components (PCs), plot internal training response (quantified via session rating of perceived exertion [sRPE]) against the identified PCs, and investigate how the prescription of PCs influences subjective internal training response. Twenty-nine school- to international-level youth athletes wore microtechnology units for field-based training sessions. sRPE was collected post-session and assigned to the microtechnology unit data for the corresponding training session. 198 rugby union, 145 field hockey and 142 soccer observations were analysed. The external training variables were reduced to two PCs for each sport, cumulatively explaining 91%, 96% and 91% of sRPE variance in rugby union, field hockey and soccer, respectively. However, when internal response was plotted against the PCs, the lack of separation between low-, moderate- and high-intensity training sessions precluded further analysis, as the prescription of the PCs does not appear to distinguish subjective session intensity. A coach may therefore wish to consider the multitude of physiological, psychological and environmental factors which influence sRPE alongside external training load prescription.
The relative contribution of training intensity and duration to daily measures of training load in professional rugby league and union
This study examined the relative contribution of exercise duration and intensity to team-sport athletes' training load. Male professional rugby league (n = 10) and rugby union (n = 22) players were monitored over 6- and 52-week training periods, respectively. Whole-session (load) and per-minute (intensity) metrics were monitored (league: session rating of perceived exertion training load [sRPE-TL], individualised training impulse, total distance, BodyLoad™; union: sRPE-TL, total distance, high-speed running distance, PlayerLoad™). Separate principal component analyses were conducted on the load and intensity measures to consolidate the raw data into principal components (PC, k = 4). The first load PC captured 70% and 74% of the total variance in the rugby league and rugby union datasets, respectively. Multiple linear regression subsequently revealed that session duration explained 73% and 57% of the variance in the first load PC, respectively, while the four intensity PCs explained an additional 24% and 34%, respectively. Across two professional rugby training programmes, the majority of the variability in training load measures was explained by session duration (~60-70%), while a smaller proportion was explained by session intensity (~30%). When modelling training load, training intensity and duration should be disaggregated to better account for their between-session variability.
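The two-step analysis this abstract describes (a principal component analysis consolidating correlated whole-session load metrics, then regressing session duration against the first load PC) can be sketched on synthetic data. The variable names, the data-generating assumptions, and the use of scikit-learn are all illustrative; this is not the study's actual code:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n = 200
duration = rng.uniform(30, 120, n)      # session duration (min)
intensity = rng.uniform(0.8, 1.2, n)    # per-minute intensity factor
# Whole-session load metrics scale roughly as duration x intensity
# (column roles are illustrative, not the study's measured variables).
loads = np.column_stack([
    duration * intensity * rng.normal(1.0, 0.05, n),  # e.g. sRPE-TL
    duration * intensity * rng.normal(1.0, 0.05, n),  # e.g. total distance
    duration * intensity * rng.normal(1.0, 0.05, n),  # e.g. accelerometer load
])

# Step 1: consolidate the correlated load measures into principal components.
pca = PCA(n_components=2)
pcs = pca.fit_transform(loads)

# Step 2: how much of the first load PC does duration alone explain?
X = duration.reshape(-1, 1)
r2 = LinearRegression().fit(X, pcs[:, 0]).score(X, pcs[:, 0])
print(f"variance captured by PC1: {pca.explained_variance_ratio_[0]:.2f}")
print(f"R^2 of duration on PC1:   {r2:.2f}")
```

Because the load metrics are all roughly proportional to duration × intensity, the first PC absorbs most of their shared variance, and duration alone explains the bulk of that component, mirroring the pattern the study reports.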
Classification tree analysis of second neoplasms in survivors of childhood cancer
BACKGROUND: Reported estimates of childhood cancer survivors' cumulative probability of developing secondary neoplasms vary from 3.3% to 25% at 25 years from diagnosis, and the risk of developing another cancer is several times greater than in the general population. METHODS: In our retrospective study, we applied the classification tree multivariate method to a group of 849 first-cancer survivors to identify childhood cancer patients at the greatest risk of developing secondary neoplasms. RESULTS: In the observed group of patients, 34 developed a secondary neoplasm after treatment of their primary cancer. Analysis of parameters present at the treatment of the first cancer exposed two groups of patients at special risk for secondary neoplasms. The first comprised female patients treated for Hodgkin's disease between the ages of 10 and 15 years whose treatment included radiotherapy. The second comprised male patients with acute lymphoblastic leukemia treated between 4.6 and 6.6 years of age. CONCLUSION: The risk groups identified in our study are similar to those reported by studies using more conventional approaches. The usefulness of our approach in studying the occurrence of second neoplasms should be confirmed in a larger sample, but the user-friendly presentation of results makes it attractive for further studies.
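A classification tree analysis of this kind can be sketched as follows, on synthetic survivor data with a planted high-risk subgroup resembling the paper's first finding (females treated with radiotherapy between ages 10 and 15). The fields, group sizes and scikit-learn implementation are assumptions for illustration, not the study's data or code:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Synthetic cohort of 849 first-cancer survivors (fields are illustrative):
# sex (0 = male, 1 = female), age at first diagnosis, radiotherapy flag.
n = 849
sex = rng.integers(0, 2, n)
age = rng.uniform(0, 18, n)
radio = rng.integers(0, 2, n)

# Plant elevated risk for females treated with radiotherapy at ages 10-15.
risk = 0.02 + 0.25 * ((sex == 1) & (age >= 10) & (age <= 15) & (radio == 1))
second_neoplasm = rng.random(n) < risk

X = np.column_stack([sex, age, radio])
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=25, random_state=0)
tree.fit(X, second_neoplasm)

# The appeal noted in the conclusion: the fitted tree reads as plain rules.
print(export_text(tree, feature_names=["sex", "age", "radiotherapy"]))
```

The printed rule listing is the "user-friendly presentation" the abstract refers to: each path from root to leaf is a candidate risk group defined by simple thresholds on the treatment-time parameters.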
Internal Jugular Vein Cross-Sectional Area and Cerebrospinal Fluid Pulsatility in the Aqueduct of Sylvius: A Comparative Study between Healthy Subjects and Multiple Sclerosis Patients
Objectives Constricted cerebral venous outflow has been linked with increased cerebrospinal fluid (CSF) pulsatility in the aqueduct of Sylvius in multiple sclerosis (MS) patients and healthy individuals. This study investigates the relationship between CSF pulsatility and internal jugular vein (IJV) cross-sectional area (CSA) in these two groups, a relationship that had not previously been investigated. Methods 65 relapsing-remitting MS patients (50.8% female; mean age = 43.8 years) and 74 healthy controls (HCs) (54.1% female; mean age = 43.9 years) were investigated. CSF flow quantification was performed on cine phase-contrast MRI, while IJV-CSA was calculated using magnetic resonance venography. Statistical analysis involved correlation and partial least squares correlation analysis (PLSCA). Results PLSCA revealed a significant difference (p<0.001; effect size = 1.072) between MS patients and HCs in the positive relationship between CSF pulsatility and IJV-CSA at C5-T1, something not detected at C2-C4. Controlling for age and cardiovascular risk factors, statistical trends were identified in HCs between increased net positive CSF flow (NPF) and increased IJV-CSA at C5-C6 (left: r = 0.374, p = 0.016; right: r = 0.364, p = 0.019) and C4 (left: r = 0.361, p = 0.020), and between increased net negative CSF flow and increased left IJV-CSA at C5-C6 (r = -0.348, p = 0.026) and C4 (r = -0.324, p = 0.039), whereas in MS patients a trend was only identified between increased NPF and increased left IJV-CSA at C5-C6 (r = 0.351, p = 0.021). Overall, correlations were weaker in MS patients (p = 0.015). Conclusions In healthy adults, increased CSF pulsatility is associated with increased IJV-CSA in the lower cervix (independent of age and cardiovascular risk factors), suggesting a biomechanical link between the two. This relationship is altered in MS patients.
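The "controlling for age" step behind the study's adjusted correlations can be illustrated with a residual-based partial correlation on synthetic data: regress both variables on the covariate, then correlate the residuals. The variable roles, effect sizes and sample size here are illustrative assumptions, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: both measures drift with age, plus a genuine link between
# CSF pulsatility and IJV cross-sectional area (all values arbitrary units).
n = 74
age = rng.uniform(20, 65, n)
csf_pulsatility = 0.02 * age + rng.normal(0, 1, n)
ijv_csa = 0.5 * csf_pulsatility + 0.01 * age + rng.normal(0, 1, n)

def residualize(y, covariate):
    """Residuals of y after least-squares regression on the covariate."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Partial correlation of the two measures, adjusted for age.
r = np.corrcoef(residualize(csf_pulsatility, age),
                residualize(ijv_csa, age))[0, 1]
print(f"age-adjusted correlation r = {r:.3f}")
```

Because the planted association survives after the age component is regressed out, the residual correlation stays positive, analogous to the age-independent trends the abstract reports in healthy controls.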
Use of selective serotonin reuptake inhibitors and risk of re-operation due to post-surgical bleeding in breast cancer patients: a Danish population-based cohort study
<p>Abstract</p> <p>Background</p> <p>Selective serotonin reuptake inhibitors (SSRIs) decrease platelet function, which suggests that SSRI use may increase the risk of post-surgical bleeding. Few studies have investigated this potential association.</p> <p>Methods</p> <p>We conducted a population-based study of the risk of re-operation due to post-surgical bleeding within two weeks of primary surgery among Danish women with primary breast cancer. Patients were categorised according to their use of SSRIs: never users, current users (SSRI prescription within 30 days of initial breast cancer surgery), and former users (SSRI prescription more than 30 days before initial breast cancer surgery). We calculated the risk of re-operation due to post-surgical bleeding within 14 days of initial surgery, and the relative risk (RR) of re-operation comparing SSRI users with never users, adjusting for potential confounders.</p> <p>Results</p> <p>389 of 14,464 women (2.7%) were re-operated. 1,592 (11%) had a history of SSRI use. The risk of re-operation was 2.6% among never users, 7.0% among current SSRI users, and 2.7% among former users. Current users thus had an increased risk of re-operation due to post-operative bleeding (adjusted RR = 2.3; 95% confidence interval (CI) = 1.4, 3.9) compared with never users. There was no increased risk of re-operation associated with former use of SSRIs (RR = 0.93; 95% CI = 0.66, 1.3).</p> <p>Conclusions</p> <p>Current use of SSRIs is associated with an increased risk of re-operation due to bleeding after surgery for breast cancer.</p>
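The crude relative risk implied by the reported risks (7.0% among current users vs 2.6% among never users) can be computed directly, together with a standard Wald confidence interval on the log scale. The group counts below are assumptions, since the abstract reports risks but not the split of the 1,592 ever users into current and former users; note also that the paper's RR of 2.3 is additionally adjusted for confounders:

```python
import math

# events / group size (current-user count is an assumed split; never-user
# figures back-calculated from 2.6% of 14,464 - 1,592 women).
a, n1 = 35, 500        # current SSRI users: ~7.0% re-operated
c, n0 = 335, 12872     # never users: ~2.6% re-operated

rr = (a / n1) / (c / n0)

# Wald 95% CI: exp(ln RR +/- 1.96 * SE), SE = sqrt(1/a - 1/n1 + 1/c - 1/n0).
se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)
print(f"crude RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```

The crude ratio lands near 2.7; confounder adjustment in the paper pulls it down to the reported 2.3, which is the usual direction when SSRI users differ systematically from never users.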
Surface and Temporal Biosignatures
Recent discoveries of potentially habitable exoplanets have ignited the prospect of spectroscopic investigations of exoplanet surfaces and atmospheres for signs of life. This chapter provides an overview of potential surface and temporal exoplanet biosignatures, reviewing Earth analogues and proposed applications based on observations and models. The vegetation red-edge (VRE) remains the most well-studied surface biosignature. Extensions of the VRE, spectral "edges" produced in part by photosynthetic or nonphotosynthetic pigments, may likewise present potential evidence of life. Polarization signatures have the capacity to discriminate between biotic and abiotic "edge" features in the face of false positives from band-gap-generating material. Temporal biosignatures, that is, modulations in measurable quantities such as gas abundances (e.g., CO2), surface features, or emission of light (e.g., fluorescence, bioluminescence) that can be directly linked to the actions of a biosphere, are in general less well studied than surface or gaseous biosignatures. However, remote observations of Earth's biosphere nonetheless provide proofs of concept for these techniques and are reviewed here. Surface and temporal biosignatures provide complementary information to gaseous biosignatures, and while likely more challenging to observe, would contribute information inaccessible from study of the time-averaged atmospheric composition alone.
Comment: 26 pages, 9 figures; review to appear in Handbook of Exoplanets.
A Comparison of the Epidemiology and Clinical Presentation of Seasonal Influenza A and 2009 Pandemic Influenza A (H1N1) in Guatemala
A new influenza A (H1N1) virus was first identified in April 2009 and proceeded to cause a global pandemic. We compare the epidemiology and clinical presentation of seasonal influenza A (H1N1 and H3N2) and 2009 pandemic influenza A (H1N1) (pH1N1) using a prospective surveillance system for acute respiratory disease in Guatemala. Patients admitted to two public hospitals in Guatemala in 2008-2009 who met a pneumonia case definition, and ambulatory patients with influenza-like illness (ILI) at 10 ambulatory clinics, were invited to participate. Data were collected through patient interview, chart abstraction and standardized physical and radiological exams. Nasopharyngeal swabs were taken from all enrolled patients for laboratory diagnosis of influenza A virus infection by real-time reverse transcription polymerase chain reaction. We identified 1,744 eligible, hospitalized pneumonia patients, enrolled 1,666 (96%) and tested samples from 1,601 (96%); 138 (9%) had influenza A virus infection. Surveillance for ILI found 899 eligible patients, enrolled 801 (89%) and tested samples from 793 (99%); influenza A virus infection was identified in 246 (31%). The age distribution of hospitalized pneumonia patients was similar between seasonal H1N1 and pH1N1 (P = 0.21); the proportions of pneumonia patients <1 year old with seasonal H1N1 (39%) and pH1N1 (37%) were similar (P = 0.42). The clinical presentation of pH1N1 and seasonal influenza A was similar for both hospitalized pneumonia and ILI patients. Although signs of severity (admission to an intensive care unit, mechanical ventilation and death) were more frequent among cases of pH1N1 than seasonal H1N1, none of the differences was statistically significant. Small sample sizes may limit the power of this study to find significant differences between seasonal influenza A and pH1N1. In Guatemala, influenza, whether seasonal or pH1N1, appears to cause severe disease mainly in infants; targeted vaccination of children should be considered.