51 research outputs found
Cancer effects of formaldehyde: a proposal for an indoor air guideline value
Formaldehyde is a ubiquitous indoor air pollutant that is classified as “Carcinogenic to humans (Group 1)” (IARC, Formaldehyde, 2-butoxyethanol and 1-tert-butoxypropan-2-ol. IARC monographs on the evaluation of carcinogenic risks to humans, vol 88. World Health Organization, Lyon, pp 39–325, 2006). For nasal cancer in rats, the exposure–response relationship is highly non-linear, supporting a no-observed-adverse-effect level (NOAEL) that allows setting a guideline value. Epidemiological studies reported no increased incidence of nasopharyngeal cancer in humans below a mean exposure level of 1 ppm and peak levels below 4 ppm, consistent with results from rat studies. Rat studies indicate that cytotoxicity-induced cell proliferation (NOAEL at 1 ppm) is a key mechanism in the development of nasal cancer. However, the linear unit risk approach, which is based on conservative (“worst-case”) assumptions, is also used for risk characterization of formaldehyde exposures. Lymphohematopoietic malignancies are not observed consistently in animal studies and, if caused by formaldehyde in humans, are high-dose phenomena with non-linear exposure–response relationships. These diseases are apparently not reported in epidemiological studies at peak exposures below 2 ppm and average exposures below 0.5 ppm. At similar airborne exposure levels in rodents, the nasal cancer effect is much more prominent than lymphohematopoietic malignancies. Thus, prevention of nasal cancer is considered to prevent lymphohematopoietic malignancies as well. Based on the rat studies, the WHO guideline value (Air quality guidelines for Europe, 2nd edn. World Health Organization, Regional Office for Europe, Copenhagen, pp 87–91, 2000) of 0.08 ppm (0.1 mg m−3) formaldehyde is considered preventive of carcinogenic effects, consistent with epidemiological findings.
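The linear unit risk approach mentioned above assumes that excess lifetime cancer risk scales proportionally with lifetime average concentration, with no threshold. A minimal sketch of that extrapolation; the unit-risk value below is a hypothetical example for illustration, not a figure from this paper:

```python
# Hypothetical unit risk (excess lifetime risk per ug/m3 of continuous
# lifetime exposure); illustrative only, not a value from the abstract above.
UNIT_RISK_PER_UG_M3 = 1.3e-5

def excess_lifetime_risk(concentration_ug_m3):
    """Linear, no-threshold extrapolation: risk = unit risk x concentration."""
    return UNIT_RISK_PER_UG_M3 * concentration_ug_m3

# At 100 ug/m3, i.e. the 0.1 mg/m3 WHO guideline level:
print(f"{excess_lifetime_risk(100):.1e}")
```

A threshold (NOAEL-based) approach, by contrast, treats exposures below the NOAEL as carrying negligible risk, which is why the two methods can diverge sharply at typical indoor concentrations.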
Nitrous oxide inhalant abuse and massive pulmonary embolism in COVID-19
A patient presented to the emergency department with altered mental status and lower extremity weakness in the setting of nitrous oxide inhalant abuse and Coronavirus Disease-2019 (COVID-19) infection. He subsequently developed hypotension and severe hypoxia and was found to have a saddle pulmonary embolus (PE) with right heart strain requiring alteplase (tPA).
Correlation of carotid blood flow and corrected carotid flow time with invasive cardiac output measurements
Abstract
Background
Non-invasive measures that can accurately estimate cardiac output may help identify volume-responsive patients. This study seeks to compare two non-invasive measures (corrected carotid flow time and carotid blood flow) and their correlations with invasive reference measurements of cardiac output. Consenting adult patients (n = 51) at Massachusetts General Hospital cardiac catheterization laboratory undergoing right heart catheterization between February and April 2016 were included. Carotid ultrasound images were obtained concurrently with cardiac output measurements, obtained by the thermodilution method in the absence of severe tricuspid regurgitation and by the Fick oxygen method otherwise. Corrected carotid flow time was calculated as systole time/√cycle time. Carotid blood flow was calculated as π × (carotid diameter)2/4 × velocity time integral × heart rate. Measurements were obtained using a single carotid waveform and an average of three carotid waveforms for both measures.
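The two formulas above can be sketched in code. This is a minimal illustration with invented sample values (the timings, diameter, VTI, and heart rate are not study data); units follow common convention, with flow time in milliseconds, cycle time in seconds, diameter and VTI in cm, and heart rate in beats/min:

```python
import math

def corrected_flow_time(systole_time_ms, cycle_time_s):
    """Corrected carotid flow time: systole time / sqrt(cycle time)."""
    return systole_time_ms / math.sqrt(cycle_time_s)

def carotid_blood_flow(diameter_cm, vti_cm, heart_rate_bpm):
    """Carotid blood flow: pi * diameter^2 / 4 * VTI * heart rate (mL/min)."""
    return math.pi * diameter_cm ** 2 / 4 * vti_cm * heart_rate_bpm

# Illustrative values only, not measurements from the study
ccft = corrected_flow_time(300, 0.857)   # 0.857 s cycle, i.e. ~70 beats/min
cbf = carotid_blood_flow(0.6, 20.0, 70)
print(round(ccft, 1), round(cbf, 1))
```

Averaging over three consecutive waveforms, as in the study, amounts to applying these functions to each waveform and taking the mean of the results.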
Results
Single waveform measurements of corrected flow time did not correlate with cardiac output (ρ = 0.25, 95% CI −0.03 to 0.49, p = 0.08), but an average of three waveforms correlated significantly, although weakly (ρ = 0.29, 95% CI 0.02–0.53, p = 0.046). Carotid blood flow measurements correlated moderately with cardiac output regardless of whether a single waveform or an average of three waveforms was used: ρ = 0.44, 95% CI 0.18–0.63, p = 0.004, and ρ = 0.41, 95% CI 0.16–0.62, p = 0.004, respectively.
Conclusions
Carotid blood flow may be a better marker of cardiac output, and less subject to measurement issues, than corrected carotid flow time.
Carotid Flow Time Changes With Volume Status in Acute Blood Loss
STUDY OBJECTIVE: Noninvasive predictors of volume responsiveness may improve patient care in the emergency department. Doppler measurements of arterial blood flow have been proposed as a predictor of volume responsiveness. We seek to determine the effect of acute blood loss and a passive leg raise maneuver on corrected carotid artery flow time.
METHODS: In a prospective cohort of blood donors, we obtained a Doppler tracing of blood flow through the carotid artery before and after blood loss. Measurements of carotid flow time, cardiac cycle time, and peak blood velocity were obtained in supine position and after a passive leg raise. Measurements of flow time were corrected for pulse rate.
RESULTS: Seventy-nine donors were screened for participation; 70 completed the study. Donors had a mean blood loss of 452 mL. Mean corrected carotid artery flow time before blood loss was 320 ms (95% confidence interval [CI] 315 to 325 ms); this decreased after blood loss to 299 ms (95% CI 294 to 304 ms). A passive leg raise had little effect on mean corrected carotid artery flow time before blood loss (mean increase 4 ms; 95% CI -1 to 9 ms), but increased mean corrected carotid artery flow time after blood loss (mean increase 23 ms; 95% CI 18 to 28 ms) to predonation levels.
CONCLUSION: Corrected carotid artery flow time decreased after acute blood loss. In the setting of acute hypovolemia, a passive leg raise restored corrected carotid artery flow time to predonation levels. Further investigation of corrected carotid artery flow time as a predictor of volume responsiveness is warranted.
Multi-Institution Validation of an Emergency Ultrasound Image Rating Scale-A Pilot Study
BACKGROUND: As bedside ultrasound (BUS) is being increasingly taught and incorporated into emergency medicine practice, measurement of BUS competency is becoming more important. The commonly adopted experiential approach to BUS competency has never been validated on a large scale, and has some limitations by design.
OBJECTIVE: Our aim was to introduce and report preliminary testing of a novel emergency BUS image rating scale (URS).
METHODS: Gallbladder BUS was selected as the test case. Twenty anonymous BUS image sets (still images and clips) were forwarded electronically to 16 reviewers (13 attendings, 3 fellows) at six training sites across the United States. Each reviewer rated the BUS sets using the pilot URS that consisted of three components, with numerical values assigned to each of the following aspects: Landmarks, Image Quality, and Annotations. Reviewers also decided whether or not each BUS set would be Clinically Useful. Kendall taus were calculated as a measure of concordance among the reviewers.
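Kendall's tau, the concordance measure used here, compares every pair of rated image sets between two reviewers and counts how often the reviewers order them the same way. A minimal sketch of the tau-a variant, which assumes no tied ratings; the reviewer scores below are invented for illustration, not study data:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / total pairs.
    Assumes no tied ratings; real ordinal data may need the tau-b variant."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical total-score ratings of ten image sets by two reviewers
reviewer_a = [3, 1, 4, 2, 5, 6, 7, 9, 8, 10]
reviewer_b = [2, 1, 5, 3, 4, 7, 6, 8, 9, 10]
print(round(kendall_tau(reviewer_a, reviewer_b), 2))  # 0.82
```

A tau near 1 means the two reviewers rank the image sets almost identically; values in the 0.4–0.6 range, as reported below, indicate moderate agreement.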
RESULTS: Among the 13 attendings, image review experience ranged from 2 to 15 years and 5 to 300 scans per week, averaging 7.8 years and 60 scans per week. Kendall taus for each aspect of the URS were: Landmarks: 0.55; Image Quality: 0.57; Annotation: 0.26; Total Score: 0.63; and Clinical Usefulness: 0.45. All URS elements correlated significantly with clinical usefulness (p < 0.001). The correlation coefficient between each attending reviewer and the entire group ranged from 0.48 to 0.69 and was independent of image review experience beyond fellowship training.
CONCLUSION: Our novel URS had moderate-to-good inter-rater agreement in this pilot study. Based on these results, the URS will be modified for use in future investigations.
Prospective validation of the bedside sonographic acute cholecystitis score in emergency department patients
Acute cholecystitis can be difficult to diagnose in the emergency department (ED); no single finding can rule in or rule out the disease. A prediction score for the diagnosis of acute cholecystitis for use at the bedside would be of great value to expedite the management of patients presenting with possible acute cholecystitis. The 2013 Tokyo Guidelines is a validated method for the diagnosis of acute cholecystitis but its prognostic capability is limited. The purpose of this study was to prospectively validate the Bedside Sonographic Acute Cholecystitis (SAC) Score utilizing a combination of only historical symptoms, physical exam signs, and point-of-care ultrasound (POCUS) findings for the prediction of the diagnosis of acute cholecystitis in ED patients.
This was a prospective observational validation study of the Bedside SAC Score. The study was conducted at two tertiary referral academic centers in Boston, Massachusetts. From April 2016 to March 2019, adult patients (≥18 years old) with suspected acute cholecystitis were enrolled via convenience sampling and underwent a physical exam and a focused biliary POCUS in the ED. Three symptoms and signs (post-prandial symptoms, RUQ tenderness, and Murphy's sign) and two sonographic findings (gallbladder wall thickening and the presence of gallstones) were combined to calculate the Bedside SAC Score. The final diagnosis of acute cholecystitis was determined from chart review or patient follow-up up to 30 days after the initial assessment. In patients who underwent operative intervention, surgical pathology was used to confirm the diagnosis of acute cholecystitis. Sensitivity, specificity, PPV, and NPV of the Bedside SAC Score were calculated for various cutoff points.
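The test characteristics reported below follow from a standard 2×2 contingency table at each cutoff, treating a score at or above the cutoff as a positive test. A minimal sketch; the scores and adjudicated diagnoses are invented for illustration, and the actual SAC Score components and weights are those defined in the study protocol:

```python
def diagnostic_metrics(scores, has_disease, cutoff):
    """Sensitivity, specificity, PPV, and NPV, treating score >= cutoff
    as a positive test result."""
    tp = sum(1 for s, d in zip(scores, has_disease) if s >= cutoff and d)
    fp = sum(1 for s, d in zip(scores, has_disease) if s >= cutoff and not d)
    fn = sum(1 for s, d in zip(scores, has_disease) if s < cutoff and d)
    tn = sum(1 for s, d in zip(scores, has_disease) if s < cutoff and not d)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical scores and final diagnoses for six patients (not study data)
scores = [5, 3, 8, 4, 6, 2]
cholecystitis = [True, False, True, False, True, False]
print(diagnostic_metrics(scores, cholecystitis, cutoff=4))
```

Raising the cutoff trades sensitivity for specificity, which is exactly the pattern in the results below: a low cutoff rules the disease out, a high cutoff rules it in.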
153 patients were included in the analysis. Using a previously defined cutoff of ≥ 4, the Bedside SAC Score had a sensitivity of 88.9% (95% CI 73.9%–96.9%), and a specificity of 67.5% (95% CI 58.2%–75.9%). A Bedside SAC Score of < 2 had a sensitivity of 100% (95% CI 90.3%–100%) and specificity of 35% (95% CI 26.5%–44.4%). A Bedside SAC Score of ≥ 7 had a sensitivity of 44.4% (95% CI 27.9%–61.9%) and specificity of 95.7% (95% CI 90.3%–98.6%).
A bedside prediction score for the diagnosis of acute cholecystitis would have great utility in the ED. The Bedside SAC Score would be most helpful as a rule-out for patients with a low score (< 2; sensitivity 100%) or as a rule-in for patients with a high score (≥ 7; specificity 95.7%). Prospective validation with a larger study is required.
Intensive point-of-care ultrasound training with long-term follow-up in a cohort of Rwandan physicians.
OBJECTIVE: We delivered a point-of-care ultrasound training programme in a resource-limited setting in Rwanda and sought to determine participants' knowledge and skill retention. We also measured trainees' assessment of the usefulness of ultrasound in clinical practice.
METHODS: This was a prospective cohort study of 17 Rwandan physicians participating in a point-of-care ultrasound training programme. The follow-up period was 1 year. Participants completed a 10-day ultrasound course, with follow-up training delivered over the subsequent 12 months. Trainee knowledge acquisition and skill retention were assessed via observed structured clinical examinations (OSCEs) administered at six points during the study, and an image-based assessment completed at three points.
RESULTS: Trainees reported minimal structured ultrasound education and little confidence using point-of-care ultrasound before the training. Mean scores on the image-based assessment increased from 36.9% (95% CI 32–41.8%) before the initial 10-day training to 74.3% afterwards (95% CI 69.4–79.2%; P < 0.001). The mean score on the initial OSCE after the introductory course was 81.7% (95% CI 78–85.4%). The mean OSCE performance at each subsequent evaluation was at least 75%, and the mean OSCE score at the 58-week follow-up was 84.9% (95% CI 80.9–88.9%).
CONCLUSIONS: Physicians providing acute care in a resource-limited setting demonstrated sustained improvement in their ultrasound knowledge and skill 1 year after completing a clinical ultrasound training programme. They also reported improvements in their ability to provide patient care and in job satisfaction.