
    Evaluation of Reperfused Myocardial Infarction by Low-Dose Multidetector Computed Tomography Using Prospective Electrocardiography (ECG)-Triggering: Comparison with Magnetic Resonance Imaging

    PURPOSE: To evaluate the potential of prospective electrocardiography (ECG)-gated 64-slice multidetector computed tomography (MDCT) for evaluating myocardial enhancement, infarct size, and stent patency after percutaneous coronary intervention (PCI) with stenting in patients with myocardial infarction. MATERIALS AND METHODS: Seventeen patients admitted with acute myocardial infarction were examined with prospective ECG-gated 64-slice cardiac MDCT and magnetic resonance (MR) imaging after reperfusion by PCI with stenting. Cardiac MDCT was performed in two phases: an arterial and a delayed phase. Stent patency was evaluated on the arterial-phase and nonviable myocardium on the delayed-phase computed tomography (CT) images, and the findings were compared with those from the delayed MR images. RESULTS: Total mean radiation dose for the two CT phases was 7.7 ± 0.5 mSv. All patients except one showed good stent patency at the culprit lesion on the arterial-phase CT images. All patients had a hyperenhanced area on the delayed-phase CT images, which correlated well with that on the delayed-phase MR images, with a mean difference of 1.6% (20 ± 10% vs. 22 ± 10%, r = 0.935, p = 0.10). Delayed MR images had a better contrast-to-noise ratio (CNR) than delayed CT images (27.1 ± 17.8% vs. 4.3 ± 2.1%, p < 0.001). CONCLUSION: Prospective ECG-gated 64-slice MDCT has the potential to evaluate myocardial viability on the delayed phase as well as stent patency on the arterial phase with an acceptable radiation dose after PCI with stenting in patients with myocardial infarction.
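
    For readers unfamiliar with the contrast-to-noise ratio (CNR) compared above, the sketch below shows one common way it is computed from region-of-interest measurements: the difference in mean signal between two regions divided by the noise of a background region. The pixel values and the helper function name are illustrative assumptions, not data or code from the study.

```python
import numpy as np

def contrast_to_noise_ratio(roi_lesion, roi_remote, roi_background):
    """Illustrative CNR: difference of mean signals in two regions of
    interest, divided by the standard deviation of a background region."""
    signal_difference = np.mean(roi_lesion) - np.mean(roi_remote)
    noise = np.std(roi_background)
    return signal_difference / noise

# Hypothetical pixel values (arbitrary units), not taken from the study.
lesion = np.array([180.0, 175.0, 182.0, 178.0])
remote = np.array([120.0, 118.0, 122.0, 121.0])
background = np.array([30.0, 33.0, 28.0, 31.0])
print(round(contrast_to_noise_ratio(lesion, remote, background), 1))
```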

    Is standard breast-conserving therapy (BCT) in elderly breast cancer patients justified? A prospective measurement of acute toxicity according CTC-classification

    Background: Breast-conserving therapy (BCT) is an accepted treatment for early-stage breast cancer. This study aimed to prospectively measure acute radiation-related toxicity and to create a comprehensive database for long-term temporal analyses of 3D conformal adjuvant radiotherapy. The specific aspect of age has been neglected by traditional research; the impact of age on acute BCT toxicity should therefore also be specifically addressed. Methods: Toxicity was measured in 109 patients at initiation (t1), during radiotherapy (t2-t7), and 6 weeks after treatment completion (t8) using a new topographic module. Organ systems were recorded in 15 scales and scored according to symptom intensity (grade 0-5) based on the CTC (Common Toxicity Criteria) classification. Radiotherapy was planned with CT-based virtual simulation and applied with 6-MeV photons. Mean total dose was 60.1 Gy. Patients were stratified by age into three groups: <50, 50-60, and >60 years. Results: Registered toxicity was generally low. Mean overall grade climbed from 0.29 to 0.40 (t1-t7) and dropped to 0.23 (t8). Univariate analyses revealed slightly higher toxicity in older (>60 years) versus young (<50 years) patients in only two scales: breast symmetry (p = 0.033) and arm function (p = 0.007). However, in the scale "appetite", toxicity was higher in younger (<50 years) versus older (>60 years) patients (p = 0.039). Toxicity differences in all other scales were not significant. No significant differences in toxicity were found between older (>60 years) and middle-aged (50-60 years) patients, nor between young (<50 years) and middle-aged (50-60 years) groups. Conclusion: The treatment concept of BCT for breast cancer is generally well tolerated. Toxicity measurement with the new topographic module is feasible. Unmodified standard treatment for breast cancer should be performed in elderly women.

    Development of a questionnaire to assess sedentary time in older persons -- a comparative study using accelerometry

    Background: There is currently no validated questionnaire available to assess total sedentary time in older adults; most studies have used only TV viewing time as an indicator of sedentary time. The first aim of our study was to investigate the self-reported time spent by older persons on a set of sedentary activities, and to compare this with objective sedentary time measured by accelerometry. The second aim was to determine what set of self-reported sedentary activities should be used to validly rank people's total sedentary time. Finally, we tested the reliability of our newly developed questionnaire using the best-performing set of sedentary activities. Methods: The study sample included 83 men and women aged 65-92 years, a random sample of Longitudinal Aging Study Amsterdam participants, who completed a questionnaire including ten sedentary activities and wore an Actigraph GT3X accelerometer for 8 days. Spearman correlation coefficients were calculated to examine the association between self-reported and objective sedentary time. Test-retest reliability was calculated using the intraclass correlation coefficient (ICC). Results: Mean total self-reported sedentary time was 10.4 (SD 3.5) h/d and was not significantly different from mean total objective sedentary time (10.2 (1.2) h/d, p = 0.63). Total self-reported sedentary time on an average day (sum of ten activities) correlated moderately (Spearman's r = 0.35, p < 0.01) with total objective sedentary time. The correlation improved when using the sum of six activities (r = 0.46, p < 0.01), and was much higher than when using TV watching only (r = 0.22, p = 0.05). The test-retest reliability of the sum of six sedentary activities was 0.71 (95% CI 0.57-0.81). Conclusions: A questionnaire including six sedentary activities was moderately associated with accelerometry-derived sedentary time and can be used to reliably rank sedentary time in older persons.
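
    As an illustration of the validity analysis described above, the sketch below shows how a Spearman rank correlation between questionnaire-derived and accelerometer-derived sedentary time could be computed. The arrays are made-up examples, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical hours/day of sedentary time for a few participants.
self_reported = np.array([10.5, 8.0, 12.0, 9.5, 11.0, 7.5])
accelerometer = np.array([10.0, 8.5, 11.5, 9.0, 10.8, 8.0])

rho, p_value = spearmanr(self_reported, accelerometer)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```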

    Functional claudication distance: a reliable and valid measurement to assess functional limitation in patients with intermittent claudication

    BACKGROUND: Disease severity and functional impairment in patients with intermittent claudication are usually quantified by measuring the pain-free walking distance (intermittent claudication distance, ICD) and the maximal walking distance (absolute claudication distance, ACD). However, the distance at which a patient would prefer to stop because of claudication pain seems to correspond more closely to the actual walking distance in daily life. We conducted a study in which the distance at which a patient prefers to stop was defined as the functional claudication distance (FCD), and estimated the reliability and validity of this measurement. METHODS: In this clinical validity study we included patients with intermittent claudication following a supervised exercise therapy program. The first study part consisted of two standardised treadmill tests; during each test, ICD, FCD and ACD were determined. The primary endpoint was reliability, as represented by the calculated intra-class correlation coefficients. In the second study part patients performed a standardised treadmill test and filled out the Rand-36 questionnaire. Spearman's rho was calculated to assess validity. RESULTS: The intra-class correlation coefficients of ICD, FCD and ACD were 0.940, 0.959, and 0.975, respectively. FCD correlated significantly with five out of nine domains, namely physical function (rho = 0.571), physical role (rho = 0.532), vitality (rho = 0.416), pain (rho = 0.416) and health change (rho = 0.414). CONCLUSION: FCD is a reliable and valid measurement for determining functional capacity in trained patients with intermittent claudication. Furthermore, it seems that FCD better reflects the actual functional impairment. In future studies, FCD could be used alongside ICD and ACD.
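
    The reliability endpoint above is the intra-class correlation coefficient; the sketch below implements one common variant, ICC(2,1) (two-way random effects, absolute agreement, single measurement, after Shrout and Fleiss). The walking distances are invented for illustration, and this is not necessarily the exact ICC model used in the study.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `scores` is an (n_subjects, k_measurements) array."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-measurement means

    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((scores - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical claudication distances (metres) on two treadmill tests.
distances = [[250, 265], [400, 380], [150, 160], [600, 590], [320, 335]]
print(round(icc_2_1(distances), 3))
```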

    Mammographic density does not correlate with Ki-67 expression or cytomorphology in benign breast cells obtained by random periareolar fine needle aspiration from women at high risk for breast cancer

    BACKGROUND: Ki-67 expression is a possible risk biomarker and is currently being used as a response biomarker in chemoprevention trials. Mammographic breast density is a risk biomarker and is also being used as a response biomarker. We previously showed that Ki-67 expression is higher in specimens of benign breast cells exhibiting cytologic atypia that are obtained by random periareolar fine needle aspiration (RPFNA). It is not known whether there is a correlation between mammographic density and Ki-67 expression in benign breast ductal cells obtained by RPFNA. METHODS: Included in the study were 344 women at high risk for developing breast cancer (based on personal or family history), seen at The University of Kansas Medical Center high-risk breast clinic, who underwent RPFNA with cytomorphology and Ki-67 assessment plus a mammogram. Mammographic breast density was assessed using the Cumulus program. Categorical variables were analyzed by χ2 test, and continuous variables were analyzed by nonparametric tests and linear regression. RESULTS: Forty-seven per cent of women were premenopausal and 53% were postmenopausal. The median age was 48 years, the median 5-year Gail risk was 2.2%, and the median Ki-67 was 1.9%. The median mammographic breast density was 37%. Ki-67 expression increased with cytologic abnormality (atypia versus no atypia; P = 0.001) and younger age (≤50 years versus >50 years; P = 0.001). Mammographic density was higher in premenopausal women (P = 0.001), those with lower body mass index (P < 0.001), and those with lower 5-year Gail risk (P = 0.001). Mammographic density exhibited no correlation with Ki-67 expression or cytomorphology. CONCLUSION: Given the lack of correlation of mammographic breast density with either cytomorphology or Ki-67 expression in RPFNA specimens, mammographic density and Ki-67 expression should be considered as potentially complementary response biomarkers in breast cancer chemoprevention trials.
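
    For context on the categorical analysis mentioned above (a χ2 test of association), the sketch below runs the test on a small made-up 2x2 contingency table; the counts and variable labels are hypothetical, not data from the study.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = atypia yes/no, columns = high/low mammographic density.
table = [[30, 55],
         [70, 189]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
```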

    The Ergogenic Effect of Recombinant Human Erythropoietin on V̇O2max Depends on the Severity of Arterial Hypoxemia

    Treatment with recombinant human erythropoietin (rhEpo) induces a rise in blood oxygen-carrying capacity (CaO2) that unequivocally enhances maximal oxygen uptake (V̇O2max) during exercise in normoxia, but not when exercise is carried out in severe acute hypoxia. This implies that there should be a threshold altitude at which V̇O2max becomes less dependent on CaO2. To ascertain the mechanisms explaining the interactions between hypoxia, CaO2 and V̇O2max, we measured systemic and leg O2 transport and utilization during incremental exercise to exhaustion in normoxia and at different degrees of acute hypoxia in eight rhEpo-treated subjects. Following prolonged rhEpo treatment, the gain in systemic V̇O2max observed in normoxia (6–7%) persisted during mild hypoxia (8% at an inspired O2 fraction (FIO2) of 0.173) and was even larger during moderate hypoxia (14–17% at FIO2 = 0.153–0.134). When hypoxia was further augmented to FIO2 = 0.115, there was no rhEpo-induced enhancement of systemic V̇O2max or peak leg V̇O2. The mechanism highlighted by our data is that, besides its strong influence on CaO2, rhEpo enhanced leg V̇O2max in normoxia through a preferential redistribution of cardiac output toward the exercising legs, whereas this advantageous effect disappeared during severe hypoxia, leaving augmented CaO2 alone insufficient to improve peak leg O2 delivery and V̇O2. Finally, the observation that V̇O2max was largely dependent on CaO2 during moderate hypoxia but became abruptly CaO2-independent with a slight further increase in the severity of hypoxia could be indirect evidence of the appearance of central fatigue.
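
    The argument above rests on arterial oxygen content and its delivery to the working muscle. As a worked illustration, the sketch below applies the standard oxygen-content and oxygen-delivery equations (CaO2 = 1.34 x Hb x SaO2 + 0.003 x PaO2; DO2 = cardiac output x CaO2); the haemoglobin, saturation, PaO2 and cardiac-output values are assumptions chosen only to contrast normoxia with severe hypoxia, not measurements from the study.

```python
def arterial_o2_content(hb_g_dl, sao2_fraction, pao2_mmhg):
    """CaO2 in mL O2/dL: haemoglobin-bound O2 plus dissolved O2."""
    return 1.34 * hb_g_dl * sao2_fraction + 0.003 * pao2_mmhg

def o2_delivery(cardiac_output_l_min, cao2_ml_dl):
    """Systemic O2 delivery in mL O2/min (factor 10 converts dL to L)."""
    return cardiac_output_l_min * cao2_ml_dl * 10

# Hypothetical rhEpo-raised haemoglobin of 17 g/dL; saturation and PaO2
# pairs roughly representing normoxia and severe hypoxia (assumed values).
conditions = {"normoxia": (0.97, 95.0), "severe hypoxia": (0.75, 40.0)}
for label, (sao2, pao2) in conditions.items():
    cao2 = arterial_o2_content(17.0, sao2, pao2)
    print(f"{label}: CaO2 = {cao2:.1f} mL/dL, "
          f"DO2 = {o2_delivery(25.0, cao2):.0f} mL/min")
```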

    The effects of timing of fine needle aspiration biopsies on gene expression profiles in breast cancers

    Background: DNA microarray analysis has great potential to become an important clinical tool to individualize prognostication and treatment for breast cancer patients. However, as with any emerging technology, there are many variables one must consider before bringing the technology to the bedside. There are already concerted efforts to standardize protocols and to improve the reproducibility of DNA microarrays. Our study examines one variable that is often overlooked, the timing of tissue acquisition, which may have a significant impact on the outcomes of DNA microarray analyses, especially in studies that compare microarray data based on biospecimens taken in vivo and ex vivo. Methods: From 16 patients, we obtained paired fine needle aspiration biopsies (FNABs) of breast cancers taken before (PRE) and after (POST) their surgeries and compared the microarray data to determine the genes that were differentially expressed between the FNABs taken at the two time points. qRT-PCR was used to validate our findings. To examine the effects of longer exposure to hypoxia on gene expression, we also compared the gene expression profiles of 10 breast cancers from a clinical tissue bank. Results: Using hierarchical clustering analysis, 12 genes were found to be differentially expressed between the FNABs taken before and after surgical removal. Remarkably, most of the genes were linked to FOS in an early hypoxia pathway. The gene expression of FOS also increased with longer exposure to hypoxia. Conclusion: Our study demonstrated that the timing of fine needle aspiration biopsies can be a confounding factor in microarray data analyses in breast cancer. We have shown that FOS-related genes, which have been implicated in early hypoxia as well as the development of breast cancers, were differentially expressed before and after surgery. Therefore, it is important that future studies take the timing of tissue acquisition into account.
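
    For readers unfamiliar with the hierarchical clustering step mentioned above, the sketch below clusters a small, randomly generated expression matrix using average linkage and a correlation-based distance. The matrix, linkage method and cluster count are illustrative assumptions, not the study's actual pipeline or data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical expression matrix: 12 genes (rows) x 16 paired samples (columns).
expression = rng.normal(size=(12, 16))

# Average-linkage clustering of genes using a correlation-based distance.
tree = linkage(expression, method="average", metric="correlation")
clusters = fcluster(tree, t=2, criterion="maxclust")
print(clusters)
```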

    Comparison of three methods for detection of gametocytes in Melanesian children treated for uncomplicated malaria

    Background: Gametocytes are the transmission stages of Plasmodium parasites, the causative agents of malaria. As their density in the human host is typically low, they are often undetected by conventional light microscopy. Furthermore, application of RNA-based molecular detection methods for gametocyte detection remains challenging in remote field settings. In the present study, a detailed comparison of three methods, namely light microscopy, magnetic fractionation and reverse transcriptase polymerase chain reaction, for detection of Plasmodium falciparum and Plasmodium vivax gametocytes was conducted. Methods: Peripheral blood samples from 70 children aged 0.5 to five years with uncomplicated malaria, who were treated with either artemether-lumefantrine or artemisinin-naphthoquine, were collected from two health facilities on the north coast of Papua New Guinea. The samples were taken prior to treatment (day 0) and at pre-specified intervals during follow-up. Gametocytes were measured in each sample by three methods: i) light microscopy (LM), ii) quantitative magnetic fractionation (MF) and iii) reverse transcriptase PCR (RTPCR). Data were analysed using censored linear regression and Bland-Altman techniques. Results: MF and RTPCR were similarly sensitive and specific, and both were superior to LM. Overall, approximately 20% of samples were gametocyte positive by LM, whereas gametocyte positivity by MF and by RTPCR was more than two-fold this level. In the subset of samples collected prior to treatment, 29% of children were gametocyte positive by LM and 85% by MF and RTPCR. Conclusions: The present study represents the first direct comparison of standard LM, MF and RTPCR for gametocyte detection in field isolates. It provides strong evidence that MF is superior to LM and can be used to detect gametocytaemic patients under field conditions with sensitivity and specificity similar to RTPCR.
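
    The agreement analysis above uses Bland-Altman techniques; the sketch below computes the basic Bland-Altman quantities (mean difference, or bias, and 95% limits of agreement) between two detection methods. The log10 gametocyte densities are invented for illustration and are not study data.

```python
import numpy as np

# Hypothetical log10 gametocyte densities from two methods for the same samples.
method_a = np.array([1.2, 0.8, 2.1, 1.5, 0.3, 1.9])
method_b = np.array([1.4, 0.7, 2.0, 1.8, 0.4, 2.1])

differences = method_a - method_b
bias = differences.mean()
loa = 1.96 * differences.std(ddof=1)  # half-width of 95% limits of agreement
print(f"bias = {bias:.2f}, 95% limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}]")
```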