
    Chest pain with ST segment elevation in a patient with prosthetic aortic valve infective endocarditis: a case report

    Introduction: Acute ST-segment elevation myocardial infarction secondary to atherosclerotic plaque rupture is a common medical emergency. This condition is effectively managed with percutaneous coronary intervention or thrombolysis. We report a rare case of acute myocardial infarction secondary to coronary embolisation of valvular vegetation in a patient with infective endocarditis, and we highlight how the management of this phenomenon may differ. Case presentation: A 73-year-old British Caucasian man with a previous tissue aortic valve replacement was diagnosed with and treated for infective endocarditis of his native mitral valve. His condition deteriorated in hospital, and repeat echocardiography revealed migration of vegetation to his aortic valve. While awaiting surgery, our patient developed severe central crushing chest pain with associated anterior ST segment elevation on his electrocardiogram. He had no history of, or risk factors for, ischaemic heart disease, so coronary embolisation of part of the vegetation was the likely cause. Thrombolysis and percutaneous coronary intervention were not performed in this setting, and a plan was made for urgent surgical intervention. However, our patient deteriorated rapidly and died. Conclusion: Clinicians need to be aware that atherosclerotic plaque rupture is not the only cause of acute myocardial infarction. In the case of septic vegetation embolisation, case report evidence shows that adopting the current strategies used in the treatment of myocardial infarction can be dangerous: thrombolysis risks intra-cerebral haemorrhage from mycotic aneurysm rupture, while percutaneous coronary intervention risks coronary mycotic aneurysm formation, stent infection and distal septic embolisation. As yet there is no defined treatment modality, and we feel all such cases should be referred to specialist cardiac centres to consider how best to proceed.

    Preoperative calculation of risk for prolonged intensive care unit stay following coronary artery bypass grafting

    OBJECTIVE: A prolonged stay in the intensive care unit (ICU) is associated with adverse outcomes, has cost implications and can lead to a shortage of ICU beds. We aimed to develop a preoperative risk prediction tool for prolonged ICU stay following coronary artery bypass grafting (CABG). METHODS: 5,186 patients who underwent CABG between 1st April 1997 and 31st March 2002 were analysed as a development dataset. Forward stepwise logistic regression was used to identify preoperative risk factors for prolonged ICU stay, defined as a stay of longer than 3 days. Variables examined included presentation history, co-morbidities, catheter and demographic details, and use of cardiopulmonary bypass (CPB). The prediction tool was tested on a validation dataset (1,197 CABG patients between 1st April 2003 and 31st March 2004), and the area under the receiver operating characteristic (ROC) curve was calculated to assess its performance. RESULTS: 475 (9.2%) patients had a prolonged ICU stay in the development dataset. Risk factors identified included renal dysfunction, unstable angina, poor ejection fraction, peripheral vascular disease, obesity, increasing age, smoking, diabetes, priority, hypercholesterolaemia, hypertension, and use of CPB. In the validation dataset, 8.1% of patients had a prolonged ICU stay compared with an expected 8.7%. The area under the ROC curve was 0.72 for the development dataset and 0.74 for the validation dataset. CONCLUSION: A reliable and valid prediction tool has been developed and is being piloted at our institution to aid resource management.
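    The modelling step described in this abstract can be sketched with a toy example. This is an illustrative reconstruction, not the paper's actual model: the predictors, coefficients and event rate below are invented stand-ins, and the fit uses plain gradient descent rather than the authors' stepwise selection. It shows how a logistic model turns preoperative variables into a risk score, and how the area under the ROC curve summarises that score's discrimination.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a development dataset: three illustrative
# preoperative predictors (age, renal dysfunction, poor ejection
# fraction) -- names and effect sizes are assumptions, not the paper's.
n = 2000
X = np.column_stack([
    rng.normal(65, 10, n),        # age in years
    rng.binomial(1, 0.1, n),      # renal dysfunction (0/1)
    rng.binomial(1, 0.2, n),      # poor ejection fraction (0/1)
])
true_beta = np.array([0.04, 1.2, 0.9])
logit = X @ true_beta - 4.5       # intercept chosen for roughly 10% events
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit logistic regression by gradient descent on standardised features.
Xs = (X - X.mean(0)) / X.std(0)
Xb = np.column_stack([np.ones(n), Xs])
beta = np.zeros(Xb.shape[1])
for _ in range(5000):
    p = 1 / (1 + np.exp(-(Xb @ beta)))
    beta -= 0.1 * Xb.T @ (p - y) / n

# Predicted risk of prolonged ICU stay, then ROC AUC via the
# Mann-Whitney formulation (probability a case outranks a control).
risk = 1 / (1 + np.exp(-(Xb @ beta)))
order = np.argsort(risk)
ranks = np.empty(n)
ranks[order] = np.arange(1, n + 1)
n1 = y.sum()
n0 = n - n1
auc = (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
print(round(auc, 2))
```

    In the paper's setting, validating the fitted tool simply means computing the same risk score and AUC on a later cohort of patients that played no part in the fit.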

    Gaze training enhances laparoscopic technical skill acquisition and multi-tasking performance: A randomized, controlled study

    Background: The operating room environment is replete with stressors and distractions that increase the attention demands of what are already complex psychomotor procedures. Contemporary research in other fields (e.g., sport) has revealed that gaze training interventions may support the development of robust movement skills. This current study was designed to examine the utility of gaze training for technical laparoscopic skills and to test performance under multitasking conditions. Methods: Thirty medical trainees with no laparoscopic experience were divided randomly into one of three treatment groups: gaze trained (GAZE), movement trained (MOVE), and discovery learning/control (DISCOVERY). Participants were fitted with a Mobile Eye gaze registration system, which measures eye-line of gaze at 25 Hz. Training consisted of ten repetitions of the "eye-hand coordination" task from the LAP Mentor VR laparoscopic surgical simulator while receiving instruction and video feedback (specific to each treatment condition). After training, all participants completed a control test (designed to assess learning) and a multitasking transfer test, in which they completed the procedure while performing a concurrent tone counting task. Results: Not only did the GAZE group learn more quickly than the MOVE and DISCOVERY groups (faster completion times in the control test), but the performance difference was even more pronounced when multitasking. Differences in gaze control (target locking fixations), rather than tool movement measures (tool path length), underpinned this performance advantage for GAZE training. Conclusions: These results suggest that although the GAZE intervention focused on training gaze behavior only, there were indirect benefits for movement behaviors and performance efficiency. 
Additionally, focusing on a single external target when learning, rather than on complex movement patterns, may have freed up attentional resources that could be applied to concurrent cognitive tasks. © 2011 The Author(s).

    Topography of the Chimpanzee Corpus Callosum

    The corpus callosum (CC) is the largest commissural white matter tract in mammalian brains, connecting homotopic and heterotopic regions of the cerebral cortex. Knowledge of the distribution of callosal fibers projecting into specific cortical regions has important implications for understanding the evolution of lateralized structures and functions of the cerebral cortex. No comparisons of CC topography in humans and great apes have yet been conducted. We investigated the topography of the CC in 21 chimpanzees using high-resolution magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI). Tractography was conducted using the fiber assignment by continuous tracking (FACT) algorithm. We expected chimpanzees to display topographical organization similar to humans, especially concerning projections into the frontal cortical regions. Similar to recent studies in humans, tractography identified five clusters of CC fibers projecting into defined cortical regions: prefrontal; premotor and supplementary motor; motor; sensory; parietal, temporal and occipital. Significant differences in fractional anisotropy (FA) were found across callosal regions, with the highest FA values in regions projecting to higher-association areas of the posterior cortex (including parietal, temporal and occipital cortices) and prefrontal cortical regions (p<0.001). The lowest FA values were seen in regions projecting into motor and sensory cortical areas. Our results indicate that chimpanzees display a topography of the CC similar to that of humans, both in the distribution of callosal projections and in the microstructure of fibers as determined by anisotropy measures.

    Vitamin D and HIV Progression among Tanzanian Adults Initiating Antiretroviral Therapy

    Background: There is growing evidence of an association between low vitamin D and HIV disease progression; however, no prospective studies have been conducted among adults receiving antiretroviral therapy (ART) in sub-Saharan Africa. Methods: Serum 25-hydroxyvitamin D (25(OH)D) levels were assessed at ART initiation for a randomly selected cohort of HIV-infected adults enrolled in a trial of multivitamins (not including vitamin D) in Tanzania during 2006–2010. Participants were prospectively followed at monthly clinic visits for a median of 20.6 months, with CD4 T-cell measurements obtained every 4 months. Proportional hazards models were used for mortality analyses, while generalized estimating equations were used for CD4 T-cell counts. Results: Serum 25(OH)D was measured in 1103 adults; 9.2% were classified as vitamin D deficient (<20 ng/mL), with the remainder insufficient (20–30 ng/mL) or sufficient (>30 ng/mL). After multivariate adjustment, vitamin D deficiency was significantly associated with increased mortality as compared to vitamin D sufficiency (HR: 2.00; 95% CI: 1.19–3.37; p = 0.009), whereas no significant association was found for vitamin D insufficiency (HR: 1.24; 95% CI: 0.87–1.78; p = 0.24). No effect modification by ART regimen, and no change in the associations over time, was detected. Vitamin D status was not associated with change in CD4 T-cell count after ART initiation. Conclusions: Deficient vitamin D levels may lead to increased mortality in individuals receiving ART, and this relationship does not appear to be due to impaired CD4 T-cell reconstitution. Randomized controlled trials are needed to determine the safety and efficacy of vitamin D supplementation for individuals receiving ART.
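    The mortality comparison in this abstract can be illustrated with a simplified simulation. This is a hypothetical sketch, not the study's analysis: it assumes exponential survival times, an invented baseline death rate, and a crude incidence-rate ratio in place of the adjusted proportional hazards model. Under the exponential assumption the hazard ratio reduces to a ratio of deaths per person-time, which is the comparison a proportional hazards model formalises, so the simulation approximately recovers the hazard ratio of 2.0 built into it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated cohort: 9.2% vitamin D deficient (as reported), with a
# doubled death hazard in the deficient group (HR = 2.0, matching the
# reported point estimate). Baseline rate and follow-up are assumptions.
n = 5000
deficient = rng.binomial(1, 0.092, n)
base_rate = 0.01                               # deaths per person-month (assumed)
hazard = base_rate * np.where(deficient == 1, 2.0, 1.0)
t = rng.exponential(1 / hazard)                # exponential survival times

# Administrative censoring at 24 months of follow-up.
event = t <= 24
time = np.minimum(t, 24)

def rate(mask):
    """Crude incidence rate: deaths per person-month in a subgroup."""
    return event[mask].sum() / time[mask].sum()

hr = rate(deficient == 1) / rate(deficient == 0)
print(round(hr, 1))
```

    The study's Cox model additionally adjusts for covariates and lets the baseline hazard vary over time; this sketch only conveys what the hazard ratio is estimating.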

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    The non-immunosuppressive management of childhood nephrotic syndrome
