Erasmus University Digital Repository

    Use of incisional negative pressure wound therapy on closed median sternal incisions after cardiothoracic surgery: Clinical evidence and consensus recommendations

    Negative pressure wound therapy was introduced initially to assist in the treatment of chronic open wounds. Recently, there has been growing interest in applying the technique to closed incisions after surgery to prevent potentially severe surgical site infections and other wound complications in high-risk patients. Negative pressure wound therapy uses a negative pressure unit and specific dressings that help to hold the incision edges together, redistribute lateral tension, reduce edema, stimulate perfusion, and protect the surgical site from external infectious sources. Randomized, controlled studies of negative pressure wound therapy for closed incisions in orthopedic settings (which, in the absence of an open fracture, are likewise clean surgical procedures) have shown that the technology can reduce the risk of wound infection, wound dehiscence, and seroma, and there is accumulating evidence that it also improves wound outcomes after cardiothoracic surgery. Identifying at-risk individuals for whom prophylactic use of negative pressure wound therapy would be most cost-effective remains a challenge; however, several risk-stratification systems have been proposed and should be evaluated more fully. The recent availability of a single-use, closed incision management system offers surgeons a convenient and practical means of delivering negative pressure wound therapy to their high-risk patients, with excellent wound outcomes reported to date. Although larger, randomized, controlled studies will help to clarify the precise role and benefits of such a system in cardiothoracic surgery, limited initial evidence from clinical studies and from the authors' own experiences appears promising. In light of the growing interest in this technology among cardiothoracic surgeons, a consensus meeting of international experts was held to review the existing evidence for negative pressure wound therapy in the prevention of postoperative wound complications and to provide recommendations on its optimal use on closed median sternal incisions after cardiothoracic surgery.

    Discriminating somatic and germline mutations in tumor DNA samples without matching normals

    Tumor analyses commonly employ a correction with a matched normal (MN), a sample from healthy tissue of the same individual, in order to distinguish germline mutations from somatic mutations. Since the majority of variants found in an individual are thought to be common within the population, we constructed a set of 931 samples from healthy, unrelated individuals, originating from two different sequencing platforms, to serve as a virtual normal (VN) in the absence of such an associated normal sample. Our approach removed (1) >96% of the germline variants also removed by the MN sample and (2) a large number (2%-8%) of additional variants not corrected for by the associated normal. Combining the VN with the MN significantly improved the correction for polymorphisms, by up to ∼30% compared with the MN alone and ∼15% compared with the VN alone. We determined that the number of unrelated genomes needed to correct at least as efficiently as the MN is about 200 for structural variations (SVs) and about 400 for single-nucleotide variants (SNVs) and indels. In addition, we propose that the removal of common variants with purely position-based methods is inaccurate and incurs additional false-positive somatic variants, and that more sophisticated algorithms, capable of leveraging information about the area surrounding variants, are needed for optimal accuracy. Our VN correction method can be used to analyze any list of variants, regardless of the sequencing platform of origin. This VN methodology is available for use on our public Galaxy server.
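    The core of a virtual-normal correction can be illustrated as a simple position-based filter against a panel of unrelated genomes. The sketch below is a hypothetical minimal illustration, not the authors' implementation, and all variant records and function names are made up; note that the abstract itself cautions that purely position-based filtering is inaccurate for some variant classes.

```python
# Sketch of a position-based variant filter against a "virtual normal"
# panel of unrelated individuals. Illustrative only: the abstract argues
# that position-based removal alone incurs false-positive somatic calls.

def build_panel_index(panel_variants):
    """Index panel variants by (chromosome, position, ref, alt)."""
    return {(v["chrom"], v["pos"], v["ref"], v["alt"]) for v in panel_variants}

def filter_somatic(tumor_variants, panel_index):
    """Keep tumor variants not seen in the panel of unrelated normals."""
    return [v for v in tumor_variants
            if (v["chrom"], v["pos"], v["ref"], v["alt"]) not in panel_index]

panel = [{"chrom": "1", "pos": 1000, "ref": "A", "alt": "G"}]
tumor = [{"chrom": "1", "pos": 1000, "ref": "A", "alt": "G"},   # likely germline
         {"chrom": "2", "pos": 5000, "ref": "C", "alt": "T"}]   # candidate somatic
print(filter_somatic(tumor, build_panel_index(panel)))
```

    A more faithful approach would, as the abstract suggests, also examine the sequence context around each variant rather than exact coordinates alone.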

    mRNA expression profiles in circulating tumor cells of metastatic colorectal cancer patients

    The molecular characterization of circulating tumor cells (CTCs) is a promising tool for the repeated and non-invasive evaluation of predictive and prognostic factors. Challenges associated with CTC characterization using the only FDA-approved method for CTC enumeration, the CellSearch technique, include the presence of an excess of leukocytes in CTC-enriched blood fractions. Here we aimed to identify colorectal tumor-specific gene expression levels in the blood of patients with and without detectable CTCs according to CellSearch criteria. Materials and methods: Blood of 30 healthy donors (HDs) and 142 metastatic colorectal cancer (mCRC) patients was subjected to CellSearch CTC enumeration and isolation. In all samples, 95 mRNAs were measured by reverse transcriptase quantitative PCR (RT-qPCR). HD blood samples and patient samples with three or more CTCs were compared to identify CTC-specific mRNAs. Patient samples without detectable CTCs were analyzed separately. Results: Thirty-four CTC-specific mRNAs were expressed at higher levels in patients with ≥3 CTCs compared with HDs (Mann-Whitney U-test P<0.05). Among patients without detectable CTCs, an HD-unlike subgroup was identified that could be distinguished from HDs by the expression of epithelial genes such as KRT19, KRT20 and AGR2. In an independent patient set, a similar HD-unlike group could likewise be identified among the patients without detectable CTCs according to the CellSearch system. Conclusion: Extensive molecular characterization of colorectal CTCs is feasible, and a subgroup of patients without detectable CTCs according to CellSearch criteria bears circulating tumor load, which may have clinical consequences. This CTC-specific gene panel for mCRC patients may enable the exploration of CTC characterization as a novel means to further individualize cancer treatment.
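    The per-gene comparison described above, patients versus healthy donors by Mann-Whitney U-test, can be sketched in a few lines. The gene names below match those in the abstract, but all expression values, the reference gene, and the screening threshold are hypothetical; a real analysis would use an established implementation such as scipy.stats.mannwhitneyu with multiple-testing correction.

```python
# Toy per-gene Mann-Whitney U screen: which genes are higher in patients
# with >=3 CTCs than in healthy donors? Values are invented for illustration.

def mann_whitney_u(x, y):
    """U statistic for x vs y: pairs with x_i > y_j, counting ties as 0.5."""
    return sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)

patients = {"KRT19": [8.1, 7.9, 9.2, 8.4], "GAPDH": [5.0, 5.1, 4.9, 5.0]}
donors   = {"KRT19": [2.0, 1.8, 2.3, 2.1], "GAPDH": [5.1, 4.8, 5.0, 5.2]}

# Flag a gene when U approaches its maximum len(x) * len(y), i.e. nearly
# every patient value exceeds every donor value (arbitrary 0.9 cutoff).
higher = [g for g in patients
          if mann_whitney_u(patients[g], donors[g])
          >= 0.9 * len(patients[g]) * len(donors[g])]
print(higher)  # the epithelial marker separates; the housekeeping gene does not
```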

    A single immunization with modified vaccinia virus Ankara-based influenza virus H7 vaccine affords protection in the influenza A(H7N9) pneumonia ferret model

    Since the first reports in early 2013, >440 human cases of infection with avian influenza A(H7N9) have been reported, including 122 fatalities. After the isolation of the first A(H7N9) viruses, the nucleotide sequences became publicly available. Based on the coding sequence of the influenza virus A/Shanghai/2/2013 hemagglutinin gene, a codon-optimized gene was synthesized and cloned into a recombinant modified vaccinia virus Ankara (MVA). This MVA-H7-Sh2 viral vector was used to immunize ferrets and proved to be immunogenic, even after a single immunization. Subsequently, ferrets were challenged with influenza virus A/Anhui/1/2013 via the intratracheal route. Unprotected animals that were mock vaccinated or received empty vector developed interstitial pneumonia characterized by a marked alveolitis, accompanied by loss of appetite, weight loss, and heavy breathing. In contrast, animals vaccinated with MVA-H7-Sh2 were protected from severe disease.

    Percutaneous transluminal angioplasty and drug-eluting stents for infrapopliteal lesions in critical limb ischemia (PADI) trial

    Background - Endovascular infrapopliteal treatment of patients with critical limb ischemia using percutaneous transluminal angioplasty (PTA) and bail-out bare metal stenting (BMS) is hampered by restenosis. In interventional cardiology, drug-eluting stents (DES) have shown better patency rates and are now standard practice. An investigator-initiated, multicenter, randomized trial was conducted to assess whether DES also improve the patency and clinical outcome of infrapopliteal lesions. Methods and Results - Adults with critical limb ischemia (Rutherford category ≥4) and infrapopliteal lesions were randomized to receive PTA±BMS or paclitaxel-eluting DES. The primary end point was 6-month primary binary patency of treated lesions, defined as ≤50% stenosis on computed tomographic angiography. Stenosis >50%, retreatment, major amputation, and critical limb ischemia-related death were regarded as treatment failure. Severity of failure was assessed with an ordinal score ranging from vessel stenosis through occlusion to the clinical failures. Seventy-four limbs (73 patients) were treated with DES and 66 limbs (64 patients) received PTA±BMS. Six-month patency rates were 48.0% for DES and 35.1% for PTA±BMS (P=0.096) in the modified intention-to-treat analysis, and 51.9% versus 35.1% (P=0.037) in the per-protocol analysis. The ordinal score showed significantly worse treatment failure for PTA±BMS versus DES (P=0.041). The observed major amputation rate remained lower in the DES group until 2 years post-treatment, with a trend toward significance (P=0.066). Fewer minor amputations occurred after DES until 6 months post-treatment (P=0.03). Conclusions - In patients with critical limb ischemia caused by infrapopliteal lesions, DES provide better 6-month patency rates and fewer amputations after 6 and 12 months compared with PTA±BMS.

    A unified theory of sepsis-induced acute kidney injury: Inflammation, microcirculatory dysfunction, bioenergetics, and the tubular cell adaptation to injury

    Given that the leading clinical conditions associated with acute kidney injury (AKI), namely sepsis, major surgery, heart failure, and hypovolemia, are all associated with shock, it is tempting to attribute all AKI to ischemia on the basis of macrohemodynamic changes. However, an increasing body of evidence suggests that in many patients AKI can occur in the absence of overt signs of global renal hypoperfusion. Indeed, sepsis-induced AKI can occur in the setting of normal or even increased renal blood flow. Accordingly, renal injury may not be explained solely by the classic paradigm of hypoperfusion, and other mechanisms must come into play. Herein, we put forward a "unifying theory" to explain the interplay between inflammation and oxidative stress, microvascular dysfunction, and the adaptive response of the tubular epithelial cell to the septic insult. We propose that this response is mostly adaptive in origin, that it is driven by mitochondria, and that it ultimately results in and explains the clinical phenotype of sepsis-induced AKI.

    Analysis of rectal Chlamydia trachomatis serovar distribution including L2 (lymphogranuloma venereum) at the Erasmus MC STI clinic, Rotterdam

    Objectives: Compared with urogenital infections, little is known about serovar distribution in rectal chlamydial infections. The aim of this study was to explore possible relations between demographics, sexual behaviour, clinical manifestations, rectal symptoms, and chlamydial serovars including L2 (lymphogranuloma venereum). Methods: Genotyping was done prospectively in all rectal chlamydial infections since the outbreak of proctitis caused by lymphogranuloma venereum in February 2003; 33 (15.1%) rectal Chlamydia trachomatis infections from 2001 and 2002 were genotyped retrospectively. Results: Of all 219 rectal chlamydial infections detected in the period July 2001 to August 2005, a total of 149 (68.0%) were successfully genotyped, including 21 (14.1%) infections with serovar L2. In univariable and multivariable analyses, L2 serovar positive patients were significantly more often HIV positive (p = 0.002; OR: 6.5; 95% CI: 2.0 to 21.1) and had had sex with more partners in the past 6 months compared with patients carrying other serovars. Furthermore, patients with L2 proctitis presented far more often with self-reported rectal symptoms (p<0.005; OR: 19.4; 95% CI: 4.9 to 77.0) and clinical manifestations (p<0.005; OR: 15.4; 95% CI: 4.5 to 52.5). Conclusions: Chlamydial infections with serovar L2 show a different clinical and epidemiological pattern compared with serovars D-K. LGV proctitis is significantly associated with HIV positivity and a high number of sexual partners, and causes more rectal symptoms and clinical manifestations. Neither young age nor ethnicity was identified as a risk factor for any of the serovars investigated in this study.
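    The odds ratios with Wald confidence intervals reported above come from standard 2x2-table arithmetic, which can be reproduced in a few lines. The counts below are invented for illustration and are not the study's data; only the formula (OR = ad/bc, CI from the log-OR standard error) is standard.

```python
# Odds ratio with a Wald 95% CI from a 2x2 table [[a, b], [c, d]]
# (rows: exposed/unexposed; columns: case/control). Counts are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Return (OR, lower, upper) for the table, Wald approximation."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(15, 6, 30, 98))
```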

    Carbon dioxide induced changes in cerebral blood flow and flow velocity: Role of cerebrovascular resistance and effective cerebral perfusion pressure

    In addition to cerebrovascular resistance (CVR), zero flow pressure (ZFP), effective cerebral perfusion pressure (CPPe), and the resistance area product (RAP) are supplemental determinants of cerebral blood flow (CBF). The interrelationship of PaCO2-induced changes in CBF, CVR, CPPe, ZFP, and RAP is not fully understood. In a controlled crossover trial, we investigated 10 anesthetized patients aiming at PaCO2 levels of 30, 37, 43, and 50 mm Hg. Cerebral blood flow was measured with a modified Kety-Schmidt technique. Zero flow pressure and RAP were estimated by linear regression analysis of pressure-flow velocity relationships of the middle cerebral artery. Effective cerebral perfusion pressure was calculated as the difference between mean arterial pressure and ZFP, and CVR as the ratio CPPe/CBF. Statistical analysis was performed by one-way RM-ANOVA. When comparing hypocapnia with hypercapnia, CBF showed a significant exponential reduction by 55% and mean VMCA by 41%. Effective cerebral perfusion pressure decreased linearly by 17%, while ZFP increased from 14 to 29 mm Hg. Cerebrovascular resistance increased by 96% and RAP by 39%; despite these concordant changes in mean CVR and Doppler-derived RAP, the correlation between these variables was weak (r=0.43). In conclusion, under general anesthesia the hypocapnia-induced reduction in CBF is caused by both an increase in CVR and a decrease in CPPe, the latter as a consequence of an increase in ZFP.
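    The derived quantities in this abstract reduce to simple arithmetic: CPPe = MAP - ZFP and CVR = CPPe/CBF. The sketch below works through one hypothetical hypercapnia-to-hypocapnia comparison; only the reported ZFP shift (14 to 29 mm Hg) and the approximate CBF halving are taken from the abstract, while the MAP and CBF figures themselves are invented.

```python
# Worked example of the abstract's definitions:
#   CPPe = MAP - ZFP,   CVR = CPPe / CBF.
# MAP (80 mm Hg) and CBF values are hypothetical; ZFP follows the
# reported 14 -> 29 mm Hg shift from hypercapnia to hypocapnia.

def cppe(map_mmhg, zfp_mmhg):
    """Effective cerebral perfusion pressure (mm Hg)."""
    return map_mmhg - zfp_mmhg

def cvr(cppe_mmhg, cbf):
    """Cerebrovascular resistance, mm Hg per unit CBF (e.g. mL/100 g/min)."""
    return cppe_mmhg / cbf

hyper = cvr(cppe(80, 14), 60.0)   # hypercapnia: high flow, low ZFP
hypo  = cvr(cppe(80, 29), 27.0)   # hypocapnia: CBF ~55% lower, ZFP higher
print(round(hypo / hyper, 2))     # CVR rises ~1.7-fold in this example
```

    Even with MAP held constant, the combination of a higher ZFP (lower CPPe) and a lower CBF produces the concordant CVR increase the study reports.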

    Expected effect of deleterious mutations on within-host adaptation of pathogens

    Adaptation is a common theme in both pathogen emergence, for example in zoonotic cross-species transmission, and pathogen control, where adaptation might limit the effect of the immune response and antiviral treatment. When such evolution requires deleterious intermediate mutations, fitness ridges and valleys arise in the pathogen's fitness landscape. The effect of deleterious intermediate mutations on within-host pathogen adaptation is examined with deterministic calculations, appropriate for pathogens replicating in large populations with high error rates. This effect is smaller than the name might suggest: when two mutations are required and each individual single mutation is fully deleterious, the pathogen can jump across the fitness valley by obtaining both mutations at once, leading to a proportion of adapted mutants that is only 20-fold lower than in the situation where the fitness of all mutants is neutral. The negative effects of deleterious intermediates are typically substantially smaller and are outweighed by the fitness advantages of the adapted mutant. Moreover, requiring a specific mutation order has a substantially smaller effect on pathogen adaptation than all intermediates being deleterious. These results can be rationalized by calculating the number of mutational routes available to the pathogen, providing a simple approach to estimate the effect of deleterious mutations. The calculations discussed here are applicable when the effect of deleterious mutations on the within-host adaptation of pathogens is assessed, for example in the context of zoonotic emergence, antigenic escape, and drug resistance.
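    The flavor of such deterministic calculations can be conveyed with a toy mutation-selection model. This is not the paper's model: the mutation rate, fitness values, and generation count below are arbitrary assumptions, and the model tracks only the number of mutations (0, 1, or 2) with forward mutation and no back mutation. It illustrates the qualitative point that even lethal intermediates leave a valley-crossing route via simultaneous double mutation.

```python
# Toy deterministic valley-crossing model. Classes: 0 mutations (wild
# type), 1 (intermediate), 2 (adapted). Each of two sites mutates with
# probability mu per generation; back mutation is ignored. All parameter
# values are illustrative assumptions, not taken from the paper.

def adapted_fraction(mu, w_single, w_double=1.5, gens=10):
    """Frequency of the adapted double mutant after `gens` generations."""
    freq = {0: 1.0, 1: 0.0, 2: 0.0}
    fit = {0: 1.0, 1: w_single, 2: w_double}
    for _ in range(gens):
        # selection: reweight classes by relative fitness
        mean_fit = sum(freq[g] * fit[g] for g in freq)
        freq = {g: freq[g] * fit[g] / mean_fit for g in freq}
        # mutation: binomial over the two sites
        freq = {
            0: freq[0] * (1 - mu) ** 2,
            1: freq[0] * 2 * mu * (1 - mu) + freq[1] * (1 - mu),
            2: freq[0] * mu ** 2 + freq[1] * mu + freq[2],
        }
    return freq[2]

deleterious = adapted_fraction(1e-4, w_single=0.0)   # lethal intermediates
neutral     = adapted_fraction(1e-4, w_single=1.0)   # neutral intermediates
print(neutral / deleterious)  # ratio > 1, but the valley is still crossed
```

    With lethal intermediates the only route to the adapted class is the simultaneous double mutation term `freq[0] * mu ** 2`, which is the "jump across the fitness valley" described above.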