50 research outputs found

    International Lower Limb Collaborative (INTELLECT) study: a multicentre, international retrospective audit of lower extremity open fractures

    The impact of surgical delay on resectability of colorectal cancer: An international prospective cohort study

    AIM: The SARS-CoV-2 pandemic has provided a unique opportunity to explore the impact of surgical delays on cancer resectability. This study aimed to compare resectability for colorectal cancer patients undergoing delayed versus non-delayed surgery. METHODS: This was an international prospective cohort study of consecutive colorectal cancer patients with a decision for curative surgery (January-April 2020). Surgical delay was defined as an operation taking place more than 4 weeks after the treatment decision in a patient who did not receive neoadjuvant therapy. A subgroup analysis explored the effects of delay in elective patients only, and a sensitivity analysis explored the impact of longer delays. The primary outcome was complete resection, defined as curative resection with an R0 margin. RESULTS: Overall, 5453 patients from 304 hospitals in 47 countries were included, of whom 6.6% (358/5453) did not receive their planned operation. Of the 4304 operated patients without neoadjuvant therapy, 40.5% (1744/4304) were delayed beyond 4 weeks. Delayed patients were more likely to be older, male and more comorbid, and to have a higher body mass index, rectal cancer and early-stage disease. Delayed patients had higher unadjusted rates of complete resection (93.7% vs. 91.9%, P = 0.032) and lower rates of emergency surgery (4.5% vs. 22.5%, P < 0.001). After adjustment, delay was not associated with a lower rate of complete resection (OR 1.18, 95% CI 0.90-1.55, P = 0.224), a finding that was consistent in elective patients only (OR 0.94, 95% CI 0.69-1.27, P = 0.672). Longer delays were not associated with poorer outcomes. CONCLUSION: One in 15 colorectal cancer patients did not receive their planned operation during the first wave of COVID-19. Surgical delay did not appear to compromise resectability, raising the hypothesis that any reduction in long-term survival attributable to delays is likely to be due to micro-metastatic disease.

    Attitudes about tuberculosis prevention in the elimination phase: a survey among physicians in Germany

    BACKGROUND: Targeted and stringent measures of tuberculosis prevention are necessary to achieve the goal of tuberculosis elimination in countries of low tuberculosis incidence. METHODS: We ascertained the knowledge of tuberculosis risk factors and the stringency of tuberculosis prevention measures with a standardized questionnaire among physicians in Germany involved in the care of individuals from classical risk groups for tuberculosis. RESULTS: 510 physicians responded to the online survey. Among 16 risk factors, immunosuppressive therapy, HIV infection and treatment with TNF antagonists were thought to be the most important risk factors for the development of tuberculosis in Germany. Exposure to a patient with tuberculosis ranked in 10th position. In the event of a positive tuberculin skin test or interferon-γ release assay, only 50%, 40%, 36% and 25% of physicians found that preventive chemotherapy was indicated for individuals undergoing tumor necrosis factor antagonist therapy, close contacts of tuberculosis patients, HIV-infected individuals and migrants, respectively. CONCLUSIONS: A remarkably low proportion of individuals with latent Mycobacterium tuberculosis infection belonging to classical risk groups for tuberculosis is considered candidates for preventive chemotherapy in Germany. Better knowledge of the tuberculosis risk in different groups and more stringent and targeted preventive interventions will probably be necessary to achieve tuberculosis elimination in Germany.

    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)

    In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate, on a regular basis, updated guidelines for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways including apoptosis, not all of them can be used as a specific marker for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.

    Lawson criterion for ignition exceeded in an inertial fusion experiment

    For more than half a century, researchers around the world have been engaged in attempts to achieve fusion ignition as a proof of principle of various fusion concepts. Following the Lawson criterion, an ignited plasma is one where the fusion heating power is high enough to overcome all the physical processes that cool the fusion plasma, creating a positive thermodynamic feedback loop with rapidly increasing temperature. In inertially confined fusion, ignition is a state where the fusion plasma can begin "burn propagation" into the surrounding cold fuel, enabling the possibility of high energy gain. While "scientific breakeven" (i.e., unity target gain) has not yet been achieved (here the target gain is 0.72, with 1.37 MJ of fusion for 1.92 MJ of laser energy), this Letter reports the first controlled fusion experiment, using laser indirect drive on the National Ignition Facility, to produce capsule gain (here 5.8) and to reach ignition by nine different formulations of the Lawson criterion.
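    To make the quoted figures concrete, the target-gain arithmetic can be written out; the symbols below are labels introduced here for the quantities named in the abstract, not notation from the Letter itself:

    \[
    G_{\text{target}} \;=\; \frac{E_{\text{fusion}}}{E_{\text{laser}}} \;=\; \frac{1.37\ \text{MJ}}{1.92\ \text{MJ}} \;\approx\; 0.7
    \]

    Capsule gain (5.8 here) is defined analogously, but with respect to the energy coupled to the capsule rather than the total laser energy; that denominator is not quantified in the abstract.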

    Equalization of four cardiovascular risk algorithms after systematic recalibration: individual-participant meta-analysis of 86 prospective studies

    Aims: There is debate about the optimum algorithm for cardiovascular disease (CVD) risk estimation. We conducted head-to-head comparisons of four algorithms recommended by primary prevention guidelines, before and after ‘recalibration’, a method that adapts risk algorithms to take account of differences in the risk characteristics of the populations being studied. Methods & Results: Using individual-participant data on 360,737 participants without CVD at baseline in 86 prospective studies from 22 countries, we compared the Framingham risk score (FRS), Systematic COronary Risk Evaluation (SCORE), pooled cohort equations (PCE), and Reynolds risk score (RRS). We calculated measures of risk discrimination and calibration, and modelled the clinical implications of initiating statin therapy in people judged to be at ‘high’ 10-year CVD risk. Original risk algorithms were recalibrated using the risk factor profile and CVD incidence of target populations. The four algorithms had similar risk discrimination. Before recalibration, FRS, SCORE, and PCE overpredicted CVD risk on average by 10%, 52%, and 41%, respectively, whereas RRS underpredicted by 10%. Original versions of the algorithms classified 29–39% of individuals aged ≥40 years as high risk. By contrast, recalibration reduced this proportion to 22–24% for every algorithm. We estimated that to prevent one CVD event, it would be necessary to initiate statin therapy in 44–51 such individuals using the original algorithms, in contrast to 37–39 individuals with recalibrated algorithms. Conclusions: Before recalibration, the clinical performance of the four widely used CVD risk algorithms varied substantially. By contrast, simple recalibration nearly equalized their performance and improved the modelled targeting of preventive action to clinical need.
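    As a rough illustration of what ‘recalibration’ does here, the sketch below rescales a score's predicted risks so that their mean matches the observed incidence in a target population (so-called recalibration-in-the-large). This is a simplified stand-in rather than the study's procedure, which recalibrated each algorithm using the target population's risk factor profile and CVD incidence; the function name and the numbers are hypothetical.

    ```python
    # Simplified sketch of recalibration-in-the-large (not the study's exact method):
    # rescale predicted 10-year risks so their mean matches the observed incidence
    # in the target population. All names and numbers here are hypothetical.

    def recalibrate_in_the_large(predicted_risks, observed_incidence):
        """Scale predicted risks by the ratio of observed incidence to mean predicted risk."""
        mean_predicted = sum(predicted_risks) / len(predicted_risks)
        scale = observed_incidence / mean_predicted
        return [min(risk * scale, 1.0) for risk in predicted_risks]

    # Example: a score that over-predicts risk in the target population.
    original = [0.04, 0.08, 0.15, 0.22, 0.30]    # predicted 10-year CVD risks
    recalibrated = recalibrate_in_the_large(original, observed_incidence=0.11)
    print([round(r, 3) for r in recalibrated])   # risks shrink; ranking is preserved
    ```

    Rescaling shifts the whole risk distribution without reordering individuals, which is why recalibration changes the proportion classified as high risk (and the calibration) while leaving rank-based discrimination unchanged, consistent with the results above.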

    Abstracts from the Food Allergy and Anaphylaxis Meeting 2016

    On shaky ground: the making of risk in Bogotá

    How does risk become a technique for governing the future of cities and urban life? Using genealogical and ethnographic methods, this paper tracks the emergence of risk management in Bogotá, Colombia, from its initial institutionalization to its ongoing implementation in governmental practice. Its specific focus is the invention of the ‘zone of high risk’ in Bogotá and the everyday work performed by the officials responsible for determining the likelihood of landslide in these areas. It addresses the ongoing formation of techniques of urban planning and governance, and the active relationship between urban populations and environments, on the one hand, and emerging forms of political authority and technical expertise, on the other. Ultimately, it reveals that techniques of risk management are made and remade as experts and nonexperts grapple with the imperative to bring heterogeneous assemblages of people and things into an unfolding technopolitical domain.