
    Orthorectification of helicopter-borne high resolution experimental burn observation from infra red handheld imagers

    To pursue the development and validation of coupled fire-atmosphere models, the wildland fire modeling community needs validation data sets with scenarios where fire-induced winds influence fire front behavior, and with high temporal and spatial resolution. Helicopter-borne infrared thermal cameras have the potential to monitor landscape-scale wildland fires at high resolution during experimental burns. To extract valuable information from those observations, three-step image processing is required: (a) orthorectification to warp raw images onto a fixed coordinate system grid, (b) segmentation to delineate the fire front location from the orthorectified images, and (c) computation of fire behavior metrics such as the rate of spread from the time-evolving fire front location. This work is dedicated to the first step, orthorectification, and presents a series of algorithms designed to process handheld helicopter-borne thermal images collected during savannah experimental burns. The novelty of the approach lies in its recursive design, which does not require fixed ground control points, hence relaxing the constraint on field-of-view coverage and facilitating the acquisition of high-frequency observations. For four burns ranging from four to eight hectares, long-wave and mid-infrared images were collected at 1 and 3 Hz, respectively, and orthorectified at high spatial resolution (<1 m) with an absolute accuracy estimated to be lower than 4 m. Subsequent computation of fire radiative power is discussed with comparison to concurrent space-borne measurements.
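    As a concrete illustration of the orthorectification step, the following Python sketch warps a raw thermal frame onto a fixed ground grid with a homography and chains frame-to-frame homographies so that no fixed ground control points are needed. It is a minimal sketch, not the authors' code: the function names are invented here, and the paper's actual recursive estimation of each homography (from inter-frame feature matches) is assumed rather than shown.

```python
# Minimal sketch of the warping step: project a raw thermal frame onto
# a fixed, north-up ground grid with a 3x3 homography. Illustrative
# only; the paper estimates homographies recursively, without fixed
# ground control points.
import cv2
import numpy as np

def orthorectify(raw_frame: np.ndarray, H: np.ndarray,
                 grid_shape: tuple) -> np.ndarray:
    """Warp a raw image into the fixed ground grid of (rows, cols)."""
    rows, cols = grid_shape
    return cv2.warpPerspective(raw_frame, H, (cols, rows),
                               flags=cv2.INTER_LINEAR)

def chain_homography(H_prev_to_grid: np.ndarray,
                     H_curr_to_prev: np.ndarray) -> np.ndarray:
    """Register frame k through frame k-1: H_k->grid = H_(k-1)->grid @ H_k->(k-1)."""
    return H_prev_to_grid @ H_curr_to_prev
```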

    Development of an organ failure score in acute liver failure for transplant selection and identification of patients at high risk of futility.

    INTRODUCTION: King's College Hospital criteria are currently used to select liver transplant candidates in acetaminophen-related acute liver failure (ALF). Although widely accepted, they show poor sensitivity in predicting pre-transplant mortality and cannot predict the outcome after surgery. In this study we aimed to develop a new prognostic score that allows more appropriate patient selection for liver transplantation and identifies patients at high risk of futile transplantation. METHODS: We analysed consecutive patients admitted to the Royal Free and Beaujon Hospitals between 1990 and 2015. Clinical and laboratory data at admission were collected. Predictors of 3-month mortality in the non-transplanted patients admitted to the Royal Free Hospital were used to develop the new score, which was then validated against the Beaujon cohort. The Beaujon transplanted group was also used to assess the ability of the new score to identify patients at high risk of transplant futility. RESULTS: 152 patients were included, of whom 44 were transplanted. SOFA, CLIF-C OF and CLIF-ACLF scores were the best predictors of 3-month mortality among non-transplanted patients. CLIF-C OF score and high norepinephrine requirements were the only significant predictors of 3-month mortality in the non-transplanted patients, and were therefore included in the ALF-OFs score. In non-transplanted patients, ALF-OFs showed good performance in both the exploratory (AUC = 0.89; sensitivity = 82.6%; specificity = 89.5%) and the validation cohort (AUC = 0.988; sensitivity = 100%; specificity = 92.3%). The ALF-OFs score was also able to identify patients at high risk of transplant futility (AUC = 0.917; sensitivity = 100%; specificity = 79.2%). CONCLUSION: ALF-OFs is a new prognostic score in acetaminophen-related ALF that can predict both the need for liver transplantation and high risk of transplant futility, improving candidate selection for liver transplantation.
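    A hypothetical sketch of how a two-predictor prognostic score of this kind could be fitted and evaluated is given below. The abstract names only the predictors (CLIF-C OF score and norepinephrine requirement); the logistic-regression form, the variable names, and any coefficients learned here are illustrative assumptions, not the published ALF-OFs score.

```python
# Hypothetical two-predictor prognostic model; the coefficients fitted
# here are placeholders, not the published ALF-OFs score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_score(clif_c_of, norepi_dose, died_3m):
    """Fit 3-month mortality on CLIF-C OF score and norepinephrine dose."""
    X = np.column_stack([clif_c_of, norepi_dose])
    return LogisticRegression().fit(X, died_3m)

def auc(model, clif_c_of, norepi_dose, died_3m):
    """Discrimination (AUC) on a separate, e.g. validation, cohort."""
    X = np.column_stack([clif_c_of, norepi_dose])
    return roc_auc_score(died_3m, model.predict_proba(X)[:, 1])
```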

    Use of Artificial Intelligence as an Innovative Method for Liver Graft Macrosteatosis Assessment

    The worldwide implementation of a liver graft pool using marginal livers (ie, grafts with a high risk of technical complications and impaired function or with a risk of transmitting infection or malignancy to the recipient) has led to a growing interest in developing methods for accurate evaluation of graft quality. Liver steatosis is associated with a higher risk of primary nonfunction, early graft dysfunction, and poor graft survival. The present study aimed to analyze the value of artificial intelligence (AI) in the assessment of liver steatosis during procurement compared with liver biopsy evaluation. A total of 117 consecutive liver grafts from brain-dead donors were included and classified into 2 cohorts: ≥30% versus <30% hepatic steatosis. AI analysis required an intraoperative smartphone picture of the liver as well as a graft biopsy and donor data. First, a new algorithm arising from current visual recognition methods was developed, trained, and validated to obtain automatic liver graft segmentation from smartphone images. Second, a fully automated texture analysis and classification of the liver graft was performed by machine-learning algorithms. Automatic liver graft segmentation from smartphone images achieved an accuracy (Acc) of 98%, whereas the analysis of the liver graft features (cropped picture and donor data) showed an Acc of 89% in graft classification (≥30% versus <30%). This study demonstrates that AI has the potential to assess steatosis in a convenient and noninvasive way, to reliably identify potentially nontransplantable liver grafts, and to avoid improper graft utilization.
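    The abstract does not specify the segmentation or classification architecture, so the sketch below stands in with classical GLCM texture features computed on the segmented graft crop, concatenated with donor covariates and fed to a random-forest classifier. All function names and feature choices are assumptions for illustration, not the study's actual model.

```python
# Illustrative second stage: texture features from the segmented graft
# crop plus donor covariates feed a binary classifier
# (>=30% vs <30% macrosteatosis). Stand-in features, not the paper's.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def texture_features(gray_crop: np.ndarray) -> np.ndarray:
    """GLCM contrast/homogeneity/energy on an 8-bit grayscale crop."""
    glcm = graycomatrix(gray_crop, distances=[1, 3],
                        angles=[0, np.pi / 2], levels=256, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy")])

def train(crops, donor_data, labels):
    """crops: grayscale graft crops; donor_data: per-graft covariates."""
    X = np.vstack([np.hstack([texture_features(c), d])
                   for c, d in zip(crops, donor_data)])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```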

    Short-course antibiotic therapy for critically ill patients treated for postoperative intra-abdominal infection: the DURAPOP randomised clinical trial

    PURPOSE: Shortening the duration of antibiotic therapy (ABT) is a key measure in antimicrobial stewardship. The optimal duration of ABT for treatment of postoperative intra-abdominal infections (PIAI) in critically ill patients is unknown. METHODS: A multicentre prospective randomised trial conducted in 21 French intensive care units (ICU) between May 2011 and February 2015 compared the efficacy and safety of 8-day versus 15-day antibiotic therapy in critically ill patients with PIAI. Among 410 eligible patients (adequate source control and ABT on day 0), 249 patients were randomly assigned on day 8 to either stop ABT immediately (n = 126) or continue ABT until day 15 (n = 123). The primary endpoint was the number of antibiotic-free days between randomisation (day 8) and day 28. Secondary outcomes were death, ICU and hospital length of stay, emergence of multidrug-resistant (MDR) bacteria and reoperation rate, with 45-day follow-up. RESULTS: Patients treated for 8 days had a higher median number of antibiotic-free days than those treated for 15 days (15 [6-20] vs 12 [6-13] days, respectively; P < 0.0001; Wilcoxon rank difference 4.99 days [95% CI 2.99-6.00]). Equivalence was established in terms of 45-day mortality (rate difference 0.038, 95% CI -0.013 to 0.061). Treatments did not differ in terms of ICU and hospital length of stay, emergence of MDR bacteria or reoperation rate, while more subsequent drainages between day 8 and day 45 were observed following short-course ABT (P = 0.041). CONCLUSION: Short-course antibiotic therapy in critically ill ICU patients with PIAI reduces antibiotic exposure. Continuation of treatment until day 15 is not associated with any clinical benefit. ClinicalTrials.gov identifier: NCT01311765.
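    In the style of the primary-endpoint analysis, a minimal sketch comparing antibiotic-free days between arms with a Mann-Whitney test and a Hodges-Lehmann estimate of the rank difference might look as follows; the helper names and the day-28 window arithmetic are illustrative assumptions, not the trial's statistical code.

```python
# Sketch of a primary-endpoint-style analysis: antibiotic-free days
# (day 8 to day 28) compared between arms with a rank test and a
# Hodges-Lehmann estimate of the difference. Illustrative only.
import numpy as np
from scipy.stats import mannwhitneyu

def antibiotic_free_days(stop_day: np.ndarray) -> np.ndarray:
    """Days without antibiotics between randomisation (day 8) and day 28;
    the window is at most 20 days."""
    return np.clip(28 - stop_day, 0, 20)

def compare(afd_short: np.ndarray, afd_long: np.ndarray):
    stat, p = mannwhitneyu(afd_short, afd_long, alternative="two-sided")
    # Hodges-Lehmann estimate: median of all pairwise differences.
    hl = np.median(afd_short[:, None] - afd_long[None, :])
    return hl, p
```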

    60 GHz wireless link and multi-gigabit/s home network based on a low-cost radio-over-fibre infrastructure

    The FUI8 ORIGIN project (Optical Radio Infrastructure for Gigabit/s Indoor Network) addresses the home area network market by proposing a low-cost infrastructure that combines the efficiency of optical fibre for radio distribution with the advantages of wireless transmission. The first implementations and successful tests are presented in this paper.

    Tuberculosis and Indoor Biomass and Kerosene Use in Nepal: A Case–Control Study

    BACKGROUND: In Nepal, tuberculosis (TB) is a major problem. Worldwide, six previous epidemiologic studies have investigated whether indoor cooking with biomass fuel such as wood or agricultural wastes is associated with TB, with inconsistent results. OBJECTIVES: Using detailed information on potential confounders, we investigated the associations between TB and the use of biomass and kerosene fuels. METHODS: A hospital-based case-control study was conducted in Pokhara, Nepal. Cases (n = 125) were women, 20-65 years old, with a confirmed diagnosis of TB. Age-matched controls (n = 250) were female patients without TB. Detailed exposure histories were collected with a standardized questionnaire. RESULTS: Compared with using a clean-burning fuel stove (liquefied petroleum gas, biogas), the adjusted odds ratio (OR) for using a biomass-fuel stove was 1.21 [95% confidence interval (CI), 0.48-3.05], whereas use of a kerosene-fuel stove had an OR of 3.36 (95% CI, 1.01-11.22). The OR for use of biomass fuel for heating was 3.45 (95% CI, 1.44-8.27) and for use of kerosene lamps for lighting was 9.43 (95% CI, 1.45-61.32). CONCLUSIONS: This study provides evidence that the use of indoor biomass fuel, particularly as a source of heating, is associated with TB in women. It also provides the first evidence that using kerosene stoves and wick lamps is associated with TB. These associations require confirmation in other studies. If using kerosene lamps is a risk factor for TB, it would provide strong justification for promoting clean lighting sources, such as solar lamps.
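    As a rough illustration of how adjusted odds ratios of this kind are obtained, the sketch below fits a logistic regression of case status on a fuel-exposure indicator plus confounders. It is a simplification: an age-matched case-control design is properly analysed with conditional logistic regression, and all variable names here are assumed for illustration.

```python
# Simplified adjusted-OR computation via unconditional logistic
# regression; a matched design would use conditional logistic regression.
import numpy as np
import statsmodels.api as sm

def adjusted_or(tb_case: np.ndarray, fuel_exposed: np.ndarray,
                confounders: np.ndarray):
    """tb_case: 0/1 outcome; fuel_exposed: 0/1 exposure;
    confounders: (n, k) matrix of adjustment covariates."""
    X = sm.add_constant(np.column_stack([fuel_exposed, confounders]))
    fit = sm.Logit(tb_case, X).fit(disp=0)
    # Exponentiate the exposure coefficient and its 95% CI.
    return np.exp(fit.params[1]), np.exp(fit.conf_int()[1])
```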

    Early enteral nutrition in critically ill patients: ESICM clinical practice guidelines.

    We provide evidence-based guidelines for early enteral nutrition (EEN) during critical illness, comparing EEN vs. early parenteral nutrition (PN) and vs. delayed EN. We defined "early" EN as EN started within 48 h, independent of type or amount. We listed, a priori, conditions in which EN is often delayed and performed systematic reviews in 24 such subtopics. If sufficient evidence was available, we performed meta-analyses; if not, we qualitatively summarized the evidence and based our recommendations on expert opinion. We used the GRADE approach for guideline development, and the final recommendations were compiled via Delphi rounds. We formulated 17 recommendations favouring initiation of EEN and seven recommendations favouring delaying EN. We performed five meta-analyses: in unselected critically ill patients, and specifically in traumatic brain injury, severe acute pancreatitis, gastrointestinal (GI) surgery and abdominal trauma. EEN reduced infectious complications in unselected critically ill patients, in patients with severe acute pancreatitis, and after GI surgery. We did not detect any evidence of superiority for early PN or delayed EN over EEN. All recommendations are weak because of the low quality of evidence, with several based only on expert opinion. We suggest using EEN in the majority of critically ill patients, under certain precautions. In the absence of evidence, we suggest delaying EN in critically ill patients with uncontrolled shock, uncontrolled hypoxaemia and acidosis, uncontrolled upper GI bleeding, gastric aspirate >500 ml/6 h, bowel ischaemia, bowel obstruction, abdominal compartment syndrome, and high-output fistula without distal feeding access.
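    For the meta-analytic part, a minimal fixed-effect, inverse-variance pooling sketch (e.g. of log risk ratios for infectious complications) is shown below. This is a generic textbook computation, not the guideline's actual statistical code; random-effects models and the GRADE grading built on top of such estimates are omitted.

```python
# Fixed-effect, inverse-variance pooling of per-study log risk ratios.
import numpy as np

def pool_log_rr(log_rr: np.ndarray, se: np.ndarray):
    """log_rr: per-study log risk ratios; se: their standard errors.
    Returns the pooled RR and its 95% CI on the ratio scale."""
    w = 1.0 / se**2                       # inverse-variance weights
    pooled = np.sum(w * log_rr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp(pooled), (np.exp(lo), np.exp(hi))
```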

    Multiple Deprivation, Severity and Latent Sub-Groups: Advantages of Factor Mixture Modelling for Analysing Material Deprivation

    Material deprivation appears in different forms and manifestations. Two individuals with the same deprivation score (i.e. number of deprivations), for instance, are likely to be unable to afford or access entirely or partially different sets of goods and services: one individual may fail to purchase clothes and consumer durables, while another may lack access to healthcare and adequate housing. As such, the number of possible patterns or combinations of multiple deprivation becomes increasingly complex as the number of indicators grows. Given this difficulty, there is interest in poverty research in understanding multiple deprivation, as this analysis might lead to the identification of meaningful population sub-groups that could be the subjects of specific policies. This article applies a factor mixture model (FMM) to a real dataset and discusses its conceptual and empirical advantages and disadvantages with respect to other methods that have been used in poverty research. The exercise suggests that FMM is based on more sensible assumptions (i.e. deprivation indicators covary within each class), provides valuable information with which to understand multiple deprivation, and is useful for understanding the severity of deprivation and the additive properties of deprivation indicators.
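    To make the latent sub-group idea concrete, the sketch below fits a simple Bernoulli (latent class) mixture to binary deprivation indicators with EM. This is a deliberate simplification of a factor mixture model, which would additionally place a continuous severity factor within each class; all parameter names are illustrative.

```python
# Simplified latent-class (Bernoulli mixture) model on 0/1 deprivation
# indicators, fitted with EM; a full FMM would add a continuous
# severity factor within each class, which this sketch omits.
import numpy as np

def bernoulli_mixture_em(X: np.ndarray, k: int, iters: int = 200,
                         seed: int = 0):
    """X: (n, d) binary deprivation indicators; k: latent sub-groups."""
    rng = np.random.default_rng(seed)
    pi = np.full(k, 1.0 / k)                               # class weights
    theta = rng.uniform(0.25, 0.75, size=(k, X.shape[1]))  # item probs
    for _ in range(iters):
        # E-step: class responsibilities from per-class log-likelihoods.
        log_p = (X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update class weights and item probabilities.
        pi = r.mean(axis=0)
        theta = np.clip((r.T @ X) / r.sum(axis=0)[:, None],
                        1e-6, 1 - 1e-6)
    return pi, theta, r
```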