
    Semi-Supervised Self-Taught Deep Learning for Finger Bones Segmentation

    Segmentation stands at the forefront of many high-level vision tasks. In this study, we focus on segmenting finger bones within a newly introduced semi-supervised self-taught deep learning framework, which consists of a student network and a stand-alone teacher module. The whole system is boosted in a lifelong-learning manner, wherein at each step the teacher module provides a refinement for the student network to learn from newly unlabeled data. Experimental results demonstrate the superiority of the proposed method over conventional supervised deep learning methods.

    Comment: accepted at IEEE BHI 2019.
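    The student/teacher interplay described above can be sketched as a generic self-training loop. This is a minimal illustration, not the paper's architecture: a 1-D threshold classifier stands in for the segmentation network, and simple confidence filtering stands in for the teacher module's refinement.

```python
def fit_threshold(xs, ys):
    """Toy 'student': pick the threshold t (predict x > t) with fewest errors."""
    best_t, best_err = 0.0, float("inf")
    for t in sorted(xs):
        err = sum((x > t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def self_train(labeled, unlabeled, rounds=3, margin=0.5):
    """Toy self-taught loop: at each step a 'teacher' pseudo-labels only the
    confident unlabeled points, and the student retrains on the union."""
    xs, ys = list(labeled[0]), list(labeled[1])
    t = fit_threshold(xs, ys)
    for _ in range(rounds):
        # teacher step: pseudo-label unlabeled points far from the boundary
        confident = [(x, x > t) for x in unlabeled if abs(x - t) > margin]
        # student step: retrain on labeled + pseudo-labeled data
        t = fit_threshold(xs + [x for x, _ in confident],
                          ys + [y for _, y in confident])
    return t
```

    In the actual framework the teacher refines dense segmentation masks rather than scalar labels, but the control flow (teacher refines, student relearns, repeat on new unlabeled data) is the same.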

    Identification of a laccase Glac15 from Ganoderma lucidum 77002 and its application in bioethanol production

    Background: Laccases have potential applications in the detoxification of lignocellulosic biomass after thermochemical pretreatment and in the production of value-added products or biofuels from renewable biomass. However, their application in large-scale industrial and environmental processes has been severely thwarted by the high cost of commercial laccases. It is therefore necessary to identify new laccases that cost less yet offer higher activity for detoxifying lignocellulosic hydrolysates and better efficiency for producing biofuels such as bioethanol. Laccases from Ganoderma lucidum are promising candidates for processing lignocellulosic biomass.

    Results: G. lucidum 77002 produces three laccase isoenzymes with a total laccase activity of 141.1 U/mL within 6 days when grown on wheat bran and peanut powder as energy sources in liquid culture medium. A new isoenzyme, named Glac15, was identified, purified, and characterized. Glac15 has an optimum pH of 4.5 to 5.0 and an optimum temperature of 45°C to 55°C for the substrates tested. It was stable at pH values from 5.0 to 7.0 and at temperatures below 55°C, retaining more than 80% of its activity after incubation for 2 h. When used in a bioethanol production process, 0.05 U/mL Glac15 removed 84% of the phenolic compounds in the prehydrolysate, and the yeast biomass reached an optical density at 600 nm (OD600) of 11.81, compared with no growth in the untreated control. Adding Glac15 before cellulase hydrolysis had no significant effect on glucose recovery; however, ethanol yields improved in laccase-treated samples relative to controls. Final ethanol concentrations of 9.74, 10.05, 10.11, and 10.81 g/L were obtained from samples containing only solid content, solid content treated with Glac15, solid content containing 50% prehydrolysate, and solid content containing 50% prehydrolysate treated with Glac15, respectively.

    Conclusions: The G. lucidum laccase Glac15 shows potential for the bioethanol production industry.

    “In-situ” lipase-catalyzed cotton coating with polyesters from ethylene glycol and glycerol

    Available online 12 January 2018. Several polyesters were synthesized from ethylene glycol, glycerol, and the dimethyl esters of adipate and succinate. Immobilized Candida antarctica lipase B was used as catalyst for 6 hours under vacuum at 70°C without any additional solvents. The highest conversion rate, 88.5%, was obtained for the polymerization of poly(ethylene adipate), as evaluated by 1H NMR. MALDI-TOF analysis indicated that most of the oligomers formed were dimers or trimers. After successfully synthesizing the polyesters, we established the optimal conditions for their in-situ coating onto cotton substrates with a soluble lipase from Thermomyces lanuginosus. This work presents a novel bio-approach to impart hydrophobic properties to coated cotton-based fiber materials.

    Funding: This work was supported by a Chinese government scholarship under the State Scholarship Fund (grant number 201706790049), the Jiangsu Province Scientific Research Innovation Project for Academic Graduate Students (grant number KYLX16_0788), the Training Fund for Excellent Doctoral Students at Jiangnan University, Key Projects of Governmental Cooperation in International Scientific and Technological Innovation (grant number 2016YFE0115700), and the National Key R&D Program of China (grant number 2017YFB0309100). It was also supported by the Portuguese Foundation for Science and Technology (FCT) under the scope of the strategic funding of the UID/BIO/04469/2013 unit and COMPETE 2020 (grant number POCI-01-0145-FEDER-006684) and under Project RECI/BBB-EBI/0179/2012 (grant number FCOMP-01-0124-FEDER-027462), and by the BioTecNorte operation (grant number NORTE-01-0145-FEDER-000004) funded by the European Regional Development Fund under the scope of Norte2020 – Programa Operacional Regional do Norte. It was further supported by the National Natural Science Foundation of China (grant numbers 31470509 and 31201134), the Industry-Academic Joint Technological Prospective Fund Project of Jiangsu Province (grant numbers BY2013015-24 and BY2016022-23), the Fundamental Research Funds for the Central Universities (grant number JUSRP51622A), and a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions.

    Knowledge-enhanced Visual-Language Pre-training on Chest Radiology Images

    While multi-modal foundation models pre-trained on large-scale data have been successful in natural language understanding and vision recognition, their use in medical domains is still limited due to the fine-grained nature of medical tasks and the high demand for domain knowledge. To address this challenge, we propose a novel approach called Knowledge-enhanced Auto Diagnosis (KAD), which leverages existing medical domain knowledge to guide vision-language pre-training using paired chest X-rays and radiology reports. We evaluate KAD on four external X-ray datasets and demonstrate that its zero-shot performance is not only comparable to that of fully supervised models but also superior to the average of three expert radiologists for three (out of five) pathologies, with statistical significance. Moreover, when few-shot annotation is available, KAD outperforms all existing approaches in fine-tuning settings, demonstrating its potential for application in different clinical scenarios.
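    Zero-shot evaluation of this kind typically scores an image embedding against a text embedding for each candidate pathology. A minimal sketch, assuming toy 2-D embeddings (KAD's actual encoders and knowledge integration are far richer):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_diagnose(image_emb, pathology_prompts):
    """Score each pathology by similarity of its text-prompt embedding
    to the image embedding; no task-specific training is needed."""
    return {name: cosine(image_emb, emb)
            for name, emb in pathology_prompts.items()}
```

    In practice the prompt embeddings come from the knowledge-enhanced text encoder and the image embedding from the X-ray encoder; the highest-scoring pathology (or a per-pathology threshold) yields the zero-shot prediction.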

    Towards Generalist Foundation Model for Radiology

    In this study, we aim to initiate the development of a Radiology Foundation Model, termed RadFM. We consider the construction of foundation models thoroughly from the perspectives of data, model design, and evaluation. Our contributions can be summarized as follows: (i) we construct a large-scale Medical Multi-modal Dataset, MedMD, consisting of 16M 2D and 3D medical scans; to the best of our knowledge, this is the first multi-modal dataset containing 3D medical scans. (ii) We propose an architecture that enables visually conditioned generative pre-training, allowing text input interleaved with 2D or 3D medical scans to generate responses for diverse radiologic tasks. The model was initially pre-trained on MedMD and subsequently fine-tuned on RadMD, a cleaned, radiology-specific version of MedMD containing 3M radiologic visual-language pairs. (iii) We propose a new evaluation benchmark comprising five tasks, aiming to comprehensively assess the capability of foundation models in handling practical clinical problems. Our experimental results confirm that RadFM significantly outperforms existing multi-modal foundation models. The code, data, and model checkpoints will all be made publicly available to promote further research and development in the field.
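    The interleaved text/scan input in (ii) can be illustrated with a toy sequence builder. This is a sketch under the assumption that each modality is mapped into a shared token-embedding space; `embed_text` and `embed_scan` are hypothetical placeholders, not RadFM's API.

```python
def build_interleaved_input(segments, embed_text, embed_scan):
    """Flatten mixed text/scan segments, in reading order, into a single
    embedding sequence for a visually conditioned generative decoder."""
    seq = []
    for kind, payload in segments:
        if kind == "text":
            seq.extend(embed_text(payload))   # text -> token embeddings
        elif kind == "scan":
            seq.extend(embed_scan(payload))   # 2D/3D scan -> visual embeddings
        else:
            raise ValueError(f"unknown segment kind: {kind}")
    return seq

# toy embedders: one 1-D embedding per word, a fixed four per scan
toy_text = lambda s: [[float(len(w))] for w in s.split()]
toy_scan = lambda scan: [[0.0]] * 4

report = [("text", "Chest CT shows"),
          ("scan", "ct_volume"),
          ("text", "no acute findings")]
seq = build_interleaved_input(report, toy_text, toy_scan)
```

    The decoder then attends over this single sequence, so free text and scans at arbitrary positions condition the generated response uniformly.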

    MedKLIP: Medical Knowledge Enhanced Language-Image Pre-Training in Radiology

    In this paper, we consider enhancing medical visual-language pre-training (VLP) with domain-specific knowledge, by exploiting the paired image-text reports from daily radiological practice. In particular, we make the following contributions: First, unlike existing works that directly process the raw reports, we adopt a novel triplet extraction module to extract the medically relevant information, avoiding unnecessary complexity from language grammar and enhancing the supervision signals. Second, we propose a novel triplet encoding module with entity translation by querying a knowledge base, to exploit the rich domain knowledge of the medical field and implicitly build relationships between medical entities in the language embedding space. Third, we propose a Transformer-based fusion model for spatially aligning the entity descriptions with visual signals at the image-patch level, enabling diagnosis grounded in image regions. Fourth, we conduct thorough experiments to validate the effectiveness of our architecture and benchmark it on numerous public datasets, e.g., ChestX-ray14, RSNA Pneumonia, SIIM-ACR Pneumothorax, COVIDx CXR-2, COVID Rural, and EdemaSeverity. In both zero-shot and fine-tuning settings, our model demonstrates strong performance compared with previous methods on disease classification and grounding.
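    The triplet extraction step can be illustrated with a deliberately simplified rule-based sketch producing (entity, position, exists) triplets. The tiny vocabularies and the negation check here are hypothetical stand-ins; MedKLIP's actual extraction module is considerably more sophisticated.

```python
import re

# hypothetical miniature vocabularies for illustration only
ENTITIES = {"pneumonia", "effusion", "opacity"}
POSITIONS = {"left", "right", "bilateral", "upper", "lower"}
NEGATIONS = {"no", "without", "negative"}

def extract_triplets(sentence):
    """Reduce a report sentence to an (entity, position, exists) triplet,
    discarding grammatical structure that adds no supervision signal."""
    words = re.findall(r"[a-z]+", sentence.lower())
    exists = not any(w in NEGATIONS for w in words)
    entity = next((w for w in words if w in ENTITIES), None)
    position = next((w for w in words if w in POSITIONS), None)
    return (entity, position, exists)
```

    Each triplet is then encoded (with entity translation against a knowledge base) instead of the raw sentence, giving the image encoder a cleaner supervision target.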