28 research outputs found

    Supplementary document for AutoUnmix: an autoencoder-based spectral unmixing method for multi-color fluorescence microscopy imaging - 6568293.pdf

    No full text
    AUTOUNMIX: AN AUTOENCODER-BASED SPECTRAL UNMIXING METHOD FOR MULTI-COLOR FLUORESCENCE MICROSCOPY IMAGING: SUPPLEMENTAL DOCUMENT

    Light sheet and light field microscopy based on scanning Bessel beam illumination

    No full text
    We developed a Bessel light sheet fluorescence microscopy (LSFM) system to enable high-speed, wide-field intravital imaging of zebrafish and other thick biological samples. The system uses air objectives for convenient mounting of large samples and incorporates an electrically tunable lens for automatic focusing during volumetric imaging. To enhance the precision of 3D imaging, the impact of the electrically tunable lens on system magnification is investigated and corrected through designed experiments. Despite using Bessel beams with side lobes, we achieved satisfactory image quality through a straightforward background noise subtraction method, eliminating the need for further deconvolution. Our system images zebrafish at a resolution comparable to commercial confocal microscopy in just 1/40th of the time. We also introduced light field microscopy (LFM) to further improve the temporal resolution of 3D in vivo imaging. Beyond the 28-fold speed enhancement, the comparison of LFM and LSFM results reveals a previously unreported aspect of LFM imaging concerning image dynamic range.
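    The abstract's background noise subtraction could take many forms; one minimal sketch, assuming a per-pixel background estimated as a low percentile along the stack axis (the percentile value and the clipping at zero are illustrative assumptions, not the authors' parameters):

    ```python
    import numpy as np

    def subtract_background(stack, percentile=10.0):
        """Remove an estimated per-pixel background from an image stack.

        Illustrative stand-in for a background noise subtraction step:
        the background is taken as a low percentile along the z-axis,
        and the result is clipped at zero after subtraction.
        """
        background = np.percentile(stack, percentile, axis=0)
        return np.clip(stack - background, 0.0, None)

    # Synthetic 3-plane stack with a constant offset of 5 and one bright voxel
    stack = np.full((3, 4, 4), 5.0)
    stack[1, 2, 2] += 10.0
    out = subtract_background(stack)  # offset removed, bright voxel kept
    ```

    A percentile estimate is robust to sparse bright structures, which is why it is a common default when no separate dark-frame measurement is available.
    
    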

    Tough and Biocompatible Hydrogel Tissue Adhesives Entirely Based on Naturally Derived Ingredients

    No full text
    Hydrogel tissue adhesives have tremendous potential in biological engineering, but existing ones generally do not offer adequate mechanical robustness and acceptable biocompatibility at the same time. Herein, we report a one-step method to synthesize tough and biocompatible hydrogel tissue adhesives made entirely of naturally derived ingredients. We select two natural polymers, chitosan and gelatin, to construct the backbone and a bioderived compound, genipin, as the cross-linker. We show that, upon gelation, genipin cross-links chitosan and gelatin to form two interpenetrating networks and interlinks them to tissue surfaces. Meanwhile, hydrogen bonds form in the matrix to strengthen the networks and at the interface to strengthen the adhesion between the hydrogel and tissue. Furthermore, we deliberately use high initial polymer contents to induce topological entanglements in the polymer networks, toughening the hydrogel. The resulting chitosan–gelatin hydrogel provides a tough matrix, and the robust covalent interlinks and hydrogen bonds provide a strong interface, achieving a tensile strength of ∼190 kPa, a fracture toughness of 205.7 J/m², a mode I adhesion energy of 197.6 J/m², and a mode II adhesion energy of 51.2 J/m². We demonstrate that the hydrogel tissue adhesive is injectable, degradable, and noncytotoxic and can be used for the controlled release of the anticancer drug cisplatin. Such all-natural, tough, and biocompatible hydrogels are promising tissue adhesives for biomedical and related applications.

    Image2_Fundus photograph-based cataract evaluation network using deep learning.PNG

    No full text
    Background: Our study aims to develop an artificial intelligence-based high-precision cataract classification and grading evaluation network using fundus images. Methods: We utilized 1,340 color fundus photographs from 875 participants (aged 50–91 years at image capture) from the Beijing Eye Study 2011. Four experienced and trained ophthalmologists classified these cases based on slit-lamp and retro-illuminated images. Cataracts were classified into three types based on the location of the lens opacity: cortical cataract, nuclear cataract, and posterior subcapsular cataract. We developed a Dual-Stream Cataract Evaluation Network (DCEN) that uses color fundus photographs to achieve simultaneous cataract type classification and severity grading. The accuracy of severity grading was enhanced by incorporating the results of type classification. Results: The DCEN method achieved an accuracy of 0.9762, a sensitivity of 0.9820, an F1 score of 0.9401, and a kappa coefficient of 0.8618 in the cataract classification task. By incorporating type features, the grading of cataract severity improved to an accuracy of 0.9703, a sensitivity of 0.9344, an F1 score of 0.9555, and a kappa coefficient of 0.9111. We utilized Grad-CAM visualization to analyze and summarize the fundus image features of different cataract types, and we verified our conclusions by examining the information entropy of the retinal vascular region. Conclusion: The proposed DCEN reliably and comprehensively evaluates the condition of cataracts from fundus images. Applying deep learning to clinical cataract assessment offers simplicity, speed, and efficiency.
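    The reported metrics (accuracy, kappa coefficient) have standard definitions; a small self-contained sketch with hypothetical labels for the three cataract types (the toy label vectors are invented for illustration, not the study's data):

    ```python
    import numpy as np

    def accuracy(y_true, y_pred):
        """Fraction of predictions matching the reference labels."""
        return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

    def cohen_kappa(y_true, y_pred):
        """Cohen's kappa: observed agreement corrected for chance agreement."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        labels = np.unique(np.concatenate([y_true, y_pred]))
        po = np.mean(y_true == y_pred)  # observed agreement
        pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)
        return float((po - pe) / (1 - pe))

    # Hypothetical labels (0 = cortical, 1 = nuclear, 2 = posterior subcapsular)
    y_true = [0, 0, 1, 1, 2, 2]
    y_pred = [0, 0, 1, 2, 2, 2]
    acc = accuracy(y_true, y_pred)       # 5 of 6 correct
    kappa = cohen_kappa(y_true, y_pred)  # lower than accuracy, as expected
    ```

    Kappa is always at or below accuracy because it discounts agreement expected by chance, which is why the abstract reports both.
    
    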

    Image3_Fundus photograph-based cataract evaluation network using deep learning.PNG

    No full text

    Image1_Fundus photograph-based cataract evaluation network using deep learning.TIF

    No full text

    Lensless coherent diffraction imaging based on spatial light modulator with unknown modulation curve

    No full text
    Lensless imaging has become a popular research field in recent years owing to its small size, wide field of view, and low aberration. However, traditional lensless imaging methods can suffer from slow convergence, mechanical errors, and conjugate-solution interference, which limit their further application and development. In this work, we propose a lensless imaging method based on a spatial light modulator (SLM) with an unknown modulation curve. In our imaging system, the SLM modulates the wavefront of the object, and a ptychographic scanning algorithm recovers the complex amplitude information even when the SLM modulation curve is inaccurate or unknown. In addition, we design a split-beam interference experiment to calibrate the modulation curve of the SLM; using the calibrated modulation function as the initial value of the extended ptychographic iterative engine (ePIE) algorithm improves the convergence speed. We further analyze the effects of the modulation function, algorithm parameters, and the characteristics of the coherent light source on the quality of the reconstructed image. Simulated and real experiments show that the proposed method is superior to traditional mechanical scanning methods in recovery speed and accuracy, with a recovered resolution of up to 14 μm.
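    The ePIE algorithm the abstract builds on has a well-known single-position update; a generic sketch of that step (following the standard Maiden–Rodenburg scheme, not the authors' SLM-specific implementation):

    ```python
    import numpy as np

    def epie_update(obj, probe, diff_amp, alpha=1.0, beta=1.0):
        """One ePIE update at a single scan position (illustrative sketch).

        obj, probe : complex arrays of equal shape
        diff_amp   : measured far-field amplitude (sqrt of intensity)
        """
        exit_wave = obj * probe
        F = np.fft.fft2(exit_wave)
        # Replace the Fourier amplitude with the measurement, keep the phase
        F_rev = diff_amp * np.exp(1j * np.angle(F))
        diff = np.fft.ifft2(F_rev) - exit_wave
        # Gradient-like object and probe updates
        obj_new = obj + alpha * np.conj(probe) / np.max(np.abs(probe)) ** 2 * diff
        probe_new = probe + beta * np.conj(obj) / np.max(np.abs(obj)) ** 2 * diff
        return obj_new, probe_new

    # Consistency check: if the measured amplitude already matches the exit
    # wave, the update leaves object and probe unchanged.
    obj = np.ones((8, 8), dtype=complex)
    probe = 0.5 * np.ones((8, 8), dtype=complex)
    diff_amp = np.abs(np.fft.fft2(obj * probe))
    obj_new, probe_new = epie_update(obj, probe, diff_amp)
    ```

    In the paper's variant, the calibrated SLM modulation function would serve as the initial probe estimate, which is what accelerates convergence.
    
    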

    Rapid and Fully Microfluidic Ebola Virus Detection with CRISPR-Cas13a

    No full text
    Highly infectious illnesses caused by pathogens are endemic, especially in developing nations with limited laboratory infrastructure and trained personnel. Rapid point-of-care (POC) serological assays with minimal sample manipulation and low cost are desired in clinical practice. In this study, we report an automated POC system for Ebola RNA detection with the RNA-guided RNA endonuclease Cas13a, utilizing its collateral RNA degradation after activation. After automated microfluidic mixing and hybridization, the nonspecific cleavage products of Cas13a are immediately measured by a custom integrated fluorometer that is small in size and convenient for in-field diagnosis. Within 5 min, a detection limit of 20 pfu/mL (5.45 × 10⁷ copies/mL) of purified Ebola RNA is achieved. This isothermal and fully solution-based diagnostic method is rapid, amplification-free, simple, and sensitive, establishing a key technology toward a useful POC diagnostic platform.
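    The detection limit is quoted in both plaque-forming units and RNA copies; the implied copies-per-pfu ratio follows directly from the two figures:

    ```python
    # Detection limit as reported in the abstract
    pfu_per_ml = 20.0
    copies_per_ml = 5.45e7

    # Implied RNA copies per plaque-forming unit (~2.7 million)
    copies_per_pfu = copies_per_ml / pfu_per_ml
    ```

    A large copies-per-pfu ratio is typical for RNA viruses, where many genome copies are non-infectious, so the copies/mL figure is the more direct measure of assay sensitivity.
    
    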

    Rapid Escherichia coli Trapping and Retrieval from Bodily Fluids via a Three-Dimensional Bead-Stacked Nanodevice

    No full text
    A novel micro- and nanofluidic device stacked with magnetic beads has been developed to efficiently trap, concentrate, and retrieve Escherichia coli (E. coli) from bacterial suspension and pig plasma. The small voids between the magnetic beads physically isolate the bacteria in the device. We used computational fluid dynamics, three-dimensional (3D) tomography, and machine learning to probe and explain the bead stacking in a small 3D space at various flow rates. A combination of beads of different sizes achieves a high capture efficiency (∼86%) at a flow rate of 50 μL/min. Leveraging the high deformability of this device, an E. coli sample can be retrieved from the designated bacterial suspension by applying a higher flow rate followed by rapid magnetic separation. This unique function is also utilized to concentrate E. coli cells from the original bacterial suspension: an on-chip concentration factor of ∼11× is achieved by inputting 1300 μL of the E. coli sample and concentrating it into 100 μL of buffer. Importantly, this multiplexed, miniaturized, inexpensive, and transparent device is easy to fabricate and operate, making it ideal for pathogen separation in both laboratory and point-of-care settings.
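    The reported ∼11× concentration factor is consistent with the stated volumes and capture efficiency; a back-of-envelope check, assuming trapped cells are released quantitatively into the elution buffer:

    ```python
    # Figures reported in the abstract
    input_volume_ul = 1300.0
    output_volume_ul = 100.0
    capture_efficiency = 0.86  # ~86%

    # 13x volume reduction, discounted by the capture efficiency
    volume_reduction = input_volume_ul / output_volume_ul
    concentration_factor = volume_reduction * capture_efficiency  # ~11.2x
    ```

    The product, about 11.2×, matches the reported on-chip factor of ∼11×, suggesting that losses during retrieval are dominated by the trapping step rather than the elution.
    
    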