
    High-Performance SiC–Based Solar Receivers for CSP: Component Manufacturing and Joining

    Concentrated solar power (CSP) is an important option as a competitive, secure, and sustainable energy system. At the moment, cost-effective solutions are required for a wider-scale deployment of CSP technology: in particular, the industrial exploitation of CSP has so far been hindered by limitations in the materials used for the central receiver, a key component of the system. In this context, the H2020 NEXTOWER project is focused on next-generation CSP technologies, particularly on advanced materials for high temperatures (e.g., >900 °C) and extreme application environments (e.g., corrosive). The research activity described in this paper is focused on two industrial solutions for new SiC ceramic receivers intended for continued operation under high thermal gradients: porous SiC and silicon-infiltrated silicon carbide (SiSiC) ceramics. The new receivers should be mechanically tough and highly thermally conductive. This paper presents the activity related to the manufacturing of these components, their joining, and their characterization.

    Evaluation of an Artificial Intelligence-based Detector of Sub- and Intra-Retinal Fluid on a large set of OCT volumes in AMD and DME

    Introduction: In this retrospective cohort study, we evaluated the performance and analyzed the insights of an artificial intelligence (AI) algorithm for detecting retinal fluid in spectral-domain OCT volume scans from a large cohort of patients with neovascular age-related macular degeneration (AMD) and diabetic macular edema (DME). Methods: A total of 3,981 OCT volumes from 374 patients with AMD and 11,501 OCT volumes from 811 patients with DME were acquired with the Heidelberg Spectralis OCT device (Heidelberg Engineering Inc., Heidelberg, Germany) between 2013 and 2021. Each OCT volume was annotated for the presence or absence of intraretinal fluid (IRF) and subretinal fluid (SRF) by masked reading center graders (ground truth). The performance of a previously published AI algorithm in detecting IRF and SRF separately, as well as that of a combined fluid detector (IRF and/or SRF), was evaluated on the same OCT volumes. The sources of disagreement between annotation and prediction and their relationship to central retinal thickness were analyzed. We computed the mean areas under the receiver operating characteristic curves (AUC) and the precision-recall curves (AP), as well as accuracy, sensitivity, specificity, and precision. Results: The AUC for IRF was 0.92 and 0.98, and for SRF 0.98 and 0.99, in the AMD and DME cohorts, respectively. The AP for IRF was 0.89 and 1.00, and for SRF 0.97 and 0.93, in the AMD and DME cohorts, respectively. The accuracy, specificity, and sensitivity for IRF were 0.87, 0.88, and 0.84 in the AMD cohort and 0.93, 0.95, and 0.93 in the DME cohort; for SRF they were 0.93, 0.93, and 0.93, and 0.95, 0.95, and 0.95, respectively. For detecting any fluid, the AUC was 0.95 and 0.98, and the accuracy, specificity, and sensitivity were 0.89, 0.93, and 0.90, and 0.95, 0.88, and 0.93, in the AMD and DME cohorts, respectively. False positives occurred in the presence of retinal shadow artifacts and strong retinal deformation. False negatives were due to small hyporeflective areas in combination with poor image quality. The combined detector correctly predicted more OCT volumes than the single IRF and SRF detectors: 89.0% versus 81.6% in the AMD cohort and 93.1% versus 88.6% in the DME cohort. Discussion/Conclusion: The AI-based fluid detector achieves high performance for retinal fluid detection in a very large dataset dedicated to AMD and DME. Combining the single detectors provides better fluid detection accuracy than considering them separately. The observed independence of the single detectors ensures that the detectors learned features particular to IRF and SRF.
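
    The abstract reports per-volume AUC, AP, accuracy, sensitivity, specificity, and precision for the single IRF and SRF detectors and for a combined (IRF and/or SRF) detector. The snippet below is a minimal sketch of how such a per-volume evaluation could be set up; combining the detectors by taking the maximum of their scores is an assumption made for illustration, not the authors' published method.

```python
# Minimal sketch of a per-volume evaluation like the one described above
# (not the authors' code). Assumes each OCT volume has a ground-truth label
# for IRF and SRF plus a probability score from each single detector; the
# "combined" detector here is simply the maximum of the two scores.
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             accuracy_score, recall_score, precision_score)

def evaluate(y_true, y_score, threshold=0.5):
    """Return the metrics reported in the abstract for one detector."""
    y_pred = (y_score >= threshold).astype(int)
    return {
        "AUC": roc_auc_score(y_true, y_score),
        "AP": average_precision_score(y_true, y_score),
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": recall_score(y_true, y_pred),              # true positive rate
        "specificity": recall_score(y_true, y_pred, pos_label=0),  # true negative rate
        "precision": precision_score(y_true, y_pred),
    }

# Example with dummy data (one entry per OCT volume).
irf_true = np.array([1, 0, 1, 1, 0])
srf_true = np.array([0, 0, 1, 0, 1])
irf_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1])
srf_score = np.array([0.1, 0.3, 0.8, 0.2, 0.95])

any_fluid_true = np.maximum(irf_true, srf_true)    # IRF and/or SRF present
combined_score = np.maximum(irf_score, srf_score)  # simple score fusion (assumption)

print(evaluate(irf_true, irf_score))
print(evaluate(any_fluid_true, combined_score))
```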

    Fusion of fundus images and MRI data of the human eye

    Purpose: Ophthalmologists are confronted with a set of different image modalities to diagnose eye tumors, e.g., fundus photography, CT, and MRI. However, these images are often complementary and represent pathologies differently; some aspects of tumors can only be seen in a particular modality. A fusion of modalities would improve the contextual information for diagnosis. The presented work attempts to register color fundus photography with MRI volumes, which would complement the low-resolution 3D information in the MRI with high-resolution 2D fundus images. Methods: MRI volumes were acquired from 12 infants under the age of 5 with unilateral retinoblastoma. The contrast-enhanced T1-FLAIR sequence was performed with an isotropic resolution of less than 0.5 mm. Fundus images were acquired with a RetCam camera. For healthy eyes, two landmarks were used: the optic disk and the fovea. The eyes were detected and extracted from the MRI volume using a 3D adaptation of the Fast Radial Symmetry Transform (FRST). The cropped volume was automatically segmented using the Split Bregman algorithm. The optic nerve was enhanced by a Frangi vessel filter, and the optic disk was found by intersecting the nerve with the retina. The fovea position was estimated by constraining it with the angle between the optical and visual axes as well as the distance from the optic disk. The optical axis was detected automatically by fitting a parabola onto the lens surface. On the fundus, the optic disk and the fovea were detected using the method of Budai et al. Finally, the image was projected onto the segmented surface using the lens position as the camera center. In tumor-affected eyes, the manually segmented tumors were used instead of the optic disk and macula for the registration. Results: In all 12 MRI volumes tested, the 24 eyes were found correctly, including healthy and pathological cases. In healthy eyes, the optic nerve head was found in all tested eyes with an error of 1.08 ± 0.37 mm. A successful registration can be seen in Figure 1. Conclusions: The presented method is a step toward automatic fusion of modalities in ophthalmology. The combination enhances the MRI volume with the higher resolution of the color fundus image on the retina. Tumor treatment planning is improved by avoiding critical structures, and disease progression monitoring is made easier.
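
    The final step projects each fundus pixel onto the segmented retinal surface from the MRI, using the lens position as the camera center. Below is an illustrative sketch of that projection under a simplifying assumption: the retina is modeled as a sphere rather than the segmented MRI surface, and the numeric values for eye radius and lens position are purely illustrative, not taken from the paper.

```python
# Illustrative sketch of the projection step: mapping one fundus-image viewing
# ray onto a spherical retina model, with the lens position as camera center
# (the paper projects onto the segmented MRI surface instead).
import numpy as np

def project_pixel_to_retina(pixel_dir, lens_center, eye_center, eye_radius):
    """Intersect the viewing ray (lens_center + t * pixel_dir) with a spherical
    retina of radius eye_radius centered at eye_center; return the far
    intersection point, i.e. the retinal surface opposite the lens."""
    d = pixel_dir / np.linalg.norm(pixel_dir)
    oc = lens_center - eye_center
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - eye_radius ** 2
    disc = b * b - c
    if disc < 0:
        return None                       # ray misses the globe
    t = -b + np.sqrt(disc)                # far root = retina behind the lens
    return lens_center + t * d

# Example with assumed, illustrative geometry (millimetres, MRI coordinates).
eye_center = np.array([0.0, 0.0, 0.0])
eye_radius = 11.0                         # rough infant-eye radius (assumption)
lens_center = np.array([0.0, 0.0, 9.0])   # lens sits anteriorly on the optical axis
ray = np.array([0.05, 0.02, -1.0])        # viewing direction of one fundus pixel
print(project_pixel_to_retina(ray, lens_center, eye_center, eye_radius))
```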

    Retinal slit lamp video mosaicking

    Purpose: To this day, the slit lamp remains the first tool used by an ophthalmologist to examine a patient's eyes. Imaging of the retina, however, poses a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view, and non-uniform illumination. The use of slit lamp images for documentation and analysis therefore remains extremely challenging for ophthalmologists due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking method, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. Methods: Our method is composed of three parts: (i) viable content segmentation, (ii) global registration, and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features (SURF) are used to find pairwise translations between frames with robust random sample consensus (RANSAC) estimation, and graph-based simultaneous localization and mapping is used for global bundle adjustment. Foreground-aware blending based on feathering merges the video frames into comprehensive mosaics. Results: Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results of our method and of state-of-the-art methods were compared and rated by ophthalmologists, who showed a strong preference for the large field of view provided by our method. Conclusions: The proposed method for globally registering retinal slit lamp images into comprehensive mosaics improves over state-of-the-art methods and is qualitatively preferred.
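
    The registration step matches SURF keypoints between consecutive frames and fits a translation-only motion model with RANSAC before global bundle adjustment. The snippet below is a minimal sketch of that pairwise step only (the graph-based SLAM refinement is omitted); it assumes grayscale frames and an opencv-contrib build with the non-free SURF module enabled.

```python
# Pairwise registration sketch: SURF keypoints matched between two consecutive
# frames, then a translation-only model fitted with a simple 1-point RANSAC.
import numpy as np
import cv2

def pairwise_translation(img_a, img_b, n_iter=500, inlier_tol=3.0):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_a, des_a = surf.detectAndCompute(img_a, None)
    kp_b, des_b = surf.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_a, des_b)
    if not matches:
        return None, 0

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    diffs = pts_b - pts_a                      # candidate translations

    best_t, best_inliers = None, 0
    rng = np.random.default_rng(0)
    for _ in range(n_iter):                    # RANSAC over a 1-point model
        t = diffs[rng.integers(len(diffs))]
        inliers = int(np.sum(np.linalg.norm(diffs - t, axis=1) < inlier_tol))
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# Usage (grayscale frames from the slit lamp video):
# t, n_inliers = pairwise_translation(frame_0, frame_1)
```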

    Stereo-vision three-dimensional reconstruction of curvilinear structures imaged with a TEM

    Deriving accurate three-dimensional (3-D) structural information of materials at the nanometre level is often crucial for understanding their properties. Tomography in transmission electron microscopy (TEM) is a powerful technique that provides such information. It is, however, demanding and sometimes inapplicable, as it requires the acquisition of multiple images within a large tilt arc and hence prolonged exposure to electrons. In some cases, prior knowledge about the structure can tremendously simplify the 3-D reconstruction if incorporated adequately. Here, a novel algorithm is presented that is able to produce a full 3-D reconstruction of curvilinear structures from a stereo pair of TEM images acquired within a small tilt range spanning from only a few degrees to a few tens of degrees. The reliability of the algorithm is demonstrated through the reconstruction of a model 3-D object from its simulated projections and is compared with that of conventional tomography. The method is also demonstrated experimentally for the 3-D visualization of dislocation arrangements in a deformed metallic micro-pillar.
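
    The reconstruction relies on the stereo principle that a feature's depth follows from its parallax between the two tilted projections. The sketch below illustrates only that underlying relation for a pair of images taken at symmetric tilts about the same axis, not the paper's curvilinear-structure algorithm; the traced point coordinates are invented for the example.

```python
# Stereo principle behind a TEM tilt pair: for projections taken at +theta/2 and
# -theta/2 about the tilt axis, a feature's depth z follows from its parallax
# P = x1 - x2 via z = P / (2 * sin(theta/2)).
import numpy as np

def depth_from_parallax(x_plus, x_minus, tilt_deg):
    """x_plus, x_minus: feature x-positions (same units, e.g. nm) measured
    perpendicular to the tilt axis in the +tilt_deg/2 and -tilt_deg/2 images;
    returns the depth along the beam direction."""
    theta = np.deg2rad(tilt_deg)
    return (np.asarray(x_plus) - np.asarray(x_minus)) / (2.0 * np.sin(theta / 2.0))

# Example: points traced along a dislocation line, 10 degrees total tilt
# (illustrative numbers only).
x_plus = np.array([12.0, 15.2, 18.9])    # nm, +5 degree image
x_minus = np.array([11.4, 14.1, 17.2])   # nm, -5 degree image
print(depth_from_parallax(x_plus, x_minus, tilt_deg=10.0))
```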