2,356 research outputs found

    VEGF-A and neuropilin 1 (NRP1) shape axon projections in the developing CNS via dual roles in neurons and blood vessels

    We thank Vann Bennett and Daizing Zhou (Department of Biochemistry, Duke University Medical Center) for the design and generation of the Brn3bCre knock-in mice. We are grateful to Bennett Alakakone and Susan Reijntjes for help with preliminary experiments and to Anastasia Lampropoulou for preparing tissue culture media. We thank the staff of the Biological Resource Unit at the UCL Institute of Ophthalmology and the University of Aberdeen Institute of Medical Sciences Microscopy and Histology Facility and Medical Research Facility for technical assistance. Funding: This research was funded by project grants from the Wellcome Trust [085476/A/08/Z to L.E., C.R.] and the Biotechnology and Biological Sciences Research Council (BBSRC) [BB/J00815X/1 to L.E.; BB/J00930X/1 to C.R.] and a Wellcome Trust PhD Fellowship [092839/Z/10/Z to M.T.]. Deposited in PMC for immediate release. Peer reviewed. Publisher PDF.

    Joint segmentation and classification of retinal arteries/veins from fundus images

    Objective: Automatic artery/vein (A/V) segmentation from fundus images is required to track the blood vessel changes that occur with many pathologies, including retinopathy and cardiovascular disease. One clinical measure that quantifies vessel changes is the arterio-venous ratio (AVR), the ratio between artery and vein diameters; this measure depends strongly on the accuracy of vessel segmentation and classification into arteries and veins. This paper proposes a fast, novel method for semantic A/V segmentation combining deep learning and graph propagation. Methods: A convolutional neural network (CNN) is proposed to jointly segment and classify vessels into arteries and veins. The initial CNN labeling is propagated through a graph representation of the retinal vasculature, whose nodes are the vessel branches and whose edges are weighted by the cost of linking pairs of branches. To propagate the labels efficiently, the graph is simplified into its minimum spanning tree. Results: The method achieves an accuracy of 94.8% for vessel segmentation. The A/V classification achieves a specificity of 92.9% and a sensitivity of 93.7% on the CT-DRIVE database, compared to the state-of-the-art specificity and sensitivity, both of 91.7%. Conclusion: The results show that our method outperforms the leading previous works on a public dataset for A/V classification and is by far the fastest. Significance: The proposed global AVR, calculated on the whole fundus image using our automatic A/V segmentation method, can better track vessel changes associated with diabetic retinopathy than the standard local AVR calculated only around the optic disc. Comment: Preprint accepted in Artificial Intelligence in Medicine.
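The label-propagation step described in the abstract can be sketched in a few lines: build a graph of vessel branches, reduce it to its minimum spanning tree, and let the CNN seed labels flow outward along tree edges. The branch graph, edge costs, and seed labels below are illustrative toys, not the paper's data or code.

```python
# Hedged sketch: artery/vein label propagation over a minimum spanning
# tree of vessel branches. Toy graph, not the paper's implementation.
from collections import defaultdict, deque

def kruskal_mst(n_nodes, edges):
    """edges: list of (cost, u, v). Returns MST as an adjacency list."""
    parent = list(range(n_nodes))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = defaultdict(list)
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:               # keep only edges that join components
            parent[ru] = rv
            mst[u].append(v)
            mst[v].append(u)
    return mst

def propagate_labels(mst, seeds):
    """BFS from seed branches; each unlabeled branch inherits the label
    of the nearest labeled neighbour along the tree."""
    labels = dict(seeds)
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        for nb in mst[node]:
            if nb not in labels:
                labels[nb] = labels[node]
                queue.append(nb)
    return labels

# Toy vasculature: 6 branches, costs = dissimilarity of linking them.
edges = [(0.1, 0, 1), (0.2, 1, 2), (0.9, 2, 3), (0.1, 3, 4), (0.3, 4, 5)]
mst = kruskal_mst(6, edges)
labels = propagate_labels(mst, {0: "artery", 3: "vein"})
```

In the paper the seeds come from the CNN's per-branch predictions; here they are assigned by hand to two branches.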

    Single-breath-hold photoacoustic computed tomography of the breast

    We have developed a single-breath-hold photoacoustic computed tomography (SBH-PACT) system to reveal detailed angiographic structures in human breasts. SBH-PACT features a deep penetration depth (4 cm in vivo) with high spatial and temporal resolutions (255 µm in-plane resolution and a 10 Hz 2D frame rate). By scanning the entire breast within a single breath hold (~15 s), a volumetric image can be acquired and subsequently reconstructed using 3D back-projection with negligible breathing-induced motion artifacts. SBH-PACT clearly reveals tumors through the higher blood vessel densities associated with them, imaged at high spatial resolution, showing early promise for high sensitivity in radiographically dense breasts. In addition to blood vessel imaging, the high imaging speed enables dynamic studies, such as photoacoustic elastography, which identifies tumors by their lower compliance. We imaged breast cancer patients with breast sizes ranging from B cup to DD cup and skin pigmentations ranging from light to dark. SBH-PACT identified all the tumors without resorting to ionizing radiation or exogenous contrast, posing no health risks.
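The back-projection reconstruction mentioned above is, at its core, delay-and-sum: each voxel accumulates the sample every detector recorded at that voxel's acoustic time of flight. A minimal 2D sketch with a synthetic point absorber follows (toy ring geometry and idealized delta pulses, not the SBH-PACT implementation, which uses full 3D back-projection):

```python
# Hedged delay-and-sum back-projection sketch in 2D with toy data.
import numpy as np

c = 1.5    # speed of sound, mm/us (typical soft-tissue value)
fs = 40.0  # sampling rate, MHz (samples per microsecond)

# Ring of detectors around the origin
n_det = 64
angles = np.linspace(0, 2 * np.pi, n_det, endpoint=False)
det_pos = 30.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # mm

# Synthetic signals: a point absorber at (5, -3) mm produces a pulse
# at each detector delayed by distance / c.
src = np.array([5.0, -3.0])
n_samples = 2048
signals = np.zeros((n_det, n_samples))
for i, p in enumerate(det_pos):
    t = np.linalg.norm(p - src) / c        # arrival time, us
    signals[i, int(round(t * fs))] = 1.0   # idealized delta pulse

# Back-project: each pixel sums the sample each detector recorded at
# that pixel's time of flight.
xs = np.linspace(-10, 10, 81)
ys = np.linspace(-10, 10, 81)
image = np.zeros((len(ys), len(xs)))
for i, p in enumerate(det_pos):
    for iy, y in enumerate(ys):
        dx = xs - p[0]
        dy = y - p[1]
        idx = np.round(np.sqrt(dx**2 + dy**2) / c * fs).astype(int)
        image[iy] += signals[i, np.clip(idx, 0, n_samples - 1)]

# The reconstruction peaks at the absorber location.
peak = np.unravel_index(np.argmax(image), image.shape)
```

At the true source pixel all 64 detectors contribute coherently, which is why the point absorber stands out against the diffuse back-projection background.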

    Network-Based Approach for Modeling and Analyzing Coronary Angiography

    Significant intra-observer and inter-observer variability in the interpretation of coronary angiograms is reported. This variability is due in part to common practice, which relies on visual inspection by specialists (e.g., of the thickness of the coronaries). Quantitative Coronary Angiography (QCA) approaches are emerging to minimize observer error and, furthermore, to perform prediction and analysis on angiography images. However, QCA approaches suffer from the same problem, as they mainly rely on visual inspection supported by image processing techniques. In this work, we propose an approach to model and analyze the entire cardiovascular tree as a complex network derived from coronary angiography images. This approach enables analysis of the graph structure of the coronary arteries. We conduct assessments of network integration, degree distribution, and controllability on a healthy and a diseased coronary angiogram. Through our discussion and assessments, we argue that modeling the cardiovascular system as a complex network is an essential step toward fully automating the interpretation of coronary angiographic images. We show how network science can provide a new perspective on coronary angiograms.
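A minimal sketch of the kind of network analysis described above: encode a hand-made toy coronary tree as a graph and compute two of the mentioned measures, the degree distribution and an integration proxy (average shortest path length). The segment names are illustrative, not derived from real angiograms.

```python
# Hedged sketch: a toy left coronary tree as a graph, with degree
# distribution and average shortest path length (integration proxy).
from collections import Counter, deque

# Edges: (parent segment, child segment) of a simplified coronary tree
edges = [
    ("LM", "LAD"), ("LM", "LCx"),
    ("LAD", "D1"), ("LAD", "D2"), ("LAD", "LAD_distal"),
    ("LCx", "OM1"), ("LCx", "OM2"),
]

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

# Degree distribution: how many segments have each degree
degree_dist = Counter(len(nb) for nb in adj.values())

def shortest_path_lengths(start):
    """BFS distances from one vessel segment to all others."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Average shortest path length over all ordered node pairs
n = len(adj)
total = sum(d for u in adj for d in shortest_path_lengths(u).values())
avg_path_length = total / (n * (n - 1))
```

On a real angiogram-derived network, a stenosed or occluded branch changes these measures, which is the kind of structural signal the authors propose to exploit.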

    Augmented reality based real-time subcutaneous vein imaging system

    A novel 3D reconstruction and fast augmented reality imaging system for subcutaneous veins is presented. The study was performed to reduce the failure rate and time required for intravenous injection by providing augmented vein structures, back-projecting the superimposed veins onto the skin surface of the hand. Images of the subcutaneous veins are captured by two industrial cameras under reflective near-infrared illumination. The veins are then segmented by a multiple-feature clustering method. Vein structures captured by the two cameras are matched and reconstructed based on the epipolar constraint and homographic property. The skin surface is reconstructed by active structured light with spatial encoding values and displayed fused with the reconstructed veins. The veins and skin surface are both reconstructed in 3D space. Results show that the structures can be precisely back-projected onto the back of the hand for further augmented display and visualization. The overall system performance is evaluated in terms of vein segmentation, vein-matching accuracy, feature-point distance error, processing time, skin reconstruction accuracy, and augmented display. All experiments are validated with sets of real vein data. The system produces good imaging and augmented reality results at high speed.
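The epipolar-constraint matching described above feeds a standard triangulation step: once a vein point is matched across the two views, its 3D position follows from the camera geometry. A minimal linear (DLT) triangulation sketch, with synthetic camera matrices standing in for the two NIR cameras (not the paper's calibration):

```python
# Hedged sketch: linear (DLT) triangulation of a matched vein point
# from two views. Camera matrices and the 3D point are synthetic.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: solve A X = 0 for homogeneous X."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Two synthetic cameras: shared intrinsics, second camera shifted
# 10 units along x (a simple rectified stereo pair).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-10.0], [0], [0]])])

# A vein point at known depth, projected into both views
X_true = np.array([2.0, -1.0, 50.0])
def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

x1, x2 = project(P1, X_true), project(P2, X_true)
X_rec = triangulate(P1, P2, x1, x2)   # recovers X_true (noise-free)
```

With noisy matches the same linear solve gives a least-squares estimate, which is typically refined or filtered before back-projection onto the skin.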

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply in medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables.

    Deep spectral learning for label-free optical imaging oximetry with uncertainty quantification

    Measurement of blood oxygen saturation (sO2) by optical imaging oximetry provides invaluable insight into local tissue function and metabolism. Despite different embodiments and modalities, all label-free optical-imaging oximetry techniques utilize the same principle of sO2-dependent spectral contrast from haemoglobin. Traditional approaches to quantifying sO2 often rely on analytical models fitted to the spectral measurements. In practice, these approaches suffer from uncertainties due to biological variability, tissue geometry, light scattering, systemic spectral bias, and variations in the experimental conditions. Here, we propose a new data-driven approach, termed deep spectral learning (DSL), to achieve oximetry that is highly robust to experimental variations and, more importantly, able to provide uncertainty quantification for each sO2 prediction. To demonstrate the robustness and generalizability of DSL, we analyse data from two visible light optical coherence tomography (vis-OCT) setups across two separate in vivo experiments on rat retinas. Predictions made by DSL are highly adaptive to experimental variability as well as to the depth-dependent backscattering spectra. Two neural-network-based models are tested and compared with the traditional least-squares fitting (LSF) method. The DSL-predicted sO2 shows significantly lower mean-square errors than that of the LSF. For the first time, we have demonstrated en face maps of retinal oximetry along with a pixel-wise confidence assessment. Our DSL overcomes several limitations of traditional approaches and provides a more flexible, robust, and reliable deep learning approach for in vivo non-invasive label-free optical oximetry. Funding: R01 CA224911 - NCI NIH HHS; R01 CA232015 - NCI NIH HHS; R01 NS108464 - NINDS NIH HHS; R21 EY029412 - NEI NIH HHS. Accepted manuscript.
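For context, the least-squares fitting (LSF) baseline that DSL is compared against models the measured spectrum as a linear mix of oxy- and deoxy-haemoglobin spectra and takes sO2 as the fitted oxygenated fraction. A sketch with synthetic placeholder extinction curves (not real haemoglobin data or the paper's fitting model):

```python
# Hedged sketch of an LSF-style sO2 estimate from a spectrum.
# Extinction curves are synthetic stand-ins with distinct shapes.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(520, 600, 40)  # approximate vis-OCT band, nm

# Placeholder "extinction spectra" for oxy- and deoxy-haemoglobin
eps_hbo2 = 1.0 + 0.8 * np.sin((wavelengths - 520) / 80 * np.pi)
eps_hb = 1.0 + 0.8 * np.cos((wavelengths - 520) / 80 * np.pi)

def lsf_so2(spectrum):
    """Fit spectrum = c1*eps_hbo2 + c2*eps_hb; return c1 / (c1 + c2)."""
    A = np.column_stack([eps_hbo2, eps_hb])
    c, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
    c = np.clip(c, 0, None)  # concentrations are non-negative
    return c[0] / (c[0] + c[1])

# Synthetic "measurement": 70% oxygenated blood plus noise
true_so2 = 0.7
spectrum = true_so2 * eps_hbo2 + (1 - true_so2) * eps_hb
spectrum += rng.normal(0, 0.01, spectrum.shape)

so2_est = lsf_so2(spectrum)   # close to 0.7 for this noise level
```

The uncertainties the abstract lists (scattering, spectral bias, tissue geometry) all perturb the effective spectra away from this clean linear model, which is the failure mode the data-driven DSL approach is designed to absorb.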