
    Clinical validation of an algorithm for rapid and accurate automated segmentation of intracoronary optical coherence tomography images

    Objectives: The analysis of intracoronary optical coherence tomography (OCT) images is based on manual identification of the lumen contours and relevant structures. However, manual image segmentation is a cumbersome and time-consuming process, subject to significant intra- and inter-observer variability. This study aims to present and validate a fully-automated method for segmentation of intracoronary OCT images. Methods: We studied 20 coronary arteries (mean length = 39.7 ± 10.0 mm) from 20 patients who underwent a clinically-indicated cardiac catheterization. The OCT images (n = 1812) were segmented manually, as well as with a fully-automated approach. A semi-automated variation of the fully-automated algorithm was also applied. Using certain lumen size and lumen shape characteristics, the fully- and semi-automated segmentation algorithms were validated against manual segmentation, which was considered the gold standard. Results: Linear regression and Bland–Altman analysis demonstrated that both the fully-automated and semi-automated segmentation had very high agreement with the manual segmentation, with the semi-automated approach being slightly more accurate than the fully-automated method. The fully-automated and semi-automated OCT segmentation reduced the analysis time by more than 97% and 86%, respectively, compared to manual segmentation. Conclusions: In the current work we validated a fully-automated OCT segmentation algorithm, as well as a semi-automated variation of it, in an extensive "real-life" dataset of OCT images. The study showed that our algorithm can perform rapid and reliable segmentation of OCT images.
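    A minimal sketch of the agreement analysis named in the abstract (linear regression plus Bland–Altman statistics), applied to automated versus manual lumen-area measurements. The arrays are hypothetical placeholders, not values from the study's 1812 frames.

```python
import numpy as np
from scipy import stats

# Hypothetical lumen areas (mm^2) for a handful of frames
manual_area = np.array([5.1, 6.3, 4.8, 7.2, 5.9])  # manual (gold standard)
auto_area = np.array([5.0, 6.4, 4.9, 7.0, 6.0])    # fully-automated

# Linear regression of automated against manual measurements
slope, intercept, r, p, se = stats.linregress(manual_area, auto_area)
print(f"r = {r:.3f}, slope = {slope:.3f}")

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = auto_area - manual_area
bias = diff.mean()
half_width = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.3f} mm^2, "
      f"LoA = [{bias - half_width:.3f}, {bias + half_width:.3f}]")
```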

    Accurate and reproducible reconstruction of coronary arteries and endothelial shear stress calculation using 3D OCT: Comparative study to 3D IVUS and 3D QCA

    Background: Geometrically-correct 3D OCT is a new imaging modality with the potential to investigate the association of the local hemodynamic microenvironment with OCT-derived high-risk features. We aimed to describe the methodology of 3D OCT and investigate the accuracy and the inter- and intra-observer agreement of 3D OCT in reconstructing coronary arteries and calculating endothelial shear stress (ESS), using 3D IVUS and 3D QCA as references. Methods and Results: 35 coronary artery segments derived from 30 patients were reconstructed in 3D space using 3D OCT. 3D OCT was validated against 3D IVUS and 3D QCA. The agreement in artery reconstruction among 3D OCT, 3D IVUS and 3D QCA was assessed in 3-mm-long subsegments using lumen morphometry and ESS parameters. The inter- and intra-observer agreement of 3D OCT, 3D IVUS and 3D QCA was assessed in a representative sample of 61 subsegments (n = 5 arteries). The data processing times for each reconstruction methodology were also calculated. There was very high agreement between 3D OCT vs. 3D IVUS and 3D OCT vs. 3D QCA in terms of total reconstructed artery length and volume, as well as in terms of segmental morphometric and ESS metrics, with mean differences close to zero and narrow limits of agreement (Bland–Altman analysis). 3D OCT exhibited excellent inter- and intra-observer agreement. The analysis time with 3D OCT was significantly lower compared to 3D IVUS. Conclusions: Geometrically-correct 3D OCT is a feasible, accurate and reproducible 3D reconstruction technique that can perform reliable ESS calculations in coronary arteries.
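    The study derives ESS from computational fluid dynamics on the 3D-reconstructed lumen. As a rough illustration of how lumen morphometry drives ESS, the sketch below uses the much simpler Poiseuille approximation (tau = 4*mu*Q / (pi*r^3)) per 3-mm subsegment, assuming steady laminar flow, a circular lumen, and literature-typical flow and viscosity values; none of these numbers come from the paper.

```python
import numpy as np

MU = 0.0035   # blood dynamic viscosity, Pa*s (assumed literature value)
Q = 1.0e-6    # coronary flow rate, m^3/s (~1 mL/s, assumed)

def poiseuille_ess(radius_mm: np.ndarray) -> np.ndarray:
    """Wall shear stress (Pa) from each subsegment's mean lumen radius."""
    r = radius_mm * 1e-3                 # mm -> m
    return 4.0 * MU * Q / (np.pi * r**3)

# Mean lumen radius of consecutive 3-mm subsegments (hypothetical values)
radii = np.array([1.8, 1.6, 1.3, 1.5])
ess_pa = poiseuille_ess(radii)
print(ess_pa * 10)  # 1 Pa = 10 dyn/cm^2, the unit usually reported for ESS
```

    Note the cubic dependence on radius: a modest lumen narrowing raises the local ESS sharply, which is why subsegment-level morphometry matters for the calculation.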

    Computational Approaches for Pharmacovigilance Signal Detection: Toward Integrated and Semantically-Enriched Frameworks

    Computational signal detection constitutes a key element of postmarketing drug monitoring and surveillance. Diverse data sources are considered within the 'search space' of pharmacovigilance scientists, and respective data analysis methods are employed, all with their qualities and shortcomings, towards more timely and accurate signal detection. Recent systematic comparative studies have highlighted not only event-based and data-source-based differential performance across methods but also their complementarity. These findings reinforce the arguments for exploiting all possible information sources for drug safety and the parallel use of multiple signal detection methods. Combinatorial signal detection has been pursued in few studies up to now, employing a rather limited number of methods and data sources but illustrating promising outcomes. However, the large-scale realization of this approach requires systematic frameworks to address the challenges of the concurrent analysis setting. In this paper, we argue that semantic technologies provide the means to address some of these challenges, and we particularly highlight their contribution in (a) annotating data sources and analysis methods with quality attributes to facilitate their selection given the analysis scope; (b) consistently defining study parameters such as health outcomes and drugs of interest, and providing guidance for study setup; (c) expressing analysis outcomes in a common format enabling data sharing and systematic comparisons; and (d) assessing/supporting the novelty of the aggregated outcomes through access to reference knowledge sources related to drug safety. A semantically-enriched framework can facilitate seamless access and use of different data sources and computational methods in an integrated fashion, bringing a new perspective for large-scale, knowledge-intensive signal detection. Key points: (1) A number of comparative studies assessing various signal detection methods applied to diverse types of data have highlighted the need for combinatorial-integrated approaches. (2) Large-scale integrated signal detection requires systematic frameworks in order to address the challenges posed within the underlying concurrent analysis setting. (3) Semantic technologies and tools may provide the means to address the challenges posed in integrated signal detection, and to establish the basis for knowledge-intensive signal detection.
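    As a concrete instance of the kind of data analysis method the framework would combine, below is a minimal sketch of disproportionality analysis over spontaneous reports using the proportional reporting ratio (PRR) on a 2x2 contingency table. The counts and the PRR >= 2 threshold are illustrative conventions, not figures or methods from the paper itself.

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio.

    a: reports with drug of interest AND event of interest
    b: reports with drug of interest, other events
    c: reports with other drugs AND event of interest
    d: reports with other drugs, other events
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical report counts for one drug-event pair
a, b, c, d = 40, 960, 200, 48800
value = prr(a, b, c, d)
# A commonly cited signalling criterion: PRR >= 2 with at least 3 reports
print(f"PRR = {value:.2f}, signal flagged: {value >= 2 and a >= 3}")
```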

    Exploiting heterogeneous publicly available data sources for drug safety surveillance: computational framework and case studies

    Objective: Driven by the need of pharmacovigilance centres and companies to routinely collect and review all available data about adverse drug reactions (ADRs) and adverse events of interest, we introduce and validate a computational framework exploiting dominant as well as emerging publicly available data sources for drug safety surveillance. Methods: Our approach relies on appropriate query formulation for data acquisition and subsequent filtering, transformation and joint visualization of the obtained data. We acquired data from the FDA Adverse Event Reporting System (FAERS), PubMed and Twitter. In order to assess the validity and the robustness of the approach, we elaborated on two important case studies, namely, clozapine-induced cardiomyopathy/myocarditis versus haloperidol-induced cardiomyopathy/myocarditis, and apixaban-induced cerebral hemorrhage. Results: The analysis of the obtained data provided interesting insights (identification of potential patient and health-care professional experiences regarding ADRs in Twitter, information/arguments against an ADR's existence across all sources), while illustrating the benefits (complementing data from multiple sources to strengthen/confirm evidence) and the underlying challenges (selecting search terms, data presentation) of exploiting heterogeneous information sources, thereby advocating the need for the proposed framework. Conclusions: This work contributes to establishing a continuous learning system for drug safety surveillance by exploiting heterogeneous publicly available data sources via appropriate support tools.
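    A minimal sketch of the query-formulation step for two of the three sources (FAERS through the public openFDA web API, PubMed through NCBI E-utilities), using the clozapine/myocarditis case study as the drug-event pair. Field names follow the public API documentation; paging, error handling, rate limiting and Twitter acquisition are omitted.

```python
import requests

drug, event = "clozapine", "myocarditis"

# FAERS via openFDA: count reports mentioning the drug-event pair
faers = requests.get(
    "https://api.fda.gov/drug/event.json",
    params={"search": f'patient.drug.medicinalproduct:"{drug}" AND '
                      f'patient.reaction.reactionmeddrapt:"{event}"',
            "limit": 1},
    timeout=30,
)
total = faers.json().get("meta", {}).get("results", {}).get("total")
print("FAERS reports:", total)

# PubMed via E-utilities: number of indexed articles for the same pair
pubmed = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": f"{drug} AND {event}", "retmode": "json"},
    timeout=30,
)
print("PubMed articles:", pubmed.json()["esearchresult"]["count"])
```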

    ARCOCT: Automatic detection of lumen border in intravascular OCT images

    BACKGROUND AND OBJECTIVE: Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, including features such as side branches and intraluminal blood presence. This paper presents ARCOCT, a segmentation method for fully-automatic detection of the lumen border in OCT images. METHODS: ARCOCT relies on multiple, consecutive processing steps, accounting for image preparation, contour extraction and refinement. In particular, for contour extraction ARCOCT employs the transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue; for contour refinement, local regression using weighted linear least squares and a second-degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, i.e., even in challenging cases with branches and artifacts. RESULTS: ARCOCT has been assessed in a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g., area, perimeter, radius, diameter and centroid) and closed-contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method proved very efficient and close to the ground truth, exhibiting statistically non-significant differences for most of the examined metrics. CONCLUSIONS: ARCOCT allows accurate and fully-automated lumen border detection in OCT images.
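    A minimal sketch of two of the contour-matching indicators named in the abstract: the Dice index on binary lumen masks and the symmetric Hausdorff distance on contour point sets. The toy masks and contours are hypothetical; ARCOCT's own pipeline is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice index of two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 2) contour point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Toy masks: two slightly offset lumen disks on a 64x64 frame
yy, xx = np.mgrid[:64, :64]
m1 = (xx - 32) ** 2 + (yy - 32) ** 2 <= 20 ** 2   # manual lumen
m2 = (xx - 33) ** 2 + (yy - 32) ** 2 <= 20 ** 2   # automated lumen
print("Dice:", dice(m1, m2))

# Toy contours: two concentric circles sampled at 100 points
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
c1 = np.column_stack([np.cos(theta), np.sin(theta)])
c2 = 1.05 * c1                                    # automated contour, 5% larger
print("Hausdorff:", hausdorff(c1, c2))
```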

    Association of global and local low endothelial shear stress with high-risk plaque using intracoronary 3D optical coherence tomography: Introduction of 'shear stress score'

    AIMS: The association of low endothelial shear stress (ESS) with high-risk plaque (HRP) has not been thoroughly investigated in humans. We investigated the local ESS and lumen remodelling patterns in HRPs using optical coherence tomography (OCT), developed the shear stress score, and explored its association with the prevalence of HRPs and clinical outcomes. METHODS AND RESULTS: A total of 35 coronary arteries from 30 patients with stable angina or acute coronary syndrome (ACS) were reconstructed with three dimensional (3D) OCT. ESS was calculated using computational fluid dynamics and classified into low, moderate, and high in 3-mm-long subsegments. In each subsegment, (i) fibroatheromas (FAs) were classified into HRPs and non-HRPs based on fibrous cap (FC) thickness and lipid pool size, and (ii) lumen remodelling was classified into constrictive, compensatory, and expansive. In each artery the shear stress score was calculated as metric of the extent and severity of low ESS. FAs in low ESS subsegments had thinner FC compared with high ESS (89 ± 84 vs.138 ± 83 µm, P < 0.05). Low ESS subsegments predominantly co-localized with HRPs vs. non-HRPs (29 vs. 9%, P < 0.05) and high ESS subsegments predominantly with non-HRPs (9 vs. 24%, P < 0.05). Compensatory and expansive lumen remodelling were the predominant responses within subsegments with low ESS and HRPs. In non-stenotic FAs, low ESS was associated with HRPs vs. non-HRPs (29 vs. 3%, P < 0.05). Arteries with increased shear stress score had increased frequency of HRPs and were associated with ACS vs. stable angina. CONCLUSION: Local low ESS and expansive lumen remodelling are associated with HRP. Arteries with increased shear stress score have increased frequency of HRPs and propensity to present with ACS