
    Combining textual and visual information processing for interactive video retrieval: SCHEMA's participation in TRECVID 2004

    In this paper, the two applications based on the Schema Reference System that were developed by the SCHEMA NoE for participation in the search task of TRECVID 2004 are described. The first application, named "Schema-Text", is an interactive retrieval application that employs only textual information, while the second, named "Schema-XM", is an extension of the former that employs algorithms and methods for combining textual, visual and higher-level information. Two runs were submitted for each application: I_A_2_SCHEMA-Text_3 and I_A_2_SCHEMA-Text_4 for Schema-Text, and I_A_2_SCHEMA-XM_1 and I_A_2_SCHEMA-XM_2 for Schema-XM. The comparison of the two applications in terms of retrieval efficiency revealed that combining information from different data sources can improve the efficiency of retrieval systems. Experimental testing additionally showed that first performing a text-based query and then proceeding with a visual similarity search, using one of the returned relevant keyframes as an example image, is an effective scheme for combining visual and textual information.
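The text-then-visual scheme described above can be sketched as a simple re-ranking step: after a text query returns relevant keyframes, the remaining keyframes are ranked by visual similarity to a user-chosen example. A minimal sketch follows, assuming keyframes are represented by precomputed feature vectors (the `visual_rerank` helper and cosine similarity are illustrative assumptions, not the paper's exact method).

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def visual_rerank(example_vec, keyframe_vecs):
    # Rank all keyframes by visual similarity to the example keyframe
    # selected from the text-based result list.
    scores = {kid: cosine_sim(example_vec, v) for kid, v in keyframe_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

In practice the example vector would come from one of the keyframes returned by the initial text query, so the visual search refines rather than replaces the textual result set.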

    Clinical validation of an algorithm for rapid and accurate automated segmentation of intracoronary optical coherence tomography images

    Objectives: The analysis of intracoronary optical coherence tomography (OCT) images is based on manual identification of the lumen contours and relevant structures. However, manual image segmentation is a cumbersome and time-consuming process, subject to significant intra- and inter-observer variability. This study aims to present and validate a fully-automated method for segmentation of intracoronary OCT images. Methods: We studied 20 coronary arteries (mean length = 39.7 ± 10.0 mm) from 20 patients who underwent a clinically-indicated cardiac catheterization. The OCT images (n = 1812) were segmented manually, as well as with a fully-automated approach. A semi-automated variation of the fully-automated algorithm was also applied. Using certain lumen size and lumen shape characteristics, the fully- and semi-automated segmentation algorithms were validated against manual segmentation, which was considered the gold standard. Results: Linear regression and Bland–Altman analysis demonstrated that both the fully-automated and semi-automated segmentation had very high agreement with the manual segmentation, with the semi-automated approach being slightly more accurate than the fully-automated method. The fully-automated and semi-automated OCT segmentation reduced the analysis time by more than 97% and 86%, respectively, compared to manual segmentation. Conclusions: In the current work we validated a fully-automated OCT segmentation algorithm, as well as a semi-automated variation of it, in an extensive "real-life" dataset of OCT images. The study showed that our algorithm can perform rapid and reliable segmentation of OCT images.
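The Bland–Altman analysis used for validation compares paired measurements by their differences: the mean difference (bias) and the 95% limits of agreement (bias ± 1.96 SD). A minimal sketch, assuming paired per-frame lumen-area measurements (the function name and inputs are illustrative, not from the study):

```python
import numpy as np

def bland_altman(auto_vals, manual_vals):
    # Bias and 95% limits of agreement between automated and manual
    # measurements (e.g. lumen areas per OCT frame).
    diff = np.asarray(auto_vals, dtype=float) - np.asarray(manual_vals, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A bias close to zero with narrow limits, as reported above, indicates that the automated method can substitute for manual segmentation without systematic error.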

    Accurate and reproducible reconstruction of coronary arteries and endothelial shear stress calculation using 3D OCT: Comparative study to 3D IVUS and 3D QCA

    Background: Geometrically-correct 3D OCT is a new imaging modality with the potential to investigate the association of the local hemodynamic microenvironment with OCT-derived high-risk features. We aimed to describe the methodology of 3D OCT and to investigate the accuracy and the inter- and intra-observer agreement of 3D OCT in reconstructing coronary arteries and calculating ESS, using 3D IVUS and 3D QCA as references. Methods and Results: 35 coronary artery segments derived from 30 patients were reconstructed in 3D space using 3D OCT. 3D OCT was validated against 3D IVUS and 3D QCA. The agreement in artery reconstruction among 3D OCT, 3D IVUS and 3D QCA was assessed in 3-mm-long subsegments using lumen morphometry and ESS parameters. The inter- and intra-observer agreement of 3D OCT, 3D IVUS and 3D QCA was assessed in a representative sample of 61 subsegments (n = 5 arteries). The data processing times for each reconstruction methodology were also calculated. There was very high agreement between 3D OCT vs. 3D IVUS and 3D OCT vs. 3D QCA in terms of total reconstructed artery length and volume, as well as in terms of segmental morphometric and ESS metrics, with mean differences close to zero and narrow limits of agreement (Bland–Altman analysis). 3D OCT exhibited excellent inter- and intra-observer agreement. The analysis time with 3D OCT was significantly lower compared to 3D IVUS. Conclusions: Geometrically-correct 3D OCT is a feasible, accurate and reproducible 3D reconstruction technique that can perform reliable ESS calculations in coronary arteries.

    COST292 experimental framework for TRECVID 2006

    In this paper we give an overview of the four TRECVID tasks submitted by COST292, a European network of institutions in the area of semantic multimodal analysis and retrieval of digital video media. First, we present a shot boundary (SB) detection method based on results merged using a confidence measure. The two SB detectors used here, one from the Technical University of Delft and one from LaBRI, University of Bordeaux 1, are presented, followed by a description of the merging algorithm. The high-level feature extraction task comprises three separate systems. The first system, developed by the National Technical University of Athens (NTUA), utilises a set of MPEG-7 low-level descriptors and Latent Semantic Analysis to detect the features. The second system, developed by Bilkent University, uses a Bayesian classifier trained with a "bag of subregions" for each keyframe. The third system, by the Middle East Technical University (METU), exploits textual information in the video using character recognition methodology. The system submitted to the search task is an interactive retrieval application developed by Queen Mary, University of London, the University of Zilina and ITI from Thessaloniki. It combines basic retrieval functionalities in various modalities (i.e. visual, audio, textual) with a user interface supporting the submission of queries using any combination of the available retrieval tools, and the accumulation of relevant retrieval results over all queries submitted by a single user during a specified time interval. Finally, the rushes task submission comprises a video summarisation and browsing system specifically designed to present rushes material intuitively and efficiently in a video production environment. This system is the result of joint work by the University of Bristol, the Technical University of Delft and LaBRI, University of Bordeaux 1.
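Merging the output of two shot-boundary detectors with a confidence measure can be sketched as follows: boundaries reported by both detectors (within a frame tolerance) have their confidences combined, singletons keep their own score, and only boundaries above a threshold are kept. This is a minimal illustrative sketch, not the paper's actual merging algorithm; the averaging rule, tolerance and threshold are all assumptions.

```python
def merge_boundaries(det_a, det_b, tolerance=5, threshold=0.5):
    # det_a, det_b: lists of (frame, confidence) pairs from two detectors.
    merged, used_b = [], set()
    for fa, ca in det_a:
        match = None
        for i, (fb, cb) in enumerate(det_b):
            if i not in used_b and abs(fa - fb) <= tolerance:
                match = (i, fb, cb)
                break
        if match:
            i, fb, cb = match
            used_b.add(i)
            # Agreement between detectors: average position and confidence.
            merged.append(((fa + fb) // 2, (ca + cb) / 2))
        else:
            merged.append((fa, ca))
    for i, (fb, cb) in enumerate(det_b):
        if i not in used_b:
            merged.append((fb, cb))
    return sorted(f for f, c in merged if c >= threshold)
```

The point of such a merge is that agreement between independent detectors raises confidence, while weak isolated detections fall below the threshold.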

    Reconstruction of coronary arteries from X-ray angiography: A review.

    Despite continuous progress in X-ray angiography systems, X-ray coronary angiography is fundamentally limited by its 2D representation of moving coronary arterial trees, which can negatively impact the assessment of coronary artery disease and the guidance of percutaneous coronary intervention. To provide clinicians with 3D/3D+time information on coronary arteries, methods for computing reconstructions of coronary arteries from X-ray angiography are required. Because of several factors (e.g. cardiac and respiratory motion, type of X-ray system), reconstruction from X-ray coronary angiography has generated a vast amount of research and remains a challenging and dynamic research area. In this paper, we review the state-of-the-art approaches to reconstruction of high-contrast coronary arteries from X-ray angiography. We mainly focus on the theoretical features of model-based (modelling) and tomographic reconstruction of coronary arteries, and discuss the evaluation strategies. We also discuss the potential role of reconstructions in clinical decision making and interventional guidance, and highlight areas for future research.

    Exploiting visual similarities for ontology alignment

    Ontology alignment is the process whereby two different ontologies that usually describe similar domains are 'aligned', i.e. a set of correspondences between their entities, regarding semantic equivalence, is determined. In order to identify these correspondences, several methods and metrics that measure semantic equivalence have been proposed in the literature. The most common features that these metrics employ are string-, lexical-, structure- and semantic-based similarities, for which several approaches have been developed. However, what has not been investigated is the use of visual features for determining entity similarity in cases where images are associated with concepts. Nowadays the existence of several resources (e.g. ImageNet) that map lexical concepts onto images allows visual similarities to be exploited for this purpose. In this paper, a novel approach for ontology matching based on visual similarity is presented. Each ontological entity is associated with sets of images, retrieved through ImageNet or web-based search, and state-of-the-art visual feature extraction, clustering and indexing are employed to compute the similarity between entities. An adaptation of a popular WordNet-based matching algorithm to exploit the visual similarity is also proposed. Our method is compared with traditional metrics against a standard ontology alignment benchmark dataset and demonstrates promising results. This work was supported by the MULTISENSOR (contract no. FP7-610411) and KRISTINA (contract no. H2020-645012) projects, partially funded by the European Commission.
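Computing a visual similarity between two ontological entities, each associated with a set of image feature vectors, can be sketched as a set-to-set similarity: score each image of one entity by its best match in the other set, then average. This is one common aggregation, shown here as an illustrative sketch under the assumption of L2-normalised feature vectors; the paper's exact clustering and indexing pipeline may differ.

```python
import numpy as np

def entity_visual_similarity(imgs_a, imgs_b):
    # imgs_a, imgs_b: lists of L2-normalised feature vectors for the
    # images associated with two ontology entities. For unit vectors
    # the dot product equals cosine similarity.
    best = [max(float(np.dot(a, b)) for b in imgs_b) for a in imgs_a]
    return sum(best) / len(best)
```

A score near 1 suggests the two concepts' image sets depict visually similar content, which can then be combined with string- or WordNet-based scores in the overall matcher.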
