
    Towards Context Driven Modularization of Large Biomedical Ontologies

    Formal knowledge about human anatomy, radiology or diseases is necessary to support clinical applications such as medical image search. This machine-processable knowledge can be acquired from biomedical domain ontologies, which, however, are typically very large and complex models. Thus, their straightforward incorporation into software applications is difficult. In this paper we present first ideas on a statistical approach for modularizing large medical ontologies, prioritizing practical applicability. The underlying assumption is that the application-relevant ontology fragments, i.e. modules, can be identified by statistical analysis of the ontology concepts in the domain corpus. Accordingly, we argue that the concepts occurring most frequently in the domain corpus define the application context and can therefore yield the relevant ontology modules. We illustrate our approach on an example case involving a large ontology on human anatomy and report on our first manual experiments
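    The frequency-based module selection described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function name, the toy concept list, and the toy reports are all hypothetical:

```python
from collections import Counter
import re

def rank_concepts(concept_labels, corpus_texts):
    """Count how often each ontology concept label occurs in the corpus
    and return the labels ranked by frequency (most frequent first)."""
    counts = Counter()
    for text in corpus_texts:
        # Normalise the text to lowercase words before substring matching
        joined = " ".join(re.findall(r"[a-z]+", text.lower()))
        for label in concept_labels:
            counts[label] += joined.count(label.lower())
    return [label for label, _ in counts.most_common()]

# Toy example: anatomy concepts scored against two short radiology reports
concepts = ["heart", "left ventricle", "aorta"]
reports = ["The heart and left ventricle appear normal.",
           "Mild dilation of the aorta; heart size normal."]
print(rank_concepts(concepts, reports))  # 'heart' ranks first
```

The most frequent concepts would then seed the extraction of the surrounding ontology module.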

    Enhanced image annotations based on spatial information extraction and ontologies

    Current research on image annotation often represents images in terms of labelled regions or objects, but pays little attention to the spatial positions of, or relationships between, those regions or objects. To be effective, general-purpose image retrieval systems require images with comprehensive annotations fully describing the content of the image. Much research addresses automatic image annotation schemes, but few authors tackle spatial annotation directly. This paper begins with a brief analysis of real picture queries submitted to librarians, showing how spatial terms are used to formulate queries. The paper then describes the development of an enhanced automatic image annotation system that extracts spatial information about objects in the image. The approach uses region boundaries and region labels to generate annotations describing absolute object positions as well as relative positions between pairs of objects. A domain ontology and a spatial information ontology are also used to extract more complex information about the relative closeness of objects to the viewer
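    A minimal sketch of how a relative-position annotation could be derived from two labelled regions, assuming each region is reduced to an axis-aligned bounding box; the function name and boxes are illustrative, not taken from the paper:

```python
def relative_position(box_a, box_b):
    """Describe where region A lies relative to region B, given
    (x_min, y_min, x_max, y_max) boxes with y increasing downwards."""
    # Compare the two region centroids
    ax, ay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    bx, by = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    horiz = "left of" if ax < bx else "right of"
    vert = "above" if ay < by else "below"
    return f"{vert} and {horiz}"

# Toy example: a 'bird' region in the top-left, a 'tree' region lower right
bird = (10, 5, 30, 20)
tree = (40, 30, 90, 120)
print("bird is", relative_position(bird, tree), "tree")
# prints "bird is above and left of tree"
```

A real system would refine this with overlap, containment and distance predicates drawn from the spatial information ontology.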

    Utilising semantic technologies for intelligent indexing and retrieval of digital images

    The proliferation of digital media has led to huge interest in classifying and indexing media objects for generic search and usage. In particular, we are witnessing colossal growth in digital image repositories that are difficult to navigate using free-text search mechanisms, which often return inaccurate matches as they rely, in principle, on statistical analysis of query-keyword recurrence in the image annotation or surrounding text. In this paper we present a semantically enabled image annotation and retrieval engine designed to satisfy the requirements of the commercial image collections market in terms of both accuracy and efficiency of the retrieval process. Our search engine relies on methodically structured ontologies for image annotation, allowing more intelligent reasoning about image content and consequently a more accurate set of results and a richer set of alternatives matching the original query. We also show how our carefully analysed and designed domain ontology contributes to the implicit expansion of user queries, as well as the exploitation of lexical databases for explicit semantics-based query expansion
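    The ontology-driven query expansion mentioned above can be sketched as follows. The mini-ontology, its entries, and the function name are hypothetical stand-ins for the paper's domain ontology and lexical database:

```python
# Hypothetical mini-ontology: concept -> synonyms and narrower concepts
ONTOLOGY = {
    "vehicle": {"synonyms": ["conveyance"], "narrower": ["car", "truck"]},
    "car": {"synonyms": ["automobile"], "narrower": ["sedan", "hatchback"]},
}

def expand_query(terms, ontology):
    """Expand each query term with its synonyms (explicit, lexical
    expansion) and its immediate narrower concepts (implicit expansion)."""
    expanded = []
    for term in terms:
        expanded.append(term)
        entry = ontology.get(term, {})
        expanded.extend(entry.get("synonyms", []))
        expanded.extend(entry.get("narrower", []))
    return expanded

print(expand_query(["car"], ONTOLOGY))
# prints "['car', 'automobile', 'sedan', 'hatchback']"
```

The expanded term list would then be matched against the semantic image annotations rather than raw surrounding text.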

    Ontology for pixel processing

    For all kinds of output devices, such as monitors and printers, the most important task is to show the right information to the user. The pixel is the basic element of both on-screen display and printed material, and pixel processing is therefore the basic technique for making output correct, precise, and suitable for different uses. Pixel processing applies operations to each pixel of an image, i.e. to the image's pixel matrix, so that the image takes on a different appearance. Ontology concerns the exact description of things and their relationships; it is an old branch of philosophy dating back to ancient Greece. As the study of artificial intelligence has grown, the concept of ontology has been used more and more to formalize knowledge in terms of classes, properties, instances and relations [1]. This paper mainly discusses how to build an ontology of pixel processing with OWL, focusing on how to describe pixel processing and its functions or operations in a way that is understandable by computers. Such a description can improve the development of pixel processing and the sharing of its knowledge, both between people and between machines, from a Natural Language Processing point of view. In the future it also provides a basis for an intelligent agent to implement pixel processing by consuming such definitions and descriptions directly from a knowledge base built with this ontology; in other words, it may enable automatic programming or program analysis
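    As a concrete instance of the kind of per-pixel operation the abstract refers to, the sketch below applies a threshold to every pixel of a grayscale pixel matrix; it is an illustrative example, not drawn from the paper's ontology:

```python
def threshold(pixels, level):
    """A per-pixel operation on a grayscale pixel matrix: every pixel at
    or above `level` becomes white (255), everything else black (0)."""
    return [[255 if p >= level else 0 for p in row] for row in pixels]

# Toy 2x2 grayscale image
image = [[ 10, 200],
         [130,  40]]
print(threshold(image, 128))  # prints "[[0, 255], [255, 0]]"
```

An OWL description of such an operation would capture its input (a pixel matrix), its parameter (the threshold level), and its per-pixel transformation rule as ontology classes and properties.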

    Multi modal multi-semantic image retrieval

    The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users’ ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image information retrieval techniques have been adopted in order to extract knowledge from these images and enhance retrieval performance. A KB framework is presented to promote semi-automatic annotation and semantic image retrieval using multimodal cues (visual features and text captions). In addition, a hierarchical structure for the KB allows metadata to be shared and supports multiple semantics (polysemy) for concepts. The framework builds up an effective knowledge base pertaining to a domain-specific image collection, e.g. sports, and is able to disambiguate and assign high-level semantics to ‘unannotated’ images. Local feature analysis of visual content, namely using Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the ‘Bag of Visual Words’ (BVW) model as an effective method to represent visual content information and to enhance its classification and retrieval. Local features are more useful than global features, e.g. colour, shape or texture, as they are invariant to image scale, orientation and camera angle. An innovative approach is proposed for the representation, annotation and retrieval of visual content using a hybrid technique based upon the use of unstructured visual words and upon a (structured) hierarchical ontology KB model. The structural model facilitates the disambiguation of unstructured visual words and a more effective classification of visual content, compared to a vector space model, by exploiting local conceptual structures and their relationships.
The key contributions of this framework in using local features for image representation include the following. First, a method to generate visual words using the semantic local adaptive clustering (SLAC) algorithm, which takes term weight and the spatial locations of keypoints into account; consequently, semantic information is preserved. Second, a technique to detect domain-specific ‘non-informative visual words’, which are ineffective at representing the content of visual data and degrade its categorisation. Third, a method to combine an ontology model with a visual word model to resolve synonym (visual heterogeneity) and polysemy problems. The experimental results show that this approach can discover semantically meaningful visual content descriptions and recognise specific events, e.g. sports events, depicted in images efficiently. Since discovering the semantics of an image is an extremely challenging problem, one promising approach to enhance visual content interpretation is to use any textual information that accompanies an image as a cue to predict its meaning, by transforming this textual information into a structured annotation for the image, e.g. using XML, RDF, OWL or MPEG-7. Although text and images are distinct types of information representation and modality, there are strong, invariant, implicit connections between images and any accompanying text. Semantic analysis of image captions can be used by image retrieval systems to retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is first exploited to extract concepts from image captions. Next, an ontology-based knowledge model is deployed to resolve natural-language ambiguities. To deal with the accompanying textual information, two methods for extracting knowledge from it have been proposed.
First, metadata can be extracted automatically from text captions and restructured with respect to a semantic model. Second, the use of Latent Semantic Indexing (LSI) in relation to a domain-specific ontology-based knowledge model enables the combined framework to tolerate ambiguities and variations (incompleteness) of metadata. The use of the ontology-based knowledge model allows the system to find indirectly relevant concepts in image captions and thus leverage these to represent the semantics of images at a higher level. Experimental results show that the proposed framework significantly enhances image retrieval and narrows the semantic gap between lower-level machine-derived and higher-level human-understandable conceptualisation
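The bag-of-visual-words representation described in this abstract can be sketched as below, assuming SIFT-like descriptors have already been extracted and a vocabulary of cluster centroids has already been learned (e.g. by k-means); the function names and the toy 2-D descriptors are illustrative only:

```python
def nearest(descriptor, centroids):
    """Index of the visual word (centroid) closest to a descriptor."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: dist2(descriptor, centroids[i]))

def bovw_histogram(descriptors, centroids):
    """Represent an image as a histogram of visual-word occurrences."""
    hist = [0] * len(centroids)
    for d in descriptors:
        hist[nearest(d, centroids)] += 1
    return hist

# Toy vocabulary of three visual words and four 2-D 'descriptors'
words = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
image_descriptors = [(0.1, 0.2), (0.9, 1.1), (4.8, 5.2), (5.1, 4.9)]
print(bovw_histogram(image_descriptors, words))  # prints "[1, 1, 2]"
```

The resulting histogram is the vector on which classification and retrieval operate; the thesis's hierarchical ontology KB is then used to disambiguate what each visual word means in context.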

    Research Progress and Prospects of Indoor Plant Phenotyping Platforms and Trait Identification

    Plant phenomics has developed rapidly in recent years, a research field that is progressing towards integration, scalability, multi-perceptivity and high-throughput analysis. By combining remote sensing, the Internet of Things (IoT), robotics, computer vision, and artificial intelligence techniques such as machine learning and deep learning, the research methodologies, biological applications and theoretical foundations of this domain have been advancing rapidly. This article first introduces the current trends of plant phenomics and related progress in China and worldwide. It then focuses on the characteristics of indoor phenotyping and phenotypic traits suitable for indoor experiments, including yield, quality, and stress-related traits such as drought, cold and heat resistance, salt stress, heavy metals, and pests. By connecting key phenotypic traits with important biological questions in yield production, crop quality and stress tolerance, we associate indoor phenotyping hardware with relevant biological applications and their plant model systems, for which a range of indoor phenotyping devices and platforms are listed and categorised according to their throughput, sensor integration, platform size, and applications. Additionally, this article introduces existing data management solutions and analysis software packages that are representative of phenotypic analysis: for example, the ISA-Tab and MIAPPE ontology standards for capturing metadata in plant phenotyping experiments, PHIS and CropSight for managing complicated datasets, and the Python and MATLAB programming languages for automated image analysis based on libraries such as OpenCV, scikit-image, and the MATLAB Image Processing Toolbox.
Finally, given the importance of extracting meaningful information from big phenotyping datasets, the article pays particular attention to the future development of plant phenomics in China, with suggestions and recommendations for integrating multi-scale phenotyping data to increase confidence in research outcomes, cultivating cross-disciplinary researchers to lead the next generation of plant research, and fostering collaboration between academia and industry to enable world-leading research activities in the near future
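As a minimal illustration of the kind of automated image-derived trait the review discusses, the sketch below computes projected shoot area from a binary segmentation mask, and a relative growth rate between two imaging time points; the function names and the toy mask are hypothetical, not taken from the article:

```python
import math

def projected_area(mask):
    """Projected plant area (a pixel count) from a binary segmentation
    mask -- a simple image-derived growth trait."""
    return sum(sum(row) for row in mask)

def relative_growth_rate(area_t0, area_t1, days):
    """Relative growth rate between two imaging time points."""
    return math.log(area_t1 / area_t0) / days

# Toy 3x3 mask: 1 = plant pixel, 0 = background
mask = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]
print(projected_area(mask))                      # prints "5"
print(relative_growth_rate(5, 10, 7))            # growth over one week
```

In practice the mask would come from a segmentation pipeline (e.g. OpenCV or scikit-image thresholding), and areas would be calibrated from pixels to physical units.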

    Developing the Quantitative Histopathology Image Ontology: A case study using the hot spot detection problem

    Interoperability across data sets is a key challenge for quantitative histopathological imaging. There is a need for an ontology that can support effective merging of pathological image data with associated clinical and demographic data. To foster organized, cross-disciplinary, information-driven collaborations in the pathological imaging field, we propose to develop an ontology to represent imaging data and methods used in pathological imaging and analysis, and call it Quantitative Histopathological Imaging Ontology – QHIO. We apply QHIO to breast cancer hot-spot detection with the goal of enhancing reliability of detection by promoting the sharing of data between image analysts