1,813 research outputs found

    Multi modal multi-semantic image retrieval

    PhD thesis. The rapid growth in the volume of visual information, e.g. images and video, can overwhelm users’ ability to find and access the specific visual information of interest to them. In recent years, ontology knowledge-based (KB) image information retrieval techniques have been adopted in an attempt to extract knowledge from these images and enhance retrieval performance. A KB framework is presented to promote semi-automatic annotation and semantic image retrieval using multimodal cues (visual features and text captions). In addition, a hierarchical structure for the KB allows metadata to be shared and supports multiple semantics (polysemy) for concepts. The framework builds up an effective knowledge base pertaining to a domain-specific image collection, e.g. sports, and is able to disambiguate and assign high-level semantics to ‘unannotated’ images. Local feature analysis of visual content, namely using Scale Invariant Feature Transform (SIFT) descriptors, has been deployed in the ‘Bag of Visual Words’ (BVW) model as an effective method to represent visual content information and to enhance its classification and retrieval. Local features are more useful than global features, e.g. colour, shape or texture, as they are invariant to image scale, orientation and camera angle. An innovative approach is proposed for the representation, annotation and retrieval of visual content using a hybrid technique based upon an unstructured visual-word model and a (structured) hierarchical ontology KB model. The structural model facilitates the disambiguation of unstructured visual words and a more effective classification of visual content, compared to a vector space model, by exploiting local conceptual structures and their relationships. 
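The Bag of Visual Words representation described above can be sketched compactly: local descriptors from a collection are clustered into a vocabulary of "visual words", and each image becomes a histogram over that vocabulary. The sketch below uses random vectors in place of real SIFT descriptors (which are typically 128-dimensional) and a plain k-means written out by hand; the thesis's SLAC algorithm is not reproduced here.

```python
# Minimal Bag-of-Visual-Words sketch. Random vectors stand in for SIFT
# descriptors; a basic k-means builds the visual vocabulary.
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Cluster local descriptors into k visual words with plain k-means."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center, then recompute means.
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = descriptors[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def bovw_histogram(image_descriptors, centers):
    """Represent one image as a normalised histogram over visual words."""
    d = np.linalg.norm(image_descriptors[:, None] - centers[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
all_desc = rng.normal(size=(200, 8))   # stand-in for SIFT descriptors
vocab = build_vocabulary(all_desc, k=5)
h = bovw_histogram(all_desc[:40], vocab)
print(h)  # a probability distribution over the 5 visual words
```

Two images with similar content yield similar histograms, which is what makes this representation useful for classification and retrieval.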
The key contributions of this framework in using local features for image representation include: first, a method to generate visual words using the semantic local adaptive clustering (SLAC) algorithm, which takes term weights and the spatial locations of keypoints into account; consequently, the semantic information is preserved. Second, a technique is used to detect the domain-specific ‘non-informative visual words’ which are ineffective at representing the content of visual data and degrade its categorisation ability. Third, a method to combine an ontology model with a visual word model to resolve synonym (visual heterogeneity) and polysemy problems is proposed. The experimental results show that this approach can discover semantically meaningful visual content descriptions and recognise specific events, e.g. sports events, depicted in images efficiently. Since discovering the semantics of an image is an extremely challenging problem, one promising approach to enhance visual content interpretation is to use any textual information that accompanies an image as a cue to predict its meaning, by transforming this textual information into a structured annotation, e.g. using XML, RDF, OWL or MPEG-7. Although text and images are distinct types of information representation and modality, there are some strong, invariant, implicit connections between images and any accompanying text. Semantic analysis of image captions can be used by image retrieval systems to retrieve selected images more precisely. To do this, Natural Language Processing (NLP) is first exploited to extract concepts from image captions. Next, an ontology-based knowledge model is deployed to resolve natural language ambiguities. To deal with the accompanying text, two methods to extract knowledge from textual information have been proposed. 
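The second contribution, detecting non-informative visual words, parallels stop-word removal in text retrieval. The thesis's own criterion is not reproduced here; the sketch below uses the classic document-frequency heuristic as an illustration: a visual word that occurs in nearly every image of a domain-specific collection carries little discriminative information.

```python
# Illustrative 'non-informative visual word' filter based on document
# frequency (the thesis's actual criterion may differ).
from collections import Counter

def non_informative_words(image_word_sets, max_df=0.8):
    """Return visual words whose document frequency exceeds max_df."""
    n = len(image_word_sets)
    df = Counter(w for words in image_word_sets for w in set(words))
    return {w for w, c in df.items() if c / n > max_df}

# Each inner list: visual-word ids observed in one image (toy data).
images = [[0, 1, 2], [0, 2, 3], [0, 4], [0, 1, 4], [0, 3]]
print(non_informative_words(images))  # {0}: word 0 appears in every image
```

Dropping such words shrinks the histograms and, as the abstract notes, improves categorisation by removing features shared across all categories.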
First, metadata can be extracted automatically from text captions and restructured with respect to a semantic model. Second, the use of Latent Semantic Indexing (LSI) in relation to a domain-specific ontology-based knowledge model enables the combined framework to tolerate ambiguities and variations (incompleteness) in metadata. The ontology-based knowledge model allows the system to find indirectly relevant concepts in image captions and thus leverage these to represent the semantics of images at a higher level. Experimental results show that the proposed framework significantly enhances image retrieval and narrows the semantic gap between lower-level machine-derived and higher-level human-understandable conceptualisations.
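The LSI step can be illustrated with a plain truncated SVD over a term-by-caption count matrix: terms that co-occur in captions land close together in the latent concept space, which is what lets the framework tolerate vocabulary variation. The terms and counts below are invented for illustration, not taken from the thesis.

```python
# Minimal Latent Semantic Indexing sketch via truncated SVD.
import numpy as np

# Term-by-caption count matrix: rows = terms, columns = captions (toy data).
terms = ["goal", "striker", "court", "serve"]
X = np.array([[2, 1, 0],    # goal
              [1, 2, 0],    # striker
              [0, 0, 2],    # court
              [0, 0, 1]],   # serve
             dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                   # keep the top-k latent concepts
terms_k = U[:, :k] * s[:k]              # term coordinates in concept space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Co-occurring terms ('goal', 'striker') end up close in the latent space,
# while unrelated terms ('goal', 'court') do not.
print(cosine(terms_k[0], terms_k[1]) > cosine(terms_k[0], terms_k[2]))  # True
```

Queries and captions using different but related words can then still match through their proximity in the reduced space.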

    Annotated text databases in the context of the Kaj Munk corpus:One database model, one query language, and several applications


    Enhanced biomedical data extraction from scientific publications

    The field of scientific research is constantly expanding, with thousands of new articles published every day. As online databases grow, so does the need for technologies capable of navigating and extracting key information from the stored publications. In the biomedical field, these articles lay the foundation for advancing our understanding of human health and improving medical practices. With such a vast amount of data available, it can be difficult for researchers to quickly and efficiently extract the information they need. The challenge is compounded by the fact that many existing tools are expensive, hard to learn and incompatible with some article types. To address this, a prototype was developed. This prototype leverages the PubMed API to give researchers access to the information in numerous open-access articles. Features include the tracking of keywords and high-frequency words, along with the extraction of table content. The prototype is designed to streamline the process of extracting data from research articles, allowing researchers to analyse and synthesise information from multiple sources more efficiently. Master’s thesis in informatics (INF399, MAMN-INF, MAMN-PRO).
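The keyword- and frequency-tracking feature can be sketched independently of the retrieval step. In the prototype the abstracts would come from the PubMed API; here they are hard-coded so the counting logic itself can be shown, and the small stop-word list is illustrative only.

```python
# Sketch of the high-frequency-word tracking step over fetched abstracts.
import re
from collections import Counter

STOP = {"the", "of", "a", "in", "and", "to", "is", "for", "with", "by"}

def top_words(texts, n=3):
    """Most frequent non-stop-words across a set of abstracts."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(w for w in words if w not in STOP)
    return counts.most_common(n)

abstracts = [
    "Gene expression in tumour cells is altered by the treatment.",
    "The treatment reduced tumour growth in the cell cultures.",
]
print(top_words(abstracts, n=2))  # [('tumour', 2), ('treatment', 2)]
```

In the full prototype the same counter would be updated incrementally as each new article is fetched, giving a running view of the dominant terms in a query's result set.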

    Research on computer aided testing of pilot response to critical in-flight events

    Experiments on pilot decision making are described, with emphasis on the development of models of pilot decision making in critical in-flight events (CIFE). The following developments are reported: (1) a frame-system representation describing how pilots use their knowledge in a fault-diagnosis task; (2) assessment of script norms, distance measures, and Markov models developed from computer-aided testing (CAT) data; and (3) performance ranking of subject data. It is demonstrated that interactive computer-aided testing, whether by touch CRTs or personal computers, is a useful research and training device for measuring pilot information management in diagnosing system failures in simulated flight situations. Performance is dictated by knowledge of aircraft subsystems, initial pilot structuring of the failure symptoms and efficient testing of plausible causal hypotheses.
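The Markov models in item (2) amount to estimating transition probabilities between observed pilot actions from the CAT logs. A minimal sketch of that estimation step, with invented action labels rather than the study's actual coding scheme:

```python
# Maximum-likelihood estimation of action-to-action transition
# probabilities from logged action sequences (toy labels).
from collections import Counter, defaultdict

def transition_probs(sequences):
    """P(next action | current action), estimated from counts."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

# Each sequence: one pilot's ordered actions during a simulated failure.
runs = [["scan", "test", "diagnose"],
        ["scan", "test", "test", "diagnose"]]
p = transition_probs(runs)
print(p["test"]["diagnose"])  # 2 of the 3 transitions out of 'test'
```

Comparing such transition matrices across subjects is one way to rank how systematically different pilots move from symptom scanning to hypothesis testing.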

    Exploring opportunities to facilitate serendipity in search

    Serendipitously discovering new information can bring many benefits. Although we can design systems to highlight serendipitous information, serendipity cannot be easily orchestrated and is thus hard to study. In this paper, we deployed a working search engine that matched search results with Facebook ‘Like’ data, as a technology probe to examine naturally occurring serendipitous discoveries. Search logs and diary entries revealed the nature of these occasions in both leisure and work contexts. The findings support the use of the micro-serendipity model in search system design.

    Development of a multi-layered botmaster based analysis framework

    Botnets are networks of compromised machines, called bots, that together form the tool of choice for hackers in the exploitation and destruction of computer networks. Most malicious botnets can be rented out to a broad range of potential customers, each with a different attack agenda. The result is a botnet under the control of multiple botmasters, each of whom implements their own attacks and transactions at different times. To fight botnets, details about their structure, their users, and their users’ motives need to be discovered. Since current botnets require information about the initial bootstrapping of a bot into a botnet, monitoring them is possible. Botnet monitoring is used to discover the details of a botnet, but current botnet monitoring projects mainly identify the magnitude of the botnet problem and tend to overlook some fundamental problems, such as the diversified sources of the attacks. To understand the use of botnets in more detail, the botmasters that command them need to be studied. In this thesis we focus on identifying the threat of botnets based on each individual botmaster. We present a multi-layered analysis framework which identifies the transactions of each botmaster and correlates those transactions with the physical evolution of the botnet. From these characteristics we discover what role each botmaster plays in the overall botnet operation. We demonstrate our results in our system, MasterBlaster, which discovers the level of interaction between each botmaster and the botnet. The system has been evaluated on real network traces. Our results show that investigating the role of each botmaster in a botnet is essential, and they demonstrate the potential benefit of further research on analysing botmaster interactions. 
We believe our work will pave the way for more fine-grained analysis of botnets, leading to better protection capabilities and more rapid attribution of cyber crimes committed using botnets.
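The per-botmaster view the framework builds can be illustrated in miniature: attribute each observed command event to an individual controller and rank controllers by interaction level. The event records and field names below are hypothetical; the actual framework correlates much richer transaction data with the botnet's evolution over time.

```python
# Simplified per-botmaster interaction ranking over observed command
# events (hypothetical records, not MasterBlaster's real data model).
from collections import Counter

def interaction_levels(events):
    """Count commands issued per botmaster, most active first."""
    return Counter(e["botmaster"] for e in events).most_common()

events = [
    {"botmaster": "A", "cmd": "ddos"},
    {"botmaster": "B", "cmd": "spam"},
    {"botmaster": "A", "cmd": "update"},
    {"botmaster": "A", "cmd": "scan"},
]
print(interaction_levels(events))  # [('A', 3), ('B', 1)]
```

Even this coarse count already separates a dominant operator from an occasional renter, which is the kind of role distinction the thesis argues matters for attribution.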