
    Table Search, Generation and Completion

    PhD thesis in Information Technology. Tables are one of those “universal tools” that are practical and useful in many application scenarios. Tables can be used to collect and organize information from multiple sources and then turn that information into knowledge (and, ultimately, support decision-making) by performing various operations, such as sorting, filtering, and joins. Because of this, a large number of tables already exist on the Web, representing a vast and rich source of structured information that could be utilized. The focus of this thesis is on developing methods for assisting the user in completing a complex task by providing intelligent assistance for working with tables. Specifically, our interest is in relational tables, which describe a set of entities along with their attributes. Imagine a scenario in which a user is working with a table and has already entered some data into it. Intelligent assistance can include providing recommendations for the empty table cells, searching for similar tables that can serve as a blueprint, or even automatically generating the entire table that the user needs. The table-making task can thus be simplified into just a few button clicks. Motivated by this scenario, we propose a set of novel tasks: table search, table generation, and table completion. Table search is the task of returning a ranked list of tables in response to a query. Google, for instance, can now provide tables as direct answers to many queries, especially when users search for a list of things. Figure 1.1 shows an example. Table generation is about automatically organizing entities and their attributes in a tabular format to facilitate a better overview. Table completion is concerned with augmenting the input table with additional tabular data.
Figure 1.2 illustrates a scenario that recommends row and column headings with which to populate the table and automatically completes table values from verifiable sources. In this thesis, we propose methods and evaluation resources for addressing these tasks.
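The tasks above assume a simple data model: a relational table is a set of rows over named attribute columns. As a rough illustration of the column-heading recommendation idea behind table completion (the data, names, and overlap heuristic here are our own sketch, not the thesis's actual method), headings can be suggested from corpus tables that share columns with the user's table:

```python
# A relational table: rows describe entities via attribute columns.
user_table = {
    "columns": ["Country", "Capital"],
    "rows": [["Norway", "Oslo"], ["France", "Paris"]],
}

# A toy stand-in for a corpus of tables harvested from the Web.
corpus = [
    {"columns": ["Country", "Capital", "Population"]},
    {"columns": ["City", "Mayor"]},
    {"columns": ["Country", "Currency"]},
]

def suggest_columns(table, corpus):
    """Recommend new column headings: collect headings from corpus
    tables that share at least one column with the input table."""
    seen = set(table["columns"])
    suggestions = []
    for cand in corpus:
        if seen & set(cand["columns"]):
            suggestions.extend(c for c in cand["columns"] if c not in seen)
    return suggestions

print(suggest_columns(user_table, corpus))  # ['Population', 'Currency']
```

A real system would rank suggestions by corpus-wide evidence rather than simple column overlap, but the input/output shape of the task is the same.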

    Entity-Oriented Search

    This open access book covers all facets of entity-oriented search—where “search” can be interpreted in the broadest sense of information access—from a unified point of view, and provides a coherent and comprehensive overview of the state of the art. It represents the first synthesis of research in this broad and rapidly developing area. Selected topics are discussed in-depth, the goal being to establish fundamental techniques and methods as a basis for future research and development. Additional topics are treated at a survey level only, containing numerous pointers to the relevant literature. A roadmap for future research, based on open issues and challenges identified along the way, rounds out the book. The book is divided into three main parts, sandwiched between introductory and concluding chapters. The first two chapters introduce readers to the basic concepts, provide an overview of entity-oriented search tasks, and present the various types and sources of data that will be used throughout the book. Part I deals with the core task of entity ranking: given a textual query, possibly enriched with additional elements or structural hints, return a ranked list of entities. This core task is examined in a number of different variants, using both structured and unstructured data collections, and numerous query formulations. In turn, Part II is devoted to the role of entities in bridging unstructured and structured data. Part III explores how entities can enable search engines to understand the concepts, meaning, and intent behind the query that the user enters into the search box, and how they can provide rich and focused responses (as opposed to merely a list of documents)—a process known as semantic search. The final chapter concludes the book by discussing the limitations of current approaches, and suggesting directions for future research. Researchers and graduate students are the primary target audience of this book. 
A general background in information retrieval is sufficient to follow the material, including an understanding of basic probability and statistics concepts as well as a basic knowledge of machine learning concepts and supervised learning algorithms.

    NASA-Nearest Neighbour Algorithm with Structured Robustness Algorithm to improve Difficult Keyword Queries over Database

    This paper presents a technique to improve difficult keyword queries over databases. Query performance prediction is the task of estimating the quality of the results returned in response to a query. Keyword queries on databases provide easy access to data, but they often suffer from low ranking quality. Identifying queries with low-quality ranked results is necessary to improve user satisfaction. Post-retrieval predictors analyze the top-retrieved documents. This paper introduces a new high-performance technique, named NASA, based on k-Nearest Neighbour (k-NN) search over the top-k results of a corrupted version of the database. k-NN handles complex functions during execution and reduces the loss of information, while also helping to reduce execution time. DOI: 10.17762/ijritcc2321-8169.15078
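The abstract builds its predictor on k-NN search over top-k result representations. A minimal sketch of the k-NN ingredient itself (the vectors and function names are hypothetical stand-ins, not taken from the paper):

```python
import math

def knn_search(points, query, k=3):
    """Return the k points nearest to `query` by Euclidean distance.

    `points` is a list of numeric tuples, a toy stand-in for the
    top-k result representations the NASA method compares against
    a corrupted copy of the database.
    """
    by_distance = sorted(points, key=lambda p: math.dist(p, query))
    return by_distance[:k]

# Toy usage: find the 2 result vectors closest to a query vector.
results = [(0.1, 0.9), (0.8, 0.2), (0.15, 0.85), (0.9, 0.1)]
print(knn_search(results, (0.0, 1.0), k=2))
# [(0.1, 0.9), (0.15, 0.85)]
```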

    Linking Surface Facts to Large-Scale Knowledge Graphs

    Open Information Extraction (OIE) methods extract facts from natural language text in the form of ("subject"; "relation"; "object") triples. These facts are, however, merely surface forms, the ambiguity of which impedes their downstream usage; e.g., the surface phrase "Michael Jordan" may refer to either the former basketball player or the university professor. Knowledge Graphs (KGs), on the other hand, contain facts in a canonical (i.e., unambiguous) form, but their coverage is limited by a static schema (i.e., a fixed set of entities and predicates). To bridge this gap, we need the best of both worlds: (i) the high coverage of free-text OIEs, and (ii) the semantic precision (i.e., monosemy) of KGs. To achieve this goal, we propose a new benchmark with novel evaluation protocols that can, for example, measure fact linking performance at a granular triple-slot level, while also measuring whether a system can recognize that a surface form has no match in the existing KG. Our extensive evaluation of several baselines shows that detection of out-of-KG entities and predicates is more difficult than accurate linking to existing ones, thus calling for more research efforts on this difficult task. We publicly release all resources (data, benchmark and code) at https://github.com/nec-research/fact-linking
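The gap between surface triples and canonical KG facts can be made concrete with a small sketch. The alias table, entity IDs, and `link_slot` helper below are illustrative assumptions, not the benchmark's actual interface; the point is slot-level linking plus out-of-KG detection:

```python
# A surface OIE triple is just strings; a KG fact uses canonical IDs.
surface_fact = ("Michael Jordan", "played for", "Chicago Bulls")

# Tiny mock alias table: surface form -> candidate entity IDs.
# "Michael Jordan" is ambiguous, as the abstract notes; the IDs
# here are illustrative only.
kg_aliases = {
    "Michael Jordan": ["Q41421", "Q3308285"],  # player vs. professor
    "Chicago Bulls": ["Q128109"],
}

def link_slot(surface_form):
    """Return candidate KG IDs for one triple slot, or None if the
    surface form has no match in the KG (out-of-KG detection)."""
    return kg_aliases.get(surface_form)

subj, rel, obj = surface_fact
print(link_slot(subj))  # two candidates -> needs disambiguation
print(link_slot(obj))   # one candidate -> unambiguous link
print(link_slot(rel))   # None: predicate absent from this tiny KG
```

Scoring each slot separately, as above, is what makes the "granular triple slot level" evaluation possible.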

    The Use of Mapping as an Instructional Technique with Difference Model Readers

    Purpose: This study investigated the use of mapping as an instructional technique to improve reading comprehension in readers exhibiting word-calling behavior (difference model readers). Mapping is a graphic display of the events and ideas in a passage, depicting sequence and subordination. It was hypothesized that mapping would provide a means for processing information from the text at the levels of concept formation, association, and integration. Procedure: Six students in grades 7 through 10 were selected as subjects. They received instruction in mapping in conjunction with their regular reading program. At the conclusion of each instructional unit, a test passage was administered to measure change in comprehension abilities. The study utilized a single-case experimental design; specifically, multiple baseline across subjects. Treatment lasted between 17 and 21 weeks, depending on the subject. In addition, pre- and post-test scores on the Reading Miscue Inventory were compared. Graphs generated by the multiple baseline procedure were analyzed through visual interpretation and the Rn statistic. All other data were subjected to descriptive analysis. Conclusions: The Rn statistic approached but did not achieve significance. Visual interpretation of the graphs indicated two trends: (1) decreasing variability in passage scores, and (2) fewer extremely low scores among the lowest-functioning subjects. Data from the Reading Miscue Inventory indicated substantial positive change in the subjects' comprehension abilities. These findings give preliminary, limited support to the effectiveness of mapping in improving comprehension with difference model readers.

    Active Analytics: Adapting Web Pages Automatically Based on Analytics Data

    Web designers are expected to perform the difficult task of adapting a site’s design to fit changing usage trends. Web analytics tools give designers a window into website usage patterns, but those patterns must be analyzed and applied to a website's user interface design manually. A framework for marrying live analytics data with user interface design could allow for interfaces that adapt dynamically to usage patterns, with little or no action from the designers. The goal of this research is to create a framework that utilizes web analytics data to automatically update and enhance web user interfaces. In this research, we present a solution for extracting analytics data via web services from Google Analytics and transforming them into reporting data that informs user interface improvements. Once data are extracted and summarized, we expose the summarized reports via our own web services in a form that can be used by our client-side User Interface (UI) framework. This client-side framework dynamically updates the content and navigation on the page to reflect the data mined from the web usage reports. The resulting system reacts to changing usage patterns of a website and updates the user interface accordingly. We evaluated our framework by assigning navigation tasks to users on the UNF website and measuring the time it took them to complete those tasks, one group with our framework enabled and one group using the original website. We found that the group that used the modified version of the site with our framework enabled was able to navigate the site more quickly and effectively.

    Semantic and pragmatic characterization of learning objects

    Doctoral thesis in Informatics Engineering. Universidade do Porto, Faculdade de Engenharia. 201

    Bubble World - A Novel Visual Information Retrieval Technique

    With the tremendous growth of published electronic information sources in the last decade, and the unprecedented reliance on this information to succeed in day-to-day operations, comes the expectation of finding the right information at the right time. Sentential interfaces are currently the only viable solution for searching through large infospheres of unstructured information; however, the simplistic nature of their interaction model and the limited cognitive amplification they can provide severely constrain the performance of the interface. Visual information retrieval systems are emerging as possible candidate replacements for the more traditional interfaces, but many lack the cognitive framework to support the knowledge crystallization process found to be essential in information retrieval. This work introduces a novel visual information retrieval technique crafted from two distinct design genres: (1) the cognitive strategies of the human mind to solve problems, and (2) observed interaction patterns with existing information retrieval systems. Based on the cognitive and interaction framework developed in this research, a functional prototype information retrieval system, called Bubble World, has been created to demonstrate that significant performance gains can be achieved using this technique when compared to more traditional text-based interfaces. Bubble World does this by transforming the internal mental representation of the information retrieval problem into an efficient external view, and then, through visual cues, providing cognitive amplification at key stages of the information retrieval process. Additionally, Bubble World provides the interaction model and the mechanisms to incorporate complex search schemas into the retrieval process, either manually or automatically through the use of predefined ontological models.