
    Cooperative answers in database systems

    A major concern of researchers who seek to improve human-computer communication involves how to move beyond literal interpretations of queries to a level of responsiveness that takes the user's misconceptions, expectations, desires, and interests into consideration. At Maryland, we are investigating how to better meet a user's needs within the framework of the cooperative answering system of Gal and Minker. We have been exploring how to use semantic information about the database to formulate coherent and informative answers. The work has two main thrusts: (1) the construction of a logic formula which embodies the content of a cooperative answer; and (2) the presentation of the logic formula to the user in natural language form. The information available in a deductive database system for building cooperative answers includes integrity constraints, user constraints, the search tree for answers to the query, and false presuppositions that are present in the query. The basic cooperative answering theory of Gal and Minker forms the foundation of a cooperative answering system that integrates the new construction and presentation methods. This paper provides an overview of the cooperative answering strategies used in the CARMIN cooperative answering system, an ongoing research effort at Maryland. Section 2 gives some useful background definitions. Section 3 describes techniques for collecting cooperative logical formulae. Section 4 discusses which natural language generation techniques are useful for presenting the logic formula as natural language text. Section 5 presents a diagram of the system.
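    As a concrete illustration of the first thrust, the sketch below shows how an integrity constraint can expose a false presupposition in a query and yield an explanatory answer instead of a bare empty result. This is a minimal sketch, not the CARMIN implementation; the course schema and the constraint are hypothetical.

```python
# Minimal sketch (hypothetical schema): using an integrity constraint
# to detect a false presupposition and answer cooperatively.

# Integrity constraint: graduate courses are never offered in summer.
def violated_constraint(query):
    if query.get("level") == "graduate" and query.get("term") == "summer":
        return "graduate courses are never offered in the summer term"
    return None

def cooperative_answer(rows, query):
    reason = violated_constraint(query)
    if reason:
        # The query presupposes something the constraints rule out,
        # so explain the empty answer rather than returning [].
        return f"No answer can exist: {reason}."
    matches = [r for r in rows
               if all(r.get(k) == v for k, v in query.items())]
    return matches or "No matching courses in the current database."

courses = [{"name": "Deductive Databases", "level": "graduate", "term": "fall"}]
print(cooperative_answer(courses, {"level": "graduate", "term": "summer"}))
```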

    Cooperative Query Answering for Approximate Answers with Nearness Measure in Hierarchical Structure Information Systems

    Cooperative query answering for approximate answers has been utilized in various problem domains. Many challenges in manufacturing information retrieval, such as classifying parts into families in group technology implementation, choosing the closest alternatives or substitutions for an out-of-stock part, or finding similar existing parts for rapid prototyping, could be alleviated using the concept of cooperative query answering. Most cooperative query answering techniques proposed so far concentrate on simple queries or single-table information retrieval. Query relaxations in searching for approximate answers are mostly limited to attribute-value substitutions. Many hierarchical-structure information systems, such as manufacturing information systems, store their data in multiple tables that are connected to each other by hierarchical relationships: "aggregation", "generalization/specialization", "classification" and "category". Due to the nature of hierarchical-structure information systems, information retrieval in such domains usually involves nested or joined queries. In addition, searching for approximate answers in hierarchical-structure databases must consider not only attribute-value substitutions but also attribute or relation substitutions (e.g., WIDTH to DIAMETER, HOLE to GROOVE). For example, shape transformations of parts or features are possible and commonly practiced: a bar could be transformed into a rod. Given such characteristics of hierarchical information systems, the simple-query or single-relation relaxation techniques used in most cooperative query answering systems are not adequate. In this research, we proposed techniques for neighbor knowledge construction and complex query relaxation. We enhanced the original Pattern-based Knowledge Induction (PKI) and Distribution Sensitive Clustering (DISC) methods so that they can be used for neighbor hierarchy construction at both the tuple and attribute levels. We developed a cooperative query answering model to facilitate the search for approximate answers to complex queries. Our model comprises algorithms for determining the causes of a null answer, expanding the qualified tuple set, expanding the intersected tuple set, and relaxing multiple conditions simultaneously. To calculate the semantic nearness between exact-match answers and approximate answers, we also proposed a nearness measuring function, called "Block Nearness", that is appropriate for the query relaxation methods proposed in this research.
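    The sketch below illustrates the general idea of attribute-value relaxation over a neighbor hierarchy: when an exact match fails, sibling values in the hierarchy are substituted and each approximate answer carries a nearness score. The part taxonomy is hypothetical, and the hierarchy-distance score is a simplified stand-in for the "Block Nearness" function defined in this research.

```python
# Hypothetical part-shape hierarchy: child -> parent.
HIERARCHY = {"bar": "cylindrical_stock", "rod": "cylindrical_stock",
             "cylindrical_stock": "raw_material"}

def siblings(value):
    """Values sharing the same parent in the hierarchy."""
    parent = HIERARCHY.get(value)
    return [v for v, p in HIERARCHY.items() if p == parent and v != value]

def relax(shape, rows):
    """Exact match first; on a null answer, substitute sibling values
    and return approximate answers with a crude nearness score."""
    exact = [(r, 1.0) for r in rows if r["shape"] == shape]
    if exact:
        return exact
    return [(r, 0.5)                    # 0.5: one step up, one step down
            for sib in siblings(shape)
            for r in rows if r["shape"] == sib]

parts = [{"id": "P-7", "shape": "rod"}]
print(relax("bar", parts))  # no bar in stock -> offers the nearby rod
```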

    Relaxing and Restraining Queries for OBDA

    In ontology-based data access (OBDA), ontologies have been successfully employed for querying possibly unstructured and incomplete data. In this paper, we advocate using ontologies not only to formulate queries and compute their answers, but also to modify queries by relaxing or restraining them, so that they retrieve either more or fewer answers over a given dataset. Towards this goal, we first illustrate that some domain knowledge that could be naturally leveraged in OBDA can be expressed using complex role inclusions (CRI). Queries over ontologies with CRI are not first-order (FO) rewritable in general. We propose an extension of DL-Lite with CRI, and show that conjunctive queries over ontologies in this extension are FO rewritable. Our main contribution is a set of rules to relax and restrain conjunctive queries (CQs). Firstly, we define rules that use the ontology to produce CQs that are relaxations/restrictions over any dataset. Secondly, we introduce a set of data-driven rules that leverage patterns in the current dataset to obtain more fine-grained relaxations and restrictions.
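    To make the relaxation/restriction idea concrete, here is a minimal sketch of the simplest kind of rule: replacing a class atom in a CQ by a super-class relaxes the query (it can only retrieve more answers), while replacing it by a sub-class restrains it. The paper's rules also cover roles and complex role inclusions; the toy class hierarchy below is hypothetical.

```python
# Hypothetical class hierarchy: sub-class -> super-class.
SUBCLASS_OF = {"ElectricCar": "Car", "Car": "Vehicle"}

def relax_atom(cls):
    """Relaxation: move one step up the class hierarchy."""
    return SUBCLASS_OF.get(cls, cls)

def restrain_atom(cls):
    """Restriction: move one step down, to any direct sub-class."""
    subs = [c for c, p in SUBCLASS_OF.items() if p == cls]
    return subs[0] if subs else cls

# CQ: q(x) <- ElectricCar(x)
print(relax_atom("ElectricCar"))  # Car -> at least as many answers
print(restrain_atom("Vehicle"))   # Car -> at most as many answers
```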

    Answering Complex Questions by Joining Multi-Document Evidence with Quasi Knowledge Graphs

    Direct answering of questions that involve multiple entities and relations is a challenge for text-based QA. This problem is most pronounced when answers can be found only by joining evidence from multiple documents. Curated knowledge graphs (KGs) may yield good answers, but are limited by their inherent incompleteness and potential staleness. This paper presents QUEST, a method that can answer complex questions directly from textual sources on the fly, by computing similarity joins over partial results from different documents. Our method is completely unsupervised, avoiding training-data bottlenecks and coping with rapidly evolving ad hoc topics and formulation styles in user questions. QUEST builds a noisy quasi KG with node and edge weights, consisting of dynamically retrieved entity names and relational phrases. It augments this graph with types and semantic alignments, and computes the best answers by means of an algorithm for Group Steiner Trees. We evaluate QUEST on benchmarks of complex questions, and show that it substantially outperforms state-of-the-art baselines.
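    The sketch below illustrates only the graph step: assembling a small weighted quasi KG from triples extracted from different documents, then extracting a tree that connects the question's anchor nodes. QUEST solves the Group Steiner Tree problem, where each terminal is a group of candidate nodes; networkx's approximate steiner_tree over fixed terminals is used here as a simplified stand-in, and the triples are toy data.

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
# Edges from different documents; lower weight = higher confidence.
G.add_edge("Spielberg", "directed", weight=0.2)
G.add_edge("directed", "Jurassic Park", weight=0.3)
G.add_edge("Jurassic Park", "released in", weight=0.4)
G.add_edge("released in", "1993", weight=0.5)

# Terminals: graph nodes matched by the question's entities/relations.
tree = steiner_tree(G, ["Spielberg", "released in"], weight="weight")
# Candidate answers lie on the connecting tree ("Jurassic Park" here).
print(sorted(tree.nodes()))
```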

    Computer-language based data prefetching techniques

    Data prefetching has long been used as a technique to improve access times to persistent data. It is based on retrieving data records from persistent storage to main memory before the records are needed. Data prefetching has been applied to a wide variety of persistent storage systems, from file systems to Relational Database Management Systems and NoSQL databases, with the aim of reducing access times to the data maintained by the system and thus improving the execution times of the applications using this data. However, most existing solutions to data prefetching are based on information that can be retrieved from the storage system itself, whether in the form of heuristics based on the data schema or of data access patterns detected by monitoring access to the system. These approaches have multiple disadvantages in terms of the rigidity of the heuristics they use, the accuracy of the predictions they make, and/or the time they need to make these predictions, a process often performed while the applications are accessing the data, which causes considerable overhead. In light of the above, this thesis proposes two novel approaches to data prefetching based on predictions made by analyzing the instructions and statements of the computer languages used to access persistent data. The proposed approaches take into consideration how the data is accessed by the higher-level applications, make accurate predictions, and run without causing any additional overhead. The first approach analyzes the instructions of applications written in object-oriented languages in order to prefetch data from Persistent Object Stores. It is based on static code analysis performed prior to application execution and hence adds no overhead. It also includes various strategies to deal with cases that require runtime information unavailable before the application executes. We integrate this analysis approach into an existing Persistent Object Store and run a series of extensive experiments to measure the improvement obtained by prefetching the objects predicted by the approach. The second approach analyzes statements and historic logs of the declarative query language SPARQL in order to prefetch data from RDF triplestores. It measures two types of similarity between SPARQL queries to detect recurring query patterns in the historic logs. It then uses the detected patterns to predict subsequent queries and launches them before they are requested, prefetching the data they need. Our evaluation shows that the approach makes high-accuracy predictions and can achieve a high cache hit rate when caching the results of the predicted queries.
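    The following sketch illustrates the intuition behind the second approach: if an incoming SPARQL query resembles the first query of a recurring pair mined from the historic log, the pair's follow-up query is launched early so that its results are already cached. The thesis defines two dedicated similarity measures; the token-level Jaccard similarity below is a crude stand-in, and the log data is hypothetical.

```python
def similarity(q1, q2):
    """Crude token-level Jaccard similarity between two queries."""
    t1, t2 = set(q1.split()), set(q2.split())
    return len(t1 & t2) / len(t1 | t2)

# Recurring (query, follow-up) pattern mined from the historic log.
LOG_PAIRS = [
    ("SELECT ?n WHERE { ?s a :Person . ?s :name ?n }",
     "SELECT ?e WHERE { ?s a :Person . ?s :email ?e }"),
]

def predict_next(incoming, threshold=0.6):
    for first, follow_up in LOG_PAIRS:
        if similarity(incoming, first) >= threshold:
            return follow_up  # prefetch: run this before it is asked
    return None

print(predict_next("SELECT ?n WHERE { ?s a :Person . ?s :name ?n }"))
```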

    Contextualized and personalized location-based services

    Advances in the technologies of smart mobile devices and tiny sensors, together with the increase in the number of web resources, open up a plethora of new mobile information services in which people can acquire and disseminate information at any place and any time. Location-based services (LBS) are characterized by providing users with useful local information, i.e. information that belongs to a particular domain of interest to the user and can be of use while the user remains in a particular area. In addition, LBS need to take into account the interactions and dependencies between services, users and context when filtering and delivering information, in order to fulfill the needs and constraints of mobile users. We argue that this raises a series of technical challenges in terms of data semantics and infrastructure, context-awareness and personalization, and query formulation and answering. These challenges cannot be met by simply extending traditional data management strategies; they need a new solution.

    Firstly, we propose a semantic LBS infrastructure based on the modularized-ontologies approach. We elaborate a core ontology composed mainly of three modules describing the services, users and contexts. The core ontology presents an abstract view (a model) of all information in LBS. In contrast, data describing the instances (of services, users and actual contexts) are stored in three independent data stores, called the service profiles, user profiles and context profiles. These data are semantically aligned with the concepts in the core ontology through a set of mappings. This approach enables the distributed data sources to be maintained in an autonomous manner, which is well adapted to the high dynamics and mobility of the data sources.

    Secondly, we separately address the functions and features of the three major players in LBS, i.e. services, contexts and users, and our approach to modelling them. We then define a set of constructs to represent their interactions and inter-dependencies, and illustrate how these semantic constructs can contribute to personalized and contextualized query processing. Service classes are organized in a taxonomy that distinguishes services by their business functions. This concept hierarchy helps to analyze and reformulate users' queries. We introduce three new kinds of relationships in the service module to enhance the semantics of interactions and dependencies between services. We identify five key components of context in LBS and regard them as a semantic contextual basis for LBS. Component contexts are related together by specific composition relationships that can describe spatio-temporal constraints. A user profile contains personal information about a given user and possibly a set of self-defined rules, which offer hints on what the user likes or dislikes and what could attract him or her. In the core ontology, clustering users with common features can help cooperative query answering. Each of the three modules of the core ontology is an ontology in itself; they are inter-related by relationships that link concepts belonging to different modules. The LBS fully benefits from the modularized structure of the core ontology: it restricts the search space and facilitates the maintenance of each module.

    Finally, we study the query reformulation and processing issues in LBS. Making the query interface tangible and providing rapid, relevant answers are typical concerns in all information services. Our query format not only obeys the "simple, tangible and effective" golden rules of user-interface design, but also satisfies the need for a domain-independent interface and emphasizes the importance of spatio-temporal constraints in LBS. With pre-defined spatio-temporal operators, users can easily specify in their queries the spatio-temporal availability they need for the services they are looking for. This eliminates most of the irrelevant answers that are usually generated by keyword-based approaches. Constraints in the various dimensions (what, when, where and what-else) can be expressed as a conjunctive query and then smoothly translated to RDF patterns. We illustrate our query answering strategy using the SPARQL syntax, and explain how relaxation can be done with rules specified in the query relaxation profile.
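    As a rough illustration of the query translation described above, the sketch below maps a conjunctive (what, where, when) query to SPARQL-style triple patterns. The property names are hypothetical, not the vocabulary actually used in this work.

```python
def to_rdf_patterns(what, where, when):
    """Translate a conjunctive (what, where, when) query into
    SPARQL-style triple patterns."""
    return (f"?s a :{what} .\n"
            f"?s :locatedIn :{where} .\n"          # spatial constraint
            f"?s :availableDuring \"{when}\" .")   # temporal constraint

# "Find a pharmacy in the city centre that is open tonight."
print(to_rdf_patterns("Pharmacy", "CityCentre", "18:00-22:00"))
```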