
    An Efficient Query Optimizer with Materialized Intermediate Views in Distributed and Cloud Environment

    In a cloud computing environment, the hardware resources required to execute a query with a distributed relational database system are scaled up or down according to query workload performance. Complex queries require large-scale resources to complete their execution efficiently. This resource requirement can be reduced by minimizing query execution time, which maximizes resource utilization and decreases the payment overhead for customers. Complex queries or batch queries contain common subexpressions. If these common subexpressions are evaluated once and their results are cached, they can be reused in the execution of further queries. In this research, we propose a query optimization algorithm that stores the intermediate results of queries and uses these by-products in the execution of future queries. Extensive experiments have been carried out with the help of a simulation model to test the algorithm's efficiency.
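    The core idea of the abstract, materializing a common subexpression once and reusing the cached result for later queries, can be sketched as follows. This is a minimal illustration with hypothetical names (`IntermediateViewCache`, `get_or_compute`), not the paper's actual algorithm:

```python
class IntermediateViewCache:
    """Materializes results of query subexpressions so later queries reuse them."""

    def __init__(self):
        self._views = {}  # normalized subexpression text -> cached result rows

    def _normalize(self, subexpr: str) -> str:
        # Collapse case and whitespace so textually equivalent subexpressions match.
        return " ".join(subexpr.lower().split())

    def get_or_compute(self, subexpr, compute):
        key = self._normalize(subexpr)
        if key not in self._views:
            self._views[key] = compute()  # evaluate once and materialize
        return self._views[key]


cache = IntermediateViewCache()
calls = []  # counts how often the expensive join is actually executed

def run_join():
    calls.append(1)
    return [("alice", 3), ("bob", 5)]

r1 = cache.get_or_compute("SELECT * FROM a JOIN b ON a.id = b.id", run_join)
r2 = cache.get_or_compute("select * from a  join b ON a.id = b.id", run_join)
assert r1 == r2 and len(calls) == 1  # second query reused the cached view
```

The second, textually different query hits the cache, so the join runs only once; a real optimizer would match subexpressions on query plans rather than normalized text.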

    Towards an efficient API for optimisation problems data

    The literature presents many application programming interfaces (APIs) and frameworks that provide state-of-the-art algorithms and techniques for solving optimisation problems. The same cannot be said about APIs and frameworks focused on the problem data itself, because, given the peculiarities and details of each variant of a problem, it is virtually impossible to provide general tools broad enough to be useful on a large scale. However, there are benefits to employing problem-centred APIs in an R&D environment: improving the understanding of the problem, providing fairness in results comparison, providing efficient data structures for different solving techniques, etc. Therefore, in this work we propose a novel design methodology for an API focused on an optimisation problem. Our methodology relies on a data parser to handle the problem specification files and on a set of efficient data structures to hold the information in memory, in a fashion that is intuitive for researchers and efficient for the solving algorithms. We also present the concept of a solution dispenser that can manage solution objects in memory better than built-in garbage collectors. Finally, we describe the positive results of employing a tailored API in a project involving the development of optimisation solutions for workforce scheduling and routing problems.
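    The "solution dispenser" idea, recycling solution objects instead of letting the garbage collector churn through allocations, is essentially an object pool. A minimal sketch, with hypothetical class names not taken from the paper:

```python
class Solution:
    """A candidate solution: here, simply an assignment vector."""

    def __init__(self, size):
        self.assignment = [0] * size


class SolutionDispenser:
    """Object pool: hands out recycled Solution instances when available."""

    def __init__(self, size):
        self._size = size
        self._free = []  # released solutions awaiting reuse

    def acquire(self):
        if self._free:
            s = self._free.pop()
            for i in range(self._size):
                s.assignment[i] = 0  # reset recycled state before reuse
            return s
        return Solution(self._size)  # pool empty: allocate a fresh object

    def release(self, s):
        self._free.append(s)


dispenser = SolutionDispenser(4)
a = dispenser.acquire()
dispenser.release(a)
b = dispenser.acquire()
assert b is a  # the same object was recycled; no new allocation occurred
```

In a metaheuristic that creates and discards thousands of candidate solutions per second, recycling like this avoids both allocation cost and garbage-collection pauses.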

    Coping with Alternate Formulations of Questions and Answers

    In this chapter we present the QALC system, which has participated in the four TREC QA evaluations. We focus here on the problem of linguistic variation and on how to relate questions and answers despite it. We first present variation at the term level, which consists in retrieving question terms in document sentences even when morphological, syntactic, or semantic variations alter them. Our second subject concerns variation at the sentence level, which we handle as different partial reformulations of questions. Questions are associated with extraction patterns based on the question's syntactic type and the object under query. We then present the whole system, situating how QALC deals with variation, and report different evaluations.
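    The idea of associating a question's syntactic type with an answer-extraction pattern can be illustrated with a toy regex table. The type name and pattern below are hypothetical, not QALC's actual patterns:

```python
import re

# Hypothetical extraction patterns keyed by question syntactic type.
# The real QALC patterns are derived from the question's syntactic type
# and the object under query; this table only illustrates the mechanism.
PATTERNS = {
    # "Who is the X?" -> look for "<ProperName> is the <x>"
    "WHO_IS": re.compile(
        r"(?P<answer>[A-Z][a-z]+(?: [A-Z][a-z]+)*) is the (?P<role>[a-z ]+)"
    ),
}

sentence = "Ada Lovelace is the first programmer"
match = PATTERNS["WHO_IS"].search(sentence)
assert match is not None
assert match.group("answer") == "Ada Lovelace"
```

A document sentence that is a partial reformulation of the question then yields the answer through the named capture group, even though its surface form differs from the question.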

    A Memory-Based Explanation of Antecedent-Ellipsis Mismatches New Insights From Computational Modeling

    An active question in psycholinguistics is whether or not the parser and grammar reflect distinct cognitive systems. Recent evidence for a distinct-systems view comes from cases of ungrammatical but acceptable antecedent-ellipsis mismatches (e.g., *Tom kicked Bill, and Matt was kicked by Tom too.). The finding that these mismatches show varying degrees of acceptability has been presented as evidence for the use of extra-grammatical parsing strategies that restructure a mismatched antecedent to satisfy the syntactic constraints on ellipsis (Arregui et al., 2006; Kim et al., 2011). In this paper, I argue that it is unnecessary to posit a special class of parser-specific rules to capture the observed profiles, and that acceptable mismatches do not reflect a parser-grammar misalignment. Rather, such effects are a natural consequence of a single structure-building system (i.e., the grammar) that relies on noisy, domain-general memory access mechanisms to retrieve an antecedent from memory. In Experiment 1, I confirm the acceptability profiles reported in previous work. Then in Experiment 2, as a proof of concept, I show using an established computational model of memory retrieval that the observed acceptability profiles follow from independently motivated principles of working memory, without invoking multiple representational systems. These results contribute to a uniform memory-based account of acceptable ungrammaticalities for a wide range of dependencies.
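    The retrieval mechanism the abstract appeals to, cue-based memory access in which candidates are scored by how well their features match the retrieval cues, can be sketched as follows. The feature names and weights are hypothetical, and real models (e.g., ACT-R-style retrieval) add base-level activation and noise:

```python
def activation(features, cues, base=0.0, weight=1.0):
    """Score a memory candidate by cue match minus cue mismatch."""
    match = sum(1 for c in cues if c in features)
    mismatch = len(cues) - match
    return base + weight * match - weight * mismatch


# Two candidate antecedents for an ellipsis site in a passive clause.
antecedents = {
    "active_VP": {"VP", "active"},
    "passive_VP": {"VP", "passive"},
}
cues = {"VP", "passive"}  # retrieval cues projected from the ellipsis site

scores = {name: activation(feats, cues) for name, feats in antecedents.items()}
best = max(scores, key=scores.get)
assert best == "passive_VP"  # the fully matching antecedent wins retrieval
```

Under retrieval noise, the partially matching antecedent is still retrieved on some proportion of trials, which is the kind of mechanism that yields graded acceptability for mismatched antecedents without positing parser-specific repair rules.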

    Extraction of Principle Knowledge from Process Patents for Manufacturing Process Innovation

    Process patents contain substantial knowledge of the principles behind manufacturing process problem solving; however, this knowledge is implicit in lengthy texts and cannot be directly reused in innovation design. To effectively support systematic manufacturing process innovation, this paper presents an approach to extracting principle innovation knowledge from process patents. The proposed approach consists of (1) classifying process patents by taking process method, manufacturing object, and manufacturing feature as references; (2) extracting generalized process contradiction parameters, and the principles behind resolving such process contradictions, based on patent mining and the technology abstraction of TRIZ (the theory of inventive problem solving); and (3) constructing a domain process contradiction matrix and mapping the relationship between the matrix and the corresponding process patents. Finally, a case study is presented to illustrate the applicability of the proposed approach.
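    The contradiction matrix of step (3) is essentially a lookup from a pair of conflicting parameters to inventive principles, with each principle mapped back to supporting patents. A minimal sketch with hypothetical parameter names, principles, and placeholder patent IDs:

```python
# Hypothetical fragment of a domain process contradiction matrix:
# (improving parameter, worsening parameter) -> inventive principles.
MATRIX = {
    ("machining accuracy", "processing time"): ["segmentation", "preliminary action"],
}

# Each principle is mapped back to the process patents it was mined from
# (placeholder IDs; real entries would reference actual patents).
PRINCIPLE_TO_PATENTS = {
    "segmentation": ["patent-001"],
    "preliminary action": ["patent-002"],
}


def recommend(improving, worsening):
    """Return inventive principles and supporting patents for a contradiction."""
    principles = MATRIX.get((improving, worsening), [])
    return {p: PRINCIPLE_TO_PATENTS.get(p, []) for p in principles}


rec = recommend("machining accuracy", "processing time")
assert "segmentation" in rec  # each principle links back to example patents
```

A designer facing the stated contradiction would retrieve the principles and then consult the linked patents for concrete embodiments of each principle.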

    Study and Implementation of Blockchain Compression

    Blockchain technology is being recognized as the technology innovation that is going to change how society and people interact. Despite the excitement, these applications require hundreds of gigabytes of memory space and are not suitable for IoT devices. Our research goals are to compress the blockchain data and to measure the minimum memory required to participate in a blockchain application while maintaining both privacy and security.
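    The measurement side of this goal, quantifying how much storage compression saves on serialized chain data, can be sketched with standard-library compression over toy blocks. The block structure is hypothetical and the abstract does not specify the compression scheme used:

```python
import json
import zlib

# Toy chain: 100 blocks with a fixed-size previous-block hash field and
# repetitive transaction payloads (hypothetical structure for illustration).
blocks = [
    {"height": i, "prev": "00" * 32, "txs": ["tx"] * 50}
    for i in range(100)
]

raw = json.dumps(blocks).encode()
compressed = zlib.compress(raw, level=9)

assert len(compressed) < len(raw)  # repetitive chain data compresses well
ratio = len(compressed) / len(raw)
```

Comparing `len(raw)` with `len(compressed)` across realistic workloads is one way to estimate the minimum storage footprint a constrained IoT node would need to participate.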