8 research outputs found

    Combining Structure and Content for Relaxed XML Queries

    Get PDF
    Ranking and returning the most relevant results for a query is perhaps the most popular form of XML query processing. Querying XML data is often difficult in practical applications because the hierarchical structure of XML documents can be heterogeneous, so even a slight misunderstanding of the document structure can easily lead to unsatisfactory queries. This is especially frustrating because such queries can return empty answers even when the query itself contains no errors. To address this problem, we first propose a flexible query relaxation framework that supports approximate queries over XML data. Answers under this framework are not forced to strictly conform to the given query formulation, but may instead be inferred from the original query. Existing proposals, however, do not take structure into account adequately, nor do they have the power to combine structure and content gracefully when answering relaxed queries. In our solution, we classify nodes into two groups, categorical attribute nodes and statistical attribute nodes, and evaluate the similarity of each group with its own strategy. In addition, we design a directed acyclic graph that generates and controls structure relaxations, and we develop an efficient evaluation metric to assess structural similarity. We then develop a novel top-k retrieval approach that can intelligently generate the most promising answers in ranked order. Finally, we use a comprehensive set of experiments to demonstrate the effectiveness of our proposed approach in terms of precision and recall metrics
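
    The abstract does not spell out how the relaxation graph is traversed; as a rough sketch under assumed details, relaxed variants of a query tree can be generated one step at a time (for example, widening a child axis to a descendant axis) and explored best-first by accumulated penalty, so candidates emerge in decreasing-similarity order. All step names and costs below are illustrative assumptions, not the paper's.

    import heapq

    # Illustrative only: explore structural relaxations of a query best-first
    # by accumulated penalty, so candidates emerge in decreasing-similarity
    # order. The single relaxation step shown widens a child ('/') axis to a
    # descendant ('//') axis; the step cost is assumed.

    RELAX_COST = {"child_to_descendant": 0.2}

    def relaxations(query):
        # Yield (step_name, relaxed_query) pairs, one relaxation step each.
        for i, (axis, label) in enumerate(query):
            if axis == "/":
                relaxed = list(query)
                relaxed[i] = ("//", label)
                yield "child_to_descendant", tuple(relaxed)

    def enumerate_relaxed(query):
        # Best-first walk of the relaxation DAG rooted at the original query.
        heap = [(0.0, query)]
        seen = {query}
        while heap:
            penalty, q = heapq.heappop(heap)
            yield penalty, q
            for step, rq in relaxations(q):
                if rq not in seen:
                    seen.add(rq)
                    heapq.heappush(heap, (penalty + RELAX_COST[step], rq))

    # A query as a tuple of (axis, label) steps, e.g. /book/title
    for penalty, q in enumerate_relaxed((("/", "book"), ("/", "title"))):
        print(round(penalty, 2), q)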

    Ranking and Returning Relevant Results over Textual Content

    Get PDF
    Ranking and returning the most relevant results for a query is probably the most popular form of XML query processing. Querying XML data often becomes difficult in practical applications because the hierarchical structure of XML documents can be heterogeneous, so even a slight misunderstanding of the document structure can easily lead to unsatisfactory queries. This is especially frustrating given that such queries produce empty answers even when they contain no errors. To address this issue, we first propose a flexible query relaxation framework to support approximate XML queries. Answers under this framework are not required to satisfy the given query formulation exactly, as they may be inferred from the original query. Existing proposals, however, lack the power to combine structure and content elegantly when answering relaxed queries. In our solution, we classify nodes into two groups, categorical nodes and statistical nodes, and employ pattern-based approaches to assess the similarity relationships of each group. In addition, we design a directed acyclic graph to generate and control structure relaxations and develop an efficient evaluation metric for structural similarity. We then design a novel top-k retrieval approach that can intelligently generate the most promising answers in ranked order. Finally, we use a comprehensive set of experiments to demonstrate the effectiveness of our proposed approach in terms of precision and recall metrics
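
    The paper's actual similarity measures for the two node groups are not given here; the sketch below only illustrates the distinction, scoring categorical nodes by hierarchy-aware matching and statistical (numeric) nodes by normalized distance. The partial-credit value and the broader-term map are assumptions.

    # Illustrative scoring for the two node groups: categorical attribute
    # nodes get full credit on an exact match and partial credit when the
    # candidate is a known broader term; statistical (numeric) attribute
    # nodes are scored by distance normalized over the attribute's range.

    def categorical_sim(queried, candidate, broader=None):
        if queried == candidate:
            return 1.0
        if broader and candidate in broader.get(queried, ()):
            return 0.5  # e.g. 'novel' relaxed to its broader term 'book'
        return 0.0

    def statistical_sim(queried, candidate, value_range):
        lo, hi = value_range
        if hi == lo:
            return 1.0 if queried == candidate else 0.0
        return max(0.0, 1.0 - abs(queried - candidate) / (hi - lo))

    broader = {"novel": ("book",)}
    print(categorical_sim("novel", "book", broader))  # 0.5
    print(statistical_sim(30.0, 35.0, (0.0, 100.0)))  # 0.95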

    Ranking and Retrieving Accurate Results over XML Content

    Get PDF
    Ranking and returning the most relevant results is perhaps the most common form of XML query processing. Querying XML data can be difficult in practical applications because the hierarchical structure of XML documents may be heterogeneous, and even a slight misunderstanding of the document structure can easily lead to an unsatisfactory query formulation. This is especially frustrating given that such queries return empty answers even though they contain no errors. To address this problem, we first propose a flexible query relaxation framework to support approximate queries over XML data. Answers under this framework do not have to satisfy the query formulation exactly, but may be inferred from the original query. Existing proposals, however, do not take structure into account adequately, nor do they have the power to combine structure and content gracefully when answering relaxed queries. In our solution, we classify nodes into two groups, categorical attribute nodes and statistical attribute nodes, and apply pattern-based methods to evaluate the similarity of each group. In addition, we design a directed acyclic graph to generate and control structure relaxations and develop an efficient evaluation metric to assess structural similarity. We then create a novel top-k retrieval approach that can intelligently generate the most promising answers in ranked order. Finally, we use a comprehensive set of experiments to demonstrate the effectiveness of our proposed approach in terms of precision and recall metrics
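
    As a hypothetical illustration of how a top-k loop over relaxed queries can terminate early: because similarity only decreases along the relaxation order, the loop may stop once it already holds k answers at least as good as the best still-unexplored relaxation. This is a generic threshold-style sketch, not the paper's algorithm.

    import heapq

    # Generic threshold-style top-k sketch: relaxed queries arrive best-first,
    # every answer inherits its query's similarity score, and the loop stops
    # once the current k-th best answer is at least as good as the best
    # still-unexplored relaxation.

    def top_k(relaxed_queries, evaluate, k):
        results = []  # min-heap of (score, answer); root is the k-th best
        for sim, query in relaxed_queries:
            if len(results) >= k and results[0][0] >= sim:
                break  # no remaining relaxation can improve the top-k
            for answer in evaluate(query):
                heapq.heappush(results, (sim, answer))
                if len(results) > k:
                    heapq.heappop(results)
        return sorted(results, reverse=True)

    queries = [(1.0, "q0"), (0.8, "q1"), (0.6, "q2")]  # best-first order
    fake_db = {"q0": ["a1"], "q1": ["a2", "a3"], "q2": ["a4"]}
    print(top_k(queries, fake_db.__getitem__, k=2))  # [(1.0, 'a1'), (0.8, 'a3')]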

    A secure data outsourcing scheme based on Asmuth–Bloom secret sharing

    Get PDF
    The file attached to this record is the author's final peer-reviewed version. The publisher's final version can be found by following the DOI link. Data outsourcing is an emerging paradigm for data management in which a database is provided as a service by third-party service providers. One of the major benefits of offering database as a service is to provide organisations, which are unable to purchase expensive hardware and software to host their databases, with efficient data storage accessible online at a cheap rate. Despite that, several issues of data confidentiality, integrity, availability and efficient indexing of users’ queries at the server side have to be addressed in the data outsourcing paradigm. Service providers have to guarantee that their clients’ data are secured against internal (insider) and external attacks. This paper briefly analyses the existing indexing schemes in data outsourcing and highlights their advantages and disadvantages. Then, this paper proposes a secure data outsourcing scheme based on Asmuth–Bloom secret sharing which tries to address the issues in data outsourcing such as data confidentiality, availability and order preservation for efficient indexing
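
    For context, a minimal sketch of textbook Asmuth–Bloom (K, n) threshold secret sharing, the primitive the proposed scheme builds on; the moduli here are toy-sized and insecure, and the paper's indexing and order-preservation layers are not shown.

    import random
    from math import prod

    # Toy-sized, insecure parameters for (K, n) Asmuth-Bloom sharing.
    M0 = 3                 # secret space: s in {0, ..., M0 - 1}
    MODULI = [11, 13, 17]  # pairwise coprime, increasing; n = 3 shares
    K = 2                  # any K shares suffice to reconstruct
    assert prod(MODULI[:K]) > M0 * prod(MODULI[-(K - 1):])  # scheme condition

    def share(secret):
        bound = prod(MODULI[:K])
        a = random.randrange((bound - secret) // M0)  # random blinding factor
        y = secret + a * M0  # y stays below the product of the K smallest moduli
        return [(y % m, m) for m in MODULI]

    def reconstruct(shares):
        # Chinese Remainder Theorem over any K shares, then strip the blinding.
        m_prod = prod(m for _, m in shares)
        y = 0
        for r, m in shares:
            other = m_prod // m
            y += r * other * pow(other, -1, m)  # inverse exists: coprime moduli
        return (y % m_prod) % M0

    shares = share(2)
    print(reconstruct(shares[:K]))   # 2, recovered from the first K shares
    print(reconstruct(shares[-K:]))  # 2, any K shares work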

    Fast and Accurate Computation of Equi-Depth Histograms over Data Streams

    No full text
    Equi-depth histograms represent a fundamental synopsis widely used in both database and data stream applications, as they provide the cornerstone of many techniques such as query optimization, approximate query answering, distribution fitting, and parallel database partitioning. Equi-depth histograms try to partition a sequence of data in a way that every part has the same number of data items. In this paper, we present a new algorithm to estimate equi-depth histograms for high speed data streams over sliding windows. While many previous methods were based on quantile computations, we propose a new method called BAr Splitting Histogram (BASH) that provides an expected ϵ-approximate solution to compute the equi-depth histogram. Extensive experiments show that BASH is at least four times faster than one of the best existing approaches, while achieving similar or better accuracy and in some cases using less memory. The experimental results also indicate that BASH is more stable on data affected by frequent concept shifts
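
    BASH itself is not reproduced here; as a point of reference, the sketch below computes the exact equi-depth boundaries over a materialized window, i.e. the quantity that a streaming algorithm approximates to within ϵ without sorting the whole window.

    # Reference implementation of exact equi-depth boundaries over a
    # materialized window: bucket i ends at the (i * n / B)-th smallest value,
    # so every bucket holds about n/B items.

    def equi_depth_boundaries(values, num_buckets):
        ordered = sorted(values)
        n = len(ordered)
        return [ordered[(i * n) // num_buckets] for i in range(1, num_buckets)]

    window = [5, 1, 9, 3, 7, 2, 8, 4, 6, 0]
    print(equi_depth_boundaries(window, 4))  # [2, 5, 7] for this window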

    Exploring Data Partitions for What-if Analysis

    Get PDF
    What-if analysis is a data-intensive exploration to inspect how changes in a set of input parameters of a model influence some outcomes. It is motivated by a user trying to understand the sensitivity of a model to a certain parameter in order to reach a set of goals that are defined over the outcomes. To avoid an exploration of all possible combinations of parameter values, efficient what-if analysis calls for a partitioning of parameter values into data ranges and a unified representation of the obtained outcomes per range. Traditional techniques to capture data ranges, such as histograms, are limited to one outcome dimension. Yet, in practice, what-if analysis often involves conflicting goals that are defined over different dimensions of the outcome. Working on each of those goals independently cannot capture the inherent trade-off between them. In this paper, we propose techniques to recommend data ranges for what-if analysis, which capture not only data regularities, but also the trade-off between conflicting goals. Specifically, we formulate a parametric data partitioning problem and propose a method to find an optimal solution for it. Targeting scalability to large datasets, we further provide a heuristic solution to this problem. By theoretical and empirical analyses, we establish performance guarantees in terms of runtime and result quality
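
    The paper's parametric formulation is not reproduced here; as a generic stand-in, the dynamic program below splits a parameter axis, with rows sorted by input value, into a fixed number of contiguous ranges while minimizing the summed per-range variance across all outcome dimensions at once, which crudely captures partitioning under more than one goal.

    # Generic stand-in, not the paper's method: pick R - 1 cut points that
    # minimize total per-range variance summed over every outcome dimension,
    # so conflicting outcomes shape the ranges jointly.

    def variance(col):
        mean = sum(col) / len(col)
        return sum((x - mean) ** 2 for x in col)

    def range_cost(outcomes, i, j):
        # Cost of one range covering rows i..j-1, over all outcome dimensions.
        return sum(variance([row[d] for row in outcomes[i:j]])
                   for d in range(len(outcomes[0])))

    def optimal_partition(outcomes, num_ranges):
        n = len(outcomes)
        INF = float("inf")
        # best[r][j]: cheapest way to cover the first j rows with r ranges
        best = [[INF] * (n + 1) for _ in range(num_ranges + 1)]
        cut = [[0] * (n + 1) for _ in range(num_ranges + 1)]
        best[0][0] = 0.0
        for r in range(1, num_ranges + 1):
            for j in range(r, n + 1):
                for i in range(r - 1, j):
                    c = best[r - 1][i] + range_cost(outcomes, i, j)
                    if c < best[r][j]:
                        best[r][j], cut[r][j] = c, i
        bounds, j = [], n
        for r in range(num_ranges, 0, -1):  # walk the cuts back
            j = cut[r][j]
            bounds.append(j)
        return sorted(bounds)[1:]           # drop the leading 0

    # Two outcome dimensions per row; rows already sorted by the parameter.
    data = [(1.0, 9.0), (1.1, 8.8), (5.0, 2.0), (5.2, 2.1), (9.0, 9.1)]
    print(optimal_partition(data, 3))  # [2, 4]: ranges {0,1}, {2,3}, {4}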

    Engineering self-awareness with knowledge management in dynamic systems: a case for volunteer computing

    Get PDF
    The complexity of modern dynamic computing systems has motivated software engineering researchers to explore new sources of inspiration for equipping such systems with autonomic behaviours. Self-awareness has recently gained considerable attention as a prominent property for enriching the self-adaptation capabilities of systems operating in dynamic, heterogeneous and open environments. This thesis investigates the role of knowledge and its dynamic management in realising various levels of self-awareness for enabling self-adaptivity with different capabilities and strengths. The thesis develops a novel multi-level dynamic knowledge management approach for managing and representing evolving knowledge. The approach is able to acquire 'richer' knowledge about the system's internal state and its environment, in addition to managing the trade-offs arising from conflicting adaptation goals. The thesis draws on a case from volunteer computing, an environment characterised by openness, heterogeneity, dynamism and unpredictability, to develop and evaluate the approach. The thesis takes an experimental approach to evaluate the effectiveness of the dynamic knowledge management approach. The results show the added value of the approach to the self-adaptivity of the system compared to classic self-adaptation capabilities
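
    The thesis' architecture is not detailed in this abstract; purely as an assumed illustration, a self-aware loop might fold observations into an evolving knowledge base and pick the adaptation whose predicted effects best serve a weighted combination of conflicting goals. Every name, weight and effect value below is hypothetical.

    # Hypothetical sketch only: exponential smoothing stands in for 'dynamic
    # knowledge management', and the planner weighs two conflicting goals.

    GOAL_WEIGHTS = {"throughput": 0.6, "energy": 0.4}  # assumed trade-off

    class KnowledgeBase:
        def __init__(self, alpha=0.3):
            self.alpha, self.state = alpha, {}

        def update(self, observation):
            # Smooth each metric so the knowledge evolves with the system.
            for metric, value in observation.items():
                old = self.state.get(metric, value)
                self.state[metric] = (1 - self.alpha) * old + self.alpha * value

    def choose_adaptation(actions):
        # Score each action's predicted effects against the goal weights.
        def score(effects):
            return sum(GOAL_WEIGHTS[g] * effects.get(g, 0.0) for g in GOAL_WEIGHTS)
        return max(actions, key=lambda a: score(actions[a]))

    kb = KnowledgeBase()
    kb.update({"throughput": 120.0, "energy": 80.0})
    actions = {"add_volunteer_node": {"throughput": 0.5, "energy": -0.3},
               "throttle": {"throughput": -0.2, "energy": 0.6}}
    print(kb.state)                    # smoothed view of the system
    print(choose_adaptation(actions))  # 'add_volunteer_node' under these weights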