
    Exploiting self-monitoring sample views for cardinality estimation

    Good cardinality estimates are critical for generating good execution plans during query optimization. Complex predicates, correlations between columns, and user-defined functions are extremely hard to handle with the traditional histogram approach. This demo illustrates the use of sample views for cardinality estimation as prototyped in Microsoft SQL Server. We show the creation of sample views, discuss how they are exploited during query optimization, and explain their potential effect on query plans. In addition, we show our implementation of maintenance policies using statistical quality control techniques based on query feedback.
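
    The sampling idea behind such sample views can be illustrated in a few lines. Below is a minimal Python sketch, not SQL Server's implementation and with all names hypothetical, that estimates the cardinality of a predicate histograms handle poorly (a correlated two-column condition) by evaluating it on a random sample and scaling up:

    ```python
    import random

    def estimate_cardinality(table_rows, predicate, sample_size=1000, seed=42):
        """Estimate how many rows satisfy `predicate` by evaluating it on a
        uniform random sample and scaling up -- the core idea behind sample
        views for cases where histograms struggle (complex predicates,
        correlated columns, user-defined functions)."""
        rng = random.Random(seed)
        sample = rng.sample(table_rows, min(sample_size, len(table_rows)))
        selectivity = sum(1 for row in sample if predicate(row)) / len(sample)
        return selectivity * len(table_rows)

    # A correlated predicate that a histogram on either column would misjudge.
    rows = ([{"make": "Honda", "model": "Civic"} for _ in range(900)]
            + [{"make": "Honda", "model": "Accord"} for _ in range(100)])
    est = estimate_cardinality(rows, lambda r: r["make"] == "Honda"
                                               and r["model"] == "Civic")
    print(f"estimated matching rows: {est:.0f}")   # ~900
    ```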

    10381 Summary and Abstracts Collection -- Robust Query Processing

    Dagstuhl seminar 10381 on robust query processing (held 19.09.10 - 24.09.10) brought together a diverse set of researchers and practitioners with a broad range of expertise for the purpose of fostering discussion and collaboration regarding causes, opportunities, and solutions for achieving robust query processing. The seminar strove to build a unified view across the loosely-coupled system components responsible for the various stages of database query processing. Participants were chosen for their experience with database query processing and, where possible, their prior work in academic research or in product development towards robustness in database query processing. In order to pave the way to motivate, measure, and protect future advances in robust query processing, seminar 10381 focused on developing tests for measuring the robustness of query processing. In these proceedings, we first review the seminar topics, goals, and results, then present abstracts or notes from some of the seminar break-out sessions. We also include, as an appendix, the robust query processing reading list that was collected and distributed to participants before the seminar began, as well as summaries of a few of those papers, contributed by some participants.

    Robust Query Optimization Methods With Respect to Estimation Errors: A Survey

    The quality of a query execution plan chosen by a Cost-Based Optimizer (CBO) depends greatly on the estimation accuracy of input parameter values. Many research results have been produced on improving estimation accuracy, but none works in every situation. "Robust query optimization" was therefore introduced, in an effort to minimize the risk of sub-optimality by accepting the fact that estimates can be inaccurate. In this survey, we aim to provide an overview of robust query optimization methods by classifying them into categories, explaining their essential ideas, listing their advantages and limitations, and comparing them against multiple criteria.
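
    To make the "accept inaccuracy, minimize risk" idea concrete, here is a hypothetical Python sketch of one family of methods such surveys cover: instead of costing each plan at a single cardinality point estimate, cost it over a spread of plausible cardinalities and pick the plan with the least expected cost. The plans, cost functions, and estimate spread are invented for illustration:

    ```python
    def robust_plan_choice(plans, cardinality_samples):
        """Pick the plan minimizing expected cost over a distribution of
        possible input cardinalities, rather than trusting a single point
        estimate. `plans` maps a plan name to a cost function cost(card)."""
        best_plan, best_expected = None, float("inf")
        for name, cost_fn in plans.items():
            expected = (sum(cost_fn(c) for c in cardinality_samples)
                        / len(cardinality_samples))
            if expected < best_expected:
                best_plan, best_expected = name, expected
        return best_plan, best_expected

    # An index nested loop is cheap at low cardinality but degrades quickly;
    # a hash join with scans has a flat cost. With a wide estimate interval,
    # the robust choice can differ from the point-estimate choice.
    plans = {
        "index_nested_loop": lambda c: 10 + 5 * c,
        "hash_join_scan":    lambda c: 500 + 0.1 * c,
    }
    uncertain_cards = [10, 100, 1000, 10000]   # hypothetical estimate spread
    print(robust_plan_choice(plans, uncertain_cards))
    ```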

    Prochlo: Strong Privacy for Analytics in the Crowd

    The large-scale monitoring of computer users' software activities has become commonplace, e.g., for application telemetry, error reporting, or demographic profiling. This paper describes a principled systems architecture called Encode, Shuffle, Analyze (ESA) for performing such monitoring with high utility while also protecting user privacy. The ESA design, and its Prochlo implementation, are informed by our practical experiences with an existing, large deployment of privacy-preserving software monitoring. (cont.; see the paper.)
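
    A toy Python sketch of the ESA dataflow as summarized above; it is not the Prochlo code, and the randomized-response encoding is a simplification chosen for brevity. The encoder strips identity and randomizes each report, the shuffler batches reports and destroys order and metadata, and the analyzer sees only the anonymous shuffled batch, correcting for the encoder's noise:

    ```python
    import random

    def encode(value, flip_prob=0.1, rng=random):
        """Encoder: drop user identity and flip the reported bit with
        probability flip_prob (a simplified randomized-response step that
        gives each user local deniability)."""
        reported = (not value) if rng.random() < flip_prob else bool(value)
        return {"payload": reported}            # no user id leaves this stage

    def shuffle_batch(records, rng=random):
        """Shuffler: collect a large batch and destroy arrival order and
        metadata, breaking linkability between users and their reports."""
        batch = list(records)
        rng.shuffle(batch)
        return batch

    def analyze(batch, flip_prob=0.1):
        """Analyzer: aggregate over the anonymous, shuffled batch and invert
        the encoder's randomization for an unbiased population estimate."""
        observed = sum(r["payload"] for r in batch) / len(batch)
        return (observed - flip_prob) / (1 - 2 * flip_prob)

    random.seed(0)
    reports = [encode(uid % 4 == 0) for uid in range(10_000)]  # true rate 0.25
    print(f"estimated rate: {analyze(shuffle_batch(reports)):.3f}")
    ```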

    Query Optimization in Dynamic Environments

    Most modern applications deal with very large amounts of data, and handling such huge amounts of data is in itself a challenge. This challenge is complicated even more by the fact that, in many cases, the data is constantly changing and evolving. For instance, relational databases that handle the data of day-to-day transactional applications often have tables with very high data change rates. It is not uncommon even to have temporary or volatile tables that are created from scratch and completely dropped over the course of one query workload. This dissertation focuses on optimizing structured queries over dynamic and constantly changing data sets. Our work addresses this issue and some of the challenges related to it. We address the problem of database statistics becoming stale and inaccurate due to constantly changing data. We introduce ways to automatically analyze existing statistics and to recommend and collect the statistics needed to optimize a single query or a query workload. We introduce a mechanism to automate the recommendation and collection of statistical views for a given query workload, and we compare two methods of using these statistical views in selectivity estimation. We evaluate our methods and techniques in experimental studies using prototypes built into commercial database systems.
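
    One of the triggers behind automatic statistics recommendation can be sketched simply: compare modification counters against the table size at the last collection and flag statistics whose underlying data has churned too much. The threshold and catalog layout in this Python sketch are illustrative, not those of any particular commercial system:

    ```python
    from dataclasses import dataclass

    @dataclass
    class TableStats:
        table: str
        rows_at_collection: int     # row count when statistics were last built
        mods_since_collection: int  # inserts + updates + deletes since then

    def stale_statistics(stats_list, change_ratio=0.2):
        """Recommend recollection for any table whose data has churned by
        more than `change_ratio` since its statistics were gathered -- the
        kind of trigger a self-tuning optimizer component could use before
        (re)optimizing a workload over volatile tables."""
        return [s.table for s in stats_list
                if s.rows_at_collection == 0
                or s.mods_since_collection / s.rows_at_collection > change_ratio]

    catalog = [
        TableStats("orders",      1_000_000, 50_000),  # 5% churn: fresh enough
        TableStats("staging_tmp", 10_000,    9_500),   # 95% churn: stale
        TableStats("new_table",   0,         40_000),  # created after last run
    ]
    print(stale_statistics(catalog))   # ['staging_tmp', 'new_table']
    ```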

    Gait recognition and understanding based on hierarchical temporal memory using 3D gait semantic folding

    Gait recognition and understanding systems have shown a wide-ranging application prospect. However, their use of unstructured data from images and video has limited their performance: they are easily affected by multiple views, occlusion, clothing, and object-carrying conditions. This paper addresses these problems using realistic 3-dimensional (3D) human structural data and a sequential pattern learning framework with a top-down attention-modulating mechanism based on Hierarchical Temporal Memory (HTM). First, an accurate 2-dimensional (2D) to 3D human body pose and shape semantic parameter estimation method is proposed, which exploits the advantages of an instance-level body parsing model and a virtual dressing method. Second, using gait semantic folding, the estimated body parameters are encoded into a sparse 2D matrix to construct the structural gait semantic image. To achieve time-based gait recognition, an HTM network is constructed to obtain sequence-level gait sparse distribution representations (SL-GSDRs). A top-down attention mechanism is introduced to deal with various conditions, including multiple views, by refining the SL-GSDRs according to prior knowledge. The proposed gait learning model not only helps gait recognition tasks overcome the difficulties of real application scenarios but also provides structured gait semantic images for visual cognition. Experimental analyses on the CMU MoBo, CASIA B, TUM-IITKGP, and KY4D datasets show a significant performance gain in terms of accuracy and robustness.
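
    The encoding step, folding estimated body parameters into a sparse binary 2D matrix, can be illustrated with a small Python sketch. The grid size, bucketing, and bit counts below are illustrative choices, not the paper's parameters:

    ```python
    import numpy as np

    def fold_parameters(params, grid=(32, 32), active_per_param=4, seed=7):
        """Encode normalized body parameters (each in [0, 1]) as a sparse
        binary matrix: each parameter deterministically activates a few
        cells chosen from its bucketed value, so similar poses and shapes
        produce overlapping bit patterns."""
        rows, cols = grid
        image = np.zeros(grid, dtype=np.uint8)
        for i, v in enumerate(params):
            bucket = int(np.clip(v, 0.0, 1.0) * 15)         # 16 value buckets
            cell_rng = np.random.default_rng(seed + i * 97 + bucket)
            flat = cell_rng.choice(rows * cols, size=active_per_param,
                                   replace=False)
            image[np.unravel_index(flat, grid)] = 1
        return image

    pose_a = fold_parameters([0.10, 0.52, 0.80])
    pose_b = fold_parameters([0.11, 0.52, 0.80])    # nearly identical pose
    print(f"active bits: {pose_a.sum()} vs {pose_b.sum()}, "
          f"overlap: {(pose_a & pose_b).sum()}")
    ```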

    Random finite sets in multi-target tracking - efficient sequential MCMC implementation

    Over the last few decades, multi-target tracking (MTT) has proved to be a challenging and attractive research topic. MTT applications span a wide variety of disciplines, including robotics, radar/sonar surveillance, computer vision, and biomedical research. The primary focus of this dissertation is to develop an effective and efficient multi-target tracking algorithm that deals with an unknown and time-varying number of targets. The emerging and promising Random Finite Set (RFS) framework provides a rigorous foundation for optimal Bayes multi-target tracking. In contrast to traditional approaches, the collection of individual targets is treated as a set-valued state. The intent of this dissertation is twofold: first, to assert that the RFS framework is not only a natural, elegant, and rigorous foundation but also leads to practical, efficient, and reliable algorithms for Bayesian multi-target tracking; and second, to provide several novel RFS-based tracking algorithms suited to the specific Track-Before-Detect (TBD) surveillance application. One main contribution of this dissertation is a rigorous derivation and practical implementation of a novel algorithm well suited to multi-target tracking problems with a given cardinality. The proposed Interacting Population-based MCMC-PF algorithm makes use of several Metropolis-Hastings samplers running in parallel, which interact through genetic variation. Another key contribution concerns the design and implementation of two novel algorithms that handle a varying number of targets. The first approach exploits Reversible Jumps. The second approach is built upon the concepts of labeled RFSs and multiple cardinality hypotheses. The performance of the proposed algorithms is demonstrated in practical scenarios and shown to significantly outperform a conventional multi-target PF in terms of track accuracy and consistency. The final contribution seeks to exploit external information to increase the performance of the surveillance system. In multi-target scenarios, kinematic constraints from the interaction of targets with their environment or with other targets can restrict target motion. Such motion-constraint information is integrated by using a fixed-lag smoothing procedure, named the Knowledge-Based Fixed-Lag Smoother (KB-Smoother). The proposed IP-MCMC-PF/KB-Smoother combination yields enhanced tracking performance.
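
    The population-based MCMC idea at the heart of the IP-MCMC-PF can be sketched on a toy 1-D problem in Python. Here several Metropolis-Hastings chains run in parallel and occasionally exchange states; this simple swap stands in for the genetic-variation interaction of the actual algorithm, and the bimodal target density is invented for illustration:

    ```python
    import math
    import random

    def log_target(x):
        """Toy multimodal target: a mixture of two unit Gaussians."""
        return math.log(math.exp(-0.5 * (x - 2) ** 2)
                        + math.exp(-0.5 * (x + 2) ** 2))

    def interacting_mh(n_chains=4, n_steps=5000, step=0.5,
                       swap_prob=0.05, seed=0):
        """Run several Metropolis-Hastings samplers in parallel; with small
        probability, two chains exchange states (a minimal stand-in for the
        genetic interaction in population-based MCMC), which helps chains
        escape local modes."""
        rng = random.Random(seed)
        states = [rng.uniform(-5, 5) for _ in range(n_chains)]
        samples = []
        for _ in range(n_steps):
            for i in range(n_chains):
                proposal = states[i] + rng.gauss(0, step)
                log_accept = log_target(proposal) - log_target(states[i])
                if log_accept >= 0 or rng.random() < math.exp(log_accept):
                    states[i] = proposal
            if rng.random() < swap_prob:          # interaction step
                i, j = rng.sample(range(n_chains), 2)
                states[i], states[j] = states[j], states[i]
            samples.extend(states)
        return samples

    draws = interacting_mh()
    print(f"sample mean (~0 for this symmetric bimodal target): "
          f"{sum(draws) / len(draws):.2f}")
    ```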