5 research outputs found

    Testing systems of identical components

    We consider the problem of sequentially testing the components of a multi-component reliability system in order to determine the state of the system via costly tests. In particular, we consider systems with identical components. The notion of lexicographically large binary decision trees is introduced, and a heuristic algorithm based on that notion is proposed. The performance of the heuristic is demonstrated by computational results for various classes of functions. In particular, in all 200 random cases where the underlying function is a threshold function, the proposed heuristic produces optimal solutions.
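    As intuition for the threshold case, here is a minimal Python sketch of the stopping rule for a k-out-of-n system (the system works iff at least k components work); the function name and the fixed test order are illustrative assumptions, not the paper's heuristic, which chooses the order via lexicographically large decision trees:

    # Hedged sketch, not the paper's algorithm: test components in a fixed
    # order and stop as soon as the system state is determined.
    def test_threshold_system(component_states, k):
        """component_states: hidden boolean state of each component.
        k: the system works iff at least k components work.
        Returns (system_state, number_of_tests_performed)."""
        n = len(component_states)
        working = failed = 0
        for tests, state in enumerate(component_states, start=1):
            working += state
            failed += not state
            if working >= k:        # enough working components: system is up
                return True, tests
            if failed > n - k:      # too many failures: system is down
                return False, tests
        return working >= k, n      # unreachable for 1 <= k <= n

    # Example: a 2-out-of-3 system whose state is known after two tests.
    print(test_threshold_system([True, True, False], k=2))  # (True, 2)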

    Optimizing Query Predicates with Disjunctions for Column Stores

    Since its inception, database research has given limited attention to optimizing predicates with disjunctions. What little past work exists has focused on optimizations for traditional row-oriented databases. A key difference in predicate evaluation between row stores and column stores is that while row stores apply predicates to one record at a time, column stores apply predicates to sets of records. Not only must the execution engine decide the order in which to apply the predicates, but it must also decide how many times each predicate should be applied and to which sets of records it should be applied. In our work, we tackle exactly this problem. We formulate, analyze, and solve the predicate evaluation problem for column stores. Our results include proofs about various properties of the problem, and in turn, these properties have allowed us to derive the first polynomial-time (i.e., O(n log n)) algorithm, ShallowFish, which evaluates predicates optimally for all predicate expressions of depth 2 or less. We identify the exact property that makes the problem harder for predicate expressions of depth 3 or greater and propose an approximate algorithm, DeepFish, which outperforms ShallowFish in these situations. Finally, we show that both ShallowFish and DeepFish outperform the corresponding state of the art by two orders of magnitude.
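    To make the set-at-a-time framing concrete, here is a minimal Python sketch (not the ShallowFish algorithm itself; the predicate order is assumed to be chosen elsewhere) showing why disjunctions create the "which sets of records" decision: each OR branch only needs to scan the records no earlier branch has matched:

    # Hedged sketch: set-at-a-time evaluation of ANDs and ORs over row ids.
    # Choosing the predicate order (and the record sets) optimally is the
    # problem ShallowFish solves; here the given list order is used as-is.
    def eval_and(column, predicates, rows):
        """Conjunction: each predicate scans only the rows still alive."""
        alive = set(rows)
        for pred in predicates:
            alive = {r for r in alive if pred(column[r])}
        return alive

    def eval_or(column, predicates, rows):
        """Disjunction: each predicate scans only the rows not yet matched."""
        matched, remaining = set(), set(rows)
        for pred in predicates:
            hits = {r for r in remaining if pred(column[r])}
            matched |= hits
            remaining -= hits       # never re-test an already-matched row
        return matched

    col = [5, 12, 7, 30, 2]
    print(sorted(eval_or(col, [lambda v: v > 10, lambda v: v < 4],
                         range(len(col)))))  # [1, 3, 4]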

    Predictable performance and high query concurrency for data analytics

    Conventional data warehouses employ the query-at-a-time model, which maps each query to a distinct physical plan. When several queries execute concurrently, this model introduces contention and thrashing, because the physical plans—unaware of each other—compete for access to the underlying I/O and computation resources. As a result, while modern systems can efficiently optimize and evaluate a single complex data analysis query, their performance suffers significantly and can be highly erratic when multiple complex queries run at the same time. We present in this paper Cjoin, a new design that substantially improves throughput in large-scale data analytics systems processing many concurrent join queries. In contrast to the conventional query-at-a-time model, our approach employs a single physical plan that shares I/O, computation, and tuple storage across all in-flight join queries. We use an “always on” pipeline of non-blocking operators, managed by a controller that continuously examines the current query mix and optimizes the pipeline on the fly. Our design enables data analytics engines to scale gracefully to large data sets, provide predictable execution times, and reduce contention. We implemented Cjoin as an extension to the PostgreSQL DBMS. This prototype outperforms conventional commercial systems by an order of magnitude for tens to hundreds of concurrent queries.
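    The core sharing idea can be sketched in Python as a single scan that tags each tuple with the set of in-flight queries it satisfies; this is a simplification under assumed names (Cjoin's actual design adds a pipeline of non-blocking operators and an on-the-fly optimizing controller):

    # Hedged sketch of tuple sharing across concurrent queries: one pass over
    # the fact table serves every registered query at once. The predicates
    # and the bitmask encoding are illustrative, not Cjoin's implementation.
    def shared_scan(fact_rows, query_predicates):
        """Yield (row, mask) pairs; bit i of mask is set iff query i
        accepts the row. Rows that no query wants are dropped early."""
        for row in fact_rows:
            mask = 0
            for i, pred in enumerate(query_predicates):
                if pred(row):
                    mask |= 1 << i
            if mask:
                yield row, mask

    rows = [("east", 10), ("west", 99), ("east", 99)]
    queries = [lambda r: r[0] == "east", lambda r: r[1] > 50]
    for row, mask in shared_scan(rows, queries):
        print(row, bin(mask))
    # ('east', 10) 0b1 / ('west', 99) 0b10 / ('east', 99) 0b11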

    Mapper: an efficient data transformation operator

    Doctoral thesis in Informatics (Informatics Engineering), presented to the Universidade de Lisboa through the Faculdade de Ciências, 2008.
    Data transformations are fundamental operations in legacy data migration, data integration, data cleaning, and data warehousing. These operations are often implemented as relational queries that aim to leverage the optimization capabilities of most DBMSs. However, relational query languages like SQL are not expressive enough to specify one-to-many data transformations, an important class of data transformations that produce several output tuples for a single input tuple. These transformations are required for solving several types of data heterogeneities, such as those that occur when the source data represents aggregations of the target data. This thesis proposes a new relational operator, named data mapper, as an extension to the relational algebra to address one-to-many data transformations, and focuses on its optimization. It also provides algebraic rewriting rules and execution algorithms for logical and physical optimization, respectively. As a result, queries may be expressed as a combination of standard relational operators and mappers. The proposed optimizations have been experimentally validated, and the key factors that influence the obtained performance gains have been identified. Keywords: Relational Algebra, Data Transformation, Data Integration, Data Cleaning, Data Warehousing.
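    A minimal Python sketch of the mapper's one-to-many semantics follows; the unfolding function, which expands an aggregated quantity into unit rows, is an illustrative example rather than one taken from the thesis:

    # Hedged sketch: a mapper emits zero, one, or many output tuples per
    # input tuple, which is what standard relational operators cannot express.
    def mapper(relation, fn):
        for t in relation:
            yield from fn(t)        # fn returns an iterable of output tuples

    # Illustrative mapper function: unfold an aggregated quantity into
    # unit rows, one output tuple per counted item.
    def unfold_quantity(t):
        item, qty = t
        return [(item, 1)] * qty

    print(list(mapper([("bolt", 3), ("nut", 1)], unfold_quantity)))
    # [('bolt', 1), ('bolt', 1), ('bolt', 1), ('nut', 1)]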