
    Cost distributions in symmetric Euclidean traveling salesman problems: a supplement to TSPLIB

    We present analytically and experimentally determined cost distributions for all Euclidean two-dimensional symmetric instances of the Traveling Salesman Problem in the TSPLIB library. The results show characteristic cost distributions in all cases and a high stability against degeneration.
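
    As an empirical illustration of such a cost distribution, the following Python sketch samples uniformly random tours of a toy Euclidean instance and summarizes their costs; the instance and sample size are illustrative choices, not the paper's setup.

```python
# Hypothetical sketch: estimate the tour-cost distribution of a Euclidean TSP
# instance by sampling random tours. Instance and sample size are illustrative.
import math
import random

def tour_cost(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

random.seed(0)
points = [(random.random(), random.random()) for _ in range(50)]  # toy instance

costs = []
order = list(range(len(points)))
for _ in range(10_000):                  # sample uniformly random tours
    random.shuffle(order)
    costs.append(tour_cost(points, order))

mean = sum(costs) / len(costs)
var = sum((c - mean) ** 2 for c in costs) / len(costs)
print(f"mean={mean:.2f}  std={math.sqrt(var):.2f}")  # summarize the distribution
```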

    Handling Non-deterministic Data Availability in Parallel Query Execution

    The situation of non-deterministic data availability, where it is not known a priori which of two or more processes will respond first, cannot be handled with standard techniques. The consequence is sub-optimal processing because of inefficient resource allocation and unnecessary delays. In this paper we develop an effective solution to the problem by extending the demand-driven evaluation paradigm to support operators with more than one output stream. We show how inter-process communication and non-deterministic data availability in parallel query processing reduce to cases that can be executed efficiently under the new evaluation paradigm.
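
    To illustrate the problem setting (not the paper's operator model itself), the following Python sketch consumes tuples from whichever of two producers happens to respond first, rather than demanding from a fixed one:

```python
# Hypothetical sketch: consume tuples from whichever of two producers responds
# first, via a shared queue. This illustrates non-deterministic availability
# only; the paper's multi-output demand-driven operators are more general.
import queue
import random
import threading
import time

def producer(name, out):
    for i in range(3):
        time.sleep(random.random())          # non-deterministic response time
        out.put((name, i))
    out.put((name, None))                    # end-of-stream marker

shared = queue.Queue()
for name in ("left", "right"):
    threading.Thread(target=producer, args=(name, shared), daemon=True).start()

done = 0
while done < 2:
    name, item = shared.get()                # whichever producer is ready first
    if item is None:
        done += 1
    else:
        print(f"consumed {item} from {name}")
```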

    Counting, Enumerating and Sampling of Execution Plans in a Cost-Based Query Optimizer

    Testing an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected, as the query optimizer's choice of an execution plan depends not only on the query but also, and strongly, on a large number of parameters describing the database and the hardware environment. Modifying these parameters in order to steer the optimizer toward other plans is difficult, since it means anticipating the often complex search strategies implemented in the optimizer. In this paper we devise algorithms for counting, exhaustive generation, and uniform sampling of plans from the complete search space. Our techniques allow extensive validation of both the generation of alternatives and the execution algorithms with plans other than the optimized one: if two candidate plans fail to produce the same results, then either the optimizer considered an invalid plan, or the execution code is faulty. When the space of alternatives becomes too large for exhaustive testing, which can occur even with a handful of joins, uniform random sampling provides a mechanism for unbiased testing. The technique is implemented in Microsoft's SQL Server, where it is an integral part of the validation and testing process.
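
    As a simplified stand-in for such algorithms, the following Python sketch counts binary join-tree shapes for n relations via the Catalan recurrence and draws one uniformly at random; the real plan space, which also covers leaf orderings and physical operator choices, is much larger:

```python
# Hypothetical sketch: count and uniformly sample binary join-tree shapes for
# n base relations, a simplified stand-in for the paper's plan-space algorithms.
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def shapes(n):
    """Number of binary tree shapes with n leaves (the Catalan numbers)."""
    if n == 1:
        return 1
    return sum(shapes(k) * shapes(n - k) for k in range(1, n))

def sample(n):
    """Draw a tree shape uniformly by splitting with Catalan-product weights."""
    if n == 1:
        return "R"
    r = random.randrange(shapes(n))
    for k in range(1, n):
        w = shapes(k) * shapes(n - k)
        if r < w:
            return (sample(k), sample(n - k))
        r -= w

print(shapes(8))   # 429 shapes for 8 relations
print(sample(4))   # one uniformly chosen shape, e.g. (('R', 'R'), ('R', 'R'))
```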

    Memory aware query scheduling in a database cluster

    Query throughput is one of the primary optimization goals in interactive web-based information systems, which must achieve the performance necessary to serve large user communities. Queries in this application domain differ significantly from those in traditional database applications: they are of lower complexity and almost exclusively read-only. The architecture we propose here is specifically tailored to take advantage of these query characteristics. It is based on a large parallel shared-nothing database cluster where each node runs a separate server with a fully replicated copy of the database. A query is assigned to and entirely executed on a single node, avoiding network contention and synchronization effects. However, the actual key to enhanced throughput is resource-efficient scheduling of the arriving queries. We develop a simple and robust scheduling scheme that takes the currently memory-resident data at each server into account and trades off memory reuse against execution time, reordering queries as necessary. Our experimental evaluation demonstrates the effectiveness of the scheme when scaling the system to hundreds of nodes, showing super-linear speedup.
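
    A minimal sketch of the trade-off described above, assuming a toy node model and an invented score combining cache overlap with a load penalty (the paper's actual scheduling scheme is not reproduced here):

```python
# Hypothetical sketch: assign each arriving query to the replica node where
# cached-data overlap is best relative to current load. Node structure, cost
# weights, and the overlap measure are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass(eq=False)
class Node:
    cached: set = field(default_factory=set)  # tables currently memory-resident
    load: float = 0.0                         # pending work (est. seconds)

def schedule(query_tables, cost, nodes, alpha=0.5):
    """Pick the node maximizing cache reuse minus a load penalty."""
    def score(n):
        reuse = len(query_tables & n.cached) / max(len(query_tables), 1)
        return reuse - alpha * n.load
    best = max(nodes, key=score)
    best.cached |= query_tables                # query pulls its data into memory
    best.load += cost
    return best

nodes = [Node() for _ in range(4)]
for tables, cost in [({"a", "b"}, 1.0), ({"a"}, 0.5), ({"c"}, 2.0)]:
    n = schedule(tables, cost, nodes)
    print(nodes.index(n), n.cached, n.load)
```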

    A Look Back on the XML Benchmark Project

    The XML Benchmark Project was started to provide a framework for evaluating the interplay of XML technologies and database management systems. The benchmark places emphasis on engineering aspects as well as on the performance of the query processor. In this chapter we present a quick overview of the benchmark and point to some of the experience we gathered during the design of the benchmark and while running it on a variety of platforms. Since the benchmark was designed early in the evolution of XML, our experiences also reflect how the perception of XML changed during the three years that have passed since we started working on the subject. The chapter comprises an overview of the benchmark as well as discussions of some lessons learned.

    Interactive Visualization of Multidimensional Feature Spaces

    Image similarity models characterize images as points in high-dimensional feature spaces. Each point is represented by a combination of distinct features, such as brightness, color histograms, or texture characteristics of the image. For the design and tuning of features, and thus the effectiveness of the image similarity model, it is important to understand the interrelations of individual features and their implications for the structure of the feature space. In this paper, we discuss an interactive visualization tool for the exploration of multidimensional feature spaces. Our tool uses a graph as an intermediate representation of the points in the feature space. A mass-spring algorithm is used to lay out the graph in a 2D space in which similar images attract each other and dissimilar images repel each other. The emphasis of the visualization tool is on interaction: users may influence the layout by interactively scaling dimensions of the feature space. In this way, the user can explore how a feature behaves in relation to other features.
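
    A minimal sketch of the layout idea, assuming toy feature vectors and invented constants: a few mass-spring iterations in 2D in which user-controlled weights rescale feature dimensions before distances are computed.

```python
# Hypothetical sketch: mass-spring layout in 2D where springs have rest length
# equal to the (weighted) feature-space distance. Data and constants are toy.
import math
import random

def feature_dist(a, b, weights):
    """Weighted Euclidean distance in the (rescaled) feature space."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def spring_layout(features, weights, steps=200, lr=0.05):
    pos = [[random.random(), random.random()] for _ in features]
    for _ in range(steps):
        for i in range(len(pos)):
            fx = fy = 0.0
            for j in range(len(pos)):
                if i == j:
                    continue
                dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
                d = math.hypot(dx, dy) or 1e-9
                # spring force: pull/push the 2D distance toward the
                # feature-space distance (similar images end up close)
                f = d - feature_dist(features[i], features[j], weights)
                fx += f * dx / d
                fy += f * dy / d
            pos[i][0] += lr * fx
            pos[i][1] += lr * fy
    return pos

features = [[random.random() for _ in range(8)] for _ in range(20)]
print(spring_layout(features, weights=[1.0] * 8)[:3])  # first three positions
```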

    Recent progress on the chiral unitary approach to meson meson and meson baryon interactions

    We report on recent progress on the chiral unitary approach, analogous to the effective range expansion in quantum mechanics, which is shown to have a much larger convergence radius than ordinary chiral perturbation theory, allowing one to reproduce data for meson-meson interactions up to 1.2 GeV. Applications to physical processes so far unsuited for a standard chiral perturbative approach are presented. Results for the extension of these ideas to the meson-baryon sector are discussed, together with applications to kaons in a nuclear medium and K^- atoms.
    Comment: Contribution to the KEK Tanashi Symposium on Physics of Hadrons and Nuclei, Tokyo, December 1998; 10 pages, 3 postscript figures. To be published as a special issue of Nuclear Physics.
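
    For orientation, one standard on-shell factorized form of the chiral unitary amplitude (a common presentation of the approach, not an equation quoted from this paper) resums the lowest-order chiral potential V with the two-meson loop function G:

```latex
% Standard on-shell factorized (Bethe-Salpeter) form of the chiral unitary
% amplitude; V is the lowest-order chiral potential, G the two-meson loop.
T = \left[\, 1 - V G \,\right]^{-1} V,
\qquad
G_i(s) = i \int \frac{d^4 q}{(2\pi)^4}\,
         \frac{1}{q^2 - m_{1i}^2 + i\epsilon}\;
         \frac{1}{(P - q)^2 - m_{2i}^2 + i\epsilon}.
```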

    The XML benchmark project

    With standardization efforts for an XML query language drawing to a close, researchers and users increasingly focus their attention on the database technology that has to deliver on the new challenges that the sheer amount of XML documents produced by applications poses to data management: validation, performance evaluation, and optimization of XML query processors are the upcoming issues. Following a long tradition in database research, the XML Store Benchmark Project provides a framework to assess an XML database's ability to cope with a broad spectrum of different queries, typically posed in real-world application scenarios. The benchmark is intended to help both implementors and users compare XML databases independently of their own specific application scenarios. To this end, the benchmark offers a set of queries, each of which is intended to challenge a particular primitive of the query processor or storage engine. The overall workload we propose consists of a scalable document database and a concise yet comprehensive set of queries covering the major aspects of query processing. The queries' challenges range from stressing the textual character of the document to data-analysis queries, and also include typical ad hoc queries. We complement our research with results obtained from running the benchmark on our XML database platform. They are intended to give a first baseline illustrating the state of the art.
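
    To make the flavor of such a workload concrete, here is a small Python sketch of one query-processor primitive a benchmark query might stress (an exact-match lookup over a scalable synthetic document); the element names and document shape are illustrative assumptions, not the benchmark's actual DTD or query set.

```python
# Hypothetical sketch: generate a scalable auction-style document and run a
# lookup-style query over it. Names and structure are illustrative only.
import xml.etree.ElementTree as ET

root = ET.Element("site")
for i in range(1000):                       # scalable synthetic document
    item = ET.SubElement(root, "item", id=f"item{i}")
    ET.SubElement(item, "name").text = f"thing-{i % 37}"

# an exact-match query: find items by name (one query-processor primitive)
hits = [e.get("id") for e in root.iter("item")
        if e.findtext("name") == "thing-5"]
print(len(hits), hits[:3])
```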

    Suitability Analysis of Seaweed (Eucheuma cottonii) Cultivation Sites Using a Geographic Information System in the Waters of Baguala Bay, Ambon

    Baguala Bay is a high-potential area with rich fishery resources. Marine aquaculture sites should be determined by considering ecological, technical, hygienic, and socio-economic conditions together with the applicable laws and regulations. This research analyzes chemical and biophysical indicators in the waters of Baguala Bay and determines suitable seaweed (Eucheuma cottonii) cultivation sites as a basis for designating seaweed cultivation zones. Field work was conducted from February to March 2020, followed by laboratory analysis and data tabulation. Inverse Distance Weighted (IDW) interpolation was used to analyze the data. The results show that of the suitable cultivation area in Baguala Bay, 1048.296 ha (78.7%) is rated highly suitable and 282.483 ha (21.2%) is rated suitable. Among all the suitability criteria for cultivation sites in Baguala Bay, nitrate is the criterion rated highly suitable throughout the waters.
    Keywords: Baguala Bay, cultivation area, seaweed, suitability, SI
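
    IDW itself is a simple weighted average; the following Python sketch shows the textbook form of the method used in the study (station coordinates, values, and the power parameter are illustrative, not the study's data).

```python
# A minimal sketch of Inverse Distance Weighted (IDW) interpolation; sample
# points, power parameter, and the query location are illustrative.
import math

def idw(x, y, samples, power=2.0):
    """Interpolate a value at (x, y) from (xi, yi, vi) samples."""
    num = den = 0.0
    for xi, yi, vi in samples:
        d = math.hypot(x - xi, y - yi)
        if d == 0:
            return vi                      # exactly on a sample point
        w = 1.0 / d ** power               # closer stations weigh more
        num += w * vi
        den += w
    return num / den

# e.g. hypothetical nitrate measurements (mg/L) at three stations
samples = [(0.0, 0.0, 0.9), (1.0, 0.0, 1.1), (0.5, 1.0, 1.4)]
print(round(idw(0.5, 0.4, samples), 3))
```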