Query Optimization Techniques for OLAP Applications: An ORACLE versus MS-SQL Server Comparative Study
Query optimization in OLAP applications is a relatively unexplored problem: a great deal of research has addressed query performance, but most of it has focused on OLTP applications rather than OLAP. OLAP queries interrogate the database extensively to produce their results, so inefficient processing of those queries degrades performance and can render the results useless. Techniques for optimizing queries include memory caching, indexing, hardware solutions, and physical database storage. Oracle and MS SQL Server both offer OLAP optimization techniques; this paper reviews both packages' approaches and then proposes a query optimization strategy for OLAP applications. The proposed strategy rests on four ingredients: (1) intermediate queries; (2) indexes, both B-trees and bitmaps; (3) a memory cache (for the syntax of the query) and a secondary-storage cache (for the result data set); and (4) the physical database storage (i.e., a binary storage model) together with its hardware solution.
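Of the four ingredients, bitmap indexes are the most OLAP-specific. A minimal sketch of the idea, assuming nothing about the paper's actual implementation: each distinct column value keeps a bit vector over row positions, so an equality predicate becomes a mask lookup and a conjunction of predicates becomes a bitwise AND.

```python
# Minimal bitmap-index sketch (illustrative only): one bit vector per
# distinct value, stored as a Python int over row positions.

def build_bitmap_index(column):
    """Map each distinct value to an int used as a bit vector over rows."""
    index = {}
    for row, value in enumerate(column):
        index[value] = index.get(value, 0) | (1 << row)
    return index

region = ["east", "west", "east", "east", "west"]
year = [2020, 2020, 2021, 2020, 2021]

region_idx = build_bitmap_index(region)
year_idx = build_bitmap_index(year)

# Rows where region = 'east' AND year = 2020: AND the two bitmaps.
hits = region_idx["east"] & year_idx[2020]
matching_rows = [r for r in range(len(region)) if hits >> r & 1]
# matching_rows == [0, 3]
```

This is why bitmaps suit OLAP's low-cardinality dimension columns: the AND over whole vectors replaces a row-by-row evaluation of the compound predicate.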
Realizing the Technical Advantages of Star Transformation
Data warehousing and business intelligence go hand in hand; each gives the other purpose for development, maintenance, and improvement. Both have evolved over several decades, building on their initial development. Management initiatives further drive the need for, and complexity of, business intelligence, while expanding the end-user community so that business change, results, and strategy are affected at the business-unit level. The literature, including a recent business intelligence user survey, shows that query performance is the most significant issue encountered. Oracle's data warehouse 10g Release 2 is examined, with improvements to query optimization achieved through the best practice of Star Transformation. Star Transformation is a star-schema query rewrite with a join-back through a hash join, which provides extensive query performance improvement. Most data warehouses are normalized or in third normal form (3NF); star schemas in a denormalized warehouse are not the norm. Changes in the database environment must be implemented, along with agreement from business leadership and alignment of business objectives with a Star Transformation project. Often, so much change, shifting priorities, and a lack of understanding of query optimization benefits can stifle a project. Critical to gaining support and financial backing are a formal plan and documented return on investment. Query optimization is highly complex; both the technical and business entities should prioritize goals and weigh the benefits of improved query response time, realizing the technical advantages of Star Transformation.
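The rewrite the abstract describes can be sketched in miniature. This is an illustrative Python model of the star-transformation idea, not Oracle's internal mechanism: filter each small dimension table first, collect the surviving keys, then make a single pass over the large fact table probing those key sets (the hash-join role).

```python
# Star-transformation sketch (illustrative): dimension predicates are
# evaluated first, then the fact table is scanned once, probing the
# surviving dimension keys via hash (set) lookups.

fact = [  # (date_key, product_key, amount)
    (1, 10, 100.0), (1, 11, 50.0), (2, 10, 75.0), (3, 12, 20.0),
]
dim_date = {1: "2024-01", 2: "2024-02", 3: "2024-03"}
dim_product = {10: "widget", 11: "gadget", 12: "gizmo"}

# Step 1: apply each dimension's predicate independently
# (month = '2024-01', product = 'widget').
date_keys = {k for k, month in dim_date.items() if month == "2024-01"}
product_keys = {k for k, name in dim_product.items() if name == "widget"}

# Step 2: one scan of the fact table, probing the key sets.
total = sum(a for d, p, a in fact if d in date_keys and p in product_keys)
# total == 100.0
```

The performance win comes from never joining the full fact table against unfiltered dimensions: the expensive scan happens once, against pre-reduced key sets.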
Optimizing Frequency Queries for Data Mining Applications
Data mining algorithms use various trie- and bitmap-based representations to optimize support (i.e., frequency) counting performance. In this paper, we compare the memory requirements and support counting performance of the FP-tree and the Compressed Patricia Trie against several novel variants of vertical bit vectors. First, borrowing ideas from the VLDB domain, we compress vertical bit vectors using WAH encoding. Second, we evaluate the Gray-code rank-based transaction reordering scheme, and show that in practice, simple lexicographic ordering, obtained by applying LSB radix sort, outperforms this scheme. Led by these results, we propose HDO, a novel Hamming-distance-based greedy transaction reordering scheme, and aHDO, a linear-time approximation to HDO. We present results of experiments performed on 15 common datasets with varying degrees of sparseness, and show that HDO-reordered, WAH-encoded bit vectors can take as little as 5% of the uncompressed space, while aHDO achieves similar compression on sparse datasets. Finally, with results from over a billion database- and data-mining-style frequency query executions, we show that bitmap-based approaches result in up to hundreds of times faster support counting, and HDO-WAH-encoded bitmaps offer the best space-time tradeoff.
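The core operation being benchmarked, support counting over vertical bit vectors, can be sketched briefly. This is a simplified illustration of the representation, without the WAH compression layer or the reordering schemes the paper studies: each item keeps a bit vector over transactions, and the support of an itemset is the popcount of the AND of its members' vectors.

```python
# Vertical-bit-vector support counting sketch (illustrative, uncompressed):
# one bit per transaction per item; itemset support = popcount of the AND.

transactions = [
    {"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}, {"c"},
]

vectors = {}
for tid, items in enumerate(transactions):
    for item in items:
        vectors[item] = vectors.get(item, 0) | (1 << tid)

def support(itemset):
    """Count transactions containing every item in the itemset."""
    mask = (1 << len(transactions)) - 1  # start from the all-ones vector
    for item in itemset:
        mask &= vectors.get(item, 0)
    return bin(mask).count("1")

# support({"a", "c"}) == 3; support({"b"}) == 3
```

Reordering schemes such as HDO matter because placing similar transactions at adjacent bit positions creates long runs of identical bits, which is exactly what run-length schemes like WAH compress well.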
ROLE OF DATA MINING IN E-GOVERNMENT FRAMEWORK
In e-government, mining techniques are considered a procedure for extracting data from the related web application to be converted into useful knowledge. In addition, there are different methods of mining that can be applied to different government data. The significant ideas behind this paper are to produce a comprehensive study of the previous research work on improving the speed of queries to access the database and obtaining specific predictions. The provided study compares data mining methods, database management, and types of data. Moreover, a proposed model is introduced to put these different methods together for improving online applications. These applications provide the ability to retrieve information, match keywords, index the database, and perform predictions over a vast amount of data.
MIL primitives for querying a fragmented world
In query-intensive database application areas, like decision support and data mining, systems that use vertical fragmentation have a significant performance advantage. In order to support relational or object oriented applications on top of such a fragmented data model, a flexible yet powerful intermediate language is needed. This problem has been successfully tackled in Monet, a modern extensible database kernel developed by our group. We focus on the design choices made in the Monet Interpreter Language (MIL), its algebraic query language, and outline how its concept of tactical optimization enhances and simplifies the optimization of complex queries. Finally, we summarize the experience gained in Monet by creating a highly efficient implementation of MIL
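The column-at-a-time execution style that vertical fragmentation enables can be sketched as follows. This is an illustrative Python model in the spirit of a vertically fragmented engine, not MIL itself: each attribute lives in its own fragment (array), and a query is a pipeline of whole-column operators passing row-id lists between them.

```python
# Column-at-a-time sketch (illustrative): one array ("fragment") per
# attribute; operators consume and produce whole columns or row-id lists.

price = [5, 12, 7, 30, 12]        # price fragment
name = ["a", "b", "c", "d", "e"]  # name fragment, positionally aligned

def select(column, predicate):
    """Whole-column scan: return the row ids satisfying the predicate."""
    return [i for i, v in enumerate(column) if predicate(v)]

def fetch(column, row_ids):
    """Positional lookup: materialize another attribute for those rows."""
    return [column[i] for i in row_ids]

rows = select(price, lambda v: v > 10)  # rows == [1, 3, 4]
result = fetch(name, rows)              # result == ["b", "d", "e"]
```

Because each operator touches exactly one attribute, queries that read a few columns of a wide table avoid the I/O and cache pollution of row-wise storage, which is the performance advantage the abstract refers to.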