1,717 research outputs found

    Mobile Transaction Supports for DBMS

    In recent years, data management in mobile environments has generated great interest. Several proposals concerning mobile transactions have been made. However, it is very difficult to get an overview of all these approaches. In this paper we analyze and compare several contributions on mobile transactions and introduce our ongoing research: the design and implementation of a Mobile Transaction Service. The focus of our study is on execution models, the manner in which ACID properties are provided, and the way geographical movements of hosts during transaction execution are supported.
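    The survey above is about execution models and ACID guarantees for transactions issued from mobile hosts. Purely as an illustration of the kind of execution model such proposals revolve around, the toy Python sketch below (not taken from the paper; all names are hypothetical) buffers a client's writes while the host is disconnected and applies them atomically once connectivity returns, relying on SQLite for the ACID part.

        # Toy sketch (not from the paper): a disconnection-tolerant "mobile transaction"
        # that buffers writes while the host is offline and commits them atomically
        # on reconnection, using SQLite's transactional guarantees for the ACID part.
        import sqlite3

        class MobileTransaction:
            def __init__(self):
                self.pending = []            # operations buffered while disconnected

            def write(self, sql, params=()):
                self.pending.append((sql, params))   # no server contact yet

            def commit_on_reconnect(self, db_path):
                con = sqlite3.connect(db_path)
                try:
                    with con:                # one atomic, durable transaction
                        for sql, params in self.pending:
                            con.execute(sql, params)
                finally:
                    con.close()
                self.pending.clear()

        # usage: queue updates while offline, flush them when connectivity returns
        tx = MobileTransaction()
        tx.write("CREATE TABLE IF NOT EXISTS orders(id INTEGER PRIMARY KEY, qty INTEGER)")
        tx.write("INSERT INTO orders(qty) VALUES (?)", (3,))
        tx.commit_on_reconnect("demo.db")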

    XWeB: the XML Warehouse Benchmark

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse, based on a unified reference model for XML warehouses and featuring XML-specific structures, and of its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.
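    XWeB's workload consists of XQuery decision-support queries over an XML test warehouse. As a rough illustration of what such an OLAP-style query computes, the hedged Python sketch below aggregates a toy XML fact document by region; the element names are invented and do not reflect XWeB's actual reference model.

        # Illustrative only: a toy OLAP-style roll-up over an XML fact document.
        # The element names (Sale, region, amount) are hypothetical, not XWeB's schema.
        import xml.etree.ElementTree as ET
        from collections import defaultdict

        doc = ET.fromstring("""
        <Sales>
          <Sale><region>EU</region><amount>120.0</amount></Sale>
          <Sale><region>EU</region><amount>80.0</amount></Sale>
          <Sale><region>US</region><amount>200.0</amount></Sale>
        </Sales>
        """)

        # roll up total sales per region, the kind of query an XQuery workload would pose
        totals = defaultdict(float)
        for sale in doc.findall("Sale"):
            totals[sale.findtext("region")] += float(sale.findtext("amount"))

        print(dict(totals))   # {'EU': 200.0, 'US': 200.0}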

    Challenging Issues of Spatio-Temporal Data Mining

    The spatio-temporal database (STDB) has received considerable attention during the past few years, due to the emergence of numerous applications (e.g., flight control systems, weather forecasting, mobile computing) that demand efficient management of moving objects. These applications record objects' geographical locations (sometimes also shapes) at various timestamps and support queries that explore their historical and future (predictive) behaviors. The STDB significantly extends the traditional spatial database, which deals only with stationary data and hence is inapplicable to moving objects, whose dynamic behavior requires re-investigation of numerous topics including data modeling, indexes, and the related query algorithms. In many application areas, huge amounts of data are generated that explicitly or implicitly contain spatial or spatio-temporal information. However, the ability to analyze these data remains inadequate, and the need for adapted data mining tools becomes a major challenge. In this paper, we present the challenging issues of spatio-temporal data mining. Keywords: database, data mining, spatial, temporal, spatio-temporal
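    To make the notion of querying moving objects concrete, here is a minimal, hypothetical Python sketch of a historical range query over timestamped location samples ("which objects were inside this box during this interval?"); it only illustrates the query class discussed above and is not code from the paper.

        # Toy sketch (hypothetical names): historical range query over moving-object
        # trajectories, i.e. "which objects were inside this box during this interval?"
        from dataclasses import dataclass

        @dataclass
        class Fix:                 # one timestamped location sample of a moving object
            obj_id: str
            t: float               # timestamp
            x: float
            y: float

        def range_query(fixes, t0, t1, xmin, xmax, ymin, ymax):
            return {f.obj_id for f in fixes
                    if t0 <= f.t <= t1 and xmin <= f.x <= xmax and ymin <= f.y <= ymax}

        fixes = [Fix("flight42", 10.0, 1.0, 2.0), Fix("flight42", 20.0, 5.0, 6.0),
                 Fix("taxi7",    15.0, 0.5, 0.5)]
        print(range_query(fixes, 0, 16, 0, 2, 0, 3))   # {'flight42', 'taxi7'}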

    Wireless Location Based Services (Wi-LBS)

    The need and demand for location information have risen rapidly. With the advancement of mobile computing technologies, many studies have been conducted on providing reliable location information solutions. Location-based information has become an important resource for mobile users, especially for giving directions or locating places. Wireless Location Based Services (Wi-LBS) highlights this scenario by explaining the application and usage of LBS in a wireless environment. Wi-LBS is introduced as a wireless method for locating places within the UTP campus instead of referring to a static map. The objective of this research is to integrate GIS with MMS technology into a system called Wireless Location Based Services. It mainly focuses on applying GIS elements to provide location information by utilizing the advancement of today's wireless handheld devices. Users request a location by sending short messages from their mobile phones to the Wi-LBS system, and the system replies with location information containing pictures and directions. Rapid Application Development (RAD) is used as the methodology for designing the Wi-LBS system. This research also details the study of Multimedia Messaging Service (MMS), which covers the sending of picture messages from Wi-LBS to mobile phones and the system's function in responding to users' requests. Various studies of successful LBS applications implemented abroad motivated this research on LBS implementation in this country. The results of the research are the proposed Wi-LBS framework, a discussion of GIS and MMS, and the Wi-LBS system itself. This study shows that Wireless Location Based Services have great potential for commercial implementation given the growth of wireless applications and users' eagerness for more services from wireless systems.
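    As a sketch of the request/reply flow described above (hypothetical data and names, not the actual Wi-LBS implementation), the Python snippet below resolves a texted place name against a small campus gazetteer and answers with a map image reference and directions.

        # Minimal sketch of the request/reply flow: a texted place name is resolved
        # against a small campus gazetteer and answered with an image and directions.
        # Hypothetical data; the real system packages the reply as an MMS.
        GAZETTEER = {
            "library": {"image": "library.jpg", "directions": "From the main gate, walk 200 m north."},
            "cafeteria": {"image": "cafeteria.jpg", "directions": "Next to Block C, ground floor."},
        }

        def handle_request(message: str) -> dict:
            place = message.strip().lower()
            entry = GAZETTEER.get(place)
            if entry is None:
                return {"text": f"Sorry, '{message}' was not found."}
            return {"image": entry["image"], "text": entry["directions"]}

        print(handle_request("Library"))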

    History Of Databases

    Databases and database management systems have become an integral part of every kind of work, whether managing business-related data or our household accounts. The purpose of this paper is to look back in time and track the origin of the database, its development over the years, and to look forward at what the future may hold for databases.

    Radar exchange: Open market for goods and services at no cost

    Today's society is characterized by consumerism and rapid development, which has resulted in the emergence of numerous unused items. People concerned about this waste have joined a collaborative economy in which goods and services are shared amongst themselves, reducing waste. With the widespread use of social networks, the emergence of communities and services managed in a distributed manner has grown. As a result, groups such as Free Cycle and Community Exchange have emerged to aid in the search for and exchange of goods. Following the example of these two groups and using available technology, this dissertation documents the development of a Geographic Information System with the goal of providing a web-based platform for the exchange of goods and services. This new approach, in which geospatial information is the key focus, allows users to view available products based on their location, increasing the probability of discovering products nearby. In addition, a classification of the user is computed based on the products and services requested and offered, in order to increase the user's credibility in the eyes of others and so increase the likelihood of an exchange taking place. This document describes the process followed: the research carried out to gain a better understanding of the topic, the assessment of requirements, and the planning and creation of a prototype. Finally, usability tests were conducted on the created solution in order to evaluate its performance and interface.
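    The location-aware listing described above boils down to ranking offers by distance from the user. A minimal, hypothetical Python sketch of that idea, not taken from the dissertation, is a haversine-based sort:

        # Illustrative sketch (hypothetical names): rank offered items by great-circle
        # distance from the user, so nearby goods surface first.
        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            # great-circle distance between two WGS84 points, in kilometres
            dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
            a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
            return 2 * 6371.0 * asin(sqrt(a))

        items = [("bicycle", 38.74, -9.15), ("bookshelf", 41.15, -8.61)]
        user = (38.736, -9.142)   # user's current position
        nearby = sorted(items, key=lambda it: haversine_km(user[0], user[1], it[1], it[2]))
        print(nearby[0][0])       # closest offer first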

    Merging Queries in OLTP Workloads

    OLTP applications are usually executed by a high number of clients in parallel and typically face high throughput demand as well as strict latency requirements for individual statements. In enterprise scenarios, they often have to deal with overload spikes resulting from events such as Cyber Monday or Black Friday. The traditional solution for preventing resource exhaustion and thus coping with such spikes is significant over-provisioning of the underlying infrastructure. In this thesis, we analyze real enterprise OLTP workloads with respect to statement types, complexity, and hot-spot statements. Interestingly, our findings reveal that workloads are often read-heavy and comprise similar query patterns, which opens up the potential to share work among statements belonging to different transactions. In the past, resource sharing has been extensively studied for OLAP workloads. Naturally, the question arises why studies mainly focus on OLAP and not on OLTP workloads. At first sight, OLTP queries often consist of simple calculations, such as index look-ups, with little sharing potential. Consequently, such queries, due to their short execution time, may offer too little sharing potential to justify the additional overhead. In addition, OLTP workloads execute not only read operations but also updates. Therefore, sharing work needs to obey transactional semantics, such as the given isolation level and read-your-own-writes. This thesis presents THE LEVIATHAN, a novel batching scheme for OLTP workloads: an approach for merging read statements within interactively submitted multi-statement transactions consisting of reads and updates. Our main idea is to merge the execution of statements by merging their plans, and thus to be able to merge not only complex calculations but also simple ones, such as the aforementioned index look-ups. We identify mergeable statements by pattern matching of prepared statement plans, which comes with low overhead. To obey the isolation level properties and provide read-your-own-writes, we first define a formal framework for merging transactions running under a given isolation level and then provide insights into a prototypical implementation of merging within a commercial database system. Our experimental evaluation shows that, depending on the isolation level, the load in the system, and the read-share of the workload, transaction throughput can be improved by up to a factor of 2.5x without compromising the transactional semantics.
    Another interesting effect we show is that with our strategy, we can increase the throughput of a real enterprise workload by 20%.
    Outline:
    1 Introduction: 1.1 Summary of Contributions; 1.2 Outline
    2 Workload Analysis: 2.1 Analyzing OLTP Benchmarks (YCSB, TATP, TPC Benchmark Scenarios, Summary); 2.2 Analyzing OLTP Workloads from Open Source Projects (Characteristics of Workloads, Summary); 2.3 Analyzing Enterprise OLTP Workloads (Overview of Reports about OLTP Workload Characteristics, Analysis of SAP Hybris Workload, Summary); 2.4 Conclusion
    3 Related Work on Query Merging: 3.1 Merging the Execution of Operators; 3.2 Merging the Execution of Subplans; 3.3 Merging the Results of Subplans; 3.4 Merging the Execution of Full Plans; 3.5 Miscellaneous Works on Merging; 3.6 Discussion
    4 Merging Statements in Multi-Statement Transactions: 4.1 Overview of Our Approach (Examples, Why Naïve Merging Fails); 4.2 THE LEVIATHAN Approach; 4.3 Formalizing THE LEVIATHAN Approach (Transaction Theory, Merging Under MVCC); 4.4 Merging Reads Under Different Isolation Levels (Read Uncommitted, Read Committed, Repeatable Read, Snapshot Isolation, Serializable, Discussion); 4.5 Merging Writes Under Different Isolation Levels (Read Uncommitted, Read Committed, Snapshot Isolation, Serializable, Handling Dependencies, Discussion)
    5 System Model: 5.1 Definition of the Term "Overload"; 5.2 Basic Queuing Model (Option (1): Replacement with a Merger Thread, Option (2): Adding a Merger Thread, Using Multiple Merger Threads, Evaluation); 5.3 Extended Queue Model (Option (1): Replacement with a Merger Thread, Option (2): Adding a Merger Thread, Evaluation)
    6 Implementation: 6.1 Background: SAP HANA; 6.2 System Design (Read Committed, Snapshot Isolation); 6.3 Merger Component (Overview, Dequeuing, Merging, Sending, Updating MTx State); 6.4 Challenges in the Implementation of Merging Writes (SQL String Implementation, Update Count, Error Propagation, Abort and Rollback)
    7 Evaluation: 7.1 Benchmark Settings; 7.2 System Settings (Experiment I: End-to-End Response Time Within a SAP Hybris System, Experiment II: Dequeuing Strategy, Experiment III: Merging Improvement on Different Statement, Transaction and Workload Types, Experiment IV: End-to-End Latency in YCSB, Experiment V: Breakdown of Execution in YCSB, Discussion of System Settings); 7.3 Merging in Interactive Transactions (Experiment VI: Merging TATP in Read Uncommitted, Experiment VII: Merging TATP in Read Committed, Experiment VIII: Merging TATP in Snapshot Isolation); 7.4 Merging Queries in Stored Procedures (Experiment IX: Merging TATP Stored Procedures in Read Committed); 7.5 Merging SAP Hybris (Experiment X: CPU-time Breakdown on HANA Components, Experiment XI: Merging Media Query in SAP Hybris, Discussion of our Results in Comparison with Related Work)
    8 Conclusion: 8.1 Summary; 8.2 Future Research Directions
    References; Appendix A: UML Class Diagram
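    The thesis merges statements at the level of prepared-statement plans inside the database engine. Purely as an intuition for the sharing idea, the hypothetical Python sketch below merges several point look-ups that follow the same query pattern, issued by different transactions, into a single IN-list query and routes each row back to its requester; this SQL-string batching is a simplification, not THE LEVIATHAN's plan-level mechanism.

        # Intuition only: the thesis merges prepared-statement *plans* inside the DBMS;
        # this toy sketch shows the sharing idea at the SQL level by batching point
        # look-ups with the same pattern into one IN-list query (hypothetical names).
        import sqlite3

        def merge_point_lookups(con, table, key_col, requests):
            """requests: list of (txn_id, key). Returns {txn_id: row or None}."""
            keys = [k for _, k in requests]
            placeholders = ",".join("?" * len(keys))
            sql = f"SELECT {key_col}, * FROM {table} WHERE {key_col} IN ({placeholders})"
            rows = {row[0]: row[1:] for row in con.execute(sql, keys)}
            # route each result back to the transaction that asked for it
            return {txn: rows.get(k) for txn, k in requests}

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE item(id INTEGER PRIMARY KEY, name TEXT)")
        con.executemany("INSERT INTO item VALUES (?, ?)", [(1, "pen"), (2, "ink")])
        # three transactions issued the same SELECT-by-id pattern; execute it once
        print(merge_point_lookups(con, "item", "id", [("T1", 1), ("T2", 2), ("T3", 9)]))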

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, fundamental design decisions involved in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution for a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL seems to be the right technology to fulfill its requirements. However, every big data application has its own data characteristics, and thus the corresponding data fits a different data model. This paper presents a feature and use case analysis and comparison of the four main data models, namely document-oriented, key-value, graph, and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings, and possible use cases of the available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
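    To make the four data models tangible, the short, hypothetical Python sketch below shapes the same toy "customer" entity for each of them (document-oriented, key-value, graph, and wide-column); it is illustrative only and not drawn from the paper.

        # Hypothetical illustration (not from the paper): the same "customer" entity
        # shaped for the four NoSQL data models discussed above.
        document = {                       # document-oriented: nested, self-contained
            "_id": "c42",
            "name": "Ada",
            "orders": [{"sku": "A1", "qty": 2}],
        }

        key_value = {                      # key-value: opaque blob behind a single key
            "customer:c42": b'{"name": "Ada"}',
        }

        graph = {                          # graph: nodes and labelled edges
            "nodes": {"c42": {"name": "Ada"}, "A1": {"type": "product"}},
            "edges": [("c42", "ORDERED", "A1", {"qty": 2})],
        }

        wide_column = {                    # wide-column: row key -> column families
            "c42": {
                "profile": {"name": "Ada"},
                "orders": {"A1": 2},
            },
        }

        for model, sample in [("document", document), ("key-value", key_value),
                              ("graph", graph), ("wide-column", wide_column)]:
            print(model, "->", sample)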