
    Modeling of the decision-supporting process on the possibility of concluding a contract on the provision of therapeutic services
    Development of Architectures, Theorems, and Models of Properties of Distributed Information Storage Systems

    Today we live in a world of information technologies that have penetrated virtually every sphere of human activity. Recent developments in database management systems have coincided with advances in parallel computing. As a result, a new class of data storage has emerged: globally distributed non-relational database management systems, now widely used by Twitter, Facebook, Google, and other large distributed information systems to store and process huge volumes of data. Databases have evolved from mainframe architectures to globally distributed non-relational repositories designed to store enormous amounts of information and serve millions of users. The article identifies the drivers and prerequisites of this evolution and traces the transformation of the property models of database management systems and of the theorems that formalize the relationships between those properties. In particular, it examines the rationale for the transition from the ACID property model to the BASE model, which relaxes the requirements for data consistency in order to achieve high performance in distributed databases with many replicas. It also provides a concise justification of the CAP and PACELC theorems, which establish mutually exclusive relationships between availability, consistency, and latency in replicated information systems, and analyzes their limitations. The compatibility issues of the consistency models used by different non-relational data stores are noted, and, as an example, the available consistency settings of the NoSQL databases Cassandra, MongoDB, and Azure Cosmos DB are discussed in detail. The results of the evolution of distributed database architectures are summarized using Goal Structuring Notation (GSN). Directions for further research and for the continued development of globally distributed information systems and data repositories are also outlined.
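The tunable consistency offered by stores such as Cassandra can be illustrated with a quorum sketch: with N replicas, requiring W write acknowledgements and R read responses such that R + W > N guarantees that every read overlaps the latest write. The sketch below is illustrative only; all names and structure are invented and reflect no real client API.

```python
# Toy sketch of quorum-based tunable consistency over N replicas
# (the idea behind Cassandra-style per-operation consistency levels).
# All names here are illustrative, not a real client API.

N = 3  # replication factor

class Replica:
    def __init__(self):
        self.value, self.version = None, 0

def write(replicas, value, version, w):
    """Acknowledge the write once `w` replicas have applied it."""
    for acks, rep in enumerate(replicas, start=1):
        rep.value, rep.version = value, version
        if acks >= w:
            return True  # remaining replicas would be updated lazily
    return False

def read(replicas, r):
    """Poll `r` replicas and return the freshest value seen."""
    polled = replicas[-r:]  # deliberately include a possibly stale replica
    return max(polled, key=lambda rep: rep.version).value

replicas = [Replica() for _ in range(N)]
write(replicas, "v1", version=1, w=2)  # W=2: replicas 0 and 1 updated
print(read(replicas, r=2))             # R=2, so R+W>N: overlap guarantees "v1"
```

With R = 1 the quorum condition R + W > N no longer holds, and the same read may return a stale value — the consistency/latency trade-off that PACELC formalizes.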

    Enhancements to SQLite Library to Improve Performance on Mobile Platforms

    This thesis presents solutions to improve the performance of the SQLite library on mobile systems. In particular, two approaches are presented that add lightweight locking mechanisms to the SQLite library and improve its concurrency on the Android operating system; the impact of each on performance is discussed. Many applications on Android rely on the SQLite library to store ordered data. However, because of the heavy synchronization primitives the library uses, it becomes a performance bottleneck for applications that push large amounts of data into the database. Related work in this area likewise points to the SQLite database as a limiting factor for performance. As network speeds increase and applications download larger amounts of data, the storage system can become the bottleneck. The work in this thesis addresses these issues by increasing concurrency and adding lightweight locking mechanisms. The factors determining the performance of the Application Programming Interfaces provided by SQLite are first gathered from I/O traces of common database operations; analyzing these traces reveals opportunities for improvement. An alternative locking mechanism is then added to the database file, using byte-range locks for fine-grained locking. Its impact on performance is measured with SQLite benchmarks as well as real applications, and a multi-threaded benchmark is designed to measure the performance of fine-grained locking in multi-threaded applications running common database operations. Recent versions of SQLite use a write-ahead log for journaling. Writes to this sequential log can occur concurrently, especially on flash storage; by adding a sequencing mechanism for the write-ahead log, these writes can proceed simultaneously. The performance of this method is also analyzed using the synthetic and multi-threaded benchmarks. With these mechanisms, the library is observed to gain significant performance for concurrent writes.
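Byte-range locking of the kind described above is typically built on POSIX record locks (`fcntl`), which lock a span of bytes rather than the whole file. The sketch below shows the primitive on a scratch file; the 4 KiB page size and lock layout are illustrative and do not reproduce SQLite's actual locking protocol.

```python
# Sketch of byte-range (record) locking with POSIX fcntl.lockf: locking
# page-sized spans of a file instead of the whole file, the primitive
# behind fine-grained locking. POSIX-only; the page size and layout are
# illustrative, not SQLite's real lock ranges.
import fcntl
import os
import tempfile

PAGE = 4096

def lock_page(fd, page, exclusive=True):
    """Take a lock covering only bytes [page*PAGE, (page+1)*PAGE)."""
    kind = fcntl.LOCK_EX if exclusive else fcntl.LOCK_SH
    fcntl.lockf(fd, kind, PAGE, page * PAGE, os.SEEK_SET)
    return True

def unlock_page(fd, page):
    fcntl.lockf(fd, fcntl.LOCK_UN, PAGE, page * PAGE, os.SEEK_SET)

fd = os.open(tempfile.mkstemp()[1], os.O_RDWR)
os.truncate(fd, 4 * PAGE)

lock_page(fd, 0)   # exclusive lock on page 0 only
lock_page(fd, 2)   # page 2 is a disjoint byte range, so no conflict
unlock_page(fd, 0)
unlock_page(fd, 2)
os.close(fd)
```

Note that classic POSIX record locks are held per process, so conflicts arise only between separate processes; making such locks safe across threads of one process requires additional care, which is part of what the thesis addresses.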

    A framework for selecting NoSQL databases: a NetFlow use case

    Making decisions about technology is difficult for IT practitioners, especially when they lack formal guidance, and ad hoc decisions are prone to bias. This research study specifically considered decisions regarding NoSQL. Its primary objective was to develop a framework that can assist IT practitioners with decisions about NoSQL technologies. An investigation into the typical decision-making problems encountered when making technology-based decisions provided an understanding of the problem context, while the application context was explored through a literature study of the four NoSQL families. The study produces a framework comprising two models. First, a weighted decision model combines several constructs, providing a general method of making decisions. Second, a 6-step process model is proposed for adapting the weighted decision model to a specific type of technology and a specific use case. The feasibility and utility of the framework are demonstrated by applying it to a NetFlow use case. If NetFlow data is to be used for analytical decision-making, the data must be stored long-term. NoSQL databases have grown in popularity, especially in decision-making contexts, so NoSQL is a logical storage choice; however, which NoSQL family to use is not self-evident, and the decision-maker may require assistance to make the right choice. To provide this assistance, the framework was adapted to the NoSQL context: a set of criteria was developed so that the NoSQL options can be compared uniformly, and the four NoSQL families were graded against this set of criteria. After adaptation, experts provided input on the requirements of the NetFlow use case, which yielded the weighting of the criteria for this specific use case. Finally, a weighted score was calculated for each family. For the NetFlow use case, the model suggests that a document-based NoSQL database be used. The framework ensures that all NoSQL technologies are investigated systematically, thereby reducing the effect of biases and addressing the problem identified in this study. The proposed model can also serve as a foundation for future research.
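The weighted decision model described above can be sketched in a few lines: each family receives a grade per criterion, each criterion a use-case weight, and the family with the highest weighted sum is recommended. The criteria, grades, and weights below are invented placeholders, not the study's expert-elicited values.

```python
# Minimal sketch of a weighted decision model for the four NoSQL
# families. Grades (0-5) and weights are made-up placeholders used
# only to show the mechanics of the model.

grades = {
    "key-value": {"write_throughput": 5, "query_flexibility": 1, "schema_evolution": 4},
    "document":  {"write_throughput": 4, "query_flexibility": 4, "schema_evolution": 5},
    "column":    {"write_throughput": 5, "query_flexibility": 2, "schema_evolution": 3},
    "graph":     {"write_throughput": 2, "query_flexibility": 5, "schema_evolution": 3},
}

# Hypothetical use-case weights (summing to 1.0), e.g. for a
# write-heavy NetFlow-style workload.
weights = {"write_throughput": 0.5, "query_flexibility": 0.3, "schema_evolution": 0.2}

def weighted_score(grade_row, weights):
    """Sum of grade * weight over all criteria."""
    return sum(grade_row[c] * w for c, w in weights.items())

scores = {family: weighted_score(g, weights) for family, g in grades.items()}
best = max(scores, key=scores.get)
print(best)  # the family with the highest weighted score
```

Changing the weights to match a different use case can change the recommendation, which is exactly why the framework's 6-step adaptation process re-elicits weights per use case.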

    Generalization of ACID Properties

    ACID (Atomicity, Consistency, Isolation, and Durability) is a set of properties that guarantee the reliability of database transactions [2]. The ACID properties were originally developed with traditional, business-oriented applications (e.g., banking) in mind. Hence, they do not fully support the functional and performance requirements of advanced database applications such as computer-aided design, computer-aided manufacturing, office automation, network management, multidatabases, and mobile databases. For instance, transactions in computer-aided design applications are generally of long duration, and preserving the traditional ACID properties in such transactions would require locking resources for long periods of time. This has led to the generalization of the ACID properties as Recovery, Consistency, Visibility, and Permanence. The aim of this generalization is to relax some of the constraints and restrictions imposed by the ACID properties. For example, visibility relaxes the isolation property by enabling the sharing of partial results, thereby promoting cooperation among concurrent transactions. The more generalized the ACID properties are, the more flexible the corresponding transaction model becomes.
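The visibility relaxation can be pictured with a toy model: a long-running transaction keeps an isolated working set but may explicitly publish selected partial results for concurrent transactions to read before commit. The class and method names below are invented for illustration and do not come from any real transaction system.

```python
# Toy sketch of the "visibility" relaxation of isolation: a cooperative
# transaction exposes chosen partial results to peers before commit,
# instead of hiding everything until the end. All names are invented.

class CooperativeTransaction:
    def __init__(self, name):
        self.name = name
        self._private = {}  # isolated working set (classic isolation)
        self._shared = {}   # partial results made visible early

    def write(self, key, value):
        """Stage an update; not visible to peers yet."""
        self._private[key] = value

    def publish(self, key):
        """Relax isolation: expose one partial result to peers."""
        self._shared[key] = self._private[key]

    def visible_to_peers(self):
        """What a concurrent transaction is allowed to see."""
        return dict(self._shared)

t = CooperativeTransaction("design-step-1")
t.write("subassembly", "draft-A")  # staged, still isolated
t.write("notes", "internal")       # staged, never published
t.publish("subassembly")           # cooperating transactions can see this
print(t.visible_to_peers())        # {'subassembly': 'draft-A'}
```

In a full model, Recovery and Permanence would still govern what happens to both the private and published state on abort or commit; this sketch shows only the visibility dimension.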