
    Architectural Principles for Database Systems on Storage-Class Memory

    Database systems have long been optimized to hide the higher latency of storage media, yielding complex persistence mechanisms. With the advent of large DRAM capacities, it became possible to keep a full copy of the data in DRAM. Systems that leverage this possibility, such as main-memory databases, keep two copies of the data in two different formats: one in main memory and the other one in storage. The two copies are kept synchronized using snapshotting and logging. This main-memory-centric architecture yields nearly two orders of magnitude faster analytical processing than traditional, disk-centric ones. The rise of Big Data emphasized the importance of such systems with an ever-increasing need for more main memory. However, DRAM is hitting its scalability limits: It is intrinsically hard to further increase its density. Storage-Class Memory (SCM) is a group of novel memory technologies that promise to alleviate DRAM’s scalability limits. They combine the non-volatility, density, and economic characteristics of storage media with the byte-addressability and a latency close to that of DRAM. Therefore, SCM can serve as persistent main memory, thereby bridging the gap between main memory and storage. In this dissertation, we explore the impact of SCM as persistent main memory on database systems. Assuming a hybrid SCM-DRAM hardware architecture, we propose a novel software architecture for database systems that places primary data in SCM and directly operates on it, eliminating the need for explicit IO. This architecture yields many benefits: First, it obviates the need to reload data from storage to main memory during recovery, as data is discovered and accessed directly in SCM. Second, it allows replacing the traditional logging infrastructure by fine-grained, cheap micro-logging at data-structure level. Third, secondary data can be stored in DRAM and reconstructed during recovery. 
Fourth, system runtime information can be stored in SCM to improve recovery time. Finally, the system may retain and continue in-flight transactions in case of system failures. However, SCM is no panacea as it raises unprecedented programming challenges. Given its byte-addressability and low latency, processors can access, read, modify, and persist data in SCM using load/store instructions at a CPU cache line granularity. The path from CPU registers to SCM is long and mostly volatile, including store buffers and CPU caches, leaving the programmer with little control over when data is persisted. Therefore, there is a need to enforce the order and durability of SCM writes using persistence primitives, such as cache line flushing instructions. This in turn creates new failure scenarios, such as missing or misplaced persistence primitives. We devise several building blocks to overcome these challenges. First, we identify the programming challenges of SCM and present a sound programming model that solves them. Then, we tackle memory management, as the first required building block to build a database system, by designing a highly scalable SCM allocator, named PAllocator, that fulfills the versatile needs of database systems. Thereafter, we propose the FPTree, a highly scalable hybrid SCM-DRAM persistent B+-Tree that bridges the gap between the performance of transient and persistent B+-Trees. Using these building blocks, we realize our envisioned database architecture in SOFORT, a hybrid SCM-DRAM columnar transactional engine. We propose an SCM-optimized MVCC scheme that eliminates write-ahead logging from the critical path of transactions. Since SCM-resident data is near-instantly available upon recovery, the new recovery bottleneck is rebuilding DRAM-based data.
To alleviate this bottleneck, we propose a novel recovery technique that achieves nearly instant responsiveness of the database by accepting queries right after recovering SCM-based data, while rebuilding DRAM-based data in the background. Additionally, SCM brings new failure scenarios that existing testing tools cannot detect. Hence, we propose an online testing framework that is able to automatically simulate power failures and detect missing or misplaced persistence primitives. Finally, our proposed building blocks can serve to build more complex systems, paving the way for future database systems on SCM.
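The failure class described above (a store that is visible but never explicitly flushed before a crash) can be illustrated with a minimal simulation. This sketch is hypothetical and not taken from the dissertation: it models the volatile cache path in front of SCM, with `flush` standing in for a cache-line flushing instruction such as CLWB.

```python
# Minimal sketch (hypothetical, not the dissertation's code) of why SCM writes
# need explicit persistence primitives: stores land in a volatile cache and only
# reach durable media after a flush, so a crash between store and flush loses
# the write.

class SimulatedSCM:
    def __init__(self):
        self.durable = {}   # contents that survive a power failure
        self.cache = {}     # volatile CPU-cache contents

    def store(self, addr, value):
        self.cache[addr] = value          # visible, but not yet durable

    def flush(self, addr):
        if addr in self.cache:            # persistence primitive (e.g. CLWB)
            self.durable[addr] = self.cache[addr]

    def power_failure(self):
        self.cache.clear()                # unflushed writes are lost

    def load(self, addr):
        return self.cache.get(addr, self.durable.get(addr))

scm = SimulatedSCM()
scm.store("A", 1)
scm.flush("A")        # correctly ordered, durable write
scm.store("B", 2)     # missing flush: the bug class the testing framework detects
scm.power_failure()
print(scm.load("A"))  # 1
print(scm.load("B"))  # None
```

Simulating the power failure by discarding the volatile layer is exactly the trick that lets a testing framework exercise such bugs without real hardware crashes.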

    Instant restore after a media failure

Media failures usually leave database systems unavailable for several hours until recovery is complete, especially in applications with large devices and high transaction volume. Previous work introduced a technique called single-pass restore, which increases restore bandwidth and thus substantially decreases time to repair. Instant restore goes further: it permits read/write access to any data on a device undergoing restore, even data not yet restored, by restoring individual data segments on demand. Thus, the restore process is guided primarily by the needs of applications, and the observed mean time to repair is effectively reduced from several hours to a few seconds. This paper presents an implementation and evaluation of instant restore. The technique is incrementally implemented on a system starting with the traditional ARIES design for logging and recovery. Experiments show that the transaction latency perceived after a media failure can be cut down to less than a second and that the overhead imposed by the technique on normal processing is minimal. The net effect is that a few "nines" of availability are added to the system using simple and low-overhead software techniques.
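The on-demand mechanism can be sketched in a few lines. This is an illustrative model under assumed names, not the paper's implementation: each segment is rebuilt lazily on first access from the backup image plus the relevant log records, so an application waits only for the segments it actually touches.

```python
# Illustrative sketch (not the paper's code) of instant restore: after a media
# failure, a segment is restored on demand the first time it is read, by
# combining the backup image with replayed log records for that segment.

class InstantRestoreDevice:
    def __init__(self, backup, log):
        self.backup = backup      # last full backup image, keyed by segment id
        self.log = log            # log records to replay, keyed by segment id
        self.segments = {}        # segments restored so far

    def read(self, seg_id):
        if seg_id not in self.segments:           # restore on first access
            data = self.backup[seg_id]
            for record in self.log.get(seg_id, []):
                data = record(data)               # replay this segment's log
            self.segments[seg_id] = data
        return self.segments[seg_id]

backup = {0: "a", 1: "b"}
log = {1: [lambda d: d + "!"]}                    # one update logged for seg 1
dev = InstantRestoreDevice(backup, log)
print(dev.read(1))        # "b!" — restored on first access
print(len(dev.segments))  # 1 — untouched segments not yet restored
```

Because restore order follows access order, hot data becomes available almost immediately while cold segments are filled in by a background pass.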

    Resilience in agri-food supply chains: a critical analysis of the literature and synthesis of a novel framework

Purpose: Resilience in Agri-Food Supply Chains (AFSCs) is an area of significant importance due to growing supply chain volatility. Whilst the majority of research exploring supply chain resilience has originated from a supply chain management perspective, many other disciplines (such as environmental systems science and the social sciences) have also explored the topic. As complex social, economic and environmental constructs, the priority of resilience in AFSCs goes far beyond the company-specific focus of supply chain management works and would conceivably benefit from including more diverse academic disciplines. However, this is hindered by inconsistencies in terminology and the conceptual components of resilience across different disciplines. In response, this work utilises a systematic literature review to identify which multidisciplinary aspects of resilience are applicable to AFSCs and to generate a novel AFSC resilience framework. Design/methodology/approach: This paper employs a structured and multidisciplinary review of 137 articles in the resilience literature followed by critical analysis and synthesis of findings to generate new knowledge in the form of a novel AFSC resilience framework. Findings: Findings indicate that the complexity of AFSCs and subsequent exposure to almost constant external interference means that disruptions cannot be seen as a one-off event and thus resilience must concern not only the ability to maintain core function but also to adapt to changing conditions. Practical implications: A number of resilience elements can be used to enhance resilience but their selection and implementation must be carefully matched to relevant phases of disruption and assessed on their broader supply chain impacts. In particular, the focus must be on overall impact on the ability of the supply chain as a whole to provide food security rather than to boost individual company performance.
Originality/value: The research novelty lies in the utilization of wider understandings of resilience from various research fields to propose a rigorous and food-specific resilience framework with end-consumer food security as its main focus.

    Integrated methodological frameworks for modelling agent-based advanced supply chain planning systems: a systematic literature review

Purpose: The objective of this paper is to provide a systematic literature review of recent developments in methodological frameworks for the modelling and simulation of agent-based advanced supply chain planning systems. Design/methodology/approach: A systematic literature review is provided to identify, select and make an analysis and a critical summary of all suitable studies in the area. It is organized into two blocks: the first one covers agent-based supply chain planning systems in general terms, while the second one narrows the previous search to identify those works explicitly containing methodological aspects. Findings: Among the sixty suitable manuscripts identified in the primary literature search, only seven explicitly considered the methodological aspects. In addition, we noted that, in general, the notion of advanced supply chain planning is not considered unambiguously, that the social and individual aspects of the agent society are not taken into account in a clear manner in several studies and that a significant part of the works are of a theoretical nature, with few real-scale industrial applications. An integrated framework covering all phases of the modelling and simulation process is still lacking in the literature visited. Research limitations/implications: The main research limitations are related to the period covered (last four years), the selected scientific databases, the selected language (i.e. English) and the use of only one assessment framework for the descriptive evaluation part. Practical implications: The identification of recent works in the domain and discussion concerning their limitations can help pave the way for new and innovative research towards a complete methodological framework for agent-based advanced supply chain planning systems.
Originality/value: As there are no recent state-of-the-art reviews in the domain of methodological frameworks for agent-based supply chain planning, this paper contributes to systematizing and consolidating what has been done in recent years and uncovers interesting research gaps for future studies in this emerging field.

    Letter from the Special Issue Editor

Editorial work for DEBULL on a special issue on data management on Storage-Class Memory (SCM) technologies.

    Coordination, cooperation and collaboration in relief supply chain management

In recent years, an increasing number of natural and man-made disasters have demonstrated that a working relief supply chain management (RSCM) is crucial in order to alleviate the suffering of the affected population. Coordination, cooperation and collaboration within RSCM is essential for overcoming these destructive incidents. This paper explores the research undertaken in recent years, focusing on coordination, cooperation and collaboration in the field of supply chain management (SCM) and RSCM in order to provide unique definitions of these concepts taking the disaster setting into consideration. A systematic literature review including 202 academic papers published from 1996 onwards in top journals dealing with commercial supply and relief supply chain coordination, cooperation and collaboration is applied. In order to answer the underlying research questions in a proper way, a descriptive analysis and qualitative and quantitative content analysis of the papers are conducted. Descriptive results indicate that RSCM coordination, cooperation and collaboration have increasingly shifted into the focus of scientific research since 2001/2004 (i.e., 9/11 and the Indian Ocean Tsunami). Based on the qualitative content analysis, clear definitions of the terms coordination, cooperation and collaboration in SCM and RSCM were elaborated. The research landscape, as a result of the quantitative content analysis, allowed the identification of three issues that need to be addressed in future research work.

    ACUTA Journal of Telecommunications in Higher Education

In This Issue: Network Security: An Achilles Heel for Organizations of All Sizes; Providing Backup in a VoIP World; Security Concerns Shift Inward; Cell Phones, Land Lines, and E911; Security Checklists; Higher Ed's Tricky Equation: Directories Help Balance Availability with Security; Disaster Recovery Planning Essentials; Passing the Test of Productivity; Interview; President's Message; From the Executive Director; Here's My Advice

    The smart supply chain: a conceptual cyclic framework

Purpose: The objective of this work is to analyze the characteristics of the smart supply chain (SSC) and to propose a conceptual framework. Given the pace of current technological change, there is a need to analyze the new features of the SSC, related to digital technologies and the incorporation of services. Design/methodology/approach: A systematic review of the literature is addressed, analyzing the latest studies on the subject. This methodology allows us to propose a conceptualization of the SSC and incorporate new elements of analysis. Findings: The results show that much of the innovation and instrumentalization of supply chains involves incorporating digital services to expand their functionalities, especially in terms of agility and connectivity. The servitization of supply chains is therefore a key new feature. Put in relation to other characteristics identified in the literature, a conceptual cyclic framework is proposed for the SSC. Originality/value: This study contributes to strengthening the theoretical foundations of SSCs and serves as a guide for researchers and practitioners.

    Inverse software configuration management

Software systems are playing an increasingly important role in almost every aspect of today’s society such that they impact on our businesses, industry, leisure, health and safety. Many of these systems are extremely large and complex and depend upon the correct interaction of many hundreds or even thousands of heterogeneous components. Commensurate with this increased reliance on software is the need for high quality products that meet customer expectations, perform reliably and which can be cost-effectively and safely maintained. Techniques such as software configuration management have proved to be invaluable during the development process to ensure that this is the case. However, there are a very large number of legacy systems which were not developed under controlled conditions, but which still need to be maintained due to the heavy investment incorporated within them. Such systems are characterised by extremely high program comprehension overheads and the probability that new errors will be introduced during the maintenance process, often with serious consequences. To address the issues concerning maintenance of legacy systems this thesis has defined and developed a new process and associated maintenance model, Inverse Software Configuration Management (ISCM). This model centres on a layered approach to the program comprehension process through the definition of a number of software configuration abstractions. This information, together with the set of rules for reclaiming the information, is stored within an Extensible System Information Base (ESIB) via the definition of a Programming-in-the-Environment (PITE) language, the Inverse Configuration Description Language (ICDL). In order to assist the application of the ISCM process across a wide range of software applications and system architectures, the PISCES (Proforma Identification Scheme for Configurations of Existing Systems) method has been developed as a series of defined procedures and guidelines.
To underpin the method and to offer a user-friendly interface to the process, a series of templates, the Proforma Increasing Complexity Series (PICS), has been developed. To enable the useful employment of these techniques on large-scale systems, the subject of automation has been addressed through the development of a flexible meta-CASE environment, the PISCES M4 (MultiMedia Maintenance Manager) system. Of particular interest within this environment is the provision of a multimedia user interface (MUI) to the maintenance process. As a means of evaluating the PISCES method and to provide feedback into the ISCM process, a number of practical applications have been modelled. In summary, this research has considered a number of concepts, some of which are innovative in themselves, others of which are used in an innovative manner. In combination, these concepts may be considered to considerably advance the knowledge and understanding of the comprehension process during the maintenance of legacy software systems. A number of publications have already resulted from the research and several more are in preparation. Additionally, a number of areas for further study have been identified, some of which are already underway as funded research and development projects.
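The layered comprehension idea at the heart of ISCM can be sketched schematically. This is a hypothetical illustration (the rules, file names and layer labels are invented, not taken from the thesis): raw artefacts of a legacy system are grouped bottom-up into successively higher configuration abstractions by applying one reclamation rule per layer.

```python
# Hypothetical sketch (invented names, not the thesis's ICDL) of layered
# configuration abstraction: each rule reclaims a higher-level view of a
# legacy system from the layer below it.

def reclaim(artefacts, rules):
    """Apply reclamation rules layer by layer, collecting each abstraction."""
    layers = [artefacts]
    for rule in rules:                 # one rule per abstraction layer
        layers.append(rule(layers[-1]))
    return layers

# Layer 0: raw source files of a legacy system (example data)
files = ["parser.c", "parser.h", "lexer.c", "lexer.h"]

# Rule 1: group files sharing a basename into modules
group_by_module = lambda fs: sorted({f.split(".")[0] for f in fs})
# Rule 2: group modules into a named subsystem
group_into_system = lambda mods: ["compiler-front-end"] if mods else []

layers = reclaim(files, [group_by_module, group_into_system])
print(layers[1])  # ['lexer', 'parser']
print(layers[2])  # ['compiler-front-end']
```

The point of the layering is that each abstraction can be inspected and validated on its own, so comprehension of a large legacy system proceeds in manageable steps rather than all at once.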