
    The Ruler as Pilgrim in the Western and Eastern Middle Ages: A Sketch

    Investigating medieval royal pilgrimage from a comparative perspective, certain characteristics can be observed: Byzantine emperors normally did not go on pilgrimage, preferring to collect holy relics in their capital. In the West, royal pilgrimage flourished especially during the Later Middle Ages, playing a key role in the practice of rulership, without, however, making a significant contribution to the sacral character of monarchy. Christian kings seldom crossed the borders of their realms to visit sacred places. Muslim rulers (apart from the early caliphs) followed a similar pattern, often avoiding performing the hajj to Mecca in person. Instead, royal pilgrimage over long distances was a significant phenomenon at the periphery of both religious spheres: in West Africa and Scandinavia.

    Resiliency Mechanisms for In-Memory Column Stores

    The key objective of database systems is to reliably manage data, while high query throughput and low query latency are core requirements. To date, database research activities have mostly concentrated on the latter. However, due to the constant shrinking of transistor feature sizes, integrated circuits are becoming more and more unreliable, and transient hardware errors in the form of multi-bit flips are becoming more and more prominent. A more recent study (2013) of a large high-performance cluster with around 8,500 nodes measured a failure rate of 40 FIT per DRAM device. For that system, this means that a single- or multi-bit flip occurs every 10 hours, which is unacceptably high for enterprise and HPC scenarios. Causes can be cosmic rays, heat, or electrical crosstalk, with the latter being exploited actively through the RowHammer attack. It has been shown that memory cells are more prone to bit flips than logic gates, and several surveys have found multi-bit flip events in main memory modules of today's data centers. Due to the shift towards in-memory data management systems, where all business-related data and query intermediate results are kept solely in fast main memory, such systems are in great danger of delivering corrupt results to their users. Hardware techniques cannot be scaled to compensate for the exponentially increasing error rates. In other domains, there is increasing interest in software-based solutions to this problem, but the proposed methods come with huge runtime and/or storage overheads, which are unacceptable for in-memory data management systems. In this thesis, we investigate how to integrate bit flip detection mechanisms into in-memory data management systems. To achieve this goal, we first build an understanding of bit flip detection techniques and select two error codes, AN codes and XOR checksums, suited to the requirements of in-memory data management systems. The most important requirement is the effectiveness of the codes in detecting bit flips. We meet this goal with AN codes, which exhibit better and adaptable error detection capabilities compared to those found in today's hardware. The second most important goal is efficiency in terms of coding latency. We meet this by introducing fundamental performance improvements to AN codes and by vectorizing the operations of both chosen codes. We integrate bit flip detection mechanisms into the lowest storage layer and the query processing layer in such a way that the rest of the data management system and the user can remain oblivious to any error detection. This includes both base columns and pointer-heavy index structures such as the ubiquitous B-Tree. Additionally, our approach allows adaptable, on-the-fly bit flip detection during query processing with very little impact on query latency. AN coding allows intermediate results to be recoded with virtually no performance penalty. We support our claims by providing exhaustive runtime and throughput measurements throughout the thesis and with an end-to-end evaluation using the Star Schema Benchmark. To the best of our knowledge, we are the first to present such holistic and fast bit flip detection in a large software infrastructure such as an in-memory data management system. Finally, most of the source code fragments used to obtain the results in this thesis are open source and freely available.
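    As a rough illustration of the AN-coding idea named in this entry, the following Python/NumPy sketch encodes integer values by multiplying them with a constant A, detects bit flips through a divisibility check, and decodes ("softens") valid codewords by multiplying with the modular multiplicative inverse of A. This is a minimal sketch under illustrative assumptions; the constant, data width, and function names are not taken from the thesis.

```python
import numpy as np

# Minimal AN-coding sketch (illustrative assumptions, not the thesis implementation).
# Values are "hardened" by multiplying with a constant A in fixed-width integer
# arithmetic; any stored word that is not a multiple of A signals a bit flip.
A = np.uint32(233)   # an example odd constant; its choice determines detection strength

def encode(values: np.ndarray) -> np.ndarray:
    """Encode 32-bit values as codewords c = A * v (mod 2^32)."""
    return values.astype(np.uint32) * A

def detect_errors(codewords: np.ndarray) -> np.ndarray:
    """A codeword is valid iff it is divisible by A; return a mask of corrupt words."""
    return (codewords % A) != 0

def decode(codewords: np.ndarray) -> np.ndarray:
    """'Soften' codewords back to values by multiplying with A^-1 mod 2^32
    (A must be odd to be invertible modulo a power of two)."""
    a_inv = np.uint32(pow(int(A), -1, 2**32))
    return codewords * a_inv

data = np.array([1, 2, 3, 42], dtype=np.uint32)
code = encode(data)
code[2] ^= np.uint32(1 << 7)       # simulate a transient bit flip in main memory
print(detect_errors(code))         # [False False  True False]
print(decode(encode(data)))        # [ 1  2  3 42]
```

    One mathematical property that makes such codes attractive for query processing is that sums of codewords are again codewords (A*x + A*y = A*(x+y)), so detection can be applied to intermediates without decoding them first.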

    Aerodynamic Feeding 4.0: A New Concept for Flexible Part Feeding

    In modern production environments, the need for flexible handling systems constantly increases due to growing uncertainties, shorter product life cycles, and higher cost pressure. Part feeding systems are vital to modern handling systems, but conventional solutions are often characterized by low flexibility, high retooling times, and complex design. Therefore, in previous research, multiple approaches towards aerodynamic feeding technology were developed. Using air instead of mechanical chicanes to manipulate workpieces, aerodynamic feeding systems can achieve high feeding rates while at the same time being very flexible and reliable. Still, the complexity of the workpieces that can be oriented depends on the number of aerodynamic actuators used in the system. Previously developed systems used either one nozzle with a constant air jet or one nozzle and an air cushion, allowing a maximum of two orientation changes. This work presents a new concept for an aerodynamic feeding system with higher flexibility (with regard to the workpiece geometry) and drastically reduced retooling times compared to conventional feeding systems. In contrast to previous implementations of aerodynamic feeding systems, which use only one air nozzle or an air cushion, the new concept uses multiple, individually controllable air nozzles. Using a simulation-based approach, the orientation process is divided into several basic rotations - from a random initial orientation to the desired end orientation - each performed by a distinct nozzle. An optimization algorithm is then used to determine an optimal layout of the air nozzles, enabling the feeding system to feed any desired workpiece regardless of its initial orientation. With the proposed concept, high flexibility, low retooling times, and relatively low costs are expected, establishing aerodynamic feeding as an enabler for changeable production environments.
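    The following Python sketch is a hypothetical picture of the simulation-based layout optimization described above: a layout assigns each nozzle a position along the guiding track and an air pressure, layouts are scored by a stand-in for the orientation-process simulation, and a simple (1+1) evolutionary search keeps the best layout. Nothing here is taken from the actual system; the scoring function in particular is a synthetic placeholder for the physics-based simulation.

```python
import random

# Hypothetical sketch of a nozzle-layout optimization. All names, ranges, and the
# scoring function are illustrative assumptions, not the published system.
N_NOZZLES = 3

def score_layout(layout):
    """Placeholder for the simulation: fraction of correctly oriented parts.
    (Synthetic smooth function peaking at position 0.5 and pressure 2.0 bar.)"""
    return sum(1.0 / (1.0 + (pos - 0.5) ** 2 + (p - 2.0) ** 2) for pos, p in layout) / N_NOZZLES

def random_layout():
    return [(random.uniform(0.0, 1.0), random.uniform(0.5, 4.0)) for _ in range(N_NOZZLES)]

def mutate(layout, step=0.05):
    return [(pos + random.gauss(0.0, step), p + random.gauss(0.0, step)) for pos, p in layout]

# Simple (1+1) evolutionary search over nozzle positions and pressures.
best = random_layout()
best_score = score_layout(best)
for _ in range(2000):
    candidate = mutate(best)
    score = score_layout(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"best synthetic orientation rate: {best_score:.3f}")
```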

    Teaching In-Memory Database Systems the Detection of Hardware Errors

    The key objective of database systems is to reliably manage data, with high query throughput and low query latency as core requirements. To satisfy these requirements, database systems constantly adapt to novel hardware features. Although it has been intensively studied and is commonly accepted that hardware error rates in terms of bit flips increase dramatically as the underlying chip structures shrink, most database system research activities have neglected this fact, leaving error (bit flip) detection as well as correction to the underlying hardware. Especially for main memory, silent data corruption (SDC) as a result of transient bit flips leading to faulty data is mainly detected and corrected at the DRAM and memory-controller layer. However, since future hardware will become less reliable and error detection as well as correction in hardware will become more expensive, this free ride will come to an end in the near future. To continue providing reliable data management, an emerging research direction is to employ specific and tailored protection techniques at the database system level. Following this direction, we are currently developing and implementing an adapted system design for state-of-the-art in-memory column stores. In our lightning talk, we will summarize our current state and outline future work.
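    As a small, hypothetical illustration of a database-level protection technique (XOR checksums are one of the codes named in the thesis entry above), the following sketch maintains one checksum per column block and recomputes it on access; a mismatch flags silent data corruption. Block size, names, and the scan loop are assumptions for illustration only.

```python
import numpy as np

# Hypothetical per-block XOR checksum for a column of 64-bit words.
BLOCK = 1024   # number of words per protected block (illustrative)

def xor_checksum(block: np.ndarray) -> np.uint64:
    """Fold all words of a block with XOR; the result is stored next to the block."""
    return np.bitwise_xor.reduce(block.astype(np.uint64))

column = np.arange(4 * BLOCK, dtype=np.uint64)    # a toy column of four blocks
checksums = [xor_checksum(column[i:i + BLOCK]) for i in range(0, column.size, BLOCK)]

column[2500] ^= np.uint64(1 << 13)                # simulate a transient bit flip

# On a scan, recompute and compare: a mismatch flags silent data corruption.
# (An XOR checksum catches any odd number of flips per bit position, but an even
# number of flips in the same bit position cancels out.)
for i, stored in enumerate(checksums):
    if xor_checksum(column[i * BLOCK:(i + 1) * BLOCK]) != stored:
        print(f"bit flip detected in block {i}")   # -> block 2
```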

    Leadership When It Matters Most: Lessons on Influence from In Extremis Contexts

    None of us would study or read about leadership if we did not think that leadership is important to people. Assuming that leadership is, indeed, important to people, it then follows that it is most important when people's lives are at risk. This chapter is a discussion of the most important niche in leadership thinking and analysis: leader influence in dangerous contexts. There is social benefit to such a discussion. When one adds up the publicly released figures for the numbers of active duty military personnel, law enforcement officers, and firefighters (all people who live and work in dangerous contexts), the total is in the millions. Adding mountain climbers, skydivers, and other extreme sports enthusiasts to the list swells this figure. Not to be overlooked are ordinary individuals suddenly and unexpectedly thrust into a dangerous circumstance (for example, shootings, floods, mine disasters, airline incidents) where leadership matters or could have mattered. Dangerous contexts are ubiquitous, and leadership during them can make a difference.

    On the necessity and a generalized conceptual model for the consideration of large strains in rock mechanics

    This contribution presents a generalized conceptual model for the finite element solution of quasi-static isothermal hydro-mechanical processes in (fractured) porous media at large strains. A frequently used averaging procedure, known as the Theory of Porous Media, serves as the background for the complex multifield approach presented here. Within this context, a consistent representation of the weak formulation of the governing equations (i.e., the overall balance equations for mass and momentum) in the reference configuration of the solid skeleton is preferred. The time discretization and the linearization are performed for the individual variables and nonlinear functions representing the integrands of the weak formulation, instead of applying these conceptual steps to the overall nonlinear system of weighted residuals. Constitutive equations for the solid phase deformation are based on the multiplicative split of the deformation gradient, allowing existing approaches for technical materials and biological tissues to be adapted to rock materials in order to describe various inelastic effects, growth, and remodeling in a thermodynamically consistent manner. The presented models will be a feature of the next version of the scientific open-source finite element code OpenGeoSys, developed by an international developer and user group and coordinated by the authors.
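    For reference, the multiplicative split mentioned above decomposes the deformation gradient into an elastic and an inelastic part; a generic statement (notation assumed here, not necessarily the paper's) is:

```latex
% Multiplicative split of the deformation gradient into elastic and inelastic parts
% (the inelastic part may represent plasticity, swelling, growth, or remodeling).
\[
  \mathbf{F} = \frac{\partial \mathbf{x}}{\partial \mathbf{X}}
             = \mathbf{F}_{\mathrm{e}}\,\mathbf{F}_{\mathrm{in}},
  \qquad
  J = \det\mathbf{F} = J_{\mathrm{e}}\,J_{\mathrm{in}} > 0 .
\]
```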

    A modified combined active-set Newton method for solving phase-field fracture into the monolithic limit

    In this work, we examine a numerical phase-field fracture framework in which the crack irreversibility constraint is treated with a primal-dual active set method and a linearization is used in the degradation function to enhance numerical stability. The first goal is to carefully derive our primal-dual active set formulation from a complementarity system; the formulation has been used in numerous studies in the literature, but without a detailed mathematical derivation for phase-field fracture so far. Based on this, we formulate a modified combined active-set Newton approach that significantly reduces the computational cost in comparison to comparable prior algorithms for quasi-monolithic settings. For many practical problems, Newton converges fast, but the active set needs many iterations; three different efficiency improvements are suggested in this paper to address this. Afterwards, we design an iteration on the linearization in order to drive the problem to the monolithic limit. Our new algorithms are implemented in the programming framework pfm-cracks [T. Heister, T. Wick; pfm-cracks: A parallel-adaptive framework for phase-field fracture propagation, Software Impacts, Vol. 6 (2020), 100045]. In the numerical examples, we conduct performance studies and investigate efficiency enhancements. The main emphasis is on the cost complexity while maintaining the accuracy of numerical solutions and goal functionals. Our algorithmic suggestions are substantiated with the help of several benchmarks in two and three spatial dimensions. Therein, predictor-corrector adaptivity and parallel performance studies are explored as well. (Comment: 49 pages, 45 figures, 9 tables)
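    For readers unfamiliar with the method, the generic form of a complementarity system and the primal-dual active set characterization underlying such formulations is sketched below (generic notation only; the paper derives the concrete version for the crack irreversibility constraint of the phase field):

```latex
% Generic complementarity system for an inequality constraint g(u) <= 0 with
% Lagrange multiplier lambda, and its max-based reformulation (any c > 0):
\[
  g(u) \le 0, \qquad \lambda \ge 0, \qquad \lambda\,g(u) = 0
  \;\Longleftrightarrow\;
  \lambda = \max\!\bigl(0,\ \lambda + c\,g(u)\bigr).
\]
% A primal-dual active set iteration estimates the active constraints from the
% current primal-dual pair, enforces g_i(u) = 0 there and lambda_i = 0 elsewhere,
% and re-solves the remaining (linearized) system:
\[
  \mathcal{A}_k = \bigl\{\, i \;:\; \lambda_i^{k} + c\,g_i(u^{k}) > 0 \,\bigr\}.
\]
```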

    On the term and concepts of numerical model validation in geoscientific applications

    Get PDF
    Modeling and numerical simulation of the coupled physical and chemical processes observed in the subsurface are the only options for long-term analyses of complex geological systems. This contribution discusses some more general aspects of (dynamic) process modeling for geoscientific applications, including reflections on the slightly different understanding of the terms model and model validation in different scientific communities, and on the term and methods of model calibration in the geoscientific context. Starting from the analysis of observations of a certain part of the perceived reality, the process of model development comprises the establishment of the physical model characterizing the relevant processes in a problem-oriented manner, and subsequently the mathematical and numerical models. Considering the steps of idealization and approximation in the course of model development, Oreskes et al. [1] state that process and numerical models can neither be verified nor validated in general. Rather, the adequacy of models with the specific assumptions and parameterizations made during model set-up can be confirmed. While the adequacy of process models with observations can be confirmed using lab and field tests as well as process monitoring, the adequacy of numerical models can be confirmed using numerical benchmarking and code comparison. Model parameters, in particular constitutive parameters, are intrinsic elements of process and numerical models. As they are often not directly measurable, they have to be established by solving inverse problems based on an optimal numerical adaptation of observation results. In addition, numerical uncertainty analyses should be an obligatory part of numerical studies for critical real-world applications.
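    As a toy illustration of parameter identification via an inverse problem, the following Python sketch fits two parameters of a hypothetical pressure-decay model to synthetic "observations" by nonlinear least squares; the model, data, and parameter names are invented for illustration and are unrelated to any specific geoscientific code.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy inverse problem: calibrate two parameters of a hypothetical pressure-decay
# model, p(t) = p0 * exp(-k * t), against synthetic noisy observations.
def model(params, t):
    p0, k = params
    return p0 * np.exp(-k * t)

t_obs = np.linspace(0.0, 10.0, 25)
rng = np.random.default_rng(0)
p_obs = model([5.0, 0.3], t_obs) + rng.normal(scale=0.05, size=t_obs.size)

def residuals(params):
    """Misfit between model prediction and observations."""
    return model(params, t_obs) - p_obs

fit = least_squares(residuals, x0=[1.0, 1.0])
print("calibrated parameters:", fit.x)   # close to the 'true' values [5.0, 0.3]
```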

    A Learning Method for Automated Disassembly

    While joining tolerances, and therefore forces, are known in the assembly process, the determination of disassembly forces is not possible. This is caused by changes in the product properties during operation, which have multiple causes, such as thermal or mechanical stress on the product. Regarding the planning of disassembly tasks, disassembly times and tools therefore cannot be planned properly. They have to be determined during the process or remain undefined, which can result in damage to the product. This article presents an approach for describing the necessary disassembly forces without having to investigate the complex physical influences caused by the usage of the product. To do so, a Learning Method is developed, supported by a Lookup-Table, for the estimation of disassembly forces based on basic input data such as hours of operation and operating characteristics. Missing values are interpolated using multiple linear regression. The concept is illustrated using the example of a turbine blade connection.
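    A minimal, hypothetical Python sketch of the Lookup-Table idea with regression-based interpolation: known (operating hours, operating characteristic) entries are used to fit a multiple linear regression, which then estimates disassembly forces for operating points missing from the table. All numbers, units, and feature choices below are made up for illustration.

```python
import numpy as np

# Hypothetical lookup table: known (hours, temperature) -> force entries.
X_known = np.array([
    [1_000, 400],
    [5_000, 450],
    [10_000, 500],
    [20_000, 550],
], dtype=float)                              # [operating hours, temperature in C]
f_known = np.array([1.2, 1.8, 2.9, 4.6])     # measured disassembly forces in kN

# Fit f ~ b0 + b1*hours + b2*temperature via ordinary least squares.
A = np.column_stack([np.ones(len(X_known)), X_known])
coeffs, *_ = np.linalg.lstsq(A, f_known, rcond=None)

def estimate_force(hours: float, temperature: float) -> float:
    """Interpolate a missing lookup-table value with the regression model."""
    return float(coeffs @ np.array([1.0, hours, temperature]))

print(f"estimated force at 15,000 h / 520 C: {estimate_force(15_000, 520):.2f} kN")
```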

    Batch time optimization for an aerodynamic feeding system under changing ambient conditions

    In order to meet the demands for flexible feeding technology, a self-learning aerodynamic part feeding system has been developed. The actuated system uses a genetic algorithm to find the optimal parameter set for a high rate of correctly oriented parts. This orientation rate can change due to changes in the ambient conditions (e.g. ambient pressure or coefficient of friction). When the orientation rate within a pre-defined interval of parts drops below a determined value, a correction algorithm is triggered. The objective of this work is to develop a mathematical model that defines the optimal control interval and the orientation-rate limit for triggering the corrective algorithm, depending on the total number of parts still to be fed at any point in time. To evaluate the mathematical approach, a macroscopic simulation model of the aerodynamic feeding system was developed. It was shown that the feeding time of a batch of 10,000 parts can be reduced by up to 7% and the number of activations of the corrective algorithm can be reduced by up to 50%. Finally, the mathematical model was implemented in the system control.
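    The trigger logic described above can be pictured with a small, hypothetical Python sketch: the orientation rate is monitored over a fixed interval of parts, and the corrective algorithm is triggered when the rate falls below a limit; interval length and limit are exactly the two quantities the mathematical model is meant to optimize. The drift model and all numbers below are invented for illustration.

```python
import random

# Hypothetical macroscopic sketch of the control logic (not the published model).
INTERVAL = 200        # parts per monitoring interval (quantity to be optimized)
LIMIT = 0.85          # minimum acceptable orientation rate (quantity to be optimized)
BATCH = 10_000

orientation_prob = 0.95   # current probability of a correctly oriented part
activations = 0

for start in range(0, BATCH, INTERVAL):
    correct = sum(random.random() < orientation_prob for _ in range(INTERVAL))
    rate = correct / INTERVAL
    orientation_prob = max(0.5, orientation_prob - 0.005)   # slow drift of ambient conditions
    if rate < LIMIT:
        activations += 1
        orientation_prob = 0.95                              # corrective algorithm restores the rate

print(f"corrective algorithm activated {activations} times for {BATCH} parts")
```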