
    Transactional concurrency control for resource constrained applications

    PhD Thesis

    Transactions have long been used as a mechanism for ensuring the consistency of databases. Databases, and the associated transactional approaches, have always been an active area of research as different application domains and computing architectures have placed ever more elaborate requirements on shared data access. As transactions typically provide consistency at the expense of timeliness (abort/retry) and resources (duplicated shared data and locking), there have been substantial efforts to limit these two aspects of transactions while still satisfying application requirements. In environments where clients are geographically distant from a database, the consistency/performance trade-off becomes acute, as any retrieval of data over a network is not only expensive but also slow relative to co-located client/database systems. Furthermore, for battery-powered clients the increased overhead of transactions also represents a significant power cost. For all their drawbacks, however, transactions do provide the data consistency that many application types require. In this thesis we explore the solution space of timely transactional systems for remote clients and centralised databases, with a focus on providing a solution that, when compared to others' work in this domain: (a) maintains consistency; (b) lowers latency; and (c) improves throughput. To achieve this we revisit a technique first developed to decrease disk access times via local caching of state (for aborted transactions) and apply it to the problems prevalent in real-time databases. We demonstrate that such a technique (rerun) allows a significant change in the typical structure of a transaction, one never before considered even in rerun systems. This change brings significant performance gains not only in the traditional rerun setting of a local database but also in the distributed setting. One can also argue that our improvements yield a "greener" solution, as reduced execution time coupled with improved throughput affords longer battery life for mobile devices.
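
    The rerun idea above can be made concrete with a small sketch: the transaction executes against locally cached copies of remote items, and on validation failure only the stale items are re-fetched before the transaction is rerun locally, saving network round trips. This is a minimal illustration under assumed semantics; the names (RemoteDB, Item, run_transaction) are hypothetical and not taken from the thesis.

        # Minimal sketch of rerun-style concurrency control for a remote client
        # (hypothetical API; RemoteDB models the centralised database).

        class Item:
            def __init__(self, value, version=0):
                self.value, self.version = value, version

        class RemoteDB:
            def __init__(self):
                self.store = {"x": Item(10), "y": Item(20)}
            def fetch(self, key):                # models an expensive network read
                it = self.store[key]
                return Item(it.value, it.version)
            def stale_keys(self, reads):         # keys whose versions changed since read
                return [k for k, it in reads.items()
                        if self.store[k].version != it.version]
            def commit(self, writes):
                for k, v in writes.items():
                    it = self.store[k]
                    it.value, it.version = v, it.version + 1

        def run_transaction(db, logic):
            cache = {}                           # survives aborts: the rerun reuses it
            while True:                          # instead of re-fetching everything
                reads = {}
                def read(key):
                    if key not in cache:
                        cache[key] = db.fetch(key)
                    reads[key] = cache[key]
                    return cache[key].value
                writes = logic(read)
                stale = db.stale_keys(reads)
                if not stale:
                    db.commit(writes)
                    return
                for k in stale:                  # refresh only what changed, then rerun
                    cache[k] = db.fetch(k)

        db = RemoteDB()
        run_transaction(db, lambda read: {"x": read("x") + read("y")})
        print(db.store["x"].value)               # 30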

    Administrative Profiles Unit and Multi-Sessions Management Scheme for IPBrick Private Cloud

    The IPBrick solution for the Private Cloud has some limitations that competitors in the market may exploit in the future. Administrative access to the web interface of an IPBrick server is restricted to a single user, who manages everything through a given set of features and services. The existing solution also does not support multi-session management, so multiple update operations cannot be applied to the database simultaneously; if this is attempted, it can have adverse effects on the consistency of the data stored in the database. The aim of this thesis is to develop features such as Administrative Profiles and Multi-Sessions Management to improve the IPBrick solution.

    Scaling Up Concurrent Analytical Workloads on Multi-Core Servers

    Today, an ever-increasing number of researchers, businesses, and data scientists collect and analyze massive amounts of data in database systems. The database system needs to process the resulting highly concurrent analytical workloads by exploiting modern multi-socket multi-core processor systems with non-uniform memory access (NUMA) architectures and increasing memory sizes. Conventional execution engines, however, are not designed for many cores, and neither scale nor perform efficiently on modern multi-core NUMA architectures. Firstly, their query-centric approach, where each query is optimized and evaluated independently, can result in unnecessary contention for hardware resources due to redundant work found across queries in highly concurrent workloads. Secondly, they are unaware of the non-uniform memory access costs and the underlying hardware topology, incurring unnecessarily expensive memory accesses and bandwidth saturation. In this thesis, we show how these scalability and performance impediments can be solved by exploiting sharing among concurrent queries and incorporating NUMA-aware adaptive task scheduling and data placement strategies in the execution engine. Regarding sharing, we identify and categorize state-of-the-art techniques for sharing data and work across concurrent queries at run-time into two categories: reactive sharing, which shares intermediate results across common query sub-plans, and proactive sharing, which builds a global query plan with shared operators to evaluate queries. We integrate the original research prototypes that introduce reactive and proactive sharing, perform a sensitivity analysis, and show how and when each technique benefits performance. Our most significant finding is that reactive and proactive sharing can be combined to exploit the advantages of both sharing techniques for highly concurrent analytical workloads. Regarding NUMA-awareness, we identify, implement, and compare various combinations of task scheduling and data placement strategies under a diverse set of highly concurrent analytical workloads. We develop a prototype based on a commercial main-memory column-store database system. Our most significant finding is that there is no single strategy for task scheduling and data placement that is best for all workloads. Specifically, inter-socket stealing of memory-intensive tasks can hurt overall performance, and unnecessary partitioning of data across sockets incurs overhead. For this reason, we implement algorithms that adapt task scheduling and data placement to the workload at run-time. Our experiments show that both sharing and NUMA-awareness can significantly improve the performance and scalability of highly concurrent analytical workloads on modern multi-core servers. Thus, we argue that sharing and NUMA-awareness are key factors for supporting faster processing of big data analytical applications, fully exploiting the hardware resources of modern multi-core servers, and providing a more responsive user experience.
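
    The adaptive scheduling finding can be illustrated with a toy stealing rule: a worker serves its own socket's queue first and steals across sockets only compute-bound tasks, since migrating memory-bound work away from its data wastes interconnect bandwidth. The queue layout and task representation below are illustrative assumptions, not the thesis prototype.

        # Toy NUMA-aware work stealing: prefer local tasks; steal across sockets
        # only when the candidate task is compute-bound (illustrative names).

        from collections import deque

        NUM_SOCKETS = 2
        queues = [deque() for _ in range(NUM_SOCKETS)]

        def submit(task, socket):
            queues[socket].append(task)            # place task near its data partition

        def next_task(my_socket):
            if queues[my_socket]:
                return queues[my_socket].popleft()
            for other in range(NUM_SOCKETS):       # idle: try to steal
                if other == my_socket:
                    continue
                for i, task in enumerate(queues[other]):
                    if not task["memory_bound"]:   # steal compute-bound work only
                        del queues[other][i]
                        return task
            return None                            # stay idle rather than drag
                                                   # memory-bound work off-socket

        submit({"name": "scan", "memory_bound": True}, socket=0)
        submit({"name": "hash", "memory_bound": False}, socket=0)
        print(next_task(1))                        # steals the compute-bound hash task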

    Flexible Composition of Robot Logic with Computer Vision Services

    Vision-based robotics is an ever-growing field within industrial automation. Demands for greater flexibility and higher quality motivate manufacturing companies to adopt these technologies for tasks such as material handling, assembly, and inspection. In addition to direct use in the manufacturing setting, robots combined with vision systems serve as highly flexible means for realizing prototyping test-beds in the R&D context.

    Traditionally, the problem areas of robotics and computer vision are attacked separately. An exception is the study of vision-based servo control, which focuses on the control-theoretic aspects of vision-based robot guidance under the assumption that robot joints can be controlled directly. The missing part is a systematic approach to implementing robotic applications with vision sensing, given industrial robots constrained by their programming interface. This thesis targets the development process of vision-based robotic systems in an event-driven environment. It focuses on the design and composition of three functional components: (1) the robot control function, (2) the image acquisition function, and (3) the image processing function. The thesis approaches its goal through a combination of laboratory results, a case study of an industrial company (Kongsberg Automotive AS), and the formalization of computational abstractions and architectural solutions. The image processing function is tackled with the application of reactive pipelines. The proposed system development method allows for a smooth transition from early-stage vision algorithm prototyping to the integration phase. The image acquisition function is exposed in a service-oriented manner with the help of a flexible set of concurrent computational primitives. To realize control of industrial robots, a distributed architecture is devised that supports composability of communication-heavy robot logic, as well as flexible coupling of the robot control node with vision services.
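
    As a rough illustration of the reactive-pipeline idea for the image processing function, the sketch below chains acquisition and processing stages so that results are emitted as frames arrive, and each stage can be replaced independently during prototyping. The stage names and the use of plain Python generators are assumptions for illustration only.

        # Reactive pipeline sketch: frames flow through composable stages
        # (illustrative stage names; FakeCamera stands in for a real device).

        def frames(camera):                  # acquisition stage
            while True:
                frame = camera.grab()
                if frame is None:
                    return
                yield frame

        def grayscale(stream):               # processing stage 1
            for frame in stream:
                yield [[sum(px) // 3 for px in row] for row in frame]

        def detect(stream, threshold=128):   # processing stage 2: crude blob count
            for img in stream:
                yield sum(1 for row in img for v in row if v > threshold)

        class FakeCamera:                    # stand-in for an acquisition service
            def __init__(self, n): self.n = n
            def grab(self):
                self.n -= 1
                return [[(200, 200, 200)]] if self.n >= 0 else None

        for result in detect(grayscale(frames(FakeCamera(3)))):
            print(result)                    # events are emitted as frames arrive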

    Transactional and analytical data management on persistent memory

    The increasing number of smart devices and sensors, but also social media, causes the volume of data, and thus the demanded processing speed, to grow steadily. At the same time, many applications need to store data persistently or even comply with strict transactional guarantees. The novel storage technology Persistent Memory (PMem), with its unique properties, appears to be a natural candidate to meet these requirements efficiently. Compared to DRAM, it is more scalable, less expensive, and durable. In contrast to disks, it is significantly faster and directly addressable. Therefore, this dissertation investigates the deliberate employment of PMem to fit the needs of modern applications. After presenting the fundamentals of how PMem works and how to work with it, we focus primarily on three aspects of data management. First, we disassemble several persistent data and index structures into their underlying design primitives to reveal the trade-offs for various access patterns. This allows us to identify their best use cases and vulnerabilities, but also to gain general insights into the design of PMem-based data structures. Second, we propose two storage layouts that target analytical workloads and enable efficient query execution on arbitrary attributes. While the first approach employs a linked list of multi-dimensional clustered blocks that potentially span several storage layers, the second approach is a multi-dimensional index that caches nodes in DRAM. Third, we show how to improve stream and event processing systems involving transactional state management using the preceding data structures and insights. In this context, we propose a novel Transactional Stream Processing (TSP) model with appropriate consistency and concurrency protocols adapted to PMem. Together, the discussed aspects are intended to provide a foundation for developing even more sophisticated PMem-enabled systems. At the same time, they show how data management tasks can take advantage of PMem by opening up new application domains; improving performance, scalability, and recovery guarantees; simplifying code complexity; and reducing economic and environmental costs.
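
    One recurring design primitive in PMem data structures is explicit persist ordering: write the payload, persist it, and only then set a validity flag, so a crash can never expose a half-written record. The sketch below emulates this on a memory-mapped file; real PMem code would issue cache-line write-backs and fences (for example via PMDK), and the file path, record layout, and 4096-byte page size here are purely illustrative.

        # Emulated persist-ordering primitive using mmap.flush() as the barrier.

        import mmap, os, struct

        PATH = "/tmp/pmem_demo.bin"
        with open(PATH, "wb") as f:
            f.write(b"\x00" * 4096)

        fd = os.open(PATH, os.O_RDWR)
        pm = mmap.mmap(fd, 4096)

        def append_record(offset, payload):
            pm[offset + 1 : offset + 1 + len(payload)] = payload
            pm.flush(offset & ~4095, 4096)   # persist the payload first ...
            pm[offset] = 1                   # ... then set the valid flag
            pm.flush(offset & ~4095, 4096)   # persist the flag: the ordering
                                             # guarantees no torn record is visible

        append_record(0, struct.pack("<I", 42))
        print("valid:", pm[0] == 1)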

    Engineering Automation for Reliable Software Interim Progress Report (10/01/2000 - 09/30/2001)

    Prepared for: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211 (0473-MA-SP). Approved for public release; distribution is unlimited.

    The objective of our effort is to develop a scientific basis for producing reliable software that is also flexible and cost-effective for the DoD distributed software domain. This objective addresses the long-term goals of increasing the quality of service provided by complex systems while reducing development risks, costs, and time. Our work focuses on "wrap and glue" technology based on a domain-specific distributed prototype model. The key to making the proposed approach reliable, flexible, and cost-effective is the automatic generation of glue and wrappers based on a designer's specification. The "wrap and glue" approach allows system designers to concentrate on the deeper and more difficult interoperability problems and to define solutions in those terms, while freeing them from implementation details. Specific research areas for the proposed effort include technology enabling rapid prototyping, inference for design checking, automatic program generation, distributed real-time scheduling, wrapper and glue technology, and reliability assessment and improvement. The proposed technology will be integrated with past research results to enable a quantum leap forward in the state of the art for rapid prototyping.

    Space station data system analysis/architecture study. Task 2: Options development, DR-5. Volume 2: Design options

    The primary objective of Task 2 is the development of an information base that will support the conduct of trade studies and provide sufficient data to make key design/programmatic decisions. This includes: (1) the establishment of the option categories most likely to influence Space Station Data System (SSDS) definition; (2) the identification of preferred options in each category; and (3) the characterization of these options with respect to performance attributes, constraints, cost, and risk. This volume contains the options development for the design category, which comprises alternative structures, configurations, and techniques that can be used to develop designs responsive to the SSDS requirements. The specific areas discussed are software, including database management and distributed operating systems; system architecture, including fault tolerance, system growth/automation/autonomy, and system interfaces; time management; and system security/privacy. Also discussed are space communications and local area networking.

    Improving GPU SIMD Control Flow Efficiency via Hybrid Warp Size Mechanism

    High single instruction multiple data (SIMD) efficiency and low power consumption have made graphics processing units (GPUs) an ideal platform for many complex computational applications. Thousands of threads can be created by programmers and grouped into fixed-size SIMD batches, known as warps. High throughput is then achieved by concurrently executing such warps with minimal control overhead. However, if a branch instruction occurs that assigns different paths to different threads, the warp is broken into multiple warps that must be executed serially, reducing the efficiency advantage of SIMD. In this thesis, the contemporary fixed-size warp design is abandoned and a hybrid warp size (HWS) mechanism is proposed. Mixed-size warps are generated according to HWS and are scheduled and issued flexibly. Once a branch divergence occurs, split warps are squeezed according to the proposed algorithm, and warp sizes are downscaled wherever applicable. Based on the updated warp sizes, warp schedulers calculate the number of cycles the current warp needs and issue the next warp accordingly. As a result, hybrid warps are pushed into pipelines as soon as possible and more pipeline stages are overlapped. The simulation results show that this mechanism yields an average speedup of 1.20 over the baseline architecture for a wide variety of general-purpose GPU applications. This work also integrates HWS with dynamic warp formation (DWF), a well-known branch handling mechanism that improves SIMD utilization by forming new warps out of split warps in real time. The warp forming policy is modified to better tolerate warp conflicts, and squeeze operations are added before a warp merges with other warps. The simulation shows that the combination of DWF and HWS generates an average speedup of 1.27 over the DWF-only platform for the same set of GPU benchmarks.
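
    A back-of-the-envelope model conveys why downscaling warp sizes helps after divergence: each side of a branch is squeezed to the smallest warp size covering its active threads, so the scheduler issues fewer cycles than a fixed-size design that pays for the full warp on both paths. The lane width, masks, and cycle accounting below are illustrative assumptions, not the simulated architecture.

        # Toy cycle model of warp squeezing after branch divergence.

        SIMD_LANES = 8                         # lanes per issue cycle (assumed)

        def issue_cycles(warp_size):
            return (warp_size + SIMD_LANES - 1) // SIMD_LANES

        def squeeze(active_mask):
            active = bin(active_mask).count("1")
            size = 1
            while size < active:               # smallest power-of-two warp size
                size *= 2                      # covering the active threads
            return size

        taken_mask     = 0x0000_00FF           # 8 threads take the branch
        not_taken_mask = 0xFFFF_FF00           # 24 threads fall through

        baseline = issue_cycles(32) * 2        # fixed-size: full warp on both paths
        hws = (issue_cycles(squeeze(taken_mask))
               + issue_cycles(squeeze(not_taken_mask)))
        print(baseline, hws)                   # 8 vs. 5 issue cycles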

    Event-Driven Software Architecture for Delay- and Disruption-Tolerant Networks

    Continuous end-to-end connectivity is not available all the time, not even in wired networks. Delay- and Disruption-Tolerant Networking (DTN) allows devices to communicate even when there is no continuous path to the destination, by replacing end-to-end semantics with a hop-by-hop store-carry-and-forward approach. Since existing implementations of DTN software suffer from various limitations, this work presents the event-driven software architecture of IBR-DTN, a lean, lightweight, and extensible implementation of a networking stack for Delay- and Disruption-Tolerant Networking. In a comprehensive description of the architecture and the underlying design decisions, this work focuses on eliminating weaknesses of the Bundle Protocol (RFC 5050). One of these is the dependency on synchronized clocks. This work therefore takes a closer look at that requirement and presents approaches to bypass the dependency in some cases. For scenarios that do require synchronized clocks, an approach is presented for distributing time information that is used to adjust the individual clocks of nodes. To compare the accuracy of the time information provided by each node, this approach introduces a clock rating. Additionally, a self-aligning algorithm is used to automatically adjust a node's clock-rating parameters according to the estimated accuracy of its clock. In an evaluation, the general portability of the bundle node software is demonstrated by porting it to various systems, and a performance analysis compares the new implementation with existing software. To evaluate the time-synchronization algorithm, the ONE simulator is modified to give every node an individual clock with a randomized clock error. Additionally, a specialized testbed called Hydra has been developed to test the implementation of the time-synchronization approach in real software. Hydra instantiates virtualized nodes running a complete operating system and provides a way to test real software in large DTN scenarios. Both the simulation and the emulation in Hydra show that the time-synchronization algorithm provides adequate accuracy, depending on the inter-contact times.
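
    The clock-rating idea can be sketched as follows: on contact, a node adjusts its clock toward a peer whose clock is rated more trustworthy, weighting the correction by the two ratings. The formula and constants below are illustrative assumptions rather than the dissertation's exact algorithm.

        # Sketch of rating-weighted clock adjustment on node contact.

        def sync(my_time, my_rating, peer_time, peer_rating):
            if peer_rating <= my_rating:
                return my_time, my_rating            # peer is no more trustworthy
            w = peer_rating / (my_rating + peer_rating)
            new_time = my_time + w * (peer_time - my_time)
            new_rating = (my_rating + peer_rating) / 2   # confidence improves
            return new_time, new_rating

        t, r = sync(my_time=1000.0, my_rating=0.2,
                    peer_time=1010.0, peer_rating=0.8)
        print(round(t, 1), r)                        # 1008.0 0.5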