1,192 research outputs found

    Efficient Model Checking: The Power of Randomness

    Get PDF

    NCC: Natural Concurrency Control for Strictly Serializable Datastores by Avoiding the Timestamp-Inversion Pitfall

    Full text link
    Strictly serializable datastores greatly simplify the development of correct applications by providing strong consistency guarantees. However, existing techniques pay unnecessary costs for naturally consistent transactions, which arrive at servers in an order that is already strictly serializable. We find these transactions are prevalent in datacenter workloads. We exploit this natural arrival order by executing transaction requests with minimal costs while optimistically assuming they are naturally consistent, and then leverage a timestamp-based technique to efficiently verify whether the execution is indeed consistent. In the process of designing such a timestamp-based technique, we identify a fundamental pitfall in relying on timestamps to provide strict serializability, and name it the timestamp-inversion pitfall. We find that timestamp-inversion has affected several existing works. We present Natural Concurrency Control (NCC), a new concurrency control technique that guarantees strict serializability and ensures minimal costs -- i.e., one-round latency, lock-free, and non-blocking execution -- in the best (and common) case by leveraging natural consistency. NCC is enabled by three key components: non-blocking execution, decoupled response control, and timestamp-based consistency check. NCC avoids timestamp-inversion with a new technique, response timing control, and proposes two optimization techniques, asynchrony-aware timestamps and smart retry, to reduce false aborts. Moreover, NCC designs a specialized protocol for read-only transactions, which is the first to achieve optimal best-case performance while ensuring strict serializability, without relying on synchronized clocks. Our evaluation shows that NCC outperforms state-of-the-art solutions by an order of magnitude on many workloads.
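    The optimistic execute-then-validate pattern that NCC builds on can be illustrated with a much-simplified, single-node sketch in C. This is not the NCC protocol itself (NCC validates across servers and controls response timing); it only shows the generic idea of lock-free execution against versioned items followed by a commit-time timestamp check, and all names below are hypothetical:

        #include <stdbool.h>
        #include <stdint.h>

        #define MAX_READS 16

        /* Hypothetical versioned item: every committed write bumps the
         * version, so the version acts as the item's commit timestamp. */
        typedef struct {
            int      value;
            uint64_t version;
        } item_t;

        /* Read-set entry: which item was read and at which version. */
        typedef struct {
            item_t  *item;
            uint64_t seen_version;
        } read_entry_t;

        typedef struct {
            read_entry_t reads[MAX_READS];
            int          nreads;
        } txn_t;

        /* Execute a read without locks; remember the version that was seen.
         * (Bounds checking omitted for brevity.) */
        static int txn_read(txn_t *t, item_t *it) {
            t->reads[t->nreads].item = it;
            t->reads[t->nreads].seen_version = it->version;
            t->nreads++;
            return it->value;
        }

        /* Commit-time check: the execution was consistent only if nothing
         * the transaction read has been overwritten; otherwise abort. */
        static bool txn_validate(const txn_t *t) {
            for (int i = 0; i < t->nreads; i++)
                if (t->reads[i].item->version != t->reads[i].seen_version)
                    return false;
            return true;
        }

        int main(void) {
            item_t x = { .value = 42, .version = 1 };
            txn_t  t = { .nreads = 0 };

            int v = txn_read(&t, &x);   /* optimistic, lock-free read */
            /* A concurrent writer committing here would bump x.version. */
            return (txn_validate(&t) && v == 42) ? 0 : 1;
        }

    In the naturally consistent (and common) case the validation succeeds on the first attempt, which is what allows a scheme of this kind to keep one-round, lock-free, non-blocking execution in the best case.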

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    On-premise containerized, light-weight software solutions for Biomedicine

    Get PDF
    Bioinformatics software systems are critical tools for analysing large-scale biological data, but their design and implementation can be challenging due to the need for reliability, scalability, and performance. This thesis investigates the impact of several software approaches on the design and implementation of bioinformatics software systems. These approaches include software patterns, microservices, distributed computing, containerisation, and container orchestration. The research focuses on understanding how these techniques affect the reliability, scalability, performance, and efficiency of bioinformatics software systems. Furthermore, this research highlights the challenges and considerations involved in their implementation. This study also examines potential solutions for implementing container orchestration in bioinformatics research teams with limited resources and the challenges of using container orchestration. Additionally, the thesis considers microservices and distributed computing and how these can be optimised in the design and implementation process to enhance the productivity and performance of bioinformatics software systems. The research was conducted using a combination of software development, experimentation, and evaluation. The results show that implementing software patterns can significantly improve the code accessibility and structure of bioinformatics software systems. Microservices and containerisation also enhanced system reliability, scalability, and performance. Additionally, the study indicates that adopting advanced software engineering practices, such as model-driven design and container orchestration, can facilitate efficient and productive deployment and management of bioinformatics software systems, even for researchers with limited resources. Overall, we develop a software system integrating all our findings. Our proposed system demonstrated the ability to address challenges in bioinformatics. The thesis makes several key contributions in addressing the research questions surrounding the design, implementation, and optimisation of bioinformatics software systems using software patterns, microservices, containerisation, and advanced software engineering principles and practices. Our findings suggest that incorporating these technologies can significantly improve the reliability, scalability, performance, efficiency, and productivity of bioinformatics software systems.
    Bioinformatics software systems are important tools for the analysis of large-scale biological data. Their development and implementation can, however, be challenging because of the required reliability, scalability, and performance. The aim of this thesis is to investigate the effects of software patterns, microservices, distributed systems, containerisation, and container orchestration on the architecture and implementation of bioinformatics software systems. The research focuses on understanding how these techniques affect the reliability, scalability, performance, and efficiency of bioinformatics software systems and which challenges are associated with their conception and implementation. This work also examines potential solutions for implementing container orchestration in bioinformatics research teams with limited resources and the restrictions on its use in this context. Furthermore, the key factors that influence the success of bioinformatics software systems built with containerisation, microservices, and distributed computing are examined, together with how these can be optimised during design and implementation to increase the productivity and performance of bioinformatics software systems. The work was carried out using a combination of software development, experimentation, and evaluation. The results show that implementing software patterns can considerably improve the reliability and scalability of bioinformatics software systems. The use of microservices and containerisation likewise contributed to increasing the reliability, scalability, and performance of the system. In addition, the thesis shows that applying software engineering practices such as model-driven design and container orchestration can facilitate the efficient and productive deployment and management of bioinformatics software systems. The implementation of this software system also addresses challenges faced by research groups with limited resources. Overall, the system has demonstrated that it can address challenges in bioinformatics and thus represents a valuable tool for researchers in this field. The thesis makes several important contributions to answering research questions concerning the design, implementation, and optimisation of software systems for bioinformatics using software engineering principles and practices. Our results indicate that incorporating these technologies can considerably improve the reliability, scalability, performance, efficiency, and productivity of bioinformatics software systems.

    Toward Fault-Tolerant Applications on Reconfigurable Systems-on-Chip

    Get PDF
    The abstract is in the attachment.

    Architecture and Advanced Electronics Pathways Toward Highly Adaptive Energy-Efficient Computing

    Get PDF
    With the explosion of the number of compute nodes, the bottleneck of future computing systems lies in the network architecture connecting the nodes. Addressing this bottleneck requires replacing current backplane-based network topologies. We propose to revolutionize computing electronics by realizing embedded optical waveguides for on-board networking and wireless chip-to-chip links at a 200-GHz carrier frequency connecting neighboring boards in a rack. The control of novel rate-adaptive optical and mm-wave transceivers needs tight interlinking with the system software for runtime resource management.

    An Internet of Things (IoT) based wide-area Wireless Sensor Network (WSN) platform with mobility support.

    Get PDF
    Wide-area remote monitoring applications use cellular networks or satellite links to transfer sensor data to central storage. Remote monitoring applications use Wireless Sensor Networks (WSNs) to accommodate more Sensor Nodes (SNs) and to simplify management. An Internet of Things (IoT) network connects the WSN with the data storage and other application-specific services over the existing internet infrastructure. Neither cellular networks, such as Narrow-Band IoT (NB-IoT), nor satellite links are suitable for point-to-point connections to the SNs because of their limited coverage, high cost, and energy requirements. A Low-Power Wide-Area Network (LPWAN) is used to interconnect all the SNs and accumulate the data at a single point, called the gateway, before sending it to the IoT network. The WSN clusters the SNs to increase network coverage and uses multiple wireless links between repeater nodes (called hops) to reach the gateway at a longer distance. A clustered WSN can cover up to a few kilometres using LPWAN technologies such as Zigbee over multiple hops, with each Zigbee link spanning 200 m to 500 m. Other LPWAN technologies, such as LoRa, can provide an extended range of 1 km to 15 km. However, LoRa is not well suited to a clustered WSN because of its long Time on Air (TOA), which introduces data transmission delay that becomes more severe as the hop count increases. Moreover, a sensor node needs a higher antenna to obtain the long-range benefit of LoRa over a single link (hop) instead of using multiple hops to cover the same range. With the increased WSN coverage area, remote monitoring applications such as smart farming may require mobile sensor nodes. This research focuses on overcoming LoRa's limitations (long TOA and antenna height) and on accommodating mobility in a high-density, wide-area WSN for future remote monitoring applications. Hence, this research proposes lightweight communication protocols and networking algorithms for LoRa that achieve mobility, energy efficiency, and wider coverage of up to a few hundred kilometres for the WSN. This thesis is divided into four parts. It presents two data transmission protocols for LoRa to achieve a higher data rate and wider network coverage, one networking algorithm for the wide-area WSN, and a channel synchronization algorithm to improve the data rate of LoRa links. Part one presents a lightweight data transmission protocol for LoRa that uses a mobile data accumulator (called a data sink) to increase the monitored coverage area and data transmission energy efficiency. The proposed Lightweight Dynamic Auto Reconfigurable Protocol (LDAP) uses direct or single-hop transmission from the SNs, with one of them acting as a repeater node. Wide-area remote monitoring applications such as Water Quality Monitoring (WQM) can acquire data from geographically distributed water resources using LDAP and a mobile Data Sink (DS) mounted on an Unmanned Aerial Vehicle (UAV). The proposed LDAP can acquire data from at least 147 SNs covering 128 km in one direction, reducing the DS requirement to 5% compared with other WSNs that use Zigbee and static DSs for the same coverage area. Applications like smart farming and environmental monitoring may also require mobile sensor nodes and data sinks. The WSNs for these applications need real-time network management algorithms and routing protocols for a dynamic WSN with mobility, which is not feasible with static WSN technologies.
    This part proposes a lightweight clustering algorithm for the dynamic WSN (with mobility) that uses the proposed LDAP to form clusters in real time during data accumulation by the mobile DS. The proposed Lightweight Dynamic Clustering Algorithm (LDCA) can form real-time clusters of mobile or stationary SNs using a mobile DS or a static gateway (GW). A WSN using LoRa and LDCA increases network capacity and coverage area while reducing the required number of DSs. It also reduces clustering energy to 33% and shows a clustering efficiency of up to 98% for single-hop clustering covering 100 SNs. LoRa is not suitable for a clustered WSN with multiple hops because of its long TOA, which depends on the LoRa link configuration (bandwidth and spreading factor). This research therefore proposes a channel synchronization algorithm that improves the data rate of a LoRa link by combining multiple LoRa radio channels into a single logical channel. The increased data rate enhances the capacity of the clusters in the WSN, supporting faster clustering with mobile sensor nodes and data sinks. Along with LDCA, the proposed Lightweight Synchronization Algorithm for Quasi-orthogonal LoRa channels (LSAQ), which facilitates multi-hop data transfer, increases WSN capacity and coverage area. This research investigates the quasi-orthogonality of LoRa in terms of radio channel frequency, spreading factor (SF), and bandwidth. It derives mathematical models to obtain the optimal LoRa parameters for parallel data transmission using multiple SFs and develops a synchronization algorithm for LSAQ. The proposed LSAQ achieves up to a 46% improvement in network capacity and 58% in data rate compared with a WSN using traditional LoRa Medium Access Control (MAC) layer protocols. Besides the high-density clustered WSN, remote monitoring applications such as plant phenotyping may require transferring images or other high-volume data over LoRa links. Wireless protocols that transmit high-volume data over a low-data-rate link (such as LoRa) require many packets and therefore create significant packet overhead. Moreover, the reliability of these protocols depends heavily on acknowledgement (ACK) messages, which add load to the overall data transmission and reduce the application-specific effective data rate (goodput). This research proposes an application-layer protocol that improves goodput when transferring an image or other sequential data over LoRa links in the WSN. The proposed dynamic acknowledgement (DACK) protocol for the LoRa physical layer reduces ACK message overhead: it uses end-of-transmission ACK messaging, transmits multiple packets as a block, and retransmits missing packets after receiving the ACK message at the end of multiple blocks. The goodput depends on the block size and the number of lost packets that must be retransmitted. The evaluation shows that DACK LoRa can reduce the total ACK time 10 to 30 times compared with a stop-and-wait protocol and 10 times compared with a multi-packet ACK protocol. The wide-area WSN with mobility considered here requires different evaluation metrics. The performance metrics used for static WSNs do not consider mobility and the related parameters, such as clustering efficiency, and hence cannot evaluate the performance of the proposed wide-area WSN platform with mobility support. Therefore, new and modified performance metrics are proposed to measure dynamic performance.
    These metrics can measure the real-time clustering performance with mobile data sinks and sensor nodes, the cluster size, the coverage area of the WSN, and more. All required hardware and software designs, dimensioning, and performance evaluation models are also presented.
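    LoRa's long time on air, which the thesis identifies as the main obstacle to multi-hop clustering, can be made concrete with the standard SX127x airtime formula from the Semtech datasheet. The parameter values in the sketch below are illustrative and not taken from the thesis:

        #include <math.h>
        #include <stdio.h>

        /* LoRa time on air per the SX127x datasheet formula.
         * sf: spreading factor (7..12), bw_hz: bandwidth in Hz,
         * payload_len: payload bytes, cr: coding rate 1..4 (4/5..4/8),
         * preamble_syms: preamble length, explicit_header/crc_on/low_dr_opt:
         * the usual PHY flags (1 = enabled). */
        static double lora_time_on_air_s(int sf, double bw_hz, int payload_len,
                                         int cr, int preamble_syms,
                                         int explicit_header, int crc_on,
                                         int low_dr_opt) {
            double t_sym = pow(2.0, sf) / bw_hz;              /* symbol duration */
            double t_preamble = (preamble_syms + 4.25) * t_sym;

            double num = 8.0 * payload_len - 4.0 * sf + 28.0
                       + 16.0 * crc_on - 20.0 * (1 - explicit_header);
            double den = 4.0 * (sf - 2.0 * low_dr_opt);
            double n_payload = 8.0 + fmax(ceil(num / den) * (cr + 4), 0.0);

            return t_preamble + n_payload * t_sym;
        }

        int main(void) {
            /* Illustrative long-range settings: SF12, 125 kHz, 20-byte
             * payload, coding rate 4/5, 8-symbol preamble, explicit header,
             * CRC on, low-data-rate optimisation on. Prints roughly 1.3 s. */
            double toa = lora_time_on_air_s(12, 125e3, 20, 1, 8, 1, 1, 1);
            printf("time on air: %.0f ms\n", toa * 1000.0);
            return 0;
        }

    At SF12 a single 20-byte packet occupies the channel for over a second, so every additional hop in a clustered path adds on the order of a second of delay; this is the kind of per-hop cost that motivates the single-hop LDAP and the higher aggregate data rate of LSAQ.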

    Lessons from Formally Verified Deployed Software Systems (Extended version)

    Full text link
    The technology of formal software verification has made spectacular advances, but how much does it actually benefit the development of practical software? Considerable disagreement remains about the practicality of building systems with mechanically checked proofs of correctness. Is this prospect confined to a few expensive, life-critical projects, or can the idea be applied to a wide segment of the software industry? To help answer this question, the present survey examines a range of projects, in various application areas, that have produced formally verified systems and deployed them for actual use. It considers the technologies used, the form of verification applied, the results obtained, and the lessons that can be drawn for the software industry at large and its ability to benefit from formal verification techniques and tools. Note: a short version of this paper is also available, covering in detail only a subset of the considered systems. The present version is intended for full reference.
    Comment: arXiv admin note: text overlap with arXiv:1211.6186 by other authors

    Transactional memory for high-performance embedded systems

    Get PDF
    The increasing demand for computational power in embedded systems, which is required for tasks such as autonomous driving, can only be met by exploiting the resources offered by modern hardware. Due to physical limitations, hardware manufacturers have moved to increasing the number of cores per processor instead of further raising clock rates. Therefore, in our view, the additionally required computing power can only be achieved by exploiting parallelism. Unfortunately, writing parallel code is considered a difficult and complex task. Hardware Transactional Memories (HTMs) are a suitable tool for writing sophisticated parallel software. However, HTMs were not specifically developed for embedded systems and therefore cannot be used without consideration. The use of conventional HTMs increases complexity and makes it more difficult to foresee implications for other important properties of embedded systems. This thesis therefore describes how an HTM for embedded systems could be implemented. The HTM was designed to allow the parallel execution of software and to offer functionality that is useful for embedded systems. The focus lay on eliminating the typical limitations of conventional HTMs, providing several conflict resolution mechanisms, investigating real-time behavior, and offering a feature to conserve energy. To enable the desired functionalities, the structure of the HTM described in this work differs strongly from that of a conventional HTM. Compared with the baseline HTM, which was also designed and implemented in this thesis, the biggest adaptation concerns conflict detection: it was modified so that conflicts can be detected and resolved centrally. For this, the cache hierarchy as well as the cache coherence had to be adapted and partially extended. The system was implemented in the cycle-accurate gem5 simulator, and the eight benchmarks of the STAMP benchmark suite were used for evaluation. The evaluation of the various functionalities shows that the mechanisms work and add value for operation in embedded systems.
    The ever-growing demand for computing power in embedded systems, needed for tasks such as autonomous driving, can only be met by using the available resources efficiently. Owing to physical limits, processor manufacturers have moved to equipping processors with several cores instead of raising clock rates further. In our view, the additionally required computing power can therefore only be obtained by increasing parallelism. Hardware transactional memories (HTMs) allow their users to write parallel programs quickly and easily. However, HTMs were not developed specifically for embedded systems and are therefore only usable for them to a limited extent. Using conventional HTMs increases complexity, making it harder to foresee whether other important properties can still be achieved. To better enable the use of HTMs in embedded systems, this thesis describes a concrete approach. The HTM was designed so that it allows parallel execution of programs and has properties that are useful for embedded systems, including the removal of the typical limitations of conventional HTMs, influence over the conflict resolution mechanism, support for predictable execution, and a feature to save energy. To enable the desired functionality, the structure of the HTM described in this work differs greatly from a classic HTM. Compared with the reference HTM, which was also designed and implemented as part of this thesis, the biggest change concerns conflict detection: it was altered so that conflicts can be detected and resolved centrally. For this, the cache hierarchy and cache coherence had to be heavily adapted and partially extended. The system was implemented in the cycle-accurate gem5 simulator, and the eight benchmarks of the STAMP benchmark suite were used for evaluation. The evaluation of the various functions shows that the mechanisms work and thus add value for embedded systems.
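    The basic usage pattern of a hardware transactional memory -- attempt the critical section as a hardware transaction and fall back to a lock when it aborts -- can be sketched with x86 RTM intrinsics as a stand-in. The thesis targets its own HTM design modelled in gem5, not Intel TSX, so the sketch below only illustrates the general programming model; all names are illustrative:

        #include <immintrin.h>   /* x86 RTM intrinsics; compile with -mrtm.
                                    Assumes a CPU with RTM support; a real
                                    program would check CPUID first. */
        #include <pthread.h>

        static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
        static volatile int    fallback_held = 0;
        static long            shared_counter = 0;

        /* Increment a shared counter inside a hardware transaction,
         * falling back to a conventional lock if the transaction aborts. */
        static void counter_increment(void) {
            unsigned status = _xbegin();
            if (status == _XBEGIN_STARTED) {
                /* Reading the fallback flag adds it to the read set, so a
                 * thread on the slow path aborts this transaction. */
                if (fallback_held)
                    _xabort(0xff);
                shared_counter++;
                _xend();             /* commit; conflicts are detected by HW */
                return;
            }
            /* Aborted (conflict, capacity, ...): take the lock-based path. */
            pthread_mutex_lock(&fallback_lock);
            fallback_held = 1;
            shared_counter++;
            fallback_held = 0;
            pthread_mutex_unlock(&fallback_lock);
        }

        int main(void) {
            counter_increment();
            return shared_counter == 1 ? 0 : 1;
        }

    A commodity HTM such as TSX simply aborts one of the conflicting transactions; the design described in the thesis instead detects and resolves conflicts centrally and offers several resolution mechanisms, which is what it relies on for more predictable behavior in embedded systems.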