28 research outputs found

    Uncomputation in the Qrisp high-level Quantum Programming Framework

    Uncomputation is an essential part of reversible computing and plays a vital role in quantum computing. Using this technique, memory resources can be safely deallocated without performing a non-reversible deletion process. In quantum computing, several algorithms depend on it because they require disentangled states in the course of their execution. Uncomputation is therefore not only a matter of resource management but also an algorithmic necessity. However, synthesizing uncomputation circuits by hand is tedious, and the task can be automated. In this paper, we describe the interface for the automated generation of uncomputation circuits in our Qrisp framework. Our algorithm for synthesizing uncomputation circuits in Qrisp is based on an improved version of "Unqomp", a solution presented by Paradis et al. The paper also presents several improvements to the original algorithm that make it suitable for the needs of a high-level programming framework. Qrisp itself is a fully compilable, high-level programming language/framework for gate-based quantum computers that abstracts from many of the underlying hardware details. Qrisp's goal is to support a high-level programming paradigm as known from classical software development.
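    The pattern that automated uncomputation generalizes can be illustrated classically. The sketch below is not Qrisp's actual API; it is a toy reversible-logic analogy of the compute-copy-uncompute idiom: a value is computed into an ancilla bit, copied out, and the computing steps are replayed (reversible gates are their own inverses here) so the ancilla returns to zero and can be safely deallocated.

```python
# Toy classical analogy of compute-copy-uncompute (illustrative, not Qrisp).

def compute(state, a, b, anc):
    """Toffoli gate: anc ^= a AND b (reversible)."""
    state[anc] ^= state[a] & state[b]

def copy_out(state, anc, out):
    """CNOT gate: out ^= anc (reversible copy onto a fresh zero bit)."""
    state[out] ^= state[anc]

def uncompute(state, a, b, anc):
    """The Toffoli gate is self-inverse, so replaying it restores anc to 0."""
    state[anc] ^= state[a] & state[b]

# Bits: [a, b, ancilla, output]; ancilla and output start at 0.
state = [1, 1, 0, 0]
compute(state, 0, 1, 2)    # ancilla = a AND b = 1
copy_out(state, 2, 3)      # output  = 1
uncompute(state, 0, 1, 2)  # ancilla back to 0, safe to deallocate
```

In the quantum setting the same replay-in-reverse idea is what keeps the ancilla disentangled from the rest of the register before deallocation.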

    Data Governance and Sovereignty in Urban Data Spaces Based on Standardized ICT Reference Architectures

    European cities and communities (and beyond) require a structured overview and a set of tools to achieve a sustainable transformation towards smarter cities and municipalities, leveraging the enormous potential of the emerging data-driven economy. This paper presents the results of a recent study conducted with a number of German municipalities and cities. Based on the briefly presented recommendations emerging from the study, the authors propose the concept of an Urban Data Space (UDS), which facilitates an ecosystem for data exchange and added-value creation by utilizing the various types of data within a smart city or municipality. Looking at an Urban Data Space within a German context and considering the current situation and developments in German municipalities, this paper proposes a classification of urban data that relates the various data types to legal aspects and supports solid considerations regarding technical implementation designs and decisions. Furthermore, the Urban Data Space is described and analyzed in detail, relevant stakeholders are identified, and the corresponding technical artifacts are introduced. The authors propose to set up Urban Data Spaces based on emerging standards from the area of ICT reference architectures for Smart Cities, such as DIN SPEC 91357 “Open Urban Platform” and EIP SCC. In the course of this, the paper walks the reader through the construction of a UDS based on the above-mentioned architectures and outlines the goals, recommendations, and potentials that an Urban Data Space can reveal to a municipality or city.
    Finally, we aim to derive the proposed concepts in such a way that they can become part of the required set of tools for the sustainable transformation of German and European cities towards smarter urban environments, based on utilizing the hidden potential of digitalization and efficient, interoperable data exchange.
    Funding: EC/H2020/646578/EU/Triangulum: The Three Point Project / Demonstrate. Disseminate. Replicate. (Triangulum); BMBF, 13NKE012, Datenaustausch und Zusammenarbeit im urbanen Raum - Bestandsanalyse (Urban Data Space).
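    The classification idea described above — relating urban data types to legal aspects so that technical design decisions can follow — could be sketched as a simple mapping. All category names and attributes below are illustrative assumptions, not taken from DIN SPEC 91357 or the study itself.

```python
# Illustrative sketch: relate urban data categories to legal handling
# requirements as a basis for technical design decisions in a UDS.
from dataclasses import dataclass

@dataclass(frozen=True)
class DataClass:
    description: str
    personal: bool         # contains personal data (GDPR-relevant)?
    open_by_default: bool  # candidate for publication as Open Data?

CLASSES = {
    "sensor":   DataClass("environmental sensor readings", False, True),
    "mobility": DataClass("individual movement traces",    True,  False),
    "registry": DataClass("aggregated registry statistics", False, True),
}

def may_publish(key: str) -> bool:
    """A data type may be opened only if it is non-personal and open by default."""
    c = CLASSES[key]
    return c.open_by_default and not c.personal
```

Such a table lets platform components decide mechanically, per data type, which storage, access-control, and publication paths are legally admissible.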

    Scalable and Efficient Distributed Self-Healing with Self-Optimization Functions in Fixed IP Networks

    The Internet is continuously gaining importance in our society. Indeed, it is slowly turning into the backbone of the modern world, with an impact on all conceivable aspects such as politics, communication, intercultural exchange, and emergency services, to give some examples. As these aspects develop, the technical infrastructure around the Internet's core protocol - IP (Internet Protocol) - is increasingly exposed to various challenges. One of these challenges is the requirement for sophisticated resilience mechanisms that can guarantee the robustness of the IP infrastructure in the case of faults, failures, and natural disasters. This dissertation aims to develop a new architectural framework for improving the resilience of network nodes in fixed IP network infrastructures, i.e. IP networks without mobility or a continuously changing physical topology. The thesis approaches the topic of resilience from two different perspectives. First, it is recognized that resilient self-healing mechanisms are already embedded in diverse network protocols, as well as in applications and services running on top of a fixed IP network. Second, the importance of network and systems management processes for the availability of the network and IT infrastructure is analyzed. This leads to the identification of a gap between the resilience features intrinsically embedded in the protocols and applications, on the one hand, and the network and systems management processes, on the other. The gap is the lack of a framework that runs on top of the protocols and applications and manages them with respect to incidents, thereby automating aspects of the established management standards. In addition, this framework is meant to serve as a layer between the network/system administrator and the networked infrastructure.
    That is, on the one hand, the framework is configured and provided with knowledge by the human experts who tweak and improve the system. On the other hand, the framework is designed to escalate faulty conditions that it cannot resolve to the operations personnel, so that responsive managerial actions can be initiated. The architectural framework consists of software components that operate in a distributed manner inside the nodes of the networked system in question. These software components can respond to faulty conditions both proactively and reactively: failures are predicted and avoided, and an automatic response to already existing faulty conditions is realized. To evaluate the concepts and mechanisms, a number of case studies are executed. Additionally, the scalability and overhead (e.g. memory consumption) of the proposed framework are evaluated. Furthermore, the framework and algorithms are designed in a way that enables real-time self-healing, whereby the reaction strategy is always optimized such that key performance indicators of the networked system are improved.
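    The try-remediations-then-escalate behavior described in the abstract can be sketched in a few lines. All names below are illustrative, not taken from the dissertation: a node-local component walks through an expert-configured list of remediations and escalates an incident to operations personnel only if none of them resolves it.

```python
# Minimal sketch of the self-healing escalation loop (names are illustrative).

def self_heal(incident, remediations, escalate):
    """Try each applicable remediation; escalate unresolved incidents."""
    for applies, remedy in remediations:
        if applies(incident) and remedy(incident):
            return "resolved"
    escalate(incident)
    return "escalated"

# Toy knowledge base, configured by the human expert:
# rerouting a failed link succeeds only if a redundant path exists.
remediations = [
    (lambda i: i["type"] == "link_down",
     lambda i: i.get("redundant_path", False)),
]

escalated = []
r1 = self_heal({"type": "link_down", "redundant_path": True}, remediations, escalated.append)
r2 = self_heal({"type": "disk_full"}, remediations, escalated.append)
```

In the distributed setting each node runs such a component locally; the escalation callback is where the hand-off to the network operations center happens.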

    A CKAN plugin for data harvesting to the Hadoop distributed file system

    Smart Cities will mainly emerge around the opening of large amounts of data that are currently kept closed by various stakeholders within an urban ecosystem. This data needs to be cataloged and made available to the community, so that applications and services can be developed for citizens and companies and for optimizing processes within the city itself. In that scope, the current work develops concepts and prototypes to enable and demonstrate how data cataloging and data storage can be merged towards the provisioning of large amounts of data in urban environments. The developed concepts, prototype, case study, and accompanying evaluations are based on the integration of common technologies from the domains of Open Data and large-scale data processing in data centers, namely CKAN and Hadoop.
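    The core of such a harvesting plugin is the link between catalog entries and stored files. The sketch below is an assumption about how that mapping might look, not the plugin's actual code: each resource listed in a CKAN package's metadata (which a real harvester would fetch via CKAN's action API, e.g. `package_show`) is assigned a deterministic destination path on HDFS.

```python
# Sketch: map CKAN package metadata to deterministic HDFS destinations
# (path layout /<base>/<organization>/<dataset>/<resource> is an assumption).
import posixpath

def hdfs_target(base: str, org: str, dataset: str, resource_name: str) -> str:
    """Deterministic HDFS layout so catalog and storage stay linked."""
    return posixpath.join(base, org, dataset, resource_name)

def harvest(package: dict, base: str = "/data/ckan") -> list:
    """Pair every resource URL in a CKAN package with its HDFS destination."""
    org = package["organization"]["name"]
    return [
        (res["url"], hdfs_target(base, org, package["name"], res["name"]))
        for res in package["resources"]
    ]

pkg = {
    "name": "air-quality",
    "organization": {"name": "city-env-office"},
    "resources": [{"name": "2016.csv", "url": "https://example.org/aq-2016.csv"}],
}
plan = harvest(pkg)
```

A downloader component would then stream each URL into its computed path, while the catalog entry keeps pointing at the authoritative metadata.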

    Experiences designing a multi-tier architecture for a decentralized blockchain application in the energy domain

    In recent years, the emergence of the Ethereum Blockchain has introduced a new, alternative perspective on how web applications can be built. More precisely, the Ethereum Blockchain allows the development of applications in which program code is executed in a decentralized manner, with no restrictions imposed by a central authority. However, as is the case with many emerging technologies, a fair number of trade-offs have to be considered when this technology is used as a platform for implementing decentralized applications. In this work, we present two architectural designs for building decentralized applications (DApps) based on the Ethereum Blockchain technology. Within this context, we discuss the inherent strengths and weaknesses of each architectural design, as well as the set of challenges that we faced during the development process.
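    One recurring trade-off in DApp tier design can be sketched abstractly (the interfaces below are illustrative, not the paper's designs): letting clients query a blockchain node directly on every read keeps the application fully decentralized, while inserting a conventional backend tier that caches reads trades some decentralization for lower latency and reduced node load.

```python
# Sketch of the tiering trade-off: direct chain reads vs. a caching backend.

class ChainClient:
    """Stand-in for an Ethereum node connection (e.g. over JSON-RPC)."""
    def __init__(self, storage):
        self.storage = storage
        self.calls = 0   # count expensive on-chain reads

    def call(self, key):
        self.calls += 1
        return self.storage[key]

class CachingBackend:
    """A conventional middle tier in front of the chain: serves repeated
    reads from a local cache instead of hitting the node each time."""
    def __init__(self, chain):
        self.chain = chain
        self.cache = {}

    def read(self, key):
        if key not in self.cache:
            self.cache[key] = self.chain.call(key)
        return self.cache[key]

chain = ChainClient({"tariff": 42})
backend = CachingBackend(chain)
backend.read("tariff")
backend.read("tariff")  # served from cache; only one on-chain call so far
```

The cost of the caching tier is that clients must now trust it to serve fresh, untampered state — exactly the kind of trade-off the abstract refers to.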

    Visualization of traffic flows in a simulated network environment to investigate abnormal network behavior in complex network infrastructures

    The design and implementation of complex network infrastructures require early and extensive planning. However, increased complexity may make it more difficult to anticipate potential risks associated with the implementation, and later with the operation, of the network in question. To minimize the risk and the potential effort of fault/error/failure/alarm analysis during network operation, a variety of network simulation tools exist that make it possible to evaluate the feasibility of complex network environments and to examine the envisioned infrastructure before implementation. If one wants to go a step further and simulate networks as realistically as possible for an extensive analysis, the choice of available tools is rather small. An interplay of emulated routers - i.e. with their virtualized operating systems and configuration interfaces - and a simulation environment can remedy this. However, only a limited number of such software solutions exist on the market, and these products usually lack an essential function: the visual representation of traffic flows for investigating network behavior under various test or failure scenarios. This article aims to show ways to solve this problem and to investigate its applicability to other scenarios based on the utilization of various standards.
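    Before traffic flows can be drawn, simulated packet records have to be aggregated into per-link volumes — the quantity a visual layer would render as edge thickness or color. The data model below is an assumption for illustration; a simple deviation-from-baseline rule then flags candidate links for the "abnormal behavior" investigation the article describes.

```python
# Sketch: aggregate simulated packets into per-link flow volumes and flag
# links whose load deviates strongly from a baseline (data model assumed).
from collections import Counter

def link_volumes(packets):
    """Sum bytes per directed link (src_node, dst_node)."""
    volumes = Counter()
    for p in packets:
        volumes[(p["src"], p["dst"])] += p["bytes"]
    return volumes

def anomalous_links(volumes, baseline, factor=3.0):
    """Links carrying more than `factor` times their baseline volume."""
    return [link for link, vol in volumes.items()
            if vol > factor * baseline.get(link, float("inf"))]

packets = [{"src": "r1", "dst": "r2", "bytes": 500}] * 8
vols = link_volumes(packets)
flagged = anomalous_links(vols, {("r1", "r2"): 1000})
```

In a visualization front end, `vols` would drive the rendered flow widths and `flagged` the highlighting of suspicious links in a failure scenario.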