
    Distributed Ledger Technology (DLT) Applications in Payment, Clearing, and Settlement Systems: A Study of Blockchain-Based Payment Barriers and Potential Solutions, and DLT Application in Central Bank Payment System Functions

    Payment, clearing, and settlement systems are essential components of the financial markets and exert considerable influence on the overall economy. While payment systems have seen considerable technological advancement, conventional systems still depend on a centralized architecture, with inherent limitations and risks. The emergence of distributed ledger technology (DLT) is regarded as a potential solution for transforming payment and settlement processes and addressing certain challenges posed by the centralized architecture of traditional payment systems (Bank for International Settlements, 2017). While proof-of-concept projects have demonstrated the technical feasibility of DLT, significant barriers still hinder its adoption and implementation. The overarching objective of this thesis is to contribute to the developing area of DLT application in payment, clearing, and settlement systems, which is still in the initial stages of application development and lacks a substantial body of scholarly literature and empirical research. This is achieved by identifying the socio-technical barriers to the adoption and diffusion of blockchain-based payment systems and the solutions proposed to address them. Furthermore, the thesis examines and classifies various applications of DLT in central bank payment system functions, offering valuable insights into the motivations, DLT platforms used, and consensus algorithms for the applicable use cases. To achieve these objectives, the methodology employed a systematic literature review (SLR) of academic literature on blockchain-based payment systems, complemented by a thematic analysis of data collected from sources such as central bank white papers, industry reports, and policy documents on the use of DLT applications in central bank payment system functions. The study's findings on the barriers to blockchain-based payment systems and the proposed solutions challenge the prevailing emphasis on technological and regulatory barriers in the literature and industry discourse on the adoption and implementation of blockchain-based payment systems. They highlight the importance of considering the broader socio-technical context and of identifying barriers across all five dimensions of the socio-technical framework: technological, infrastructural, user practices/market, regulatory, and cultural. Furthermore, the research identified seven DLT applications in central bank payment system functions, grouped into three overarching themes: central banks' operational responsibilities in payment and settlement systems, issuance of central bank digital money, and regulatory oversight/supervisory functions, along with other ancillary functions. Each of these applications has a unique motivation or value proposition, which is the underlying reason for utilizing DLT in that particular use case.

    Language integrated relational lenses

    Relational databases are ubiquitous. Such monolithic databases accumulate large amounts of data, yet applications typically only work on small portions of the data at a time. A subset of the database defined as a computation over the underlying tables is called a view. Querying views is helpful, but it is also desirable to update them and have the changes propagated to the underlying database. This view update problem has been the subject of much previous work, but support from database servers is limited and only rarely available. Lenses are a popular approach to bidirectional transformations, a generalization of the view update problem from databases to arbitrary data. Perhaps surprisingly, however, lenses have seldom actually been used to implement updatable views in databases. Bohannon, Pierce and Vaughan propose an approach to updatable views called relational lenses, but to the best of our knowledge this proposal had not been implemented or evaluated prior to the work reported in this thesis. This thesis proposes programming language support for relational lenses. Language-integrated relational lenses support expressive and efficient view updates without relying on updatable-view support from the database server. Integrating relational lenses into the programming language makes application development easier and less error-prone, avoiding the impedance mismatch of working in two programming languages. Integration also poses additional challenges. As defined by Bohannon et al., relational lenses completely recompute the database, making them inefficient as the database scales. A further challenge is that some parts of the well-formedness conditions are too general to implement: Bohannon et al. specify predicates using possibly infinite abstract sets and define the type-checking rules using relational algebra. Incremental relational lenses equip relational lenses with change-propagating semantics that map small changes to the view into (potentially) small changes to the source tables. We prove that our incremental semantics are functionally equivalent to the non-incremental semantics, and our experimental results show orders-of-magnitude improvement over the non-incremental approach. This thesis also introduces a concrete predicate syntax, shows how the required checks are performed on these predicates, and shows that they satisfy the abstract predicate specifications. We discuss trade-offs between static predicates, which are fully known at compile time, and dynamic predicates, which are only known during execution, and introduce hybrid predicates that take inspiration from both approaches. The thesis further adapts the typing rules for relational lenses from sequential composition to a functional style of sub-expressions, and proves that any well-typed functional relational lens expression can derive a well-typed sequential lens. We use these additions to relational lenses as the foundation for two practical implementations: an extension of the Links functional language and a library written in Haskell. The second implementation demonstrates how type-level computation can be used to implement relational lenses without changes to the compiler. Together, these two implementations attest to the possibility of turning relational lenses into a practical language feature.
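    The core idea can be illustrated with a small, self-contained sketch. The following Python code is not the thesis's implementation (which targets Links and Haskell); it only shows the get/put shape of a select lens over a single table, with rows modelled as dictionaries and the predicate as an ordinary function, and it omits the well-formedness checks that the thesis's concrete predicate syntax makes checkable.

```python
# A minimal select-lens sketch (illustrative only; not the thesis's code).

def select_lens(predicate):
    """Build an updatable view containing the source rows that satisfy `predicate`."""

    def get(source):
        # Forward direction: the view is the matching subset of the source.
        return [row for row in source if predicate(row)]

    def put(source, view):
        # Backward direction: keep the non-matching rows untouched and
        # replace the matching region of the source with the edited view.
        return [row for row in source if not predicate(row)] + list(view)

    return get, put


albums = [
    {"album": "Galore", "quantity": 1},
    {"album": "Paris", "quantity": 4},
]
get, put = select_lens(lambda row: row["quantity"] > 2)

view = get(albums)                                  # [{'album': 'Paris', 'quantity': 4}]
edited = [dict(row, quantity=7) for row in view]    # edit the view...
print(put(albums, edited))                          # ...and propagate it to the source
```

    In a full treatment, put is only well behaved if the edited view still satisfies the predicate; checking such conditions statically, dynamically, or in hybrid form is exactly the trade-off the thesis explores.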

    Software Design Change Artifacts Generation through Software Architectural Change Detection and Categorisation

    Unlike other engineering projects, which are largely built by non-expert workers after engineers complete the design, software is designed, implemented, tested, and inspected entirely by experts. Researchers and practitioners have linked software bugs, security holes, problematic integration of changes, hard-to-understand codebases, unwarranted mental pressure, and similar problems in software development and maintenance to inconsistent and complex design, and to the lack of an easy way to understand what is going on in a software system and what to plan next. The unavailability of the information and insights development teams need to make good decisions makes these challenges worse. Therefore, extracting software design documents and other insightful information is essential to reduce these anomalies. Moreover, extracting architectural design artifacts is needed to create developer profiles that can be made available to the market for many crucial scenarios. To that end, architectural change detection, categorization, and change description generation are crucial because they are the primary artifacts from which other software artifacts are traced. However, it is not feasible for humans to analyze all the changes in a single release to detect changes and their impact, because doing so is time-consuming, laborious, costly, and inconsistent. In this thesis, we conduct six studies addressing these challenges to automate architectural change information extraction and document generation, which could potentially assist development and maintenance teams. In particular, we (1) detect architectural changes using lightweight techniques that leverage textual and codebase properties, (2) categorize them from intelligent perspectives, and (3) generate design change documents by exploiting precise contexts of components' relations and change purposes, which were previously unexplored. Our experiments using 4,000+ architectural change samples and 200+ design change documents suggest that the proposed approaches are promising in accuracy and scalable enough to deploy frequently. Our proposed change detection approach can detect up to 100% of the architectural change instances and is very scalable. Our proposed change classifier achieves an F1 score of 70%, which is promising given the challenges, and our proposed system can produce descriptive design change artifacts with 75% significance. Since most of our studies are foundational, our approaches and prepared datasets can be used as baselines for advancing research in design change information extraction and documentation.
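    To make the kind of lightweight, codebase-driven analysis referred to above concrete, the sketch below shows one simple, hypothetical way to flag candidate architectural changes between two releases by diffing a module-level dependency view extracted from import statements. It is illustrative only and is not the detection technique evaluated in the thesis; the directory names are placeholders.

```python
# Hypothetical architectural-change detection sketch: derive a coarse
# module-dependency view from Python import statements in two release trees
# and report dependency edges that were added or removed.

import ast
from pathlib import Path


def module_dependencies(source_root):
    """Map each .py file (as a module path) to the set of modules it imports."""
    deps = {}
    for path in Path(source_root).rglob("*.py"):
        module = path.relative_to(source_root).with_suffix("").as_posix()
        imports = set()
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imports.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imports.add(node.module)
        deps[module] = imports
    return deps


def architectural_diff(old_root, new_root):
    """Report dependency edges that appear or disappear between two releases."""
    old, new = module_dependencies(old_root), module_dependencies(new_root)
    edges = lambda d: {(m, dep) for m, imports in d.items() for dep in imports}
    return {"added": edges(new) - edges(old), "removed": edges(old) - edges(new)}


# Example usage (paths are placeholders):
# print(architectural_diff("release-1.0/src", "release-1.1/src"))
```

    A real detector would combine such structural signals with the textual and semantic cues the thesis exploits, but the diff-of-views structure is the same.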

    Current and Future Challenges in Knowledge Representation and Reasoning

    Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation with other areas, its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and its challenges, along with key priorities for the next decade.

    Rethink Digital Health Innovation: Understanding Socio-Technical Interoperability as Guiding Concept

    This dissertation searches for a theoretical scaffold for developing complex digital health innovations in such a way that they have better prospects of actually arriving in everyday care practice. Although there is no shortage of either demand for or ideas about digital health innovations, the hoped-for flood of solutions successfully established in practice has failed to materialize. This insufficient diffusion success of a developed solution, often pathologized as 'pilotitis', becomes particularly apparent when the planned innovation involves greater ambition and complexity. The practiced critic will immediately think of heretical counter-questions: for example, what exactly should be understood by complex digital health innovations, and whether it is even possible to find a universal formula that can guarantee the successful diffusion of digital health innovations. Both questions are not only legitimate but ultimately lead to the two research strands to which I explicitly devote myself in this dissertation. In a first block, I work out a delineation of those digital health innovations that currently receive particular attention in the literature and in practice because of their high potential to improve care and their resulting complexity. More precisely, I examine dominant objectives and the challenges that accompany them. Within the work in this research strand, four objectives crystallize: 1. supporting continuous, collaborative care processes across diverse providers (also known as inter-organizational care pathways); 2. actively involving patients in their own care processes (also known as patient empowerment or patient engagement); 3. strengthening cross-sector collaboration between research and care practice, up to and including learning health systems; and 4. establishing data-centred value creation for healthcare, driven by the increasing availability of valid data, new processing methods (keyword: artificial intelligence), and the many possible uses. The focus of this dissertation is therefore less on self-contained, clearly delimited innovations (for example, a symptom-diary app for documenting complaints). Rather, this doctoral thesis addresses those innovation endeavours that pursue one or more of the above objectives, add a further technological piece to complex information system landscapes, and thus contribute, in interplay with various other IT systems, to improving healthcare and/or its organization. In engaging with these objectives and the associated challenges of system development, the problem of fragmented healthcare IT landscapes moved to the centre. By this is meant the unfortunate state in which different information and application systems cannot interact with each other as desired. The result is interruptions of information flows and care processes that must be compensated elsewhere by error-prone extra effort (for example, duplicate documentation). To counter these limitations of effectiveness and efficiency, precisely those IT-system silos must be dismantled. All of the above objectives serve this defragmenting effect in that they seek to bring together 1. different care providers, 2. care teams and patients, 3. research and care, or 4. diverse data sources and modern analysis technologies. But here a complex circularity arises. On the one hand, the digital health innovations addressed in this thesis seek ways to defragment information system landscapes. On the other hand, their limited success rate is rooted, among other things, in precisely the existing fragmentation they seek to resolve. This insight opens the second research strand of this thesis, which engages intensively with the property of 'interoperability'. It examines how this property should play a central role for innovation endeavours in the digital health domain. Put simply, interoperability describes the ability of two or more systems to accomplish shared tasks together. It thus represents the core concern of the identified objectives and is the pivotal point when a developed solution is to be integrated into a concrete target environment. From a technically dominated point of view, this is about ensuring valid, performant, and secure communication scenarios so that the aforementioned breaks in information flow between technical subsystems are removed. A purely technical understanding of interoperability, however, is not enough to cover the variety of diffusion barriers facing digital health innovations. The lack of adequate reimbursement options within the statutory framework, for example, or a poor fit with the specific care process are not purely technical problems. Rather, a basic stance of information systems research comes into play here, which understands information systems, including those of healthcare, as socio-technical systems and always considers technology in connection with the people who use it, are affected by it, or organize it. If a digital health innovation that promises added value in line with the above objectives is to be integrated into an existing healthcare information system landscape, it must be 'interoperable' from technical as well as non-technical points of view. The necessity of interoperability is indeed recognized in research, policy, and practice, and positive movements of the domain towards more interoperability can be felt. However, a technical understanding still dominates, and the potential of this property as a guiding principle for innovation management has so far remained largely untapped. This is exactly where the main contribution of this doctoral thesis comes in: it proposes a socio-technical conceptualization and contextualization of interoperability for future digital health innovations. Based on literature and expert input, a framework is developed, the Digital Health Innovation Interoperability Framework, intended above all to support innovators and innovation sponsors in increasing the probability of diffusion into practice. Many insights and messages are connected with this framework, which I would like to summarize for this prologue as follows: 1. To align the development of digital health innovations as well as possible with successful integration into a specific target environment, realizing a novel value proposition and ensuring socio-technical interoperability are the two interrelated main tasks of an innovation process. 2. Ensuring interoperability is a management task to be actively owned, and it is influenced by project-specific conditions as well as by external and internal dynamics. 3. Socio-technical interoperability in the context of digital health innovations can be defined across seven interdependent levels: political and regulatory conditions; contractual conditions; care and business processes; use; information; applications; IT infrastructure. 4. To ensure interoperability at each of these levels, strategies must be defined in a differentiated way; they can be located on a continuum between compatibility requirements on the side of the innovation and motivating adaptations on the side of the target environment. 5. Striving for more interoperability promotes both the sustainable success of the individual digital health innovation and the defragmentation of existing information system landscapes, and thus contributes to improving healthcare. Admittedly, the last of these five messages is coloured more by conviction than by the result of scientific proof. Nevertheless, I regard this insight, personal as it may be, as a maxim of the domain to which I feel I belong: IT system development for healthcare.

    Sociotechnical Imaginaries, the Future and the Third Offset Strategy


    Design and Implementation of a Portable Framework for Application Decomposition and Deployment in Edge-Cloud Systems

    The emergence of cyber-physical systems has brought a significant increase in the complexity and heterogeneity of the infrastructure on which these systems are deployed. One particular example of this complexity is the interplay between cloud, fog, and edge computing. This complexity can pose challenges when implementing self-organizing mechanisms, which are often designed to work on flat networks. It is therefore essential to separate the application logic from the specific deployment aspects to promote reusability and flexibility in exploiting the infrastructure. To address this issue, a novel approach called "pulverization" has been proposed: the system is broken down into smaller computational units, which can then be deployed on the available infrastructure. This thesis presents the design and implementation of a portable framework that enables the "pulverization" of cyber-physical systems. The main objective of the framework is to pave the way for deploying cyber-physical systems in the edge-cloud continuum by reducing the complexity of the infrastructure and opportunistically exploiting the heterogeneous resources available on it. Different scenarios are presented to highlight the effectiveness of the framework across heterogeneous infrastructures and devices. Current limitations and future work are examined to identify areas of improvement for the framework.
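    As a rough illustration of the concept (not of the thesis's framework or its API), the following Python sketch separates a logical device into sub-components and lets a separate, replaceable deployment step decide where each one runs; the component names, costs, and hosts are invented for the example.

```python
# Illustrative "pulverization" sketch: a logical device is split into small
# sub-components, and a separate deployment plan maps each sub-component onto
# the available infrastructure, keeping application logic independent of
# where it runs.

from dataclasses import dataclass


@dataclass(frozen=True)
class Component:
    name: str      # e.g. "sensors", "behaviour", "communication", "state"
    cpu_cost: int  # abstract resource demand used only for placement


@dataclass
class Host:
    name: str      # e.g. "wearable", "edge-gateway", "cloud-vm"
    capacity: int


def deploy(components, hosts):
    """Greedy placement: put each component on the first host with room left."""
    plan, load = {}, {h.name: 0 for h in hosts}
    for comp in sorted(components, key=lambda c: -c.cpu_cost):
        for host in hosts:
            if load[host.name] + comp.cpu_cost <= host.capacity:
                plan[comp.name] = host.name
                load[host.name] += comp.cpu_cost
                break
        else:
            raise RuntimeError(f"no host can accommodate {comp.name}")
    return plan


device = [Component("sensors", 1), Component("behaviour", 4),
          Component("communication", 2), Component("state", 2)]
infrastructure = [Host("wearable", 2), Host("edge-gateway", 4), Host("cloud-vm", 16)]
print(deploy(device, infrastructure))
```

    The essential point mirrored here is that changing the infrastructure only requires changing the deployment plan, not the application logic.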

    A BIM - GIS Integrated Information Model Using Semantic Web and RDF Graph Databases

    In recent years, 3D virtual indoor and outdoor urban modelling has become an essential geospatial information framework for civil engineering applications such as emergency response, evacuation planning, and facility management. Multi-sourced, multi-scale 3D urban models are in high demand among architects, engineers, and construction professionals to accomplish these tasks and to provide relevant information to decision support systems. Spatial modelling technologies such as Building Information Modelling (BIM) and Geographical Information Systems (GIS) are frequently used to meet this demand. However, sharing data and information between the two domains is still challenging, and existing semantic and syntactic strategies for communication between BIM and GIS do not fully provide rich semantic and geometric information exchange from BIM to GIS or vice versa. This research proposes a novel approach for integrating BIM and GIS using Semantic Web technologies and Resource Description Framework (RDF) graph databases. The originality and novelty of the suggested solution come from combining the advantages of BIM and GIS models in a semantically unified data model, built using a semantic framework and ontology engineering approaches. The new model, named the Integrated Geospatial Information Model (IGIM), is constructed in three stages. The first stage generates BIMRDF and GISRDF graphs from BIM and GIS datasets. The second integrates the BIM and GIS semantic models into a unified graph, IGIMRDF. Lastly, the information in the unified IGIMRDF graph is filtered using a graph query language and graph data analytics tools. The linkage between BIMRDF and GISRDF is established through SPARQL endpoints, defined by queries that use elements and entity classes with similar or complementary information from properties, relationships, and geometries identified by an ontology-matching process during model construction. The resulting model (or sub-model) can be managed in a graph database system and used in the back end as a data tier serving web services that feed a front-tier, domain-oriented application. A case study was designed, developed, and tested using the semantically integrated information model to validate the newly proposed solution, its architecture, and its performance.
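    As a hedged illustration of the general workflow described above, the short Python sketch below uses the rdflib library (not necessarily the toolchain used in the study) to load a BIM-derived graph and a GIS-derived graph into one RDF store and query across them with SPARQL; the file names, namespaces, classes, and the alignment property are placeholders standing in for the output of the ontology-matching step.

```python
# Merge a BIM-derived RDF graph and a GIS-derived RDF graph into one store
# and run a cross-domain SPARQL query over the combined data.

from rdflib import Graph

igim = Graph()
igim.parse("bim_model.ttl", format="turtle")   # hypothetical BIMRDF export
igim.parse("gis_model.ttl", format="turtle")   # hypothetical GISRDF export

# Example cross-domain query: building elements linked to geospatial features
# via an alignment property produced by the ontology-matching step.
query = """
PREFIX bim: <http://example.org/bim#>
PREFIX gis: <http://example.org/gis#>
SELECT ?element ?feature
WHERE {
    ?element a bim:Wall ;
             bim:alignedWith ?feature .
    ?feature a gis:BuildingFootprint .
}
"""

for element, feature in igim.query(query):
    print(element, feature)
```

    In a deployment like the one described, the same query logic would run against a SPARQL endpoint exposed by the graph database in the data tier.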

    Synchronous and asynchronous collaborative writing

    Collaborative writing has been greatly stimulated by digital technologies, particularly by word processors, which have made it easy for co-authors to exchange and edit texts and have also led to the development of many experimental tools for collaborative, synchronous writing. When the World Wide Web was established, the arrival of wikis was hailed with great enthusiasm as an opportunity for joint knowledge creation and publishing. Later, cloud-based computer systems provided another powerful avenue for collaborative text production. The breakthrough for synchronous collaborative writing was the release of Google Docs in 2006, a browser-based word processor offering full rights to up to a hundred users for synchronous access to a virtual writing space. Next to its easy accessibility, it was the fact that Google Docs was free that opened this new chapter of writing technology to a broader audience. When Microsoft and Apple followed with their own online versions, collaborative writing became an established standard of text production. In this chapter, we trace back what collaboration through writing means and then look at the new opportunities and affordances of collaborative writing software. Finally, we briefly recount the impact of early technologies before we settle on the current generation of collaborative writing tools.