1,096 research outputs found

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    The Politics of Platformization: Amsterdam Dialogues on Platform Theory

    What is platformization and why is it a relevant category in the contemporary political landscape? How is it related to cybernetics and the history of computation? This book tries to answer such questions by engaging in multidisciplinary dialogues about the first ten years of the emerging fields of platform studies and platform theory. It deploys a narrative and playful approach that makes use of anecdotes, personal histories, etymologies, and futurable speculations to investigate both the fragmented genealogy that led to platformization and the organizational and economic trends that guide today's platform sociotechnical imaginaries.

    Cybersecurity applications of Blockchain technologies

    With the increase in connectivity, the popularization of cloud services, and the rise of the Internet of Things (IoT), decentralized approaches for trust management are gaining momentum. Since blockchain technologies provide a distributed ledger, they are receiving massive attention from the research community in different application fields. However, this technology does not provide cybersecurity by itself. Thus, this thesis first aims to provide a comprehensive review of techniques and elements that have been proposed to achieve cybersecurity in blockchain-based systems. The analysis is intended for researchers in the area, cybersecurity specialists and blockchain developers. We present a series of lessons learned as well. One of them is the rise of Ethereum as one of the most used technologies. Furthermore, some intrinsic characteristics of the blockchain, such as permanent availability and immutability, have made it interesting for other ends, namely as a covert channel and for malicious purposes. On the one hand, the use of blockchains by malware has not yet been characterized. Therefore, this thesis also analyzes the current state of the art in this area. One of the lessons learned is that covert communications have received little attention. On the other hand, although previous works have analyzed the feasibility of covert channels in a particular blockchain technology called Bitcoin, no previous work has explored the use of Ethereum to establish a covert channel considering all transaction fields and smart contracts. To foster further defence-oriented research, two novel mechanisms are presented in this thesis. First, Zephyrus takes advantage of all Ethereum fields and smart-contract bytecode. Second, Smart-Zephyrus is built to complement Zephyrus by leveraging smart contracts written in Solidity. We also assess the mechanisms' feasibility and cost. Our experiments show that Zephyrus, in the best case, can embed 40 Kbits in 0.57 s for US$ 1.64, and retrieve them in 2.8 s. Smart-Zephyrus, however, is able to hide a 4 Kb secret in 41 s. While being expensive (around US$ 1.82 per bit), the provided stealthiness might be worth the price for attackers. Furthermore, these two mechanisms can be combined to increase capacity and reduce costs.
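    The abstract above describes hiding data in Ethereum transaction fields and smart-contract bytecode. As a minimal, hypothetical sketch of that idea (not the actual Zephyrus implementation), the following Python splits a secret into chunks sized to fit a carrier field such as a transaction's data payload and reassembles it on the receiving side; the 32-byte chunk size is an assumed capacity, not a figure from the thesis.

        from typing import List

        CHUNK_BYTES = 32  # assumed capacity of a single carrier field (illustrative)

        def encode_secret(secret: bytes) -> List[str]:
            """Split the secret into hex payloads, one per carrier transaction."""
            chunks = [secret[i:i + CHUNK_BYTES] for i in range(0, len(secret), CHUNK_BYTES)]
            return ["0x" + chunk.hex() for chunk in chunks]

        def decode_secret(payloads: List[str]) -> bytes:
            """Reassemble the secret from the ordered carrier payloads."""
            return b"".join(bytes.fromhex(p[2:]) for p in payloads)

        if __name__ == "__main__":
            secret = b"meet at dawn"
            payloads = encode_secret(secret)   # hex strings that could ride in tx data fields
            assert decode_secret(payloads) == secret
            print(payloads)

    According to the abstract, Zephyrus goes further by exploiting all Ethereum transaction fields and smart-contract bytecode, while Smart-Zephyrus adds a Solidity smart-contract layer on top.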

    Study on performance modeling and assurance of cross/permissionless/permissioned chains

    This research addresses and resolves performance modeling and assurance issues across the full spectrum of blockchain protocols, from permissionless (Chapter II) and permissioned (Chapter III) to cross-chain (Chapter IV). In Chapter II, a queueing model for permissionless blockchains is proposed and validated with respect to specific yet practical characteristics of blockchains such as Bitcoin and Ethereum, primarily in terms of the block size and its waiting time. The variables considered in this model include the network traffic intensity, the maximum number of transactions in a block, the block time, and the transaction arrival rate, to mention a few. Numerical simulations are conducted, and the efficacy of the proposed model is validated in a quantitative yet practical manner against Bitcoin and Ethereum. In Chapter III, a set of queueing models for permissioned blockchains, which are considered an emerging technology for trustworthy decentralized networks, is proposed. Hyperledger Fabric is a well-defined permissioned blockchain. It is constructed from various types of nodes, such as nodes for endorsement, ordering, and commitment, to realize the decentralized nature of trustworthy network operations. Each type of node is characterized in terms of transaction/block queue size and waiting time, and the transaction/block arrival rates and service rates are considered for simulation purposes. How the arrival rates and service rates jointly influence the performance, and how the number of channels impacts the performance, are taken into account in order to ultimately facilitate a more dynamic way of optimization. The efficacy of the proposed models is demonstrated by extensive numerical simulations and analyses. In Chapter IV, a cross-chain communication protocol and an M/Cox/1 queueing-model-based performance model are proposed. Cross-chain communication considers two distinct types of transactions, namely atomic swaps and inter-ledger asset transfers. They are controlled by different communication mechanisms: the Hashed Time Lock Contract (HTLC), based on a pre-image technique, and inter-ledger asset transfer, based on an asynchronous verification technique. In the performance model, a Poisson arrival process is assumed, and the two service stages (pre-commit, and verify and commit) are assumed to follow exponential distributions. Lastly, a selection ratio between the HTLC and inter-ledger asset transfer protocols is assumed. Extensive numerical simulations are conducted to study the performance impact of changing parameters such as the arrival rate, the service rate, and the protocol selection ratio. In this research, the proposed models provide a comprehensive yet fundamental basis to assure and ultimately optimize the design of blockchain technology-based applications, specifically in terms of performance.
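    As a back-of-the-envelope illustration of the kind of quantities such queueing models yield (and not the thesis's actual models), the following sketch treats the transaction pool as a textbook M/M/1 queue with Poisson arrivals and exponential service; the example rates are made-up values for demonstration.

        def mm1_metrics(lam: float, mu: float) -> dict:
            """Steady-state metrics of an M/M/1 queue with arrival rate lam and service rate mu."""
            if lam >= mu:
                raise ValueError("unstable queue: arrival rate must stay below service rate")
            rho = lam / mu                   # traffic intensity (utilization)
            return {
                "utilization": rho,
                "L": rho / (1.0 - rho),      # mean number of transactions in the system
                "W": 1.0 / (mu - lam),       # mean time a transaction spends in the system
                "Wq": rho / (mu - lam),      # mean waiting time before service begins
            }

        if __name__ == "__main__":
            # e.g. 12 transactions/s arriving, block processing clearing 15 transactions/s
            print(mm1_metrics(lam=12.0, mu=15.0))

    The thesis's models refine this picture with block-size limits, block times, and (for cross-chain traffic) Coxian service stages, but the dependence of waiting time on arrival and service rates follows the same logic.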

    Next Generation Business Ecosystems: Engineering Decentralized Markets, Self-Sovereign Identities and Tokenization

    Digital transformation research increasingly shifts from studying information systems within organizations towards adopting an ecosystem perspective, where multiple actors co-create value. While digital platforms have become a ubiquitous phenomenon in consumer-facing industries, organizations remain cautious about fully embracing the ecosystem concept and sharing data with external partners. Concerns about the market power of platform orchestrators and ongoing discussions on privacy, individual empowerment, and digital sovereignty further complicate the widespread adoption of business ecosystems, particularly in the European Union. In this context, technological innovations in Web3, including blockchain and other distributed ledger technologies, have emerged as potential catalysts for disrupting centralized gatekeepers and enabling a strategic shift towards user-centric, privacy-oriented next-generation business ecosystems. However, existing research efforts, focused on decentralizing interactions through distributed network topologies and open protocols, lack theoretical convergence, resulting in a fragmented and complex landscape that inadequately addresses the challenges organizations face when transitioning to an ecosystem strategy that harnesses the potential of disintermediation. To address these gaps and successfully engineer next-generation business ecosystems, a comprehensive approach is needed that encompasses the technical design, economic models, and socio-technical dynamics. This dissertation aims to contribute to this endeavor by exploring the implications of Web3 technologies on digital innovation and transformation paths. Drawing on a combination of qualitative and quantitative research, it makes three overarching contributions: First, a conceptual perspective on 'tokenization' in markets clarifies its ambiguity and provides a unified understanding of its role in ecosystems. This perspective includes frameworks on: (a) technological; (b) economic; and (c) governance aspects of tokenization. Second, a design perspective on 'decentralized marketplaces' highlights the need for an integrated understanding of micro-structures, business structures, and IT infrastructures in blockchain-enabled marketplaces. This perspective includes: (a) an explorative literature review on design factors; (b) case studies and insights from practitioners to develop requirements and design principles; and (c) a design science project with an interface design prototype of blockchain-enabled marketplaces. Third, an economic perspective examines 'self-sovereign identities' (SSI) as micro-structural elements of decentralized markets. This perspective includes: (a) value creation mechanisms and business aspects of strategic alliances governing SSI ecosystems; (b) business model characteristics adopted by organizations leveraging SSI; and (c) business model archetypes and a framework for SSI ecosystem engineering efforts. The dissertation concludes by discussing limitations as well as outlining potential avenues for future research. These include, amongst others, exploring the challenges of ecosystem bootstrapping in the absence of intermediaries, examining the make-or-join decision in ecosystem emergence, addressing the multidimensional complexity of Web3-enabled ecosystems, investigating incentive mechanisms for inter-organizational collaboration, understanding the role of trust in decentralized environments, and exploring varying degrees of decentralization with potential transition pathways.

    Distributed consensus in wireless network

    Connected autonomous systems, which are powered by the synergistic integration of the Internet of Things (IoT), Artificial Intelligence (AI), and 5G technologies, predominantly rely on a central node for making mission-critical decisions. This reliance poses a significant challenge, in that the condition and capability of the central node largely determine the reliability and effectiveness of decision-making. Maintaining such a centralized system, especially in large-scale wireless networks, can be prohibitively expensive and encounters scalability challenges. In light of these limitations, there is a compelling need for innovative methods to address the increasing demands of reliability and latency, especially in mission-critical networks where cooperative decision-making is paramount. One promising avenue lies in the distributed consensus protocol, a mechanism intrinsic to distributed computing systems. These protocols offer enhanced robustness, ensuring continued functionality and responsiveness in decision-making even in the face of potential node or communication failures. This thesis pivots on the idea of leveraging distributed consensus to bolster the reliability of mission-critical decision-making within wireless networks; it delves deep into the performance characteristics of wireless distributed consensus, analyzing and subsequently optimizing its attributes, with a specific focus on reliability and latency. The research begins with a fundamental model of consensus reliability in a crash fault tolerant protocol, Raft. A novel metric termed ReliabilityGain is introduced to analyze the performance of distributed consensus in wireless networks. This concept elucidates the linear correlation between the reliability inherent to consensus-driven decision-making and the reliability of communication link transmission. An intriguing discovery made in this study is the inherent trade-off between the latency of achieving consensus and its reliability; these two variables appear to be in contradiction, which raises further performance optimization issues. The performance of crash and Byzantine fault tolerance protocols is scrutinized and compared with that of the original centralized consensus. This exploration becomes particularly pertinent when communication failures occur in wireless distributed consensus. The analytical results are juxtaposed with performance metrics derived from a centralized consensus mechanism. This comparative analysis illuminates the relative merits and demerits of these consensus strategies, evaluated from the dual perspectives of comprehensive consensus reliability and communication latency. In light of the insights gained from the detailed analysis of the Raft and HotStuff BFT protocols, the thesis further ventures into the realm of optimization strategies for wireless distributed consensus. A central facet of this exploration is the introduction of a tailored communication resource allocation scheme. This scheme, rooted in maximizing the performance of consensus mechanisms, dynamically assesses network conditions and allocates communication resources such as transmit power and bandwidth to ensure efficient and timely decision-making; even in varied and unpredictable network conditions, consensus can thus be achieved with minimized latency and maximized reliability. The research also introduces an adaptive protocol for distributed consensus in wireless networks. This adaptive protocol's strength lies in its ability to autonomously construct a consensus-enabled network even when node failures or communication disruptions occur, ensuring that the network's decision-making process remains uninterrupted and efficient, irrespective of external challenges. The sharding mechanism, regarded as an effective solution to scalability issues in distributed systems, not only aids in managing vast networks more efficiently but also ensures that a disruption in one shard cannot compromise the functionality of the entire network. Therefore, this thesis also presents a reliability and security analysis of sharding as implemented in a wireless distributed system. In essence, these intertwined strategies, rooted in communication resource allocation, adaptability, and sharding, together form the bedrock of the thesis's contributions to enhancing the performance of wireless distributed consensus.
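    The dependence of consensus reliability on link reliability mentioned above can be illustrated with a deliberately simplified model (an assumption for illustration, not the thesis's ReliabilityGain analysis): each leader-follower exchange succeeds independently with probability p, and a Raft round succeeds when the leader collects a majority of votes.

        from math import comb

        def raft_round_reliability(n: int, p: float) -> float:
            """Probability that a leader in an n-node cluster gathers a majority vote,
            assuming each follower acknowledgement succeeds independently with probability p."""
            followers = n - 1
            need = n // 2  # follower acks needed on top of the leader's own vote
            return sum(
                comb(followers, k) * p ** k * (1 - p) ** (followers - k)
                for k in range(need, followers + 1)
            )

        if __name__ == "__main__":
            for p in (0.90, 0.95, 0.99):
                print(f"5-node cluster, link reliability {p:.2f} -> "
                      f"round reliability {raft_round_reliability(5, p):.4f}")

    Even this toy model shows how quickly consensus reliability degrades as links become lossy, which is the regime the thesis's resource allocation and adaptive protocols target.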

    Optimizing Flow Routing Using Network Performance Analysis

    Relevant conferences were attended, at which this work was presented, and several papers were published in the course of this project:
    • Muna Al-Saadi, Bogdan V Ghita, Stavros Shiaeles, Panagiotis Sarigiannidis. A novel approach for performance-based clustering and management of network traffic flows, IWCMC, ©2019 IEEE.
    • M. Al-Saadi, A. Khan, V. Kelefouras, D. J. Walker, and B. Al-Saadi: Unsupervised Machine Learning-Based Elephant and Mice Flow Identification, Computing Conference 2021.
    • M. Al-Saadi, A. Khan, V. Kelefouras, D. J. Walker, and B. Al-Saadi: SDN-Based Routing Framework for Elephant and Mice Flows Using Unsupervised Machine Learning, Network, 3(1), pp. 218-238, 2023.
    The main task of a network is to hold and transfer data between its nodes. To achieve this task, the network needs to find the optimal route for data to travel by employing a particular routing system. This system examines each possible path for the data, chooses a suitable one, and transmits the data packets to their destination as fast as possible. In addition, it enhances network performance, as an optimal routing algorithm helps the network run efficiently. The clear performance advantage provided by routing procedures is faster data access. For example, the routing algorithm determines the best route based on where the data is stored and which device is requesting it. On the other hand, a network can handle many types of traffic simultaneously, but it cannot exceed its allowed bandwidth, i.e. the maximum data rate that the network can transmit. However, the overloading problem is real and persists. To avoid this problem, the network chooses routes based on the available bandwidth. One serious problem in networks is link congestion and the disparate load caused by elephant flows. When elephant flows are forwarded, network links become congested with data packets, causing transmission collisions, network congestion, and transmission delay. Consequently, there is not enough bandwidth for mice flows, which causes the problem of transmission delay. Traffic engineering (TE) is a network application concerned with measuring and managing network traffic and designing feasible routing mechanisms to guide the traffic of the network in order to improve the utilization of network resources. The main function of traffic engineering is finding a route that achieves the bandwidth requirements of the network, consequently optimizing network performance [1]. Routing optimization plays a key role in traffic engineering by finding efficient routes to achieve the desired performance of the network [2]. Furthermore, routing optimization can be considered one of the primary goals in the field of networks. In particular, this goal is directly related to traffic engineering, as it is based on one particular idea: that traffic is routed according to accurate traffic requirements [3]. Therefore, traffic engineering can be seen as one application of routing improvement; routing can also be optimized based on other factors (not just on traffic requirements). In addition, these traffic requirements vary depending on the analyzed dataset, whether it represents data or traffic control. In this regard, the logically centralized view of the Software Defined Network (SDN) controller facilitates many aspects compared to traditional routing. The main challenge in all network types is performance optimization, but the situation is different in SDN because the technique changes from a distributed approach to a centralized one. The characteristics of SDN, such as centralized control and programmability, make it possible to perform routing not only in the traditional distributed manner but also in a centralized manner. The first advantage of centralized routing using SDN is the existence of a path to exchange information between the controller and infrastructure devices. Consequently, since the controller has information for the entire network, flexible routing can be achieved. The second advantage is the dynamic control of routing, owing to the capability of each device to change its configuration based on controller commands [4]. This thesis begins with a wide review of the importance of network performance analysis, its role in understanding network behavior, and how it contributes to improving network performance. Furthermore, it clarifies the existing solutions for network performance optimization using machine learning (ML) techniques in traditional networks and in the SDN environment. In addition, it highlights recent and ongoing studies of the problem of unfair use of network resources by a particular flow (elephant flow) and the possible solutions to this problem. Existing solutions are predominantly flow-routing-based and do not consider the relationship between network performance analysis and flow characterization, or how to take advantage of it to optimize flow routing by finding a convenient path for each type of flow. Therefore, attention is given to finding a method that describes flows based on network performance analysis, to utilizing this method for managing network performance efficiently, and to its possible integration with traffic control in SDN. To this purpose, the characterization of network flows is identified as a mechanism that may give insight into the diversity of flow features based on performance metrics and provide the possibility of traffic engineering enhancement in an SDN environment. Two different feature sets with respect to network performance metrics are employed to characterize network traffic, and unsupervised machine learning techniques, including Principal Component Analysis (PCA) and k-means cluster analysis, are applied to derive a traffic performance-based clustering model. Afterward, a thresholding-based flow identification paradigm is built using pre-defined parameters and thresholds. Finally, the resulting data clusters are integrated within a unified SDN architectural solution, which improves network management by finding the best flow routing based on the type of flow, and is evaluated against a number of traffic data sources and different performance experiments. The validation of the novel framework's performance is carried out by comparing the SDN-Ryu controller and the proposed SDN-external application across two experiments, based on three factors: throughput, bandwidth, and data transfer rate. Furthermore, the proposed method is validated using different Data Centre Network (DCN) topologies to demonstrate the effectiveness of the network traffic management solution. The overall validation metrics show real gains; the results show that, 70% of the time, the framework achieves high performance with different flows. The proposed SDN traffic-engineering routing paradigm therefore dynamically provisions network resources among different flow types.
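    The PCA and k-means step described above can be sketched as follows. This is not the authors' code: the feature names, the synthetic flow data, and the rule of labelling the higher-byte-count cluster as "elephant" are assumptions made purely for illustration.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        # Synthetic per-flow performance metrics (illustrative only):
        # columns = bytes transferred, duration (s), mean throughput (Mbps)
        rng = np.random.default_rng(0)
        mice = rng.normal([5e4, 0.5, 1.0], [2e4, 0.2, 0.5], size=(200, 3))
        elephants = rng.normal([5e7, 30.0, 50.0], [2e7, 10.0, 20.0], size=(20, 3))
        flows = np.abs(np.vstack([mice, elephants]))

        # Standardize, project to a 2-D performance space, and cluster
        X = StandardScaler().fit_transform(flows)
        X2 = PCA(n_components=2).fit_transform(X)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X2)

        # Call the cluster with the larger mean byte count the "elephant" cluster
        elephant_cluster = max(range(2), key=lambda c: flows[labels == c, 0].mean())
        print("flows labelled elephant:", int((labels == elephant_cluster).sum()))

    In the thesis, real traffic traces and performance metrics take the place of the synthetic flows, and the resulting cluster labels feed the thresholding and SDN routing stages described above.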

    Unleashing the power of internet of things and blockchain: A comprehensive analysis and future directions.

    As the fusion of the Internet of Things (IoT) and blockchain technology advances, it is increasingly shaping diverse fields. The potential of this convergence to fortify security, enhance privacy, and streamline operations has ignited considerable academic interest, resulting in an impressive body of literature. However, there is a noticeable scarcity of studies employing Latent Dirichlet Allocation (LDA) to dissect and categorize this field. This review paper endeavours to bridge this gap by meticulously analysing a dataset of 4455 journal articles drawn solely from the Scopus database, centred on IoT and blockchain applications. Utilizing LDA, we have extracted 14 distinct topics from the collection, offering a broad view of the research themes in this interdisciplinary domain. Our exploration underscores an upswing in research pertaining to IoT and blockchain, emphasizing the rising prominence of this technological amalgamation. Among the most recurrent themes are IoT and blockchain integration in supply chain management and blockchain in healthcare data management and security, indicating the significant potential of this convergence to transform supply chains and secure healthcare data. Meanwhile, the less frequently discussed topics include access control and management in blockchain-based IoT systems and energy efficiency in wireless sensor networks using blockchain and IoT. To the best of our knowledge, this paper is the first to apply LDA in the context of IoT and blockchain research, providing unique perspectives on the existing literature. Moreover, our findings pave the way for proposed future research directions, stimulating further investigation into the less explored aspects and sustaining the growth of this dynamic field.
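    As a rough illustration of the LDA step described above (not the paper's actual pipeline), the following sketch fits a topic model to a toy corpus with scikit-learn. The three example abstracts and the printed top words are assumptions; the 14-topic setting mirrors the number of topics reported in the paper, which in practice would be fitted to the 4455 Scopus abstracts rather than a toy list.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        # Toy stand-in corpus; the study analyses 4455 Scopus abstracts instead
        abstracts = [
            "blockchain secures iot device data in supply chain tracking",
            "smart contracts manage access control for iot sensor networks",
            "healthcare records stored on a permissioned ledger improve privacy",
        ]

        vectorizer = CountVectorizer(stop_words="english")
        counts = vectorizer.fit_transform(abstracts)

        # 14 topics, matching the number of topics reported in the review
        lda = LatentDirichletAllocation(n_components=14, random_state=0)
        lda.fit(counts)

        terms = vectorizer.get_feature_names_out()
        for topic_id, weights in enumerate(lda.components_):
            top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
            print(f"topic {topic_id}: {', '.join(top_terms)}")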