1,812 research outputs found

    Deep generative models for network data synthesis and monitoring

    Measurement and monitoring are fundamental tasks in all networks, enabling the downstream management and optimization of the network. Although networks inherently produce abundant monitoring data, accessing and measuring that data effectively is another story. The challenges exist in many aspects. First, network monitoring data is inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network system can be very expensive, given how networks keep growing, e.g., the number of cells in a radio network and the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for these challenges, but the fidelity and efficiency of existing methods cannot yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring techniques. Throughout the thesis, we leverage cutting-edge machine learning technology, namely deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which only requires open-source contextual data during inference (e.g., land use information and population distribution). Second, we develop GENDT, an efficient drive testing system based on a generative model, which combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system built on latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics are discussed for future work in this domain. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.
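
    The abstract does not give implementation details, but the conditional-generation idea behind APPSHOT can be illustrated with a small sketch: a generator that maps openly available context features (e.g., land use and population) plus noise to synthetic traffic profiles. All class names, dimensions, and layer choices below are illustrative assumptions, not the thesis code.

    ```python
    # Minimal sketch (not the thesis implementation): a conditional generator in the
    # spirit of APPSHOT, where open contextual features condition synthetic traffic.
    import torch
    import torch.nn as nn

    class ConditionalTrafficGenerator(nn.Module):
        def __init__(self, context_dim=8, noise_dim=16, traffic_len=24):
            super().__init__()
            self.noise_dim = noise_dim
            # Context and noise are concatenated and mapped to a daily traffic profile.
            self.net = nn.Sequential(
                nn.Linear(context_dim + noise_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, traffic_len),
                nn.Softplus(),  # traffic volumes are non-negative
            )

        def forward(self, context):
            noise = torch.randn(context.size(0), self.noise_dim)
            return self.net(torch.cat([context, noise], dim=1))

    # Usage: synthesize hourly traffic for 3 hypothetical city cells from context only.
    gen = ConditionalTrafficGenerator()
    context = torch.rand(3, 8)        # e.g., normalized land-use / population features
    synthetic_traffic = gen(context)  # shape: (3, 24)
    ```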

    Multidisciplinary perspectives on Artificial Intelligence and the law

    This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics, and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.

    Posthuman Creative Styling: can a creative writer’s style of writing be described as procedural?

    This thesis is about creative styling — the styling a creative writer might use to make their writing unique. It addresses the question of whether such styling can be described as procedural. Creative styling is part of the technique a creative writer uses when writing. It is how they make the text more ‘lively’ by use of tips and tricks they have either learned or discovered. In essence these are rules, ones the writer accrues over time through their practice. The thesis argues that the use and invention of these rules can be set out as procedures, and so describes creative styling as procedural. The thesis follows from questioning why it is that machines or algorithms have, so far, been incapable of producing creative writing which has value. Machine-written novels do not abound on the bookshelves, and writing styled by computers is, on the whole, dull in comparison to human-crafted literature. It came about by thinking about how it would be possible to reach a point where writing by people and procedural writing are considered to have equal value. For this reason the thesis is set in a posthuman context, where the differences between machines and people are erased. The thesis uses practice to inform an original conceptual space model, based on quality dimensions and the dynamic inter-operation of spaces. This model gives an example of the procedures which a posthuman creative writer uses when engaged in creative styling. It suggests an original formulation for the conceptual blending of conceptual spaces, based on the casting of qualities from one space to another. In support of and informing its arguments are ninety-nine examples of creative writing practice which show the procedures by which style has been applied, created and assessed. It provides a route forward for further joint research into both computational and human-coded creative writing.

    Privacy-Preserving by Design: Indoor Positioning System Using Wi-Fi Passive TDOA

    Indoor localization systems have become increasingly important in a wide range of applications, including industry, security, logistics, and emergency services. However, the growing demand for accurate localization has heightened concerns over privacy, as many localization systems rely on active signals that can be misused by an adversary to track users' movements or manipulate their measurements. This paper presents PassiFi, a novel passive Wi-Fi time-based indoor localization system that effectively balances accuracy and privacy. PassiFi uses a passive Wi-Fi Time Difference of Arrival (TDoA) approach that ensures users' privacy and safeguards the integrity of their measurement data while still achieving high accuracy. The system adopts a fingerprinting approach to address multi-path and non-line-of-sight problems and utilizes deep neural networks to learn the complex relationship between TDoA and location. Evaluation in a real-world testbed demonstrates PassiFi's exceptional performance, surpassing traditional multilateration by 128% and achieving sub-meter accuracy on par with state-of-the-art active measurement systems, all while preserving privacy.
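
    As a rough illustration of the fingerprinting step described above, the sketch below maps a vector of passive TDoA measurements to a 2-D position with a small neural network. The network shape, feature dimension, and training loop are assumptions for illustration, not the PassiFi implementation.

    ```python
    # Minimal fingerprinting-regression sketch: TDoA vector -> (x, y) position.
    import torch
    import torch.nn as nn

    n_anchor_pairs = 6  # assumed number of TDoA values between pairs of sniffing anchors

    model = nn.Sequential(
        nn.Linear(n_anchor_pairs, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 2),               # predicted (x, y) in metres
    )
    loss_fn = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def train_step(tdoa_batch, position_batch):
        """One supervised step on labelled fingerprints (TDoA vector -> surveyed position)."""
        optimizer.zero_grad()
        loss = loss_fn(model(tdoa_batch), position_batch)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Usage with random placeholder fingerprints:
    tdoa = torch.randn(32, n_anchor_pairs)
    pos = torch.rand(32, 2) * 10.0
    print(train_step(tdoa, pos))
    ```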

    Threshold Encrypted Mempools: Limitations and Considerations

    Encrypted mempools are a class of solutions aimed at preventing or reducing the negative externalities of MEV extraction using cryptographic privacy. Mempool encryption aims to hide information related to pending transactions until a block including those transactions is committed, targeting the prevention of frontrunning and similar behaviour. Among the various methods of encryption, threshold schemes are particularly interesting for the design of MEV mitigation mechanisms, as their distributed nature and minimal hardware requirements harmonize with the broader goal of decentralization. This work looks beyond the formal and technical cryptographic aspects of threshold encryption schemes to focus on the market and incentive implications of implementing encrypted mempools as MEV mitigation techniques. In particular, this paper argues that deploying such protocols without proper consideration and understanding of their market impact invites several undesired outcomes, and its ultimate goal is to stimulate further analysis of this class of solutions beyond purely cryptographic considerations. The paper includes an overview of a series of problems, various candidate solutions in the form of mempool encryption techniques with a focus on threshold encryption, potential drawbacks to these solutions, and Osmosis as a case study. The paper targets a broad audience and remains agnostic to blockchain design where possible while drawing mostly from financial examples.
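
    For readers unfamiliar with the threshold primitive the paper builds on, the toy sketch below shows the basic t-of-n property using Shamir secret sharing over a prime field: a decryption key can only be reconstructed once enough committee members cooperate, e.g., after the block is committed. This is a didactic sketch, not a production scheme and not the constructions analysed in the paper.

    ```python
    # Toy t-of-n secret sharing: any t shares reconstruct the key, fewer reveal nothing.
    import random

    P = 2**127 - 1  # a Mersenne prime used as the field modulus (illustrative choice)

    def split_secret(secret, n, t):
        """Split `secret` into n shares, any t of which reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 over the prime field."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % P
                    den = (den * (xi - xj)) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    key = random.randrange(P)             # stands in for a symmetric decryption key
    shares = split_secret(key, n=7, t=4)  # 7 committee members, threshold 4
    assert reconstruct(random.sample(shares, 4)) == key  # any 4 shares suffice
    ```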

    ACiS: smart switches with application-level acceleration

    Network performance has contributed fundamentally to the growth of supercomputing over the past decades. In parallel, High Performance Computing (HPC) peak performance has depended, first, on ever faster and denser CPUs, and then on increasing density alone. As operating frequency, and now feature size, have levelled off, two new approaches are becoming central to achieving higher net performance: configurability and integration. Configurability enables hardware to map to the application, as well as vice versa. Integration enables system components that have generally been single-function (e.g., a network that transports data) to take on additional functionality, e.g., also operating on that data. More generally, integration enables compute-everywhere: not just in the CPU and accelerator, but also in the network and, more specifically, in the communication switches. In this thesis, we propose four novel methods of enhancing HPC performance through Advanced Computing in the Switch (ACiS). More specifically, we propose various flexible and application-aware accelerators that can be embedded into or attached to existing communication switches to improve the performance and scalability of HPC and Machine Learning (ML) applications. We follow a modular design discipline, introducing composable plugins that successively add ACiS capabilities. In the first work, we propose an inline accelerator in communication switches for user-definable collective operations. MPI collective operations can often be performance killers in HPC applications; we seek to remove this bottleneck by offloading them to reconfigurable hardware within the switch itself. We also introduce a novel mechanism that enables the hardware to support MPI communicators of arbitrary shape and that is scalable to very large systems. In the second work, we propose a look-aside accelerator for communication switches that is capable of processing packets at line rate; functions requiring loops and state are addressed in this work. The proposed in-switch accelerator is based on a RISC-V-compatible Coarse-Grained Reconfigurable Array (CGRA). To facilitate usability, we have developed a framework to compile user-provided C/C++ code into the back-end instructions that configure the accelerator. In the third work, we extend ACiS to support fused collectives and the combining of collectives with map operations, observing that there is an opportunity to fuse communication (collectives) with computation; since the computation can vary across applications, ACiS support here is programmable. In the fourth work, we propose that switches with ACiS support can control and manage the execution of applications, i.e., that the switch be an active device with decision-making capabilities. Switches have a central view of the network; they can collect telemetry information, monitor application behavior, and then use this information for control, decision-making, and coordination of nodes. We evaluate the feasibility of ACiS through extensive RTL-based simulation as well as deployment in an open-access cloud infrastructure. Using this simulation framework, for a Graph Convolutional Network (GCN) application as a case study, a speedup of 3.4x on average across five real-world datasets is achieved on 24 nodes compared to a CPU cluster without ACiS capabilities.
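
    The in-switch collective idea (and its fusion with map operations in the third work) can be conveyed with a small software simulation: a switch applies a per-element map on ingress, reduces across the attached nodes, and multicasts the result, instead of the end hosts doing the reduction. The topology, operators, and function names below are illustrative assumptions, not the ACiS hardware design.

    ```python
    # Conceptual simulation of a "compute in the switch" fused map + allreduce.
    def switch_fused_allreduce(node_vectors, map_fn, reduce_fn):
        """A switch receives one vector per attached node, applies `map_fn` to each
        element on ingress, reduces element-wise across nodes, and multicasts the
        same result back to every node."""
        mapped = [[map_fn(x) for x in vec] for vec in node_vectors]
        reduced = list(mapped[0])
        for vec in mapped[1:]:
            for i, x in enumerate(vec):
                reduced[i] = reduce_fn(reduced[i], x)
        # Every node receives the same reduced vector (the "multicast" leg).
        return [list(reduced) for _ in node_vectors]

    # Usage: 4 nodes contribute gradients; the switch squares then sums them.
    grads = [[1.0, 2.0], [0.5, 1.5], [2.0, 0.0], [1.0, 1.0]]
    results = switch_fused_allreduce(grads,
                                     map_fn=lambda x: x * x,
                                     reduce_fn=lambda a, b: a + b)
    print(results[0])  # identical on all nodes: [6.25, 7.25]
    ```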

    Tiny Machine Learning Environment: Enabling Intelligence on Constrained Devices

    Running machine learning (ML) algorithms on constrained devices at the extreme edge of the network is problematic due to the computational overhead of ML algorithms, the resources available on the embedded platform, and the application budget (i.e., real-time requirements, power constraints, etc.). This has required the development of specific solutions and development tools for what is now referred to as TinyML. In this dissertation, we focus on improving the deployment and performance of TinyML applications, taking into consideration the aforementioned challenges, especially memory requirements. This dissertation contributed to the construction of the Edge Learning Machine (ELM) environment, a platform-independent open-source framework that provides three main TinyML services, namely shallow ML, self-supervised ML, and binary deep learning on constrained devices. In this context, this work includes the following steps, which are reflected in the thesis structure. First, we present a performance analysis of state-of-the-art shallow ML algorithms, including dense neural networks, implemented on mainstream microcontrollers. The comprehensive analysis in terms of algorithms, hardware platforms, datasets, preprocessing techniques, and configurations shows performance similar to that of a desktop machine and highlights the impact of these factors on overall performance. Second, despite the common assumption that the scarcity of resources limits TinyML to model inference only, we have gone a step further and enabled self-supervised on-device training on microcontrollers and tiny IoT devices by developing the Autonomous Edge Pipeline (AEP) system. AEP achieves accuracy comparable to the typical TinyML paradigm, i.e., models trained on resource-abundant devices and then deployed on microcontrollers. Next, we present the development of a memory allocation strategy for convolutional neural network (CNN) layers that optimizes memory requirements. This approach reduces the memory footprint without affecting accuracy or latency. Moreover, e-skin systems share the main requirements of the TinyML field: enabling intelligence with low memory, low power consumption, and low latency. Therefore, we designed an efficient tiny CNN architecture for e-skin applications. The architecture leverages the memory allocation strategy presented earlier and provides better performance than existing solutions. A major contribution of the thesis is CBin-NN, a library of functions for implementing extremely efficient binary neural networks on constrained devices. The library outperforms state-of-the-art NN deployment solutions by drastically reducing memory footprint and inference latency. All the solutions proposed in this thesis have been implemented on representative devices and tested in relevant applications, and the results are reported and discussed. The ELM framework is open source, and this work is becoming a useful, versatile toolkit for the IoT and TinyML research and development community.
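
    To give a flavour of why layer-wise memory planning matters on microcontrollers, the sketch below compares keeping every activation alive with a simple ping-pong plan for a sequential CNN, where only the current layer's input and output must be resident. The layer sizes are made up, and the scheme is a generic illustration, not the allocator developed in the dissertation.

    ```python
    # Illustrative activation-memory planning for a sequential CNN on a microcontroller.
    def peak_activation_bytes(layer_output_bytes, input_bytes):
        """Peak = max over layers of (bytes of the layer's input + bytes of its output)."""
        sizes = [input_bytes] + layer_output_bytes
        return max(sizes[i] + sizes[i + 1] for i in range(len(sizes) - 1))

    # Example: a tiny CNN with 8-bit activations, sizes in bytes per layer output.
    input_bytes = 32 * 32 * 3                      # 32x32 RGB input
    outputs = [16 * 16 * 8, 8 * 8 * 16, 4 * 4 * 32, 10]

    naive = input_bytes + sum(outputs)             # keeping every activation alive
    planned = peak_activation_bytes(outputs, input_bytes)
    print(naive, planned)                          # 6666 vs 5120 bytes in this toy example
    ```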

    Sensing Collectives: Aesthetic and Political Practices Intertwined

    Are aesthetics and politics really two different things? The book takes a new look at how they intertwine, turning from theory to practice. Case studies trace how sensory experiences are created and how collective interests are shaped. They investigate how aesthetics and politics are entangled, both in building and in disrupting collective orders, in governance and in innovation. The cases range from populist rallies and artistic activism, through alternative lifestyles and consumer culture, to corporate PR and governmental policies. The authors are academics and artists. The result is a new mapping of the intermingling and co-constitution of aesthetics and politics in engagements with collective orders.

    Improving efficiency and security of IIoT communications using in-network validation of server certificate

    The use of advanced communications and smart mechanisms in industry is growing rapidly, making cybersecurity a critical aspect. Currently, most industrial communication protocols rely on the Transport Layer Security (TLS) protocol to build their secure version, providing confidentiality, integrity and authentication. In the case of UDP-based communications, frequently used in Industrial Internet of Things (IIoT) scenarios, the counterpart of TLS is Datagram Transport Layer Security (DTLS), which includes mechanisms to deal with the high unreliability of the transport layer. However, the (D)TLS handshake is a heavy process, especially for resource-constrained IIoT devices, and security is frequently sacrificed in favour of performance. More specifically, the validation of digital certificates is an expensive process in terms of time and resource consumption. For this reason, digital certificates are not always properly validated by IIoT devices, including the verification of their revocation status; and when validation is performed, it introduces a significant delay in the communications. In this context, this paper presents the design and implementation of an in-network server certificate validation system that offloads this task from the constrained IIoT devices to a resource-richer network element, leveraging data plane programming (DPP). This approach enhances security, as it guarantees that a comprehensive server certificate verification is always performed. Additionally, it increases performance, as resource-expensive tasks are moved from IIoT devices to a resource-richer network element. Results show that the proposed solution reduces DTLS handshake times by 50–60 %. Furthermore, CPU use in IIoT devices is also reduced, resulting in an energy saving of about 40 % in such devices. This work was financially supported by the Spanish Ministry of Science and Innovation through the TRUE-5G project PID2019-108713RB-C54/AEI/10.13039/501100011033. It was also partially supported by the Ayudas Cervera para Centros Tecnológicos grant of the Spanish Centre for the Development of Industrial Technology (CDTI) under the project EGIDA (CER-20191012), and by the Basque Country Government under the ELKARTEK Program, project REMEDY - Real tiME control and embeddeD securitY (KK-2021/00091).
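
    The offloading control flow described above can be sketched in a few lines: the constrained DTLS client hands the server certificate to an in-network validator, which performs the full (and expensive) validation once and answers subsequent requests from a cache keyed by certificate fingerprint. Class and function names are hypothetical, and the paper's actual system is implemented with P4 data plane programming rather than in software.

    ```python
    # Conceptual sketch of offloading server-certificate validation from a DTLS client.
    import hashlib

    class InNetworkValidator:
        """Resource-rich network element that fully validates a certificate once
        (chain and revocation checks) and then answers from a cache."""

        def __init__(self, full_validation_fn):
            self._validate = full_validation_fn   # e.g., chain building + CRL/OCSP
            self._cache = {}                      # fingerprint -> bool verdict

        def check(self, cert_der: bytes) -> bool:
            fp = hashlib.sha256(cert_der).hexdigest()
            if fp not in self._cache:
                self._cache[fp] = self._validate(cert_der)
            return self._cache[fp]

    def constrained_client_handshake(server_cert_der: bytes, validator: InNetworkValidator):
        # Instead of parsing the chain and querying OCSP itself, the IIoT device only
        # asks the validator and aborts the DTLS handshake on a negative verdict.
        if not validator.check(server_cert_der):
            raise ConnectionError("server certificate rejected by in-network validator")
        return "continue DTLS handshake"

    # Usage with a stand-in validation routine that accepts everything:
    validator = InNetworkValidator(full_validation_fn=lambda der: True)
    print(constrained_client_handshake(b"example-der-bytes", validator))
    ```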

    Resilient and Scalable Forwarding for Software-Defined Networks with P4-Programmable Switches

    Traditional networking devices support only fixed features and limited configurability. Network softwarization leverages programmable software and hardware platforms to remove those limitations. In this context, the concept of programmable data planes allows the packet-processing pipeline of networking devices to be programmed directly and custom control plane algorithms to be created. This flexibility enables the design of novel networking mechanisms where the status quo struggles to meet the high demands of next-generation networks like 5G, the Internet of Things, cloud computing, and Industry 4.0. P4 is the most popular technology for implementing programmable data planes. However, programmable data planes, and in particular the P4 technology, emerged only recently. Thus, P4 support for some well-established networking concepts is still lacking, and several issues remain unsolved due to the different characteristics of programmable data planes compared to traditional networking. The research in this thesis focuses on two open issues of programmable data planes. First, it develops resilient and efficient forwarding mechanisms for the P4 data plane, as no satisfactory state-of-the-art best practices exist yet. Second, it enables BIER in high-performance P4 data planes. BIER is a novel, scalable, and efficient transport mechanism for IP multicast traffic which so far has only very limited support on high-performance forwarding platforms. The main results of this thesis are published as eight peer-reviewed publications and one post-publication peer-reviewed publication. The results cover the development of suitable resilience mechanisms for P4 data planes, the development and implementation of resilient BIER forwarding in P4, and extensive evaluations of all developed and implemented mechanisms. Furthermore, the results contain a comprehensive P4 literature study. Two more peer-reviewed papers contain additional content that is not directly related to the main results: they implement congestion avoidance mechanisms in P4 and develop a scheduling concept to find cost-optimized load schedules based on day-ahead forecasts.
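
    For context on the BIER mechanism the thesis brings to P4, the sketch below shows the core forwarding loop of RFC 8279 on an integer bitstring: each copy of the packet covers all egress routers reachable via one neighbor, and the corresponding bits are cleared so no neighbor receives duplicates. The BIFT contents are a made-up example; the thesis implements this logic in the P4 data plane, not in software.

    ```python
    # Illustrative BIER forwarding loop (RFC 8279) over an integer bitstring.
    def bier_forward(bitstring, bift, send):
        """bift maps bit position -> (neighbor, forwarding bitmask F-BM)."""
        remaining = bitstring
        while remaining:
            bit = remaining & -remaining                # lowest set bit
            pos = bit.bit_length() - 1
            neighbor, fbm = bift[pos]
            send(neighbor, remaining & fbm)             # copy covers all BFERs behind neighbor
            remaining &= ~fbm                           # never duplicate toward that neighbor

    # Example BIFT: bits 0-1 are reached via router A, bits 2-3 via router B.
    bift = {0: ("A", 0b0011), 1: ("A", 0b0011), 2: ("B", 0b1100), 3: ("B", 0b1100)}
    bier_forward(0b1011, bift, send=lambda nbr, bs: print(nbr, bin(bs)))
    # -> A 0b11    (bits 0 and 1)
    # -> B 0b1000  (bit 3 only)
    ```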