232 research outputs found

    Access Control Mechanisms in Named Data Networks: A Comprehensive Survey

    Information-Centric Networking (ICN) has recently emerged as a prominent candidate for the Future Internet Architecture (FIA), addressing issues with the host-centric communication model of the current TCP/IP-based Internet. Named Data Networking (NDN) is one of the most recent and active ICN architectures and provides a clean-slate approach to Internet communication. NDN provides intrinsic content security: security is applied directly to the content instead of to the communication channel. Among other security aspects, Access Control (AC) rules specify the privileges of the entities that may access content. In TCP/IP-based AC systems, owing to the client-server communication model, servers control which clients can access a particular content item. In contrast, ICN-based networks use content names to drive communication and decouple content from its original location. This decoupling leads to a loss of control over the content and raises challenges for realizing efficient AC mechanisms. To date, considerable effort has been made to develop various AC mechanisms in NDN. In this paper, we provide a detailed and comprehensive survey of the AC mechanisms in NDN. We follow a holistic approach towards AC in NDN: we first summarize the ICN paradigm, describe the shift from channel-based security to content-based security, and highlight the cryptographic algorithms and security protocols used in NDN. We then classify the existing AC mechanisms into two main categories: Encryption-based AC and Encryption-independent AC. Each category contains several classes based on the working principle of the AC scheme (e.g., Attribute-based AC, Name-based AC, Identity-based AC). Finally, we present the lessons learned from the existing AC mechanisms, identify the challenges of NDN-based AC at large, and highlight future research directions for the community.
    Comment: This paper has been accepted for publication in ACM Computing Surveys. The final version will be published by the ACM.
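    Name-based AC, one of the encryption-independent classes surveyed above, can be illustrated with a minimal sketch: a consumer is authorized for a content item if one of its granted name prefixes covers the content name component-wise. All names, prefixes, and function names below are hypothetical illustrations, not part of any surveyed scheme.

```python
# Minimal sketch of a name-based access control check over NDN-style
# hierarchical content names. Names and prefixes are illustrative.

def components(name: str) -> list:
    """Split an NDN-style name like '/videos/lectures/ndn101' into components."""
    return [c for c in name.split("/") if c]

def is_authorized(content_name: str, granted_prefixes: list) -> bool:
    """The consumer may fetch the content if any granted prefix covers the name."""
    name = components(content_name)
    for prefix in granted_prefixes:
        p = components(prefix)
        if len(p) <= len(name) and name[:len(p)] == p:
            return True
    return False

grants = ["/videos/lectures", "/docs/public"]
print(is_authorized("/videos/lectures/ndn101/seg0", grants))  # True
print(is_authorized("/videos/private/ndn101", grants))        # False
```

    Real name-based schemes additionally bind such authorizations to signed name certificates; the sketch only shows the prefix-matching core.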

    COIN@AAMAS2015

    COIN@AAMAS2015 is the nineteenth edition of the series. The fourteen papers included in these proceedings demonstrate the vitality of the community and provide the grounds for a solid workshop program and what we expect will be a most enjoyable and enriching debate.
    Peer reviewed

    Dependable Embedded Systems

    This Open Access book introduces readers to many new techniques for enhancing and optimizing reliability in embedded systems, which have emerged particularly within the last five years. It introduces the most prominent reliability concerns from today's point of view and briefly recapitulates the progress in the community so far. Unlike other books that focus on a single abstraction level, such as the circuit level or system level alone, this book addresses the different reliability challenges across levels, from the physical level all the way up to the system level (cross-layer approaches). The book aims to demonstrate how new hardware/software co-design solutions can effectively mitigate reliability degradation such as transistor aging, process variation, temperature effects, and soft errors. It provides readers with the latest insights into novel, cross-layer methods and models with respect to the dependability of embedded systems; describes cross-layer approaches that can improve reliability through techniques that are proactively designed with respect to techniques at other layers; and explains run-time adaptation and concepts/means of self-organization, in order to achieve error resiliency in complex, future many-core systems.

    Doctor of Philosophy

    The computing landscape is undergoing a major change, primarily enabled by ubiquitous wireless networks and the rapid increase in the use of mobile devices that access a web-based information infrastructure. It is expected that most intensive computing will happen either in servers housed in large datacenters (warehouse-scale computers), e.g., for cloud computing and other web services, or in many-core high-performance computing (HPC) platforms in scientific labs. The primary challenge in scaling such computing systems into the exascale realm is the efficient supply of large amounts of data to hundreds or thousands of compute cores, i.e., building an efficient memory system. Main memory systems are at an inflection point, due to the convergence of several major application and technology trends. Examples include the increasing importance of energy consumption, reduced access stream locality, increasing failure rates, limited pin counts, increasing heterogeneity and complexity, and the diminished importance of cost-per-bit. In light of these trends, the memory system requires a major overhaul. The key to architecting the next generation of memory systems is a combination of the prudent incorporation of novel technologies and a fundamental rethinking of certain conventional design decisions. In this dissertation, we study every major element of the memory system - the memory chip, the processor-memory channel, the memory access mechanism, and memory reliability - and identify the key bottlenecks to efficiency. Based on this, we propose a novel main memory system with the following innovative features: (i) overfetch-aware re-organized chips, (ii) low-cost silicon photonic memory channels, (iii) largely autonomous memory modules with a packet-based interface to the processor, and (iv) a RAID-based reliability mechanism. Such a system is energy-efficient, high-performance, low-complexity, reliable, and cost-effective, making it ideally suited to meet the requirements of future large-scale computing systems.
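    The RAID-based reliability mechanism in point (iv) can be illustrated with a single-parity scheme in the style of RAID-4/5: parity is the XOR of the data blocks, and any one lost block is reconstructed from the survivors plus parity. The chunk layout and function names below are illustrative, not the dissertation's actual design.

```python
# Sketch of single-parity (RAID-4/5 style) protection across memory chunks.
# Block sizes and the API are illustrative.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def make_parity(data_blocks):
    """Parity block protecting one erasure among the data blocks."""
    return xor_blocks(data_blocks)

def reconstruct(surviving_blocks, parity):
    """Recover the single missing data block from survivors plus parity."""
    return xor_blocks(surviving_blocks + [parity])

blocks = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = make_parity(blocks)
recovered = reconstruct([blocks[0], blocks[2]], parity)  # blocks[1] was lost
print(recovered == blocks[1])  # True
```

    Real memory-level RAID must also handle chip-granularity failures and parity-update traffic; the sketch shows only the erasure-coding core.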

    Data-Driven Network Management for Next-Generation Wireless Networks

    With the commercialization and maturity of fifth-generation (5G) wireless networks, the next-generation wireless network (NGWN) is envisioned to provide seamless connectivity for mobile user terminals (MUTs) and to support a wide range of new applications with stringent quality of service (QoS) requirements. In the NGWN, the network architecture will be highly heterogeneous due to the integration of terrestrial networks, satellite networks, and aerial networks formed by unmanned aerial vehicles (UAVs), and the network environment becomes highly dynamic because of the mobility of MUTs and the spatiotemporal variation of service demands. In order to provide high-quality services in such dynamic and heterogeneous networks, flexible, fine-grained, and adaptive network management will be essential. Recent advancements in deep learning (DL) and digital twins (DTs) have made it possible to enable data-driven solutions that support network management in the NGWN. DL methods can solve network management problems by leveraging data instead of explicit mathematical models, and DTs can facilitate DL methods by providing extensive data based on the full digital representations created for individual MUTs. Data-driven solutions that integrate DL and DTs can address complicated network management problems and exploit implicit network characteristics to adapt to dynamic network environments in the NGWN.
    However, the design of data-driven network management solutions in the NGWN faces several technical challenges: 1) how the NGWN can be configured to support multiple services with different spatiotemporal service demands while simultaneously satisfying their different QoS requirements; 2) how multi-dimensional network resources can be proactively reserved to support MUTs with different mobility patterns in a resource-efficient manner; and 3) how the heterogeneous NGWN components, including base stations (BSs), satellites, and UAVs, can jointly coordinate their network resources to support dynamic service demands. In this thesis, we develop efficient data-driven network management strategies in two stages, i.e., long-term network planning and real-time network operation, to address the above challenges in the NGWN. Firstly, we investigate planning-stage network configuration to satisfy different service requirements for communication services. We consider a two-tier network with one macro BS and multiple small BSs, which supports communication services with different spatiotemporal data traffic distributions. The objective is to maximize the energy efficiency of BSs by jointly configuring downlink transmission power and communication coverage for each BS. To achieve this objective, we first design a network planning scheme with flexible binary slice zooming, dual time-scale planning, and grid-based network planning. The scheme allows flexibility to differentiate the communication coverage and downlink transmission power of the same BS for different services while improving the temporal and spatial granularity of network planning. We formulate a combinatorial optimization problem in which communication coverage management and power control are mutually dependent.
To solve the problem, we propose a data-driven method with two steps: 1) we propose an unsupervised-learning-assisted approach to determine the communication coverage of BSs; and 2) we derive a closed-form solution for power control. Secondly, we investigate planning-stage resource reservation for a compute-intensive service to support MUTs with different mobility patterns. The MUTs can offload their computing tasks to the computing servers deployed at the core networks, gateways, and BSs. Each computing server requires both computing and storage resources to execute computing tasks. The objective is to optimize long-term resource reservation by jointly minimizing the usage of computing, storage, and communication resources and the cost from re-configuring resource reservation. To this end, we develop a data-driven network planning scheme with two elements, i.e., multi-resource reservation and resource reservation re-configuration. First, DTs are designed for collecting MUT status data, based on which MUTs are grouped according to their mobility patterns. Then, an optimization algorithm is proposed to customize resource reservation for different groups to satisfy their different resource demands. Last, a meta-learning-based approach is proposed to re-configure resource reservation for balancing the network resource usage and the re-configuration cost. Thirdly, we investigate operation-stage computing resource allocation in a space-air-ground integrated network (SAGIN). A UAV is deployed to fly around MUTs and collect their computing tasks, while scheduling the collected computing tasks to be processed at the UAV locally or offloaded to the nearby BSs or the remote satellite. The energy budget of the UAV, intermittent connectivity between the UAV and BSs, and dynamic computing task arrival pose challenges in computing task scheduling. 
    The objective is to design a real-time computing task scheduling policy that minimizes the delay of computing task offloading and processing in the SAGIN. To achieve the objective, we first formulate online computing task scheduling in the dynamic network environment as a constrained Markov decision process. Then, we develop a risk-sensitive reinforcement learning approach in which a risk value represents energy consumption that exceeds the budget. By balancing the risk value and the reward from delay minimization, the UAV can learn a task scheduling policy that minimizes task offloading and processing delay while satisfying the UAV energy constraint. Extensive simulations have been conducted to demonstrate that the proposed data-driven network management approach for the NGWN can achieve flexible BS configuration for multiple communication services, fine-grained multi-dimensional resource reservation for a compute-intensive service, and adaptive computing resource allocation in the dynamic SAGIN. The schemes developed in this thesis are valuable for data-driven network planning and operation in the NGWN.
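    The risk-sensitive trade-off described above can be sketched with a toy example: the reward from delay minimization is penalized by a risk value that grows when a decision's energy consumption exceeds the UAV's budget. The action set, cost numbers, and the linear risk term below are illustrative assumptions, not the thesis's exact model.

```python
# Toy sketch of a risk-adjusted objective for UAV computing task scheduling.
# Action costs, the budget, and the risk weight are illustrative.

def risk_adjusted_reward(delay, energy, budget, risk_weight):
    """Reward from delay minimization, penalized when energy exceeds the budget."""
    risk = max(0.0, energy - budget)        # risk value: budget overrun
    return -delay - risk_weight * risk

# (delay, energy) per scheduling decision for one collected task
actions = {
    "process_locally": (1.0, 5.0),   # fast but energy-hungry for the UAV
    "offload_to_bs":   (3.0, 1.0),   # slower, cheap for the UAV
    "offload_to_sat":  (6.0, 0.5),   # slowest, cheapest
}

budget, lam = 2.0, 2.0
best = max(actions, key=lambda a: risk_adjusted_reward(*actions[a], budget, lam))
print(best)  # offload_to_bs: risk-adjusted rewards are -7.0, -3.0, -6.0
```

    In the actual approach this trade-off is learned over a constrained MDP rather than evaluated greedily; the sketch only shows how the risk term reshapes the objective.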

    Trustworthiness in Mobile Cyber Physical Systems

    Computing and communication capabilities are increasingly embedded in diverse objects and structures in the physical environment, linking the ‘cyberworld’ of computing and communications with the physical world. These applications are called cyber physical systems (CPS). The increased involvement of real-world entities leads to a greater demand for trustworthy systems. Hence, we use the term "system trustworthiness" here to mean the ability to guarantee continuous service in the presence of internal errors or external attacks. Mobile CPS (MCPS) is a prominent subcategory of CPS in which the physical component has no permanent location. Mobile Internet devices already provide ubiquitous platforms for building novel MCPS applications. The objective of this Special Issue is to contribute to research on modern and future trustworthy MCPS, including their design, modeling, simulation, and dependability. It is imperative to address the issues that are critical to their mobility, report significant advances in the underlying science, and discuss the challenges of development and implementation in various applications of MCPS.

    Optimising Networks For Ultra-High Definition Video

    The increase in real-time ultra-high definition video services is a challenging issue for current network infrastructures. The high-bitrate traffic generated by ultra-high definition content reduces the effectiveness of current live video distribution systems. Transcoders and application layer multicasting (ALM) can reduce traffic in a video delivery system, but they are limited by the static nature of their implementations. To overcome the restrictions of current static video delivery systems, an OpenFlow-based migration system is proposed. This system enables an almost seamless migration of a transcoder or ALM node while delivering real-time ultra-high definition content. Further to this, a novel heuristic algorithm is presented to optimise control of the migration events and destinations. The combination of the migration system and heuristic algorithm provides an improved video delivery system, capable of migrating resources during operation with minimal disruption to clients. With the rise in popularity of consumer-based live streaming, it is necessary to develop and improve architectures that can support these new types of applications. Current architectures introduce a large delay to video streams, which presents issues for certain applications. In order to overcome this, an improved infrastructure for delivering real-time streams is also presented. The proposed system uses OpenFlow within a content delivery network (CDN) architecture in order to improve several aspects of current CDNs. Aside from the reduction in stream delay, other improvements include switch-level multicasting to reduce duplicate traffic and smart load balancing for server resources. Furthermore, a novel max-flow algorithm is also presented. This algorithm aims to optimise traffic within a system such as the proposed OpenFlow CDN, with a focus on distributing traffic across the network in order to reduce the probability of blocking.
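    A max-flow formulation of traffic distribution can be sketched with the classic Edmonds-Karp algorithm (BFS augmenting paths over a residual graph). The topology below is illustrative; the thesis's novel algorithm additionally focuses on spreading traffic across the network rather than only maximizing total flow.

```python
# Edmonds-Karp max-flow over a capacity graph given as nested dicts.
# The example topology is illustrative.
from collections import deque

def max_flow(cap, s, t):
    # residual capacities, with zero-capacity reverse edges added
    res = {u: dict(vs) for u, vs in cap.items()}
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for the shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # find the bottleneck along the path, then push flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

caps = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(caps, "s", "t"))  # 5
```

    In an OpenFlow CDN the edges would carry link capacities between switches and servers, and the computed flow decomposition would drive flow-table entries.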

    Adaptive Management of Multimodel Data and Heterogeneous Workloads

    Data management systems are facing a growing demand for tighter integration of heterogeneous data from different applications and sources, for both operational and analytical purposes, in real-time. However, the vast diversification of the data management landscape has led to a situation where there is a trade-off between high operational performance and tight integration of data. The gap between the growth of data volume and the growth of computational power demands a new approach for managing multimodel data and handling heterogeneous workloads. With PolyDBMS we present a novel class of database management systems that bridges the gap between multimodel database and polystore systems. This new kind of database system combines the operational capabilities of traditional database systems with the flexibility of polystore systems. This includes support for data modifications, transactions, and schema changes at runtime. With native support for multiple data models and query languages, a PolyDBMS presents a holistic solution for the management of heterogeneous data. This not only enables tight integration of data across different applications but also allows more efficient usage of resources. By leveraging and combining highly optimized database systems as storage and execution engines, this novel class of database systems takes advantage of decades of database systems research and development. In this thesis, we present the conceptual foundations and models for building a PolyDBMS. This includes a holistic model for maintaining and querying multiple data models in one logical schema that enables cross-model queries. With the PolyAlgebra, we present a solution for representing queries based on one or multiple data models while preserving their semantics. Furthermore, we introduce a concept for the adaptive planning and decomposition of queries across heterogeneous database systems with different capabilities and features.
    The conceptual contributions presented in this thesis materialize in Polypheny-DB, the first implementation of a PolyDBMS. Supporting the relational, document, and labeled property graph data models, Polypheny-DB is a suitable solution for structured, semi-structured, and unstructured data. This is complemented by an extensive type system that includes support for binary large objects. With support for multiple query languages, industry-standard query interfaces, and a rich set of domain-specific data stores and data sources, Polypheny-DB offers a flexibility unmatched by existing data management solutions.
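    The core idea of a PolyDBMS, one logical schema whose entities are backed by engines suited to their data model, can be sketched as a routing layer. The catalog, engine names, and dispatch interface below are illustrative stand-ins, not Polypheny-DB's actual APIs.

```python
# Sketch of PolyDBMS-style routing: each entity in the logical schema is
# placed on an engine matching its data model. All names are illustrative.

class Engine:
    def __init__(self, name, model):
        self.name, self.model = name, model

    def execute(self, entity, query):
        # A real engine would run the query; we just report the dispatch.
        return f"{self.name} executes {query!r} on {entity}"

class PolyCatalog:
    """Logical schema mapping entities to storage/execution engines."""
    def __init__(self):
        self.placement = {}

    def place(self, entity, engine):
        self.placement[entity] = engine

    def route(self, entity, query):
        # A real PolyDBMS planner would decompose cross-model queries;
        # here we simply dispatch to the entity's engine.
        return self.placement[entity].execute(entity, query)

catalog = PolyCatalog()
catalog.place("orders",  Engine("RowStore",   "relational"))
catalog.place("reviews", Engine("DocStore",   "document"))
catalog.place("social",  Engine("GraphStore", "graph"))
print(catalog.route("reviews", "find rating > 4"))
# DocStore executes 'find rating > 4' on reviews
```

    The interesting work in a real PolyDBMS lies in what this sketch omits: translating each query fragment into the engine's native language and combining partial results while preserving semantics.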

    Operating System Contribution to Composable Timing Behaviour in High-Integrity Real-Time Systems

    The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material, and schedule costs. Factoring functional, reusable logic into the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom up across it. We first characterize time composability without making assumptions on the system architecture or the software deployed to it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, an absolute premiere in the landscape of real-world kernels. Our implementation allows resource sharing to coexist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we shy away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from real-world case studies from the avionics industry.
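    The interaction between limited-preemptive execution and schedulability can be illustrated with standard fixed-priority response-time analysis, where a blocking term B accounts for the longest non-preemptive region of lower-priority tasks. The task parameters below are illustrative, and this textbook analysis is not the thesis's specific contribution.

```python
# Standard response-time analysis for fixed-priority scheduling with a
# blocking term B, as induced by limited-preemptive (non-preemptive-region)
# execution of lower-priority tasks. Task parameters are illustrative.
import math

def response_time(C, B, higher_prio):
    """Fixed-point iteration of R = C + B + sum(ceil(R / T_j) * C_j)."""
    R = C + B
    while True:
        R_next = C + B + sum(math.ceil(R / Tj) * Cj for Cj, Tj in higher_prio)
        if R_next == R:
            return R
        R = R_next

# Task under analysis: WCET C=2, blocked for up to B=1 time unit by a
# non-preemptive region below it; one higher-priority task with C=1, T=4.
print(response_time(2, 1, [(1, 4)]))  # 4
```

    Lengthening the non-preemptive regions raises B for higher-priority tasks but reduces preemption costs and history-dependence, which is the trade-off limited-preemptive scheduling navigates.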

    Machine Learning for Hand Gesture Classification from Surface Electromyography Signals

    Classifying hand gestures from Surface Electromyography (sEMG) is a process with applications in human-machine interaction, rehabilitation, and prosthetic control. The reduction in cost and increase in availability of the necessary hardware over recent years have made sEMG a more viable solution for hand gesture classification. The research challenge is the development of processes that robustly and accurately predict the current gesture from incoming sEMG data. This thesis presents a set of methods, techniques, and designs that improve upon both the evaluation of, and performance on, the classification problem as a whole. These are brought together to set a new baseline for classification. Evaluation is improved by careful choice of metrics and by the design of cross-validation techniques that account for data bias caused by common experimental practices. A landmark study is re-evaluated with these improved techniques, and it is shown that data augmentation can significantly improve performance with conventional classification methods. A novel neural network architecture and supporting improvements are presented that further improve performance; the network is refined so that it achieves similar performance with many fewer parameters than competing designs. Supporting techniques such as subject adaptation and smoothing algorithms are then explored to improve overall performance and to provide more nuanced trade-offs between aspects of performance such as incurred latency and prediction smoothness. A new study is presented that compares the performance potential of medical-grade electrodes and a low-cost commercial alternative, showing that for a modest-sized gesture set they can compete. The data is also used to explore data labelling in experimental design and to evaluate the numerous aspects of performance that must be traded off.
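    Two of the supporting techniques mentioned above, augmentation and prediction smoothing, can be sketched simply. The noise level and window length below are illustrative choices, not the thesis's tuned values, and the thesis's actual augmentation and smoothing algorithms may differ.

```python
# Sketch of two supporting techniques for sEMG gesture classification:
# Gaussian-noise augmentation of raw signal windows, and majority-vote
# smoothing of per-window predictions. Parameters are illustrative.
import random
from collections import Counter

def augment(window, sigma=0.05, seed=None):
    """Jitter an sEMG window with Gaussian noise to enlarge the training set."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in window]

def smooth(predictions, k=3):
    """Majority vote over the last k predictions to suppress isolated flips."""
    out = []
    for i in range(len(predictions)):
        window = predictions[max(0, i - k + 1): i + 1]
        out.append(Counter(window).most_common(1)[0][0])
    return out

preds = [0, 0, 1, 0, 0, 0, 1, 1, 1]
print(smooth(preds))  # [0, 0, 0, 0, 0, 0, 0, 1, 1]
```

    The smoothing window length is exactly the latency/smoothness trade-off the abstract refers to: a larger k removes more spurious flips but delays recognition of a genuine gesture change by up to k - 1 windows.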