26 research outputs found

    A Pervasive Computational Intelligence based Cognitive Security Co-design Framework for Hype-connected Embedded Industrial IoT

    The amplified connectivity of everyday IoT devices exposes numerous attack surfaces that cybercriminals can exploit to launch malicious attacks. These risks are further amplified by the resource limitations and heterogeneity of low-cost IoT/IIoT nodes, which make existing centrally managed, fixed perimeter-oriented security tools unsuitable for dynamic IoT environments. The presented emulation assessment demonstrates the benefits of implementing a context-aware, co-design-oriented cognitive security method in integrated IIoT settings and offers useful insights into strategy execution to guide future research. The innovative feature of our system is its ability to cope with intermittent network connectivity as well as node limitations in terms of scarce computational capacity, limited buffer space (at the edge node), and finite energy. Based on real-time analytical data, the proposed scheme selects the most suitable end-to-end security option that complies with a given set of node constraints. The paper achieves its goals by identifying gaps in the security specific to the node subclasses that are vital to our system's operations.
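
    As a rough illustration of the constraint-driven selection described above, the sketch below (in Python, with hypothetical profile names, costs, and units; not the paper's actual algorithm) picks the strongest end-to-end security profile that still fits a node's computation, buffer, and energy budgets.

```python
# Hypothetical sketch (not the paper's actual algorithm): choosing an
# end-to-end security profile that fits a node's resource constraints.
from dataclasses import dataclass

@dataclass
class SecurityProfile:
    name: str
    strength: int      # assumed relative protection score
    cpu_cost: float    # assumed compute units required
    buffer_cost: int   # bytes of edge-node buffer required
    energy_cost: float # assumed energy units per message

@dataclass
class NodeConstraints:
    cpu_budget: float
    buffer_budget: int
    energy_budget: float

def select_profile(candidates, node):
    """Return the strongest profile that satisfies all node constraints."""
    feasible = [p for p in candidates
                if p.cpu_cost <= node.cpu_budget
                and p.buffer_cost <= node.buffer_budget
                and p.energy_cost <= node.energy_budget]
    return max(feasible, key=lambda p: p.strength, default=None)

profiles = [
    SecurityProfile("aes128-hmac", strength=2, cpu_cost=5.0, buffer_cost=256, energy_cost=0.8),
    SecurityProfile("aes256-dtls", strength=3, cpu_cost=12.0, buffer_cost=1024, energy_cost=2.1),
    SecurityProfile("lightweight-mac", strength=1, cpu_cost=1.5, buffer_cost=64, energy_cost=0.2),
]
node = NodeConstraints(cpu_budget=6.0, buffer_budget=512, energy_budget=1.0)
print(select_profile(profiles, node).name)  # -> aes128-hmac
```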

    Consortium blockchain management with a peer reputation system for critical information sharing

    Applications based on blockchain technology are emerging to establish distributed trust among organizations that want to share critical information with their peers for mutual benefit. There is a growing need for consortium-based blockchain schemes that avoid issues such as false reporting and free riding, which impair cooperative behavior between multiple domains/entities. Specifically, customizable mechanisms need to be developed to set up and manage consortiums with economic models and cloud-based data storage schemes suited to various application requirements. In this MS Thesis, we address the above issues by proposing a novel consortium blockchain architecture and related protocols that allow critical information sharing using a reputation system that manages cooperation among peers using off-chain cloud data storage and on-chain transaction records. We show the effectiveness of our consortium blockchain management approach for two use cases: (i) threat information sharing for a cyber defense collaboration system, viz., DefenseChain, and (ii) protected data sharing in a healthcare information system, viz., HonestChain. DefenseChain features a consortium blockchain architecture to obtain threat data and select suitable peers to help with cyber attack (e.g., DDoS, Advanced Persistent Threat, Cryptojacking) detection and mitigation. As part of DefenseChain, we propose a novel economic model for the creation and sustenance of the consortium with peers through a reputation estimation scheme that uses 'Quality of Detection' and 'Quality of Mitigation' metrics. Similarly, HonestChain features a consortium blockchain architecture to allow protected data sharing between multiple domains/entities (e.g., health data service providers, hospitals, and research labs) with incentives and in a standards-compliant manner (e.g., HIPAA, common data model) to enable predictive healthcare analytics. Using an OpenCloud testbed configured with Hyperledger Composer as well as a simulation setup, our evaluation experiments for DefenseChain and HonestChain show that our reputation system outperforms state-of-the-art solutions and that our consortium blockchain approach is highly scalable. Includes bibliographical references (pages 45-52).
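
    A minimal sketch of how a 'Quality of Detection'/'Quality of Mitigation' based reputation update could look; the weights and the exponential smoothing below are assumptions for illustration, not DefenseChain's published formula.

```python
# Hypothetical sketch of a peer-reputation update combining 'Quality of
# Detection' (QoD) and 'Quality of Mitigation' (QoM); the weights and the
# exponential smoothing are illustrative assumptions.
def update_reputation(prev_reputation, qod, qom,
                      w_detection=0.6, w_mitigation=0.4, smoothing=0.3):
    """Blend the latest interaction score into a peer's running reputation.

    qod, qom: scores in [0, 1] reported for the latest detection/mitigation task.
    smoothing: weight given to the newest observation (exponential moving average).
    """
    interaction_score = w_detection * qod + w_mitigation * qom
    return (1 - smoothing) * prev_reputation + smoothing * interaction_score

# A peer that detects well but mitigates poorly gains reputation only slowly.
rep = 0.5
for qod, qom in [(0.9, 0.4), (0.95, 0.5), (0.8, 0.3)]:
    rep = update_reputation(rep, qod, qom)
print(round(rep, 3))
```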

    Exploring Computing Continuum in IoT Systems: Sensing, Communicating and Processing at the Network Edge

    As the Internet of Things (IoT), which originally comprised only a few simple sensing devices, reaches 34 billion units by the end of 2020, these devices can no longer be regarded as mere monitoring sensors. IoT capabilities have improved in recent years as relatively large internal computation and storage capacity have become a commodity. In the early days of IoT, processing and storage were typically performed in the cloud. New IoT architectures are able to perform complex tasks directly on-device, thus enabling the concept of an extended computational continuum. Real-time critical scenarios, e.g., autonomous vehicle sensing, area surveying, or disaster rescue and recovery, require all the actors involved to coordinate and collaborate toward a common goal without human interaction, sharing data and resources even in areas with only intermittent network coverage. This poses new problems in distributed systems, resource management, and device orchestration, as well as data processing. This work proposes a new orchestration and communication framework, namely CContinuum, designed to manage resources in heterogeneous IoT architectures across multiple application scenarios. This work focuses on two key sustainability macro-scenarios: (a) environmental sensing and awareness, and (b) electric mobility support. In the first case, a mechanism to measure air quality over a long period of time for different applications at global scale (3 continents, 4 countries) is introduced. The system has been developed in-house, from the sensor design to the mist-computing operations performed by the nodes. In the second scenario, a technique to transmit large amounts of fine-time-granularity battery data from a moving vehicle to a control center is proposed, jointly with the ability to allocate tasks on demand within the computing continuum.
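
    The on-demand task allocation mentioned above could, in a simplified form, look like the following sketch; the node tiers, capacities, and latencies are illustrative assumptions rather than CContinuum's actual scheduler.

```python
# Minimal sketch, not CContinuum's actual scheduler: greedily place tasks on
# the continuum node (mist/edge/cloud) with enough free capacity and the
# lowest latency. Node names and capacities are illustrative assumptions.
nodes = {
    "mist-sensor":  {"free_cpu": 1,  "latency_ms": 2},
    "edge-gateway": {"free_cpu": 4,  "latency_ms": 10},
    "cloud":        {"free_cpu": 64, "latency_ms": 120},
}

def place(task_cpu, nodes):
    """Pick the lowest-latency node that can still fit the task."""
    candidates = [(n, v) for n, v in nodes.items() if v["free_cpu"] >= task_cpu]
    if not candidates:
        return None
    name, node = min(candidates, key=lambda kv: kv[1]["latency_ms"])
    node["free_cpu"] -= task_cpu
    return name

tasks = [1, 2, 1, 8]  # CPU units required by each task (assumed)
print([place(t, nodes) for t in tasks])
# -> ['mist-sensor', 'edge-gateway', 'edge-gateway', 'cloud']
```

    The same greedy rule naturally pushes small, latency-sensitive work toward the mist and edge tiers while large jobs overflow to the cloud, which is the intuition behind an extended computing continuum.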

    Cloud Computing: Challenges And Risk Management Framework

    Cloud-computing technology has developed rapidly. It can be found in a wide range of social, business and computing applications. Cloud computing is expected to transform the Internet into a new computing and collaboration platform. It is a business model that achieves purchase on demand and pay-per-use over the network. Many competitors, organizations and companies in the industry have jumped into cloud computing and implemented it. Cloud computing provides benefits such as convenience, reduced cost and high scalability. But despite all of these advantages, many enterprises, individual users and organizations still have not deployed this innovative technology. Several reasons lead to this problem; however, the main concerns are related to security, privacy and trust. Low trust between users and cloud computing providers has been reported in the literature.

    Discrete Event Simulations

    Considered by many authors to be a technique for modelling stochastic, dynamic and discretely evolving systems, discrete event simulation (DES) has gained widespread acceptance among practitioners who want to represent and improve complex systems. Since DES is applied in widely different areas, this book reflects many different points of view about DES; each author describes how it is understood and applied within their context of work, providing an extensive understanding of what DES is. It can be said that the name of the book itself reflects the plurality of these points of view. The book embraces a number of topics covering theory, methods and applications to a wide range of sectors and problem areas, which have been categorised into five groups. Besides the previously mentioned variety of points of view concerning DES, one additional thing should be remarked about this book: its richness in actual data and analyses based on actual data. While most academic areas lack application cases, roughly half of the chapters included in this book deal with actual problems or are at least based on actual data. Thus, the editor firmly believes that this book will be of interest to both beginners and practitioners in the area of DES.
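
    For readers new to the technique, a textbook-style minimal event loop conveys the core of DES: a priority queue of timestamped events drives a simulated clock, here for a single-server queue. This sketch is generic and not drawn from any chapter of the book.

```python
# A textbook-style minimal discrete event simulation: a priority queue of
# timestamped events ("arrival", "departure") drives the simulated clock.
import heapq, random

def mm1_served(sim_end=100.0, arrival_rate=0.8, service_rate=1.0, seed=1):
    """Simulate a single-server queue and return how many customers were served."""
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]  # (time, kind)
    clock, queue_len, server_busy, served = 0.0, 0, False, 0
    while events:
        clock, kind = heapq.heappop(events)   # advance to the next event
        if clock > sim_end:
            break
        if kind == "arrival":
            # Schedule the next arrival, then either queue or start service.
            heapq.heappush(events, (clock + rng.expovariate(arrival_rate), "arrival"))
            if server_busy:
                queue_len += 1
            else:
                server_busy = True
                heapq.heappush(events, (clock + rng.expovariate(service_rate), "departure"))
        else:  # departure: free the server or pull the next customer
            served += 1
            if queue_len > 0:
                queue_len -= 1
                heapq.heappush(events, (clock + rng.expovariate(service_rate), "departure"))
            else:
                server_busy = False
    return served

print(mm1_served())
```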

    Minimal deployable endpoint-driven network forwarding: principle, designs and applications

    Networked systems now have a significant impact on human lives: the Internet, connecting the world globally, is the foundation of our information age; data centers, running hundreds of thousands of servers, drive the era of cloud computing; and even the Tor project, a networked system providing online anonymity, now serves millions of daily users. Guided by the end-to-end principle, many computer networks have been designed with a simple and flexible core offering a general data transfer service, whereas the bulk of the application-level functionality has been implemented on endpoints attached to the edge of the network. Although the end-to-end design principle has given these networked systems tremendous success, a number of new requirements have emerged for computer networks and their applications, including the untrustworthiness of endpoints, the privacy requirements of endpoints, more demanding applications, the rise of third-party intermediaries, and the asymmetric capabilities of endpoints. These emerging requirements have created various challenges in different networked systems. To address these challenges, there are no obvious solutions without adding in-network functions to the network core. However, no design principle has ever been proposed for guiding the implementation of in-network functions. In this thesis, we propose the first such principle and apply it to propose four designs in three different networked systems that address four separate challenges. We demonstrate through detailed implementation and extensive evaluations that the proposed principle can live in harmony with the end-to-end principle, and that a combination of the two principles offers more complete, effective and accurate guidance for innovating modern computer networks and their applications.

    Game-Theoretic Foundations for Forming Trusted Coalitions of Multi-Cloud Services in the Presence of Active and Passive Attacks

    The prominence of cloud computing as a common paradigm for offering Web-based services has led to an unprecedented proliferation in the number of services that are deployed in cloud data centers. In parallel, services' communities and cloud federations have gained increasing interest in recent years due to their ability to address the discovery, composition, and resource-scaling issues in large-scale service markets. The problem is that the existing community and federation formation solutions deal with services as traditional software systems and overlook the fact that these services are often being offered as part of the cloud computing technology, which poses additional challenges at the architectural, business, and security levels. The motivation of this thesis stems from four main observations/research gaps that we have drawn through our literature reviews and/or experiments, which are: (1) leading cloud services such as Google and Amazon do not have incentives to group themselves into communities/federations using the existing community/federation formation solutions; (2) it is quite difficult to find a central entity that can manage the community/federation formation process in a multi-cloud environment; (3) if we allow services to rationally select their communities/federations without considering their trust relationships, these services might have incentives to structure themselves into communities/federations consisting of a large number of malicious services; and (4) the existing intrusion detection solutions in the domain of cloud computing are still ineffective in capturing advanced multi-type distributed attacks initiated by communities/federations of attackers since they overlook the attackers' strategies in their design and ignore the cloud system's resource constraints. This thesis aims to address these gaps by (1) proposing a business-oriented community formation model that accounts for the business potential of the services in the formation process to motivate the participation of services of all business capabilities, (2) introducing an inter-cloud trust framework that allows services deployed in one or disparate cloud centers to build credible trust relationships toward each other, while overcoming collusion attacks that aim to mislead trust results even in extreme cases wherein attackers form the majority, (3) designing a trust-based game-theoretical model that enables services to distributively form trustworthy multi-cloud communities wherein the number of malicious services is minimal, (4) proposing an intra-cloud trust framework that allows the cloud system to build credible trust relationships toward the guest Virtual Machines (VMs) running cloud-based services using objective and subjective trust sources, (5) designing and solving a trust-based maxmin game-theoretical model that allows the cloud system to optimally distribute the detection load among VMs within a limited budget of resources, while considering Distributed Denial of Service (DDoS) attacks as a practical scenario, and (6) putting forward a resource-aware comprehensive detection and prevention system that is able to capture and prevent advanced simultaneous multi-type attacks within a limited amount of resources. We conclude the thesis by uncovering some persisting research gaps that need further study and investigation in the future.
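
    As a rough illustration of the intent behind point (5), the following sketch greedily spreads a limited detection budget over VMs so that the riskiest, least-covered VM is always raised first; the trust scores, the coverage model, and the greedy max-min allocation are assumptions for illustration, not the thesis's actual game-theoretic solution.

```python
# Illustrative sketch (not the thesis's exact maxmin game formulation):
# greedily spread a limited detection budget over VMs so that the VM with
# the lowest risk-weighted coverage always receives the next unit.
def maxmin_allocate(vm_trust, total_budget, unit=1.0):
    """vm_trust: trust scores in (0, 1); lower trust means higher risk.

    Coverage of a VM is modeled (assumption) as allocation / risk, with
    risk = 1 - trust, so riskier VMs need more budget for equal coverage.
    """
    risk = {vm: 1.0 - t for vm, t in vm_trust.items()}
    alloc = {vm: 0.0 for vm in vm_trust}
    remaining = total_budget
    while remaining >= unit:
        # Raise the currently worst-covered VM first (max-min fairness).
        worst = min(vm_trust, key=lambda vm: alloc[vm] / risk[vm])
        alloc[worst] += unit
        remaining -= unit
    return alloc

vms = {"vm-a": 0.9, "vm-b": 0.5, "vm-c": 0.2}  # assumed trust scores
print(maxmin_allocate(vms, total_budget=10))
# Most of the budget flows to vm-c, the least trusted VM.
```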

    Improving Large-Scale Network Traffic Simulation with Multi-Resolution Models

    Simulating a large-scale network like the Internet is a challenging undertaking because of the sheer volume of its traffic. Packet-oriented representation provides high-fidelity details but is computationally expensive; fluid-oriented representation offers high simulation efficiency at the price of losing packet-level details. Multi-resolution modeling techniques exploit the advantages of both representations by integrating them in the same simulation framework. This dissertation presents solutions to problems regarding the efficiency, accuracy, and scalability of the traffic simulation models in this framework. The "ripple effect" is a well-known problem inherent in event-driven fluid-oriented traffic simulation, causing an explosion of fluid rate changes. Integrating multi-resolution traffic representations requires estimating arrival rates of packet-oriented traffic, calculating the queueing delay upon a packet arrival, and computing the packet loss rate under buffer overflow. Real-time simulation of a large or ultra-large network demands efficient background traffic simulation. The dissertation includes a rate smoothing technique that provably mitigates the "ripple effect", an accurate and efficient approach that integrates traffic models at multiple abstraction levels, a sequential algorithm that achieves real-time simulation of the coarse-grained traffic in a network with 3 tier-1 ISP (Internet Service Provider) backbones using an ordinary PC, and a highly scalable parallel algorithm that simulates network traffic at coarse time scales.
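
    One piece of the multi-resolution integration described above, estimating the queueing delay and loss seen by a foreground packet arriving at a queue fed by fluid-modeled background traffic, can be sketched as follows; the linear fluid-queue update is a standard approximation and not necessarily the dissertation's exact formulation.

```python
# Hedged sketch of hybrid packet/fluid integration: background traffic is a
# piecewise-constant fluid rate, and packet-level arrivals sample the queue.
class HybridQueue:
    def __init__(self, service_rate_bps, buffer_bytes):
        self.service_rate = service_rate_bps / 8.0   # bytes per second
        self.buffer = buffer_bytes
        self.backlog = 0.0                            # queued bytes
        self.last_update = 0.0
        self.fluid_rate = 0.0                         # background bytes/sec

    def set_fluid_rate(self, now, rate_bps):
        self._advance(now)
        self.fluid_rate = rate_bps / 8.0

    def _advance(self, now):
        # Backlog grows/shrinks linearly between fluid rate-change events.
        dt = now - self.last_update
        net = (self.fluid_rate - self.service_rate) * dt
        self.backlog = min(self.buffer, max(0.0, self.backlog + net))
        self.last_update = now

    def packet_arrival(self, now, size_bytes):
        """Return (queueing_delay_s, dropped) for a packet-level arrival."""
        self._advance(now)
        if self.backlog + size_bytes > self.buffer:
            return None, True                         # buffer overflow -> loss
        delay = self.backlog / self.service_rate
        self.backlog += size_bytes
        return delay, False

q = HybridQueue(service_rate_bps=10e6, buffer_bytes=64_000)
q.set_fluid_rate(0.0, rate_bps=12e6)      # background briefly exceeds capacity
print(q.packet_arrival(0.02, 1500))       # delay reflects the fluid backlog
```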