
    APCN: A Scalable Architecture for Balancing Accountability and Privacy in Large-scale Content-based Networks

    This is the author accepted manuscript. The final version is available from Elsevier via the DOI in this record. Balancing accountability and privacy has become extremely important in cyberspace, where the Internet has evolved to be dominated by content transmission. Several research efforts have been devoted to either accountability or privacy protection, but none has managed to consider both factors in content-based networks. An efficient solution is therefore urgently demanded by service and content providers. However, proposing such a solution is very challenging, because the following questions need to be considered simultaneously: (1) How can the conflict between privacy and accountability be avoided? (2) How is content identified and accountability performed based on packets belonging to that content? (3) How can the scalability issue be alleviated for massive content accountability in large-scale networks? To address these questions, we propose the first scalable architecture for balancing Accountability and Privacy in large-scale Content-based Networks (APCN). In particular, an innovative method for identifying content is proposed to effectively distinguish content issued by different senders and from different flows, enabling the accountability of a content object based on any of its packets. Furthermore, a new double-delegate design (i.e., source and local delegates) is proposed to improve performance and alleviate the scalability issue of content accountability in large-scale networks. Extensive NS-3 experiments with real traces are conducted to validate the efficiency of the proposed APCN. The results demonstrate that APCN outperforms existing related solutions in terms of lower round-trip time and higher cache hit rate under different network configurations. Funding: National Key R&D Program of China; National Science and Technology Major Project of the Ministry of Science and Technology of China; National Natural Science Foundation of China.
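
    The abstract does not give APCN's exact construction, but a minimal sketch of packet-level content identification conveys the idea: bind a content object to its sender and flow with a collision-resistant hash, and stamp the resulting ID into every packet so that any single packet suffices for accountability. The hash choice, field layout, and ID length below are illustrative assumptions, not the paper's design.

```python
# Hypothetical sketch of per-packet content identification in the spirit
# of APCN; all names and parameters are assumptions for illustration.
import hashlib

def content_id(content_digest: bytes, sender_id: bytes, flow_id: bytes) -> bytes:
    # Binding sender and flow into the ID distinguishes the same content
    # issued by different senders or carried in different flows.
    return hashlib.sha256(content_digest + sender_id + flow_id).digest()[:16]

def make_packet(cid: bytes, seq: int, payload: bytes) -> dict:
    # Every packet carries the content ID, so accountability can be
    # performed from any one packet of the content.
    return {"cid": cid, "seq": seq, "payload": payload}
```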

    Dynamic services in mobile ad hoc networks

    The increasing diffusion of wireless-enabled portable devices is pushing toward the design of novel service scenarios, promoting temporary and opportunistic interactions in infrastructure-less environments. Mobile Ad Hoc Networks (MANETs) are the general model of these highly dynamic networks, which can be specialized, depending on the application case, into more specific and refined models such as Vehicular Ad Hoc Networks and Wireless Sensor Networks. Two deployment cases are of increasing relevance: resource diffusion among users equipped with portable devices, such as laptops, smartphones, or PDAs in crowded areas (termed dense MANET), and dissemination/indexing of monitoring information collected in Vehicular Sensor Networks. The extreme dynamicity of these scenarios calls for novel distributed protocols and services that facilitate application development. To this aim we have designed middleware solutions supporting these challenging tasks. REDMAN manages, retrieves, and disseminates replicas of software resources in dense MANETs; it implements novel lightweight protocols to maintain a desired replication degree despite participant mobility and to perform resource retrieval efficiently. REDMAN exploits the high-density assumption to achieve scalability and limited network overhead. Sensed-data gathering and distributed indexing in Vehicular Networks raise similar issues: we propose a specific middleware support, called MobEyes, that exploits node mobility to opportunistically diffuse data summaries among neighboring vehicles. MobEyes creates a low-cost opportunistic distributed index to query the distributed storage and to determine the location of needed information. Extensive validation and testing of REDMAN and MobEyes prove the effectiveness of our original solutions in limiting communication overhead while maintaining the required accuracy of replication degree and indexing completeness, and demonstrate the feasibility of the middleware approach.
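
    REDMAN's protocols are not detailed in the abstract; the sketch below only illustrates the kind of lightweight local rule that can hold a replication degree near a target despite churn. The decision rule and its parameters are assumptions, not REDMAN's actual protocol.

```python
# Illustrative local rule for replication-degree maintenance: each node
# compares the replica count it observes in its neighborhood against the
# desired degree and acts locally (hypothetical, not REDMAN's protocol).
def replication_action(observed_replicas: int, target_degree: int,
                       holds_replica: bool) -> str:
    if observed_replicas < target_degree and not holds_replica:
        return "replicate"  # under-replicated: fetch and store a copy
    if observed_replicas > target_degree and holds_replica:
        return "drop"       # over-replicated: free local storage
    return "keep"           # degree is on target: do nothing
```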

    Digital provenance - models, systems, and applications

    Data provenance refers to the history of creation and manipulation of a data object and is widely used in various application domains including scientific experiments, grid computing, file and storage systems, and streaming data. However, existing provenance systems operate at a single layer of abstraction (workflow/process/OS) at which they record and store provenance, whereas provenance captured from different layers provides the highest benefit when integrated through a unified provenance framework. To build such a framework, a comprehensive provenance model able to represent the provenance of data objects with various semantics and granularity is the first step. In this thesis, we propose such a comprehensive provenance model and present an abstract schema of the model. We further explore secure provenance solutions for distributed systems, namely streaming data, wireless sensor networks (WSNs), and virtualized environments. We design a customizable file provenance system with an application to the provenance infrastructure for virtualized environments. The system supports automatic collection and management of file provenance metadata, characterized by our provenance model. Based on the proposed provenance framework, we devise a mechanism for detecting data exfiltration attacks in a file system. We then move to secure provenance communication in streaming environments and propose two secure provenance schemes focusing on WSNs. The basic provenance scheme is extended to detect packet-dropping adversaries on the data flow path over a period of time. We also consider the issue of attack recovery and present an extensive incident response and prevention system specifically designed for WSNs.
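
    As a rough illustration of what a unified, layer-aware provenance record might look like, the sketch below hash-chains records for tamper evidence, a common basis for secure provenance. The schema is a guess for illustration, not the thesis's abstract schema.

```python
# Minimal sketch of a layered, hash-chained provenance record; the field
# names are illustrative assumptions, not the thesis's actual model.
import hashlib
import json
import time

def make_record(obj_id: str, operation: str, layer: str, prev_hash: str) -> dict:
    rec = {
        "object": obj_id,        # the data object this record describes
        "operation": operation,  # e.g. "create", "transform", "read"
        "layer": layer,          # abstraction layer: workflow, process, or OS
        "timestamp": time.time(),
        "prev": prev_hash,       # link to the previous record's hash
    }
    # Chaining each record to its predecessor makes undetected tampering
    # with the recorded history harder.
    rec["hash"] = hashlib.sha256(
        json.dumps(rec, sort_keys=True).encode()).hexdigest()
    return rec
```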

    Verifying RADAR Data Using Two-Dimensional QIM-based Data Hiding

    Modern vehicles have evolved to support advanced internal networks, connecting System Based Chips (SBC), System in a Package (SiP) solutions, or traditional microcontrollers to foster an electronic ecosystem for high-speed data transfers, precision, and real-time control. Controller Area Networks (CAN) are widely adopted as the backbone of internal vehicle communication infrastructure. Automotive applications such as ADAS, autonomous driving, battery management systems, powertrain systems, telematics, and infotainment all utilize CAN transmissions directly or through gateway management. These network transmissions lack robust integrity verification mechanisms to validate authentic data payloads, making them vulnerable to packet replay, spoofing, insertion, deletion, and denial-of-service attacks. Other methods exist to secure network data, such as traditional cryptography, but they increase computational complexity, processing latency, and overall system cost. This thesis proposes a robust, lightweight, and adaptive solution to validate the authenticity of automotive sensor data carried over the CAN network protocol. We propose using a two-dimensional Quantization Index Modulation (QIM) data hiding technique to create a means of verification. The proposed framework is analyzed in a sensor transmission scenario for RADAR sensors in an autonomous vehicle setting. The detection and effects of distortion on the application are tested through the implementation of sensor fusion algorithms, and the results are observed and analyzed. The proposed framework offers a needed capability to maintain transmission integrity without compromising data quality, at low design complexity. The framework could also be applied to different network architectures, and its operational scope could be modified to operate with more abstract types of data. MSE, Electrical Engineering, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/167354/1/Brandon Fedoruk - Final Thesis.pd
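
    The core QIM operation is standard: embed a bit by quantizing a sample onto one of two interleaved lattices, and detect by asking which lattice the received sample lies closest to. The sketch below shows the scalar version; the step size and the pairing of samples into two dimensions are assumptions, as the thesis's exact parameters are not given in the abstract.

```python
# Minimal scalar QIM embed/detect; a two-dimensional variant applies the
# same idea jointly over sample pairs. DELTA is an illustrative step size:
# larger values are more robust to noise but distort the samples more.
import numpy as np

DELTA = 4.0

def qim_embed(samples: np.ndarray, bits: np.ndarray) -> np.ndarray:
    # Bit 0 quantizes to multiples of DELTA; bit 1 to the lattice
    # shifted by DELTA / 2.
    dither = np.where(bits == 1, DELTA / 2, 0.0)
    return np.round((samples - dither) / DELTA) * DELTA + dither

def qim_detect(received: np.ndarray) -> np.ndarray:
    # Decode each sample by choosing the nearer of the two lattices.
    d0 = np.abs(received - np.round(received / DELTA) * DELTA)
    q1 = np.round((received - DELTA / 2) / DELTA) * DELTA + DELTA / 2
    d1 = np.abs(received - q1)
    return (d1 < d0).astype(int)
```

    A verifier that knows the step size recovers the hidden bits and can flag payloads whose extracted bits fail an integrity check, which is the verification role the thesis assigns to QIM.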

    A survey of defense mechanisms against distributed denial of service (DDoS) flooding attacks

    Distributed Denial of Service (DDoS) flooding attacks are one of the biggest concerns for security professionals. DDoS flooding attacks are typically explicit attempts to disrupt legitimate users' access to services. Attackers usually gain access to a large number of computers by exploiting their vulnerabilities to set up attack armies (i.e., botnets). Once an attack army has been set up, an attacker can invoke a coordinated, large-scale attack against one or more targets. Developing a comprehensive defense mechanism against identified and anticipated DDoS flooding attacks is a desired goal of the intrusion detection and prevention research community. However, the development of such a mechanism requires a comprehensive understanding of the problem and of the techniques that have been used thus far in preventing, detecting, and responding to various DDoS flooding attacks. In this paper, we explore the scope of the DDoS flooding attack problem and the attempts to combat it. We categorize the DDoS flooding attacks and classify existing countermeasures based on where and when they prevent, detect, and respond to DDoS flooding attacks. Moreover, we highlight the need for a comprehensive distributed and collaborative defense approach. Our primary intention for this work is to stimulate the research community into developing creative, effective, efficient, and comprehensive prevention, detection, and response mechanisms that address the DDoS flooding problem before, during, and after an actual attack.
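
    The survey's two classification axes (where a countermeasure is deployed and when it acts) lend themselves to a simple structured encoding. The example entry below is generic and hypothetical, not drawn from the survey's actual tables.

```python
# Hypothetical encoding of the survey's classification axes for DDoS
# flooding countermeasures; the example values are illustrative only.
defense = {
    "name": "ingress/egress filtering",
    "where": "source network",    # vs. intermediate / destination / distributed
    "when": "before the attack",  # vs. during / after
    "role": "prevention",         # vs. detection / response
}
```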

    Energy efficient security and privacy management in sensor clouds

    Sensor Cloud is a new model of computing for Wireless Sensor Networks that facilitates resource sharing and enables large-scale sensor networks. A multi-user distributed system where resources are shared, however, has inherent challenges in security and privacy. The data generated by the wireless sensors in a sensor cloud need to be protected against adversaries, which may be outsiders as well as insiders. Similarly, the code disseminated to the sensors by the sensor cloud needs to be protected against inside and outside adversaries. Moreover, since the wireless sensors cannot support complex, energy-intensive measures, the security and privacy of the data and the code have to be attained by way of lightweight algorithms. In this work, we first present two data aggregation algorithms, one based on an Elliptic Curve Cryptosystem (ECC) and the other based on a symmetric key system, which provide confidentiality and integrity of data against an outside adversary and privacy against an in-network adversary. A fine-grained access control scheme that works on the securely aggregated data is presented next; it uses Attribute Based Encryption (ABE) to achieve this objective. Finally, to securely and efficiently disseminate code in the sensor cloud, we present a code dissemination algorithm that first reduces the amount of code to be transmitted from the base station and then uses symmetric proxy re-encryption along with Bloom filters and HMACs to protect the code against eavesdropping and false code injection attacks.
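
    Of the building blocks named in the abstract, the Bloom filter is easy to make concrete. A minimal sketch follows; the filter size, hash count, and hashing scheme are assumptions, and the actual scheme additionally combines proxy re-encryption and HMACs.

```python
# Minimal Bloom filter sketch (sizes and hashing are illustrative). In a
# code-dissemination setting, a filter like this can cheaply summarize
# the set of valid code blocks a node expects to receive.
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 3):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions by hashing the item with distinct prefixes.
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        # May return false positives, never false negatives.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```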

    Hardware-Aware Algorithm Designs for Efficient Parallel and Distributed Processing

    The introduction and widespread adoption of the Internet of Things, together with emerging new industrial applications, bring new requirements in data processing. Specifically, the need for timely processing of data that arrives at high rates creates a challenge for the traditional cloud computing paradigm, where data collected at various sources is sent to the cloud for processing. As an approach to this challenge, processing algorithms and infrastructure are distributed from the cloud to multiple tiers of computing, closer to the sources of data. This creates a wide range of devices for algorithms to be deployed on and software designs to adapt to.

    In this thesis, we investigate how hardware-aware algorithm designs on a variety of platforms lead to algorithm implementations that efficiently utilize the underlying resources. We design, implement, and evaluate new techniques for representative applications that involve the whole spectrum of devices, from resource-constrained sensors in the field to highly parallel servers. At each tier of processing capability, we identify key architectural features that are relevant for applications and propose designs that make use of these features to achieve high-rate, timely, and energy-efficient processing.

    In the first part of the thesis, we focus on high-end servers and utilize two main approaches to achieve high-throughput processing: vectorization and thread parallelism. We employ vectorization for the case of pattern matching algorithms used in security applications. We show that re-thinking the design of algorithms to better utilize the resources available in the platforms they are deployed on, such as vector processing units, can bring significant speedups in processing throughput. We then show how thread-aware data distribution and proper inter-thread synchronization allow scalability, especially for the problem of high-rate network traffic monitoring. We design a parallelization scheme for sketch-based algorithms that summarize traffic information, which allows them to handle incoming data at high rates and to answer queries on that data efficiently, without overheads.

    In the second part of the thesis, we target the intermediate tier of computing devices and focus on typical examples of the hardware found there. We show how single-board computers with embedded accelerators can be used to handle the computationally heavy part of applications, and showcase this specifically for pattern matching in security-related processing. We further identify key hardware features that affect the performance of pattern matching algorithms on such devices, present a co-evaluation framework to compare algorithms, and design a new algorithm that efficiently utilizes the hardware features.

    In the last part of the thesis, we shift the focus to the low-power, resource-constrained tier of processing devices. We target wireless sensor networks and study distributed data processing algorithms where the processing happens on the same devices that generate the data. Specifically, we focus on a continuous monitoring algorithm (geometric monitoring) that aims to minimize communication between nodes. By deploying that algorithm under realistic conditions, we demonstrate that the interplay between the network protocol and the application plays an important role in this tier of devices. Based on that observation, we co-design a continuous monitoring application with a modern network stack and augment it further with an in-network aggregation technique. In this way, we show that awareness of the underlying network stack is important to realize the full potential of the continuous monitoring algorithm.

    The techniques and solutions presented in this thesis contribute to better utilization of hardware characteristics across a wide spectrum of platforms. We employ these techniques on problems that are representative examples of current and upcoming applications, and contribute an outlook of emerging possibilities that can build on the results of the thesis.
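
    Geometric monitoring's key step can be sketched compactly: each node tests whether the monitored function can cross its threshold anywhere inside a ball determined by the shared global estimate and the node's local drift, and communicates only on a violation. The sampling-based test below is a coarse illustrative approximation (exact tests exist for specific functions), and all names are assumptions rather than the thesis's implementation.

```python
# Illustrative local-violation test in the spirit of geometric monitoring;
# the ball construction is standard, while the boundary-sampling check is
# a rough approximation used here only for illustration.
import numpy as np

def local_violation(estimate: np.ndarray, drift: np.ndarray,
                    f, threshold: float, n_samples: int = 256) -> bool:
    # Ball with the segment [estimate, drift] as its diameter.
    center = (estimate + drift) / 2.0
    radius = float(np.linalg.norm(drift - center))
    rng = np.random.default_rng(0)
    for _ in range(n_samples):
        direction = rng.normal(size=center.shape)
        direction /= np.linalg.norm(direction)
        if f(center + radius * direction) > threshold:
            return True   # f may cross the threshold: report to coordinator
    return False          # locally safe: no communication needed
```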