
    An efficient pending interest table control management in named data network

    Named Data Networking (NDN) is an emerging Internet architecture that employs a new network communication model based on the identity of Internet content. Its core component, the Pending Interest Table (PIT), records information about Interest packets that have been forwarded and are awaiting matching Data packets. Sizing the PIT is challenging because of the widespread use of long Interest lifetimes, particularly in the absence of a flexible replacement policy, which degrades PIT performance. The aim of this study is to propose an efficient PIT Control Management (PITCM) approach for handling incoming Interest packets that mitigates PIT overflow and thereby improves PIT utilization and performance. PITCM consists of an Adaptive Virtual PIT (AVPIT) mechanism, a Smart Threshold Interest Lifetime (STIL) mechanism and a Highest Lifetime Least Request (HLLR) policy. AVPIT provides early prediction of and reaction to PIT overflow, STIL adjusts the lifetime of incoming Interest packets, and HLLR manages PIT entries efficiently. A structured research methodology was followed, and PITCM was designed and evaluated in a network simulator. The results show that PITCM outperforms the standard NDN PIT, with a 45% higher Interest satisfaction rate, a 78% lower Interest retransmission rate and a 65% lower Interest drop rate. In addition, Interest satisfaction delay and PIT length are reduced significantly, by 33% and 46%, respectively. This contribution is important for Interest packet management in NDN routing and forwarding systems: the AVPIT and STIL mechanisms, as well as the HLLR policy, can be used to monitor, control and manage PIT contents in the Internet architecture of the future.
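
    The HLLR replacement policy is only described at a high level above. The Python sketch below illustrates one plausible reading of it under assumed data structures: a fixed-capacity PIT whose victim entry is the one with the highest remaining lifetime, with ties broken by the fewest requests. The PitEntry fields and the Pit class are illustrative assumptions, not the authors' implementation.

```python
import time
from dataclasses import dataclass


@dataclass
class PitEntry:
    # Hypothetical entry fields, for illustration only.
    name: str               # content name carried by the Interest
    expiry: float           # absolute time at which the Interest lifetime ends
    request_count: int = 1  # how many times this name has been requested


class Pit:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries: dict[str, PitEntry] = {}

    def insert(self, name: str, lifetime: float) -> None:
        now = time.monotonic()
        if name in self.entries:
            # Aggregate repeated Interests for the same name.
            entry = self.entries[name]
            entry.request_count += 1
            entry.expiry = max(entry.expiry, now + lifetime)
            return
        if len(self.entries) >= self.capacity:
            self._evict_hllr(now)
        self.entries[name] = PitEntry(name, now + lifetime)

    def _evict_hllr(self, now: float) -> None:
        # Highest Lifetime Least Request: drop the entry with the longest
        # remaining lifetime; break ties by the smallest request count.
        victim = max(
            self.entries.values(),
            key=lambda e: (e.expiry - now, -e.request_count),
        )
        del self.entries[victim.name]
```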

    Digital provenance - models, systems, and applications

    Data provenance refers to the history of the creation and manipulation of a data object and is widely used in application domains including scientific experiments, grid computing, file and storage systems, and streaming data. However, existing provenance systems record and store provenance at a single layer of abstraction (workflow, process or OS), whereas provenance captured at different layers provides the greatest benefit when integrated through a unified provenance framework. The first step toward such a framework is a comprehensive provenance model able to represent the provenance of data objects with varying semantics and granularity. In this thesis, we propose such a comprehensive provenance model and present an abstract schema of the model. We further explore secure provenance solutions for distributed systems, namely streaming data, wireless sensor networks (WSNs) and virtualized environments. We design a customizable file provenance system with an application to the provenance infrastructure for virtualized environments. The system supports automatic collection and management of file provenance metadata, characterized by our provenance model. Based on the proposed provenance framework, we devise a mechanism for detecting data exfiltration attacks in a file system. We then turn to secure provenance communication in streaming environments and propose two secure provenance schemes focusing on WSNs. The basic scheme is extended to detect packet-dropping adversaries on the data flow path over a period of time. We also consider the issue of attack recovery and present an extensive incident response and prevention system specifically designed for WSNs.
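
    As a concrete illustration of the kind of record a unified, multi-layer provenance framework has to accommodate, the Python sketch below defines a hypothetical provenance record annotated with the abstraction layer at which it was captured. The field names and the record_write helper are assumptions for illustration, not the abstract schema proposed in the thesis.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProvenanceRecord:
    # Hypothetical schema, for illustration only.
    object_id: str                  # identifier of the data object
    operation: str                  # e.g. "create", "write", "transform"
    agent: str                      # workflow task, process, or user responsible
    layer: str                      # capture layer: "workflow", "process", or "OS"
    timestamp: datetime
    inputs: list[str] = field(default_factory=list)  # ids of source objects


def record_write(log: list[ProvenanceRecord], object_id: str, agent: str,
                 layer: str, inputs: list[str]) -> None:
    # Append one record; a unified framework would integrate such records
    # captured independently at each layer into one history per object.
    log.append(ProvenanceRecord(
        object_id=object_id,
        operation="write",
        agent=agent,
        layer=layer,
        timestamp=datetime.now(timezone.utc),
        inputs=inputs,
    ))
```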

    Amorphous Placement and Informed Diffusion for Timely Monitoring by Autonomous, Resource-Constrained, Mobile Sensors

    Personal communication devices are increasingly equipped with sensors for passive monitoring of encounters and surroundings. We envision the emergence of services that enable a community of mobile users carrying such resource-limited devices to query such information at remote locations in the field in which they collectively roam. One approach to implement such a service is directed placement and retrieval (DPR), whereby readings/queries about a specific location are routed to a node responsible for that location. In a mobile, potentially sparse setting, where end-to-end paths are unavailable, DPR is not an attractive solution as it would require the use of delay-tolerant (flooding-based store-carry-forward) routing of both readings and queries, which is inappropriate for applications with data freshness constraints, and which is incompatible with stringent device power/memory constraints. Alternatively, we propose the use of amorphous placement and retrieval (APR), in which routing and field monitoring are integrated through the use of a cache management scheme coupled with an informed exchange of cached samples to diffuse sensory data throughout the network, in such a way that a query answer is likely to be found close to the query origin. We argue that knowledge of the distribution of query targets could be used effectively by an informed cache management policy to maximize the utility of the collective storage of all devices. Using a simple analytical model, we show that the use of informed cache management is particularly important when the mobility model results in a non-uniform distribution of users over the field. We present results from extensive simulations which show that in sparsely connected networks, APR is more cost-effective than DPR, that it provides extra resilience to node failure and packet losses, and that its use of informed cache management yields superior performance.
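
    The informed cache management idea, weighting cached samples by how likely they are to answer a future query, can be sketched briefly. The Python below is a minimal illustration under assumed structures (a list of (location, reading, timestamp) tuples and a dictionary of query-target probabilities); it is not the paper's actual policy.

```python
import time


def utility(entry, query_prob, now, freshness_window=600.0):
    # Value of a cached sample: how often its location is queried,
    # discounted by how stale the sample is (hypothetical weighting).
    location, _reading, timestamp = entry
    age = now - timestamp
    freshness = max(0.0, 1.0 - age / freshness_window)
    return query_prob.get(location, 0.0) * freshness


def evict_one(cache, query_prob):
    # Drop the cached sample least likely to answer a future query.
    if not cache:
        return
    now = time.monotonic()
    victim = min(range(len(cache)),
                 key=lambda i: utility(cache[i], query_prob, now))
    cache.pop(victim)
```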

    Views from the coalface: chemo-sensors, sensor networks and the semantic sensor web

    Currently millions of sensors are being deployed in sensor networks across the world. These networks generate vast quantities of heterogeneous data across various levels of spatial and temporal granularity. Sensors range from single-point in situ sensors to remote satellite sensors that can cover the globe. In principle, the semantic sensor web should allow for the unification of the web with the real world. In this position paper, we discuss the major challenges to this unification from the perspective of sensor developers (especially of chemo-sensors) and of integrating sensor data in real-world deployments. These challenges include: (1) identifying the quality of the data; (2) heterogeneity of data sources and data transport methods; (3) integrating data streams from different sources and modalities (especially contextual information); and (4) pushing intelligence to the sensor level.

    A Survey on Wireless Sensor Network Security

    Wireless sensor networks (WSNs) have recently attracted a lot of interest in the research community due to their wide range of applications. Because of the distributed nature of these networks and their deployment in remote areas, they are vulnerable to numerous security threats that can adversely affect their proper functioning. This problem is even more critical when the network is deployed for mission-critical applications such as a tactical battlefield. Random failure of nodes is also very likely in real-life deployment scenarios. Due to resource constraints in the sensor nodes, traditional security mechanisms with large computation and communication overhead are infeasible in WSNs. Security in sensor networks is, therefore, a particularly challenging task. This paper discusses the current state of the art in security mechanisms for WSNs. Various types of attacks are discussed and their countermeasures presented. A brief discussion of future research directions in WSN security is also included. (Comment: 24 pages, 4 figures, 2 tables)

    Supporting Large Scale Communication Systems on Infrastructureless Networks Composed of Commodity Mobile Devices: Practicality, Scalability, and Security.

    Infrastructureless Delay Tolerant Networks (DTNs) composed of commodity mobile devices have the potential to support communication applications resistant to blocking and censorship, as well as certain types of surveillance. In this thesis we study the utility, practicality, robustness, and security of these networks. We collected two sets of wireless connectivity traces of commodity mobile devices at different granularities and scales. The first dataset was collected by actively installing measurement software on volunteer users' own smartphones, involving 111 users of a DTN microblogging application that we developed. The second dataset was collected through passive observation of WiFi association events on a university campus, involving 119,055 mobile devices. Simulation results show consistent message delivery performance across the two datasets. Using an epidemic flooding protocol, the large network achieves an average delivery rate of 0.71 in 24 hours and a median delivery delay of 10.9 hours. We show that this performance is appropriate for sharing information that is not time sensitive, e.g., blogs and photos. We also show that, using an energy-efficient variant of the epidemic flooding protocol, even the large network can support text messages while consuming only 13.7% of a typical smartphone battery in 14 hours. We found that the network delivery rate and delay are robust to denial-of-service and censorship attacks. Attacks that randomly remove 90% of the network participants reduce delivery rates by less than 10%. Even when subjected to targeted attacks, the network suffered less than a 10% decrease in delivery rate when 40% of its participants were removed. Although structurally robust, the openness of the proposed network introduces numerous security concerns. The Sybil attack, in which a malicious node poses as many identities in order to gain disproportionate influence, is especially dangerous as it breaks the assumption underlying majority voting. Many defenses based on the spatial variability of wireless channels exist, and we extend them to be practical for ad hoc networks of commodity 802.11 devices without mutual trust. We present the Mason test, which uses two efficient methods for separating valid channel measurement results of behaving nodes from those falsified by malicious participants.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120779/1/liuyue_1.pd
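
    The epidemic flooding protocol evaluated in the thesis is, at its core, a store-carry-forward exchange performed whenever two devices meet. The Python sketch below shows that core exchange under an assumed message store (a dictionary from message id to payload); the thesis's protocol and its energy-efficient variant add policies not reproduced here.

```python
def anti_entropy_exchange(store_a: dict, store_b: dict) -> None:
    # On an encounter, each node copies the messages the other is missing,
    # so a message spreads epidemically as devices meet over time.
    for message_id in set(store_a) - set(store_b):
        store_b[message_id] = store_a[message_id]
    for message_id in set(store_b) - set(store_a):
        store_a[message_id] = store_b[message_id]


# Example: after the exchange both nodes hold both messages.
node_a = {"m1": b"blog post"}
node_b = {"m2": b"photo"}
anti_entropy_exchange(node_a, node_b)
assert set(node_a) == set(node_b) == {"m1", "m2"}
```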

    Bloom Filter-Based Secure Data Forwarding in Large-Scale Cyber-Physical Systems

    Cyber-physical systems (CPSs) connect with the physical world via communication networks, which significantly increases the security risks of CPSs. To protect sensitive data, secure forwarding is an essential component of CPSs. However, CPSs have high-dimensional, multi-attribute and multi-level security requirements due to their significantly increased scale and diversity, and hence place heavy demands on the querying and storage of secure forwarding information. To tackle these challenges, we propose a practical secure data forwarding scheme for CPSs. Considering the limited storage capability and computational power of entities, we adopt a Bloom filter to store the secure forwarding information for each entity, which achieves a good balance between storage consumption and query delay. Furthermore, a novel link-based Bloom filter construction method is designed to reduce the false positive rate during Bloom filter construction. Finally, the effects of the false positive rate on the performance of Bloom filter-based secure forwarding under different routing policies are discussed.
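
    To make the storage/query trade-off concrete, the Python below sketches a plain Bloom filter used to answer a membership query such as "is this (flow, link) pair authorized?" with no false negatives but a tunable false positive rate. It is a generic sketch under assumed key formats; the paper's link-based construction method, which lowers the false positive rate, is not reproduced here.

```python
import hashlib


class BloomFilter:
    # Generic Bloom filter sketch; not the paper's link-based construction.
    def __init__(self, num_bits: int = 1024, num_hashes: int = 4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: str) -> None:
        for position in self._positions(item):
            self.bits[position] = 1

    def might_contain(self, item: str) -> bool:
        # False positives are possible; false negatives are not.
        return all(self.bits[position] for position in self._positions(item))


# Hypothetical usage: an entity stores which outgoing link serves each flow.
forwarding = BloomFilter()
forwarding.add("flow42:link3")
assert forwarding.might_contain("flow42:link3")
```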