    Energy Proficient and Security Protocol for WSN: AODV

    Wireless sensor networks (WSNs) are widely used in real-time applications. A WSN consists of a number of autonomous sensor nodes deployed in areas of interest to gather data and jointly convey it back to a base station. However, sensor nodes have limited battery energy, and WSNs are vulnerable to severe security threats such as denial of service (DoS), Sybil, and hello flood attacks. In this paper, we propose group communication using an election algorithm to make the network more energy efficient and also more secure. The proposed methodology is simulated and evaluated against network parameters such as PDR, end-to-end delay, throughput, and energy consumption using the network simulator NS-2.34.
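
    The abstract does not spell out the election procedure; the following is a minimal sketch of one common energy-aware variant, assuming the cluster head is simply the group member with the most residual energy. The Node class, field names, and values are illustrative and not taken from the paper.

        # Hypothetical energy-aware cluster-head election (illustrative only).
        from dataclasses import dataclass

        @dataclass
        class Node:
            node_id: int
            residual_energy: float  # joules remaining in the battery

        def elect_cluster_head(group):
            """Pick the member with the most residual energy; break ties by lower id."""
            return max(group, key=lambda n: (n.residual_energy, -n.node_id))

        group = [Node(1, 4.2), Node(2, 5.1), Node(3, 5.1)]
        head = elect_cluster_head(group)
        print(f"Cluster head: node {head.node_id}")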

    Distributed Object Tracking Using a Cluster-Based Kalman Filter in Wireless Camera Networks

    Local data aggregation is an effective means to save sensor node energy and prolong the lifespan of wireless sensor networks. However, when a sensor network is used to track moving objects, the task of local data aggregation in the network presents a new set of challenges, such as the necessity to estimate, usually in real time, the constantly changing state of the target based on information acquired by the nodes at different time instants. To address these issues, we propose a distributed object tracking system which employs a cluster-based Kalman filter in a network of wireless cameras. When a target is detected, cameras that can observe the same target interact with one another to form a cluster and elect a cluster head. Local measurements of the target acquired by members of the cluster are sent to the cluster head, which then estimates the target position via Kalman filtering and periodically transmits this information to a base station. The underlying clustering protocol allows the current state and uncertainty of the target position to be easily handed off among clusters as the object is being tracked. This allows Kalman filter-based object tracking to be carried out in a distributed manner. An extended Kalman filter is necessary since measurements acquired by the cameras are related to the actual position of the target by nonlinear transformations. In addition, in order to take into consideration the time uncertainty in the measurements acquired by the different cameras, it is necessary to introduce nonlinearity in the system dynamics. Our object tracking protocol requires the transmission of significantly fewer messages than a centralized tracker that naively transmits all of the local measurements to the base station. It is also more accurate than a decentralized tracker that employs linear interpolation for local data aggregation. Besides, the protocol is able to perform real-time estimation because our implementation takes into consideration the sparsity of the matrices involved in the problem. The experimental results show that our distributed object tracking protocol is able to achieve tracking accuracy comparable to the centralized tracking method, while requiring a significantly smaller number of message transmissions in the network.
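
    The abstract does not reproduce the camera measurement model; the sketch below shows one extended-Kalman-filter step in the spirit described (a nonlinear measurement fused at the cluster head), assuming a constant-velocity target and a bearing-only camera measurement. The state layout, noise values, and measurement model are illustrative assumptions, not the paper's actual filter.

        # Illustrative EKF step: constant-velocity target, bearing-only camera.
        import numpy as np

        def ekf_step(x, P, z_bearing, cam_pos, dt=0.1, q=0.01, r=0.02):
            # Predict with a constant-velocity model; state = [px, py, vx, vy].
            F = np.array([[1, 0, dt, 0],
                          [0, 1, 0, dt],
                          [0, 0, 1,  0],
                          [0, 0, 0,  1]], dtype=float)
            x = F @ x
            P = F @ P @ F.T + q * np.eye(4)

            # Nonlinear measurement: bearing from the camera to the predicted target.
            dx, dy = x[0] - cam_pos[0], x[1] - cam_pos[1]
            d2 = dx**2 + dy**2
            h = np.arctan2(dy, dx)
            H = np.array([[-dy / d2, dx / d2, 0.0, 0.0]])  # Jacobian of arctan2

            # Standard EKF update, with the innovation wrapped to [-pi, pi].
            innov = np.array([(z_bearing - h + np.pi) % (2 * np.pi) - np.pi])
            S = H @ P @ H.T + r
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ innov
            P = (np.eye(4) - K @ H) @ P
            return x, P

        # One illustrative step: camera at the origin, target believed near (10, 5).
        x1, P1 = ekf_step(np.array([10.0, 5.0, 0.0, 0.0]), np.eye(4),
                          z_bearing=0.5, cam_pos=(0.0, 0.0))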

    Top k-leader election in wireless ad hoc networks


    On Data Dissemination for Large-Scale Complex Critical Infrastructures

    Middleware plays a key role in achieving the mission of future large-scale complex critical infrastructures, envisioned as federations of several heterogeneous systems over the Internet. However, available approaches for data dissemination remain inadequate, since they are unable to scale and to jointly assure given QoS properties. In addition, the best-effort delivery strategy of the Internet and the occurrence of node failures further hinder the correct and timely delivery of data if the middleware is not equipped with means for tolerating such failures. This paper presents a peer-to-peer approach for resilient and scalable data dissemination over large-scale complex critical infrastructures. The approach is based on the adoption of epidemic dissemination algorithms between peer groups, combined with the semi-active replication of group leaders to tolerate failures and assure the resilient delivery of data, despite the increasing scale and heterogeneity of the federated system. The effectiveness of the approach is shown by means of extensive simulation experiments based on Stochastic Activity Networks.
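
    As a rough illustration of epidemic dissemination between group leaders, the sketch below runs push-gossip rounds in which each informed leader forwards its data items to a few randomly chosen peers. The fanout, group names, and payload are assumptions for illustration; the paper's actual protocol and its semi-active replication mechanism are not reproduced here.

        # Illustrative push-gossip rounds among group leaders.
        import random

        def gossip_round(leaders, inbox, fanout=2):
            """One epidemic round: every informed leader pushes its items to `fanout` random peers."""
            new_inbox = {l: set(items) for l, items in inbox.items()}
            for leader, items in inbox.items():
                if not items:
                    continue
                peers = random.sample([l for l in leaders if l != leader],
                                      k=min(fanout, len(leaders) - 1))
                for peer in peers:
                    new_inbox[peer] |= items
            return new_inbox

        leaders = ["grid", "telecom", "transport", "water"]
        inbox = {l: set() for l in leaders}
        inbox["grid"] = {"alarm#17"}          # data item injected at one group
        for _ in range(3):                    # a few rounds typically reach all leaders
            inbox = gossip_round(leaders, inbox)
        print({l: sorted(items) for l, items in inbox.items()})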

    NETWORKED MICROGRID OPTIMIZATION AND ENERGY MANAGEMENT

    Military vehicles possess attributes consistent with a microgrid, containing electrical energy generation, storage, government furnished equipment (GFE), and the ability to share these capabilities via interconnection. Many military vehicles have significant energy storage capacity to satisfy silent watch requirements, making them particularly well-suited to share their energy storage capabilities with stationary microgrids for more efficient energy management. Further, the energy generation capacity and fuel consumption rate of the vehicles are comparable to those of standard diesel generators; in certain scenarios, the use of the vehicles could result in more efficient operation. Energy management of a microgrid is an open area of research, especially in generation-constrained scenarios where shedding of low-priority loads may be required. Typical metrics used to assess the effectiveness of an energy management strategy or policy include fuel consumption, electrical storage energy requirements, or the net exergy destruction. When considering a military outpost consisting of a stationary microgrid and a set of vehicles, the metrics used for managing the network become more complex. For example, the metrics used to manage a vehicle’s onboard equipment while on patrol may include fuel consumption, the acoustic signature, and the heat signature. Now consider that the vehicles are parked at an outpost and participating in vehicle-to-grid power-sharing and control. The metrics used to manage the grid assets may now include fuel consumption, the electrical storage’s state of charge, frequency regulation, load prioritization, and load dispatching. The focus of this work is to develop energy management and control strategies that allow a set of diverse assets to be controlled, yielding optimal operation. The provided policies result in both short-term and long-term optimal control of the electrical generation assets. The contributions of this work were: (1) the development of a methodology to generate a time-varying electrical load based on (a) a U.S. Army-relevant event schedule and (b) a set of meteorological conditions, resulting in a scenario-rich environment suitable for modeling and control of hybrid AC/DC tactical military microgrids, (2) the development of a multi-tiered hierarchical control architecture, suitable for development of both short- and long-term optimal energy management strategies for hybrid electric microgrids, and (3) the development of blending strategies capable of blending a diverse set of heterogeneous assets with multiple competing objective functions. This work could be extended to include a more diverse set of energy generation assets found within future energy networks.
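
    To make the idea of blending competing objectives concrete, the sketch below trades off generator fuel use against vehicle battery depletion with a simple weighted cost and a brute-force search over the power split. The weights, asset limits, and cost models are illustrative assumptions, not the thesis's actual hierarchical controller.

        # Illustrative weighted blend of two objectives: fuel use vs. battery depletion.
        def dispatch(load_kw, soc, weights=(1.0, 0.05), gen_max=60.0, batt_max=30.0):
            """Pick the generator/battery split minimizing a weighted fuel + depletion cost."""
            w_fuel, w_soc = weights
            best = None
            for batt_kw in [x * 0.5 for x in range(int(batt_max * 2) + 1)]:
                gen_kw = load_kw - batt_kw
                if gen_kw < 0 or gen_kw > gen_max:
                    continue                       # infeasible split, skip
                fuel_cost = 0.08 * gen_kw          # assumed fuel rate per kW of generation
                soc_cost = batt_kw / max(soc, 1e-6)  # discharging is penalized more at low SOC
                cost = w_fuel * fuel_cost + w_soc * soc_cost
                if best is None or cost < best[0]:
                    best = (cost, gen_kw, batt_kw)
            return best

        print(dispatch(load_kw=45.0, soc=0.8))     # (cost, generator kW, battery kW)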

    CloudBench: an integrated evaluation of VM placement algorithms in clouds

    A complex and important task in cloud resource management is the efficient allocation of virtual machines (VMs), or containers, to physical machines (PMs). The evaluation of VM placement techniques in real-world clouds can be tedious, complex, and time-consuming. This situation has motivated an increasing use of cloud simulators that facilitate this type of evaluation. However, most of the reported VM placement techniques based on simulations have been evaluated taking into account one specific cloud resource (e.g., CPU), whereas often unrealistic values are assumed for other resources (e.g., RAM, waiting times, application workloads, etc.). This situation generates uncertainty, discouraging their implementation in real-world clouds. This paper introduces CloudBench, a methodology to facilitate the evaluation and deployment of VM placement strategies in private clouds. CloudBench integrates a cloud simulator with a real-world private cloud. Two main tools were developed to support this methodology: a specialized multi-resource cloud simulator (CloudBalanSim), which is in charge of evaluating VM placement techniques, and a distributed resource manager (Balancer), which deploys and tests in a real-world private cloud the best VM placement configurations that satisfy user requirements defined in the simulator. Both tools generate feedback information from the evaluation scenarios and their results, which is used as a learning asset to carry out intelligent and faster evaluations. The experiments conducted with the CloudBench methodology showed encouraging results as a new strategy to evaluate and deploy VM placement algorithms in the cloud. This work was partially funded by the Spanish Ministry of Economy, Industry and Competitiveness under Grant TIN2016-79637-P “Towards Unification of HPC and Big Data Paradigms” and by the Mexican Council of Science and Technology (CONACYT) through a Ph.D. Grant (No. 212677).
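
    As a concrete example of the kind of placement technique such a methodology would evaluate, the sketch below applies a simple multi-resource first-fit heuristic over CPU and RAM. The resource names, capacities, and the heuristic itself are illustrative assumptions; they are not CloudBalanSim's algorithms or inputs.

        # Illustrative multi-resource first-fit VM placement over CPU and RAM.
        def first_fit(vms, pms):
            """Place each VM on the first PM with enough CPU and RAM remaining."""
            placement = {}
            for vm_id, (cpu, ram) in vms.items():
                for pm_id, free in pms.items():
                    if free["cpu"] >= cpu and free["ram"] >= ram:
                        free["cpu"] -= cpu
                        free["ram"] -= ram
                        placement[vm_id] = pm_id
                        break
                else:
                    placement[vm_id] = None   # no PM could host this VM
            return placement

        pms = {"pm1": {"cpu": 16, "ram": 64}, "pm2": {"cpu": 8, "ram": 32}}
        vms = {"vm1": (8, 16), "vm2": (8, 32), "vm3": (4, 32)}
        print(first_fit(vms, pms))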