    Multihop clustering algorithm for load balancing in wireless sensor networks

    The paper presents a new cluster-based routing algorithm that exploits the redundancy properties of sensor networks to address the traditional problems of load balancing and energy efficiency in WSNs. The algorithm identifies nodes whose sensing area is already covered by their neighbours and marks them as temporary cluster heads. It then forms two layers of multi-hop communication: a bottom layer for intra-cluster communication and a top layer for inter-cluster communication among the temporary cluster heads. Performance studies indicate that the proposed algorithm effectively solves the load-balancing problem and is also more energy efficient than LEACH and an enhanced version of LEACH.
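A minimal sketch of the head-selection idea in Python: a node whose sensing area is (approximately) covered by its neighbours is marked as a temporary cluster head. The sensing radius, communication radius, and the Monte-Carlo coverage test below are illustrative assumptions, not the paper's exact coverage criterion.

```python
import math
import random

SENSE_R = 10.0  # sensing radius (assumed identical for all nodes)
COMM_R = 20.0   # neighbourhood/communication radius (assumption)

def covered_by_neighbours(node, neighbours, samples=200):
    """Monte-Carlo check: is every sampled point of `node`'s sensing
    disc also inside some neighbour's sensing disc?"""
    nx, ny = node
    for _ in range(samples):
        # draw a point uniformly inside the node's sensing disc
        r = SENSE_R * math.sqrt(random.random())
        a = random.uniform(0.0, 2.0 * math.pi)
        px, py = nx + r * math.cos(a), ny + r * math.sin(a)
        if not any(math.hypot(px - qx, py - qy) <= SENSE_R
                   for qx, qy in neighbours):
            return False
    return True

def temporary_cluster_heads(nodes):
    """Mark nodes whose coverage is redundant as temporary cluster heads."""
    heads = []
    for i, n in enumerate(nodes):
        neigh = [m for j, m in enumerate(nodes)
                 if j != i and math.hypot(n[0] - m[0], n[1] - m[1]) <= COMM_R]
        if covered_by_neighbours(n, neigh):
            heads.append(i)
    return heads

if __name__ == "__main__":
    random.seed(1)
    field = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(60)]
    print("temporary cluster heads:", temporary_cluster_heads(field))
```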

    New Generation of Instrumented Ranges: Enabling Automated Performance Analysis

    Military training conducted on physical ranges that match a unit’s future operational environment provides an invaluable experience. Today, to conduct a training exercise while ensuring a unit’s performance is closely observed, evaluated, and reported on in an After Action Review, the unit requires a number of instructors to accompany its different elements. Training organized on ranges for urban warfighting brings an additional level of complexity: the high level of occlusion typical of these environments multiplies the number of evaluators needed. While units have great need for such training opportunities, they may not have the human resources necessary to conduct them successfully. In this paper we report on our US Navy/ONR-sponsored project aimed at a new generation of instrumented ranges, and on the early results we have achieved. We suggest a radically different concept: instead of recording multiple video streams that need to be reviewed and evaluated by a number of instructors, our system will focus on capturing dynamic individual warfighter pose data and performing automated performance evaluation. We will use an in situ network of automatically controlled pan-tilt-zoom video cameras and personal position and orientation sensing devices. Our system will record video, reconstruct dynamic 3D individual poses, analyze and recognize events, evaluate performances, generate reports, provide real-time free exploration of recorded data, and even allow the user to generate ‘what-if’ scenarios that were never recorded. The most direct benefit for an individual unit will be the ability to conduct training with fewer human resources, while having a more quantitative account of its performance (dispersion across the terrain, ‘weapon flagging’ incidents, number of patrols conducted). The instructors will have immediate feedback on some elements of the unit’s performance. Having data sets for multiple units will enable historical trend analysis, providing new insights and benefits for the entire service.
    Office of Naval Research
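As one illustration of the quantitative metrics this abstract mentions, a dispersion-across-the-terrain measure could be derived from the captured per-warfighter positions. The RMS-from-centroid definition below is an assumption for illustration, not the project's actual metric.

```python
import math

def unit_dispersion(positions):
    """RMS distance of unit members from the unit centroid.

    `positions` is a list of (x, y) coordinates, e.g. from the personal
    position-sensing devices; the RMS-from-centroid formula is one
    plausible (assumed) way to quantify dispersion across the terrain.
    """
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                         for x, y in positions) / len(positions))

# e.g. positions of a four-man element at one time step
print(unit_dispersion([(0, 0), (5, 2), (3, 8), (9, 4)]))
```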

    Automated camera ranking and selection using video content and scene context

    When observing a scene with multiple cameras, an important problem to solve is to automatically identify “what camera feed should be shown, and when?” The answer to this question is of interest for a number of applications and scenarios ranging from sports to surveillance. In this thesis we present a framework for ranking each video frame and camera across time and across the camera network, respectively; this ranking is then used for automated video production. In the first stage, information from each camera view and from the objects in it is extracted and represented in a way that allows for object- and frame-ranking. First, objects are detected and ranked within and across camera views; this ranking takes into account both visible and contextual information related to the object. Then content ranking is performed based on the objects in the view and camera-network-level information. We propose two novel techniques for content ranking, namely Routing Based Ranking (RBR) and Multivariate Gaussian based Ranking (MVG). In RBR we use a rule-based framework in which weighted fusion of object- and frame-level information takes place, while in MVG the rank is estimated as a multivariate Gaussian distribution. Through experimental and subjective validation we demonstrate that the proposed content-ranking strategies allow the identification of the best camera at each time.

    The second part of the thesis focuses on the automatic generation of N-to-1 videos based on the ranked content. We demonstrate that in such production settings it is undesirable to have frequent inter-camera switching. This motivates a compromise between selecting the best camera most of the time and minimising frequent inter-camera switching; we demonstrate that state-of-the-art techniques for this task are inadequate and fail in dynamic scenes. We propose three novel methods for automated camera selection. The first method (gof) performs a joint optimization of a cost function that depends on both the view quality and inter-camera switching, so that a pleasing best-view video sequence can be composed. The other two methods (dbn and util) include the selection decision in the ranking strategy. In dbn we model best-camera selection as a state sequence via Directed Acyclic Graphs (DAG) designed as a Dynamic Bayesian Network (DBN), which encodes contextual knowledge about the camera network and employs past information to minimize inter-camera switches. In comparison, util utilizes past as well as future information in a Partially Observable Markov Decision Process (POMDP), where the camera selection at a certain time is influenced by past information and its repercussions in the future. The performance of the proposed approaches is demonstrated on multiple real and synthetic multi-camera setups. We compare the proposed architectures with various baseline methods, with encouraging results, and the performance of the proposed approaches is also validated through extensive subjective testing.
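The trade-off between view quality and switching frequency that motivates these methods can be illustrated with a small dynamic-programming sketch, in the spirit of the cost-function approach rather than the thesis's actual formulation. The per-frame `rank` scores (however produced, e.g. by an MVG-style ranker) and the constant `switch_cost` penalty are assumptions.

```python
import numpy as np

def select_cameras(rank, switch_cost=0.3):
    """Choose a best-view camera sequence from per-frame camera ranks,
    penalizing inter-camera switches (Viterbi-style dynamic programming).

    rank        : (T, C) array of content-rank scores, higher is better
    switch_cost : assumed constant penalty paid whenever the camera changes
    """
    T, C = rank.shape
    score = np.full((T, C), -np.inf)  # best cumulative score ending at camera c
    back = np.zeros((T, C), dtype=int)  # best predecessor camera at each step
    score[0] = rank[0]
    for t in range(1, T):
        for c in range(C):
            stay = score[t - 1, c]            # staying on c costs nothing
            prev = score[t - 1] - switch_cost  # switching from any other camera
            prev[c] = stay
            back[t, c] = int(np.argmax(prev))
            score[t, c] = prev[back[t, c]] + rank[t, c]
    # backtrack the optimal camera sequence
    seq = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        seq.append(back[t, seq[-1]])
    return seq[::-1]

ranks = np.random.default_rng(0).random((8, 3))  # 8 frames, 3 cameras
print(select_cameras(ranks))
```

Raising `switch_cost` trades per-frame view quality for a steadier sequence, which is exactly the compromise the thesis argues for.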

    Key technologies for safe and autonomous drones

    Drones/UAVs are able to perform air operations that are very difficult for manned aircraft to perform. In addition, the use of drones brings significant economic savings and environmental benefits, while reducing risks to human life. In this paper, we present key technologies that enable the development of drone systems. The technologies are identified based on the usage of drones (driven by the COMP4DRONES project use cases) and are grouped into four categories: U-space capabilities, system functions, payloads, and tools. We also present the contributions of the COMP4DRONES project to improving existing technologies. These contributions aim to ease drones’ customization and enable their safe operation.
    This project has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 826610. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and Spain, Austria, Belgium, Czech Republic, France, Italy, Latvia, and the Netherlands. The total project budget is 28,590,748.75 EUR (excluding ESIF partners), with a requested grant of 7,983,731.61 EUR from the ECSEL JU and 8,874,523.84 EUR of national and ESIF funding. The project started on 1 October 2019.