    Post Event Investigation of Multi-stream Video Data Utilizing Hadoop Cluster

    Rapid advances in technology and inexpensive cameras have increased the need for monitoring systems in surveillance applications. As a result, the data acquired from the numerous cameras deployed for surveillance is enormous, and when an event is triggered, manually investigating such massive data is a complex task. It is therefore essential to explore an approach that can both store massive multi-stream video data and process it to extract useful information. To address the challenge of storing and processing multi-stream video data, we use Hadoop, which has grown into a leading computing model for data-intensive applications. In this paper we propose a novel technique for performing post-event investigation on stored surveillance video data. Our algorithm stores video data in HDFS in such a way that the location of the relevant data can be efficiently identified based on the time of occurrence of an event, after which further processing is performed. To demonstrate the efficiency of the proposed work, we perform event detection in the video based on the time period provided by the user. To estimate the performance of our approach, we evaluate the storage and processing of video data while varying (i) the pixel resolution of the video frames, (ii) the size of the video data, (iii) the number of reducers (workers) executing the task, and (iv) the number of nodes in the cluster. The proposed framework achieves a speed-up of 5.9 for large files of 1024x1024-pixel video frames, making it suitable for practical deployment.
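    The abstract describes locating video data in HDFS from the time of an event. As a minimal sketch only (not the authors' implementation), the following Python snippet shows how a reported event interval could be mapped to per-hour HDFS directories, assuming a hypothetical /surveillance/<camera_id>/<date>/<hour> layout in which each hourly directory holds the video splits recorded during that hour.

    from datetime import datetime, timedelta

    # Hypothetical layout: /surveillance/<camera_id>/<YYYY-MM-DD>/<HH>/
    # An event time then maps directly to the directories a processing
    # job (e.g. a MapReduce job) would need to read.
    HDFS_ROOT = "/surveillance"

    def paths_for_event(camera_id, start, end):
        """Return the hourly HDFS directories covering [start, end]."""
        paths = []
        t = start.replace(minute=0, second=0, microsecond=0)
        while t <= end:
            paths.append(f"{HDFS_ROOT}/{camera_id}/{t:%Y-%m-%d}/{t:%H}")
            t += timedelta(hours=1)
        return paths

    if __name__ == "__main__":
        # Locate data for an event reported between 14:20 and 15:05.
        dirs = paths_for_event("cam07",
                               datetime(2023, 6, 1, 14, 20),
                               datetime(2023, 6, 1, 15, 5))
        for d in dirs:
            print(d)  # only these directories are fed to the processing job

    The directory names, camera identifier, and date used here are illustrative assumptions; the paper's actual storage scheme may partition the data differently.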

    A Novel Completely Local Repairable Code Algorithm Based on Erasure Code

    Hadoop Distributed File System (HDFS) is widely used for massive data storage. Because of the drawbacks of the multi-copy strategy, the hardware expansion of HDFS cannot keep up with the ever-growing volume of big data. The traditional data replication strategy is therefore gradually being replaced by Erasure Code, which has a smaller redundancy rate and storage overhead. However, compared with replicas, Erasure Code needs to read a certain number of data blocks during data recovery, resulting in substantial I/O and network overhead. Based on the Reed-Solomon (RS) algorithm, we propose a novel Completely Local Repairable Code (CLRC) algorithm. By grouping RS coded blocks and generating local check blocks, the CLRC algorithm improves the locality of the RS algorithm and thereby reduces the cost of data recovery. Evaluations show that the CLRC algorithm reduces bandwidth and I/O consumption during recovery when a single block is damaged. What's more, its decoding time is only 59% of that of the RS algorithm.
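    The locality gain described above comes from attaching a check block to each small group of coded blocks, so a single lost block is rebuilt from its group rather than from a full RS stripe. The sketch below illustrates that general idea with simple XOR parity as a stand-in; it is not the CLRC construction itself, whose coding details are not given in the abstract.

    def xor_blocks(blocks):
        """Bytewise XOR of equal-length blocks."""
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, byte in enumerate(blk):
                out[i] ^= byte
        return bytes(out)

    def make_local_parity(stripe, group_size):
        """Split a coded stripe into groups and attach one local check block per group."""
        groups = [stripe[i:i + group_size] for i in range(0, len(stripe), group_size)]
        return [(g, xor_blocks(g)) for g in groups]

    def repair_single(group, parity, lost_index):
        """Rebuild one lost block from its local group only
        (group_size - 1 reads plus the local check block,
        instead of k reads for a full RS decode)."""
        survivors = [b for i, b in enumerate(group) if i != lost_index]
        return xor_blocks(survivors + [parity])

    if __name__ == "__main__":
        # Six coded blocks of a stripe, grouped in threes (illustrative sizes).
        stripe = [bytes([i]) * 4 for i in range(6)]
        encoded = make_local_parity(stripe, group_size=3)
        group, parity = encoded[0]
        rebuilt = repair_single(group, parity, lost_index=1)
        assert rebuilt == stripe[1]   # recovered from 3 local reads
        print("recovered:", rebuilt.hex())

    The reduced repair traffic shown here (reading only the local group) is the property CLRC targets; the actual algorithm builds its local check blocks on top of RS-coded data rather than plain XOR.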