
    Modified inter prediction H.264 video encoding for maritime surveillance

    Video compression has evolved since it was first standardized. The most popular codec, H.264, can compress video effectively according to the required quality. This is due to the motion estimation (ME) process, which has impressive features such as variable block sizes ranging from 4×4 to 16×16 and quarter-pixel motion compensation. However, the disadvantage of H.264 is that it is very complex and impractical for hardware implementation. Many efforts have been made to produce low-complexity encoding by compromising on the bitrate and decoded quality. Two notable methods are Fast Search Mode and Early Termination. In the Early Termination concept, the encoder does not have to perform ME on every macroblock for every block size. If certain criteria are met, the process can be terminated and the Mode Decision can select the best block size much faster. This project proposes using background subtraction to maximize the Early Termination process. When recording with a static camera, the background remains the same for a long period of time, so most macroblocks produce minimal residual. Thus, in this thesis, the ME process for background macroblocks is terminated much earlier using the maximum 16×16 macroblock size. The accuracy of the background segmentation for the maritime surveillance video case study is 88.43% and the true foreground rate is 41.74%. The proposed encoder reduces the encoding time by 73.5% and the encoder complexity by 80.5%. The bitrate of the output is also reduced, by around 20%, compared to the H.264 baseline encoder. The results show that the proposed method achieves the objectives of improving the compression rate and the encoding time.
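    The idea described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function names, the mean-absolute-difference test, and the threshold value are all assumptions made for the example. Macroblocks whose difference from a reference background image is small skip the full variable-block-size ME search and use the 16×16 mode directly.

    ```python
    import numpy as np

    def background_mask(frame, background, threshold=15):
        """Classify each 16x16 macroblock as background (True) or foreground
        (False) by thresholding the mean absolute difference between the
        current frame and a reference background image (grayscale arrays).
        Names and threshold are illustrative assumptions."""
        h, w = frame.shape
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        mask = np.zeros((h // 16, w // 16), dtype=bool)
        for by in range(h // 16):
            for bx in range(w // 16):
                block = diff[by * 16:(by + 1) * 16, bx * 16:(bx + 1) * 16]
                mask[by, bx] = block.mean() < threshold
        return mask

    def choose_me_mode(is_background):
        """Sketch of the early-termination decision: background macroblocks
        terminate ME immediately with the 16x16 mode; foreground macroblocks
        run the full search over all block sizes (4x4 .. 16x16)."""
        return "16x16" if is_background else "full_search"
    ```

    In the proposed encoder this decision removes the ME work for the (dominant) background region of a static maritime scene, which is where the reported time and complexity savings come from.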

    The effects of scene content parameters, compression, and frame rate on the performance of analytics systems

    In this investigation we study the effects of compression and frame rate reduction on the performance of four video analytics (VA) systems in a low-complexity scenario, the Sterile Zone (SZ). Additionally, we identify the scene parameters that most influence the performance of these systems. The SZ scenario is a scene consisting of a fence, not to be trespassed, and an area with grass. The VA system needs to raise an alarm when an intruder (attack) enters the scene. The work includes testing the systems with uncompressed and compressed (using H.264/MPEG-4 AVC at 25 and 5 frames per second) footage with quantified scene parameters. The scene parameters include descriptions of scene contrast, camera-to-subject distance, and attack portrayal. Additional footage containing only distractions (no attacks) is also investigated. Results show that every system performed differently at each compression/frame rate level, while overall, compression did not adversely affect the performance of the systems. Frame rate reduction decreased performance, and the scene parameters influenced the behavior of the systems differently. Most false alarms were triggered by a distraction clip featuring abrupt shadows through the fence. These findings could contribute to the improvement of VA systems.
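    The evaluation procedure described above can be sketched in two pieces: deriving the low-frame-rate test footage by temporal subsampling (the abstract does not state how the 5 fps footage was produced; uniform frame dropping is an assumption here), and scoring a system's alarms against ground truth as a detection rate over attack clips and a false-alarm rate over distraction clips. All names are illustrative.

    ```python
    def reduce_frame_rate(frames, src_fps=25, dst_fps=5):
        """Temporal subsampling: keep every (src_fps // dst_fps)-th frame.
        An assumed method for deriving 5 fps footage from 25 fps footage."""
        step = src_fps // dst_fps
        return frames[::step]

    def alarm_rates(alarms, has_attack):
        """Score per-clip boolean alarm outputs against ground-truth labels.
        Returns (detection rate over attack clips,
                 false-alarm rate over distraction clips)."""
        detected = sum(a and g for a, g in zip(alarms, has_attack))
        false_alarms = sum(a and not g for a, g in zip(alarms, has_attack))
        attacks = sum(has_attack)
        distractions = len(has_attack) - attacks
        return detected / attacks, false_alarms / distractions
    ```

    Comparing these two rates across compression levels and frame rates is the kind of per-condition breakdown the study reports.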

    Precise Depth Image Based Real-Time 3D Difference Detection

    3D difference detection is the task of verifying whether the 3D geometry of a real object exactly corresponds to a 3D model of that object. This thesis introduces real-time 3D difference detection with a hand-held depth camera. In contrast to previous works, with the proposed approach, geometric differences can be detected in real time and from arbitrary viewpoints. Therefore, the scan position of the 3D difference detection can be changed on the fly, during the 3D scan. Thus, the user can move the scan position closer to the object to inspect details or to bypass occlusions. The main research questions addressed by this thesis are: Q1: How can 3D differences be detected in real time and from arbitrary viewpoints using a single depth camera? Q2: Extending the first question, how can 3D differences be detected with high precision? Q3: Which accuracy can be achieved with concrete setups of the proposed concept for real-time, depth-image-based 3D difference detection? This thesis answers Q1 by introducing a real-time approach for depth-image-based 3D difference detection. The real-time difference detection is based on an algorithm which maps the 3D measurements of a depth camera onto an arbitrary 3D model in real time by fusing computer vision (depth imaging and pose estimation) with a computer-graphics-based analysis-by-synthesis approach. This thesis then answers Q2 by providing solutions for enhancing the 3D difference detection accuracy, both by precise pose estimation and by reducing depth measurement noise. A precise variant of the 3D difference detection concept is proposed, which combines two main aspects. First, the precision of the depth camera's pose estimation is improved by coupling the depth camera with a very precise coordinate measuring machine. Second, measurement noise of the captured depth images is reduced, and missing depth information is filled in, by extending the 3D difference detection with 3D reconstruction.
    The accuracy of the proposed 3D difference detection is quantified in a quantitative evaluation, which provides an answer to Q3. The accuracy is evaluated both for the basic setup and for the variants that focus on high precision. The quantitative evaluation using real-world data covers both the accuracy achievable with a time-of-flight camera (SwissRanger 4000) and with a structured-light depth camera (Kinect). With the basic setup and the structured-light depth camera, differences of 8 to 24 millimeters can be detected from one meter measurement distance. With the enhancements proposed for precise 3D difference detection, differences of 4 to 12 millimeters can be detected from one meter measurement distance using the same depth camera. By solving the challenges described by the three research questions, this thesis provides a solution for precise real-time 3D difference detection based on depth images. With the approach proposed in this thesis, dense 3D differences can be detected in real time and from arbitrary viewpoints using a single depth camera. Furthermore, by coupling the depth camera with a coordinate measuring machine and by integrating 3D reconstruction into the 3D difference detection, 3D differences can be detected in real time and with high precision.
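    The core of the analysis-by-synthesis comparison can be sketched as follows. This is an illustrative reduction, not the thesis implementation: it assumes a measured depth image and a synthetic depth image already rendered from the 3D model at the estimated camera pose, and flags per-pixel deviations beyond a tolerance (8 mm here, the lower bound reported for the basic setup at one meter) as 3D differences. Invalid measurements (encoded as NaN) are ignored.

    ```python
    import numpy as np

    def detect_differences(measured_depth, synthetic_depth, tolerance=0.008):
        """Per-pixel 3D difference mask from two depth images in meters:
        'measured_depth' from the depth camera, 'synthetic_depth' rendered
        from the 3D model at the estimated pose. Pixels deviating by more
        than 'tolerance' are flagged; NaN (missing depth) pixels are not.
        The tolerance value and array-based interface are assumptions."""
        valid = ~np.isnan(measured_depth) & ~np.isnan(synthetic_depth)
        diff = np.abs(measured_depth - synthetic_depth)
        return valid & (diff > tolerance)
    ```

    The precise variant described above attacks both inputs of this comparison: a coordinate measuring machine sharpens the pose used to render the synthetic image, and 3D reconstruction denoises and completes the measured image before the per-pixel test.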