
    Design and Implementation of Visual Object Tracking Using a Pan and Tilt Vision System

    Object tracking is a technique for detecting and following a moving object. It can help security officers monitor rooms with a large monitoring area. The aim of this research is to design a visual object tracking system using a pan and tilt vision system, so that the camera orientation can move along both vertical and horizontal paths. The visualization program for this project consists of motion detection, edge detection, and center-of-mass computation. The detected object position is used to control the pan and tilt mechanism on which the camera is mounted, so that it follows the moving object. The results show that the design can detect and follow a walking human at an ideal distance of 6 meters with a directional angular shift of 5 degrees at a visual resolution of 360 × 240 pixels.
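    A rough sketch of the pipeline described above (motion detection, center of mass, pan/tilt correction), assuming OpenCV. The frame-differencing threshold and the pixel-to-degree gain are illustrative assumptions, the edge-detection step is omitted for brevity, and a real system would send the computed errors to the servo controller instead of printing them.

    import cv2
    import numpy as np

    FRAME_W, FRAME_H = 360, 240        # resolution quoted in the abstract
    DEG_PER_PIXEL = 0.05               # assumed pixel-to-degree gain

    cap = cv2.VideoCapture(0)
    prev_gray = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (FRAME_W, FRAME_H))
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None:
            prev_gray = gray
            continue

        # Motion detection by frame differencing, thresholded to a binary mask.
        diff = cv2.absdiff(gray, prev_gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        prev_gray = gray

        # Center of mass of the motion mask gives the target position.
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            pan_err = (cx - FRAME_W / 2) * DEG_PER_PIXEL    # degrees to pan
            tilt_err = (cy - FRAME_H / 2) * DEG_PER_PIXEL   # degrees to tilt
            print(f"pan {pan_err:+.1f} deg, tilt {tilt_err:+.1f} deg")

        cv2.imshow("motion mask", mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break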

    Modeling and tracking relative movement of object parts

    Video surveillance systems play an important role in many civilian and military applications for the purposes of security and surveillance. Object detection is an important component of a video surveillance system, used to identify possible objects of interest and to generate data for tracking and analysis. Little exploration has been done into tracking the moving parts of the object being tracked. Promising techniques such as the Kalman filter, mean-shift algorithm, matching eigenspace, discrete wavelet transform, curvelet transform, and distance metric learning have shown good performance for keeping track of a moving object, but most of that work focuses on studying and analyzing the available object tracking techniques, and most available techniques have heavy computation requirements. The intention of this research is to design a technique that is not computationally intensive and can track the relative movements of object parts in real time. The research applies foreground detection (also known as background subtraction) to track the object, as it is not computationally intensive, and uses a skeletonization technique to track the relative movement of object parts. During implementation, it was found that extracting the object parts with the skeletonization technique is harder than expected.
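    A minimal sketch of that pipeline, assuming OpenCV: foreground detection with a background subtractor, followed by a morphological skeleton of the foreground mask from which object parts would be extracted. The input file name and parameter values are illustrative assumptions, not details taken from the thesis.

    import cv2
    import numpy as np

    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))

    def skeletonize(mask):
        """Iterative morphological skeleton of a binary mask."""
        skel = np.zeros_like(mask)
        while cv2.countNonZero(mask) > 0:
            eroded = cv2.erode(mask, kernel)
            opened = cv2.dilate(eroded, kernel)
            skel = cv2.bitwise_or(skel, cv2.subtract(mask, opened))
            mask = eroded
        return skel

    cap = cv2.VideoCapture("surveillance.avi")   # hypothetical input file
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = subtractor.apply(frame)                      # foreground detection
        _, fg = cv2.threshold(fg, 127, 255, cv2.THRESH_BINARY)
        skel = skeletonize(fg)                            # object-part skeleton
        cv2.imshow("skeleton", skel)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break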

    Autonomous Detection and Tracking of an Object Using the AR.Drone Quadcopter

    Nowadays, many robotic applications are being developed to perform tasks autonomously, without any interaction or commands from humans. Developing a system that enables a robot to perform surveillance tasks such as detecting and tracking a moving object will therefore lead to more advanced tasks being carried out by robots in the future. The AR.Drone is a flying robot platform able to take the role of a UAV (Unmanned Aerial Vehicle). Using a computer vision algorithm such as the Hough transform makes it possible to implement such a system on the AR.Drone. In this research, the developed algorithm is able to detect and track an object of a certain shape and color, and it is successfully implemented on the AR.Drone quadcopter for detection and tracking.
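    A minimal sketch of the detection step, assuming the target is a red circular marker: HSV color thresholding followed by a Hough circle transform. The color range and Hough parameters are illustrative assumptions, not values from the paper.

    import cv2

    def detect_marker(frame):
        """Return (x, y, radius) of the strongest red circle, or None."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))   # rough red range
        mask = cv2.GaussianBlur(mask, (9, 9), 2)
        circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                                   param1=100, param2=30,
                                   minRadius=10, maxRadius=120)
        if circles is None:
            return None
        x, y, r = circles[0][0]
        return int(x), int(y), int(r)

    The offset of the detected circle from the image center could then be converted into roll, pitch, and yaw commands sent to the drone.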

    An Improved Video Object Segmentation and Tracking based on Features using Threshold Filtering Technique

    Video object segmentation and tracking is a significant research topic in video surveillance applications. Recently, many approaches have been developed for video object segmentation and detection; however, segmentation based on features such as shape, texture, and intensity has not been performed efficiently. In this paper, an Improved Threshold Filtered Video Object Detection and Tracking (ITFVODT) framework is designed for efficient video object segmentation based on features such as shape, texture, and intensity, and for tracking of moving objects. The ITFVODT framework initially takes a video file as input and segments the video frames based on the shape, texture, and intensity of the image. After object segmentation, a filtering technique is applied to track the video objects; it improves video quality by reducing the mean square error. Finally, the framework performs video object detection with the help of a thresholding technique, which in turn improves detection accuracy. The proposed ITFVODT framework uses video images obtained from the Internet Archive (a 501(c)(3) non-profit) for its experiments. Its performance is evaluated with metrics such as object segmentation accuracy, Peak Signal to Noise Ratio (PSNR), object tracking accuracy, Mean Square Error (MSE), and object detection accuracy on frames containing moving objects. Experimental analysis shows that the ITFVODT framework improves video object segmentation accuracy by 12% and video object detection accuracy by 17% compared to state-of-the-art works.
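    A minimal sketch of threshold-based segmentation together with the MSE and PSNR quality metrics mentioned above; the threshold value is an assumption, not the one used in the paper.

    import cv2
    import numpy as np

    def segment(frame, thresh=127):
        """Binary object mask from intensity thresholding."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        return mask

    def mse(a, b):
        """Mean square error between two images of the same size."""
        return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

    def psnr(a, b):
        """Peak signal-to-noise ratio in dB for 8-bit images."""
        err = mse(a, b)
        return float("inf") if err == 0 else 10.0 * np.log10(255.0 ** 2 / err)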

    Moving object detection for interception by a humanoid robot

    Interception of a moving object by an autonomous robot is an important problem in robotics. It has various application areas, such as industrial settings where products on a conveyor are picked up by a robotic arm, military use to halt intruders, robotic soccer (where robots try to reach the moving ball and block an opponent's attempt to pass it), and other challenging situations. Interception is itself a complex task that demands target recognition capability and proper navigation and actuation toward the moving target. There are numerous techniques for intercepting stationary targets and targets that move along a known trajectory (linear, circular, or parabolic). However, much less research has addressed objects that move with an unknown and unpredictable trajectory, change scale, and are seen from changing viewpoints, while the reference frame of the robot's vision system is itself dynamic. This study aims to find methods for object detection and tracking with a vision system, applicable to the autonomous interception of a moving humanoid robot target by another humanoid robot. With the implemented vision system, a robot is able to detect, track, and intercept the moving target in a dynamic environment, taking into account the unique characteristics of a humanoid robot, such as the kinematics of walking. The vision system combines object detection based on Haar/LBP feature classifiers trained as boosted cascades with target contour tracking using optical flow techniques. Constant updates during navigation help to intercept an object moving with an unpredicted trajectory.
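    A minimal sketch of that detect-then-track loop, assuming OpenCV's stock frontal-face Haar cascade as a stand-in for the robot-specific boosted-cascade classifier, with pyramidal Lucas-Kanade optical flow re-tracking feature points between detections. The cascade file, camera source, and parameter values are illustrative assumptions.

    import cv2
    import numpy as np

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)
    prev_gray, points = None, None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        if points is None or len(points) < 5:
            # (Re)detect the target and seed feature points inside its bounding box.
            boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(boxes) > 0:
                x, y, w, h = boxes[0]
                corners = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w],
                                                  maxCorners=50,
                                                  qualityLevel=0.01,
                                                  minDistance=5)
                if corners is not None:
                    points = corners + np.array([[x, y]], dtype=np.float32)
        elif prev_gray is not None:
            # Track the existing points with pyramidal Lucas-Kanade optical flow.
            points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                         points, None)
            points = points[status.flatten() == 1]

        prev_gray = gray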

    Application of improved you only look once model in road traffic monitoring system

    The present research focuses on developing an intelligent traffic management solution for tracking vehicles on roads. Our proposed work builds an improved You Only Look Once (YOLOv4) traffic monitoring system that uses the CSPDarknet53 architecture as its backbone. The Deep SORT learning methodology for multi-target vehicle detection and tracking from traffic video is also part of this study. We include components such as the Kalman filter, which predicts the state of each tracked target, and the Hungarian algorithm, which associates detections with the correct track from frame to frame. We use an enhanced object detection network design and new data augmentation techniques with YOLOv4, which ultimately aid traffic monitoring. Until recently, object detection models could be either accurate or fast, but rarely both; YOLOv4 is a significant improvement, achieving strong detection performance at a very high number of frames per second (FPS). The current study develops an intelligent video-surveillance-based vehicle tracking system that tracks vehicles using a neural network, image-based tracking, and YOLOv4. Real video sequences of road traffic are used to test the effectiveness of the suggested method. Simulations demonstrate that the suggested technique significantly increases graphics processing unit (GPU) speed and FPS compared to baseline algorithms.
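    A minimal sketch of the Deep SORT-style association step mentioned above: detections from the current frame are matched to existing tracks with the Hungarian algorithm over an IoU cost matrix (in the full method a Kalman filter first predicts each track's box, and appearance features are also used). The box format and IoU threshold are assumptions for illustration.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def iou(a, b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def associate(track_boxes, det_boxes, min_iou=0.3):
        """Hungarian assignment of detections to tracks; returns index pairs."""
        if not track_boxes or not det_boxes:
            return []
        cost = np.array([[1.0 - iou(t, d) for d in det_boxes]
                         for t in track_boxes])
        rows, cols = linear_sum_assignment(cost)
        return [(r, c) for r, c in zip(rows, cols)
                if 1.0 - cost[r, c] >= min_iou]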

    Real Time Fusion of Radioisotope Direction Estimation and Visual Object Tracking

    Research into discovering prohibited nuclear material plays an integral role in providing security from terrorism. Although many diverse methods contribute to defense, there exists a capability gap in localizing moving sources. This thesis introduces a real time radioisotope tracking algorithm assisted by visual object tracking methods to fill the capability gap. The proposed algorithm can estimate carrier likelihood for objects in its field of view, and is designed to assist a pedestrian agent wearing a backpack detector. The complex, crowd-filled, urban environments where this algorithm must function, combined with the size and weight limitations of a pedestrian system, make designing a functioning algorithm challenging. The contribution of this thesis is threefold. First, a generalized directional estimator is proposed. Second, two state-of-the-art visual object detection and visual object tracking methods are combined into a single tracking algorithm. Third, those outputs are fused to produce a real time radioisotope tracking algorithm. This algorithm is designed for use with the backpack detector built by the IDEAS for WIND research group. This setup takes advantage of recent advances in detector, camera, and computer technologies to meet the challenging physical limitations. The directional estimator operates via gradient boosting regression to predict radioisotope direction with a variance of 50 degrees when trained on a simple laboratory dataset. Under conditions similar to other state-of-the-art methods, the accuracy is comparable. YOLOv3 and SiamFC are chosen by evaluating advanced visual tracking methods in terms of speed and efficiency across multiple architectures, and in terms of accuracy on datasets like the Visual Object Tracking (VOT) Challenge and Common Objects in Context (COCO). The resultant tracking algorithm operates in real time. The outputs of direction estimation and visual tracking are fused using sequential Bayesian inference to predict carrier likelihood. Using lab trials evaluated by hand on visual and nuclear data, and a synthesized challenge dataset using visual data from the Boston Marathon attack, it can be observed that this prototype system advances the state-of-the-art towards localization of a moving source.
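    A minimal sketch of the fusion idea described above: each visually tracked object carries a carrier probability that is updated by sequential Bayesian inference whenever a new direction estimate arrives. The Gaussian bearing likelihood, and using the quoted 50-degree spread as its standard deviation, are assumptions for illustration rather than the thesis's exact model.

    import math

    SIGMA_DEG = 50.0   # direction-estimate spread quoted in the abstract

    def bearing_likelihood(object_bearing_deg, estimated_bearing_deg):
        """Likelihood that an object at the given bearing is the carrier."""
        # Wrap the angular error into the range [-180, 180) degrees.
        err = (object_bearing_deg - estimated_bearing_deg + 180.0) % 360.0 - 180.0
        return math.exp(-0.5 * (err / SIGMA_DEG) ** 2)

    def update_carrier_probs(priors, object_bearings, estimated_bearing_deg):
        """One sequential Bayesian update over all visually tracked objects."""
        posteriors = [p * bearing_likelihood(b, estimated_bearing_deg)
                      for p, b in zip(priors, object_bearings)]
        total = sum(posteriors)
        return [p / total for p in posteriors] if total > 0 else list(priors)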