MOTION DETECTION IN MOVING BACKGROUND USING ORB FEATURE MATCHING AND AFFINE TRANSFORM
Visual surveillance systems have gained considerable interest in recent years due to their importance in military and security applications. Surveillance cameras are installed in security-sensitive areas such as banks, train stations, highways, and borders. In computer vision, moving object detection and tracking are the most important preliminary steps for higher-level video analysis applications, and detecting moving objects against a moving background is an important research area in image/video processing and computer vision. Feature matching underlies many computer vision problems, such as object recognition and structure from motion. ORB is used for feature detection and tracking, with the objective of tracking moving objects in video captured by a moving camera. Oriented FAST and Rotated BRIEF (ORB) combines two major techniques: Features from Accelerated Segment Test (FAST) and Binary Robust Independent Elementary Features (BRIEF). Mismatched features between two frames are rejected by the proposed method to improve the accuracy of motion compensation, and residues are removed using a logical AND operation. To validate the proposed method, experiments compare it to Scale-Invariant Feature Transform (SIFT) based and Speeded-Up Robust Features (SURF) based methods in both detection accuracy and efficiency
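As a loose illustration of the compensation step described in this abstract (not the paper's exact pipeline), matched feature points between two frames can be used to fit a 2x3 affine transform by least squares, with high-residual correspondences discarded as mismatches. The function names and the pixel threshold below are illustrative assumptions:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points onto dst (both Nx2)."""
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src          # rows encoding x' = a*x + b*y + tx
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src          # rows encoding y' = c*x + d*y + ty
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return params.reshape(2, 3)

def reject_mismatches(src, dst, threshold=3.0):
    """Fit an affine transform, then drop correspondences whose reprojection
    residual exceeds `threshold` pixels (treated as mismatches)."""
    M = estimate_affine(src, dst)
    pred = src @ M[:, :2].T + M[:, 2]
    keep = np.linalg.norm(pred - dst, axis=1) < threshold
    return src[keep], dst[keep], M
```

In a full pipeline the surviving matches would be used to warp the previous frame before differencing, so only genuinely moving objects remain.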
Sparse optical flow regularisation for real-time visual tracking
Optical flow can greatly improve the robustness of visual tracking algorithms. While dense optical flow algorithms have various applications, they cannot be used for real-time solutions without resorting to GPU computation. Furthermore, most optical flow algorithms fail in challenging lighting environments due to violation of the brightness constraint. We propose a simple but effective iterative regularisation scheme for real-time, sparse optical flow algorithms that is shown to be robust to sudden illumination changes and can handle large displacements. The algorithm outperforms well-known techniques on real-life video sequences while being much faster to compute. Our solution increases the robustness of a real-time particle-filter-based tracking application, consuming only a fraction of the available CPU power. Furthermore, a new and realistic optical flow dataset with annotated ground truth is created and made freely available for research purposes
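The abstract does not spell out the regularisation scheme, but one simple way to regularise sparse flow iteratively is to pull each flow vector toward the median flow of its spatial neighbours, which damps outliers such as those caused by sudden illumination changes. This is a minimal sketch under that assumption; the radius, blending weight, and iteration count are illustrative:

```python
import numpy as np

def regularise_flow(points, flow, radius=30.0, alpha=0.5, iterations=5):
    """Iteratively blend each sparse flow vector (Nx2) toward the per-component
    median flow of feature points within `radius` pixels, suppressing outliers."""
    flow = flow.copy()
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbours = dist < radius          # each point's neighbourhood includes itself
    for _ in range(iterations):
        smoothed = np.empty_like(flow)
        for i in range(len(points)):
            smoothed[i] = np.median(flow[neighbours[i]], axis=0)
        flow = (1 - alpha) * flow + alpha * smoothed
    return flow
```

The median makes a single bad match ineffective at corrupting its neighbours, while consistent local motion passes through unchanged.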
High-speed multi-dimensional relative navigation for uncooperative space objects
This work proposes a high-speed Light Detection and Ranging (LIDAR) based navigation architecture appropriate for uncooperative relative space navigation applications. In contrast to current solutions that exploit 3D LIDAR data, our architecture transforms the odometry problem from 3D space into multiple 2.5D ones and completes it by utilizing a recursive filtering scheme. Trials evaluate several current state-of-the-art 2D keypoint detection and local feature description methods, as well as recursive filtering techniques, on a number of simulated but credible scenarios involving a satellite model developed by Thales Alenia Space (France). The most appealing performance is attained by the Good Features to Track (GFTT) keypoint detector combined with the KAZE feature descriptor, paired with either the H∞ or the Kalman recursive filter. Experimental results demonstrate that, compared to current algorithms, the GFTT/KAZE combination is highly appealing, affording odometry that is an order of magnitude more accurate together with a very low processing burden: depending on the competitor method, computation can be more than an order of magnitude faster
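As a minimal sketch of the recursive filtering component (the generic Kalman predict/update equations, not this architecture's actual state model or its H∞ variant), a single cycle fusing a noisy odometry measurement looks like this:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One Kalman predict/update cycle.

    x, P : prior state estimate and covariance
    z    : measurement vector
    F, H : state-transition and measurement matrices
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state and its uncertainty through the motion model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weigh the measurement against the prediction
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

A constant-velocity model with position-only measurements, for instance, already smooths the frame-to-frame odometry estimates substantially.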
Learning Articulated Motions From Visual Demonstration
Many functional elements of human homes and workplaces consist of rigid components which are connected through one or more sliding or rotating linkages. Examples include doors and drawers of cabinets and appliances; laptops; and swivel office chairs. A robotic mobile manipulator would benefit from the ability to acquire kinematic models of such objects from observation. This paper describes a method by which a robot can acquire an object model by capturing depth imagery of the object as a human moves it through its range of motion. We envision that in future, a machine newly introduced to an environment could be shown by its human user the articulated objects particular to that environment, inferring from these "visual demonstrations" enough information to actuate each object independently of the user.

Our method employs sparse (markerless) feature tracking, motion segmentation, component pose estimation, and articulation learning; it does not require prior object models. Using the method, a robot can observe an object being exercised, infer a kinematic model incorporating rigid, prismatic and revolute joints, then use the model to predict the object's motion from a novel vantage point. We evaluate the method's performance, and compare it to that of a previously published technique, for a variety of household objects.

Comment: Published in Robotics: Science and Systems X, Berkeley, CA. ISBN: 978-0-9923747-0-
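As a hedged sketch of one ingredient of articulation learning (not the authors' algorithm), a tracked 2D feature trajectory can be classified as belonging to a prismatic or a revolute joint by comparing the residual of a least-squares line fit against that of an algebraic (Kåsa) circle fit:

```python
import numpy as np

def joint_type(traj):
    """Classify an Nx2 feature trajectory as 'prismatic' (straight line)
    or 'revolute' (circular arc) by comparing least-squares fit residuals."""
    centered = traj - traj.mean(axis=0)
    # Line fit: RMS distance to the best-fit line is the smallest singular
    # value of the centered points, scaled by sqrt(N)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    line_rms = s[-1] / np.sqrt(len(traj))
    # Kasa circle fit: solve 2*cx*x + 2*cy*y + c = x^2 + y^2 in least squares
    A = np.column_stack([2 * traj, np.ones(len(traj))])
    b = (traj ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    circle_rms = np.sqrt(np.mean(
        (np.linalg.norm(traj - np.array([cx, cy]), axis=1) - r) ** 2))
    return 'prismatic' if line_rms < circle_rms else 'revolute'
```

For a revolute joint, the fitted circle's center and radius also give the joint axis location in the image plane; a 3D version of the same idea works on depth-tracked features.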