
    Object Tracking in Distributed Video Networks Using Multi-Dimensional Signatures

    Once an expensive toy in the hands of governmental agencies, computers have evolved a long way from huge vacuum-tube-based machines to today's small but more than a thousand times more powerful personal computers. Computers have long been investigated as the foundation for an artificial vision system. The computer vision discipline has developed rapidly over the past few decades, from rudimentary motion detection systems to complex model-based object motion analysis algorithms. Our work is one such improvement over previous algorithms developed for the purpose of object motion analysis in video feeds. It is based on the principle of multi-dimensional object signatures, which are constructed from individual attributes extracted through video processing. While past work has proceeded along similar lines, the lack of a comprehensive object definition model severely restricts the application of such algorithms to controlled situations. In conditions with varying external factors, such algorithms perform less efficiently due to inherent assumptions that attribute values remain constant. Our approach assumes a variable environment in which the recorded attribute values of an object are prone to variability. Variations in the accuracy of object attribute values are addressed by incorporating weights for each attribute that vary according to local conditions at a sensor location; this ensures that attribute values with higher accuracy are accorded more credibility in the object matching process. Variations in the attribute values themselves (such as the surface color of the object) are addressed by applying error corrections, such as shadow elimination, to the detected object profile. Experiments were conducted to verify our hypothesis. The results established the validity of our approach, as higher matching accuracy was obtained with the multi-dimensional approach than with a single-attribute comparison.
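    Below is a minimal Python sketch of the weighted, multi-attribute signature matching described above; the attribute names, the normalized-distance score, and the weight values are illustrative assumptions, not the thesis's actual formulation.

```python
# A sketch of weighted multi-attribute signature matching (illustrative
# attribute names and weights, not the thesis's exact formulation).
import numpy as np

def signature_similarity(sig_a, sig_b, weights):
    """Compare two object signatures attribute by attribute.

    sig_a, sig_b: dicts mapping attribute name -> feature vector.
    weights: dict mapping attribute name -> local confidence weight,
             raised or lowered according to conditions at the sensor.
    """
    total, weight_sum = 0.0, 0.0
    for attr, w in weights.items():
        a = np.asarray(sig_a[attr], dtype=float)
        b = np.asarray(sig_b[attr], dtype=float)
        # Normalized distance in [0, 1]; 1 - distance is the similarity.
        dist = np.linalg.norm(a - b) / (np.linalg.norm(a) + np.linalg.norm(b) + 1e-9)
        total += w * (1.0 - dist)
        weight_sum += w
    return total / weight_sum  # overall match score, higher is better

# Hypothetical usage: color is down-weighted at a poorly lit sensor.
sig1 = {"color_hist": [0.2, 0.5, 0.3], "height": [1.8], "speed": [0.9]}
sig2 = {"color_hist": [0.25, 0.45, 0.3], "height": [1.7], "speed": [1.0]}
weights = {"color_hist": 0.3, "height": 1.0, "speed": 0.8}
print(signature_similarity(sig1, sig2, weights))
```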

    Online video streaming for human tracking based on weighted resampling particle filter

    © 2018 The Authors. Published by Elsevier Ltd. This paper proposes a weighted resampling method for a particle filter applied to human tracking with an active camera. The proposed system consists of three major parts: human detection, human tracking, and camera control. A codebook matching algorithm extracts the human region in the detection stage, and the particle filter estimates the position of the human in every input image. During resampling, the system selects the particles with the highest weights, because they provide more accurate tracking features. A proportional-integral-derivative (PID) controller then drives the active camera by minimizing the difference between the center of the image and the object position obtained from the particle filter; this position difference is converted into a pan-tilt speed that keeps the human within the camera's field of view (FOV). Because image intensity changes over time during tracking, the system uses a Gaussian mixture model (GMM) to update the human feature model. Temporary occlusion is handled through feature similarity and the resampled particles, and since the particle filter estimates the human's position in every input frame, the active camera moves smoothly. The robustness and accuracy of the proposed tracking system are demonstrated in the experimental results.
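    A minimal sketch of the two core ideas in this system, under assumed simplifications (a 1-D horizontal particle state and hand-picked gains): resampling restricted to the highest-weighted particles, and a PID term that converts the pixel error between the object and the image center into a pan speed.

```python
# A sketch of weighted resampling plus PID camera control. The 1-D state,
# the jitter scale, and the PID gains are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def weighted_resample(particles, weights, keep_frac=0.5):
    """Draw a new particle set only from the top-weighted fraction."""
    n = len(particles)
    order = np.argsort(weights)[::-1]             # indices by weight, descending
    top = order[: max(1, int(keep_frac * n))]     # keep the high-weight particles
    p = weights[top] / weights[top].sum()         # renormalize their weights
    idx = rng.choice(top, size=n, p=p)            # sample n particles from them
    return particles[idx] + rng.normal(0, 2.0, n) # jitter preserves diversity

def pid_pan_speed(err, state, kp=0.02, ki=0.001, kd=0.01, dt=1 / 30):
    """Convert pixel error (object x minus image-center x) to a pan speed."""
    state["i"] += err * dt       # integral term
    d = (err - state["e"]) / dt  # derivative term
    state["e"] = err
    return kp * err + ki * state["i"] + kd * d

# Hypothetical usage with 1-D horizontal positions in a 640-pixel-wide image.
particles = rng.uniform(0, 640, 100)
weights = np.exp(-0.5 * ((particles - 350) / 40.0) ** 2)  # mock likelihoods
particles = weighted_resample(particles, weights)
state = {"i": 0.0, "e": 0.0}
print(pid_pan_speed(particles.mean() - 320, state))
```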

    A Penny's Worth of Principles and Standards Using Scientific Notation


    Vision-based relative position estimation in surgical robotics.

    Teleoperation-based Robotic-Assisted Minimally Invasive Surgery (RAMIS) has gained immense popularity in the medical field. However, the limited physical interaction between surgeon and patient poses a significant challenge. In RAMIS, the surgeon operates the robotic system remotely, which can diminish the personal connection and raise concerns about immediate responsiveness to unforeseen situations. Additionally, patients may perceive RAMIS as riskier due to potential technological failures and a lack of direct surgeon control. Surgeons have identified accidental collisions between surgical instruments and tissue as a critical issue. This work presents a technique that measures the distance between a surgical tool and tissue by extracting feature points from a Static Virtual Marker (SVM) and employing a classic feature detection algorithm, Oriented FAST and Rotated BRIEF (ORB). Using a customized surgical robot and a ROS-based transform measurement system, the approach was successfully validated in the Gazebo simulation environment, offering safer surgical operations.
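    The following sketch illustrates the generic ORB-based marker-localization step with OpenCV; the image file names, the number of matches kept, and the pixel-space tissue point are assumptions, and the paper's actual SVM extraction and ROS transform pipeline are not reproduced.

```python
# A sketch of ORB keypoint matching between a marker template and a camera
# frame, used to estimate a pixel distance to a tissue point. File names
# and the tissue point are hypothetical.
import cv2
import numpy as np

marker = cv2.imread("marker_template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
frame = cv2.imread("endoscope_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(marker, None)
kp2, des2 = orb.detectAndCompute(frame, None)

# Hamming distance is the natural metric for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Take the marker's location in the frame as the centroid of the best matches.
pts = np.float32([kp2[m.trainIdx].pt for m in matches[:20]])
marker_center = pts.mean(axis=0)

# Pixel distance from the marker to a hypothetical tissue point; a
# calibration factor would be needed to convert this to metric units.
tissue_point = np.float32([320, 400])
print(np.linalg.norm(marker_center - tissue_point))
```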

    A video synchronization approach for coherent key-frame extraction and object segmentation

    © 2005 - 2014 JATIT & LLS. All rights reserved. In this paper we discuss a new video frame synchronization approach for coherent key-frame extraction and object segmentation. As two basic units of content-based video analysis, key-frame extraction and object segmentation are usually implemented independently and separately, based on different feature sets. Our previous work showed that by exploiting the inherent relationship between key-frames and objects, a set of salient key-frames can be extracted to support robust and efficient object segmentation. This work furthers those numerical studies by suggesting a new analytical approach that jointly formulates key-frame extraction and object segmentation via a statistical mixture model, introducing the concept of frame/pixel saliency and capturing the relationship between frames. A modified Expectation-Maximization (EM) algorithm is developed for model estimation, leading to the most salient key-frames for object segmentation. Simulations on both synthetic and real videos show the effectiveness and efficiency of the proposed method.
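    As an illustration of the EM machinery involved, here is a minimal sketch that fits a two-component Gaussian mixture to a scalar per-frame feature and flags frames assigned to the high-activity component as key-frame candidates; the single scalar feature and two components are simplifying assumptions, not the paper's joint frame/pixel saliency model.

```python
# A sketch of EM for a 1-D two-component Gaussian mixture over a mock
# per-frame "motion energy" feature. Not the paper's saliency model.
import numpy as np

def em_gmm_1d(x, iters=50):
    mu = np.array([x.min(), x.max()], dtype=float)  # initial means
    var = np.array([x.var(), x.var()])              # initial variances
    pi = np.array([0.5, 0.5])                       # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each frame.
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        pi = nk / len(x)
    return mu, r

# Mock per-frame motion energy: mostly quiet frames plus a burst of activity.
rng = np.random.default_rng(1)
energy = np.concatenate([rng.normal(0.2, 0.05, 150), rng.normal(1.0, 0.2, 50)])
mu, r = em_gmm_1d(energy)
key_frames = np.where(r[:, np.argmax(mu)] > 0.9)[0]  # salient frame indices
print(key_frames)
```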

    Detection and Generalization of Spatio-temporal Trajectories for Motion Imagery

    In today's world of vast information availability, users often confront large, unorganized amounts of data with limited tools for managing them. Motion imagery (MI) datasets have become an increasingly popular means of exposing and disseminating information. Commonly, moving objects are of primary interest in modeling such datasets. Users may require different levels of detail, mainly for visualization and further processing, according to the application at hand. In this thesis we exploit the geometric attributes of objects for dataset summarization using a series of image processing and neural network tools. To form data summaries we select representative time instances through the segmentation of an object's spatio-temporal trajectory lines. Instances of high movement variation are selected through a new hybrid self-organizing map (SOM) technique to describe a single spatio-temporal trajectory. Multiple objects move in diverse yet classifiable patterns; to group corresponding trajectories, we utilize an abstraction mechanism that investigates a vague moving relevance between the data in space and time. Thus, we introduce the spatio-temporal neighborhood unit as a variable generalization surface: by altering the unit's dimensions, scaled generalization is accomplished. Common complications in tracking applications, including occlusion, noise, information gaps, and unconnected segments of data sequences, are addressed through the hybrid-SOM analysis. Nevertheless, entangled data sequences, where there is no information on which data entry belongs to which trajectory, are frequently encountered; a multidimensional classification technique that combines a geometric and a backpropagation neural network implementation is used to distinguish between trajectory data. Furthermore, modeling and summarization of two-dimensional phenomena evolving in time brings forward the novel concept of spatio-temporal helixes as compact event representations. The phenomena models are composed of SOM movement nodes (spines) and cardinality shape-change descriptors (prongs). While we focus on the analysis of MI datasets, the framework can be generalized to other types of spatio-temporal datasets. Multiple-scale generalization is allowed in a dynamic, significance-based scale rather than a constant one. The constructed summaries are not just a visualization product; they support further processing for metadata creation, indexing, and querying. Experimentation, comparisons, and error estimations for each technique support the analyses discussed.
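    To illustrate the SOM idea (a plain 1-D SOM, not the thesis's hybrid variant), the sketch below fits a small chain of nodes to a sampled (x, y, t) trajectory so that the nodes summarize its shape; the node count and learning schedule are assumptions.

```python
# A sketch of a 1-D self-organizing map placing representative nodes along
# a spatio-temporal trajectory. Node count and schedule are assumptions.
import numpy as np

def som_1d(points, n_nodes=10, epochs=100, lr0=0.5, sigma0=2.0):
    rng = np.random.default_rng(0)
    nodes = points[rng.choice(len(points), n_nodes)]  # initialize from data
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                   # decaying learning rate
        sigma = max(0.5, sigma0 * (1 - t / epochs))   # shrinking neighborhood
        for p in points[rng.permutation(len(points))]:
            bmu = np.argmin(((nodes - p) ** 2).sum(axis=1))  # best-matching unit
            d = np.abs(np.arange(n_nodes) - bmu)             # distance on the chain
            h = np.exp(-d ** 2 / (2 * sigma ** 2))           # neighborhood kernel
            nodes += lr * h[:, None] * (p - nodes)           # pull nodes toward p
    return nodes

# Hypothetical (x, y, t) samples of one trajectory; the returned nodes are
# the representative instances used for summarization.
t = np.linspace(0, 1, 300)
traj = np.column_stack([10 * t, np.sin(6 * t), t])
print(som_1d(traj))
```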

    Microgravity: A Teacher's Guide With Activities in Science, Mathematics, and Technology

    The purpose of this curriculum supplement guide is to define and explain microgravity and show how microgravity can help us learn about the phenomena of our world. The front section of the guide is designed to provide teachers of science, mathematics, and technology at many levels with a foundation in microgravity science and applications. It begins with background information for the teacher on what microgravity is and how it is created. This is followed by information on the domains of microgravity science research: biotechnology, combustion science, fluid physics, fundamental physics, materials science, and microgravity research geared toward exploration. The background section concludes with a history of microgravity research and the expectations microgravity scientists have for research on the International Space Station. Finally, the guide concludes with a suggested reading list, NASA educational resources (including electronic resources), and an evaluation questionnaire.

    Object Recognition and Modeling Using SIFT Features

    In this paper we present a technique for object recognition and modeling based on local image feature matching. Given a complete set of views of an object, the goal of our technique is to recognize the same object in an image of a cluttered environment containing it, and to estimate its pose. The method is based on visual modeling of objects from a multi-view representation of the object to be recognized. The first step creates the object model by selecting a subset of the available views, using SIFT descriptors to evaluate image similarity and relevance. The selected views are then taken as the model of the object, and we show that they can effectively represent its main visual aspects. Recognition is performed by comparing the image containing an object in a generic position against the views selected as the object model. Once an object has been recognized, its pose can be estimated by searching the complete set of views of the object. Experimental results are very encouraging, on both a private dataset acquired in our lab and a publicly available dataset.
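    A minimal sketch of scoring pairwise view similarity with SIFT matches and Lowe's ratio test, in the spirit of the view-selection step described above; the file names and the 0.75 ratio threshold are assumptions.

```python
# A sketch of SIFT-based similarity scoring between two candidate model
# views using OpenCV. File names and the ratio threshold are hypothetical.
import cv2

def sift_similarity(img_a, img_b, ratio=0.75):
    """Return the number of distinctive SIFT matches between two views."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test keeps a match only when it is clearly better
    # than the second-best candidate.
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]
    return len(good)

# Hypothetical usage: views scoring high against many others are redundant
# and can be dropped from the model; low-scoring views add new aspects.
view1 = cv2.imread("view_01.png", cv2.IMREAD_GRAYSCALE)
view2 = cv2.imread("view_02.png", cv2.IMREAD_GRAYSCALE)
print(sift_similarity(view1, view2))
```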