138,140 research outputs found

    Real-time Tracking Based on Neuromorphic Vision

    Full text link
    Real-time tracking is an important problem in computer vision, and most existing methods are based on conventional cameras. Neuromorphic vision refers to incorporating neuromorphic vision sensors, such as silicon retinas, into a vision processing system. With advances in silicon technology, asynchronous event-based silicon retinas that mimic neuro-biological architectures have been developed in recent years. In this work, we combine a computer vision tracking algorithm with the information encoding mechanism of event-based sensors, which is inspired by the neural rate coding mechanism. Real-time tracking of a single object is successfully realized at a high speed of 100 time bins per second. Our method demonstrates that computer vision methods can be applied to neuromorphic vision processing, and that neuromorphic vision sensors enable faster real-time tracking than conventional cameras
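    The rate-coding idea can be illustrated with a short sketch: asynchronous events are accumulated into fixed-rate count frames (100 bins per second, as in the abstract), and a naive centroid tracker is run per bin. Only the binning rate comes from the abstract; the event layout, function names, and centroid tracker below are assumptions for illustration.

```python
import numpy as np

def events_to_rate_frames(events, width, height, duration_s, bins_per_s=100):
    """Accumulate asynchronous events into per-bin count images (rate coding).

    events: array of (t, x, y) rows, with t in seconds. The 100 bins/s rate
    matches the abstract; everything else is an illustrative reconstruction.
    """
    n_bins = int(duration_s * bins_per_s)
    frames = np.zeros((n_bins, height, width), dtype=np.uint16)
    bin_idx = np.minimum((events[:, 0] * bins_per_s).astype(int), n_bins - 1)
    np.add.at(frames, (bin_idx, events[:, 2].astype(int), events[:, 1].astype(int)), 1)
    return frames

def track_centroid(frames, min_events=5):
    """Naive single-object tracker: event-count centroid in each time bin."""
    track = []
    for frame in frames:
        if frame.sum() < min_events:
            track.append(None)  # too few events; object likely static or absent
            continue
        ys, xs = np.nonzero(frame)
        weights = frame[ys, xs]
        track.append((np.average(xs, weights=weights), np.average(ys, weights=weights)))
    return track
```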

    Real-time Sobel square edge detector for night vision analysis

    Get PDF
    Vision analysis under low or no illumination has been gaining attention recently, especially in security surveillance and medical diagnosis. In this paper, a real-time Sobel square edge detector is developed as a vision enhancer that renders clear object shapes in target scenes, enabling further analysis such as object or human detection, object or human tracking, human behavior recognition, and identification of abnormal scenes or activities. The method is optimized for real-time applications and compared with existing edge detectors. Program code is presented in the paper, and the results show that the proposed algorithm is promising for generating clear vision data with low noise
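    The abstract does not spell out what "square" means; a common reading is that the detector uses the squared gradient magnitude Gx^2 + Gy^2 and skips the square root for speed. The sketch below implements that reading with OpenCV's standard Sobel operator; it is an assumption, not the paper's published code.

```python
import cv2
import numpy as np

def sobel_square_edges(gray):
    """Squared-magnitude Sobel edge map: Gx^2 + Gy^2, with no square root.

    Skipping the sqrt avoids a per-pixel transcendental operation, the kind
    of shortcut a real-time "Sobel square" detector would plausibly rely on.
    """
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag2 = gx * gx + gy * gy
    # Normalize to 8-bit for display; threshold instead if feeding a detector.
    return cv2.normalize(mag2, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```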

    EagleSense: tracking people and devices in interactive spaces using real-time top-view depth-sensing

    Get PDF
    Real-time tracking of people's locations, orientations, and activities is increasingly important for designing novel ubiquitous computing applications. Top-view camera-based tracking avoids occlusion when tracking collaborating people, but often requires complex tracking systems and advanced computer vision algorithms. To facilitate the prototyping of ubiquitous computing applications for interactive spaces, we developed EagleSense, a real-time human posture and activity recognition system that uses a single top-view depth-sensing camera. We contribute our novel algorithm and processing pipeline, including details for calculating silhouette extremity features and applying gradient tree boosting classifiers for activity recognition optimised for top-view depth sensing. EagleSense provides easy access to the real-time tracking data and includes tools that facilitate integration into custom applications. We report the results of a technical evaluation with 12 participants and demonstrate the capabilities of EagleSense with application case studies
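    A minimal sketch of the two named ingredients, silhouette extremity features and gradient tree boosting, follows. The feature definition, the choice of scikit-learn's GradientBoostingClassifier, and all names are illustrative assumptions; EagleSense's actual pipeline is more elaborate.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def extremity_features(silhouette, k=5):
    """Distances and angles of the k silhouette points farthest from the centroid.

    A stand-in for "silhouette extremity" features computed on a top-view
    depth mask; the paper's exact feature definition is richer than this.
    """
    ys, xs = np.nonzero(silhouette)
    cx, cy = xs.mean(), ys.mean()
    d = np.hypot(xs - cx, ys - cy)
    top = np.argsort(d)[-k:]
    ang = np.arctan2(ys[top] - cy, xs[top] - cx)
    return np.concatenate([d[top] / d.max(), np.sort(ang)])

# Hypothetical usage with labelled depth frames (X_train: stacked feature
# vectors, y_train: activity labels):
# clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
# clf.fit(X_train, y_train)
# activity = clf.predict([extremity_features(mask)])[0]
```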

    Better Feature Tracking Through Subspace Constraints

    Full text link
    Feature tracking in video is a crucial task in computer vision. Usually, the tracking problem is handled one feature at a time, using a single-feature tracker like the Kanade-Lucas-Tomasi algorithm or one of its derivatives. While this approach works quite well when dealing with high-quality video and "strong" features, it often falters when faced with dark and noisy video containing low-quality features. We present a framework for jointly tracking a set of features, which enables sharing information between the different features in the scene. We show that our method can be employed to track features for both rigid and nonrigid motions (possibly of a few moving bodies) even when some features are occluded. Furthermore, it can be used to significantly improve tracking results in poorly lit scenes (where there is a mix of good and bad features). Our approach does not require direct modeling of the structure or the motion of the scene, and runs in real time on a single CPU core. (Comment: 8 pages, 2 figures. CVPR 2014)
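    The subspace idea can be sketched compactly: under an affine camera model, the 2F-by-P matrix stacking P feature tracks over F frames is approximately low-rank for rigid scenes, so projecting noisy tracks onto a low-dimensional subspace shares information between features. The truncated-SVD projection below shows only this core idea; the paper solves a regularized joint tracking problem rather than a hard projection.

```python
import numpy as np

def project_to_subspace(W, rank=4):
    """Project a 2F-by-P track matrix onto its best rank-r subspace.

    Rows alternate x and y coordinates per frame; columns are features.
    Truncated SVD is the simplest form of the subspace constraint: it
    denoises each track using information from all the others.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```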