
    Real-time detection and tracking of multiple objects with partial decoding in H.264/AVC bitstream domain

    In this paper, we show that probabilistic spatiotemporal macroblock filtering (PSMF) and partial decoding can be applied to effectively detect and track multiple objects in real time in H.264/AVC bitstreams with a stationary background. Our contribution is that our method can not only achieve fast processing times but also handle multiple moving objects that are articulated, change in size, or have internally monotonous color, even though they contain a chaotic set of non-homogeneous motion vectors. In addition, our partial decoding process for H.264/AVC bitstreams makes it possible to improve the accuracy of object trajectories and to overcome long occlusions by using extracted color information. Comment: SPIE Real-Time Image and Video Processing Conference 200
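The compressed-domain idea above can be illustrated with a toy motion-vector filter. This is a simplified sketch, not the paper's PSMF: macroblocks whose motion-vector magnitude exceeds a threshold become candidates, and isolated candidates with too few moving neighbors are suppressed as noise. The thresholds and window size are illustrative assumptions.

```python
import numpy as np

def macroblock_motion_mask(mv, mag_thresh=1.5, min_neighbors=3):
    """Toy spatial filtering of a macroblock motion-vector field.

    mv: (H, W, 2) array of per-macroblock motion vectors.
    Returns a boolean (H, W) mask of likely foreground macroblocks.
    """
    mag = np.linalg.norm(mv, axis=2)
    cand = mag > mag_thresh
    # Count moving neighbors in each 3x3 window via zero-padded shifts.
    padded = np.pad(cand, 1, mode="constant").astype(int)
    h, w = cand.shape
    neighbors = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Keep candidates supported by enough moving neighbors.
    return cand & (neighbors >= min_neighbors)
```

Because only motion vectors (already present in the bitstream) are needed, such a filter can run without full pixel decoding, which is what makes compressed-domain tracking fast.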

    Structured Light-Based 3D Reconstruction System for Plants

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
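Detection metrics like the recall of 0.97 and precision of 0.89 quoted above are typically computed by matching predicted detections to ground truth within a distance tolerance. The following is a hypothetical evaluation sketch (the function name, greedy matching strategy, and tolerance are assumptions, not the paper's protocol):

```python
import math

def detection_precision_recall(pred, truth, tol=10.0):
    """Greedy nearest-neighbor matching of predicted to ground-truth
    centroids (e.g. leaf positions). Each truth point is matched at
    most once; a prediction within `tol` of an unmatched truth point
    counts as a true positive. Returns (precision, recall)."""
    unmatched = list(truth)
    tp = 0
    for p in pred:
        best, best_d = None, tol
        for t in unmatched:
            d = math.dist(p, t)
            if d <= best_d:
                best, best_d = t, d
        if best is not None:
            unmatched.remove(best)  # consume the matched truth point
            tp += 1
    fp = len(pred) - tp   # predictions with no nearby truth point
    fn = len(unmatched)   # truth points no prediction reached
    precision = tp / (tp + fp) if pred else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    return precision, recall
```

Greedy matching is a common simple choice; optimal (Hungarian) assignment gives slightly different counts when detections are dense.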

    Human robot interaction in a crowded environment

    Human Robot Interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered to be laborious, unsafe, or repetitive. Vision-based human robot interaction is a major component of HRI, with which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who has initiated the gesture. In this thesis, we have proposed a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly.
For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them or, if an individual is receptive to the robot's interaction, it may approach the person. Finally, if the user is moving in the environment, it can analyse further to understand whether any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine potential intentions. For improving system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
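The Bayesian combination of visual cues described above can be sketched, under a naive independence assumption, as multiplying per-cue likelihoods into a posterior over "interacting with the robot" versus not. This is an illustrative simplification, not the thesis's actual network, and the cue names and probabilities below are invented:

```python
def fuse_cues(prior, likelihoods):
    """Naive-Bayes fusion of independent visual cues.

    prior: P(person is interacting with the robot) before seeing cues.
    likelihoods: iterable of (P(cue | interacting), P(cue | not)).
    Returns the posterior P(interacting | all cues).
    """
    p_yes = prior
    p_no = 1.0 - prior
    for l_yes, l_no in likelihoods:
        p_yes *= l_yes   # accumulate evidence for interaction
        p_no *= l_no     # and for the alternative
    return p_yes / (p_yes + p_no)  # normalize over the two hypotheses

# Hypothetical cues: face oriented at robot, raised-hand gesture.
posterior = fuse_cues(0.5, [(0.9, 0.2), (0.8, 0.5)])
```

A full Bayesian network would additionally model dependencies between cues and let contextual feedback reshape the priors, as the thesis describes.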

    Practical Color-Based Motion Capture

    Motion capture systems have been widely used for high-quality content creation and virtual reality but are rarely used in consumer applications due to their price and setup cost. In this paper, we propose a motion capture system built from commodity components that can be deployed in a matter of minutes. Our approach uses one or more webcams and a color shirt to track the upper body at interactive rates. We describe a robust color calibration system that enables our color-based tracking to work against cluttered backgrounds and under multiple illuminants. We demonstrate our system in several real-world indoor and outdoor settings.
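The core of color-keyed tracking is segmenting the calibrated shirt color in each frame and locating the resulting blob. A minimal sketch, assuming HSV pixels with all channels normalized to [0, 1] (the paper's calibration pipeline is more sophisticated; the gate thresholds here are assumptions):

```python
import numpy as np

def color_mask(hsv_img, hue_lo, hue_hi, sat_min=0.3, val_min=0.2):
    """Segment a calibrated target color in an (H, W, 3) HSV image.

    The hue range may wrap around 0 (e.g. reds spanning 0.95..0.05),
    and loose saturation/value gates help the hue key survive
    illumination changes."""
    h, s, v = hsv_img[..., 0], hsv_img[..., 1], hsv_img[..., 2]
    if hue_lo <= hue_hi:
        in_hue = (h >= hue_lo) & (h <= hue_hi)
    else:  # wrap-around hue interval
        in_hue = (h >= hue_lo) | (h <= hue_hi)
    return in_hue & (s >= sat_min) & (v >= val_min)

def centroid(mask):
    """Return the (row, col) centroid of True pixels, or None."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())
```

Per-illuminant calibration, as the paper proposes, would replace the fixed thresholds with ranges learned from the observed shirt under the current lighting.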

    Comparative Study of Model-Based and Learning-Based Disparity Map Fusion Methods

    Creating an accurate depth map has several valuable applications, including augmented/virtual reality, autonomous navigation, indoor/outdoor mapping, object segmentation, and aerial topography. Current hardware solutions for precise 3D scanning are relatively expensive. To combat hardware costs, software alternatives based on stereoscopic images have previously been proposed. However, software solutions are less accurate than hardware solutions, such as laser scanning, and are subject to a variety of irregularities. Notably, disparity maps generated from stereo images typically fall short in cases of occlusion, near object boundaries, and on repetitive-texture or texture-less regions. Several post-processing methods are examined in an effort to combine strong algorithm results and alleviate erroneous disparity regions. These methods include basic statistical combinations, histogram-based voting, edge detection guidance, support vector machines (SVMs), and bagged trees. Individual errors and average errors are compared between the newly introduced fusion methods and the existing disparity algorithms. Several acceptable solutions are identified to bridge the gap between 3D scanning and stereo imaging. It is shown that fusing disparity maps can result in lower error rates than individual algorithms across the dataset while maintaining a high level of robustness.
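The simplest of the statistical combinations mentioned can be sketched as a per-pixel median over the valid estimates from several stereo algorithms. This is an illustrative sketch in the spirit of those methods, not the study's exact implementation; the invalid-disparity sentinel of 0 is an assumption:

```python
import warnings
import numpy as np

def fuse_disparities(maps, invalid=0.0):
    """Fuse disparity maps via a per-pixel median over valid values.

    maps: sequence of equally-shaped 2D disparity arrays, where
    `invalid` marks pixels an algorithm failed to estimate.
    A pixel invalid in every map stays invalid in the output.
    """
    stack = np.stack(maps).astype(float)
    stack[stack == invalid] = np.nan    # exclude failures from the median
    with warnings.catch_warnings():
        # nanmedian warns on all-NaN pixels; we map those back to `invalid`.
        warnings.simplefilter("ignore", RuntimeWarning)
        fused = np.nanmedian(stack, axis=0)
    return np.nan_to_num(fused, nan=invalid)
```

The median rejects a single outlier algorithm at each pixel, which is one reason such fusion can beat every individual input map on average error.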

    Assessing machine learning classifiers for the detection of animals' behavior using depth-based tracking

    [EN] There is growing interest in the automatic detection of animals' behaviors and body postures within the field of Animal Computer Interaction, and the benefits this could bring to animal welfare, enabling remote communication, welfare assessment, detection of behavioral patterns, interactive and adaptive systems, etc. Most of the works on animals' behavior recognition rely on wearable sensors to gather information about the animals' postures and movements, which are then processed using machine learning techniques. However, non-wearable mechanisms such as depth-based tracking could also make use of machine learning techniques and classifiers for the automatic detection of animals' behavior. These systems also offer the advantage of working in set-ups in which wearable devices would be difficult to use. This paper presents a depth-based tracking system for the automatic detection of animals' postures and body parts, as well as an exhaustive evaluation of the performance of several classification algorithms based on both a supervised and a knowledge-based approach. The evaluation of the depth-based tracking system and the different classifiers shows that the system proposed is promising for advancing the research on animals' behavior recognition within and outside the field of Animal Computer Interaction. (C) 2017 Elsevier Ltd. All rights reserved. This work is funded by the European Development Regional Fund (EDRF-FEDER) and supported by Spanish MINECO with Project TIN2014-60077-R. It also received support from a postdoctoral fellowship within the VALi+d Program of the Conselleria d'Educacio, Cultura I Esport (Generalitat Valenciana) awarded to Alejandro Catala (APOSTD/2013/013). The work of Patricia Pons is supported by a national grant from the Spanish MECD (FPU13/03831).
Special thanks to our cat participants and their owners, and many thanks to our feline caretakers and therapists, Olga, Asier and Julia, for their valuable collaboration and their dedication to animal wellbeing. Pons Tomás, P.; Jaén Martínez, FJ.; Catalá Bolós, A. (2017). Assessing machine learning classifiers for the detection of animals' behavior using depth-based tracking. Expert Systems with Applications. 86:235-246. https://doi.org/10.1016/j.eswa.2017.05.063
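A supervised posture classifier of the kind evaluated above can be illustrated with a deliberately simple nearest-centroid model over depth-derived features. The feature layout and labels below are invented stand-ins, not the paper's classifiers or data:

```python
import math

def nearest_centroid_predict(train, labels, x):
    """Classify a posture feature vector by its nearest class centroid.

    train: list of feature vectors (e.g. body-part heights from depth).
    labels: behavior label for each training vector.
    x: the sample to classify.
    """
    # Group training vectors by label.
    groups = {}
    for vec, lab in zip(train, labels):
        groups.setdefault(lab, []).append(vec)
    best_lab, best_d = None, float("inf")
    for lab, vecs in groups.items():
        # Per-class centroid: component-wise mean of that class's vectors.
        c = [sum(col) / len(vecs) for col in zip(*vecs)]
        d = math.dist(x, c)
        if d < best_d:
            best_lab, best_d = lab, d
    return best_lab
```

The paper's evaluation compares far richer classifiers (and a knowledge-based alternative), but the pipeline shape is the same: depth tracking yields feature vectors, and a trained model maps them to behavior labels.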