39,617 research outputs found
Human motion retrieval based on freehand sketch
In this paper, we present an integrated framework for human motion retrieval based on freehand sketch. Following a few simple rules, the user can retrieve a desired motion by sketching several key postures. To make sketch-based retrieval efficient and accurate, the 3D postures are projected onto several 2D planes, and a limb-direction feature is proposed to represent both the input sketch and the projected postures. Furthermore, a novel index structure based on a k-d tree is constructed over the motions in the database, which speeds up the retrieval process. With our posture-by-posture retrieval algorithm, a continuous motion can be obtained directly or generated using a pre-computed graph structure. Moreover, our system provides an intuitive user interface. The experimental results demonstrate the effectiveness of our method. © 2014 John Wiley & Sons, Ltd
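A k-d tree index of the kind described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: postures are reduced to fixed-length limb-direction vectors (here hypothetical 2-D points paired with labels), organised into a k-d tree, and retrieved by nearest-neighbour search.

```python
import math

def build_kdtree(points, depth=0):
    # points: list of (feature_vector, label) pairs;
    # split on alternating axes at the median point
    if not points:
        return None
    k = len(points[0][0])
    axis = depth % k
    points = sorted(points, key=lambda p: p[0][axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, query, best=None):
    # best: (distance, label) of the closest posture seen so far
    if node is None:
        return best
    vec, label = node["point"]
    d = math.dist(vec, query)
    if best is None or d < best[0]:
        best = (d, label)
    axis = node["axis"]
    diff = query[axis] - vec[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < best[0]:  # hypersphere crosses the splitting plane
        best = nearest(far, query, best)
    return best
```

Querying with a sketched posture's feature vector returns the closest stored posture while pruning any subtree whose splitting plane lies farther away than the current best match, which is what makes the index faster than a linear scan.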
A semantic feature for human motion retrieval
With the explosive growth of motion capture data, an efficient search engine for retrieving motions from large repositories has become essential in animation production. However, because of the high dimensionality of the data space and the complexity of the matching methods, most existing approaches cannot return results in real time. This paper proposes a high-level semantic feature in a low-dimensional space that captures the essential characteristics of different motion classes. Based on statistical training of a Gaussian mixture model, this feature supports motion matching at both the global clip level and the local frame level. Experimental results show that our approach can retrieve ranked similar motions from a large motion database in real time and can also annotate motions automatically on the fly. Copyright © 2013 John Wiley & Sons, Ltd
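As an illustration of how a Gaussian mixture model can yield a low-dimensional semantic feature, the sketch below (a toy with assumed diagonal-covariance components, not the paper's trained model) computes the posterior responsibility of each motion-class component for a pose; the resulting vector has one entry per class and sums to one.

```python
import math

def gaussian_pdf(x, mean, var):
    # density of a diagonal-covariance Gaussian at x
    norm = math.prod(1.0 / math.sqrt(2 * math.pi * v) for v in var)
    expo = -0.5 * sum((xi - mi) ** 2 / v for xi, mi, v in zip(x, mean, var))
    return norm * math.exp(expo)

def semantic_feature(pose, components):
    # components: list of (weight, mean, variance) per motion class;
    # the posterior responsibilities form a feature whose length
    # equals the number of classes, independent of pose dimension
    likes = [w * gaussian_pdf(pose, m, v) for w, m, v in components]
    total = sum(likes)
    return [l / total for l in likes]
```

Because the feature lives in a space whose dimension is the number of motion classes rather than the raw pose dimension, comparing two frames (or averaging over a clip) stays cheap enough for real-time matching.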
DC-image for real time compressed video matching
This chapter presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without full decompression, and discusses the relevant arguments and supporting evidence. Several local feature detectors are examined to select the best one for matching on the DC-image. Two experiments are carried out in support of this. The first compares the DC-image with the full I-frame in terms of matching performance and computational complexity. The second compares local features against global features for compressed-video matching on the DC-image. The results confirm that the DC-image, despite its greatly reduced size, is promising: it produces higher matching precision than the full I-frame. SIFT, as a local feature, also outperforms most of the standard global features. Its computational complexity is relatively higher, but it remains within the real-time margin, which leaves room for further optimization to reduce this cost.
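The DC coefficient of each 8x8 DCT block in an MPEG I-frame is proportional to the block's mean luminance, so the DC-image is effectively a 1/64-size thumbnail of the frame. The sketch below approximates a DC-image by block-averaging a raw luminance frame; in an actual decoder the DC coefficients would be read directly from the compressed stream, avoiding even this computation.

```python
def dc_image(frame, block=8):
    # frame: 2-D list of luminance values; the mean of each 8x8
    # block approximates its DCT DC coefficient (up to scaling),
    # yielding a thumbnail 1/64 the size of the original frame
    h, w = len(frame), len(frame[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            vals = [frame[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

Feature detection then runs on this tiny image, which is why the overall pipeline can stay within real-time limits even with a relatively expensive detector such as SIFT.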
Real-time motion data annotation via action string
Even though motion capture data are growing explosively, there is still a lack of efficient and reliable methods to automatically annotate all the motions in a database. Moreover, with the popularity of mocap devices in home entertainment systems, real-time human motion annotation or recognition is becoming increasingly important. This paper presents a new motion annotation method that achieves both of these targets at the same time. It uses a probabilistic pose feature based on a Gaussian mixture model to represent each pose. After training a clustered pose feature model, a motion clip can be represented as an action string. A dynamic-programming-based string matching method is then introduced to compare the differences between action strings. Finally, to meet the real-time target, we construct a hierarchical action string structure to quickly label each given action string. The experimental results demonstrate the efficacy and efficiency of our method.
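The dynamic-programming comparison of action strings can be illustrated with a standard Levenshtein edit distance (a plausible instantiation, not necessarily the exact cost model used in the paper), where each character stands for one clustered pose label:

```python
def action_string_distance(a, b):
    # Levenshtein distance between two action strings: the minimum
    # number of pose-label insertions, deletions, and substitutions
    # needed to turn string a into string b
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete
                           dp[i][j - 1] + 1,        # insert
                           dp[i - 1][j - 1] + cost) # substitute
    return dp[m][n]
```

Small distances indicate similar motions even when the clips differ slightly in tempo, since repeated or dropped pose labels cost only single edits.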
When are abrupt onsets found efficiently in complex visual search? : evidence from multi-element asynchronous dynamic search
Previous work has found that search principles derived from simple visual search tasks do not necessarily apply to more complex search tasks. Using a Multielement Asynchronous Dynamic (MAD) visual search task, in which high numbers of stimuli could be moving, stationary, and/or changing in luminance, Kunar and Watson (M. A. Kunar & D. G. Watson, 2011, Visual search in a Multi-element Asynchronous Dynamic (MAD) world, Journal of Experimental Psychology: Human Perception and Performance, Vol. 37, pp. 1017-1031) found that, unlike in previous work, participants missed a higher number of targets, with search for moving items worse than for static items, and that there was no benefit for finding targets that showed a luminance onset. In the present research, we investigated why luminance onsets do not capture attention and whether luminance onsets can ever capture attention in MAD search. Experiment 1 investigated whether blinking stimuli, which abruptly offset for 100 ms before reonsetting (conditions known to produce attentional capture in simpler visual search tasks), captured attention in MAD search; Experiments 2-5 investigated whether giving participants advance knowledge of and preexposure to the blinking cues produced efficient search for blinking targets; and Experiments 6-9 investigated whether unique luminance onsets, unique motion, or unique stationary items captured attention. The results showed that luminance onsets captured attention in MAD search only when they were unique, consistent with a top-down unique-feature hypothesis. (PsycINFO Database Record (c) 2013 APA, all rights reserved)
Indexing, browsing and searching of digital video
Video is a communications medium that normally brings together moving pictures with a synchronised audio track into a discrete piece or pieces of information. The size of a "piece" of video can variously be referred to as a frame, a shot, a scene, a clip, a programme or an episode, and these are distinguished by their lengths and by their composition. We shall return to the definition of each of these in section 4 of this chapter. In modern society, video is ver
Digital Image Access & Retrieval
The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.