Descriptive temporal template features for visual motion recognition
In this paper, a human action recognition system is proposed. The system is based on new, descriptive `temporal template' features designed to achieve high-speed recognition in real-time, embedded applications. The limitations of the well-known `Motion History Image' (MHI) temporal template are addressed and a new `Motion History Histogram' (MHH) feature is proposed to capture more of the motion information in the video. MHH not only provides rich motion information, but also remains computationally inexpensive. To further improve classification performance, we combine both MHI and MHH into a low-dimensional feature vector which is processed by a support vector machine (SVM). Experimental results show that our new representation achieves a significant improvement in human action recognition performance over existing comparable methods that use 2D temporal-template-based representations.
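The classical MHI recursion that this abstract builds on can be sketched as follows; the function and parameter names here are illustrative, not the paper's own, and `tau`/`delta` stand in for the usual duration and decay constants.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, delta=1):
    """One update step of a Motion History Image (MHI).

    mhi:         float array, the current motion history
    motion_mask: bool array, True where frame differencing detected motion
    tau:         intensity assigned to freshly moving pixels
    delta:       decay subtracted from non-moving pixels each frame
    """
    decayed = np.maximum(mhi - delta, 0)        # old motion fades toward 0
    return np.where(motion_mask, tau, decayed)  # fresh motion is set to tau

# Toy example: a 4x4 frame in which one pixel moves for three frames.
mhi = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True
for _ in range(3):
    mhi = update_mhi(mhi, mask)
print(mhi[1, 1])  # 255.0 -- recently moving pixels stay bright
```

Because each pixel of the MHI keeps only the most recent motion, repeated motion at the same location is overwritten; the MHH proposed in the paper is motivated by exactly this loss of information.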
Statistical Analysis of Dynamic Actions
Real-world action recognition applications require the development of systems which are fast, can handle a large variety of actions without a priori knowledge of the type of actions, need a minimal number of parameters, and require as short a learning stage as possible. In this paper, we suggest such an approach. We regard dynamic activities as long-term temporal objects, which are characterized by spatio-temporal features at multiple temporal scales. Based on this, we design a simple statistical distance measure between video sequences which captures the similarities in their behavioral content. This measure is nonparametric and can thus handle a wide range of complex dynamic actions. Having a behavior-based distance measure between sequences, we use it for a variety of tasks, including video indexing, temporal segmentation, and action-based video clustering. These tasks are performed without prior knowledge of the types of actions, their models, or their temporal extents.
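One common nonparametric distance of the flavor described above, assuming each sequence has already been reduced to a normalized histogram of spatio-temporal features, is the chi-square distance; this is an illustrative sketch, not the paper's exact measure.

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-12):
    """Chi-square distance between two normalized feature histograms.

    Nonparametric: no model of the action is assumed, only that each
    sequence is summarized by a histogram over the same feature bins.
    """
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Sequences with similar behavioral content yield nearby histograms.
walk_a = np.array([0.50, 0.30, 0.20])
walk_b = np.array([0.45, 0.35, 0.20])
jump   = np.array([0.10, 0.10, 0.80])
assert chi_square_distance(walk_a, walk_b) < chi_square_distance(walk_a, jump)
```

A distance of this kind needs no per-action parameters, which is what makes clustering and segmentation possible without knowing the action types in advance.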
Skeleton-aided Articulated Motion Generation
This work makes the first attempt to generate an articulated human motion sequence from a single image. On the one hand, we utilize paired inputs, comprising human skeleton information as a motion embedding and a single human image as an appearance reference, to generate novel motion frames based on the conditional GAN infrastructure. On the other hand, a triplet loss is employed to pursue appearance smoothness between consecutive frames. As the proposed framework is capable of jointly exploiting the image appearance space and the articulated/kinematic motion space, it generates realistic articulated motion sequences, in contrast to most previous video generation methods, which yield blurred motion effects. We test our model on two human action datasets, KTH and Human3.6M, and the proposed framework generates very promising results on both datasets.
Comment: ACM MM 201
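The triplet loss mentioned above has a standard form, sketched here on plain embedding vectors; the variable names and the choice of squared Euclidean distance are illustrative assumptions, not details from the paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on embedding vectors.

    For appearance smoothness, the positive would be an adjacent frame's
    embedding and the negative a temporally distant frame's embedding.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # distance to the positive
    d_neg = np.sum((anchor - negative) ** 2)  # distance to the negative
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # consecutive frame: nearby embedding
n = np.array([2.0, 2.0])   # distant frame: far-away embedding
print(triplet_loss(a, p, n))  # 0.0 -- this triplet already meets the margin
```

Minimizing this loss pulls consecutive frames together in embedding space while pushing distant frames apart, which is what discourages appearance drift between neighboring generated frames.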
Developing a low-cost beer dispensing robotic system for the service industry
As the prices of commercially available electronic and mechanical components decrease, manufacturers such as Devantech and Revolution Education have made encoded motor controller systems and microcontrollers very accessible to engineers and designers. This has made it possible to design sophisticated robotic and mechatronic systems very rapidly and at relatively low cost. A recent project in the Autonomous Systems Lab at Middlesex University, UK was to design and build a small, automated, robotic bartender based around the 5 litre Heineken 'Draughtkeg' system, which is capable of patrolling a bar and dispensing beer when signalled to by a customer. Because the system was designed as a commercial product, design constraints focused on keeping the build cost down, and so electronic components were sourced from outside companies and interfaced with a bespoke chassis and custom mechanical parts designed and manufactured on site at the University. All the programming was conducted using the proprietary BASIC language, which is available at no cost from the PicAXE supplier. This paper will discuss the restrictions involved in building a robot chassis around 'off-the-shelf' components, and the issues arising from making the human-machine interaction intuitive whilst only using low-cost ultrasonic sensors. Programming issues will also be discussed, such as the control of accuracy when interfacing a PicAXE microcontroller with a Devantech MD25 Motor Controller board. Public live testing of the system was conducted at the Kinetica Art Fair 2010 event in London, and the project has since been picked up by websites such as Engadget.com and many others. Feedback on the system will be described, as well as the refinements made as a result of these tests.
Children, Humanoid Robots and Caregivers
This paper presents developmental learning on a humanoid robot from human-robot interactions. In particular, we consider teaching humanoids as children during the child's Separation and Individuation developmental phase (Mahler, 1979). Cognitive development during this phase is characterized both by the child's dependence on her mother for learning while becoming aware of her own individuality, and by self-exploration of her physical surroundings. We propose a learning framework for a humanoid robot inspired by such cognitive development.
Human motion modeling and simulation by anatomical approach
Instantly generating any desired realistic human motion remains a great challenge in virtual human simulation. In this paper, a novel emotion-affected motion classification and an anatomical motion classification are presented, together with motion capture and parameterization methods. The framework for a novel anatomical approach to modeling human motion in an HTR (Hierarchical Translations and Rotations) file format is also described. This anatomical approach to human motion modeling has the potential to generate an unlimited range of desired human motions from a compact motion database. An architecture for the real-time generation of new motions is also proposed.
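The core idea behind a hierarchical translations-and-rotations representation, composing each segment's local transform down the skeleton to obtain world-space positions, can be sketched as follows. All names are illustrative, and rotation is restricted to the z axis purely for brevity; this is not the HTR file layout itself.

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the z axis, one per skeleton segment."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def world_position(chain):
    """Compose per-segment (translation, rotation) pairs, root first,
    and return the end effector's world-space position."""
    pos = np.zeros(3)
    rot = np.eye(3)
    for translation, rotation_deg in chain:
        pos = pos + rot @ np.asarray(translation, dtype=float)
        rot = rot @ rot_z(rotation_deg)
    return pos

# A two-segment arm, each segment 1 unit long, bent 90 degrees at
# both the shoulder and the elbow.
arm = [((0.0, 0.0, 0.0), 90.0),   # shoulder: rotates the whole arm
       ((1.0, 0.0, 0.0), 90.0),   # elbow: offset by the upper arm
       ((1.0, 0.0, 0.0), 0.0)]    # wrist: offset by the forearm
print(np.round(world_position(arm), 3))  # [-1.  1.  0.]
```

Storing motion as per-segment translations and rotations like this is what lets a compact database be recombined into new motions: segments can be re-posed independently and the hierarchy recomposes the full-body result.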