
    Simple yet efficient real-time pose-based action recognition

    Recognizing human actions is a core challenge for autonomous systems, since they directly share their operating space with humans. Such systems must be able to recognize and assess human actions in real time, and training the corresponding data-driven algorithms requires a significant amount of annotated data. We demonstrate a pipeline that detects humans, estimates their pose, tracks them over time, and recognizes their actions in real time using standard monocular camera sensors. For action recognition, we encode the human pose into a new data format called the Encoded Human Pose Image (EHPI), which can then be classified using standard methods from the computer vision community. With this simple procedure we achieve performance competitive with the state of the art in pose-based action recognition while maintaining real-time operation. In addition, we show a use case in the context of autonomous driving, demonstrating how such a system can be trained to recognize human actions using simulation data.
    Comment: Submitted to IEEE Intelligent Transportation Systems Conference (ITSC) 2019. Code will be available soon at https://github.com/noboevbo/ehpi_action_recognitio
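
    To make the encoding concrete, below is a minimal sketch of the general idea of packing a pose sequence into an image-like tensor, with joints as rows, frames as columns, and normalized x/y coordinates in the colour channels. The function name, channel layout, and normalization are illustrative assumptions, not the authors' exact EHPI specification:

```python
import numpy as np

def encode_pose_as_image(pose_sequence):
    """Pack a 2D pose sequence into an image-like tensor.

    pose_sequence: array of shape (num_frames, num_joints, 2) holding
    x/y joint coordinates in pixels. Returns a (num_joints, num_frames, 3)
    uint8 "image" whose R and G channels carry the normalized x and y
    coordinates, so a standard image classifier can be applied to it.
    """
    seq = np.asarray(pose_sequence, dtype=np.float32)
    # Normalize each coordinate axis to [0, 1] over the whole sequence.
    mins = seq.min(axis=(0, 1), keepdims=True)
    maxs = seq.max(axis=(0, 1), keepdims=True)
    norm = (seq - mins) / np.maximum(maxs - mins, 1e-6)
    num_frames, num_joints, _ = norm.shape
    img = np.zeros((num_joints, num_frames, 3), dtype=np.uint8)
    img[..., 0] = (norm[..., 0].T * 255).astype(np.uint8)  # x -> red channel
    img[..., 1] = (norm[..., 1].T * 255).astype(np.uint8)  # y -> green channel
    return img
```

    Any off-the-shelf image classifier can then be trained directly on such tensors, which is what makes this kind of approach cheap to deploy.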

    RGBD Datasets: Past, Present and Future

    Since the launch of the Microsoft Kinect, scores of RGBD datasets have been released. These have propelled advances in areas from reconstruction to gesture recognition. In this paper we explore the field, reviewing datasets across eight categories: semantics, object pose estimation, camera tracking, scene reconstruction, object tracking, human actions, faces, and identification. By extracting the relevant information in each category we help researchers to find appropriate data for their needs, and we consider which datasets have succeeded in driving computer vision forward and why. Finally, we examine the future of RGBD datasets. We identify key areas which are currently underexplored, and suggest that future directions may include synthetic data and dense reconstructions of static and dynamic scenes.
    Comment: 8 pages excluding references (CVPR style)

    Is It Real, or Is It Randomized?: A Financial Turing Test

    We construct a financial "Turing test" to determine whether human subjects can differentiate between actual and randomized financial returns. The experiment consists of an online video game (http://arora.ccs.neu.edu) in which players are challenged to distinguish actual financial market returns from random temporal permutations of those returns. We find overwhelming statistical evidence (p-values no greater than 0.5%) that subjects can consistently distinguish between the two types of time series, thereby refuting the widespread belief that financial markets "look random." A key feature of the experiment is that subjects are given immediate feedback on the validity of their choices, allowing them to learn and adapt. We suggest that such novel interfaces can harness human capabilities to process and extract information from financial data in ways that computers cannot.
    Comment: 12 pages, 6 figures
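
    As a rough sketch of how the "randomized" condition can be produced, one can permute a return series in time and recompound it: the permuted path has exactly the same return distribution as the original, but its temporal dependence (autocorrelation, volatility clustering) is destroyed. This is a plausible reading of the setup, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def permuted_price_path(prices):
    """Build a 'randomized' price path by temporally permuting log returns."""
    prices = np.asarray(prices, dtype=float)
    returns = np.diff(np.log(prices))        # log returns of the real path
    shuffled = rng.permutation(returns)      # random temporal permutation
    # Recompound from the same starting price; the marginal distribution
    # of returns is preserved, only their ordering is scrambled.
    return prices[0] * np.exp(np.cumsum(np.insert(shuffled, 0, 0.0)))
```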

    A multi-modal dance corpus for research into real-time interaction between humans in online virtual environments

    We present a new, freely available, multimodal corpus for research into, among other areas, real-time realistic interaction between humans in online virtual environments. The specific corpus scenario focuses on an online dance class application in which students, with avatars driven by whatever 3D capture technology is locally available to them, can learn choreographies under teacher guidance in an online virtual ballet studio. As the corpus is built around this scenario, it consists of student/teacher dance choreographies concurrently captured at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices, and depth sensors. In the corpus, each of several dancers performs a number of fixed choreographies, which are graded according to specific evaluation criteria. In addition, ground-truth dance choreography annotations are provided. Furthermore, for unsynchronised sensor modalities, the corpus includes distinctive events for data stream synchronisation. Although the corpus is tailored specifically to an online dance class application scenario, the data is free to download and use for any research and development purposes.
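
    For the unsynchronised modalities, one standard way to exploit such distinctive events (e.g. a clap recorded by two audio rigs) is to estimate the inter-stream offset by cross-correlation. The sketch below assumes two 1-D signals resampled to a common rate; it is an illustrative alignment recipe, not the corpus's own tooling:

```python
import numpy as np

def estimate_offset_seconds(sig_a, sig_b, sample_rate):
    """Estimate how far sig_a lags sig_b (in seconds) around a shared event."""
    a = (sig_a - np.mean(sig_a)) / (np.std(sig_a) + 1e-9)
    b = (sig_b - np.mean(sig_b)) / (np.std(sig_b) + 1e-9)
    corr = np.correlate(a, b, mode="full")   # correlation at every shift
    lag = int(np.argmax(corr)) - (len(b) - 1)
    return lag / sample_rate
```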

    Encoding natural movement as an agent-based system: an investigation into human pedestrian behaviour in the built environment

    Gibson's ecological theory of perception has received considerable attention within the psychology literature, as well as in computer vision and robotics. However, few have applied Gibson's approach to agent-based models of human movement, because the ecological theory requires that individuals have a vision-based mental model of the world, and for large numbers of agents this becomes extremely expensive computationally. Thus, within current pedestrian models, path evaluation is based on calibration from observed data or on sophisticated but deterministic route-choice mechanisms; there is little open-ended behavioural modelling of human-movement patterns. One solution which allows individuals rapid concurrent access to the visual information within an environment is an "exosomatic visual architecture", where the connections between mutually visible locations within a configuration are prestored in a lookup table. Here we demonstrate that, with the aid of an exosomatic visual architecture, it is possible to develop behavioural models in which movement rules originating from Gibson's principle of affordance are utilised. We apply large numbers of agents programmed with these rules to a built-environment example and show that, by varying parameters such as destination selection, field of view, and steps taken between decision points, it is possible to generate aggregate movement levels very similar to those found in an actual building context.
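
    The "exosomatic visual architecture" amounts to paying the cost of visibility analysis once, offline, so that any number of agents can query mutual visibility in constant time rather than ray-casting at every step. The sketch below builds such a lookup table on an occupancy grid; the simple ray-march visibility test and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def line_of_sight(grid, a, b, steps=64):
    """Ray-march from cell a to cell b, sampling the grid along the way."""
    for t in np.linspace(0.0, 1.0, steps):
        r, c = np.round(a + t * (b - a)).astype(int)
        if not grid[r, c]:        # a blocked cell interrupts the sight line
            return False
    return True

def build_visibility_table(grid):
    """Prestore mutual visibility between all pairs of open cells.

    grid: 2D bool array, True = walkable. Returns (cells, table) where
    table[i, j] is True iff open cells i and j can see each other.
    Quadratic in the number of cells, but computed only once.
    """
    cells = np.argwhere(grid)
    n = len(cells)
    table = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i, n):
            table[i, j] = table[j, i] = line_of_sight(grid, cells[i], cells[j])
    return cells, table
```

    At simulation time an agent's "view" of candidate destinations is then a constant-time table lookup, which is what makes vision-driven movement rules affordable for large crowds.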

    Assessing the feasibility of online SSVEP decoding in human walking using a consumer EEG headset.

    Background: Bridging the gap between laboratory brain-computer interface (BCI) demonstrations and real-life applications has gained increasing attention in translational neuroscience. An urgent need is to explore the feasibility of using a low-cost, easy-to-use electroencephalogram (EEG) headset for monitoring individuals' EEG signals in their natural head/body positions and movements. This study aimed to assess the feasibility of using a consumer-level EEG headset to realize an online steady-state visual-evoked potential (SSVEP)-based BCI during human walking.
    Methods: This study adopted a 14-channel Emotiv EEG headset to implement a four-target online SSVEP decoding system, and used treadmill walking at speeds of 0.45, 0.89, and 1.34 meters per second (m/s) to induce walking locomotion. Seventeen participants were instructed to perform the online BCI tasks while standing or walking on the treadmill. To maintain a constant viewing distance to the visual targets, participants held the hand-grip of the treadmill during the experiment. Along with online BCI performance, the concurrent SSVEP signals were recorded for offline assessment.
    Results: Despite walking-related attenuation of SSVEPs, the online BCI achieved an information transfer rate (ITR) above 12 bits/min during slow walking (below 0.89 m/s).
    Conclusions: SSVEP-based BCI systems are deployable to users during treadmill walking that mimics natural walking, rather than only in highly controlled laboratory settings. This study considerably promotes the use of consumer-level EEG headsets in real-life BCI applications.
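
    For reference, the information transfer rate quoted for fixed-target BCIs of this kind is conventionally computed with the Wolpaw formula, which combines the number of targets, the selection accuracy, and the time per selection; the study's exact computation may differ in detail:

```python
import math

def wolpaw_itr(num_targets, accuracy, seconds_per_selection):
    """ITR in bits/min via the Wolpaw formula (assumes 0 < accuracy <= 1)."""
    n, p = num_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)  # perfect accuracy: log2(N) bits per selection
    else:
        bits = (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / seconds_per_selection

# Hypothetical numbers: a four-target system at 85% accuracy with one
# selection every 4 s gives roughly 17 bits/min.
print(wolpaw_itr(4, 0.85, 4.0))
```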

    The role of human body movements in mate selection

    It is common scientific knowledge that much of what we say within a conversation is expressed not by the meaning of our words alone, but also through our gestures, postures, and body movements. This non-verbal mode is possibly rooted firmly in our human evolutionary heritage, and as such, some scientists argue that it serves as a fundamental assessment and expression tool for our inner qualities. Studies of non-verbal communication have established that a universal, culture-free, non-verbal sign system exists that is available to all individuals for negotiating social encounters. Thus, it is not only the kind of gestures and expressions humans use in social communication that matters, but also the way these movements are performed, as this seems to convey key information about an individual's quality. Dance, for example, is a special form of movement which can be observed in human courtship displays. Recent research suggests that people are sensitive to variation in dance movements, and that dance performance provides information about an individual's mate quality in terms of health and strength. This article reviews the role of body movement in human non-verbal communication, and highlights its significance in human mate preferences in order to promote future work in this research area within the evolutionary psychology framework.