
    Action Recognition in Videos: from Motion Capture Labs to the Web

    Full text link
    This paper presents a survey of human action recognition approaches based on visual data recorded from a single video camera. We propose an organizing framework that highlights the evolution of the area, with techniques moving from heavily constrained motion capture scenarios towards more challenging, realistic, "in the wild" videos. The proposed organization is based on the representation used as input for the recognition task, emphasizing the hypotheses assumed and thus the constraints imposed on the type of video that each technique is able to address. Making these hypotheses and constraints explicit renders the framework particularly useful for selecting a method for a given application. Another advantage of the proposed organization is that it allows the newest approaches to be categorized seamlessly alongside traditional ones, while providing an insightful perspective on the evolution of the action recognition task up to now. That perspective forms the basis for the discussion at the end of the paper, where we also present the main open issues in the area. Comment: Preprint submitted to CVIU, survey paper, 46 pages, 2 figures, 4 tables

    Alphabet Sign Language Recognition Using Leap Motion Technology and Rule-Based Backpropagation-Genetic Algorithm Neural Network (RB-BPGANN)

    Full text link
    Sign language recognition helps people with normal hearing communicate effectively with the deaf and hearing-impaired. According to a survey conducted by a Multi-Center Study in Southeast Asia, Indonesia ranked fourth in the number of people with hearing disability (4.6%); the existence of sign language recognition is therefore important. Some research has been conducted in this field, and many types of neural networks have been used to recognize various sign languages; however, their performance still needs to be improved. This work focuses on the ASL (Alphabet Sign Language) in SIBI (Sign System of Indonesian Language), which uses one hand and 26 gestures. Here, thirty-four features were extracted using Leap Motion. A new method, Rule-Based Backpropagation Genetic Algorithm Neural Network (RB-BPGANN), was then used to recognize these sign languages. This method combines rules with a Backpropagation Genetic Algorithm Neural Network (BPGANN). In experiments, the proposed application recognized sign language with up to 93.8% accuracy. It performed very well on large multiclass problems and can help address the overfitting problem in neural network algorithms
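    The pipeline described above (fixed-length Leap Motion feature vectors classified into 26 gesture classes by a backpropagation network) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hand-crafted rules and the genetic-algorithm weight tuning of RB-BPGANN are omitted, and the data here is synthetic.

```python
import numpy as np

# Minimal sketch: one-hidden-layer backpropagation classifier over
# 34-dimensional Leap Motion-style feature vectors and 26 gesture classes.
# The RB-BPGANN described above additionally applies rules and tunes weights
# with a genetic algorithm; both are omitted here for brevity.

rng = np.random.default_rng(0)
N_FEATURES, N_HIDDEN, N_CLASSES = 34, 32, 26

W1 = rng.normal(0.0, 0.1, (N_FEATURES, N_HIDDEN))
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_CLASSES))

def forward(X):
    h = np.tanh(X @ W1)                       # hidden-layer activations
    z = h @ W2
    z = z - z.max(axis=1, keepdims=True)      # numerically stable softmax
    p = np.exp(z)
    return h, p / p.sum(axis=1, keepdims=True)

def train_step(X, y, lr=1.0):
    """One full-batch gradient-descent step on the cross-entropy loss."""
    global W1, W2
    h, p = forward(X)
    grad_z = p.copy()
    grad_z[np.arange(len(y)), y] -= 1.0       # dL/dz for softmax + NLL
    grad_z /= len(y)
    grad_h = (grad_z @ W2.T) * (1.0 - h**2)   # backprop through tanh
    W2 -= lr * h.T @ grad_z
    W1 -= lr * X.T @ grad_h

# Synthetic demo data: each class carries a strong signal on one feature.
X = rng.normal(0.0, 0.1, (260, N_FEATURES))
y = np.repeat(np.arange(N_CLASSES), 10)
X[np.arange(260), y] += 3.0

for _ in range(1000):
    train_step(X, y)
pred = forward(X)[1].argmax(axis=1)
accuracy = float((pred == y).mean())
```

On this trivially separable synthetic data the network fits the training set almost perfectly; the paper's 93.8% figure refers to real Leap Motion gesture data, where the added rules and genetic tuning matter.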

    A review of computer vision-based approaches for physical rehabilitation and assessment

    Get PDF
    The computer vision community has extensively researched the area of human motion analysis, which primarily focuses on pose estimation, activity recognition, pose or gesture recognition, and so on. For many applications, however, such as monitoring the functional rehabilitation of patients with musculoskeletal or physical impairments, the requirement is to comparatively evaluate human motion. In this survey, we capture important literature from the past two decades on vision-based monitoring and physical rehabilitation that focuses on comparative evaluation of human motion, and discuss the state of current research in this area. Unlike other reviews in this area, which are written from a clinical perspective, this article presents the research from a computer vision application perspective. We propose our own taxonomy of computer vision-based rehabilitation and assessment research, further divided into sub-categories to capture the novelties of each work. The review discusses the challenges of this domain arising from the wide range of human motion abnormalities and the difficulty of automatically assessing those abnormalities. Finally, suggestions on future directions of research are offered

    Going Deeper than Tracking: A Survey of Computer-Vision Based Recognition of Animal Pain and Emotions

    Get PDF
    Advances in animal motion tracking and pose recognition have been a game changer in the study of animal behavior. Recently, an increasing number of works go 'deeper' than tracking and address automated recognition of animals' internal states such as emotions and pain, with the aim of improving animal welfare, making this a timely moment for a systematization of the field. This paper provides a comprehensive survey of computer vision-based research on the recognition of pain and emotional states in animals, addressing both facial and bodily behavior analysis. We summarize the efforts presented so far within this topic, classifying them across different dimensions, highlight challenges and research gaps, provide best-practice recommendations for advancing the field, and suggest some future directions for research

    The AllWISE Motion Survey, Part 2

    Get PDF
    We use the AllWISE Data Release to continue our search for WISE-detected motions. In this paper, we publish another 27,846 motion objects, bringing the total number to 48,000 when objects found during our original AllWISE motion survey are included. We use this list, along with the lists of confirmed WISE-based motion objects from the recent papers by Luhman and by Schneider et al. and candidate motion objects from the recent paper by Gagne et al., to search for widely separated, common-proper-motion systems. We identify 1,039 such candidate systems. All 48,000 objects are further analyzed using color-color and color-magnitude plots to provide possible characterizations prior to spectroscopic follow-up. We present spectra of 172 of these, supplemented with new spectra of 23 comparison objects from the literature, and provide classifications and physical interpretations of interesting sources. Highlights include: (1) the identification of three G/K dwarfs that can be used as standard candles to study clumpiness and grain size in nearby molecular clouds because these objects are currently moving behind the clouds, (2) the confirmation/discovery of several M, L, and T dwarfs and one white dwarf whose spectrophotometric distance estimates place them 5-20 pc from the Sun, (3) the suggestion that the Na 'D' line be used as a diagnostic tool for interpreting and classifying metal-poor late-M and L dwarfs, (4) the recognition of a triple system including a carbon dwarf and late-M subdwarf, for which model fits of the late-M subdwarf (giving [Fe/H] ~ -1.0) provide a measured metallicity for the carbon star, and (5) a possible 24-pc-distant K5 dwarf + peculiar red L5 system with an apparent physical separation of 0.1 pc. Comment: 62 pages with 80 figures, accepted for publication in The Astrophysical Journal Supplement Series, 23 Mar 2016; second version fixes a few small typos and corrects the footnotes for Table
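    The common-proper-motion search described above pairs objects that are widely separated on the sky yet share nearly the same proper-motion vector. A minimal sketch of such a pairing criterion is below; the separation and motion-matching thresholds are illustrative assumptions, not the paper's actual selection cuts.

```python
import math

# Hypothetical common-proper-motion (CPM) pair test: flag two objects as a
# candidate system when their angular separation is below a cap and their
# proper-motion vectors agree to within a fractional tolerance.
# Thresholds are illustrative, not the survey's actual criteria.

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle separation, positions in degrees, result in arcseconds."""
    dra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2.0))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec) * 3600.0

def is_cpm_pair(obj1, obj2, max_sep_arcsec=600.0, max_dmu_frac=0.2):
    """Each obj is (ra_deg, dec_deg, pmra_mas_yr, pmdec_mas_yr)."""
    sep = angular_sep_arcsec(obj1[0], obj1[1], obj2[0], obj2[1])
    dmu = math.hypot(obj1[2] - obj2[2], obj1[3] - obj2[3])  # pm difference
    mu = math.hypot(obj1[2], obj1[3])                       # total pm
    return 0.0 < sep <= max_sep_arcsec and mu > 0.0 and dmu / mu <= max_dmu_frac

# Example: a high-motion pair with matching proper motions passes the cut,
# while a nearby object with unrelated motion does not.
primary = (10.0, 20.0, 300.0, -150.0)
comoving = (10.01, 20.005, 310.0, -145.0)
unrelated = (10.01, 20.005, 5.0, 3.0)
```

In practice such a cut is run over all close pairs in the catalog, and the fractional tolerance is often tightened for low-proper-motion objects to suppress chance alignments.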

    Benchmark RGB-D Gait Datasets: A Systematic Review

    Get PDF
    Human motion analysis has proven to be a great source of information for a wide range of applications. Several approaches for detailed and accurate motion analysis have been proposed in the literature, as well as an almost proportional number of dedicated datasets. The relatively recent arrival of depth sensors contributed to an increasing interest in this research area and also to the emergence of a new type of motion dataset. This work focuses on a systematic review of publicly available depth-based datasets encompassing human gait data, which are used for person recognition and/or classification purposes. We have conducted this systematic review using the Scopus database. The survey presented herein, which to the best of our knowledge is the first one dedicated to this type of dataset, is intended to inform and aid researchers in selecting the most suitable datasets to develop, test, and compare their algorithms. (c) Springer Nature Switzerland AG 2019

    Hand Gesture Recognition Using Virtual Canvas

    Get PDF
    Computer vision-based hand tracking can be used to interact with computers in a new, innovative way, avoiding the input components of a typical computer system such as the keyboard, mouse, and joystick. Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, fingers, arms, head, and/or body, and is of utmost importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey on gesture recognition with particular emphasis on hand gestures and facial expressions; existing challenges and future research possibilities are also highlighted. Gestures are expressive, meaningful body motions involving physical movements of the fingers, hands, arms, head, face, or body with the intent of conveying meaningful information or interacting with the environment. A gesture may also be perceived by the environment as a compression technique for the information to be transmitted elsewhere and subsequently reconstructed by the receiver
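    A rule-based hand-gesture classifier of the kind surveyed above can be sketched in a few lines. This is a hypothetical illustration: real systems obtain 2-D hand landmarks from a vision-based tracker, whereas here the landmark coordinates, their layout, and the gesture names are all illustrative assumptions supplied directly.

```python
# Hypothetical rule-based gesture classification from 2-D hand landmarks.
# A finger counts as "extended" when its tip lies above its knuckle in
# image coordinates (y grows downward), and the number of extended fingers
# selects a gesture label. Landmark layout and labels are assumptions.

def count_extended_fingers(landmarks):
    """landmarks: dict finger_name -> (tip_y, knuckle_y) in pixels."""
    return sum(1 for tip_y, knuckle_y in landmarks.values() if tip_y < knuckle_y)

GESTURES = {0: "fist", 1: "point", 2: "peace", 5: "open palm"}

def classify_gesture(landmarks):
    return GESTURES.get(count_extended_fingers(landmarks), "unknown")

# Example inputs: all tips above the knuckles (open palm) vs. all below (fist).
FINGERS = ("thumb", "index", "middle", "ring", "pinky")
open_palm = {f: (100, 200) for f in FINGERS}
fist = {f: (250, 200) for f in FINGERS}
```

Rule-based schemes like this are fast and interpretable but brittle to hand rotation and occlusion, which is why the surveyed literature increasingly pairs tracked landmarks with learned classifiers.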

    Going Deeper into Action Recognition: A Survey

    Full text link
    Understanding human actions in visual data is tied to advances in complementary research areas including object recognition, human dynamics, domain adaptation, and semantic segmentation. Over the last decade, human action analysis evolved from earlier schemes, often limited to controlled environments, to today's advanced solutions that can learn from millions of videos and apply to almost all daily activities. Given the broad range of applications from video surveillance to human-computer interaction, scientific milestones in action recognition are achieved ever more rapidly, quickly rendering once-effective methods obsolete. This motivated us to provide a comprehensive review of the notable steps taken towards recognizing human actions. To this end, we start our discussion with the pioneering methods that use handcrafted representations, and then navigate into the realm of deep learning-based approaches. We aim to remain objective throughout this survey, touching upon encouraging improvements as well as inevitable shortcomings, in the hope of raising fresh questions and motivating new research directions for the reader