    Towards Full-Body Gesture Analysis and Recognition

    With computers embedded in every walk of life, there is an increasing demand for intuitive devices for human-computer interaction. Since human beings use gestures as an important means of communication, devices based on gesture recognition will be effective for human interaction with computers. However, it is important to keep such a system as non-intrusive as possible, to reduce the limitations it places on interaction. Designing such a non-intrusive, intuitive, camera-based, real-time gesture recognition system has been an active area of research in the field of computer vision. Gesture recognition invariably involves tracking body parts. There is a considerable body of work on tracking individual body parts such as the eyes, lips, and face; however, relatively little work has been done on full-body tracking. Full-body tracking is difficult because it is expensive to model the full body, as either a 2D or 3D model, and to track its movements.
    In this work, we propose a monocular gesture recognition system that focuses on recognizing a set of arm movements commonly used to direct traffic, guide aircraft landings, and communicate over long distances. This is a step towards implementing gesture recognition systems that require full-body tracking, e.g. an automated recognition system for semaphore flag signaling. We have implemented a robust full-body tracking system, which forms the backbone of our gesture analyzer. The tracker uses a two-dimensional link-joint (LJ) model of the human body for tracking. Currently, we track the movements of the arms in a recorded video sequence; we plan to make the system real-time in the future. We use distance-transform techniques to track the movements by fitting the parameters of the LJ model in every frame of the captured video. The tracker's output is fed to a state machine that identifies the gestures made. The system consists of four sub-systems:
    1. A background subtraction sub-system, using Gaussian models and median filters.
    2. A full-body tracker, using the LJ model APIs.
    3. A quantizer, which converts the tracker's output into a defined alphabet.
    4. A gesture analyzer, which maps the alphabet to the action performed.
    Currently, our gesture vocabulary contains gestures involving arms moving up and down, which can be used to detect semaphore flag signals. We can also detect gestures such as clapping and waving of the arms.
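    The abstract does not include an implementation, but the pipeline it describes (background subtraction, tracking, quantization into an alphabet, and a gesture state machine) can be sketched as follows, assuming an OpenCV-based realization. The Gaussian-mixture background subtractor, the median-filter kernel size, and the toy up/down quantizer and state machine below are illustrative assumptions, not the authors' LJ-model tracker.
```python
# Minimal sketch of a background-subtraction -> distance-transform -> quantizer ->
# state-machine pipeline, assuming OpenCV. Thresholds, kernel sizes, and the
# two-symbol alphabet {U, D} are illustrative, not the authors' LJ-model tracker.
import cv2
import numpy as np

def segment_foreground(frame, subtractor):
    """Background subtraction (Gaussian mixture model) followed by median filtering."""
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)                        # suppress salt-and-pepper noise
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    return mask

def quantize_arm_position(mask):
    """Toy quantizer: emit 'U' if the deepest foreground pixels lie in the upper
    half of the frame, else 'D'. A real system would fit the LJ model via the
    distance transform and quantize its joint angles instead."""
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)    # distance to background
    ys, _ = np.nonzero(dist > 0.5 * dist.max())           # pixels deep inside the body
    if len(ys) == 0:
        return None
    return "U" if ys.mean() < mask.shape[0] / 2 else "D"

class GestureStateMachine:
    """Recognizes an 'arms up then down' gesture from the alphabet {U, D}."""
    def __init__(self):
        self.state = "idle"

    def step(self, symbol):
        if symbol is None:
            return None
        if self.state == "idle" and symbol == "U":
            self.state = "raised"
        elif self.state == "raised" and symbol == "D":
            self.state = "idle"
            return "up-down gesture"
        return None

def run(video_path):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    machine = GestureStateMachine()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = segment_foreground(frame, subtractor)
        gesture = machine.step(quantize_arm_position(mask))
        if gesture:
            print("recognized:", gesture)
    cap.release()

if __name__ == "__main__":
    run("input.mp4")   # hypothetical input video
```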

    Production of spherical mesoporous molecularly imprinted polymer particles containing tunable amine decorated nanocavities with CO2 molecule recognition properties

    Novel spherical molecularly imprinted polymer (MIP) particles containing amide-decorated nanocavities with CO2 recognition properties in the poly[acrylamide-co-(ethyleneglycol dimethacrylate)] mesoporous matrix were synthesized by suspension polymerization using oxalic acid and acetonitrile/toluene as dummy template and porogen mixture, respectively. The particles had a maximum BET surface area, SBET, of 457 m²/g and a total mesopore volume of 0.92 cm³/g created by phase separation between the copolymer and porogenic solvents. The total volume of the micropores (d < 2 nm) was 0.1 cm³/g with two sharp peaks at 0.84 and 0.85 nm that have not been detected in non-imprinted polymer material. The degradation temperature at 5% weight loss was 240–255 °C and the maximum equilibrium CO2 adsorption capacity was 0.56 and 0.62 mmol/g at 40 and 25 °C, respectively, and 0.15 bar CO2 partial pressure. The CO2 adsorption capacity was mainly affected by the density of CO2-philic NH2 groups in the polymer network and the number of nanocavities. Increasing the content of low-polar solvent (toluene) in the organic phase prior to polymerization led to higher CO2 capture capacity due to stronger hydrogen bonds between the template and the monomer during complex formation. Under the same conditions, molecularly imprinted particles showed much higher CO2 capture capacity compared to their non-imprinted counterparts. The volume median diameter (73–211 μm) and density (1.3 g/cm³) of the produced particles were within the range suitable for CO2 capture in fixed and fluidized bed systems.
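    For scale, the reported equilibrium capacity can be converted to a mass basis with a one-line calculation; the snippet below is a simple unit conversion using the molar mass of CO2 (44.01 g/mol) and is not part of the original study.
```python
# Convert the reported equilibrium capacity (0.62 mmol CO2 per g of sorbent,
# measured at 25 °C and 0.15 bar CO2 partial pressure) to a mass basis.
capacity_mmol_per_g = 0.62
molar_mass_co2 = 44.01                 # g/mol, molar mass of CO2
g_co2_per_kg_sorbent = capacity_mmol_per_g * 1e-3 * molar_mass_co2 * 1000
print(f"~{g_co2_per_kg_sorbent:.1f} g of CO2 captured per kg of MIP particles")  # ~27.3 g
```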

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 165, March 1977

    This bibliography lists 198 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1977

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

    There is an undeniable communication barrier between deaf people and people with normal hearing ability. Although innovations in sign language translation technology aim to tear down this communication barrier, the majority of existing sign language translation systems are either intrusive or constrained by resolution or ambient lighting conditions. Moreover, these existing systems can only perform single-sign ASL translation rather than sentence-level translation, making them much less useful in daily-life communication scenarios. In this work, we fill this critical gap by presenting DeepASL, a transformative deep learning-based sign language translation technology that enables ubiquitous and non-intrusive American Sign Language (ASL) translation at both word and sentence levels. DeepASL uses infrared light as its sensing mechanism to non-intrusively capture the ASL signs. It incorporates a novel hierarchical bidirectional deep recurrent neural network (HB-RNN) and a probabilistic framework based on Connectionist Temporal Classification (CTC) for word-level and sentence-level ASL translation, respectively. To evaluate its performance, we have collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average 94.5% word-level translation accuracy and an average 8.2% word error rate on translating unseen ASL sentences. Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has significant potential to fundamentally change deaf people's lives.
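    DeepASL's exact HB-RNN architecture is not reproduced here; the sketch below only illustrates the general pattern the abstract names, a bidirectional recurrent network whose per-frame outputs are trained with a CTC loss over a word vocabulary. It is written in PyTorch, and the dimensions (30-dimensional per-frame features, 128 hidden units, a 56-word vocabulary) are illustrative assumptions.
```python
# Minimal sketch of a bidirectional RNN trained with a CTC loss for
# sentence-level sign translation. Feature size, hidden size, and vocabulary
# are illustrative; this is not the DeepASL HB-RNN architecture itself.
import torch
import torch.nn as nn

class BiRnnCtc(nn.Module):
    def __init__(self, feat_dim=30, hidden=128, vocab=56):
        super().__init__()
        # +1 output class for the CTC blank symbol (index 0)
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, vocab + 1)

    def forward(self, x):                      # x: (time, batch, feat_dim)
        h, _ = self.rnn(x)
        return self.fc(h).log_softmax(dim=-1)  # (time, batch, vocab + 1)

model = BiRnnCtc()
ctc = nn.CTCLoss(blank=0)

# One toy batch: 2 sequences of 75 frames, each labelled with 3 word indices.
frames = torch.randn(75, 2, 30)
targets = torch.tensor([3, 17, 5, 9, 22, 41])   # concatenated label indices (1..56)
input_lengths = torch.tensor([75, 75])
target_lengths = torch.tensor([3, 3])

log_probs = model(frames)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print("CTC loss:", loss.item())
```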