
    SAMMIE: an ergonomics CAD system for vehicle design and evaluation

    SAMMIE (System for Aiding Man-Machine Interaction Evaluation) is a CAD system which enables the ergonomics/human factors evaluation of vehicle designs to commence at the earliest stages of the design process. Evaluations of postural comfort and the occupants' clearances, reach and vision should be undertaken from the concept stage, when design modifications are easier and cheaper to implement than at the pre-production stage. In order to achieve this, the package offers 3D modelling of vehicles and their occupants. Details of the package and its application to vehicle design are presented.
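
    By way of illustration only, the sketch below shows the kind of reach check such a package automates, assuming a simplified occupant model with a shoulder position and a functional arm length; the names and values are hypothetical and are not SAMMIE's interface.

        import math

        def within_reach(shoulder_xyz, target_xyz, arm_length_mm):
            # Crude reach test: compare the straight-line shoulder-to-target
            # distance against the occupant's functional arm length.
            return math.dist(shoulder_xyz, target_xyz) <= arm_length_mm

        # Hypothetical driver checking a dashboard switch (mm, vehicle coordinate frame).
        shoulder = (350.0, -300.0, 900.0)
        switch = (650.0, 0.0, 950.0)
        print(within_reach(shoulder, switch, arm_length_mm=610.0))

    A full evaluation would also consider torso lean, seat adjustment range and occupant percentile, which is what makes early, model-based assessment worthwhile.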

    Possibilities of man-machine interaction through the perception of human gestures

    As man-machine interaction grows, there is an increasing need for friendly interfaces. Human-machine oral communication as a means of natural language interaction is becoming quite common. Interpretation of human gestures can, in some applications, complement such oral communication. This article describes a gesture interpretation system based on computer vision. The system detects and tracks a human operator and, from the operator's movements, interprets a specific set of gestural commands in real time.
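
    As a rough, illustrative sketch of the detection-and-tracking pipeline described above (not the authors' implementation), the following uses OpenCV background subtraction to follow the operator and maps coarse lateral motion of the tracked region to a pair of hypothetical commands.

        import cv2

        cap = cv2.VideoCapture(0)                       # live camera feed
        backsub = cv2.createBackgroundSubtractorMOG2()  # separate operator from background
        prev_cx = None

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = backsub.apply(frame)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                operator = max(contours, key=cv2.contourArea)   # largest moving region
                x, y, w, h = cv2.boundingRect(operator)
                cx = x + w // 2
                if prev_cx is not None:
                    # Hypothetical two-command gesture vocabulary: a large lateral sweep.
                    if cx - prev_cx > 40:
                        print("command: SWEEP_RIGHT")
                    elif prev_cx - cx > 40:
                        print("command: SWEEP_LEFT")
                prev_cx = cx
            cv2.imshow("operator", frame)
            if cv2.waitKey(1) == 27:                    # Esc to quit
                break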

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans and based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.

    Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks

    A major challenge for the realization of intelligent robots is to supply them with cognitive abilities that allow ordinary users to program them easily and intuitively. One way of such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal cues such as gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module to allow multi-modal, task-oriented man-machine communication with respect to dextrous robot manipulation of objects.
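
    A minimal sketch of what one modality fusion step could look like, assuming a speech event and a gesture event arriving with timestamps; the data classes, fusion rule and time window are invented for illustration and do not reproduce the hybrid architecture described above.

        from dataclasses import dataclass

        @dataclass
        class SpeechEvent:
            text: str          # e.g. "grasp the red cube", from the speech interface
            timestamp: float

        @dataclass
        class GestureEvent:
            pointed_at: str    # object id resolved by the visual attention module
            timestamp: float

        def fuse(speech, gesture, max_gap_s=2.0):
            # Toy fusion rule: a spoken grasp instruction and a pointing gesture
            # close together in time are merged into a single grasping task command.
            if "grasp" in speech.text and abs(speech.timestamp - gesture.timestamp) <= max_gap_s:
                return {"action": "grasp", "target": gesture.pointed_at}
            return None

        print(fuse(SpeechEvent("grasp the red cube", 10.2), GestureEvent("cube_3", 11.0)))

    The window-based pairing is only meant to convey the idea of establishing a common focus of attention across modalities.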

    Human-Robot Control Strategies for the NASA/DARPA Robonaut

    The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human-rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.
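
    The silhouette-matching idea can be sketched very roughly as template scoring over binary masks; the intersection-over-union score and the pose-template dictionary below are illustrative stand-ins, not the algorithm the abstract refers to.

        import numpy as np

        def silhouette_overlap(observed, template):
            # Intersection over union of two binary silhouettes.
            inter = np.logical_and(observed, template).sum()
            union = np.logical_or(observed, template).sum()
            return inter / union if union else 0.0

        def estimate_pose(observed, templates):
            # templates: {pose_label: binary silhouette rendered from a tool model};
            # the pose whose rendered silhouette best overlaps the observation wins.
            return max(templates, key=lambda pose: silhouette_overlap(observed, templates[pose]))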

    Machine Analysis of Facial Expressions

    No abstract

    Facial Expression Recognition from World Wild Web

    Recognizing facial expressions in the wild has remained a challenging task in computer vision. The World Wide Web is a good source of facial images, most of which are captured in uncontrolled conditions. In fact, the Internet is a World Wild Web of facial images with expressions. This paper presents the results of a new study on collecting, annotating, and analyzing wild facial expressions from the web. Three search engines were queried using 1250 emotion-related keywords in six different languages, and the retrieved images were mapped by two annotators to six basic expressions and neutral. Deep neural networks and noise modeling were used in three different training scenarios to determine how accurately facial expressions can be recognized when trained on noisy images collected from the web using query terms (e.g. happy face, laughing man, etc.). The results of our experiments show that deep neural networks can recognize wild facial expressions with an accuracy of 82.12%.
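
    A minimal sketch of what query-supervised training could look like, assuming a PyTorch setup; the label map, network and hyperparameters are placeholders, and the noise-modeling component described in the abstract is not reproduced.

        import torch
        import torch.nn as nn
        from torchvision import models

        # Hypothetical mapping from web query terms to six basic expressions plus neutral.
        QUERY_TO_LABEL = {"happy face": 0, "laughing man": 0, "sad face": 1,
                          "angry face": 2, "surprised face": 3, "disgusted face": 4,
                          "scared face": 5, "neutral face": 6}

        model = models.resnet18(num_classes=7)   # stand-in network, not the paper's
        criterion = nn.CrossEntropyLoss()        # plain loss; the paper adds noise modeling
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

        def train_step(images, query_labels):
            # Weak labels come from the search query, so they are noisy by construction.
            optimizer.zero_grad()
            loss = criterion(model(images), query_labels)
            loss.backward()
            optimizer.step()
            return loss.item()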