
    TEMPUS: Simulating personnel and tasks in a 3-D environment

    The latest TEMPUS installation occurred in March 1985. Another update is slated for early June 1985. An updated User's Manual is in preparation and will be delivered in approximately mid-June 1985. NASA JSC has full source code listings and internal documentation for the installed software. NASA JSC staff have received instruction in the use of TEMPUS, and telephone consultations have augmented the on-site instruction.

    User interface enhancement report

    The existing user interfaces to TEMPUS, Plaid, and other systems in the OSDS are fundamentally based on only two modes of communication: alphanumeric commands or data input, and graphical interaction. The latter is especially suited to the types of interaction necessary for creating workstation objects with BUILD and for performing body positioning in TEMPUS. Looking toward the future application of TEMPUS, however, the long-term goals of the OSDS include the analysis of extensive tasks in space involving one or more individuals working in concert over a period of time. In this context, the TEMPUS body positioning capability, though extremely useful for creating and validating a small number of particular body positions, will become somewhat tedious to use. The macro facility helps somewhat, since frequently used positions may easily be applied by executing a stored macro. The difference between body positioning and task execution, though subtle, is important. In task execution, the important information at the user's level is what actions are to be performed rather than how the actions are performed. Viewed slightly differently, the "what" is constant over a set of individuals, though the "how" may vary.
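
    To make the what/how distinction concrete, the sketch below is hypothetical Python, not TEMPUS code; the Task class, the execute function, and the figure attributes are invented for illustration. It keeps the task description constant across individuals while the execution details vary per figure.

```python
# Hypothetical sketch: separating the "what" (a task, constant across
# individuals) from the "how" (execution, which varies per figure).
# Names here are illustrative only, not TEMPUS or OSDS APIs.

from dataclasses import dataclass

@dataclass
class Task:
    """The 'what': an action description constant across individuals."""
    verb: str
    target: str

def execute(task: Task, figure: dict) -> str:
    """The 'how': execution details depend on the figure's attributes."""
    reach = figure["arm_length"] * 0.9  # e.g. a usable reach envelope
    return (f"{figure['name']} performs '{task.verb} {task.target}' "
            f"within a {reach:.2f} m reach envelope")

task = Task(verb="grasp", target="hatch handle")
for figure in ({"name": "crew A", "arm_length": 0.74},
               {"name": "crew B", "arm_length": 0.66}):
    print(execute(task, figure))
```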

    System integration report

    Several areas arising from the system integration issue are examined. Intersystem analysis is discussed as it relates to software development, shared databases and interfaces between TEMPUS and PLAID, shaded graphics rendering systems, object design (BUILD), the TEMPUS animation system, anthropometric lab integration, ongoing TEMPUS support and maintenance, and the impact of UNIX and local workstations on the OSDS environment.

    Evaluating Perceived Trust From Procedurally Animated Gaze

    Adventure role-playing games (RPGs) provide players with increasingly expansive worlds, compelling storylines, and meaningful fictional character interactions. Despite the fast-growing richness of these worlds, the majority of interactions between the player and non-player characters (NPCs) remain scripted. In this paper, we propose using an NPC's animations to reflect how it feels towards the player and, as a proof of concept, investigate the potential for a straightforward gaze model to convey trust. Through two perceptual experiments, we find that viewers can distinguish between high- and low-trust animations, that viewers associate the gaze differences specifically with trust and not with an unrelated attitude (aggression), and that the effect holds for different facial expressions and scene contexts, even when viewed for a short (five-second) clip length. With an additional experiment, we explore the extent to which trust is uniquely conveyed over other attitudes associated with gaze, such as interest, unfriendliness, and admiration.
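
    As a rough illustration of how a straightforward gaze model might be parameterized by trust, here is a hypothetical Python sketch; the gaze_schedule function and its timing constants are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of a procedural gaze parameterization for
# conveying trust; all parameter values are illustrative only.

import random

def gaze_schedule(trust: float, duration_s: float = 5.0, seed: int = 0):
    """Generate (start, length, target) gaze events for a short clip.

    Higher trust -> longer stretches of mutual gaze, shorter aversions.
    """
    rng = random.Random(seed)
    mutual_len = 0.5 + 2.0 * trust   # seconds of sustained eye contact
    avert_len = 1.5 - 1.0 * trust    # seconds spent looking away
    t, events = 0.0, []
    while t < duration_s:
        events.append((round(t, 2), round(mutual_len, 2), "player"))
        t += mutual_len
        events.append((round(t, 2), round(avert_len, 2), "away"))
        t += avert_len + rng.uniform(0.0, 0.2)  # small natural jitter
    return events

print(gaze_schedule(trust=0.9))   # high-trust NPC
print(gaze_schedule(trust=0.1))   # low-trust NPC
```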

    Real-Time Control of a Virtual Human Using Minimal Sensors

    We track, in real time, the position and posture of a human body, using a minimal number of 6-DOF sensors to capture full-body standing postures. We use four sensors to create a good approximation of a human operator's position and posture, and map it onto our articulated computer graphics human model. The unsensed joints are positioned by a fast inverse kinematics algorithm. Our goal is to realistically recreate human postures while minimally encumbering the operator.
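
    As one illustration of the kind of fast inverse kinematics that can fill in unsensed joints, here is a minimal sketch of a standard analytic two-link solver in Python; it is a generic textbook method, not necessarily the algorithm the paper uses.

```python
# A minimal sketch, assuming a planar two-link arm: given a sensed
# hand position, solve the unsensed shoulder and elbow angles
# analytically.  Generic IK illustration, not the paper's algorithm.

import math

def two_link_ik(x: float, y: float, l1: float, l2: float):
    """Return (shoulder, elbow) angles reaching hand target (x, y)."""
    d2 = x * x + y * y
    # Clamp the cosine to [-1, 1] so unreachable targets saturate
    # at the edge of the reachable annulus instead of raising.
    c = max(-1.0, min(1.0, (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)))
    elbow = math.acos(c)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

s, e = two_link_ik(0.5, 0.3, l1=0.31, l2=0.28)
print(f"shoulder={math.degrees(s):.1f} deg, elbow={math.degrees(e):.1f} deg")
```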

    How the OCEAN Personality Model Affects the Perception of Crowds

    The High-Density Autonomous Crowds (HiDAC) simulation system provides individual differences by assigning each person different psychological and physiological traits. Users normally set these parameters to model a crowd's nonuniformity and diversity. The approach creates plausible variations in the crowd and enables novice users to dictate these variations by combining a standard personality model with a high-density crowd simulation. HiDAC addresses the simulation of local behaviors and the global wayfinding of crowds in a dynamically changing environment. It directs autonomous agents' behavior by combining geometric and psychological rules. HiDAC handles collisions through avoidance and response forces. Over long distances, the system applies collision avoidance so that agents can steer around obstacles. HiDAC assigns people specific behaviors; the number of actions they complete depends on their curiosity.
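
    A hypothetical sketch of the general idea of mapping personality traits to per-agent simulation parameters follows; the trait weights and parameter names are invented for illustration and are not the actual HiDAC mappings.

```python
# Hypothetical sketch: map OCEAN trait values in [0, 1] to low-level
# crowd-simulation parameters.  The linear weights are illustrative
# only, not the mappings used by HiDAC.

def behavior_params(o, c, e, a, n):
    """Map Openness, Conscientiousness, Extraversion, Agreeableness,
    and Neuroticism to per-agent simulation parameters."""
    return {
        "preferred_speed":  0.8 + 0.6 * e,             # m/s; extraverts move faster
        "personal_space":   0.6 - 0.3 * e + 0.2 * n,   # metres
        "waiting_patience": 2.0 + 6.0 * c - 3.0 * n,   # seconds
        "exploration_prob": 0.1 + 0.7 * o,             # curiosity-driven actions
        "pushing_force":    1.0 - 0.8 * a,             # agreeable agents push less
    }

print(behavior_params(o=0.9, c=0.4, e=0.8, a=0.2, n=0.6))
```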

    Smart Avatars in JackMOO

    Creating compelling 3-dimensional, multi-user virtual worlds for education and training applications requires a high degree of realism in the appearance, interaction, and behavior of avatars within the scene. Our goal is to develop and/or adapt existing 3-dimensional technologies to provide training scenarios across the Internet in a form as close as possible to the appearance and interaction expected of live situations with human participants. We have produced a prototype system, JackMOO, which combines Jack, a virtual human system, with LambdaMOO, a multi-user, network-accessible, programmable, interactive server. Jack provides the visual realization of avatars and other objects; LambdaMOO provides the web-accessible communication, programmability, and persistent object database. The combined JackMOO allows us to store the richer semantic information necessitated by the scope and range of human actions that an avatar must portray, and to express those actions in the form of imperative sentences. This paper describes JackMOO, its components, and a prototype application with five virtual human agents.
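
    As a rough sketch of driving avatar actions from imperative sentences, the hypothetical Python below parses a simple "verb object" sentence and dispatches it to an action handler; the verb set and handlers are invented for illustration and are not the LambdaMOO/JackMOO verb database.

```python
# Hypothetical sketch: turn an imperative sentence of the form
# "verb object [preposition object]" into an avatar action.  The
# dispatch table is illustrative, not the JackMOO verb set.

def parse_imperative(sentence: str):
    """Split an imperative sentence into (verb, argument words)."""
    words = sentence.lower().rstrip(".").split()
    return words[0], words[1:]

def dispatch(avatar: str, sentence: str) -> str:
    verb, args = parse_imperative(sentence)
    handlers = {
        "walk":  lambda a: f"{avatar} walks to {' '.join(a[1:])}",
        "open":  lambda a: f"{avatar} opens the {' '.join(a)}",
        "press": lambda a: f"{avatar} presses the {' '.join(a)}",
    }
    if verb not in handlers:
        return f"{avatar} does not understand '{verb}'"
    return handlers[verb](args)

print(dispatch("agent-1", "walk to the airlock."))
print(dispatch("agent-1", "open hatch."))
```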

    A Covered Eye Fails To Follow an Object Moving in Depth

    To clearly view approaching objects, the eyes rotate inward (vergence) and the intraocular lenses focus (accommodation). Current ocular control models assume both eyes are driven by unitary vergence and unitary accommodation commands that causally interact. The models typically describe discrete gaze shifts to non-accommodative targets performed under laboratory conditions. We probe these unitary signals using a physical stimulus moving in depth on the midline while recording vergence and accommodation simultaneously from both eyes in normal observers. Under monocular viewing, retinal disparity is removed, leaving only monocular cues for interpreting the object's motion in depth. The viewing eye always followed the target's motion. However, the occluded eye did not follow the target and, surprisingly, rotated out of phase with it. In contrast, accommodation in both eyes was synchronized with the target under monocular viewing. The results challenge existing unitary vergence command theories and the assumed causal accommodation-vergence linkage.
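
    For readers unfamiliar with the underlying geometry, the short worked example below computes the demands for a midline target at distance d: vergence demand is 2*atan(IPD/(2d)) and accommodative demand is 1/d dioptres. The interpupillary distance used is a typical adult value, not a figure from the paper.

```python
# Worked example of the stimulus geometry: vergence and accommodative
# demand for a midline target at distance d.  The IPD is a typical
# adult value (63 mm), assumed here for illustration.

import math

def demands(d_m: float, ipd_m: float = 0.063):
    """Return (vergence demand in degrees, accommodation in dioptres)."""
    vergence_deg = math.degrees(2 * math.atan(ipd_m / (2 * d_m)))
    accommodation_D = 1.0 / d_m
    return vergence_deg, accommodation_D

for d in (0.25, 0.5, 1.0, 2.0):
    v, a = demands(d)
    print(f"d={d:4.2f} m  vergence={v:5.2f} deg  accommodation={a:4.2f} D")
```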

    Human Model Reaching, Grasping, Looking and Sitting Using Smart Objects

    Manually creating convincing animated human motion in a 3D ergonomic test environment is tedious and time-consuming. Procedural motion generators, however, help animators efficiently produce complex and realistic motions. Using the concept of a Human Modeling Software Testbed (HMST), we created novel procedural methods for animating reaching, grasping, looking, and sitting using the environmental context of ‘smart’ objects that parametrically guide the human model's ergonomic motions. This approach enabled complicated procedures such as collision-free leg reach and contextual sitting motion generation. By procedurally adding small secondary details to the animation, such as head/eye vision constraints and prehensile grasps, the animated motions look more natural with minimal animator input. A ‘smart’ object in the scene graph provides the specific parameters needed to produce proper motions and final positions. These parameters are applied procedurally to the desired figure to create any secondary motions, and generalize to any environment. Our system allows users to proceed with any required ergonomic analyses with confidence in the visual validity of the automated motions.
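
    A minimal sketch of the ‘smart’ object idea follows, assuming the object in the scene graph carries the parameters a motion generator needs; the attribute names and the reach routine are hypothetical, not the HMST schema.

```python
# Hypothetical sketch of a 'smart' object: the scene-graph object
# carries the parameters the motion generators need.  Attribute names
# are illustrative, not the HMST schema.

from dataclasses import dataclass

@dataclass
class SmartObject:
    name: str
    grasp_site: tuple = (0.0, 0.0, 0.0)    # where the hand should land
    gaze_target: tuple = (0.0, 0.0, 0.0)   # where the eyes should look
    approach_dir: tuple = (0.0, 1.0, 0.0)  # collision-free approach axis

def reach(figure: str, obj: SmartObject) -> str:
    """Generate a reach: primary hand motion plus secondary gaze."""
    return (f"{figure}: move hand to {obj.grasp_site} along "
            f"{obj.approach_dir}; look at {obj.gaze_target}")

chair = SmartObject("chair", grasp_site=(0.4, 0.0, 0.5),
                    gaze_target=(0.4, 0.2, 0.5))
print(reach("human-1", chair))
```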