    Space station proximity operations and window design

    On-orbit proximity operations (PROX-OPS) consist of all extravehicular activity (EVA) within 1 km of the space station. Because PROX-OPS can vary widely, space station windows call for very careful planning that considers a great many human factors. The following topics are discussed: (1) basic window design philosophy and assumptions; (2) the concept of the local horizontal and local vertical on-orbit; (3) window linear dimensions; (4) selected anthropometric considerations; (5) displays and controls relative to windows; and (6) full window assembly replacement.

    Vection in depth during treadmill walking

    Vection has typically been induced in stationary observers (i.e., conditions providing visual-only information about self-motion). Two recent studies have examined vection during active treadmill walking: one reported that treadmill walking in the same direction as the visually simulated self-motion impaired vection (Onimaru et al, 2010 Journal of Vision 10(7):860), the other that it enhanced vection (Seno et al, 2011 Perception 40 747-750; Seno et al, 2011 Attention, Perception, & Psychophysics 73 1467-1476). Our study expands on these earlier investigations of vection during active observer movement. In experiment 1 we presented radially expanding optic flow and compared the vection produced in stationary observers with that produced during forward treadmill walking at a 'matched' speed. Experiment 2 compared the vection induced by forward treadmill walking while viewing expanding or contracting optic flow with that induced by viewing playbacks of these same displays while stationary. In both experiments subjects' tracked head movements were either incorporated into the self-motion displays (as simulated viewpoint jitter) or simply ignored. We found that treadmill walking always reduced vection (compared with stationary viewing conditions) and that simulated viewpoint jitter always increased vection (compared with constant-velocity displays). These findings suggest that while consistent visual-vestibular information about self-acceleration increases vection, biomechanical self-motion information reduces this experience (irrespective of whether it is consistent with the visual input or not).

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 341)

    This bibliography lists 133 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System during September 1990. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance.

    EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays

    While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect and align the tracker with the user's lateral movement. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze-interaction modes for a single user: in "Walk then Interact" the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact" the user can interact even while on the move. We report on a user study showing that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze-interaction kick-off time to 3.5 seconds -- a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position- and movement-independent gaze interaction with large public displays.
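    The reported speed-up can be sanity-checked with simple arithmetic. A minimal sketch, assuming "62% improvement" means a 62% relative reduction in kick-off time (our interpretation, not stated explicitly in the abstract):

```python
# Back-of-the-envelope check of the reported kick-off speed-up,
# assuming "62% improvement" is a relative reduction in time.
eyescout_kickoff_s = 3.5
improvement = 0.62

# If t_new = t_old * (1 - improvement), then t_old = t_new / (1 - improvement).
baseline_kickoff_s = eyescout_kickoff_s / (1 - improvement)
print(f"implied baseline kick-off: {baseline_kickoff_s:.1f} s")  # about 9.2 s
```

    Under this reading, prior systems would need roughly 9 seconds before gaze interaction begins, which is consistent with the abstract's framing of a substantial improvement.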

    Rehabilitative devices for a top-down approach

    In recent years, neurorehabilitation has moved from a "bottom-up" to a "top-down" approach. This change has also involved the technological devices developed for motor and cognitive rehabilitation. It implies that during a task or during therapeutic exercises, new "top-down" approaches are being used to stimulate the brain more directly, so as to elicit plasticity-mediated motor re-learning. This is opposed to "bottom-up" approaches, which act at the physical level and attempt to bring about changes at the level of the central nervous system. Areas covered: In the present unsystematic review, we present the most promising innovative technological devices that can effectively support rehabilitation based on a top-down approach, according to the most recent neuroscientific and neurocognitive findings. In particular, we explore if and how the use of new technological devices, including serious exergames, virtual reality, robots, brain-computer interfaces, rhythmic music, and biofeedback devices, might provide a top-down based approach. Expert commentary: Motor and cognitive systems are strongly interlinked in humans and thus cannot be separated in neurorehabilitation. Recently developed technologies in motor-cognitive rehabilitation might have a greater positive effect than conventional therapies.

    Mixed reality participants in smart meeting rooms and smart home environments

    Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment the user displays characteristics that show how the user, not necessarily consciously, verbally and nonverbally provides the smart environment with useful input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture), and human participants in the environment. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, show how remote meeting participants can take part in meeting activities, and offer some observations on translating these research results to smart home environments.
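    A user profile of the kind described, extended with a captured physical representation, could be sketched as follows (all type and field names here are illustrative assumptions, not taken from the paper):

```python
# Illustrative sketch of a user profile that pairs conventional preference
# data with a physical representation from multimodal capture.
# All names are assumptions for illustration, not from the paper.
from dataclasses import dataclass, field

@dataclass
class PhysicalRepresentation:
    """Pose and gesture data obtained from multimodal capture."""
    head_pose: tuple[float, float, float] = (0.0, 0.0, 0.0)  # yaw, pitch, roll
    gaze_target: str = "unknown"       # e.g. another participant or a smart object
    gestures: list[str] = field(default_factory=list)

@dataclass
class UserProfile:
    name: str
    preferences: dict[str, str] = field(default_factory=dict)
    interests: list[str] = field(default_factory=list)
    physical: PhysicalRepresentation = field(default_factory=PhysicalRepresentation)

# A remote participant's nonverbal behavior feeds back into the profile.
profile = UserProfile(name="remote-participant-1")
profile.physical.gestures.append("nod")
print(profile.physical.gestures)  # ['nod']
```

    The point of the sketch is the pairing: the static preference fields are what a conventional profile stores, while the `physical` field is the kind of continuously captured representation the abstract argues for.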

    Optical, gravitational, and kinesthetic determinants of judged eye level

    Subjects judged eye level, defined in three distinct ways relative to three distinct reference planes: a gravitational horizontal, giving the gravitationally referenced eye level (GREL); a visible surface, giving the surface-referenced eye level (SREL); and a plane fixed with respect to the head, giving the head-referenced eye level (HREL). The information available for these judgements was varied by having the subjects view an illuminated target that could be placed in a box which: (1) was pitched at various angles, (2) was illuminated or kept in darkness, (3) was moved to different positions along the subject's head-to-foot body axis, and (4) was viewed with the subjects upright or reclining. The results showed: (1) judgements of GREL made in the dark were 2.5 deg lower than in the light, with a significantly greater variability; (2) judged GREL was shifted approximately half of the way toward SREL when these two eye levels did not coincide; (3) judged SREL was shifted about 12 percent of the way toward HREL when these two eye levels did not coincide; (4) judged HREL was shifted about halfway toward SREL when these two eye levels did not coincide and the subject was upright (when the subject was reclining, HREL was shifted approximately 90 percent toward SREL); (5) the variability of the judged HREL in the dark was nearly twice as great with the subject reclining as with the subject upright. These results indicate that gravity is an important source of information for judgements of eye level. In the absence of information concerning the direction of gravity, the ability to judge HREL is extremely poor. A visible environment does not seem to afford precise information for judgements of direction, but it probably does afford significant information as to the stability of these judgements.
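    The graded shifts reported above read like a linear cue-combination scheme. A minimal sketch of such a weighting model, assuming the judged eye level moves a fixed fraction of the way from the primary toward the competing reference (the function and the weights are illustrative readings of the reported percentages, not the authors' fitted model):

```python
# Illustrative linear cue-combination model for the reported shifts.
# The weights below are read off the reported percentages and are
# assumptions for illustration, not parameters fitted by the authors.
def judged_eye_level(primary_deg: float, competing_deg: float, shift: float) -> float:
    """Judged level moves `shift` of the way from the primary toward the competing cue."""
    return primary_deg + shift * (competing_deg - primary_deg)

# Example: GREL at 0 deg, SREL at 10 deg; a shift of 0.5 ("about half of
# the way") predicts a judged GREL of 5 deg.
print(judged_eye_level(0.0, 10.0, 0.5))
# SREL shifted about 12 percent toward HREL: SREL at 0 deg, HREL at
# 10 deg predicts a judged SREL of about 1.2 deg.
print(judged_eye_level(0.0, 10.0, 0.12))
```

    On this reading, the larger weight on SREL versus the small weight on HREL captures the abstract's conclusion that visible surfaces dominate head-referenced information when the two conflict.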