4,400 research outputs found

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 338)

    This bibliography lists 139 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System during June 1990. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance.

    Sound Localization for Robot Navigation

    Non

    NASA Space Human Factors Program

    This booklet briefly and succinctly treats 23 topics of particular interest to the NASA Space Human Factors Program. Most articles are by different authors, mainly NASA Johnson or NASA Ames personnel. Representative topics include mental workload and performance in space, the effects of light on circadian rhythms, human sleep, human reasoning, microgravity effects, and automation and crew performance.

    Aerospace medicine and biology: A continuing bibliography with indexes, supplement 128, May 1974

    This special bibliography lists 282 reports, articles, and other documents introduced into the NASA scientific and technical information system in April 1974.

    Aerospace medicine and biology: A continuing bibliography with indexes

    This bibliography lists 148 reports, articles, and other documents introduced into the NASA scientific and technical information system in December 1984.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that objects may be stored in and retrieved from a pre-attentional store during this task, lending further weight to that argument.
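
    The spoke-shift manipulation above can be made concrete with a short sketch. The following Python snippet is an illustration only, with assumed coordinates and an assumed 5-degree eccentricity; it shows how each rectangle centre could be displaced radially by ±1 degree along the imaginary spoke joining it to fixation, as described in the abstract.

        import math
        import random

        def shift_along_spoke(x, y, shift_deg=1.0):
            # Move a point radially along the imaginary spoke joining it to the
            # central fixation point at (0, 0), by +/- shift_deg of visual angle.
            eccentricity = math.hypot(x, y)      # distance from fixation, in degrees
            angle = math.atan2(y, x)             # direction of the spoke
            sign = random.choice([-1.0, 1.0])    # +/- 1 degree, as in the modified task
            new_ecc = eccentricity + sign * shift_deg
            return new_ecc * math.cos(angle), new_ecc * math.sin(angle)

        # Eight rectangle centres (assumed 5-degree eccentricity) arranged around fixation.
        positions = [(5 * math.cos(k * math.pi / 4), 5 * math.sin(k * math.pi / 4))
                     for k in range(8)]
        shifted = [shift_along_spoke(x, y) for x, y in positions]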

    A novel visual tracking scheme for unstructured indoor environments

    In the ever-expanding sphere of assistive robotics, the pressing need for advanced methods capable of accurately tracking individuals within unstructured indoor settings has been magnified. This research endeavours to devise a real-time visual tracking mechanism that combines high performance with minimal computational requirements. Inspired by the neural processes of the human brain’s visual information handling, our algorithm employs a pattern image, serving as an ephemeral memory, which facilitates the identification of motion within images. This tracking paradigm was subjected to rigorous testing on a Nao humanoid robot, demonstrating noteworthy outcomes in controlled laboratory conditions. The algorithm exhibited a remarkably low false detection rate, less than 4%, and target losses were recorded in merely 12% of instances, attesting to its successful operation. Moreover, the algorithm’s capacity to accurately estimate the direct distance to the target further substantiated its high efficacy. These compelling findings constitute a substantial contribution to assistive robotics. The visual tracking methodology proposed herein holds the potential to markedly amplify the competencies of robots operating in dynamic, unstructured indoor settings and to lay the foundation for more complex interactive tasks.
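
    The abstract does not give the algorithm itself, so the following Python/NumPy sketch is only a loose illustration of the stated idea of a pattern image acting as an ephemeral memory for motion detection: a slowly decaying reference image is compared against each incoming frame, and the centroid of the differing pixels is taken as the target. The decay rate, threshold, and function names are assumptions, not the authors' implementation.

        import numpy as np

        def update_pattern_image(pattern, frame, decay=0.9):
            # Blend the new frame into a slowly decaying "pattern image" that
            # serves as an ephemeral memory of the recent appearance of the scene.
            return decay * pattern + (1.0 - decay) * frame

        def detect_motion(pattern, frame, threshold=25.0):
            # Flag pixels whose intensity departs from the remembered pattern.
            return np.abs(frame.astype(np.float32) - pattern) > threshold

        def track(frames):
            # Yield a crude target estimate (centroid of moving pixels) per grayscale frame.
            pattern = frames[0].astype(np.float32)
            for frame in frames[1:]:
                mask = detect_motion(pattern, frame)
                if mask.any():
                    ys, xs = np.nonzero(mask)
                    yield (int(xs.mean()), int(ys.mean()))
                else:
                    yield None
                pattern = update_pattern_image(pattern, frame.astype(np.float32))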

    Examining the role of Diverted Attention on Musical Motion Aftereffects

    Previous studies have observed visual motion aftereffects (MAE) following prolonged exposure to both auditory and visual stimuli. As the importance of attention for MAE perception has been debated, the present study manipulated the level of attention directed to an auditory stimulus depicting motion and assessed how attention influenced MAE strength. It was hypothesized that MAE strength would depend on attention to the motion stimuli. One hundred participants were recruited and randomly assigned to either a Diverted-Attention Condition or a Control Condition. Each participant completed preliminary assessments to ensure adequate auditory calibration and familiarity with the random dot kinematogram (RDK) visual motion stimuli used in the experiment. In the main task, both conditions were exposed to the same auditory stimuli, ascending or descending musical scales with intermittent noise bursts, but given different task instructions. Participants in the Diverted-Attention Condition attended to short noise bursts and ignored the musical scales; participants in the Control Condition attended to the musical scales. Trials followed an identical procedure: (1) an ascending or descending scale, (2) RDK presentation, and (3) a forced-choice judgment about the motion of the RDK. RDK motion coherence and direction were manipulated. Analyses found significant main effects of Scale Direction and Motion Coherence, but no main effect of Condition. These results replicate prior reports of auditory-driven visual MAEs but suggest that attention might not modulate these effects. Potential explanations for these findings are explored through consideration of potential design confounds, alternative perspectives, and suggestions for future studies.
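
    As background for the RDK stimuli mentioned above, the sketch below illustrates how dot displacements with a given motion coherence and direction are typically generated. The parameter values (coherence, speed, dot count) are placeholders, not the study's actual settings.

        import numpy as np

        def rdk_step(positions, coherence=0.5, direction_deg=90.0, speed=0.1):
            # Move a fraction `coherence` of dots in the signal direction
            # (90 degrees = upward) and the remaining dots in random directions.
            n = len(positions)
            signal = np.random.rand(n) < coherence
            angles = np.where(signal,
                              np.deg2rad(direction_deg),
                              np.random.uniform(0.0, 2.0 * np.pi, n))
            step = speed * np.stack([np.cos(angles), np.sin(angles)], axis=1)
            return positions + step

        # Example: 100 dots in a unit square, 50% coherent upward motion.
        dots = np.random.rand(100, 2)
        dots = rdk_step(dots, coherence=0.5, direction_deg=90.0)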

    Assessment of Audio Interfaces for use in Smartphone Based Spatial Learning Systems for the Blind

    Recent advancements in the field of indoor positioning and mobile computing promise the development of smartphone-based indoor navigation systems. Currently, the preliminary implementations of such systems only use visual interfaces, meaning that they are inaccessible to blind and low-vision users. According to the World Health Organization, about 39 million people in the world are blind. This necessitates the development and evaluation of non-visual interfaces for indoor navigation systems that support safe and efficient spatial learning and navigation behavior. This thesis research empirically evaluated several different approaches through which spatial information about the environment can be conveyed through audio. In the first experiment, blindfolded participants standing at an origin in a lab learned the distance and azimuth of target objects that were specified by four audio modes. The first three modes were perceptual interfaces and did not require cognitive mediation on the part of the user. The fourth mode was a non-perceptual mode in which object descriptions were given via spatial language using clock-face angles. After learning the targets through the four modes, the participants spatially updated the positions of the targets and localized them by walking to each of them from two indirect waypoints. The results indicated that the hand-motion-triggered mode was better than the head-motion-triggered mode and comparable to the auditory snapshot mode. In the second experiment, blindfolded participants learned target object arrays with two spatial audio modes and a visual mode. In the first mode, head tracking was enabled, whereas in the second mode hand tracking was enabled. In the third mode, serving as a control, the participants were allowed to learn the targets visually. We again compared spatial updating performance with these modes and found no significant performance differences between them. These results indicate that we can develop 3D audio interfaces on sensor-rich, off-the-shelf smartphone devices, without the need for expensive head-tracking hardware. Finally, a third study evaluated room-layout learning performance by blindfolded participants using an Android smartphone. Three perceptual modes and one non-perceptual mode were tested for cognitive map development. As expected, the perceptual interfaces performed significantly better than the non-perceptual, language-based mode in an allocentric pointing judgment and in overall subjective rating. In sum, the perceptual interfaces led to better spatial learning performance and higher user ratings, and there was no significant difference between cognitive maps developed through spatial audio based on tracking the user's head or hand. These results have important implications, as they support the development of accessible, perceptually driven interfaces for smartphones.
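
    To illustrate the non-perceptual, spatial-language mode described above, the following Python sketch converts a target's offset from the user into a clock-face direction plus distance. The function name, phrasing, and example values are illustrative assumptions, not the thesis software.

        import math

        def clockface_description(dx, dy, facing_deg=0.0):
            # Describe a target at offset (dx, dy) metres from the user as a
            # clock-face direction plus distance, relative to the facing direction.
            azimuth = math.degrees(math.atan2(dx, dy)) - facing_deg   # 0 deg = straight ahead
            hour = round((azimuth % 360.0) / 30.0) % 12
            hour = 12 if hour == 0 else hour
            distance = math.hypot(dx, dy)
            return f"target at {hour} o'clock, {distance:.1f} metres"

        # Example: a target 2 m ahead and 1 m to the right of the user's facing direction.
        print(clockface_description(1.0, 2.0))   # -> "target at 1 o'clock, 2.2 metres"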