
    Telerobotics Workstation (TRWS) for Deep Space Habitats

    On medium- to long-duration human spaceflight missions, latency in communications from Earth could reduce efficiency or hinder local operations, control, and monitoring of the various mission vehicles and other elements. Regardless of the degree of autonomy of any one particular element, a means of monitoring and controlling the elements in real time based on mission needs would increase efficiency and response times for their operation. Since human crews would be present locally, a local means for monitoring and controlling all the various mission elements is needed, particularly for robotic elements, where response to interesting scientific features in the environment might require near-instantaneous manipulation and control. One of the elements proposed for medium- and long-duration human spaceflight missions, the Deep Space Habitat (DSH), is intended to be used as a remote residence and working volume for human crews. The proposed solution for local monitoring and control is to provide a workstation within the DSH where local crews can operate local vehicles and robotic elements with little to no latency. The Telerobotics Workstation (TRWS) is a multi-display computer workstation mounted in a dedicated location within the DSH that can be adjusted into a variety of configurations as required. From an Intra-Vehicular Activity (IVA) location, the TRWS uses the Robot Application Programming Interface Delegate (RAPID) control environment over the local network to remotely monitor and control vehicles and robotic assets located outside the pressurized volume, either in the immediate vicinity or at low-latency distances from the habitat. The multiple display area of the TRWS allows the crew to have numerous windows open with live video feeds, control windows, and data browsers, as well as local monitoring and control of the DSH and associated systems.
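    The architecture described above — a single workstation dispatching commands to, and collecting telemetry from, several robotic assets on the habitat's local network — can be sketched in miniature. This is an illustrative sketch only: the class names (`Workstation`, `RobotAsset`) and the command format are assumptions for the example, not the actual RAPID interfaces.

    ```python
    class RobotAsset:
        """Illustrative stand-in for a robotic element reachable on the
        habitat's local network (e.g. a rover outside the pressurized volume)."""

        def __init__(self, name):
            self.name = name
            self.commands = []                  # log of received commands
            self.telemetry = {"pos": (0, 0)}    # minimal telemetry record

        def handle(self, command):
            """Apply a command and return the updated telemetry."""
            self.commands.append(command)
            if command[0] == "move":
                dx, dy = command[1]
                x, y = self.telemetry["pos"]
                self.telemetry["pos"] = (x + dx, y + dy)
            return self.telemetry


    class Workstation:
        """Minimal TRWS-like dispatcher: routes commands to named assets and
        returns their telemetry for display in a monitoring window."""

        def __init__(self):
            self.assets = {}

        def register(self, asset):
            self.assets[asset.name] = asset

        def command(self, asset_name, command):
            # Local dispatch stands in for the low-latency LAN round trip;
            # no Earth-link delay is involved.
            return self.assets[asset_name].handle(command)
    ```

    In use, registering a rover and issuing `ws.command("rover1", ("move", (2, 3)))` immediately returns the rover's updated telemetry — the point being that, unlike an Earth-based operator, the IVA crew sees the result without a multi-minute light-time delay.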

    Multisensory numerosity judgments for visual and tactile stimuli

    To date, numerosity judgments have been studied only under conditions of unimodal stimulus presentation. It is therefore unclear whether the same limitations on correctly reporting the number of unimodal visual or tactile stimuli presented in a display might be expected under conditions in which participants have to count stimuli presented simultaneously in two or more different sensory modalities. In Experiment 1, we investigated numerosity judgments using both unimodal and bimodal displays consisting of one to six vibrotactile stimuli (presented over the body surface) and one to six visual stimuli (seen on the body via mirror reflection). Participants had to count the number of stimuli regardless of their modality of presentation. Bimodal numerosity judgments were significantly less accurate than predicted on the basis of an independent modality-specific resources account, thus showing that numerosity judgments might rely on a unitary amodal system instead. The results of a second experiment demonstrated that divided attention costs could not account for the poor performance in the bimodal conditions of Experiment 1. We discuss these results in relation to current theories of cross-modal integration and to the cognitive resources and/or common higher order spatial representations possibly accessed by both visual and tactile stimuli.
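    The comparison in Experiment 1 hinges on what an independent modality-specific resources account would predict for bimodal displays. One simple formulation of that prediction — an illustrative assumption here, not necessarily the exact model the study fit — is that a bimodal total is reported correctly only when both unimodal counts are correct, so predicted bimodal accuracy is the product of the unimodal accuracies:

    ```python
    def predicted_bimodal_accuracy(p_visual, p_tactile):
        """Predicted bimodal accuracy under an independent
        modality-specific resources account: the visual and tactile
        counts succeed or fail independently, so the joint probability
        of a correct total is the product of the unimodal accuracies.
        (Illustrative formulation; the study's actual model may differ.)"""
        return p_visual * p_tactile
    ```

    For example, 90% unimodal visual accuracy and 80% unimodal tactile accuracy would predict 72% bimodal accuracy under independence; observed bimodal accuracy significantly below such a prediction is what the abstract reports, favoring a unitary amodal system over independent modality-specific resources.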