
    Multiloop Manual Control of Dynamic Systems

    Human interaction with a simple multiloop dynamic system, in which the human's activity was systematically varied by changing the level of automation, was studied. The control loop structure resulting from the task definition parallels that of any multiloop manual control system and can be considered a stereotype. Simple models of the human in the task were developed, and a technique for describing the manner in which the human subjectively quantifies his opinion of task difficulty was extended. A man-in-the-loop simulation which provides data to support and direct the analytical effort is presented.
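
    A minimal sketch of how a single-loop version of such a manual control task might be simulated, assuming the human is modeled as a pure gain with a reaction-time delay acting on the displayed tracking error; the gain, delay, plant dynamics, and forcing function below are illustrative assumptions rather than the paper's models.

        # Single-loop manual control sketch: operator = assumed gain + reaction delay,
        # controlled element = integrator, forcing function = slow sine wave.
        import numpy as np

        dt, T = 0.01, 20.0                     # step size and run length (s)
        n = int(T / dt)
        Kp, tau = 2.0, 0.2                     # assumed operator gain and delay (s)
        delay_steps = int(tau / dt)

        t = np.arange(n) * dt
        reference = np.sin(0.5 * t)            # command the operator tries to track
        output = np.zeros(n)
        error = np.zeros(n)

        for k in range(1, n):
            error[k - 1] = reference[k - 1] - output[k - 1]
            # The operator acts on the error seen one reaction time ago.
            delayed_error = error[max(k - 1 - delay_steps, 0)]
            control = Kp * delayed_error
            output[k] = output[k - 1] + control * dt   # integrator plant: y_dot = u

        print(f"RMS tracking error: {np.sqrt(np.mean(error[:-1] ** 2)):.3f}")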

    A study of manual control methodology with annotated bibliography

    Manual control methodology - study with annotated bibliography

    Spatial displays as a means to increase pilot situational awareness

    Experiences raise a number of concerns for future spatial-display developers. While the promise of spatial displays is great, the cost of their development will be correspondingly large. The knowledge and skills which must be coordinated to ensure successful results are unprecedented. From the viewpoint of the designer, basic knowledge of how human beings perceive and process complex displays appears fragmented and largely unquantified. Methodologies for display development require prototyping and testing with subject pilots for even small changes. Useful characterizations of the range of differences between individual users are nonexistent or at best poorly understood. The nature, significance, and frequency of interpretation errors associated with complex integrated displays are unexplored and undocumented territory. Graphic displays have intuitive appeal and can achieve face validity much more readily than earlier symbolic displays. The risk of misleading the pilot is correspondingly greater. Thus, while some in the research community are developing the tools and techniques necessary for effective spatial-display development, potential users must be educated about the issues so that informed choices can be made. The scope of the task facing all is great. The task is challenging, and the potential for meaningful contributions at all levels is high indeed.

    Operator vision aids for space teleoperation assembly and servicing

    This paper investigates concepts for the visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories: camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions, concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer-assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation, and location are discussed.
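
    A minimal sketch of one of the display aids listed above, a predictive display for spatial location: the end-effector state is dead-reckoned across the communication delay so the operator sees where the arm will be rather than where it was. The constant-velocity extrapolation and the numbers used are illustrative assumptions, not the paper's design.

        # Predictive-display sketch: dead-reckon the end-effector position across the
        # round-trip delay so the overlay shows the predicted, not the delayed, state.
        import numpy as np

        def predict_position(position, velocity, delay_s):
            """Constant-velocity extrapolation of the end-effector position."""
            return position + velocity * delay_s

        measured_pos = np.array([0.50, 0.10, 1.20])   # last telemetered position (m), assumed
        measured_vel = np.array([0.02, -0.01, 0.00])  # last telemetered velocity (m/s), assumed
        round_trip_delay = 2.5                        # assumed link delay (s)

        predicted = predict_position(measured_pos, measured_vel, round_trip_delay)
        print("delayed position :", measured_pos)
        print("predicted overlay:", predicted)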

    Some data processing requirements for precision Nap-Of-the-Earth (NOE) guidance and control of rotorcraft

    Nap-Of-the-Earth (NOE) flight in a conventional helicopter is extremely taxing for two pilots under visual conditions. Developing a single-pilot all-weather NOE capability will require a fully automatic NOE navigation and flight control capability, for which innovative guidance and control concepts were examined. Constrained time-optimality provides a validated criterion for automatically controlled NOE maneuvers if the pilot is to have confidence in the automated maneuvering technique. A second focus was to organize the storage and real-time updating of NOE terrain profiles and obstacles in course-oriented coordinates indexed to the mission flight plan. A method is presented for using pre-flight geodetic parameter identification to establish guidance commands for planned flight profiles and alternates. A method is then suggested for interpolating this guidance command information with the aid of forward- and side-looking sensors, within the resolution of the stored data base, enriching the data content for real-time display, guidance, and control purposes. A third focus defined a class of automatic anticipative guidance algorithms and the necessary data preview requirements to follow the vertical, lateral, and longitudinal guidance commands dictated by the updated flight profiles and to address the effects of processing delays in digital guidance and control system candidates. The results of this three-fold research effort offer promising alternatives designed to gain pilot acceptance for automatic guidance and control of rotorcraft in NOE operations.
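
    A minimal sketch of the anticipative (preview) vertical-guidance idea described above, assuming terrain elevation is stored against along-course distance from the mission flight plan and the altitude command is set to clear the highest terrain within a look-ahead window plus a clearance height. The profile, preview distance, and clearance value are illustrative assumptions.

        # Preview vertical guidance sketch: command enough altitude to clear the highest
        # terrain within the look-ahead window, using a course-indexed terrain profile.
        import numpy as np

        course_distance = np.arange(0.0, 5000.0, 50.0)                       # along-track distance (m)
        terrain_elevation = 100.0 + 30.0 * np.sin(course_distance / 400.0)   # stored profile (m)

        def altitude_command(current_distance, preview_distance=600.0, clearance=30.0):
            """Highest terrain in the preview window plus an assumed clearance height."""
            in_window = ((course_distance >= current_distance)
                         & (course_distance <= current_distance + preview_distance))
            if not np.any(in_window):
                return terrain_elevation[-1] + clearance   # past the stored profile
            return terrain_elevation[in_window].max() + clearance

        for d in (0.0, 1000.0, 2500.0):
            print(f"{d:6.0f} m along course -> commanded altitude {altitude_command(d):.1f} m")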

    It takes time to prime: Semantic priming in the ocular lexical decision task

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDTs) was replaced with an eye-movement response through a sequence of 3 words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative lexical decision (LD) on each word in the triplet. In Experiment 2, LD responses were delayed until all 3 letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, but limited during text reading, as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of tau (τ), meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases in which the LD is difficult, as indicated by longer response times. Compared with the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT.
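
    A minimal sketch of the ex-Gaussian decomposition referred to above, using scipy's exponentially modified normal distribution (exponnorm). In scipy's parameterization K = tau/sigma, loc = mu, and scale = sigma, so tau = K * scale. The synthetic gaze durations are made up for illustration; real data would be the per-condition duration samples.

        # Ex-Gaussian fit sketch: recover mu, sigma, tau from a duration sample using
        # scipy.stats.exponnorm (K = tau/sigma, loc = mu, scale = sigma).
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        mu_true, sigma_true, tau_true = 250.0, 40.0, 80.0   # assumed values in ms
        durations = rng.normal(mu_true, sigma_true, 2000) + rng.exponential(tau_true, 2000)

        K, loc, scale = stats.exponnorm.fit(durations)
        mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
        print(f"mu ~ {mu_hat:.1f} ms, sigma ~ {sigma_hat:.1f} ms, tau ~ {tau_hat:.1f} ms")
        # A priming effect concentrated in tau would appear as a larger tau estimate for
        # unrelated-prime trials than for related-prime trials, with mu roughly unchanged.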

    A multivariable sampled-data model of an automobile driver

    In this thesis, a multivariable system model of driver performance in the basic driving tasks is presented. The driver model described acts as a serial-process, priority-accessed, time-sharing computer. This model processes the input or output task which currently possesses the highest priority. Input tasks are represented by continuous signals sampled intermittently according to priority laws. Output tasks are modeled as simple analog processes operating on the last few intermittently generated output controls. An individual priority rule is constructed for each input and output task. The performance of the driver in the lateral control task involves a feedforward pattern which is a consequence of the fact that the driver looks several feet ahead along the pathway. A laboratory analysis of the feedforward aspects of the driver in the single-input single-output lateral control task is described --Abstract, page ii
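
    A minimal sketch of the serial-process, priority-accessed sampling idea described above: at each step the model services only the single task whose priority is currently highest, and a task's priority grows with the time since it was last serviced. The linear priority law, weights, and task set are illustrative assumptions, not the thesis's actual rules.

        # Priority-accessed, serial-process sampling sketch: service only the single
        # highest-priority task each step; priority = weight * time since last service.
        dt = 0.05                                            # time step (s)
        tasks = {
            "lateral_input":   {"weight": 3.0, "last_serviced": 0.0},
            "speed_input":     {"weight": 1.5, "last_serviced": 0.0},
            "steering_output": {"weight": 2.0, "last_serviced": 0.0},
        }

        def priority(task, now):
            """Assumed linear priority law."""
            return task["weight"] * (now - task["last_serviced"])

        t, schedule = 0.0, []
        for _ in range(20):
            t += dt
            name = max(tasks, key=lambda n: priority(tasks[n], t))   # serial processing
            tasks[name]["last_serviced"] = t
            schedule.append(name)

        print(" ".join(schedule))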

    In-home and remote use of robotic body surrogates by people with profound motor deficits

    By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, which is a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA, to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT, and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation.
    Comment: 43 pages, 13 figures