1,214,896 research outputs found

    Event-driven displays for manipulator control

    The problem of constructing event-related information displays from multidimensional data generated by proximity, force-torque and tactile sensors integrated with the terminal device of a remotely controlled manipulator is considered. Event-driven displays are constructed by using appropriate algorithms acting on sensory data in real time. Event-driven information displays lessen the operator's workload and improve control performance. The paper describes and discusses several event-driven display examples that were implemented in the JPL teleoperator project, including a brief outline of the data handling system which drives the graphics display in real time. The paper concludes with a discussion of future plans to integrate event-driven displays with visual (TV) information.
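    The core idea of "algorithms acting on sensory data in real time" can be illustrated with a minimal sketch: converting a continuous stream of sensor magnitudes into discrete display events via thresholding with hysteresis. The function name, thresholds, and data are illustrative assumptions, not the JPL implementation.

```python
# Hypothetical sketch: deriving display events from a stream of sensor
# readings by thresholding with hysteresis. Names and thresholds are
# illustrative assumptions, not the actual JPL teleoperator algorithms.

def detect_contact_events(samples, on_threshold=2.0, off_threshold=1.0):
    """Map a sequence of force magnitudes (N) to (index, event) pairs."""
    events = []
    in_contact = False
    for t, force in enumerate(samples):
        if not in_contact and force >= on_threshold:
            in_contact = True
            events.append((t, "contact"))
        elif in_contact and force <= off_threshold:
            in_contact = False
            events.append((t, "release"))
    return events

print(detect_contact_events([0.1, 0.5, 2.5, 3.0, 1.5, 0.8, 0.2]))
# contact at index 2, release at index 5
```

    The hysteresis band (separate on/off thresholds) keeps a noisy signal near the threshold from flooding the operator's display with spurious events.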

    Analysis of a VTOL hover task with predictor displays using an optimal control model of the human operator

    The influence of different types of predictor displays in a longitudinal VTOL hover task is analyzed in a theoretical study. It was assumed that pitch angle and position are presented to the pilot in separate displays, namely the artificial horizon and a position display. The predictive information is calculated by means of a Taylor series. From earlier experimental studies it is well known that predictor displays improve human and system performance and reduce human workload. In this study, an optimal control model is used to prove this effect theoretically. Several cases with differing amounts of predictive and rate information are compared.
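    A Taylor-series predictor of the kind described extrapolates the displayed quantity a preview time T ahead from its current value and derivatives. The following is a minimal sketch under assumed variable names; the actual study's order of expansion and states are not specified here.

```python
# Illustrative second-order Taylor extrapolation for a predictor display:
# the symbol shows the state a preview time T ahead of the current time.
# Variable names and numeric values are assumptions for illustration.

def predict(value, rate, accel, T):
    """value(t + T) ~= value + rate*T + 0.5*accel*T^2"""
    return value + rate * T + 0.5 * accel * T ** 2

# e.g. pitch = 0.10 rad, pitch rate = 0.02 rad/s,
# pitch acceleration = -0.01 rad/s^2, preview time T = 2 s
print(predict(0.10, 0.02, -0.01, 2.0))  # ~0.12 rad
```

    Higher-order terms can be added when rate and acceleration information is available to the display, which corresponds to the "differing amounts of predictive and rate information" compared in the study.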

    Levitating Particle Displays with Interactive Voxels

    Levitating objects can be used as the primitives in a new type of display. We present levitating particle displays and show how research into object levitation is enabling a new way of presenting and interacting with information. We identify novel properties of levitating particle displays and give examples of the interaction techniques and applications they allow. We then discuss design challenges for these displays, potential solutions, and promising areas for future research

    Evaluation of force-torque displays for use with space station telerobotic activities

    Recent experiments which addressed Space Station remote manipulation tasks found that tactile force feedback (reflecting forces and torques encountered at the end-effector through the manipulator hand controller) does not improve performance significantly. Subjective response from astronaut and non-astronaut test subjects indicated that force information, provided visually, could be useful. No research exists which specifically investigates methods of presenting force-torque information visually. This experiment was designed to evaluate seven different visual force-torque displays which were found in an informal telephone survey. The displays were prototyped in the HyperCard programming environment. In a within-subjects experiment, 14 subjects nullified forces and torques presented statically, using response buttons located at the bottom of the screen. Dependent measures included questionnaire data, errors, and response time. Subjective data generally demonstrate that subjects rated variations of pseudo-perspective displays consistently better than bar graph and digital displays. Subjects commented that the bar graph and digital displays could be used, but were not compatible with using hand controllers. Quantitative data show similar trends to the subjective data, except that the bar graph and digital displays both provided good performance, perhaps due to the mapping of response buttons to display elements. Results indicate that for this set of displays, the pseudo-perspective displays generally represent a more intuitive format for presenting force-torque information.

    Focussed palmtop information access combining starfield displays and profile-based recommendations

    This paper presents two palmtop applications: Taeneb CityGuide and Taeneb ConferenceGuide. Both applications are centred around Starfield displays on palmtop computers, which provide fast, dynamic access to information on a small platform. The paper describes the applications, focussing on this novel palmtop information access method and on the user-profiling aspect of the CityGuide, where restaurants are recommended to users based both on the match of restaurant type to the user's observed previous interactions and on the ratings given by reviewers with similar observed preferences.
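    The two-signal recommendation idea (type match against the user's own history, plus ratings from similar reviewers) can be sketched as a simple weighted score. All names, weights, and data below are invented for illustration and do not describe the Taeneb implementation.

```python
# Hypothetical sketch of profile-based restaurant scoring: combine how often
# the user has interacted with a cuisine type with the mean rating given by
# reviewers whose preferences resemble the user's. Data and the 50/50
# weighting are assumptions, not the CityGuide's actual method.

from collections import Counter

def score_restaurants(user_history, restaurants, peer_ratings):
    """user_history: cuisine types the user has browsed.
    restaurants: {name: cuisine_type}
    peer_ratings: {name: mean rating (0..5) from similar reviewers}
    Returns restaurant names, best match first."""
    type_counts = Counter(user_history)
    total = len(user_history) or 1
    scores = {}
    for name, cuisine in restaurants.items():
        type_match = type_counts[cuisine] / total   # 0..1
        rating = peer_ratings.get(name, 0) / 5.0    # 0..1
        scores[name] = 0.5 * type_match + 0.5 * rating
    return sorted(scores, key=scores.get, reverse=True)

history = ["italian", "italian", "thai"]
places = {"Luigi": "italian", "Bangkok": "thai", "Bistro": "french"}
ratings = {"Luigi": 4.0, "Bangkok": 5.0, "Bistro": 3.0}
print(score_restaurants(history, places, ratings))
# Luigi ranks first: strong type match plus a good peer rating
```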

    The effect of transparency on recognition of overlapping objects

    Are overlapping objects easier to recognize when the objects are transparent or opaque? It is important to know whether the transparency of X-ray images of luggage contributes to the difficulty in searching those images for targets. Transparency provides extra information about objects that would normally be occluded but creates potentially ambiguous depth relations at the region of overlap. Two experiments investigated the threshold durations at which adult participants could accurately name pairs of overlapping objects that were opaque or transparent. In Experiment 1, the transparent displays included monocular cues to relative depth. Recognition of the back object was possible at shorter durations for transparent displays than for opaque displays. In Experiment 2, the transparent displays had no monocular depth cues. There was no difference in the duration at which the back object was recognized across transparent and opaque displays. The results of the two experiments suggest that transparent displays, even though less familiar than opaque displays, do not make object recognition more difficult, and possibly show a benefit. These findings call into question the importance of edge junctions in object recognition.

    Reasoning about dynamic information displays

    With increasing use of computing systems while on the move and in constantly changing conditions, whether via mobile devices, wearable computers or embedded systems in the environment, time plays an increasingly important role in interaction. The way in which information is represented in an interface is fundamental to interaction with it, and to how the information is used in the user's tasks and activities. Dynamic representations, where the user must perceive changes in the information displayed over time, pose a further challenge to the designer. Very often this information is integrated with information from the environment in the performance of the user's tasks. The diminutive size and limited display capabilities of many ubiquitous and mobile computing devices further motivate careful design of these displays. In this paper we look at how time can be taken into account when reasoning about representational issues from the early stages of design. We look at a model which can be used to reason about these issues in a structured fashion, and apply it to an example.

    Privacy and Curiosity in Mobile Interactions with Public Displays.

    Personal multimedia devices like mobile phones create new needs for larger displays distributed at specific points in the environment, for looking up information about the current place, playing games or exchanging multimedia data. The technical prerequisites are in place; however, using public displays always exposes information. In this paper we look at these issues from both the privacy and the curiosity perspectives, with several studies showing and confirming users' reservations about public interaction. Interactive advertisements can best address this by using specific types of interaction techniques.

    Comparing verbal media for alarm handling: Speech versus textual displays

    The rise of computers in command and control domains has meant that control operations can be performed via desk-based visual display terminals. This trend has also produced the potential to display information to operators in a variety of formats. Of particular interest has been the use of text-based displays for alarm presentation. There are possible limitations to the use of text for alarm presentation, not least of which is the need for a dedicated alarms display screen (or, at least, a display page). Given the capability of computers to synthesize speech, it is possible that speech-based alarms could generate the same information as text-based displays without the need for dedicated screen space. In this paper an experimental comparison of speech-based and text-based displays for presentation of alarms is reported. The findings show that speech leads to longer response times than text displays, but that it has minimal effect on the efficacy of fault handling. The results are discussed within the alarm initiated activities framework and implications for alarm system design are outlined