
    Guidelines for the design of haptic widgets

    Haptic feedback has been shown to improve user performance in Graphical User Interface (GUI) targeting tasks in a number of studies. These studies have typically focused on interactions with individual targets, and it is unclear whether the performance increases reported will generalise to the more realistic situation where multiple targets are presented simultaneously. This paper addresses this issue in two ways. Firstly, two empirical studies dealing with groups of haptically augmented widgets are presented. These reveal that haptic augmentations of complex widgets can reduce performance, although carefully designed feedback can result in performance improvements. The results of these studies are then used in conjunction with the previous literature to generate general design guidelines for the creation of haptic widgets.
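
    One common haptic augmentation in the targeting literature this work builds on is a "gravity well" that gently attracts the pointer once it enters a target's effect region. The sketch below is illustrative only (hypothetical names and constants, not the paper's widget set or guidelines) and shows the kind of per-widget force computation such designs involve; confining the force to each widget's own region is one way to keep neighbouring targets from interfering in the multi-target case the studies examine.

```python
# Illustrative "gravity well" haptic augmentation for a single GUI target.
# Hypothetical names and constants; assumes some device loop supplies the
# pointer position and accepts a 2D force vector each frame.
from dataclasses import dataclass

@dataclass
class Widget:
    cx: float      # target centre x (pixels)
    cy: float      # target centre y (pixels)
    radius: float  # radius of the haptic effect region (pixels)

def gravity_well_force(widget: Widget, px: float, py: float,
                       gain: float = 0.05):
    """Return a small (fx, fy) force pulling the pointer toward the centre.

    The force is zero outside the effect region, so neighbouring widgets
    do not all tug on the pointer at once.
    """
    dx, dy = widget.cx - px, widget.cy - py
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0 or dist > widget.radius:
        return (0.0, 0.0)
    strength = gain * (1.0 - dist / widget.radius)  # stronger near the centre
    return (strength * dx / dist, strength * dy / dist)

# Example: pointer just inside a 40 px effect region centred at (100, 100).
print(gravity_well_force(Widget(100, 100, 40), 90, 95))
```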

    Solar array flight experiment/dynamic augmentation experiment

    This report presents the objectives, design, testing, and data analyses of the Solar Array Flight Experiment/Dynamic Augmentation Experiment (SAFE/DAE) that was tested aboard Shuttle in September 1984. The SAFE was a lightweight, flat-fold array that employed a thin polyimide film (Kapton) as a substrate for the solar cells. Extension/retraction, dynamics, electrical, and thermal tests were performed. Of particular interest is the dynamic behavior of such a large lightweight structure in space. Three techniques for measuring and analyzing this behavior were employed. The methodology for performing these tests, gathering the data, and analyzing the data is presented. The report shows that the SAFE solar array technology is ready for application and that new methods are available to assess the dynamics of large structures in space.

    Cooperative Interactive Distributed Guidance on Mobile Devices

    Mobile devices are quickly becoming an indispensable part of our society. Equipped with numerous communication capabilities, they are increasingly being examined as potential tools for civilian and military usage to aid in distributed remote collaboration for dynamic decision making and physical task completion. With an ever growing mobile workforce, the need for remote assistance in aiding field workers who are confronted with situations outside their expertise certainly increases. Enhanced capabilities in using mobile devices could significantly improve numerous components of a task's completion (e.g., accuracy, timing). This dissertation considers the design of mobile implementations of technology and communication capabilities to support interactive collaboration between distributed team members. Specifically, this body of research seeks to explore and understand how various forms of multimodal remote assistance affect both the human user's performance and the mobile device's effectiveness when used during cooperative tasks. Power effects are additionally studied to assess the energy demands on a mobile device supporting multimodal communication. In a series of applied experiments and demonstrations, the effectiveness of a mobile device facilitating multimodal collaboration is analyzed through both empirical data collection and subjective exploration. The utility of the mobile interactive system and its configurations is examined to assess the impact on distributed task performance and collaborative dialogue between pairs. The dissertation formulates and defends an argument that multimodal communication capabilities should be incorporated into mobile communication channels to provide collaborating partners with salient perspectives, with the goal of reaching a mutual understanding of task procedures. The body of research discusses the findings of this investigation and highlights how they may influence future mobile research seeking to enhance interactive distributed guidance.

    Brain-based target expansion

    Gaze+Hold: Eyes-only Direct Manipulation with Continuous Gaze Modulated by Closure of One Eye

    The eyes are coupled in their gaze function and therefore usually treated as a single input channel, limiting the range of interactions. However, people are able to open and close one eye while still gazing with the other. We introduce Gaze+Hold, an eyes-only technique that builds on this ability to leverage the eyes as separate input channels, with one eye modulating the state of interaction while the other provides continuous input. Gaze+Hold enables direct manipulation beyond pointing, which we explore through the design of Gaze+Hold techniques for a range of user interface tasks. In a user study, we evaluated performance, usability, and users' spontaneous choice of eye for modulation of input. The results show that users are effective with Gaze+Hold. The choice of dominant versus non-dominant eye had no effect on performance, perceived usability, or workload. This is significant for the utility of Gaze+Hold as it affords flexibility in mapping either eye in different configurations.
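
    At its core the technique maps eyelid closure of one eye to a discrete "hold" state while the open eye's point of regard supplies the continuous signal. The following is a minimal, hypothetical sketch of that interaction logic (invented names such as hit_test; not the authors' implementation), shown here as a simple drag operation:

```python
# Illustrative Gaze+Hold interaction logic: closing one eye enters a "hold"
# state, the open eye's gaze point then provides continuous input (e.g.
# dragging the object looked at when the hold began). Names are invented.
from typing import Optional, Tuple

class GazeHoldController:
    def __init__(self):
        self.holding = False
        self.target: Optional[str] = None   # id of the manipulated object

    def update(self, gaze: Tuple[float, float], eye_closed: bool,
               hit_test) -> Optional[Tuple[str, Tuple[float, float]]]:
        """Process one eye-tracker sample.

        gaze       -- (x, y) point of regard of the open eye
        eye_closed -- True while the modulating eye is closed
        hit_test   -- callable mapping a gaze point to an object id or None
        Returns (object id, new position) while a drag is in progress.
        """
        if eye_closed and not self.holding:
            # Hold begins: grab whatever is under the gaze point.
            self.holding = True
            self.target = hit_test(gaze)
        elif not eye_closed and self.holding:
            # Hold ends: release the object.
            self.holding = False
            self.target = None
        if self.holding and self.target is not None:
            return (self.target, gaze)   # continuous manipulation
        return None

# Example: a short sample stream with a toy hit test.
ctrl = GazeHoldController()
hit = lambda g: "slider" if g[0] > 0.5 else None
for sample in [((0.6, 0.4), True), ((0.7, 0.4), True), ((0.7, 0.4), False)]:
    print(ctrl.update(*sample, hit))
```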

    Imaging Immune and Metabolic Cells of Visceral Adipose Tissues with Multimodal Nonlinear Optical Microscopy

    Visceral adipose tissue (VAT) inflammation is recognized as a mechanism by which obesity is associated with metabolic diseases. The communication between adipose tissue macrophages (ATMs) and adipocytes is important to understanding the interaction between immunity and energy metabolism and its roles in obesity-induced diseases. Yet visualizing adipocytes and macrophages in complex tissues is challenging for standard imaging methods. Here, we describe the use of a multimodal nonlinear optical (NLO) microscope to characterize the composition of VATs of lean and obese mice, including adipocytes, macrophages, and collagen fibrils, in a label-free manner. We show that lipid metabolism processes such as lipid droplet formation, lipid droplet microvesiculation, and free fatty acid trafficking can be dynamically monitored in macrophages and adipocytes. With its versatility, NLO microscopy should be a powerful imaging tool to complement molecular characterization of the immunity-metabolism interface.

    Autonomous Acquisition of Natural Situated Communication

    An important part of human intelligence, both historically and operationally, is our ability to communicate. We learn how to communicate, and maintain our communicative skills, in a society of communicators – a highly effective way to reach and maintain proficiency in this complex skill. Principles that might allow artificial agents to learn language this way are incompletely known at present – the multi-dimensional nature of socio-communicative skills is beyond every machine learning framework so far proposed. Our work begins to address this challenge by proposing a way for observation-based machine learning of natural language and communication. Our framework can learn complex communicative skills with minimal up-front knowledge. The system learns by incrementally producing predictive models of causal relationships in observed data, guided by goal inference and reasoning using forward-inverse models. We present results from two experiments where our S1 agent learns human communication by observing two humans interacting in a real-time TV-style interview, using multimodal communicative gesture and situated language to talk about recycling of various materials and objects. S1 can learn complex multimodal language and communicative acts, with a vocabulary of 100 words forming natural sentences with relatively complex sentence structure, including manual deictic reference and anaphora. S1 is seeded only with high-level information about the goals of the interviewer and interviewee, and a small ontology; no grammar or other information is provided to S1 a priori. The agent learns the pragmatics, semantics, and syntax of complex spoken utterances and gestures from scratch, by observing the humans compare and contrast the cost and pollution related to recycling aluminum cans, glass bottles, newspaper, plastic, and wood. After 20 hours of observation, S1 can perform an unscripted TV interview with a human, in the same style, without making mistakes.
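
    The incremental predictive modelling the abstract describes can be caricatured, at a much coarser grain than the S1 architecture, as accumulating statistics over observed cause-effect episodes. The sketch below is only such a caricature, with invented names, intended to convey the shape of observation-based learning rather than the actual system:

```python
# Highly simplified, hypothetical sketch of incremental predictive modelling
# from observation: the learner watches a stream of (situation, act, outcome)
# triples and maintains counts that let it predict the likely outcome of an
# act in a given situation. Illustration only, not the S1 architecture.
from collections import defaultdict, Counter
from typing import Optional

class IncrementalPredictor:
    def __init__(self):
        self.counts = defaultdict(Counter)   # (situation, act) -> outcome counts

    def observe(self, situation: str, act: str, outcome: str) -> None:
        """Incorporate one observed causal episode."""
        self.counts[(situation, act)][outcome] += 1

    def predict(self, situation: str, act: str) -> Optional[str]:
        """Return the most frequently observed outcome, if any."""
        outcomes = self.counts.get((situation, act))
        return outcomes.most_common(1)[0][0] if outcomes else None

# Example: after watching a deictic gesture being followed by a verbal
# reference, the learner predicts that pairing in the same situation.
p = IncrementalPredictor()
p.observe("interview", "point-at-can", "say 'aluminium can'")
print(p.predict("interview", "point-at-can"))
```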