
    Tactile Arrays for Virtual Textures

    This thesis describes the development of three new tactile stimulators for active touch, i.e. devices that deliver virtual touch stimuli to the fingertip in response to exploratory movements by the user. All three stimulators are designed to provide spatiotemporal patterns of mechanical input to the skin via an array of contactors, each under individual computer control. The drive mechanisms are based on piezoelectric bimorphs in a cantilever geometry. The first is a 25-contactor array (5 × 5 contactors at 2 mm spacing). It is a rugged design with a compact drive system and is capable of producing strong stimuli when running from low-voltage supplies. Combined with a PC mouse, it can be used for active exploration tasks. Pilot studies demonstrated that subjects could successfully use the device for discrimination of line orientation, simple shape identification and line-following tasks. A 24-contactor stimulator (6 × 4 contactors at 2 mm spacing) with improved bandwidth was then developed. It features control electronics designed to transmit arbitrary waveforms to each channel (generated on the fly, in real time) and software for rapid development of experiments. It is built around a graphics tablet, giving high-precision position sensing over a large 2D workspace. Experiments using two-component stimuli (components at 40 Hz and 320 Hz) indicate that spectral balance within active stimuli is discriminable independent of overall intensity, and that the spatial variation (texture) within the target is easier to detect at 320 Hz than at 40 Hz. The third system (again 6 × 4 contactors at 2 mm spacing) was a lightweight modular stimulator developed for fingertip and thumb grasping tasks; furthermore, it was integrated with force feedback on each digit and a complex graphical display, forming a multi-modal Virtual Reality device for the display of virtual textiles.
It is capable of broadband stimulation with real-time generated outputs derived from a physical model of the fabric surface. In an evaluation study, virtual textiles generated from physical measurements of real textiles were ranked in categories reflecting key mechanical and textural properties. The results were compared with those of a similar study performed on the real fabrics from which the virtual textiles had been derived. There was good agreement between the ratings of the virtual textiles and the real textiles, indicating that the virtual textiles are a good representation of the real textiles and that the system delivers appropriate cues to the user.
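The two-component experiments can be illustrated with a small sketch: a per-contactor waveform is built from 40 Hz and 320 Hz sinusoids whose relative weights set the spectral balance, while the overall RMS intensity stays fixed. The sampling rate, duration and square-root weighting scheme below are illustrative assumptions, not the stimulator's actual drive parameters.

```python
import math

def two_component_sample(t, balance, amplitude=1.0, f_low=40.0, f_high=320.0):
    """Sample of a two-component stimulus at time t (seconds).

    balance in [0, 1]: 0 puts all energy at f_low, 1 puts it all at f_high.
    Square-root weights keep the RMS level (overall intensity) constant
    as the spectral balance changes.
    """
    w_low = math.sqrt(1.0 - balance)
    w_high = math.sqrt(balance)
    return amplitude * (w_low * math.sin(2 * math.pi * f_low * t)
                        + w_high * math.sin(2 * math.pi * f_high * t))

def render_channel(duration=0.1, rate=8000, balance=0.5):
    """Render one contactor channel's waveform as a list of samples."""
    n = int(duration * rate)
    return [two_component_sample(i / rate, balance) for i in range(n)]
```

Because the two sinusoids are orthogonal over a common period, the cross term averages to zero and the RMS depends only on the sum of the squared weights, which this weighting holds at a constant value.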

    Supporting the Development Process of Multimodal and Natural Automotive User Interfaces

    Nowadays, driving a car places multi-faceted demands on the driver that go beyond maneuvering a vehicle through road traffic. The number of additional functions for entertainment, infotainment and comfort has increased rapidly in recent years. Each new function in the car is designed to make driving as pleasant as possible, but it also increases the risk that the driver will be distracted from the primary driving task. One of the most important goals for designers of new and innovative automotive user interfaces is therefore to keep driver distraction to a minimum while providing appropriate support to the driver. This goal can be achieved by providing tools and methods that support a human-centred development process. In this dissertation, a design space is presented that helps to analyze the context of use, to generate new ideas for automotive user interfaces and to document them. Furthermore, new opportunities for rapid prototyping are introduced. To evaluate new automotive user interfaces and interaction concepts with regard to their effect on driving performance, driving simulation software was developed within the scope of this dissertation. In addition, research results in the field of multimodal, implicit and eye-based interaction in the car are presented. The case studies illustrate systematic and comprehensive research on the opportunities of these kinds of interaction, as well as their effects on driving performance. We developed a prototype of a vibration steering wheel that communicates navigation instructions. Another steering wheel prototype has a display integrated in the middle and enables handwriting input. A further case study explores a visual placeholder concept to assist drivers when using in-car displays while driving.
When a driver looks at a display and then at the street, the last gaze position on the display is highlighted to assist the driver when switching attention back to the display. This speeds up the process of resuming an interrupted task, such as scrolling through a list of music titles. In another case study, we compared gaze-based interaction with touch and speech input. In the last case study, a driver-passenger video link system is introduced that enables the driver to have eye contact with the passenger without turning his head. On the whole, this dissertation shows that by using a new human-centred development process, modern interaction concepts can be developed in a meaningful way.
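As a rough illustration of the vibration steering wheel idea, a navigation instruction could be encoded as staggered pulses on actuators around the rim, sweeping along the side of the intended turn. The actuator count, indices and timings below are hypothetical and are not taken from the dissertation's prototype.

```python
def navigation_pattern(instruction, n_actuators=6):
    """Return a list of (actuator_index, start_ms, duration_ms) pulses.

    'left'  pulses actuators on the left half of the rim in sequence;
    'right' does the same on the right half. All values are illustrative.
    """
    half = n_actuators // 2
    if instruction == "left":
        indices = range(half)                # left-half actuators
    elif instruction == "right":
        indices = range(half, n_actuators)   # right-half actuators
    else:
        raise ValueError("unknown instruction: " + instruction)
    # Stagger 100 ms pulses at 120 ms intervals to create a sweep.
    return [(i, k * 120, 100) for k, i in enumerate(indices)]
```

A tactile encoding like this leaves the visual and auditory channels free for the driving task, which is the motivation the abstract gives for the prototype.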

    Multimodal information presentation for high-load human computer interaction

    This dissertation addresses the question: given an application and an interaction context, how can interfaces present information to users in a way that improves the quality of interaction (e.g. better user performance, lower cognitive demand and greater user satisfaction)? Information presentation is critical to the quality of interaction because it guides, constrains and even determines cognitive behavior. A good presentation is particularly desirable in high-load human-computer interaction, such as when users are under time pressure, under stress, or multi-tasking. Under a high mental workload, users may not have the spare cognitive capacity to cope with the unnecessary workload induced by a bad presentation. In this dissertation work, the major presentation factor of interest is modality. We conducted theoretical studies in the cognitive psychology domain in order to understand the role of presentation modality in different stages of human information processing. Based on this theoretical guidance, we conducted a series of user studies investigating the effect of information presentation (modality and other factors) in several high-load task settings. The two task domains are crisis management and driving. Using crisis scenarios, we investigated how to present information to facilitate time-limited visual search and time-limited decision making. In the driving domain, we investigated how to present highly urgent danger warnings and how to present informative cues that help drivers manage their attention between multiple tasks. The outcomes of this dissertation work have useful implications for the design of cognitively compatible user interfaces, and are not limited to high-load applications.
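The modality question studied here can be sketched as a toy selection rule: given a message's urgency and the user's current visual load, pick a presentation channel. The thresholds, channel names and fallback logic below are illustrative assumptions, not findings or guidelines from the dissertation.

```python
def choose_modality(urgency, visual_load, auditory_busy=False):
    """Pick a presentation modality for a message.

    urgency and visual_load are in [0, 1]; thresholds are illustrative.
    Falls back to the tactile channel when audio is already occupied.
    """
    if urgency > 0.8:
        # Highly urgent warnings: use an attention-grabbing channel
        # regardless of current load.
        return "auditory" if not auditory_busy else "tactile"
    if visual_load > 0.6:
        # Eyes are busy (e.g. while driving): avoid the visual channel.
        return "auditory" if not auditory_busy else "tactile"
    return "visual"
```

The point of such a rule is the one the abstract makes: under high workload, a presentation that competes for an already-loaded channel adds unnecessary cognitive cost.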

    Interaction with embodied media

    Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2009. Includes bibliographical references (p. 213-222). The graphical user interface has become the de facto metaphor for the majority of our diverse activities using computers, yet the desktop environment provides a one-size-fits-all user interface. This dissertation argues that for the computer to fully realize its potential to significantly extend our intellectual abilities, new interaction techniques must call upon our bodily abilities to manipulate objects, enable collaborative work, and be usable in our everyday physical environment. In this dissertation I introduce a new human-computer interaction concept, embodied media. An embodied media system physically represents digital content such as files, variables, or other program constructs with a collection of self-contained, interactive electronic tokens that can display visual feedback and can be manipulated gesturally by users as a single, coordinated interface. Such a system relies minimally on external sensing infrastructure compared to tabletop or augmented-reality systems, and is a more general-purpose platform than most tangible user interfaces. I hypothesized that embodied media interfaces provide advantages for activities that require the user to efficiently arrange and adjust multiple digital content items. Siftables is the first instantiation of an embodied media interface. I built 180 Siftable devices in three design iterations, and developed a programming interface and various applications to explore the possibilities of embodied media. In a survey, outside developers reported that Siftables created new user interface possibilities, and that working with Siftables increased their interest in human-computer interaction and expanded their ideas about the field.
I evaluated a content organization application with users, finding that Siftables offered an advantage over the mouse + graphical user interface (GUI) in task completion time that was amplified when participants worked in pairs, and a digital image manipulation application in which participants preferred Siftables to the GUI in terms of enjoyability, expressivity, domain learning, and exploratory/quick arrangement of items. By David Jeffrey Merrill. Ph.D.
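The embodied-media concept, self-contained tokens that each carry a content item and sense their neighbors so a physical arrangement becomes a single coordinated interface, can be sketched in a few lines. The class names and adjacency model below are a hypothetical simplification for illustration, not the Siftables firmware or API.

```python
class Token:
    """One self-contained interactive token holding a digital content item."""
    def __init__(self, item):
        self.item = item        # e.g. a file name, variable, or image
        self.neighbors = set()  # tokens currently sensed as adjacent

def place_adjacent(a, b):
    """Simulate physically placing two tokens next to each other."""
    a.neighbors.add(b)
    b.neighbors.add(a)

def group_of(token):
    """All content items reachable through adjacency: one physical grouping."""
    seen, stack = set(), [token]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(t.neighbors)
    return {t.item for t in seen}
```

Because each token knows only its immediate neighbors, a grouping emerges from local sensing alone, which mirrors the abstract's claim that the system needs minimal external sensing infrastructure.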