2,612 research outputs found

    Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface

    Get PDF
    Most current CAD systems support only the two most common input devices, a mouse and a keyboard, which limits the degree of interaction a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Moreover, people tend to use both hands to manipulate 3D objects: one hand orients the object while the other performs an operation on it. The same approach can be applied to computer modelling in the conceptual phase of the design process: a designer can rotate and position an object with one hand and manipulate its shape (deform it) with the other. Accordingly, the 3D object can be changed easily and intuitively through interactive manipulation with both hands. This research investigates the manipulation and creation of free-form geometries through interactive interfaces with multiple input devices. First, the creation of the 3D model is discussed and several different types of models are illustrated. Then, different tools that allow the user to control the 3D model interactively are presented. Three experiments were conducted using different interactive interfaces; two bimanual techniques were compared with the conventional one-handed approach. Finally, it is demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices.
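
    As a rough illustration of the bimanual split described above (one hand orienting, the other deforming), the following sketch keeps the two input streams independent while both act on one model. The class, the device mapping, and the falloff-based deformation are hypothetical assumptions for illustration, not the thesis's actual system.

```python
import numpy as np

class BimanualModel:
    """One hand orients the object, the other deforms its control points."""
    def __init__(self, control_points):
        self.points = np.asarray(control_points, dtype=float)  # N x 3
        self.rotation = np.eye(3)                               # orientation state

    def orient(self, axis, angle):
        """Non-dominant hand: rotate the whole object (Rodrigues' formula)."""
        axis = np.asarray(axis, float) / np.linalg.norm(axis)
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        self.rotation = (np.eye(3) + np.sin(angle) * K
                         + (1 - np.cos(angle)) * (K @ K)) @ self.rotation

    def deform(self, grab_point, displacement, radius=1.0):
        """Dominant hand: pull nearby control points, with distance falloff."""
        d = np.linalg.norm(self.points - np.asarray(grab_point, float), axis=1)
        weight = np.clip(1.0 - d / radius, 0.0, 1.0)[:, None]
        self.points += weight * np.asarray(displacement, float)

    def world_points(self):
        return self.points @ self.rotation.T

# Both streams can update the model concurrently, e.g. per input event:
model = BimanualModel(np.random.rand(64, 3))
model.orient(axis=[0, 1, 0], angle=0.3)          # left hand: turn the object
model.deform([0.5, 0.5, 0.5], [0, 0.2, 0])       # right hand: pull the surface
```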

    An empirical investigation of gaze selection in mid-air gestural 3D manipulation

    Get PDF
    In this work, we investigate gaze selection in the context of mid-air hand gestural manipulation of 3D rigid bodies on monoscopic displays. We present the results of a user study with 12 participants in which we compared the performance of Gaze, a Raycasting technique (2D Cursor) and a Virtual Hand technique (3D Cursor) for selecting objects in two 3D mid-air interaction tasks. We also compared selection confirmation times for Gaze selection when selection is followed by manipulation and when it is not. Our results show that gaze selection is faster than and preferred over 2D and 3D mid-air-controlled cursors, and is particularly well suited to tasks in which users constantly switch between several objects during manipulation. Further, selection confirmation times are longer when selection is followed by manipulation than when it is not.
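
    For context, both the Gaze and 2D Cursor conditions boil down to raycast selection: a ray from the eye or cursor is intersected with the scene and the nearest hit is selected. A minimal sketch follows; the bounding-sphere scene representation is an assumption for illustration, not the study's implementation.

```python
import numpy as np

def pick(origin, direction, objects):
    """Return the object whose bounding sphere the ray hits first, or None.

    objects: list of (name, center, radius); origin/direction: 3-vectors.
    """
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for name, center, radius in objects:
        oc = np.asarray(center, float) - origin
        t = oc @ direction                  # closest approach along the ray
        if t < 0:
            continue                        # object is behind the viewer
        miss2 = oc @ oc - t * t             # squared distance ray-to-center
        if miss2 <= radius * radius and t < best_t:
            best, best_t = name, t
    return best

scene = [("cube", (0, 0, 5), 1.0), ("sphere", (2, 0, 8), 1.0)]
print(pick(origin=(0, 0, 0), direction=(0, 0, 1), objects=scene))  # "cube"
```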

    The cockpit for the 21st century

    Get PDF
    Interactive surfaces are a growing trend in many domains. As one possible manifestation of Mark Weiser’s vision of ubiquitous and disappearing computers in everyday objects, we see touch-sensitive screens in many kinds of devices, such as smartphones, tablet computers and interactive tabletops. More advanced concepts of these have been an active research topic for many years. This has also influenced automotive cockpit development: concept cars and recent market releases show integrated touchscreens, growing in size. To meet increasing information and interaction needs, interactive surfaces offer context-dependent functionality in combination with a direct input paradigm. However, interfaces in the car need to be operable while driving. Distraction, especially visual distraction from the driving task, can lead to critical situations if the combined attentional demand of the primary and secondary tasks exceeds the available resources. So far, a touchscreen requires a lot of visual attention, since its flat surface does not provide any haptic feedback. There have been approaches to making direct touch interaction accessible while driving for simple tasks. Outside the automotive domain, for example in office environments, concepts for sophisticated handling of large displays have already been introduced. Moreover, technological advances enable arbitrary surface shapes and thus new characteristics for interactive surfaces. In cars, two main characteristics for upcoming interactive surfaces are largeness and shape. On the one hand, spatial extension is increasing not only through larger displays, but also by taking objects in the surroundings into account for interaction. On the other hand, the flatness inherent in current screens can be overcome by upcoming technologies, so interactive surfaces can provide haptically distinguishable surfaces. This thesis describes the systematic exploration of large and shaped interactive surfaces and analyzes their potential for interaction while driving. To this end, different prototypes for each characteristic were developed and evaluated in test settings suitable for their maturity level. These prototypes were used to obtain subjective user feedback and objective data, and to investigate effects on driving and glance behavior as well as usability and user experience. As a contribution, this thesis provides an analysis of the development of interactive surfaces in the car. Two characteristics, largeness and shape, are identified that can improve interaction compared to conventional touchscreens. The presented studies show that large interactive surfaces can provide new and improved ways of interaction in both driver-only and driver-passenger situations. Furthermore, the studies indicate a positive effect on visual distraction when additional static haptic feedback is provided by shaped interactive surfaces. Overall, various non-exclusively applicable interaction concepts prove the potential of interactive surfaces for use in automotive cockpits, which is expected to be beneficial also in other environments where visual attention must remain focused on additional tasks.

    Tablet Applications for the Elderly: Specific Usability Guidelines

    Get PDF
    While the world population is aging, technological progress continues to accelerate. Smartphones and tablets belong to a growing market, and more and more people aged 65 and above are using such touch devices. However, with advancing age, normal cognitive, sensory, perceptual and motor changes influence psychological and physical capabilities, and therefore the way the elderly are able to use tablet applications. When designing tablet applications for the elderly, developers need support in understanding these capabilities. This thesis therefore provides a comprehensive compilation of usability guidelines for developing user-friendly tablet applications for older people. The development and testing of an exemplary tablet application within this thesis shows how these guidelines can be put into practice and how the result is evaluated by test participants in this age group.

    Interacting "Through the Display"

    Get PDF
    The increasing availability of displays at lower costs has led to their proliferation in our everyday lives. Additionally, mobile devices are readily at hand and have been proposed as interaction devices for external screens. However, only their input mechanism has been taken into account, without considering three additional factors in environments hosting several displays: first, a connection needs to be established to the desired target display (modality). Second, screens in the environment may be re-arranged (flexibility). And third, displays may be out of the user’s reach (distance). In our research we aim to overcome the problems resulting from these characteristics. The overall goal is a new interaction model that allows for (1) a non-modal connection mechanism for impromptu use of various displays in the environment, (2) interaction on and across displays in highly flexible environments, and (3) interaction at variable distances. In this work we propose a new interaction model called through-the-display interaction, which enables users to interact with remote content on their personal device in an absolute and direct fashion. To gain a better understanding of the effects of the additional characteristics, we implemented two prototypes, each of which investigates a different distance to the target display: LucidDisplay allows users to place their mobile device directly on top of a larger external screen. MobileVue, on the other hand, enables users to interact with an external screen at a distance. In each of these prototypes we analyzed their effects on the remaining two criteria, namely the modality of the connection mechanism and the flexibility of the environment. With the findings gained in this initial phase we designed Shoot & Copy, a system that detects screens purely based on their visual content. Users aim their personal device’s camera at the target display, which then appears in the live video shown in the viewfinder. To select an item, users take a picture, which is analyzed to determine the targeted region. We further extended this approach to multiple displays by using a centralized component serving as gateway to the display environment. In Tap & Drop we refined this prototype to support real-time feedback. Instead of taking pictures, users can now aim their mobile device at the display and start interacting immediately. In doing so, we broke the rigid sequential interaction of content selection and content manipulation. Both prototypes allow for (1) connections in a non-modal way from the user’s point of view (i.e., aim at the display and start interacting with it) and (2) fully flexible environments (i.e., the mobile device tracks itself with respect to displays in the environment). However, the wide-angle lenses and thus greater fields of view of current mobile devices still do not allow for variable distances. In Touch Projector, we overcome this limitation by introducing zooming in combination with temporarily freezing the video image. Based on our extensions to the taxonomy of mobile device interaction on external displays, we created a refined model of interacting through the display for mobile use. It enables users to interact impromptu without explicitly establishing a connection to the target display (non-modal). As the mobile device tracks itself with respect to displays in the environment, the model further allows for full flexibility of the environment (i.e., displays can be re-arranged without affecting the interaction).
    And above all, users can interact with external displays, regardless of their actual size, at variable distances without any loss of accuracy.
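
    The core step shared by Shoot & Copy and Touch Projector, turning a point in the phone's camera view into a point on the remote display, can be sketched as a planar homography between the display's corners as detected in the camera image and its pixel bounds. This is a plausible reconstruction under that assumption, not the authors' published implementation.

```python
import numpy as np
import cv2

def touch_to_display(touch_xy, corners_in_image, display_w, display_h):
    """Map a touch in the camera image onto remote display coordinates.

    corners_in_image: the display's four corners as seen by the camera,
    in order (top-left, top-right, bottom-right, bottom-left).
    """
    src = np.array(corners_in_image, dtype=np.float32)
    dst = np.array([[0, 0], [display_w, 0],
                    [display_w, display_h], [0, display_h]], dtype=np.float32)
    H = cv2.getPerspectiveTransform(src, dst)      # image plane -> display plane
    p = np.array([[touch_xy]], dtype=np.float32)   # shape (1, 1, 2) for cv2
    return cv2.perspectiveTransform(p, H)[0, 0]    # (x, y) on the display

# Hypothetical corner detections in the viewfinder frame:
corners = [(120, 80), (520, 95), (510, 390), (115, 370)]
x, y = touch_to_display((300, 240), corners, display_w=1920, display_h=1080)
```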

    RICHIE: A Step-by-step Navigation Widget to Enhance Broad Hierarchy Exploration on Handheld Tactile Devices

    No full text
    Exploring large hierarchies is still a challenging task, especially on handheld tactile devices, due to the lack of visualization space and finger occlusion. In this paper, we propose the RICHIE (Radial In-Cremental HIerarchy Exploration) tool, a new radial widget that allows step-by-step navigation through large hierarchies. We designed it to fit handheld tactile requirements such as target reaching and space optimization. Depth exploration is performed by shifting two levels of hierarchy at a time, reducing screen occupation. This widget was implemented in order to adapt a Command and Control (C2) system to mobile tactile devices, as these systems require the on-screen presence of a large unit hierarchy (the ORder of BATtle). Nevertheless, we are convinced that RICHIE could be used in other systems that require hierarchical data exploration, such as phylogenetic trees or file browsing.
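
    The two-levels-at-a-time navigation state can be sketched as follows: the widget keeps a path into the hierarchy and exposes only the current node's children (inner ring) and their children (outer ring), so each drill step shifts both rings inward by one level. The dict-of-lists tree encoding and class below are assumptions for illustration.

```python
class TwoLevelNavigator:
    """Step-by-step navigation showing two hierarchy levels at once."""
    def __init__(self, tree, root):
        self.tree = tree          # {node: [children...]}
        self.root = root
        self.path = []            # nodes drilled into so far

    def visible_levels(self):
        """Return (inner ring, outer ring) for the current position."""
        node = self.path[-1] if self.path else self.root
        inner = self.tree.get(node, [])
        outer = {c: self.tree.get(c, []) for c in inner}
        return inner, outer

    def drill(self, child):
        self.path.append(child)   # both rings shift one level deeper

    def back(self):
        if self.path:
            self.path.pop()

# Example: a small (hypothetical) order-of-battle hierarchy
tree = {"HQ": ["1st Bde", "2nd Bde"],
        "1st Bde": ["1-1 Bn", "1-2 Bn"],
        "2nd Bde": ["2-1 Bn"]}
nav = TwoLevelNavigator(tree, root="HQ")
print(nav.visible_levels())   # brigades inner, battalions outer
nav.drill("1st Bde")
print(nav.visible_levels())   # battalions inner, their children outer
```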

    Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions

    Get PDF
    Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free-space or full-body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction design research for such systems is still focused on desktop and touch interactions. Current thinking in free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, such that first-time users can use the system without any training; flexible; and not fatiguing, especially in the case of longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, there is a need to understand which components of these gestures are semantically meaningful (i.e., do users distinguish between using their left and right hand, or are those semantically the same thing?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on the features most important to users’ mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g., changes in speed are meaningful for scroll actions in walk-up-and-use displays but not for paging or selection). The major contributions of this work are: (1) an elicitation methodology that aids users in overcoming biases from existing interaction modalities; (2) a better understanding of the gestural features that matter, i.e., those that capture the intent of the gestures; and (3) generalizable design principles for walk-up-and-use public displays.
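
    Elicitation studies of this kind commonly quantify consensus with an agreement score over the proposals collected per referent, once proposals have been coded into equivalence classes. The sketch below computes the widely used agreement rate from Wobbrock et al.'s formulation; the data layout is an assumption, and the dissertation's own coding scheme is more detailed than this grouping.

```python
from collections import Counter

def agreement(proposals):
    """Agreement score for one referent: sum over groups of identical
    (coded) gestures of (group size / total proposals) squared."""
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())

# Hypothetical example: 10 participants proposed gestures for "scroll down"
scroll_down = ["swipe_up"] * 6 + ["swipe_down"] * 3 + ["point_down"]
print(agreement(scroll_down))   # 0.36 + 0.09 + 0.01 = 0.46
```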

    Multi-touch 3D Exploratory Analysis of Ocean Flow Models

    Get PDF
    Modern ocean flow simulations are generating increasingly complex, multi-layer 3D ocean flow models. However, most researchers still use traditional 2D visualizations to view these models one slice at a time. Properly designed 3D visualization tools can be highly effective for revealing the complex, dynamic flow patterns and structures present in these models. However, the transition from visualizing ocean flow patterns in 2D to 3D presents many challenges, including occlusion and depth ambiguity. Further complications arise from the interaction methods required to navigate, explore, and interact with these 3D datasets. We present a system that employs a combination of stereoscopic rendering, to best reveal and illustrate 3D structures and patterns, and multi-touch interaction, to allow natural and efficient navigation and manipulation within the 3D environment. Exploratory visual analysis is facilitated through a highly interactive toolset that leverages a smart particle system. Multi-touch gestures allow users to quickly position dye-emitting tools within the 3D model. Finally, we illustrate potential applications of our system through examples of real-world significance.
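
    The basic mechanism behind such dye tools can be illustrated simply: an emitter seeds particles that are advected through the sampled velocity field each frame. The uniform grid, nearest-cell sampling, and forward-Euler step below are simplifying assumptions; the paper's "smart" particle system is more sophisticated.

```python
import numpy as np

def advect(positions, velocity_field, grid_spacing, dt):
    """Move particles one step through a sampled 3D flow (forward Euler).

    velocity_field: array of shape (nx, ny, nz, 3); positions: (N, 3).
    """
    idx = np.clip((positions / grid_spacing).astype(int),
                  0, np.array(velocity_field.shape[:3]) - 1)
    v = velocity_field[idx[:, 0], idx[:, 1], idx[:, 2]]  # nearest-cell sample
    return positions + v * dt

# Example: a dye emitter drops 100 particles into a toy uniform flow
field = np.zeros((16, 16, 8, 3)); field[..., 0] = 1.0   # flow along +x
particles = np.random.rand(100, 3) * [4.0, 4.0, 2.0]    # emitter volume
for _ in range(10):
    particles = advect(particles, field, grid_spacing=0.5, dt=0.1)
```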

    Direct interaction with large displays through monocular computer vision

    Get PDF
    Large displays are everywhere, and have been shown to provide higher productivity gains and user satisfaction than traditional desktop monitors. The computer mouse remains the most common input tool for interacting with these larger displays, and much effort has gone into making this interaction more natural and more intuitive for the user. The use of computer vision for this purpose has been well researched, as it gives the user freedom and mobility and allows interaction at a distance. Interaction that relies on monocular computer vision, however, has not been well researched, particularly when used to recover depth information. This thesis investigates the feasibility of using monocular computer vision to allow bare-hand interaction with large display systems from a distance. By taking into account the location of the user and the available interaction area, a dynamic virtual touchscreen can be estimated between the display and the user. In the process, theories and techniques that make interaction with a computer display as easy as pointing at real-world objects are explored. Studies were conducted to investigate how humans naturally point at objects with their hands and to examine the inadequacies of existing pointing systems. Models that underpin the pointing strategy used in many previous interactive systems were formalized. A proof-of-concept prototype was built and evaluated in several user studies. The results suggest that it is possible to allow natural user interaction with large displays using low-cost monocular computer vision. Furthermore, the models developed and lessons learnt in this research can help designers develop more accurate and natural interactive systems that make use of humans’ natural pointing behaviours.
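
    Geometrically, a virtual touchscreen of this kind reduces to intersecting the user's eye-to-fingertip ray with a plane placed between user and display, then mapping the hit point to screen pixels. The sketch below assumes 3D eye and fingertip positions from the vision pipeline and an arbitrarily chosen plane; it is a minimal illustration, not the thesis's estimation method.

```python
import numpy as np

def virtual_touch(eye, fingertip, plane_origin, plane_normal, plane_u, plane_v):
    """Intersect the eye->fingertip ray with the virtual touchscreen plane.

    plane_u / plane_v are orthonormal in-plane axes; returns (u, v) metres
    on the plane, or None if the user points away from it.
    """
    eye, tip = np.asarray(eye, float), np.asarray(fingertip, float)
    o, n = np.asarray(plane_origin, float), np.asarray(plane_normal, float)
    d = tip - eye
    denom = d @ n
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the plane
    t = ((o - eye) @ n) / denom
    if t < 0:
        return None                      # plane is behind the user
    rel = eye + t * d - o                # hit point, in plane coordinates
    return rel @ np.asarray(plane_u, float), rel @ np.asarray(plane_v, float)

# Example: map the (u, v) hit onto an assumed 1.0 m x 0.6 m screen region
u, v = virtual_touch(eye=[0, 1.6, 0], fingertip=[0.2, 1.5, 0.5],
                     plane_origin=[-0.5, 1.8, 0.8], plane_normal=[0, 0, -1],
                     plane_u=[1, 0, 0], plane_v=[0, -1, 0])
px, py = u / 1.0 * 1920, v / 0.6 * 1080
```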