37 research outputs found

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together, that is, phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that in many commercial design tools require menus and tool palettes—techniques originally designed for the mouse, not pen and touch. We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe the interesting forms of interaction that emerge and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and which hand, is touching, in order to support phrasing together direct-touch interactions on large displays. From the design and development of the environment and from both field and controlled studies, we derive a set of methods, based upon human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.
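
    As a concrete illustration of the kind of bimanual phrasing described above, the sketch below shows one plausible pattern: a non-preferred-hand touch holds a mode for as long as it is down, while the preferred hand's pen strokes are interpreted in that mode. This is only a hypothetical sketch; the class, zone names and event handlers are assumptions for illustration, not the thesis's actual environment.

```python
# Hypothetical sketch: a non-preferred-hand touch holds a mode (e.g. "pan" or
# "erase") while the preferred hand's pen performs strokes. Releasing the touch
# ends the phrase and restores the default "ink" mode -- no menu or palette.

class BimanualCanvas:
    # Regions along the bezel that the non-preferred hand can hold to set a mode.
    MODE_ZONES = {"left_edge": "erase", "bottom_edge": "pan"}

    def __init__(self):
        self.mode = "ink"          # default pen behaviour
        self._held_touches = {}    # touch_id -> mode it is holding

    def on_touch_down(self, touch_id, zone):
        """Non-preferred-hand touch enters a mode zone: the mode lasts while held."""
        mode = self.MODE_ZONES.get(zone)
        if mode:
            self._held_touches[touch_id] = mode
            self.mode = mode

    def on_touch_up(self, touch_id):
        """Releasing the touch closes the phrase and restores inking."""
        if touch_id in self._held_touches:
            del self._held_touches[touch_id]
            self.mode = next(iter(self._held_touches.values()), "ink")

    def on_pen_stroke(self, points):
        """The preferred hand's pen is interpreted according to the held mode."""
        print(f"pen stroke with {len(points)} points handled as '{self.mode}'")


canvas = BimanualCanvas()
canvas.on_pen_stroke([(0, 0), (10, 10)])   # -> handled as 'ink'
canvas.on_touch_down("t1", "left_edge")    # non-preferred hand holds erase
canvas.on_pen_stroke([(5, 5), (6, 6)])     # -> handled as 'erase'
canvas.on_touch_up("t1")                   # phrase ends, back to ink
```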

    Interacting in the absence of signifiers: the case of swhidgets

    At the heart of this thesis is a common but problematic situation that users of digital systems often face in their daily interactions: to interact with the system, they need some knowledge of an interaction possibility, some piece of information about the interface, but this information is not provided in the context in which they need it. I call such interaction possibilities non-signified, and I call signifier-less designs the interfaces and interaction techniques that rely on non-signified interaction possibilities. An example of a modern signifier-less design is what I call "swhidgets", for "SWIpe-revealed HIDden WIDGETS": widgets that are hidden under the screen bezels or other interface elements, out of view and not advertised by any graphical mark, but that can be revealed by dragging them into view with a swipe gesture relying on a physical manipulation metaphor. Swhidgets are an important component of touch-based smartphone and tablet interfaces, and are the principal signifier-less design studied in this thesis. When facing a signifier-less design, users may be confused about what they should do and how to achieve their goals, or they might have to use suboptimal ways of achieving their goals because they are unaware of the existence of more efficient options. It is thus usually advised to avoid signifier-less designs. Yet, despite designers’ awareness of the problems they may cause, signifier-less designs are common in user interfaces. They therefore deserve a deeper analysis than simply advising to avoid them in interface design. Indeed, there might be good reasons to apply this design: maybe it provides benefits that are hard to see with our current understanding of these designs, or maybe there is no way to avoid it. In this thesis, I study the question of why designers would create interfaces that do not clearly expose some of their interaction possibilities, taking the case of swhidgets as an example and focus of inquiry. As a preliminary work on swhidgets, I focus on the following questions: What are signifier-less designs, and what aspects of swhidget design make them unique? Do users know the swhidgets provided by their system? How did they get to know them despite their lack of signifiers? What are the benefits of not having signifiers in the design of swhidgets? My contributions to these questions are:
    - I define signifier-less designs and provide observations of this type of design in user interfaces.
    - I provide an analysis of the fundamental notions required to define signifier-less designs: affordances, signifiers and semiotics.
    - I propose a model of user discovery and adoption of interaction techniques in general, relying on three dimensions and their relationships: users’ current knowledge and skills, users’ motivations, and the means by which interfaces inform users.
    - I propose the notions of Degree of Knowledge and Source of Knowledge derived from this model, which can be used in experiments to evaluate how well participants know an interaction technique and how they discovered it.
    - I present the design and results of two studies on iOS swhidgets that investigate how well users know them, how they discovered them, their reasons for not using them, how they generally feel about them, and how they integrate them in the way they think about their interactions with the system.
    These studies revealed that swhidgets were globally appreciated and relatively well known by users, although there is still room for improvement, notably for some specific swhidgets. I conclude with perspectives for future work regarding the transfer of knowledge about swhidgets from one application to another, the relevance of considering all aspects of user experience in understanding the design of swhidgets, and the possibility of increasing the discoverability of swhidgets by using animated transitions between interface views.
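
    To make the swipe-revealed mechanism above concrete, here is a minimal, hypothetical sketch of a panel hidden just outside the screen that is dragged into view by a swipe starting near the bezel. The class, thresholds and handler names are illustrative assumptions, not iOS internals or the implementation studied in the thesis.

```python
# Illustrative sketch of a swipe-revealed hidden widget ("swhidget"): a panel
# sits above the top screen edge (negative offset) and is dragged into view by
# a swipe that starts near the bezel. All values are assumptions.

class Swhidget:
    BEZEL_BAND = 20        # px from the top edge where a reveal swipe may start
    PANEL_HEIGHT = 300     # px, full height of the hidden panel

    def __init__(self):
        self.offset = -self.PANEL_HEIGHT   # fully hidden above the screen
        self._dragging = False

    def on_drag_start(self, y):
        # Only a swipe beginning in the bezel band grabs the hidden panel.
        self._dragging = y <= self.BEZEL_BAND

    def on_drag_move(self, dy):
        if self._dragging:
            # Follow the finger, as if physically pulling the panel down.
            self.offset = min(0, self.offset + dy)

    def on_drag_end(self):
        # Snap open if pulled past half way, otherwise snap back out of view.
        self.offset = 0 if self.offset > -self.PANEL_HEIGHT / 2 else -self.PANEL_HEIGHT
        self._dragging = False


panel = Swhidget()
panel.on_drag_start(y=5)       # swipe starts at the bezel
panel.on_drag_move(dy=180)     # pulled most of the way down
panel.on_drag_end()
print(panel.offset)            # 0 -> panel revealed
```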

    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To help users gain expertise with these gestures, interaction designers often deploy a guided novice mode -- where users can rely on recognizing visual UI elements to perform a gestural command. Once a user knows the gesture and its associated command, they can perform it without guidance, thus relying on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction -- through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour as it shifts from novice to more efficient interaction mechanisms, but also to expand the concept of intermodal transfer to different contexts. We garner this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first piece investigates whether or not designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target or recall modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations -- beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration stemming from our work.
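
    The guided-versus-recall distinction above can be illustrated with a small, hypothetical marking-menu-style sketch: if the user hesitates, visual guidance is shown (novice mode); if the stroke is committed before the delay, the command is recognised from recall alone (expert mode). The delay, the four-direction command mapping and the function names are assumptions for illustration, not the apparatus used in the thesis.

```python
# Sketch of the guided-vs-recall pattern: hesitation triggers visual guidance
# (novice mode); a quick stroke is recognised from recall alone (expert mode).
# Timing and command mapping are illustrative assumptions.

import math
import time

GUIDANCE_DELAY = 0.3   # seconds of hesitation before the guide appears
COMMANDS = {"N": "copy", "E": "paste", "S": "cut", "W": "undo"}

def stroke_direction(start, end):
    """Map a stroke vector to one of four compass directions."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360
    return ("E", "N", "W", "S")[int(((angle + 45) % 360) // 90)]

def handle_press(start, get_current_point, released):
    """Poll until release; show guidance only if the user hesitates."""
    t0 = time.time()
    guided = False
    while not released():
        if not guided and time.time() - t0 > GUIDANCE_DELAY:
            print("showing menu guide (novice mode)")
            guided = True
        time.sleep(0.01)
    command = COMMANDS[stroke_direction(start, get_current_point())]
    print(f"{'guided' if guided else 'recall'} selection: {command}")

# Simulated expert use: a quick upward stroke, released immediately.
handle_press(start=(0, 0), get_current_point=lambda: (0, 40), released=lambda: True)
```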

    MARKETING COMMUNICATION OF THE BUKIT SANJAYA TOURIST ATTRACTION, SAMIRAN VILLAGE, SELO, BOYOLALI, IN INCREASING THE NUMBER OF VISITORS

    ABSTRACT. RISK AZAHRA SUNARYA, NIM 18.121.1.114. Marketing Communication of the Bukit Sanjaya Tourism Object, Samiran Village, Selo, Boyolali, in Increasing the Number of Visitors. Thesis, Islamic Communication and Broadcasting Study Program, Department of Da'wah and Communication, Faculty of Usuluddin and Da'wah, Raden Mas Said State Islamic University, Surakarta, 2022. Bukit Sanjaya (Sanjaya Hill) is a natural tourist attraction adorned with a variety of statues. This research was conducted because Bukit Sanjaya is an attraction that can develop through marketing communication. The purpose of this study is to find out how the marketing communication of the Bukit Sanjaya tourist attraction in Samiran Village, Selo, Boyolali increases the number of visitors. The thesis uses a descriptive qualitative approach with content analysis methods. Data were collected through interviews, observation and documentation, validated using data triangulation, and analysed through data reduction, data presentation and the drawing of conclusions, applying Kotler's 7P framework: product, price, promotion, place, people, process and physical evidence. The results indicate that, in its marketing activities, the Bukit Sanjaya tourist attraction uses various kinds of promotion, such as social media, outdoor media and organizing events, to increase the number of visitors and to introduce tourism objects in Selo, including by holding the Selo Expo event. The attraction also provides facilities such as parking lots, toilets, gazebos and angkringan, although several facilities need to be repaired or added to support the needs of visitors, along with additional products such as Sanjaya pure milk, restaurants and homestays. Ticket prices are quite affordable for the tourist area in Selo, Boyolali. Keywords: marketing communication, tourist attraction, Bukit Sanjaya.

    Enhanced Multi-Touch Gestures for Complex Tasks

    Recent technological advances have resulted in a major shift from high-performance notebook and desktop computers -- devices that rely on keyboard and mouse for input -- towards smaller, personal devices like smartphones, tablets and smartwatches, which rely primarily on touch input. Users of these devices typically have a relatively high level of skill in using multi-touch gestures to interact with them, but the multi-touch gesture sets that are supported are often restricted to a small subset of one- and two-finger gestures, such as tap, double tap, drag, flick, pinch and spread. This is not due to technical limitations, since modern multi-touch smartphones and tablets are capable of accepting at least ten simultaneous points of contact. Likewise, human movement models suggest that humans are capable of richer and more expressive forms of interaction that utilize multiple fingers. This suggests a gap between the technical capabilities of multi-touch devices, the physical capabilities of end-users, and the gesture sets that have been implemented for these devices. Our work explores ways in which we can enrich multi-touch interaction on these devices by expanding these common gesture sets. Simple gestures are fine for simple use cases, but if we want to support a wide range of sophisticated behaviours -- the types of interactions required by expert users -- we need equally sophisticated capabilities from our devices. In this thesis, we refer to these more sophisticated, complex interactions as 'enhanced gestures' to distinguish them from common but simple gestures, and to suggest the types of expert scenarios that we are targeting in their design. We do not necessarily need to replace current, familiar gestures, but it makes sense to consider augmenting them as multi-touch becomes more prevalent and is applied to more sophisticated problems. This research explores issues of approachability and user acceptance around gesture sets. Using pinch-to-zoom as an example, we establish design guidelines for enhanced gestures, and systematically design, implement and evaluate two different types of expert gestures, illustrative of the type of functionality that we might build into future systems.
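
    The familiar baseline gesture set mentioned above can be made concrete with a minimal sketch that classifies a two-finger gesture as a pinch or a spread from successive touch frames; the thesis's enhanced gestures go well beyond this. The frame representation and threshold are assumptions for illustration.

```python
# Minimal sketch: classify a two-finger gesture as pinch or spread by comparing
# the distance between the two contact points at the start and end of a frame
# window. Threshold and frame format are assumptions.

import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def classify_two_finger(frame_start, frame_end, threshold=10.0):
    """frame_* are [(x, y), (x, y)] pairs of the two contact points (px)."""
    d0 = distance(*frame_start)
    d1 = distance(*frame_end)
    if d1 - d0 > threshold:
        return "spread"   # fingers moved apart: zoom in
    if d0 - d1 > threshold:
        return "pinch"    # fingers moved together: zoom out
    return "none"

print(classify_two_finger([(100, 100), (200, 100)], [(80, 100), (220, 100)]))  # spread
print(classify_two_finger([(100, 100), (200, 100)], [(130, 100), (170, 100)]))  # pinch
```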

    Integrating Usability Models into Pervasive Application Development

    This thesis describes novel processes in two important areas of human-computer interaction (HCI) and demonstrates ways to combine them appropriately. First, prototyping plays an essential role in the development of complex applications, especially when a user-centred design process is followed. We describe and compare a set of existing toolkits and frameworks that support the development of prototypes in the area of pervasive computing. Based on these observations, we introduce the EIToolkit, which allows the quick generation of mobile and pervasive applications and addresses many issues found in previous work. Its application and use are demonstrated in several projects that build on the toolkit's architecture and implementation. Second, we present novel results and extensions in user modelling, specifically for predicting time to completion of tasks. We extend established concepts such as the Keystroke-Level Model to novel types of interaction with mobile devices, e.g. using optical markers and gestures. The design, creation and validation of this model are presented in some detail in order to show its use and usefulness for making usability predictions. The third part is concerned with the combination of both concepts, i.e. how to integrate user models into the design process of pervasive applications. We first examine current development practice and show generic approaches to this problem. This leads to a concrete implementation of such a solution: an integrated development environment that allows for quickly developing mobile applications, supports the automatic generation of user models, and helps in applying these models early in the design process. This can considerably ease the process of model creation and can replace some types of costly user studies.
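
    For readers unfamiliar with the Keystroke-Level Model extended above, the sketch below shows the basic prediction it makes: expert task time as a sum of standard operator times. The operator values are the commonly cited desktop estimates and the example task is invented; the thesis's mobile extensions (e.g. optical markers, gestures) are not modelled here.

```python
# Sketch of a classic Keystroke-Level Model prediction: expert task time is the
# sum of standard operator times. The values below are the commonly cited
# desktop estimates (Card, Moran & Newell); mobile-specific operators from the
# thesis are not included.

OPERATOR_TIMES = {
    "K": 0.28,   # press a key or button (average skilled typist), seconds
    "P": 1.10,   # point with a mouse to a target on screen
    "H": 0.40,   # move hands between keyboard and mouse ("homing")
    "M": 1.35,   # mental preparation before an action
}

def klm_predict(sequence):
    """Predict completion time for a task encoded as a string of operators."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Example: mentally prepare, point to a text field, click, home to the
# keyboard, then type a five-character word: M P K H K K K K K
task = "MPKHKKKKK"
print(f"predicted time: {klm_predict(task):.2f} s")   # -> about 4.5 s
```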

    Animators of Atlanta: Layering Authenticity in the Creative Industries

    This dissertation explores post-authentic neoliberal animation production culture, tracing the ways authenticity is used as a resource to garner professional autonomy and security during precarious times. Animators engage in two modes of production: the first in creating animated content, and the other in constructing a professional identity. Analyzing animator discourse allows for a nuanced exploration of how these processes interact and congeal into common sense. The use of digital software affects the animator’s capacity to legitimize themselves as creatives and experts; traditional tools become vital for signifying creative authenticity in a professional environment. The practice of decorating one’s desk functions as a tactic to layer creative authenticity, but the meaning of this ritual is changing now that studios are shifting to open spaces while many animators work from home. Layering authenticity on-screen often requires blending techniques from classical Hollywood cinema into animated performance, concomitant with a bid to legitimate the role of the authentic interlocutor for the character. Increasingly, animators feel pressure to layer authenticity online, establishing an audience as a means to hedge against precarity. The recombined self must balance the many methods for layering creative and professional authenticity with the constraints and affordances of their tools, along with the demands of the studio, to yield the cultural capital vital for an animator’s survival in an industry defined at once by its limitless expressive potential and its economic uncertainty.

    Dynamically generated multi-modal application interfaces

    This work introduces a new UIMS (User Interface Management System) that aims to solve numerous problems in the field of user-interface development arising from hard-coded use of user interface toolkits. The presented solution is a concrete system architecture based on the abstract ARCH model, consisting of an interface abstraction layer, a dialog definition language called GIML (Generalized Interface Markup Language), and pluggable interface rendering modules. These components form an interface toolkit called GITK (Generalized Interface ToolKit). With the aid of GITK, one can build an application without explicitly creating a concrete end-user interface. At runtime, GITK can create these interfaces as needed from the abstract specification and run them. GITK thereby equips one application with many interfaces, even kinds of interfaces that did not exist when the application was written. It should be noted that this work concentrates on providing the base infrastructure for adaptive/adaptable systems and does not aim to deliver a complete solution. This work shows that the proposed solution is a fundamental concept needed to create interfaces for everyone, usable everywhere and at any time. The text further discusses the impact of such technology on users and on various aspects of software systems and their development. The main target audience of this work is software developers and people with a strong interest in software development.
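
    The core UIMS idea described above (one abstract dialog specification, rendered into concrete interfaces by pluggable modules at runtime) is illustrated by the hedged sketch below. The dictionary format and renderer functions are invented for illustration and are not GIML syntax or GITK's actual API.

```python
# Illustrative sketch (not GITK's actual API or GIML syntax) of the core UIMS
# idea: the application declares its dialog abstractly, and pluggable renderers
# turn that single specification into different concrete interfaces at runtime.

ABSTRACT_DIALOG = {
    "title": "Audio converter",
    "widgets": [
        {"id": "infile",  "kind": "file_choice", "label": "Input file"},
        {"id": "format",  "kind": "choice",      "label": "Target format",
         "options": ["ogg", "mp3", "flac"]},
        {"id": "convert", "kind": "action",      "label": "Convert"},
    ],
}

def render_text(dialog):
    """A console rendering module."""
    print(f"== {dialog['title']} ==")
    for w in dialog["widgets"]:
        options = f" {w['options']}" if "options" in w else ""
        print(f"[{w['kind']}] {w['label']}{options}")

def render_voice_prompts(dialog):
    """A mocked speech rendering module producing prompts instead of widgets."""
    for w in dialog["widgets"]:
        print(f"say: 'Please provide {w['label'].lower()}.'")

# The same abstract specification, two very different concrete interfaces.
render_text(ABSTRACT_DIALOG)
render_voice_prompts(ABSTRACT_DIALOG)
```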

    Integration of multiple data types in 3-D immersive virtual reality (VR) environments

    Intelligent sensors have begun to play a key part in the monitoring and maintenance of complex infrastructures. Sensors have the capability not only to provide raw data, but also to provide information by indicating the reliability of their measurements. The effect of this added information is a voluminous increase in the total data that is gathered. If an operator is required to perceive the state of a complex system, novel methods must be developed for sifting through enormous data sets. Virtual reality (VR) platforms are proposed as ideal candidates for performing this task: a virtual world allows the user to experience a complex system that is gathering a multitude of sensor data; such environments are referred to as Integrated Awareness models. This thesis presents techniques for visualizing such multiple data sets, specifically graphical, measurement and health data, inside a 3-D VR environment. The focus of this thesis is to develop pathways for generating the required 3-D models without sacrificing visual fidelity. The tasks include creating the visual representation, integrating multi-sensor measurements, creating user-specific visualizations, and evaluating the performance of the completed virtual environment.
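
    As a loose illustration of integrating measurement and health data into a 3-D scene, the sketch below maps each sensor's value and self-reported reliability to a colour for the corresponding model node. The data model, thresholds and colours are assumptions for illustration, not the environment built in the thesis.

```python
# Hedged sketch: each sensor reports a measurement plus a self-assessed
# reliability, and the virtual environment colours the corresponding 3-D node
# so an operator can take in the system state at a glance. All values assumed.

from dataclasses import dataclass

@dataclass
class SensorReading:
    node_id: str          # which element of the 3-D model this sensor instruments
    value: float          # raw measurement (e.g. strain, temperature)
    reliability: float    # 0.0 (unusable) .. 1.0 (fully trusted)

def node_colour(reading, warn_level):
    """Map measurement + reliability to an RGB colour for the node."""
    if reading.reliability < 0.5:
        return (0.5, 0.5, 0.5)   # grey: measurement not trustworthy
    if reading.value > warn_level:
        return (1.0, 0.0, 0.0)   # red: healthy sensor, alarming value
    return (0.0, 0.8, 0.0)       # green: healthy sensor, nominal value

readings = [
    SensorReading("girder_03", value=412.0, reliability=0.95),
    SensorReading("girder_07", value=125.0, reliability=0.30),
]
for r in readings:
    print(r.node_id, node_colour(r, warn_level=300.0))
```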

    The ribosome builder: A software project to simulate the ribosome
