683 research outputs found

    Usability of Web Browsers for Multi-touch Platforms

    A multi-touch interface is an improvement on existing touch screen technology that allows the user to operate an electronic visual display with finger gestures. This work examines how well current web browsers are positioned to take advantage of the next generation of HCI, currently dubbed Natural User Interfaces, which at this point in time are largely multi-touch interfaces.

    HCI models, theories, and frameworks: Toward a multidisciplinary science

    Motivation. The movement of body and limbs is inescapable in human-computer interaction (HCI). Whether browsing the web or intensively entering and editing text in a document, our arms, wrists, and fingers are at work on the keyboard, mouse, and desktop. Our head, neck, and eyes move about attending to feedback marking our progress. This chapter is motivated by the need to match the movement limits, capabilities, and potential of humans with input devices and interaction techniques on computing systems. Our focus is on models of human movement relevant to human-computer interaction. Some of the models discussed emerged from basic research in experimental psychology, whereas others emerged from, and were motivated by, the specific need in HCI to model the interaction between users and physical devices, such as mice and keyboards. As much as we focus on specific models of human movement and user interaction with devices, this chapter is also about models in general. We will say a lot about the nature of models, what they are, and why they are important tools for the research and development of human-computer interfaces.

    Overview: Models and Modeling. By its very nature, a model is a simplification of reality. However, a model is useful only if it helps in designing, evaluating, or otherwise providing a basis for understanding the behaviour of a complex artifact such as a computer system. It is convenient to think of models as lying on a continuum, with analogy and metaphor at one end and mathematical equations at the other. Most models lie somewhere in between. Toward the metaphoric end are descriptive models; toward the mathematical end are predictive models. These two categories are our particular focus in this chapter, and we shall visit a few examples of each. Two models will be presented in detail and in case studies: Fitts' model of the information processing capability of the human motor system and Guiard's model of bimanual control. Fitts' model is a mathematical expression emerging from the rigors of probability theory. It is a predictive model at the mathematical end of the continuum, to be sure, yet when applied as a model of human movement it has characteristics of a metaphor. Guiard's model emerged from a detailed analysis of how humans use their hands in everyday tasks, such as writing, drawing, playing a sport, or manipulating objects. It is a descriptive model, lacking in mathematical rigor but rich in expressive power.
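    For orientation, the Fitts' model discussed in this chapter is most commonly written in HCI in its Shannon formulation; the form below is a standard reference version, not a quotation from the chapter itself:

        MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)

    where MT is the predicted movement time, D is the distance to the target, W is the target width, and a and b are constants determined empirically by regression on observed movements.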

    Tangible cooperative gestures: Improving control and initiative in digital photo sharing

    © 2015 by the authors. This paper focuses on co-present digital photo sharing on a notebook and investigates how this could be supported. While analyzing the current digital photo sharing situation, we noticed that there was a high threshold for visitors to take control of the photo owner's personal computer, resulting in inequity of participation. It was assumed that visitors would have the opportunity to interact with the notebook more freely if this threshold was lowered by distributing the user interface and creating a more public, instead of personal, interaction space. This, in turn, could make them feel more involved and in control during a session, creating a more enjoyable experience. To test these assumptions, a design prototype was created that stimulates participants to use tangible artifacts for cooperative gestures, a promising direction for the future of HCI. The cooperative-gestures situation was compared with the regular digital photo sharing situation, which makes use of a keyboard. In dyads, visitors felt more involved and in control in the cooperative-gestures prototype condition (especially during storytelling), resulting in a more enjoyable digital photo sharing experience.

    A Survey of User Interfaces for Computer Algebra Systems

    This paper surveys work within the Computer Algebra community (and elsewhere) directed towards improving user interfaces for scientific computation during the period 1963–1994. It is intended to be useful to two groups of people: those who wish to know what work has been done and those who would like to do work in the field. It contains an extensive bibliography to assist readers in exploring the field in more depth. Work related to improving human interaction with computer algebra systems is the main focus of the paper. However, the paper includes additional materials on some closely related issues such as structured document editing, graphics, and communication protocols.

    An Essay on Electronic Casebooks: My Pursuit of the Paperless Chase


    The Use of Multiple Slate Devices to Support Active Reading Activities

    Reading activities in the classroom and workplace occur predominantly on paper. Since existing electronic devices do not support these reading activities as well as paper, users have difficulty taking full advantage of the affordances of electronic documents. This dissertation makes three main contributions toward supporting active reading electronically. The first contribution is a comprehensive set of active reading requirements, drawn from three decades of research into reading processes. These requirements explain why existing devices are inadequate for supporting active reading activities. The second contribution is a multi-slate reading system that more completely supports the active reading requirements above. Researchers believe the suitability of paper for active reading is largely due to the fact that it distributes content across different sheets of paper, which are capable of displaying information as well as capturing input. The multi-slate approach draws inspiration from the independent reading and writing surfaces that paper provides to blend the beneficial features of e-book readers, tablets, PCs, and tabletop computers. The development of the multi-slate system began with the Dual-Display E-book, which used two screens to provide richer navigation capabilities than a single-screen device. Following the success of the Dual-Display E-book, the United Slates, a general-purpose reading system consisting of an extensible number of slates, was created. The United Slates consisted of custom slate hardware, specialized interactions that enabled the slates to be used cooperatively, and a cloud-based infrastructure that robustly integrated the slates with users' existing computing devices and workflow. The third contribution is a series of evaluations that characterized reading with multiple slates. A laboratory study with 12 participants compared the relative merits of paper and electronic reading surfaces. Month-long in-situ deployments of the United Slates with graduate students in the humanities found the multi-slate configuration to be highly effective for reading. The United Slates system delivered desirable paper-like qualities that included enhanced reading engagement, ease of navigation, and peace of mind, while also providing superior electronic functionality. The positive feedback suggests that the multi-slate configuration is a desirable method for supporting active reading activities.

    Discoverable Free Space Gesture Sets for Walk-Up-and-Use Interactions

    Advances in technology are fueling a movement toward ubiquity for beyond-the-desktop systems. Novel interaction modalities, such as free space or full body gestures, are becoming more common, as demonstrated by the rise of systems such as the Microsoft Kinect. However, much of the interaction design research for such systems is still focused on desktop and touch interactions. Current thinking on free-space gestures is limited in capability and imagination, and most gesture studies have not attempted to identify gestures appropriate for public walk-up-and-use applications. A walk-up-and-use display must be discoverable, such that first-time users can use the system without any training, flexible, and not fatiguing, especially in the case of longer-term interactions. One mechanism for defining gesture sets for walk-up-and-use interactions is a participatory design method called gesture elicitation. This method has been used to identify several user-generated gesture sets and has shown that user-generated sets are preferred by users over those defined by system designers. However, for these studies to be successfully implemented in walk-up-and-use applications, there is a need to understand which components of these gestures are semantically meaningful (i.e., do users distinguish between using their left and right hand, or are those semantically the same thing?). Thus, defining a standardized gesture vocabulary for coding, characterizing, and evaluating gestures is critical. This dissertation presents three gesture elicitation studies for walk-up-and-use displays that employ a novel gesture elicitation methodology, alongside a novel coding scheme for gesture elicitation data that focuses on features most important to users' mental models. Generalizable design principles, based on the three studies, are then derived and presented (e.g., changes in speed are meaningful for scroll actions in walk-up-and-use displays but not for paging or selection). The major contributions of this work are: (1) an elicitation methodology that aids users in overcoming biases from existing interaction modalities; (2) a better understanding of the gestural features that matter, i.e., those that capture the intent of the gestures; and (3) generalizable design principles for walk-up-and-use public displays.
    Doctoral Dissertation, Computer Science, 201

    Suitability of Virtual Physics and Touch Gestures in Touchscreen User Interfaces for Critical Tasks (original title: Eignung von virtueller Physik und Touch-Gesten in Touchscreen-Benutzerschnittstellen für kritische Aufgaben)

    The goal of this research was to examine whether modern touchscreen interaction concepts that are established on consumer electronics devices such as smartphones can be used in time-critical and safety-critical use cases such as machine control or healthcare appliances. Several prevalent interaction concepts, with and without touch gestures and virtual physics, were tested experimentally in common use cases to assess their efficiency, error rate, and user satisfaction during task completion. Based on the results, design recommendations are given for list scrolling and for horizontal navigation in multi-page software dialogs.
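    To make the "virtual physics" concept evaluated above concrete, the sketch below shows a minimal kinetic (momentum) list-scrolling update in Python. It is an illustrative assumption only: the thesis does not specify this implementation, and the friction constant, stop threshold, and frame rate are invented for the example.

        # Minimal sketch of kinetic ("virtual physics") list scrolling.
        # Not taken from the thesis; all constants are assumed for illustration.
        def kinetic_scroll(offset: float, velocity: float, dt: float,
                           friction: float = 4.0) -> tuple[float, float]:
            """Advance the scroll position by one animation frame.

            offset   -- current scroll position in pixels
            velocity -- current velocity in pixels/second (set when the finger lifts)
            dt       -- frame duration in seconds
            friction -- decay constant controlling how quickly the flick dies out
            """
            offset += velocity * dt
            velocity *= max(0.0, 1.0 - friction * dt)  # geometric decay per frame
            if abs(velocity) < 1.0:                    # snap to rest below a small threshold
                velocity = 0.0
            return offset, velocity

        # Example: simulate a flick of 1200 px/s animated at 60 frames per second.
        offset, velocity = 0.0, 1200.0
        while velocity != 0.0:
            offset, velocity = kinetic_scroll(offset, velocity, dt=1 / 60)
        print(f"list came to rest after scrolling {offset:.0f} px")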

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; Date of degree conferral: 2000-03-29; Type of degree: course doctorate; Degree: Doctor of Engineering; Degree certificate number: 博工第4717号; Graduate school and department: Graduate School of Engineering, Department of Information Engineering