An Integrated Taxonomy of Online Help Based on User Interface View
We developed a comprehensive help taxonomy that combines user interface and help system attributes, ranging from the help access interface, presentation, and supporting knowledge structure to implementation. The taxonomy systematically identifies independent axes along which help can be categorized, which together enclose a space of help categories in which existing help research can be placed, and it identifies distinct help software architectural features that contrast the pros and cons of different approaches to implementing help systems. The taxonomy also projects a vision of what help can be like if it keeps pace with advances in user interface technology, and of desirable design features of help system architectures that evolve in step with user interface software tools.
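Purely as an illustration, the independent axes described above can be pictured as orthogonal dimensions of a classification record. The C sketch below assumes four axes named after the attributes the abstract lists (access interface, presentation, knowledge structure, implementation); the enum members are hypothetical examples, not the paper's actual category set.

#include <stdio.h>

/* Hypothetical axis values; the real taxonomy defines its own sets. */
typedef enum { ACCESS_MENU, ACCESS_HOTKEY, ACCESS_CONTEXT_QUERY } AccessInterface;
typedef enum { PRESENT_TEXT, PRESENT_GRAPHICS, PRESENT_ANIMATION, PRESENT_AUDIO } Presentation;
typedef enum { KNOW_STATIC_PAGES, KNOW_TASK_MODEL, KNOW_APPLICATION_MODEL } KnowledgeStructure;
typedef enum { IMPL_STANDALONE, IMPL_TOOLKIT_HOOKS, IMPL_MODEL_DRIVEN } Implementation;

/* A help system is a point in the space spanned by the axes. */
typedef struct {
    const char        *name;
    AccessInterface    access;
    Presentation       presentation;
    KnowledgeStructure knowledge;
    Implementation     implementation;
} HelpSystemCategory;

int main(void) {
    /* Example classification of a hypothetical animated help system. */
    HelpSystemCategory h = { "context-sensitive animated help",
                             ACCESS_CONTEXT_QUERY, PRESENT_ANIMATION,
                             KNOW_APPLICATION_MODEL, IMPL_MODEL_DRIVEN };
    printf("%s: axes = (%d, %d, %d, %d)\n", h.name,
           h.access, h.presentation, h.knowledge, h.implementation);
    return 0;
}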
Multimedia Help: A Literature Survey and a Preliminary Experimental Design
Multimedia has become widely accepted as an effective interface for exploratory education. Despite this fact, little is known about this new form of interface: in which contexts it is appropriate, and which media should be used for presenting which kinds of information. Designing a multimedia interface is currently based on programmers' intuitions and, increasingly, on the design skills of graphic artists and educators whose experience lies in designing presentations in those media. Multimedia applied to a help interface is a dynamic form of communication capable of bringing as many media as appropriate into use. However, the lack of understanding of the tie between efficacy of learning in a multiple-media help environment and effectiveness of users' operational performance implies a need for studies of this relationship. What we want in the long run from these studies is a model that helps predict how users learn from different and integrated media and how they translate the information learned into the activities required to operate in interface environments.
Sun has a strong interest in supporting multimedia help in order to ease the learning process of the increasingly sophisticated OPEN LOOK environment; the target audience is non-technical users. The objective of this Collaborative Research (CR) between Georgia Tech and SunSoft, Inc. is to investigate, through an experiment, the effectiveness of various mappings from help information to media. The investigation will explore, singly and in combination, the use of media such as text, static graphics, video, speech audio, and context-sensitive animation in the context of online help. Expected results are an analysis of the experimental data, discrete recommendations for integrating multimedia into Sun's online help support, and the software architecture for a multimedia help prototype to be developed for the experiment.
One Step Towards Multimedia Help: Adding Context-Sensitive Animated Help as One Ingredient
Users often have difficulty relating general help information to the specific computer tasks they are attempting to complete. A factor contributing to this problem is the distance between the space in which help is presented and the space in which users must apply what they learn from help. Context-sensitive multimedia help can potentially reduce this distance. This paper discusses the use of context-sensitive graphical animation as one medium of help presentation in which the user's task context is synthesized into procedural help demonstrations. The paper also discusses how support for this kind of help has been integrated into an application environment, enabling automatic generation of such help for various applications.
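A minimal sketch of the context-synthesis idea, under assumed names: a procedural help step is written against a role ("the scrollbar") and is instantiated at presentation time against the user's live task context, so the demonstration plays out on the user's own widgets rather than in a generic help window. All types and functions here are illustrative, not the paper's implementation.

#include <stdio.h>

/* Hypothetical task context: the object a help step refers to,
   resolved to its label and screen position in the user's session. */
typedef struct {
    const char *label;
    int x, y;
} UiObject;

/* A generic procedural step, phrased against a role, not a widget. */
typedef struct {
    const char *verb;   /* e.g. "drag" */
    const char *role;   /* e.g. "scrollbar" */
} HelpStep;

/* Synthesize the user's context into the demonstration: the animated
   pointer is sent to the actual widget, and the narration names it. */
static void demonstrate(const HelpStep *step, const UiObject *target) {
    printf("narrate: %s the %s (\"%s\")\n", step->verb, step->role, target->label);
    printf("animate: move ghost pointer to (%d, %d) and %s\n",
           target->x, target->y, step->verb);
}

int main(void) {
    HelpStep step = { "drag", "scrollbar" };
    UiObject scrollbar = { "document scrollbar", 612, 340 }; /* from the live UI */
    demonstrate(&step, &scrollbar);
    return 0;
}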
Supporting Adaptive Interfaces in a Knowledge-based User Interface Environment
Developing an adaptive interface requires a user interface that can be adapted, a user model, and an adaptation strategy. Research on adaptive interfaces has in the past suffered from a lack of supporting tools that allow an interface to be easily created and modified. Moreover, adding adaptivity to a user interface has so far not been supported by any user interface system or environment.
In this paper, we present an overview of a knowledge-based model of the User Interface Design Environment (UIDE). UIDE uses knowledge of an application to support the run-time execution of the application's interface and to provide various kinds of automatic help. We show how the knowledge model can be used as the basic construct of a user model. Finally, we present adaptive interface and adaptive help behaviors with which the current UIDE architecture can be extended using the user model. These behaviors are options from which an application designer can choose for an application interface.
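One way to picture the knowledge model doubling as a user model, as an assumption-laden sketch only: each action in the application model carries a per-user annotation, so the same structure that drives the interface also records what the user has and has not exercised. The action names and the annotation are invented for illustration.

#include <stdio.h>

/* Sketch: an application-model action annotated with user-model data.
   The point is that the user model is an overlay on the same knowledge
   the interface runs from, not a separate hand-built structure. */
typedef struct {
    const char *action;     /* from the application model */
    int         times_used; /* user-model annotation */
} ModeledAction;

int main(void) {
    ModeledAction model[] = {
        { "open-file",  14 },
        { "set-margin",  0 },
        { "print",       5 },
    };
    int n = sizeof model / sizeof model[0];

    /* An adaptation strategy queries the overlay, e.g. to pick actions
       the user has never tried as candidates for proactive help. */
    for (int i = 0; i < n; i++)
        if (model[i].times_used == 0)
            printf("candidate for adaptive help: %s\n", model[i].action);
    return 0;
}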
The Animation Server: A General Purpose Tool for Animating User Interfaces
Our approach to user interface animation involves simulating the interaction of a user with the interface by synthetically generating the input events that drive the session. The interaction is made explicit by displaying the behavior of input devices audio-visually. Such "animation" is both educational and functional, and has the potential to become a powerful new medium in the graphical user interface domain. We describe the construction of a general purpose tool for animating user interfaces: the animation server. Clients drive the server with textual scripts that describe the interaction. These scripts may contain constructs for obtaining application context information at runtime and for synchronizing with other media servers. We present a few potential applications for animation servers, including a groupware package for loosely coupled collaboration.
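The script syntax below is invented; the abstract does not specify one. This sketch only shows the shape of a client handing the server a textual script that contains a runtime context query (the $query(...) placeholder) and a synchronization point with an audio server.

#include <stdio.h>

/* Hypothetical client of the animation server. The script language,
   including $query(...) context lookups and sync points, is an
   invented illustration of the constructs the abstract mentions. */
static void send_to_animation_server(const char *script) {
    /* A real client would write this over its server connection;
       here we just show the script that would be sent. */
    fputs(script, stdout);
}

int main(void) {
    const char *script =
        "move-pointer-to $query(location, \"Save\" button)\n"
        "press-button 1\n"
        "sync audio-server \"save-narration\"\n"
        "release-button 1\n";
    send_to_animation_server(script);
    return 0;
}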
Multimedia Help: A Literature Survey and a Preliminary Experimental Design
Issued as Report and Memorandum [nos. 1-2], Project no. C-50-613. Report title: "Multimedia Help: A Literature Survey and a Preliminary Experimental Design". Report author: Piyawadee "Noi" Sukaviriya.
Built-In User Modelling Support, Adaptive Interfaces, and Adaptive Help in UIDE
Developing an adaptive interface requires a user interface that can be adapted, a user model, and an adaptation strategy. Past research on adaptive interfaces has lacked support from user interface tools that allow designers to easily create and modify an interface. Moreover, current user interface tools provide no support for user models that can collect task-oriented information about users.
In this paper, we present the User Interface Design Environment (UIDE), which provides automatic support for collecting task-oriented information about users. UIDE uses the high-level specifications in its application model as the basic construct for a user model. Using this model, UIDE can provide a number of adaptive features as interface design options: 1) adapting menu and dialogue box layouts; 2) suggesting macros to users; and 3) adaptive help.
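As a hedged sketch of feature 2) above: if the user model logs action sequences, a trivial strategy can notice a frequently repeated pair of actions and propose it as a macro. The threshold and action names are assumptions for illustration, not UIDE's actual mechanism.

#include <stdio.h>
#include <string.h>

#define SUGGEST_THRESHOLD 3  /* assumed cutoff for illustration */

/* Count how often the pair (a, b) occurs consecutively in the log. */
static int count_pair(const char *log[], int n, const char *a, const char *b) {
    int count = 0;
    for (int i = 0; i + 1 < n; i++)
        if (strcmp(log[i], a) == 0 && strcmp(log[i + 1], b) == 0)
            count++;
    return count;
}

int main(void) {
    /* Hypothetical task-oriented log collected by the user model. */
    const char *log[] = { "select-all", "copy", "select-all", "copy",
                          "paste", "select-all", "copy" };
    int n = sizeof log / sizeof log[0];

    if (count_pair(log, n, "select-all", "copy") >= SUGGEST_THRESHOLD)
        printf("suggest macro: select-all + copy\n");
    return 0;
}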
Automatic Generation of Context-Sensitive "Show and Tell" Help
Textual help alone has become insufficient to guide users through procedural tasks. Help for a direct manipulation interface requires textual help to be verbose in order to describe procedures accurately and specifically. In this paper, we attempt to alleviate this problem by presenting procedural help through coordinated textual and animated help. Our help presentation is created dynamically at runtime and is derived from the underlying user interface representations in the application model. Generating help from these representations allows the help content to be tailored specifically to the current context.
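A sketch, with invented names, of why deriving both media from one representation keeps them coordinated: each step of a procedure stored in the model yields both a sentence of text and an animation command, so "show" and "tell" cannot drift apart. The real UIDE representation is far richer; this is only the shape.

#include <stdio.h>

/* Hypothetical slice of an application-model procedure step. */
typedef struct {
    const char *action;  /* e.g. "click" */
    const char *object;  /* e.g. "Print button" */
    int x, y;            /* location resolved at runtime */
} ProcedureStep;

/* Both presentations are generated from the same step, so the text
   and the animation stay coordinated by construction. */
static void present_step(const ProcedureStep *s) {
    printf("text:    %s the %s.\n", s->action, s->object);
    printf("animate: pointer -> (%d, %d); %s\n", s->x, s->y, s->action);
}

int main(void) {
    ProcedureStep steps[] = {
        { "open",  "File menu",    20, 10 },
        { "click", "Print button", 48, 96 },
    };
    for (int i = 0; i < 2; i++)
        present_step(&steps[i]);
    return 0;
}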
Synthesized Interaction on the X Window System
We have come to regard the human user, manipulating physical input devices, as the sole driver of interaction in the graphical workspace. It is conceivable that for a variety of applications, such as help and tutorial systems, macro-by-example systems, session-playback systems, and collaborative work, we may require an alternative agent to perform tasks in the workspace alongside the user. In this paper we describe a relatively non-intrusive and portable scheme for supporting such "synthesized interaction" on the X Window System, and illustrate how toolkits may be instrumented to cooperate with such an agent at runtime by providing information about the location of objects in their interfaces. In particular, we describe the integration of synthesized interaction in the Artkit toolkit, which is structurally similar to most modern toolkits and should serve as a relevant example.
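The paper's own instrumentation scheme is not reproduced here; as a present-day analogue, the standard XTEST extension can inject the same kind of synthesized input events into an X session. A minimal example, assuming a running X server and the Xtst library (link with -lX11 -lXtst); the coordinates are arbitrary:

#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return EXIT_FAILURE;
    }

    /* Synthesize a pointer move and a left-button click, the kind of
       event stream a synthesized-interaction agent would inject. */
    XTestFakeMotionEvent(dpy, -1 /* current screen */, 200, 150, 0);
    XTestFakeButtonEvent(dpy, 1, True,  0);  /* press   */
    XTestFakeButtonEvent(dpy, 1, False, 0);  /* release */

    XFlush(dpy);
    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}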
A Second Generation User Interface Design Environment: The Model and the Runtime Architecture
Several obstacles in the user interface design process distract a developer from designing a good user interface. One problem is the lack of an application model that keeps the designer's work grounded in the application. Another is the massive amount of user interface programming needed to achieve a desired interface and to provide users with correct help information about it. In this paper, we discuss an application model that captures information about an application at a high level and maintains mappings from the application to specifications of a desired interface. The application model is then used to control the dialogues at runtime and can be used by a help component to automatically generate animated and textual help. Changes to the specifications in the application model automatically result in behavioral changes in the interface.
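A hedged sketch of how such a model can control dialogue at runtime: each action in the model records a precondition and a mapping to an interface element, and the runtime enables or disables that element by evaluating the precondition against current application state. The names and the state representation are assumptions, not UIDE's actual data structures.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical application state. */
typedef struct {
    bool document_open;
    bool selection_exists;
} AppState;

/* An application-model action with a precondition and the interface
   element it is mapped to. Editing this record changes the runtime
   behavior of the interface with no extra UI code. */
typedef struct {
    const char *name;
    const char *menu_item;                 /* mapping to the interface */
    bool      (*precondition)(const AppState *);
} Action;

static bool can_cut(const AppState *s)   { return s->document_open && s->selection_exists; }
static bool can_close(const AppState *s) { return s->document_open; }

int main(void) {
    Action actions[] = {
        { "cut",   "Edit/Cut",   can_cut   },
        { "close", "File/Close", can_close },
    };
    AppState state = { .document_open = true, .selection_exists = false };

    /* The dialogue controller walks the model, not hand-written UI code. */
    for (int i = 0; i < 2; i++)
        printf("%s: %s\n", actions[i].menu_item,
               actions[i].precondition(&state) ? "enabled" : "greyed out");
    return 0;
}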