4 research outputs found

    CREATING TOUCHPANEL GRAPHICS FOR CONTROL SYSTEMS

    More often than system designers would like to admit, there is a gap between how audiovisual control systems are implemented and how easy they are for a novice or casual user to operate. System designers and programmers are often hampered by the software tools provided by industry manufacturers and cannot reliably create graphical interfaces that match the sophistication of the systems they are asked to program and install. Consumer trends in portable touchscreen devices, pioneered by products such as the Apple iPhone, light a way toward elegantly solving the graphical user interface problem for audiovisual control systems. Since expensive specialized hardware can be replaced by readily available consumer devices, and a wide variety of tools exists for creating content, alternatives to the current methods of designing the graphical user interface for audiovisual systems are ripe for discovery. Using the latest release of Autodesk Maya 2011, with features such as Python and PyMEL scripting, we have developed scripts that generate graphical user interface content for use with audiovisual control system hardware. We also explore the potential for a standalone development environment, so that audiovisual designers and programmers are not required to operate Maya or adjust scripts to generate content. Given this new level of control over the graphical user interface, coupled with the flexibility of control system central processor programming, a truly powerful, intuitive, and groundbreaking control interface can finally be realized.
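    A minimal sketch of the scripted approach described above, assuming Maya 2011 or later with PyMEL available; the make_button helper, its parameters, and the grid layout are illustrative, not the authors' actual scripts.

        import pymel.core as pm

        def make_button(label, color=(0.2, 0.4, 0.8)):
            """Build a beveled slab to render as a glossy touchpanel button.
            The label is used only for node names in this sketch."""
            slab, _ = pm.polyCube(width=2.0, height=0.1, depth=0.75,
                                  name=label + '_geo')
            pm.polyBevel(slab, offset=0.02)               # soften the edges
            shader = pm.shadingNode('blinn', asShader=True,
                                    name=label + '_mat')
            shader.color.set(color)                       # button face color
            pm.select(slab)
            pm.hyperShade(assign=shader)                  # bind shader to geometry
            return slab

        # Lay four buttons out in a 2 x 2 grid, then render one frame of it.
        for i, label in enumerate(['Power', 'Source', 'VolUp', 'VolDown']):
            button = make_button(label)
            button.translate.set(((i % 2) * 2.5, 0, (i // 2) * 1.0))
        pm.render('persp')  # render the layout through the perspective camera

    Because the geometry, shading, and layout are all driven from Python, the same script can regenerate an entire panel's worth of imagery whenever the design changes, which is the automation the abstract describes.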

    Towards a Universal Toolkit Model for Structures

    Model-based toolkit widgets have the potential for (i) increasing automation and (ii) making it easy to substitute one user interface for another. Current toolkits, however, have focused only on the automation benefit, as they do not allow different kinds of widgets to share a common model. Inspired by programming languages, operating systems, and database systems that support a single data structure, we present an interface that can serve as a model not only for the homogeneous model-based structured widgets identified so far (tables and trees) but also for several heterogeneous structured widgets such as forms, tabbed panes, and multi-level browsers. We identify an architecture that allows this model to be added to an existing toolkit by automatically creating adapters between it and existing widget-specific models. We present several full examples to illustrate how such a model can increase both the automation and the substitutability of the toolkit. We show that our approach retains model purity and, in comparison to current toolkits, does not increase the effort to create existing model-aware widgets.
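    As a rough illustration of the shared-model idea (not the paper's actual interface), the sketch below defines a single observable structure that a table-style view and a tree-style view both consume through adapters; every class name here is hypothetical.

        class StructureModel:
            """One uniform model: a named node with a value and children."""
            def __init__(self, name, value=None):
                self.name, self.value = name, value
                self.children, self._listeners = [], []

            def observe(self, callback):
                self._listeners.append(callback)

            def add_child(self, node):
                self.children.append(node)
                self._notify('insert', node)

            def set_value(self, value):
                self.value = value
                self._notify('update', self)

            def _notify(self, kind, node):
                for callback in self._listeners:
                    callback(kind, node)

        class TableAdapter:
            """Flattens nodes into (name, value) rows for a table widget."""
            def __init__(self, model):
                model.observe(lambda kind, node:
                              print('table: refresh row', node.name))

        class TreeAdapter:
            """Mirrors the hierarchy directly into a tree widget."""
            def __init__(self, model):
                model.observe(lambda kind, node:
                              print('tree: %s node %s' % (kind, node.name)))

        root = StructureModel('settings')
        TableAdapter(root)
        TreeAdapter(root)
        root.add_child(StructureModel('volume', 7))  # both views update at once

    Because both adapters observe the same model, swapping the table for the tree (or attaching both at once) requires no change to the code that edits the structure, which is the substitutability benefit the abstract claims.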

    Clique: Perceptually Based, Task Oriented Auditory Display for GUI Applications

    Screen reading is the prevalent approach for presenting graphical desktop applications in audio. The primary function of a screen reader is to describe what the user encounters when interacting with a graphical user interface (GUI). This straightforward method lets people with visual impairments hear exactly what is on the screen, but it has significant usability problems in a multitasking environment: screen reader users must infer the state of ongoing tasks spanning multiple graphical windows from a single, serial stream of speech. In this dissertation, I explore a new approach to auditory display of GUI programs. With this method, the display describes concurrent application tasks using a small set of simultaneous speech and sound streams. The user listens to and interacts solely with this display, never with the underlying graphical interfaces. Scripts support this level of adaptation by mapping GUI components to task definitions. Evaluation of this approach shows improvements in user efficiency, satisfaction, and understanding with little development effort. To develop this method, I studied the literature on existing auditory displays, working user behavior, and theories of human auditory perception and processing. I then conducted a user study to observe the problems encountered and the techniques employed by users interacting with an ideal auditory display: another human being. Based on my findings, I designed and implemented a prototype auditory display, called Clique, along with scripts adapting seven GUI applications. I concluded my work by conducting a variety of evaluations of Clique. The results of these studies show the following benefits of Clique over the state of the art for users with visual impairments (1-5) and mobile sighted users (6):
    1. Faster and more accurate access to speech utterances through concurrent speech streams.
    2. Better awareness of peripheral information via concurrent speech and sound streams.
    3. Increased information bandwidth through concurrent streams.
    4. More efficient information seeking, enabled by ubiquitous tools for browsing and searching.
    5. Greater accuracy in describing unfamiliar applications learned through a consistent, task-based user interface.
    6. Faster completion of email tasks in a standard GUI after exposure to those tasks in audio.
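    As a loose illustration of the scripting layer described above (Clique's real API is not given in the abstract), the sketch below maps GUI components to task definitions and routes each task's events to a named output stream; all names here are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str         # the task the user hears about
            widget_path: str  # GUI component the task is mapped from
            stream: str       # which concurrent stream announces it

        # A per-application script: adapt email widgets into task definitions.
        EMAIL_TASKS = [
            Task('compose message', 'MainWindow/ComposeButton', 'primary-speech'),
            Task('search inbox',    'MainWindow/SearchBox',     'primary-speech'),
            Task('new mail alert',  'MainWindow/InboxList',     'peripheral-sound'),
        ]

        def announce(task, event):
            """Route an event to the task's stream, not one serial readout."""
            print('[%s] %s: %s' % (task.stream, task.name, event))

        announce(EMAIL_TASKS[0], 'editing body')
        announce(EMAIL_TASKS[2], '2 unread messages')  # heard alongside, not after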

    Automatic Generation of Device User-Interfaces?

    No full text