    Físchlár on a PDA: handheld user interface design to a video indexing, browsing and playback system

    The Físchlár digital video system is a web-based system for recording, analysing, browsing and playing back TV programmes, and currently has about 350 users. Although the user interface to the system is designed for desktop PCs with a large screen and a mouse, we are developing versions that allow mobile devices to access the system to record and browse video content. In this paper we consider the design of a PDA user interface for video content browsing. We use a design framework we developed previously to specify various video browsing interface styles, making it possible to design for all potential users and their various environments. We then apply this framework to the particulars of the PDA's small, touch-sensitive screen and the mobile environment in which it will be used. The resultant video browsing interfaces are highly interactive yet simple, require relatively little visual attention and focus, and can be used comfortably in a mobile situation to browse the available video content. To date we have developed and tested such interfaces on a Revo PDA, and are in the process of developing others.

    Advanced display object selection methods for enhancing user-computer productivity

    The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphical user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUIs allow user selection by (1) moving a cursor with a pointing device such as a mouse, trackball, joystick or touchscreen, and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with the various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as those of notebook computers. Considering the number of work-hours spent pointing and clicking across all styles of available graphical user interfaces, the cost/benefit of applying this method is substantial, with the potential for increasing productivity across thousands of users and applications.

    Levitate: Interaction with Floating Particle Displays

    This demonstration showcases the current state of the art for the levitating particle display from the Levitate Project. We show a new type of display consisting of floating voxels: small levitating particles that can be positioned and moved independently in 3D space. Phased ultrasound arrays are used to acoustically levitate the particles. Users can interact directly with each particle using pointing gestures. This allows users to walk up and interact without any user instrumentation, creating an exciting opportunity to deploy these tangible displays in public spaces in the future. This demonstration explores the design potential of floating voxels and how they may be used to create new types of user interfaces.

    GUIs Gain Prominence: Baton and Profound

    Many providers of text-based information systems are creating graphical user interfaces (GUIs) for their old systems. Some of the most imaginative GUI examples are being developed by small online services, such as EyeQ by DataTimes and the Baton system from NewsNet. These and other examples are discussed.

    Interference Alignment Through User Cooperation for Two-cell MIMO Interfering Broadcast Channels

    This paper focuses on two-cell multiple-input multiple-output (MIMO) Gaussian interfering broadcast channels (MIMO-IFBC) with K cooperating users on the cell boundary of each BS. It corresponds to a downlink scenario for cellular networks with two base stations (BSs) and K users equipped with Wi-Fi interfaces enabling cooperation among users on a peer-to-peer basis. In this scenario, we propose a novel interference alignment (IA) technique exploiting user cooperation. Our proposed algorithm achieves degrees of freedom (DoF) of 2K when each BS and each user have M = K+1 transmit antennas and N = K receive antennas, respectively. Furthermore, the algorithm requires only a small amount of channel feedback information with the aid of the user cooperation channels. The simulations demonstrate not only that the analytical results are valid, but also that the achievable DoF of our proposed algorithm outperforms those of conventional techniques.
    Comment: This paper will appear in IEEE GLOBECOM 201
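    The antenna configuration and DoF claim in the abstract can be sketched as a tiny helper; the function name and return format are illustrative, not from the paper:

```python
# Sketch of the abstract's parameter relations for two-cell MIMO-IFBC
# with K cooperating cell-edge users per BS: M = K+1 transmit antennas,
# N = K receive antennas, achievable total DoF = 2K.
def achievable_dof(K: int) -> dict:
    """Return the antenna counts and achievable DoF stated in the abstract."""
    return {"M_tx": K + 1, "N_rx": K, "dof": 2 * K}

print(achievable_dof(3))  # {'M_tx': 4, 'N_rx': 3, 'dof': 6}
```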

    Haptic Interface for Center of Workspace Interaction

    We build upon a new interaction style for 3D interfaces, called center of workspace interaction. This style of interaction is defined with respect to a central fixed point in 3D space, conceptually within arm's length of the user. For demonstration, we show a haptically enabled fish tank VR that utilizes a set of interaction widgets to support rapid navigation within a large virtual space. Fish tank VR refers to the creation of a small but high-quality virtual reality that combines a number of technologies, such as head-tracking and stereo glasses, to their mutual advantage.

    The challenges of mobile devices for human computer interaction

    Current mobile computing devices such as palmtop computers, personal digital assistants (PDAs) and mobile phones, and future devices such as Bluetooth- and GSM-enabled cameras and music players, have many implications for the design of the user interface. These devices share a common problem: attempting to give users access to powerful computing services and resources through small interfaces, which typically have tiny visual displays, poor audio interaction facilities and limited input techniques. They also introduce new challenges, such as designing for intermittent and expensive network access, and designing for position awareness and context sensitivity. No longer can designers base computing designs around the traditional model of a single user working with a personal computer at his/her workplace. In addition to mobility and size requirements, mobile devices will also typically be used by a larger population spread than traditional PCs, and without any training or support networks, whether formal or informal. Furthermore, unlike early computers which had many users per computer, and PCs with usually one computer per user, a single user is likely to own many mobile devices [1] which they interact with in different ways and for different tasks.

    Hidden Pursuits: Evaluating Gaze-selection via Pursuits when the Stimuli's Trajectory is Partially Hidden

    The idea behind gaze interaction using Pursuits is to leverage the smooth pursuit eye movements humans perform when following moving targets. However, humans can also anticipate where a moving target will reappear if it temporarily hides from view. In this work, we investigate how well users can select targets using Pursuits in cases where the target's trajectory is partially invisible (HiddenPursuits): e.g., can users select a moving target that temporarily hides behind another object? Although HiddenPursuits has not previously been studied in the context of interaction, understanding how well users can perform HiddenPursuits presents numerous opportunities, particularly for small interfaces where a target's trajectory can cover areas outside the screen. We found that users can still select targets quickly via Pursuits even when their trajectory is up to 50% hidden, albeit with longer selection times as the hidden portion grows. We discuss how gaze-based interfaces can leverage HiddenPursuits for an improved user experience.

    User friendly signal processing web services for annotators in AVATecH and AUVIS

    User-friendly signal processing web services: the joint Max Planck-Fraunhofer project AVATecH aims to support the very time-intensive work of annotating audio and video recordings by letting signal processing modules (recognizers) assist annotators.

    We designed a small, flexible framework in which XML metadata describes the input, output and settings of recognizers. Building blocks are audio and video files, annotation tiers and numerical data, packaged in simple formats. Text pipes allow flexibility in implementation details. The popular TLA ELAN software even lets the user control recognizers directly in their annotation environment: it generates consistent user interfaces for all installed recognizers based on their metadata.

    We realized that full recognizers can be inconvenient for the user to install: hardware, operating system and license requirements can add complexity. AVATecH supported intranet recognizers early on, but those are limited by the need for shared network drives between user and server.

    Recently, we developed a system where recognizers run on a server using the free, open-source CLAM software. With suitable configuration, CLAM can run any command-line tool, controlled by remote REST requests. On the user side, only a small proxy tool is installed instead of a real recognizer: the tool dynamically mimics a recognizer based on the same metadata as before, but actually transfers data to the remote server where the real recognizer is installed, and back.

    We present details of our setup and workflow, with an outlook towards future extensions within the successor project, AUVIS.
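    The proxy idea described above can be sketched as follows; the server URL, project layout and endpoint paths are illustrative assumptions, not the actual AVATecH/CLAM API:

```python
# Sketch of a proxy-side step: instead of running a recognizer locally,
# build the REST request that would ship an input file to a remote
# CLAM-style project. The request is prepared but not sent, so the
# sketch has no network dependency; a real proxy would urlopen() it.
from urllib.request import Request

CLAM_BASE = "http://clam.example.org/recognizer"  # assumed server URL

def build_upload_request(project: str, filename: str, data: bytes) -> Request:
    """Prepare the PUT that uploads an input file into a server-side
    project workspace (endpoint layout is hypothetical)."""
    return Request(
        url=f"{CLAM_BASE}/{project}/input/{filename}",
        data=data,
        method="PUT",
    )

req = build_upload_request("session1", "audio.wav", b"...wav bytes...")
print(req.get_method(), req.full_url)
```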

    Interactive Spaces. Models and Algorithms for Reality-based Music Applications

    Reality-based interfaces link the user's physical space with the computer's digital content, bringing in intuition, plasticity and expressiveness. Moreover, applications designed upon motion- and gesture-tracking technologies involve many psychological features, such as space cognition and implicit knowledge. All these elements form the background of three presented music applications, employing the characteristics of three different interactive spaces: a user-centered three-dimensional space, a floor-based two-dimensional camera space, and a small sensor-centered three-dimensional space. The basic idea is to exploit each application's spatial properties in order to convey musical knowledge, allowing users to act inside the designed space and to learn through it in an enactive way.