37 research outputs found

    An Investigation of Target Acquisition with Visually Expanding Targets in Constant Motor-space

    Get PDF
    Target acquisition is a core part of modern computer use. Fitts’ law has repeatedly been shown to predict performance in target acquisition tasks, even with targets that change size as the cursor approaches. Research into expanding targets has focussed on targets that expand in both visual- and motor-space. We investigate whether a visual expansion with no change in motor-space offers any performance benefit, studying constant motor-space visual expansion both in abstract pointing tasks (based on the ISO 9241-9 standard) and in a realistic deployment of the technique within fisheye menus. Our fisheye menu system eliminates the ‘hunting effect’ of target acquisition observed in Bederson’s initial proposal of fisheye menus, and in an evaluation we show that it allows faster selection times and is subjectively preferred to Bederson’s menus. We also show that visually expanding targets can improve selection times in target acquisition tasks, particularly with small targets.
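
    As a reference point for the Fitts’ law predictions discussed in the abstract, the sketch below computes movement time under the Shannon formulation; the intercept and slope coefficients are illustrative placeholders, not values reported in the paper.

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predict movement time (s) with the Shannon form of Fitts' law.

    MT = a + b * log2(D / W + 1)

    The intercept `a` and slope `b` are placeholder values; in practice
    they are fit per device and participant from experimental data.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# Example: a 512-px movement to a 16-px target vs. a 32-px target.
# If the expansion is purely visual (motor-space unchanged), Fitts' law
# alone predicts no change; the paper tests whether a benefit appears anyway.
print(fitts_mt(512, 16))  # ~0.86 s with the placeholder coefficients
print(fitts_mt(512, 32))  # ~0.71 s if motor-space also expanded
```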

    A new method for interacting with multi-window applications on large, high resolution displays

    Get PDF
    Physically large display walls can now be constructed using off-the-shelf computer hardware. The high resolution of these displays (e.g., 50 million pixels) means that a large quantity of data can be presented to users, so the displays are well suited to visualization applications. However, current methods of interacting with display walls are somewhat time consuming. We have analyzed how users solve real visualization problems using three desktop applications (XmdvTool, Iris Explorer and ArcView), and used a new taxonomy to classify users’ actions and illustrate the deficiencies of current display wall interaction methods. Following this, we designed a novel method for interacting with display walls, which aims to let users interact as quickly as when a visualization application is used on a desktop system. Informal feedback gathered from our working prototype shows that interaction is both fast and fluid.

    Optimization Approaches to Adaptive Menus

    Get PDF
    Graphical menus are vital components of today’s graphical interfaces, offering essential controls. However, few studies have modelled the performance of a menu, and previously proposed menu optimization methods have largely concentrated on reshaping the layout of the whole menu system. To model menu performance, this thesis extends the Search-Decision-Pointing model by introducing two additional factors: a cost function and a semantic function. The cost function is a penalty that reduces the user’s expertise with a menu layout according to the degree of modification made to the menu. The semantic function is a reward that encourages items with strong relations to be positioned close to each other. Centered on this menu performance model, several optimization methods have been implemented, each improving menu performance through a distinctive strategy, such as increasing item size or reducing item pointing distance. Three test cases were used to evaluate the optimization methods in simulation software that displays graphical user interfaces and emulates the menu use of real users. The results show that the fundamental heuristic search algorithm successfully improved menu performance in all test cases, and that the other optimization methods further increased menu performance by 3% to 8% depending on the test case. In addition, increasing the size of an item offers surprisingly little benefit, whereas reducing item pointing distance greatly improves menu performance. Positioning items by their semantic relations may also enhance group saliency. On the other hand, optimization methods may not always produce usable menus due to design constraints, so menu performance optimization should be exercised carefully, with the entire graphical user interface in mind.
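
    The thesis’s exact cost and semantic functions are not given in the abstract, so the sketch below only illustrates the general shape of such an optimization: a made-up objective combining a pointing term, a modification penalty, and a semantic-adjacency reward, searched with a basic hill-climbing heuristic. The names, weightings, and functional forms are assumptions for illustration.

```python
import random

def menu_objective(layout, original, semantic, alpha=1.0, beta=0.5):
    """Illustrative objective for a single-column menu layout.

    `layout` and `original` are lists of item names (top to bottom);
    `semantic` maps frozenset({a, b}) -> relatedness in [0, 1].
    The pointing term, cost penalty, and semantic reward only mimic the
    shape of the thesis's model; the real functions are not reproduced here.
    """
    # Pointing cost: items further down the menu take longer to reach.
    pointing = sum(i + 1 for i, _ in enumerate(layout))
    # Cost function: penalise moving items away from their learned positions.
    cost = sum(abs(layout.index(item) - original.index(item)) for item in layout)
    # Semantic function: reward adjacent placement of related items.
    reward = sum(semantic.get(frozenset((a, b)), 0.0)
                 for a, b in zip(layout, layout[1:]))
    return pointing + alpha * cost - beta * reward

def hill_climb(original, semantic, iterations=1000):
    """Basic heuristic search: swap two items, keep the swap if it helps."""
    best = list(original)
    best_score = menu_objective(best, original, semantic)
    for _ in range(iterations):
        candidate = list(best)
        i, j = random.sample(range(len(candidate)), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]
        score = menu_objective(candidate, original, semantic)
        if score < best_score:
            best, best_score = candidate, score
    return best
```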

    TOGO truck information service: Based on mobile tracking system

    Get PDF
    In the current economy, consumers demand the instant gratification that comes from real-time products, services, and experiences, and many real-time services have arisen to meet that desire. During my research I witnessed a phenomenon that fits this pattern: the popularity of meals on wheels in the United States. Setting a new trend in the fast food market, the food truck industry has grown to well over a thousand trucks, each with its own ideas and innovations, and with its rise we have observed a decline of stationary restaurants. Unlike their mobile counterparts, however, stationary restaurants retain one distinct advantage: consumers know where to find them. When struck with a craving for tacos, it is easier to Google a Mexican restaurant down the street than to track down a taco truck. To give food truck enthusiasts a comparable way to satisfy their cravings, several apps have been created. Their general idea is to pinpoint gourmet food trucks on mobile maps, improving reliability through Twitter feeds, GPS, and truck-reported location data. None of them has achieved an exhaustive or completely accurate system, so users still struggle to find trucks near their location. In the present market, an accurate and effective real-time information service would be an interesting subject to pursue for satisfying the desires of both users and business owners. To meet this demand, I will create a prototype for a food truck information service that includes a real-time location service: GPS, mobile tracking, truck-reported data, and an alert service. Both consumers and food truck owners stand to win when precise information is relayed through real-time communication. As a student of user experience and interaction design, my goal in this study is to create a user-friendly interface that meets the expectations of actual consumers. To gain a deeper understanding of the interaction between users and real-time location applications and to raise the level of service, I am willing to go above and beyond with thorough research to develop a next-generation real-time app during this project. Another critical factor that I, as a user experience designer, would point out is communication: finding an effective method of communication will be an additional goal throughout this project.

    Development of a Steering law experiment platform with haptic device Phantom Omni

    Full text link
    Tatay De Pascual, A. (2010). Development of a Steering law experiment platform with haptic device Phantom Omni. http://hdl.handle.net/10251/8631

    A Genetic Algorithm for Optimizing Hierarchical Menus

    Get PDF

    Modeling Three-Dimensional Interaction Tasks for Desktop Virtual Reality

    Get PDF
    A virtual environment is an interactive, head-referenced computer display that gives a user the illusion of presence in a real or imaginary world. The two most significant differences between a virtual environment and a more traditional interactive 3D computer graphics system are the extent of the user’s sense of presence and the level of user participation that can be obtained in the virtual environment. Over the years, advances in computer display hardware and software have substantially improved the realism of computer-generated images, which has dramatically enhanced the user’s sense of presence in virtual environments. Unfortunately, comparable progress in the user’s interaction with a virtual environment has not been observed. The scope of this thesis is the study of human-computer interaction in a desktop virtual environment. The objective is to develop and verify 3D interaction models that quantitatively describe users’ performance in 3D pointing, steering and object pursuit tasks, and, through analysis of the interaction models and experimental results, to gain a better understanding of users’ movements in the virtual environment. The approach applied throughout the thesis is a modeling methodology composed of three procedures: identifying the variables involved in modeling a 3D interaction task; formulating and verifying the interaction model through user studies and statistical analysis; and applying the model to the evaluation of interaction techniques and input devices, gaining insight into users’ movements in the virtual environment. In the study of 3D pointing tasks, a two-component model is used to break the tasks into a ballistic phase and a correction phase, and real-world and virtual-world tasks are compared in each phase. The results indicate that temporal differences arise in both phases, but the difference is significantly greater in the correction phase. This finding inspired a methodology, combining the two-component model with Fitts’ law, that decomposes a pointing task into the ballistic and correction phases and decreases the index of difficulty of the task during the correction phase. The methodology allows for the development and evaluation of interaction techniques for 3D pointing tasks. For 3D steering tasks, the steering law, originally proposed for 2D steering tasks, is adapted to 3D tasks by introducing three additional variables: path curvature, orientation and haptic feedback. The new model suggests that a 3D ball-and-tunnel steering movement consists of a series of small and jerky sub-movements similar to the ballistic/correction movements observed in pointing. Finally, an interaction model based on Stevens’ power law is proposed and empirically verified for 3D object pursuit tasks. The results indicate that the power law can be used to model all three common interaction tasks, so it may serve as a general law for modeling interaction tasks, and it also provides a way to compare the tasks quantitatively.
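
    As an illustration of the Stevens’ power law modeling mentioned for the pursuit task, the sketch below fits a power-law relation between task difficulty and completion time in log-log space. The data and the reduction of pursuit-task parameters to a single difficulty value are hypothetical; they do not come from the thesis.

```python
import numpy as np

def fit_stevens_power_law(difficulty, time):
    """Fit T = k * D**a by linear regression in log-log space.

    `difficulty` and `time` are arrays of task-difficulty values and the
    corresponding mean completion times. How pursuit-task parameters map
    onto a single difficulty value follows the thesis and is not
    reproduced here; this sketch only shows the curve-fitting step.
    """
    log_d, log_t = np.log(difficulty), np.log(time)
    a, log_k = np.polyfit(log_d, log_t, 1)  # slope = exponent, intercept = log k
    return np.exp(log_k), a

# Hypothetical data: completion time grows as a power of task difficulty.
d = np.array([1.0, 2.0, 4.0, 8.0])
t = np.array([0.9, 1.4, 2.3, 3.6])
k, a = fit_stevens_power_law(d, t)
print(f"T ~= {k:.2f} * D^{a:.2f}")
```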

    Steering in layers above the display surface

    Get PDF
    Interaction techniques that use the layers above the display surface to extend the functionality of pen-based digitized surfaces continue to emerge. In such techniques, stylus movements are constrained by the bounds of a layer inside which the interaction is active, as well as by constraints on the direction of movement within the layer. The problem addressed in this thesis is that designers currently have no model to predict movement time (MT), or to quantify the difficulty, of movement (steering) in layers above the display surface constrained by the thickness of the layer, its height above the display, and the width and length of the path. The problem has two main parts: first, how to model steering in layers, and second, how to visualize the layers to provide feedback for the steering task. The solution described is a model that predicts movement time and quantifies the difficulty of steering through constrained and unconstrained paths in layers above the display surface. Through a series of experiments we validated the derivation and applicability of the proposed models. A predictive model is necessary because it serves as the basis for designing interaction techniques in this design space, and because predictive models let researchers evaluate potential solutions quantitatively, independent of experimental conditions. Addressing the second part of the problem, we describe four visualization designs using cursors and evaluate their effectiveness in a controlled experiment.
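
    For orientation, the sketch below applies the classic steering law to a straight tunnel and, purely as an assumption for illustration, treats the layer thickness as a second width constraint; the thesis derives and validates its own model for layers above the surface, which is not reproduced here.

```python
def steering_mt(path_length, path_width, layer_thickness, a=0.2, b=0.1):
    """Illustrative steering-law prediction for a straight tunnel in a layer.

    The classic steering law gives ID = A / W for a straight path of
    length A and width W, with MT = a + b * ID. Using the tighter of the
    path width and the layer thickness as the effective width is an
    assumption for illustration only, not the thesis's formulation.
    The coefficients a and b are placeholders.
    """
    effective_width = min(path_width, layer_thickness)
    index_of_difficulty = path_length / effective_width
    return a + b * index_of_difficulty

# A 200-mm stroke through a 20-mm-wide path inside a 10-mm-thick layer:
print(steering_mt(200, 20, 10))  # the layer thickness dominates the difficulty
```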

    MenuCraft: Interactive Menu System Design with Large Language Models

    Full text link
    Menu system design is a challenging task involving many design options and various human factors. For example, one crucial factor that designers need to consider is the semantic and systematic relation between menu commands. However, capturing these relations can be challenging due to limited available resources. With the advancement of neural language models, large language models can apply their vast pre-existing knowledge to designing and refining menu systems. In this paper, we propose MenuCraft, an AI-assisted designer for menu design that enables collaboration between the designer and a dialogue system. MenuCraft offers an interactive language-based menu design tool that simplifies the menu design process and enables easy customization of design options. MenuCraft supports a variety of interactions through dialogue, allowing zero- and few-shot learning.
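
    The paper’s actual prompts and dialogue flow are not described in the abstract; the sketch below only illustrates the few-shot prompting idea by assembling demonstration layouts and a target application into a single prompt string. The function name and prompt wording are hypothetical.

```python
def build_menu_design_prompt(app_description, commands, examples):
    """Assemble a few-shot prompt asking an LLM to group commands into menus.

    `examples` is a list of (description, commands, menu_outline) tuples
    used as in-context demonstrations. This is only a sketch of the
    zero/few-shot prompting idea described in the paper; MenuCraft's
    actual prompts and dialogue system are not reproduced here.
    """
    parts = ["Group the given commands into a hierarchical menu, "
             "keeping semantically related commands together.\n"]
    for desc, cmds, outline in examples:
        parts.append(f"Application: {desc}\nCommands: {', '.join(cmds)}\n"
                     f"Menu:\n{outline}\n")
    parts.append(f"Application: {app_description}\n"
                 f"Commands: {', '.join(commands)}\nMenu:\n")
    return "\n".join(parts)
```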