Phrasing Bimanual Interaction for Visual Design
Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together, that is, phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that, in many commercial design tools, require using menus and tool palettes, techniques originally designed for the mouse rather than for pen and touch.
We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe interesting forms of interaction that emerge, and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and which hand, is touching, in order to support phrasing together direct-touch interactions on large displays. From the design and development of the environment and from both field and controlled studies, we derive a set of methods, based upon human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.
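As a concrete illustration of the phrasing idea, the following is a minimal sketch only, assuming a browser Pointer Events environment rather than the dissertation's actual pen+touch system: a touch held by the non-dominant hand on a mode region sets the mode, and pen strokes from the dominant hand execute within it, so no menu or tool palette is visited. The mode names, region layout, and helper functions are hypothetical.

```typescript
// Minimal sketch (not the thesis implementation): phrasing a modal operation
// by holding a mode with a non-dominant-hand touch while the pen acts.
// Assumes a browser Pointer Events environment; names are illustrative.

type Mode = "draw" | "layer" | "erase";

let activeMode: Mode = "draw";                // default pen behavior
const heldTouches = new Map<number, Mode>();  // touch pointerId -> mode it holds

// Hypothetical helper: map where the touch landed to a mode region on the canvas.
function modeAtPoint(_x: number, y: number): Mode | null {
  return y > window.innerHeight - 80 ? "layer" : null; // e.g. bottom strip holds "layer"
}

const canvas = document.getElementById("canvas") as HTMLCanvasElement;

canvas.addEventListener("pointerdown", (e: PointerEvent) => {
  if (e.pointerType === "touch") {
    const mode = modeAtPoint(e.clientX, e.clientY);
    if (mode) {
      heldTouches.set(e.pointerId, mode);
      activeMode = mode;                      // touch-down phrases the mode...
    }
  } else if (e.pointerType === "pen") {
    beginOperation(activeMode, e);            // ...pen strokes execute within it
  }
});

canvas.addEventListener("pointerup", (e: PointerEvent) => {
  if (heldTouches.delete(e.pointerId) && heldTouches.size === 0) {
    activeMode = "draw";                      // releasing the touch ends the phrase
  }
});

function beginOperation(mode: Mode, e: PointerEvent): void {
  console.log(`pen ${mode} at (${e.clientX}, ${e.clientY})`);
}
```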
Task-based Adaptation of Graphical Content in Smart Visual Interfaces
To be effective, visual representations must be adapted to their respective context of use, especially in so-called Smart Visual Interfaces, which strive to present exactly the information required for the task at hand. This thesis proposes a generic approach that facilitates the automatic generation of task-specific visual representations from suitable task descriptions. It is discussed how the approach is applied to four principal content types (raster images, 2D vector graphics, 3D graphics, and data visualizations) and how existing display techniques can be integrated into the approach.
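The sketch below illustrates, under assumptions of our own rather than the thesis's actual task model, how a task description might be dispatched to a display technique per content type; the task vocabulary, technique names, and function signatures are invented for illustration.

```typescript
// Illustrative sketch only (the thesis defines its own task model): a task
// description selects a display technique appropriate to the content type.

type ContentType = "raster" | "vector2d" | "graphics3d" | "visualization";

interface TaskDescription {
  goal: "overview" | "compare" | "locate";   // hypothetical task vocabulary
  regionOfInterest?: { x: number; y: number; w: number; h: number };
}

interface DisplayTechnique {
  name: string;
  apply(content: unknown): void;             // placeholder for real rendering
}

// Choose a technique from the task and the content type.
function adaptForTask(task: TaskDescription, type: ContentType): DisplayTechnique {
  if (task.goal === "locate" && task.regionOfInterest) {
    return { name: "focus+context magnification", apply: () => {} };
  }
  switch (type) {
    case "raster":        return { name: "saliency-based cropping", apply: () => {} };
    case "vector2d":      return { name: "semantic zoom", apply: () => {} };
    case "graphics3d":    return { name: "task-driven camera placement", apply: () => {} };
    case "visualization": return { name: "attribute filtering", apply: () => {} };
  }
}

// Example: an overview task on a 3D model picks a camera-placement technique.
console.log(adaptForTask({ goal: "overview" }, "graphics3d").name);
```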
Collaborative adaptive accessibility and human capabilities
This thesis discusses the challenges and opportunities facing the field of accessibility, particularly as computing becomes ubiquitous. It is argued that a new approach is needed that centres around adaptations (specific, atomic changes) to user interfaces and content in order to improve their accessibility for a wider range of people than targeted by present Assistive Technologies (ATs). Further, the approach must take into consideration the capabilities of people at the human level and facilitate collaboration, in planned and ad-hoc environments.
There are two main areas of focus: (1) helping people experiencing minor-to-moderate, transient, and potentially overlapping impairments, as may be brought about by the ageing process, and (2) supporting collaboration between people by reasoning about the consequences, from different users' perspectives, of the adaptations they may require.
A theoretical basis for describing these problems and a reasoning process for the semi-automatic application of adaptations is developed. Impairments caused by the environment in which a device is being used are considered. Adaptations are drawn from other research and industry artefacts. Mechanical testing is carried out on key areas of the reasoning process, demonstrating fitness for purpose.
Several fundamental techniques to extend the reasoning process to take temporal factors (such as fluctuating user and device capabilities) into account are broadly described. These are proposed to be feasible, though they inherently bring compromises (which are defined) in interaction stability and in the needs of different actors (user, device, target level of accessibility).
This technical work forms the basis of the contribution of one work-package of the Sustaining ICT use to promote autonomy (Sus-IT) project, under the New Dynamics of Ageing (NDA) programme of research in the UK. Test designs for larger-scale assessment of the system with real-world participants are given. The wider Sus-IT project provides social motivations and informed design decisions for this work and is carrying out longitudinal acceptance testing of the processes developed here.
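To make the capability-to-adaptation reasoning described above concrete, the sketch below (an assumption-laden illustration, not the Sus-IT engine) lowers a user's effective capabilities according to environmental factors and proposes atomic adaptations when a capability falls below a threshold; the capability names, scales, thresholds, and adaptations are invented for illustration.

```typescript
// A minimal sketch, not the Sus-IT reasoning process: capability ratings at the
// human level (possibly degraded by the environment) drive atomic adaptations.
// Capability names, scales, and adaptations here are illustrative assumptions.

interface Capabilities {
  visualAcuity: number;   // 0 (none) .. 1 (unimpaired)
  dexterity: number;
  hearing: number;
}

interface Adaptation {
  id: string;
  description: string;
}

// Environmental effects (e.g. glare, a bumpy bus ride) lower effective capability.
function effectiveCapabilities(user: Capabilities, env: Partial<Capabilities>): Capabilities {
  return {
    visualAcuity: Math.min(user.visualAcuity, env.visualAcuity ?? 1),
    dexterity:    Math.min(user.dexterity,    env.dexterity ?? 1),
    hearing:      Math.min(user.hearing,      env.hearing ?? 1),
  };
}

// Semi-automatic step: propose adaptations; a person can still confirm or reject them.
function proposeAdaptations(c: Capabilities): Adaptation[] {
  const proposals: Adaptation[] = [];
  if (c.visualAcuity < 0.6) proposals.push({ id: "font-up",    description: "increase text size" });
  if (c.dexterity    < 0.6) proposals.push({ id: "targets-up", description: "enlarge touch targets" });
  if (c.hearing      < 0.6) proposals.push({ id: "captions",   description: "show captions for audio" });
  return proposals;
}

const onSunnyBus = effectiveCapabilities(
  { visualAcuity: 0.8, dexterity: 0.5, hearing: 0.9 },
  { visualAcuity: 0.5 }                      // glare reduces effective acuity
);
console.log(proposeAdaptations(onSunnyBus).map(a => a.description));
```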
Visual search fixation strategies in a 3D image set: an eye tracking study
In this study we explore whether the inclusion of monocular depth within a pseudo-3D picture gallery negatively affects visual search strategy and performance. The experimental design facilitated control of (i) the number of visible depth planes and (ii) the presence of semantic sorting. Our results show that increasing the number of visible depth planes facilitates search efficiency, which in turn results in a decreased response time to target selection and a reduction in participants' average pupil dilation, which we used as a measure of cognitive load. Furthermore, the results identified that search strategy is based on sorting, which implies that appropriate management of semantic associations can increase search efficiency by decreasing the number of potential targets.
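For readers interested in how such measures are typically aggregated, the sketch below groups trials by the number of visible depth planes and reports mean response time and mean pupil dilation per condition; the trial record layout and units are assumptions, not the study's actual data format.

```typescript
// Illustrative only: aggregating the study's dependent measures (response time,
// average pupil dilation as a cognitive-load proxy) per depth-plane condition.

interface Trial {
  depthPlanes: number;        // independent variable (i)
  sorted: boolean;            // independent variable (ii): semantic sorting present
  responseTimeMs: number;
  meanPupilDilationMm: number;
}

function summarize(trials: Trial[]): Map<number, { rt: number; pupil: number }> {
  const byPlanes = new Map<number, Trial[]>();
  for (const t of trials) {
    const group = byPlanes.get(t.depthPlanes) ?? [];
    group.push(t);
    byPlanes.set(t.depthPlanes, group);
  }
  const summary = new Map<number, { rt: number; pupil: number }>();
  for (const [planes, group] of byPlanes) {
    const n = group.length;
    summary.set(planes, {
      rt:    group.reduce((s, t) => s + t.responseTimeMs, 0) / n,
      pupil: group.reduce((s, t) => s + t.meanPupilDilationMm, 0) / n,
    });
  }
  return summary;
}
```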
Light on horizontal interactive surfaces: Input space for tabletop computing
In the last 25 years we have witnessed the rise and growth of interactive tabletop research, both in academic and in industrial settings. The rising demand for the digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work has been funded by Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects and Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.
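A minimal sketch of how the input space framed above might be modelled in code; the field names and example values are assumptions for illustration, not the article's formal classification.

```typescript
// Illustrative model of a tabletop's input space: sensing hardware plus the
// modality classes it supports. Field names are assumptions, not the article's taxonomy.

type InteractionModality = "multitouch" | "tangible" | "touchless";

interface TabletopInputCapability {
  sensingHardware: string;               // e.g. "camera + IR illumination", "capacitive"
  modalities: InteractionModality[];     // modality classes the surface supports
  simultaneousUsers: number;             // horizontal surfaces invite co-located groups
}

const exampleTabletop: TabletopInputCapability = {
  sensingHardware: "camera + IR illumination",
  modalities: ["multitouch", "tangible"],
  simultaneousUsers: 4,
};
```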
Understanding and Evaluating User Interface Visibility
Technology dominates our lives, mobile technology in particular. In 2016 Apple sold their billionth iPhone. By 2018 they had sold their 2 billionth device based on the same underlying operating system. We access such technology through the user interface (UI), and concerns have been raised about the usability of such devices. The situation has been described by some as a "usability crisis". One of the key issues raised is the lack of visibility of user interface elements, which is deemed to be a critical component of an effective UI.
An initial investigation highlighted that UI visibility can be broken down into three key aspects: first, some user interface elements are effectively "missing"; second, elements are "missed" because they are not seen by the user; and third, elements are seen but "misunderstood". Further analysis of the home screen of an iPhone revealed that only 8% of the available functions were visible at the top level; in other words, 92% were effectively "missing". This raises key questions about how UI visibility can be evaluated and how such evaluation can be adopted into design practice. This research took a psychophysical perspective to better understand UI visibility. This led to the development of an evaluation framework and associated tool called vis-UI-lise. The tool represents UI visibility as a series of five hurdles between the user and the interface that have to be overcome for a successful interaction.
This tool was applied to an everyday task on a mobile phone, which highlighted a range of possible usability problems. Comparison of predicted versus observed problems showed that the vis-UI-lise tool had predicted 74% of them, a score that compares well with other usability evaluation tools. A training and support package was also developed for the vis-UI-lise tool and evaluated with four different organisations. This provided key insights into how the tool could be improved to fit in with typical design practice. This thesis brings a new perspective to the understanding and evaluation of UI visibility that could have a real impact on the design of everyday user interfaces.
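The bookkeeping behind the "missing" figure can be illustrated with a short sketch (not the vis-UI-lise tool itself): each top-level function is tagged with one of the three visibility aspects, and the share of visible functions is reported. The toy data below mirrors the 8%/92% proportions mentioned above.

```typescript
// A sketch of the bookkeeping implied above, not the vis-UI-lise tool itself:
// classify each top-level function by the three visibility aspects and report
// what fraction of functions is visible at the top level.

type VisibilityIssue = "missing" | "missed" | "misunderstood" | null; // null = fully visible

interface UiFunction {
  name: string;
  issue: VisibilityIssue;
}

function percentVisible(functions: UiFunction[]): number {
  const visible = functions.filter(f => f.issue === null).length;
  return (100 * visible) / functions.length;
}

// Toy data: 2 of 25 functions visible at the top level -> 8% visible, 92% not.
const fns: UiFunction[] = [
  { name: "open camera", issue: null },
  { name: "place call", issue: null },
  ...Array.from({ length: 23 }, (_, i) => ({
    name: `hidden function ${i + 1}`,
    issue: "missing" as VisibilityIssue,
  })),
];
console.log(`${percentVisible(fns)}% visible at the top level`);
```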