
    On intelligible multimodal visual analysis

    Analyzing data is becoming an important skill in an increasingly digital world. Yet many users face knowledge barriers that prevent them from independently conducting their data analysis. To tear down some of these barriers, multimodal interaction for visual analysis has been proposed. Multimodal interaction through speech and touch enables not only experts but also novice users to interact effortlessly with this kind of technology. However, current approaches do not take user differences into account. In fact, whether visual analysis is intelligible ultimately depends on the user. To close this research gap, this dissertation explores how multimodal visual analysis can be personalized. To do so, it takes a holistic view. First, an intelligible task space of visual analysis tasks is defined by considering personalization potentials. This task space provides an initial basis for understanding how effective personalization in visual analysis can be approached. Second, empirical analyses of speech commands in visual analysis, as well as of visualizations used in scientific publications, reveal further patterns and structures. These behavioral findings help to better understand expectations towards multimodal visual analysis. Third, a technical prototype is designed based on the previous findings. It enriches the visual analysis with a persistent dialogue and makes the underlying computations transparent; the conducted user studies not only show its advantages but also underline the relevance of considering the user's characteristics. Finally, both communication channels – visualizations and dialogue – are personalized. Leveraging linguistic theory and reinforcement learning, the results highlight a positive effect of adjusting to the user. Especially when the user's knowledge is exceeded, personalization helps to improve the user experience.
Overall, this dissertation not only confirms the importance of considering the user's characteristics in multimodal visual analysis, but also provides insights into how an intelligible analysis can be achieved. By understanding the use of input modalities, a system can focus on the user's needs. By understanding preferences regarding the output modalities, the system can better adapt to the user. Combining both directions improves the user experience and contributes towards an intelligible multimodal visual analysis.
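The reinforcement-learning-based adaptation described in the abstract can be illustrated with a minimal sketch. This is a hypothetical epsilon-greedy bandit, not the dissertation's actual method: the arm names ("plain_language", "technical_detail") and the simulated reward signal are assumptions made purely for illustration.

```python
import random

random.seed(0)  # make the simulation reproducible

class EpsilonGreedy:
    """Epsilon-greedy bandit: explore with probability epsilon, else exploit."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}    # times each arm was chosen
        self.values = {a: 0.0 for a in arms}  # running mean reward per arm

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))  # explore
        return max(self.values, key=self.values.get)  # exploit best estimate

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        # incremental mean update
        self.values[arm] += (reward - self.values[arm]) / n

# Hypothetical output styles the system could choose between for a user.
agent = EpsilonGreedy(["plain_language", "technical_detail"])

# Simulated user who responds positively to plain language 80% of the time
# and to technical detail 30% of the time (illustrative numbers only).
for _ in range(500):
    arm = agent.select()
    p = 0.8 if arm == "plain_language" else 0.3
    reward = 1.0 if random.random() < p else 0.0
    agent.update(arm, reward)

print(agent.values)  # learned value estimates per output style
```

After enough interactions the agent's value estimates approach the underlying preference rates, so the system increasingly selects the style the simulated user responds to best.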

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222 ; Date of degree conferral: 2000-03-29 ; Type of degree: Doctorate by coursework (課程博士) ; Degree: Doctor of Engineering (博士(工学)) ; Degree registration number: 博工第4717号 ; Graduate school / department: Graduate School of Engineering, Information Engineering

    Diverse Contributions to Implicit Human-Computer Interaction

    When people interact with computers, a great deal of information is not provided on purpose. By studying these implicit interactions it is possible to understand which characteristics of the user interface are beneficial (or not), thereby deriving implications for the design of future interactive systems. The main advantage of leveraging implicit user data in computer applications is that any interaction with the system can contribute to improving its usefulness. Moreover, such data remove the cost of having to interrupt the user to explicitly submit information about a topic that, in principle, need not be related to their intention in using the system. On the other hand, implicit interactions sometimes do not provide clear and concrete data, so special attention must be paid to how this source of information is managed. The purpose of this research is twofold: 1) to apply a new vision to both the design and the development of applications that can react accordingly to the user's implicit interactions, and 2) to provide a series of methodologies for the evaluation of such interactive systems. Five scenarios illustrate the feasibility and suitability of the thesis framework. Empirical results with real users demonstrate that leveraging implicit interaction is both a suitable and a convenient means of improving interactive systems in multiple ways. Leiva Torres, LA. (2012). Diverse Contributions to Implicit Human-Computer Interaction [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17803

    Interactions in Virtual Worlds: Proceedings Twente Workshop on Language Technology 15


    A Haptic Study to Inclusively Aid Teaching and Learning in the Discipline of Design

    Designers are known to use a blend of manual and virtual processes to produce design prototype solutions. For modern designers, computer-aided design (CAD) tools are an essential requirement for developing design concept solutions. CAD, together with augmented reality (AR) systems, has altered the face of design practice, as witnessed by the way a designer can now change the shape, form, color, pattern, and texture of a 3D product concept at the click of a button in minutes, rather than laboring over a physical model in the studio for hours, as in the classic approach. However, CAD can often limit a designer's experience of being 'hands-on' with materials and processes. The rise of machine haptic (MH) tools affords great potential for designers to feel more 'hands-on' with the virtual modeling process. Through the use of MH, product designers are able to control, virtually sculpt, and manipulate virtual 3D objects on-screen. Design practitioners are well placed to make use of haptics to augment 3D concept creation, which is traditionally a highly tactile process. By similar reasoning, non-sighted and visually impaired (NS, VI) communities could also benefit from using MH tools to increase touch-based interactions, thereby creating better access for NS, VI designers. In spite of this, the use of MH within the design industry (specifically product design), or by the non-sighted community, is still in its infancy, and the full benefit of haptics in aiding non-sighted designers has not yet been fully realised. This thesis empirically investigates the use of multimodal MH as a step closer to improving the virtual hands-on process, for the benefit of NS, VI and fully sighted (FS) Designer-Makers. The thesis comprises four experiments, embedded within four case studies (CS1-4). Case studies 1 and 2 worked with self-employed NS, VI Art Makers at Henshaws College for the Blind and Visual Impaired.
The studies examined the effects of haptics on NS, VI users' evaluations of experience. Case studies 3 and 4, featuring experiments 3 and 4, were designed to examine the effects of haptics on distance-learning design students at the Open University. The empirical results from all four case studies showed that NS, VI users were able to navigate and perceive virtual objects via the force from the haptically rendered objects on-screen. Moreover, they were assisted by the multimodal MH assistance as a whole, which in CS2 appeared to offer better assistance to NS versus FS participants. In CS3 and CS4, MH and multimodal assistance afforded equal assistance to NS, VI, and FS participants, but haptics did not better the time results recorded in manual (M) haptic conditions. However, the collision data between M and MH conditions showed little statistical difference. The thesis showed that multimodal MH systems, specifically used in kinesthetic mode, enable humans (non-disabled and disabled) to credibly judge objects within the virtual realm. It also shows that multimodal augmented tooling can improve the interaction and afford better access to the graphical user interface for a wider body of users.

    The digitally 'Hand Made' object

    This article will outline the author's investigations of types of computer interfaces in practical three-dimensional design practice. The paper contains a description of two main projects in glass and ceramic tableware design, using a Microscribe G2L digitising arm as an interface to record three-dimensional spatial design input.

The article will provide critical reflections on the results of the investigations and will argue that new approaches in digital design interfaces could have relevance in developing design methods which incorporate more physical 'human' expressions in a three-dimensional design practice. The research builds on concepts identified in traditional craft practice as foundations for constructing new types of creative practices based on the use of digital technologies, as outlined by McCullough (1996).

    Physical contraptions as social interaction catalysts


    Dynamic motion coupling of body movement for input control

    Touchless gestures are used for input when touch is unsuitable or unavailable, such as when interacting with displays that are remote, large, or public, or when touch is prohibited for hygienic reasons. Traditionally, user input is spatially or semantically mapped to system output; however, in the context of touchless gestures these interaction principles suffer from several disadvantages, including memorability, fatigue, and ill-defined mappings. This thesis investigates motion correlation as a third interaction principle for touchless gestures, which maps user input to system output based on spatiotemporal matching of reproducible motion. We demonstrate the versatility of motion correlation by using movement as the primary sensing principle, relaxing the restrictions on how a user provides input. Using TraceMatch, a novel computer-vision-based system, we show how users can provide effective input through an investigation of input performance with different parts of the body, and how users can switch modes of input spontaneously in realistic application scenarios. Secondly, spontaneous spatial coupling shows how motion correlation can bootstrap spatial input, allowing any body movement, or movement of tangible objects, to be appropriated for ad hoc touchless pointing on a per-interaction basis. We operationalise the concept in MatchPoint, and demonstrate its unique capabilities through an exploration of the design space with application examples. Thirdly, we explore how users synchronise with moving targets in the context of motion correlation, revealing how simple harmonic motion leads to better synchronisation. Using the insights gained, we explore the robustness of algorithms used for motion correlation, showing how it is possible to successfully detect a user's intent to interact whilst suppressing accidental activations from common spatial and semantic gestures.
Finally, we look across our work to distil guidelines for interface design, and further considerations of how motion correlation can be used, both in general and for touchless gestures.
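The motion-correlation principle described above can be sketched in a few lines: match user input to a moving on-screen target by correlating the two trajectories over a time window. This is a minimal illustration of the general idea, not the thesis's actual algorithm; the per-axis Pearson test, the 0.9 threshold, and the sample trajectories are all assumptions.

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def matches(target_xy, input_xy, threshold=0.9):
    """Correlate x and y components independently; both must exceed threshold."""
    cx = pearson([p[0] for p in target_xy], [p[0] for p in input_xy])
    cy = pearson([p[1] for p in target_xy], [p[1] for p in input_xy])
    return min(cx, cy) > threshold

# A target moving on a circular path, an input that tracks it with a small
# constant offset, and an unrelated straight-line motion.
target = [(math.cos(t / 10), math.sin(t / 10)) for t in range(60)]
follower = [(x + 0.02, y - 0.01) for x, y in target]   # tracks the target
unrelated = [(t / 60, 0.5) for t in range(60)]          # straight-line motion

print(matches(target, follower))   # the tracking motion is accepted
print(matches(target, unrelated))  # the unrelated motion is rejected
```

Because only the shape of the motion over time matters, not absolute position, any body part or tangible object that reproduces the target's motion can act as the input device, which is what makes the principle suitable for ad hoc touchless pointing.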