
    Modelling virtual urban environments

    In this paper, we explore the way in which virtual reality (VR) systems are being broadened to encompass a wide array of virtual worlds, many of which have immediate applicability to understanding urban issues through geocomputation. We sketch distinctions between immersive, semi-immersive and remote environments in which single and multiple users interact in a variety of ways. We show how such environments might be modelled in terms of ways of navigating within them, processes of decision-making which link users to one another, analytic functions that users have to make sense of the environment, and functions through which users can manipulate, change, or design their world. We illustrate these ideas using four exemplars that we have under construction: a multi-user internet GIS for London with extensive links to 3-d, video, text and related media; an exploration of optimal retail location using a semi-immersive visualisation in which experts can explore such problems; a virtual urban world in which remote users as avatars can manipulate urban designs; and an approach to simulating such virtual worlds through morphological modelling based on the digital record of the entire decision-making process through which such worlds are built.

    Tac-tiles: multimodal pie charts for visually impaired users

    Tac-tiles is an accessible interface that allows visually impaired users to browse graphical information using tactile and audio feedback. The system uses a graphics tablet which is augmented with a tangible overlay tile to guide user exploration. Dynamic feedback is provided by a tactile pin-array at the fingertips, and through speech/non-speech audio cues. In designing the system, we seek to preserve the affordances and metaphors of traditional, low-tech teaching media for the blind, and combine these with the benefits of a digital representation. Traditional tangible media allow rapid, non-sequential access to data, promote easy and unambiguous access to resources such as axes and gridlines, allow the use of external memory, and preserve visual conventions, thus promoting collaboration with sighted colleagues. A prototype system was evaluated with visually impaired users, and recommendations for multimodal design were derived.

    Feeling what you hear: tactile feedback for navigation of audio graphs

    Access to digitally stored numerical data is currently very limited for sight-impaired people. Graphs and visualizations are often used to analyze relationships between numerical data, but the current methods of accessing them are highly visually mediated. Representing data using audio feedback is a common method of making data more accessible, but methods of navigating and accessing the data are often serial in nature and laborious. Tactile or haptic displays could be used to provide additional feedback to support a point-and-click type interaction for the visually impaired. A requirements capture conducted with sight-impaired computer users produced a review of current accessibility technologies, and guidelines were extracted for using tactile feedback to aid navigation. The results of a qualitative evaluation with a prototype interface are also presented. Providing an absolute-position input device and tactile feedback allowed the users to explore the graph using tactile and proprioceptive cues in a manner analogous to point-and-click techniques.
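
    As an illustration of the point-and-click style of navigation described above, the sketch below maps an absolute (x, y) tablet position to the nearest point of a line graph and decides whether a tactile pulse should fire. The data, tolerance value and function names are hypothetical and not taken from the paper.

```python
# Minimal sketch: absolute-position navigation of a line graph with a
# tactile cue when the pointer is near the plotted data (hypothetical API).

from bisect import bisect_left

# Example data series: xs is assumed sorted, ys holds the graph values.
xs = [0, 10, 20, 30, 40, 50]
ys = [3.0, 4.5, 2.0, 6.5, 5.0, 7.5]

def nearest_point(x_pos):
    """Return the index of the data point whose x is closest to x_pos."""
    i = bisect_left(xs, x_pos)
    if i == 0:
        return 0
    if i == len(xs):
        return len(xs) - 1
    return i if abs(xs[i] - x_pos) < abs(xs[i - 1] - x_pos) else i - 1

def should_pulse(x_pos, y_pos, tolerance=0.5):
    """Fire a tactile pulse when the pointer is vertically close to the line."""
    idx = nearest_point(x_pos)
    return abs(ys[idx] - y_pos) <= tolerance

# Example: pointer hovering near the fourth data point.
print(should_pulse(31, 6.3))   # True  -> trigger vibrotactile feedback
print(should_pulse(31, 1.0))   # False -> no feedback
```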

    Web-based multimodal graphs for visually impaired people

    This paper describes the development and evaluation of Web-based multimodal graphs designed for visually impaired and blind people. The information in the graphs is conveyed to visually impaired people through haptic and audio channels. The motivation of this work is to address problems faced by visually impaired people in accessing graphical information on the Internet, particularly the common types of graphs for data visualization. In our work, line graphs, bar charts and pie charts are accessible through a force feedback device, the Logitech WingMan Force Feedback Mouse. Pre-recorded sound files are used to represent graph contents to users. In order to test the usability of the developed Web graphs, an evaluation was conducted with bar charts as the experimental platform. The results showed that the participants could successfully use the haptic and audio features to extract information from the Web graphs.
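
    As a rough illustration of how bar chart contents might be mapped to audio cues, the sketch below quantises each bar's value into one of a few audio levels that could index a bank of pre-recorded sounds. The mapping, level count and data are assumptions, not the paper's implementation.

```python
# Minimal sketch: choosing an audio cue for each bar of a bar chart
# (hypothetical mapping; the paper uses pre-recorded sound files).

bars = {"Q1": 12, "Q2": 30, "Q3": 22, "Q4": 45}

def cue_for(value, lo, hi, n_levels=5):
    """Quantise a bar value into one of n_levels audio levels."""
    if hi == lo:
        return 0
    return int((value - lo) / (hi - lo) * (n_levels - 1))

lo, hi = min(bars.values()), max(bars.values())
for label, value in bars.items():
    # The level could index a bank of pre-recorded sounds or set a pitch.
    print(label, "-> audio level", cue_for(value, lo, hi))
```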

    Visual data mining: integrating machine learning with information visualization

    Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. Most existing systems concentrate either on mining algorithms or on visualization techniques. Though visual methods developed in information visualization have been helpful, improved understanding of a complex, large, high-dimensional dataset requires an effective projection of the dataset onto a lower-dimensional (2D or 3D) manifold. This paper introduces a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain with visual techniques developed in the information visualization domain. The framework follows Shneiderman’s mantra to provide an effective user interface. The advantage of such an interface is that the user is directly involved in the data mining process. We integrate principled projection methods, such as Generative Topographic Mapping (GTM) and Hierarchical GTM (HGTM), with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, billboarding, and user interaction facilities, to provide an integrated visual data mining framework. Results on a real-life high-dimensional dataset from the chemoinformatics domain are also reported and discussed. Projection results of GTM are analytically compared with the projection results from other traditional projection methods, and it is also shown that the HGTM algorithm provides additional value for large datasets. The computational complexity of these algorithms is discussed to demonstrate their suitability for the visual data mining framework.
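
    GTM and HGTM are not available in standard Python libraries, so the sketch below uses PCA as a stand-in to show the general idea of projecting a high-dimensional dataset onto a 2D plane for visual inspection; it is not the authors' framework or pipeline, and the data and labels are placeholders.

```python
# Minimal sketch: projecting a high-dimensional dataset to 2D for visual
# inspection. PCA stands in for GTM/HGTM, which are not part of standard
# Python libraries; this is not the authors' pipeline.

import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))           # placeholder high-dimensional data
labels = rng.integers(0, 3, size=500)    # placeholder class labels

# Project the 20-dimensional points onto the first two principal components.
proj = PCA(n_components=2).fit_transform(X)

plt.scatter(proj[:, 0], proj[:, 1], c=labels, s=10)
plt.xlabel("component 1")
plt.ylabel("component 2")
plt.title("2D projection of a 20-dimensional dataset")
plt.show()
```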

    A Support System for Graphics for Visually Impaired People

    As the Internet plays an important role in today’s society, graphics is widely used to present, convey and communicate information in many different areas. Complex information is often easier to understand and analyze when presented graphically. Even though graphics plays such an important role, accessibility support for web graphics is very limited. Web graphics accessibility matters not only for people with disabilities, but also for people who want to get and use information in ways different from the ones originally intended. One of the problems regarding graphics for blind people is that we have little data on how a blind person draws or how he or she receives graphical information. Based on Katz’s pupils’ research, one can conclude that blind people can draw in outline and that they have a good sense of three-dimensional shape and space. In this thesis, I propose and develop a system which can serve as a tool for researchers investigating these and related issues. Our support system collects drawings made by visually impaired people through finger movement on Braille devices or touch devices such as tablets. Once the drawing data are collected, the system automatically generates graphical XML data, which are easily accessed by applications and web services. The graphical XML data are stored locally or remotely. Compared to other support systems, ours is the first automatic system to provide web services to collect and access such data. The system can also be integrated with cloud computing so that people can use it anywhere to collect and access the data.
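
    The sketch below shows one way finger-stroke samples captured from a touch device could be serialised to XML for later access by applications or web services. The element and attribute names are hypothetical and not the schema used by the system described in the thesis.

```python
# Minimal sketch: serialising finger strokes captured from a touch device
# into XML (hypothetical schema, not the thesis system's format).

import xml.etree.ElementTree as ET

# Each stroke is a list of (x, y, timestamp_ms) samples.
strokes = [
    [(10, 10, 0), (15, 12, 40), (20, 15, 80)],
    [(30, 30, 500), (32, 40, 540)],
]

drawing = ET.Element("drawing", user="anonymous", device="tablet")
for i, stroke in enumerate(strokes):
    s = ET.SubElement(drawing, "stroke", id=str(i))
    for x, y, t in stroke:
        ET.SubElement(s, "point", x=str(x), y=str(y), t=str(t))

# The resulting XML string could be stored locally or sent to a web service.
print(ET.tostring(drawing, encoding="unicode"))
```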

    Sonification System of Maps for Blind

    Design: One, but in different forms

    This overview paper defends an augmented, cognitively oriented generic-design hypothesis: there are both significant similarities between the design activities implemented in different situations and crucial differences between these and other cognitive activities; yet, characteristics of a design situation (related to the design process, the designers, and the artefact) introduce specificities in the corresponding cognitive activities and structures that are used, and in the resulting designs. We thus augment the classical generic-design hypothesis with that of different forms of designing. We review the data available in the cognitive design research literature and propose a series of candidates underlying such forms of design, outlining a number of directions requiring further elaboration.

    The Reality of the Situation: A Survey of Situated Analytics

    Mobile gaze interaction: gaze gestures with haptic feedback

    There has been an increasing need for alternative interaction techniques to support mobile usage contexts. Gaze tracking technology is anticipated to soon appear in commercial mobile devices. There are two important considerations when designing mobile gaze interactions. Firstly, the interaction should be robust to accuracy problems. Secondly, user feedback should be instantaneous, meaningful and appropriate to ease the interaction. This thesis proposes gaze gesture input with haptic feedback as an interaction technique in the mobile context. This work presents the results of an experiment that was conducted to understand the effectiveness of vibrotactile feedback in two-stroke gaze-gesture-based mobile interaction and to find the best temporal point, in terms of gesture progression, at which to provide the feedback. Four feedback conditions were used: NO (no tactile feedback), OUT (tactile feedback at the end of the first stroke), FULL (tactile feedback at the end of the second stroke) and BOTH (tactile feedback at the end of the first and second strokes). The results suggest that haptic feedback does help the interaction. The participants completed the tasks with fewer errors when haptic feedback was provided. The feedback conditions OUT and BOTH were found to be equally effective in terms of task completion time. The participants also subjectively rated these feedback conditions as more comfortable and easier to use than the FULL and NO feedback conditions.
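
    The sketch below captures the logic of the four feedback conditions as described in the abstract, deciding when a vibrotactile pulse is triggered relative to the two strokes of a gaze gesture. The vibrate() stub and the event-handling structure are hypothetical.

```python
# Minimal sketch of the four feedback conditions (NO, OUT, FULL, BOTH):
# when a vibrotactile pulse is triggered relative to the two strokes of a
# gaze gesture. The vibrate() stub and event names are hypothetical.

def vibrate():
    print("vibrotactile pulse")  # stand-in for the device's haptic actuator

def on_stroke_completed(stroke_index, condition):
    """stroke_index is 1 or 2; condition is one of NO, OUT, FULL, BOTH."""
    if condition == "NO":
        return                      # no tactile feedback at all
    if condition == "OUT" and stroke_index == 1:
        vibrate()                   # feedback at the end of the first stroke
    elif condition == "FULL" and stroke_index == 2:
        vibrate()                   # feedback at the end of the second stroke
    elif condition == "BOTH":
        vibrate()                   # feedback after each of the two strokes

# Example: simulate one two-stroke gesture under the BOTH condition.
for stroke in (1, 2):
    on_stroke_completed(stroke, "BOTH")
```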