
    Presenting dynamic information on mobile computers

    A problem with mobile computing devices is the presentation of dynamic information on their small screens. This paper describes an experiment investigating the use of non-speech sounds to present dynamic information without using visual display space. Results showed that non-speech sound could be used in a simple share-dealing scenario to present a "sound graph" of share prices. This allowed participants to reduce the workload they had to invest in share-price monitoring, as they could listen to the graph whilst they worked in a share accumulation window.
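    The abstract does not describe the implementation; as an illustration only, a "sound graph" amounts to mapping each share price onto an audible pitch. A minimal sketch (function names and the 220–880 Hz range are assumptions, not from the paper):

    ```python
    def price_to_pitch(price, p_min, p_max, f_min=220.0, f_max=880.0):
        """Linearly map a share price onto an audible frequency range."""
        t = (price - p_min) / (p_max - p_min)  # normalise price to [0, 1]
        return f_min + t * (f_max - f_min)

    def sound_graph(prices):
        """Turn a price series into a sequence of pitches to be played back."""
        lo, hi = min(prices), max(prices)
        return [price_to_pitch(p, lo, hi) for p in prices]
    ```

    Playing the resulting pitch sequence while the user works elsewhere is what lets the graph be monitored without any screen space.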

    Sensing and visualizing spatial relations of mobile devices

    Location information can be used to enhance interaction with mobile devices. While many location systems require instrumentation of the environment, we present a system that allows devices to measure their spatial relations in a true peer-to-peer fashion. The system is based on custom sensor hardware implemented as a USB dongle, and computes spatial relations in real time. As an extension of this system, we propose a set of spatialized widgets for incorporating spatial relations into the user interface. The use of these widgets is illustrated in a number of applications, showing how spatial relations can be employed to support and streamline interaction with mobile devices.

    Personalization in cultural heritage: the road travelled and the one ahead

    Over the last 20 years, cultural heritage has been a favored domain for personalization research. For years, researchers have experimented with the cutting edge technology of the day; now, with the convergence of internet and wireless technology, and the increasing adoption of the Web as a platform for the publication of information, the visitor is able to exploit cultural heritage material before, during and after the visit, having different goals and requirements in each phase. However, cultural heritage sites have a huge amount of information to present, which must be filtered and personalized in order to enable the individual user to easily access it. Personalization of cultural heritage information requires a system that is able to model the user (e.g., interest, knowledge and other personal characteristics), as well as contextual aspects, select the most appropriate content, and deliver it in the most suitable way. It should be noted that achieving this result is extremely challenging in the case of first-time users, such as tourists who visit a cultural heritage site for the first time (and maybe the only time in their life). In addition, as tourism is a social activity, adapting to the individual is not enough because groups and communities have to be modeled and supported as well, taking into account their mutual interests, previous mutual experience, and requirements. How to model and represent the user(s) and the context of the visit and how to reason with regard to the information that is available are the challenges faced by researchers in personalization of cultural heritage. Notwithstanding the effort invested so far, a definite solution is far from being reached, mainly because new technology and new aspects of personalization are constantly being introduced. This article surveys the research in this area. 
Starting from the earlier systems, which presented cultural heritage information in kiosks, it summarizes the evolution of personalization techniques in museum web sites, virtual collections and mobile guides, up to the recent extension of cultural heritage toward the semantic and social web. The paper concludes with current challenges and points out areas where future research is needed.

    Triggering information by context

    With the increased availability of personal computers with attached sensors to capture their environment, there is a big opportunity for context-aware applications; these automatically provide information and/or take actions according to the user's present context, as detected by sensors. When well designed, these applications provide an opportunity to tailor the provision of information closely to the user's current needs. A sub-set of context-aware applications are discrete applications, where discrete pieces of information are attached to individual contexts, to be triggered when the user enters those contexts. The advantage of discrete applications is that authoring them can be solely a creative process rather than a programming process: it can be a task akin to creating simple web pages. This paper looks at a general system that can be used in any discrete context-aware application. It propounds a general triggering rule, and investigates how this rule applies in practical applications.
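    The paper's actual triggering rule is not given in the abstract; a minimal sketch of one plausible "fire on context entry" rule for a discrete application might look like this (the class and method names are hypothetical):

    ```python
    class ContextTrigger:
        """Attach a piece of information to a set of contexts and fire it
        once on the transition into them, not on every sensor reading."""

        def __init__(self, contexts, payload):
            self.contexts = contexts  # contexts this information is attached to
            self.payload = payload    # the authored content to present
            self.inside = False       # was the user in a matching context last time?

        def update(self, current_context):
            """Return the payload exactly when the user enters the context."""
            now_inside = current_context in self.contexts
            fired = self.payload if (now_inside and not self.inside) else None
            self.inside = now_inside
            return fired
    ```

    The entry-edge condition is what keeps authoring purely creative: the author attaches content to a context, and the system decides when the content fires.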

    The Ubiquitous Interactor - Device Independent Access to Mobile Services

    The Ubiquitous Interactor (UBI) addresses the problems of design and development that arise around services that need to be accessed from many different devices. In UBI, the same service can present itself with different user interfaces on different devices. This is done by separating interaction between users and services from presentation. The interaction is kept the same for all devices, and different presentation information is provided for different devices. This way, tailored user interfaces for many different devices can be created without multiplying development and maintenance work. In this paper we describe the system design of UBI, the system implementation, and two services implemented for the system: a calendar service and a stockbroker service.
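    As an illustration of separating interaction from presentation (the act format and function names below are assumptions, not UBI's actual API), a service can emit one abstract "select" act and let per-device presentation information decide the rendering:

    ```python
    # Hypothetical sketch: the service emits abstract interaction acts;
    # device-specific forms turn each act into a concrete user interface.
    def render(act, device):
        """Render one abstract interaction act for a given device type."""
        forms = {
            # Phone: compact textual menu.
            ("select", "phone"): lambda a: "[menu] " + ", ".join(a["options"]),
            # Desktop: an HTML drop-down.
            ("select", "desktop"): lambda a: (
                "<select>"
                + "".join("<option>" + o + "</option>" for o in a["options"])
                + "</select>"
            ),
        }
        return forms[(act["type"], device)](act)
    ```

    The service logic (e.g. the stockbroker's buy/sell choice) stays identical across devices; only the `forms` table grows as new devices are supported.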

    Model-based target sonification on mobile devices

    We investigate the use of audio and haptic feedback to augment the display of a mobile device controlled by tilt input. We provide an example of this based on Doppler effects, which highlight the user's approach to a target, or a target's movement from the current state, in the same way we hear the pitch of a siren change as it passes us. Twelve participants practiced navigating/browsing a state-space that was displayed via audio and vibrotactile modalities. We implemented the experiment on a Pocket PC, with an accelerometer attached to the serial port and a headset attached to the audio port. Users navigated through the environment by tilting the device. Feedback was provided via audio displayed through the headset, and by vibrotactile information displayed by a vibrotactile unit in the Pocket PC. Users selected targets placed randomly in the state-space, supported by combinations of audio, visual and vibrotactile cues. The speed of target acquisition and error rate were measured, and summary statistics on the acquisition trajectories were calculated. These data were used to compare different display combinations and configurations. The results in the paper quantify the changes brought by predictive or 'quickened' sonified displays in mobile, gestural interaction.
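    The siren analogy can be made concrete: for a sound source approaching a listener at speed v, the perceived frequency rises as f' = f·c/(c − v), and falls when the source recedes (v < 0). A sketch of that mapping, not the authors' implementation:

    ```python
    def doppler_pitch(f_source, v_approach, c=343.0):
        """Doppler-shifted frequency heard from a source moving toward the
        listener at v_approach m/s (negative = receding).

        c is the speed of sound in air, about 343 m/s at room temperature.
        In a quickened display, v_approach would be driven by the rate at
        which the tilt-controlled cursor closes on the target.
        """
        return f_source * c / (c - v_approach)
    ```

    Rising pitch thus signals that the user is closing on the target before any visual confirmation is available.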

    Stressing the Boundaries of Mobile Accessibility

    Mobile devices concentrate communication capabilities like no other gadget, and they now comprise a wider set of applications while still maintaining reduced size and weight. They have started to include accessibility features that enable the inclusion of disabled people. However, these inclusive efforts still fall short of the possibilities of such devices, mainly owing to the lack of interoperability and extensibility of current mobile operating systems (OS). In this paper, we present a case study of a multi-impaired person for whom access to basic mobile applications was provided on a per-application basis. We outline the main flaws in current mobile OSs and suggest how these could further empower developers to provide accessibility components, which could then be combined to provide system-wide inclusion for a wider range of (multi-)impairments.
    Comment: 3 pages, two figures, ACM CHI 2013 Mobile Accessibility Workshop

    Tactons: structured tactile messages for non-visual information display

    Tactile displays are now becoming available in a form that can be easily used in a user interface. This paper describes a new form of tactile output. Tactons, or tactile icons, are structured, abstract messages that can be used to communicate information non-visually. A range of different parameters can be used for Tacton construction, including the frequency, amplitude and duration of a tactile pulse, plus other parameters such as rhythm and location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or in mobile and wearable devices. This paper describes Tactons, the parameters used to construct them and some possible ways to design them. Examples of where Tactons might prove useful in user interfaces are given.
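    As a sketch only, a Tacton built from the parameters the abstract lists (frequency, amplitude and duration of pulses, rhythm, location) might be encoded like this; all names and values are illustrative, not taken from the paper:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Pulse:
        frequency_hz: float  # vibration frequency of this tactile pulse
        amplitude: float     # drive level, 0.0 to 1.0
        duration_ms: int

    @dataclass
    class Tacton:
        """A structured tactile message: a rhythmic sequence of pulses
        delivered at a particular body location."""
        pulses: list  # the rhythm is the ordering and timing of these pulses
        location: str  # e.g. "wrist" on a wearable device

    # Hypothetical Tacton for an incoming call: a short strong pulse
    # followed by a longer soft one, played at the wrist.
    incoming_call = Tacton(
        pulses=[Pulse(250.0, 1.0, 100), Pulse(250.0, 0.5, 300)],
        location="wrist",
    )
    ```

    Varying rhythm and location while holding frequency fixed is one way such abstract messages could be kept distinguishable without any visual display.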

    You Want Coffee with That? Revisiting the Library as Place

    The constantly changing roles of libraries and librarians, as well as the onslaught of electronic resources and mobile technology, have refocused attention on the library’s place and value in today’s society. This paper highlights a 2015 academic library conference presentation and includes supplemental information on the subject. It focuses on the library less as the traditional place to gather information and more as the meeting place – a third place – where like-minded individuals, their information-gathering devices in tow, enter and expect “super-sized” customer service.