
    The effect of age and font size on reading text on handheld computers

    Though there have been many studies of computer-based text reading, only a few have considered the small screens of handheld computers. This paper presents an investigation into the effect of varying font size between 2 and 16 point on reading text on a handheld computer. By using both older and younger participants, the possible effects of age were examined. Reading speed and accuracy were measured, and the subjective views of participants were recorded. Objective results showed little difference in reading performance above 6 point, but subjective comments from participants showed a preference for sizes in the middle range. We therefore suggest that, for reading tasks, designers of interfaces for mobile computers provide fonts in the range of 8-12 point to maximize readability for the widest range of users.
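
    The recommendation above maps naturally onto a simple sizing rule. The following is a minimal Python sketch of how a handheld reading interface might clamp user-selected font sizes into the suggested 8-12 point window; the function name and default bounds are illustrative assumptions, not part of the paper.

```python
# Minimal sketch: clamp a requested font size into the 8-12 pt range the study
# recommends for sustained reading on handheld screens (bounds are illustrative).
RECOMMENDED_MIN_PT = 8
RECOMMENDED_MAX_PT = 12

def reading_font_size(requested_pt: float,
                      min_pt: float = RECOMMENDED_MIN_PT,
                      max_pt: float = RECOMMENDED_MAX_PT) -> float:
    """Return a font size suitable for reading tasks on a handheld display."""
    return max(min_pt, min(max_pt, requested_pt))

assert reading_font_size(6) == 8    # too small for comfortable reading
assert reading_font_size(16) == 12  # larger sizes add little objective benefit
assert reading_font_size(10) == 10  # preferences cluster in the middle range
```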

    Comparison of in-sight and handheld navigation devices toward supporting industry 4.0 supply chains: First and last mile deliveries at the human level

    Last (and first) mile deliveries are an increasingly important and costly component of supply chains, especially those that require transport within city centres. With reductions in anticipated manufacturing and delivery timescales, logistics personnel are expected to identify the correct location (accurately) and supply the goods in appropriate condition (safe delivery). As supply chains move towards greater environmental sustainability, the last/first mile of deliveries may be completed by a cyclist courier, which could result in significant reductions in congestion and emissions in cities. In addition, the last metres of an increasing number of deliveries are completed on foot, i.e. as a pedestrian. Although research into new technologies to support enhanced navigation capabilities is ongoing, the focus to date has been on technical implementations, with limited studies addressing how information is perceived and actioned by a human courier. In the research reported in this paper, a comparison study was conducted with 24 participants evaluating two examples of state-of-the-art navigation aids to support accurate (right time and place) and safe (right condition) navigation. Participants completed 4 navigation tasks, 2 whilst cycling and 2 whilst walking. The navigation devices under investigation were a handheld display presenting a map and instructions, and an in-sight monocular display presenting text and arrow instructions. Navigation was conducted in a real-world environment in which eye movements and device interaction were recorded using Tobii-Pro 2 eye-tracking glasses. The results indicate that the handheld device provided better support for accurate navigation (right time and place), with longer but less frequent gaze interactions and higher perceived usability. The in-sight display supported improved situation awareness, with a greater number of hazards acknowledged. The benefits and drawbacks of each device and the use of visual navigation support tools are discussed.

    A driving simulator study to explore the effects of text size on the visual demand of in-vehicle displays

    Modern vehicles increasingly utilise a large display within the centre console, often with touchscreen capability, to enable access to a wide range of driving and non-driving-related functionality. The text provided on such displays can vary considerably in size, yet little is known about the effects of different text dimensions on how drivers visually sample the interface while driving, and the potential implications for driving performance and user acceptance. A study is described in which sixteen people drove motorway routes in a medium-fidelity simulator and were asked to read text of varying sizes (9 mm, 8 mm, 6.5 mm, 5 mm, or 4 mm) from a central in-vehicle display. Pseudo-text was used as a stimulus to ensure that participants scanned the text in a consistent fashion that was unaffected by comprehension. There was no evidence of an effect of text size on the total time spent glancing at the display, but significant differences arose regarding how glances were distributed. Specifically, larger text sizes were associated with a larger number of relatively short glances, whereas smaller text led to a smaller number of longer glances. No differences were found in driving performance measures (speed, lateral lane position). Drivers overwhelmingly preferred the ‘compromise’ text sizes (6.5 mm and 8 mm). Results are discussed in relation to the development of large touchscreens within vehicles.
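
    The glance measures discussed above (total time on the display, glance count, glance duration) can be derived from a gaze-on-display trace. The sketch below assumes a fixed-rate boolean trace and is purely illustrative; it is not the study's analysis pipeline, and the 60 Hz example data are invented.

```python
from itertools import groupby
from typing import List, Tuple

def glance_stats(on_display: List[bool], sample_rate_hz: float) -> Tuple[int, float]:
    """Return (number of glances, mean glance duration in seconds).

    A glance is a maximal run of consecutive samples in which gaze is on the
    display. Input format and sampling rate are assumptions for illustration.
    """
    durations = [sum(1 for _ in run) / sample_rate_hz
                 for on, run in groupby(on_display) if on]
    if not durations:
        return 0, 0.0
    return len(durations), sum(durations) / len(durations)

# 60 Hz trace containing two glances toward the in-vehicle display.
trace = [False] * 30 + [True] * 45 + [False] * 60 + [True] * 20 + [False] * 10
count, mean_dur = glance_stats(trace, 60.0)
print(count, round(mean_dur, 2))  # 2 glances, ~0.54 s mean duration
```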

    Reading on small displays: reading performance and perceived ease of reading

    The present thesis explores and discusses reading continuous text on small screens, namely on mobile devices, and aims at identifying a model capturing those factors that most influence the perceived experience of reading. The thesis also provides input for the user interface and content creation industries, offering them some direction as to what to focus on when producing interfaces intended for reading, or text-based content that is likely to be read on a small display. The thesis starts with an overview of the special characteristics of reading on small screens and identifies, through existing literature, issues that may affect fluency and ease of reading on mobile devices. It then presents six experiments and studies on reading performance and perceived experience when reading on small screens. The mixed-methods research presented in the thesis showed that reading performance and the subjective perception of reading fluency and ease do not always correspond, and that perceived experience can strongly influence an end-user’s choice of whether or not to access text-based content on a small display device. The research shows that it is important to measure interface quality not only in terms of functionality, but also in terms of the user experience offered – and, ideally, to measure experience through more than one variable. The thesis offers a factor model (the mobile reading acceptance model) of those factors that collectively influence subjective experience when reading via small screens. The key factors in the model are visibility of text, overview of contents, navigation within the contents, and interaction with the interface/device. Further contributions include methods for cost-efficient user experience testing: a modified critical incident technique, and the use of optical character recognition to gauge the legibility aspect of user experience at early design iterations.
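
    The OCR-based legibility check mentioned above can be pictured as: render a passage at a candidate size, run OCR on the rendering, and use similarity to the original string as a rough legibility score. The sketch below is a minimal illustration, assuming Pillow and pytesseract are installed and that a TrueType font file exists at the given path; it is not the thesis' actual procedure.

```python
# Rough OCR-as-legibility-proxy sketch. Library choices, canvas size, and the
# font path are assumptions made for illustration only.
from difflib import SequenceMatcher
from PIL import Image, ImageDraw, ImageFont
import pytesseract

def legibility_score(text: str, font_path: str, size_px: int) -> float:
    """Render text at size_px, OCR the image, and return similarity to the original."""
    font = ImageFont.truetype(font_path, size_px)
    image = Image.new("L", (480, 4 * size_px), color=255)  # small-screen-sized canvas
    ImageDraw.Draw(image).text((8, 8), text, font=font, fill=0)
    recognised = pytesseract.image_to_string(image)
    return SequenceMatcher(None, text.strip(), recognised.strip()).ratio()

# Compare candidate sizes early in a design iteration.
for px in (10, 14, 18):
    print(px, legibility_score("Reading on small displays", "DejaVuSans.ttf", px))
```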

    Educational Handheld Video: Examining Shot Composition, Graphic Design, And Their Impact On Learning

    Formal features of video such as shot composition and graphic design can weigh heavily on the success or failure of educational videos. Many studies have assessed the proper use of these techniques given the psychological expectations that viewers have for video programming (Hawkins et al., 2002; Kenny, 2002; Lang, Zhou, Schwardtz, Bolls, & Potter, 2000; McCain, Chilberg, & Wakshlag, 1977; McCain & Repensky, 1972; Miller, 2005; Morris, 1984; Roe, 1998; Schmitt, Anderson, & Collins, 1999; Sherman & Etling, 1991; Tannenbaum & Fosdick, 1960; Wagner, 1953). This study examined formal features within the context of the newly emerging distribution method of viewing video productions on mobile handheld devices. Shot composition and graphic design were examined in the context of an educational video to measure whether they had any influence on user perceptions of learning and on learning outcomes. The two formal features were modified for display on 24-inch screens and on screens of 3.5 inches or smaller. Participants, drawn from a sample of 132 undergraduate college students, were shown one of the four modified treatments and then presented with a test to measure whether the modified formal features had any impact on learning outcomes. No significant differences were found between the treatment groups as a result of the manipulation of formal features.

    Designs for a general purpose wearable computer

    To provide input and control, wearable computer solutions must replace the familiar desktop interface devices of keyboard and mouse with specialized hardware. While successful wearable input solutions have been developed for domain-specific applications, a standard input interface for general purpose wearable computing has yet to emerge. The steep learning curves and unruly hardware of the solutions proposed thus far are among the factors keeping wearable computing out of the mainstream. This thesis proposes a new input and control approach that increases wearable computing usability by integrating several commonly available devices into a comprehensive system. The proposed system integrates commercial, off-the-shelf hardware with generalized software applications that increase the usability and general utility of a wearable computer. The hardware consists of a wearable computer, a clip-on microdisplay eyepiece, and a standard PDA running Pocket PC. Through a Bluetooth network, the PDA can wirelessly control the text input (keyboard) and pointer control (mouse) of the wearable computer. The software consists of two applications designed to provide easy access to new content and previously stored data. One application presents the user with a continuous scroll of new content which can be attended to at the user's discretion. The content is dynamically retrieved from online sources, and can range from news feeds and stock quotes to calendars and weather reports. New content can be added to the user's persistent digital store at any time. The second application, a private peer-to-peer data sharing program called the Tangle, was developed to fuse the user's multiple data sources (home or work computer, wearable computer, PDA) into a single, searchable repository. Tangle also provides easy access to the digital assets of other, trusted Tangle users, and makes it easy to add virtually any content that the user encounters while using the system to their persistent data store.
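
    The division of labour described above, with the handheld acting as a wireless keyboard and pointer for the wearable host, can be pictured as a small event relay. The sketch below uses a plain TCP socket rather than the thesis' Bluetooth stack, and the JSON-lines message format is invented for illustration; actual key and pointer injection on the host is left as a stub.

```python
# Minimal sketch of a handheld-to-wearable input relay (illustrative only).
import json
import socket

HOST, PORT = "0.0.0.0", 9999  # assumed values

def handle_event(event: dict) -> None:
    """Dispatch a received input event; a real system would inject it into the OS."""
    if event.get("type") == "key":
        print("inject key:", event["char"])
    elif event.get("type") == "pointer":
        print("move pointer by:", event["dx"], event["dy"])

def serve() -> None:
    """Accept one handheld connection and process one JSON event per line."""
    with socket.create_server((HOST, PORT)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile("r") as stream:
            for line in stream:
                handle_event(json.loads(line))

if __name__ == "__main__":
    serve()
```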

    Robot Navigation in Unseen Spaces using an Abstract Map

    Human navigation in built environments depends on symbolic spatial information which has unrealised potential to enhance robot navigation capabilities. Information sources such as labels, signs, maps, planners, spoken directions, and navigational gestures communicate a wealth of spatial information to the navigators of built environments; a wealth of information that robots typically ignore. We present a robot navigation system that uses the same symbolic spatial information employed by humans to purposefully navigate in unseen built environments with a level of performance comparable to humans. The navigation system uses a novel data structure called the abstract map to imagine malleable spatial models for unseen spaces from spatial symbols. Sensorimotor perceptions from a robot are then employed to provide purposeful navigation to symbolic goal locations in the unseen environment. We show how a dynamic system can be used to create malleable spatial models for the abstract map, and provide an open source implementation to encourage future work in the area of symbolic navigation. Symbolic navigation performance of humans and a robot is evaluated in a real-world built environment. The paper concludes with a qualitative analysis of human navigation strategies, providing further insights into how the symbolic navigation capabilities of robots in unseen built environments can be improved in the future. Comment: 15 pages, published in IEEE Transactions on Cognitive and Developmental Systems (http://doi.org/10.1109/TCDS.2020.2993855); see https://btalb.github.io/abstract_map/ for access to software.
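
    One way to picture the "malleable spatial model" built by a dynamic system is to treat each symbolic place as a point mass and each spatial cue as a spring with a preferred rest length, then let the layout relax. The sketch below follows that simple spring-and-damper reading with made-up places, constraints, and constants; it is not the paper's implementation (the linked repository provides that).

```python
import numpy as np

# Illustrative spring relaxation over symbolic places; all values are invented.
places = ["lobby", "room1", "room2"]
springs = [("lobby", "room1", 5.0), ("room1", "room2", 3.0), ("lobby", "room2", 7.0)]
idx = {name: i for i, name in enumerate(places)}

pos = np.random.default_rng(0).normal(size=(len(places), 2))  # imagined 2D layout
vel = np.zeros_like(pos)
k, damping, dt = 1.0, 0.8, 0.05

for _ in range(2000):
    force = np.zeros_like(pos)
    for a, b, rest in springs:
        i, j = idx[a], idx[b]
        delta = pos[j] - pos[i]
        dist = np.linalg.norm(delta) + 1e-9
        f = k * (dist - rest) * delta / dist  # Hooke's law along the connecting direction
        force[i] += f
        force[j] -= f
    vel = damping * vel + dt * force          # damped velocity update
    pos += dt * vel

for a, b, rest in springs:
    achieved = float(np.linalg.norm(pos[idx[b]] - pos[idx[a]]))
    print(f"{a}-{b}: target {rest} m, settled at {achieved:.2f} m")
```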

    Interaction in motion: designing truly mobile interaction

    The use of technology while being mobile now takes place in many areas of people’s lives and in a wide range of scenarios; for example, users cycle, climb, run, and even swim while interacting with devices. Conflict between locomotion and system use can reduce interaction performance and also the ability to move safely. We discuss the risks of such “interaction in motion”, which we argue make it desirable to design with locomotion in mind. To aid such design we present a taxonomy and framework based on two key dimensions: the relation of the interaction task to the locomotion task, and the amount that a locomotion activity inhibits the use of input and output interfaces. We accompany this with four strategies for interaction in motion. With this work, we ultimately aim to enhance our understanding of what being “mobile” actually means for interaction, and to help practitioners design truly mobile interactions.
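
    The two dimensions of the taxonomy lend themselves to a small data structure for classifying a design scenario. The Python sketch below encodes them as simple types; the level names and the example are illustrative assumptions rather than the paper's own terminology.

```python
from dataclasses import dataclass
from enum import Enum

class TaskRelation(Enum):
    """How the interaction task relates to the locomotion task (levels assumed)."""
    UNRELATED = 1
    SUPPORTS_LOCOMOTION = 2

class InterfaceInhibition(Enum):
    """How much the locomotion inhibits use of input/output interfaces (levels assumed)."""
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class InteractionInMotion:
    activity: str
    relation: TaskRelation
    inhibition: InterfaceInhibition

# Example classification: following navigation instructions while cycling.
cycling_navigation = InteractionInMotion("cycling", TaskRelation.SUPPORTS_LOCOMOTION,
                                         InterfaceInhibition.HIGH)
print(cycling_navigation)
```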