Mobile three-dimensional city maps
Maps are visual representations of environments and the objects within them, depicting their spatial relations. They are mainly used in navigation, where they act as external information sources, supporting observation and decision-making processes. Map design, or the art-science of cartography, has led to simplification of the environment: the naturally three-dimensional environment has been abstracted into a two-dimensional representation populated with simple geometric shapes and symbols. However, such abstract representation requires map-reading ability.
Modern technology has reached the level where maps can be expressed in digital form, with selectable, scalable, browsable and updatable content. Maps may no longer even be limited to two dimensions, nor to an abstract form. When a virtual environment is created from the real world, a 3D map is born. Given a realistic representation, would the user no longer need to interpret the map, and instead be able to navigate in an inherently intuitive manner? To answer this question, one needs a mobile test platform. But can a 3D map, a resource-hungry virtual environment, exist on such resource-limited devices?
This dissertation approaches the technical challenges posed by mobile 3D maps in a constructive manner, identifying the problems, developing solutions and providing answers by creating a functional system. The case focuses on urban environments. First, optimization methods for rendering large, static 3D city models are researched, and a solution suited for mobile 3D maps is provided by combining visibility culling, level-of-detail management and out-of-core rendering. Then, the potential of mobile networking is addressed by developing efficient and scalable methods for progressive content downloading and dynamic entity management. Finally, a 3D navigation interface is developed for mobile devices, and the research is validated with measurements and field experiments.
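The rendering pipeline described above (cull what the camera cannot see, then pick a level of detail per remaining model) can be sketched in a few lines. This is a minimal illustration, not the dissertation's implementation: the 2D frustum test, the fixed distance bands, and all names (`Building`, `frame`, etc.) are assumptions for the example, and a real out-of-core renderer would stream missing meshes from storage rather than assume they are resident in memory.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Building:
    x: float            # world position (metres)
    y: float
    radius: float       # bounding-sphere radius
    lods: list = field(default_factory=list)  # mesh names, lods[0] = finest

def visible(b, cam_x, cam_y, heading, fov, far):
    """Coarse 2D view-frustum test against the building's bounding circle."""
    dx, dy = b.x - cam_x, b.y - cam_y
    dist = math.hypot(dx, dy)
    if dist - b.radius > far:                            # beyond the far plane
        return False
    angle = math.atan2(dy, dx) - heading
    angle = (angle + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    half = fov / 2 + b.radius / max(dist, 1e-6)          # widen by angular size
    return abs(angle) <= half

def select_lod(b, cam_x, cam_y, band=100.0):
    """Drop to a coarser mesh every `band` metres; clamp to the coarsest."""
    dist = math.hypot(b.x - cam_x, b.y - cam_y)
    return b.lods[min(int(dist // band), len(b.lods) - 1)]

def frame(buildings, cam_x, cam_y, heading, fov=math.radians(60), far=500.0):
    """One render pass: cull, then choose a level of detail per visible model.
    An out-of-core renderer would fetch non-resident meshes at this point."""
    return [select_lod(b, cam_x, cam_y) for b in buildings
            if visible(b, cam_x, cam_y, heading, fov, far)]
```

For example, with the camera at the origin looking along +x, a nearby building yields its finest mesh, a distant one a coarse mesh, and a building behind the camera is culled entirely.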
It is found that near-realistic mobile 3D city maps can exist on current mobile phones, and rendering rates are excellent on devices with 3D hardware. Such 3D maps can also be transferred and rendered on the fly over cellular networks, sufficiently fast for navigation use. Real-world entities such as pedestrians or public transportation can be tracked and presented in a scalable manner. Mobile 3D maps are useful for navigation, but their usability depends highly on interaction methods: the potentially intuitive representation does not imply, for example, faster navigation than with a professional 2D street map. In addition, the physical interface limits the usability.
This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer
We address the problem of controlling the workspace of a 3-DoF mobile robot. In a human-robot shared space, robots should navigate in a human-acceptable way according to the users' demands. For this purpose, we employ virtual borders, i.e. non-physical borders, that allow a user to restrict the robot's workspace. To this end, we propose an interaction method based on a laser pointer to intuitively define virtual borders. This interaction method uses a previously developed framework based on robot guidance to change the robot's navigational behavior. Furthermore, we extend this framework to increase its flexibility by considering different types of virtual borders, i.e. polygons and curves separating an area. We evaluated our method with 15 non-expert users concerning correctness, accuracy and teaching time. The experimental results revealed a high accuracy and a teaching time linear in the border length, while the borders were correctly incorporated into the robot's navigational map. Finally, our user study showed that non-expert users can employ our interaction method.

Comment: Accepted at 2019 Third IEEE International Conference on Robotic Computing (IRC), supplementary video: https://youtu.be/lKsGp8xtyI
Mobile Agents for Mobile Tourists: A User Evaluation of Gulliver's Genie
How mobile computing applications and services may be best designed, implemented and deployed remains the subject of much research. One alternative approach to developing software for mobile users that is receiving increasing attention from the research community is one based on intelligent agents. Recent advances in mobile computing technology have made such an approach feasible. We present an overview of the design and implementation of an archetypal mobile computing application, namely an electronic tourist guide. This guide is unique in that it comprises a suite of intelligent agents that conform to the strong intentional stance. However, this paper is primarily concerned with the results of detailed user evaluations conducted on this system. Within the literature, comprehensive evaluations of mobile context-sensitive systems are sparse, and this paper therefore seeks, in part, to address this deficiency.
Testing Two Tools for Multimodal Navigation
The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth hold a potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also get guidance to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, non-speech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing users to interact directly with the surrounding environment.
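The core idea of turning a pointing gesture into a database query can be sketched as a bearing filter: compute the bearing from the user's GPS position to each point of interest and keep those within a sector around the compass heading. This is a minimal sketch, not the paper's implementation; the function names, the +/- 15° sector width, and the sample coordinates are assumptions for illustration.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def pointed_at(user_lat, user_lon, compass_deg, pois, half_sector=15.0):
    """Return names of POIs whose bearing lies within +/- half_sector degrees
    of the device's compass heading, i.e. roughly where the user is pointing."""
    hits = []
    for name, lat, lon in pois:
        diff = (bearing_deg(user_lat, user_lon, lat, lon)
                - compass_deg + 180.0) % 360.0 - 180.0   # signed difference
        if abs(diff) <= half_sector:
            hits.append(name)
    return hits
```

A user standing still and pointing the device due north would match only POIs roughly north of them; a sweep gesture could be handled by widening the sector or accumulating hits over a range of headings.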
User preferences on route instruction types for mobile indoor route guidance
Adaptive mobile wayfinding systems are being developed to ease wayfinding in indoor environments. They present wayfinding information to the user, adapted to the context. Wayfinding information can be communicated using different types of route instructions, such as text, photos, videos, symbols or a combination thereof. The need for a particular type of route instruction may vary between decision points, for example because of their complexity. Furthermore, these needs may differ with user characteristics (e.g., age, gender, level of education). To determine this need for information, an online survey was conducted in which participants rated 10 different route instruction types at several decision points in a case-study building. Results show that the types with additional text were preferred over those without text. The photo instructions, combined with text, generally received the highest ratings, especially from first-time visitors. 3D simulations were appreciated at complex decision points and by younger people. When text (with symbols) is considered as a route instruction type, it is best used for the start or end instruction.
Current Practices for Product Usability Testing in Web and Mobile Applications
Software usability testing is a key methodology that ensures applications are intuitive and easy to use for the target audience. Usability testing has direct benefits for companies, as usability improvements are often fundamental to the success of a product. A standard usability test study includes the following five steps: obtain suitable participants, design test scripts, conduct usability sessions, interpret test outcomes, and produce recommendations. Due to the increasing demand for more usable applications, effective techniques to develop usable products, as well as technologies to improve usability testing, have been widely adopted. However, as companies develop more cross-platform web and mobile apps, traditional single-platform usability testing falls short of ensuring a uniform user experience. In this report, a new strategy is proposed to promote a consistent user experience across all application versions and platforms. This method integrates the testing of different application versions, e.g., the website, mobile app, and mobile website. Participants are recruited with a better-defined criterion according to their preferred devices. The usability session is conducted iteratively on several different devices, and the test results of individual application versions are compared on a per-device basis to improve the test outcomes. This strategy is expected to extend current practices for usability testing by incorporating cross-platform consistency of software versions on most devices.
Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality
We address the problem of interactively controlling the workspace of a mobile robot to ensure human-aware navigation. This is especially relevant for non-expert users living in human-robot shared spaces, e.g. home environments, since they want to keep control of their mobile robots, such as vacuum cleaning or companion robots. Therefore, we introduce virtual borders that are respected by the robot while performing its tasks. For this purpose, we employ an RGB-D Google Tango tablet as a human-robot interface, in combination with an augmented reality application, to flexibly define virtual borders. We evaluated our system with 15 non-expert users concerning accuracy, teaching time and correctness, and compared the results with baseline methods based on visual markers and a laser pointer. The experimental results show that our method features an equally high accuracy while significantly reducing the teaching time compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders.

Comment: Accepted at 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), supplementary video: https://youtu.be/oQO8sQ0JBR