    An investigation into the problems of user oriented interfaces in mobile applications

    The purpose of this paper is the analysis and evaluation of mobile interface design. The study examined a random sample of 55 user interfaces for mobile applications, quantified the components of each user interface, and represented the results graphically. The evaluation produced the following results: first, applications with fewer pages are better; second, reducing the number of navigation bars, buttons and menus in mobile user interfaces frees additional screen space, making the application easier to use while maintaining context; third, a diverse set of tools ensures good interaction with the user. Finally, a range of results for user interface design is presented, together with ideas about what should be kept in mind when designing interfaces for mobile applications.
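
    To make the quantification step concrete, here is a minimal sketch of tallying interface components across a sample of applications. The component names and counts are hypothetical placeholders, not data from the paper's 55-interface sample.

```python
# Minimal sketch of quantifying UI components across sampled interfaces.
# The component names and tallies below are hypothetical.
from collections import Counter

# Hypothetical tallies for three of the sampled interfaces.
interfaces = [
    {"pages": 4, "nav_bars": 1, "buttons": 12, "menus": 2},
    {"pages": 9, "nav_bars": 2, "buttons": 30, "menus": 4},
    {"pages": 3, "nav_bars": 1, "buttons": 8,  "menus": 1},
]

totals = Counter()
for ui in interfaces:
    totals.update(ui)  # add this interface's counts into the running totals

for component, count in sorted(totals.items()):
    print(f"{component}: {count} across {len(interfaces)} interfaces")
```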

    User-Accustomed Interaction: An Usability Approach for Designing Mobile Application for Novice and Expert Users

    The development of smartphone applications is prevailing globally, including in underserved communities that consist of a huge group of novice users. In spite of the growing number of novice users, we hardly consider usability for users with varying expertise levels when we evaluate performance and satisfaction with mobile applications. In this study, we argue that it is not suitable to design one interface for all users of progressively varying communities. Based on theories in design science research, we propose a user-accustomed approach to adapting mobile applications that integrates three types of interaction elements, namely localization, structural navigation and illustration. In an investigation of the proposed approach on a mobile application, we empirically demonstrated the effects of user-accustomed interaction techniques on the performance and satisfaction of novice and expert users. The findings provide significant theoretical and practical implications for the design and implementation of user interfaces in mobile applications.
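
    A hedged sketch of how such user-accustomed adaptation could look in code: interaction elements are enabled according to an expertise proxy. The UserProfile class, the task-count threshold and the element selection rule are illustrative assumptions, not the authors' implementation.

```python
# Illustrative selection of interaction elements (localization, structural
# navigation, illustration) by expertise level; thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class UserProfile:
    completed_tasks: int  # crude proxy for expertise; an assumption

def interaction_elements(user: UserProfile) -> list[str]:
    """Return the interaction elements to enable for this user."""
    if user.completed_tasks < 10:  # treat as a novice (assumed cutoff)
        # Novices get all three supports described in the abstract.
        return ["localization", "structural_navigation", "illustration"]
    # Experts get a leaner interface with navigation support only.
    return ["structural_navigation"]

print(interaction_elements(UserProfile(completed_tasks=3)))
print(interaction_elements(UserProfile(completed_tasks=42)))
```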

    Multimodal, Embodied and Location-Aware Interaction

    This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric body-based case and a large-scale, exocentric `world-based' case. BodySpace is a gesture-based application which utilises multiple sensors and pattern recognition, enabling the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction. GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying the uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling, and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites-of-interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user for varying trajectory width and context. We demonstrate the possibility of creating a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
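
    The uncertainty propagation described for GpsTunes can be sketched as a simple Monte Carlo step: sample many plausible future positions from noisy position and heading estimates, then score them against a site of interest. All noise parameters, the constant-speed motion model and the site location below are assumed values, not those of the actual system.

```python
# Illustrative Monte Carlo propagation of position uncertainty: draw many
# plausible futures from noisy GPS and heading estimates, then report the
# fraction that land near a site of interest. Parameters are assumptions.
import math
import random

def propagate(pos, heading, speed, dt=1.0, n=1000,
              pos_sigma=5.0, heading_sigma=0.2):
    """Draw n samples of the user's position dt seconds ahead."""
    samples = []
    for _ in range(n):
        h = random.gauss(heading, heading_sigma)  # uncertain heading (rad)
        x = random.gauss(pos[0], pos_sigma) + speed * dt * math.cos(h)
        y = random.gauss(pos[1], pos_sigma) + speed * dt * math.sin(h)
        samples.append((x, y))
    return samples

def prob_near(samples, site, radius=10.0):
    """Fraction of sampled futures within radius of the site of interest."""
    return sum(math.dist(s, site) <= radius for s in samples) / len(samples)

samples = propagate(pos=(0.0, 0.0), heading=0.5, speed=1.4)
print(prob_near(samples, site=(15.0, 8.0)))
```

    Displaying the whole sample set, rather than a single best guess, is what allows the interface to convey the full distribution of potential future positions mentioned in the abstract.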

    Natural interaction framework for pedestrian navigation systems on mobile devices

    Mobile Augmented Reality applications based on navigation frameworks try to promote interaction beyond the desktop by employing wearable sensors, which collect the user's position, orientation or diverse types of activity. Most navigation frameworks track the location and heading of the user in the global coordinate frame using Global Positioning System (GPS) data. On the other hand, researchers in the wearable computing area have studied angular data of human body segments in the local coordinate frame using inertial orientation trackers. We propose a combination of the global and local coordinate frame approaches and provide a context-aware interaction framework for mobile devices that seamlessly changes Graphical User Interfaces (GUIs) for pedestrians wandering in urban environments. The system is designed and tested on a Personal Digital Assistant (PDA) based handheld prototype fitted with a GPS receiver and an inertial orientation tracker. It introduces a method to estimate the orientation of a mobile user's hand. The recognition algorithm is based on state transitions triggered by time-line analysis of the pitch angle and angular velocity of the orientation tracker. The prototype system can successfully differentiate between three postures. We associated each posture with a different context of interest for pedestrian navigation systems: investigation, navigation and idle. Thus, we introduce the idea that once orientation trackers become part of mobile computers, they can be used to create natural interaction techniques with mobile computers. The prototype was tested successfully in two urban environments: the Sabanci University campus area and the 9th International Istanbul Biennial venues in Beyoglu, Istanbul.
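
    The posture recognition idea can be illustrated with a small state-transition sketch over pitch angle and angular velocity. The numeric thresholds and the mapping to the three contexts (investigation, navigation, idle) are assumptions for illustration; the paper's actual time-line analysis is not reproduced here.

```python
# Sketch of a state-transition recogniser over pitch angle and angular
# velocity; thresholds and context mapping are illustrative assumptions.
def classify_posture(pitch_deg, angular_vel_dps):
    """Map one (pitch, angular velocity) sample to a posture/context."""
    if abs(angular_vel_dps) > 60:
        return "transition"     # device is being moved; no stable posture
    if pitch_deg < -60:
        return "idle"           # hand hanging down by the side
    if pitch_deg > 30:
        return "investigation"  # device raised toward the face
    return "navigation"         # device held roughly flat

state = None
for pitch, vel in [(-70, 5), (-10, 3), (40, 2), (45, 80)]:
    posture = classify_posture(pitch, vel)
    if posture != "transition" and posture != state:
        state = posture         # a state transition is triggered
        print("entered", state)
```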

    A multi-modal person perception framework for socially interactive mobile service robots

    In order to meet the increasing demands of mobile service robot applications, a dedicated perception module is an essential requirement for interaction with users in real-world scenarios. In particular, multi-sensor fusion and human re-identification are recognized as active research fronts. Through this paper we contribute to the topic and present a modular detection and tracking system that models the position and additional properties of persons in the surroundings of a mobile robot. The proposed system introduces a probability-based data association method that, besides position, can incorporate face and color-based appearance features in order to realize re-identification of persons when tracking gets interrupted. The system combines the results of various state-of-the-art image-based detection systems for person recognition, person identification and attribute estimation. This allows a stable estimate of a mobile robot's user, even in complex, cluttered environments with long-lasting occlusions. In our benchmark, we introduce a new measure for tracking consistency and show the improvements when face and appearance-based re-identification are combined. The tracking system was applied in a real-world application with a mobile rehabilitation assistant robot in a public hospital. The estimated states of persons are used for user-centered navigation behaviors, e.g., guiding or approaching a person, but also for realizing socially acceptable navigation in public environments.
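
    A hedged sketch of probability-based data association in this spirit: position, face and colour-appearance cues are fused into a single matching score between a detection and an existing track. The weights, the Gaussian position likelihood and the similarity fields are placeholders, not the paper's actual formulation.

```python
# Illustrative fusion of position, face and colour cues into one matching
# score between a detection and a track; weights and models are assumptions.
import math

def position_likelihood(track_pos, det_pos, sigma=0.5):
    """Gaussian likelihood of the detection given the track position."""
    d2 = sum((a - b) ** 2 for a, b in zip(track_pos, det_pos))
    return math.exp(-d2 / (2 * sigma ** 2))

def association_score(track, detection,
                      w_pos=0.6, w_face=0.25, w_color=0.15):
    """Weighted combination of the three cues; higher means better match."""
    return (w_pos * position_likelihood(track["pos"], detection["pos"])
            + w_face * detection.get("face_sim", 0.0)     # face similarity in [0, 1]
            + w_color * detection.get("color_sim", 0.0))  # colour-histogram similarity

track = {"pos": (1.0, 2.0)}
det = {"pos": (1.2, 2.1), "face_sim": 0.8, "color_sim": 0.6}
print(association_score(track, det))
```

    Because the appearance terms contribute even when the position term is weak, a score of this shape lets a track be re-acquired after a long occlusion, which is the re-identification behaviour the abstract describes.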

    Accessible user interface support for multi-device ubiquitous applications: architectural modifiability considerations

    The market for personal computing devices is rapidly expanding from the PC to mobile, home entertainment systems, and even the automotive industry. When developing software targeting such ubiquitous devices, the balance between development costs and market coverage has turned out to be a challenging issue. With the rise of Web technology and the Internet of Things, ubiquitous applications have become a reality. Nonetheless, the diversity of presentation and interaction modalities still drastically limits the number of targetable devices and the accessibility toward end users. This paper presents webinos, a multi-device application middleware platform founded on the Future Internet infrastructure. To this end, the platform's architectural modifiability considerations are described and evaluated as a generic enabler for supporting applications which are executed in ubiquitous computing environments.