694 research outputs found

    Context-Aware Software

    With the advent of PDAs (Personal Digital Assistants), smart phones, and other forms of mobile and ubiquitous computers, our computing resources are increasingly moving off our desktops and into our everyday lives. However, the software and user interfaces for these devices are generally very similar to those of their desktop counterparts, despite the radically different and dynamic environments they face. We propose that, to better assist their users, such devices should be able to sense, react to, and utilise the user's current environment or context. That is, they should become context-aware. In this thesis we investigate context-awareness at three levels: user interfaces, applications, and supporting architectures/frameworks. To promote the use of context-awareness, and to aid its deployment in software, we have developed two supporting frameworks. The first is an application-oriented framework called stick-e notes. Based on an electronic version of the common Post-It Note, stick-e notes enable the attachment of any electronic resource (e.g. a text file, movie, or Java program) to any type of context (e.g. location, temperature, or time). The second framework seeks to provide more universal support for the capture, manipulation, and representation of context information. We call it the Context Information Service (CIS). It fills a role in context-aware software development similar to that of GUI libraries in user interface development. Our applications research explored how context-awareness can be exploited in real environments with real users. In particular, we developed a suite of PDA-based context-aware tools for fieldworkers. These were used extensively by a group of ecologists in Africa to record observations of giraffe and rhinos in a remote Kenyan game reserve. These tools also provided the foundations for our HCI work, in which we developed the concept of the Minimal Attention User Interface (MAUI). The aim of the MAUI is to reduce the attention the user needs to operate a device by carefully selecting input/output modes that are harmonious with their tasks and environment. To evaluate our ideas and applications, a field study was conducted in which over forty volunteers used our system for data collection activities over the course of a summer season at the Kenyan game reserve. The PDA-based tools were unanimously preferred to the paper-based alternatives, and the context-aware features were cited as particular reasons for preferring them. In summary, this thesis presents two frameworks to support context-aware software, a set of applications demonstrating how context-awareness can be utilised in the "real world", and a set of HCI guidelines and principles that help in creating user interfaces that fit their context of use.
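
    The stick-e note abstraction is straightforward to picture in code. The sketch below is a minimal illustration of the idea under our own assumptions, not the thesis's actual API: a note pairs an arbitrary electronic resource with a context trigger (here a location circle and an optional time window), and a matching routine fires notes whose context conditions hold. All names and fields are hypothetical.

    ```python
    from dataclasses import dataclass
    from typing import Optional, Tuple
    import math

    # Hypothetical sketch of a stick-e note: an electronic resource
    # attached to a context (here: a location circle and a time window).
    @dataclass
    class StickENote:
        resource: str                      # e.g. path to a text file, movie, or program
        location: Tuple[float, float]      # (latitude, longitude) the note is attached to
        radius_m: float = 50.0             # trigger radius in metres
        time_window: Optional[Tuple[int, int]] = None  # (start_hour, end_hour), or None

        def matches(self, lat: float, lon: float, hour: int) -> bool:
            """True if the current context satisfies the note's trigger."""
            # Equirectangular approximation is adequate at stick-e-note scales.
            dlat = math.radians(lat - self.location[0])
            dlon = math.radians(lon - self.location[1]) * math.cos(math.radians(lat))
            dist_m = 6_371_000 * math.hypot(dlat, dlon)
            in_place = dist_m <= self.radius_m
            in_time = self.time_window is None or (self.time_window[0] <= hour < self.time_window[1])
            return in_place and in_time

    notes = [StickENote("rhino_sighting_form.txt", (0.0167, 36.9), radius_m=200.0)]
    triggered = [n.resource for n in notes if n.matches(0.0165, 36.9002, 10)]
    print(triggered)  # resources whose attached context currently matches
    ```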

    Augmented Reality and Context Awareness for Mobile Learning Systems

    Learning is one of the most interactive processes that humans practise. The level of interaction between instructors and their audience has the greatest effect on the outcome of the learning process. Recent years have witnessed the introduction of e-learning (electronic learning), followed by m-learning (mobile learning). While researchers have studied e-learning and m-learning to devise a framework that can be followed to provide the best possible outcome of the learning process, m-learning is still being studied in the shadow of e-learning. Such an approach is valid only to a limited extent: both aim to provide educational material over electronic channels, but m-learning has more room for user interaction because of the nature of the devices and their capabilities. The objective of this work is to devise a framework that utilises augmented reality and context awareness in m-learning systems to increase their level of interaction and, hence, their usability. The proposed framework was implemented and deployed on an iPhone device. The implementation focused on a specific course: its material demonstrated the use of augmented reality, and the flow of the material utilised context awareness. Furthermore, a software prototype application for smart phones, to assess usability issues of m-learning applications, was designed and implemented. This prototype application was developed using the Java language and the Android software development kit, so that the recommended guidelines of the proposed framework were maintained. A questionnaire survey was conducted at the University with approximately twenty-four undergraduate computer science students. Twenty-four identical smart phones were used to evaluate the developed prototype in terms of ease of use, ease of navigating the application content, user satisfaction, attractiveness, and learnability. Several validation tests were conducted comparing the proposed augmented reality m-learning against m-learning alone. Generally, the respondents rated m-learning with augmented reality as superior to m-learning alone.
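
    As a rough, hypothetical illustration of what "the flow of the material utilised context awareness" could mean in practice, the sketch below picks the next learning item from sensed context (place and available time). It is our own example, not the framework or course described in the thesis.

    ```python
    from dataclasses import dataclass

    # Hypothetical context-aware selection of the next m-learning item:
    # prefer material tagged for the learner's current place and short
    # enough for the time they have available.
    @dataclass
    class LearningItem:
        title: str
        place_tag: str      # where the item is best studied, e.g. "lab", "anywhere"
        minutes: int        # estimated time to complete

    def next_item(items, place: str, minutes_free: int):
        candidates = [i for i in items if i.minutes <= minutes_free]
        # Items matching the current place rank first; shorter items break ties.
        candidates.sort(key=lambda i: (i.place_tag != place, i.minutes))
        return candidates[0] if candidates else None

    course = [
        LearningItem("AR marker demo", "lab", 15),
        LearningItem("Reading: context models", "anywhere", 30),
    ]
    print(next_item(course, place="lab", minutes_free=20).title)  # -> AR marker demo
    ```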

    A Mobile Healthcare Solution for Ambient Assisted Living Environments

    Elderly people need regular healthcare services and often depend on physicians' personal attendance. This dependence raises several issues for elders, such as the need to travel and the need for mobility support. Ambient Assisted Living (AAL) and Mobile Health (m-Health) services and applications offer good healthcare solutions that can be used both indoors and in mobile environments. This dissertation presents an AAL solution for mobile environments. It includes biofeedback monitoring of elderly people, using body sensors for data collection and offering support for remote monitoring. The sensors are attached to the human body and measure signals such as the electrocardiogram, blood pressure, and temperature. They collect data while providing comfort and mobility, and guarantee efficiency and data confidentiality. Periodic collection of patients' data is important to gather more accurate measurements and to avoid common risky situations. For example, a physical fall may be considered natural over a life span, but it is far more dangerous for senior people: in extreme cases a fall can end a life, or cause fractures and injuries, whereas early detection, through an accelerometer for example, can avert a tragic outcome. The proposed solution monitors elderly people, storing the collected data on a personal computer, tablet, or smartphone via Bluetooth. The application analyses possible health-condition warnings based on supporting charts and real-time bio-signal monitoring, and is able to warn both users and caretakers. These mobile devices are also used to collect data, allowing storage and later consultation. The proposed system is evaluated, demonstrated, and validated through a prototype, and it is ready for use. The Texas Instruments eZ430-Chronos watch, which can store information for later analysis, and the Shimmer sensors, which allow the creation of a personalised application capable of measuring the patient's biosignals in real time, are described throughout this dissertation.
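
    Accelerometer-based fall detection of the kind alluded to above is commonly implemented as a free-fall-then-impact test on the acceleration magnitude. The sketch below illustrates that generic idea with invented thresholds; it is not the dissertation's algorithm.

    ```python
    import math

    # Generic fall-detection sketch: a fall typically shows up as a brief
    # near-free-fall (magnitude well below 1 g) followed within a short
    # window by an impact spike (magnitude well above 1 g).
    G = 9.81
    FREE_FALL = 0.4 * G      # hypothetical lower threshold
    IMPACT = 2.5 * G         # hypothetical upper threshold
    WINDOW = 10              # samples between free-fall and impact (0.2 s at 50 Hz)

    def detect_fall(samples):
        """samples: list of (ax, ay, az) accelerometer readings in m/s^2."""
        mags = [math.sqrt(ax*ax + ay*ay + az*az) for ax, ay, az in samples]
        for i, m in enumerate(mags):
            if m < FREE_FALL:
                # Look for an impact shortly after the free-fall phase.
                if any(m2 > IMPACT for m2 in mags[i + 1:i + 1 + WINDOW]):
                    return True
        return False

    # Quiet standing, a drop, then an impact spike:
    trace = [(0, 0, 9.8)] * 5 + [(0, 0, 1.0)] * 3 + [(0, 25.0, 9.8)]
    print(detect_fall(trace))  # True
    ```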

    Spatial Augmented Reality Using Structured Light Illumination

    Spatial augmented reality is a particular kind of augmented reality technique that uses a projector to blend real objects with virtual content. Coincidentally, structured light illumination, as a means of 3D shape measurement, makes use of a projector as part of its system as well: the projector generates the cues needed to establish the correspondence between the 2D image coordinate system and the 3D world coordinate system. It is therefore appealing to build a system that can carry out the functionalities of both spatial augmented reality and structured light illumination. In this dissertation, we present all the hardware platforms we developed and their related applications in spatial augmented reality and structured light illumination. The first is a dual-projector structured light 3D scanning system in which two synchronized projectors operate simultaneously; consequently, it outperforms the traditional structured light 3D scanning system, which includes only one projector, in terms of the quality of 3D reconstructions. Second, we introduce a modified dual-projector structured light 3D scanning system aimed at detecting and resolving multi-path interference. Third, we propose an augmented reality face-paint system that detects a human face in a scene and paints the face with any favorite colors by projection; additionally, the system incorporates a second camera to realize 3D position tracking by exploiting the principle of structured light illumination. Finally, a structured light 3D scanning system with its own built-in machine vision camera is presented as future work. So far the standalone camera has been completed, built up from a bare CMOS sensor. With this customized camera, we can achieve high-dynamic-range imaging and better synchronization between the camera and projector, but the full system, which includes an HDMI transmitter, a structured light pattern generator, and synchronization logic, has yet to be completed owing to the lack of a well-designed high-speed PCB.
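
    The correspondence step described above can be made concrete with one common pattern strategy, binary Gray codes (the dissertation may well use a different coding scheme). In this hypothetical sketch, each camera pixel recovers the projector column that illuminated it by thresholding a stack of captured stripe images and converting the Gray code back to binary.

    ```python
    import numpy as np

    # Hypothetical Gray-code decoding for structured light: given N captured
    # images of binary stripe patterns (plus all-white/all-black references),
    # recover the projector column seen by each camera pixel.
    def decode_gray(patterns, white, black):
        """patterns: list of HxW float arrays, MSB first; white/black: HxW refs."""
        thresh = (white + black) / 2.0
        bits = [(img > thresh).astype(np.uint32) for img in patterns]

        # Gray code -> binary: b0 = g0, b_i = b_{i-1} XOR g_i.
        binary = bits[0]
        column = bits[0].copy()
        for g in bits[1:]:
            binary = binary ^ g
            column = (column << 1) | binary
        return column  # projector column index per camera pixel

    # Tiny synthetic check: encode columns 0..7 with 3 Gray-code patterns.
    cols = np.arange(8, dtype=np.uint32)
    gray = cols ^ (cols >> 1)
    patterns = [np.where((gray >> (2 - b)) & 1, 1.0, 0.0)[None, :] for b in range(3)]
    col = decode_gray(patterns, np.ones((1, 8)), np.zeros((1, 8)))
    print(col)  # [[0 1 2 3 4 5 6 7]]
    ```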

    TiFEE : an input event-handling framework with touchless device support

    Integrated Master's thesis. Informatics and Computing Engineering. Universidade do Porto, Faculdade de Engenharia. 201

    Multimodal, Embodied and Location-Aware Interaction

    This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide continuous, rich, and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory, and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric, body-based case and a large-scale, exocentric, 'world-based' case. BodySpace is a gesture-based application which utilises multiple sensors and pattern recognition to enable the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction. GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying the uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling, and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriately displaying the full distribution of potential future user positions with respect to sites-of-interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user, for varying trajectory width and context. We demonstrate the possibility of creating a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
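
    The Monte Carlo propagation of position uncertainty that GpsTunes performs can be sketched generically: sample many plausible user states, step them forward under noisy heading and speed, and report the fraction that ends up near a site of interest. The code below is a minimal illustration with invented noise parameters, not the system's actual model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical Monte Carlo sketch of position-uncertainty propagation:
    # sample particles around the GPS fix, walk them forward with noisy
    # heading/speed, and measure how many end near a site of interest.
    N = 5000
    pos = rng.normal(loc=[0.0, 0.0], scale=5.0, size=(N, 2))   # GPS noise, metres
    heading = rng.normal(np.pi / 4, 0.3, size=N)               # uncertain intention, rad
    speed = rng.normal(1.4, 0.2, size=N)                       # walking speed, m/s

    for _ in range(30):  # propagate 30 one-second steps
        step = np.stack([np.cos(heading), np.sin(heading)], axis=1) * speed[:, None]
        pos += step
        heading += rng.normal(0.0, 0.05, size=N)  # heading drifts as the user walks

    site = np.array([30.0, 30.0])                # site-of-interest, metres from origin
    near = np.linalg.norm(pos - site, axis=1) < 10.0
    print(f"P(user reaches site) ~ {near.mean():.2f}")
    ```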
