184 research outputs found

    Proceedings of the 4th Workshop on Interacting with Smart Objects 2015

    Get PDF
    These are the Proceedings of the 4th IUI Workshop on Interacting with Smart Objects. Objects that we use in our everyday lives are expanding beyond their restricted interaction capabilities and now provide functionality that goes far beyond their original purpose. They feature computing capabilities and are thus able to capture, process and store information and to interact with their environments, turning them into smart objects.

    Bridging the physical and virtual with mobile media

    No full text
    This thesis examines how mobile technologies can contribute towards bridging physical and virtual space through interactive, location-specific media experiences. An analysis of contextual discussions and precedents shows a discord between physical and virtual space usage, as the two are often utilised in different situational settings. This thesis therefore develops a mobile application as a wider investigation into how the physical setting and live data can be used to better link contextualised content between the physical and the virtual in urban areas. It explores this by creating a location-specific media experience in which the limits of the physical space are incorporated as boundaries in the virtual environment. Further to this, live data is used to influence the dynamics of the environment so that its conditions reflect the physical world. These investigations are realised with Augmented Reality, resulting in an application that allows the viewer to physically explore urban space within an interactive mobile media experience. This approach offers a new perspective on urban space exploration and mobile media design, highlighting that contextual significance in media experiences is an important aspect to consider and design for. Ultimately, such approaches may lead to larger narratives and experiences encompassing entire cities, or other diverse geographies.
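
    The following is a minimal illustrative sketch of the two ideas above: treating the limits of a physical site as a boundary for the virtual environment, and letting a live data feed drive the dynamics of the experience. The polygon coordinates, the wind reading and all function names are assumptions made for illustration; they are not taken from the thesis or its application.

```python
# Sketch: clamp virtual content to a geofenced physical site and scale an
# environmental effect by a live data feed. All values are illustrative.
from typing import List, Tuple

Point = Tuple[float, float]  # (longitude, latitude)

def inside(polygon: List[Point], p: Point) -> bool:
    """Ray-casting point-in-polygon test for the physical site boundary."""
    x, y = p
    hit = False
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

def wind_strength(live_wind_mps: float) -> float:
    """Map a live weather reading onto a 0..1 animation intensity."""
    return max(0.0, min(1.0, live_wind_mps / 20.0))

site = [(-2.587, 51.454), (-2.585, 51.454), (-2.585, 51.456), (-2.587, 51.456)]
user = (-2.586, 51.455)
if inside(site, user):
    print("render AR scene, wind intensity", wind_strength(live_wind_mps=7.5))
else:
    print("outside the experience boundary")
```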

    GECAF : a generic and extensible framework for developing context-aware smart environments

    Get PDF
    The new pervasive and context-aware computing models have resulted in the development of modern environments which are responsive to the changing needs of the people who live, work or socialise in them. These are called smart environments, and they employ a high degree of intelligence to consume and process information in order to provide services to users in accordance with their current needs. To achieve this level of intelligence, such environments collect, store, represent and interpret a vast amount of information which describes the current context of their users. Since context-aware systems differ in the way they interact with users, interpret the context of their entities and decide the actions they need to take, each individual system is developed in its own way with no common architecture. This makes the development of every context-aware system a challenge. To address this issue, a new and generic framework has been developed which is based on the Pipe-and-Filter software architectural style and can be applied to many systems. This framework uses a number of independent components that represent the usual functions of any context-aware system. These components can be configured in different arrangements to suit various systems' requirements. The framework and architecture use a model that represents raw context information as a function of context primitives, referred to as Who, When, Where, What and How (4W1H). Historical context information is also defined and added to the model to predict some actions in the system. The framework uses XML to represent the model and to describe the sequence in which context information is processed by the architecture's components (or filters). Moreover, a mechanism for describing interpretation rules for the purpose of context reasoning is proposed and implemented. A set of guidelines is provided for both the deployment and rule languages to help application developers construct and customise their own systems using the various components of the new framework. To test and demonstrate the functionality of the generic architecture, a smart classroom environment has been adopted as a case study. An evaluation of the new framework has also been conducted using two methods: quantitative and case-study-driven evaluation. The quantitative method used information obtained from reviewing the literature, which was then analysed and compared with the new framework in order to verify the completeness of the framework's components for different situations. In the case-study method, the new framework was applied in the implementation of different scenarios of well-known systems; this method was used to verify the applicability and generic nature of the framework. As an outcome, the framework is shown to be extensible, with a high degree of reusability and adaptability, and can be used to develop various context-aware systems.
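
    As a rough illustration of the Pipe-and-Filter idea described above, the sketch below chains independent filters over a context record keyed by the 4W1H primitives. GECAF itself configures its filters via XML deployment descriptions; the class names, filter functions and the sample interpretation rule here are illustrative assumptions rather than the framework's actual API.

```python
# Minimal Pipe-and-Filter sketch over a 4W1H context record (illustrative names).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Context = Dict[str, str]            # e.g. {"who": "alice", "where": "badge-17", ...}
Filter = Callable[[Context], Context]

@dataclass
class Pipeline:
    """Chains independent filters; each consumes and enriches the context."""
    filters: List[Filter] = field(default_factory=list)

    def add(self, f: Filter) -> "Pipeline":
        self.filters.append(f)
        return self

    def run(self, ctx: Context) -> Context:
        for f in self.filters:
            ctx = f(ctx)
        return ctx

def locate(ctx: Context) -> Context:
    # Hypothetical interpretation rule: map a raw sensor id to a room name.
    ctx["where"] = {"badge-17": "room-101"}.get(ctx.get("where", ""), "unknown")
    return ctx

def infer_activity(ctx: Context) -> Context:
    # Hypothetical reasoning step: derive "what" from "where" and "when".
    ctx["what"] = "lecture" if ctx["where"] == "room-101" else "idle"
    return ctx

if __name__ == "__main__":
    pipeline = Pipeline().add(locate).add(infer_activity)
    print(pipeline.run({"who": "alice", "when": "09:05", "where": "badge-17", "how": "rfid"}))
```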

    Contextual sensing : integrating contextual information with human and technical geo-sensor information for smart cities

    Get PDF
    In this article we critically discuss the challenge of integrating contextual information, in particular spatiotemporal contextual information, with human and technical sensor information, which we approach from a geospatial perspective. We start by highlighting the significance of context in general and spatiotemporal context in particular, and introduce a smart city model of interactions between humans, the environment, and technology, with context at the common interface. We then focus on both the intentional and the unintentional sensing capabilities of today's technologies and discuss current technological trends that we consider have the ability to enrich human and technical geo-sensor information with contextual detail. The different types of sensors used to collect contextual information are analyzed and sorted into three groups on the basis of their names (taking frequently used related terms into account) and their characteristic contextual parameters. These three groups, namely technical in situ sensors, technical remote sensors, and human sensors, are analyzed and linked to three dimensions involved in sensing (data generation, geographic phenomena, and type of sensing). In contrast to other scientific publications, we found a large number of technologies and applications using in situ and mobile technical sensors within the context of smart cities, and surprisingly limited use of remote sensing approaches. We further provide a critical discussion of possible impacts and influences of both technical and human sensing approaches on society, pointing out that a larger number of sensors, increased fusion of information, and the use of standardized data formats and interfaces will not necessarily result in any improvement in the quality of life of the citizens of a smart city. This article seeks to improve our understanding of technical and human geo-sensing capabilities, and to demonstrate that the use of such sensors can facilitate the integration of different types of contextual information, thus providing an additional, namely geospatial, perspective on the future development of smart cities.
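
    A small data-structure sketch of the classification described above, encoding the three sensor groups and the three sensing dimensions they are linked to. The group names follow the article; the field names and example records are assumptions added for illustration.

```python
# Illustrative encoding of the three sensor groups and three sensing dimensions.
from dataclasses import dataclass
from enum import Enum

class SensorGroup(Enum):
    TECHNICAL_IN_SITU = "technical in situ sensor"
    TECHNICAL_REMOTE = "technical remote sensor"
    HUMAN = "human sensor"

@dataclass
class SensorRecord:
    name: str
    group: SensorGroup
    data_generation: str        # how the data comes about (e.g. automatic, volunteered)
    geographic_phenomenon: str  # what is observed (e.g. air quality, crowd density)
    type_of_sensing: str        # intentional vs. unintentional sensing

examples = [
    SensorRecord("air-quality station", SensorGroup.TECHNICAL_IN_SITU,
                 "automatic", "air quality", "intentional"),
    SensorRecord("satellite imagery", SensorGroup.TECHNICAL_REMOTE,
                 "automatic", "land cover", "intentional"),
    SensorRecord("geotagged social-media post", SensorGroup.HUMAN,
                 "volunteered", "local events", "unintentional"),
]

for record in examples:
    print(record.group.value, "->", record.geographic_phenomenon)
```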

    Multimodal, Embodied and Location-Aware Interaction

    Get PDF
    This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide a continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric, body-based case and a large-scale, exocentric, 'world-based' case. BodySpace is a gesture-based application which utilises multiple sensors and pattern recognition to enable the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction. GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling, and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites of interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user for varying trajectory width and context. We demonstrate the possibility of creating a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
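
    The sketch below illustrates the Monte Carlo style of uncertainty propagation described above: noisy position and heading estimates are sampled, projected forward in time, and summarised as the probability of reaching a site of interest. The noise levels, time horizon and function names are illustrative assumptions, not the parameters used in GpsTunes.

```python
# Monte Carlo propagation of uncertain user position and heading (illustrative).
import math
import random

def sample_future_positions(x, y, heading_rad, speed,
                            pos_sigma=5.0, heading_sigma=0.2,
                            horizon_s=30.0, n_samples=1000):
    """Return Monte Carlo samples of where the user might be after horizon_s seconds."""
    samples = []
    for _ in range(n_samples):
        # Perturb the sensed state to reflect GPS and compass uncertainty.
        sx = x + random.gauss(0.0, pos_sigma)
        sy = y + random.gauss(0.0, pos_sigma)
        h = heading_rad + random.gauss(0.0, heading_sigma)
        d = speed * horizon_s
        samples.append((sx + d * math.cos(h), sy + d * math.sin(h)))
    return samples

def prob_of_reaching(samples, target, radius=15.0):
    """Fraction of sampled futures that land within radius metres of the target."""
    tx, ty = target
    hits = sum(1 for (px, py) in samples if math.hypot(px - tx, py - ty) <= radius)
    return hits / len(samples)

if __name__ == "__main__":
    futures = sample_future_positions(x=0.0, y=0.0, heading_rad=0.5, speed=1.4)
    print(f"P(reach site) ~ {prob_of_reaching(futures, target=(40.0, 20.0)):.2f}")
```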

    Context Aware Drivers' Behaviour Detection System for VANET

    Get PDF
    Wireless communications and mobile computing have led to the enhancement of, and improvement in, intelligent transportation systems (ITS) that focus on road safety applications. As a promising technology and a core component of ITS, Vehicular Ad hoc Networks (VANET) have emerged as an application of Mobile Ad hoc Networks (MANET), which use Dedicated Short Range Communication (DSRC) to allow vehicles in close proximity to communicate with one another, or to communicate with roadside equipment. These types of communication open up a wide range of potential safety and non-safety applications, with the aim of providing an intelligent driving environment that offers road users more pleasant journeys. VANET safety applications are considered a vital step towards improving road safety and enhancing traffic efficiency, as a consequence of their capacity to share information about the road between moving vehicles; this results in fewer accidents and a greater opportunity to save people's lives. Many researchers from different disciplines have focused their research on the development of vehicle safety applications. Designing an accurate and efficient driver behaviour detection system that can detect the abnormal behaviours exhibited by drivers (i.e. drunkenness and fatigue) and alert them may have an impact on the prevention of road accidents. Moreover, using context-aware systems in vehicles can improve driving by collecting and analysing contextual information about the driving environment, hence increasing the awareness of the driver. In this thesis, we propose a novel driver behaviour detection system for VANET that utilises a context-aware system approach. The system is comprehensive, non-intrusive and able to detect four styles of driving behaviour: drunkenness, fatigue, recklessness and normal behaviour. The behaviour of the driver in this study is considered to be uncertain context and is defined as a dynamic interaction between the driver, the vehicle and the environment, meaning it is affected by many factors and develops over time. Therefore, we introduce a novel Dynamic Bayesian Network (DBN) framework to perform reasoning under uncertainty and to deduce the behaviour of drivers by combining information regarding the above-mentioned factors. A novel On Board Unit (OBU) architecture for detecting the behaviour of the driver has been introduced. The architecture is built on the concept of context-awareness and is divided into three phases that represent the three main subsystems of a context-aware system: the sensing, reasoning and acting subsystems. The proposed architecture explains how the system components interact in order to detect abnormal behaviour exhibited by the driver, so as to alert the driver and prevent accidents from occurring. The implementation of the proposed system has been carried out using GeNIe version 2.0 software to construct the DBN model. The DBN model has been evaluated using synthetic data in order to demonstrate the detection accuracy of the proposed model under uncertainty, and the importance of including a large amount of contextual information in the detection process.
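
    As a simplified, self-contained stand-in for the kind of temporal reasoning the DBN performs, the sketch below filters a belief over the four behaviour states through a first-order transition model and a single discretised evidence variable. All probabilities, state names and observation names are illustrative assumptions; the thesis builds its actual model in GeNIe 2.0.

```python
# Simplified DBN-style forward filtering over four driver-behaviour states.
STATES = ["normal", "fatigue", "drunk", "reckless"]

# P(state_t | state_{t-1}): behaviour tends to persist between time slices.
TRANSITION = {
    "normal":   {"normal": 0.90, "fatigue": 0.05, "drunk": 0.02, "reckless": 0.03},
    "fatigue":  {"normal": 0.10, "fatigue": 0.80, "drunk": 0.05, "reckless": 0.05},
    "drunk":    {"normal": 0.05, "fatigue": 0.10, "drunk": 0.80, "reckless": 0.05},
    "reckless": {"normal": 0.15, "fatigue": 0.05, "drunk": 0.05, "reckless": 0.75},
}

# P(observation | state) for one discretised evidence variable, e.g. lane keeping.
EMISSION = {
    "normal":   {"steady": 0.85, "weaving": 0.10, "erratic": 0.05},
    "fatigue":  {"steady": 0.30, "weaving": 0.55, "erratic": 0.15},
    "drunk":    {"steady": 0.10, "weaving": 0.45, "erratic": 0.45},
    "reckless": {"steady": 0.15, "weaving": 0.25, "erratic": 0.60},
}

def filter_step(belief, observation):
    """One time slice: predict with the transition model, then weight by evidence."""
    predicted = {s: sum(belief[p] * TRANSITION[p][s] for p in STATES) for s in STATES}
    weighted = {s: predicted[s] * EMISSION[s][observation] for s in STATES}
    total = sum(weighted.values())
    return {s: w / total for s, w in weighted.items()}

if __name__ == "__main__":
    belief = {s: 1.0 / len(STATES) for s in STATES}  # uniform prior over behaviours
    for obs in ["steady", "weaving", "weaving", "erratic"]:
        belief = filter_step(belief, obs)
    print("most likely behaviour:", max(belief, key=belief.get), belief)
```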