12 research outputs found

    Ifaces: Adaptative user interfaces for ambient intelligence

    Proceedings of the IADIS International Conference on Interfaces and Human Computer Interaction, Amsterdam, The Netherlands, 25-27 July 2008. In this paper we present an ontology language to model an environment and its graphical user interface in the field of ambient intelligence. This language allows a simple definition of the environment and automatically produces its associated interaction interface. The interface dynamically readjusts to the characteristics of the environment and the available devices, so it adapts to the needs of the people who have to use it and to their resources. The system has been developed and tested in a real ambient intelligence environment. This work has been partly funded by the HADA project (TIN2007-64718) and the UAM-Indra Chair in Ambient Intelligence.
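    The abstract above describes deriving an interface automatically from a declarative environment model. A minimal sketch of that idea, assuming a hypothetical dictionary-based model and widget mapping (the paper's actual ontology language is not reproduced here):

```python
# Hypothetical sketch (not the paper's actual ontology language): an
# environment is declared as entities with typed properties, and a
# widget is derived automatically from each property's type.

WIDGET_FOR_TYPE = {
    "boolean": "toggle",   # e.g. a light that is on/off
    "range": "slider",     # e.g. a dimmer from 0-100
    "enum": "dropdown",    # e.g. a thermostat mode
}

def generate_interface(environment):
    """Derive one widget description per entity property."""
    widgets = []
    for entity in environment["entities"]:
        for prop in entity["properties"]:
            widgets.append({
                "label": f'{entity["name"]}: {prop["name"]}',
                "widget": WIDGET_FOR_TYPE[prop["type"]],
            })
    return widgets

room = {
    "entities": [
        {"name": "lamp", "properties": [{"name": "power", "type": "boolean"},
                                        {"name": "brightness", "type": "range"}]},
        {"name": "hvac", "properties": [{"name": "mode", "type": "enum"}]},
    ]
}

for w in generate_interface(room):
    print(w["label"], "->", w["widget"])
```

    Because the widgets are derived from the model rather than hand-coded, changing the environment description is enough to change the interface, which is the adaptivity the abstract claims.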

    Using 2D codes for creating ubiquitous user interfaces for ambient intelligence environments

    Workshop Proceedings of the 6th International Conference on Intelligent Environments, Vol. 8, Javier Gómez, Germán Montoro, Pablo A. Haya, Xavier Alamán, Using 2D Codes for Creating Ubiquitous User Interfaces for Ambient Intelligence Environments, pp. 42-51, Copyright 2010, with permission from IOS Press. This is an electronic version of the paper presented at the 1st International Workshop on Human-Centric Interfaces for Ambient Intelligence (HCIAmI'10), held in Kuala Lumpur (Malaysia) in 2010. Smart phones are among the most popular devices nowadays. The enrichment of their technical capabilities allows them to carry out new operations beyond traditional telephony. This work presents a system that automatically generates user interfaces for Ambient Intelligence environments. In this way, smart phones act as “ubiquitous remote controllers” for the elements of the environment. The paper also proposes some ideas about the usability and adequacy of these interfaces. This work was partially funded by the projects eMadrid (Comunidad de Madrid, S2009/TIC-1650), Vesta (Ministerio de Industria, Turismo y Comercio, TSI-020100-2009-828) and HADA (Ministerio de Ciencia y Educación, TIN2007-64718).
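    The "ubiquitous remote controller" idea above can be sketched as a lookup from a decoded 2D-code payload to a per-entity control interface. The entity names, payload scheme and registry format below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: the payload decoded from a 2D code names an
# environment entity, and the phone looks up that entity's interface
# description, rendering one button per supported action.

REGISTRY = {
    "env://livingroom/lamp": {"type": "light", "actions": ["on", "off", "dim"]},
    "env://livingroom/blinds": {"type": "blind", "actions": ["raise", "lower"]},
}

def interface_for(decoded_payload):
    """Resolve a decoded 2D-code payload to a remote-control UI spec."""
    entity = REGISTRY.get(decoded_payload)
    if entity is None:
        raise KeyError(f"unknown entity: {decoded_payload}")
    return [{"button": action} for action in entity["actions"]]

print(interface_for("env://livingroom/lamp"))
```

    The 2D code only has to carry an identifier; the interface itself is generated on the phone from the environment's description of the entity.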

    Extending an XML environment definition language for spoken dialogue and web-based interfaces

    This is an electronic version of the paper presented at the Workshop "Developing User Interfaces with XML: Advances on User Interface Description Languages", during the International Working Conference on Advanced Visual Interfaces (AVI), held in Gallipoli (Italy) in 2004. In this work we describe how we employ XML-compliant languages to define an intelligent environment. The language represents the environment, its entities and their relationships. The XML environment definition is transformed into a middleware layer that provides interaction with the environment. Additionally, this XML definition language has been extended to support two different user interfaces: a spoken dialogue interface is created by means of specific linguistic information, and GUI interaction information is converted into a web-based interface. This work has been sponsored by the Spanish Ministry of Science and Technology, project number TIC2000-046.
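    A minimal sketch of parsing such an environment definition, assuming a hypothetical XML shape (the paper's actual schema is not reproduced here): entities and relations are parsed into a dictionary that a middleware layer could then expose.

```python
# Illustrative only: element and attribute names below are assumptions.
import xml.etree.ElementTree as ET

ENVIRONMENT_XML = """
<environment name="lab">
  <entity id="lamp1" type="light"><property name="status">off</property></entity>
  <entity id="door1" type="door"><property name="status">closed</property></entity>
  <relation subject="lamp1" predicate="locatedIn" object="lab"/>
</environment>
"""

def parse_environment(xml_text):
    """Turn the XML definition into an entity/relation model."""
    root = ET.fromstring(xml_text)
    entities = {
        e.get("id"): {p.get("name"): p.text for p in e.findall("property")}
        for e in root.findall("entity")
    }
    relations = [(r.get("subject"), r.get("predicate"), r.get("object"))
                 for r in root.findall("relation")]
    return {"name": root.get("name"), "entities": entities, "relations": relations}

model = parse_environment(ENVIRONMENT_XML)
print(model["entities"]["lamp1"]["status"])  # off
```

    From the same parsed model, one generator could emit linguistic information for the dialogue interface and another could emit the web-based GUI, which is the dual-interface extension the abstract describes.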

    Distributed schema-based middleware for ambient intelligence environments

    In this work we present a middleware developed for Ambient Intelligence environments. The proposed model is based on the blackboard metaphor: it is logically centralized but physically distributed. Although it follows a data-oriented model, some extra services have been added to this middle layer to improve the functionality of the modules that employ it. The system has been developed and tested in a real Ambient Intelligence environment. This work was partially funded by the ASIES (Adapting Social & Intelligent Environments to Support people with special needs, Ministerio de Ciencia e Innovación, TIN2010-17344), e-Madrid (Investigación y desarrollo de tecnologías para el e-learning en la Comunidad de Madrid, S2009/TIC-1650) and Vesta (Ministerio de Industria, Turismo y Comercio, TSI-020100-2009-828) projects.
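    The blackboard metaphor above can be sketched as a shared store that modules read and write, extended with change notification as one example of the "extra services" a plain data-oriented model lacks. This is an illustrative sketch, not the project's actual middleware:

```python
# Sketch of a blackboard: logically one shared key-value store, with
# subscription callbacks as an added service on top of read/write.

class Blackboard:
    def __init__(self):
        self._data = {}
        self._subscribers = {}  # key -> list of callbacks

    def write(self, key, value):
        self._data[key] = value
        # Notify every module subscribed to this key.
        for callback in self._subscribers.get(key, []):
            callback(key, value)

    def read(self, key):
        return self._data.get(key)

    def subscribe(self, key, callback):
        self._subscribers.setdefault(key, []).append(callback)

bb = Blackboard()
events = []
bb.subscribe("lamp1.status", lambda k, v: events.append((k, v)))
bb.write("lamp1.status", "on")
print(bb.read("lamp1.status"), events)
```

    Physical distribution would replicate or partition the store across hosts while keeping this single logical interface, which is what "logically centralized but physically distributed" amounts to.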

    Ambient Multimodality: an Asset for Developing Universal Access to the Information Society

    The paper points out the benefits that can be derived from research advances in the implementation of concepts such as ambient intelligence (AmI) and ubiquitous or pervasive computing for promoting Universal Access (UA) to the Information Society, that is, for enabling everybody, especially Physically Disabled (PD) people, to have easy access to all computing resources and information services that the coming worldwide Information Society will soon make available to the general public. Following definitions of basic concepts relating to multimodal interaction, the significant contribution of multimodality to developing UA is briefly argued. Then, a short state of the art in AmI research is presented. The last section brings out the potential contribution of advances in AmI research and technology to the improvement of computer access for PD people. This claim is supported by the following observations: (i) most projects aiming at implementing AmI focus on the design of new interaction modalities and flexible multimodal user interfaces, which may facilitate PD users' computer access; (ii) targeted applications will support users in a wide range of daily activities performed simultaneously with supporting computing tasks, so users will be placed in contexts where they are confronted with difficulties similar to those encountered by PD users; (iii) since AmI applications are intended for the general public, a wide range of new interaction devices and flexible processing software will be available, making it possible to provide PD users with human-computer facilities tailored to their specific needs at reasonable expense.

    Empowering design through non-visual process: The blind add new vision to innovation

    Currently, the design of products and services is focused on visual processes that exclude the other senses. The study presented herein explores the flaws of using a fully visual approach in the areas of education, product design and services. This paper also discusses the deficiencies of a first-order thinking approach and presents an alternative based on second-order thinking that can be used to overcome these weaknesses while at the same time nurturing innovation. Through this narrative Rachel Magario, a blind student in the business and interaction design graduate programs at the University of Kansas, shows how she was able to overcome the mechanical limitations inherent in a visually oriented academic world. Magario explains how a project to design a tactile map taught her to look for solutions through a second-order thinking approach complemented by the use of low-fidelity prototypes. In this process she was able to create audio and Velcro low-fidelity prototypes to fill in the gaps of research for audio and haptic design. All this was achieved through a process of observing, reflecting, imagining and building to validate hypotheses, bringing second-order thinking, frameworks and methods into the design process. The result is a process anchored in human- and activity-centered design that accounts for all senses and can be used to achieve success in different areas of innovation.

    A reactive behavioral system for the intelligent room

    Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (leaves 59-63). Traditional computing interfaces have drawn users into a world of windows, icons and pointers. Pervasive computing holds that human-computer interaction (HCI) should be more natural: computers should be brought into our world of human discourse. The Intelligent Room project shares this vision. We are building an Intelligent Environment (IE) to allow for more natural forms of HCI. We believe that to move in this direction, our IE needs to respond to more than just direct commands; it also needs to respond to implicit user commands, such as body language, behavior, and context, just as another human would. This thesis presents ReBa, a context-aware system that provides the IE with this type of complex behavior in reaction to user activity. By Ajay A. Kulkarni. M.Eng.

    Context-aware gestural interaction in the smart environments of the ubiquitous computing era

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity, shrink gesture taxonomies, and improve usability. To validate this framework, a proof of concept has been developed: a prototype implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides accurate gesture recognition from very different viewpoints, and the usability tests have yielded high scores. Further investigation of the context information has been performed, tackling the problem of user status, understood here as human activity; a technique based on an innovative application of electromyography is proposed, and tests show that it achieves good activity recognition accuracy. The context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
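    The functional-gesture idea described in the abstract above can be sketched as one gesture denoting a function (e.g. "increase") that context resolves to a concrete action. The gesture names, devices and bindings below are made up for the example, not taken from the thesis:

```python
# Illustrative sketch: context (here, the device the user points at)
# disambiguates a single functional gesture, so fewer gestures are
# needed in the taxonomy.

FUNCTION_BINDINGS = {
    ("increase", "lamp"): "raise brightness",
    ("increase", "speaker"): "raise volume",
    ("activate", "lamp"): "turn on",
    ("activate", "speaker"): "play",
}

def resolve(gesture, context_device):
    """Map one functional gesture to a device-specific action."""
    return FUNCTION_BINDINGS[(gesture, context_device)]

print(resolve("increase", "lamp"))     # raise brightness
print(resolve("increase", "speaker"))  # raise volume
```

    Two functions cover four actions here; without the context lookup, each device-action pair would need its own gesture, which is exactly the taxonomy growth the functional approach avoids.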

    CuriosityXR: Contextualizing Learning through Immersive Mixed Reality Experiences Beyond the Classroom

    The focus of education is shifting towards a learner-centered approach that highlights the importance of engagement, interaction, and personalization in learning. This thesis explores new technologies to facilitate immersive, self-directed, curiosity-driven learning experiences aimed at addressing these key factors. I explore the use of Mixed Reality (MR) to build a context-aware system that can support learners' curiosity and improve knowledge recall. I design and build "CuriosityXR," an application for MR headsets, using a research-through-design methodology. CuriosityXR is also a platform that enables educators to create contextual, multi-modal, interactive mini-lessons, with which learners can engage alongside other AI-assisted learning content. To evaluate my design, I conduct a participant study followed by interviews. The participants' responses show higher levels of engagement, curiosity to learn more, and better visual retention of the learning content. I hope this work will inspire others in the MR community and advance the use of MR and AI hybrid designs for the future of curiosity-driven education.