
    Spartan Daily, March 22, 2007

    Volume 128, Issue 33

    Towards a Multimodal Adaptive Lighting System for Visually Impaired Children

    Visually impaired children often have difficulty with everyday activities such as locating items, e.g. favourite toys, and moving safely around the home. It is important to assist them during activities like these because doing so can promote independence from adults and helps them develop skills. Our demonstration shows our work towards a multimodal sensing and output system that adapts the lighting conditions at home to help visually impaired children with such tasks.
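    The abstract describes the system only at a high level. As a purely illustrative sketch, not the authors' implementation, a sensing-to-lighting adaptation loop of the kind described, with all sensor fields, thresholds and lamp identifiers assumed for the example, might look like this:

```python
# Hypothetical sketch of a multimodal adaptive lighting loop.
# All sensor names, thresholds, and lamp identifiers are assumptions
# for illustration; the paper does not specify its implementation.

from dataclasses import dataclass

@dataclass
class SensorReading:
    ambient_lux: float     # current ambient light level
    child_position: tuple  # (x, y) position from an indoor tracker
    target_position: tuple # (x, y) of the item being searched for

def distance(a: tuple, b: tuple) -> float:
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def adapt_lighting(reading: SensorReading) -> dict:
    """Map sensor input to lamp commands: raise overall light if the
    room is dim, and highlight the target item when the child is near."""
    commands = {}
    if reading.ambient_lux < 100:           # assumed comfort threshold
        commands["room_lamp"] = 0.8         # brightness on a 0..1 scale
    if distance(reading.child_position, reading.target_position) < 2.0:
        commands["target_spotlight"] = 1.0  # spotlight the nearby item
    return commands

# Example: dim room, child close to the toy being searched for.
print(adapt_lighting(SensorReading(60.0, (1.0, 1.0), (2.0, 1.5))))
```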

    When you Step Out


    Why aren't we all living in Smart Homes?

    Visions of the future, like the Jetsons cartoons, show homes which are smart and able to control household appliances, making living easier and more comfortable. Although much research has been carried out into the effectiveness of different visualisation techniques for conveying useful energy consumption information to householders, and into techniques for controlling the timing and coordination of appliance use, these techniques have failed to achieve widespread penetration, and the vision still seems far from a reality. This paper examines why smart home technologies have so far failed to have any real impact, a question intricately intertwined with the design of visualisations in this context, and why we are not already living in Smart Homes. It examines these questions under four sections: Technology, Consumers, Electricity retailers and Government agencies, using examples from New Zealand’s electricity sector.

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented reality meeting support and virtual reality generation of meetings, in real time or off-line. The research reported here forms part of the European 5th and 6th framework programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment that is able to collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools that allow real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting, and at tools that allow those who cannot be physically present to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.
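    The abstract mentions semantic representations of meeting activities that support browsing and retrieval but does not give a concrete format. A minimal sketch of such a timestamped event record, with the event types, fields and query helper all assumed for illustration, might look like this:

```python
# Hypothetical sketch of a timestamped meeting-event representation of the
# kind the abstract alludes to (discussions, presentations, voting).
# Event types, fields, and the query helper are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class MeetingEvent:
    start: float                 # seconds from meeting start
    end: float
    event_type: str              # e.g. "discussion", "presentation", "vote"
    participants: list[str]
    transcript: str = ""         # aligned speech transcript, if any
    media: dict = field(default_factory=dict)  # links to audio/video captures

def events_between(events: list[MeetingEvent], t0: float, t1: float):
    """Retrieve events overlapping a time window, e.g. for meeting browsing."""
    return [e for e in events if e.start < t1 and e.end > t0]

log = [
    MeetingEvent(0.0, 95.0, "presentation", ["alice"], "Project status..."),
    MeetingEvent(95.0, 240.0, "discussion", ["alice", "bob"], "Budget debate..."),
    MeetingEvent(240.0, 250.0, "vote", ["alice", "bob", "carol"]),
]
print(events_between(log, 90.0, 100.0))  # events around the 90-100 s mark
```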