
    FM radio: family interplay with sonic mementos

    Digital mementos are increasingly problematic, as people acquire large amounts of digital belongings that are hard to access and often forgotten. Based on fieldwork with 10 families, we designed a new type of embodied digital memento, the FM Radio, which allows families to access and play sonic mementos of their previous holidays. We describe our underlying design motivation, in which recordings are presented as a series of channels on an old-fashioned radio. User feedback suggests that the device met our design goals: it was playful and intriguing, easy to use, and social. It facilitated family interaction and allowed ready access to mementos, thus sharing many of the properties of physical mementos that we had intended to trigger.

    A novel user-centered design for personalized video summarization

    In the past, several automatic video summarization systems have been proposed to generate video summaries. However, a generic summary generated solely from audio, visual, and textual saliency will not satisfy every user. This paper proposes a novel system for generating semantically meaningful personalized video summaries, tailored to the individual user's preferences over video semantics. Each video shot is represented by a semantic multinomial, a vector of posterior semantic concept probabilities. The proposed system stitches together a summary from the top-ranked shots that are semantically relevant to the user's preferences, subject to the summary time span. The system is evaluated using both quantitative and subjective metrics, and the experimental results are encouraging.
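
    As a concrete illustration of the shot-selection step, the following Python sketch ranks shots by the inner product of each shot's semantic multinomial with a user preference vector over the same concept vocabulary, then greedily fills the summary time span. The data layout and the greedy strategy are assumptions for illustration, not the paper's exact method.

    ```python
    # Hypothetical sketch of preference-driven summary stitching.
    # The Shot layout and greedy selection are illustrative assumptions.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Shot:
        start: float              # shot start time (s)
        duration: float           # shot length (s)
        multinomial: List[float]  # posterior concept probabilities (sums to 1)

    def summarize(shots: List[Shot], prefs: List[float], budget: float) -> List[Shot]:
        """Pick top-ranked shots whose total length fits the time span."""
        def relevance(s: Shot) -> float:
            # Inner product of the shot's semantic multinomial with the
            # user's preference weights over the same concept vocabulary.
            return sum(p * w for p, w in zip(s.multinomial, prefs))

        ranked = sorted(shots, key=relevance, reverse=True)
        summary, used = [], 0.0
        for shot in ranked:
            if used + shot.duration <= budget:
                summary.append(shot)
                used += shot.duration
        # Re-order the selected shots chronologically so the summary
        # plays back in the original narrative order.
        return sorted(summary, key=lambda s: s.start)
    ```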

    Understanding aural fluency in auditory display design for ambient intelligent environments

    This paper presents the design and some evaluation results of the auditory display model of an ambient intelligent game named socio-ec(h)o. socio-ec(h)o is played physically by a team of four and displays information via a responsive environment of light and sound. Based on a study of 56 participants involving both qualitative and preliminary quantitative analysis, we present our findings to date as they relate to the auditory display model, along with future directions and implications. Drawing on our design and evaluation experience, we begin to build a theoretical understanding of the unique requirements of informative sonic displays in ambient intelligent and ubiquitous computing systems, and we develop and discuss the emerging research concept of aural fluency in ambient intelligent settings.

    Surface electromyographic control of a novel phonemic interface for speech synthesis

    Many individuals with minimal movement capabilities use augmentative and alternative communication (AAC) to communicate. These individuals require both an interface with which to construct a message (e.g., a grid of letters) and an input modality with which to select targets. This study evaluated the interaction of two such systems: (a) an input modality using surface electromyography (sEMG) of spared facial musculature, and (b) an onscreen interface from which users select phonemic targets. These systems were evaluated in two experiments: (a) participants without motor impairments used the systems during a series of eight training sessions, and (b) one individual who uses AAC used the systems for two sessions. Both the phonemic interface and the electromyographic cursor show promise for future AAC applications.
    Funding: F31 DC014872, R01 DC002852, R01 DC007683 (NIDCD NIH HHS); T90 DA032484 (NIDA NIH HHS)
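
    To make the control pipeline concrete, here is a minimal, hypothetical Python sketch of an sEMG-driven cursor: windowed RMS amplitudes from opposing facial-muscle channels are compared against a resting baseline and mapped to a 2-D cursor velocity. The channel names, baseline, and gain are illustrative assumptions; the study's actual control scheme may differ.

    ```python
    # Illustrative sketch of one way an sEMG cursor could work; the
    # channel layout, thresholds, and gain are assumptions, not the
    # study's actual control scheme.
    import numpy as np

    def rms(window: np.ndarray) -> float:
        """Root-mean-square amplitude of one sEMG window."""
        return float(np.sqrt(np.mean(window ** 2)))

    def cursor_velocity(channels: dict[str, np.ndarray],
                        rest_level: float = 0.05,
                        gain: float = 200.0) -> tuple[float, float]:
        """Map four facial-muscle channels to a 2-D cursor velocity (px/s).

        Activation above a resting baseline on the 'left'/'right' pair
        drives x, and on the 'up'/'down' pair drives y.
        """
        def drive(name: str) -> float:
            # Only activation above the resting baseline moves the cursor.
            return max(rms(channels[name]) - rest_level, 0.0)

        vx = gain * (drive("right") - drive("left"))
        vy = gain * (drive("up") - drive("down"))
        return vx, vy
    ```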

    Multimodal Content Delivery for Geo-services

    This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To effectively deliver these services, the research focused on innovative solutions to real-world problems in a number of disciplines, including geo-location, mobile spatial interaction, location-based services, rich media interfaces, and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction, underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility, which filters data based on field of view, demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery or output modality employed (auditory, visual, or tactile). Further contributions are made in the area of multimodal content delivery, where multiple modalities are used to deliver information through graphical user interfaces, tactile interfaces, and, more notably, auditory user interfaces. It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users, in a responsive way, based on usage scenarios that consider the affordance of the device, its geographical position and bearing, and its location.
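
    As a worked example of the positioning contribution, the sketch below shows textbook 2-D trilateration from three terrestrial beacons with known positions and measured ranges: subtracting one range equation from the others linearizes the problem into a small least-squares system. This is a generic formulation for illustration, not the thesis's actual hybrid positioning pipeline.

    ```python
    # Generic 2-D trilateration sketch; beacon coordinates and ranges
    # below are illustrative, not taken from the thesis.
    import numpy as np

    def trilaterate(beacons, distances):
        """Estimate (x, y) from three beacon positions and ranges.

        Subtracting the range (circle) equation of the last beacon from
        the others cancels the quadratic terms, leaving a linear system
        A @ p = b that is solved by least squares.
        """
        (x1, y1), (x2, y2), (x3, y3) = beacons
        d1, d2, d3 = distances
        A = np.array([[2 * (x1 - x3), 2 * (y1 - y3)],
                      [2 * (x2 - x3), 2 * (y2 - y3)]])
        b = np.array([d3**2 - d1**2 + x1**2 - x3**2 + y1**2 - y3**2,
                      d3**2 - d2**2 + x2**2 - x3**2 + y2**2 - y3**2])
        p, *_ = np.linalg.lstsq(A, b, rcond=None)
        return tuple(p)

    # Example: three beacons and noiseless ranges to the point (3, 4).
    print(trilaterate([(0, 0), (10, 0), (0, 10)],
                      [5.0, np.hypot(7, 4), np.hypot(3, 6)]))  # ~ (3.0, 4.0)
    ```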

    Exploring context-sensitive collaborative augmented reality applications

    In smart spaces, only a limited number of physical resources are available, yet the system should be able to offer relevant information according to each user's personal preferences. At the same time, a smart environment may serve many users with the same requirement of relevancy while operating on those limited resources, and it may not always be possible to share a resource in a way that respects all users without compromising collaboration. This thesis focuses on solving the problem of shared resources from the perspective of augmented reality; the selected standpoint is mobile collaborative augmented reality and context-awareness. A small user study was arranged as part of the thesis to elicit information about users' thoughts and emotions while using a simple prototype application. In addition, a small literature review of the main concepts is conducted, along with a short analysis of some collaborative augmented reality applications from recent literature. The results of the thesis show that even small experiments can uncover new information from users, and they provide tentative answers to the research questions presented. The main findings are that users have high expectations of context-awareness and augmented reality technologies: they expect applications to offer relevant, validated, and also surprising information in each situation. The thesis provides some evidence for the suitability of augmented reality in context-aware applications that are targeted at supporting human-to-human collaboration. With augmented reality it is possible to offer individual standpoints for users while they inspect limited, shared resources. Supporting users' ability to monitor their environment is one challenge in large smart environments. Finally, software engineers can take users' expectations into account when designing context-aware systems for smart environments, and developers could implement systems that take advantage of different human sensory modalities.