    Mobile MultiModal presentation

    ABSTRACT This paper presents the latest research into a mobile intelligent multimedia presentation system called TeleMorph, which can dynamically generate a multimedia presentation using output modalities determined by the bandwidth available on a mobile device's wireless connection. To demonstrate the effectiveness of this research, TeleTuras, a tourist information guide, will implement the solution provided by TeleMorph. This paper highlights issues surrounding such a system and introduces its architecture.
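    The central mechanism described here, choosing output modalities from the bandwidth measured on the device's wireless link, can be sketched roughly as follows. This is a minimal illustration under assumed thresholds and modality names; none of the function names or values are taken from the TeleMorph work.

```python
# Hypothetical sketch of bandwidth-driven modality selection in the spirit of
# TeleMorph. Thresholds, modality names and function names are assumptions,
# not details from the paper.

def select_modalities(bandwidth_kbps: float) -> list[str]:
    """Pick the richest set of output modalities the current link can carry."""
    if bandwidth_kbps >= 512:
        return ["text", "graphics", "audio", "video"]
    if bandwidth_kbps >= 128:
        return ["text", "graphics", "audio"]
    if bandwidth_kbps >= 32:
        return ["text", "graphics"]
    return ["text"]


def compose_presentation(content_id: str, bandwidth_kbps: float) -> dict:
    """Return a presentation plan restricted to the modalities the link supports."""
    return {"content": content_id, "modalities": select_modalities(bandwidth_kbps)}


if __name__ == "__main__":
    # A degraded wireless link of ~90 kbps drops audio and video automatically.
    print(compose_presentation("city-hall-tour", 90.0))
```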

    A Multi-channel Application Framework for Customer Care Service Using Best-First Search Technique

    It has become imperative to address customers' dissatisfaction with the responses they receive when interacting with the customer care centres of mobile service providers. Problems with Human-to-Human Interaction (H2H) between customer care centres and their customers include delayed response times, inconsistent solutions to questions or enquiries and, in some cases, a lack of dedicated access channels for interaction with customer care centres. This paper presents a framework and development techniques for a multi-channel application providing Human-to-System (H2S) interaction for the customer care centre of a mobile telecommunication provider. The proposed solution is called Interactive Customer Service Agent (ICSA). Based on single-authoring, it will provide three media of interaction with the customer care centre of a mobile telecommunication operator: voice, phone and web browsing. A mathematical search technique called Best-First Search is used to generate accurate results in a search environment.
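    Best-First Search itself is a standard algorithm: repeatedly expand the node that a heuristic ranks as most promising. The sketch below shows how such a search might locate an answer in a customer-care knowledge base; the toy FAQ tree and the keyword-overlap heuristic are illustrative assumptions, not details of ICSA.

```python
# Greedy best-first search over a small, hypothetical FAQ tree.
import heapq


def best_first_search(start, goal_test, neighbours, heuristic):
    """Expand the node with the lowest heuristic score first until a goal is found."""
    frontier = [(heuristic(start), start)]
    visited = {start}
    while frontier:
        _, node = heapq.heappop(frontier)
        if goal_test(node):
            return node
        for nxt in neighbours(node):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None


# Toy knowledge base: categories -> sub-topics; leaves hold answers.
faq = {
    "root": ["billing", "network"],
    "billing": ["billing/overcharge", "billing/data-bundle"],
    "network": ["network/no-signal"],
}
query_terms = {"data", "bundle"}

result = best_first_search(
    start="root",
    goal_test=lambda n: n not in faq,                 # leaves are answers
    neighbours=lambda n: faq.get(n, []),
    # Lower is better: number of query terms missing from the node label.
    heuristic=lambda n: len(query_terms - set(n.replace("/", "-").split("-"))),
)
print(result)  # -> "billing/data-bundle"
```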

    Trajectories of learning across museums and classrooms

    This paper explores the use of social and mobile technologies on school field trips as a means of enhancing the visitor experience. It employs the notion of a ‘trajectory’ (Ludvigsen et al., 2010; Pierroux et al., 2010; Littleton & Kerawalla, 2012) as an appropriate means of connecting learners' temporal experiences with informal and formal learning contexts. The analysis focuses on one group's trajectory, with the aim of examining the meanings made and represented in multimodal ‘ensembles’ and, further, of exploring whether the artefacts and tools encountered or used inform students' ensembles and assist them in making connections across the settings. The paper aims to contribute to contemporary discourse on technology-enhanced museum learning by exploring aspects of the visitor experience, such as meaning making across and between contexts.

    MobiBits: Multimodal Mobile Biometric Database

    This paper presents a novel database comprising representations of five different biometric characteristics, collected in a mobile, unconstrained or semi-constrained setting with three different mobile devices, including characteristics previously unavailable in existing datasets, namely hand images, thermal hand images, and thermal face images, all acquired with a mobile, off-the-shelf device. In addition to this collection of data, we perform an extensive set of experiments, carried out with existing commercial and academic biometric solutions, that provide insight into the benchmark recognition performance achievable with these data. To our knowledge, this is the first mobile biometric database to introduce samples of biometric traits such as thermal hand images and thermal face images. We hope that this contribution will make a valuable addition to the already existing databases and enable new experiments and studies in the field of mobile authentication. The MobiBits database is made publicly available to the research community at no cost for non-commercial purposes. Comment: Submitted for the BIOSIG2018 conference on June 18, 2018; accepted for publication on July 20, 2018.

    DOLPHIN: the design and initial evaluation of multimodal focus and context

    In this paper we describe a new focus and context visualisation technique called multimodal focus and context. This technique uses a hybrid visual and spatialised audio display space to overcome the limited visual displays of mobile devices. We demonstrate this technique by applying it to maps of theme parks. We present the results of an experiment comparing multimodal focus and context to a purely visual display technique. The results showed that neither system was significantly better than the other. We believe that this is due to issues involving the perception of multiple structured audio sources.
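    The underlying idea, drawing items that fall inside a small visual focus while presenting the surrounding context as spatialised audio positioned around the listener, can be sketched as follows. The geometry and gain model here are illustrative assumptions, not the DOLPHIN implementation.

```python
# Toy mapping from an off-screen map item to a spatialised audio cue.
import math


def audio_cue_for(item_x, item_y, focus_x, focus_y, focus_radius):
    """Return (azimuth_degrees, gain) for an item, or None if it is inside the focus."""
    dx, dy = item_x - focus_x, item_y - focus_y
    distance = math.hypot(dx, dy)
    if distance <= focus_radius:
        return None                                # visible: draw it instead
    azimuth = math.degrees(math.atan2(dx, -dy))    # 0 deg above the focus, +90 deg to its right
    gain = focus_radius / distance                 # quieter the further away it lies
    return azimuth, gain


# A ride far to the right of the visual focus becomes a quiet sound on the right.
print(audio_cue_for(item_x=900, item_y=300, focus_x=400, focus_y=300, focus_radius=150))
```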

    Who Learns from Collaborative Digital Projects? Cultivating Critical Consciousness and Metacognition to Democratize Digital Literacy Learning

    Collaborative group work is common in writing classrooms, especially ones assigning digital projects. While a wealth of scholarship theorizes collaboration and advocates for specific collaborative pedagogies, writing studies has yet to address the ways in which privilege tied to race, gender, class, and other identity characteristics replicates itself within student groups by shaping the responsibilities individual group members assume, thereby affecting students' opportunities for learning. Such concerns about equity are especially pressing where civically and professionally valuable twenty-first century digital literacies are concerned. This article uses theories of cultural capital and the participation gap to (1) analyze role uptake in case studies of diverse student groups and (2) suggest ways to expand writing studies' current use of metacognition to address such inequities.

    Multimodal agent interfaces and system architectures for health and fitness companions

    Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. In this paper we present how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. In particular, we focus on different forms of multimodality and system architectures for such interfaces.
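    One way such an architecture can keep the dialogue logic independent of the embodiment is to let the physical and virtual agents implement a common presentation interface. The sketch below illustrates that separation; the class and method names are assumptions made for illustration, not details from the paper.

```python
# Hypothetical Companion architecture: one dialogue policy, two embodiments.
from abc import ABC, abstractmethod


class CompanionAgent(ABC):
    @abstractmethod
    def present(self, text: str, gesture: str) -> None:
        """Render one dialogue move using the agent's own modalities."""


class VirtualAvatar(CompanionAgent):
    def present(self, text: str, gesture: str) -> None:
        print(f"[avatar on phone] says {text!r}, plays animation {gesture!r}")


class RobotCompanion(CompanionAgent):
    def present(self, text: str, gesture: str) -> None:
        print(f"[robot] speaks {text!r} via TTS, performs motor gesture {gesture!r}")


def dialogue_turn(agent: CompanionAgent, steps_today: int) -> None:
    """Shared dialogue logic that is agnostic of the embodiment."""
    if steps_today < 5000:
        agent.present("You are a bit behind on steps today. How about a short walk?", "encourage")
    else:
        agent.present("Great job, you reached your activity goal!", "celebrate")


dialogue_turn(VirtualAvatar(), steps_today=3200)
dialogue_turn(RobotCompanion(), steps_today=8200)
```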

    SALSA: A Novel Dataset for Multimodal Group Behavior Analysis

    Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., a cocktail party) is gratifying due to the wealth of information available at the group (mining social networks) and individual (recognizing native behavioral and personality traits) levels. However, analyzing social scenes involving FCGs is also highly challenging due to the difficulty of extracting behavioral cues such as target locations, speaking activity and head/body pose in the presence of crowding and extreme occlusions. To this end, we propose SALSA, a novel dataset facilitating multimodal and Synergetic sociAL Scene Analysis, and make two main contributions to research on automated social interaction analysis: (1) SALSA records social interactions among 18 participants in a natural, indoor environment for over 60 minutes, under poster-presentation and cocktail-party contexts presenting difficulties in the form of low-resolution images, lighting variations, numerous occlusions, reverberations and interfering sound sources; (2) to alleviate these problems, we facilitate multimodal analysis by recording the social interplay using four static surveillance cameras and sociometric badges worn by each participant, comprising microphone, accelerometer, Bluetooth and infrared sensors. In addition to raw data, we also provide annotations concerning individuals' personality as well as their position, head and body orientation, and F-formation information over the entire event duration. Through extensive experiments with state-of-the-art approaches, we show (a) the limitations of current methods and (b) how the recorded multiple cues synergetically aid automatic analysis of social interactions. SALSA is available at http://tev.fbk.eu/salsa. Comment: 14 pages, 11 figures.
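    Analysing the visual and wearable cues jointly requires aligning the camera and badge streams in time. The sketch below pairs each video frame with the nearest badge sample by timestamp; the sampling rates and function names are assumptions, and the actual file layout is documented on the SALSA page linked above.

```python
# Align two recording streams (camera frames and badge samples) by nearest timestamp.
from bisect import bisect_left


def nearest_badge_sample(frame_ts: float, badge_ts: list[float]) -> int:
    """Index of the badge sample closest in time to a camera frame (badge_ts sorted)."""
    i = bisect_left(badge_ts, frame_ts)
    if i == 0:
        return 0
    if i == len(badge_ts):
        return len(badge_ts) - 1
    return i if badge_ts[i] - frame_ts < frame_ts - badge_ts[i - 1] else i - 1


camera_frames = [0.000, 0.033, 0.066, 0.100]   # ~30 fps video timestamps (seconds)
badge_samples = [0.000, 0.050, 0.100, 0.150]   # ~20 Hz badge timestamps (seconds)
pairs = [(t, nearest_badge_sample(t, badge_samples)) for t in camera_frames]
print(pairs)   # each frame paired with the index of the closest badge reading
```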