
    Bringing context-aware access to the web through spoken interaction

    The web has become the largest repository of multimedia information, and its convergence with telecommunications is now bringing the benefits of web technology to hand-held devices. To optimize data access using these devices and provide services that meet the user's needs through intelligent information retrieval, the system must sense and interpret the user environment and the communication context. In addition, natural spoken conversation with handheld devices makes it possible to use these applications in environments in which GUI interfaces are not effective, provides a more natural human-computer interaction, and facilitates access to the web for people with visual or motor disabilities, favouring their integration and eliminating barriers to Internet access. In this paper, we present an architecture for the design of context-aware systems that use speech to access web services. Our contribution focuses specifically on the use of context information to improve the effectiveness of providing web services by using a spoken dialog system for the user-system interaction. We also describe an application of our proposal to develop a context-aware railway information system, and provide a detailed evaluation of the influence of the context information on the quality of the services that are supplied. Research funded by projects CICYT TIN2011-28620-C02-01, CICYT TEC 2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485), and DPS2008-07029-C02-02.
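    To make the idea of context-aware service access concrete, the minimal Python sketch below shows how a dialog system might fill unspecified query slots from sensed context before calling a web-service backend. This is an illustrative assumption, not the architecture described in the paper; all class, slot, and field names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional


@dataclass
class UserContext:
    """Hypothetical container for context sensed by the hand-held device."""
    location: Optional[str] = None           # e.g. nearest railway station
    timestamp: datetime = field(default_factory=datetime.now)
    preferred_language: str = "es"


def enrich_query(spoken_query: dict, ctx: UserContext) -> dict:
    """Fill slots the user left implicit with sensed context information.

    A real system would obtain `spoken_query` from the speech recogniser and
    dialog manager; here it is just a dictionary of slots.
    """
    enriched = dict(spoken_query)
    # If the user did not mention an origin station, assume the current location.
    enriched.setdefault("origin", ctx.location)
    # If no travel date was given, assume the user means today.
    enriched.setdefault("date", ctx.timestamp.date().isoformat())
    enriched["language"] = ctx.preferred_language
    return enriched


if __name__ == "__main__":
    ctx = UserContext(location="Madrid-Atocha")
    query = {"destination": "Valencia"}       # "I want a train to Valencia"
    print(enrich_query(query, ctx))
    # {'destination': 'Valencia', 'origin': 'Madrid-Atocha',
    #  'date': <today's date>, 'language': 'es'}
```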

    A novel approach for data fusion and dialog management in user-adapted multimodal dialog systems

    Proceedings of: 17th International Conference on Information Fusion (FUSION 2014), Salamanca, Spain, 7-10 July 2014. Multimodal dialog systems have demonstrated a high potential for more flexible, usable and natural human-computer interaction. These improvements are highly dependent on the fusion and dialog management processes, which respectively integrate and interpret the multimodal information and decide the next system response for the current dialog state. In this paper we propose to carry out the multimodal fusion and dialog management processes at the dialog level in a single step. To do this, we describe an approach based on a statistical model that takes the user's intention into account, generates a single representation obtained from the different input modalities and their confidence scores, and selects the next system action based on this representation. The paper also describes the practical application of the proposed approach to develop a multimodal dialog system providing travel and tourist information. This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485).
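    As an illustration of fusing modalities and choosing the next action in a single step, the following Python sketch merges scored dialog-act hypotheses from several modalities into one representation and maps the fused result to a system action. The additive scoring and all names are assumptions for illustration; the paper's statistical model is not reproduced here.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Each modality (speech, touch, gesture, ...) produces hypotheses about the
# user's dialog act together with a confidence score in [0, 1].
ModalityHypotheses = List[Tuple[str, float]]


def fuse_and_select(inputs: Dict[str, ModalityHypotheses],
                    action_policy: Dict[str, str]) -> Tuple[str, str]:
    """Fuse multimodal hypotheses and pick the next system action in one step.

    `inputs` maps a modality name to its scored dialog-act hypotheses.
    `action_policy` maps a fused dialog act to a system action; it stands in
    for a learned dialog-management model.
    """
    fused_scores: Dict[str, float] = defaultdict(float)
    for modality, hypotheses in inputs.items():
        for dialog_act, confidence in hypotheses:
            fused_scores[dialog_act] += confidence   # naive additive fusion

    best_act = max(fused_scores, key=fused_scores.get)
    return best_act, action_policy.get(best_act, "ask_clarification")


if __name__ == "__main__":
    observations = {
        "speech": [("request_hotel_info", 0.7), ("request_restaurant_info", 0.2)],
        "touch":  [("request_hotel_info", 0.9)],
    }
    policy = {"request_hotel_info": "show_hotel_list",
              "request_restaurant_info": "show_restaurant_list"}
    print(fuse_and_select(observations, policy))
    # ('request_hotel_info', 'show_hotel_list')
```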

    Accessible user interface support for multi-device ubiquitous applications: architectural modifiability considerations

    The market for personal computing devices is rapidly expanding from PCs to mobile devices, home entertainment systems, and even the automotive industry. When developing software targeting such ubiquitous devices, the balance between development costs and market coverage has turned out to be a challenging issue. With the rise of Web technology and the Internet of Things, ubiquitous applications have become a reality. Nonetheless, the diversity of presentation and interaction modalities still drastically limits the number of targetable devices and the accessibility for end users. This paper presents webinos, a multi-device application middleware platform founded on the Future Internet infrastructure. To this end, the platform's architectural modifiability considerations are described and evaluated as a generic enabler for supporting applications that are executed in ubiquitous computing environments.

    Collaborative hybrid agent provision of learner needs using ontology based semantic technology

    © Springer International Publishing AG 2017. This paper describes the use of Intelligent Agents and Ontologies to implement knowledge navigation and learner choice when interacting with complex information locations. The paper is in two parts: the first looks at how Agent Based Semantic Technology can be used to give users a more personalised experience as individuals. The paper then looks to generalise this technology to allow users to work with agents in hybrid group scenarios. In the context of University Learners, the paper outlines how we employ an Ontology of Student Characteristics to personalise information retrieval specifically suited to an individual's needs. Choice is not a simple "show me your hand and make me a match" but a deliberative artificial intelligence (AI) that uses an ontologically informed agent society to weigh the candidate solution paths before choosing the most appropriate one. The aim is to enrich the student experience and significantly re-route the student's journey. The paper uses knowledge-level interoperation of agents to personalise the learning space of students and deliver to them the information and knowledge that suit them best. The aim is to personalise their learning in the presentation/format that is most appropriate for their needs. The paper then generalises this Semantic Technology Framework using shared vocabulary libraries that enable individuals to work in groups with other agents, which might be other people or might actually be AIs. The task they undertake is a formal assessment, but the interaction mode is one of informal collaboration. Pedagogically this addresses issues of ensuring fairness between students, since we can ensure each has the same experience (as provided by the same set of Agents) while an individual mark may still be gained. This is achieved by forming a hybrid group of learner and AI Software Agents. Different agent architectures are discussed and a worked example is presented. The work here thus aims both at matching the student's needs and at allowing them to work in an Agent Based Synthetic Group, which in turn opens up new areas of potential collaborative technology.
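    Purely as a sketch of the weighted-solution-path idea, the Python fragment below ranks candidate learning resources against a profile drawn from an ontology of student characteristics. The weighting scheme and every name are hypothetical, standing in for the ontologically informed agent deliberation described above.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class StudentProfile:
    """Hypothetical slice of an ontology of student characteristics."""
    characteristics: Dict[str, float]      # e.g. {"visual_learner": 0.8}


@dataclass
class Resource:
    name: str
    features: Dict[str, float]             # how well it serves each characteristic


def rank_resources(profile: StudentProfile,
                   candidates: List[Resource]) -> List[Resource]:
    """Weight each candidate 'solution path' by how well its features match
    the student's characteristics, best matches first."""
    def score(res: Resource) -> float:
        return sum(weight * res.features.get(charac, 0.0)
                   for charac, weight in profile.characteristics.items())
    return sorted(candidates, key=score, reverse=True)


if __name__ == "__main__":
    student = StudentProfile({"visual_learner": 0.8, "prefers_examples": 0.6})
    pool = [
        Resource("video_lecture", {"visual_learner": 0.9}),
        Resource("worked_examples", {"prefers_examples": 0.9}),
        Resource("plain_text_notes", {"visual_learner": 0.1}),
    ]
    print([r.name for r in rank_resources(student, pool)])
    # ['video_lecture', 'worked_examples', 'plain_text_notes']
```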

    Guidelines for annotating the LUNA corpus with frame information

    This document defines the annotation workflow aimed at adding frame information to the LUNA corpus of conversational speech. In particular, it details both the corpus pre-processing steps and the annotation process itself, giving hints on how to choose the frame and frame element labels. In addition, 20 new domain-specific and language-specific frames are described. To our knowledge, this is the first attempt to adapt the frame paradigm to dialogs and, at the same time, to define new frames and frame elements for the specific domain of software/hardware assistance. The technical report is structured as follows: Section 2 gives an overview of the FrameNet project, while Section 3 introduces the LUNA project and the annotation framework for the Italian dialogs. Section 4 details the annotation workflow, including the format preparation of the dialog files and the annotation strategy. In Section 5 we discuss the main issues of annotating frame information in dialogs and describe how the standard annotation procedure was changed in order to address these issues. Finally, the 20 newly introduced frames are reported in Section 6.
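    To give a concrete picture of what a frame-annotated dialog turn could look like, the short Python sketch below defines a minimal data structure for frames, frame elements, and turns. The frame and element names are invented examples and are not taken from the LUNA guidelines or FrameNet.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class FrameElement:
    """A labelled span inside an utterance (character offsets, hypothetical)."""
    name: str                       # e.g. "Device", "Problem"
    start: int
    end: int


@dataclass
class FrameAnnotation:
    frame: str                      # frame evoked by the target word
    target: str                     # the lexical unit that evokes it
    elements: List[FrameElement] = field(default_factory=list)


@dataclass
class DialogTurn:
    speaker: str                    # "user" or "operator"
    text: str
    frames: List[FrameAnnotation] = field(default_factory=list)


if __name__ == "__main__":
    turn = DialogTurn(
        speaker="user",
        text="My printer does not work",
        frames=[FrameAnnotation(
            frame="Malfunction",                    # invented frame name
            target="work",
            elements=[FrameElement("Device", 3, 10)],   # spans "printer"
        )],
    )
    print(turn)
```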