
    Evaluating Centering for Information Ordering Using Corpora

    In this article we discuss several metrics of coherence defined using centering theory and investigate the usefulness of such metrics for information ordering in automatic text generation. Using a general methodology applied to several corpora, we estimate empirically which metric is the most promising and how useful it is. Our main result is that the simplest metric (which relies exclusively on NOCB transitions) sets a robust baseline that cannot be outperformed by other metrics which make use of additional centering-based features. This baseline can be used for the development of both text-to-text and concept-to-text generation systems.
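The NOCB-only baseline lends itself to a compact sketch. The following is an illustrative reconstruction, not the authors' code: sentences are reduced to sets of entity mentions (an assumption made for this sketch), and the names `nocb_transitions` and `rank_orderings` are hypothetical.

```python
# Illustrative sketch of the NOCB baseline (not the authors' code):
# a sentence is modelled as the set of entities it mentions, and a
# NOCB transition occurs when adjacent sentences share no entity.

def nocb_transitions(sentences):
    """Count NOCB transitions in an ordered list of entity sets."""
    return sum(
        1
        for prev, curr in zip(sentences, sentences[1:])
        if not (prev & curr)  # no shared entity => NOCB transition
    )

def rank_orderings(orderings):
    """Prefer candidate orderings with fewer NOCB transitions."""
    return sorted(orderings, key=nocb_transitions)

# Toy example: doc_a keeps an entity in focus across sentences,
# doc_b drops the focus twice, so doc_a ranks as more coherent.
doc_a = [{"court", "ruling"}, {"ruling", "appeal"}, {"appeal"}]
doc_b = [{"court", "ruling"}, {"appeal"}, {"ruling", "budget"}]
best = rank_orderings([doc_b, doc_a])[0]
```

Because the metric only counts entity-sharing breaks, it needs no centering transition classification beyond NOCB, which is what makes it so cheap to use as a baseline.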

    A knowledge hub to enhance the learning processes of an industrial cluster

    Industrial clusters have been defined as "networks of production of strongly interdependent firms (including specialised suppliers), knowledge producing agents (universities, research institutes, engineering companies), institutions (brokers, consultants), linked to each other in a value adding production chain" (OECD Focus Group, 1999). The industrial clusters' distinctive mode of production is specialisation, based on a sophisticated division of labour, that leads to interlinked activities and the need for cooperation, with the consequent emergence of communities of practice (CoPs). CoPs are here conceived as groups of people and/or organisations bound together by shared expertise and a propensity towards joint work (Wenger and Suyden, 1999). Cooperation needs closeness for just-in-time delivery, for communication, and for the exchange of knowledge, especially in its tacit form. Indeed, the knowledge exchanges between the CoPs' specialised actors, in geographical proximity, lead to spillovers and synergies. In the digital economy landscape, collaborative technologies such as shared repositories, chat rooms and videoconferences can, when appropriately used, have a positive impact on a CoP's exchange of codified knowledge. On the other hand, systems for individual profile management, e-learning platforms and intelligent agents can also trigger some socialisation mechanisms of tacit knowledge. In this perspective, we have set up a model of a Knowledge Hub (KH), driven by Information and Communication Technologies (ICT-driven), that enables the knowledge exchanges of a CoP.
In order to present the model, the paper is organised in the following logical steps:
- an overview of the most seminal and consolidated approaches to CoPs;
- a description of the ICT-driven KH model, conceived as a booster of the knowledge exchanges of a CoP, which adds to the economic benefits of geographical proximity the advantages of ICT-based organisational proximity;
- a discussion of some preliminary results obtained during the implementation of the model.

    Implementing Session Centered Calculi

    Recently, specific attention has been devoted to the development of service oriented process calculi. Besides the foundational aspects, it is also interesting to have prototype implementations of them in order to assess usability and to minimize the gap between theory and practice. Typically, these implementations are done in Java, taking advantage of its mechanisms supporting network applications. However, most of the recurrent features of service oriented applications are re-implemented from scratch. In this paper we show how to implement a service oriented calculus, CaSPiS (Calculus of Services with Pipelines and Sessions), using the Java framework IMC, where recurrent mechanisms for network applications are already provided. By using the session oriented and pattern matching communication mechanisms provided by IMC, it is relatively simple to implement all CaSPiS abstractions in Java and thus to write a Java implementation of a CaSPiS process.
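The two mechanisms the abstract leans on, session-scoped channels and pattern-matching receive, can be illustrated with a minimal sketch. This is not the IMC API (which is a Java framework); the `Session` class and `recv_match` method below are hypothetical stand-ins for the idea.

```python
from collections import deque

# Minimal, hypothetical sketch of session-oriented communication
# with pattern-matching receive (illustrative, not the IMC API).

class Session:
    """A private bidirectional channel between two endpoints."""
    def __init__(self):
        self._queues = {"client": deque(), "service": deque()}

    def send(self, sender, msg):
        """Deliver a tuple message to the opposite endpoint's queue."""
        target = "service" if sender == "client" else "client"
        self._queues[target].append(msg)

    def recv_match(self, receiver, pattern):
        """Return the first queued message matching the pattern;
        pattern fields set to None act as wildcards."""
        queue = self._queues[receiver]
        for i, msg in enumerate(queue):
            if len(msg) == len(pattern) and all(
                p is None or p == m for p, m in zip(pattern, msg)
            ):
                del queue[i]
                return msg
        return None

s = Session()
s.send("client", ("price", "book-42"))
req = s.recv_match("service", ("price", None))  # wildcard item id
```

The point of the sketch is the pairing: once a session exists, each endpoint only sees messages from its partner, and pattern matching selects which of those messages to consume, which is how CaSPiS-style interactions avoid re-implementing message dispatch from scratch.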

    Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation

    This paper surveys the current state of the art in Natural Language Generation (NLG), defined as the task of generating text or speech from non-linguistic input. A survey of NLG is timely in view of the changes that the field has undergone over the past decade or so, especially in relation to new (usually data-driven) methods, as well as new applications of NLG technology. This survey therefore aims to (a) give an up-to-date synthesis of research on the core tasks in NLG and the architectures in which such tasks are organised; (b) highlight a number of relatively recent research topics that have arisen partly as a result of growing synergies between NLG and other areas of artificial intelligence; (c) draw attention to the challenges in NLG evaluation, relating them to similar challenges faced in other areas of Natural Language Processing, with an emphasis on different evaluation methods and the relationships between them. Comment: Published in Journal of AI Research (JAIR), volume 61, pp 75-170. 118 pages, 8 figures, 1 table.

    HaIRST: Harvesting Institutional Resources in Scotland Testbed. Final Project Report

    The HaIRST project conducted research into the design, implementation and deployment of a pilot service for UK-wide access to autonomously created institutional resources in Scotland, the aim being to investigate and advise on some of the technical, cultural, and organisational requirements associated with the deposit, disclosure, and discovery of institutional resources in the JISC Information Environment. The project involved a consortium of Scottish higher and further education institutions, with significant assistance from the Scottish Library and Information Council. The project investigated the use of technologies based on the Open Archives Initiative (OAI), including the implementation of OAI-compatible repositories for metadata which describe and link to institutional digital resources, the use of the OAI protocol for metadata harvesting (OAI-PMH) to automatically copy the metadata from multiple repositories to a central repository, and the creation of a service to search and identify resources described in the central repository. An important aim of the project was to identify issues of metadata interoperability arising from the requirements of individual institutional repositories and their impact on services based on the aggregation of metadata through harvesting. The project also sought to investigate issues in using these technologies for a wide range of resources including learning, teaching and administrative materials as well as the research and scholarly communication materials considered by many of the other projects in the JISC Focus on Access to Institutional Resources (FAIR) Programme, of which HaIRST was a part. The project tested and implemented a number of open source software packages supporting OAI, and was successful in creating a pilot service which provides effective information retrieval of a range of resources created by the project consortium institutions.
The pilot service has been extended to cover research and scholarly communication materials produced by other Scottish universities, and administrative materials produced by a non-educational institution in Scotland. It is an effective testbed for further research and development in these areas. The project has worked extensively with a new OAI standard for 'static repositories' which offers a low-barrier, low-cost mechanism for participation in OAI-based consortia by smaller institutions with a low volume of resources. The project identified and successfully tested tools for transforming pre-existing metadata into a format compliant with OAI standards. The project identified and assessed OAI-related documentation in English from around the world, and has produced metadata for retrieving and accessing it. The project created a Web-based advisory service for institutions and consortia. The OAI Scotland Information Service (OAISIS) provides links to related standards, guidance and documentation, and discusses the findings of HaIRST relating to interoperability and the pilot harvesting service. The project found that open source packages relating to OAI can be installed and made to interoperate to create a viable method of sharing institutional resources within a consortium. HaIRST identified issues affecting the interoperability of shared metadata and suggested ways of resolving them to improve the effectiveness and efficiency of shared information retrieval environments based on OAI. The project demonstrated that application of OAI technologies to administrative materials is an effective way for institutions to meet obligations under Freedom of Information legislation.
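The harvesting step described above rests on OAI-PMH, whose ListRecords responses are plain XML carrying (typically Dublin Core) metadata records. A minimal sketch of the aggregation side, parsing such a response and collecting titles, is shown below; the sample XML and the `harvest_titles` function are illustrative, not taken from the HaIRST service.

```python
import xml.etree.ElementTree as ET

# Sketch of the harvesting step: parse an OAI-PMH ListRecords
# response and extract Dublin Core titles. The sample response is
# a simplified, hypothetical one, not HaIRST data.

DC = "{http://purl.org/dc/elements/1.1/}"

SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <dc xmlns="http://purl.org/dc/elements/1.1/">
          <title>Pilot repository report</title>
        </dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest_titles(xml_text):
    """Collect all dc:title values from a ListRecords response."""
    root = ET.fromstring(xml_text)
    return [t.text for t in root.iter(DC + "title")]

titles = harvest_titles(SAMPLE)
```

In a real harvester the XML would come from repeated HTTP requests to each repository's OAI-PMH endpoint (following resumption tokens), and the extracted metadata would be written into the central repository that the search service indexes.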

    A Framework for Structuring Learning Assessment in a Massively Multiplayer Online Educational Game: Experiment Centered Design

    Educational games offer an opportunity to engage and inspire students to take interest in science, technology, engineering, and mathematical (STEM) subjects. Unobtrusive learning assessment techniques coupled with machine learning algorithms can be utilized to record students' in-game actions and formulate a model of the students' knowledge without interrupting the students' play. This paper introduces “Experiment Centered Assessment Design” (XCD), a framework for structuring a learning assessment feedback loop. XCD builds on the “Evidence Centered Assessment Design” (ECD) approach, which uses tasks to elicit evidence about students and their learning. XCD defines every task as an experiment in the scientific method, where an experiment maps a test of factors to observable outcomes. This XCD framework was applied to prototype quests in a massively multiplayer online (MMO) educational game. Future work would build upon the XCD framework and use machine learning techniques to provide feedback to students, teachers, and researchers.
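The core XCD idea, every task is an experiment mapping tested factors to observable outcomes that update a knowledge model, can be sketched as follows. All names (`Experiment`, `KnowledgeModel`, `observe`) and the simple exponential update rule are hypothetical, for illustration only; the paper does not specify this particular model.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of XCD's task-as-experiment mapping: each
# quest task pairs the skill under test with the in-game outcome
# that signals mastery, and observations nudge a per-skill belief.

@dataclass
class Experiment:
    skill: str               # factor under test, e.g. a physics idea
    expected_outcome: str    # observable event signalling mastery

@dataclass
class KnowledgeModel:
    beliefs: dict = field(default_factory=dict)  # skill -> P(mastery)

    def observe(self, exp, outcome, rate=0.3):
        """Move the mastery estimate toward 1 on success, 0 on failure."""
        p = self.beliefs.get(exp.skill, 0.5)  # uninformed prior
        target = 1.0 if outcome == exp.expected_outcome else 0.0
        self.beliefs[exp.skill] = p + rate * (target - p)

model = KnowledgeModel()
quest = Experiment(skill="buoyancy", expected_outcome="raft_floats")
model.observe(quest, "raft_floats")  # success: estimate rises above 0.5
```

Because the game logs outcomes as a side effect of play, the belief update happens without interrupting the student, which is the "unobtrusive" property the abstract emphasises; a production system would replace the toy update rule with a learned model.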
