    EVALITA Goes Social: Tasks, Data, and Community at the 2016 Edition

    EVALITA, the evaluation campaign of Natural Language Processing and Speech Tools for the Italian language, was organised for the fifth time in 2016. Six tasks, covering both re-runs of previous tasks and completely new ones, together with an IBM-sponsored challenge, attracted a total of 34 submissions. An innovative aspect of this edition was the focus on social media data, especially Twitter, and the use of shared data across tasks, yielding a test set with layers of annotation covering PoS tags, sentiment information, named entities and linking, and factuality information. Differently from previous editions, many systems relied on a neural architecture, and these systems achieved the best results. From the experience and success of this edition, also in terms of dissemination of information and data and of collaboration between organisers of different tasks, we collected some reflections and suggestions that prospective EVALITA chairs might wish to take into account for future editions.

    ChiLab4It system in the QA4FAQ competition

    ChiLab4It is the Question Answering (QA) system for Frequently Asked Questions (FAQ) developed by the Computer-Human Interaction Laboratory (ChiLab) at the University of Palermo to participate in the QA4FAQ task at the EVALITA 2016 competition. The system is a version of the QuASIt framework developed by the same authors, customized to address this particular task. This technical report describes the strategies imported from QuASIt to implement ChiLab4It, the actual system implementation, and a comparative evaluation against the results of the other participating tools, as provided by the organizers of the task. ChiLab4It was the only system whose score was above the experimental baseline fixed for the task. A discussion of future extensions of the system is also provided.