
    Generic semantics-based task-oriented dialogue system framework for human-machine interaction in industrial scenarios

    In Industry 5.0, workers and their well-being are central to the production process. In this context, task-oriented dialogue systems allow operators to delegate the simplest tasks to industrial systems while they work on more complex ones. Moreover, being able to interact naturally with these systems reduces the cognitive load of using them and fosters user acceptance. However, most existing solutions do not support natural communication, and current techniques for building such systems require large amounts of training data, which are scarce in these scenarios. As a result, task-oriented dialogue systems in industrial settings are highly specific, which limits their ability to be modified or reused in other scenarios, tasks that entail great effort in terms of time and cost. Given these challenges, this thesis combines Semantic Web Technologies with Natural Language Processing techniques to develop KIDE4I, a semantic task-oriented dialogue system for industrial environments that enables natural communication between humans and industrial systems. KIDE4I's modules are designed to be generic, allowing simple adaptation to new use cases. The modular TODO ontology is the core of KIDE4I; it models the domain and the dialogue process and stores the generated dialogue traces. KIDE4I has been implemented and adapted for use in four industrial use cases, demonstrating that the adaptation process is not complex and benefits from the use of resources
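
    As a rough illustration of the kind of ontology-backed task grounding the abstract describes, the sketch below maps an operator's utterance to a task individual stored in an RDF graph. The namespace, class, and task names are hypothetical stand-ins; they do not reproduce the actual TODO ontology.

    # Sketch of ontology-backed task grounding (hypothetical vocabulary,
    # not the real TODO ontology): match an utterance to the task whose
    # label it overlaps most.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/todo#")  # hypothetical namespace

    g = Graph()
    for task, label in [
        (EX.StartConveyor, "start the conveyor belt"),
        (EX.StopPress, "stop the hydraulic press"),
        (EX.ReportStatus, "report the machine status"),
    ]:
        g.add((task, RDF.type, EX.IndustrialTask))
        g.add((task, RDFS.label, Literal(label)))

    def ground_utterance(utterance):
        """Return the task whose label shares the most words with the utterance."""
        words = set(utterance.lower().split())
        best, best_overlap = None, 0
        for task, _, label in g.triples((None, RDFS.label, None)):
            overlap = len(words & set(str(label).split()))
            if overlap > best_overlap:
                best, best_overlap = task, overlap
        return best

    print(ground_utterance("please start the conveyor"))  # -> EX.StartConveyor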

    Products and Services

    Today’s global economy offers more opportunities, but it is also more complex and competitive than ever before. This has spurred a wide range of research activity in many fields of interest, especially in the so-called high-tech sectors. This book is the result of research and development activity by researchers worldwide, covering development activities in general as well as various aspects of the practical application of knowledge

    Ecology-based planning. Italian and French experimentations

    This paper examines some French and Italian experimentations with green infrastructure (GI) construction in relation to their techniques and methodologies. The construction of a multifunctional green infrastructure can generate a number of relevant benefits able to face the increasing challenges of climate change and resilience (for example, social, ecological and environmental benefits, through the recognition of the concept of ecosystem services) and could ease the achievement of a performance-based approach. This approach, unlike the traditional prescriptive one, helps to attain a better and more flexible land-use integration. In both countries, GI plays an important role in countering land take and, thanks to its adaptive and cross-scale nature, helps to generate a resilient approach to urban plans and projects. Due to their flexible and site-based nature, GI can be adapted, albeit through different methodologies and approaches, to both urban and extra-urban contexts. On the one hand, France, through its strong national policy on ecological networks, recognizes them as one of the major planning strategies toward a more sustainable development of territories; on the other hand, Italy has no national policy, and its Regions still have a hard time integrating them into existing planning tools. In this perspective, Italian experimentations with GI construction appear to be a simple and sporadic add-on to urban and regional plans

    Environmental and territorial modelling for planning and design

    Between 5 and 8 September 2018 the tenth edition of the INPUT conference took place in Viterbo, hosted in the beautiful setting of the University of Tuscia and its DAFNE Department. INPUT is managed by an informal group of Italian academic researchers working in the many fields related to the use of informatics in planning. This tenth edition pursued multiple objectives with a holistic, boundary-less character, facing the complexity of today's socio-ecological systems through a systemic approach aimed at problem solving. In particular, the conference aimed to present the state of the art of modeling approaches employed in urban and territorial planning in national and international contexts. The conference also hosted a Geodesign workshop led by Carl Steinitz (Harvard Graduate School of Design), with Hrishi Ballal (on Skype), Tess Canfield and Michele Campagna. Finally, on the last day of the conference, the QGIS hackfest took place, in which over 20 free-software developers from all over Italy discussed the latest news and updates from the QGIS network. The acronym INPUT was born as INformatics for Urban and Regional Planning. In the transition to the conference graphics, the first term was unintentionally transformed into “Innovation”: a fine example of serendipity, in which a small mistake turns into something new and intriguing. The opportunity is taken to propose to the organizers and the scientific committee of the next edition that this change of the acronym be formalized. This tenth edition focused on environmental and territorial modeling for planning and design. This has been considered a fundamental theme, especially in relation to environmental sustainability, which requires a rigorous and in-depth analysis of processes, a need that can be met by territorial information systems and, above all, by the modeling and simulation of processes. In this respect, models are useful within a managerial approach for highlighting the many aspects of complex city and landscape systems. Consequently, their use must be deeply critical: not for rigid forecasts, but as an aid to the management decisions of complex systems

    Knowledge Extraction for Hybrid Question Answering

    Since Tim Berners-Lee's proposal of hypertext to his employer CERN on March 12, 1989, the World Wide Web has grown to more than one billion Web pages and is still growing. With the later Semantic Web vision, Berners-Lee et al. suggested an extension of the existing (Document) Web to allow better reuse, sharing and understanding of data. Both the Document Web and the Web of Data (the current implementation of the Semantic Web) grow continuously. This is a mixed blessing, as the two forms of the Web grow concurrently and most commonly contain different pieces of information. Modern information systems must thus bridge a Semantic Gap to allow holistic and unified access to information about a particular topic, independent of the representation of the data. One way to bridge the gap between the two forms of the Web is the extraction of structured data, i.e., RDF, from the growing amount of unstructured and semi-structured information (e.g., tables and XML) on the Document Web. Note that unstructured data here stands for any type of textual information, such as news, blogs or tweets. While extracting structured data from unstructured data allows the development of powerful information systems, it requires high-quality and scalable knowledge extraction frameworks to lead to useful results. The dire need for such approaches has led to the development of a multitude of annotation frameworks and tools. However, most of these approaches are not evaluated on the same datasets or using the same measures. The resulting Evaluation Gap needs to be tackled by a concise evaluation framework to foster fine-grained and uniform evaluations of annotation tools and frameworks over arbitrary knowledge bases. Moreover, with the constant growth of data and the ongoing decentralization of knowledge, intuitive ways for non-experts to access the generated data are required. Humans have adapted their search behavior to current Web data through access paradigms such as keyword search, so as to retrieve high-quality results; hence, most Web users expect only Web documents in return. However, humans think and most commonly express their information needs in natural language rather than in keyword phrases. Answering complex information needs often requires the combination of knowledge from various, differently structured data sources. Thus, we observe an Information Gap between natural-language questions and current keyword-based search paradigms, which in addition do not make use of the available structured and unstructured data sources. Question Answering (QA) systems provide an easy and efficient way to bridge this gap by allowing users to query data in natural language, thus reducing (1) a possible loss of precision and (2) a potential loss of time in reformulating the search intention into a machine-readable form. Furthermore, QA systems enable answering natural-language queries with concise results instead of links to verbose Web documents. Additionally, they allow, and indeed encourage, access to and the combination of knowledge from heterogeneous knowledge bases (KBs) within one answer. Consequently, three main research gaps are considered and addressed in this work. First, addressing the Semantic Gap between the unstructured Document Web and the structured Web of Data requires the development of scalable and accurate approaches for the extraction of structured data in RDF. This research challenge is addressed by several approaches within this thesis.
This thesis presents CETUS, an approach for recognizing entity types to populate RDF KBs. Furthermore, our knowledge-base-agnostic disambiguation framework AGDISTIS can efficiently detect the correct URIs for a given set of named entities. Additionally, we introduce REX, a Web-scale framework for RDF extraction from semi-structured (i.e., templated) websites, which makes use of the semantics of the reference knowledge base to check the extracted data. The ongoing research on closing the Semantic Gap has already yielded a large number of annotation tools and frameworks. However, these approaches are still hard to compare, since the published evaluation results are calculated on diverse datasets and with different measures. The issue of comparability of results is, however, not intrinsic to the annotation task: it is now well established that scientists spend between 60% and 80% of their time preparing data for experiments. That data preparation is such a tedious problem in the annotation domain is mostly due to the different formats of the gold standards and the different data representations across reference datasets. We tackle the resulting Evaluation Gap in two ways. First, we introduce a collection of three novel datasets, dubbed N3, to leverage the possibility of optimizing NER and NED algorithms via Linked Data and to ensure maximal interoperability, overcoming the need for corpus-specific parsers. Second, we present GERBIL, an evaluation framework for semantic entity annotation. The rationale behind our framework is to provide developers, end users and researchers with easy-to-use interfaces that allow for the agile, fine-grained and uniform evaluation of annotation tools and frameworks on multiple datasets. The decentralized architecture behind the Web has led to pieces of information being distributed across data sources with varying structure. Moreover, the increasing demand for natural-language interfaces, as exemplified by current mobile applications, requires systems that deeply understand the underlying user information need. Consequently, a natural-language interface for asking questions requires a hybrid approach to data usage, i.e., simultaneously searching full texts and semantic knowledge bases. To close the Information Gap, this thesis presents HAWK, a novel entity search approach developed for hybrid QA, combining structured RDF and unstructured full-text data sources
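
    As a toy illustration of the hybrid idea behind such a system, the sketch below fetches candidate answers from a structured lookup and then ranks them by full-text evidence. The in-memory KB and document list are stand-ins for illustration only, not HAWK's actual pipeline.

    # Sketch of hybrid question answering: structured lookup plus
    # ranking by unstructured textual evidence (toy data, not HAWK).
    KB = {  # (subject, predicate) -> candidate objects, an RDF-store stand-in
        ("Leipzig", "locatedIn"): ["Saxony", "Germany"],
    }

    DOCS = [  # stand-in for a full-text index
        "Leipzig is the largest city in the German state of Saxony.",
        "Saxony borders Bavaria, Thuringia, Saxony-Anhalt and Brandenburg.",
    ]

    def hybrid_answer(subject, predicate):
        """Rank structured candidates by how many documents co-mention
        them with the subject; return the best-supported answer."""
        candidates = KB.get((subject, predicate), [])
        def evidence(answer):
            return sum(1 for doc in DOCS if subject in doc and answer in doc)
        return max(candidates, key=evidence, default=None)

    print(hybrid_answer("Leipzig", "locatedIn"))  # -> "Saxony"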

    Qualifying Ontology-Based Visual Query Formulation

    This paper elaborates on ontology-based end-user visual query formulation, particularly for users who otherwise cannot or do not wish to use formal textual query languages to retrieve data, due to a lack of technical knowledge and skills. It then provides a set of quality attributes and features, primarily elicited via a series of industrial end-user workshops and user studies carried out in the course of an industrial EU project, to guide the design and development of successor visual query systems
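
    The end product of such a visual query system is typically a formal query assembled from point-and-click selections. The sketch below, using an entirely hypothetical vocabulary, shows one way a selected class and its property constraints might be serialized into SPARQL; it is a generic illustration, not the system evaluated in the paper.

    # Sketch of a visual-query back end: serialize UI selections into SPARQL.
    # Class and property URIs are hypothetical.
    def visual_query_to_sparql(cls, constraints):
        """Turn a selected class plus (property, value) constraints
        into a SPARQL SELECT query."""
        lines = [f"?x a <{cls}> ."]
        for i, (prop, value) in enumerate(constraints):
            lines.append(f"?x <{prop}> ?v{i} .")
            lines.append(f'FILTER(?v{i} = "{value}")')
        body = "\n  ".join(lines)
        return f"SELECT ?x WHERE {{\n  {body}\n}}"

    print(visual_query_to_sparql(
        "http://example.org/onto#Pump",                      # class picked in the UI
        [("http://example.org/onto#manufacturer", "ACME")],  # constraint added in the UI
    ))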

    A model for accessing natural-language sources in e-government

    Doctorate in Informatics Engineering. For e-government to truly exist, it is necessary and crucial to provide public information and documentation and to make its access simple for citizens. A portion, not necessarily small, of these documents is unstructured and in natural language, and consequently beyond what current search systems can generally cope with and handle effectively. The thesis, then, is that it is possible to improve access to these contents using systems that process natural language and create structured information, particularly if supported by semantics. To put this thesis to the test, the work was developed in three major phases: (1) design of a conceptual model integrating the creation of structured information and its provision to various actors, in line with the vision of e-government 2.0; (2) definition and development of a prototype instantiating the key modules of this conceptual model, including ontology-based information extraction supported by examples of relevant information, knowledge management, and natural-language-based access; (3) assessment of the usability and acceptability of querying information as made possible by the prototype, and in consequence of the conceptual model, by users in a realistic scenario, including a comparison with existing forms of access. In addition to this evaluation, at another level, more related to technology assessment than to the model, the performance of the subsystem responsible for information extraction was evaluated. The evaluation results show that the proposed model was perceived as more effective and more useful than the alternatives. Together with the prototype's performance in extracting information from documents, which is comparable to the state of the art, the results demonstrate the feasibility and advantages, with current technology, of using natural language processing and the integration of semantic information to improve access to unstructured natural-language contents. The conceptual model and the prototype demonstrator are intended to contribute to the future existence of more sophisticated search systems that are also better suited to e-government. For transparency in governance, active citizenship, and greater agility in interactions with public administration, among others, citizens and businesses must have quick and easy access to official information, even if it was originally created in natural language
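
    As a minimal illustration of the kind of ontology-based information extraction described above, the sketch below spots mentions of ontology-labelled concepts in free text and emits RDF triples. The mini-lexicon and namespace are hypothetical, not the prototype's actual resources.

    # Sketch of ontology-based information extraction: spot lexicon matches
    # in free text and emit typed RDF triples (illustrative vocabulary).
    from rdflib import Graph, Literal, Namespace, RDF

    GOV = Namespace("http://example.org/egov#")  # hypothetical namespace

    LEXICON = {  # surface form -> ontology class, standing in for learned examples
        "building permit": GOV.Permit,
        "city council": GOV.PublicBody,
    }

    def extract(text):
        """Create one typed, annotated resource per lexicon match in the text."""
        g = Graph()
        for i, (surface, cls) in enumerate(LEXICON.items()):
            if surface in text.lower():
                mention = GOV[f"mention{i}"]
                g.add((mention, RDF.type, cls))
                g.add((mention, GOV.mentionedAs, Literal(surface)))
        return g

    doc = "The City Council approved the building permit on Monday."
    for triple in extract(doc):
        print(triple)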