49 research outputs found

    An OpenCCG-Based Approach to Question Generation from Concepts


    High-level methodologies for grammar engineering, introduction to the special issue


    Conceptual spatial representations for indoor mobile robots

    We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated into a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process and is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
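
    A minimal sketch of what such a layered representation might look like as a data structure is given below. The layer names and fields are illustrative assumptions, not the authors' implementation; they only mirror the abstract's idea of maps at increasing levels of abstraction feeding a situated-dialogue component.

    from dataclasses import dataclass, field

    @dataclass
    class Place:
        """A discrete place abstracted from the metric map (illustrative)."""
        place_id: int
        category: str = "unknown"                    # e.g. "corridor", from place recognition
        objects: list = field(default_factory=list)  # object labels from vision

    @dataclass
    class LayeredSpatialModel:
        """Maps at different levels of abstraction; layer names are assumptions."""
        metric_layer: list        # raw laser-based geometry
        topological_layer: dict   # place_id -> set of connected place_ids
        conceptual_layer: dict    # place_id -> Place (symbolic description)

        def describe(self, place_id: int) -> str:
            """Produce a symbolic description usable by a dialogue component."""
            place = self.conceptual_layer[place_id]
            objs = ", ".join(place.objects) or "no known objects"
            return f"Place {place_id} is a {place.category} containing {objs}."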

    A study of the use of natural language processing for conversational agents

    Language is a mark of humanity and consciousness, with conversation (or dialogue) being one of the most fundamental forms of communication that we learn as children. Therefore, one way to make a computer more attractive for interaction with users is through the use of natural language. Among the systems developed with some degree of language capability, the Eliza chatterbot is probably the first with a focus on dialogue. To make the interaction more interesting and useful to the user, there are other approaches besides chatterbots, such as conversational agents. These agents generally have, to some degree, properties such as: a body (with cognitive states, including beliefs, desires and intentions or objectives); an interactive embodiment in the real or virtual world (including perception of events, communication, and the ability to manipulate the world and to communicate with other agents); and human-like behavior (including affective abilities). This type of agent has been referred to by several terms, including animated agents or embodied conversational agents (ECA).

    A dialogue system has six basic components. (1) The speech recognition component is responsible for translating the user's speech into text. (2) The natural language understanding component produces a semantic representation suitable for dialogue, usually using grammars and ontologies. (3) The task manager chooses the concepts to be expressed to the user. (4) The natural language generation component defines how to express these concepts in words. (5) The dialogue manager controls the structure of the dialogue. (6) The synthesizer is responsible for translating the agent's answer into speech.

    However, there is no consensus about the resources necessary for developing conversational agents or about the difficulties involved (especially for resource-poor languages). This work focuses on the influence of the natural language components (dialogue understanding and management) and analyses, in particular, the use of parsing systems as part of developing conversational agents with more flexible language capabilities. It analyses what kinds of parsing resources contribute to conversational agents and discusses how to develop them for Portuguese, which is a resource-poor language. To do so, we analyse approaches to natural language understanding, identify parsing approaches that offer good performance, and, based on this analysis, develop a prototype to evaluate the impact of using a parser in a conversational agent.
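
    The six-component architecture enumerated above can be summarised as a simple pipeline. The class and method names below are hypothetical placeholders used only to illustrate the data flow; they do not correspond to any particular framework or to the prototype described in the work.

    class DialogueSystem:
        """Minimal pipeline over the six components listed in the abstract (names are illustrative)."""

        def __init__(self, asr, nlu, task_manager, nlg, dialogue_manager, tts):
            self.asr = asr                            # (1) speech recognition: audio -> text
            self.nlu = nlu                            # (2) understanding: text -> semantic representation
            self.task_manager = task_manager          # (3) chooses the concepts to express
            self.nlg = nlg                            # (4) generation: concepts -> text
            self.dialogue_manager = dialogue_manager  # (5) controls the dialogue structure
            self.tts = tts                            # (6) synthesizer: text -> speech

        def turn(self, user_audio):
            text = self.asr.transcribe(user_audio)            # (1)
            semantics = self.nlu.parse(text)                  # (2) e.g. a parser-based NLU module
            state = self.dialogue_manager.update(semantics)   # (5)
            concepts = self.task_manager.select(state)        # (3)
            reply_text = self.nlg.realize(concepts)           # (4)
            return self.tts.synthesize(reply_text)            # (6)

    Concrete components (a speech recogniser, a parser-based understanding module, and so on) would be plugged into the constructor; the understanding component is where the kind of parser comparison described in the abstract would take place.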

    Making effective use of healthcare data using data-to-text technology

    Healthcare organizations are in a continuous effort to improve health outcomes, reduce costs and enhance the patient experience of care. Data is essential to measure and help achieve these improvements in healthcare delivery. Consequently, a data influx from various clinical, financial and operational sources is now overwhelming healthcare organizations and their patients. The effective use of this data, however, is a major challenge. Clearly, text is an important medium for making data accessible. Financial reports are produced to assess healthcare organizations on key performance indicators and steer their healthcare delivery. Similarly, at a clinical level, data on patient status is conveyed by means of textual descriptions to facilitate patient review, shift handover and care transitions. Likewise, patients are informed by their doctors about data on their health status and treatments via text, in the form of reports or via ehealth platforms. Unfortunately, such text is the outcome of a highly labour-intensive process when it is written by healthcare professionals. It is also prone to incompleteness and subjectivity, and hard to scale up to different domains, wider audiences and varying communication purposes. Data-to-text is a recent breakthrough technology in artificial intelligence which automatically generates natural language, in the form of text or speech, from data. This chapter provides a survey of data-to-text technology, with a focus on how it can be deployed in a healthcare setting. It will (1) give an up-to-date synthesis of data-to-text approaches, (2) give a categorized overview of use cases in healthcare, (3) seek to make a strong case for evaluating and implementing data-to-text in a healthcare setting, and (4) highlight recent research challenges.
    Comment: 27 pages, 2 figures, book chapter
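
    To make the data-to-text idea concrete, here is a minimal rule- and template-based sketch that turns a structured observation into a short textual statement. The record fields, thresholds and wording are invented for illustration and are not taken from the chapter; fuller data-to-text systems typically add document planning, microplanning and surface realization on top of this.

    # Minimal data-to-text sketch: structured vital-sign record -> short text.
    # Field names and thresholds are invented for illustration only.

    def summarize_vitals(record: dict) -> str:
        messages = []
        hr = record.get("heart_rate")
        if hr is not None:
            level = "elevated" if hr > 100 else "within the normal range"
            messages.append(f"Heart rate is {hr} bpm, {level}.")
        temp = record.get("temperature_c")
        if temp is not None:
            level = "febrile" if temp >= 38.0 else "afebrile"
            messages.append(f"Temperature is {temp:.1f} degrees Celsius; the patient is {level}.")
        return " ".join(messages) or "No observations available."

    print(summarize_vitals({"heart_rate": 112, "temperature_c": 38.4}))
    # -> Heart rate is 112 bpm, elevated. Temperature is 38.4 degrees Celsius; the patient is febrile.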

    Evaluating the impact of variation in automatically generated embodied object descriptions

    Institute for Communicating and Collaborative Systems

    The primary task for any system that aims to automatically generate human-readable output is choice: the input to the system is usually well-specified, but there can be a wide range of options for creating a presentation based on that input. When designing such a system, an important decision is to select which aspects of the output are hard-wired and which allow for dynamic variation. Supporting dynamic choice requires additional representation and processing effort in the system, so it is important to ensure that incorporating variation has a positive effect on the generated output.

    In this thesis, we concentrate on two types of output generated by a multimodal dialogue system: linguistic descriptions of objects drawn from a database, and conversational facial displays of an embodied talking head. In a series of experiments, we add different types of variation to one of these types of output. The impact of each implementation is then assessed through a user evaluation in which human judges compare outputs generated by the basic version of the system to those generated by the modified version; in some cases, we also use automated metrics to compare the versions of the generated output. This series of implementations and evaluations allows us to address three related issues. First, we explore the circumstances under which users perceive and appreciate variation in generated output. Second, we compare two methods of including variation in the output of a corpus-based generation system. Third, we compare human judgements of output quality to the predictions of a range of automated metrics.

    The results of the thesis are as follows. The judges generally preferred output that incorporated variation, except for a small number of cases where other aspects of the output obscured it or the variation was not marked. In general, the output of systems that chose the majority option was judged worse than that of systems that chose from a wider range of outputs. However, the results for non-verbal displays were mixed: users mildly preferred agent outputs where the facial displays were generated using stochastic techniques to those where a simple rule was used, but the stochastic facial displays decreased users' ability to identify contextual tailoring in speech while the rule-based displays did not. Finally, automated metrics based on simple corpus similarity favour generation strategies that do not diverge far from the average corpus examples, which are exactly the strategies that human judges tend to dislike. Automated metrics that measure other properties of the generated output correspond more closely to users' preferences.
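
    The finding about corpus-similarity metrics can be illustrated with a toy overlap score. This is a generic sketch, not one of the metrics actually used in the thesis, and the example sentences are invented; it only shows why an output that sticks to the majority wording scores higher against a corpus than an acceptable but more varied paraphrase.

    # Toy corpus-similarity metric: mean Jaccard overlap between a candidate
    # sentence and a set of corpus references. Generic illustration only.

    def overlap_score(candidate: str, references: list) -> float:
        cand = set(candidate.lower().split())
        scores = []
        for ref in references:
            ref_tokens = set(ref.lower().split())
            scores.append(len(cand & ref_tokens) / max(len(cand | ref_tokens), 1))
        return sum(scores) / len(scores)

    corpus = [
        "this phone has a large screen and a long battery life",
        "this phone has a large screen and a good camera",
    ]
    majority = "this phone has a large screen"                   # close to the average corpus example
    varied = "the display on this handset is generously sized"   # paraphrase a human might still accept

    print(overlap_score(majority, corpus))   # higher score
    print(overlap_score(varied, corpus))     # much lower score, despite being acceptable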

    Automatic Generation of Sports News

    This dissertation developed a natural language generation system that, given the data of a particular football match, is able to automatically create a news article reporting on that match.

    Statistical Language Models applied to News Generation

    Natural Language Generation (NLG) is a subfield of Artificial Intelligence. Its main goal is to produce understandable text in natural language from non-linguistic input data. Automated News Generation is a promising subject in the area of Computational Journalism, which uses NLG to create tools that help journalists in news production by automating some steps. These tools need a large amount of structured data as input and, for this reason, sports is a very natural subject to address, because its data is well organized. Automating steps in news production brings benefits to journalists: the tools can summarize data and turn it into readable text instantly, so the journalists only have to adjust it, which makes the production process much faster. The need for this faster process was the main motivation of this dissertation.

    The goal of this dissertation is to implement an Automated News Generation algorithm in collaboration with ZOS, Lda., which owns the zerozero.pt project, an online social media publisher with one of the largest football databases in the world; zerozero.pt provides a dataset for exploration and research in this field. This dissertation continues the work done by João Aires, who wrote a dissertation on the same topic in 2016; here, a different approach is used to address the problem. The primary objective is to use Statistical Language Models to generate news from scratch, applying them in a system where the user can generate sentences about a specific match.

    Zerozero.pt records data on more than 6000 matches per week and produces news for an average of 100 of those games. After a manual analysis of part of that data, it was decided that a news piece would be divided into four parts: Introduction, Goals, Sent-offs and Conclusion. By creating a Statistical Language Model for each of these parts, it is possible to summarize each match, making it easier to exploit this large amount of structured data and consequently increase journalists' productivity. The evaluation of the system will be done through manual evaluation, such as surveys, making it possible to analyze and discuss the obtained results.
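
    The core idea, training a separate statistical language model for each section of the news piece and sampling sentences from it, can be sketched with a toy bigram model. The training sentences and the PLAYER/HOME/AWAY placeholders below are invented examples, not zerozero.pt data, and the dissertation's actual models may differ.

    import random
    from collections import defaultdict

    # Toy bigram language model for one news section ("Goals").
    # Training sentences and placeholder tokens are invented examples.

    def train_bigram_model(sentences):
        model = defaultdict(list)
        for sentence in sentences:
            tokens = ["<s>"] + sentence.split() + ["</s>"]
            for prev, nxt in zip(tokens, tokens[1:]):
                model[prev].append(nxt)   # successors stored with their observed frequency
        return model

    def generate(model, max_len=20):
        token, out = "<s>", []
        while len(out) < max_len:
            token = random.choice(model[token])
            if token == "</s>":
                break
            out.append(token)
        return " ".join(out)

    goals_corpus = [
        "PLAYER opened the scoring for HOME in the 23rd minute",
        "PLAYER equalised for AWAY shortly after the break",
    ]
    goals_model = train_bigram_model(goals_corpus)
    print(generate(goals_model))   # e.g. "PLAYER opened the scoring for AWAY shortly after the break"

    In such a setup, the generated sentence would then be filled with the actual player and team names from the match data before being offered to the journalist for editing.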