
    Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems

    Natural language generation (NLG) is a critical component of spoken dialogue systems, and it has a significant impact on both usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross-entropy training criterion, and language variation can be easily achieved by sampling from output candidates. Despite using fewer heuristics, the proposed method improved performance over previous methods in an objective evaluation across two differing test domains. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems. Comment: To appear in EMNLP 2015
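The semantically controlled cell described above augments a standard LSTM with a reading gate that gradually consumes a dialogue-act vector, so the generated sentence "uses up" the semantics it must express. A minimal numpy sketch of one such step; the names, shapes, and gate wiring are illustrative assumptions, not the paper's exact parameterisation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sc_lstm_step(x, h_prev, c_prev, d_prev, W, Wd):
    """One semantically conditioned LSTM step (illustrative shapes/names).

    d_prev is the remaining dialogue-act vector; a reading gate r decides
    how much of it to consume, and the consumed semantics feed the cell state.
    """
    z = np.concatenate([x, h_prev])
    i = sigmoid(W["i"] @ z)               # input gate
    f = sigmoid(W["f"] @ z)               # forget gate
    o = sigmoid(W["o"] @ z)               # output gate
    g = np.tanh(W["g"] @ z)               # candidate cell update
    r = sigmoid(W["r"] @ z)               # reading gate over the semantics
    d = r * d_prev                        # dialogue-act vector shrinks each step
    c = f * c_prev + i * g + np.tanh(Wd @ d)  # cell state plus semantic term
    h = o * np.tanh(c)
    return h, c, d

rng = np.random.default_rng(0)
nx, nh, nd = 4, 5, 3                      # toy dimensions
W = {k: rng.standard_normal((nh, nx + nh)) for k in "ifog"}
W["r"] = rng.standard_normal((nd, nx + nh))
Wd = rng.standard_normal((nh, nd))
h, c, d = sc_lstm_step(rng.standard_normal(nx), np.zeros(nh),
                       np.zeros(nh), np.ones(nd), W, Wd)
```

Because the reading gate lies strictly in (0, 1), the semantic vector can only shrink, which is what lets the model jointly learn sentence planning (when to emit which slot) and surface realisation.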

    Ontology learning for the semantic deep web

    Ontologies could play an important role in assisting users in their search for Web pages. This dissertation considers the problem of constructing natural ontologies that support users in their Web search efforts and increase the number of relevant Web pages that are returned. To achieve this goal, this thesis suggests combining Deep Web information, which consists of dynamically generated Web pages that cannot be indexed by existing automated Web crawlers, with ontologies, resulting in the Semantic Deep Web. The Deep Web information is exploited in three different ways: extracting attributes from Deep Web data sources automatically, generating domain ontologies from the Deep Web automatically, and extracting instances from the Deep Web to enhance the domain ontologies. Several algorithms for the above-mentioned tasks are presented. Experimental results suggest that the proposed methods assist users with finding more relevant Web sites. Another contribution of this dissertation is a methodology for evaluating existing general-purpose ontologies using the Web as a corpus. The quality of ontologies (QoO) is quantified by analyzing existing ontologies to obtain numeric measures of how natural their concepts and relationships are. This methodology was first applied to several major, popular ontologies, such as WordNet, OpenCyc and the UMLS. Subsequently, the domain ontologies developed in this research were evaluated from the naturalness perspective.

    Classifying Web Exploits with Topic Modeling

    This short empirical paper investigates how well topic modeling and database meta-data characteristics can classify web and other proof-of-concept (PoC) exploits for publicly disclosed software vulnerabilities. Using a dataset comprising over 36 thousand PoC exploits, an accuracy rate of nearly 0.9 is obtained in the empirical experiment. Text mining and topic modeling provide a significant boost to this classification performance. In addition to these empirical results, the paper contributes to the research tradition of enhancing software vulnerability information with text mining, providing also a few scholarly observations about the potential for semi-automatic classification of exploits in the existing tracking infrastructures. Comment: Proceedings of the 2017 28th International Workshop on Database and Expert Systems Applications (DEXA). http://ieeexplore.ieee.org/abstract/document/8049693
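The paper's pipeline derives text features (via topic modeling) and feeds them to a classifier. As a much simpler stand-in for that pipeline, a bag-of-words naive Bayes classifier shows the basic mechanics of sorting PoC descriptions into exploit classes; the labels and example descriptions below are invented for illustration:

```python
from collections import Counter, defaultdict
import math

def train_nb(docs):
    """Multinomial naive Bayes over whitespace tokens.
    docs: list of (label, text) pairs of short exploit descriptions."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for label, text in docs:
        class_counts[label] += 1
        for tok in text.lower().split():
            word_counts[label][tok] += 1
            vocab.add(tok)
    return class_counts, word_counts, vocab

def classify(text, class_counts, word_counts, vocab):
    """Pick the class with the highest log-posterior, Laplace-smoothed."""
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total)          # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in text.lower().split():
            lp += math.log((word_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented training descriptions, two per class:
docs = [
    ("web", "sql injection in login form parameter"),
    ("web", "stored xss in comment field"),
    ("local", "buffer overflow in setuid binary"),
    ("local", "heap overflow local privilege escalation"),
]
model = train_nb(docs)
```

A real system would replace the raw token counts with topic proportions and database meta-data fields, but the train/score split stays the same.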

    Robust Dialog Management Through A Context-centric Architecture

    This dissertation presents and evaluates a method of managing spoken dialog interactions with a robust attention to fulfilling the human user’s goals in the presence of speech recognition limitations. Assistive speech-based embodied conversation agents are computer-based entities that interact with humans to help accomplish a certain task or communicate information via spoken input and output. A challenging aspect of this task involves open dialog, where the user is free to converse in an unstructured manner. With this style of input, the machine’s ability to communicate may be hindered by poor reception of utterances, caused by a user’s inadequate command of a language and/or faults in the speech recognition facilities. Since a speech-based input is emphasized, this endeavor involves the fundamental issues associated with natural language processing, automatic speech recognition and dialog system design. Driven by Context-Based Reasoning, the presented dialog manager features a discourse model that implements mixed-initiative conversation with a focus on the user’s assistive needs. The discourse behavior must maintain a sense of generality, where the assistive nature of the system remains constant regardless of its knowledge corpus. The dialog manager was encapsulated into a speech-based embodied conversation agent platform for prototyping and testing purposes. A battery of user trials was performed on this agent to evaluate its performance as a robust, domain-independent, speech-based interaction entity capable of satisfying the needs of its users.

    How research programs come apart: the example of supersymmetry and the disunity of physics

    According to Peter Galison, the coordination of different ``subcultures'' within a scientific field happens through local exchanges within ``trading zones''. In his view, the workability of such trading zones is not guaranteed, and science is not necessarily driven towards further integration. In this paper, we develop and apply quantitative methods (using semantic, authorship, and citation data from scientific literature), inspired by Galison's framework, to the case of the disunity of high-energy physics. We give prominence to supersymmetry, a concept that has given rise to several major but distinct research programs in the field, such as the formulation of a consistent theory of quantum gravity or the search for new particles. We show that ``theory'' and ``phenomenology'' in high-energy physics should be regarded as distinct theoretical subcultures, between which supersymmetry has helped sustain scientific ``trades''. However, as we demonstrate using a topic model, the phenomenological component of supersymmetry research has lost traction, and the ability of supersymmetry to tie these subcultures together is now compromised. Our work supports the view that even fields with an initially strong sentiment of unity may eventually generate diverging research programs, and demonstrates the fruitfulness of the notion of trading zones for informing quantitative approaches to scientific pluralism.
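The claim that a topic "loses traction" can be operationalised as a declining share of the corpus over time, e.g. by fitting a trend to the yearly average topic weight from a topic model. A toy sketch with made-up yearly shares (the numbers below are not the paper's data):

```python
def trend_slope(years, shares):
    """Ordinary least-squares slope of a topic's yearly corpus share.
    A negative slope indicates the topic is losing traction over time."""
    n = len(years)
    my = sum(years) / n
    ms = sum(shares) / n
    num = sum((y - my) * (s - ms) for y, s in zip(years, shares))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# Invented yearly shares of a "phenomenology" topic in a corpus:
years = [2010, 2012, 2014, 2016, 2018]
shares = [0.30, 0.27, 0.22, 0.18, 0.12]
slope = trend_slope(years, shares)
```

The sign and magnitude of the slope then ground statements like "has lost traction" in a reproducible quantity.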

    The multiple ontologies of freshness in the UK and Portuguese agri-food sectors

    This paper adopts a material-semiotic approach to explore the multiple ontologies of ‘freshness’ as a quality of food. The analysis is based on fieldwork in the UK and Portugal, with particular emphasis on fish, poultry, and fruit and vegetables. Using evidence from archival research, ethnographic observation and interviews with food businesses (including major retailers and their suppliers) plus qualitative household-level research with consumers, the paper unsettles the conventional view of freshness as a single, stable quality of food. Rather than approaching the multiplicity of freshness as a series of social constructions (different perspectives on essentially the same thing), we identify its multiple ontologies. The analysis explores their enactment as uniform and consistent, local and seasonal, natural and authentic, and sentient and lively. The paper traces the effects of these enactments across the food system, drawing out the significance of our approach for current and future geographical studies of food.

    An Approach for Intention-Driven, Dialogue-Based Web Search

    Web search engines facilitate the achievement of Web-mediated tasks, including information retrieval, Web page navigation, and online transactions. These tasks often involve goals that pertain to multiple topics, or domains. Current search engines are not suitable for satisfying complex, multi-domain needs due to their lack of interactivity and knowledge. This thesis presents a novel intention-driven, dialogue-based Web search approach that uncovers and combines users' multi-domain goals to provide helpful virtual assistance. The intention discovery procedure uses a hierarchy of Partially Observable Markov Decision Process-based dialogue managers and a backing knowledge base to systematically explore the dialogue's information space, probabilistically refining the perception of user goals. The search approach has been implemented in IDS, a search engine for online gift shopping. A usability study comparing IDS-based searching with Google-based searching found that the IDS-based approach takes significantly less time and effort, and results in higher user confidence in the retrieved results.
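The core operation of a POMDP-based dialogue manager is Bayesian belief tracking: after each user utterance, the distribution over hidden user goals is reweighted by the observation likelihood. A minimal sketch of that update; the goal names, transition, and observation tables below are invented, not taken from the IDS system:

```python
def belief_update(belief, action, observation, T, O):
    """One POMDP belief-tracking step: b'(s') ∝ O(obs|s',a) · Σ_s T(s'|s,a) b(s).
    belief: dict state -> prob; T[(s, a)]: dict s' -> P(s'|s,a);
    O[(s', a)]: dict obs -> P(obs|s',a)."""
    successors = {sp for (s, a), dist in T.items() if a == action for sp in dist}
    new_belief = {}
    for s2 in successors:
        p_obs = O.get((s2, action), {}).get(observation, 0.0)
        p_trans = sum(p * T.get((s, action), {}).get(s2, 0.0)
                      for s, p in belief.items())
        new_belief[s2] = p_obs * p_trans
    z = sum(new_belief.values())
    return {s: p / z for s, p in new_belief.items()} if z > 0 else belief

# Invented two-goal gift-shopping example: goals persist across turns.
T = {("flowers", "ask"): {"flowers": 1.0},
     ("chocolate", "ask"): {"chocolate": 1.0}}
O = {("flowers", "ask"): {"hear_flowers": 0.8, "hear_chocolate": 0.2},
     ("chocolate", "ask"): {"hear_flowers": 0.3, "hear_chocolate": 0.7}}
posterior = belief_update({"flowers": 0.5, "chocolate": 0.5},
                          "ask", "hear_flowers", T, O)
```

Because the update is probabilistic rather than rule-based, a noisy or ambiguous answer merely shifts the belief instead of derailing the dialogue, which is what "probabilistically refining the perception of user goals" amounts to.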

    Multi-domain neural network language generation for spoken dialogue systems

    Moving from limited-domain natural language generation (NLG) to open domain is difficult because the number of semantic input combinations grows exponentially with the number of domains. Therefore, it is important to leverage existing resources and exploit similarities between domains to facilitate domain adaptation. In this paper, we propose a procedure to train multi-domain, Recurrent Neural Network-based (RNN) language generators via multiple adaptation steps. In this procedure, a model is first trained on counterfeited data synthesised from an out-of-domain dataset, and then fine-tuned on a small set of in-domain utterances with a discriminative objective function. Corpus-based evaluation results show that the proposed procedure can achieve competitive performance in terms of BLEU score and slot error rate while significantly reducing the data needed to train generators in new, unseen domains. In subjective testing, human judges confirm that the procedure greatly improves generator performance when only a small amount of data is available in the domain. Supported by Toshiba Research Europe Ltd. This is the accepted manuscript; it is currently embargoed pending publication.
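The two-stage procedure can be pictured as a training schedule: first a pre-training pass over counterfeited data synthesised from the out-of-domain set, then fine-tuning on the small in-domain set. The sketch below only models the schedule, not the RNN; the `synthesise` function is a trivial stand-in for the paper's slot-level counterfeiting:

```python
def adaptation_schedule(out_of_domain, in_domain, synthesise):
    """Yield (phase, data) pairs mirroring the two adaptation steps.
    `synthesise` maps an out-of-domain utterance to a counterfeited
    in-domain one; here it is an assumed toy substitute."""
    counterfeit = [synthesise(u) for u in out_of_domain]
    yield ("pretrain", counterfeit)   # stage 1: counterfeited data
    yield ("finetune", in_domain)     # stage 2: scarce in-domain data

# Toy stand-in: rename a restaurant-domain slot to a hotel-domain one.
def synthesise(utterance):
    return utterance.replace("food", "amenity")

ood = ["the venue serves cheap food", "food rating is high"]
ind = ["the hotel has a pool amenity"]
phases = list(adaptation_schedule(ood, ind, synthesise))
```

The point of the schedule is data economy: the expensive first stage reuses an existing corpus, so only the short second stage needs utterances collected in the new domain.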

    A Knowledge Multidimensional Representation Model for Automatic Text Analysis and Generation: Applications for Cultural Heritage

    Knowledge is information that has been contextualized in a certain domain, where it can be used and applied. Natural language provides the most direct way to transfer knowledge at different levels of conceptual density. The opportunity offered by the evolution of Natural Language Processing technologies is thus to make the process of knowledge transfer more fluid and universal. Indeed, unfolding domain knowledge is one way to bring to larger audiences contents that would otherwise be restricted to specialists. So far this has been done in an entirely manual way, through the skills of divulgators and popular science writers. Technology now provides a way to make this transfer both less expensive and more widespread. Extracting knowledge and then generating from it suitably communicable text in natural language are the two related subtasks that need to be fulfilled in order to attain the general goal. To this aim, two fields from information technology have achieved the needed maturity and can therefore be effectively combined. On the one hand, Information Extraction and Retrieval (IER) can extract knowledge from texts and map it into a neutral, abstract form, liberating it from the stylistic constraints in which it originated. From there, Natural Language Generation can take over, regenerating the extracted knowledge automatically, or semi-automatically, into texts targeting new communities. This doctoral thesis contributes to making this combination concrete through the definition and implementation of a novel multidimensional model for the representation of conceptual knowledge, and of a workflow that can produce strongly customized textual descriptions. By exploiting techniques for the generation of paraphrases and by profiling target users, applications and domains, a target-driven approach is proposed to automatically generate multiple texts from the same information core.
An extended case study is described to demonstrate the effectiveness of the proposed model and approach in the Cultural Heritage application domain, to compare and position this contribution within the current state of the art, and to outline future directions.
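The target-driven idea of generating multiple texts from one information core can be sketched as rendering the same extracted facts through audience-specific surface forms. The triples, profiles, and templates below are invented illustrations, far simpler than the thesis's multidimensional model:

```python
def realise(core, profile, templates):
    """Render one knowledge core for a given target profile.
    core: (subject, relation, object) triples; templates index surface
    forms by profile and interpolate the relations as named fields."""
    facts = {r: o for (_, r, o) in core}   # relation -> object
    subject = core[0][0]
    return templates[profile].format(s=subject, **facts)

# One information core, two audience-profiled realisations:
core = [("Temple A", "built_in", "the 2nd century"),
        ("Temple A", "located_in", "the old town")]
templates = {
    "expert": "{s}, erected in {built_in}, is sited in {located_in}.",
    "visitor": "{s} is a landmark in {located_in} dating from {built_in}.",
}
t_expert = realise(core, "expert", templates)
t_visitor = realise(core, "visitor", templates)
```

Swapping the template set is the crudest form of the profiling the thesis describes; its paraphrase-generation techniques would instead vary the surface form automatically per user, application, and domain.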