17,568 research outputs found

    Natural Language Dialogue Service for Appointment Scheduling Agents

    Appointment scheduling is a problem faced daily by many individuals and organizations. Cooperating agent systems have been developed to partially automate this task. In order to extend the circle of participants as far as possible, we advocate the use of natural language transmitted by e-mail. We describe COSMA, a fully implemented German-language server for existing appointment scheduling agent systems. COSMA can cope with multiple dialogues in parallel and accounts for differences in dialogue behaviour between human and machine agents. NL coverage of the sublanguage is achieved through both corpus-based grammar development and the use of message extraction techniques.
    Comment: 8 or 9 pages, LaTeX; uses aclap.sty, epsf.tex
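    The abstract does not describe the extraction machinery, but as a rough illustration of the kind of message-extraction step it mentions, the sketch below pulls candidate appointment expressions out of a German e-mail body. The regular expressions and field names are illustrative assumptions, not COSMA's corpus-based grammar.

```python
import re

# Illustrative sketch only: a crude message-extraction pass that collects
# candidate appointment expressions (weekdays, dates, times) from an e-mail
# body. The patterns are assumptions for demonstration, not COSMA's grammar.
WEEKDAY = re.compile(r"\b(Montag|Dienstag|Mittwoch|Donnerstag|Freitag|Samstag|Sonntag)\b")
DATE    = re.compile(r"\b\d{1,2}\.\d{1,2}\.(?:\d{2,4})?")   # e.g. 14.3. or 14.03.1997
TIME    = re.compile(r"\b\d{1,2}(?::\d{2})?\s*Uhr\b")       # e.g. 14:30 Uhr

def extract_appointment_mentions(email_body: str) -> dict:
    """Collect candidate weekday, date, and time mentions from the text."""
    return {
        "weekdays": WEEKDAY.findall(email_body),
        "dates":    DATE.findall(email_body),
        "times":    TIME.findall(email_body),
    }

if __name__ == "__main__":
    mail = "Koennen wir uns am Dienstag, 14.3. um 14:30 Uhr treffen?"
    print(extract_appointment_mentions(mail))
```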

    A System for Deduction-based Formal Verification of Workflow-oriented Software Models

    This work concerns the formal verification of workflow-oriented software models using a deductive approach, focusing on the formal correctness of a model's behaviour. Manually building logical specifications, expressed as sets of temporal logic formulas, is a significant obstacle for inexperienced users of the deductive approach. A system, and its architecture, for the deduction-based verification of workflow-oriented models is proposed. The inference process is based on the semantic tableaux method, which has some advantages over traditional deduction strategies. An algorithm for the automatic generation of logical specifications is also proposed. The generation procedure builds on predefined workflow patterns for BPMN, the standard and dominant notation for modeling business processes. The main idea is to treat patterns, defined in terms of temporal logic, as logical primitives that enable the transformation of models into temporal logic formulas constituting a logical specification. Automating the generation process is crucial for bridging the gap between the intuitiveness of deductive reasoning and the difficulty of applying it in practice when logical specifications are built manually. This approach goes some way towards supporting, and hopefully enhancing our understanding of, the deduction-based formal verification of workflow-oriented models.
    Comment: International Journal of Applied Mathematics and Computer Science
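    The abstract leaves the formula templates unspecified; as a rough illustration of the pattern-to-formula idea it describes, the sketch below maps hypothetical BPMN-style pattern instances to temporal-logic formula strings. The pattern names and LTL templates are assumptions for illustration, not the paper's actual pattern definitions.

```python
# Minimal sketch of the pattern-to-formula idea, not the paper's semantics:
# each workflow pattern instance is mapped to a temporal-logic template, and
# the logical specification is the set of instantiated formulas that would be
# handed to a semantic-tableaux prover. Templates are illustrative assumptions.
PATTERN_TEMPLATES = {
    # Sequence(a, b): whenever a completes, b eventually follows.
    "Sequence": "G({a} -> F {b})",
    # ExclusiveChoice(a, b): a and b never hold together.
    "ExclusiveChoice": "G(!({a} & {b}))",
    # ParallelSplit(a, b): both branches are eventually activated.
    "ParallelSplit": "F {a} & F {b}",
}

def generate_specification(pattern_instances):
    """Translate (pattern_name, activity_a, activity_b) triples into formulas."""
    spec = []
    for name, a, b in pattern_instances:
        spec.append(PATTERN_TEMPLATES[name].format(a=a, b=b))
    return spec

if __name__ == "__main__":
    model = [
        ("Sequence", "ReceiveOrder", "CheckStock"),
        ("ExclusiveChoice", "ApproveOrder", "RejectOrder"),
    ]
    for formula in generate_specification(model):
        print(formula)
```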

    A Continuously Growing Dataset of Sentential Paraphrases

    A major challenge in paraphrase research is the lack of parallel corpora. In this paper, we present a new method to collect large-scale sentential paraphrases from Twitter by linking tweets through shared URLs. The main advantage of our method is its simplicity: unlike previous work, it does not require a classifier or a human in the loop to select data before annotation and the subsequent application of paraphrase identification algorithms. We present the largest human-labeled paraphrase corpus to date, of 51,524 sentence pairs, and the first cross-domain benchmarking for automatic paraphrase identification. In addition, we show that more than 30,000 new sentential paraphrases can be easily and continuously captured every month at ~70% precision, and demonstrate their utility for downstream NLP tasks through phrasal paraphrase extraction. We make our code and data freely available.
    Comment: 11 pages, accepted to EMNLP 2017
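    As a rough illustration of the collection idea, the sketch below groups tweets by shared URL and emits distinct sentence pairs as paraphrase candidates for later human labeling. The (text, urls) tweet representation is an assumption for illustration, not the authors' actual pipeline.

```python
from collections import defaultdict
from itertools import combinations

def candidate_paraphrase_pairs(tweets):
    """tweets: iterable of (tweet_text, [shared_urls]) tuples."""
    by_url = defaultdict(set)            # URL -> set of distinct tweet texts
    for text, urls in tweets:
        for url in urls:
            by_url[url].add(text)

    pairs = set()
    for texts in by_url.values():
        # Every pair of distinct tweets sharing a URL is a paraphrase candidate.
        pairs.update(combinations(sorted(texts), 2))
    return pairs

if __name__ == "__main__":
    sample = [
        ("Scientists discover water on a distant exoplanet", ["http://ex.com/1"]),
        ("Water found on a faraway exoplanet, researchers say", ["http://ex.com/1"]),
        ("Unrelated sports headline", ["http://ex.com/2"]),
    ]
    for pair in candidate_paraphrase_pairs(sample):
        print(pair)
```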

    Fidelity-Weighted Learning

    Training deep neural networks requires many training samples, but in practice training labels are expensive to obtain and may be of varying quality, as some may come from trusted expert labelers while others come from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the small amount of high-quality data or the potentially large amount of weakly-labeled data? We argue that if the learner could somehow know and take the label quality into account when learning the data representation, we could get the best of both worlds. To this end, we propose "fidelity-weighted learning" (FWL), a semi-supervised student-teacher approach for training deep neural networks using weakly-labeled data. FWL modulates the parameter updates to a student network (trained on the task we care about) on a per-sample basis, according to the posterior confidence in each label's quality as estimated by a teacher (who has access to the high-quality labels). Both student and teacher are learned from the data. We evaluate FWL on two tasks in information retrieval and natural language processing, where we outperform state-of-the-art alternative semi-supervised methods, indicating that our approach makes better use of strong and weak labels and leads to better task-dependent data representations.
    Comment: Published as a conference paper at ICLR 2018
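    As a rough illustration of the fidelity-weighting idea, the sketch below scales each sample's gradient contribution by a teacher-supplied confidence in its (possibly weak) label. It uses a simple logistic-regression student for clarity; it is a toy instance of confidence-weighted updates, not the paper's architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fidelity_weighted_step(w, X, y_weak, confidence, lr=0.1):
    """One gradient step where each sample's contribution is scaled by the
    teacher's confidence in its label (confidence in [0, 1])."""
    preds = sigmoid(X @ w)
    per_sample_grad = (preds - y_weak)[:, None] * X      # d(loss)/dw per sample
    weighted_grad = (confidence[:, None] * per_sample_grad).mean(axis=0)
    return w - lr * weighted_grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    true_w = rng.normal(size=5)
    y_clean = (X @ true_w > 0).astype(float)
    flipped = rng.random(100) < 0.3               # 30% of weak labels are corrupted
    y_weak = np.where(flipped, 1 - y_clean, y_clean)
    confidence = np.where(flipped, 0.2, 0.9)      # teacher trusts clean labels more

    w = np.zeros(5)
    for _ in range(200):
        w = fidelity_weighted_step(w, X, y_weak, confidence)
    print("learned weights:", np.round(w, 2))
```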

    Bridging the gap between textual and formal business process representations

    Thesis in the form of a compendium of publications.
    In the era of digital transformation, an increasing number of organizations are starting to think in terms of business processes. Processes are at the very heart of each business and must be understood and carried out by a wide range of actors, from technical and non-technical backgrounds alike. When embracing digital transformation practices, all involved parties need to be aware of the underlying business processes in an organization. However, the representational complexity and biases of state-of-the-art modeling notations pose a challenge to understandability. On the other hand, plain-language representations, accessible by nature and easily understood by everyone, are often frowned upon by technical specialists due to their ambiguity. The aim of this thesis is precisely to bridge this gap: between the world of technical, formal languages and the world of simpler, accessible natural languages. Structured as an article compendium, this thesis presents four main contributions that address specific problems at the intersection of natural language processing and business process management.
    Postprint (published version)