
    Operationalizing Declarative and Procedural Knowledge: A Benchmark on Logic Programming Petri Nets (LPPNs)

    Modelling, specifying, and reasoning about complex systems requires processing declarative and procedural aspects of the target domain in an integrated fashion. The paper reports on an experiment conducted with a propositional version of Logic Programming Petri Nets (LPPNs), a notation extending Petri Nets with logic programming constructs. Two semantics are presented: a denotational semantics that fully maps the notation to Answer Set Programming (ASP) via the Event Calculus, and a hybrid operational semantics that processes the causal mechanisms via Petri nets and, separately, the constraints associated with objects and events via ASP. These two alternative specifications enable an empirical evaluation in terms of computational efficiency. Experimental results show that the hybrid semantics is more efficient with respect to sequences, whereas the two semantics exhibit the same behaviour with respect to branchings (although the denotational one performs better in absolute terms).
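    The hybrid operational semantics can be pictured as two cooperating engines: a token-game engine that fires enabled Petri-net transitions, and a constraint layer that filters the resulting states. The following is a minimal Python sketch of that division of labour, not the authors' implementation; the net encoding and the `constraints_hold` hook are illustrative assumptions.

        # Illustrative sketch of a hybrid LPPN-style interpreter: an operational
        # token game over the Petri net, plus a constraint check that stands in
        # for a call to an ASP solver. `constraints_hold` is a hypothetical
        # placeholder, not a real solver binding.

        def enabled(marking, pre):
            # A transition is enabled when every input place holds enough tokens.
            return all(marking.get(place, 0) >= n for place, n in pre.items())

        def fire(marking, pre, post):
            # Consume tokens from input places, produce tokens on output places.
            m = dict(marking)
            for place, n in pre.items():
                m[place] -= n
            for place, n in post.items():
                m[place] = m.get(place, 0) + n
            return m

        def step(marking, transitions, constraints_hold):
            # Fire each enabled transition and keep only successor markings
            # accepted by the (stubbed) constraint layer.
            successors = []
            for pre, post in transitions:
                if enabled(marking, pre):
                    m = fire(marking, pre, post)
                    if constraints_hold(m):
                        successors.append(m)
            return successors

    Under this split, only the constraint check involves logical solving, while the causal steps advance through cheap token arithmetic, which matches the intuition behind the efficiency result reported for sequences.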

    Accounting Value Effects for Responsible Networking


    No Labels? No Problem! Experiments with active learning strategies for multi-class classification in imbalanced low-resource settings

    Labeling textual corpora in their entirety is infeasible in most practical situations, yet it is a very common need today in public and private organizations. In contexts with large unlabeled datasets, active learning methods may reduce the manual labeling effort by selecting the samples deemed most informative for the learning process. The paper elaborates on a method for multi-class classification based on state-of-the-art NLP active learning techniques, performing various experiments in low-resource and imbalanced settings. In particular, we refer to a dataset of Dutch legal documents constructed with two levels of imbalance; we study the performance of task-adapting a pre-trained Dutch language model, BERTje, and of using active learning to fine-tune the model to the task, testing several selection strategies. We find that, on the constructed datasets, an entropy-based strategy slightly improves the F1, precision, and recall convergence rates, and that the improvements are most pronounced in the severely imbalanced dataset. These results show promise for active learning in low-resource imbalanced domains but also leave room for further improvement.
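    For concreteness, an entropy-based selection strategy of the kind mentioned above reduces to ranking the unlabeled pool by predictive entropy and sending the most uncertain samples to the annotator. The sketch below illustrates the general technique, not the paper's code; the `model.predict_proba` interface and all variable names are assumptions for illustration.

        # Minimal sketch of entropy-based sample selection for active learning.
        # Assumes a classifier (e.g. a fine-tuned BERTje wrapper) exposing a
        # predict_proba(texts) -> (n_samples, n_classes) probability array;
        # this interface is a hypothetical stand-in.
        import numpy as np

        def select_by_entropy(model, unlabeled_pool, k):
            # Predictive entropy is highest where the class distribution is
            # flattest, i.e. where the model is most uncertain.
            probs = model.predict_proba(unlabeled_pool)
            entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
            # Indices of the k most uncertain samples, most uncertain first.
            return np.argsort(entropy)[::-1][:k]

    In each active learning round, the selected samples are labeled, added to the training set, and the model is fine-tuned again before the next selection.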