82 research outputs found

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deal with Old Saxon or Old English. This project aims to be a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as the labeled dataset to train the model. The evaluation of the algorithm reached 97% accuracy and a 99% weighted average for precision, recall, and F1 score. In addition, we tested the model with some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
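
    A minimal sketch of how such a supervised BiLSTM scansion model could be set up, assuming each verse is tokenised into character or syllable ids and classified into a single metrical pattern; the layer sizes, label count, and tokenisation below are illustrative assumptions, not details from the paper:

    # Hypothetical BiLSTM verse classifier (PyTorch); sizes and label set are assumed.
    import torch
    import torch.nn as nn

    class VerseScanner(nn.Module):
        def __init__(self, vocab_size, num_patterns=5, emb_dim=64, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                  bidirectional=True)
            self.classify = nn.Linear(2 * hidden_dim, num_patterns)

        def forward(self, token_ids):          # token_ids: (batch, seq_len)
            x = self.embed(token_ids)          # (batch, seq_len, emb_dim)
            out, _ = self.bilstm(x)            # (batch, seq_len, 2 * hidden_dim)
            pooled = out.mean(dim=1)           # average over the verse
            return self.classify(pooled)       # logits over metrical patterns

    # Supervised training on the manually annotated corpus (labels = pattern ids).
    model = VerseScanner(vocab_size=60)
    loss_fn = nn.CrossEntropyLoss()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)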

    Abductive Design of BDI Agent-based Digital Twins of Organizations

    For a Digital Twin - a precise, virtual representation of a physical counterpart - of a human-like system to be faithful and complete, it must appeal to a notion of anthropomorphism (i.e., attributing human behaviour to non-human entities) to imitate (1) the externally visible behaviour and (2) the internal workings of that system. Although the Belief-Desire-Intention (BDI) paradigm was not developed for this purpose, it has been used successfully in human modeling applications. To this end, this thesis introduces the notion of abductive design of BDI agent-based Digital Twins of organizations, which builds on two powerful reasoning disciplines: reverse engineering (to recreate the visible behaviour of the target system) and goal-driven eXplainable Artificial Intelligence (XAI) (to view the behaviour of the target system through the lens of BDI agents). Precisely speaking, the overall problem addressed in this thesis is to “find a BDI agent program that best explains (in the sense of formal abduction) the behaviour of a target system based on its past experiences”. To do so, we propose three goal-driven XAI techniques: (1) abductive design of BDI agents, (2) leveraging imperfect explanations, and (3) mining belief-based explanations. The resulting approach suggests that using goal-driven XAI to generate Digital Twins of organizations in the form of BDI agents can be effective, even in a setting with limited information about the target system’s behaviour.

    AI and legal personhood: a theoretical survey

    I set out the pros and cons of conferring legal personhood on artificial intelligence systems (AIs), mainly under civil law. I provide functionalist arguments to justify this policy choice and identify the content that such a legal status might have. Although personhood entails holding one or more legal positions, I will focus on the distribution of liabilities arising from unpredictably illegal and harmful conduct. Conferring personhood on AIs might efficiently allocate risks and social costs, ensuring protection for victims, incentives for production, and technological innovation. I also consider other legal positions, e.g., the capacity to act, the ability to hold property, make contracts, and sue (and be sued). However, I contend that even assuming that conferring personhood on AIs finds widespread consensus, its implementation requires solving a coordination problem, determined by three asymmetries: technological, intra-legal systems, and inter-legal systems. I address the coordination problem through conceptual analysis and metaphysical explanation. I first frame legal personhood as a node of inferential links between factual preconditions and legal effects. Yet, this inferentialist reading does not account for the ‘background reasons’, i.e., it does not explain why we group divergent situations under legal personality and how extra-legal information is integrated into it. One way to account for this background is to adopt a neo-institutional perspective and update its ontology of legal concepts with further layers: the meta-institutional and the intermediate. Under this reading, the semantic referent of legal concepts is institutional reality. So, I use notions of analytical metaphysics, such as grounding and anchoring, to explain the origins and constituent elements of legal personality as an institutional kind. Finally, I show that the integration of conceptual and metaphysical analysis can provide the toolkit for finding an equilibrium around the legal-policy choices that are involved in including (or not including) AIs among legal persons.

    Domain-general versus domain-specific learning mechanisms: Neurochemical mechanisms and relevance to autism

    The theory that various features of autism spectrum disorders (ASD) can be explained by differences in the learning (or “predictive coding”) process is growing in popularity. However, extant studies have focused on the domain of sensory perception, i.e., learning what to expect in the visual or auditory domains. It is thus unclear whether such models are restricted to the perceptual domain, or whether they are outlining differences in domain-general learning processes. Consequently, how such theories can explain the social and motor features of ASD is currently unclear. The first part of the current thesis asks whether autistic adults exhibit differences, compared to non-autistic adults, with respect to social learning and motor learning. The second part of this thesis focuses in detail on one of these learning types - social learning. Here I investigate the neurochemical mechanisms that underpin social learning and ask whether they are dissociable from the neurochemical mechanisms that underpin learning from one’s own individual experience (individual learning). In integrating these results with the wider literature, I reflect upon the broader question of whether there are common domain-general learning mechanisms, or domain-specific (e.g., social, motor, individual) learning “modules”. Together the studies presented in this thesis implicate the dopaminergic neurotransmitter system in both social and individual learning. Results support the view that there are domain-general neurochemical mechanisms that support various types of learning. These results do not, however, support the view that autistic adults exhibit differences in these domain-general learning processes. That is, our empirical work showed no differences in either social or motor learning when comparing autistic and non-autistic adults. These results do not add support for impaired predictive coding as a core deficit that can explain social and motor atypicalities in autism, but rather force us to think more critically about what overarching conclusions can be drawn from studies of predictive coding in autism within the perception domain.

    Computational Theory of Mind for Human-Agent Coordination

    In everyday life, people often depend on their theory of mind, i.e., their ability to reason about the unobservable mental content of others to understand, explain, and predict their behaviour. Many agent-based models have been designed to develop computational theory of mind and analyze its effectiveness in various tasks and settings. However, most existing models are not generic (e.g., only applied in a given setting), not feasible (e.g., require too much information to be processed), or not human-inspired (e.g., do not capture the behavioral heuristics of humans). This hinders their applicability in many settings. Accordingly, we propose a new computational theory of mind, which captures the human decision heuristics of reasoning by abstracting individual beliefs about others. We specifically study computational affinity and show how it can be used in tandem with theory of mind reasoning when designing agent models for human-agent negotiation. We perform two-agent simulations to analyze the role of affinity in getting to agreements when there is a bound on the time to be spent negotiating. Our results suggest that modeling affinity can ease the negotiation process by decreasing the number of rounds needed for an agreement as well as yield a higher benefit for agents with theory of mind reasoning.
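
    As an illustration only, a deadline-bounded two-agent exchange in which an affinity estimate lowers the responder's acceptance threshold (so agreements come in fewer rounds) could look like the following; the offer, threshold, and affinity-update rules are assumptions for this sketch, not the paper's actual agent model:

    # Hypothetical sketch: affinity eases a deadline-bounded negotiation.
    def negotiate(max_rounds=20, affinity=0.5):
        """Agent A proposes a share of a unit pie to agent B each round.
        B's acceptance threshold drops with time pressure and with its
        affinity toward A; returns (round, offer) on agreement, else None."""
        for rnd in range(1, max_rounds + 1):
            pressure = rnd / max_rounds
            offer = 0.3 + 0.4 * pressure                  # A concedes toward the deadline
            threshold = 0.6 - 0.2 * pressure - 0.2 * affinity
            if offer >= threshold:
                return rnd, round(offer, 3)
            affinity = min(1.0, affinity + 0.02)          # repeated interaction builds affinity
        return None

    print(negotiate(affinity=0.1))   # low affinity: agreement takes more rounds
    print(negotiate(affinity=0.9))   # high affinity: agreement in fewer rounds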

    Deception


    Logic-based Technologies for Multi-agent Systems: A Systematic Literature Review

    Precisely when the success of sub-symbolic artificial intelligence (AI) techniques leads many non-computer scientists and non-technical media to identify them with the whole of AI, symbolic approaches are getting more and more attention as those that could make AI amenable to human understanding. Given the recurring cycles in AI history, we expect that a revamp of technologies often tagged as “classical AI” (in particular, logic-based ones) will take place in the next few years. On the other hand, agents and multi-agent systems (MAS) have been at the core of the design of intelligent systems since their very beginning, and their long-term connection with logic-based technologies, which characterised their early days, might open new ways to engineer explainable intelligent systems. This is why understanding the current status of logic-based technologies for MAS is nowadays of paramount importance. Accordingly, this paper aims at providing a comprehensive view of those technologies by making them the subject of a systematic literature review (SLR). The resulting technologies are discussed and evaluated from two different perspectives: the MAS one and the logic-based one.

    GROVE: A computationally grounded model for rational intention revision in BDI agents

    A fundamental aspect of Belief-Desire-Intention (BDI) agents is intention revision. Agents revise their intentions in order to maintain consistency between their intentions and beliefs, and consistency between intentions. A rational agent must also account for the optimality of its intentions in the case of revision. To that end I present GROVE, a model of rational intention revision for BDI agents. The semantics of a GROVE agent is defined in terms of constraints and preferences on possible future executions of an agent’s plans. I show that GROVE is weakly rational in the sense of Grant et al. and imposes more constraints on executions than the operational semantics for goal lifecycles proposed by Harland et al. As it may not be computationally feasible to consider all possible future executions, I propose a bounded version of GROVE that samples the set of future executions, and state conditions under which bounded GROVE commits to a rational execution.
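
    A hypothetical sketch of the bounded idea (the data structures, constraint, and preference below are assumptions for illustration, not GROVE's actual semantics): sample a subset of the possible future executions, discard those violating hard constraints, and commit to the most preferred remaining one.

    # Hypothetical sketch of bounded selection over sampled future executions.
    import random

    def bounded_choose(executions, constraints, preference, sample_size=100, seed=0):
        """Sample candidate future executions, keep those satisfying every hard
        constraint, and return the most preferred remaining one (or None)."""
        rng = random.Random(seed)
        sample = rng.sample(executions, min(sample_size, len(executions)))
        feasible = [e for e in sample if all(c(e) for c in constraints)]
        return max(feasible, key=preference, default=None)

    # Toy usage: executions are tuples of actions, the constraint keeps the
    # 'deliver' intention, and the preference favours shorter executions.
    candidates = [("pick", "move", "deliver"), ("pick", "deliver"), ("move",)]
    best = bounded_choose(candidates,
                          constraints=[lambda e: "deliver" in e],
                          preference=lambda e: -len(e))
    print(best)   # ('pick', 'deliver')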
