
    The Logic of the Method of Agent-Based Simulation in the Social Sciences: Empirical and Intentional Adequacy of Computer Programs

    The classical theory of computation does not provide an adequate model of reality for simulation in the social sciences. The aim of this paper is to construct a methodological perspective able to reconcile the formal and empirical logic of program verification in computer science with the interpretative and multiparadigmatic logic of the social sciences. We evaluate whether social simulation implies an additional perspective on how one can understand the concepts of program and computation. We show that the logic of social simulation implies at least two distinct types of program verification, reflecting an epistemological distinction in the kind of knowledge one can have about programs. Computer programs seem to possess a causal capability (Fetzer, 1999) and an intentional capability that scientific theories seem not to possess. This distinction is associated with two types of program verification, which we call empirical and intentional verification. We thereby demonstrate that computational phenomena are also intentional phenomena, and that this is particularly manifest in agent-based social simulation. Ascertaining the credibility of results in social simulation requires identifying a new category of knowledge we can have about computer programs. This knowledge should be considered the outcome of an experimental exercise, albeit not an empirical one, acquired within a context of limited consensus. The perspective of intentional computation seems to be the only one able to reflect the multiparadigmatic character of social science in terms of agent-based computational social science. We additionally contribute to clarifying several questions found in the methodological perspectives of the discipline, such as the nature of computation, the logic of program scalability, and the multiparadigmatic character of agent-based simulation in the social sciences.

    Keywords: Computer and Social Sciences, Agent-Based Simulation, Intentional Computation, Program Verification, Intentional Verification, Scientific Knowledge
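
    The two kinds of verification the paper distinguishes can be made concrete with a minimal agent-based sketch (hypothetical, not taken from the paper): checking that the program runs and reproduces the expected dynamics is empirical verification, while checking that the rule is_happy really encodes the intended social concept is closer to intentional verification.

```python
# Minimal one-dimensional Schelling-style model (illustrative only).
# Empirical question: does the program execute and produce the dynamics?
# Intentional question: does `is_happy` encode the intended social rule,
# "an agent wants at least half of its neighbours to share its type"?
import random

def is_happy(grid, i):
    """True if at least half of the occupied neighbours share the agent's type."""
    neighbours = [grid[j] for j in (i - 1, i + 1)
                  if 0 <= j < len(grid) and grid[j] is not None]
    if grid[i] is None or not neighbours:
        return True
    same = sum(1 for n in neighbours if n == grid[i])
    return same >= len(neighbours) / 2

def step(grid):
    """Move one randomly chosen unhappy agent to a random empty cell."""
    unhappy = [i for i, a in enumerate(grid) if a is not None and not is_happy(grid, i)]
    empty = [i for i, a in enumerate(grid) if a is None]
    if unhappy and empty:
        src, dst = random.choice(unhappy), random.choice(empty)
        grid[dst], grid[src] = grid[src], None
    return grid

grid = [random.choice(["A", "B", None]) for _ in range(20)]
for _ in range(50):
    grid = step(grid)
print(grid)
```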

    Semantics and Ontology: On the Modal Structure of an Epistemic Theory of Meaning

    In this paper I confront three basic questions. First, the relevance of epistemic structures, as formalized and dealt with by current epistemic logics, for a general theory of meaning. Here I follow M. Dummett's idea that a systematic account of what the meaning of an arbitrary language subsystem is must especially take into account the inferential components of meaning itself; that is, an analysis of meaning-comprehension processes given in terms of epistemic logics and semantics for epistemic notions. The second and third questions concern the ontological and epistemological framework for this approach. Concerning the epistemological aspects of an epistemic theory of meaning, the question is how epistemic logics can account for the informative character of meaning-comprehension processes. "Information" seems to be built into the very formal structure of epistemic processes, and should be exhibited in modal and possible-world semantics for propositional knowledge and belief. However, it is not yet clear what a possible world is; that is, how it can be defined semantically other than by accessibility rules, which merely define it by considering its set-theoretic relations with other sets (possible worlds). Therefore, it is not clear what the epistemological status is of the propositional information contained in the structural aspects of possible-world semantics. The problem here seems to be what kind of meaning one attributes to the modal notion of possibility, thus allowing semantic and syntactic selectors for possibilities. This is a typically Dummett-style problem. The third question is linked with this epistemological problem, since it is its ontological counterpart. It concerns the limits of logical space and of logical semantics for a theory of meaning; that is, it is concerned with the kind of structure described by inferential processes, thought of, in a Fregean perspective, as preconditions of an extensional treatment of meaning itself. The second and third questions relate to some observations in Wittgenstein's Tractatus. I shall also try to show how their behaviour limits the explicative power of some semantics for epistemic logics (Konolige's and Levesque's, for knowledge and belief).
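
    The set-theoretic thinness of possible worlds that the abstract questions can be illustrated with a toy Kripke model (all names invented for illustration): a world is nothing but a point, and the knowledge operator is defined purely through the accessibility relation.

```python
# Toy possible-world model for propositional knowledge (illustrative only).
# A "world" is characterised solely by its accessibility relations, which is
# exactly the epistemologically thin definition questioned above.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w1", "w2"}, "w2": {"w2"}, "w3": {"w1", "w3"}}  # R(w): worlds accessible from w
valuation = {"p": {"w1", "w2"}, "q": {"w2"}}                     # worlds where each atom is true

def knows(world, atom):
    """K(atom) holds at `world` iff `atom` is true in every accessible world."""
    return all(v in valuation[atom] for v in access[world])

print(knows("w1", "p"))  # True:  p holds in both w1 and w2
print(knows("w1", "q"))  # False: q fails in w1
```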

    Interestingness of traces in declarative process mining: The Janus LTLp_f approach

    Declarative process mining is the set of techniques aimed at extracting behavioural constraints from event logs. These constraints are inherently reactive in nature, in that their activation restricts the occurrence of other activities. As such, they are prone to the principle of ex falso quodlibet: they can be satisfied even when not activated. As a consequence, constraints can be mined that are hardly interesting to users or even potentially misleading. In this paper, we build on the observation that users typically read and write temporal constraints as if-statements with an explicit indication of the activation condition. Our approach is called Janus, because it permits the specification and verification of reactive constraints that, upon activation, look forward into the future and backwards into the past of a trace. Reactive constraints are expressed using Linear-time Temporal Logic with Past on Finite Traces (LTLp_f). To mine them out of event logs, we devise a time bi-directional valuation technique based on triplets of automata operating in an on-line fashion. Our solution proves efficient, being at most quadratic w.r.t. trace length, and effective in recognising the interestingness of discovered constraints.
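
    The role of activation can be shown in a few lines of code (an illustrative sketch, not the Janus algorithm itself): the response constraint "if a occurs, b occurs afterwards" is vacuously satisfied by traces that never contain a, so a miner that ignores activations would count such traces as support.

```python
# Illustrative check of a reactive "response" constraint on a trace.
# Returns (satisfied, activated): a constraint satisfied without ever being
# activated is only vacuously satisfied (ex falso quodlibet) and hence
# uninteresting in the sense discussed above.
def response_holds(trace, activation="a", target="b"):
    activated = False
    for i, event in enumerate(trace):
        if event == activation:
            activated = True
            if target not in trace[i + 1:]:
                return False, True   # activated but never followed by target
    return True, activated

print(response_holds(["a", "c", "b"]))  # (True, True):  interesting satisfaction
print(response_holds(["c", "c", "c"]))  # (True, False): vacuous satisfaction
print(response_holds(["b", "a", "c"]))  # (False, True): violation
```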

    Quine's Meaning Nihilism: Revisiting Naturalism and Confirmation Method

    The paper offers an appreciation of W. V. Quine's thought on meaning and shows how it escalates beyond meaning holism and confirmation holism, paving the way for a 'meaning nihilism' and 'confirmation rejectionism'. My effort is to show how the acceptance of radical naturalism in Quine's theory of meaning escorts him to the indeterminacy thesis of meaning. There is an interesting shift from epistemology to language, as Quine considers that a person who is aware of linguistic tricks can be a master of referential language. Another important question is how Quine's radical translation thesis reduces to semantic indeterminacy, which is a consequence of his confirmation method. I also think that the notion and the analysis of meaning became hopelessly vague in Quine's later work. I further examine Quine's position on meaning, which I call, following Hilary Putnam, 'meaning nihilism'. It seems to me that Quine held no belief that 'meaning consists in' or 'meaning depends on' something. Through this argument, I challenge the confirmation holism that was foisted by Fodor on Quine's thesis. My attempt is to scrutinize Putnam's view that Quine was neither a confirmation holist nor a meaning holist. I think that both Putnam and Quine dismissed the constitutive connection of meaning as a second-grade notion, not only from the realm of semantics but also from the perspective of epistemology. Linguistic meaning, then, cannot be formed by any sample of its uses. For Quine, the concept of meaning in metaphysics is heuristic and need not be taken seriously in any 'science-worthy' literature.

    Stratified Labelings for Abstract Argumentation

    We introduce stratified labelings as a novel semantical approach to abstract argumentation frameworks. Compared to standard labelings, stratified labelings provide a more fine-grained assessment of the controversiality of arguments, using ranks instead of the usual labels in, out, and undecided. We relate the framework of stratified labelings to conditional logic and, in particular, to System Z ranking functions.
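
    For contrast, the standard three-valued labelling that stratified labelings refine can be computed in a few lines (the framework below is invented for illustration, and this is the grounded labelling, not the stratified one from the paper):

```python
# Grounded in/out/undec labelling of an abstract argumentation framework,
# the coarse baseline that stratified (rank-based) labelings refine.
attacks = {"a": set(), "b": {"a"}, "c": {"b"}, "d": {"d"}}  # argument -> its attackers

def grounded_labelling(attacks):
    label, changed = {}, True
    while changed:
        changed = False
        for arg, attackers in attacks.items():
            if arg in label:
                continue
            if all(label.get(x) == "out" for x in attackers):
                label[arg] = "in"; changed = True    # all attackers defeated
            elif any(label.get(x) == "in" for x in attackers):
                label[arg] = "out"; changed = True   # defeated by an accepted argument
    return {arg: label.get(arg, "undec") for arg in attacks}

print(grounded_labelling(attacks))  # {'a': 'in', 'b': 'out', 'c': 'in', 'd': 'undec'}
```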

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories, and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations, and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers leverage, without further ado, the reasoning performance in LogiKEy. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation.
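
    The embedding idea can be mimicked outside HOL in a few lines (a Python sketch of the general technique under an assumed Kripke-style semantics, not LogiKEy's actual encoding): connectives are lifted to predicates over worlds, and the deontic operator O quantifies over ideal worlds.

```python
# Toy semantical embedding of a deontic logic (illustrative only).
# O(phi) reads "phi holds in all deontically accessible (ideal) worlds".
WORLDS = ["w1", "w2", "w3"]
IDEAL = {"w1": ["w2"], "w2": ["w2"], "w3": ["w2"]}  # assumed accessibility

def Atom(true_in):  return lambda w: w in true_in          # proposition = set of worlds
def Not(phi):       return lambda w: not phi(w)
def Implies(p, q):  return lambda w: (not p(w)) or q(w)
def O(phi):         return lambda w: all(phi(v) for v in IDEAL[w])

def valid(phi):     return all(phi(w) for w in WORLDS)     # true at every world

pay_tax = Atom(["w2"])                       # the obligation is met only in w2
print(valid(O(pay_tax)))                     # True: all ideal worlds satisfy it
print(valid(Implies(O(pay_tax), pay_tax)))   # False: obligation does not entail fact
```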

    Harnessing Higher-Order (Meta-)Logic to Represent and Reason with Complex Ethical Theories

    The computer-mechanization of an ambitious explicit ethical theory, Gewirth's Principle of Generic Consistency, is used to showcase an approach for representing and reasoning with ethical theories exhibiting complex logical features such as alethic and deontic modalities, indexicals, and higher-order quantification. Harnessing the high expressive power of Church's type theory as a meta-logic to semantically embed a combination of quantified non-classical logics, our work pushes existing boundaries in knowledge representation and reasoning. We demonstrate that intuitive encodings of complex ethical theories and their automation on the computer are no longer antipodes.

    Data in Business Process Models. A Preliminary Empirical Study

    Traditional activity-centric process modeling languages treat data as simple black boxes acting as input or output for activities. Many alternative and emerging process modeling paradigms, such as case handling and artifact-centric process modeling, give data a more central role. This is achieved by introducing lifecycles and states for data objects, which is beneficial when modeling data- or knowledge-intensive processes. We assume that traditional activity-centric process modeling languages lack the capabilities to adequately capture the complexity of such processes. To verify this assumption, we conducted an online interview among BPM experts. The results allow us to identify not only various profiles of persons modeling business processes, but also the problems that exist in contemporary modeling languages w.r.t. the modeling of business data. Overall, this preliminary empirical study confirms the necessity of data-awareness in process modeling notations in general.
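
    The lifecycles and states mentioned above amount to attaching a small state machine to each data object; a hypothetical invoice lifecycle (names invented for illustration) makes the idea concrete:

```python
# Hypothetical lifecycle of an "invoice" data object, as a state machine.
LIFECYCLE = {                       # state -> states legally reachable from it
    "created":  {"approved", "rejected"},
    "approved": {"paid"},
    "rejected": set(),
    "paid":     set(),
}

class Invoice:
    def __init__(self):
        self.state = "created"

    def move_to(self, new_state):
        """Advance the object, rejecting transitions the lifecycle forbids."""
        if new_state not in LIFECYCLE[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

inv = Invoice()
inv.move_to("approved")
inv.move_to("paid")
print(inv.state)  # paid
```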