
    Logic of Negation-Complete Interactive Proofs (Formal Theory of Epistemic Deciders)

    We produce a decidable classical normal modal logic of internalised negation-complete and thus disjunctive non-monotonic interactive proofs (LDiiP) from an existing logical counterpart of non-monotonic or instant interactive proofs (LiiP). LDiiP internalises agent-centric proof theories that are negation-complete (maximal) and consistent (and hence strictly weaker than, for example, Peano Arithmetic) and enjoy the disjunction property (like Intuitionistic Logic). In other words, internalised proof theories are ultrafilters and all internalised proof goals are definite in the sense of being either provable or disprovable to an agent by means of disjunctive internalised proofs (thus also called epistemic deciders). Still, LDiiP itself is classical (monotonic, non-constructive), negation-incomplete, and does not have the disjunction property. The price to pay for the negation completeness of our interactive proofs is their non-monotonicity and non-communality (for singleton agent communities only). As a normal modal logic, LDiiP enjoys a standard Kripke-semantics, which we justify by invoking the Axiom of Choice on LiiP's and then construct in terms of a concrete oracle-computable function. LDiiP's agent-centric internalised notion of proof can also be viewed as a negation-complete disjunctive explicit refinement of standard KD45-belief, and yields a disjunctive but negation-incomplete explicit refinement of S4-provability.
    Comment: Expanded Introduction. Added Footnote 4. Corrected Corollaries 3 and 4. Continuation of arXiv:1208.184
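The two properties the abstract attributes to internalised proof theories can be stated schematically. The notation below is our own illustration (writing $\vdash_a$ for provability to agent $a$), not LDiiP's actual object language:

```latex
% Negation completeness (maximality): every proof goal is decided.
\forall \phi.\;\; \vdash_a \phi \quad\text{or}\quad \vdash_a \neg\phi

% Disjunction property (as in Intuitionistic Logic):
\vdash_a \phi \lor \psi \;\;\Longrightarrow\;\; \vdash_a \phi \;\text{ or }\; \vdash_a \psi
```

Together these make every internalised proof goal "definite" in the abstract's sense: either provable or disprovable to the agent.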

    Reclaiming human machine nature

    Extending and modifying his domain of life through artifact production is one of the main characteristics of humankind. From the first hominid, who used a wooden stick or a stone to extend his upper limbs and augment his gestural strength, to today's systems engineers, who use technologies to augment human cognition, perception and action, extending the capabilities of the human body remains a major issue. For more than fifty years, cybernetics, computer science and cognitive science have imposed a single reductionist model of human-machine systems: cognitive systems. Inspired by philosophy, behaviourist psychology and the information-processing metaphor, the cognitive-system paradigm requires a functional view and a functional analysis in the human-systems design process. Under that design approach, humans have been reduced to their metaphysical and functional properties in a new dualism, and the requirements of the human body have been left to physical ergonomics or "physiology". With multidisciplinary convergence, the issues of "human-machine" systems and "human artifacts" evolve. The loss of biological and social boundaries between human organisms and interactive, informational physical artifacts calls into question the current engineering methods and ergonomic design of cognitive systems. New developments of human-machine systems for intensive care, human space activities or bio-engineering systems require grounding human-systems design on a renewed epistemological framework for future models of human systems and evidence-based "bio-engineering". In that context, reclaiming human factors, the augmented human and human-machine nature is a necessity.
    Comment: Published in HCI International 2014, Heraklion, Greece (2014)

    The Information-Flow Approach to Ontology-Based Semantic Integration

    In this article we argue for the lack of formal foundations for ontology-based semantic alignment. We analyse and formalise the basic notions of semantic matching and alignment, and we situate them in the context of ontology-based alignment in open-ended and distributed environments, like the Web. We then use the mathematical notion of information flow in a distributed system to ground three hypotheses that enable semantic alignment. We draw our exemplar applications of this work from a variety of interoperability scenarios including ontology mapping, theory of semantic interoperability, progressive ontology alignment, and situated semantic alignment.
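The information-flow idea behind this line of work (in the Barwise–Seligman tradition) can be illustrated with a toy sketch: two ontologies classify a shared set of instance tokens, and candidate concept correspondences are read off from how the classifications agree on those tokens. All names and data below are hypothetical illustrations, not the article's formalism:

```python
# Toy sketch (not the article's formalism): alignment hypotheses derived
# from how two ontologies classify the same instance tokens.

def classification(tokens, types, classifies):
    """A Barwise–Seligman-style classification: the extension of each type."""
    return {t: {x for x in tokens if classifies(x, t)} for t in types}

def aligned_subsumptions(cls_a, cls_b):
    """Candidate alignments: type A corresponds to type B when every shared
    token classified under A is also classified under B."""
    return {(a, b)
            for a, ext_a in cls_a.items()
            for b, ext_b in cls_b.items()
            if ext_a and ext_a <= ext_b}

# Hypothetical example: two ontologies over the same instances.
tokens = {"rex", "felix", "tweety"}
cls_1 = classification(tokens, {"Dog", "Animal"},
                       lambda x, t: (t == "Dog" and x == "rex") or t == "Animal")
cls_2 = classification(tokens, {"Pet"},
                       lambda x, t: x in {"rex", "felix"})

print(aligned_subsumptions(cls_1, cls_2))  # {('Dog', 'Pet')}
```

The shared tokens play the role of the connections along which information flows between the two local logics; "Animal" is not mapped to "Pet" because the token "tweety" witnesses the disagreement.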

    Using logic programs to model an agent's epistemic state

    The notion of rational agency was proposed by Russell [9] as an alternative characterization of intelligent agency. Loosely speaking, an agent is said to be rational if it performs the right actions according to the information it possesses and the goals it wants to achieve. Unfortunately, the enterprise of constructing a rational agent is a rather complex task. Although in the last few years there has been an intense flowering of interest in the subject, it is still in its early stages: several issues remain overlooked or are addressed under unrealistic assumptions. As stated by Pollock [5], a rational agent should have models of itself and its surroundings, since it must be able to draw conclusions from this knowledge that compose its set of beliefs. Traditional approaches rely on multi-modal logics to represent the agent's epistemic state [7, 1]. Given the expressive power of these formalisms, their use yields proper theoretical models. Nevertheless, the advantages of these specifications tend to be lost in the transition towards practical systems: there is only a tenuous relation between the implementations based on these logics and their theoretical foundations [8]. Modal logic systems suffer from a number of drawbacks, notably the well-known logical omniscience problem [10]. This problem arises as a by-product of the necessitation rule and the K axiom, present in any normal modal system. Together, these rules imply two unrealistic conditions: an agent using such a system must know all valid formulas, and its beliefs must be closed under logical consequence. These properties are too strong for a resource-bounded reasoner to achieve. Therefore, the traditional modal-logic approach is not suitable for representing practical believers [11]. We intend to use logic programs as an alternative representation of the agent's epistemic state. This formalization avoids the aforementioned problems of modal logics, and admits a seamless transition between theory and practice. In the next section we detail our model and highlight its advantages. Section 3 then presents some conclusions and reports on forthcoming work.
    Track: Aspectos teóricos de inteligencia artificial. Red de Universidades con Carreras en Informática (RedUNCI)
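The contrast the abstract draws can be sketched concretely. In a logic-program representation, an agent's beliefs are only the atoms actually derivable from its facts and rules (the least fixpoint of the immediate-consequence operator), so the belief set is not closed under arbitrary classical consequence and logical omniscience does not arise. The program below is our own minimal propositional illustration, not the authors' system:

```python
# Minimal sketch (our illustration, not the paper's system): an agent's
# epistemic state as facts plus definite rules (head, [body atoms]).
# Beliefs are exactly the atoms derivable by forward chaining, i.e. the
# least fixpoint of the T_P operator -- no logical omniscience.

def beliefs(facts, rules):
    """Iterate the rules to a fixpoint over the initial facts."""
    state = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in state and all(b in state for b in body):
                state.add(head)
                changed = True
    return state

# Hypothetical agent: it perceives rain and has two simple rules.
rules = [("wet_street", ["rain"]),
         ("slippery", ["wet_street"])]

print(sorted(beliefs({"rain"}, rules)))  # ['rain', 'slippery', 'wet_street']
```

A resource-bounded reasoner can compute this belief set in time proportional to the program size, whereas closure under full logical consequence is not even decidable in general.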

    Simulation and statistical model-checking of logic-based multi-agent system models

    This thesis presents SALMA (Simulation and Analysis of Logic-Based Multi-Agent Models), a new approach for simulation and statistical model checking of multi-agent system models. Statistical model checking is a relatively new branch of model-based approximate verification methods that help to overcome the well-known scalability problems of exact model checking. In contrast to existing solutions, SALMA specifies the mechanisms of the simulated system by means of logical axioms based upon the well-established situation calculus. Leveraging the resulting first-order logic structure of the system model, the simulation is coupled with a statistical model-checker that uses a first-order variant of time-bounded linear temporal logic (LTL) for describing properties. This is combined with a procedural and process-based language for describing agent behavior. Together, these parts create a very expressive framework for modeling and verification that allows direct fine-grained reasoning about the agents' interaction with each other and with their (physical) environment. SALMA extends the classical situation calculus and linear temporal logic (LTL) with means to address the specific requirements of multi-agent simulation models. In particular, cyber-physical domains are considered where the agents interact with their physical environment. Among other things, the thesis describes a generic situation calculus axiomatization that encompasses sensing and information transfer in multi-agent systems, for instance sensor measurements or inter-agent messages. The proposed model explicitly accounts for real-time constraints and stochastic effects that are inevitable in cyber-physical systems. In order to make SALMA's statistical model checking facilities usable also for more complex problems, a mechanism for the efficient on-the-fly evaluation of first-order LTL properties was developed. 
    In particular, the presented algorithm uses an interval-based representation of the formula evaluation state together with several other optimization techniques to avoid unnecessary computation. Altogether, the goal of this thesis was to create an approach for simulation and statistical model checking of multi-agent systems that builds upon well-proven logical and statistical foundations, but at the same time takes a pragmatic software-engineering perspective that considers factors like usability, scalability, and extensibility. In fact, experience gained during several small to mid-sized experiments presented in this thesis suggests that the SALMA approach is able to live up to these expectations.
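The core statistical-model-checking loop the abstract describes can be sketched generically: sample simulation traces, check a time-bounded property on each, and estimate the satisfaction probability with a sample count chosen from a concentration bound. This is a hedged illustration of the general technique (here with a Hoeffding bound and a toy random-walk model), not SALMA's actual engine or API:

```python
import math
import random

# Generic statistical model checking sketch (not SALMA's implementation):
# estimate P(trace satisfies a time-bounded property) by Monte Carlo sampling.

def hoeffding_samples(epsilon, delta):
    """Runs needed so the estimate is within epsilon of the true probability
    with confidence at least 1 - delta (Hoeffding's inequality)."""
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

def estimate(simulate_trace, holds, epsilon=0.05, delta=0.01):
    """Monte Carlo estimate of the probability that the property holds."""
    n = hoeffding_samples(epsilon, delta)
    return sum(holds(simulate_trace()) for _ in range(n)) / n

# Toy model: a random walk; bounded-eventually property F[0,20](pos >= 5).
def simulate_trace(steps=20):
    pos, trace = 0, []
    for _ in range(steps):
        pos += random.choice((1, -1))
        trace.append(pos)
    return trace

def eventually_reaches_5(trace):
    return any(p >= 5 for p in trace)

p_hat = estimate(simulate_trace, eventually_reaches_5)
print(round(p_hat, 2))
```

The approximation is what makes the method scale where exact model checking does not: cost grows with the required precision and confidence, not with the state space of the model.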
