16 research outputs found
Exploring the hierarchical structure of the μ-calculus and arithmetic: a Gale–Stewart games perspective
Tohoku University, Keita Yokoyama
In search of a “true” logic of knowledge: the nonmonotonic perspective
Abstract: Modal logics are currently widely accepted as a suitable tool for knowledge representation, and the question of which logics are better suited for representing knowledge is of particular importance. Usually, some list of axioms is given, and arguments are presented to justify that the suggested axioms agree with intuition. The question of why the suggested axioms describe all the desired properties of knowledge is answered only partially, by showing that the most obvious and popular additional axioms would violate that intuition. We suggest the general paradigm of maximal logics and demonstrate how it can work for nonmonotonic modal logics. Technically, we prove that each of the modal logics KD45, SW5, S4F and S4.2 is the strongest modal logic among the logics generating the same nonmonotonic logic. These logics have already found important applications in knowledge representation, and the obtained results contribute to the explanation of this fact.
Logic and Commonsense Reasoning: Lecture Notes
Master. These are the lecture notes of a course on logic and commonsense reasoning given to Master's students in philosophy at the University of Rennes 1. N.B.: Some parts of these lecture notes are largely based on, or copied verbatim from, publications by other authors. Where this is the case, those parts are acknowledged at the end of each chapter in the "Further reading" section.
Multi-objective scheduling for real-time data warehouses
The issue of write-read contention is one of the most prevalent problems when deploying real-time data warehouses. With increasing load, updates are increasingly delayed, and previously fast queries tend to be slowed down considerably. However, depending on the user requirements, we can improve either the response time or the data quality by scheduling the queries and updates appropriately. If both criteria are to be considered simultaneously, we face a so-called multi-objective optimization problem. We transformed this problem into a knapsack problem with additional inequalities and solved it efficiently. Based on this solution, we developed a scheduling approach that provides the optimal schedule with regard to the user requirements at any given point in time. We evaluated our approach in an extensive experimental study, comparing it with the optimal scheduling policies for each individual optimization objective.
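The knapsack reduction mentioned in this abstract can be illustrated with a minimal sketch. This is not the paper's actual formulation (its additional inequalities are omitted): it is a plain 0/1 knapsack dynamic program over a hypothetical workload, where each query or update has an assumed processing-time cost and a benefit score standing in for response-time or data-quality gain.

```python
def knapsack_schedule(tasks, capacity):
    """Select a subset of tasks (queries/updates) maximizing total benefit
    within a processing-time budget -- classic 0/1 knapsack DP.
    tasks: list of (name, time_cost, benefit); capacity: integer budget."""
    # dp[c] = best total benefit achievable with time budget c
    dp = [0] * (capacity + 1)
    choice = [[False] * (capacity + 1) for _ in tasks]
    for i, (_, cost, benefit) in enumerate(tasks):
        # Iterate budgets downward so each task is used at most once
        for c in range(capacity, cost - 1, -1):
            if dp[c - cost] + benefit > dp[c]:
                dp[c] = dp[c - cost] + benefit
                choice[i][c] = True
    # Backtrack to recover which tasks were scheduled
    selected, c = [], capacity
    for i in range(len(tasks) - 1, -1, -1):
        if choice[i][c]:
            selected.append(tasks[i][0])
            c -= tasks[i][1]
    return dp[capacity], list(reversed(selected))

# Hypothetical workload: (task name, time units, benefit score)
tasks = [("q1", 3, 10), ("u1", 2, 7), ("q2", 4, 12), ("u2", 1, 3)]
best, picked = knapsack_schedule(tasks, 6)
# best == 20; picked == ["q1", "u1", "u2"]
```

In a real-time warehouse setting, the benefit values would be recomputed as user requirements and data staleness change, and the knapsack would be re-solved to obtain the schedule for the next time slice.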
Action Systems, Determinism and the Development of Secure Systems
This thesis addresses issues arising in the specification and development of secure systems, focusing in particular on aspects of confidentiality. Various confidentiality properties based on limiting the allowed flows of information in a system have previously been proposed. These definitions are reviewed here, and some of the problems inherent in their use are outlined. Recent work by Roscoe [106] has provided information flow definitions based on restricting the allowed nondeterminism within the system. These properties are described in detail, with a range of examples provided to illustrate their use. This thesis is concerned with providing a new, pragmatic approach to the development of secure systems. Action systems are chosen as a notation which incorporates both a direct representation of system state, useful for effective system modelling, and the succession of events in a system, essential for representing information flow properties. A definition of nondeterminism and formulations of the deterministic security properties are developed for action systems. These are shown to correspond to the original CSP event-based definitions. The emphasis of this work is on the practical application of theoretical results. This is reflected in the case studies, in which the preceding work is applied to realistic development situations. This allows the strengths and weaknesses of both the deterministic security conditions and the use of action systems to be assessed. The first study investigates security constraints applied to a distributed message-passing system. Ways of specifying security conditions, and the effects of including them at different levels, are explored. The second case study follows through the specification and refinement of a distributed security kernel. A technique for the simplification of security proofs is introduced.
Design Recovery and Data Mining: A Methodology That Identifies Data-Cohesive Subsystems Based on Mining Association Rules.
Software maintenance is both a technical and an economic concern for organizations. Large software systems are difficult to maintain due to their intrinsic complexity, and their maintenance consumes between 50% and 90% of the cost of their complete life-cycle. An essential step in maintenance is reverse engineering, which focuses on understanding the system. This understanding is critical to avoid the generation of undesired side effects during maintenance. The objective of this research is to investigate the potential of applying data mining to reverse engineering. This research was motivated by the following: (1) data mining can process large volumes of information, (2) data mining can elicit meaningful information without previous knowledge of the domain, (3) data mining can extract novel non-trivial relationships from a data set, and (4) data mining is automatable. These data mining features are used to help address the problem of understanding large legacy systems. This research produced a general method for applying data mining to reverse engineering, and a methodology for design recovery, called Identification of Subsystems based on Associations (ISA). ISA uses association rules mined from a database view of the subject system to guide a clustering process that produces a data-cohesive hierarchical subsystem decomposition of the system. ISA promotes object-oriented principles because each identified subsystem consists of a set of data repositories and the code (i.e., programs) that manipulates them. ISA is an automatic multi-step process, which uses the source code of the subject system and multiple parameters as its input. ISA includes two representation models (i.e., text-based and graphic-based) to present the resulting subsystem decomposition. The automated environment RE-ISA implements the ISA methodology. RE-ISA was used to produce the subsystem decomposition of real-world software systems. Results show that ISA can automatically produce data-cohesive subsystem decompositions without previous knowledge of the subject system, and that ISA always generates the same results when the same parameters are used. This research provides evidence that data mining is a beneficial tool for reverse engineering and provides the foundation for defining methodologies that combine data mining and software maintenance.
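The core idea of a data-cohesive decomposition can be sketched in a few lines. This is not the ISA methodology itself: instead of mined association rules, the sketch uses plain co-access counts as a stand-in, merging (via union-find) any two hypothetical programs that access enough common data repositories into the same subsystem.

```python
from itertools import combinations

def data_cohesive_groups(accesses, min_shared=2):
    """Greedy sketch of data-cohesive subsystem identification:
    programs accessing at least `min_shared` common data repositories
    are merged into one subsystem (union-find).
    accesses: dict mapping program name -> set of repository names."""
    parent = {p: p for p in accesses}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    for a, b in combinations(accesses, 2):
        # Co-access overlap stands in for a mined association rule here
        if len(accesses[a] & accesses[b]) >= min_shared:
            parent[find(a)] = find(b)

    groups = {}
    for p in accesses:
        groups.setdefault(find(p), set()).add(p)
    return [sorted(g) for g in groups.values()]

# Hypothetical legacy system: programs and the tables they read/write
accesses = {
    "payroll":   {"employee", "salary"},
    "hr_report": {"employee", "salary", "dept"},
    "inventory": {"stock", "supplier"},
    "ordering":  {"stock", "supplier", "price"},
}
subsystems = data_cohesive_groups(accesses)
# two subsystems: {payroll, hr_report} and {inventory, ordering}
```

Each resulting group pairs data repositories with the programs that manipulate them, which is the object-oriented flavor the abstract attributes to ISA's subsystems.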