
    Reflective inductive inference of recursive functions

    In this paper, we investigate reflective inductive inference of recursive functions. A reflective IIM (inductive inference machine) is a learning machine that is additionally able to assess its own competence. First, we formalize reflective learning from arbitrary, and from canonical, example sequences. Here, we arrive at four different types of reflection: reflection in the limit, optimistic, pessimistic, and exact reflection. Then, we compare the learning power of reflective IIMs with each other, as well as with that of standard IIMs, for learning in the limit, for consistent learning of three different types, and for finite learning.
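    As a toy illustration of these notions (not the paper's formal definitions), the following Python sketch implements identification by enumeration for a small class of linear functions, together with a crude self-assessment flag in the spirit of reflection: the machine reports itself incompetent once no hypothesis in its class fits the data.

```python
# Toy sketch (not the paper's formal model): an enumerative learner for the
# class of linear functions f(x) = a*x + b with small integer coefficients,
# together with a crude self-assessment ("reflection") flag.

from itertools import product

# Hypothesis class: all linear functions with coefficients in [-5, 5].
HYPOTHESES = list(product(range(-5, 6), repeat=2))

def consistent(h, examples):
    """Check that hypothesis h = (a, b) agrees with all examples (x, y)."""
    a, b = h
    return all(a * x + b == y for x, y in examples)

def iim_step(examples):
    """Identification by enumeration: output the first consistent hypothesis.

    The reflection flag reports whether the machine still deems itself
    competent: if no hypothesis in the class fits the data seen so far,
    the target function lies outside the learnable class.
    """
    for h in HYPOTHESES:
        if consistent(h, examples):
            return h, True          # hypothesis, "I can learn this"
    return None, False              # no guess, "outside my competence"

# Feed the graph of f(x) = 3x + 1 one example at a time; the learner
# converges (learning in the limit) and keeps reporting competence.
stream = [(x, 3 * x + 1) for x in range(4)]
for n in range(1, len(stream) + 1):
    print(iim_step(stream[:n]))

# A target outside the class, e.g. f(x) = x**2, eventually triggers
# the negative reflection verdict.
print(iim_step([(x, x * x) for x in range(4)]))
```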

    What to do about interlanguage


    Reasoning in Description Logic Ontologies for Privacy Management

    A rise in the number of ontologies that are integrated and distributed across numerous application systems may allow users to access the ontologies with different privileges and purposes. In this situation, preserving confidential information from possible unauthorized disclosure becomes a critical requirement. For instance, in the clinical sciences, unauthorized disclosure of medical information threatens not only the system but also, most importantly, the patients. Motivated by this situation, this thesis initially investigates a privacy problem, called the identity problem, which asks whether the identity of (anonymous) objects stored in Description Logic ontologies can be revealed. Then, we consider this problem in the context of role-based access control to ontologies and extend it to the problem of asking whether the identity belongs to a set of known individuals of cardinality smaller than a number k. If some confidential information about persons, such as their identity, their relationships, or their other properties, can be deduced from an ontology, implying that some privacy policy is not fulfilled, then one needs to repair this ontology such that the modified one complies with the policies and preserves as much information from the original ontology as possible. The repair mechanism we provide is called gentle repair and is performed via axiom weakening instead of the axiom deletion commonly used in classical approaches to ontology repair. However, policy compliance itself is not enough if a possible attacker can obtain relevant information from other sources, which together with the modified ontology still violates the privacy policies. A safety property is proposed to alleviate this issue, and we investigate it in the context of privacy-preserving ontology publishing. Inference procedures to solve these privacy problems, together with investigations of their complexity as well as the worst-case complexity of the problems themselves, form the main contributions of this thesis.
    Contents:
    1. Introduction
       1.1 Description Logics
       1.2 Detecting Privacy Breaches in Information Systems
       1.3 Repairing Information Systems
       1.4 Privacy-Preserving Data Publishing
       1.5 Outline and Contribution of the Thesis
    2. Preliminaries
       2.1 Description Logic ALC
           2.1.1 Reasoning in ALC Ontologies
           2.1.2 Relationship with First-Order Logic
           2.1.3 Fragments of ALC
       2.2 Description Logic EL
       2.3 The Complexity of Reasoning Problems in DLs
    3. The Identity Problem and Its Variants in Description Logic Ontologies
       3.1 The Identity Problem
           3.1.1 Description Logics with Equality Power
           3.1.2 The Complexity of the Identity Problem
       3.2 The View-Based Identity Problem
       3.3 The k-Hiding Problem
           3.3.1 Upper Bounds
           3.3.2 Lower Bound
    4. Repairing Description Logic Ontologies
       4.1 Repairing Ontologies
       4.2 Gentle Repairs
       4.3 Weakening Relations
       4.4 Weakening Relations for EL Axioms
           4.4.1 Generalizing the Right-Hand Sides of GCIs
           4.4.2 Syntactic Generalizations
       4.5 Weakening Relations for ALC Axioms
           4.5.1 Generalizations and Specializations in ALC w.r.t. Role Depth
           4.5.2 Syntactical Generalizations and Specializations in ALC
    5. Privacy-Preserving Ontology Publishing for EL Instance Stores
       5.1 Formalizing Sensitive Information in EL Instance Stores
       5.2 Computing Optimal Compliant Generalizations
       5.3 Computing Optimal Safe^{\exists} Generalizations
       5.4 Deciding Optimality^{\exists} in EL Instance Stores
       5.5 Characterizing Safety^{\forall}
       5.6 Optimal P-safe^{\forall} Generalizations
       5.7 Characterizing Safety^{\forall\exists} and Optimality^{\forall\exists}
    6. Privacy-Preserving Ontology Publishing for EL ABoxes
       6.1 Logical Entailments in EL ABoxes with Anonymous Individuals
       6.2 Anonymizing EL ABoxes
       6.3 Formalizing Sensitive Information in EL ABoxes
       6.4 Compliance and Safety for EL ABoxes
       6.5 Optimal Anonymizers
    7. Conclusion
       7.1 Main Results
       7.2 Future Work
    Bibliography
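    To give a rough intuition for the k-hiding problem (a deliberately simplified illustration; the thesis works with Description Logic reasoning, whereas here the ontology is reduced to plain sets of entailed properties), one can ask how many known individuals an anonymous object could still be:

```python
# Illustrative sketch only: the thesis studies these problems with Description
# Logic reasoning; here the "ontology" is reduced to plain sets of entailed
# properties so the k-hiding idea can be shown in a few lines.

# Properties entailed for each known individual.
known = {
    "alice": {"Patient", "Diabetic"},
    "bob":   {"Patient", "Diabetic"},
    "carol": {"Patient"},
    "dave":  {"Doctor"},
}

def candidates(anon_props, known):
    """Known individuals whose entailed properties subsume those observed
    for the anonymous object, i.e. who the anonymous object could be."""
    return {name for name, props in known.items() if anon_props <= props}

def k_hidden(anon_props, known, k):
    """The anonymous object is k-hidden if it cannot be narrowed down to
    fewer than k known individuals."""
    return len(candidates(anon_props, known)) >= k

anon = {"Patient", "Diabetic"}          # what the ontology reveals
print(candidates(anon, known))          # {'alice', 'bob'}
print(k_hidden(anon, known, k=2))       # True: 2-hidden
print(k_hidden(anon, known, k=3))       # False: identity narrowed below 3
```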

    Ontology Reasoning with Deep Neural Networks

    The ability to conduct logical reasoning is a fundamental aspect of intelligent behavior, and thus an important problem along the way to human-level artificial intelligence. Traditionally, symbolic logic-based methods from the field of knowledge representation and reasoning have been used to equip agents with capabilities that resemble human logical reasoning qualities. More recently, however, there has been an increasing interest in using machine learning rather than symbolic logic-based formalisms to tackle these tasks. In this paper, we employ state-of-the-art methods for training deep neural networks to devise a novel model that is able to learn how to effectively perform logical reasoning in the form of basic ontology reasoning. This is an important and at the same time very natural logical reasoning task, which is why the presented approach is applicable to a plethora of real-world problems. We present the outcomes of several experiments, which show that our model learned to perform precise ontology reasoning on diverse and challenging tasks. Furthermore, it turned out that the suggested approach suffers much less from the obstacles that hinder logic-based symbolic reasoning and, at the same time, is surprisingly plausible from a biological point of view.
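    A minimal sketch of the general idea, assuming nothing about the paper's actual architecture: embed ontology classes and train a small network to predict whether a subsumption C ⊑ D is entailed. The toy hierarchy and hyperparameters below are invented for illustration.

```python
# Minimal sketch (not the paper's architecture): a neural network that learns
# to score candidate entailments C ⊑ D over a toy class hierarchy. Classes
# are embedded and an MLP predicts whether the subsumption holds.

import torch
import torch.nn as nn

classes = ["Animal", "Mammal", "Dog", "Cat", "Plant"]
idx = {c: i for i, c in enumerate(classes)}

# Toy training data: (sub, super, label). 1 = subsumption holds.
triples = [
    ("Dog", "Mammal", 1), ("Cat", "Mammal", 1), ("Mammal", "Animal", 1),
    ("Dog", "Animal", 1), ("Plant", "Animal", 0), ("Animal", "Dog", 0),
    ("Cat", "Plant", 0),
]

class SubsumptionScorer(nn.Module):
    def __init__(self, n_classes, dim=16):
        super().__init__()
        self.emb = nn.Embedding(n_classes, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 32), nn.ReLU(),
                                 nn.Linear(32, 1))

    def forward(self, sub, sup):
        pair = torch.cat([self.emb(sub), self.emb(sup)], dim=-1)
        return self.mlp(pair).squeeze(-1)   # logit for "entailed"

model = SubsumptionScorer(len(classes))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

subs = torch.tensor([idx[s] for s, _, _ in triples])
sups = torch.tensor([idx[t] for _, t, _ in triples])
labels = torch.tensor([float(l) for _, _, l in triples])

for _ in range(500):                        # fit the toy data
    opt.zero_grad()
    loss = loss_fn(model(subs, sups), labels)
    loss.backward()
    opt.step()

# Query a pair: how strongly does the model believe Cat ⊑ Animal?
with torch.no_grad():
    logit = model(torch.tensor([idx["Cat"]]), torch.tensor([idx["Animal"]]))
    print(torch.sigmoid(logit).item())
```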

    Analyzing the Psychological and Social Contents of Evidence—Experimental Comparison between Guessing, Naturalistic Observation, and Systematic Analysis

    To improve inferences about psychological and social evidence contained in pictures and texts, a five-step algorithm, Systematic Analysis (SA), was devised. It combines basic principles of interpretation in forensic science, providing a comprehensive record of signs of evidence. Criminal justice professionals evaluated the usefulness of SA. Effects of applying SA were tested experimentally with 41 subjects, compared to 39 subjects observing naturally (naturalistic observation group) and 47 subjects guessing intuitively (intuitive guessing group). After being trained in SA, prosecutors and police detectives (N = 217) rated its usefulness for criminal investigation as good. Subjects (graduate students) using SA found significantly more details about four test cases than those observing naturally (Cohen's d = 0.58). Subjects who learned SA well abducted significantly better hypotheses than those who observed naturally or who guessed intuitively. The internal validity of SA was α = 0.74. Applying SA improved observation significantly and reduced confirmation bias.
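    For reference, Cohen's d for two independent groups is the standardized mean difference computed with the pooled standard deviation; the sketch below uses made-up group statistics, not values from the study.

```python
# For reference: Cohen's d for two independent groups, using the pooled
# standard deviation. The group means and SDs below are made-up placeholders,
# not values from the study.

import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical example: SA group (n=41) vs. naturalistic observation (n=39).
print(round(cohens_d(mean1=12.4, sd1=3.1, n1=41,
                     mean2=10.6, sd2=3.1, n2=39), 2))
```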

    DBBRBF: Convalesce optimization for software defect prediction problem using hybrid distribution base balance instance selection and radial basis function classifier

    With the rapid development of software engineering, software is becoming an indispensable part of human life, which demands that software be highly reliable. This reliability can be checked by efficient software testing methods that use historical software prediction data to develop a quality software system. Machine learning plays a vital role in the effective prediction of defect-prone modules in real-life software. Software defect prediction data suffer from a class imbalance problem, with a low ratio of the defective class to the non-defective class, which degrades classification performance and calls for an efficient machine learning classification technique. To alleviate this problem, this paper introduces a novel hybrid instance-based classification model (DBBRBF) that combines distribution-base-balance instance selection with a radial basis function neural network classifier to obtain the best prediction in comparison to existing research. Class-imbalanced data sets from NASA, PROMISE, and SoftLab were used for the experimental analysis. The experimental results in terms of accuracy, F-measure, AUC, recall, precision, and balance show the effectiveness of the proposed approach. Finally, statistical significance tests are carried out to understand the suitability of the proposed model.
    Comment: 32 pages, 24 tables, 8 figures
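    A generic sketch of the two ingredients (not the authors' DBBRBF implementation): balance the training set by undersampling the majority class, then classify with a simple radial basis function network built from KMeans centers. The dataset, gamma, and number of centers are illustrative choices.

```python
# Generic sketch, not the authors' DBBRBF: balance an imbalanced defect data
# set by undersampling the majority class, then classify with a simple radial
# basis function network (KMeans centers + Gaussian features + logistic head).

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a defect data set: roughly 5% defective class.
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# 1) Crude distribution balancing: undersample the majority class.
maj, mino = np.where(y_tr == 0)[0], np.where(y_tr == 1)[0]
keep = rng.choice(maj, size=len(mino), replace=False)
sel = np.concatenate([keep, mino])
Xb, yb = X_tr[sel], y_tr[sel]

# 2) RBF network: Gaussian activations around KMeans centers.
centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(Xb).cluster_centers_

def rbf_features(X, centers, gamma=0.1):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

head = LogisticRegression(max_iter=1000).fit(rbf_features(Xb, centers), yb)
pred = head.predict(rbf_features(X_te, centers))
print("F1 on defective class:", round(f1_score(y_te, pred), 3))
```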

    A Novel Developed Supervised Machine Learning System For Classification And Prediction of Software Faults Using NASA Dataset

    The software systems of modern computers are extremely complex and versatile. Therefore, it is essential to regularly detect and correct software design faults. In order to devote resources effectively to the creation of trustworthy software, software companies are increasingly engaging in the practice of predicting fault-prone modules in advance of testing. These software fault prediction methods rely on the thoroughness with which faults and related code from prior software versions have been retrieved. Time, energy, and money are all saved as a result, and satisfying its clientele greatly increases a company's success and bottom line. Numerous academics have poured effort into this area throughout the years in an attempt to raise the bar for all software. Nowadays, the most often used approaches in this field are those based on machine learning (ML). The field of ML seeks to perfect software capable of evolving and adapting in response to fresh data. This paper introduces a fresh approach to ML by bringing together a number of different expert systems. In order to reach agreement on which aspects of a software system need to be tested, the proposed multi-classifier model pools the strengths of the most effective classifiers. Several top-performing classifiers for defect prediction are put through their paces in an experimental evaluation. We test our method on 16 publicly available datasets from the NASA Metrics Data Program (MDP) repository hosted at the PROMISE repository. Metrics such as the confusion matrix, recall, precision, and recognition accuracy are evaluated and contrasted with existing schemes in an analysis performed with the help of Python simulation tools. The experimental outcomes demonstrate that by combining LGBM, XGBoost, and voting classifiers in a multi-classifier approach, we can significantly improve software fault prediction performance. The results of the investigation show that the suggested method leads to better practical outcomes in the prediction of software faults.
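    A hedged sketch of such a multi-classifier setup: soft voting over LightGBM, XGBoost, and a baseline learner. The synthetic data stands in for the NASA MDP sets; the estimators and their settings are illustrative, not the paper's configuration.

```python
# Hedged sketch of the ensemble idea: soft voting over LightGBM, XGBoost, and
# a baseline, on stand-in data (the paper uses NASA MDP sets from PROMISE).

from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1500, weights=[0.8], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("lgbm", LGBMClassifier(random_state=1)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=1)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",   # average the predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```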

    The Ascent, 1964 December 18

    Student newspaper of Daemen College (formerly Rosary Hill College)

    Techniques for text classification: Literature review and current trends

    Automated classification of text into predefined categories has always been considered a vital method for managing and processing the vast number of documents in digital form that are widespread and continuously increasing. This kind of web information, popularly known as digital/electronic information, takes the form of documents, conference material, publications, journals, editorials, web pages, e-mail, etc. People largely access information from these online sources rather than being limited to archaic paper sources like books, magazines, newspapers, etc. But the main problem is that this enormous body of information lacks organization, which makes it difficult to manage. Text classification is recognized as one of the key techniques used for organizing such digital data. In this paper, we study the existing work in the area of text classification, which allows a fair evaluation of the progress made in this field to date. We have investigated the papers to the best of our knowledge and have tried to summarize all existing information in a comprehensive and succinct manner. The studies are summarized in tabular form by publication year, considering numerous key perspectives. The main emphasis is laid on the various steps involved in the text classification process, viz. document representation methods, feature selection methods, data mining methods, and the evaluation technique used by each study to report results on a particular dataset.
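    The steps the survey emphasizes map naturally onto a standard pipeline. The sketch below (an illustration, not taken from any surveyed study) chains document representation via TF-IDF, chi-squared feature selection, a linear classifier, and evaluation.

```python
# Illustrative pipeline covering the steps the survey emphasizes: document
# representation (TF-IDF), feature selection (chi-squared), a classifier,
# and evaluation. The data set and all settings are example choices.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

train = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
test = fetch_20newsgroups(subset="test", categories=["sci.space", "rec.autos"])

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),  # representation
    ("select", SelectKBest(chi2, k=2000)),             # feature selection
    ("clf", LinearSVC()),                              # classification
])
pipeline.fit(train.data, train.target)
print(classification_report(test.target, pipeline.predict(test.data),
                            target_names=test.target_names))
```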

    Usage-Based Storyboarding for Web Information Systems

    On a high level of abstraction, a Web Information System (WIS) can be described by a storyboard, which in an abstract way specifies who will be using the system, in which way, and for which goals. While the syntax and semantics of storyboarding have been well explored, its pragmatics has not. This paper contributes a first step towards closing this gap by analysing the usage of WISs. Starting from a classification of intentions, we first present life cases, which capture observations of user behaviour in reality. We discuss the facets of life cases and present a semi-formal way to document them. Life cases can be used in a pragmatic way to specify a story space, which is an important component of a storyboard. In a second step, we complement life cases with user models, which are specified by the various facets of the actor profiles needed for them. We analyse actor profiles and present a semi-formal way to document them. We outline how these profiles can be used to specify actors, another important component of a storyboard. Finally, we analyse contexts and the way they impact life cases, user models, and the storyboard.
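    One semi-formal way to record life cases and actor profiles is as plain data records; the following sketch is a hypothetical illustration, and all field names are assumptions rather than the paper's formalism.

```python
# Hypothetical illustration only: one way to document life cases and actor
# profiles semi-formally as data records. Field names are assumptions, not
# the paper's formalism.

from dataclasses import dataclass, field

@dataclass
class LifeCase:
    """An observation of user behaviour in reality."""
    name: str
    intention: str                      # goal the user pursues
    tasks: list[str] = field(default_factory=list)
    context: str = ""                   # where/when the case arises

@dataclass
class ActorProfile:
    """Facets of a user model backing an actor in the storyboard."""
    role: str
    goals: list[str] = field(default_factory=list)
    abilities: list[str] = field(default_factory=list)
    life_cases: list[LifeCase] = field(default_factory=list)

booking = LifeCase(
    name="plan a trip",
    intention="find and book a suitable hotel",
    tasks=["search by destination", "compare offers", "book a room"],
    context="private travel planning at home",
)
traveller = ActorProfile(role="traveller",
                         goals=["book quickly", "stay within budget"],
                         abilities=["casual web user"],
                         life_cases=[booking])
print(traveller)
```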