1,157 research outputs found

    ARTIFICIAL INTELLIGENCE DIALECTS OF THE BAYESIAN BELIEF REVISION LANGUAGE

    Rule-based expert systems must deal with uncertain data, subjective expert opinions, and inaccurate decision rules. Computer scientists and psychologists have proposed and implemented a number of belief languages that are widely used in applied systems, and their normative validity is clearly an important question on both practical and theoretical grounds. Several well-known belief languages are reviewed, and both previous work and new insights into their Bayesian interpretations are presented. In particular, the authors focus on three alternative belief-update models: the certainty factors calculus, Dempster-Shafer simple support functions, and the descriptive contrast/inertia model. Important "dialects" of these languages are shown to be isomorphic to each other and to a special case of Bayesian inference. Parts of this analysis were carried out by other authors; these results were extended and consolidated using an analytic technique designed to study the kinship of belief languages in general.
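    A minimal sketch of the kind of isomorphism the abstract describes, assuming Heckerman's mapping CF = (λ − 1)/λ for λ ≥ 1, where λ is a Bayesian likelihood ratio (function names here are illustrative, not from the paper): MYCIN's parallel combination of two positive certainty factors coincides with multiplying the corresponding likelihood ratios.

```python
# Sketch of one Bayesian "dialect" of the certainty-factor calculus.
# Assumes Heckerman's mapping CF = (lam - 1) / lam for lam >= 1, where lam is
# the likelihood ratio P(evidence | H) / P(evidence | not H).

def cf_combine(cf1: float, cf2: float) -> float:
    """MYCIN's parallel combination for two positive certainty factors."""
    return cf1 + cf2 * (1.0 - cf1)

def cf_to_lambda(cf: float) -> float:
    """Invert CF = (lam - 1) / lam, i.e. lam = 1 / (1 - CF), for 0 <= CF < 1."""
    return 1.0 / (1.0 - cf)

def lambda_to_cf(lam: float) -> float:
    return (lam - 1.0) / lam

cf1, cf2 = 0.6, 0.3
# Combining CFs directly ...
direct = cf_combine(cf1, cf2)
# ... agrees with multiplying the corresponding likelihood ratios.
via_bayes = lambda_to_cf(cf_to_lambda(cf1) * cf_to_lambda(cf2))
assert abs(direct - via_bayes) < 1e-12
print(direct)  # 0.72 either way
```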

    Aligning the Qualitative Comparative Analysis (QCA) counterfactual approach with the practice of retroduction: some preliminary insights

    This study offers fresh ontological insights by examining generative causality through the Qualitative Comparative Analysis (QCA) counterfactual lens, in conjunction with Critical Realism and the practice of retroduction. Specifically, it claims that Information Systems (IS) researchers could retroduce generative mechanisms by leveraging the QCA counterfactual approach to causation, because retroduction is about conjecturing hypothetical mechanisms that would generate the outcome of interest in a counterfactual fashion. Drawing on an example of typological theorising, this study calls for a renewed effort in the use of retroduction in the study of IS phenomena. In addition, this study sheds new light on the overarching approach for conducting Critical Realist (case study) research. A number of theoretical, methodological, and practical implications are discussed.
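    As a toy illustration of the counterfactual machinery QCA relies on (the conditions, cases, and outcomes below are invented for illustration, not taken from the study), enumerating the truth table over binary conditions exposes the unobserved configurations, the "logical remainders", about which the researcher must reason counterfactually when retroducing a mechanism.

```python
# Toy QCA counterfactual step: enumerate the truth table over binary
# conditions and flag configurations with no observed cases ("logical
# remainders"). Conditions and cases are invented for illustration.
from itertools import product

conditions = ("A", "B", "C")   # hypothetical causal conditions
observed = {                   # configuration -> observed outcome
    (1, 1, 0): 1,
    (1, 0, 0): 1,
    (0, 1, 1): 0,
    (0, 0, 0): 0,
}

for config in product((0, 1), repeat=len(conditions)):
    if config in observed:
        print(config, "-> outcome", observed[config])
    else:
        # A logical remainder: no empirical case exists, so any claim about
        # its outcome is a counterfactual conjecture -- the locus of
        # retroduction in the study's terms.
        print(config, "-> logical remainder (counterfactual)")
```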

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the semiconductor landscape of the last 15 years has established power as a first-class design concern. As a result, the computing-systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
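    For a concrete taste of the software-level techniques such surveys classify, loop perforation is a classic approximation that trades accuracy for time by executing only a sampled subset of loop iterations. The sketch below is illustrative of the general idea, not code from the survey.

```python
# Illustrative loop perforation: execute every k-th iteration of a reduction
# and rescale, trading bounded accuracy loss for a ~k-fold drop in work.
# Function and parameter names are illustrative.

def perforated_mean(xs: list[float], k: int = 4) -> float:
    """Approximate the mean using only every k-th element."""
    sample = xs[::k]
    return sum(sample) / len(sample)

data = [float(i % 97) for i in range(100_000)]
exact = sum(data) / len(data)
approx = perforated_mean(data, k=4)
print(f"exact={exact:.3f}  approx={approx:.3f}  rel.err={(approx - exact) / exact:.2%}")
```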

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    Advances and Applications of DSmT for Information Fusion

    This book is devoted to an emerging branch of Information Fusion based on a new approach to modelling the fusion problem when the information provided by the sources is both uncertain and (highly) conflicting. This approach, known in the literature as DSmT (Dezert-Smarandache Theory), proposes new, useful rules of combination.
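    For context, the sketch below implements classical Dempster's rule, the baseline that DSmT generalises, not a DSmT rule itself: it combines two mass functions and surfaces the conflict mass K whose renormalisation by 1 − K becomes ill-behaved as sources grow highly conflicting, which is precisely the regime that motivates DSmT's alternative combination rules.

```python
# Classical Dempster's rule for two basic belief assignments over a frame
# {a, b}. Shown only as the baseline that DSmT generalises; DSmT's own rules
# (e.g. proportional conflict redistribution) handle the conflict differently.

def dempster(m1: dict[frozenset, float], m2: dict[frozenset, float]):
    combined: dict[frozenset, float] = {}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2  # mass falling on the empty set
    # Renormalise by 1 - K; unstable as the conflict K approaches 1.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

a, b = frozenset("a"), frozenset("b")
m1 = {a: 0.9, b: 0.1}   # two highly conflicting sources
m2 = {a: 0.1, b: 0.9}
fused, K = dempster(m1, m2)
print(fused, "conflict K =", K)   # K = 0.82 here
```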

    The Unbalanced Classification Problem: Detecting Breaches in Security

    This research proposes several methods designed to improve solutions for security classification problems. The security classification problem involves unbalanced, high-dimensional, binary classification problems that are prevalent today. The imbalance within this data involves a significant majority of the negative class and a minority positive class. Any system that needs protection from malicious activity, intruders, theft, or other types of breaches in security must address this problem. These breaches in security are considered instances of the positive class. Given numerical data that represent observations or instances which require classification, state-of-the-art machine learning algorithms can be applied. However, the unbalanced and high-dimensional structure of the data must be considered prior to applying these learning methods. High-dimensional data poses a “curse of dimensionality,” which can be overcome through the analysis of subspaces. Exploration of intelligent subspace modeling and the fusion of subspace models is proposed. Detailed analysis of the one-class support vector machine, as well as its weaknesses and proposals to overcome these shortcomings, is included. A fundamental method for evaluation of the binary classification model is the receiver operating characteristic (ROC) curve and the area under the curve (AUC). This work details the underlying statistics involved with ROC curves, contributing a comprehensive review of ROC curve construction and analysis techniques, including a novel graphic for illustrating the connection between ROC curves and classifier decision values. The major innovations of this work include synergistic classifier fusion through the analysis of ROC curves and rankings, insight into the statistical behavior of the Gaussian kernel, and novel methods for applying machine learning techniques to defend against computer intrusion detection. The primary empirical vehicle for this research is computer intrusion detection data, and both host-based intrusion detection systems (HIDS) and network-based intrusion detection systems (NIDS) are addressed. Empirical studies also include military tactical scenarios.
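    Since the abstract leans heavily on ROC/AUC analysis, a minimal sketch of the standard rank-statistic identity may help ground those terms: AUC equals the probability that a randomly chosen positive instance outscores a randomly chosen negative one (the scores below are invented for illustration).

```python
# Minimal AUC computation from classifier decision values, using the
# Mann-Whitney identity AUC = P(score_pos > score_neg), ties counted as 1/2.
# Data and names here are illustrative, not from the dissertation.

def auc(pos_scores: list[float], neg_scores: list[float]) -> float:
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Typical unbalanced security data: few positives (breaches), many negatives.
pos = [0.91, 0.74, 0.88]
neg = [0.12, 0.35, 0.61, 0.80, 0.22, 0.15, 0.40, 0.05]
print(f"AUC = {auc(pos, neg):.3f}")
```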