79 research outputs found

    FICCS: A Fact Integrity Constraint Checking System


    The application of organisational cybernetics to the design and diagnosis of financial performance management systems

    The object of this study is the processes that govern the flow of financial resources around an organisation. This is addressed in the context of the need for organisations to survive and prosper in an uncertain and dynamic world. Specifically, interest is focussed upon the mechanisms responsible for an organisation's ability to respond appropriately to environmental disturbances in the short term and to adapt to changes in the pattern of those disturbances over the longer term. The aim is to identify how this process is carried out and what implications this might have for the efficient and effective design of an organisation's practices and procedures.

    These are fundamental issues for any sort of social organisation. However, over the last fifty years a body of knowledge has accumulated – often described as systems theory – which seeks to identify and codify the principles that underpin all forms of organisation, whether sociological, biological or psychological. Advocates of systems theory claim that invariant principles can be applied, and knowledge transferred, across phenomenological domains.

    In academia, the study of the mechanisms that govern the flow of financial resources has received considerable attention. The study of Management Control Systems (MCS) in general, and budgeting in particular, is one of the most densely populated fields of academic accounting research. There has, however, been surprisingly little published on the application of systems theory to financial control processes.

    The broad issues that this thesis seeks to address are therefore:
    • What principles and concepts from systems theory can be applied to the study of the management of financial resources in organisations?
    • How might they contribute to knowledge and understanding of such systems?
    • How can they be used to design and operate such systems in practice?

    Inductive Pattern Formation

    With the extended computational limits of algorithmic recursion, scientific investigation is transitioning away from computationally decidable problems and beginning to address computationally undecidable complexity. The analysis of deductive inference in structure-property models is yielding to the synthesis of inductive inference in process-structure simulations. Process-structure modeling has examined the external order parameters of inductive pattern formation, but investigation of the internal order parameters of self-organization has been hampered by the lack of a mathematical formalism able to quantitatively define a specific configuration of points. This investigation addresses that issue of quantitative synthesis. Local space was developed by the Poincaré inflation of a set of points to construct neighborhood intersections, defining topological distance and introducing situated Boolean topology as a local replacement for point-set topology. Parallel development of the local semi-metric topological space, the local semi-metric probability space, and the local metric space of a set of points provides a triangulation of connectivity measures that defines the quantitative architectural identity of a configuration and structure-independent axes of a structural configuration space. The recursive sequence of intersections constructs a probabilistic discrete spacetime model of interacting fields to define the internal order parameters of self-organization, with order parameters external to the configuration modeled by adjusting the morphological parameters of individual neighborhoods and the interplay of excitatory and inhibitory point sets. The evolutionary trajectory of a configuration maps the development of specific hierarchical structure emergent from a specific set of initial conditions, with nested boundaries signaling the nonlinear properties of local causative configurations. This exploration of architectural configuration space concluded with initial process-structure-property models of deductive and inductive inference spaces. In the computationally undecidable problem of human niche construction, an adaptive-inductive pattern formation model with predictive control organized the bipartite recursion between an information structure and its physical expression as hierarchical ensembles of artificial neural network-like structures. The union of architectural identity and bipartite recursion generates a predictive structural model of an evolutionary design process, offering an alternative to the limitations of cognitive descriptive modeling. The low computational complexity of these models enables them to be embedded in physical constructions to create the artificial life forms of a real-time autonomously adaptive human habitat.
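
    The 'Poincaré inflation' and topological-distance constructions above can be made concrete with a small sketch. The Python fragment below is only one plausible reading of the abstract, not the thesis's formalism: every point is inflated to a ball of radius r, two points become neighbours when their balls intersect, and topological distance is taken as the hop count in the resulting neighbourhood-intersection graph. The function names and the Euclidean setting are assumptions.

        from collections import deque
        from math import dist

        def intersection_graph(points, r):
            # Assumed reading of 'Poincaré inflation': grow a ball of radius r
            # around each point; two points are adjacent when their balls
            # intersect, i.e. when their Euclidean distance is at most 2*r.
            n = len(points)
            return {i: [j for j in range(n)
                        if j != i and dist(points[i], points[j]) <= 2 * r]
                    for i in range(n)}

        def topological_distance(graph, src, dst):
            # Hop count in the neighbourhood-intersection graph (plain BFS),
            # used here as a discrete stand-in for topological distance.
            seen, frontier = {src}, deque([(src, 0)])
            while frontier:
                node, d = frontier.popleft()
                if node == dst:
                    return d
                for nxt in graph[node]:
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, d + 1))
            return None  # the two points lie in different components

        pts = [(0, 0), (1, 0), (2, 0), (5, 0)]
        g = intersection_graph(pts, 0.6)       # inflate by radius 0.6
        print(topological_distance(g, 0, 2))   # 2 hops: 0 -> 1 -> 2
        print(topological_distance(g, 0, 3))   # None: (5, 0) is isolated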

    A data complexity and rewritability tetrachotomy of ontology-mediated queries with a covering axiom

    Aiming to understand the data complexity of answering conjunctive queries mediated by an axiom stating that a class is covered by the union of two other classes, we show that deciding their first-order rewritability is PSPACE-hard and obtain a number of sufficient conditions for membership in AC0, L, NL, and P. Our main result is a complete syntactic AC0/NL/P/coNP tetrachotomy of path queries under the assumption that the covering classes are disjoint.
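
    For readers unfamiliar with the setting, a minimal concrete instance of such an ontology-mediated query can be written out; the class and predicate names here are illustrative, not taken from the paper.

        \[
          \text{covering axiom: } A \sqsubseteq F \sqcup T,
          \qquad
          \text{disjointness: } F \sqcap T \sqsubseteq \bot,
        \]
        \[
          \text{path query: } q \;=\; \exists x\, \exists y\,
          \bigl( F(x) \wedge R(x,y) \wedge T(y) \bigr).
        \]

    Answering the query then means deciding whether q holds in every model of the data together with these axioms; the tetrachotomy classifies which shapes of path query make that problem AC0, NL-, P-, or coNP-complete in the size of the data.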

    Natively probabilistic computation

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2009. Includes bibliographical references (leaves 129-135). I introduce a new set of natively probabilistic computing abstractions, including probabilistic generalizations of Boolean circuits, backtracking search and pure Lisp. I show how these tools let one compactly specify probabilistic generative models, generalize and parallelize widely used sampling algorithms like rejection sampling and Markov chain Monte Carlo, and solve difficult Bayesian inference problems. I first introduce Church, a probabilistic programming language for describing probabilistic generative processes that induce distributions, which generalizes Lisp, a language for describing deterministic procedures that induce functions. I highlight the ways randomness meshes with the reflectiveness of Lisp to support the representation of structured, uncertain knowledge, including nonparametric Bayesian models from the current literature, programs for decision making under uncertainty, and programs that learn very simple programs from data. I then introduce systematic stochastic search, a recursive algorithm for exact and approximate sampling that generalizes a popular form of backtracking search to the broader setting of stochastic simulation and recovers widely used particle filters as a special case. I use it to solve probabilistic reasoning problems from statistical physics, causal reasoning and stereo vision. Finally, I introduce stochastic digital circuits that model the probability algebra just as traditional Boolean circuits model the Boolean algebra. I show how these circuits can be used to build massively parallel, fault-tolerant machines for sampling and allow one to efficiently run Markov chain Monte Carlo methods on models with hundreds of thousands of variables in real time. I emphasize the ways in which these ideas fit together into a coherent software and hardware stack for natively probabilistic computing, organized around distributions and samplers rather than deterministic functions. I argue that by building uncertainty and randomness into the foundations of our programming languages and computing machines, we may arrive at ones that are more powerful, flexible and efficient than deterministic designs, and are in better alignment with the needs of computational science, statistics and artificial intelligence. By Vikash Kumar Mansinghka.
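
    The conditioning-by-rejection idea behind Church's query construct can be sketched in a few lines of Python. This is a hypothetical analogue for illustration (Church itself is a Lisp dialect, and rejection_query, two_flips and the toy model below are invented names, not code from the thesis).

        import random

        def rejection_query(model, condition, trials=100_000):
            # Conditioning by rejection: run the generative model repeatedly
            # and keep only the executions whose outcome satisfies the
            # condition; a probabilistic language can build this semantics
            # in as a primitive.
            return [trace for trace in (model() for _ in range(trials))
                    if condition(trace)]

        # Toy generative model: two independent fair coin flips.
        def two_flips():
            return (random.random() < 0.5, random.random() < 0.5)

        # Condition on "at least one heads" and estimate
        # P(both heads | at least one heads) = 1/3.
        kept = rejection_query(two_flips, lambda t: t[0] or t[1])
        print(sum(a and b for a, b in kept) / len(kept))   # ~ 0.333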

    A tetrachotomy of ontology-mediated queries with a covering axiom

    Our concern is the problem of efficiently determining the data complexity of answering queries mediated by description logic ontologies and constructing their optimal rewritings to standard database queries. Originating in ontology-based data access and datalog optimisation, this problem is known to be computationally very complex in general, with no explicit syntactic characterisations available. In this article, aiming to understand the fundamental roots of this difficulty, we strip the problem to the bare bones and focus on Boolean conjunctive queries mediated by a simple covering axiom stating that one class is covered by the union of two other classes. We show that, on the one hand, these rudimentary ontology-mediated queries, called disjunctive sirups (or d-sirups), capture many features and difficulties of the general case. For example, answering d-sirups is Π^p_2-complete for combined complexity and can be in AC0 or L-, NL-, P-, or coNP-complete for data complexity (with the problem of recognising FO-rewritability of d-sirups being 2ExpTime-hard); some d-sirups only have exponential-size resolution proofs, some only double-exponential-size positive existential FO-rewritings and single-exponential-size nonrecursive datalog rewritings. On the other hand, we prove a few partial sufficient and necessary conditions of FO- and (symmetric/linear-) datalog rewritability of d-sirups. Our main technical result is a complete and transparent syntactic AC0/NL/P/coNP tetrachotomy of d-sirups with disjoint covering classes and a path-shaped Boolean conjunctive query. To obtain this tetrachotomy, we develop new techniques for establishing P- and coNP-hardness of answering non-Horn ontology-mediated queries, as well as showing that they can be answered in NL.
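
    The coNP upper bound for data complexity mentioned above has a transparent brute-force reading: because conjunctive queries are monotone, the query is certainly true iff it holds in every minimal completion that puts each element of the covered class into one of the two covering classes. A small Python sketch of that check, with all names invented for illustration:

        from itertools import product

        def certain_answer(a_elems, facts, query_holds):
            # Check the query in every completion assigning each A-element
            # to class F or to class T: the naive coNP-style procedure,
            # exponential in the number of A-elements.
            for choice in product(("F", "T"), repeat=len(a_elems)):
                completion = set(facts) | set(zip(choice, a_elems))
                if not query_holds(completion):
                    return False  # countermodel found: not a certain answer
            return True

        # Toy dataset: an R-path a -> b -> c with labelled endpoints,
        # where b is known only to belong to the covered class A.
        facts = {("F", "a"), ("T", "c"), ("R", "a", "b"), ("R", "b", "c")}

        # Path query q = EXISTS x, y . F(x) AND R(x, y) AND T(y)
        def q(db):
            edges = [t for t in db if t[0] == "R"]
            return any(("F", x) in db and ("T", y) in db for (_, x, y) in edges)

        print(certain_answer(["b"], facts, q))  # True: either choice for b
                                                # yields an F -> R -> T pattern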

    Proceedings of the fifth UK/BCS symposium on knowledge discovery and data mining

    These are the proceedings of a one-day symposium on Knowledge Discovery and Data Mining held at the Salford Lowry in 2009. The topics covered included some of the most important and exciting issues in the field. There were presentations on fundamental research topics such as how data mining is changing the very nature of scientific methods, the challenges of time series data mining, the use of social network analysis for classification of messages, knowledge discovery from case data, and the development of a unifying framework for feature selection methods. There were also presentations describing the lessons learned from real-world case studies in detecting financial crime, profiling electricity usage, image processing, credit scoring, and predicting internet shopping patterns.