    A nearly-linear computational-cost scheme for the forward dynamics of an N-body pendulum

    The dynamic equations of motion of an n-body pendulum with spherical joints are derived as a mixed system of differential and algebraic equations (DAEs). The DAEs are kept in implicit form to save arithmetic and preserve the sparsity of the system, and are solved by a robust implicit integration method. At each solution point, the predicted solution is corrected to the exact solution within a given tolerance using Newton's iterative method. For each iteration, a linear system of the form J ΔX = E has to be solved. The computational cost of solving this linear system directly by LU factorization is O(n^3); it can be reduced significantly by exploiting the structure of J. It is shown that by recognizing the recursive patterns and exploiting the sparsity of the system, the multiplicative and additive computational costs for solving J ΔX = E are O(n) and O(n^2), respectively. The formulation and solution method for an n-body pendulum are presented. The computational cost is shown to be nearly linearly proportional to the number of bodies.
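
    To make the cost argument concrete, here is a minimal, hypothetical sketch, not the paper's own formulation: in a chain of bodies each unknown is coupled only to its immediate neighbours, so a Newton-step system of the form J ΔX = E is (block) tridiagonal, and the classical Thomas algorithm solves such a system in O(n) arithmetic instead of the O(n^3) of dense LU factorization. The scalar Haskell version below assumes the sub-, main- and super-diagonals a, b, c and the right-hand side d are supplied as plain lists.

        -- Illustrative only: solve a tridiagonal system in O(n) via the
        -- Thomas algorithm.  The inputs a (sub), b (main), c (super), d (rhs)
        -- are hypothetical; a's first entry and c's last entry are never used.
        import Data.List (zip4)

        thomas :: [Double] -> [Double] -> [Double] -> [Double] -> [Double]
        thomas a b c d = backSub (forward (zip4 a b c d))
          where
            -- Forward elimination: produce the modified coefficients c', d'.
            forward rows = go rows 0 0
              where
                go [] _ _ = []
                go ((ai, bi, ci, di) : rest) cPrev dPrev =
                  let m   = bi - ai * cPrev
                      ci' = ci / m
                      di' = (di - ai * dPrev) / m
                  in (ci', di') : go rest ci' di'

            -- Back substitution, walking the modified rows in reverse.
            backSub rows = reverse (go (reverse rows) 0)
              where
                go [] _ = []
                go ((ci', di') : rest) xNext =
                  let xi = di' - ci' * xNext
                  in xi : go rest xi

        -- e.g. thomas [0,1,1] [4,4,4] [1,1,0] [5,6,5]  ==>  [1.0,1.0,1.0]

    Each unknown is visited a constant number of times in the forward and backward sweeps, which is the source of the O(n) multiplicative cost; the paper's block-structured J admits the same kind of recursive elimination.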

    Notes on notions around operational research

    Introduction to the Literature on Semantics

    An introduction to the literature on semantics. Included are pointers to the literature on axiomatic semantics, denotational semantics, operational semantics, and type theory.

    Introduction to the Literature on Programming Language Design

    This is an introduction to the literature on programming language design and related topics. It is intended to cite the most important work and to provide a place for students to start a literature search.

    Decision Support Systems Research –Most Cited Articles and Books

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of inferring misuse by correlating individual, temporally distributed events within a multiple-data-stream environment is explored, and a range of techniques is examined, covering model-based approaches, 'programmed' AI, and machine-learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events; this approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to learn the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred; this approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation, and adaptation are more readily facilitated.
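
    As a minimal, hypothetical illustration of the 'programmed', rule-based event correlation discussed above (the event types, subscriber field, and window length are invented for this sketch, not taken from the report): a rule fires when all of its required event types are observed for the same subscriber within a fixed time window.

        import Data.List (nub, sortOn, tails)

        data Event = Event
          { evTime       :: Int      -- seconds since some epoch
          , evSubscriber :: String
          , evType       :: String   -- e.g. "SIM_SWAP", "INTL_CALL_BURST"
          } deriving Show

        data Rule = Rule
          { ruleName   :: String
          , needed     :: [String]   -- event types that must all occur
          , windowSecs :: Int
          }

        -- Report (rule, subscriber) pairs for which every required event type
        -- occurs inside the rule's time window.  A naive scan, but it shows the
        -- rule-based style: knowledge of the misuse is encoded in the rule.
        correlate :: [Rule] -> [Event] -> [(String, String)]
        correlate rules evs =
          [ (ruleName r, sub)
          | r   <- rules
          , sub <- nub (map evSubscriber evs)
          , let es = sortOn evTime [e | e <- evs, evSubscriber e == sub]
          , any (matches r) (windows (windowSecs r) es)
          ]
          where
            windows w es = [ takeWhile (\e -> evTime e - evTime e0 <= w) win
                           | win@(e0 : _) <- tails es ]
            matches r win = all (`elem` map evType win) (needed r)

        -- e.g. correlate [Rule "possible-sim-fraud"
        --                      ["SIM_SWAP", "INTL_CALL_BURST"] 3600]
        --                [ Event 100 "alice" "SIM_SWAP"
        --                , Event 900 "alice" "INTL_CALL_BURST"
        --                , Event 500 "bob"   "SIM_SWAP" ]
        --   ==> [("possible-sim-fraud","alice")]

    Such a rule can only flag misuses it was written for, which is the false-negative weakness the report notes; the complementary learning-based approach would instead model normal behaviour per subscriber and flag deviations from it.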

    Quality improvement through the identification of controllable and uncontrollable factors in software development

    The software engineering community has moved from corrective methods to preventive methods, shifting the emphasis from product quality improvement to process quality improvement. Inspections at the end of the production line have been replaced by design walkthroughs and built-in quality assurance techniques throughout the development lifecycle. Process models such as the Spiral, V, W and X-Models provide the principles and techniques for process improvement which, in turn, produces product improvement. Factors that affect the quality of software need to be identified and controlled to ensure predictable and measurable software. In this paper we identify controllable and uncontrollable factors and provide empirical results from a large industrial survey, as well as conclusions relating to the models and metamodels for the estimation, measurement and control of the totality of features and characteristics of software.

    The Sketch of a Polymorphic Symphony

    In previous work, we have introduced functional strategies, that is, first-class generic functions that can traverse into terms of any type while mixing uniform and type-specific behaviour. In the present paper, we give a detailed description of one particular Haskell-based model of functional strategies. This model is characterised as follows. Firstly, we employ first-class polymorphism as a form of second-order polymorphism for the mere types of functional strategies. Secondly, we use an encoding scheme of run-time type case for mixing uniform and type-specific behaviour. Thirdly, we base all traversal on a fundamental combinator for folding over constructor applications. Using this model, we capture common strategic traversal schemes in a highly parameterised style. We study two original forms of parameterisation. Firstly, we design parameters for the specific control-flow, data-flow and traversal characteristics of more concrete traversal schemes. Secondly, we use overloading to postpone commitment to a specific type scheme of traversal. The resulting portfolio of traversal schemes can be regarded as a challenging benchmark for setups for typed generic programming. The way we develop the model and the suite of traversal schemes makes it clear that parameterised + typed strategic programming is best viewed as a potent combination of certain bits of parametric, intensional, polytypic, and ad-hoc polymorphism.
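
    The paper's combinator suite is not reproduced here, but the flavour of mixing uniform and type-specific behaviour in a typed generic traversal can be sketched with the later, standard 'Scrap Your Boilerplate' machinery (Data.Generics from the syb package), which grew out of the same line of work; the Expr type and the simplification rules below are invented purely for illustration.

        {-# LANGUAGE DeriveDataTypeable #-}
        import Data.Generics (Data, Typeable, everywhere, mkT)

        data Expr
          = Lit Int
          | Add Expr Expr
          | Neg Expr
          deriving (Show, Data, Typeable)

        -- Type-specific behaviour: a rewrite step that only knows about Expr.
        simplify :: Expr -> Expr
        simplify (Add (Lit 0) e) = e
        simplify (Add e (Lit 0)) = e
        simplify (Neg (Neg e))   = e
        simplify e               = e

        -- Uniform behaviour: a bottom-up traversal over terms of any type,
        -- applying the type-specific step wherever an Expr is encountered.
        simplifyAll :: Data a => a -> a
        simplifyAll = everywhere (mkT simplify)

        main :: IO ()
        main = print (simplifyAll (Neg (Neg (Add (Lit 0) (Lit 7)))))
        -- prints: Lit 7

    The functional strategies of the paper generalise this pattern: the traversal scheme itself (here the fixed everywhere) becomes a first-class, parameterised value whose control-flow, data-flow and traversal characteristics can be varied.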