
    Colour normalisation to reduce inter-patient and intra-patient variability in microaneurysm detection in colour retinal images

    Images of the human retina vary considerably in appearance depending on the subject's skin pigmentation (amount of melanin). Some form of colour normalisation of retinal images is required for automated analysis if good sensitivity and specificity at detecting lesions are to be achieved in populations involving diverse races. Here we describe an approach to colour normalisation by shade-correction intra-image and histogram normalisation inter-image. The colour normalisation is assessed by its effect on the automated detection of microaneurysms in retinal images. It is shown that the Naïve Bayes classifier used in microaneurysm detection benefits from the use of features measured over colour normalised images.
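The abstract pairs intra-image shade-correction with inter-image histogram normalisation. As a minimal sketch of the inter-image half only, assuming 8-bit RGB images as NumPy arrays (function names are illustrative, not the authors' code), per-channel histogram matching against a reference image can be written as:

```python
import numpy as np

def match_histogram(channel, reference):
    """Map pixel intensities of `channel` so its histogram matches `reference`."""
    src_values, src_idx, src_counts = np.unique(
        channel.ravel(), return_inverse=True, return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Cumulative distribution functions of the two channels
    src_cdf = np.cumsum(src_counts) / channel.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source intensity, pick the reference intensity whose CDF is nearest
    matched = np.interp(src_cdf, ref_cdf, ref_values)
    return matched[src_idx].reshape(channel.shape)

def normalise_colour(image, reference):
    """Apply per-channel histogram matching to an H x W x 3 retinal image."""
    return np.stack([match_histogram(image[..., c], reference[..., c])
                     for c in range(3)], axis=-1)
```

After matching, every channel's intensity distribution approximates the reference image's, which is the inter-image step the paper evaluates before lesion detection; the intra-image shade-correction step is separate and not sketched here.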

    Uniform Proofs of Normalisation and Approximation for Intersection Types

    We present intersection type systems in the style of sequent calculus, modifying the systems that Valentini introduced to prove normalisation properties without using the reducibility method. Our systems are more natural than Valentini's and equivalent to the usual natural deduction style systems. We prove the characterisation theorems of strong and weak normalisation through the proposed systems and, moreover, the approximation theorem by means of direct inductive arguments. This provides, in a uniform way, proofs of the normalisation and approximation theorems via type systems in sequent calculus style. Comment: In Proceedings ITRS 2014, arXiv:1503.0437

    Normalisation Control in Deep Inference via Atomic Flows

    We introduce `atomic flows': graphs obtained from derivations by tracing atom occurrences and forgetting the logical structure. We study simple manipulations of atomic flows that correspond to complex reductions on derivations. This allows us to prove, for propositional logic, a new and very general normalisation theorem, which contains cut elimination as a special case. We operate in deep inference, which is more general than other syntactic paradigms, and where normalisation is more difficult to control. We argue that atomic flows are a significant technical advance for normalisation theory, because 1) the technique they support is largely independent of syntax; 2) indeed, it is largely independent of logical inference rules; 3) they constitute a powerful geometric formalism, which is more intuitive than syntax.

    Garside and quadratic normalisation: a survey

    Starting from the seminal example of the greedy normal form in braid monoids, we analyse the mechanism of the normal form in a Garside monoid and explain how it extends to the more general framework of Garside families. Extending the viewpoint even more, we then consider general quadratic normalisation procedures and characterise Garside normalisation among them. Comment: 30 pages

    Normalization of the Background Independent Open String Field Theory Action

    It has been shown recently that the background independent open string field theory provides an exact description of tachyon condensation on unstable D-branes of bosonic string theory. In that analysis the overall normalisation of the action was chosen so that it reproduces the conjectured relations involving tachyon condensation. In this paper we fix this normalisation by comparing the on-shell three tachyon amplitude computed from the background independent open string field theory with the same amplitude computed from the cubic open string field theory, which in turn agrees with the result of the first quantised theory. We find that this normalisation factor is in precise agreement with the one required for verifying the conjectured properties of the tachyon potential. Comment: LaTeX file, 8 pages, references added

    A Theory of Explicit Substitutions with Safe and Full Composition

    Many different systems with explicit substitutions have been proposed to implement a large class of higher-order languages. Motivations and challenges that guided the development of such calculi in functional frameworks are surveyed in the first part of this paper. Then, very simple technology in named variable-style notation is used to establish a theory of explicit substitutions for the lambda-calculus which enjoys a whole set of useful properties, such as full composition, simulation of one-step beta-reduction, preservation of beta-strong normalisation, strong normalisation of typed terms, and confluence on metaterms. Normalisation of related calculi is also discussed. Comment: 29 pages. Special Issue: Selected Papers of the Conference "International Colloquium on Automata, Languages and Programming 2008" edited by Giuseppe Castagna and Igor Walukiewicz