19 research outputs found

    Hypergrammars: An extension of macrogrammars

    A new class of generative grammars called hypergrammars is introduced. They are described as a natural extension of Fischer's macrogrammars. Three modes of derivation — inside-out, outside-in, and unrestricted — are considered, and the classes of languages so defined are compared with other known classes. It is shown that the outside-in hyperlanguages are the same as the outside-in macrolanguages, but that the inside-out hyperlanguages are the same as Fischer's quoted languages. Various closure properties are considered, as well as generalizations of the original definitions. Three new hierarchies of languages, each embedded in the class of quoted languages, are discovered. It is claimed that this new approach to Fischer's work is both more understandable and mathematically elegant.

    A First Course in Formal Language Theory


    The computation of nearly minimal Steiner trees in graphs

    The computation of a minimal Steiner tree for a general weighted graph is known to be NP-hard. Except in very simple cases, it is thus computationally impracticable to use an algorithm that produces an exact solution. This paper describes a heuristic algorithm that runs in polynomial time and produces a near-minimal solution. Experimental results show that the algorithm performs satisfactorily in the rectilinear case. The paper provides an interesting case study of NP-hard problems and of the important technique of heuristic evaluation.
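    The abstract does not reproduce the heuristic itself, but a classic polynomial-time scheme of this kind (a shortest-path / metric-closure heuristic; the graph layout and function names below are illustrative, not the paper's own algorithm) can be sketched as:

    ```python
    import heapq

    def dijkstra(adj, src):
        # Single-source shortest paths; returns distance and predecessor maps.
        dist, pred = {src: 0}, {src: None}
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], pred[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        return dist, pred

    def steiner_heuristic(adj, terminals):
        # 1. Build the metric closure restricted to the terminals.
        closure, preds = {}, {}
        for t in terminals:
            closure[t], preds[t] = dijkstra(adj, t)
        # 2. Grow a spanning tree over the terminals (Prim-style) in the
        #    closure, expanding each chosen closure edge back into the
        #    shortest path it represents in the original graph.
        in_tree = {terminals[0]}
        edges = set()
        while len(in_tree) < len(terminals):
            best = None
            for u in in_tree:
                for v in terminals:
                    if v in in_tree:
                        continue
                    d = closure[u].get(v, float("inf"))
                    if best is None or d < best[0]:
                        best = (d, u, v)
            _, u, v = best
            in_tree.add(v)
            node = v
            while node != u:
                p = preds[u][node]
                edges.add(tuple(sorted((p, node))))
                node = p
        return edges
    ```

    On a star graph with a cheap central (Steiner) vertex, the heuristic routes all terminal connections through the centre rather than taking the expensive direct edges, which is exactly the behaviour that makes such heuristics near-minimal in practice.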

    Measure based metrics for aggregated data


    Preface


    1.25-Approximation Algorithm for Steiner Tree Problem with Distances 1 and 2

    We give a 1.25-approximation algorithm for the Steiner Tree Problem with distances one and two, improving on the best known bound for that problem.

    Non-linear dimensionality reduction for privacy-preserving data classification

    Many techniques have been proposed to protect the privacy of data outsourced for analysis by external parties. However, most of these techniques distort the underlying data properties and therefore hinder data mining algorithms from discovering patterns. The aim of Privacy-Preserving Data Mining (PPDM) is to generate a data-friendly transformation that maintains both the privacy and the utility of the data. We have proposed a novel privacy-preserving framework based on non-linear dimensionality reduction (i.e. non-metric multidimensional scaling) to perturb the original data. The perturbed data exhibited good utility in terms of distance preservation between objects; this was tested on a clustering task with good results. In this paper, we test our novel PPDM approach on a classification task using a k-Nearest Neighbour (k-NN) classification algorithm. We compare the classification results obtained from the original and the perturbed data and find them to be much the same, particularly in the lower dimensions. We show that, for distance-based classification, our approach preserves the utility of the data while hiding the private details.
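    The core claim — that a distance-preserving perturbation leaves k-NN predictions unchanged — can be illustrated with a minimal sketch. Note that the perturbation used here is a plain 2-D rotation standing in for the paper's non-metric MDS transform (which is not reimplemented), and the dataset and function names are illustrative:

    ```python
    import math

    def knn_predict(train_X, train_y, query, k=3):
        # Classic k-NN: majority vote among the k nearest training points.
        order = sorted(range(len(train_X)),
                       key=lambda i: math.dist(train_X[i], query))
        votes = [train_y[i] for i in order[:k]]
        return max(set(votes), key=votes.count)

    def rotate(points, theta):
        # A distance-preserving perturbation (orthogonal rotation); a
        # stand-in for the paper's non-metric MDS-based transform.
        c, s = math.cos(theta), math.sin(theta)
        return [(c * x - s * y, s * x + c * y) for x, y in points]
    ```

    Because rotation preserves all pairwise distances, classifying a rotated query against the rotated training set yields exactly the same labels as classifying the originals — the utility-preservation property the abstract reports for its MDS-perturbed data.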