
    The Douglas-Peucker algorithm for line simplification: Re-evaluation through visualization

    The primary aim of this paper is to illustrate the value of visualization in cartography and to indicate that tools for the generation and manipulation of realistic images are of limited value within this application. It does so through one problem in cartography, namely the generalisation of lines, reporting on an evaluation of the Douglas-Peucker algorithm for line simplification. Visualization of the simplification process and of its results suggests that the mathematical measures of performance proposed by some other researchers are inappropriate, misleading and questionable.

    Cartographic Algorithms: Problems of Implementation and Evaluation and the Impact of Digitising Errors

    Cartographic generalisation remains one of the outstanding challenges in digital cartography and Geographical Information Systems (GIS). It is generally assumed that computerisation will lead to the removal of spurious variability introduced by the subjective decisions of individual cartographers. This paper demonstrates through an in‐depth study of a line simplification algorithm that computerisation introduces its own sources of variability. The algorithm, referred to as the Douglas‐Peucker algorithm in cartographic literature, has been widely used in image processing, pattern recognition and GIS for some 20 years. An analysis of this algorithm and a study of some implementations in wide use identify the presence of variability resulting from the subjective decisions of software implementors. Spurious variability in software complicates the processes of evaluation and comparison of alternative algorithms for cartographic tasks. No doubt, variability in implementation could be removed by rigorous study and specification of algorithms. Such future work must address the presence of digitising error in cartographic data. Our analysis suggests that it would be difficult to adapt the Douglas‐Peucker algorithm to cope with digitising error without altering the method. Copyright © 1991, Wiley Blackwell. All rights reserved.
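The implementor discretion the abstract describes is visible even in a minimal sketch of the Douglas-Peucker algorithm. The Python below is a hypothetical rendering for illustration, not any of the implementations studied in the paper; the tie-breaking rule when two points lie equally far from the anchor-floater chord is exactly the kind of unspecified detail that produces variability between implementations.

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the line through start and end
    (falls back to point distance when the chord is degenerate)."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Keep the point farthest from the anchor-floater chord if it
    exceeds the tolerance, then recurse on both halves; otherwise
    discard all interior points."""
    if len(points) < 3:
        return list(points)
    start, end = points[0], points[-1]
    # Subjective implementation choice: strict '>' means the FIRST of
    # several equidistant points wins a tie; other implementations differ.
    index, dmax = 0, -1.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], start, end)
        if d > dmax:
            index, dmax = i, d
    if dmax > tolerance:
        left = douglas_peucker(points[:index + 1], tolerance)
        right = douglas_peucker(points[index:], tolerance)
        return left[:-1] + right  # drop the duplicated split point
    return [start, end]
```

The tolerance parameter is in the coordinate units of the input; endpoints are always retained, which is why the abstract's concern about digitising error matters: an erroneous endpoint can never be removed.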

    Line generalisation by repeated elimination of points

    This paper presents a new approach to line generalisation which uses the concept of ‘effective area’ for progressive simplification of a line by point elimination. Two coastlines are used to compare the performance of this algorithm with that of the widely used Douglas-Peucker algorithm. The results from the area-based algorithm compare favourably with manual generalisation of the same lines. It is capable of achieving both imperceptible minimal simplifications and caricatural generalisations. By careful selection of cutoff values, it is possible to use the same algorithm for scale-dependent and scale-independent generalisations. More importantly, it offers scope for modelling cartographic lines as consisting of features within features, so that their geometric manipulation may be modified by application- and/or user-defined rules and weights. The paper examines the merits and limitations of the algorithm and the opportunities it offers for further research and progress in the field of line generalisation. © 1993 Maney Publishing
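The repeated-elimination idea can be sketched compactly: each interior point's 'effective area' is the area of the triangle it forms with its two neighbours, and the point with the smallest area is removed until every survivor exceeds a cutoff. This is a hypothetical O(n²) illustration, not the paper's implementation (which, like production versions, would use a priority queue), and the cutoff value and test coordinates are invented.

```python
def triangle_area(a, b, c):
    """Effective area of point b: area of triangle (a, b, c)."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

def repeated_elimination(points, min_area):
    """Progressively drop the interior point with the smallest
    effective area until all remaining areas reach the cutoff."""
    pts = list(points)
    while len(pts) > 2:
        areas = [triangle_area(pts[i - 1], pts[i], pts[i + 1])
                 for i in range(1, len(pts) - 1)]
        smallest = min(range(len(areas)), key=areas.__getitem__)
        if areas[smallest] >= min_area:
            break
        # areas[i] corresponds to pts[i + 1] (interior points only)
        del pts[smallest + 1]
    return pts
```

Because elimination order is a global ranking by area rather than a recursive split, the same run supports both minimal simplification (small cutoff) and caricatural generalisation (large cutoff), as the abstract notes.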

    Testing Indicators of Translation Expertise in an Intralingual Task

    Massey/Ehrensberger-Dow (2014) showed that the focused use of external resources, more frequent but shorter pauses, and fast text production speed correlated with the level of translation experience in participants translating a text from English into German. This paper aims to: (1) investigate whether these indicators distinguish professional translators from trainees and language students who translated from English into Polish, and (2) test which indicators are also present in an intralingual task, i.e., when paraphrasing a text. Additionally, task duration and the quality of the target texts produced by the three groups are compared with a view to expanding the list of indicators of translation expertise. The data discussed here come from the ParaTrans research project, in which professional translators, translation trainees and language students translated and paraphrased comparable texts. The results confirm that the less frequent use of external resources, shorter problem-solving pauses, fast text production and high-quality target texts are strong indicators of expertise in translation. The number of problem-solving pauses was the only parameter found to distinguish professionals from trainees and language students in the paraphrasing task. This suggests that translation expertise perceived as a general construct can be seen as encompassing task expertise: the ability to reformulate meaning (transferable to a paraphrasing task) and the domain knowledge expertise inclusive of the ability to efficiently use bilingual knowledge when producing a translation.

    Neonatal Diagnostics: Toward Dynamic Growth Charts of Neuromotor Control

    © 2016 Torres, Smith, Mistry, Brincker and Whyatt. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The current rise of neurodevelopmental disorders poses a critical need to detect risk early in order to rapidly intervene. One of the tools pediatricians use to track development is the standard growth chart. The growth charts are somewhat limited in predicting possible neurodevelopmental issues. They rely on linear models and assumptions of normality for physical growth data – obscuring key statistical information about possible neurodevelopmental risk in growth data that actually has accelerated, non-linear rates of change and variability encompassing skewed distributions. Here, we use new analytics to profile growth data from 36 newborn babies that were tracked longitudinally for 5 months. By switching to incremental (velocity-based) growth charts and combining these dynamic changes with underlying fluctuations in motor performance – as the transition from spontaneous random noise to a systematic signal – we demonstrate a method to detect very early stunting in the development of voluntary neuromotor control and to flag risk of neurodevelopmental derailment.
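The switch from absolute-size charts to incremental (velocity-based) charts amounts to plotting the rate of change between visits rather than the measurement itself. The sketch below illustrates the idea only; the measurements are invented and the paper's actual analytics go further, fitting skewed distributions to the fluctuations rather than just differencing.

```python
import numpy as np

# Hypothetical longitudinal length measurements (cm) for one infant
# over five months; NOT data from the study.
ages_months = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
length_cm = np.array([50.0, 54.5, 58.0, 61.0, 63.5, 65.5])

# Incremental growth chart: growth velocity between consecutive visits.
# A standard chart plots length_cm against age and obscures the
# non-linear deceleration that the velocity series makes explicit.
velocity = np.diff(length_cm) / np.diff(ages_months)  # cm per month
```

For these invented numbers the velocity series is [4.5, 3.5, 3.0, 2.5, 2.0] cm/month: a steadily decelerating, non-linear trajectory that a linear model of the raw lengths would smooth over.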

    The clumpiness of molecular clouds: HCO+ (3--2) survey near Herbig-Haro objects

    Some well-studied Herbig-Haro objects have associated with them one or more cold, dense, and quiescent clumps of gas. We propose that such clumps near an HH object can be used as a general measure of clumpiness in the molecular cloud that contains that HH object. Our aim is to make a survey of clumps around a sample of HH objects, and to use the results to estimate the clumpiness of molecular clouds. All known cold, dense, and quiescent clumps near HH objects are anomalously strong HCO+ emitters. Our method is, therefore, to search for strong HCO+ emission as an indicator of a clump near to an HH object. The searches were made using JCMT and SEST in the HCO+ 3-2 and also H13CO+ 1-0 lines, with some additional searches for methanol and sulphur monoxide lines. The sources selected were a sample of 22 HH objects in which no previous HCO+ emission had been detected. We find that half of the HH objects have clumps detected in the HCO+ 3-2 line and that all searches in the H13CO+ 1-0 line show evidence of clumpiness. All condensations have narrow linewidths and are evidently unaffected dynamically by the HH jet shock. We conclude that the molecular clouds in which these HH objects are found must be highly heterogeneous on scales of less than 0.1 pc. An approximate calculation based on these results suggests that the area filling factor of clumps affected by HH objects is on the order of 10%. These clumps have gas number densities larger than 3e4 cm-3. Comment: 11 pages, 14 figures. Accepted for publication in Astronomy and Astrophysics.