3,098 research outputs found

    Polygonal Representation of Digital Curves


    Probabilistic convexity measure


    A novel framework for making dominant point detection methods non-parametric

    Most dominant point detection methods require heuristically chosen control parameters, one of the most common being the maximum deviation. This paper uses a theoretical bound on the maximum deviation of pixels obtained by digitizing a line segment to construct a general framework that makes most dominant point detection methods non-parametric. The derived analytical bound can serve as a natural benchmark for line fitting algorithms, so that dominant point detection methods become parameter-independent and non-heuristic; most methods can incorporate the bound easily. This is demonstrated on three categorically different dominant point detection methods. The non-parametric approach retains the characteristics of the digital curve while providing good fitting performance and compression ratio for all three methods on a variety of digital, non-digital, and noisy curves.
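    To make the idea concrete, the sketch below applies a digitization bound inside a standard Ramer-Douglas-Peucker split: instead of a user-supplied tolerance, each chord is accepted once no pixel deviates from it by more than the worst-case digitization error. The bound used here, (|sin θ| + |cos θ|)/2 for a chord at angle θ (rounding moves each pixel coordinate by at most half a pixel), is an elementary illustration of the framework, not the paper's derived bound.

```python
import numpy as np

def digitization_bound(p, q):
    """Worst-case deviation of a pixel from the true line it digitizes.

    Rounding each coordinate to the nearest integer moves a point by at
    most 1/2 in x and in y, so its perpendicular distance from a chord
    at angle theta is at most (|sin theta| + |cos theta|)/2.  This is an
    elementary stand-in for the paper's analytical bound.
    """
    theta = np.arctan2(q[1] - p[1], q[0] - p[0])
    return 0.5 * (abs(np.sin(theta)) + abs(np.cos(theta)))

def dominant_points(curve, i=0, j=None):
    """Ramer-Douglas-Peucker split with the tolerance supplied by the
    digitization bound instead of a heuristic control parameter."""
    if j is None:
        curve = np.asarray(curve, float)
        j = len(curve) - 1
    p, q = curve[i], curve[j]
    d = q - p
    norm = np.hypot(d[0], d[1]) or 1.0
    pts = curve[i + 1:j]
    if len(pts) == 0:
        return [i, j]
    # Perpendicular distance of each intermediate pixel from chord p-q.
    dev = np.abs(d[0] * (pts[:, 1] - p[1]) - d[1] * (pts[:, 0] - p[0])) / norm
    k = int(np.argmax(dev))
    if dev[k] <= digitization_bound(p, q):   # within digitization noise
        return [i, j]                        # the chord is a faithful fit
    m = i + 1 + k                            # split at the worst offender
    return dominant_points(curve, i, m)[:-1] + dominant_points(curve, m, j)
```

    The returned indices are the dominant points; for a closed digital curve one would start the split between two initial extreme points of the contour.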

    Methods for Ellipse Detection from Edge Maps of Real Images


    Digital Image Processing

    This book presents several recent advances that are related to, or fall under the umbrella of, digital image processing, with the purpose of providing insight into the possibilities offered by digital image processing algorithms in various fields. The mathematical algorithms are accompanied by graphical representations and illustrative examples for enhanced readability. The chapters are written so that even a reader with basic experience and knowledge of digital image processing can properly understand the presented algorithms, while the structure of the material allows fellow scientists to push the development of the presented subjects even further.

    Contributions on optimal and suboptimal methods for polygonal approximation of 2-D curves

    This thesis addresses the analysis of the shape of 2-D objects. In computer vision there are many visual cues from which information can be extracted, and one of the most widely used is the shape or contour of objects. With suitable processing, this visual characteristic allows information about the objects to be extracted, scenes to be analyzed, and so on. However, the contour of an object contains redundant information. This excess data adds no new knowledge and should be removed, both to speed up subsequent processing and to minimize the size of the contour representation for storage or transmission. The reduction must be performed without losing information that is important for representing the original contour. A reduced version of a contour can be obtained by deleting intermediate points and joining the remaining points with line segments; this reduced representation is known in the literature as a polygonal approximation. Polygonal approximations therefore represent a compressed version of the original information.

    The main use of polygonal approximations is to reduce the amount of information needed to represent the contour of an object. In recent years, however, they have also been used for object recognition, with the approximation algorithms applied directly to extract the feature vectors employed in the learning stage. The contributions of this thesis focus on several aspects of polygonal approximation. The first contribution improves several polygonal approximation algorithms by adding a preprocessing stage that accelerates them, even improving the quality of the solutions in less time. The second contribution proposes a new algorithm that obtains optimal polygonal approximations in less time than the other optimal methods in the literature. The third contribution proposes an approximation algorithm that can reach the optimal solution in few iterations in most cases. Finally, an improved version of the optimal polygonal approximation algorithm is proposed that solves an alternative optimization problem.
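    For readers unfamiliar with the optimization problems mentioned above, the min-ε variant (place a fixed number of segment endpoints so that the total approximation error is minimal) can be solved exactly with dynamic programming in the style of Perez and Vidal. The sketch below is a minimal, generic illustration of that textbook technique under a sum-of-squared-deviations error; it is not a reconstruction of the thesis's own algorithms.

```python
import numpy as np

def segment_error(curve, i, j):
    """Sum of squared perpendicular distances of curve[i..j] from chord i-j."""
    p, q = curve[i], curve[j]
    d = q - p
    norm = np.hypot(d[0], d[1])
    if norm == 0:
        return float(((curve[i:j + 1] - p) ** 2).sum())
    pts = curve[i:j + 1]
    dev = (d[0] * (pts[:, 1] - p[1]) - d[1] * (pts[:, 0] - p[0])) / norm
    return float((dev ** 2).sum())

def optimal_polygon(curve, m):
    """Optimal (minimum-error) open polygonal approximation with m segments,
    via Perez-Vidal style dynamic programming."""
    curve = np.asarray(curve, float)
    n = len(curve)
    err = [[segment_error(curve, i, j) for j in range(n)] for i in range(n)]
    INF = float("inf")
    cost = [[INF] * n for _ in range(m + 1)]
    back = [[0] * n for _ in range(m + 1)]
    cost[0][0] = 0.0
    for k in range(1, m + 1):          # k segments used so far
        for j in range(k, n):          # endpoint of the k-th segment
            for i in range(k - 1, j):  # previous breakpoint
                c = cost[k - 1][i] + err[i][j]
                if c < cost[k][j]:
                    cost[k][j], back[k][j] = c, i
    # Recover the breakpoints by walking the back-pointers from the end.
    idx, j = [n - 1], n - 1
    for k in range(m, 0, -1):
        j = back[k][j]
        idx.append(j)
    return idx[::-1], cost[m][n - 1]
```

    Calling optimal_polygon(contour, m) returns the m+1 breakpoint indices and the minimal total error; the dual min-# problem (fewest segments within a given error budget) can be handled analogously by searching over the number of segments.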

    Cohort aggregation modelling for complex forest stands: Spruce-aspen mixtures in British Columbia

    Mixed-species growth models are needed as a synthesis of ecological knowledge and for guiding forest management. Individual-tree models have been commonly used, but the difficulty of reliably scaling from the individual to the stand level is often underestimated: emergent properties and statistical issues limit their effectiveness. More holistic modelling of aggregates at the whole-stand level is a potentially attractive alternative. This work explores methodology for developing biologically consistent dynamic mixture models in which the state is described by aggregate stand-level variables for species or age/size cohorts. The methods are demonstrated and tested with a two-cohort model for spruce-aspen mixtures named SAM. The models combine single-species submodels with submodels for resource partitioning among the cohorts. The partitioning allows for differences in competitive strength among species and size classes, and for complementarity effects; height growth reduction in suppressed cohorts is also modelled. SAM fits the available data well and exhibits behaviors consistent with current ecological knowledge. The general framework can be applied to any number of cohorts, and should be useful as a basis for modelling other mixed-species or uneven-aged stands.
    Comment: Accepted manuscript, to appear in Ecological Modelling.
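    As a rough illustration of the modelling idea (aggregate state variables per cohort plus a resource-partitioning submodel), the toy difference-equation sketch below tracks the biomass of two cohorts sharing a common growth resource. The partitioning form, the parameter names (alpha, mort1, mort2), and all values are invented for illustration; SAM's actual state variables, submodels, and fitted parameters are those of the paper.

```python
def partition(b1, b2, alpha):
    """Share of the site's growth resource captured by cohort 1, with
    alpha expressing its competitive strength relative to cohort 2.
    A simple ratio form; SAM's actual partitioning submodel differs."""
    w1, w2 = alpha * b1, b2
    return w1 / (w1 + w2) if (w1 + w2) > 0 else 0.5

def step(b1, b2, r, alpha, mort1, mort2):
    """Advance the stand-level biomass of two cohorts by one period:
    growth is each cohort's partitioned share of a common resource r,
    minus proportional mortality -- a toy difference equation, not SAM."""
    s = partition(b1, b2, alpha)
    b1 = b1 + r * s - mort1 * b1
    b2 = b2 + r * (1 - s) - mort2 * b2
    return b1, b2

# Example run: a slower, persistent cohort against an early competitor.
b1, b2 = 5.0, 20.0
for year in range(100):
    b1, b2 = step(b1, b2, r=2.0, alpha=1.5, mort1=0.01, mort2=0.04)
print(round(b1, 1), round(b2, 1))
```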

    Computational advances in gravitational microlensing: a comparison of CPU, GPU, and parallel, large data codes

    To assess how future progress in gravitational microlensing computation at high optical depth will rely on both hardware and software solutions, we compare a direct inverse ray-shooting code implemented on a graphics processing unit (GPU) with both a widely used hierarchical tree code on a single-core CPU and a recent implementation of a parallel tree code suitable for a CPU-based cluster supercomputer. We examine the accuracy of the tree codes through comparison with the direct code over a much wider range of parameter space than has been feasible before. We demonstrate that all three codes achieve comparable accuracy, and that the choice of approach depends on the scale and nature of the microlensing problem under investigation. On current hardware there is little difference in processing speed between the single-core CPU tree code and the GPU direct code; however, the recent plateau in single-core CPU speeds means the existing tree code can no longer take advantage of Moore's-law-like increases in processing speed, whereas we anticipate a rapid increase in GPU capabilities in the next few years, which favours the direct code. We suggest that progress in other areas of astrophysical computation may benefit from a transition to GPUs through the use of "brute force" algorithms, rather than attempting to port the current best solution directly to a GPU language: for certain classes of problems, a simple GPU implementation may already be no worse than an optimised single-core CPU version.
    Comment: 11 pages, 4 figures, accepted for publication in New Astronomy.
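    To illustrate what the "brute force" direct approach amounts to, the sketch below is a minimal single-core inverse ray-shooting loop in NumPy: rays on a regular image-plane grid are mapped through the point-mass lens equation and binned in the source plane, where the relative ray density traces the magnification. The field size, lens configuration, and normalization are illustrative choices, not the paper's setup; a GPU version would assign the same per-ray computation to individual threads.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random field of equal-mass point lenses, positions as complex numbers.
n_lenses = 200
z_lens = rng.uniform(-10, 10, n_lenses) + 1j * rng.uniform(-10, 10, n_lenses)
mass = np.ones(n_lenses)

# Fire a dense, regular grid of rays from the image plane.
n_side = 1000
x = np.linspace(-5, 5, n_side)
z = (x[None, :] + 1j * x[:, None]).ravel()

# Point-mass lens equation in complex form: w = z - sum_i m_i / conj(z - z_i).
# Rays are processed in chunks so the (rays x lenses) temporary stays small.
w = np.empty_like(z)
chunk = 10_000
for lo in range(0, z.size, chunk):
    zc = z[lo:lo + chunk]
    w[lo:lo + chunk] = zc - (mass / np.conj(zc[:, None] - z_lens)).sum(axis=1)

# Bin the ray landings in the source plane; counts per pixel are proportional
# to the magnification, up to normalization by the unlensed ray density.
counts, _, _ = np.histogram2d(w.real, w.imag, bins=400,
                              range=[[-2.0, 2.0], [-2.0, 2.0]])
mag_map = counts / counts.mean()
```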