
    A novel framework for making dominant point detection methods non-parametric

    Most dominant point detection methods require heuristically chosen control parameters. One of the most commonly used control parameters is the maximum deviation. This paper uses a theoretical bound on the maximum deviation of pixels obtained by digitization of a line segment to construct a general framework that makes most dominant point detection methods non-parametric. The derived analytical bound on the maximum deviation can serve as a natural benchmark for line fitting algorithms, so that dominant point detection methods can be made parameter-independent and non-heuristic. Most methods can incorporate the bound easily; this is demonstrated using three categorically different dominant point detection methods. The non-parametric approach retains the characteristics of the digital curve while providing good fitting performance and compression ratio for all three methods on a variety of digital, non-digital, and noisy curves.
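
    To make the idea concrete, the following minimal sketch shows how a deviation bound can replace the user-chosen tolerance in a Ramer-style splitting scheme. The bound(a, b) callback is a hypothetical placeholder for the paper's analytical bound, which the abstract does not reproduce; the splitting logic itself is the standard one.

```python
import math

def deviation(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    den = math.hypot(x2 - x1, y2 - y1)
    return num / den if den else math.hypot(x - x1, y - y1)

def split_nonparametric(curve, bound, lo=0, hi=None, out=None):
    """Ramer-style recursive splitting whose tolerance is supplied by `bound`,
    a function of the current chord, instead of a user-chosen parameter."""
    if hi is None:
        hi, out = len(curve) - 1, [0]
    a, b = curve[lo], curve[hi]
    k, dmax = lo, 0.0
    for i in range(lo + 1, hi):
        d = deviation(curve[i], a, b)
        if d > dmax:
            k, dmax = i, d
    if dmax > bound(a, b):   # a digitization-derived bound replaces the heuristic threshold
        split_nonparametric(curve, bound, lo, k, out)
        split_nonparametric(curve, bound, k, hi, out)
    else:
        out.append(hi)
    return out
```

    Passing, say, bound = lambda a, b: 1.0 reproduces ordinary fixed-threshold splitting; the point of the framework is that a bound derived from the digitization model removes that choice entirely.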

    Time Warp Edit Distance with Stiffness Adjustment for Time Series Matching

    In a way similar to the string-to-string correction problem, we address time series similarity in the light of a time-series-to-time-series correction problem, for which the similarity between two time series is measured as the minimum cost sequence of "edit operations" needed to transform one time series into another. To define the "edit operations", we use the paradigm of a graphical editing process and end up with a dynamic programming algorithm that we call Time Warp Edit Distance (TWED). TWED differs slightly in form from the Dynamic Time Warping, Longest Common Subsequence, and Edit Distance with Real Penalty algorithms. In particular, it exposes a parameter that drives a kind of stiffness of the elastic measure along the time axis. We show that the similarity provided by TWED is a metric, potentially useful in time series retrieval applications since it can exploit the triangle inequality to speed up the retrieval process while tuning the parameters of the elastic measure. In that context, a lower bound is derived to relate the matching of time series in downsampled representation spaces to the matching in the original space. The empirical quality of the TWED distance is evaluated on a simple classification task. Compared to Edit Distance, Dynamic Time Warping, Longest Common Subsequence, and Edit Distance with Real Penalty, TWED proves quite effective on the considered experimental task.
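
    For reference, here is a short dynamic programming sketch of TWED for scalar series with an L1 ground cost, following the recurrence the abstract describes; the default values of the stiffness nu and the deletion penalty lam are illustrative, not the paper's recommendations.

```python
import math

def twed(a, ta, b, tb, nu=0.001, lam=1.0):
    """Time Warp Edit Distance for scalar series a, b with timestamps ta, tb.
    nu controls the stiffness along the time axis; lam penalises deletions."""
    # pad with a dummy sample at time 0 so indices start at 1
    a, ta = [0.0] + list(a), [0.0] + list(ta)
    b, tb = [0.0] + list(b), [0.0] + list(tb)
    n, m = len(a), len(b)
    d = [[math.inf] * m for _ in range(n)]
    d[0][0] = 0.0
    for i in range(1, n):
        for j in range(1, m):
            # delete a_i (advance in a only)
            del_a = d[i-1][j] + abs(a[i] - a[i-1]) + nu * (ta[i] - ta[i-1]) + lam
            # delete b_j (advance in b only)
            del_b = d[i][j-1] + abs(b[j] - b[j-1]) + nu * (tb[j] - tb[j-1]) + lam
            # match a_i with b_j
            match = (d[i-1][j-1] + abs(a[i] - b[j]) + abs(a[i-1] - b[j-1])
                     + nu * (abs(ta[i] - tb[j]) + abs(ta[i-1] - tb[j-1])))
            d[i][j] = min(del_a, del_b, match)
    return d[n-1][m-1]
```

    For example, twed([1, 2, 3], [1, 2, 3], [1, 2, 4], [1, 2, 3]) returns the cost of editing one series into the other; increasing nu makes the elastic measure stiffer along the time axis.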

    Quad Meshing

    Triangle meshes are nearly ubiquitous in computer graphics, and a large body of data structures and geometry processing algorithms based on them has been developed in the literature. At the same time, quadrilateral meshes, especially semi-regular ones, have advantages for many applications, and significant progress has been made in quadrilateral mesh generation and processing in recent years. In this State of the Art Report, we discuss the advantages and problems of techniques operating on quadrilateral meshes, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrization, and remeshing.

    A new thresholding approach for automatic generation of polygonal approximations

    The present paper proposes a new algorithm for the automatic generation of polygonal approximations of 2D closed contours based on a new thresholding method. The proposal computes the significance level of the contour points using a new symmetric version of the well-known Ramer–Douglas–Peucker method, and then a new adaptive method is applied to threshold the normalized significance level of the contour points to generate the polygonal approximation. Experiments show that the new algorithm performs well when generating polygonal approximations of 2D closed contours. Furthermore, the new algorithm does not require any parameter to be tuned.
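
    The following sketch illustrates the two-stage structure the abstract describes: a per-point significance level obtained from Ramer–Douglas–Peucker recursion, followed by thresholding of the normalized levels. The plain (non-symmetric) RDP recursion and the mean-based threshold below are stand-ins; the paper's symmetric variant and its adaptive threshold are not specified in the abstract.

```python
import math

def deviation(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    num = abs((b[1] - a[1]) * p[0] - (b[0] - a[0]) * p[1] + b[0] * a[1] - b[1] * a[0])
    return num / (math.hypot(b[0] - a[0], b[1] - a[1]) or 1.0)

def significance_levels(curve):
    """Record, for each interior point, the deviation at which the
    Ramer-Douglas-Peucker recursion selects it as a split point."""
    sig = [0.0] * len(curve)
    def rdp(lo, hi):
        if hi - lo < 2:
            return
        k = max(range(lo + 1, hi), key=lambda i: deviation(curve[i], curve[lo], curve[hi]))
        sig[k] = deviation(curve[k], curve[lo], curve[hi])
        rdp(lo, k)
        rdp(k, hi)
    rdp(0, len(curve) - 1)
    return sig

def approximate(curve, threshold_fn=None):
    sig = significance_levels(curve)
    top = max(sig) or 1.0
    sig[0] = sig[-1] = top                     # always keep the endpoints
    norm = [s / top for s in sig]              # normalised significance levels
    # placeholder threshold; the paper derives it adaptively from the distribution
    t = threshold_fn(norm) if threshold_fn else sum(norm) / len(norm)
    return [p for p, s in zip(curve, norm) if s >= t]
```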

    Contribuciones sobre métodos óptimos y subóptimos de aproximaciones poligonales de curvas 2-D

    This thesis focuses on the analysis of the shape of 2D objects. In computer vision there are several visual characteristics from which information can be extracted, and one of the most widely used is the shape, or contour, of objects. With suitable processing, this characteristic makes it possible to extract information about objects, analyze scenes, and so on. However, the contour of an object contains redundant information. This excess data adds no new knowledge and should be removed, both to speed up subsequent processing and to minimize the size of the contour representation for storage or transmission. The reduction must be performed without losing information that is important for representing the original contour. A reduced version of a contour can be obtained by deleting some of its points and linking the remaining points with line segments. This reduced representation is known in the literature as a polygonal approximation, and it constitutes a compressed version of the original information. The main use of polygonal approximations is to reduce the amount of information needed to represent the contour of an object; in recent years, however, they have also been used for object recognition, with polygonal approximation algorithms applied directly to extract the feature vectors used in the learning stage. The contributions of this thesis therefore address several aspects of polygonal approximations. The first contribution improves several polygonal approximation algorithms by adding a preprocessing stage that accelerates them, even improving the quality of the solutions in less time. The second contribution proposes a new algorithm that obtains optimal polygonal approximations in a shorter time than the other optimal methods found in the literature. The third contribution proposes an approximation algorithm that obtains the optimal solution in few iterations in most cases. Finally, an improved version of the optimal polygonal approximation algorithm is proposed that solves an alternative optimization problem.
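
    For orientation, the classic dynamic programming formulation of the optimal (minimum-error) polygonal approximation problem, in the spirit of Pérez and Vidal, looks as follows. It is the O(mn^2) baseline that faster optimal algorithms such as those contributed by the thesis improve upon, not the thesis's own method.

```python
import math

def segment_error(curve, i, j):
    """Sum of squared distances from curve[i..j] to the chord (curve[i], curve[j])."""
    (x1, y1), (x2, y2) = curve[i], curve[j]
    den = math.hypot(x2 - x1, y2 - y1) or 1.0
    return sum(((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) ** 2
               for x, y in curve[i + 1:j]) / den ** 2

def optimal_approximation(curve, m):
    """Vertices of the minimum-squared-error approximation with exactly m segments."""
    n = len(curve)
    err = [[math.inf] * n for _ in range(m + 1)]
    prev = [[-1] * n for _ in range(m + 1)]
    err[0][0] = 0.0
    for k in range(1, m + 1):
        for j in range(1, n):
            for i in range(j):
                c = err[k - 1][i] + segment_error(curve, i, j)
                if c < err[k][j]:
                    err[k][j], prev[k][j] = c, i
    verts, j = [n - 1], n - 1                  # backtrack the chosen vertices
    for k in range(m, 0, -1):
        j = prev[k][j]
        verts.append(j)
    return verts[::-1], err[m][n - 1]
```

    For instance, optimal_approximation(curve, 8) returns the indices of the nine vertices whose eight segments minimize the integral squared error over an open curve.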

    Efficient dominant point detection based on discrete curve structure

    In this paper, we investigate the problem of dominant point detection on digital curves, which consists in finding points with locally maximum curvature. Building on previous studies of the decomposition of curves into sequences of discrete structures [5–7], namely maximal blurred segments of width ν [13], an initial algorithm was proposed in [14] to detect dominant points. However, that algorithm uses a heuristic strategy to identify the dominant points. We now propose a modified algorithm that replaces the heuristics with a simple angle measure. In addition, an application to polygonal simplification is proposed that reduces the number of detected dominant points by associating a weight with each of them. The experimental results demonstrate the efficiency and robustness of the proposed method.
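
    A minimal sketch of the angle-based selection step: assuming the candidate dominant points have already been obtained (here, from the maximal blurred segment decomposition, which is omitted), each candidate is scored with a plain turning angle and the flattest candidates are discarded first. The angle measure below is a stand-in for the paper's measure, and the curve is treated as open.

```python
import math

def turn_angle(p_prev, p, p_next):
    """Interior angle at p; values near pi mean p is nearly collinear."""
    a1 = math.atan2(p_prev[1] - p[1], p_prev[0] - p[0])
    a2 = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
    ang = abs(a1 - a2)
    return min(ang, 2 * math.pi - ang)

def simplify(points, target):
    """Iteratively drop the candidate whose angle is flattest (closest to pi)
    until only `target` dominant points remain."""
    pts = list(points)
    while len(pts) > target:
        k = max(range(1, len(pts) - 1),
                key=lambda i: turn_angle(pts[i - 1], pts[i], pts[i + 1]))
        del pts[k]
    return pts
```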

    Contribución al reconocimiento de objetos 2D mediante aproximaciones poligonales

    This doctoral thesis makes novel contributions to the description and interpretation stages of the 2D object recognition process. New unimodal thresholding techniques are proposed and applied to the generation of polygonal approximations, and they are compared with the classic thresholding strategies proposed by Rosin. A new method is then proposed that obtains polygonal approximations in an unsupervised, i.e. non-parametric, way; it incorporates a unimodal thresholding stage. An exhaustive analysis of the proposed method has been carried out in order to design new versions, according to the combination of the characteristics of some of its stages, considering two main strategies: point splitting and point merging. The new versions have been compared with the original method; several of them represent a considerable improvement and also outperform all the classic strategies analyzed. A final optimization stage, based on the method proposed by Masood, has been incorporated. A comparative study was then performed to select the most efficient version of each strategy, as well as the version that achieves the best overall result. The optimized versions improve on the original algorithm and on all the other versions analyzed. In summary, a new heuristic method has been developed that generates efficient polygonal approximations in an unsupervised way. This method can be used in real-time applications, overcoming the difficulties of optimal algorithms, which require a higher computational load.
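
    The unimodal thresholding ingredient can be sketched as follows, in the style of Rosin's classic rule that the thesis takes as its baseline: join the histogram peak to the last non-empty bin, and threshold at the bin farthest from that chord. Applying it to a histogram of point significance values is the assumed use here.

```python
import math

def rosin_threshold(hist):
    """Unimodal thresholding in the style of Rosin: join the histogram peak to
    the last non-empty bin and pick the bin farthest from that line."""
    p = max(range(len(hist)), key=lambda i: hist[i])      # peak index
    q = max(i for i, h in enumerate(hist) if h > 0)       # last non-empty bin
    if q <= p:
        return p
    x1, y1, x2, y2 = p, hist[p], q, hist[q]
    den = math.hypot(x2 - x1, y2 - y1)
    def dist(i):
        return abs((y2 - y1) * i - (x2 - x1) * hist[i] + x2 * y1 - y2 * x1) / den
    return max(range(p, q + 1), key=dist)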

    Fast and robust dominant point detection on digital curves

    A new and fast method for dominant point detection and polygonal representation of a discrete curve is proposed. Building on results from discrete geometry, the notion of maximal blurred segment of width ν, which is well adapted to possibly noisy and/or disconnected curves, is used. For a given width, the dominant points of a curve C are deduced from the sequence of maximal blurred segments of C in O(n log^2 n) time. Comparisons with other methods from the literature demonstrate the efficiency of our approach.

    Procedural feature generation for volumetric terrains using voxel grammars

    Terrain generation is a fundamental requirement of many computer graphics simulations, including computer games, flight simulators, and environments in feature films. There has been a considerable amount of research in this domain, ranging from fully automated to semi-automated methods. Voxel representations of 3D terrains can create rich features, such as caves and overhangs, that are not found in other terrain generation techniques. In this article, we introduce a semi-automated method for generating features for volumetric terrains using a rule-based procedural generation system. Features are generated by selecting subsets of a voxel grid as input symbols to a grammar composed of user-created operators, producing overhangs and caves from a set of simple rules. The feature generation runs on the CPU, and the GPU is used to extract a robust mesh from the volumetric dataset.
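
    As a toy illustration of the rule-based idea (the rule and all names below are invented for this sketch, not taken from the paper): an operator inspects a voxel's neighbourhood and rewrites the grid, and the grammar repeatedly applies its operators to the solid voxels, which play the role of input symbols.

```python
import numpy as np

def overhang_rule(grid, x, y, z):
    """Toy operator: if a solid voxel has air below and solid above,
    carve one voxel to its side to exaggerate the overhang."""
    if grid[x, y, z] and z > 0 and not grid[x, y, z - 1] \
            and z + 1 < grid.shape[2] and grid[x, y, z + 1]:
        grid[max(x - 1, 0), y, z] = False

def apply_grammar(grid, rules, iterations=3):
    """Repeatedly apply every rule to every solid voxel."""
    for _ in range(iterations):
        for x, y, z in np.argwhere(grid):     # snapshot of current solid voxels
            for rule in rules:
                rule(grid, x, y, z)
    return grid
```

    Starting from, say, a heightmap-filled boolean grid, a few iterations of such rules carve overhang-like features that a pure heightmap cannot represent.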