217 research outputs found

    Contributions on optimal and suboptimal methods for polygonal approximation of 2-D curves

    This thesis focuses on the analysis of the shape of 2D objects. In computer vision there are several sources from which information can be extracted, and one of the most important is the shape, or contour, of objects. This visual characteristic can be used to extract information, analyze a scene, etc. However, the contour of an object contains redundant information. This redundant data adds no new knowledge and should therefore be removed, both to minimize the later processing burden and to reduce the size of the contour representation for storage or transmission. This reduction must be performed without losing information that is important for representing the original contour. A reduced version of a contour can be obtained by deleting some of its points and linking the remaining points with line segments. This reduced version of a contour is known in the literature as a polygonal approximation. Polygonal approximations therefore represent a compressed version of the original information.
The main use of polygonal approximations is to reduce the amount of information needed to represent the contour of an object. In recent years, however, polygonal approximations have also been used to recognize objects: the polygonal approximation algorithms are applied directly to extract the feature vectors used in the learning phase. The contributions of this thesis address several aspects of polygonal approximations. The first contribution improves several polygonal approximation algorithms by adding a preprocessing stage that accelerates the whole method; the quality of the solutions is also improved while the computation time is reduced. The second contribution proposes a novel algorithm that obtains optimal polygonal approximations in a shorter time than the optimal methods found in the literature. The third contribution proposes a new method that, in most cases, obtains the optimal solution after few iterations. Finally, an improved version of the optimal polygonal approximation algorithm is proposed to solve an alternative optimization problem.
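    As a rough illustration of the idea (not the optimal algorithms contributed by the thesis, whose details are not given in this abstract), the classic Ramer-Douglas-Peucker heuristic reduces a contour by recursively keeping the point farthest from the current chord; a minimal Python sketch:

        import math

        def rdp(points, epsilon):
            """Ramer-Douglas-Peucker: approximate a polyline/contour by a
            subset of its points, keeping every point whose distance to the
            current chord exceeds epsilon (a classic heuristic, not the
            optimal methods proposed in the thesis)."""
            if len(points) < 3:
                return list(points)
            (x1, y1), (x2, y2) = points[0], points[-1]
            chord_len = math.hypot(x2 - x1, y2 - y1)

            def dist(p):
                # Perpendicular distance from p to the chord (or to the
                # first endpoint when the chord is degenerate).
                if chord_len == 0.0:
                    return math.hypot(p[0] - x1, p[1] - y1)
                return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / chord_len

            idx, dmax = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                            key=lambda t: t[1])
            if dmax <= epsilon:        # the chord already approximates this run
                return [points[0], points[-1]]
            left = rdp(points[:idx + 1], epsilon)   # split at the farthest point
            right = rdp(points[idx:], epsilon)
            return left[:-1] + right                # avoid duplicating the split point

    For example, rdp([(0, 0), (1, 0.1), (2, 0), (3, 5), (4, 0)], 0.5) drops the near-collinear point (1, 0.1) while keeping the spike at (3, 5).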

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that allow the computation to be performed at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. One is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly, and this process is known to be one of the heaviest burdens for any rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
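    As a hedged sketch of the ray marching baseline mentioned above: the view ray is sampled at fixed steps, accumulating emission and in-scattering attenuated by Beer-Lambert transmittance. The callbacks sigma_a, sigma_s, emitted and inscattered are hypothetical placeholders for whatever the real renderer provides:

        import math

        def ray_march(origin, direction, sigma_a, sigma_s, emitted, inscattered,
                      t_max, steps=128):
            """Minimal single-scattering ray marcher through a participating
            medium; sigma_a(p) and sigma_s(p) give absorption and scattering
            coefficients, emitted(p) and inscattered(p) source radiance at p."""
            dt = t_max / steps
            transmittance = 1.0              # Beer-Lambert attenuation so far
            radiance = 0.0
            for i in range(steps):
                t = (i + 0.5) * dt           # midpoint of the i-th step
                p = tuple(o + t * d for o, d in zip(origin, direction))
                sa, ss = sigma_a(p), sigma_s(p)
                # Radiance gathered at this sample, attenuated by the medium in front.
                radiance += transmittance * (sa * emitted(p) + ss * inscattered(p)) * dt
                transmittance *= math.exp(-(sa + ss) * dt)   # extinction over the step
                if transmittance < 1e-4:     # early exit once the medium is opaque
                    break
            return radiance

    Interactive optimizations typically attack the two costs visible here: the number of steps taken per ray and the cost of evaluating the medium at each sample.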

    The segmentation issue: general stopping criteria and specific design considerations for practical application of evolutionary algorithms

    Segmentation is a technique for the representation and approximation of data according to a set of appropriate models. These procedures have applications in many different domains, such as time series analysis, polygonal approximation and Air Traffic Control. Different heuristic and metaheuristic proposals have been introduced to deal with this issue. This thesis provides a novel multiobjective evolutionary method, analyzing the general tools required for the application of evolutionary algorithms to real problems and the specific modifications required over the different steps of general proposals to adapt them to the segmentation domain. An introduction to the domain is presented by means of the design of a specific heuristic for segmentation of Air Traffic Control (ATC) data. This domain has a series of characteristics which make it difficult to handle with traditional techniques: noisy data and a large number of measurements. The proposal works in two phases, using a pre-segmentation which introduces available domain information and then applying a standard technique over this initial technique's results. Its results on the presented domain, tested with a set of eight different representative trajectories, show competitive advantages over general approaches, which oversegment noisy data and, in some cases, exhibit poor scalability. This heuristic proposal illustrates the costly process of adapting available approaches and designing specific ones, along with the multi-objective nature of the problem, which requires the use of quality indicators for a proper comparison process. Applying evolutionary algorithms to segmentation provides several advantages, most notably because the problem dependence of heuristics makes them costly to adapt to new domains, as shown by the heuristic designed for ATC. However, the practical application of these algorithms requires the study of a topic which has received little research effort from the community: stopping criteria. An evolutionary approach should contain a dynamic procedure which can determine when stagnation has taken place and stop the algorithm accordingly (as opposed to a priori cost budgets, either in function evaluations or generations, which are usually applied for test datasets). Stopping criteria are faced for both the single- and the multi-objective case in this thesis. Single-objective stopping criteria are approached by proposing an active role for the stopping criterion, actively increasing the diversity in the variable space while tracking the updates in the fitness function. Thus, the algorithm reuses the information obtained for the stopping decision and feeds it to a stopping-prevention mechanism in order to prevent problematic situations such as early convergence. The presented algorithm has been tested on a set of 27 different functions with different characteristics regarding their dimensionality, search space, local minima, etc. The results show that the introduced mechanisms enhance the robustness of the results, due to the improved exploration and the early-convergence prevention. Multi-objective stopping criteria are faced with the use of progress indicators (comparison measures of the quality of the evolution results at different generations) and an associated data-gathering tool. The final proposal uses three different progress indicators (hypervolume, epsilon and Mutual Dominance Rate) and considers them jointly according to a decision fusion architecture. The stagnation analysis is based on the least-squares regression parameters of the indicator values, including a normality analysis as well.
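    A minimal sketch of such a regression-based fusion test (the window size, slope tolerance and vote count are illustrative assumptions, and the normality analysis is omitted):

        from statistics import mean

        def slope(values):
            """Least-squares regression slope of an indicator history, with
            the generation index as the independent variable."""
            n = len(values)
            x_bar, y_bar = (n - 1) / 2.0, mean(values)
            num = sum((x - x_bar) * (y - y_bar) for x, y in zip(range(n), values))
            den = sum((x - x_bar) ** 2 for x in range(n))
            return num / den

        def stagnated(hypervolume, epsilon_ind, mdr, window=30, tol=1e-6, votes=2):
            """Decision-fusion stop test: declare stagnation when at least
            `votes` of the three progress-indicator histories show an
            (almost) flat trend over the last `window` generations."""
            histories = (hypervolume, epsilon_ind, mdr)
            if any(len(h) < window for h in histories):
                return False
            flat = sum(abs(slope(h[-window:])) < tol for h in histories)
            return flat >= votes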
The online nature of these algorithms is highlighted: the proposal avoids the recomputation of indicator values present in other available alternatives and focuses on the simplicity of the final design, so as to reduce the cost of introducing it into existing algorithms. The proposal has been tested on instances of the DTLZ problem family, obtaining satisfactory stops with a standard set of configuration values for the technique. However, there is a lack of quantitative measures to determine the objective quality of a stop and to properly compare its value against other alternatives. The multi-objective nature of the segmentation problem is then analyzed in order to propose a multiobjective evolutionary algorithm (MOEA) to deal with it. This nature is analyzed according to a selection of available approaches, highlighting the difficulties faced in parameter configuration when guiding the processes towards the desired solution values. A multi-objective a posteriori approach such as the one presented allows the decision maker to choose, from the front of possible final solutions, the one which suits them best, simplifying this process. The presented approach chooses SPEA2 as its underlying MOEA, analyzing different representation and initialization proposals. The results have been validated against a representative set of heuristic and metaheuristic techniques, using three widely used curves from the polygonal approximation domain (chromosome, leaf and semicircle), obtaining statistically better results for almost all test cases. This initial MOEA approach had unresolved issues, such as the complexity order of the archiving technique, and also lacked the specific design considerations needed to adapt it to the application domain. These issues have been addressed through different improvements. First, an alternative representation is proposed, including partial fitness information and associated fitness-aware transformation operators (transformation operators which compute children's fitness values from their changes and the parents' partial values). A novel archiving procedure is introduced according to the bi-objective nature of the domain, one of the objectives being discrete. This leads to a relaxed Pareto dominance check, named epsilon glitches. Multi-objective local search versions of the traditional algorithms are proposed and tested for the initialization of the algorithm, along with the stopping criterion proposal, which has also been adapted to the problem characteristics. The archive size in this case is big enough to contain all the different individuals in the optimal front, so that quality assessment is simplified and a simpler mechanism can be introduced to detect stagnation, according to the improvements in each of the possible individuals. The final evolutionary proposal is scalable, requires few configuration parameters and introduces an efficient dynamic stopping criterion. Its results have been tested against the original technique and the set of heuristic and metaheuristic techniques previously used, including the three original curves and more complex versions of them (obtained with a generation mechanism based on these original shapes).
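    The abstract does not define the epsilon-glitch check precisely; under the assumption that it behaves like an additive epsilon-relaxation of Pareto dominance for a bi-objective minimization problem with one discrete objective, a sketch might look like this:

        def eps_dominates(a, b, eps=(0.01, 1.0)):
            """Epsilon-relaxed Pareto dominance (illustrative, not the exact
            'epsilon glitch' rule of the thesis): a eps-dominates b if it is
            no worse than b + eps in every objective and strictly better in
            at least one. Here eps allows, e.g., one discrete step of slack
            on the second objective."""
            no_worse = all(ai <= bi + e for ai, bi, e in zip(a, b, eps))
            strictly_better = any(ai < bi for ai, bi in zip(a, b))
            return no_worse and strictly_better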
Even though the stopping results are very satisfactory, the obtained results are slightly worse than those of the original MOEA for the three simpler problem instances with the established configuration parameters (as was expected, due to the computational effort of the a priori established number of generations and population size, based on the analysis of the algorithm's results). However, the comparison against the alternative techniques still shows the same statistically better results, and the reduced computational cost allows the method's application to a wider set of problems.
    Programa Oficial de Doctorado en Ciencia y Tecnología Informática. Committee chair: Pedro Isasi Viñuela; secretary: Rafael Martínez Tomás; member: Javier Segovia Pére

    High-speed robot control in complex environments

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1988. Bibliography: leaves 201-206. By Wyatt S. Newman, Ph.D.

    Computer Vision Problems in 3D Plant Phenotyping

    In recent years, there has been significant progress in Computer Vision based plant phenotyping (quantitative analysis of biological properties of plants) technologies. Traditional methods of plant phenotyping are destructive, manual and error-prone. Due to their non-invasive, non-contact nature and their increased accuracy, imaging techniques are becoming the state of the art in plant phenotyping. Among the several parameters of plant phenotyping, growth analysis is very important for biological inference. Automating growth analysis can accelerate throughput in crop production. This thesis contributes to the automation of plant growth analysis. First, we present a novel system for automated, non-invasive and non-contact plant growth measurement. We exploit recent advancements in sophisticated robotic technologies and near-infrared laser scanners to build a 3D imaging system, and use state-of-the-art Computer Vision algorithms to fully automate growth measurement. We have set up a gantry robot system with 7 degrees of freedom hanging from the roof of a growth chamber. The payload is a range scanner, which measures dense depth maps (raw 3D coordinate points in mm) on the surface of an object (the plant). The scanner can be moved around the plant to scan from different viewpoints by programming the robot with a specific trajectory. The sequence of overlapping scans can be aligned to obtain a full 3D structure of the plant in raw point cloud format, which can be triangulated to obtain a smooth surface (a triangular mesh) enclosing the original plant. We show the capability of the system to capture the well-known diurnal pattern of plant growth, computed from the surface area and volume of the plant meshes, for a number of plant species. Second, we propose a technique to detect branch junctions in plant point cloud data. We demonstrate that, using these junctions as feature points, correspondence estimation can be formulated as a subgraph matching problem, and better matching results than the state of the art can be achieved. This idea also removes the requirement, imposed by the original registration algorithm for complex plant data, of a priori knowledge of the rotation angles between adjacent scanning viewpoints; previously, this angle information had to be approximately known. Third, we present an algorithm to classify partially occluded leaves by their contours. In general, partial contour matching is an NP-hard problem. We propose a suboptimal matching solution and show that our method outperforms the state of the art on three public leaf datasets. We anticipate using this algorithm to track growing segmented leaves in our plant range data, even when a leaf becomes partially occluded by other plant matter over time. Finally, we perform experiments to demonstrate the capabilities and limitations of the system and highlight future research directions for Computer Vision based plant phenotyping.
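    The diurnal growth pattern mentioned above is derived from the surface area and volume of the triangulated plant meshes; a minimal sketch using the open-source trimesh library (chosen here for illustration, not necessarily the tooling used in this work):

        import trimesh

        def growth_signals(mesh_paths):
            """Surface area and volume for a time-ordered sequence of plant
            meshes (one scan per time step); plotting these against time
            exposes the diurnal growth pattern. Paths are illustrative."""
            signals = []
            for path in mesh_paths:
                mesh = trimesh.load(path)     # triangulated plant surface
                # Volume is only meaningful for a watertight (closed) mesh.
                vol = mesh.volume if mesh.is_watertight else float("nan")
                signals.append((mesh.area, vol))
            return signals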

    Automatic contouring by piecewise quadratic approximation.


    Scale-based surface understanding using diffusion smoothing

    The research discussed in this thesis is concerned with surface understanding from the viewpoint of recognition-oriented, scale-related processing based on surface curvatures and diffusion smoothing. Four problems below high-level visual processing are investigated: 1) 3-dimensional data smoothing using a diffusion process; 2) behaviour of shape features across multiple scales; 3) surface segmentation over multiple scales; and 4) symbolic description of surface features at multiple scales. In this thesis, the noisy-data smoothing problem is treated mathematically as a boundary value problem of the diffusion equation instead of the well-known Gaussian convolution. Treated this way, it provides a theoretical basis to uniformly interpret the interrelationships amongst diffusion smoothing, Gaussian smoothing, repeated averaging and spline smoothing. It also leads to solving the problem with a numerical scheme of unconditional stability, which efficiently reduces the computational complexity and preserves the signs of curvatures along the surface boundaries. Surface shapes are classified into eight types using the combinations of the signs of the Gaussian curvature K and the mean curvature H, both of which change across scale levels. The behaviour of surface shape features over multiple scale levels is discussed in terms of the stability of large shape features; the creation, persistence and fading of small shape features; the interaction between large and small features; and the structure of the behaviour of nested shape features in the KH sign image. This provides guidance for tracking the movement of shape features from fine to coarse scales and for setting up a surface shape description accordingly. A smoothed surface is partitioned into a set of regions based on curvature-sign homogeneity. Surface segmentation is posed as the problem of approximating a surface up to the degree of Gaussian and mean curvature signs using the depth data alone. How to obtain feasible solutions of this under-determined problem is discussed, including the preservation of surface curvature signs, the reason that a sculptured surface can be segmented with the KH sign image alone, and the selection of basis functions for surface fitting when obtaining the KH sign image or when region growing. A symbolic description of the segmented surface is set up at each scale level. It is composed of a dual graph and a geometrical property list for the segmented surface. The graph describes the adjacency and connectivity among different patches as topological-invariant properties that allow some object flexibility, whilst the geometrical property list is added to the graph as constraints that reduce uncertainty. With this organisation, a tower-like surface representation is obtained by tracking the movement of significant features of the segmented surface through different scale levels, from which a stable description can be extracted for inexact matching during object recognition.
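    The eight-way labelling by curvature signs is the standard KH-sign table (as in Besl and Jain's surface characterization); a small sketch, with a tolerance standing in for exact zero tests on measured data:

        def kh_label(K, H, tol=1e-6):
            """Classify a surface point into one of eight shape types from
            the signs of Gaussian curvature K and mean curvature H; tol
            treats near-zero curvatures as zero."""
            sk = 0 if abs(K) < tol else (1 if K > 0 else -1)
            sh = 0 if abs(H) < tol else (1 if H > 0 else -1)
            table = {
                (1, -1): "peak",          (1, 1): "pit",
                (0, -1): "ridge",         (0, 1): "valley",
                (-1, -1): "saddle ridge", (-1, 1): "saddle valley",
                (0, 0): "flat",           (-1, 0): "minimal surface",
            }
            # (1, 0) cannot occur, since H*H >= K rules out K > 0 with H = 0.
            return table.get((sk, sh), "impossible")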

    Temporal integration of loudness as a function of level


    Signal constellation and carrier recovery technique for voice-band modems
