
    Three dimensional extension of Bresenham’s algorithm with Voronoi diagram

    Bresenham’s algorithm for plotting a two-dimensional line segment is elegant and efficient in its deployment of mid-point comparison and integer arithmetic. It is natural to investigate its three-dimensional extensions. In doing so, this paper uncovers the reason there is little prior work. The concept of the mid-point in a unit interval generalizes to that of nearest neighbours, involving a Voronoi diagram. Algorithmically, there are challenges. While a unit interval in two dimensions becomes a unit square in three dimensions, “squaring” the number of choices in Bresenham’s algorithm is shown to have difficulties. In this paper, the three-dimensional extension is based on the main idea of Bresenham’s algorithm: minimizing the distance between the line and the grid points. The structure of the Voronoi diagram is presented for the grid points to which the line may be approximated. The use of integer arithmetic and symmetry to raise the computational efficiency of the three-dimensional extension is also investigated.
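
    As background, here is a minimal sketch of the classic two-dimensional algorithm being generalized, restricted to lines with slope in [0, 1] and x0 < x1; the function name and this restriction are illustrative, not taken from the paper:

        def bresenham_2d(x0, y0, x1, y1):
            """Integer-only Bresenham for a line with slope in [0, 1]."""
            points = []
            dx, dy = x1 - x0, y1 - y0
            err = 2 * dy - dx          # decision variable at the first mid-point
            y = y0
            for x in range(x0, x1 + 1):
                points.append((x, y))
                if err > 0:            # mid-point lies below the true line: step up
                    y += 1
                    err -= 2 * dx
                err += 2 * dy
            return points

    Doubling the error term keeps every comparison in integer arithmetic, the property whose preservation in three dimensions the paper investigates.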

    Structured resampling of hole contours on 3D surfaces of free-form objects using Bresenham

    The integration stage of the three-dimensional reconstruction of free-form objects requires the description, analysis and correction of holes in 3D surfaces. Certain quantitative evaluations in this area require data sets that are regularly spaced or stored in structures that guarantee this property, for example voxels, octrees or structured meshes. Achieving this requires resampling the points that make up the hole contour on the 3D surface. This work describes a method for obtaining structured point sets from the hole-contour data of free-form objects. The method begins by fitting a NURBS curve to the initial point set in order to ensure the smoothness of the contour, which yields a set of fitted points. Finally, Bresenham's discretization algorithm is used to obtain the structured point set. The results show that the developed method ensures that the final structured point set preserves the shape of the original contour with a high level of detail.
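
    As an illustration of the final discretization step, here is a hedged two-dimensional sketch that rasterizes consecutive fitted points with an all-octant Bresenham walk; the actual method operates on 3D surface contours, and the function name and uniqueness handling are assumptions:

        def resample_contour(fitted_pts):
            """Discretize a smoothed contour, given as integer (x, y) points
            sampled from the fitted curve, onto a regular grid."""
            structured = []
            for (x0, y0), (x1, y1) in zip(fitted_pts, fitted_pts[1:]):
                dx, sx = abs(x1 - x0), 1 if x0 < x1 else -1
                dy, sy = -abs(y1 - y0), 1 if y0 < y1 else -1
                err = dx + dy
                x, y = x0, y0
                while True:
                    if (x, y) not in structured:   # keep points unique, in order
                        structured.append((x, y))
                    if (x, y) == (x1, y1):
                        break
                    e2 = 2 * err
                    if e2 >= dy:
                        err += dy
                        x += sx
                    if e2 <= dx:
                        err += dx
                        y += sy
            return structured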

    Efficient localization and mapping for robotics: algorithms and tools

    Joint doctoral programme in Informatics. One of the most basic perception problems in robotics is the ability to estimate the pose of a mobile robot relative to its environment. This problem is known as mobile robot localization, and its accuracy and efficiency have a direct impact on all systems that depend on localization. In this thesis, we address the localization problem by proposing an algorithm based on scan matching with robust non-linear least-squares optimization on a manifold, relying on a continuous likelihood field as the measurement model. This solution offers a noticeable improvement in computational efficiency without losing accuracy. Associated with localization is the problem of creating a geometric representation (or map) of the environment from the available measurements, a problem known as mapping. In mapping, the most popular geometric representation is the volumetric grid, which quantizes space into cubic volumes of equal size. A regular volumetric grid implementation offers direct and fast access to data but requires a substantial amount of allocated memory. Therefore, in this thesis, we propose a hybrid data structure that combines a sparse division of space with a dense subdivision of space, offering efficient access times with reduced memory allocation. Additionally, it offers an online data compression mechanism to further reduce memory usage, and an implicit data-sharing structure that efficiently duplicates data when needed using a thread-safe copy-on-write strategy. The implementation is available as a software library that provides a framework for creating models based on volumetric grids, e.g. occupancy grids. Because the model is separated from space management, all features of the sparse-dense approach are available to every model implemented with the framework. Mapping is a complex problem, considering that localization and mapping have to be solved simultaneously. This problem, known as simultaneous localization and mapping (SLAM), tends to consume considerable resources as the mapping quality requirements increase. As an effort to increase the efficiency of SLAM, this thesis presents two SLAM solutions. The first adapts our localization algorithm to incremental mapping, which, in combination with the sparse-dense framework, provides a computationally efficient online SLAM solution. Using a SLAM benchmark, the obtained results are compared with other solutions found in the literature; the comparison shows that our solution provides good efficiency without compromising accuracy. The second approach combines our online SLAM with a Rao-Blackwellized particle filter to propose a highly computationally efficient full SLAM solution. It includes an improved proposal distribution with scan-matching pose refinement, adaptive resampling with smoothed importance weights, efficient sharing of data between sibling particles, and multithreading support.
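
    As a rough illustration of the sparse-dense idea with copy-on-write sharing, here is a hedged two-dimensional sketch; the class and method names are assumptions, not the thesis's actual library API:

        import numpy as np

        BLOCK = 16  # dense block edge length in cells; an illustrative choice

        class SparseDenseGrid:
            def __init__(self):
                self.blocks = {}     # (bx, by) -> dense BLOCK x BLOCK array
                self.shared = set()  # block keys currently shared with a clone

            def get(self, x, y, default=0.0):
                blk = self.blocks.get((x // BLOCK, y // BLOCK))
                return default if blk is None else blk[x % BLOCK, y % BLOCK]

            def set(self, x, y, value):
                key = (x // BLOCK, y // BLOCK)
                blk = self.blocks.get(key)
                if blk is None:
                    blk = self.blocks[key] = np.zeros((BLOCK, BLOCK))
                elif key in self.shared:
                    # Copy-on-write: duplicate a shared block on first write.
                    blk = self.blocks[key] = blk.copy()
                    self.shared.discard(key)
                blk[x % BLOCK, y % BLOCK] = value

            def clone(self):
                # Cheap duplication, e.g. for sibling particles: only the
                # block index is copied; the dense blocks stay shared.
                other = SparseDenseGrid()
                other.blocks = dict(self.blocks)
                other.shared = set(self.blocks)
                self.shared |= set(self.blocks)
                return other

    Only occupied blocks allocate dense storage, so memory scales with the observed part of the environment rather than with its bounding box.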

    An interactive approach to SLAM


    Fast Volume Rendering and Deformation Algorithms

    Volume rendering is a technique for the simultaneous visualization of surfaces and inner structures of objects. However, the huge number of volume primitives (voxels) in a volume leads to high computational cost. In this dissertation I developed two algorithms for the acceleration of volume rendering and volume deformation. The first algorithm accelerates the ray casting of volumes. Previous ray-casting acceleration techniques such as space-leaping and early ray termination are only efficient when most voxels in a volume are either opaque or transparent. When many voxels are semi-transparent, the rendering time increases considerably. Our new algorithm improves the performance of ray casting of semi-transparently mapped volumes by exploiting opacity coherency in object space, leading to a speedup factor between 1.90 and 3.49 in rendering semi-transparent volumes. The acceleration is realized with the help of pre-computed coherency distances. We developed an efficient algorithm to encode the coherency information, which requires less than 12 seconds for data sets with about 8 million voxels. The second algorithm is for volume deformation. Unlike traditional methods, our method incorporates the two stages of volume deformation, i.e. deformation and rendering, into a unified process. Instead of deforming each voxel to generate an intermediate deformed volume, the algorithm follows inversely deformed rays to generate the desired deformation. The calculations and memory for generating the intermediate volume are thus saved. Deformation continuity is achieved by adaptive ray division that matches the amplitude of the local deformation. We propose approaches for shading and opacity adjustment which guarantee the visual plausibility of the deformation results. We achieve an additional deformation speedup factor of 2.34 to 6.58 by incorporating early ray termination, space-leaping and the coherency acceleration technique in the new deformation algorithm.
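
    To illustrate how pre-computed coherency distances can combine with early ray termination, here is a hedged one-dimensional sketch of front-to-back compositing; the sample representation, the coherency encoding and the threshold are illustrative assumptions:

        def composite_ray(samples, coherency, opacity_threshold=0.95):
            """samples[i] is the (color, alpha) of unit step i along the ray;
            coherency[i] is the number of steps over which opacity is
            roughly constant, allowing them to be composited in one go."""
            color, alpha = 0.0, 0.0
            i = 0
            while i < len(samples) and alpha < opacity_threshold:  # early ray termination
                c, a = samples[i]
                run = max(1, coherency[i])
                # Composite `run` identical samples in closed form:
                # transparency decays geometrically as (1 - a) ** run.
                a_run = 1.0 - (1.0 - a) ** run
                color += (1.0 - alpha) * c * a_run
                alpha += (1.0 - alpha) * a_run
                i += run
            return color, alpha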

    Visual SLAM using straight lines

    The present thesis focuses on the problem of Simultaneous Localisation and Mapping (SLAM) using only visual data (VSLAM): concurrently estimating the position of a moving camera and building a consistent map of the environment. Since implementing a whole VSLAM system is out of the scope of a degree thesis, the main aim is to improve an existing visual SLAM system by complementing the commonly used point features with straight-line primitives. This enables more accurate localization in environments with few feature points, such as corridors. As a foundation for the project, ScaViSLAM by Strasdat et al. is used, a state-of-the-art real-time visual SLAM framework. Since it currently only supports stereo and RGB-D systems, a monocular approach will be researched, as well as its integration as a ROS package in order to deploy it on a mobile robot. For the experimental results, the Care-O-bot service robot developed by Fraunhofer IPA will be used.

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes, because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location, and, similarly, gaps occur where deformation stretches the elements further than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The difficulty with this technique is that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution. This thesis investigates the deformation and rendering of massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray-casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in the fields of real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton which is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
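
    As a rough sketch of the region-of-interest sampling described above, here is a minimal hierarchical lookup that stops descending once a node's extent falls below the required detail; the node layout and coordinate convention are assumptions:

        class OctreeNode:
            def __init__(self, value, size, children=None):
                self.value = value        # filtered value for the whole region
                self.size = size          # edge length of the region
                self.children = children  # None for a leaf, else 8 subnodes

        def sample(node, point, detail):
            """Return the stored value at `point` (coordinates local to the
            node), resolving distant regions from coarser levels."""
            x, y, z = point
            while node.children is not None and node.size > detail:
                half = node.size / 2
                idx = (x >= half) * 4 + (y >= half) * 2 + (z >= half)
                x, y, z = x % half, y % half, z % half
                node = node.children[idx]
            return node.value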

    Context Exploitation in Data Fusion

    Complex and dynamic environments constitute a challenge for existing tracking algorithms. For this reason, modern solutions try to utilize any available information that could help to constrain, improve or explain the measurements. So-called Context Information (CI) is understood as information that surrounds an element of interest, knowledge of which may help in understanding the (estimated) situation and in reacting to it. However, context discovery and exploitation are still largely unexplored research topics. Until now, context has been extensively exploited as a parameter in system and measurement models, which has led to the development of numerous approaches for linear or non-linear constrained estimation and target tracking. More specifically, spatial or static context is the most common source of ambient information, i.e. features, utilized for the recursive enhancement of the state variables in either the prediction or the measurement update of the filters. In the case of multiple-model estimators, context can be related not only to the state but also to a certain mode of the filter. Common practice for multiple-model scenarios is to represent states and context as a joint distribution of Gaussian mixtures; these approaches are commonly referred to as joint tracking and classification. Alternatively, the usefulness of context has also been demonstrated in aiding measurement data association. The process of formulating a hypothesis that assigns a particular measurement to a track is traditionally governed by empirical knowledge of the noise characteristics of the sensors and the operating environment, i.e. the probability of detection, false alarms and clutter noise, and can be further enhanced by conditioning on context. We believe that interactions between the environment and the object can be classified into actions, activities and intents, and formed into structured graphs with contextual links translated into arcs. By learning the environment model we will be able to predict the target's future actions based on its past observations. The probability of a target's future action can be utilized in the fusion process to adjust the tracker's confidence in measurements. By incorporating contextual knowledge of the environment, in the form of a likelihood function, in the filter measurement update step, we have been able to reduce the uncertainties of the tracking solution and improve the consistency of the track. The promising results demonstrate that the fusion of CI brings a significant performance improvement in comparison to regular tracking approaches.
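
    To make the last point concrete, here is a minimal sketch of a context-weighted measurement update in a particle filter; meas_lik and context_lik are assumed callables (e.g. a context likelihood that is near zero off a road map), and none of these names come from the paper:

        import numpy as np

        def context_weighted_update(particles, weights, z, meas_lik, context_lik):
            """Combine the sensor likelihood p(z | x) with a context
            likelihood p(x | CI) so that hypotheses implausible under the
            context are suppressed."""
            for i, x in enumerate(particles):
                weights[i] *= meas_lik(z, x) * context_lik(x)
            return weights / weights.sum()   # renormalize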