15 research outputs found

    Efficiently parallelised algorithm to find isoptic surface of polyhedral meshes

    The isoptic surface of a three-dimensional shape is defined in [1] as the generalization of the isoptics of curves. The authors of that paper also presented an algorithm to determine the isoptic surfaces of convex meshes. In [9], new search algorithms are provided to find points of the isoptic surface of a triangulated model in E³; these algorithms work for concave shapes as well. In this paper, we present a faster, simpler, and efficiently parallelised version of the algorithm of [9] that can be used to search for points of the isoptic surface of a given closed polyhedral mesh, taking advantage of the computing capabilities of high-performance graphics cards and the benefits of nested parallelism. For the simultaneous computations, NVIDIA's Compute Unified Device Architecture (CUDA) was used. Our experiments show speedups of up to 100 times with the new parallel algorithm.
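
    A minimal sketch of the nested-parallelism pattern mentioned above, assuming a CUDA dynamic-parallelism setup: one parent thread per candidate point launches a child grid over all mesh triangles. The per-triangle contribution, the kernel names (candidateKernel, triangleKernel), the tiny test data, and the compile flags are illustrative assumptions, not the paper's isoptic criterion or actual implementation.

```cuda
// Hedged sketch of CUDA nested (dynamic) parallelism: a parent thread per
// candidate point launches a child grid over the triangles of a mesh.
// Compile with: nvcc -rdc=true -arch=sm_60 sketch.cu -lcudadevrt
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct Tri { float3 a, b, c; };

// Child kernel: one thread per triangle accumulates its contribution as seen
// from candidate point p (placeholder metric, reduced with an atomic add).
__global__ void triangleKernel(const Tri* tris, int nTris, float3 p, float* acc) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= nTris) return;
    float3 d = make_float3(tris[t].a.x - p.x, tris[t].a.y - p.y, tris[t].a.z - p.z);
    atomicAdd(acc, d.x * d.x + d.y * d.y + d.z * d.z);   // placeholder measure
}

// Parent kernel: one thread per candidate point; each launches a child grid,
// so the point loop and the triangle loop both run in parallel.
__global__ void candidateKernel(const float3* pts, int nPts,
                                const Tri* tris, int nTris, float* results) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPts) return;
    results[i] = 0.0f;
    int threads = 128, blocks = (nTris + threads - 1) / threads;
    triangleKernel<<<blocks, threads>>>(tris, nTris, pts[i], &results[i]);
}

int main() {
    std::vector<float3> pts = { make_float3(0, 0, 2), make_float3(0, 0, 3) };
    std::vector<Tri> tris = { { make_float3(0, 0, 0), make_float3(1, 0, 0), make_float3(0, 1, 0) } };
    float3* dPts; Tri* dTris; float* dRes;
    cudaMalloc(&dPts, pts.size() * sizeof(float3));
    cudaMalloc(&dTris, tris.size() * sizeof(Tri));
    cudaMalloc(&dRes, pts.size() * sizeof(float));
    cudaMemcpy(dPts, pts.data(), pts.size() * sizeof(float3), cudaMemcpyHostToDevice);
    cudaMemcpy(dTris, tris.data(), tris.size() * sizeof(Tri), cudaMemcpyHostToDevice);
    candidateKernel<<<1, 32>>>(dPts, (int)pts.size(), dTris, (int)tris.size(), dRes);
    std::vector<float> res(pts.size());
    cudaMemcpy(res.data(), dRes, res.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("candidate 0: %f, candidate 1: %f\n", res[0], res[1]);
    cudaFree(dPts); cudaFree(dTris); cudaFree(dRes);
    return 0;
}
```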

    Bigger Buffer k-d Trees on Multi-Many-Core Systems

    A buffer k-d tree is a k-d tree variant for massively-parallel nearest neighbor search. While buffer k-d trees provide valuable speed-ups on modern many-core devices when large numbers of both reference and query points are given, they are limited by the number of points that fit on a single device. In this work, we show how to modify the original data structure and the associated workflow to make the overall approach capable of dealing with massive data sets. We further provide a simple yet efficient way of using the multiple devices available in a single workstation. The applicability of the modified framework is demonstrated in the context of astronomy, a field that is faced with huge amounts of data.
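
    A minimal sketch of the multi-device idea described above, assuming the reference points are chunked across the GPUs of a single workstation and the per-device candidates are merged on the host. A brute-force 1-NN kernel over one-dimensional points stands in for the buffer k-d tree; all names and the toy data are hypothetical.

```cuda
// Hedged sketch: chunk reference points across all GPUs in a workstation,
// query each chunk independently, and merge the per-device candidates.
#include <algorithm>
#include <cfloat>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// One thread per query: scan this device's chunk of reference points.
__global__ void bruteForce1NN(const float* ref, int nRef,
                              const float* qry, int nQry,
                              float* bestDist, int* bestIdx) {
    int q = blockIdx.x * blockDim.x + threadIdx.x;
    if (q >= nQry) return;
    float best = FLT_MAX; int arg = -1;
    for (int r = 0; r < nRef; ++r) {
        float d = ref[r] - qry[q];               // 1-D points keep the sketch short
        d = d * d;
        if (d < best) { best = d; arg = r; }
    }
    bestDist[q] = best; bestIdx[q] = arg;
}

int main() {
    int nDev = 0; cudaGetDeviceCount(&nDev);
    if (nDev < 1) { printf("no CUDA device found\n"); return 0; }
    std::vector<float> ref = {0.1f, 0.9f, 2.5f, 3.3f, 4.8f, 6.0f};
    std::vector<float> qry = {1.0f, 5.5f};
    int nQry = (int)qry.size();
    std::vector<float> globalBest(nQry, FLT_MAX);
    std::vector<int>   globalIdx(nQry, -1);

    int chunk = ((int)ref.size() + nDev - 1) / nDev;
    for (int d = 0; d < nDev; ++d) {
        cudaSetDevice(d);
        int lo = d * chunk, n = std::min(chunk, (int)ref.size() - lo);
        if (n <= 0) break;
        float *dRef, *dQry, *dDist; int *dIdx;
        cudaMalloc(&dRef, n * sizeof(float));
        cudaMalloc(&dQry, nQry * sizeof(float));
        cudaMalloc(&dDist, nQry * sizeof(float));
        cudaMalloc(&dIdx, nQry * sizeof(int));
        cudaMemcpy(dRef, ref.data() + lo, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(dQry, qry.data(), nQry * sizeof(float), cudaMemcpyHostToDevice);
        bruteForce1NN<<<(nQry + 127) / 128, 128>>>(dRef, n, dQry, nQry, dDist, dIdx);
        std::vector<float> dist(nQry); std::vector<int> idx(nQry);
        cudaMemcpy(dist.data(), dDist, nQry * sizeof(float), cudaMemcpyDeviceToHost);
        cudaMemcpy(idx.data(), dIdx, nQry * sizeof(int), cudaMemcpyDeviceToHost);
        // Merge this device's candidates into the global result.
        for (int q = 0; q < nQry; ++q)
            if (dist[q] < globalBest[q]) { globalBest[q] = dist[q]; globalIdx[q] = lo + idx[q]; }
        cudaFree(dRef); cudaFree(dQry); cudaFree(dDist); cudaFree(dIdx);
    }
    for (int q = 0; q < nQry; ++q)
        printf("query %d -> ref %d (d^2 = %f)\n", q, globalIdx[q], globalBest[q]);
    return 0;
}
```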

    Astrophysical data mining with GPU. A case study: Genetic classification of globular clusters

    We present a multi-purpose genetic algorithm, designed and implemented with GPGPU/CUDA parallel computing technology. The model was derived from our serial CPU implementation, named GAME (Genetic Algorithm Model Experiment). It was successfully tested and validated on the detection of candidate globular clusters in deep, wide-field, single-band HST images. The GPU version of GAME will be made available to the community by integrating it into the web application DAMEWARE (DAta Mining Web Application REsource, http://dame.dsf.unina.it/beta_info.html), a public data mining service specialized in massive astrophysical data. Since genetic algorithms are inherently parallel, the GPGPU computing paradigm leads to a speedup of a factor of 200× in the training phase with respect to the CPU-based version.
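
    A minimal sketch of the "inherently parallel" step of a genetic algorithm on the GPU, assuming one CUDA thread evaluates one individual of the population. The sphere function used as fitness and the population layout are placeholder assumptions, not GAME's globular-cluster classifier.

```cuda
// Hedged sketch: evaluating every individual's fitness in its own GPU thread,
// the step that makes genetic algorithms map naturally onto GPGPU hardware.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Population stored as popSize x nGenes floats, row-major.
__global__ void evaluateFitness(const float* population, int popSize,
                                int nGenes, float* fitness) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= popSize) return;
    float sum = 0.0f;
    for (int g = 0; g < nGenes; ++g) {
        float x = population[i * nGenes + g];
        sum += x * x;                        // placeholder "sphere" fitness
    }
    fitness[i] = -sum;                       // higher is better
}

int main() {
    const int popSize = 1024, nGenes = 8;
    std::vector<float> pop(popSize * nGenes, 0.5f);
    float *dPop, *dFit;
    cudaMalloc(&dPop, pop.size() * sizeof(float));
    cudaMalloc(&dFit, popSize * sizeof(float));
    cudaMemcpy(dPop, pop.data(), pop.size() * sizeof(float), cudaMemcpyHostToDevice);
    evaluateFitness<<<(popSize + 255) / 256, 256>>>(dPop, popSize, nGenes, dFit);
    std::vector<float> fit(popSize);
    cudaMemcpy(fit.data(), dFit, popSize * sizeof(float), cudaMemcpyDeviceToHost);
    printf("fitness[0] = %f\n", fit[0]);
    cudaFree(dPop); cudaFree(dFit);
    return 0;
}
```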

    Sistema de digitalização 3D usando super-resolução em imagens RGBD (3D digitization system using super-resolution on RGBD images)

    Advisor: Prof. Dr. Luciano Silva. Co-advisor: Profª. Drª. Olga R. P. Bellon. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 10/09/2013. Includes references.
    Abstract: With the advent of new low-cost depth sensors and the increasing parallel processing power of graphics cards, there has been a significant increase in research on real-time 3D reconstruction. The IMAGO research group maintains a 3D reconstruction system for digital preservation, built around high-resolution laser scanners. To increase the flexibility of this system, our goal is to extend IMAGO's current 3D reconstruction pipeline so that models can be created with the new real-time depth sensors. Another objective is to apply a method that processes the low-quality images produced by these sensors, so that the models are reconstructed from images of higher resolution. The main goal of digital preservation is fidelity in both the geometry and the texture of the final model; computational time and cost are secondary. The new pipeline therefore comprises three stages: real-time geometric modeling, super-resolution, and high-cost 3D reconstruction. The first stage captures and stores all images in real time, using the continuously updated model only to guide the user. In the second stage, we increase the quality and resolution of the captured images so that a more faithful model can be created in the final stage, the 3D reconstruction using IMAGO's current system.
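
    A minimal sketch of what a GPU image-upscaling step could look like, assuming a naive bilinear 2x upscaling of a depth image; the dissertation's actual super-resolution method is not reproduced here, and the kernel name, scale factor, and test data are illustrative assumptions.

```cuda
// Hedged sketch: naive bilinear 2x upscaling of a depth image on the GPU,
// a stand-in for a super-resolution stage in an RGBD reconstruction pipeline.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void upscaleDepth2x(const float* src, int w, int h, float* dst) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;   // destination column
    int y = blockIdx.y * blockDim.y + threadIdx.y;   // destination row
    int W = 2 * w, H = 2 * h;
    if (x >= W || y >= H) return;
    // Map the destination pixel back into source coordinates.
    float sx = x * 0.5f, sy = y * 0.5f;
    int x0 = min((int)sx, w - 1), y0 = min((int)sy, h - 1);
    int x1 = min(x0 + 1, w - 1),  y1 = min(y0 + 1, h - 1);
    float fx = sx - x0, fy = sy - y0;
    // Bilinear blend of the four neighbouring source depths.
    float top = src[y0 * w + x0] * (1.0f - fx) + src[y0 * w + x1] * fx;
    float bot = src[y1 * w + x0] * (1.0f - fx) + src[y1 * w + x1] * fx;
    dst[y * W + x] = top * (1.0f - fy) + bot * fy;
}

int main() {
    const int w = 2, h = 2;                          // tiny example depth map
    std::vector<float> src = {1.0f, 2.0f, 3.0f, 4.0f};
    std::vector<float> dst(4 * w * h);
    float *dSrc, *dDst;
    cudaMalloc(&dSrc, src.size() * sizeof(float));
    cudaMalloc(&dDst, dst.size() * sizeof(float));
    cudaMemcpy(dSrc, src.data(), src.size() * sizeof(float), cudaMemcpyHostToDevice);
    dim3 block(16, 16), grid((2 * w + 15) / 16, (2 * h + 15) / 16);
    upscaleDepth2x<<<grid, block>>>(dSrc, w, h, dDst);
    cudaMemcpy(dst.data(), dDst, dst.size() * sizeof(float), cudaMemcpyDeviceToHost);
    printf("upscaled(0,1) = %f\n", dst[1]);
    cudaFree(dSrc); cudaFree(dDst);
    return 0;
}
```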