64 research outputs found

    Automated Digital Machining for Parallel Processors

    When a process engineer creates a tool path, a number of fixed decisions are made that inevitably produce sub-optimal results, because it is impossible to weigh all of the trade-offs before generating the tool path. This research presents a methodology that supports the process engineer's effort to generate optimal tool paths by performing automated digital machining and analysis. The methodology automatically generates and evaluates tool paths based on parallel processing of digital part models and generalized cutting geometry. Digital part models are created by voxelizing STL files, and the resulting digital part surfaces are obtained by casting rays into the part model. Tool paths are generated from a general path template and updated based on generalized tool geometry and part surface information. The material removed by the generalized cutter as it follows the path is used to obtain path metrics, and the paths are evaluated on the metrics of material removal rate, machining time, and amount of scallop. The methodology is a parallel-processing-accelerated framework suitable for generating tool paths in parallel, enabling the process engineer to rank and select the best tool path for the job.
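The surface-extraction step described above, casting rays into a voxelized part model, can be sketched as follows. This is a minimal illustration, not code from the thesis: the grid shape, the fixed -Z ray direction, and the function name are all assumptions.

```python
import numpy as np

def top_surface_heights(voxels):
    """Cast rays along -Z through a boolean voxel model and return,
    for each (x, y) column, the index of the highest occupied voxel
    (-1 where the ray misses the part entirely)."""
    nx, ny, nz = voxels.shape
    heights = np.full((nx, ny), -1, dtype=int)
    for x in range(nx):
        for y in range(ny):
            hits = np.nonzero(voxels[x, y, :])[0]
            if hits.size:
                heights[x, y] = hits.max()  # first voxel the -Z ray strikes
    return heights

# toy part: a 4x4x4 block with a raised step on one side
part = np.zeros((4, 4, 4), dtype=bool)
part[:, :, :2] = True   # base slab, top at height index 1
part[:2, :, :3] = True  # raised step, top at height index 2
h = top_surface_heights(part)
```

A real tool-path evaluator would sample such height maps under the cutter footprint to accumulate removed material and scallop, but the per-column ray cast is the core operation.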

    Editor-assisted programming of 3-axis machine tools for part modeling, with a 3D graphical part simulator

    Advisor: Prof. Dr. Hélio Pedrini. Dissertation (Master's), Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defended: Curitiba, 30/06/2008. Bibliography: fls. 83-87. Area of concentration: Informatics. Abstract: Process planning is of great importance to mechanical manufacturing, since it rationalizes decisions so that the part to be machined is produced effectively according to the design specifications. Reducing production time and the cost of material and labor is therefore a fundamental concern. International efforts have been made to promote greater integration among design, process, and manufacturing. One such initiative is the use of a set of information, known as features, to describe the shape and attributes of a part. Traditional machine tools execute commands written in G/M language, which correspond to the machine's axis movements and tool functions. The use of features allows the part to be machined through a sequence of material-removal operations at a high level of abstraction. This work, part of a multidisciplinary project spanning Computer Science and Mechanical Engineering, describes the methodology for developing a prototype intended to assist the machining process on 3-axis machine tools through the use of features. The main modules of the prototype are part editing, model simulation, and code transmission to the machine tool. The editing module allows the insertion of the geometric parameters related to the features. After editing, the model can be visualized and evaluated by the user, and the code (program) to be interpreted by the machine tool can be generated from it. Validation of the program is facilitated by a graphical simulator. The proposed simulation features are based on the graphical representation of the cutting tools' trajectories. After verifying the program, the user can transmit it to the machine-tool controller; for data transfer, the RS-232C protocol was adopted between the serial ports of the computer and the machine tool. The part model is described by combining two representation techniques, constructive solid geometry and spatial occupancy enumeration, from a set of graphical primitives such as cubes, parallelepipeds, and cylinders. The primitives are combined to form a new solid object through an ordered sequence of Boolean operations, and a hierarchical structure is used to control the application of the Boolean operations.
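The part representation described in this abstract, spatial occupancy enumeration combined with an ordered sequence of CSG Boolean operations over primitives, can be sketched as below. The grid resolution, primitive helpers, and the stock-minus-hole example are illustrative assumptions, not the prototype's actual code.

```python
import numpy as np

def box(grid_shape, lo, hi):
    """Occupancy grid (spatial enumeration) of an axis-aligned box primitive."""
    g = np.zeros(grid_shape, dtype=bool)
    g[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True
    return g

def cylinder_z(grid_shape, center, radius, z_lo, z_hi):
    """Occupancy grid of a cylinder primitive aligned with the Z axis."""
    nx, ny, nz = grid_shape
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    disk = (xs - center[0]) ** 2 + (ys - center[1]) ** 2 <= radius ** 2
    g = np.zeros(grid_shape, dtype=bool)
    g[:, :, z_lo:z_hi] = disk[:, :, None]  # extrude the disk along Z
    return g

# ordered Boolean sequence: stock minus a drilled hole (CSG difference)
shape = (16, 16, 8)
stock = box(shape, (0, 0, 0), (16, 16, 8))
hole = cylinder_z(shape, (8, 8), 3, 0, 8)
part = stock & ~hole  # difference: remove the hole's material from the stock
```

On an occupancy grid, CSG union, intersection, and difference reduce to element-wise `|`, `&`, and `& ~`, which is what makes the combination of the two representation techniques convenient.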

    Simplification of three-dimensional computer-aided models to speed up rendering

    Visualization of three-dimensional (3D) computer-aided design models is an integral part of the design process. Large assemblies such as plant or building designs contain a substantial amount of geometric data, and the advent of mobile devices and virtual reality headsets sets new constraints on visualization performance and on the amount of geometric data. Our goal is to improve visualization performance and reduce memory consumption by simplifying 3D models while keeping the quality of the simplified output stable regardless of the geometric complexity of the input mesh. We survey the current state of 3D mesh simplification methods that use geometry decimation, and design and implement our own data structure for geometry decimation. Based on the existing research, we select and use an edge decimation method for model simplification. In order to free the user from configuring the edge decimation level per model by hand, and to retain a stable simplification quality, we propose a threshold parameter, the edge decimation cost threshold. The threshold is calculated by multiplying the length of the model's bounding box diagonal with a user-defined scale parameter. Our results show that the edge decimation cost threshold works as expected. The geometry decimation algorithm simplifies models with round surfaces at an excellent simplification rate, and, based on the edge decimation cost threshold, it terminates the decimation for models that have a large number of planar surfaces; without the threshold, the simplification quickly leads to large geometric errors. The visualization performance improvement from the simplification scales at almost the same rate as the simplification rate.
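The threshold rule stated above (bounding-box diagonal length times a user-defined scale) is simple enough to write down directly. The function names, the default scale value, and the cost metric left abstract here are assumptions; the abstract does not specify which edge-cost metric (e.g. quadric error) is used.

```python
import math

def decimation_cost_threshold(bbox_min, bbox_max, user_scale):
    """Edge decimation cost threshold from the abstract: the length of the
    model's bounding-box diagonal multiplied by a user-defined scale."""
    diagonal = math.dist(bbox_min, bbox_max)
    return diagonal * user_scale

def collapsible(edge_cost, bbox_min, bbox_max, user_scale=0.01):
    """An edge is collapsed only while its cost stays under the threshold.
    The cost metric itself is assumed to come from the chosen edge
    decimation method; it is not specified here."""
    return edge_cost < decimation_cost_threshold(bbox_min, bbox_max, user_scale)

t = decimation_cost_threshold((0, 0, 0), (3, 4, 0), 0.1)  # diagonal 5.0 -> 0.5
```

Because the threshold scales with the model's extent, the same `user_scale` yields comparable simplification quality on small and large models, which is the stability property the thesis is after.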

    Registration using Graphics Processor Unit

    Data point set registration is an important operation in coordinate metrology. Registration is the operation by which sampled point clouds are aligned with a CAD model by a 4×4 homogeneous transformation (e.g., rotation and translation). This alignment permits validation of the produced artifact's geometry. State-of-the-art metrology systems are now capable of generating thousands, if not millions, of data points during an inspection operation, demanding increased computational power to fully utilize these larger data sets. The registration process is an iterative nonlinear optimization whose execution time is directly related to the number of points processed and to the CAD model's complexity. The objective function to be minimized is the sum of the squared distances between each point in the point cloud and the closest surface in the CAD model. A brute-force approach to registration, which is often used, is to compute the minimum distance between each point and each surface in the CAD model; as point cloud sizes and CAD model complexity increase, this approach becomes intractable and inefficient. Highly efficient numerical and analytical gradient-based algorithms exist whose goal is to converge to an optimal solution in minimum time. This thesis presents a new approach that performs the registration process efficiently by employing readily available computer hardware, the graphics processing unit (GPU). The data point set registration time on the GPU shows a significant improvement (around 15-20 times) over typical CPU performance. Efficient GPU programming decreases the complexity of the steps and improves the rate of convergence of the existing algorithms. The experiments reveal the exponentially increasing execution time of the CPU and the linear scaling of the GPU across various aspects of an algorithm, and highlight the importance of the CPU in GPU programming. Finally, possible extensions of the GPU approach to higher-order and more complex coordinate metrology algorithms are discussed.
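The least-squares objective described above can be illustrated with the classic correspondence-based subproblem: given matched point pairs, the optimal rotation and translation have a closed-form SVD (Kabsch) solution. This CPU sketch is an assumption for illustration; the thesis's contribution is the GPU formulation with closest-surface (not closest-point) correspondences.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t aligning point set P to Q
    (Kabsch/SVD solution of the correspondence-based subproblem; the full
    registration iterates this with closest-surface correspondences)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard vs. reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

def registration_error(P, Q, R, t):
    """Objective from the abstract: sum of squared distances after alignment."""
    return np.sum(((R @ P.T).T + t - Q) ** 2)

# synthetic check: recover a known rotation + translation
rng = np.random.default_rng(0)
P = rng.random((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = (R_true @ P.T).T + np.array([0.5, -0.2, 0.1])
R, t = best_rigid_transform(P, Q)
```

Each point's contribution to the objective and to the closest-distance query is independent, which is exactly the structure that maps well onto GPU threads.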

    Incremental volume rendering using hierarchical compression

    Includes bibliographical references. This research is based on the thesis that efficient volume rendering of datasets hosted on the Internet can be achieved on average personal workstations. We present a new algorithm for efficient incremental rendering of volumetric datasets. The primary goal of this algorithm is to give average workstations the ability to efficiently render volume data received over relatively low-bandwidth network links in such a way that rapid user feedback is maintained. Common limitations of workstation rendering of volume data include large memory overheads, the requirement of expensive rendering hardware, and the need for high-speed processing. The rendering algorithm presented here overcomes these problems by making use of the efficient Shear-Warp Factorisation method, which does not require specialised graphics hardware. However, the original Shear-Warp algorithm suffers from a high memory overhead and does not provide for the incremental rendering required if rapid user feedback is to be maintained. Our algorithm represents the volumetric data using a hierarchical data structure that provides for incremental classification and rendering of volume data, exploiting the multiscale nature of the octree data structure. The algorithm reduces the memory footprint of the original Shear-Warp Factorisation algorithm by a factor of more than two, while maintaining good rendering performance. These factors make our octree algorithm more suitable for implementation on average desktop workstations for the purposes of interactive exploration of volume models over a network. This dissertation covers the theory and practice of developing the octree-based Shear-Warp algorithms, and then presents the results of extensive empirical testing. The results, using typical volume datasets, demonstrate the ability of the algorithm to achieve high rendering rates for both incremental and standard rendering while reducing the runtime memory requirements.
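The memory saving from the hierarchical representation comes from collapsing homogeneous regions of the volume into single octree leaves. A minimal sketch of that idea, assuming a cubic power-of-two volume and a nested-tuple tree (the dissertation's actual data structure and classification logic are richer):

```python
import numpy as np

def build_octree(vol):
    """Recursively subdivide a cubic power-of-two volume; homogeneous
    octants collapse to a single leaf, which is where the saving over a
    flat voxel layout comes from. Returns a nested-tuple tree."""
    if vol.min() == vol.max():              # uniform region -> one leaf
        return ("leaf", int(vol.flat[0]))
    h = vol.shape[0] // 2
    children = [build_octree(vol[x:x + h, y:y + h, z:z + h])
                for x in (0, h) for y in (0, h) for z in (0, h)]
    return ("node", children)

def count_leaves(tree):
    """Number of leaves, a proxy for the tree's memory footprint."""
    if tree[0] == "leaf":
        return 1
    return sum(count_leaves(c) for c in tree[1])

vol = np.zeros((8, 8, 8), dtype=np.uint8)
vol[0, 0, 0] = 255                          # single non-empty voxel
n = count_leaves(build_octree(vol))         # far fewer leaves than 512 voxels
```

For this nearly empty 8³ volume the tree needs only 22 leaves versus 512 voxels, and re-classifying the volume (e.g. after a transfer-function change) can skip entire uniform subtrees, which is what enables incremental rendering.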

    Ray Tracing Gems

    This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
    What you'll learn:
    - The latest ray tracing techniques for developing real-time applications in multiple domains
    - Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
    - How to implement high-performance graphics for interactive visualizations, games, simulations, and more
    Who this book is for:
    - Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
    - Students looking to learn about best practices in these areas
    - Enthusiasts who want to understand and experiment with their new GPU

    FASTER DISPLAY OF MECHANICAL ASSEMBLIES BY DETERMINATION OF PART VISIBILITY

    We present algorithms that greatly decrease the time it takes to display a large number of 3-D mechanical part assemblies by removing all interior parts that cannot be seen from any viewing angle. The algorithms are based on the minimum axis-aligned bounding box of each part, which avoids the complicated computations often needed to determine the interactions of the parts' geometry. The major contribution of this work is the use of exterior traces of cross sections of the bounding boxes to determine the parts' visibility. Processing time is shown to increase almost linearly with the number of parts in an assembly. A test on an assembly composed of 490 parts shows that the algorithms decrease the display time by a factor of two while incorrectly classifying only two of those parts as invisible when they should have been identified as visible.
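The starting point of the approach above is the minimum axis-aligned bounding box of each part. As a minimal illustration, here is the AABB computation together with a crude box-containment check; the containment test is only a stand-in assumption, not the paper's exterior-trace visibility test, and the part names are hypothetical.

```python
import numpy as np

def aabb(points):
    """Minimum axis-aligned bounding box of a part's vertices,
    returned as a (min_corner, max_corner) pair."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def box_inside(inner, outer):
    """Conservative check: is the inner AABB strictly inside the outer one?
    Only a crude stand-in for the paper's exterior-trace test; a fully
    enclosed box can still belong to a visible part, and vice versa."""
    (ilo, ihi), (olo, ohi) = inner, outer
    return bool(np.all(ilo > olo) and np.all(ihi < ohi))

# hypothetical two-part assembly: a bearing nested inside a housing
housing = aabb([(0, 0, 0), (10, 10, 10)])
bearing = aabb([(4, 4, 4), (6, 6, 6)])
```

Working on boxes rather than exact part geometry is what keeps the method's cost near-linear in the number of parts, at the price of the occasional misclassification the abstract reports.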

    Radiance interpolants for interactive scene editing and ray tracing

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 189-197). By Kavita Bala.

    Hierarchical and efficient representation of light sources for image rendering

    Thesis digitized by the Direction des bibliothèques, Université de Montréal