
    Computing axes of rotation for 4-axis CNC milling machine by calculating global visibility map from slice geometry

    This thesis presents a new method to compute a global visibility map (GVM) in order to determine feasible axes of rotation for 4-axis CNC machining. The choice of the 4th axis is critical because it directly determines the key manufacturing considerations: visibility, accessibility, and machinability of the part. In contrast to the considerable prior work on GVM computation, this thesis proposes an innovative approximation approach that computes the GVM from slice geometry. One advantage of the method is that it is feature-free, avoiding feature extraction and identification. In addition, the method is computationally efficient and easily parallelized for large speedups. We further present a full implementation of the approach as a critical function in an automated process planning system for rapid prototyping.
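    The core idea can be pictured with a minimal 2D sketch (a hypothetical illustration, not the thesis implementation): on a single slice, a direction is visible from a boundary point if a ray cast in that direction escapes without striking any other edge of the slice.

```python
import math

def ray_hits_segment(px, py, dx, dy, ax, ay, bx, by, eps=1e-9):
    """True if the ray from (px, py) along (dx, dy) crosses segment a-b."""
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < eps:
        return False  # ray parallel to the segment
    t = ((ax - px) * ey - (ay - py) * ex) / denom  # distance along the ray
    s = ((ax - px) * dy - (ay - py) * dx) / denom  # position along the segment
    return t > eps and 0.0 <= s <= 1.0

def visible_directions(point, edges, n_samples=360):
    """Sampled visibility at a boundary point of one slice: a direction is
    kept if its ray escapes without hitting any occluding edge."""
    px, py = point
    visible = []
    for k in range(n_samples):
        theta = 2 * math.pi * k / n_samples
        dx, dy = math.cos(theta), math.sin(theta)
        if not any(ray_hits_segment(px, py, dx, dy, a[0], a[1], b[0], b[1])
                   for a, b in edges):
            visible.append(theta)
    return visible
```

    Aggregating such per-slice results over all slices approximates the global map, and since each slice is independent, the work parallelizes trivially.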

    Computing tool accessibility of polyhedral models for toolpath planning in multi-axis machining

    This dissertation focuses on three new methods for calculating visibility and accessibility, which contribute directly to precise setup and toolpath planning in a Computer Numerical Control (CNC) machining process: 1) an approximate visibility determination method; 2) an approximate accessibility determination method; and 3) a hybrid visibility determination method with an innovative strategy for reducing computation time. All three methods are intended for polyhedral models. First, visibility defines the directions from which a surface of a 3D model is visible; it can be used to guide machine tools that reach part surfaces in material removal processes. In this work, we present a new method that calculates visibility based on 2D slices of a polyhedron, and we show how the visibility results determine a set of feasible axes of rotation for a part. This method effectively reduces a 3D problem to a 2D one and is embarrassingly parallel in nature. It is an approximate method with controllable accuracy and resolution, and its time complexity is linear in both the number of the polyhedron's facets and the number of slices. Lastly, because visibility is represented as geodesics, the method enables a quick visible-region identification technique that can locate the rough boundary of true visibility. Second, tool accessibility defines the directions from which a surface of a 3D model is accessible by a machine tool (the tool's body is included for collision avoidance). In this work, we present a method that computes a ball-end tool's accessibility as visibility on the offset surface. The results contain all feasible orientations for a surface instead of a Boolean answer, and this visibility-to-accessibility conversion is compatible with various facet-based visibility methods. Third, we introduce a hybrid method for near-exact visibility. It combines an exact visibility method and an approximate visibility method, aiming to balance computation time and accuracy. The approximate method divides the visibility space into three subspaces; the visibility of two of them is fully determined. The exact method then determines the exact visibility boundary in the subspace whose visibility remains undetermined. Since the exact method could be used alone to determine visibility, the hybrid method can be viewed as an efficiency improvement for it: it reduces the processing time of exact computation at the cost of some approximate computation overhead, and it provides control over the ratio of exact to approximate computation.
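    The visibility-to-accessibility conversion for a ball-end tool can be sketched as follows (an illustrative approximation; `visibility_fn` stands in for any facet-based visibility method and is not a name from the dissertation): the ball center sits at the surface point offset by the tool radius along the outward normal, so accessibility at the point reduces to visibility at that offset point.

```python
import math

def offset_query_point(p, n, tool_radius):
    """Center of a ball-end tool touching surface point p with outward
    normal n: the point on the offset surface at distance tool_radius."""
    norm = math.sqrt(sum(c * c for c in n))
    return tuple(pc + tool_radius * nc / norm for pc, nc in zip(p, n))

def accessible_orientations(p, n, tool_radius, visibility_fn):
    """Accessibility at p equals visibility on the offset surface: every
    direction from which the offset point is visible is a feasible,
    collision-free tool orientation (a set, not just a yes/no)."""
    return visibility_fn(offset_query_point(p, n, tool_radius))
```

    Because the conversion only changes the query point, it composes with any visibility routine that works on facet data.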

    Virtual light fields for global illumination in computer graphics

    This thesis presents novel techniques for the generation and real-time rendering of globally illuminated environments with surfaces described by arbitrary materials. Real-time rendering of globally illuminated virtual environments has long been an elusive goal. Many techniques can compute still images with full global illumination, and this remains an area of active research. Other techniques deal with only certain aspects of global illumination in order to speed up computation and rendering; these include radiosity, ray-tracing, and hybrid methods. Radiosity, due to its view-independent nature, can easily be rendered in real time after the energy equilibrium has been pre-computed and stored. Ray-tracing, however, is view-dependent and requires substantial computational resources to run in real time. Attempts at providing full global illumination at interactive rates include caching methods, fast rendering from photon maps, light fields, brute-force ray-tracing, and GPU-accelerated methods. Currently, these methods either apply only to special cases, are incomplete and exhibit poor image quality, or scale so badly that only modest scenes can be rendered in real time on current hardware. The techniques developed in this thesis extend earlier research and provide a novel, comprehensive framework for storing global illumination in a data structure - the Virtual Light Field (VLF) - that is suitable for real-time rendering. The techniques trade memory usage and pre-compute time for rapid rendering. The main weaknesses of the VLF method are targeted in this thesis: it is the expensive pre-compute stage, with best-case O(N^2) performance where N is the number of faces, that makes light propagation impractical for all but simple scenes. This is analysed, and greatly superior alternatives are presented and evaluated in terms of efficiency and error.
    Several orders of magnitude improvement in computational efficiency are achieved over the original VLF method. A novel propagation algorithm running entirely on the Graphics Processing Unit (GPU) is presented. It is incremental in that it can resolve visibility along a set of parallel rays in O(N) time, and it can produce a virtual light field for a moderately complex scene (tens of thousands of faces), with complex illumination stored in millions of elements, in minutes - and for simple scenes in seconds. It is approximate but gracefully converges to a correct solution: a linear increase in resolution results in a linear increase in computation time. Finally, a GPU rendering technique is presented which can render from Virtual Light Fields at real-time frame rates on high-resolution VR presentation devices such as the CAVE.
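    The incremental parallel-ray resolution can be pictured as a depth-buffer pass. The following is a simplified single-threaded sketch of that idea (the thesis implementation runs on the GPU, and faces would be rasterized into per-cell samples rather than supplied directly):

```python
def resolve_parallel_rays(samples, grid_w, grid_h):
    """One depth-buffer sweep. Each sample is (cell_x, cell_y, depth, face_id),
    i.e. part of a face's footprint on a grid of parallel rays. A single pass
    keeps the nearest face per ray, so cost is linear in the sample count."""
    depth = [[float("inf")] * grid_w for _ in range(grid_h)]
    winner = [[None] * grid_w for _ in range(grid_h)]
    for x, y, z, fid in samples:
        if z < depth[y][x]:        # closer than anything seen on this ray
            depth[y][x] = z
            winner[y][x] = fid
    return winner
```

    On a GPU the same update runs with one thread per ray cell, which is consistent with the O(N) behavior described above.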

    3D Mesh Simplification. A survey of algorithms and CAD model simplification tests

    Simplification of highly detailed CAD models is an important step when CAD models are visualized or otherwise utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in the form of a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real-world CAD data and characterize some new CAD-related simplification algorithms that have not been covered in previous mesh simplification reviews.
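    To make the surveyed family concrete, here is a minimal greedy edge-collapse sketch (illustrative only: it ranks collapses by edge length, whereas production simplifiers typically use quadric error metrics, and it tracks only vertices and edges, not faces):

```python
import heapq
import math

def edge_len(vertices, a, b):
    return math.dist(vertices[a], vertices[b])

def simplify(vertices, edges, target):
    """Repeatedly collapse the shortest edge to its midpoint until only
    `target` vertices remain. Inputs are a {vertex_id: (x, y, z)} dict
    and an iterable of (id, id) edge pairs; both are left unmodified."""
    vertices = dict(vertices)
    edges = {frozenset(e) for e in edges}
    heap = [(edge_len(vertices, *e), tuple(e)) for e in edges]
    heapq.heapify(heap)
    while len(vertices) > target and heap:
        _, (a, b) = heapq.heappop(heap)
        if a not in vertices or b not in vertices:
            continue  # stale entry: an endpoint was already collapsed away
        mid = tuple((p + q) / 2 for p, q in zip(vertices[a], vertices[b]))
        vertices[a] = mid          # vertex a absorbs b at the edge midpoint
        del vertices[b]
        new_edges = set()
        for e in edges:
            e = frozenset(a if v == b else v for v in e)
            if len(e) == 2:        # drop the collapsed (now degenerate) edge
                new_edges.add(e)
        edges = new_edges
        for e in edges:            # re-rank edges whose length changed
            if a in e:
                heapq.heappush(heap, (edge_len(vertices, *e), tuple(e)))
    return vertices, edges
```

    Stale heap entries are skipped lazily rather than deleted, a common priority-queue pattern that keeps each collapse cheap.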

    Large-scale Geometric Data Decomposition, Processing and Structured Mesh Generation

    Mesh generation is a fundamental and critical problem in geometric data modeling and processing. In most scientific and engineering tasks that involve numerical computations and simulations on 2D/3D regions or on curved geometric objects, discretizing or approximating the geometric data with a polygonal or polyhedral mesh is the first step of the procedure. The quality of this tessellation often dictates the accuracy, efficiency, and numerical stability of the subsequent computation. Compared with unstructured meshes, structured meshes are favored in many scientific and engineering tasks due to their good properties. However, generating a high-quality structured mesh remains challenging, especially for complex or large-scale geometric data. In industrial Computer-aided Design/Engineering (CAD/CAE) pipelines, the geometry processing needed to create a desirable structured mesh of a complex model is the most costly step; it is semi-manual and often takes up to several weeks to finish. Several technical challenges remain unsolved in existing structured mesh generation techniques. This dissertation studies the effective generation of structured meshes on large and complex geometric data. We study a general geometric computation paradigm that solves this problem via model partitioning and divide-and-conquer. To apply divide-and-conquer effectively, we study two key technical components: shape decomposition in the divide stage, and structured meshing in the conquer stage. We test our algorithm on various data sets; the results demonstrate the efficiency and effectiveness of our framework. Comparisons also show that our algorithm outperforms existing partitioning methods in final meshing quality. We also show that our pipeline scales up efficiently in an HPC environment.
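    The conquer stage can be illustrated with the simplest structured meshing primitive: filling one decomposed quad block with a grid by transfinite interpolation (shown here in its degenerate form, bilinear interpolation of the four corners, i.e. assuming straight block edges):

```python
def transfinite_patch(c00, c10, c01, c11, nu, nv):
    """Fill one quad block with an nu x nv structured grid by bilinear
    (degenerate transfinite) interpolation of its four corner points."""
    grid = []
    for i in range(nu):
        u = i / (nu - 1)
        row = []
        for j in range(nv):
            v = j / (nv - 1)
            # blend the four corners with bilinear weights
            row.append(tuple(
                (1 - u) * (1 - v) * a + u * (1 - v) * b
                + (1 - u) * v * c + u * v * d
                for a, b, c, d in zip(c00, c10, c01, c11)))
        grid.append(row)
    return grid
```

    Full transfinite interpolation blends the four boundary curves rather than just the corners, which is what lets curved block edges carry through to the interior grid.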

    Cloud geometry for passive remote sensing

    An important cause of disagreements between current climate models is a lack of understanding of cloud processes. In order to test and improve the assumptions of such models, detailed and large-scale observations of clouds are necessary. Passive remote sensing methods are well established for obtaining cloud properties over a large observation area in a short period of time. In the visible to near-infrared part of the electromagnetic spectrum, a quick measurement process is achieved by using the sun as a high-intensity light source to illuminate a cloud scene and by taking simultaneous measurements on all pixels of an imaging sensor. Since the sun cannot be controlled as a light source, it is not possible to measure the time light travels from source to cloud to sensor, which is how active remote sensing determines distance. Active light sources, however, do not provide enough radiant energy to illuminate a large scene, which would be required to observe it in an instant; thus passive imaging remains an important remote sensing method. Distance information, and hence cloud surface location, is nonetheless crucial: cloud fraction and cloud optical thickness largely determine the cloud radiative effect, and cloud height primarily determines a cloud's influence on the Earth's thermal radiation budget. With the ever-increasing spatial resolution of passive remote sensing methods, accurate cloud surface location information becomes more important still, because the largest source of retrieval uncertainty at this spatial scale, the influence of 3D radiative transfer effects, can be reduced using this information. This work shows how the missing location information is derived from passive remote sensing.
    Using all sensors of the improved hyperspectral and polarization-resolving imaging system specMACS, a unified dataset is created that includes classical hyperspectral measurements as well as cloud surface location information and derived properties. This thesis shows how RGB cameras are used to accurately derive cloud surface geometry with stereo techniques, complementing the passive remote sensing of cloud microphysics on board the German High-Altitude Long-Range research aircraft (HALO). Measured surface locations are processed into a connected surface representation, which in turn is used to assign height and location to other passive remote sensing observations. Furthermore, cloud surface orientation and a geometric shadow mask are derived, supplementing microphysical retrieval methods. The final system is able to accurately map visible cloud surfaces while flying above cloud fields. The impact of the new geometry information on microphysical retrieval uncertainty is studied using theoretical radiative transfer simulations and measurements. It is found that in some cases, information about surface orientation makes it possible to improve classical cloud microphysical retrieval methods. Furthermore, surface information helps to identify measurement regions where good microphysical retrieval quality is expected; by excluding likely biased regions, the overall microphysical retrieval uncertainty can be reduced. Additionally, using the same instrument payload and based on knowledge of the 3D cloud surface, new approaches become possible for retrieving cloud droplet radius by exploiting measurements of parts of the polarized angular scattering phase function.
    The necessary setup and improvements of the hyperspectral and polarization-resolving measurement system specMACS, developed over the course of four airborne field campaigns with the HALO research aircraft, are also introduced in this thesis.
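    The stereo derivation of cloud surface geometry rests on the classic rectified two-camera relation z = f * B / d, where f is the focal length in pixels, B the camera baseline, and d the disparity between matched pixels. A minimal sketch with hypothetical camera parameters (not specMACS calibration values):

```python
def triangulate(x_px, y_px, disparity_px, focal_px, baseline_m, cx, cy):
    """Rectified stereo: depth z = f * B / d, then back-project the pixel
    through the pinhole model to a 3D point in camera coordinates."""
    z = focal_px * baseline_m / disparity_px
    x = (x_px - cx) * z / focal_px
    y = (y_px - cy) * z / focal_px
    return (x, y, z)
```

    Each matched pixel pair between the two RGB cameras yields one 3D surface point; aggregated along the flight track, such points form the cloud of samples that is meshed into the connected surface representation described above.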