
    AROMA: Automatic Generation of Radio Maps for Localization Systems

    WLAN localization has recently become an active research field. Because WLANs are widely deployed, WLAN localization offers ubiquitous coverage and adds value to the wireless network by providing the locations of its users without any additional hardware. However, WLAN localization systems usually require constructing a radio map, which is a major barrier to their deployment. The radio map stores information about the signal strength of different signal streams at selected locations in the site of interest. Constructing a radio map typically involves manual measurements and calibration, making it a tedious and time-consuming operation. In this paper, we present the AROMA system, which automatically constructs accurate active and passive radio maps for both device-based and device-free WLAN localization systems. AROMA has three main goals: high accuracy, low computational requirements, and minimal user overhead. To achieve high accuracy, AROMA uses 3D ray tracing enhanced with the uniform theory of diffraction (UTD) to model the electric field behavior and the human shadowing effect. AROMA also automates a number of routine tasks, such as importing building models and sampling the area of interest, to reduce the user's overhead. Finally, AROMA uses a number of optimization techniques to reduce the computational requirements. We present our system architecture and describe the details of the components that allow AROMA to achieve its goals. We evaluate AROMA in two different testbeds. Our experiments show that the predicted signal strength differs from the measurements by a maximum average absolute error of 3.18 dBm, achieving a maximum localization error of 2.44 m for both the device-based and device-free cases. Comment: 14 pages, 17 figures
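    AROMA's full 3D ray-tracing/UTD pipeline is far beyond a short snippet, but its end product, a radio map, is simple to illustrate. The sketch below is a minimal, hypothetical example that fills a grid of sample locations with signal strength predicted by a plain log-distance path-loss model; the access-point positions, the path-loss exponent, and all names are illustrative assumptions, not AROMA's actual model or API.

```python
import numpy as np

# Minimal radio-map sketch (illustrative only, not AROMA's ray-tracing/UTD model):
# predict RSS at grid points from each access point with a log-distance
# path-loss model: RSS(d) = P0 - 10 * n * log10(d / d0).

P0 = -40.0   # dBm at reference distance d0 (assumed)
D0 = 1.0     # reference distance in meters
N_EXP = 3.0  # path-loss exponent typical of indoor sites (assumed)

aps = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])  # AP positions (m), assumed

# Sample the area of interest on a uniform grid (AROMA automates this step).
xs, ys = np.meshgrid(np.linspace(0, 20, 41), np.linspace(0, 15, 31))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)

# Distance from every grid point to every AP, clipped to avoid log(0).
d = np.maximum(np.linalg.norm(grid[:, None, :] - aps[None, :, :], axis=2), D0)

# radio_map[i, j] = predicted RSS (dBm) at grid point i from AP j.
radio_map = P0 - 10.0 * N_EXP * np.log10(d / D0)

print(radio_map.shape)   # (1271, 3): one RSS signature per sampled location
print(radio_map[0])      # predicted signature at the first grid point
```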

    An optical navigation filter simulator for a CubeSat mission to Didymos binary asteroid system

    AIDA (Asteroid Impact and Deflection Assessment) is a joint NASA-ESA mission that will operate within the 65803 Didymos binary system; its main purpose is to test and investigate the kinetic impact technique for deflecting asteroid trajectories. HERA, the "mother" spacecraft designed by ESA, will collect data on the chemical and physical composition of the binary system and on the characteristics of the impact between DART, the impactor spacecraft built and operated by NASA, and the smaller of the two bodies that compose Didymos, which should occur around October 2022. The HERA spacecraft will carry high-level technology on board, including some CubeSats. This panorama also includes the DustCube mission, a project proposal for a CubeSat whose main objective is to assist HERA in the acquisition of data. This thesis, as part of the DustCube project, investigates the autonomous navigation of the CubeSat within the Didymos system, in particular through the development of a navigation filter based on optical observables. Using images gathered by a pair of infrared cameras, both line-of-sight (LoS) and range measurements are retrieved and fed to an Extended Kalman Filter. Results show that, even with a reduced dynamical model inside the filter, the expected position accuracy is below the required 10 meters
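    A navigation filter of this kind follows the standard predict/update cycle of the Extended Kalman Filter. The sketch below is a minimal, self-contained EKF step with a single range measurement to a known landmark; the constant-velocity dynamics, noise levels, and landmark position are illustrative assumptions, not the thesis's actual LoS-plus-range filter.

```python
import numpy as np

# Minimal EKF sketch: constant-velocity motion, one range measurement to a
# known beacon (illustrative stand-in for the thesis's optical filter).
dt = 1.0
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])   # state transition
Q = 1e-4 * np.eye(6)                            # process noise (assumed)
R = np.array([[0.5 ** 2]])                      # range noise, 0.5 m std (assumed)
landmark = np.array([100.0, 0.0, 0.0])          # known beacon position (assumed)

x = np.zeros(6)   # state: position (m) and velocity (m/s)
P = np.eye(6)     # state covariance

def ekf_step(x, P, z):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Measurement model: h(x) = ||p - landmark||, linearized about the prediction.
    diff = x[:3] - landmark
    rho = np.linalg.norm(diff)
    H = np.zeros((1, 6))
    H[0, :3] = diff / rho                       # Jacobian of range w.r.t. position
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - rho)).ravel()
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = ekf_step(x, P, z=99.2)                   # one synthetic range measurement
print(x[:3], np.sqrt(np.diag(P)[:3]))           # estimate and 1-sigma position bounds
```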

    Integral transport based deterministic brachytherapy dose calculations

    Brachytherapy usually refers to a category of procedures in which radiation sources are implanted into the tumor, or into its close vicinity, to deliver high radiation doses to the tumor. Feedback about the dose-rate distribution during the surgery is desired to guarantee dose coverage and the success of a treatment. Calculation speed is crucial in this attempt, and traditional Monte Carlo methods are not suitable for this purpose. We developed a deterministic algorithm for computing three-dimensional brachytherapy dose distributions. The algorithm is based on the integral transport equation, which is solved by a Neumann series together with spatial, angular, and energy discretizations. A source scheme and a transport scheme are the two steps used to calculate the scattered photon flux. The algorithm can compute dose distributions for multiple isotropic point and/or volumetric sources in a homogeneous or heterogeneous medium. Two seed models, model 2301 and model 6711, were studied; the results for these seeds have been benchmarked against results from the literature and against MCNP calculations. We included fluorescence radiation physics to handle the silver fluorescence in seed model 6711. We also designed and implemented a parallel algorithm to speed up the calculations; this parallel computing capability provides a means to compute the dose distribution for multiple seeds simultaneously. We can therefore compute dose distributions for a large set of seeds without resorting to superposition, which is the current technique for calculating the overall dose distribution from multiple seeds. This provides a way to study the strong heterogeneity and shadow effects induced by the presence of multiple seeds in an interstitial brachytherapy implant. By studying an 81-seed problem, we observed that superposition of single-seed dose distributions can lead to at least 5-10% overestimation around implanted seeds. This provides further evidence that the shadow effect has a significant impact on the dose distribution and should be considered in brachytherapy dosimetry
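    The Neumann-series solution mentioned here has a standard generic form; the sketch below is consistent with the abstract's description, though the precise kernel depends on the authors' discretization. The scattered flux is built by repeatedly applying the transport-scattering operator to the uncollided flux:

```latex
% Neumann-series solution of the integral transport equation (generic form).
% \phi_0 is the uncollided flux transported directly from the source;
% \mathcal{K} performs one scattering-plus-transport step (an integral over
% space, angle, and energy with the scattering kernel and attenuation).
\phi = \phi_0 + \mathcal{K}\phi_0 + \mathcal{K}^2\phi_0 + \cdots
     = \sum_{n=0}^{\infty} \mathcal{K}^n \phi_0 ,
\qquad
\phi = \phi_0 + \mathcal{K}\phi \quad \text{(equivalent fixed-point form)} .
```

    In practice the series is truncated once the next term's contribution falls below a tolerance; convergence is guaranteed because each scattering event attenuates the photon population, so successive terms shrink.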

    Morphodynamic Equilibria of Embayed Beach Systems

    Embayed beaches are well known for the prominent curvature of their shorelines and are often observed in states of dynamic or static equilibrium. These equilibrium states are typically assumed to be influenced by headland geometry, cellular circulation patterns, wave obliquity at the shoreline, and diffraction in and around the shadow zone. One of the main aims of this study is to gain a comprehensive understanding of the role of (i) wave forcing, (ii) environmental conditions and (iii) the geological setting in long-term embayed beach evolution. To this end, a state-of-the-art morphodynamic model was used to simulate the evolution of a schematic embayment under idealized wave forcing conditions. Wave forcing is varied across a mixture of time-invariant and time-varying cases. Environmental conditions are varied by changing sediment size, tidal amplitude and mean wave height. The geological setting is varied by changing the angle of obliquity of the waves and the bay width. Several wave climate variables influence the distribution of wave energy throughout the bay and in the shadow zone: wave direction, directional spreading and wave height. Diffraction is shown to be dominant only when the incoming wave conditions are both directionally narrow-banded and highly oblique. Nevertheless, time-varying wave directions (with as little as 6% variability) can account for shoreline curvature in the shadow zone. Changes in environmental conditions and geological setting generally affect the rate of development of the bay as well as its equilibrium size. For example, increased tidal amplitude enlarges the shadow zone due to modulation of the wave energy in this area, and wider bays require an exponentially larger period of time to attain equilibrium. Progressive weakening of the residual longshore current and sediment transport as the bay develops is shown to be consistently related to long-term, non-uniform shoreline cutback (beach rotation). Hence, the curvature of the shoreline planform is primarily due to weakened shoreline erosion processes resulting from beach rotation. The research aims are extended to investigate seasonal and event-driven changes based on real-world cases. Here, the model has been used to reconstruct the medium-term, quasi-equilibrium morphology of a bay by discretising the measured wave climate variability, in terms of wave heights and directions, into several representative wave conditions. The effect of extreme events appeared to be balanced by average forcing conditions occurring over a longer period of time. Additionally, the equilibrium bathymetry is largely determined by wave direction variability; a high level of directional resolution is therefore necessary to obtain accurate results. A nine-month field campaign was conducted at the embayed beaches of Tairua and Pauanui, Coromandel Peninsula, New Zealand. During this period the beaches exhibited highly dynamic behaviour in response to storm events, characterised by non-uniform cross-shore sediment movement within the bay. The residual sediment transport pathways between the surf zone and the adjacent beach and shoaling zones will be determined in future work
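    The wave-climate discretisation step described above, reducing a measured record to a handful of representative conditions, is straightforward to sketch. The code below is an illustrative assumption of how such binning might look: it groups synthetic wave records into direction and height classes and returns one energy-weighted representative condition per occupied bin. It is not the thesis's actual reduction scheme.

```python
import numpy as np

# Illustrative wave-climate reduction: bin records by (direction, height)
# and return an energy-weighted representative wave condition per bin.
rng = np.random.default_rng(0)
hs = rng.rayleigh(1.0, 5000)                  # significant wave height (m), synthetic
theta = rng.normal(20.0, 15.0, 5000)          # wave direction (deg), synthetic

dir_edges = np.arange(-40, 81, 20)            # direction classes (assumed)
h_edges = np.array([0.0, 1.0, 2.0, np.inf])   # height classes (assumed)

cases = []
for i in range(len(dir_edges) - 1):
    for j in range(len(h_edges) - 1):
        mask = ((theta >= dir_edges[i]) & (theta < dir_edges[i + 1]) &
                (hs >= h_edges[j]) & (hs < h_edges[j + 1]))
        if not mask.any():
            continue
        w = hs[mask] ** 2                     # wave energy ~ H^2 as weight
        cases.append({
            "prob": mask.mean(),                        # occurrence probability
            "H": np.sqrt(np.average(hs[mask] ** 2)),    # energy-equivalent height
            "dir": np.average(theta[mask], weights=w),  # energy-weighted direction
        })

for c in sorted(cases, key=lambda c: -c["prob"])[:5]:
    print(f"p={c['prob']:.2f}  H={c['H']:.2f} m  dir={c['dir']:.1f} deg")
```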

    Computational Modeling of Airborne Noise Demonstrated Via Benchmarks, Supersonic Jet, and Railway Barrier

    In the last several years, there has been a growing demand for mobility to cope with the increasing population. All kinds of transportation have responded to this demand by expanding their networks and introducing new ideas. Rail transportation introduced the idea of high-speed trains, and air transportation introduced the idea of high-speed civil transport (HSCT). In this expanding world, noise legislation threatens to constrain these plans, so accurate computational methods for noise prediction are in great demand. In the current research, two computational methods are developed to predict noise propagation in air. The first method is based on a finite-difference technique in generalized curvilinear coordinates and is used to solve the linear and nonlinear Euler equations. The dispersion-relation-preserving scheme is adopted for spatial discretization. For temporal integration, either the dispersion-relation-preserving scheme or the low-dispersion-and-dissipation Runge-Kutta scheme is used. Both characteristic and asymptotic nonreflective boundary conditions are studied. Ghost points are employed to satisfy the wall boundary condition. A number of benchmark problems are solved to validate different components of the present method. These include an initial pulse in free space, an initial pulse reflected from a flat or curved wall, a time-periodic train of waves reflected from a flat wall, and oscillatory sink flow. The computed results are compared with the analytical solutions and good agreement is obtained. Using the method developed, the noise of a Mach 2.1, perfectly expanded, two-dimensional supersonic jet is computed. The Reynolds-averaged Navier-Stokes equations are solved for the jet mean flow. The instability waves, which are used to excite the jet, are obtained from the solution of the compressible Rayleigh equation. Then, the linearized Euler equations are solved for the jet noise. To improve computational efficiency, a flow-adapted grid and a multi-block time integration technique are developed. The computations are compared with the experimental results for both the mean flow and the jet noise; good agreement is obtained, and the method proved to be fast and efficient. The second computational method is based on the boundary element technique. The Helmholtz equation is solved for the sound field around a railway noise barrier. Linear elements are used to discretize the barrier surface. Frequency-dependent grids are employed for efficiency. The train noise is represented by a point source located above the nearest rail. The source parameters are estimated from a typical field measurement of the train noise spectrum. Both elevated and ground-level train decks are considered. The performance of the noise barrier at low and high frequencies is investigated. Moreover, A-weighted sound pressure levels are calculated. The computed results are successfully compared with field measurements
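    The dispersion-relation-preserving (DRP) spatial discretization at the heart of the first method is a seven-point central stencil whose coefficients are optimized to minimize dispersion error rather than to maximize formal order. The sketch below applies one commonly quoted set of Tam-Webb DRP coefficients to a sine wave as a quick accuracy check; treat the exact coefficient values as an assumption to verify against the original papers before serious use.

```python
import numpy as np

# 7-point DRP first-derivative stencil (antisymmetric): one commonly quoted
# optimized coefficient set; verify against Tam & Webb before serious use.
a = np.array([0.020843142770, -0.166705904415, 0.770882380518])  # a3, a2, a1
# Consistency check: 2*(a1 + 2*a2 + 3*a3) must equal 1 for a first derivative.
assert abs(2 * (a[2] + 2 * a[1] + 3 * a[0]) - 1.0) < 1e-10

n, L = 64, 2 * np.pi
h = L / n
x = np.arange(n) * h
f = np.sin(x)

# df[i] = (1/h) * sum_{j=1..3} a_j * (f[i+j] - f[i-j]) on a periodic grid.
df = np.zeros(n)
for j, aj in zip((3, 2, 1), a):
    df += aj * (np.roll(f, -j) - np.roll(f, j))
df /= h

print("max error vs cos(x):", np.abs(df - np.cos(x)).max())
```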

    Fresnel Interferometric Imager: ground-based prototype

    The Fresnel Interferometric Imager is a space-based astronomical telescope project yielding milliarcsecond angular resolution and high-contrast images with loose manufacturing constraints. This optical concept involves diffractive focusing and formation flying: a first "primary optics" space module holds a large binary Fresnel array, and a second "focal module" holds optical elements and focal instruments that allow for chromatic dispersion correction. We have designed a reduced-size Fresnel Interferometric Imager prototype and performed optical tests in our lab in order to validate the concept for future space missions. The primary module of this prototype consists of a square Fresnel array, 8 cm per side, with a 23 m focal length. The focal module is composed of a diaphragmed small telescope used as a "field lens", a small cophased diverging Fresnel Zone Lens (FZL) that cancels the dispersion, and a detector. An additional module collimates the artificial targets of various shapes, sizes and dynamic ranges to be imaged. In this paper, we describe the experimental setup, different designs of the primary Fresnel array, and the cophased Fresnel Zone Lens that achieves rigorous chromatic correction. We give quantitative measurements of the diffraction-limited performance and of the dynamic range on double sources. The tests have been performed in the visible domain, lambda = 400 - 700 nm. In addition, we present computer simulations of the prototype optics based on Fresnel propagation, which corroborate the optical tests. This numerical tool has been used to simulate the large-aperture Fresnel arrays, with diameters of 3 to 30 m, that could be sent to space and are foreseen to operate from Lyman-alpha (121 nm) to the mid-IR (25 microns). Comment: 10 pages, 13 figures; accepted for publication in Applied Optics
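    For a binary Fresnel array, the zone geometry follows from the standard zone-plate condition: the k-th zone edge sits where the extra optical path to the focus equals k half-wavelengths. The sketch below computes circular-zone-plate radii for this prototype's stated parameters (23 m focal length, visible light); note that the prototype actually uses a square aperture with an orthogonal zone pattern, so the circular formula is only an approximation for illustration, and the design wavelength is an assumption.

```python
import numpy as np

# Fresnel zone-plate radii: the k-th zone boundary satisfies
# sqrt(r_k^2 + f^2) = f + k*lam/2  =>  r_k = sqrt(k*lam*f + (k*lam/2)**2).
f = 23.0          # focal length in meters (from the prototype)
lam = 550e-9      # mid-visible design wavelength (assumed)
half_side = 0.04  # half of the 8 cm aperture side

k = np.arange(1, 200)
r = np.sqrt(k * lam * f + (k * lam / 2.0) ** 2)

n_zones = int((r <= half_side).sum())
print("zones fitting inside the aperture half-width:", n_zones)
print("outermost zone width (m):", r[n_zones - 1] - r[n_zones - 2])
```

    The outermost zone width is what sets the manufacturing tolerance, which is why the abstract can claim "loose manufacturing constraints": at these scales it remains in the hundreds of microns.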

    Validation of the immersed boundary surface method in computational fluid dynamics

    The aim of this thesis is to describe the Immersed Boundary Method as implemented in foam-extend 4.1, covering both its advantages and its shortcomings. The main goal of the Immersed Boundary Method is to simplify the mesh generation process in Computational Fluid Dynamics, which can drastically reduce the human time needed to set up simulations, especially for complex geometries. Additionally, it can offer advantages in simulations with moving meshes, as it can decrease the computational requirements of such cases. The main shortcoming of the Immersed Boundary Method is a loss of solution accuracy on the immersed boundaries (the surfaces of the simulated objects). The foam-extend 4.1 Immersed Boundary Method is validated here on three cases: internal 2-D flow over a backward-facing step, external flow around the Onera M6 wing, and the flow in a model Francis turbine, which is an especially interesting case for the Immersed Boundary Method. The results of the Immersed Boundary Method simulations are compared to the results of equivalent body-fitted (conventional) simulations. The simulation results are generally satisfactory: the loss of accuracy on the immersed boundaries is modest enough to assess the foam-extend 4.1 implementation of the Immersed Boundary Method as successful
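    The core mesh-handling idea of an immersed boundary method, classifying background-grid cells as fluid, solid, or interface against a surface that does not conform to the grid, can be sketched compactly. The example below does this for a 2-D Cartesian grid and a circular boundary using a signed-distance test; it is a conceptual illustration, not foam-extend's actual algorithm.

```python
import numpy as np

# Classify Cartesian cells against an immersed circular boundary using the
# signed distance of cell corners (conceptual IBM preprocessing step).
nx = ny = 40
xs = np.linspace(0.0, 1.0, nx + 1)        # cell-corner coordinates
ys = np.linspace(0.0, 1.0, ny + 1)
cx, cy, radius = 0.5, 0.5, 0.25           # immersed circle (assumed geometry)

X, Y = np.meshgrid(xs, ys, indexing="ij")
phi = np.hypot(X - cx, Y - cy) - radius   # signed distance at corners

# A cell is solid if all 4 corners lie inside, fluid if all lie outside,
# and an interface cell (needing special IBM treatment) otherwise.
corners = np.stack([phi[:-1, :-1], phi[1:, :-1], phi[:-1, 1:], phi[1:, 1:]])
solid = (corners < 0).all(axis=0)
fluid = (corners > 0).all(axis=0)
interface = ~(solid | fluid)

print("solid:", solid.sum(), "fluid:", fluid.sum(), "interface:", interface.sum())
```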

    Hybrid multi-objective trajectory optimization of low-thrust space mission design

    The overall goal of this dissertation is to develop multi-objective optimization algorithms for computing low-thrust trajectories. The thesis is motivated by the increasing number of space projects that will benefit from low-thrust propulsion technologies to gain unprecedented scientific, economic and social return. The low-cost design of such missions and the inclusion of concurrent engineering practices during the preliminary design phase demand advanced tools to rapidly explore different solutions and to benchmark them with respect to multiple conflicting criteria. However, the determination of optimal low-thrust transfers is a challenging task and remains an active research field that seeks performance improvements. This work contributes to increasing the efficiency of searching wide design spaces, reducing the amount of necessary human involvement, and enhancing the capabilities to include complex operational constraints. To that end, the general low-thrust trajectory optimization problem is stated as a multi-objective Hybrid Optimal Control Problem. This formulation allows discrete decision-making processes, discrete dynamics, and the continuous low-thrust steering law to be optimized simultaneously. Within this framework, a sequential two-step solution approach is devised for two different scenarios. The first problem considers the optimization of low-thrust multi-gravity-assist trajectories. The proposed solution procedure starts by assuming a planar shape-based model for the interplanetary trajectory. A multi-objective heuristic algorithm combined with a gradient-based solver optimizes the parameters defining the shape of the trajectory, the number and sequence of the gravity assists, the departure and arrival dates, and the launch excess velocity. In the second step, candidate solutions are used as initial guesses to solve the Nonlinear Programming Problem resulting from applying a direct collocation transcription scheme. In this step, the sequence of planetary gravity assists is known and provided by the heuristic search, the dynamics is three-dimensional, and the steering law is not predefined. Operational constraints to comply with launch asymptote declination limits and fixed reorientation times during the transfer apply. The presented approach is tested on a rendezvous mission to Ceres, on a flyby mission to Jupiter, and on a rendezvous mission to Pluto. Pareto-optimal solutions in terms of time of flight and propellant mass consumed (or, alternatively, delivered mass) are obtained. Results outperform those found in the literature in terms of optimality while showing the effectiveness of the proposed methodology to generate quick performance estimates. The second problem considers the simultaneous optimization of fully electric, fully chemical, and combined chemical-electric orbit-raising transfers between Earth orbits. In the first step of the solution approach, the control law of the electric engine is parameterized by a Lyapunov function. A multi-objective heuristic algorithm selects the optimal propulsion system, the transfer type, the low-thrust control history, as well as the number, orientation, and magnitude of the chemical firings. Earth's shadow, oblateness, and Van Allen radiation effects are included. In the second step, candidate solutions are used as initial guesses to solve the Nonlinear Programming Problem resulting from applying a direct collocation scheme. Operational constraints to avoid the GEO ring, in combination with slew rate limits and slot phasing constraints, are included. The proposed approach is applied to two transfer scenarios to GEO orbit. Pareto-optimal solutions trading off propellant mass, time of flight and solar-cell degradation are obtained. It is identified that the application of operational restrictions causes minor penalties in the objective function. Additionally, the analysis highlights the benefits that combined chemical-electric platforms may provide for future GEO satellites
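    Pareto optimality is the organizing principle of both design studies: a candidate trajectory is kept only if no other candidate is at least as good in every objective and strictly better in at least one. The sketch below is a minimal non-dominated filter over (time of flight, propellant mass) pairs, both to be minimized; the numbers are made up for illustration, not results from the dissertation.

```python
import numpy as np

# Minimal Pareto filter: keep candidates not dominated in (time, propellant),
# both minimized. Illustrative data only.
candidates = np.array([
    [300.0, 120.0],   # [time of flight (days), propellant mass (kg)]
    [280.0, 150.0],
    [350.0, 100.0],
    [310.0, 125.0],   # dominated by the first entry
    [260.0, 180.0],
])

def pareto_front(points):
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if not keep[i]:
            continue
        # q dominates p if q <= p in every objective and q < p in at least one.
        dominated = (points <= p).all(axis=1) & (points < p).any(axis=1)
        dominated[i] = False
        if dominated.any():
            keep[i] = False
    return points[keep]

print(pareto_front(candidates))   # the four non-dominated trade-off points
```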

    Realistic Visualization of Animated Virtual Cloth

    Photo-realistic rendering of real-world objects is a broad research area with applications in various fields, such as computer-generated films, entertainment, and e-commerce. Within photo-realistic rendering, the rendering of cloth is a subarea that involves many important aspects, ranging from material surface reflection properties and macroscopic self-shadowing to animation sequence generation and compression. In this thesis, besides an introduction to the topic and a broad overview of related work, different methods to handle the major aspects of cloth rendering are described. Material surface reflection properties play an important part in reproducing the look & feel of materials, that is, in identifying a material only by looking at it. The BTF (bidirectional texture function), as a function of viewing and illumination direction, is an appropriate representation of reflection properties. It captures effects caused by the mesostructure of a surface, such as roughness, self-shadowing, occlusion, inter-reflections, subsurface scattering and color bleeding. Unfortunately, a BTF data set of a material consists of hundreds to thousands of images, which exceeds the memory of current personal computers by far. This work describes the first usable method to efficiently compress and decompress BTF data for rendering at interactive to real-time frame rates. It is based on PCA (principal component analysis) of the BTF data set. While preserving the important visual aspects of the BTF, the achieved compression rates allow the storage of several different data sets in the main memory of consumer hardware, while maintaining high rendering quality. Correct handling of complex illumination conditions plays another key role in the realistic appearance of cloth. Therefore, an upgrade of the BTF compression and rendering algorithm is described, which supports distant direct HDR (high-dynamic-range) illumination stored in environment maps. To further enhance the appearance, macroscopic self-shadowing has to be taken into account: for the visualization of folds and a life-like 3D impression, this kind of shadow is absolutely necessary. This work describes two methods to compute these shadows. The first is seamlessly integrated into the illumination part of the rendering algorithm and optimized for static meshes. The second method allows the handling of dynamic objects; it uses hardware-accelerated occlusion queries for the visibility determination. In contrast to other algorithms, the presented algorithm, despite its simplicity, is fast and produces fewer artifacts than other methods. As a plus, it incorporates changeable distant direct high-dynamic-range illumination. The human perception system is the main target of any computer graphics application and can also be treated as part of the rendering pipeline; optimization of the rendering itself can therefore be achieved by analyzing human perception of certain visual aspects of the image. As part of this thesis, an experiment is introduced that evaluates human shadow perception to speed up shadow rendering and provides optimization approaches. Another subarea of cloth visualization in computer graphics is the animation of cloth and avatars for presentations. This work also describes two new methods for the automatic generation and compression of animation sequences. The first method generates completely new, customizable animation sequences and is based on the concept of finding similarities among the animation frames of a given basis sequence. Identifying these similarities allows jumps within the basis sequence that generate endless new sequences. Transmission of animated 3D data over bandwidth-limited channels, such as extended networks or less powerful clients, requires efficient compression schemes. The second method in the animation field is a geometry data compression scheme. Similar to the BTF compression, it uses PCA in combination with clustering algorithms to segment similarly moving parts of the animated objects, achieving high compression rates together with very exact reconstruction quality
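    The PCA compression idea behind both the BTF and the geometry codecs can be shown in a few lines: arrange the data as a matrix (one image or one animation frame per column), keep only the leading principal components, and reconstruct from those. The sketch below uses a truncated SVD on random stand-in data; the data shapes and the number of retained components are illustrative assumptions, not the thesis's parameters.

```python
import numpy as np

# PCA-style compression via truncated SVD: columns are data samples
# (e.g., flattened BTF images or mesh-vertex frames); keep k components.
rng = np.random.default_rng(0)
n_pixels, n_images, k = 4096, 256, 16            # stand-in sizes (assumed)
data = rng.standard_normal((n_pixels, 8)) @ rng.standard_normal((8, n_images))
data += 0.01 * rng.standard_normal(data.shape)   # low-rank signal + noise

mean = data.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)

# Compressed representation: k basis vectors plus k coefficients per sample.
basis = U[:, :k]                                 # (n_pixels, k)
coeffs = s[:k, None] * Vt[:k, :]                 # (k, n_images)

recon = mean + basis @ coeffs
ratio = data.size / (basis.size + coeffs.size + mean.size)
err = np.linalg.norm(data - recon) / np.linalg.norm(data)
print(f"compression ratio ~{ratio:.1f}x, relative error {err:.4f}")
```

    Because decompression is just a small matrix product per sample, this representation can be evaluated on graphics hardware at render time, which is what makes the interactive frame rates claimed above plausible.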