Spatially Augmented Reality on Dynamic, Deformable Surfaces and its Applications
Spatially Augmented Reality (SAR), also known as projection mapping, uses multiple projectors to illuminate surfaces of arbitrary shape and size and create seamless, large-scale displays. Traditional SAR assumes that the projection surface is static and rigid. This restriction was partially addressed by Dynamic-SAR, where the surface is rigid and of known shape but can be moved around. However, no prior work has addressed SAR using multiple projectors on deformable surfaces, whose shape is unknown and constantly changing. Multi-projector SAR on deformable surfaces thus introduces several challenges, including projector-camera calibration on a deformable surface, real-time surface shape recovery, and real-time multi-projector warping and blending. My thesis is the first attempt to develop a comprehensive framework for achieving seamless multi-projector displays on deformable surfaces. Furthermore, I also present its applications in the medical domain, enabling remote surgical guidance by using SAR to illuminate surgical stencils precisely on a physical surgical site.
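The warping and blending step mentioned above can be illustrated with a toy sketch: each projector's image is mapped through a homography into display space, and a soft-edge blend mask ramps intensity down in overlap regions. All function names, the identity homography, and the linear ramp are illustrative assumptions, not the thesis's method (which handles deformable, not planar, surfaces).

```python
# Hypothetical sketch of homography warping and edge blending for a
# multi-projector display on a planar surface; names are illustrative.
import numpy as np

def warp_point(H, x, y):
    """Map a display-space point into projector space with 3x3 homography H."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

def blend_weight(x, width, overlap):
    """Linear intensity ramp to zero inside the overlap band at each edge."""
    left = min(1.0, x / overlap)
    right = min(1.0, (width - x) / overlap)
    return max(0.0, min(left, right))

H = np.eye(3)  # identity: projector already aligned with the display surface
print(warp_point(H, 10.0, 20.0))       # (10.0, 20.0)
print(blend_weight(5.0, 100.0, 10.0))  # 0.5: halfway through the left overlap
```

On a deformable surface the fixed homography would be replaced by a continuously re-estimated surface-dependent mapping, which is exactly the hard part the abstract identifies.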
Markerless deformation capture of hoverfly wings using multiple calibrated cameras
This thesis introduces an algorithm for the automated deformation capture of hoverfly
wings from multiple camera image sequences. The algorithm is capable of extracting
dense surface measurements, without the aid of fiducial markers, over an arbitrary number
of wingbeats of hovering flight and requires limited manual initialisation. A novel motion
prediction method, called the 'normalised stroke model', makes use of the similarity of adjacent
wing strokes to predict wing keypoint locations, which are then iteratively refined in
a stereo image registration procedure. Outlier removal, wing fitting and further refinement
using independently reconstructed boundary points complete the algorithm. It was tested
on two hovering data sets, as well as a challenging flight manoeuvre. By comparing the
3-d positions of keypoints extracted from these surfaces with those resulting from manual
identification, the accuracy of the algorithm is shown to approach that of a fully manual
approach. In particular, half of the algorithm-extracted keypoints were within 0.17mm of
manually identified keypoints, approximately equal to the error of the manual identification
process. This algorithm is unique among purely image based flapping flight studies in the
level of automation it achieves, and its generality would make it applicable to wing tracking
of other insects.
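The normalised-stroke prediction idea can be sketched as follows: a keypoint's location at a given normalised phase of the current stroke is predicted from its location at the same phase of the previous stroke. The function name, the sampling scheme, and the linear interpolation are assumptions for illustration, not the thesis algorithm.

```python
# Illustrative sketch: predict a wing keypoint from the previous stroke
# at the same normalised phase. Details are assumptions, not the thesis method.
import numpy as np

def predict_keypoint(prev_stroke, phase):
    """prev_stroke: (N, 3) keypoint positions sampled over one full stroke.
    phase: normalised stroke time in [0, 1). Linearly interpolate between samples."""
    n = len(prev_stroke)
    t = phase * (n - 1)
    i = int(t)
    frac = t - i
    return (1 - frac) * prev_stroke[i] + frac * prev_stroke[min(i + 1, n - 1)]

stroke = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(predict_keypoint(stroke, 0.5))  # [1. 0. 0.], the mid-stroke sample
```

The predicted location would then seed the iterative stereo image registration that refines it against the current frames.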
Beaming Displays
Existing near-eye display designs struggle to balance between multiple trade-offs such as form factor, weight, computational
requirements, and battery life. These design trade-offs are major obstacles on the path towards an all-day usable near-eye display.
In this work, we address these trade-offs by, paradoxically, removing the display from near-eye displays. We present beaming
displays, a new type of near-eye display system that uses a projector and an all-passive wearable headset. We modify an off-the-shelf
projector with additional lenses and install it in the environment to beam images from a distance to the passive wearable
headset. The beaming projection system tracks the current position of the wearable headset to project distortion-free images with
correct perspectives. In our system, the wearable headset guides the beamed images to a user's retina, where they are perceived as an
augmented scene within the user's field of view. In addition to providing the system design of the beaming display, we provide a physical
prototype and show that the beaming display can provide resolutions as high as consumer-level near-eye displays. We also discuss the
different aspects of the design space for our proposal.
Makeup Lamps: Live Augmentation of Human Faces via Projection
We propose the first system for live dynamic augmentation of human faces. Using projector-based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency: an image is generated according to a specific pose, but is displayed on a different facial configuration by the time it is projected. Therefore, our system aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high-speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower-dimensional space, and the facial motion and non-rigid deformation are estimated, smoothed, and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated by interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low-latency augmentation for different performers and performances with varying facial play and motion speed. In contrast to existing methods, the presented system is the first that fully supports dynamic facial projection mapping without requiring any physical tracking markers and incorporates facial expressions.
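The predict-ahead role of the Kalman filtering mentioned above can be sketched with a minimal constant-velocity filter: delayed measurements update a position/velocity state, and extrapolating that state by the system latency yields the pose at projection time. The matrices, noise values, and frame rate below are tuning assumptions for illustration, not the Makeup Lamps implementation (which is adaptive and operates on blendshape coefficients).

```python
# Minimal constant-velocity Kalman filter sketch for latency compensation.
# All parameter values are illustrative assumptions.
import numpy as np

dt = 1.0 / 240.0                       # assumed high-speed camera interval
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition
H = np.array([[1.0, 0.0]])             # only position is measured
Q = np.eye(2) * 1e-4                   # process noise (tuning assumption)
R = np.array([[1e-2]])                 # measurement noise (tuning assumption)

x = np.zeros(2)  # state: [position, velocity]
P = np.eye(2)

def step(z):
    """One predict/update cycle for a scalar position measurement z."""
    global x, P
    x = F @ x                        # predict state forward by dt
    P = F @ P @ F.T + Q
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x

for z in [0.0, 0.1, 0.2, 0.3]:       # steadily moving target
    est = step(np.array([z]))
# The filter now holds a position and a positive velocity estimate, so
# est[0] + est[1] * latency extrapolates the pose to projection time.
```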
Realistic Visualization of Animated Virtual Cloth
Photo-realistic rendering of real-world objects is a broad research area with applications in various areas, such as computer-generated films, entertainment, e-commerce and so on. Within photo-realistic rendering, the rendering of cloth is a subarea which involves many important aspects, ranging from material surface reflection properties and macroscopic self-shadowing to animation sequence generation and compression. In this thesis, besides an introduction to the topic plus a broad overview of related work, different methods to handle major aspects of cloth rendering are described. Material surface reflection properties play an important part in reproducing the look & feel of materials, that is, in identifying a material only by looking at it. The BTF (bidirectional texture function), as a function of viewing and illumination direction, is an appropriate representation of reflection properties. It captures effects caused by the mesostructure of a surface, like roughness, self-shadowing, occlusion, inter-reflections, subsurface scattering and color bleeding. Unfortunately, a BTF data set of a material consists of hundreds to thousands of images, which far exceeds the memory size of current personal computers. This work describes the first usable method to efficiently compress and decompress BTF data for rendering at interactive to real-time frame rates. It is based on PCA (principal component analysis) of the BTF data set. While preserving the important visual aspects of the BTF, the achieved compression rates allow the storage of several different data sets in main memory of consumer hardware, while maintaining a high rendering quality. Correct handling of complex illumination conditions plays another key role for the realistic appearance of cloth. Therefore, an extension of the BTF compression and rendering algorithm is described, which supports distant direct HDR (high-dynamic-range) illumination stored in environment maps.
To further enhance the appearance, macroscopic self-shadowing has to be taken into account. For the visualization of folds and the life-like 3D impression, this kind of shadow is absolutely necessary. This work describes two methods to compute these shadows. The first is seamlessly integrated into the illumination part of the rendering algorithm and optimized for static meshes. Furthermore, another method is proposed, which allows the handling of dynamic objects. It uses hardware-accelerated occlusion queries for the visibility determination. Despite its simplicity, the presented algorithm is fast and produces fewer artifacts than other methods. As a plus, it incorporates changeable distant direct high-dynamic-range illumination. The human perception system is the main target of any computer graphics application and can also be treated as part of the rendering pipeline. Therefore, optimization of the rendering itself can be achieved by analyzing human perception of certain visual aspects in the image. As a part of this thesis, an experiment is introduced that evaluates human shadow perception to speed up shadow rendering and provides optimization approaches. Another subarea of cloth visualization in computer graphics is the animation of the cloth and avatars for presentations. This work also describes two new methods for automatic generation and compression of animation sequences. The first method, which generates completely new, customizable animation sequences, is based on the concept of finding similarities in animation frames of a given basis sequence. Identifying these similarities allows jumps within the basis sequence to generate endless new sequences. Transmitting animated 3D data over bandwidth-limited channels, such as wide-area networks or to less powerful clients, requires efficient compression schemes. The second animation-related method included in this thesis is a geometry data compression scheme.
Similar to the BTF compression, it uses PCA in combination with clustering algorithms to segment similarly moving parts of the animated objects, achieving high compression rates together with very accurate reconstruction quality.
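The PCA compression principle used for both the BTF and the geometry data can be sketched in a few lines: keep only the top-k principal components of the image set and reconstruct on demand. The dimensions, random data, and choice of k below are toy assumptions; the thesis additionally applies clustering and operates on real BTF measurements.

```python
# Toy sketch of PCA-based compression of an image set, illustrating the
# principle behind the BTF compression; all dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
images = rng.standard_normal((200, 64))   # 200 "BTF images", 64 pixels each

mean = images.mean(axis=0)
U, s, Vt = np.linalg.svd(images - mean, full_matrices=False)

k = 8                                     # components kept (compression factor)
coeffs = U[:, :k] * s[:k]                 # per-image coefficients
basis = Vt[:k]                            # shared eigen-textures

reconstructed = coeffs @ basis + mean     # decompression at render time
err = np.linalg.norm(reconstructed - images) / np.linalg.norm(images)
print(f"relative reconstruction error with k={k}: {err:.3f}")
```

Storage drops from 200 x 64 values to 200 x k coefficients plus k x 64 basis vectors; on real BTF data, which is far more correlated than this random example, small k already gives visually faithful reconstructions.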
Spatio-Temporal Registration in Augmented Reality
The overarching goal of Augmented Reality (AR) is to provide users with the illusion that virtual and real objects coexist indistinguishably in the same space. An effective persistent illusion requires accurate registration between the real and the virtual objects, registration that is spatially and temporally coherent. However, visible misregistration can be caused by many inherent error sources, such as errors in calibration, tracking, and modeling, and system delay. This dissertation focuses on new methods that could be considered part of "the last mile" of spatio-temporal registration in AR: closed-loop spatial registration and low-latency temporal registration.
1. For spatial registration, the primary insight is that calibration, tracking, and modeling are means to an end---the ultimate goal is registration. In this spirit I present a novel pixel-wise closed-loop registration approach that can automatically minimize registration errors using a reference model comprised of the real scene model and the desired virtual augmentations. Registration errors are minimized both in global world space, via camera pose refinement, and in local screen space, via pixel-wise adjustments. This approach is presented in the context of Video See-Through AR (VST-AR) and projector-based Spatial AR (SAR), where registration results are measurable using a commodity color camera.
2. For temporal registration, the primary insight is that the real-virtual relationships evolve throughout the tracking, rendering, scanout, and display steps, and registration can be improved by leveraging fine-grained processing and display mechanisms. In this spirit I introduce a general end-to-end system pipeline with low latency, and propose an algorithm for minimizing latency in displays (DLP DMD projectors in particular). This approach is presented in the context of Optical See-Through AR (OST-AR), where system delay is the most detrimental source of error.
I also discuss future steps that may further improve spatio-temporal registration. In particular, I discuss possibilities for using custom virtual or physical-virtual fiducials for closed-loop registration in SAR. The custom fiducials can be designed to elicit desirable optical signals that directly indicate any error in the relative pose between the physical and projected virtual objects.
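The closed-loop registration principle can be illustrated with a one-dimensional toy feedback loop: instead of trusting an open-loop calibration, the system observes where a projected feature actually lands in the camera image, compares it against the reference, and feeds the measured offset back. The 1-D "image", feature encoding, and unit gain are all assumptions, not the dissertation's pixel-wise method.

```python
# Toy closed-loop registration: measure the error in the observation and
# correct it, rather than relying on open-loop calibration. Illustrative only.
import numpy as np

reference = np.zeros(32)
reference[10] = 1.0                # desired feature position in the camera image

def observe(shift):
    """Camera observation: the feature appears displaced by the current error."""
    img = np.zeros(32)
    img[10 + shift] = 1.0
    return img

error = 4                          # initial misregistration in pixels
for _ in range(6):
    obs = observe(error)
    measured = int(np.argmax(obs)) - int(np.argmax(reference))
    error -= measured              # apply the measured correction (gain = 1)
print(error)                       # 0: the loop has closed the registration gap
```

The real system does this densely (per pixel) and jointly with a global camera-pose refinement, but the feedback structure is the same: the camera makes registration error directly measurable, so it can be driven toward zero.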
Photorealistic retrieval of occluded facial information using a performance-driven face model
Facial occlusions can cause both human observers and computer algorithms
to fail in a variety of important tasks such as facial action analysis and
expression classification. This is because the missing information is not
reconstructed accurately enough for the purpose of the task at hand. Most
current computer methods that are used to tackle this problem implement
complex three-dimensional polygonal face models that are generally time-consuming
to produce and unsuitable for photorealistic reconstruction of
missing facial features and behaviour.
In this thesis, an image-based approach is adopted to solve the occlusion
problem. A dynamic computer model of the face is used to retrieve the
occluded facial information from the driver faces. The model consists of a
set of orthogonal basis actions obtained by application of principal
component analysis (PCA) on image changes and motion fields extracted
from a sequence of natural facial motion (Cowe 2003). Examples of
occlusion affected facial behaviour can then be projected onto the model to
compute coefficients of the basis actions and thus produce photorealistic
performance-driven animations.
Visual inspection shows that the PCA face model recovers aspects of
expressions in those areas occluded in the driver sequence, but the expression is generally muted. To further investigate this finding, a database
of test sequences affected by a considerable set of artificial and natural
occlusions is created. A number of suitable metrics are developed to measure
the accuracy of the reconstructions. Regions of the face that are most
important for performance-driven mimicry and that seem to carry the best
information about global facial configurations are revealed using Bubbles,
thus in effect identifying facial areas that are most sensitive to occlusions.
Recovery of occluded facial information is enhanced by applying an
appropriate scaling factor to the respective coefficients of the basis actions
obtained by PCA. This method improves the reconstruction of the facial
actions emanating from the occluded areas of the face. However, due to the
fact that PCA produces bases that encode composite, correlated actions,
such an enhancement also tends to affect actions in non-occluded areas of
the face. To avoid this, more localised controls for facial actions are
produced using independent component analysis (ICA). Simple projection
of the data onto an ICA model is not viable due to the non-orthogonality of
the extracted bases. Thus occlusion-affected mimicry is first generated using
the PCA model and then enhanced by accordingly manipulating the
independent components that are subsequently extracted from the mimicry.
This combination of methods yields significant improvements and results in
photorealistic reconstructions of occluded facial actions.
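The projection-and-scaling step described above can be sketched with a toy orthonormal basis: a frame is projected onto the basis actions to obtain coefficients, a selected (muted) coefficient is scaled up, and the frame is reconstructed. The basis, the data, and the scale factor are illustrative assumptions; the thesis derives its basis from natural facial motion and combines PCA with ICA for localised control.

```python
# Toy sketch: project a frame onto orthonormal "basis actions", scale the
# coefficient of a muted action, and reconstruct. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
basis = np.linalg.qr(rng.standard_normal((50, 5)))[0].T  # 5 orthonormal basis actions
frame = 0.4 * basis[0] + 0.1 * basis[2]                  # muted performance frame

coeffs = basis @ frame      # projection is valid because the rows are orthonormal
coeffs[2] *= 3.0            # scale the coefficient of the muted basis action
enhanced = coeffs @ basis   # reconstruct the enhanced frame

# Action 2's contribution has been amplified from 0.1 to 0.3 in the result.
print(np.round(basis @ enhanced, 2))
```

Because PCA components encode composite, correlated actions, such scaling also perturbs non-occluded regions in practice, which is exactly why the thesis turns to ICA for more localised manipulation.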
Scalable and Extensible Augmented Reality with Applications in Civil Infrastructure Systems.
In Civil Infrastructure System (CIS) applications, the requirement of blending synthetic and physical objects distinguishes Augmented Reality (AR) from other visualization technologies in three aspects: 1) it reinforces the connections between people and objects, and promotes engineers' appreciation of their working context; 2) it allows engineers to perform field tasks with awareness of both the physical and synthetic environment; 3) it offsets the significant cost of 3D model engineering by including the real-world background.
The research has successfully overcome several long-standing technical obstacles in AR and investigated technical approaches to fundamental challenges that have prevented the technology from being usefully deployed in CIS applications: aligning virtual objects with the real environment continuously across time and space; blending virtual entities with their real background faithfully to create a sustained illusion of co-existence; and integrating these methods into a scalable and extensible AR computing framework that is openly accessible to the teaching and research community and can be readily reused and extended by other researchers and engineers.
The research findings have been evaluated in several challenging CIS applications where the potential for significant economic and social impact is high. Examples of validation test beds implemented include an AR visual excavator-utility collision avoidance system that enables spotters to "see" buried utilities hidden under the ground surface, thus helping prevent accidental utility strikes; an AR post-disaster reconnaissance framework that enables building inspectors to rapidly evaluate and quantify structural damage sustained by buildings in seismic events such as earthquakes or blasts; and a tabletop collaborative AR visualization framework that allows multiple users to observe and interact with visual simulations of engineering processes.
PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/96145/1/dsuyang_1.pd
Kinect Range Sensing: Structured-Light versus Time-of-Flight Kinect
Recently, the new Kinect One has been released by Microsoft, providing the next
generation of real-time range sensing devices based on the Time-of-Flight (ToF)
principle. Since the first Kinect version used a structured-light approach,
one would expect various differences in the characteristics of the range data
delivered by both devices. This paper presents a detailed and in-depth
comparison between both devices. In order to conduct the comparison, we propose
a framework of seven different experimental setups, which is a generic basis
for evaluating range cameras such as the Kinect. The experiments have been designed
to capture individual effects of the Kinect devices in as isolated a manner as
possible, and so that they can also be adapted and applied
to any other range sensing device. The overall goal of this paper is to provide
a solid insight into the pros and cons of either device. Thus, scientists who
are interested in using Kinect range sensing cameras in their specific
application scenario can directly assess the expected, specific benefits and
potential problems of either device.
Comment: 58 pages, 23 figures. Accepted for publication in Computer Vision and Image Understanding (CVIU).