Learned holographic light transport: Invited
Computer-generated holography algorithms often fall short of matching simulations with results from a physical holographic display. Our work addresses this mismatch by learning the holographic light transport of such displays. Using a camera and a holographic display, we capture image reconstructions of holograms optimized under ideal simulations to build a dataset. Inspired by the ideal simulations, we learn a complex-valued convolution kernel that propagates given holograms to the captured photographs in our dataset. Our method dramatically improves simulation accuracy and image quality in holographic displays while paving the way for physically informed learning approaches.
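The learned-propagation idea can be sketched as fitting a complex-valued frequency-domain kernel so that propagating a hologram field reproduces a captured photograph. A minimal PyTorch sketch with synthetic stand-ins for the hologram and the camera capture (resolution, learning rate, and step count are illustrative, not the paper's configuration):

```python
import torch

# Sketch (assumption-laden): learn a complex-valued frequency-domain kernel K
# so that |IFFT(FFT(hologram field) * K)| matches a captured photograph.
torch.manual_seed(0)
res = (64, 64)
phase = torch.rand(res) * 2 * torch.pi            # synthetic phase-only hologram
field = torch.exp(1j * phase)                     # unit-amplitude complex field
captured = torch.rand(res)                        # stand-in for a camera capture

kernel = torch.randn(res, dtype=torch.complex64, requires_grad=True)
opt = torch.optim.Adam([kernel], lr=0.05)
losses = []
for _ in range(100):
    opt.zero_grad()
    recon = torch.fft.ifft2(torch.fft.fft2(field) * kernel)
    loss = torch.nn.functional.mse_loss(recon.abs(), captured)
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

In practice the kernel would be fit against many hologram/photograph pairs from the dataset rather than a single random target.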
AutoColor: learned light power control for multi-color holograms
Multi-color holograms rely on simultaneous illumination from multiple light sources. These multi-color holograms can utilize light sources more efficiently than conventional single-color holograms and can improve the dynamic range of holographic displays. In this letter, we introduce AutoColor, the first learned method for estimating the optimal light source powers required for illuminating multi-color holograms. For this purpose, we establish the first multi-color hologram dataset using synthetic images and their depth information. We generate these synthetic images using a trending pipeline combining generative, large language, and monocular depth estimation models. Finally, we train our learned model using our dataset and experimentally demonstrate that AutoColor significantly decreases the number of steps required to optimize multi-color holograms from > 1000 to 70 iteration steps without compromising image quality.
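A minimal sketch of the estimation step, assuming a small convolutional network that maps a target color image to per-light-source powers bounded to [0, 1]; the architecture and names are hypothetical, not AutoColor's:

```python
import torch
import torch.nn as nn

# Sketch: a tiny network mapping a target image to per-light-source powers
# in [0, 1]; architecture and sizes are illustrative.
class PowerEstimator(nn.Module):
    def __init__(self, num_sources: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, num_sources), nn.Sigmoid(),  # powers bounded to [0, 1]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)

model = PowerEstimator()
powers = model(torch.rand(1, 3, 64, 64))  # one power per light source
```

Such a predictor would be trained against powers found by the slow per-hologram optimization, which is what lets it replace most of those iteration steps.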
Flora: a framework for decomposing software architecture to introduce local recovery
The decomposition of software architecture into modular units is usually driven by the required quality concerns. In this paper we focus on the impact of the local recovery concern on the decomposition of the software system. For achieving local recovery, the system needs to be decomposed into separate units that can be recovered in isolation. However, this required decomposition for recovery is usually not aligned with the decomposition based on functional concerns. Moreover, introducing local recovery to a software system while preserving the existing decomposition is not trivial and requires substantial development and maintenance effort. To reduce this effort we propose a framework that supports the decomposition and implementation of software architecture for local recovery. The framework provides reusable abstractions for defining recoverable units and the necessary coordination and communication protocols for recovery. We discuss our experiences in the application and evaluation of the framework for introducing local recovery to the open-source media player MPlayer. Copyright 2009 John Wiley & Sons, Ltd.
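A minimal sketch of the recoverable-unit idea, with illustrative class and method names (not the framework's actual API): each unit can fail and be restarted in isolation while the other units keep running.

```python
# Sketch: a minimal "recoverable unit" abstraction; names are illustrative,
# not the framework's actual API.
class RecoverableUnit:
    def __init__(self, name, start_fn):
        self.name, self._start_fn = name, start_fn
        self.alive, self.state = False, None

    def start(self):
        self.state = self._start_fn()
        self.alive = True

    def fail(self):               # e.g. reported by an error detector
        self.alive = False

    def recover(self):            # restart only this unit, in isolation
        self.start()

units = {n: RecoverableUnit(n, lambda n=n: {"component": n})
         for n in ("gui", "core")}
for unit in units.values():
    unit.start()
units["gui"].fail()               # "core" stays available throughout
units["gui"].recover()
```

The framework's real contribution lies in the coordination and communication protocols between such units, which this sketch omits.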
Optimizing decomposition of software architecture for local recovery
The increasing size and complexity of software systems have led to a greater number of potential failures and, as such, make it harder to ensure software reliability. Since it is usually hard to prevent all failures, fault tolerance techniques have become more important. An essential element of fault tolerance is recovery from failures. Local recovery is an effective approach whereby only the erroneous parts of the system are recovered while the other parts remain available. For achieving local recovery, the architecture needs to be decomposed into separate units that can be recovered in isolation. Usually, there are many alternative ways to decompose the system into recoverable units, and each of these decomposition alternatives performs differently with respect to availability and performance metrics. We propose a systematic approach dedicated to optimizing the decomposition of software architecture for local recovery. The approach provides systematic guidelines to depict the design space of the possible decomposition alternatives, to reduce the design space with respect to domain and stakeholder constraints, and to balance the feasible alternatives with respect to availability and performance. The approach is supported by an integrated set of tools and illustrated for the open-source MPlayer software. © 2011 Springer Science+Business Media, LLC
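The design-space exploration can be sketched as enumerating decompositions (set partitions) of modules into units, pruning by a constraint, and ranking by a weighted availability/performance score. The module names, the constraint, and the scoring heuristics below are all illustrative, not the paper's actual analysis functions:

```python
# Sketch: enumerate decompositions of modules into recoverable units,
# prune by an example constraint, and rank by a weighted score.
def partitions(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

modules = ["gui", "demuxer", "decoder", "output"]
all_decompositions = list(partitions(modules))       # 15 alternatives for 4 modules

# Example stakeholder constraint: no single unit may contain every module.
candidates = [p for p in all_decompositions
              if max(len(unit) for unit in p) < len(modules)]

def score(p, w=0.5):
    availability = len(p) / len(modules)  # finer isolation -> less lost per recovery
    performance = 1.0 / len(p)            # more units -> more IPC overhead
    return w * availability + (1 - w) * performance

best = max(candidates, key=score)
```

The design space grows as the Bell numbers, so even modest systems make such pruning by domain and stakeholder constraints essential before any availability/performance analysis.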
Runtime verification of component-based embedded software
To deal with increasing size and complexity, component-based software development has been employed in embedded systems. Due to faults, components can make wrong assumptions about the working mode of the system and the working modes of other components. To detect mode inconsistencies at runtime, we propose a "lightweight" error detection mechanism that can be integrated with component-based embedded systems. We define links among three levels of abstraction: the runtime behavior of components, the working mode specifications of components, and the specification of the working modes of the system. This allows us to detect user-observable runtime errors. The effectiveness of the approach is demonstrated by implementing a software monitor integrated into a TV system. © 2012 Springer-Verlag London Limited
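A minimal sketch of the mode-consistency check, assuming components report their currently assumed working mode to a monitor that compares it against the specified system mode (all names are hypothetical, not the paper's implementation):

```python
# Sketch: a lightweight monitor comparing each component's assumed working
# mode against the specified system mode (names hypothetical).
SYSTEM_MODES = {"playback", "standby", "record"}

class ModeMonitor:
    def __init__(self, system_mode):
        assert system_mode in SYSTEM_MODES
        self.system_mode = system_mode
        self.assumed = {}                  # component -> mode it assumes

    def report(self, component, mode):     # called from instrumented components
        self.assumed[component] = mode

    def inconsistencies(self):             # components with wrong assumptions
        return [c for c, m in self.assumed.items() if m != self.system_mode]

monitor = ModeMonitor("playback")
monitor.report("tuner", "playback")
monitor.report("decoder", "standby")       # stale assumption -> detected
errors = monitor.inconsistencies()
```

Keeping the check to a dictionary comparison per report is what makes such a mechanism "lightweight" enough for resource-constrained embedded targets.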
Investigation of heavy-heavy pseudoscalar mesons in thermal QCD Sum Rules
We investigate the masses and decay constants of heavy-heavy pseudoscalar mesons in the framework of finite-temperature QCD sum rules. The annihilation and scattering parts of the spectral density are calculated in the lowest order of perturbation theory. Taking into account the additional operators arising at finite temperature, the nonperturbative corrections are also evaluated. The masses and decay constants remain unchanged up to a certain temperature, but beyond that point they start to diminish with increasing temperature. At the critical (deconfinement) temperature, the decay constants drop to approximately 35% of their vacuum values, while the masses decrease by about 7%, 12%, and 2% for the three states considered, respectively. The results at zero temperature are in good agreement with the existing experimental values as well as with the predictions of other nonperturbative approaches. Comment: 11 pages, 2 tables, and 6 figures
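As in standard QCD sum-rule analyses, the starting object is the thermal two-point correlation function of the pseudoscalar interpolating current; a generic form is shown below (the notation is the conventional one and is assumed here, not taken from the paper):

```latex
\Pi(q,T) = i \int d^{4}x \, e^{iq\cdot x} \,
  \langle \mathcal{T}\{ J_{P}(x)\, J_{P}^{\dagger}(0) \} \rangle_{T},
\qquad
J_{P} = i\, \bar{Q}_{1} \gamma_{5} Q_{2},
```

where $Q_{1}$ and $Q_{2}$ denote the heavy quark fields and the thermal average $\langle \cdot \rangle_{T}$ replaces the vacuum expectation value of the $T = 0$ sum rules.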
Perceptually guided Computer-Generated Holography
Computer-Generated Holography (CGH) promises to deliver genuine, high-quality visuals at any depth. We argue that combining CGH with perceptually guided graphics can soon lead to practical holographic display systems that deliver perceptually realistic images. We propose a new CGH method called metameric varifocal holograms. Our CGH method generates images only at a user's focus plane, while the displayed images are statistically correct and indistinguishable from actual targets across peripheral vision (metamers). Thus, a user observing our holograms perceives high-quality visuals at their gaze location, while the integrity of the image follows a statistically correct trend in the remaining peripheral parts. We demonstrate our differentiable CGH optimization pipeline on modern GPUs, and we support our findings with a display prototype. Our method paves the way towards realistic visuals free from classical CGH problems such as speckle noise and poor visual quality.
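The foveation idea can be crudely sketched as blending the target image near the gaze point with a peripheral image that preserves the target's pixel statistics. Here the metamer is approximated by a global pixel shuffle, which keeps the histogram but is only a stand-in for the paper's perceptual metamer model:

```python
import numpy as np

# Sketch: keep the target intact near the gaze point; fill the periphery
# with an image matching the target's global pixel statistics (a shuffle
# here; a crude stand-in for a true perceptual metamer).
rng = np.random.default_rng(0)
target = rng.random((128, 128))
gaze = (64, 64)

yy, xx = np.mgrid[0:128, 0:128]
fovea = np.exp(-((yy - gaze[0]) ** 2 + (xx - gaze[1]) ** 2) / (2 * 20.0 ** 2))

periphery = rng.permutation(target.ravel()).reshape(target.shape)
blended = fovea * target + (1 - fovea) * periphery
```

In the actual method the peripheral content is matched to local image statistics per pooling region, rather than globally, so it remains indistinguishable from the target under peripheral vision.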
Beam forming for a laser based auto-stereoscopic multi-viewer display
An auto-stereoscopic back-projection display using an RGB multi-emitter laser illumination source and micro-optics to provide a wider viewing zone is described. The laser's optical properties and the speckle arising from the optical system configuration and its diffusers are characterised.