The development of local solar irradiance for outdoor computer graphics rendering
Atmospheric effects are approximated by solving the light transfer equation (LTE) along a given viewing path. The resulting accumulated spectral energy (its visible band) arriving at the observer's eyes defines the colour of the object currently on the line of sight. Owing to the convenience of using a single rendering equation to solve the LTE for both the daylight sky and distant objects (aerial perspective), recent methods have opted for this kind of approach. However, the burden of real-time calculation has forced these methods to make simplifications that are not in line with real-world observation, and their results are consequently laden with visual errors. The two most common simplifications are: i) treating the atmosphere as a full-scattering medium only, and ii) assuming a single-density atmosphere profile. This research explored the possibility of replacing the real-time calculation involved in solving the LTE with an analytical approach, so that the two simplifications made by previous real-time methods can be avoided. The model was implemented on top of a flight simulator prototype system, since the requirements of such a system match the objectives of this study. Results were verified against actual images of daylight skies. Comparisons were also made with the results of previous methods to showcase the proposed model's strengths and advantages over its peers.
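The single-density simplification criticised above can be illustrated with a small numerical sketch (all constants here are illustrative, not the paper's model): the optical depth of a slanted viewing path through an exponentially decaying density profile is noticeably smaller than the uniform-density estimate, which is one source of the visual errors mentioned.

```python
import math

# Hedged sketch (not the paper's analytical model): compare the optical
# depth of a viewing path under two density assumptions.
# rho(h) = exp(-h / H) is the exponential profile; the single-density
# simplification uses the sea-level value everywhere.

H = 8500.0      # assumed scale height in metres (typical Rayleigh value)
BETA = 1.2e-5   # assumed sea-level extinction coefficient, per metre

def optical_depth(h0, path_len, slope, steps=1000):
    """Midpoint-rule integration of extinction along a straight path that
    starts at altitude h0 and gains `slope` metres of height per metre."""
    ds = path_len / steps
    tau = 0.0
    for i in range(steps):
        s = (i + 0.5) * ds
        h = h0 + slope * s
        tau += BETA * math.exp(-h / H) * ds
    return tau

# A 20 km path climbing gently out of the dense lower atmosphere.
tau_profile = optical_depth(h0=0.0, path_len=20000.0, slope=0.1)
tau_uniform = BETA * 20000.0  # single-density assumption

# Transmittance T = exp(-tau): the single-density model over-attenuates.
print(f"profile: {math.exp(-tau_profile):.4f}  uniform: {math.exp(-tau_uniform):.4f}")
```

Because density falls off with altitude, the uniform-density model overestimates extinction on any upward-looking path, which skews aerial-perspective colours.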
Computational periscopy with an ordinary digital camera
Computing the amounts of light arriving from different directions enables a diffusely reflecting surface to play the part of a mirror in a periscope, that is, to perform non-line-of-sight imaging around an obstruction. Because computational periscopy has so far depended on light-travel distances being proportional to times of flight, it has mostly been performed with expensive, specialized ultrafast optical systems^1,2,3,4,5,6,7,8,9,10,11,12. Here we introduce a two-dimensional computational-periscopy technique that requires only a single photograph captured with an ordinary digital camera. Our technique recovers the position of an opaque object and the scene behind (but not completely obscured by) the object, when both the object and the scene are outside the camera's line of sight, without requiring controlled or time-varying illumination. Such recovery is based on the visible penumbra of the opaque object having a linear dependence on the hidden scene that can be modelled through ray optics. Non-line-of-sight imaging using inexpensive, ubiquitous equipment may have considerable value in monitoring hazardous environments, navigation and detecting hidden adversaries.
We thank F. Durand, W. T. Freeman, Y. Ma, J. Rapp, J. H. Shapiro, A. Torralba, F. N. C. Wong and G. W. Wornell for discussions. This work was supported by the Defense Advanced Research Projects Agency (DARPA) REVEAL Program, contract number HR0011-16-C-0030.
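The linear-dependence claim can be sketched in one dimension (a toy, not the authors' 2-D implementation; the geometry, occluder extent, and visibility test are invented for illustration): each wall pixel sums contributions from hidden-scene patches whose rays the occluder does not block, so the photograph is a linear function of the scene and can be inverted by least squares.

```python
import numpy as np

# Hedged 1-D toy of the linear penumbra model: the measured penumbra is
# b = A @ x, where x is the hidden scene and A encodes which scene-patch
# to wall-pixel rays the opaque occluder blocks (positions are made up).

rng = np.random.default_rng(0)
n_pix, n_scene = 64, 16

pix = np.linspace(0.0, 1.0, n_pix)     # wall pixel positions
scn = np.linspace(0.0, 1.0, n_scene)   # hidden-scene patch positions
occ_lo, occ_hi = 0.35, 0.55            # assumed occluder extent

# Crude ray test: a ray is blocked when its midpoint falls inside the
# occluder interval; unblocked rays contribute with unit weight.
mid = (pix[:, None] + scn[None, :]) / 2.0
A = ((mid < occ_lo) | (mid > occ_hi)).astype(float)

x_true = rng.random(n_scene)   # hidden scene intensities
b = A @ x_true                 # the single "photograph" of the penumbra

# One linear inversion recovers the hidden scene from that photograph.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

The real method additionally estimates the occluder position (and hence A) from the same photograph; the sketch assumes A is known.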
Using machine learning to optimize phase contrast in a low-cost cellphone microscope
Cellphones equipped with high-quality cameras and powerful CPUs as well as GPUs are widespread. This opens new prospects for using such existing computational and imaging resources to perform medical diagnosis in developing countries at very low cost.
Many relevant samples, like biological cells or waterborne parasites, are almost fully transparent. As they do not exhibit absorption but alter only the light's phase, they are almost invisible in brightfield microscopy, and expensive equipment and procedures for microscopic contrasting or sample staining are often not available.
By applying machine-learning techniques, such as a convolutional neural network (CNN), it is possible to learn from a given dataset a relationship between the samples to be examined and their optimal light-source shapes, in order to increase, e.g., phase contrast, and to enable real-time applications. For the experimental setup, we developed a 3D-printed smartphone microscope for less than $100 using only off-the-shelf components, such as a low-cost video projector. The fully automated system ensures true Koehler illumination, with an LCD as the condenser aperture and a reversed smartphone lens as the microscope objective. We show that a varied light-source shape, chosen by the pre-trained CNN, not only improves the phase contrast but also gives the impression of improved optical resolution without adding any special optics, as demonstrated by measurements.
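The core idea, picking the illumination shape that maximises contrast for a phase-only sample, can be shown with a drastically simplified stand-in for the paper's CNN (the forward model below is a crude differential-phase-contrast approximation with invented constants, and the "learning" is replaced by scoring a handful of candidate source shapes):

```python
import numpy as np

# Hedged sketch: symmetric illumination of a phase-only sample yields a
# flat brightfield image; an asymmetric (half-disc) source renders the
# phase gradient along its asymmetry axis as intensity. We score a few
# candidate LCD source shapes by the contrast they produce and keep the
# best, standing in for the paper's trained CNN.

rng = np.random.default_rng(1)
phase = np.cumsum(rng.standard_normal((32, 32)), axis=1)  # toy phase map
phase /= np.abs(phase).max()

def simulate(asym_x, asym_y):
    """Toy image: unit background plus asymmetry-weighted phase gradients.
    (asym_x, asym_y) = (0, 0) is a symmetric source, so no contrast."""
    gy, gx = np.gradient(phase)
    return 1.0 + asym_x * gx + asym_y * gy

def contrast(img):
    return img.std() / img.mean()

# Candidate source shapes, parameterised by their asymmetry (made up).
candidates = {"full": (0.0, 0.0), "left-half": (0.5, 0.0),
              "top-half": (0.0, 0.5), "diag": (0.35, 0.35)}
scores = {name: contrast(simulate(ax, ay)) for name, (ax, ay) in candidates.items()}
best = max(scores, key=scores.get)
```

The trained CNN replaces this exhaustive scoring with a single forward pass that maps a captured sample image directly to a source shape, which is what makes real-time use possible.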
Navigation domain representation for interactive multiview imaging
Enabling users to interactively navigate through different viewpoints of a
static scene is an interesting new functionality in 3D streaming systems. While
it opens exciting perspectives towards rich multimedia applications, it
requires the design of novel representations and coding techniques in order to
solve the new challenges imposed by interactive navigation. Interactivity
clearly brings new design constraints: the encoder is unaware of the exact
decoding process, while the decoder has to reconstruct information from
incomplete subsets of data since the server can generally not transmit images
for all possible viewpoints due to resource constraints. In this paper, we
propose a novel multiview data representation that satisfies the bandwidth
and storage constraints in an interactive multiview streaming system. In
particular, we partition the multiview navigation domain into segments, each of
which is described by a reference image and some auxiliary information. The
auxiliary information enables the client to recreate any viewpoint in the
navigation segment via view synthesis. The decoder is then able to navigate
freely in the segment without further data requests to the server; it requests
additional data only when it moves to a different segment. We discuss the
benefits of this novel representation in interactive navigation systems and
further propose a method to optimize the partitioning of the navigation domain
into independent segments, under bandwidth and storage constraints.
Experimental results confirm the potential of the proposed representation:
our system achieves compression performance similar to classical inter-view
coding, while providing the high level of flexibility required for interactive
streaming. Hence, our new framework represents a promising solution for 3D
data representation in novel interactive multimedia services.
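The partitioning step can be sketched with a small dynamic program (the cost model below is invented for illustration, not the paper's rate model): each segment stores one reference image plus auxiliary information that grows with the segment's span, a client fetching a segment downloads the whole segment, so a bandwidth cap limits segment length, and the DP finds the partition of a 1-D row of viewpoints with minimum total storage under that cap.

```python
# Hedged sketch of navigation-domain partitioning (invented costs):
# minimise total storage for a 1-D row of viewpoints, subject to a cap on
# the data needed to fetch any single segment.

REF_COST = 100.0       # assumed cost of one reference image
AUX_PER_VIEW = 12.0    # assumed auxiliary cost per extra viewpoint
BANDWIDTH_CAP = 200.0  # max data a client may need for one segment

def seg_cost(length):
    """Cost of a segment covering `length` consecutive viewpoints."""
    return REF_COST + AUX_PER_VIEW * (length - 1)

def best_partition(n_views):
    """best[i] = minimum storage for the first i viewpoints."""
    INF = float("inf")
    best = [0.0] + [INF] * n_views
    cut = [0] * (n_views + 1)
    for i in range(1, n_views + 1):
        for j in range(i):
            c = seg_cost(i - j)
            if c <= BANDWIDTH_CAP and best[j] + c < best[i]:
                best[i] = best[j] + c
                cut[i] = j
    # Recover the segment boundaries as (start, end) pairs.
    bounds, i = [], n_views
    while i > 0:
        bounds.append((cut[i], i))
        i = cut[i]
    return best[n_views], bounds[::-1]

total, segments = best_partition(20)
```

Larger segments amortise the reference-image cost over more viewpoints, so the optimum uses the fewest segments the bandwidth cap allows; the paper's optimisation plays the same trade-off over a 2-D navigation domain with real rate measurements.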