Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging
Many graphics and vision problems are naturally expressed as optimizations with either linear or non-linear least squares objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance in interactive applications. We propose a new language, Opt (available at http://optlang.org), in which a user simply writes energy functions over image- or graph-structured unknowns, and a compiler automatically generates state-of-the-art GPU optimization kernels. The end result is a system in which real-world energy functions in graphics and vision applications are expressible in tens of lines of code. They compile directly into highly optimized GPU solver implementations whose performance is competitive with the best published hand-tuned, application-specific GPU solvers, and 1-2 orders of magnitude faster than a general-purpose auto-generated solver.
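To make the kind of objective Opt targets concrete, the sketch below writes a small image-denoising energy as a sum of squared residuals and hands it to a general-purpose CPU least-squares solver, the baseline class the abstract compares against. This is plain NumPy/SciPy, not Opt's own syntax; the energy, grid size, and weights are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy denoising energy over an image-structured unknown u (an H x W grid):
#   E(u) = sum_p (u_p - f_p)^2  +  lam * sum_{p,q neighbours} rho(u_p - u_q)
# with rho(d) = d^2 / sqrt(1 + (d/delta)^2), a robust penalty that makes this a
# genuinely non-linear least-squares problem.
H, W, lam, delta = 16, 16, 0.5, 0.1
f = np.random.rand(H, W)                      # noisy input image (placeholder data)

def robust_res(d):
    # residual whose square equals rho(d)
    return d / (1.0 + (d / delta) ** 2) ** 0.25

def residuals(u_flat):
    u = u_flat.reshape(H, W)
    data = (u - f).ravel()                                   # data-fit term
    dx = robust_res(u[:, 1:] - u[:, :-1]).ravel()            # horizontal smoothness
    dy = robust_res(u[1:, :] - u[:-1, :]).ravel()            # vertical smoothness
    return np.concatenate([data, np.sqrt(lam) * dx, np.sqrt(lam) * dy])

u_denoised = least_squares(residuals, f.ravel()).x.reshape(H, W)
```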
A Library for Declarative Resolution-Independent 2D Graphics
The design of most 2D graphics frameworks has been guided by what the computer can draw efficiently, rather than by how graphics can best be expressed and composed. As a result, such frameworks restrict expressivity by providing a limited set of shape primitives, a limited set of textures, and only affine transformations. For example, non-affine transformations can only be added by invasive modification or complex tricks rather than by simple composition. More general frameworks exist, but they make it harder to describe and analyze shapes. We present a new declarative approach to resolution-independent 2D graphics that generalizes and simplifies the functionality of traditional frameworks while preserving their efficiency. As a real-world example, we show the implementation of a form of focus+context lenses that gives better image quality and better performance than the state-of-the-art solution in a fraction of the code. Our approach can serve as a versatile foundation for the creation of advanced graphics and higher-level frameworks.
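The core idea behind such a declarative, resolution-independent formulation can be sketched in a few lines: treat a shape (or image) as a function of a point, so that arbitrary transformations, including non-affine ones, and combinations are ordinary function composition, and rasterization at any resolution is just sampling. The sketch below is a hypothetical illustration of that idea, not the library's actual API.

```python
import math

# A "shape" is a function from a point (x, y) to coverage (bool); an "image" would
# map points to colours instead. Everything below is composition of such functions.
def disk(r):
    return lambda x, y: x * x + y * y <= r * r

def translate(shape, dx, dy):
    return lambda x, y: shape(x - dx, y - dy)

def union(a, b):
    return lambda x, y: a(x, y) or b(x, y)

def swirl(shape, strength):
    # a non-affine transform, added by plain composition rather than framework changes
    def warped(x, y):
        a = strength * math.hypot(x, y)
        c, s = math.cos(a), math.sin(a)
        return shape(c * x - s * y, s * x + c * y)
    return warped

def rasterize(shape, width, height, scale):
    # sample at any resolution: the description itself is resolution-independent
    return [[shape((i - width / 2) / scale, (j - height / 2) / scale)
             for i in range(width)] for j in range(height)]

scene = swirl(union(disk(1.0), translate(disk(0.5), 1.2, 0.0)), 0.8)
pixels = rasterize(scene, 64, 64, 20.0)
```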
Temporal light field reconstruction for rendering distribution effects
Traditionally, effects that require evaluating multidimensional integrals for each pixel, such as motion blur, depth of field, and soft shadows, suffer from noise due to the variance of the high-dimensional integrand. In this paper, we describe a general reconstruction technique that exploits the anisotropy in the temporal light field and permits efficient reuse of samples between pixels, multiplying the effective sampling rate by a large factor. We show that our technique can be applied in situations that are challenging or impossible for previous anisotropic reconstruction methods, and that it can yield good results with very sparse inputs. We demonstrate our method for simultaneous motion blur, depth of field, and soft shadows.
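As a rough illustration of the sample-reuse idea (a deliberately simplified 1D sketch under assumed linear motion, not the paper's actual reconstruction filter), each sample can be reprojected along its motion trajectory to the reconstruction location, so a pixel can reuse samples gathered at other positions and times; all names and parameters below are illustrative.

```python
import numpy as np

# One sample of the temporal light field: screen position x0 taken at time t,
# radiance L, and screen-space velocity v (assumed constant over the shutter).
# The trajectory x(t') = x0 + v * (t' - t) is the anisotropy the filter follows.
def reconstruct(samples, x_pixel, t_eval, radius=1.0):
    acc, wsum = 0.0, 0.0
    for x0, t, L, v in samples:
        x_at_eval = x0 + v * (t_eval - t)                    # reproject along the trajectory
        w = np.exp(-((x_at_eval - x_pixel) / radius) ** 2)   # weight by reprojected distance
        acc, wsum = acc + w * L, wsum + w
    return acc / wsum if wsum > 0 else 0.0

# Motion blur for one pixel: average reconstructions over the shutter interval,
# reusing every sample (including those from neighbouring pixels) at every time.
def motion_blurred_pixel(samples, x_pixel, n_times=16):
    times = np.linspace(0.0, 1.0, n_times)
    return np.mean([reconstruct(samples, x_pixel, t) for t in times])
```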
Decoupling algorithms from schedules for easy optimization of image processing pipelines
Using existing programming tools, writing high-performance image processing code requires sacrificing readability, portability, and modularity. We argue that this is a consequence of conflating the computations that define the algorithm with decisions about storage and the order of computation. We refer to these latter two concerns as the schedule, including choices of tiling, fusion, recomputation vs. storage, vectorization, and parallelism.
We propose a representation for feed-forward imaging pipelines that separates the algorithm from its schedule, enabling high-performance without sacrificing code clarity. This decoupling simplifies the algorithm specification: images and intermediate buffers become functions over an infinite integer domain, with no explicit storage or boundary conditions. Imaging pipelines are compositions of functions. Programmers separately specify scheduling strategies for the various functions composing the algorithm, which allows them to efficiently explore different optimizations without changing the algorithmic code.
We demonstrate the power of this representation by expressing a range of recent image processing applications in an embedded domain-specific language called Halide, and compiling them for ARM, x86, and GPUs. Our compiler targets SIMD units, multiple cores, and complex memory hierarchies. We demonstrate that it can handle algorithms such as a camera raw pipeline, the bilateral grid, fast local Laplacian filtering, and image segmentation. The algorithms expressed in our language are both shorter and faster than state-of-the-art implementations.
Funding: National Science Foundation (U.S.) Grants 0964004, 0964218, and 0832997; United States Dept. of Energy Award DE-SC0005288; Cognex Corporation; Adobe Systems.
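A minimal sketch of the algorithm/schedule split, written against the Halide Python bindings (the paper presents Halide as a DSL embedded in C++; the binding names used here are an assumption and may differ slightly): the pipeline definition says nothing about storage or traversal order, and only the two schedule lines at the end change when retuning for a new target.

```python
import halide as hl

x, y, xi, yi = hl.Var("x"), hl.Var("y"), hl.Var("xi"), hl.Var("yi")
inp = hl.ImageParam(hl.Float(32), 2)

# Algorithm: a 3x3 box blur as two pure functions over an infinite integer domain.
blur_x, blur_y = hl.Func("blur_x"), hl.Func("blur_y")
blur_x[x, y] = (inp[x - 1, y] + inp[x, y] + inp[x + 1, y]) / 3
blur_y[x, y] = (blur_x[x, y - 1] + blur_x[x, y] + blur_x[x, y + 1]) / 3

# Schedule: tiling, vectorization, parallelism, and where blur_x is (re)computed.
# Changing these lines changes performance, never the result.
blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y)
blur_x.compute_at(blur_y, x).vectorize(x, 8)
```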
Direct measurement of stellar angular diameters by the VERITAS Cherenkov Telescopes
The angular size of a star is a critical factor in determining its basic properties. Direct measurement of stellar angular diameters is difficult: at interstellar distances stars are generally too small to be resolved by any individual imaging telescope. This fundamental limitation can be overcome by studying the diffraction pattern in the shadow cast when an asteroid occults a star, but only when the photometric uncertainty is smaller than the noise added by atmospheric scintillation. Atmospheric Cherenkov telescopes used for particle astrophysics observations have not generally been exploited for optical astronomy due to the modest optical quality of the mirror surface. However, their large mirror area makes them well suited for such high-time-resolution precision photometry measurements. Here we report two occultations of stars observed by the VERITAS Cherenkov telescopes with millisecond sampling, from which we are able to provide a direct measurement of the occulted stars' angular diameters at the milliarcsecond scale. This is a resolution never before achieved with optical measurements and represents an order-of-magnitude improvement over the equivalent lunar occultation method. We compare the resulting stellar radii with empirically derived estimates from temperature and brightness measurements, confirming that the latter can be biased for stars with ambiguous stellar classifications.
Comment: Accepted for publication in Nature Astronomy.
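The measurement relies on a classical forward model: the Fresnel diffraction pattern of a point source at a straight edge (the asteroid's limb), smeared by the star's finite angular size. A hedged sketch of that model is below; fitting it to the millisecond-sampled light curve is what constrains the angular diameter. The uniform-disk smearing and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.special import fresnel

# Fresnel diffraction of a point source at a straight edge (the asteroid's limb):
# I/I0 = 0.5 * ((C(w) + 0.5)^2 + (S(w) + 0.5)^2), with w the dimensionless
# Fresnel coordinate (w -> -inf: full shadow, w -> +inf: unobstructed).
def point_source_profile(w):
    S, C = fresnel(w)
    return 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

# A star of finite angular size smears the fringes: average point-source profiles
# shifted across the star's projected extent, weighted here as a uniform disk.
def finite_source_profile(w, smear_width, n=101):
    u = np.linspace(-0.5, 0.5, n)                 # position across the stellar disk
    weights = np.sqrt(1.0 - (2.0 * u) ** 2)       # chord length of a uniform disk
    profiles = np.array([point_source_profile(w - ui * smear_width) for ui in u])
    return np.average(profiles, axis=0, weights=weights)

w = np.linspace(-5.0, 10.0, 2000)
model = finite_source_profile(w, smear_width=2.0)
# Fitting this forward model to the observed light curve yields the angular diameter.
```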
Evidence for proton acceleration up to TeV energies based on VERITAS and Fermi-LAT observations of the Cas A SNR
We present a study of gamma-ray emission from the core-collapse supernova remnant Cas A in the energy range from 0.1 GeV to 10 TeV. We used 65 hours of VERITAS data to cover 200 GeV - 10 TeV, and 10.8 years of Fermi-LAT data to cover 0.1-500 GeV. The spectral analysis of the Fermi-LAT data shows a significant spectral curvature at GeV energies that is consistent with the expected spectrum from pion decay. Above this energy, the joint spectrum from Fermi-LAT and VERITAS deviates significantly from a simple power law and is best described by a power law with an exponential cut-off in the TeV range. These results, along with radio, X-ray, and gamma-ray data, are interpreted in the context of leptonic and hadronic models. Assuming a one-zone model, we exclude a purely leptonic scenario and conclude that proton acceleration up to at least 6 TeV is required to explain the observed gamma-ray spectrum. From modeling of the entire multi-wavelength spectrum, a minimum magnetic field inside the remnant is deduced.
Comment: 33 pages, 9 figures, 6 tables.
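The spectral-shape comparison in the abstract boils down to fitting a power law with an exponential cut-off to the joint flux points and testing it against a simple power law. The sketch below shows such a fit on synthetic data; every number in it is an illustrative placeholder, not a value from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Power law with an exponential cut-off, the shape the joint Fermi-LAT + VERITAS
# spectrum prefers over a simple power law (E in TeV, dN/dE in arbitrary units).
def cutoff_power_law(E, N0, gamma, E_cut):
    return N0 * E ** (-gamma) * np.exp(-E / E_cut)

# Synthetic flux points standing in for a measured spectrum; all values are
# illustrative, not taken from the paper.
rng = np.random.default_rng(0)
E = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 8.0])            # TeV
F = cutoff_power_law(E, 1e-11, 2.2, 2.5) * rng.normal(1.0, 0.1, E.size)
F_err = 0.1 * F

popt, pcov = curve_fit(cutoff_power_law, E, F, sigma=F_err, p0=[1e-11, 2.0, 1.0])
print("index = %.2f, cut-off = %.2f TeV" % (popt[1], popt[2]))
```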
Very-high-energy observations of the binaries V 404 Cyg and 4U 0115+634 during giant X-ray outbursts
Transient X-ray binaries produce major outbursts in which the X-ray flux can increase over the quiescent level by several orders of magnitude. The low-mass X-ray binary V 404 Cyg and the high-mass system 4U 0115+634 underwent such major outbursts in June and October 2015, respectively. We present here observations at energies above hundreds of GeV with the VERITAS observatory taken during some of the brightest X-ray activity ever observed from these systems. No gamma-ray emission was detected by VERITAS in 2.5 hours of observations of the microquasar V 404 Cyg on 2015 June 20-21. The upper limits derived from these observations on the gamma-ray flux above 200 GeV correspond to a tiny fraction of the Eddington luminosity of the system, in stark contrast to that seen in the X-ray band. No gamma rays were detected during observations of 4U 0115+634 in the period of major X-ray activity in October 2015. The flux upper limit derived from our observations for gamma rays above 300 GeV sets an upper limit on the ratio of gamma-ray to X-ray luminosity of less than 4%.
Comment: Accepted for publication in the Astrophysical Journal.
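The Eddington-fraction comparison is a short calculation: convert the integral photon-flux upper limit to an energy flux, scale to a luminosity at the source distance, and divide by the Eddington luminosity for the compact object's mass. The sketch below uses placeholder inputs (mass, distance, flux limit, mean photon energy), not the paper's values.

```python
import numpy as np

L_EDD_PER_MSUN = 1.26e38     # erg/s, Eddington luminosity per solar mass (hydrogen)
KPC_CM = 3.086e21            # cm per kiloparsec
TEV_ERG = 1.602              # erg per TeV

def eddington_luminosity(mass_msun):
    return L_EDD_PER_MSUN * mass_msun

def luminosity_upper_limit(photon_flux_cm2_s, mean_energy_tev, distance_kpc):
    # isotropic luminosity implied by an integral photon-flux limit:
    # L = 4*pi*d^2 * (photon flux * mean photon energy)
    d_cm = distance_kpc * KPC_CM
    energy_flux = photon_flux_cm2_s * mean_energy_tev * TEV_ERG   # erg cm^-2 s^-1
    return 4.0 * np.pi * d_cm ** 2 * energy_flux

# Placeholder inputs (NOT the paper's values): a ~9 M_sun black hole at ~2.4 kpc,
# a photon-flux limit of 1e-12 cm^-2 s^-1 above 200 GeV, mean photon energy ~0.5 TeV.
L_gamma = luminosity_upper_limit(1e-12, 0.5, 2.4)
print("L_gamma < %.2e erg/s = %.1e of L_Edd"
      % (L_gamma, L_gamma / eddington_luminosity(9.0)))
```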