A Compressive Multi-Mode Superresolution Display
Compressive displays are an emerging technology exploring the co-design of
new optical device configurations and compressive computation. Previously,
research has shown how to improve the dynamic range of displays and facilitate
high-quality light field or glasses-free 3D image synthesis. In this paper, we
introduce a new multi-mode compressive display architecture that supports
switching between 3D and high dynamic range (HDR) modes as well as a new
super-resolution mode. The proposed hardware consists of readily-available
components and is driven by a novel splitting algorithm that computes the pixel
states from a target high-resolution image. In effect, the display pixels
present a compressed representation of the target image that is perceived as a
single, high-resolution image.
Comment: Technical report
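The abstract's "splitting algorithm" is not specified, but the idea of display pixels forming a compressed representation of a target image can be illustrated with a toy two-layer multiplicative display: factor the target into an outer product of two 1-D pixel-state vectors. The rank-1 SVD factorization below is a minimal sketch of this kind of decomposition, not the authors' algorithm; all names and values are illustrative.

```python
import numpy as np

def split_rank1(target):
    """Factor a target image into two 1-D pixel-state vectors whose outer
    product approximates it -- a toy stand-in for a compressive display's
    layer decomposition (rank-1 truncated SVD)."""
    U, s, Vt = np.linalg.svd(target, full_matrices=False)
    a = U[:, 0] * np.sqrt(s[0])
    b = Vt[0, :] * np.sqrt(s[0])
    # Physical pixel states must be nonnegative transmittances; for a
    # strictly positive target the leading singular vectors are one-signed,
    # so taking absolute values only fixes an overall sign ambiguity.
    return np.abs(a), np.abs(b)

rng = np.random.default_rng(0)
target = np.outer(rng.random(8), rng.random(6))  # an exactly rank-1 "image"
a, b = split_rank1(target)
print(np.allclose(np.outer(a, b), target))       # rank-1 targets are recovered exactly
```

A real compressive display would add hardware constraints (quantized, bounded pixel states) and optimize over multiple frames, but the compression principle is the same: far fewer degrees of freedom than target pixels.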
The Unreasonable Effectiveness of Deep Features as a Perceptual Metric
While it is nearly effortless for humans to quickly assess the perceptual
similarity between two images, the underlying processes are thought to be quite
complex. Despite this, the most widely used perceptual metrics today, such as
PSNR and SSIM, are simple, shallow functions, and fail to account for many
nuances of human perception. Recently, the deep learning community has found
that features of the VGG network trained on ImageNet classification have been
remarkably useful as a training loss for image synthesis. But how perceptual
are these so-called "perceptual losses"? What elements are critical for their
success? To answer these questions, we introduce a new dataset of human
perceptual similarity judgments. We systematically evaluate deep features
across different architectures and tasks and compare them with classic metrics.
We find that deep features outperform all previous metrics by large margins on
our dataset. More surprisingly, this result is not restricted to
ImageNet-trained VGG features, but holds across different deep architectures
and levels of supervision (supervised, self-supervised, or even unsupervised).
Our results suggest that perceptual similarity is an emergent property shared
across deep visual representations.
Comment: Accepted to CVPR 2018; Code and data available at
https://www.github.com/richzhang/PerceptualSimilarit
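The core computation behind such deep-feature perceptual distances can be sketched in a few lines: unit-normalize each spatial feature vector along the channel axis, then average the squared differences over space and sum over channels. This is a minimal unweighted sketch in that spirit; the full metric in the paper also learns per-channel weights and aggregates over several network layers, and the feature arrays here are random placeholders rather than real network activations.

```python
import numpy as np

def feature_distance(f1, f2, eps=1e-10):
    """Unweighted deep-feature distance between two feature maps of shape
    (C, H, W): channel-normalize each spatial location to unit length,
    then average squared differences over space and sum over channels."""
    n1 = f1 / (np.linalg.norm(f1, axis=0, keepdims=True) + eps)
    n2 = f2 / (np.linalg.norm(f2, axis=0, keepdims=True) + eps)
    return ((n1 - n2) ** 2).mean(axis=(1, 2)).sum()

rng = np.random.default_rng(1)
feats = rng.standard_normal((64, 8, 8))   # stand-in for one layer's activations
print(feature_distance(feats, feats))     # identical features -> 0.0
print(feature_distance(feats, -feats) > 0)  # differing features -> positive
```

The channel normalization is what makes the comparison about feature *direction* rather than magnitude, which is one reason the same recipe transfers across architectures and supervision levels.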
High Resolution Linear Polarimetric Imaging for the Event Horizon Telescope
Images of the linear polarization of synchrotron radiation around Active
Galactic Nuclei (AGN) identify their projected magnetic field lines and provide
key data for understanding the physics of accretion and outflow from
supermassive black holes. The highest resolution polarimetric images of AGN are
produced with Very Long Baseline Interferometry (VLBI). Because VLBI
incompletely samples the Fourier transform of the source image, any image
reconstruction that fills in unmeasured spatial frequencies will not be unique,
and reconstruction algorithms are required. In this paper, we explore
extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI
imaging. In contrast to previous work, our polarimetric MEM algorithm combines
a Stokes I imager that uses only bispectrum measurements that are immune to
atmospheric phase corruption with a joint Stokes Q and U imager that operates
on robust polarimetric ratios. We demonstrate the effectiveness of our
technique on 7- and 3-mm wavelength quasar observations from the VLBA and
simulated 1.3-mm Event Horizon Telescope observations of Sgr A* and M87.
Consistent with past studies, we find that polarimetric MEM can produce
superior resolution compared to the standard CLEAN algorithm when imaging
smooth and compact source distributions. As an imaging framework, MEM is highly
adaptable, allowing a range of constraints on polarization structure.
Polarimetric MEM is thus an attractive choice for image reconstruction with the
EHT.
Comment: 19 pages, 9 figures. Accepted for publication in ApJ. Imaging code
available at https://github.com/achael/eht-imaging
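The abstract's claim that bispectrum measurements are "immune to atmospheric phase corruption" follows from a short algebraic fact: an unknown phase at each station enters each baseline's visibility as a phase *difference*, and those differences cancel around a closed triangle. The sketch below demonstrates this with synthetic complex visibilities (all values illustrative; this is not code from the eht-imaging package).

```python
import numpy as np

rng = np.random.default_rng(2)

def cvis():
    # A synthetic complex visibility (illustrative values only).
    return rng.standard_normal() + 1j * rng.standard_normal()

# True visibilities on a closed triangle of baselines (1-2, 2-3, 3-1).
V12, V23, V31 = cvis(), cvis(), cvis()

# Atmospheric turbulence adds an unknown phase phi_i at each station,
# corrupting each baseline as V_ij -> exp(i(phi_i - phi_j)) * V_ij.
phi = rng.uniform(0.0, 2.0 * np.pi, size=3)
C12 = np.exp(1j * (phi[0] - phi[1])) * V12
C23 = np.exp(1j * (phi[1] - phi[2])) * V23
C31 = np.exp(1j * (phi[2] - phi[0])) * V31

# The bispectrum (triple product around the closed loop) is unchanged:
# (phi0-phi1) + (phi1-phi2) + (phi2-phi0) = 0, so the phases cancel.
print(np.allclose(C12 * C23 * C31, V12 * V23 * V31))   # True
```

This is why a Stokes I imager can fit bispectra directly instead of baseline phases: the data products carry only source structure, at the cost of a reduced and nonlinear set of constraints.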