Selecting texture resolution using a task-specific visibility metric
In real-time rendering, the appearance of scenes is greatly affected by the quality and resolution of the textures used for image
synthesis. At the same time, the size of textures determines the performance and the memory requirements of rendering. As a
result, finding the optimal texture resolution is critical, but also a non-trivial task since the visibility of texture imperfections
depends on underlying geometry, illumination, interactions between several texture maps, and viewing positions. Ideally, we
would like to automate the task with a visibility metric, which could predict the optimal texture resolution. To maximize the
performance of such a metric, it should be trained on a given task. This, however, requires sufficient user data which is often
difficult to obtain. To address this problem, we develop a procedure for training an image visibility metric for a specific task
while reducing the effort required to collect new data. The procedure involves generating a large dataset using an existing
visibility metric followed by refining that dataset with the help of an efficient perceptual experiment. Then, such a refined
dataset is used to retune the metric. This way, we expand sparse perceptual data into a large number of per-pixel-annotated
visibility maps which serve as the training data for application-specific visibility metrics. While our approach is general and
can potentially be applied to different image distortions, we demonstrate an application in a game engine, where we optimize
the resolution of various textures, such as albedo and normal maps.
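The data-augmentation procedure described above lends itself to a compact sketch. The Python fragment below is only an illustration of the idea, not the authors' implementation; `existing_metric`, `trainable_metric`, and the per-pair correction factors are hypothetical placeholders.

```python
# Sketch: label many reference/distorted pairs with an existing visibility metric,
# refine those labels with sparse perceptual measurements, then retune a trainable metric.
import torch

def build_training_set(pairs, existing_metric, user_corrections):
    """pairs: list of (reference, distorted) image tensors.
    user_corrections: sparse per-pair scaling factors from a perceptual experiment."""
    dataset = []
    for i, (ref, dist) in enumerate(pairs):
        vis_map = existing_metric(ref, dist)                 # dense per-pixel detection probabilities
        vis_map = vis_map * user_corrections.get(i, 1.0)     # refine with sparse user data where available
        dataset.append((ref, dist, vis_map.clamp(0.0, 1.0)))
    return dataset

def retune(trainable_metric, dataset, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(trainable_metric.parameters(), lr=lr)
    for _ in range(epochs):
        for ref, dist, target in dataset:
            pred = trainable_metric(ref, dist)               # predicted per-pixel visibility map
            loss = torch.nn.functional.binary_cross_entropy(pred, target)
            opt.zero_grad(); loss.backward(); opt.step()
    return trainable_metric
```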
The IPAC Image Subtraction and Discovery Pipeline for the intermediate Palomar Transient Factory
We describe the near real-time transient-source discovery engine for the
intermediate Palomar Transient Factory (iPTF), currently in operations at the
Infrared Processing and Analysis Center (IPAC), Caltech. We coin this system
the IPAC/iPTF Discovery Engine (or IDE). We review the algorithms used for
PSF-matching, image subtraction, detection, photometry, and machine-learned
(ML) vetting of extracted transient candidates. We also review the performance
of our ML classifier. For a limiting signal-to-noise ratio of 4 in relatively
unconfused regions, "bogus" candidates from processing artifacts and imperfect
image subtractions outnumber real transients by ~ 10:1. This can be
considerably higher for image data with inaccurate astrometric and/or
PSF-matching solutions. Despite this occasionally high contamination rate, the
ML classifier is able to identify real transients with an efficiency (or
completeness) of ~ 97% for a maximum tolerable false-positive rate of 1% when
classifying raw candidates. All subtraction-image metrics, source features, ML
probability-based real-bogus scores, contextual metadata from other surveys,
and possible associations with known Solar System objects are stored in a
relational database for retrieval by the various science working groups. We
review our efforts in mitigating false-positives and our experience in
optimizing the overall system in response to the multitude of science projects
underway with iPTF.
Comment: 66 pages, 21 figures, 7 tables, accepted by PAS
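As an illustration of how such a real-bogus classifier is operated, the sketch below picks a score threshold that keeps the false-positive rate on bogus candidates at or below a target (e.g. 1%) and reports the resulting efficiency on real transients. It is a hedged example with hypothetical inputs, not IPAC/iPTF code.

```python
# Sketch: choose a real-bogus threshold for a target false-positive rate on a validation set.
import numpy as np

def threshold_for_fpr(scores, labels, max_fpr=0.01):
    """scores: classifier real-bogus scores; labels: 1 = real transient, 0 = bogus."""
    bogus_scores = np.sort(scores[labels == 0])
    # smallest threshold such that the fraction of bogus candidates at or above it is <= max_fpr
    k = int(np.ceil((1.0 - max_fpr) * len(bogus_scores)))
    thresh = bogus_scores[min(k, len(bogus_scores) - 1)]
    real_scores = scores[labels == 1]
    efficiency = float(np.mean(real_scores >= thresh))   # completeness on real transients
    return thresh, efficiency
```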
Dataset and metrics for predicting local visible differences
A large number of imaging and computer graphics applications require localized information on the visibility of image distortions. Existing image quality metrics are not suitable for this task as they provide a single quality value per image. Existing visibility metrics produce visual difference maps and are specifically designed for detecting just-noticeable distortions, but their predictions are often inaccurate. In this work, we argue that the key reason for this problem is the lack of large image collections with a good coverage of possible distortions that occur in different applications. To address the problem, we collect an extensive dataset of reference and distorted image pairs together with user markings indicating whether distortions are visible or not. We propose a statistical model that is designed for the meaningful interpretation of such data, which is affected by visual search and imprecision of manual marking. We use our dataset for training existing metrics and we demonstrate that their performance significantly improves. We show that our dataset with the proposed statistical model can be used to train a new CNN-based metric, which outperforms the existing solutions. We demonstrate the utility of such a metric in visually lossless JPEG compression, super-resolution, and watermarking.
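For concreteness, a CNN-based visibility metric of the kind described above can be sketched as a small fully convolutional network that maps a reference/distorted pair to a per-pixel probability-of-detection map. The architecture below is an illustrative assumption, not the one proposed in the paper.

```python
# Sketch: fully convolutional visibility metric returning a per-pixel detection probability map.
import torch
import torch.nn as nn

class VisibilityCNN(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(),    # 6 = RGB reference + RGB distorted
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid()  # per-pixel probability of detection
        )

    def forward(self, reference, distorted):
        return self.net(torch.cat([reference, distorted], dim=1))

# usage: prob_map = VisibilityCNN()(ref_batch, dist_batch)  # shape (N, 1, H, W)
```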
Visibility metrics and their applications in visually lossless image compression
Visibility metrics are image metrics that predict the probability that a human observer can detect differences between a pair of images. These metrics can provide localized information in the form of visibility maps, in which each value represents a probability of detection. An important application of visibility metrics is visually lossless image compression, which aims at compressing a given image to the lowest number of bits per pixel while keeping the compression artifacts invisible.
In previous work, most visibility metrics were built on greatly simplified assumptions and mathematical models of the human visual system. This approach generally fits experimental data measured with simple stimuli, such as Gabor patches, well. However, it cannot accurately predict complex non-linear effects, such as contrast masking in natural images. To predict the visibility of image differences accurately, we collected the largest visibility dataset under fixed viewing conditions for calibrating existing visibility metrics and proposed a deep neural network-based visibility metric. Our experiments demonstrated that the deep neural network-based visibility metric significantly outperforms existing visibility metrics.
However, the deep neural network-based visibility metric cannot predict visibility under varying viewing conditions, such as display brightness and viewing distance, which strongly affect the visibility of distortions. To extend the metric to varying viewing conditions, we collected the largest visibility dataset spanning different display brightness levels and viewing distances. We proposed incorporating white-box modules, namely luminance masking and viewing-distance adaptation, into the black-box deep neural network, and we found that this combination of white-box modules and a black-box deep neural network generalizes the proposed visibility metric to varying viewing conditions.
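The interplay of white-box modules and a black-box network can be sketched as follows. The gamma/log luminance encoding stands in for the luminance-masking module, the fixed-pixels-per-degree resampling stands in for viewing-distance adaptation, and `cnn` is any trained visibility network; all of these are illustrative assumptions, not the thesis code.

```python
# Sketch: normalize inputs for viewing conditions with interpretable steps, then apply a learned CNN.
import torch
import torch.nn.functional as F

def luminance_encoding(img, peak_luminance_cd_m2):
    # white-box step 1: map normalized pixel values (NCHW) to display luminance (gamma 2.2 assumed)
    # and compress with a log to approximate reduced sensitivity at low luminance
    luminance = peak_luminance_cd_m2 * img.clamp(0, 1) ** 2.2
    return torch.log2(luminance + 1e-4)

def viewing_distance_adaptation(img, pixels_per_degree, reference_ppd=60.0):
    # white-box step 2: rescale so that one pixel always spans the same visual angle
    scale = reference_ppd / pixels_per_degree
    return F.interpolate(img, scale_factor=scale, mode="bilinear", align_corners=False)

def predict_visibility(cnn, reference, distorted, peak_luminance_cd_m2, pixels_per_degree):
    ref = viewing_distance_adaptation(luminance_encoding(reference, peak_luminance_cd_m2), pixels_per_degree)
    dist = viewing_distance_adaptation(luminance_encoding(distorted, peak_luminance_cd_m2), pixels_per_degree)
    return cnn(ref, dist)   # black-box CNN operates on viewing-condition-normalized inputs
```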
To demonstrate the application of our deep neural network-based visibility metric to visually lossless image compression, we collected a visually lossless image compression dataset under fixed viewing conditions and significantly improved the metric's accuracy in predicting the visually lossless compression threshold by pre-training it on a synthetic dataset generated by a state-of-the-art white-box visibility metric, HDR-VDP [Mantiuk et al. 2011]. In a large-scale study of 1000 images, we found that, with our improved visibility metric, encoding at the predicted threshold saves around 60% to 70% of the bits compared to the default visually lossless quality level of 90.
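A minimal sketch of how a visibility metric can drive visually lossless compression: binary-search the JPEG quality for the lowest setting whose predicted detection probabilities stay below a tolerance. The metric interface, the 0.5 tolerance, and the assumption that visibility decreases monotonically with quality are illustrative; this is not the thesis pipeline.

```python
# Sketch: find the lowest JPEG quality whose artifacts a visibility metric predicts to be invisible.
import io
import numpy as np
from PIL import Image

def visually_lossless_quality(image: Image.Image, metric, p_max=0.5):
    """metric(ref, dist) -> per-pixel detection-probability map (numpy array); image assumed RGB."""
    ref = np.asarray(image, dtype=np.float32) / 255.0
    lo, hi = 1, 100                                   # JPEG quality range
    while lo < hi:
        q = (lo + hi) // 2
        buf = io.BytesIO()
        image.save(buf, format="JPEG", quality=q)
        buf.seek(0)
        dist = np.asarray(Image.open(buf), dtype=np.float32) / 255.0
        if metric(ref, dist).max() <= p_max:          # artifacts predicted invisible at quality q
            hi = q                                    # try a lower (cheaper) quality
        else:
            lo = q + 1
    return lo
```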
Because predicting image visibility and predicting image quality are closely related research topics, we also proposed a trained perceptually uniform transform for high dynamic range image and video quality assessment by training a perceptual encoding function on a set of subjective quality assessment datasets. We have shown that combining the trained perceptual encoding function with standard dynamic range image quality metrics, such as peak signal-to-noise ratio (PSNR), achieves better performance than the untrained version.
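The combination of a perceptual encoding with PSNR can be written compactly. The log curve below is only a placeholder for the trained perceptually uniform transform, and the 10,000 cd/m² normalization is an assumption, not a value from the thesis.

```python
# Sketch: apply a perceptual encoding to HDR luminance before computing a standard PSNR.
import numpy as np

def perceptual_encoding(luminance_cd_m2):
    # placeholder for the trained perceptually uniform transform
    return 255.0 * np.log2(1.0 + luminance_cd_m2) / np.log2(1.0 + 1e4)

def pu_psnr(ref_luminance, test_luminance):
    ref, test = perceptual_encoding(ref_luminance), perceptual_encoding(test_luminance)
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```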
Holistic recommender systems for software engineering
The knowledge possessed by developers is often not sufficient to overcome a programming problem. Short of talking to teammates, when available, developers often gather additional knowledge from development artifacts (e.g., project documentation), as well as online resources. The web has become an essential component in the modern developer's daily life, providing a plethora of information from sources like forums, tutorials, Q&A websites, API documentation, and even video tutorials. Recommender Systems for Software Engineering (RSSE) provide developers with assistance to navigate the information space, automatically suggest useful items, and reduce the time required to locate the needed information. Current RSSEs consider development artifacts as containers of homogeneous information in the form of pure text. However, text is a means to represent heterogeneous information provided by, for example, natural language, source code, interchange formats (e.g., XML, JSON), and stack traces. Interpreting the information from a purely textual point of view misses the intrinsic heterogeneity of the artifacts, thus leading to a reductionist approach. We propose the concept of Holistic Recommender Systems for Software Engineering (H-RSSE), i.e., RSSEs that go beyond the textual interpretation of the information contained in development artifacts. Our thesis is that modeling and aggregating information in a holistic fashion enables novel and advanced analyses of development artifacts. To validate our thesis, we developed a framework to extract, model, and analyze information contained in development artifacts in a reusable meta-information model. We show how RSSEs benefit from a meta-information model, since it enables customized and novel analyses built on top of our framework. The information can thus be reinterpreted from a holistic point of view, preserving its multi-dimensionality and opening the path towards the concept of holistic recommender systems for software engineering.
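As a toy illustration of what a meta-information model might look like (hypothetical names, not the thesis framework), the sketch below keeps artifact fragments typed by kind so that a recommender can analyze each dimension separately rather than treating the artifact as homogeneous text.

```python
# Sketch: a typed model of heterogeneous development-artifact fragments.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional

class FragmentKind(Enum):
    NATURAL_LANGUAGE = auto()
    SOURCE_CODE = auto()
    STACK_TRACE = auto()
    STRUCTURED_DATA = auto()          # e.g. XML / JSON payloads

@dataclass
class Fragment:
    kind: FragmentKind
    content: str
    language: Optional[str] = None    # e.g. "java" for code fragments

@dataclass
class Artifact:
    source: str                        # e.g. a Q&A page URL or a file path
    fragments: List[Fragment] = field(default_factory=list)

    def of_kind(self, kind: FragmentKind) -> List[Fragment]:
        # a recommender can query each information dimension separately
        return [f for f in self.fragments if f.kind == kind]
```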
Soccer on Your Tabletop
We present a system that transforms a monocular video of a soccer game into a
moving 3D reconstruction, in which the players and field can be rendered
interactively with a 3D viewer or through an Augmented Reality device. At the
heart of our paper is an approach to estimate the depth map of each player,
using a CNN that is trained on 3D player data extracted from soccer video
games. We compare with state-of-the-art body pose and depth estimation
techniques, and show results on both synthetic ground truth benchmarks, and
real YouTube soccer footage.
Comment: CVPR'18. Project: http://grail.cs.washington.edu/projects/soccer
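As a geometric aside, placing a player with an estimated depth map into a 3D or AR viewer amounts to back-projecting the masked pixels through a pinhole camera model. The sketch below is an assumed illustration with generic intrinsics, not the paper's pipeline.

```python
# Sketch: convert a per-player depth map into a 3D point cloud in camera coordinates.
import numpy as np

def depth_to_points(depth, mask, fx, fy, cx, cy):
    """depth: HxW metric depth for one player; mask: HxW boolean player segmentation;
    fx, fy, cx, cy: pinhole camera intrinsics."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx              # back-project pixel columns
    y = (v - cy) * z / fy              # back-project pixel rows
    return np.stack([x, y, z], axis=1) # (N, 3) points ready for meshing or AR placement
```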
CHORUS Deliverable 2.1: State of the Art on Multimedia Search Engines
Based on the information provided by European projects and national initiatives related to multimedia search, as well as domain experts who participated in the CHORUS Think-tanks and workshops, this document reports on the state of the art in multimedia content search from a technical and socio-economic perspective.
The technical perspective includes an up-to-date view on content-based indexing and retrieval technologies, multimedia search in the context of mobile devices and peer-to-peer networks, and an overview of current evaluation and benchmark initiatives to measure the performance of multimedia search engines.
From a socio-economic perspective, we take stock of the impact and legal consequences of these technical advances and point out future directions of research.