Optimising Spatial and Tonal Data for PDE-based Inpainting
Some recent methods for lossy signal and image compression store only a few
selected pixels and fill in the missing structures by inpainting with a partial
differential equation (PDE). Suitable operators include the Laplacian, the
biharmonic operator, and edge-enhancing anisotropic diffusion (EED). The
quality of such approaches depends substantially on the selection of the data
that is kept. Optimising this data in the domain and codomain gives rise to
challenging mathematical problems that shall be addressed in our work.
In the 1D case, we prove results that provide insights into the difficulty of
this problem, and we give evidence that a splitting into spatial and tonal
(i.e. function value) optimisation hardly deteriorates the results. In the
2D setting, we present generic algorithms that achieve a high reconstruction
quality even if the specified data is very sparse. To optimise the spatial
data, we use a probabilistic sparsification, followed by a nonlocal pixel
exchange that avoids getting trapped in bad local optima. After this spatial
optimisation we perform a tonal optimisation that modifies the function values
in order to reduce the global reconstruction error. For homogeneous diffusion
inpainting, this comes down to a least squares problem for which we prove that
it has a unique solution. We demonstrate that it can be found efficiently with
a gradient descent approach that is accelerated with fast explicit diffusion
(FED) cycles. Our framework allows the desired density of the inpainting mask
to be specified a priori. Moreover, it is more generic than other data
optimisation approaches for the sparse inpainting problem, since it can also be
extended to nonlinear inpainting operators such as EED. This is exploited to
achieve reconstructions with state-of-the-art quality.
We also give an extensive literature survey on PDE-based image compression
methods.
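For homogeneous diffusion inpainting, the reconstruction described above solves the Laplace equation with the stored pixels as Dirichlet data. A minimal 1D sketch in plain NumPy (an illustration only, not the authors' implementation; their framework works in 2D and accelerates the solve with FED cycles):

```python
import numpy as np

def diffusion_inpaint_1d(signal, mask, n_iter=5000):
    """Homogeneous diffusion inpainting in 1D.

    Pixels where mask is True are kept fixed; the rest are filled by
    iterating the discrete heat equation to its steady state, which
    solves the Laplace equation with the kept pixels as Dirichlet data.
    """
    u = np.where(mask, signal, 0.0).astype(float)
    for _ in range(n_iter):
        # one explicit diffusion step with reflecting boundaries
        padded = np.pad(u, 1, mode="edge")
        u_new = 0.5 * (padded[:-2] + padded[2:])
        u = np.where(mask, signal, u_new)
    return u

signal = np.array([1.0, 0.0, 0.0, 0.0, 5.0])
mask = np.array([True, False, False, False, True])
reconstruction = diffusion_inpaint_1d(signal, mask)
# converges to [1, 2, 3, 4, 5]
```

In 1D, the steady state of homogeneous diffusion is just the linear interpolant between the kept pixels, which makes the 1D setting analysed in the paper far more tractable than the 2D one.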
Improved methods and system for watermarking halftone images
Watermarking is becoming increasingly important for content control and authentication. Watermarking seamlessly embeds data in media that provide additional information about that media. Unfortunately, watermarking schemes that have been developed for continuous tone images cannot be directly applied to halftone images. Many of the existing watermarking methods require characteristics that are implicit in continuous tone images, but are absent from halftone images. With this in mind, it seems reasonable to develop watermarking techniques specific to halftones that are equipped to work in the binary image domain. In this thesis, existing techniques for halftone watermarking are reviewed and improvements are developed to increase performance and overcome their limitations. Post-halftone watermarking methods work on existing halftones. Data Hiding Cell Parity (DHCP) embeds data in the parity domain instead of individual pixels. Data Hiding Mask Toggling (DHMT) works by encoding two bits in the 2x2 neighborhood of a pseudorandom location. Dispersed Pseudorandom Generator (DPRG), on the other hand, is a preprocessing step that takes place before image halftoning. DPRG disperses the watermark embedding locations to achieve better visual results. Using the Modified Peak Signal-to-Noise Ratio (MPSNR) metric, the proposed techniques outperform existing methods by 5-20%, depending on the image type and method considered. Field programmable gate arrays (FPGAs) are ideal for solutions that require the flexibility of software, while retaining the performance of hardware. Using VHDL, an FPGA based halftone watermarking engine was designed and implemented for the Xilinx Virtex XCV300. This system was designed for watermarking pre-existing halftones and halftones obtained from grayscale images. This design utilizes 99% of the available FPGA resources and runs at 33 MHz.
Such a design could be applied to a scanner or printer at the hardware level without adversely affecting performance.
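As a toy illustration of the parity-domain idea behind DHCP (hypothetical code, not the thesis's scheme; the actual method also chooses which pixel to toggle so that the visual impact on the halftone is minimal):

```python
import numpy as np

def embed_bit_parity(cell, bit):
    """Embed one bit in the ones-count parity of a binary halftone cell.

    If the cell's parity already matches the bit, nothing changes;
    otherwise a single pixel is toggled, so at most one pixel differs.
    """
    cell = cell.copy()
    if int(cell.sum()) % 2 != bit:
        cell[0, 0] ^= 1  # a real scheme picks the least disruptive pixel
    return cell

def extract_bit_parity(cell):
    """Recover the embedded bit from the cell's parity."""
    return int(cell.sum()) % 2

cell = np.array([[1, 0], [0, 1]])   # a 2x2 binary cell, parity 0
marked = embed_bit_parity(cell, 1)  # toggles one pixel to reach parity 1
# extract_bit_parity(marked) == 1
```

Embedding in the parity of a cell rather than in individual pixels gives the encoder freedom over where the toggle lands, which is exactly what makes the parity domain attractive for binary images.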
Digital Color Imaging
This paper surveys current technology and research in the area of digital
color imaging. In order to establish the background and lay down terminology,
fundamental concepts of color perception and measurement are first presented
using vector-space notation and terminology. Present-day color recording and
reproduction systems are reviewed along with the common mathematical models
used for representing these devices. Algorithms for processing color images for
display and communication are surveyed, and a forecast of research trends is
attempted. An extensive bibliography is provided.
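The vector-space view mentioned above treats a device color as a vector and a linear device model as a matrix. A small sketch using the standard linear-sRGB-to-XYZ matrix (chosen here for illustration; the survey covers general recording and reproduction devices):

```python
import numpy as np

# Linear RGB -> CIE XYZ for the sRGB primaries with a D65 white point.
# In the vector-space formulation, a color is a 3-vector and a linear
# device model is a 3x3 matrix acting on it.
M_RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb):
    """Map a linear RGB vector to XYZ tristimulus values."""
    return M_RGB_TO_XYZ @ np.asarray(rgb, dtype=float)

white = rgb_to_xyz([1.0, 1.0, 1.0])
# maximum RGB maps to the D65 white point; luminance Y is 1.0
```

The same formalism extends to characterizing scanners, displays, and printers, where the matrix is replaced by a measured or fitted device model.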
Innovations in the Analysis of Chandra-ACIS Observations
As members of the instrument team for the Advanced CCD Imaging Spectrometer
(ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we
have developed a wide variety of data analysis methods that we believe are
useful to the Chandra community, and have constructed a significant body of
publicly-available software (the ACIS Extract package) addressing important
ACIS data and science analysis tasks. This paper seeks to describe these data
analysis methods for two purposes: to document the data analysis work performed
in our own science projects, and to help other ACIS observers judge whether
these methods may be useful in their own projects (regardless of what tools and
procedures they choose to implement those methods).
The ACIS data analysis recommendations we offer here address much of the
workflow in a typical ACIS project, including data preparation, point source
detection via both wavelet decomposition and image reconstruction, masking
point sources, identification of diffuse structures, event extraction for both
point and diffuse sources, merging extractions from multiple observations,
nonparametric broad-band photometry, analysis of low-count spectra, and
automation of these tasks. Many of the innovations presented here arise from
several, often interwoven, complications that are found in many Chandra
projects: large numbers of point sources (hundreds to several thousand), faint
point sources, misaligned multiple observations of an astronomical field, point
source crowding, and scientifically relevant diffuse emission.
Comment: Accepted by ApJ, 2010 Mar 10 (#343576); 39 pages, 16 figures.
High Redshift Supernovae in the Hubble Deep Field
Two supernovae detected in the Hubble Deep Field using the original December
1995 epoch and data from a shorter (63000 s in F814W) December 1997 visit with
HST are discussed. The supernovae (SNe) are both associated with distinct
galaxies at redshifts of 0.95 (spectroscopic) from Cohen et al. (1996) and
1.32 (photometric) from the work of Fernandez-Soto, Lanzetta, and Yahil (1998).
These redshifts are near (for 0.95), and well beyond (for 1.32), the
greatest distance reported previously for SNe. We show that our observations
are sensitive to SNe to z < 1.8 in either epoch for an event near peak
brightness. Detailed simulations are discussed that quantify the level at which
false events from our search phase would start to arise, and the
completeness of our search as a function of both SN brightness and host galaxy
redshift. The number of Type Ia and Type II SNe expected as a function of
redshift in the two HDF epochs are discussed in relation to several published
predictions and our own detailed calculations. A mean detection frequency of
one SN per epoch for the small HDF area is consistent with expectations from
current theory.
Comment: 62 pages, 17 figures; ApJ, 1999, in press.
Perceptual Error Optimization for Monte Carlo Rendering
Realistic image synthesis involves computing high-dimensional light transport integrals which in practice are numerically estimated using Monte Carlo integration. The error of this estimation manifests itself in the image as visually displeasing aliasing or noise. To ameliorate this, we develop a theoretical framework for optimizing screen-space error distribution. Our model is flexible and works for arbitrary target error power spectra. We focus on perceptual error optimization by leveraging models of the human visual system's (HVS) point spread function (PSF) from the halftoning literature. This results in a specific optimization problem whose solution distributes the error as visually pleasing blue noise in image space. We develop a set of algorithms that provide a trade-off between quality and speed, showing substantial improvements over prior state of the art. We perform evaluations using both quantitative and perceptual error metrics to support our analysis, and provide extensive supplemental material to help evaluate the perceptual improvements achieved by our methods.
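A crude sketch of the perceptual-error idea: filter the error image with an HVS-like low-pass kernel (a Gaussian standing in for the PSF models the paper borrows from halftoning) and measure the remaining energy. High-frequency, blue-noise-like error then scores far lower than low-frequency error of identical per-pixel magnitude. The kernel and all names here are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gaussian_kernel(size=5, sigma=1.0):
    """Normalised 2D Gaussian, a crude stand-in for the HVS PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def perceptual_error(err, kernel):
    """L2 norm of the error image after HVS-like low-pass filtering."""
    pad = kernel.shape[0] // 2
    padded = np.pad(err, pad, mode="wrap")
    windows = sliding_window_view(padded, kernel.shape)
    return float(np.linalg.norm((windows * kernel).sum(axis=(-1, -2))))

n = 16
# two error images with identical per-pixel magnitude 0.1:
checker = ((np.indices((n, n)).sum(axis=0) % 2) * 2 - 1) * 0.1  # high-frequency
flat = np.full((n, n), 0.1)                                     # low-frequency
k = gaussian_kernel()
# the high-frequency pattern largely cancels under the filter, so
# perceptual_error(checker, k) is far smaller than perceptual_error(flat, k)
```

This is the intuition behind distributing estimation error as blue noise: pushing error energy into frequencies the visual system attenuates makes it far less visible at equal numerical error.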