A contrast-sensitive reversible visible image watermarking technique
A reversible (also called lossless, distortion-free, or invertible) visible watermarking scheme is proposed to serve applications in which the visible watermark is expected to combat copyright piracy but can be removed to losslessly recover the original image. We transparently reveal the watermark image by overlapping it on a user-specified region of the host image and adaptively adjusting the pixel values beneath the watermark according to scaling factors based on the human visual system. To achieve reversibility, a reconstruction/recovery packet, which is used to restore the watermarked area, is reversibly inserted into the non-visibly-watermarked region. The packet is built from the difference between the original image and an approximate version of it, rather than its visibly watermarked version, so as to reduce the packet's overhead. To generate the approximation, we develop a simple prediction technique that uses the unaltered neighboring pixels as auxiliary information. The recovery packet is uniquely encoded before hiding so that the original watermark pattern can be reconstructed from the encoded packet; the image recovery process therefore does not require the watermark itself to be available. In addition, our method applies data compression to further reduce the recovery packet size and improve embedding capacity. The experimental results demonstrate the superiority of the proposed scheme compared to existing methods.
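The core idea can be illustrated with a minimal sketch: blend the watermark into a region with a scaling factor, and keep a recovery packet of pixel differences so the region can be restored exactly. This is a deliberate simplification of the scheme above (the paper builds the packet from a prediction-based approximation, encodes it, and hides it reversibly in the image itself; here the packet is simply returned). All function names and the fixed scaling factor are illustrative assumptions, not the paper's notation.

```python
# Toy reversible visible watermarking: blend a mark into a region and keep
# the per-pixel differences needed to undo the blending losslessly.
# A single fixed `alpha` stands in for the paper's HVS-based scaling factors.

def embed(host, mark, top, left, alpha=0.5):
    """Blend `mark` into `host` at (top, left); return watermarked image + packet."""
    out = [row[:] for row in host]
    packet = []  # differences needed to restore the original region
    for i, mrow in enumerate(mark):
        for j, m in enumerate(mrow):
            orig = host[top + i][left + j]
            blended = round((1 - alpha) * orig + alpha * m)
            packet.append(orig - blended)   # store what blending destroyed
            out[top + i][left + j] = blended
    return out, packet

def recover(marked, mark_h, mark_w, top, left, packet):
    """Losslessly restore the original image region from the packet."""
    out = [row[:] for row in marked]
    it = iter(packet)
    for i in range(mark_h):
        for j in range(mark_w):
            out[top + i][left + j] += next(it)
    return out
```

Because the packet records exactly the information destroyed by blending, recovery is bit-exact; the paper's contribution lies in making that packet small (prediction-based approximation plus compression) and hiding it reversibly in the rest of the image.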
Background derivation and image flattening: getimages
Modern high-resolution images obtained with space observatories display
extremely strong intensity variations across images on all spatial scales.
Source extraction in such images with methods based on global thresholding may
produce unacceptably large numbers of spurious sources in bright areas while
failing to detect sources in low-background or low-noise areas. It would be
highly beneficial to subtract the background and equalize the levels of
small-scale fluctuations in the images before extracting sources or filaments.
This paper describes getimages, a new method of background derivation and image
flattening. It is based on median filtering with sliding windows that
correspond to a range of spatial scales, from the observational beam size up to
a maximum structure width. The latter is the single free parameter of getimages
and can be evaluated manually from the observed image. The median filtering
algorithm provides a background image for structures of all widths below that
maximum. The same median filtering procedure, applied to an image of standard
deviations derived from the background-subtracted image, results in a
flattening image. Finally, a flattened detection image is computed, whose
standard deviations are uniform outside sources and filaments. Detecting
sources in such greatly simplified images results in much cleaner extractions
that are more complete and reliable. As a bonus, getimages reduces various
observational and map-making artifacts and equalizes noise levels between
independent tiles of mosaicked images.
Comment: 14 pages, 11 figures (main text + 3 appendices), accepted by
Astronomy & Astrophysics
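The flattening pipeline described above (median-filter background, background subtraction, normalization by local fluctuation levels) can be sketched in one dimension. This is a toy analogue, not the getimages implementation: it uses a single window instead of a range of spatial scales, and the function names are ours.

```python
from statistics import median, pstdev

def sliding_median(signal, window):
    """Running median with a centered sliding window (edges clamped)."""
    half = window // 2
    return [median(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

def flatten(signal, window):
    """Toy 1-D analogue of the getimages flattening: subtract a median
    background, estimate local fluctuation levels from the residual,
    and normalize by them so fluctuations become uniform."""
    background = sliding_median(signal, window)
    residual = [s - b for s, b in zip(signal, background)]
    half = window // 2
    scale = [pstdev(residual[max(0, i - half): i + half + 1]) or 1.0
             for i in range(len(residual))]
    return [r / s for r, s in zip(residual, scale)]
```

A compact, bright source survives the median filter as a large residual, while the division by the local standard-deviation estimate equalizes the noise level between bright and faint regions, which is what makes a single detection threshold usable everywhere.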
TreeCol: a novel approach to estimating column densities in astrophysical simulations
We present TreeCol, a new and efficient tree-based scheme to calculate column
densities in numerical simulations. Knowing the column density in any direction
at any location in space is a prerequisite for modelling the propagation of
radiation through the computational domain. TreeCol therefore forms the basis
for a fast, approximate method for modelling the attenuation of radiation
within large numerical simulations. It constructs a HEALPix sphere at any
desired location and accumulates the column density by walking the tree and by
adding up the contributions from all tree nodes whose line of sight contributes
to the pixel under consideration. In particular, when combined with widely used
tree-based gravity solvers, the new scheme requires little additional
computational cost. In a simulation with N resolution elements, the
computational cost of TreeCol scales as N log N, instead of the steeper
scaling of most other radiative transfer schemes. TreeCol is naturally
adaptable to arbitrary density distributions and is easy to implement and to
parallelize. We discuss its accuracy and performance characteristics for the
examples of a spherical protostellar core and for the turbulent interstellar
medium. We find that the column density estimates provided by TreeCol are on
average accurate to better than 10 percent. In another application, we compute
the dust temperatures for solar neighborhood conditions and compare with the
result of a full-fledged Monte Carlo radiation-transfer calculation. We find
that both methods give very similar answers. We conclude that TreeCol provides
a fast, easy to use, and sufficiently accurate method of calculating column
densities that comes with little additional computational cost when combined
with an existing tree-based gravity solver.
Comment: 11 pages, 10 figures, submitted to MNRAS
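The accumulation step at the heart of the method can be sketched in a simplified 2-D form: each tree node contributes its column density to every angular pixel its projected extent overlaps, as seen from the query position. This is a loose analogue only. HEALPix is replaced by uniform angular bins, and the tree walk (which gives the N log N scaling) is replaced by a flat loop over nodes to keep the sketch short; all names are illustrative.

```python
import math

def column_densities(origin, nodes, n_bins=16):
    """Toy 2-D TreeCol-style accumulation: each node (x, y, size, mass)
    adds its column density mass/size to every angular bin that its
    projected angular extent overlaps, as seen from `origin`."""
    ox, oy = origin
    columns = [0.0] * n_bins
    bin_width = 2 * math.pi / n_bins
    for x, y, size, mass in nodes:
        dist = math.hypot(x - ox, y - oy)
        if dist == 0.0:
            continue  # skip the node containing the query point
        theta = math.atan2(y - oy, x - ox) % (2 * math.pi)
        half_angle = math.atan2(0.5 * size, dist)  # angular radius of node
        sigma = mass / size                        # column through the node
        lo = int((theta - half_angle) // bin_width)
        hi = int((theta + half_angle) // bin_width)
        for b in range(lo, hi + 1):                # bins the node overlaps
            columns[b % n_bins] += sigma
    return columns
```

In the real scheme the nodes are visited via the same tree walk the gravity solver already performs, so distant regions are represented by a few large nodes and the per-query cost stays logarithmic.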
Edge Potential Functions (EPF) and Genetic Algorithms (GA) for Edge-Based Matching of Visual Objects
Edges are known to be a semantically rich representation of the contents of a digital image. Nevertheless, their use in practical applications is sometimes limited by computation and complexity constraints. In this paper, a new approach is presented that addresses the problem of matching visual objects in digital images by combining the concept of Edge Potential Functions (EPF) with a powerful matching tool based on Genetic Algorithms (GA). EPFs can be easily calculated starting from an edge map and provide a kind of attractive pattern for a matching contour, which is conveniently exploited by GAs. Several tests were performed in the framework of different image matching applications. The results achieved clearly outline the potential of the proposed method as compared to state-of-the-art methodologies. (c) 2007 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
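The "attractive pattern" idea can be sketched as a field that decays with distance from the nearest edge pixel; a GA can then score a candidate contour by summing the field along it. This is a hypothetical simplification (BFS grid distance and an exponential decay parameter of our choosing, not the paper's exact potential function).

```python
from collections import deque
import math

def edge_potential(edge_map, decay=2.0):
    """Toy edge potential field: each pixel's potential decays exponentially
    with its BFS (4-neighbour) distance to the nearest edge pixel.
    `decay` is an illustrative parameter, not from the paper."""
    h, w = len(edge_map), len(edge_map[0])
    dist = [[math.inf] * w for _ in range(h)]
    queue = deque()
    for i in range(h):
        for j in range(w):
            if edge_map[i][j]:
                dist[i][j] = 0
                queue.append((i, j))
    while queue:  # multi-source BFS from all edge pixels at once
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni][nj] > dist[i][j] + 1:
                dist[ni][nj] = dist[i][j] + 1
                queue.append((ni, nj))
    return [[math.exp(-d / decay) for d in row] for row in dist]
```

A GA fitness function would evaluate a candidate pose of the model contour by summing this potential over the contour's pixels: contours lying on or near image edges score highest, which is exactly the attraction the abstract describes.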
An Evaluation of Popular Copy-Move Forgery Detection Approaches
A copy-move forgery is created by copying and pasting content within the same
image, and potentially post-processing it. In recent years, the detection of
copy-move forgeries has become one of the most actively researched topics in
blind image forensics. A considerable number of different algorithms have been
proposed focusing on different types of postprocessed copies. In this paper, we
aim to answer which copy-move forgery detection algorithms and processing steps
(e.g., matching, filtering, outlier detection, affine transformation
estimation) perform best in various postprocessing scenarios. The focus of our
analysis is to evaluate the performance of previously proposed feature sets. We
achieve this by casting existing algorithms in a common pipeline. In this
paper, we examined the 15 most prominent feature sets. We analyzed the
detection performance on a per-image basis and on a per-pixel basis. We created
a challenging real-world copy-move dataset, and a software framework for
systematic image manipulation. Experiments show that the keypoint-based
features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA and
Zernike features, perform very well. These feature sets exhibit the best
robustness against various noise sources and downsampling, while reliably
identifying the copied regions.
Comment: main paper: 14 pages, supplemental material: 12 pages; main paper
appeared in IEEE Transactions on Information Forensics and Security
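The block-matching stage common to the evaluated pipelines can be sketched as follows. This toy version matches raw pixel blocks exactly, whereas the surveyed methods extract robust features (DCT, DWT, PCA, Zernike, ...) per block precisely so that matching survives post-processing; the names and thresholds here are illustrative.

```python
from collections import defaultdict

def detect_copy_move(image, block=4, min_shift=4):
    """Toy block-matching stage of a copy-move detector: identical blocks
    found at two sufficiently distant positions are flagged as suspect pairs.
    Real pipelines match robust per-block features instead of raw pixels,
    then apply filtering, outlier removal, and affine estimation."""
    h, w = len(image), len(image[0])
    seen = defaultdict(list)  # block contents -> positions already visited
    pairs = []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            key = tuple(tuple(image[i + a][j + b] for b in range(block))
                        for a in range(block))
            for (pi, pj) in seen[key]:
                # require a minimum offset so overlapping blocks of smooth
                # regions are not flagged
                if abs(i - pi) + abs(j - pj) >= min_shift:
                    pairs.append(((pi, pj), (i, j)))
            seen[key].append((i, j))
    return pairs
```

The per-pixel evaluation mentioned in the abstract then checks how well the union of matched block pairs covers the ground-truth copied region.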