The Hyper Suprime-Cam Software Pipeline
In this paper, we describe the optical imaging data processing pipeline
developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The
HSC Pipeline builds on the prototype pipeline being developed by the Large
Synoptic Survey Telescope's Data Management system, adding customizations for
HSC, large-scale processing capabilities, and novel algorithms that have since
been reincorporated into the LSST codebase. While designed primarily to reduce
HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline
for reducing general-observer HSC data. The HSC pipeline includes high-level
processing steps that generate coadded images and science-ready catalogs, as
well as low-level detrending and image characterizations.
Comment: 39 pages, 21 figures, 2 tables. Submitted to Publications of the Astronomical Society of Japan
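The coaddition step mentioned above can be illustrated with a minimal sketch. This is a generic inverse-variance-weighted mean stack, not the HSC pipeline's actual algorithm (which also handles warping, PSF matching, and artifact rejection); the toy exposures and variance maps are assumptions:

```python
import numpy as np

def coadd(exposures, variances):
    """Inverse-variance-weighted mean stack of aligned exposures.

    exposures: list of 2-D arrays on the same aligned pixel grid
    variances: per-exposure noise variance maps, same shapes
    """
    exp = np.stack(exposures)
    w = 1.0 / np.stack(variances)              # inverse-variance weights
    return (w * exp).sum(axis=0) / w.sum(axis=0)

# toy example: the lower-noise exposure dominates the weighted mean
a = np.full((2, 2), 10.0)   # variance 1.0
b = np.full((2, 2), 14.0)   # variance 4.0
result = coadd([a, b], [np.full((2, 2), 1.0), np.full((2, 2), 4.0)])
# per pixel: (10 * 1.0 + 14 * 0.25) / 1.25 = 10.8
```

Inverse-variance weighting is the standard choice here because it minimizes the variance of the combined pixel estimate when the exposures have independent Gaussian noise.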
Learning Iterative Neural Optimizers for Image Steganography
Image steganography is the process of concealing secret information in images
through imperceptible changes. Recent work has formulated this task as a
classic constrained optimization problem. In this paper, we argue that image
steganography is inherently performed on the (elusive) manifold of natural
images, and propose an iterative neural network trained to perform the
optimization steps. In contrast to classical optimization methods like L-BFGS
or projected gradient descent, we train the neural network to also stay close
to the manifold of natural images throughout the optimization. We show that our
learned neural optimization is faster and more reliable than classical
optimization approaches. In comparison to previous state-of-the-art
encoder-decoder-based steganography methods, it reduces the recovery error rate
by multiple orders of magnitude and achieves zero error up to 3 bits per pixel
(bpp) without the need for error-correcting codes.
Comment: International Conference on Learning Representations (ICLR) 202
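The classical constrained-optimization baseline that this paper improves on can be sketched with a toy projected-gradient embedder. Everything here is illustrative: the random linear "decoder" matrix `W`, the hinge loss, and the `eps` bound stand in for a real decoder network and a perceptual constraint:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_pgd(cover, message, W, eps=0.2, lr=0.1, steps=300):
    """Projected gradient descent: find a perturbation delta so that a fixed
    linear 'decoder' sign(W @ (cover + delta)) recovers the +/-1 message,
    while keeping ||delta||_inf <= eps (the imperceptibility constraint)."""
    delta = np.zeros_like(cover)
    for _ in range(steps):
        logits = W @ (cover + delta)
        # subgradient of a hinge loss pushing each logit past margin 1
        active = (message * logits < 1.0).astype(float)
        grad = -(W.T @ (active * message))
        delta -= lr * grad
        delta = np.clip(delta, -eps, eps)   # projection onto the L_inf ball
    return cover + delta

cover = rng.normal(size=256)               # flattened toy "image"
message = rng.choice([-1.0, 1.0], size=4)  # 4 payload bits
W = rng.normal(size=(4, 256))
stego = embed_pgd(cover, message, W)
```

The paper's point is that such iterates ignore the natural-image manifold; a learned iterative optimizer can take the same kind of steps while staying near it.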
Theoretical model of the FLD ensemble classifier based on hypothesis testing theory
The FLD ensemble classifier is a widely used machine-learning tool for the steganalysis of digital media, owing to its efficiency with high-dimensional feature sets. This paper explains how the classifier can be formulated within the framework of optimal detection, using an accurate statistical model of the base learners' projections together with hypothesis testing theory. A substantial advantage of this formulation is the ability to establish the test properties theoretically, including the probability of false alarm and the test power, and the flexibility to use criteria of optimality other than the conventional total probability of error. Numerical results on real images show the sharpness of the theoretically established results and the relevance of the proposed methodology.
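A minimal sketch of the kind of FLD ensemble analyzed here: each base learner is a Fisher linear discriminant trained on a random feature subspace, and the final decision aggregates the learners' projections by majority vote. The synthetic cover/stego data and the subspace size are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def fld_weights(X0, X1):
    """Fisher linear discriminant: w = Sw^-1 (mu1 - mu0), midpoint threshold."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)
    return w, b

def train_ensemble(X0, X1, n_learners=11, d_sub=20):
    """Each base learner is an FLD on a random feature subspace."""
    d = X0.shape[1]
    learners = []
    for _ in range(n_learners):
        idx = rng.choice(d, size=d_sub, replace=False)
        w, b = fld_weights(X0[:, idx], X1[:, idx])
        learners.append((idx, w, b))
    return learners

def predict(learners, X):
    """Aggregate the base learners' projections by majority vote."""
    votes = sum((X[:, idx] @ w + b > 0).astype(int) for idx, w, b in learners)
    return (2 * votes > len(learners)).astype(int)

# synthetic "cover" (class 0) vs "stego" (class 1): a small mean shift
d = 100
X0 = rng.normal(size=(400, d))
X1 = rng.normal(size=(400, d)) + 0.5
learners = train_ensemble(X0, X1)
labels = np.r_[np.zeros(400), np.ones(400)]
acc = float((predict(learners, np.vstack([X0, X1])) == labels).mean())
```

The per-learner projections `X @ w + b` are exactly the quantities the paper models statistically in order to derive the false-alarm probability and test power in closed form.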
Practical Deep Dispersed Watermarking with Synchronization and Fusion
Deep learning based blind watermarking works have gradually emerged and
achieved impressive performance. However, previous deep watermarking studies
mainly focus on fixed low-resolution images while paying less attention to
arbitrary-resolution images, especially the high-resolution images that are
now widespread. Moreover, most works demonstrate robustness against typical
non-geometric attacks (e.g., JPEG compression) but ignore common geometric
attacks (e.g., rotation) and more challenging combined attacks. To overcome
these limitations, we propose a practical deep Dispersed Watermarking scheme
with Synchronization and Fusion, called DWSF. Specifically, given an
arbitrary-resolution cover image, we adopt a dispersed embedding scheme that
sparsely and randomly selects several fixed small-size cover blocks in which
a well-trained encoder embeds a consistent watermark message. In the
extraction stage, we first design a watermark synchronization module to
locate and rectify the encoded blocks in the noised watermarked image. We
then use a decoder to obtain the messages embedded in these blocks and
propose a similarity-based message fusion strategy that exploits the
consistency among the messages to determine a reliable message. Extensive
experiments on different datasets convincingly demonstrate the effectiveness
of DWSF. Compared with state-of-the-art approaches, our blind watermarking
achieves better performance: it improves bit accuracy by 5.28% and 5.93% on
average against single and combined attacks, respectively, with a smaller
file-size increment and better visual quality. Our code is available at
https://github.com/bytedance/DWSF.
Comment: Accepted by ACM MM 202
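The similarity-based message fusion can be sketched as follows. This is a simplified stand-in for DWSF's actual strategy, working on hard bits with a plain agreement threshold; `sim_thresh` is an assumed parameter, not one from the paper:

```python
import numpy as np

def fuse_messages(block_bits, sim_thresh=0.7):
    """Fuse the per-block decoded copies of one watermark message:
    take a majority-vote candidate, keep only blocks whose bits agree
    with it above sim_thresh, then re-vote on the kept blocks."""
    block_bits = np.asarray(block_bits)            # shape (n_blocks, n_bits)
    candidate = (block_bits.mean(axis=0) >= 0.5).astype(int)
    sim = (block_bits == candidate).mean(axis=1)   # per-block agreement
    kept = block_bits[sim >= sim_thresh]
    if len(kept) == 0:
        return candidate
    return (kept.mean(axis=0) >= 0.5).astype(int)

# three lightly corrupted copies and one badly corrupted block
blocks = [[1, 0, 1, 1], [1, 0, 1, 1], [1, 0, 0, 1], [0, 1, 0, 0]]
print(fuse_messages(blocks))   # -> [1 0 1 1]
```

Because every selected block carries the same message, badly decoded blocks (e.g., ones the synchronization module mislocated) disagree with the consensus and are simply dropped before the final vote.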
Steganography Approach to Image Authentication Using Pulse Coupled Neural Network
This paper introduces a model for the authentication of large-scale images. The crucial element of the proposed model is an optimized Pulse Coupled Neural Network. This neural network generates position matrices that govern the embedding of authentication data into cover images. Emphasis is placed on minimizing the change in stego-image entropy: the stego-image entropy is compared with the reference entropy of the cover image. The security of the proposed solution is ensured by initializing the neural network weights with a steganographic key and by encrypting the accompanying steganographic data with the AES-256 algorithm. The integrity of the images is verified through the SHA-256 hash function. The integration of the accompanying and authentication data directly into the stego image and the authentication of large images are the main contributions of this work.
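A minimal sketch of the entropy comparison and the SHA-256 integrity check described above. The histogram entropy, the tolerance `tol`, and the toy images are assumptions; the PCNN position matrices and the AES-256 layer are omitted:

```python
import hashlib

import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an 8-bit image's gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_check(cover, stego, tol=0.01):
    """Accept the stego image only if its entropy stays within tol bits
    of the cover image's reference entropy."""
    return abs(image_entropy(stego) - image_entropy(cover)) <= tol

# toy 32x32 cover with a perfectly uniform histogram (entropy = 8 bits)
cover = np.arange(256, dtype=np.uint8).repeat(4).reshape(32, 32)
stego = cover.copy()
stego[0, 0] ^= 1                                      # flip one LSB
ok = entropy_check(cover, stego)
digest = hashlib.sha256(stego.tobytes()).hexdigest()  # SHA-256 integrity tag
```

A single flipped LSB shifts the histogram by one count, so the entropy change is tiny and the check passes; tampering that rewrites many pixels would both drift the entropy and change the SHA-256 digest.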