
    Iterative image restoration with adaptive regularization and parametric constraints

    In this paper, iterative methods of image restoration are considered. These methods are based on the successive approximation algorithm with adaptive regularization and parametric constraints on the solution. The adaptive regularization prevents global image oversmoothing and is realized as a combined nonlinear operator for the simultaneous removal of additive Gaussian and impulse noise. The corresponding condition of iteration convergence is investigated. The adaptation strategy is based on a generalized noise visibility function which determines whether a pixel belongs to a flat area or to an edge. The noise visibility function is treated as an indicator function and is mathematically defined as the intersection of two binary images obtained from local variance estimation and an edge image. Extending the previous work, a new parametric constraint on the solution in the spatial frequency domain is proposed. In contrast to the above-mentioned regularization, which bounds the energy of the restored frequency components from above, the proposed adaptive frequency constraint determines a lower bound on the solution. The introduction of such a constraint is motivated by the inability of classical regularized iterative algorithms with the existing constraints to restore strongly suppressed or missing frequency components. To overcome this disadvantage, a parametric model of the image spectrum is used. The model consists of a sum of three exponential decays that approximates the whole image magnitude spectrum using the available information about the low-frequency part of the degraded image. The proposed approach has a corresponding analogue in the spatial coordinate domain, where the well-known parametric model of the maximum entropy method is used to obtain high spatial resolution. However, unlike the maximum entropy model, which is mostly suitable for impulse-like images due to its all-pole character, the proposed frequency parametric model offers a higher level of generalization, because the exponential model describes a large class of real image spectra. Computer simulations illustrate the high efficiency of the proposed technique on examples of images degraded by defocusing.
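    The noise visibility indicator described above, the intersection of a binary map from local variance estimation with a binary non-edge map, can be sketched in a few lines of Python. This is only an illustrative reading of the abstract, not the authors' code; the window size and the two thresholds are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' code): a noise-visibility indicator built as the
# intersection of two binary images, one from local variance, one from an edge map.
# Window size and thresholds are hypothetical placeholders.
import numpy as np
from scipy import ndimage

def noise_visibility(image, win=5, var_thresh=25.0, edge_thresh=30.0):
    """Return a binary map: 1 in flat areas (noise visible), 0 near edges and detail."""
    img = image.astype(float)
    # Local variance via local moments E[x^2] - (E[x])^2 over a win x win window.
    mean = ndimage.uniform_filter(img, size=win)
    mean_sq = ndimage.uniform_filter(img ** 2, size=win)
    local_var = np.maximum(mean_sq - mean ** 2, 0.0)
    low_activity = local_var < var_thresh
    # Binary edge image from the Sobel gradient magnitude.
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    not_edge = grad < edge_thresh
    # Intersection of the two binary images.
    return (low_activity & not_edge).astype(np.uint8)
```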

    Radiometry imaging system with digital signal processing

    The problem of high-resolution radiometry imaging is considered. An iterative method of radiometry image processing is presented in the paper. The problem of image reconstruction is treated as an inverse ill-posed problem. The proposed method makes use of an iterative technique and a regularization procedure to improve image quality. The nonlinearity of the method is provided by a nonnegativity constraint and a limitation on the probable spatial extent of the image, which makes it possible to accomplish band-limited extrapolation and enhance the resolution.
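    A minimal sketch of the kind of constrained iteration the abstract describes, assuming a Landweber-style frequency-domain update with a nonnegativity clip and a spatial support mask applied at every step; the relaxation factor, support mask and iteration count are assumptions, and the actual radiometric system model is not reproduced.

```python
# Minimal sketch, assuming a Landweber-style update; the relaxation factor, support mask
# and iteration count are illustrative, not the parameters of the paper's system.
import numpy as np

def iterative_reconstruct(degraded, otf, support, beta=1.0, n_iter=50):
    """Iterate a frequency-domain correction, then project onto the constraints:
    nonnegativity and a limited spatial extent (binary support mask)."""
    f = degraded.copy()
    G = np.fft.fft2(degraded)
    for _ in range(n_iter):
        F = np.fft.fft2(f)
        # Gradient step on the data term ||G - H.F||^2 in the Fourier domain.
        F = F + beta * np.conj(otf) * (G - otf * F)
        f = np.real(np.fft.ifft2(F))
        # Nonlinear constraints enabling band-limited extrapolation.
        f = np.clip(f, 0.0, None) * support
    return f
```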

    Multimodal object authentication with random projections: a worst-case approach

    In this paper, we consider a forensic multimodal authentication framework based on binary hypothesis testing in the random projection domain. We formulate a generic authentication problem taking into account several possible counterfeiting strategies. The authentication performance analysis is carried out within the Neyman-Pearson framework as well as for the average probability of error, in both the direct and the random projection domains. The worst-case attack/acquisition channel, which leads to the largest performance loss in terms of Bhattacharyya distance reduction, is presented. The theoretical findings are confirmed by the results of computer simulations.
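    As a rough illustration of the performance measure mentioned above, the following Python sketch projects two sets of feature vectors (authentic and counterfeited) through a random matrix and computes the Bhattacharyya distance between Gaussian fits of the projected data. The dimensions, the toy distortion model and the Gaussian assumption are illustrative choices, not taken from the paper.

```python
# Toy illustration: random projections plus the Bhattacharyya distance between
# Gaussian models of the two hypotheses. All sizes and the noise model are assumptions.
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term1 + term2

rng = np.random.default_rng(0)
N, L = 1024, 64                                      # original and projected dimensions
W = rng.standard_normal((L, N)) / np.sqrt(N)         # random projection matrix
x_auth = rng.standard_normal((5000, N))              # authentic features (toy model)
x_fake = x_auth + 0.5 * rng.standard_normal((5000, N))   # counterfeit/distorted features
y_auth, y_fake = x_auth @ W.T, x_fake @ W.T          # projected-domain observations
d = bhattacharyya_gaussian(y_auth.mean(0), np.cov(y_auth.T),
                           y_fake.mean(0), np.cov(y_fake.T))
print(f"Bhattacharyya distance in the projected domain: {d:.3f}")
```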

    Copyright and content protection for digital images based on asymmetric cryptographic techniques

    This paper presents a new approach for the copyright protection of digital multimedia data. The system applies cryptographic protocols and a public-key technique for different purposes, namely the encoding/decoding of a digital watermark generated by any spread-spectrum technique and the secure transfer of watermarked data from the sender to the receiver in a commercial business process. The public-key technique is applied to construct a one-way watermark embedding and verification function that identifies and proves the uniqueness of the watermark. Our approach provides secure authentication of the owner who initiated the watermarking process for a specific data set. Legal dispute resolution is supported for multiple watermarking of digital data without revealing the confidential keying information. Content protection for images is provided by ciphering/deciphering the data in the transform domain.
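    A hedged sketch of the general idea, binding a spread-spectrum watermark to its owner with an asymmetric signature so that verification needs only the public key, is given below. It uses the third-party cryptography package and an additive spatial-domain embedding purely for illustration; the paper's actual protocol, key handling and transform-domain ciphering are not reproduced.

```python
# Illustrative sketch only: sign the watermark seed with a private key so ownership can be
# verified with the public key; embed an additive spread-spectrum pattern derived from it.
import numpy as np
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def make_watermark(seed: int, shape, strength=4.0):
    rng = np.random.default_rng(seed)
    return strength * rng.choice([-1.0, 1.0], size=shape)   # spread-spectrum pattern

# Owner side: sign the watermark seed so its origin can be proven later.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
seed_bytes = (12345).to_bytes(8, "big")
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(seed_bytes, pss, hashes.SHA256())

# Embedding: additive spread spectrum (spatial domain here, for brevity).
image = np.random.default_rng(1).uniform(0, 255, size=(128, 128))
watermarked = image + make_watermark(12345, image.shape)

# Verifier side: check the signature, then correlate with the claimed watermark.
public_key.verify(signature, seed_bytes, pss, hashes.SHA256())  # raises InvalidSignature on failure
corr = np.mean((watermarked - watermarked.mean()) * make_watermark(12345, image.shape))
print(f"correlation with claimed watermark (clearly positive if present): {corr:.2f}")
```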

    Research data SNF IT-DIS: Information-theoretic analysis of deep identification systems

    SNF No. 200021-182063: Information-theoretic analysis of deep identification systems. GitHub: https://github.com/sip-group/snf-it-dis-cod

    Active Content Fingerprinting

    Content fingerprinting and digital watermarking are techniques that are used for content protection and distribution monitoring and, more recently, for interaction with physical objects. Over the past few years, both techniques have been well studied and their shortcomings understood. In this paper, we introduce a new framework called active content fingerprinting, which takes the best from the two worlds of content fingerprinting and digital watermarking in order to overcome some of the fundamental restrictions of these techniques in terms of performance and complexity. The proposed framework extends the encoding process of conventional content fingerprinting in a way similar to digital watermarking, thus allowing the extraction of fingerprints from the modified cover data. We consider several encoding strategies, examine the performance of the proposed schemes in terms of bit error rate and the probabilities of correct identification and false acceptance, and compare them with those of conventional fingerprinting and digital watermarking. Finally, we extend the proposed framework to the multidimensional case based on lattices and demonstrate its performance on both synthetic data and real images.
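    The core "active" idea, modifying the cover at encoding time so that the fingerprint bits survive later distortions, can be illustrated with a toy scheme: take the signs of random projections as fingerprint bits and push each projection at least a fixed margin away from the sign decision boundary. The margin, dimensions and minimum-norm modification below are assumptions for illustration, not the encoding strategies studied in the paper.

```python
# Toy sketch, not the paper's exact schemes: fingerprint bits are the signs of random
# projections; the cover is minimally modified so every projection clears a margin.
import numpy as np

rng = np.random.default_rng(0)
N, L, margin = 256, 32, 3.0
W = rng.standard_normal((L, N)) / np.sqrt(N)    # projection matrix shared with the decoder

def encode(x, W, margin):
    """Return the modified cover and the fingerprint extracted from it."""
    y = W @ x
    bits = (y >= 0).astype(int)
    sign = np.where(y >= 0, 1.0, -1.0)
    y_target = np.where(np.abs(y) < margin, sign * margin, y)
    # Minimum-norm modification of x realizing the target projections (W has full row rank).
    x_mod = x + np.linalg.pinv(W) @ (y_target - y)
    return x_mod, bits

def extract(x, W):
    return (W @ x >= 0).astype(int)

x = rng.standard_normal(N)                      # toy cover signal
x_mod, fp = encode(x, W, margin)
noisy = x_mod + 0.8 * rng.standard_normal(N)    # acquisition / channel noise
print(f"bit error rate after noise: {np.mean(extract(noisy, W) != fp):.3f}")
```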

    Optimal transform domain watermark embedding via linear programming

    Invisible digital watermarks have been proposed as a method for discouraging the illicit copying and distribution of copyrighted material. In recent years it has been recognized that embedding information in a transform domain leads to more robust watermarks. A major difficulty in watermarking in a transform domain lies in the fact that constraints on the allowable distortion at any pixel may be specified in the spatial domain. The central contribution of the paper is an approach that takes such spatial-domain constraints into account in an optimal fashion. The main idea is to structure the watermark embedding as a linear programming problem in which we wish to maximize the strength of the watermark subject to a set of linear constraints on the pixel distortions as determined by a masking function. We consider the special cases of embedding in the DCT domain and in the wavelet domain using the Haar wavelet and the Daubechies 4-tap filter, in conjunction with a masking function based on a non-stationary Gaussian model, but the algorithm is applicable to any combination of transform and masking function. Our results indicate that the proposed approach performs well against lossy compression such as JPEG and other types of filtering which do not change the geometry of the image.
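    The linear-programming formulation can be sketched on a single 8x8 DCT block: per-coefficient strength variables are maximized subject to per-pixel bounds on the spatial-domain distortion, which are linear in the strengths because the inverse transform is linear. The ±1 spreading signs and the uniform mask below are stand-ins for the paper's watermark and perceptual masking function, not its actual parameters.

```python
# Sketch of the LP idea on one 8x8 DCT block: maximize total watermark strength subject to
# per-pixel distortion bounds given by a (here: uniform, hypothetical) masking function.
import numpy as np
from scipy.fft import idctn
from scipy.optimize import linprog

B = 8
rng = np.random.default_rng(0)
signs = rng.choice([-1.0, 1.0], size=B * B)     # spread-spectrum sign per DCT coefficient
mask = np.full(B * B, 2.0)                      # allowed |distortion| per pixel

# Column k of A: spatial pattern produced by one unit of strength on DCT coefficient k.
A = np.empty((B * B, B * B))
for k in range(B * B):
    e = np.zeros((B, B))
    e.flat[k] = signs[k]
    A[:, k] = idctn(e, norm="ortho").ravel()

# LP: maximize sum(s)  subject to  -mask <= A s <= mask,  s >= 0.
res = linprog(c=-np.ones(B * B),
              A_ub=np.vstack([A, -A]),
              b_ub=np.concatenate([mask, mask]),
              bounds=[(0, None)] * (B * B),
              method="highs")
strengths = res.x
print(f"total embedded strength: {strengths.sum():.2f}, "
      f"max pixel distortion: {np.abs(A @ strengths).max():.2f}")
```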