
    Information-Theoretic Multiclass Classification Based on Binary Classifiers: On Coding Matrix Design, Reliability and Maximum Number of Classes

    In this paper, we consider the multiclass classification problem based on sets of independent binary classifiers. Each binary classifier represents the output of a quantized projection of training data onto a randomly generated orthonormal basis vector, thus producing a binary label. The ensemble of all binary labels forms an analogue of a coding matrix. The properties of such matrices and their impact on the maximum number of uniquely distinguishable classes are analyzed in this paper from an information-theoretic point of view. We also consider a concept of reliability for this kind of coding matrix generation, which can be an alternative to other adaptive training techniques, and investigate its impact on the bit error probability. We demonstrate that it is equivalent to the considered random coding matrix without any bit reliability information in terms of recognition rate.
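    The construction lends itself to a compact sketch. Below is a minimal NumPy illustration of the idea as described: random orthonormal directions obtained via QR decomposition, a sign quantizer (one possible choice of quantized projection, assumed here), and minimum-Hamming-distance decoding against the resulting coding matrix. All names and the toy data are ours, not the paper's.

```python
import numpy as np

def random_orthonormal_basis(dim, n_vectors, seed=0):
    """Generate n_vectors random orthonormal directions in R^dim via QR."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((dim, n_vectors))
    q, _ = np.linalg.qr(g)          # columns of q are orthonormal
    return q                        # shape (dim, n_vectors)

def binary_labels(x, basis):
    """Each column of `basis` acts as one binary classifier:
    project, then quantize to a bit (sign quantizer assumed here)."""
    return (x @ basis > 0).astype(np.uint8)

# Toy example: class centroids in R^64, L independent binary classifiers.
dim, n_classes, L = 64, 10, 32
rng = np.random.default_rng(1)
centroids = rng.standard_normal((n_classes, dim))
basis = random_orthonormal_basis(dim, L)

# Rows of the resulting matrix are the per-class binary codewords:
# an analogue of an ECOC-style coding matrix.
coding_matrix = binary_labels(centroids, basis)

# Decode a noisy observation by minimum Hamming distance to the codewords.
probe = centroids[3] + 0.3 * rng.standard_normal(dim)
code = binary_labels(probe, basis)
decoded = np.argmin((coding_matrix != code).sum(axis=1))
print(decoded)  # expected: 3 for moderate noise
```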

    Privacy-Preserving Image Sharing via Sparsifying Layers on Convolutional Groups

    We propose a practical framework to address the problem of privacy-aware image sharing in large-scale setups. We argue that, while compactness is always desired at scale, this need is more severe when the privacy-sensitive content must also be protected. We therefore encode images such that, on the one hand, representations are stored in the public domain without paying the huge cost of privacy protection, but are ambiguated and hence leak no discernible content from the images unless the attacker has access to a combinatorially expensive guessing mechanism. On the other hand, authorized users are provided with very compact keys that can easily be kept secure and used to disambiguate and faithfully reconstruct the corresponding access-granted images. We achieve this with a convolutional autoencoder of our design, where feature maps are passed independently through sparsifying transformations, providing multiple compact codes, each responsible for reconstructing different attributes of the image. The framework is tested on a large-scale database of images, with a public implementation available. (Accepted as an oral presentation at ICASSP 2020.)
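    A minimal sketch of the core mechanism may help: encoder feature maps are split into groups, each passed through an independent sparsifying transformation producing its own compact code. The top-k hard-thresholding used here, and all layer sizes, are our assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

def topk_sparsify(z, k):
    """Keep the k largest-magnitude activations per sample, zero the rest
    (one plausible 'sparsifying transformation'; the paper's exact
    transform may differ)."""
    flat = z.flatten(1)
    idx = flat.abs().topk(k, dim=1).indices
    mask = torch.zeros_like(flat).scatter_(1, idx, 1.0)
    return (flat * mask).view_as(z)

class GroupedSparseAutoencoder(nn.Module):
    """Minimal sketch: encoder feature maps are split into independent
    groups, each sparsified into its own compact code."""
    def __init__(self, groups=4, ch=32, k=16):
        super().__init__()
        self.groups, self.k = groups, k
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, groups * ch, 4, stride=2, padding=1),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(groups * ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.enc(x)
        # Sparsify each channel group independently -> multiple codes.
        chunks = [topk_sparsify(c, self.k) for c in z.chunk(self.groups, dim=1)]
        return self.dec(torch.cat(chunks, dim=1))

model = GroupedSparseAutoencoder()
out = model(torch.randn(2, 3, 64, 64))
print(out.shape)  # torch.Size([2, 3, 64, 64])
```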

    Anomaly localization for copy detection patterns through print estimations

    Copy detection patterns (CDP) are recent technologies for protecting products from counterfeiting. However, in contrast to traditional copy fakes, deep learning-based fakes have been shown to be hardly distinguishable from originals by traditional authentication systems. Systems based on classical supervised learning and digital templates assume knowledge of fake CDP at training time and cannot generalize to unseen types of fakes. Authentication based on printed copies of originals is an alternative that yields better results even for unseen fakes and simple authentication metrics, but comes at the impractical cost of acquiring and storing printed copies. In this work, to overcome these shortcomings, we design a machine learning (ML) based authentication system that requires only digital templates and printed original CDP for training, whereas authentication is based solely on digital templates, which are used to estimate the original printed codes. The obtained results show that the proposed system can efficiently authenticate original CDP and detect fakes by accurately locating the anomalies in the fake CDP. The empirical evaluation of the authentication system under investigation is performed on original and ML-based fake CDP printed on two industrial printers.
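    The authentication step described above can be sketched as follows: a model estimates the printed original from the digital template, the probe is compared against that estimate, and locally aggregated residuals above a threshold are flagged as anomalies. The identity "estimator", window size and threshold below are illustrative stand-ins, not the paper's trained model or settings.

```python
import numpy as np

def authenticate_cdp(template, probe, estimator, window=8, tau=3.0):
    """Sketch of template-based CDP authentication: `estimator` is any
    model trained to predict the printed original from the digital
    template (a stand-in here; the paper uses a learned ML estimator).
    Returns an anomaly map and an accept/reject decision."""
    estimate = estimator(template)            # predicted printed original
    residual = (probe - estimate) ** 2        # pixel-wise squared error

    # Aggregate residuals over local windows to localize anomalies.
    h, w = residual.shape
    hw, ww = h // window, w // window
    blocks = residual[:hw * window, :ww * window]
    blocks = blocks.reshape(hw, window, ww, window).mean(axis=(1, 3))

    # Normalize against the typical block error and threshold.
    z = (blocks - blocks.mean()) / (blocks.std() + 1e-8)
    anomaly_map = z > tau
    return anomaly_map, not anomaly_map.any()

# Toy run with an identity "estimator" and a locally tampered probe.
rng = np.random.default_rng(0)
template = rng.integers(0, 2, (128, 128)).astype(float)
probe = template + 0.05 * rng.standard_normal((128, 128))
probe[32:48, 32:48] = 1 - probe[32:48, 32:48]   # simulated fake region
amap, is_original = authenticate_cdp(template, probe, estimator=lambda t: t)
print(is_original, amap.sum())                   # rejected; ~4 hot blocks
```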

    Generalized radar/radiometry imaging problems, Journal of Telecommunications and Information Technology, 2001, no. 4

    The paper presents the results of spatio-temporal imaging simulations based on radar, synthetic aperture radar (SAR) and radiometry systems. The analytical relationship between the object's scattering/emitting distribution and the formed image is given, and a general approach to describing the imaging system via the solution of a Fredholm equation is developed. The potential limit of image resolution is estimated based on the Cramér-Rao inequality.
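    For reference, a generic form of such an imaging model (notation ours, as a sketch; the paper's exact formulation may differ) is a Fredholm integral equation of the first kind:

```latex
% Generic imaging model as a Fredholm integral equation of the first kind
% (notation ours): the recorded image g is the system response kernel h
% applied to the object's scattering/emitting distribution f, plus noise n.
g(\mathbf{r}) = \int_{\Omega} h(\mathbf{r}, \mathbf{r}')\, f(\mathbf{r}')\,
                \mathrm{d}\mathbf{r}' + n(\mathbf{r})
% Resolution limits then follow from the Cramér-Rao inequality, which lower
% bounds the variance of any unbiased estimate of the object parameters:
\operatorname{Var}(\hat{\theta}) \ge I(\theta)^{-1}
```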

    Radio-astronomical Image Reconstruction with Conditional Denoising Diffusion Model

    Reconstructing sky models from dirty radio images for accurate source localization and flux estimation is crucial for studying galaxy evolution at high redshift, especially in deep fields observed with instruments like the Atacama Large Millimetre Array (ALMA). With new projects like the Square Kilometre Array (SKA), there is a growing need for better source extraction methods. Current techniques, such as CLEAN and PyBDSF, often fail to detect faint sources, highlighting the need for more accurate methods. This study proposes using stochastic neural networks to rebuild sky models directly from dirty images. This method can pinpoint radio sources and measure their fluxes with associated uncertainties, marking a potential improvement in radio source characterization. We tested this approach on 10164 images simulated with the CASA tool simalma, based on ALMA's Cycle 5.3 antenna setup. We applied conditional Denoising Diffusion Probabilistic Models (DDPMs) for sky model reconstruction, then used Photutils to determine source coordinates and fluxes, assessing the model's performance across different water vapor levels. Our method showed excellent performance in source localization, achieving more than 90% completeness at a signal-to-noise ratio (SNR) as low as 2. It also surpassed PyBDSF in flux estimation, accurately identifying fluxes for 96% of sources in the test set, a significant improvement over the 57% achieved by CLEAN + PyBDSF. Conditional DDPMs are thus a powerful tool for image-to-image translation, yielding accurate and robust characterization of radio sources and outperforming existing methodologies. While this study underscores their significant potential for applications in radio astronomy, we also acknowledge certain limitations that accompany their usage, suggesting directions for further refinement and research. (In production in Astronomy & Astrophysics.)
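    A minimal sketch of what conditional DDPM sampling looks like in this setting: the standard ancestral sampling loop (Ho et al.), with the dirty image passed to the denoiser as conditioning. The noise schedule, tensor shapes and stand-in denoiser below are assumptions for illustration; they are not the study's trained model.

```python
import torch

@torch.no_grad()
def conditional_ddpm_sample(eps_model, dirty, T=1000, shape=(1, 1, 64, 64)):
    """Minimal conditional DDPM ancestral sampler (standard DDPM update;
    the denoiser and linear beta schedule here are stand-ins).
    `eps_model(x_t, t, cond)` predicts the noise in x_t given the
    dirty image as conditioning."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    abars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)                       # x_T ~ N(0, I)
    for t in reversed(range(T)):
        eps = eps_model(x, torch.tensor([t]), dirty)
        coef = betas[t] / torch.sqrt(1.0 - abars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise  # sample x_{t-1}
    return x                                     # reconstructed sky model

# Usage with an untrained stand-in denoiser (for shape-checking only):
net = lambda x_t, t, cond: torch.zeros_like(x_t)
sky = conditional_ddpm_sample(net, dirty=torch.randn(1, 1, 64, 64))
print(sky.shape)  # torch.Size([1, 1, 64, 64])
```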

    Information-theoretic analysis of privacy-preserving identification

    Digital content fingerprinting has emerged as a possible technique for fast, robust and privacy-preserving identification, an application in high demand due to the increased interaction between humans and physical objects as well as the explosive growth of multimedia data. Despite their high attractiveness, the existing methods of content identification based on content fingerprinting still lack both a deep theoretical understanding of achievable performance limits and practical methods capable of achieving these limits. Additionally, little is yet known about privacy protection technologies that can provide zero privacy leakage together with a sufficient identification rate. In this thesis, the information-theoretic fundamentals of privacy-preserving content identification are introduced and analyzed. To cover the majority of existing practical techniques, a generalized model of a content identification system is proposed, consisting of a decorrelation transform, a dimensionality reduction mapper, privacy protection, an indexed database and a decoder. The state-of-the-art methods are analyzed according to performance measures defined on the achievable identification rate and privacy leakage. More particularly, the trade-off between them is investigated with respect to each element of the content identification system. The thesis provides four major contributions: a generalized analysis of dimensionality reduction mapping, a new technique of channel decomposition according to the signs and magnitudes of signal components, a concept of channel splitting and polarization, and, finally, a new method of zero-leakage privacy protection.
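    The sign-magnitude decomposition mentioned among the contributions can be illustrated with a small sketch: signs of random-projection coefficients form the binary fingerprint, while magnitudes serve as per-bit reliability side information at identification time. The weighting scheme and all parameters below are our illustrative assumptions, not the thesis's exact construction.

```python
import numpy as np

def fingerprint(x, W):
    """Decompose projection coefficients into sign and magnitude channels.
    Signs give the binary fingerprint; magnitudes indicate per-bit
    reliability (a sketch of the sign-magnitude decomposition idea)."""
    y = W @ x
    return (y > 0).astype(np.uint8), np.abs(y)

rng = np.random.default_rng(0)
dim, L, n_items = 256, 64, 1000
W = rng.standard_normal((L, dim)) / np.sqrt(dim)   # random projector

database = rng.standard_normal((n_items, dim))
codes = np.stack([fingerprint(x, W)[0] for x in database])

# Identification: match a degraded probe by minimum Hamming distance,
# weighting each bit by its reliability (magnitude channel).
probe = database[42] + 0.5 * rng.standard_normal(dim)
bits, mags = fingerprint(probe, W)
scores = (codes != bits).astype(float) @ mags      # weighted Hamming
print(np.argmin(scores))                           # expected: 42
```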