1,465 research outputs found

    In-Band Disparity Compensation for Multiview Image Compression and View Synthesis

    Roadmap on optical security

    Information security and authentication are important challenges facing society. Recent attacks by hackers on the databases of large commercial and financial companies have demonstrated that more research and development of advanced approaches are necessary to deny unauthorized access to critical data. Free space optical technology has been investigated by many researchers in information security, encryption, and authentication. The main motivation for using optics and photonics for information security is that optical waveforms possess many complex degrees of freedom, such as amplitude, phase, polarization, large bandwidth, nonlinear transformations, quantum properties of photons, and multiplexing, that can be combined in many ways to make information encryption more secure and more difficult to attack. This roadmap article presents an overview of the potential, recent advances, and challenges of optical security and encryption using free space optics. The roadmap on optical security comprises six categories that together include 16 short sections written by authors who have made relevant contributions in this field. The first category describes novel encryption approaches, including secure optical sensing, which summarizes double random phase encryption applications and flaws [Yamaguchi]; digital holographic encryption in free space, which describes encryption using multidimensional digital holography [Nomura]; simultaneous encryption of multiple signals [Pérez-Cabré]; asymmetric methods based on information truncation [Nishchal]; and dynamic encryption of video sequences [Torroba]. Asymmetric and one-way cryptosystems are analyzed by Peng. The second category is on compression for encryption: in their respective contributions, Alfalou and Stern pursue similar goals involving compressed data and compressive sensing encryption. The very important area of cryptanalysis is the topic of the third category, with two sections: Sheridan reviews phase retrieval algorithms used to perform different attacks, whereas Situ discusses nonlinear optical encryption techniques and the development of a rigorous optical information security theory. The fourth category, with two contributions, reports how encryption could be implemented at the nano- or micro-scale: Naruse discusses the use of nanostructures in security applications, and Carnicer proposes encoding information in a tightly focused beam. In the fifth category, encryption based on ghost imaging using single-pixel detectors is considered; in particular, the authors [Chen, Tajahuerce] emphasize the need for more specialized hardware and image processing algorithms. Finally, in the sixth category, Mosk and Javidi analyze in their corresponding papers how quantum imaging can benefit optical encryption systems. Sources that use few photons make encryption systems much more difficult to attack, providing a secure method for authentication.
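
    As an aside on the first category above, the double random phase encryption (DRPE) scheme revisited in Yamaguchi's section can be summarized in a few lines. The sketch below is a minimal numpy illustration of textbook DRPE, not code from any of the roadmap contributions; the image size, mask generation, and function names are assumptions made for the example.

```python
import numpy as np

def drpe_encrypt(img, mask1, mask2):
    # first random phase mask applied in the spatial (input) domain
    x = img * np.exp(2j * np.pi * mask1)
    # second random phase mask applied in the Fourier domain
    X = np.fft.fft2(x) * np.exp(2j * np.pi * mask2)
    # the ciphertext is a complex-valued, noise-like field
    return np.fft.ifft2(X)

def drpe_decrypt(cipher, mask2):
    # undo the Fourier-domain mask with its complex conjugate
    X = np.fft.fft2(cipher) * np.exp(-2j * np.pi * mask2)
    # for a real, non-negative image the spatial mask drops out in the modulus
    return np.abs(np.fft.ifft2(X))

rng = np.random.default_rng(0)
img = rng.random((64, 64))               # stand-in for a normalized grayscale image
mask1, mask2 = rng.random((2, 64, 64))   # two statistically independent phase keys
recovered = drpe_decrypt(drpe_encrypt(img, mask1, mask2), mask2)
assert np.allclose(recovered, img)
```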

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for those applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In the process of reading the present volume, the reader will appreciate the richness of their methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas

    Performance evaluation measurement of image steganography techniques with analysis of LSB based on variation image formats

    Recently, steganography has become an outstanding research area used for protecting data from unauthorized access. Steganography is defined as the art and science of hiding information in plain sight within various media such as text, images, audio, video, and network channels, so as not to arouse any suspicion, while steganalysis is the science of attacking a steganographic system to reveal the secret message. This research clarifies the various evaluation factors for image steganographic algorithms. The effectiveness of a steganographic technique is rated by three main parameters: payload capacity, image quality measures, and security measures. This study focuses on image steganography, which is the most popular of the steganographic branches. Generally, the least significant bit (LSB) is the most common and efficient approach used to embed the secret message. In addition, this paper provides more detailed knowledge of LSB embedding across various image formats. All metrics are illustrated with arithmetical equations, and some important trends are also discussed at the end of the paper
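
    To make the embedding and evaluation concrete, the following is a minimal sketch of spatial-domain LSB embedding together with PSNR as an image quality measure, assuming an 8-bit grayscale cover and a capacity of one bit per pixel; the function names and parameters are illustrative rather than taken from the paper.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Embed an array of 0/1 bits into the least significant bits of an 8-bit image."""
    stego = cover.flatten()                       # flatten() returns a copy
    if len(bits) > stego.size:
        raise ValueError("payload exceeds cover capacity (1 bit per pixel)")
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits   # clear the LSB, then set it
    return stego.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Recover the first n_bits hidden in the least significant bits."""
    return stego.flatten()[:n_bits] & 1

def psnr(cover, stego):
    """Peak signal-to-noise ratio, a common image quality measure for stego images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, (128, 128), dtype=np.uint8)   # stand-in cover image
payload = rng.integers(0, 2, 1000, dtype=np.uint8)         # 1000 secret bits
stego = lsb_embed(cover, payload)
assert np.array_equal(lsb_extract(stego, 1000), payload)
print(f"PSNR: {psnr(cover, stego):.1f} dB")
```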

    Toward smart and efficient scientific data management

    Scientific research generates vast amounts of data, and the scale of data has significantly increased with advancements in scientific applications. To manage this data effectively, lossy data compression techniques are necessary to reduce storage and transmission costs. Nevertheless, the use of lossy compression introduces uncertainties related to its performance. This dissertation aims to answer key questions surrounding lossy data compression, such as how the performance changes, how much reduction can be achieved, and how to optimize these techniques for modern scientific data management workflows. One of the major challenges in adopting lossy compression techniques is the trade-off between data accuracy and compression performance, particularly the compression ratio. This trade-off is not well understood, leading to a trial-and-error approach in selecting appropriate setups. To address this, the dissertation analyzes and estimates the compression performance of two modern lossy compressors, SZ and ZFP, on HPC datasets at various error bounds. By predicting compression ratios based on intrinsic metrics collected under a given base error bound, the effectiveness of the estimation scheme is confirmed through evaluations using real HPC datasets. Furthermore, as scientific simulations scale up on HPC systems, the disparity between computation and input/output (I/O) becomes a significant challenge. To overcome this, error-bounded lossy compression has emerged as a solution to bridge the gap between computation and I/O. Nonetheless, the lack of understanding of compression performance hinders the wider adoption of lossy compression. The dissertation aims to address this challenge by examining the complex interaction between data, error bounds, and compression algorithms, providing insights into compression performance and its implications for scientific production. Lastly, the dissertation addresses the performance limitations of progressive data retrieval frameworks for post-hoc data analytics on full-resolution scientific simulation data. Existing frameworks suffer from overly pessimistic error control theory, leading to fetching more data than necessary for recomposition, resulting in additional I/O overhead. To enhance the performance of progressive retrieval, deep neural networks are leveraged to optimize the error control mechanism, reducing unnecessary data fetching and improving overall efficiency. By tackling these challenges and providing insights, this dissertation contributes to the advancement of scientific data management, lossy data compression techniques, and HPC progressive data retrieval frameworks. The findings and methodologies presented pave the way for more efficient and effective management of large-scale scientific data, facilitating enhanced scientific research and discovery. For future research, this dissertation highlights the importance of investigating the impact of lossy data compression on downstream analysis. On the one hand, more data reduction can be achieved under scenarios like image visualization where the error tolerance is very high, leading to less I/O and communication overhead. On the other hand, post-hoc calculations based on physical properties after compression may lead to misinterpretation, as the statistical information of such properties might be compromised during compression. Therefore, a comprehensive understanding of the impact of lossy data compression on each specific scenario is vital to ensure accurate analysis and interpretation of results
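
    As a toy illustration of the error-bound versus compression-ratio trade-off studied above (and not of the actual SZ or ZFP algorithms), the sketch below quantizes a smooth field under an absolute error bound, applies a lossless stage, and reports the resulting ratio; the data generator and parameter values are assumptions made for the example.

```python
import zlib
import numpy as np

def lossy_compress(data, error_bound):
    """Toy error-bounded lossy compressor: uniform quantization plus a lossless stage."""
    # quantizing to steps of 2*error_bound keeps the reconstruction error <= error_bound
    codes = np.round(data / (2.0 * error_bound)).astype(np.int32)
    return zlib.compress(codes.tobytes(), 9)

def lossy_decompress(payload, shape, error_bound):
    codes = np.frombuffer(zlib.decompress(payload), dtype=np.int32).reshape(shape)
    return codes * (2.0 * error_bound)

# a smooth, HPC-like 1-D field; rougher data compresses far less at the same bound
data = np.cumsum(np.random.default_rng(0).normal(size=100_000))

for eb in (1e-1, 1e-2, 1e-3):
    blob = lossy_compress(data, eb)
    recon = lossy_decompress(blob, data.shape, eb)
    ratio = data.nbytes / len(blob)
    print(f"error bound {eb:g}: ratio {ratio:6.1f}x, max error {np.abs(recon - data).max():.2e}")
```

    Real compressors such as SZ and ZFP add prediction or transform stages on top of this idea, which is why their ratio versus error-bound behavior is harder to anticipate and worth estimating, as the dissertation does.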

    First Quantization Estimation by a Robust Data Exploitation Strategy of DCT Coefficients

    It is well known that the JPEG compression pipeline leaves residual traces in the compressed images that are useful for forensic investigations. Through the analysis of such insights, the history of a digital image can be reconstructed by means of First Quantization Estimations (FQE), often employed for the camera model identification (CMI) task. In this paper, a novel FQE technique for JPEG double compressed images is proposed which employs a mixed approach based on Machine Learning and statistical analysis. The proposed method was designed to work in the aligned case (i.e., the 8 × 8 JPEG grid is not misaligned among the various compressions) and was demonstrated to work effectively in different challenging scenarios (small input patches, custom quantization tables) without strong a priori assumptions, surpassing state-of-the-art solutions. Finally, an in-depth analysis of the impact of image input sizes, dataset image resolutions, custom quantization tables and different Discrete Cosine Transform (DCT) implementations was carried out
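
    For readers unfamiliar with the underlying signal, the sketch below shows the kind of DCT-coefficient statistics that FQE methods exploit: after a first compression with quantization step q1, the block-wise DCT coefficients of the decompressed image cluster near multiples of q1 (rounding and clipping in a real JPEG pipeline blur these clusters, and they are omitted here). The simulation and function names are illustrative and are not the paper's actual Machine Learning pipeline.

```python
import numpy as np
from scipy.fftpack import dct, idct

def blockwise(img, fn, block=8):
    """Apply fn to every aligned 8x8 block and return the processed image."""
    out = img.astype(np.float64).copy()
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            out[i:i + block, j:j + block] = fn(out[i:i + block, j:j + block])
    return out

dct2 = lambda b: dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
idct2 = lambda b: idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

# simulate a first compression with quantization step q1 on every DCT coefficient
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (256, 256)).astype(np.float64)
q1 = 12
once = blockwise(img, lambda b: idct2(np.round(dct2(b) / q1) * q1))

# the DCT coefficients of the decompressed image sit at multiples of q1;
# FQE methods estimate q1 from the spacing of the resulting histogram peaks
coeffs = blockwise(once, dct2)
ac = coeffs[1::8, 0::8].ravel()                    # the (1, 0) mode from every 8x8 block
hist, edges = np.histogram(ac, bins=np.arange(-120.5, 121.5, 1.0))
centers = edges[:-1] + 0.5
print(np.unique(np.diff(centers[hist > 0])))       # constant spacing of 12 reveals q1
```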

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book compiled from peer-reviewed articles on various advanced technologies related to applications and theories of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signals include image, video, audio, character recognition, and optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues who are interested in these topics will find it a worthwhile read

    Improved decoder metrics for DS-CDMA in practical 3G systems

    While 4G mobile networks have been deployed since 2008 in several of the more developed markets, 3G mobile networks are still growing, with 3G having the largest market (in terms of number of users) by 2019. 3G networks are based on Direct-Sequence Code-Division Multiple-Access (DS-CDMA). DS-CDMA suffers mainly from Multiple Access Interference (MAI) and fading. Multi-User Detectors (MUDs) and Error Correcting Codes (ECCs) are the primary means to combat MAI and fading. MUDs, however, suffer from high complexity, including most sub-optimal algorithms. Hence, most commercial implementations still use conventional single-user matched filter detectors. This thesis proposes improved channel decoder metrics for enhancing uplink performance in 3G systems. The basic idea is to model the MAI as conditionally Gaussian, instead of Gaussian, conditioned on the users' cross-correlations and/or the channel fading coefficients. The conditioning implies a time-dependent variance that provides enhanced reliability estimates at the decoder inputs. We derive improved log-likelihood ratios (ILLRs) for bit- and chip-asynchronous multipath fading channels. We show that while utilizing knowledge of all users' code sequences for the ILLR metric is very complicated in chip-asynchronous reception, a simplified expression relying on a truncated group delay results in negligible performance loss. We also derive an expression for the error probability using the standard Gaussian approximation for asynchronous channels with the widely used raised cosine pulse shaping. Our study framework considers practical 3G systems, with finite interleaving, correlated multipath fading channel models, practical pulse shaping, and system parameters obtained from the CDMA2000 standard. Our results show that for the fully practical cellular uplink channel, the performance advantage due to ILLRs is significant and approaches 3 dB
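
    As a simplified illustration of why a conditional (time-varying) variance sharpens the decoder metric (a toy model, not the thesis's derivation for asynchronous multipath channels), the sketch below compares BPSK channel LLRs computed with one averaged interference variance against LLRs computed with the per-symbol conditional variance, using a rate-1/3 repetition code as a stand-in decoder; all simulation parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
k, rep = 20_000, 3                          # information bits, repetition factor
bits = rng.integers(0, 2, k)
x = np.repeat(1.0 - 2.0 * bits, rep)        # BPSK mapping: 0 -> +1, 1 -> -1, each bit sent 3 times

# MAI-plus-noise variance modeled as conditionally Gaussian: it changes from
# symbol to symbol with the instantaneous cross-correlations and fading,
# instead of being one constant averaged value
sigma2 = rng.choice([0.1, 2.0], size=k * rep)
h = np.abs(rng.normal(size=k * rep, scale=1 / np.sqrt(2)) +
           1j * rng.normal(size=k * rep, scale=1 / np.sqrt(2)))   # Rayleigh fading gains
y = h * x + rng.normal(scale=np.sqrt(sigma2))

# channel LLR for BPSK in Gaussian noise: 2 * h * y / variance
llr_conv = 2.0 * h * y / sigma2.mean()      # conventional metric: averaged variance
llr_impr = 2.0 * h * y / sigma2             # improved metric: conditional, per-symbol variance

def ber(llr):
    """Repetition decoder: sum the LLRs of the 3 copies of each bit, then slice."""
    decisions = llr.reshape(k, rep).sum(axis=1) < 0
    return np.mean(decisions != bits.astype(bool))

print(f"conventional metric BER: {ber(llr_conv):.4f}")
print(f"improved metric BER:     {ber(llr_impr):.4f}")
```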