Experimental evidence of a hiding zone in a density-near-zero acoustic metamaterial
[EN] This paper examines the feasibility of cloaking an obstacle using Plate-type Acoustic Metamaterials (PAMs). We present two distinct strategies
to cloak this obstacle, using either the near-zero-density regime of a periodic arrangement of plates or the acoustic doping phenomenon
to achieve simultaneous zero-phase propagation and impedance matching. We study the strong limitations induced by the unavoidable viscothermal and
viscoelastic losses in such a system. A hiding zone is reported analytically, numerically, and experimentally.
In contrast to cloaking, where zero-phase propagation must be accompanied by total transmission and zero reflection, the hiding configuration
requires only that the scattering properties of the hiding device be unaffected by the presence of the obstacle embedded in it.
Unlike cloaking, the hiding phenomenon is therefore achievable even with a realistic PAM possessing unavoidable losses.

This article is based upon work from COST Action DENORMS (No. CA15125), supported by COST (European Cooperation in Science and Technology). The authors acknowledge the support of the ANR-RGC METARoom project (No. ANR-18-CE08-0021). J. Christensen acknowledges support from the European Research Council (ERC) through Starting Grant No. 714577 PHONOMETA and from MINECO through a Ramón y Cajal Grant (No. RYC-2015-17156).

Malléjac, M.; Merkel, A.; Sánchez-Dehesa Moreno-Cid, J.; Christensen, J.; Tournat, V.; Romero-García, V.; Groby, J. (2021). Experimental evidence of a hiding zone in a density-near-zero acoustic metamaterial. Journal of Applied Physics 129(14):1-9. https://doi.org/10.1063/5.0042383
Roadmap on optical security
Mobile app with steganography functionalities
[Abstract]: Steganography is the practice of hiding information within other data, such as images, audio, and video. In this research, we apply this
technique to create a mobile application that lets users conceal their own secret data inside other media formats, send the encoded data to other
users, and even analyse images that may have been subjected to a steganographic attack.
For image steganography, lossless compression formats employ Least Significant Bit (LSB)
encoding within the Red Green Blue (RGB) pixel values. Conversely, lossy compression formats,
such as JPEG, conceal data in the frequency domain by altering the quantized coefficient
matrices of the files.
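The LSB scheme described above can be illustrated with a short, self-contained Python sketch. The function names and the list-of-(R, G, B)-tuples pixel representation are our own simplification for illustration, not the app's actual code, which works on image files:

```python
def embed_lsb(pixels, message):
    """Hide `message` bytes in the least significant bit of each
    R, G, B channel value (capacity: 3 bits per pixel)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > 3 * len(pixels):
        raise ValueError("message too long for this image")
    it = iter(bits)
    stego = []
    for r, g, b in pixels:
        channels = []
        for value in (r, g, b):
            bit = next(it, None)  # None once the whole message is embedded
            channels.append(value if bit is None else (value & ~1) | bit)
        stego.append(tuple(channels))
    return stego


def extract_lsb(pixels, length):
    """Read back `length` bytes from the channel LSBs."""
    bits = [value & 1 for pixel in pixels for value in pixel]
    return bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, 8 * length, 8)
    )
```

Because only the lowest bit of each channel is touched, every channel value moves by at most 1, which is what keeps the embedding visually imperceptible.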
Video steganography follows two similar methods. In lossless video formats, the LSB
approach is applied to the RGB pixel values of individual frames.
Meanwhile, in lossy High Efficiency Video Coding (HEVC) formats, a displaced-bit modification
technique is applied to the YUV components.

Bachelor's thesis (UDC.FIC). Computer Engineering. Course 2022/202
Recent Advances in Signal Processing
Signal processing is a critical task in the majority of new technological inventions and applications across science and engineering. Classical signal-processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, favoring closed-form tractability over real-world accuracy; these constraints were imposed by the limited computing tools of the time. Over the last few decades, signal processing theory, development, and applications have matured rapidly and now draw on tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily at students and researchers who want exposure to a wide variety of signal processing techniques and algorithms. It includes 27 chapters grouped into five application areas, ordered as follows: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained, so the interested reader can choose any chapter and skip to another without losing continuity.
Entropy in Image Analysis II
Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly in applications such as medicine and security, where the results of the processing are of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
Secure digital documents using Steganography and QR Code
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.

With the increasing use of the Internet, several problems have arisen regarding the processing of electronic documents, including content filtering and content retrieval/search. Moreover, document security has taken centre stage, including copyright protection and broadcast monitoring. There is an acute need for an effective tool that can establish the identity, location, and creation time of a document, so that it can be determined whether or not its contents were tampered with after creation. Owing to the sensitivity of the large amounts of data processed on a daily basis, verifying the authenticity and integrity of a document is more important now than it ever was, and document authenticity verification has unsurprisingly become a centre of attention in research. Consequently, this research is concerned with creating a tool that addresses the above problem.

This research proposes the use of a Quick Response (QR) code as a message carrier for Text Key-print. Text Key-print is a novel method that employs the basic elements of the language (i.e. the characters of the alphabet) to establish the authenticity of electronic documents by transforming their physical structure into a logically structured relationship. The resultant dimensional matrix is converted into a binary stream and encapsulated, together with a serial number or URL, inside a QR code to form a digital fingerprint mark. For hiding the QR code, two image steganography techniques were developed, based on the spatial and the transform domains. In the spatial domain, three methods were proposed and implemented based on least-significant-bit insertion and the use of a pseudorandom number generator to scatter the message across a set of arbitrary pixels. These methods utilise the three colour channels of the RGB model to embed one, two, or three bits per eight-bit channel, yielding three different hiding capacities. The second technique is an adaptive approach in the transform domain, where a threshold value is calculated at a predefined location to determine the embedding strength. The quality of the generated stego images was evaluated using both objective (PSNR) and subjective (DSCQS) methods to ensure the reliability of the proposed methods. The experimental results revealed that PSNR is not a strong indicator of perceived stego-image quality, although it is not a bad predictor of actual quality either. Since the visual difference between the cover and the stego image must be imperceptible to the human visual system, human observers with different qualifications and experience in image processing were asked to evaluate the perceived quality of the cover and stego images, and the subjective responses were analysed using statistical measures describing the distribution of the assessors' scores. The proposed scheme thus presents an alternative to the traditional protection techniques of digital signatures and watermarking.
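The first spatial-domain idea above (LSB insertion with a pseudorandom number generator scattering the payload over arbitrary pixels) can be sketched as follows. This is a simplified, single-channel Python illustration under our own naming, assuming sender and receiver share the PRNG seed as a stego key; it is not the thesis's implementation:

```python
import random


def scatter_embed(pixels, message, seed):
    """Embed message bits in the LSB of the R channel of pixels picked
    by a seeded PRNG, so the payload is scattered over arbitrary pixels."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    # sample() raises ValueError if the message exceeds one bit per pixel
    order = random.Random(seed).sample(range(len(pixels)), len(bits))
    stego = [list(p) for p in pixels]
    for bit, index in zip(bits, order):
        stego[index][0] = (stego[index][0] & ~1) | bit
    return [tuple(p) for p in stego]


def scatter_extract(pixels, length, seed):
    """Recover `length` bytes; the same seed reproduces the pixel order."""
    order = random.Random(seed).sample(range(len(pixels)), 8 * length)
    bits = [pixels[i][0] & 1 for i in order]
    return bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, len(bits), 8)
    )
```

Without the seed, an observer cannot tell which pixels carry payload bits, which is the point of the scattering.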
Data Hiding and Its Applications
Data hiding techniques have been widely used to provide copyright protection, data integrity, covert communication, non-repudiation, and authentication, among other applications. In the context of the increased dissemination and distribution of multimedia content over the internet, data hiding methods, such as digital watermarking and steganography, are becoming increasingly relevant in providing multimedia security. The goal of this book is to focus on the improvement of data hiding algorithms and their different applications (both traditional and emerging), bringing together researchers and practitioners from different research fields, including data hiding, signal processing, cryptography, and information theory, among others.
The Survey, Taxonomy, and Future Directions of Trustworthy AI: A Meta Decision of Strategic Decisions
When making strategic decisions, we are often confronted with overwhelming
information to process. The situation can be further complicated when some
pieces of evidence contradict each other or are paradoxical. The challenge
then becomes determining which information is useful and which should
be eliminated. This process is known as meta-decision. Likewise, when it comes
to using Artificial Intelligence (AI) systems for strategic decision-making,
placing trust in the AI itself becomes a meta-decision, given that many AI
systems are viewed as opaque "black boxes" that process large amounts of data.
Trusting an opaque system involves deciding on the level of Trustworthy AI
(TAI). We propose a new approach to address this issue by introducing a novel
taxonomy or framework of TAI, which encompasses three crucial domains:
articulate, authentic, and basic, corresponding to different levels of trust. To underpin
these domains, we create ten dimensions to measure trust:
explainability/transparency, fairness/diversity, generalizability, privacy,
data governance, safety/robustness, accountability, reproducibility,
reliability, and sustainability. We aim to use this taxonomy to conduct a
comprehensive survey and explore different TAI approaches from a strategic
decision-making perspective.