
    Reversible Image Watermarking Using Modified Quadratic Difference Expansion and Hybrid Optimization Technique

    Get PDF
    With increasing copyright violation cases, watermarking of digital images is a popular solution for securing online media content. Since some sensitive applications require recovery of the original image after watermark extraction, reversible watermarking is widely preferred. This article introduces a Modified Quadratic Difference Expansion (MQDE) and fractal-encryption-based reversible watermarking scheme for securing the copyrights of images. First, fractal encryption based on Tromino's L-shaped theorem is applied to the watermark to improve security. In addition, Cuckoo Search-Grey Wolf Optimization (CSGWO) is applied to the cover image to optimize the allocation of blocks for inserting the encrypted watermark, which greatly increases its invisibility. While the developed MQDE technique improves capacity and visual quality, a novel data-driven distortion control unit ensures optimal performance. The suggested approach improves transparency and capacity with little trade-off and allows both the secret image and the original cover image to be recovered without losing essential information. Simulation results show that the approach is superior to existing methods in terms of embedding capacity, and with an average PSNR of 67 dB it offers good imperceptibility compared to other schemes.
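
    The MQDE scheme itself is not specified in this abstract; as a point of reference, the sketch below implements the classic (non-quadratic) difference expansion that such schemes build on: one bit is embedded into a pixel pair by expanding their difference, and both the bit and the original pair are exactly recoverable. Function names are illustrative and overflow/underflow handling is omitted.

```python
import numpy as np

def de_embed(x, y, bit):
    """Embed one bit into a pixel pair via classic difference expansion."""
    l = (int(x) + int(y)) // 2          # integer average
    h = int(x) - int(y)                 # difference
    h_new = 2 * h + bit                 # expand the difference and append the bit
    x_new = l + (h_new + 1) // 2
    y_new = l - h_new // 2
    return x_new, y_new

def de_extract(x_new, y_new):
    """Recover the bit and the original pixel pair (inverse of de_embed)."""
    l = (int(x_new) + int(y_new)) // 2
    h_new = int(x_new) - int(y_new)
    bit = h_new & 1
    h = h_new // 2                       # floor division recovers the original difference
    x = l + (h + 1) // 2
    y = l - h // 2
    return bit, x, y

# Round-trip check on a sample pair (overflow checks omitted for brevity)
xn, yn = de_embed(140, 137, 1)
print(de_extract(xn, yn))                # -> (1, 140, 137)
```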

    An image steganography using improved hyper-chaotic Henon map and fractal Tromino

    Get PDF
    Steganography is a vital security approach that hides secret content within ordinary data such as multimedia. In the proposed method, the cover image is first converted to the wavelet domain using the integer wavelet transform (IWT), which avoids the rounding errors of floating-point transforms. The grey wolf optimizer (GWO) is then used to choose the pixels of the cover image that will carry the hidden image. GWO selects pixels effectively by evaluating a fitness function based on the entropy and pixel intensities of the cover image. Moreover, the secret image is encrypted using the proposed improved hyper-chaotic Henon map and fractal Tromino. The suggested method increases computational security and efficiency with increased embedding capacity. After the secret image is embedded and the cover image is altered, the least significant bit (LSB) is used to locate tampered regions and to provide self-recovery characteristics in the digital image. According to the findings, the proposed technique provides a more secure transmission network with lower complexity, evaluated in terms of peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), structural similarity index (SSIM), entropy and mean square error (MSE). Compared with current approaches, the proposed method performed better, achieving a PSNR of 70.58 dB and an SSIM of 0.999.
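
    As a rough illustration of the chaotic-encryption step, the sketch below uses the classic Henon map (not the paper's improved hyper-chaotic variant) to derive a keystream that is XORed with the secret image; the seed values and the byte-mapping rule are arbitrary choices for the example.

```python
import numpy as np

def henon_keystream(n_bytes, x0=0.1, y0=0.3, a=1.4, b=0.3, burn_in=1000):
    """Generate a pseudo-random byte stream from the classic Henon map."""
    x, y = x0, y0
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(burn_in + n_bytes):
        x, y = 1.0 - a * x * x + y, b * x    # Henon iteration
        if i >= burn_in:
            out[i - burn_in] = int(abs(x) * 1e6) % 256   # map the state onto one byte
    return out

def xor_encrypt(image, key_params):
    """Encrypt (or decrypt) a grayscale secret image by XOR with the keystream."""
    flat = image.astype(np.uint8).ravel()
    ks = henon_keystream(flat.size, *key_params)
    return (flat ^ ks).reshape(image.shape)

secret = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
cipher = xor_encrypt(secret, (0.1, 0.3))
assert np.array_equal(xor_encrypt(cipher, (0.1, 0.3)), secret)   # XOR is its own inverse
```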

    Anonymizing Speech: Evaluating and Designing Speaker Anonymization Techniques

    Full text link
    The growing use of voice user interfaces has led to a surge in the collection and storage of speech data. While data collection allows for the development of efficient tools powering most speech services, it also poses serious privacy issues for users, as centralized storage makes private personal speech data vulnerable to cyber threats. With the increasing use of voice-based digital assistants like Amazon's Alexa, Google's Home, and Apple's Siri, and with the increasing ease with which personal speech data can be collected, the risk of malicious use of voice cloning and of speaker, gender, pathology or other trait recognition has increased. This thesis proposes solutions for anonymizing speech and for evaluating the degree of anonymization. In this work, anonymization refers to making personal speech data unlinkable to an identity while maintaining the usefulness (utility) of the speech signal (e.g., access to linguistic content). We start by identifying several challenges that evaluation protocols need to consider in order to assess the degree of privacy protection properly. We clarify how anonymization systems must be configured for evaluation purposes and highlight that many practical deployment configurations do not permit privacy evaluation. Furthermore, we study the most common voice-conversion-based anonymization system and identify its weak points before suggesting new methods to overcome some of its limitations. We isolate all components of the anonymization system to evaluate the degree of speaker PPI associated with each of them. Then, we propose several transformation methods for each component to reduce speaker PPI as much as possible while maintaining utility. We promote anonymization algorithms based on quantization-based transformation as an alternative to the most widely used noise-based approach. Finally, we devise a new attack method to invert the anonymization. Comment: PhD thesis, Pierre Champion, Université de Lorraine - INRIA Nancy; for associated source code, see https://github.com/deep-privacy/SA-toolki
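
    The thesis's quantization-based transformations are not detailed in this abstract; the toy sketch below only conveys the general idea of replacing fine-grained, speaker-specific feature vectors with entries from a small shared codebook, so that individual speaker detail is discarded while coarse content is retained. All names, sizes and data are hypothetical.

```python
import numpy as np

def quantize_to_codebook(vectors, codebook):
    """Replace each vector with its nearest codebook entry (Euclidean distance).

    Collapsing many speaker-specific vectors onto a small shared codebook removes
    fine-grained speaker detail while keeping coarse content information.
    """
    # Pairwise distances: shape (n_vectors, n_codes)
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=-1)
    return codebook[d.argmin(axis=1)]

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))        # stand-in for per-frame speech features
codes = rng.normal(size=(8, 16))          # small shared codebook (hypothetical)
anon = quantize_to_codebook(feats, codes)
print(len(np.unique(anon, axis=0)))       # at most 8 distinct vectors remain
```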

    Robust image steganography method suited for printing = Robustna steganografska metoda prilagođena procesu tiska

    Get PDF
    This doctoral dissertation presents a robust steganographic method developed and adapted for print. The main goal of the method is to provide protection against packaging counterfeiting. Packaging protection is achieved by embedding multiple bits of information into an image at the encoder and then masking the information so that it is invisible to the human eye. At the decoder, the information is detected using an infrared camera. Preliminary research showed that the relevant literature lacks methods developed for the print domain. The reason for this gap is that developing steganographic methods for print requires more resources and materials than developing similar methods for the digital domain. Print-oriented methods also tend to require a higher level of complexity, since various forms of processing occur during reproduction that can compromise the information in the image [1]. To preserve the hidden information, the method must be robust to the processing that takes place during reproduction. To achieve a high level of robustness, the information can be embedded in the frequency domain of the image [2], [3]. The frequency domain of an image is accessed through mathematical transformations, most commonly the discrete cosine transform (DCT), the discrete wavelet transform (DWT) and the discrete Fourier transform (DFT) [2], [4]. Each of these transformations has certain advantages and disadvantages, depending on the context in which the method is developed [5]. For methods adapted to the printing process, the discrete Fourier transform is the optimal choice, since DFT-based methods are robust to the geometric transformations that occur during reproduction [5], [6]. This research uses images in the CMYK color space. Each image is first divided into blocks, and the information is embedded in each block individually. Using the DFT, the K channel of each image block is transformed into the frequency domain, where the information is embedded. Achromatic replacement is used to mask the visible artifacts introduced by the embedding. Examples of the successful use of achromatic replacement for masking artifacts can be found in [7] and [8]. After the information has been embedded in every image block, the blocks are reassembled into a single image. Achromatic replacement then modifies the values of the image's C, M and Y channels, while the K channel, which carries the embedded information, remains unchanged. As a result, after masking by achromatic replacement the marked image has the same visual properties as the image before marking. The experimental part of the work uses 1000 images in the CMYK color space. In a digital environment, the robustness of the method was tested against image attacks specific to the reproduction process: scaling, blur, noise, rotation and compression. The robustness of the method to the actual reproduction process was also tested using printed samples. The objective metric bit error rate (BER) was used for evaluation. The potential for optimizing the method was tested by image processing (unsharp filter) and by using error correction codes (ECC). The image quality after embedding was also investigated, using the objective metrics peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). PSNR and SSIM are so-called full-reference metrics; in other words, both the unmarked and the marked image are needed at the same time in order to determine the level of similarity between them [9], [10].
A subjective analysis was conducted with 36 participants, using a total of 144 image samples. The participants rated the visibility of artifacts on a scale from zero (invisible) to three (very visible). The results show that the method is highly robust to the reproduction process, and that it can indeed be optimized using the unsharp filter and ECC. Image quality remains high despite the embedded information, as confirmed both by the experiments with objective metrics and by the subjective analysis.
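
    As a simplified illustration of DFT-domain embedding in the K channel (not the exact modulation used in the dissertation), the sketch below forces selected mid-frequency magnitudes to one of two levels to encode bits, while preserving conjugate symmetry so the inverse transform stays real-valued; the positions, strength and threshold are arbitrary example values.

```python
import numpy as np

def embed_bits_dft(block, bits, coords, strength=30.0):
    """Embed bits in mid-frequency DFT magnitudes of one image block (K channel).

    coords: list of (u, v) frequency positions; the conjugate-symmetric positions
    are updated too, so the inverse transform stays real-valued.
    """
    F = np.fft.fft2(block.astype(float))
    n, m = block.shape
    for (u, v), bit in zip(coords, bits):
        mag = strength * (2 if bit else 1)           # two magnitude levels encode 0/1
        phase = np.angle(F[u, v])
        F[u, v] = mag * np.exp(1j * phase)
        F[-u % n, -v % m] = np.conj(F[u, v])         # keep Hermitian symmetry
    return np.real(np.fft.ifft2(F))

def extract_bits_dft(block, coords, strength=30.0):
    F = np.fft.fft2(block.astype(float))
    return [int(abs(F[u, v]) > 1.5 * strength) for (u, v) in coords]

block = np.random.randint(0, 256, (64, 64))
coords = [(5, 9), (7, 4), (10, 10)]
marked = embed_bits_dft(block, [1, 0, 1], coords)
print(extract_bits_dft(marked, coords))              # -> [1, 0, 1]
```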

    Visual secret sharing and related Works -A Review

    Get PDF
    The accelerated development of network technology and internet applications has increased the significance of protecting digital data and images from unauthorized access and manipulation. Secret image sharing (SIS) is a crucial technique used to protect private digital photos from illegal editing and copying. SIS can be classified into two types: single-secret sharing (SSS) and multi-secret sharing (MSS). In SSS, a single secret image is divided into multiple shares, while in MSS, multiple secret images are divided into multiple shares. Both SSS and MSS ensure that the original secret images cannot be reconstructed without the correct combination of shares. Several secret image-sharing methods have therefore been developed based on these two approaches, for example visual cryptography, steganography, discrete wavelet transform, watermarking, and threshold schemes. All of these techniques can randomly divide the secret image into a large number of shares, none of which reveals any information to an attacker. This study examines various visual secret-sharing (VSS) schemes as notable examples of participant secret-sharing methods. Several constructions that generalize and enhance VSS are also discussed, and a comparative analysis of several methods based on various attributes is provided in order to better identify future directions for secret image sharing. Generally speaking, the image quality produced by the developed methodologies is preferable to that achieved with the traditional visual secret-sharing methodology.
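
    For readers unfamiliar with the sharing principle, here is a minimal (2, 2) XOR-based image-sharing sketch (a simpler relative of classical visual cryptography, which instead uses subpixel expansion and stacking): each share alone is uniform noise, and XORing both shares restores the secret.

```python
import numpy as np

def make_shares(secret):
    """(2, 2) XOR-based image sharing: each share alone is uniformly random."""
    rng = np.random.default_rng()
    share1 = rng.integers(0, 256, secret.shape, dtype=np.uint8)   # pure noise
    share2 = secret.astype(np.uint8) ^ share1                     # one-time-pad style
    return share1, share2

def reconstruct(share1, share2):
    return share1 ^ share2                                        # XOR recovers the secret

secret = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
s1, s2 = make_shares(secret)
assert np.array_equal(reconstruct(s1, s2), secret)
```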

    Low-frequency Image Deep Steganography: Manipulate the Frequency Distribution to Hide Secrets with Tenacious Robustness

    Full text link
    Image deep steganography (IDS) is a technique that utilizes deep learning to embed a secret image invisibly into a cover image to generate a container image. However, the container images generated by convolutional neural networks (CNNs) are vulnerable to attacks that distort their high-frequency components. To address this problem, we propose a novel method called Low-frequency Image Deep Steganography (LIDS) that allows frequency distribution manipulation in the embedding process. LIDS extracts a feature map from the secret image and adds it to the cover image to yield the container image. The container image is not directly output by the CNNs, and thus, it does not contain high-frequency artifacts. The extracted feature map is regulated by a frequency loss to ensure that its frequency distribution mainly concentrates on the low-frequency domain. To further enhance robustness, an attack layer is inserted to damage the container image. The retrieval network then retrieves a recovered secret image from a damaged container image. Our experiments demonstrate that LIDS outperforms state-of-the-art methods in terms of robustness, while maintaining high fidelity and specificity. By avoiding high-frequency artifacts and manipulating the frequency distribution of the embedded feature map, LIDS achieves improved robustness against attacks that distort the high-frequency components of container images.
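
    The exact form of the paper's frequency loss is not given in the abstract; the numpy sketch below only conveys the idea by measuring the fraction of a feature map's spectral energy that lies outside a low-frequency disc (in training this would be a differentiable penalty, e.g. implemented in PyTorch). The cutoff radius is an assumed parameter.

```python
import numpy as np

def high_frequency_energy(feature_map, cutoff=0.25):
    """Fraction of spectral energy outside a low-frequency disc of the 2-D spectrum.

    cutoff is the disc radius as a fraction of the (normalized) maximum frequency.
    """
    F = np.fft.fftshift(np.fft.fft2(feature_map))     # DC moved to the center
    h, w = feature_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2)) / np.sqrt(2)
    high_mask = r > cutoff                            # True outside the low-frequency disc
    return float(np.sum(np.abs(F[high_mask]) ** 2) / np.sum(np.abs(F) ** 2))

smooth = np.outer(np.sin(np.linspace(0, np.pi, 64)), np.sin(np.linspace(0, np.pi, 64)))
noisy = np.random.default_rng(0).normal(size=(64, 64))
print(high_frequency_energy(smooth), high_frequency_energy(noisy))   # smooth << noisy
```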

    Optimization of medical image steganography using n-decomposition genetic algorithm

    Get PDF
    Protecting patients' confidential information is a critical concern in medical image steganography. The Least Significant Bit (LSB) technique has been widely used for secure communication. However, it is susceptible to imperceptibility and security risks due to the direct manipulation of pixels, and ASCII patterns present limitations. Consequently, sensitive medical information is subject to loss or alteration. Despite attempts to optimize LSB, these issues persist because (1) the optimization is formulated with non-valid implicit constraints, causing inflexibility in reaching optimal embedding; (2) the search process lacks convergence, since the message length significantly affects the size of the solution space; and (3) application customizability is limited, as different data require more flexibility in controlling the embedding process. To overcome these limitations, this study proposes a technique known as the n-decomposition genetic algorithm. This algorithm uses a variable-length search to identify the best locations to embed the secret message, incorporating constraints to avoid local-minimum traps. The methodology consists of five main phases: (1) initial investigation, (2) formulating an embedding scheme, (3) constructing a decomposition scheme, (4) integrating the schemes' designs into the proposed technique, and (5) evaluating the performance of the proposed technique on medical datasets from kaggle.com. The proposed technique showed resistance to statistical analysis, evaluated using RS analysis and histogram analysis. It also demonstrated superior imperceptibility and security, measured by MSE and PSNR, on the Chest and Retina datasets (MSE of 0.0557 and 0.0550; PSNR of 60.6696 and 60.7287 dB, respectively). However, the benchmark outperforms the proposed technique on the Brain dataset due to the homogeneous nature of the images and the extensive black background. This research contributes a genetic-based decomposition approach to medical image steganography and provides a technique that offers improved security without compromising efficiency and convergence. Further validation is required to determine its effectiveness in real-world applications.
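
    The n-decomposition mechanism and its constraints are not described here in enough detail to reproduce; the toy genetic algorithm below only illustrates the underlying idea of searching for LSB embedding positions that maximize PSNR (positions whose LSBs already match the message bits need no change at all). Population size, operators, and the absence of duplicate-position repair are simplifications of my own.

```python
import numpy as np

def lsb_embed(cover, bits, positions):
    """Write one message bit into the LSB of each selected pixel."""
    stego = cover.copy().ravel()
    stego[positions] = (stego[positions] & 0xFE) | bits
    return stego.reshape(cover.shape)

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 99.0 if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def ga_search(cover, bits, pop=20, gens=30, rng=np.random.default_rng(0)):
    """Toy GA: each chromosome is a set of embedding positions, fitness is PSNR."""
    n = cover.size
    population = [rng.choice(n, size=bits.size, replace=False) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda p: -psnr(cover, lsb_embed(cover, bits, p)))
        parents = scored[: pop // 2]                               # truncation selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random(bits.size) < 0.5                     # uniform crossover
            child = np.where(mask, parents[a], parents[b])
            if rng.random() < 0.3:                                 # mutation: move one position
                child[rng.integers(bits.size)] = rng.integers(n)
            children.append(child)                                 # duplicates not repaired (toy)
        population = parents + children
    return max(population, key=lambda p: psnr(cover, lsb_embed(cover, bits, p)))

cover = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
bits = np.random.randint(0, 2, 64).astype(np.uint8)
best = ga_search(cover, bits)
print(psnr(cover, lsb_embed(cover, bits, best)))
```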

    Multimodal Biometric Systems for Personal Identification and Authentication using Machine and Deep Learning Classifiers

    Get PDF
    Multimodal biometrics, using machine and deep learning, has recently gained interest over single biometric modalities. This interest stems from the fact that this technique improves recognition and, thus, provides more security. In fact, by combining the abilities of single biometrics, the fusion of two or more biometric modalities creates a robust recognition system that is resistant to the flaws of individual modalities. However, the excellent recognition of multimodal systems depends on multiple factors, such as the fusion scheme, fusion technique, feature extraction techniques, and classification method. In machine learning, existing works generally use different algorithms to extract features from each modality, which makes the system more complex. On the other hand, deep learning, with its ability to extract features automatically, has made recognition more efficient and accurate. Studies deploying deep learning algorithms in multimodal biometric systems have tried to find a good compromise between the false acceptance rate and the false rejection rate (FAR and FRR) when choosing the threshold in the matching step. This manual choice is not optimal and depends on the expertise of the solution designer, hence the need to automate this step. From this perspective, the second part of this thesis details an end-to-end CNN algorithm with an automatic matching mechanism. This thesis conducts two studies on face and iris multimodal biometric recognition. The first study proposes a new feature extraction technique for biometric systems based on machine learning. Iris and facial features are extracted using the Discrete Wavelet Transform (DWT) combined with Singular Value Decomposition (SVD). The relevant characteristics of the two modalities are merged to create a pattern for each individual in the dataset. The experimental results show the robustness of the proposed technique and its efficiency when the same feature extraction technique is used for both modalities. The proposed method outperformed the state of the art and gave an accuracy of 98.90%. The second study proposes a deep learning approach using DenseNet121 and FaceNet for iris and face multimodal recognition, with feature-level fusion and a new automatic matching technique. The proposed automatic matching approach does not use a threshold to trade off performance against FAR and FRR errors. Instead, it uses a trained multilayer perceptron (MLP) model that automatically classifies people into two classes: recognized and unrecognized. This platform ensures an accurate and fully automatic multimodal recognition process. The results obtained by the DenseNet121-FaceNet model with feature-level fusion and automatic matching are very satisfactory: the proposed deep learning models give 99.78% accuracy and 99.56% precision, with an FRR of 0.22% and no FAR errors. The platform solutions proposed and developed in this thesis were tested and validated in two different case studies, the central pharmacy of Al-Asria Eye Clinic in Dubai and the Abu Dhabi Police General Headquarters (Police GHQ). The solution allows fast identification of the persons authorized to access the different rooms. It thus protects the pharmacy against any medication abuse and the red zone in the military area against the unauthorized use of weapons.
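
    A minimal sketch of the first study's feature pipeline, under assumptions of my own (Haar wavelet, top-k singular values as the signature, concatenation for feature-level fusion); the thesis's exact construction may differ.

```python
import numpy as np
import pywt   # PyWavelets

def dwt_svd_features(image, wavelet="haar", k=32):
    """DWT approximation sub-band followed by SVD: keep the top-k singular values."""
    cA, _ = pywt.dwt2(image.astype(float), wavelet)      # low-frequency sub-band
    s = np.linalg.svd(cA, compute_uv=False)              # singular values, descending
    s = s[:k] / (np.linalg.norm(s[:k]) + 1e-12)          # length-normalized signature
    return s

def fuse(face_img, iris_img):
    """Feature-level fusion: concatenate the two modality signatures."""
    return np.concatenate([dwt_svd_features(face_img), dwt_svd_features(iris_img)])

face = np.random.randint(0, 256, (128, 128))
iris = np.random.randint(0, 256, (64, 256))
template = fuse(face, iris)
print(template.shape)                                    # (64,) combined template
```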

    State of the Art of Audio- and Video-Based Solutions for AAL

    Get PDF
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than other wearable sensors, which may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters. Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals. Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. 
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and of how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely lifelogging and self-monitoring, remote monitoring of vital signs, emotional state recognition, food intake monitoring, activity and behaviour recognition, activity and personal assistance, gesture recognition, fall detection and prevention, mobility assessment and frailty recognition, and cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the opportunities offered by the silver economy are reviewed.