64 research outputs found

    Blind colour image watermarking techniques in hybrid domain using least significant bit and slantlet transform

    Get PDF
    Colour image watermarking has attracted considerable interest over the last decade, in tandem with the rapid growth of the internet and its applications. This is due to increased awareness, especially amongst netizens, of the need to protect digital assets from fraudulent activities. Many research efforts have focused on improving the imperceptibility or robustness of both semi-blind and non-blind watermarking in the spatial or transform domain, and the results so far have been encouraging. Nonetheless, the requirements of watermarking applications vary in terms of imperceptibility, robustness and capacity, and comparatively few studies address authenticity and blind watermarking. Hence, this study presents two new blind RGB image watermarking techniques, called Model1 and Model2, in a hybrid domain using Least Significant Bit (LSB) insertion and the Slantlet Transform (SLT). The models share similar pre-processing and LSB insertion stages but differ in their SLT approach. In addition, two interrelated watermarks, known as the main watermark (MW) and the sub-watermark (SW), are utilized. Firstly, the RGB cover image is converted into the YCbCr colour space and split into three components, namely Y, Cb and Cr. Secondly, the Cb component is selected as a cover for MW embedding using LSB substitution, yielding a Cb-watermarked image (CbW). Thirdly, the Cr component is converted into the transform domain using SLT and subsequently decomposed along two paths: three-level sub-bands for Model1 and two-level sub-bands for Model2. For each model, the sub-bands are then used as a cover for sub-watermark embedding to generate a Cr-watermarked image (CrW). Following that, the Y component, CbW and CrW are combined to obtain a YCbCr-watermarked image. Finally, the image is converted back to the RGB colour space to obtain the actual watermarked image (WI). After embedding, the MW and SW are extracted from the WI.
The extraction process mirrors the embedding described above, performed in reverse order. Experimental results on a standard dataset with fifteen well-known attacks revealed, among other things, that Model1 produced high imperceptibility, moderate robustness and good capacity, with a Peak Signal-to-Noise Ratio (PSNR) of up to 65 dB, a Normalized Cross-Correlation (NCC) of about 0.80, and a capacity of 15%. Meanwhile, Model2, as designed, performed positively in all aspects, with NCC reaching 1.00, capacity rising to 25% and PSNR settling at 55 dB, still on the high side. Interestingly, in terms of authenticity, Model2 performed impressively even when the extracted MW had been completely altered. Overall, the models successfully fulfilled all the research objectives and markedly outperformed benchmark watermarking techniques.
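The LSB-substitution stage described above can be illustrated with a minimal sketch, operating on a flattened 8-bit channel (the helper names and the one-bit-per-pixel layout are assumptions for illustration, not the paper's exact procedure):

```python
def embed_lsb(pixels, bits):
    """Substitute watermark bits into the least significant bit of each cover value."""
    if len(bits) > len(pixels):
        raise ValueError("watermark larger than cover channel")
    marked = list(pixels)
    for i, b in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | (b & 1)  # clear the LSB, then set it to the bit
    return marked

def extract_lsb(pixels, n_bits):
    """Blind extraction: read the LSBs back without needing the original cover."""
    return [p & 1 for p in pixels[:n_bits]]
```

Because extraction needs only the watermarked image, the scheme is blind, and imperceptibility follows from each cover value changing by at most one intensity level.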

    An Adaptive Image Encryption Scheme Guided by Fuzzy Models

    Full text link
    A new image encryption scheme using the Advanced Encryption Standard (AES), a chaotic map, a genetic operator, and a fuzzy inference system is proposed in this paper. In this work, plain images were used as input, and the required security level was achieved. Security criteria were computed after running the proposed encryption process. An adaptive fuzzy system then decided whether to repeat the encryption process, terminate it, or run the next stage, based on the achieved results and user demand. The SHA-512 hash function was employed to increase key sensitivity. A security analysis was conducted to evaluate the proposed scheme, showing that it has high security and meets all the criteria of a good and efficient encryption algorithm. Simulation results and comparison with similar works showed that the proposed encryptor has a pseudo-noise output and is strongly dependent on changes to the key and the plain image.
Comment: Iranian Journal of Fuzzy Systems (2023).
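As a rough illustration of two of the ingredients, a SHA-512 digest can seed a chaotic logistic map whose orbit serves as a keystream. This is only a toy sketch under assumed parameters (the function names and the map constant are invented here), not the paper's full AES/fuzzy pipeline:

```python
import hashlib

def logistic_keystream(key: bytes, n: int, r: float = 3.99) -> bytes:
    """Seed the logistic map x' = r*x*(1-x) from a SHA-512 digest and emit n bytes."""
    digest = hashlib.sha512(key).digest()
    x = (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 2)  # initial x strictly in (0, 1)
    out = bytearray()
    for _ in range(n):
        x = r * x * (1 - x)             # chaotic iteration
        out.append(int(x * 256) % 256)  # quantize the orbit to one byte
    return bytes(out)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR the data with the chaotic keystream; applying it twice decrypts."""
    ks = logistic_keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

Flipping a single key bit changes the whole SHA-512 digest, hence the map's initial condition and the entire keystream, which is the key-sensitivity property the abstract attributes to the hash stage.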

    Detection and Mitigation of Steganographic Malware

    Get PDF
    A new attack trend concerns the use of some form of steganography and information hiding to make malware stealthier and able to elude many standard security mechanisms. This Thesis therefore addresses the detection and mitigation of this class of threats. In particular, it considers malware implementing covert communications within network traffic or cloaking malicious payloads within digital images. The first research contribution of this Thesis is in the detection of network covert channels. Unfortunately, the literature on the topic lacks real traffic traces or attack samples with which to perform precise tests or security assessments. Thus, a preliminary research activity was devoted to developing two ad-hoc tools. The first creates covert channels targeting the IPv6 protocol by eavesdropping on flows, whereas the second embeds secret data within arbitrary traffic traces that can be replayed to perform investigations under realistic conditions. The Thesis then presents a security assessment concerning the impact of hidden network communications in production-quality scenarios. Results were obtained by considering channels cloaking data in the most popular protocols (e.g., TLS, IPv4/v6, and ICMPv4/v6) and showed that de-facto standard intrusion detection systems and firewalls (i.e., Snort, Suricata, and Zeek) are unable to spot this class of hazards. Since malware can conceal information (e.g., commands and configuration files) in almost every protocol, traffic feature or network element, configuring or adapting pre-existing security solutions may not be straightforward. Moreover, inspecting multiple protocols, fields or conversations at the same time can lead to performance issues.
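One of the simplest channels in this class, data hidden in inter-packet times, can be sketched as follows (a toy model with invented delay values, not one of the Thesis's tools): a sender modulates bits onto delays and a receiver thresholds them back.

```python
def encode_timing(bits, base=0.05, delta=0.05):
    """Map each bit to an inter-packet delay: 0 -> base seconds, 1 -> base + delta."""
    return [base + delta * b for b in bits]

def decode_timing(delays, base=0.05, delta=0.05):
    """Threshold received delays halfway between the two symbol durations."""
    return [1 if d >= base + delta / 2 else 0 for d in delays]
```

Detecting such a channel amounts to spotting the unnatural bimodal delay distribution, which is why the inter-packet time is one of the statistics the eBPF suite discussed below collects.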
Thus, a major effort was devoted to developing a suite based on the extended Berkeley Packet Filter (eBPF) to gain visibility over different network protocols/components and to efficiently collect various performance indicators and statistics using a single technology. This part of the research made it possible to spot network covert channels targeting the header of the IPv6 protocol or the inter-packet time of generic network conversations. In addition, the eBPF-based approach turned out to be very flexible and also revealed hidden data transfers between two processes co-located on the same host. Another important contribution of this part of the Thesis concerns the deployment of the suite in realistic scenarios and its comparison with other similar tools. Specifically, a thorough performance evaluation demonstrated that eBPF can be used to inspect traffic and reveal the presence of covert communications even under high loads, e.g., it can sustain rates up to 3 Gbit/s with commodity hardware. To further address the problem of revealing network covert channels in realistic environments, this Thesis also investigates malware targeting traffic generated by Internet of Things devices. In this case, an incremental ensemble of autoencoders was considered to cope with the ''unknown'' location of the hidden data generated by a threat covertly exchanging commands with a remote attacker. The second research contribution of this Thesis is in the detection of malicious payloads hidden within digital images. In fact, the majority of real-world malware exploits hiding methods based on Least Significant Bit steganography and some of its variants, such as the Invoke-PSImage mechanism. Therefore, a relevant amount of research has been devoted to detecting the presence of hidden data and classifying the payload (e.g., malicious PowerShell scripts or PHP fragments).
To this aim, mechanisms leveraging Deep Neural Networks (DNNs) proved to be flexible and effective, since they can learn from raw low-level data and can be updated or retrained to consider unseen payloads or images with different features. To take realistic threat models into account, this Thesis studies malware targeting different types of images (i.e., favicons and icons) and various payloads (e.g., URLs and Ethereum addresses, as well as webshells). The results showed that DNNs can be considered a valid tool for spotting hidden contents, since their detection accuracy remains above 90% even when facing ''elusion'' mechanisms such as basic obfuscation techniques or alternative encoding schemes. Lastly, when detection or classification is not possible (e.g., due to resource constraints), approaches enforcing ''sanitization'' can be applied. Thus, this Thesis also considers autoencoders able to disrupt hidden malicious contents without degrading the quality of the image.
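The statistical footprint that LSB steganography leaves, and that detectors exploit, can be felt with the classic chi-square test on pairs of intensity values; this simple check is illustrative only, since the Thesis relies on DNNs rather than this statistic:

```python
def chi_square_lsb(pixels):
    """Chi-square statistic over value pairs (2k, 2k+1) of an 8-bit histogram.

    Embedding random bits in the LSBs equalizes each pair, driving the
    statistic towards 0; clean natural images usually score much higher.
    """
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    chi2, dof = 0.0, 0
    for k in range(128):
        expected = (hist[2 * k] + hist[2 * k + 1]) / 2
        if expected > 0:
            chi2 += (hist[2 * k] - expected) ** 2 / expected
            dof += 1
    return chi2, dof
```

A chi-square value close to 0 relative to the degrees of freedom suggests the LSB plane has been overwritten with near-random data.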

    Classifiers and machine learning techniques for image processing and computer vision

    Get PDF
    Advisor: Siome Klein Goldenstein. Doctoral thesis, Universidade Estadual de Campinas, Instituto da Computação.
Abstract: In this work, we propose the use of classifiers and machine learning techniques to extract useful information from data sets (e.g., images) to solve important problems in Image Processing and Computer Vision. We are particularly interested in: two- and multi-class image categorization, hidden message detection, discrimination between natural and forged images, authentication, and multi-classification. To start with, we present a comparative survey of the state of the art in digital image forensics as well as hidden message detection. Our objective is to show the importance of the existing solutions and discuss their limitations. In this study, we show that most of these techniques strive to solve two common problems in Machine Learning: the feature selection and the classification techniques to be used. Furthermore, we discuss the legal and ethical aspects of image forensics analysis, such as the use of digital images by criminals. We then introduce a technique for image forensics analysis in the context of hidden message detection and image classification in categories such as indoors, outdoors, computer generated, and art works.
From this multi-class classification, some important questions arise: how to solve a multi-class problem so as to combine, for instance, several different features such as color, texture, shape, and silhouette without worrying about the pre-processing and normalization of the combined feature vector? How to take advantage of different classifiers, each one custom-tailored to a specific set of classes in confusion? To cope with most of these problems, we present a feature and classifier fusion technique based on combinations of binary classifiers. We validate our solution with a real application for automatic produce classification. Finally, we address another interesting problem: how to combine powerful binary classifiers in the multi-class scenario more effectively, and how to boost their efficiency? In this context, we present a solution that boosts the efficiency and effectiveness of multi-class classification built from binary techniques. Doctorate, Computer Engineering, Doctor of Computer Science.
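One standard way to build a multi-class decision out of binary base classifiers, in the spirit described above, is pairwise majority voting. The sketch below assumes each binary classifier is supplied as a function that returns the winning class for a sample (the names are illustrative, not the thesis's actual fusion rule):

```python
from itertools import combinations

def one_vs_one_predict(x, classes, binary_predict):
    """Combine pairwise binary classifiers into a multi-class decision by majority vote."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        winner = binary_predict(a, b, x)  # the (a, b) base classifier returns a or b
        votes[winner] += 1
    return max(votes, key=votes.get)
```

Because each base classifier sees only two classes, it can use its own best-suited features and tuning, which is exactly the freedom the fusion approach above seeks to exploit.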

    Data Hiding in Digital Video

    Get PDF
    With the rapid development of digital multimedia technologies, an old method called steganography has been revisited as a solution for data hiding applications such as digital watermarking and covert communication. Steganography is the art of secret communication using a cover signal, e.g., video, audio, or image, whereas the counter-technique of detecting the existence of such a channel through a statistically trained classifier is called steganalysis. State-of-the-art data hiding algorithms utilize features of the cover signal, such as Discrete Cosine Transform (DCT) coefficients, pixel values, and motion vectors, to convey the message to the receiver side. The goal of the embedding algorithm is to maximize the number of bits sent to the decoder side (embedding capacity) with maximum robustness against attacks while keeping the perceptual and statistical distortions (security) low. Data hiding schemes are characterized by these three conflicting requirements: security against steganalysis, robustness against channel-associated and/or intentional distortions, and capacity in terms of the embedded payload. Depending upon the application, it is the designer's task to find an optimum trade-off among them. The goal of this thesis is to develop a novel data hiding scheme that establishes a covert channel satisfying statistical and perceptual invisibility with moderate-rate capacity and robustness against steganalysis-based detection. The idea behind the proposed method is the alteration of Video Object (VO) trajectory coordinates to convey the message to the receiver side by perturbing the centroid coordinates of the VO. Firstly, the VO is selected by the user and tracked through the frames by using a simple region-based search strategy and morphological operations.
After the trajectory coordinates are obtained, the perturbation of the coordinates is implemented through a non-linear embedding function, such as a polar quantizer, where both the magnitude and phase of the motion are used. However, the perturbations made to the motion magnitude and phase are kept small to preserve the semantic meaning of the object's motion trajectory. The proposed method is well suited to video sequences in which VOs have smooth motion trajectories. Examples of this type can be found in sports videos, in which the ball is the focus of attention and exhibits various motion types, e.g., rolling on the ground, flying in the air, or being possessed by a player. Different sports video sequences were tested using the proposed method. The experimental results show that the proposed method achieves both statistical and perceptual invisibility with moderate-rate embedding capacity under an AWGN channel with varying noise variances. This achievement is important, as the first step for both active and passive steganalysis is the detection of the existence of a covert channel. This work makes multiple contributions to the field of data hiding. Firstly, it is the first example of a data hiding method in which the trajectory of a VO is used. Secondly, it contributes towards improving steganographic security by providing new features: the coordinate location and semantic meaning of the object.
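A hypothetical sketch of quantization-based embedding on a motion vector's polar representation is given below; the step size and the even/odd parity convention are assumptions for illustration, and the thesis's actual polar quantizer may differ:

```python
import math

def embed_bit_polar(dx, dy, bit, step=0.5):
    """QIM-style embedding: snap the motion magnitude to an even/odd quantizer cell."""
    mag, ang = math.hypot(dx, dy), math.atan2(dy, dx)
    q = round(mag / step)
    if q % 2 != bit:
        q += 1                      # move to a neighbouring cell with the right parity
    return q * step * math.cos(ang), q * step * math.sin(ang)

def extract_bit_polar(dx, dy, step=0.5):
    """Blind extraction: the parity of the quantized magnitude is the bit."""
    return round(math.hypot(dx, dy) / step) % 2
```

The phase is left untouched and the magnitude moves by at most one quantizer step, which mirrors the requirement above that perturbations stay small enough to preserve the trajectory's semantic meaning.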

    Digital Multispectral Map Reconstruction Using Aerial Imagery

    Get PDF
    Advances made in the computer vision field have enabled faster and more accurate photogrammetry techniques. Structure from Motion (SfM) is a photogrammetric technique focused on the digital spatial reconstruction of objects based on a sequence of images. Unmanned Aerial Vehicle (UAV) platforms make it possible to acquire high-fidelity imagery for environmental mapping, and as a result they have become a widely adopted survey method. The combination of SfM and recent improvements in UAV platforms grants greater flexibility and applicability, opening the path for a new remote sensing technique aimed at replacing more traditional and laborious approaches often associated with high monetary costs. The continued development of digital reconstruction software and advances in computer processing allow for a more affordable and higher-resolution solution compared to traditional methods. The present work proposes a digital reconstruction algorithm based on images taken by a UAV platform, inspired by the work made available by the open-source project OpenDroneMap. The aerial images are fed into the computer vision program and several operations are applied to them, including detection and matching of features, point cloud reconstruction, meshing, and texturing, resulting in a final product that represents the surveyed site. Additionally, the study concluded that an implementation addressing the processing of thermal images was not integrated in the works of OpenDroneMap. Their work was therefore altered to allow the reconstruction of thermal maps without sacrificing the resolution of the final model.
Standard methods to process thermal images require a larger image footprint (the area of ground captured in a frame), because these types of images lack invariant features, and increasing the image's footprint raises the number of features present in each frame. However, this method of image capture lowers the resolution of the final product. The algorithm was developed using open-source libraries. To validate the obtained results, the model was compared to data obtained from commercial products such as Pix4D. Furthermore, due to circumstances brought about by the current pandemic, it was not possible to conduct a field study for the comparison and assessment of our results; the validation of the models was therefore performed by verifying that the geographic location of the model was correct and by visually assessing the generated maps.
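The footprint trade-off discussed above can be made concrete with simple nadir-camera geometry: flying higher widens the ground strip each frame covers (more features per frame) but spreads the same pixel count over more ground (lower resolution). The helper names and parameters below are illustrative, not taken from the work:

```python
import math

def ground_footprint_m(altitude_m, fov_deg):
    """Ground width captured in one frame by a nadir-pointing camera."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def ground_sample_distance_m(altitude_m, fov_deg, sensor_px):
    """Metres of ground per image pixel: footprint divided by sensor width in pixels."""
    return ground_footprint_m(altitude_m, fov_deg) / sensor_px
```

For a hypothetical 90-degree field of view at 100 m altitude, the footprint is about 200 m; doubling the altitude doubles the footprint and the per-pixel ground distance alike.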

    Entropy in Image Analysis II

    Get PDF
    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for those applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
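For readers new to the theme, the quantity in the title is typically the Shannon entropy of an image's intensity histogram, a basic measure of how much information the pixel distribution carries; a minimal sketch (the function name is ours):

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy, in bits per pixel, of an intensity histogram."""
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())
```

A constant image scores 0 bits per pixel, while a well-encrypted image should approach the 8-bit maximum of a uniform intensity distribution.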

    Browse-to-search

    Full text link
    This demonstration presents a novel interactive online shopping application based on visual search technologies. When users want to buy something on a shopping site, they usually need to look up related information from other web sites, so they must switch between the page being browsed and other websites that provide search results. The proposed application enables users to naturally search for products of interest while they browse a web page, satisfying even casual purchase intent easily. The interactive shopping experience is characterized by: 1) in session - it allows users to specify their purchase intent within the browsing session, instead of leaving the current page and navigating to other websites; 2) in context - the browsed web page provides implicit context information which helps infer user purchase preferences; 3) in focus - users easily specify their search interest using gestures on touch devices and do not need to formulate queries in a search box; 4) natural-gesture inputs and visual-based search provide users with a natural shopping experience. The system is evaluated against a data set consisting of several million commercial product images. © 2012 Authors