4 research outputs found

    Image steganography based on color palette transformation in color space

    Get PDF
    In this paper, we present a novel image steganography method based on color palette transformation in color space. Most existing image steganography methods modify individual image pixels, which introduces random noise into the image. By instead transforming the image's color palette (all pixels of the same color are changed to the same new color), our method achieves better perceptual quality. A comparison of stego-image quality metrics with other image steganography methods shows that the new method is among the best in terms of Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) values. Its capacity is average among the compared methods, but it is the largest among the methods with better SSIM and PSNR values. The color and pixel capacity can be increased by using standard or adaptive color palette images with smoothing, although this increases the likelihood that the embedding will be detected. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The research had no specific funding and was implemented as a master's thesis at Šiauliai University with a supervisor from Vilnius Gediminas Technical University.
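The key idea of the abstract, that all pixels sharing one palette entry are recolored uniformly so no per-pixel noise appears, can be sketched as a toy embedding. This is a hypothetical simplification (one message bit per palette color, stored in the blue channel's least significant bit), not the paper's actual color-space transformation:

```python
def embed_bits_in_palette(palette, bits):
    """Embed one bit per palette color by setting the blue channel's LSB.

    palette: list of (r, g, b) tuples; bits: sequence of 0/1.
    Every pixel that references a given palette entry changes by the
    same amount, so the modification is uniform rather than noisy.
    """
    if len(bits) > len(palette):
        raise ValueError("message longer than palette capacity")
    stego = list(palette)
    for i, bit in enumerate(bits):
        r, g, b = stego[i]
        stego[i] = (r, g, (b & ~1) | bit)  # force blue LSB to the message bit
    return stego


def extract_bits_from_palette(palette, n_bits):
    """Recover the embedded bits from the blue-channel LSBs."""
    return [palette[i][2] & 1 for i in range(n_bits)]
```

Capacity in this toy version is one bit per distinct palette color, which matches the abstract's point that capacity grows with the number of palette entries while detectability also grows.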

    Data Hiding and Its Applications

    Get PDF
    Data hiding techniques have been widely used to provide copyright protection, data integrity, covert communication, non-repudiation, and authentication, among other applications. In the context of the increased dissemination and distribution of multimedia content over the internet, data hiding methods, such as digital watermarking and steganography, are becoming increasingly relevant in providing multimedia security. The goal of this book is to focus on the improvement of data hiding algorithms and their different applications (both traditional and emerging), bringing together researchers and practitioners from different research fields, including data hiding, signal processing, cryptography, and information theory, among others.

    Robust density modelling using the student's t-distribution for human action recognition

    Full text link
    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as a base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show, through experiments on two well-known datasets (Weizmann, MuHAVi), that it yields a remarkable improvement in classification accuracy. © 2011 IEEE
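The robustness claim can be illustrated numerically: a heavy-tailed Student's t-distribution penalises an outlier far less than a Gaussian, so a single bad feature vector dominates the likelihood less. A minimal sketch using the standard univariate densities (ν = 3 is an arbitrary choice for illustration, not a value from the paper):

```python
import math


def gauss_logpdf(x, mu=0.0, sigma=1.0):
    """Log-density of the Gaussian N(mu, sigma^2)."""
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma * math.sqrt(2.0 * math.pi))


def student_t_logpdf(x, nu=3.0, mu=0.0, sigma=1.0):
    """Log-density of a location-scale Student's t with nu degrees of freedom."""
    z = (x - mu) / sigma
    return (math.lgamma((nu + 1.0) / 2.0) - math.lgamma(nu / 2.0)
            - 0.5 * math.log(nu * math.pi) - math.log(sigma)
            - (nu + 1.0) / 2.0 * math.log1p(z * z / nu))


# Near the mode the two log-densities are close, but at an outlier
# (x = 8 standard deviations away) the Gaussian log-likelihood drops
# by tens of nats while the t-distribution's drops far less.
```

In an HMM, observation log-likelihoods of this kind are summed along the state sequence, so bounding the penalty of each outlier directly limits its influence on decoding and training.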

    The use of the Kullback-Leibler Divergence and the Generalized Divergence as similarity measures in CBIR systems

    Get PDF
    Content-based image retrieval is important for various purposes, such as disease diagnosis from computed tomography scans. The social and economic relevance of image retrieval systems has created the need for their improvement. In this context, content-based image retrieval systems consist of two stages: feature extraction and similarity measurement. The similarity stage remains a challenge due to the wide variety of similarity functions, which can be combined with the different techniques present in the retrieval process and do not always return the most satisfactory results. The functions most commonly used to measure similarity are the Euclidean and Cosine functions, but some researchers have noted limitations of these conventional proximity functions in the similarity-search step. For that reason, the Bregman divergences (Kullback-Leibler and I-Generalized) have attracted the attention of researchers due to their flexibility in similarity analysis. The aim of this research was therefore to conduct a comparative study of the Bregman divergences against the Euclidean and Cosine functions in the similarity step of content-based image retrieval, examining the advantages and disadvantages of each function. To this end, a content-based image retrieval system was built with two stages, offline and online, using the BSM, FISM, BoVW and BoVW-SPM approaches. With this system, three groups of experiments were run on the Caltech101, Oxford and UK-bench databases. The performance of the content-based image retrieval system with the different similarity functions was evaluated through the measures Mean Average Precision, normalized Discounted Cumulative Gain, precision at k, and precision x recall.
Finally, this study shows that the use of the Bregman divergences (Kullback-Leibler and Generalized) obtains better results than the Euclidean and Cosine measures, with significant gains for content-based image retrieval. The research was funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior and carried out as a master's dissertation.
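The similarity functions compared in the dissertation can be sketched for histogram-style feature vectors. This is an illustrative implementation of the Kullback-Leibler divergence, the generalized I-divergence, and the Euclidean baseline; the small epsilon for numerical safety is an assumption of this sketch, not a detail from the abstract:

```python
import math


def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two histograms.

    Both inputs are normalised to probability distributions first.
    """
    sp, sq = float(sum(p)), float(sum(q))
    return sum((pi / sp) * math.log((pi / sp + eps) / (qi / sq + eps))
               for pi, qi in zip(p, q) if pi > 0)


def i_divergence(p, q, eps=1e-12):
    """Generalized I-divergence: a Bregman divergence that extends KL
    to unnormalised non-negative vectors."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) - pi + qi
               for pi, qi in zip(p, q))


def euclidean(p, q):
    """Euclidean distance, the conventional baseline."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
```

Unlike the Euclidean distance, both Bregman divergences are asymmetric (D(p || q) ≠ D(q || p) in general), so a CBIR system must decide whether the query or the database image takes the role of p when ranking results.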