
    Optimized Visual Internet of Things in Video Processing for Video Streaming

    The global expansion of the Visual Internet of Things (VIoT) has enabled various new applications during the last decade through the interconnection of a wide range of devices and sensors. Frame freezing and buffering are the major artefacts in the broad area of multimedia networking applications, occurring due to significant packet loss and network congestion. Numerous studies have been carried out to understand the impact of packet loss on QoE for a wide range of applications. This paper improves video streaming quality using the proposed Lossy Video Transmission (LVT) framework to simulate the effect of network congestion on the performance of encrypted static images sent over wireless sensor networks. The simulations are intended for analysing video quality and determining packet drop resilience during video conversations. Emerging trends in quality measurement, including picture preference, visual attention, and audio-visual quality, are also assessed. To appropriately quantify the video quality loss caused by the encoding system, various encoders compress video sequences at various data rates. Simulation results for different QoE metrics on user-developed videos are presented and shown to outperform the existing metrics.
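    The LVT framework itself is not reproduced here. The following minimal Python sketch only illustrates the kind of experiment the abstract describes: dropping a fraction of fixed-size packets from a frame and measuring the resulting degradation with PSNR, one objective proxy for QoE. The packet size, loss rate, and synthetic frame are illustrative assumptions.

```python
import numpy as np

# Generic packet-loss simulation sketch (not the paper's LVT framework):
# drop a fraction of fixed-size packets from a frame's byte stream and
# measure the quality impact with PSNR.

rng = np.random.default_rng(1)

def simulate_packet_loss(frame: np.ndarray, loss_rate: float, packet_bytes: int = 256) -> np.ndarray:
    data = frame.astype(np.uint8).flatten()
    n_packets = int(np.ceil(data.size / packet_bytes))
    lost = rng.random(n_packets) < loss_rate
    for p in np.nonzero(lost)[0]:
        data[p * packet_bytes:(p + 1) * packet_bytes] = 0   # lost payload rendered as black
    return data.reshape(frame.shape)

def psnr(ref: np.ndarray, test: np.ndarray) -> float:
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

frame = rng.integers(0, 256, (240, 320), dtype=np.uint8)
degraded = simulate_packet_loss(frame, loss_rate=0.05)
print(f"PSNR after 5% packet loss: {psnr(frame, degraded):.2f} dB")
```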

    Recovering Sign Bits of DCT Coefficients in Digital Images as an Optimization Problem

    Recovering unknown, missing, damaged, distorted or lost information in DCT coefficients is a common task in multiple applications of digital image processing, including image compression, selective image encryption, and image communications. This paper investigates the recovery of a special type of information in DCT coefficients of digital images: sign bits. This problem can be modelled as a mixed integer linear programming (MILP) problem, which is NP-hard in general. To solve the problem efficiently, we propose two approximation methods: 1) a relaxation-based method that converts the MILP problem to a linear programming (LP) problem; 2) a divide-and-conquer method that splits the target image into sufficiently small regions, each of which can be solved more efficiently as an MILP problem, and then conducts a global optimization phase, as a smaller MILP problem or an LP problem, to maximize smoothness across different regions. To the best of our knowledge, we are the first to consider how to use global optimization to recover sign bits of DCT coefficients. We considered how the proposed methods can be applied to JPEG-encoded images and conducted extensive experiments to validate their performance. The experimental results showed that the proposed methods worked well, especially when the number of unknown sign bits per DCT block is not too large. Compared with other existing methods, which are all based on simple error-concealment strategies, our proposed methods outperformed them by a substantial margin, according to both objective quality metrics (PSNR and SSIM) and our subjective evaluation. Our work has a number of profound implications: for example, more sign bits can be discarded to develop more efficient image compression methods, and image encryption methods based on sign bit encryption can be less secure than previously understood.
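    The paper's MILP and LP formulations are not reproduced here. The sketch below is only a rough, hypothetical illustration of the underlying idea, namely choosing unknown signs so that the reconstructed pixels are as smooth as possible: it brute-forces a handful of unknown sign bits in a single 8x8 block using SciPy's DCT routines. The block size, smoothness cost, and synthetic test block are assumptions, and brute force stands in for the paper's optimization machinery.

```python
import itertools
import numpy as np
from scipy.fft import dctn, idctn

def smoothness_cost(block):
    """Total variation of a pixel block; lower means smoother."""
    return np.abs(np.diff(block, axis=0)).sum() + np.abs(np.diff(block, axis=1)).sum()

def recover_signs(coeffs, unknown_idx):
    """Brute-force the signs of the coefficients listed in unknown_idx."""
    best_cost, best_signs = np.inf, None
    for signs in itertools.product([-1.0, 1.0], repeat=len(unknown_idx)):
        candidate = coeffs.copy()
        for (i, j), s in zip(unknown_idx, signs):
            candidate[i, j] = s * abs(candidate[i, j])
        cost = smoothness_cost(idctn(candidate, norm="ortho"))
        if cost < best_cost:
            best_cost, best_signs = cost, signs
    return best_signs

# Example: build a smooth synthetic block and hide the signs of three coefficients.
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 128 + 10 * np.sin(x / 3.0) + 5 * np.cos(y / 4.0)
coeffs = dctn(block, norm="ortho")
unknown = [(0, 1), (1, 0), (1, 1)]
print("recovered signs:", recover_signs(coeffs, unknown))
print("true signs:     ", tuple(np.sign(coeffs[i, j]) for i, j in unknown))
```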

    Recent Advances in Steganography

    Steganography is the art and science of communicating in a way that hides the existence of the communication. Steganographic technologies are an important part of the future of security and privacy on open systems such as the Internet. This book focuses on a relatively new field of study in steganography and takes a look at this technology by introducing readers to various concepts of steganography and steganalysis. The book includes a brief history of steganography and surveys steganalysis methods with respect to their modelling techniques. Some new steganography techniques for hiding secret data in images are presented. Furthermore, steganography in speech is reviewed, and a new approach for hiding data in speech is introduced.
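    None of the book's specific techniques are reproduced here; the short Python sketch below only illustrates the most elementary form of image data hiding, least-significant-bit (LSB) embedding, assuming an 8-bit grayscale cover image. Function names and parameters are illustrative.

```python
import numpy as np

# Minimal LSB steganography sketch: overwrite the least-significant bits of the
# first pixels with the message bits, then read them back.

def embed(cover: np.ndarray, message: bytes) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits   # replace LSBs with message bits
    return flat.reshape(cover.shape)

def extract(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed(cover, b"hello")
assert extract(stego, 5) == b"hello"
```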

    A framework for parallel processing of video upscaling algorithms

    Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2013. The magnification of visual signals consists of changing the size of an image or a video to larger spatial dimensions using digital signal processing techniques. Usually, this magnification is performed with interpolation methods. However, these interpolation methods tend to produce distortions in the enlarged images. Such distortions occur because the interpolated image is reconstructed using only the samples of the original, smaller image, which are insufficient for exact signal reconstruction, generating aliasing effects. Interpolation techniques only estimate the non-sampled coefficients of the signal, which often produces unsatisfactory results, so many applications need other techniques to reconstruct the non-sampled coefficients more accurately. To improve the approximation of an estimated image with respect to the original, super-resolution techniques are used to reconstruct the non-sampled coefficients. These techniques generally enhance the enlarged image using information from other low- or high-resolution images to estimate the missing information. Super-resolution is a computationally intensive process, and the complexity of the algorithms is generally exponential in time as a function of the block size or magnification factor. Therefore, when these techniques are applied to video, the super-resolution algorithm must be extremely fast. However, the most computationally efficient algorithms are not always those that produce the best visual results. This work therefore proposes a framework to improve the performance of various super-resolution algorithms through selective and parallel processing strategies. The dissertation examines the properties of the results produced by super-resolution algorithms and of those produced by interpolation techniques. From these properties, a criterion is derived to classify the regions in which the results are visually equivalent regardless of the enlargement method. In these regions of equivalence an interpolation algorithm is used, which is much faster than the computationally complex super-resolution algorithms; in the remaining regions, super-resolution is applied. This reduces the processing time without degrading the visual quality of the enlarged video. Beyond this approach, the work also proposes a strategy for dividing the data among different tasks so that the upscaling operation is performed in parallel. An interesting result of the proposed model is that it decouples the load-distribution abstraction from the upscaling function: different super-resolution methods can exploit the framework's resources without modifying their algorithms to obtain parallelism. This makes the framework portable, scalable, and reusable by different super-resolution methods.
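    The dissertation's actual framework and equivalence criterion are not reproduced here. The Python sketch below is a hypothetical illustration of the two ideas the abstract describes: tiles are classified by a simple variance threshold (an assumed stand-in for the dissertation's criterion), "equivalent" tiles are upscaled with cheap interpolation while the remaining tiles go to a placeholder super-resolution routine, and all tiles are processed in parallel across worker processes. The tile size, threshold, and fake_super_resolution stand-in are all assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.ndimage import zoom

TILE, FACTOR, THRESHOLD = 16, 2, 25.0

def fake_super_resolution(tile: np.ndarray) -> np.ndarray:
    # Placeholder for an expensive SR method; here it simply uses cubic interpolation.
    return zoom(tile, FACTOR, order=3)

def upscale_tile(tile: np.ndarray) -> np.ndarray:
    if tile.var() < THRESHOLD:                 # "equivalent" region: cheap path
        return zoom(tile, FACTOR, order=1)     # bilinear interpolation
    return fake_super_resolution(tile)         # detailed region: expensive path

def upscale_frame(frame: np.ndarray) -> np.ndarray:
    tiles = [frame[r:r + TILE, c:c + TILE]
             for r in range(0, frame.shape[0], TILE)
             for c in range(0, frame.shape[1], TILE)]
    with ProcessPoolExecutor() as pool:        # data-parallel over tiles
        out_tiles = list(pool.map(upscale_tile, tiles))
    rows, cols = frame.shape[0] // TILE, frame.shape[1] // TILE
    return np.block([[out_tiles[r * cols + c] for c in range(cols)]
                     for r in range(rows)])

if __name__ == "__main__":
    frame = np.random.rand(64, 64) * 255
    print(upscale_frame(frame).shape)          # (128, 128)
```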

    Resource-Constrained Low-Complexity Video Coding for Wireless Transmission


    Roadmap on optical security

    Information security and authentication are important challenges facing society. Recent attacks by hackers on the databases of large commercial and financial companies have demonstrated that more research and development of advanced approaches are necessary to deny unauthorized access to critical data. Free space optical technology has been investigated by many researchers in information security, encryption, and authentication. The main motivation for using optics and photonics for information security is that optical waveforms possess many complex degrees of freedom such as amplitude, phase, polarization, large bandwidth, nonlinear transformations, quantum properties of photons, and multiplexing that can be combined in many ways to make information encryption more secure and more difficult to attack. This roadmap article presents an overview of the potential, recent advances, and challenges of optical security and encryption using free space optics. The roadmap on optical security comprises six categories that together include 16 short sections written by authors who have made relevant contributions in this field. The first category of this roadmap describes novel encryption approaches, including secure optical sensing, which summarizes double random phase encryption applications and flaws [Yamaguchi]; digital holographic encryption in free space, which describes encryption using multidimensional digital holography [Nomura]; simultaneous encryption of multiple signals [Pérez-Cabré]; asymmetric methods based on information truncation [Nishchal]; and dynamic encryption of video sequences [Torroba]. Asymmetric and one-way cryptosystems are analyzed by Peng. The second category is on compression for encryption. In their respective contributions, Alfalou and Stern propose similar goals involving compressed data and compressive sensing encryption. The very important area of cryptanalysis is the topic of the third category, with two sections: Sheridan reviews phase retrieval algorithms to perform different attacks, whereas Situ discusses nonlinear optical encryption techniques and the development of a rigorous optical information security theory. The fourth category, with two contributions, reports how encryption could be implemented at the nano- or micro-scale: Naruse discusses the use of nanostructures in security applications and Carnicer proposes encoding information in a tightly focused beam. In the fifth category, encryption based on ghost imaging using single-pixel detectors is considered; in particular, the authors [Chen, Tajahuerce] emphasize the need for more specialized hardware and image processing algorithms. Finally, in the sixth category, Mosk and Javidi analyze in their corresponding papers how quantum imaging can benefit optical encryption systems: sources that use few photons make encryption systems much more difficult to attack, providing a secure method for authentication.
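    As a purely numerical aside, the sketch below simulates classical double random phase encoding (DRPE), the technique summarized in the [Yamaguchi] section, using discrete Fourier transforms. It illustrates only the encode/decode principle and is not any contributor's optical implementation; the image size and random masks are arbitrary assumptions.

```python
import numpy as np

# Double random phase encoding: one random phase mask in the input plane,
# one in the Fourier plane; decryption applies the conjugate masks.

rng = np.random.default_rng(0)

def drpe_encrypt(img, phase1, phase2):
    return np.fft.ifft2(np.fft.fft2(img * np.exp(2j * np.pi * phase1))
                        * np.exp(2j * np.pi * phase2))

def drpe_decrypt(cipher, phase1, phase2):
    field = np.fft.ifft2(np.fft.fft2(cipher) * np.exp(-2j * np.pi * phase2))
    return np.abs(field * np.exp(-2j * np.pi * phase1))

img = rng.random((64, 64))                           # stand-in for an input image
p1, p2 = rng.random((64, 64)), rng.random((64, 64))  # the two secret phase masks
cipher = drpe_encrypt(img, p1, p2)
assert np.allclose(drpe_decrypt(cipher, p1, p2), img)
```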

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    This Special Issue is a book that collects peer-reviewed research on various advanced technologies related to the applications and theories of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signals include image, video, audio, character recognition, and the optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues who are interested in these topics will find it worth reading.