
    Deep Video Compression


    Multimodal Adversarial Learning

    Deep Convolutional Neural Networks (DCNNs) have proven to be an exceptional tool for object recognition, generative modelling, and multi-modal learning in various computer vision applications. However, recent findings have shown that such state-of-the-art models can be easily deceived by inserting slight, imperceptible perturbations at key pixels in the input. A good target detection system can accurately identify targets by localizing their coordinates on the input image of interest, ideally by labeling each pixel as either background or a potential target pixel. However, prior research confirms that such state-of-the-art target detection models are also susceptible to adversarial attacks. In the case of generative models, facial sketches drawn by artists, widely used by law enforcement agencies, depend on the artist's ability to faithfully replicate the key facial features that capture the true identity of a subject. Recent works have attempted to synthesize these sketches into plausible visual images to improve visual recognition and identification. However, synthesizing photo-realistic images from sketches is an even more challenging task, especially for sensitive applications such as suspect identification. The incorporation of hybrid discriminators, which perform attribute classification over multiple target attributes, together with a quality-guided encoder that minimizes the perceptual dissimilarity between the latent-space embeddings of the synthesized and real images at different layers of the network, has proven to be a powerful tool for better multi-modal learning. Overall, our approach aims to improve target detection systems and the visual appeal of synthesized images while incorporating multiple attribute assignments into the generator without compromising the identity of the synthesized image. We synthesized sketches using the XDOG filter for the CelebA, Multi-modal, and CelebA-HQ datasets, and from an auxiliary generator trained on sketches from the CUHK, IIT-D, and FERET datasets. Overall, our results across different model applications compare favorably with the current state of the art.
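
    As a rough illustration of the perturbation mechanism described above, the sketch below applies a fast-gradient-sign (FGSM) step to a classifier input. This is a generic, minimal attack sketch in PyTorch, not necessarily the attack studied in this work; `model`, `image`, and `label` are hypothetical placeholders.

```python
# Minimal FGSM sketch (hypothetical names): a slight, sign-of-gradient
# perturbation that can flip a model's prediction while remaining
# visually imperceptible.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """image: (N, C, H, W) tensor in [0, 1]; returns a perturbed copy."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```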

    Deep learning based objective quality assessment of multidimensional visual content

    Doctoral thesis, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2022. Funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).

    In the last decade, there has been a tremendous increase in the popularity of multimedia applications, and hence in multimedia content. When these contents are generated, transmitted, reconstructed, and shared, their original pixel values are transformed. In this scenario, it becomes crucial and demanding to assess the visual quality of the affected content so that the requirements of end users are satisfied. In this work, we investigate effective spatial, temporal, and angular features by developing no-reference algorithms that assess the visual quality of distorted multi-dimensional visual content, using machine learning and deep learning algorithms to obtain prediction accuracy. For two-dimensional (2D) image quality assessment, we use multiscale local binary patterns and saliency information, and train/test these features using a Random Forest Regressor. For 2D video quality assessment, we introduce a novel concept of spatial and temporal saliency and custom objective quality scores, and use a lightweight Convolutional Neural Network (CNN) model for training and testing on selected patches of video frames.

    For objective quality assessment of four-dimensional (4D) light field images (LFIs), we propose seven LFI quality assessment (LF-IQA) methods in total. Considering that an LFI is composed of dense multi-views, and inspired by the Human Visual System (HVS), our first LF-IQA method is based on a two-stream CNN architecture. The second and third LF-IQA methods are also based on a two-stream architecture, which incorporates CNNs, Long Short-Term Memory (LSTM), and diverse bottleneck features. The fourth LF-IQA method is based on CNN and Atrous Convolution Layers (ACL), while the fifth uses CNN, ACL, and LSTM layers. The sixth LF-IQA method is also based on a two-stream architecture, in which horizontal and vertical EPIs are processed in the frequency domain. Last but not least, the seventh LF-IQA method is based on a Graph Convolutional Neural Network. For all of the methods mentioned above, we performed intensive experiments, and the results show that they outperform state-of-the-art methods on popular quality datasets.
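
    To make the two-stream idea concrete, here is a minimal sketch of a two-stream CNN quality regressor in PyTorch. The layer widths, the choice of inputs (a sub-aperture view and an epipolar-plane image), and the fusion head are illustrative assumptions, not the architectures proposed in the thesis.

```python
# Illustrative two-stream no-reference quality model: each stream encodes
# one input modality; pooled features are fused and regressed to a score.
import torch
import torch.nn as nn

def conv_stream(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (N, 64)
    )

class TwoStreamIQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.spatial = conv_stream(3)  # e.g. a sub-aperture view
        self.angular = conv_stream(3)  # e.g. an epipolar-plane image (EPI)
        self.head = nn.Linear(128, 1)  # fused features -> quality score

    def forward(self, view, epi):
        fused = torch.cat([self.spatial(view), self.angular(epi)], dim=1)
        return self.head(fused)
```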

    Super-resolution towards license plate recognition

    Advisor: David Menotti. Master's thesis, Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 24/04/2023. Includes references: p. 51-59. Area of concentration: Computer Science.

    Abstract: Recent years have seen significant developments in the field of License Plate Recognition (LPR) through the integration of deep learning techniques and the increasing availability of training data. Nevertheless, reconstructing license plates (LPs) from low-resolution (LR) surveillance footage remains challenging. To address this issue, we introduce a Single-Image Super-Resolution (SISR) approach that integrates attention and transformer modules to enhance the detection of structural and textural features in LR images. Our approach incorporates sub-pixel convolution layers (also known as PixelShuffle) and a loss function that uses an Optical Character Recognition (OCR) model for feature extraction. We trained the proposed architecture on synthetic images created by applying heavy Gaussian noise to high-resolution LP images from two public datasets, followed by bicubic downsampling; as a result, the generated images have a Structural Similarity Index Measure (SSIM) of less than 0.10. Our experimental results show that the proposed approach to reconstructing these low-resolution synthetic images outperforms existing methods in both quantitative and qualitative measures.
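
    The training-pair degradation described above (heavy Gaussian noise followed by bicubic downsampling) and a sub-pixel (PixelShuffle) upsampling block can be sketched as follows in PyTorch; the noise level, scale factor, and layer sizes are assumptions, not the settings used in the thesis.

```python
# Sketch of the synthetic degradation and the sub-pixel upsampling block.
import torch
import torch.nn as nn
import torch.nn.functional as F

def degrade(hr, sigma=0.2, scale=4):
    """hr: (N, C, H, W) tensor in [0, 1]; returns a noisy low-res copy."""
    noisy = (hr + sigma * torch.randn_like(hr)).clamp(0.0, 1.0)
    return F.interpolate(noisy, scale_factor=1.0 / scale,
                         mode="bicubic", align_corners=False)

def upsample_block(channels, scale=2):
    # The conv expands channels by scale^2; PixelShuffle rearranges those
    # channels into a spatially upscaled feature map.
    return nn.Sequential(
        nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
        nn.PixelShuffle(scale),
        nn.ReLU(),
    )
```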

    Context-aware Facial Inpainting with GANs

    Facial inpainting is a difficult problem due to the complex structural patterns of a face image. Using irregular hole masks to generate contextualised features in a face image is becoming increasingly important in image inpainting. Existing methods generate images using deep learning models, but aberrations persist. The reason is that key operations required for feature-information dissemination, such as feature extraction mechanisms, feature propagation, and feature regularizers, are frequently overlooked or ignored at the design stage. A comprehensive review is conducted to examine existing methods and identify the research gaps that serve as the foundation for this thesis. The aim of this thesis is to develop novel facial inpainting algorithms capable of extracting contextualised features. First, a Symmetric Skip-Connection Wasserstein GAN (SWGAN) is proposed to inpaint high-resolution face images that are perceptually consistent with the rest of the image. Second, a perceptual adversarial network (RMNet) is proposed that includes feature extraction and feature propagation mechanisms targeting missing regions while preserving visible ones. Third, a foreground-guided facial inpainting method with occlusion-reasoning capability is proposed, which guides the model toward learning contextualised feature extraction and propagation while maintaining fidelity. Fourth, V-LinkNet is proposed, which takes into account the critical operations for information dissemination. Additionally, a standard protocol is introduced to prevent potential biases in the performance evaluation of facial inpainting algorithms. The experimental results show that V-LinkNet achieved the best results, with an SSIM of 0.96 under the standard protocol. In conclusion, generating facial images with contextualised features is important for achieving realistic results in inpainted regions, and it is critical to follow the standard protocol when comparing different approaches. Finally, this thesis outlines new insights and future directions for image inpainting.
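
    As a minimal sketch of the setup implied above, the snippet below applies an irregular hole mask to a face image and scores a reconstruction with SSIM (the metric reported for the standard protocol). It uses scikit-image (>= 0.19 for `channel_axis`); the helper names and the white fill value are illustrative assumptions.

```python
# Mask a face image and evaluate an inpainted result with SSIM.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def apply_mask(image, mask):
    """image: (H, W, 3) float in [0, 1]; mask: (H, W), 1 marks a hole."""
    holed = image.copy()
    holed[mask.astype(bool)] = 1.0  # fill holes with white (assumed)
    return holed

def evaluate(original, inpainted):
    # channel_axis=2 tells scikit-image the layout is (H, W, C).
    return ssim(original, inpainted, channel_axis=2, data_range=1.0)
```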