PEA265: Perceptual Assessment of Video Compression Artifacts
The most widely used video encoders share a common hybrid coding framework
that includes block-based motion estimation/compensation and block-based
transform coding. Despite their high coding efficiency, the encoded videos
often exhibit visually annoying artifacts, denoted as Perceivable Encoding
Artifacts (PEAs), which significantly degrade the visual Quality-of-Experience
(QoE) of end users. To monitor and improve visual QoE, it is crucial to develop
subjective and objective measures that can identify and quantify various types
of PEAs. In this work, we make the first attempt to build a large-scale
subject-labelled database composed of H.265/HEVC compressed videos containing
various PEAs. The database, namely the PEA265 database, includes 4 types of
spatial PEAs (i.e. blurring, blocking, ringing and color bleeding) and 2 types
of temporal PEAs (i.e. flickering and floating). Each type contains at least
60,000 image or video patches with positive and negative labels. To objectively
identify these PEAs, we train Convolutional Neural Networks (CNNs) using the
PEA265 database. It appears that the state-of-the-art ResNeXt is capable of
identifying each type of PEA with high accuracy. Furthermore, we define PEA
pattern and PEA intensity measures to quantify the PEA levels of a compressed
video sequence. We believe that the PEA265 database and our findings will benefit the
future development of video quality assessment methods and perceptually
motivated video encoders.
Comment: 10 pages, 15 figures, 4 tables
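The abstract does not give the exact definitions of the PEA pattern and intensity measures. As a rough illustration only, per-patch classifier outputs could be aggregated into sequence-level measures like this sketch; the aggregation rule and the `preds` data are assumptions, not the paper's definitions:

```python
import numpy as np

def pea_pattern(patch_predictions):
    """Per-frame artifact rate: how a PEA evolves along the sequence.
    `patch_predictions` is a (frames x patches) binary array of
    hypothetical per-patch CNN outputs (1 = artifact detected)."""
    return np.asarray(patch_predictions).mean(axis=1)

def pea_intensity(patch_predictions):
    """Overall fraction of patches flagged for a given PEA type."""
    return np.asarray(patch_predictions).mean()

# Toy data: 4 frames, 6 patches each, with an artifact spreading over time.
preds = np.array([
    [0, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
])
```

Here `pea_pattern(preds)` grows frame by frame, signalling a temporal artifact such as floating that intensifies over the sequence.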
Face Recognition Using Fractal Codes
In this paper we propose a new method for face recognition using fractal codes. Fractal codes represent local contractive, affine transformations which, when iteratively applied to range-domain pairs in an arbitrary initial image, result in a fixed point close to a given image. The transformation parameters, such as brightness offset, contrast factor, orientation and the address of the corresponding domain for each range, are used directly as features in our method. Features of an unknown face image are compared with those pre-computed for images in a database. There is no need to iterate, use fractal neighbor distances or fractal dimensions for comparison in the proposed method. This method is robust to scale change, frame size change and rotations, as well as to some noise, facial expressions and blur distortion in the image.
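The matching step described above, comparing the fractal-code parameters of an unknown face directly against pre-computed database features, might be sketched as a nearest-neighbour search. The Euclidean metric, the feature layout and all names below are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def match_face(query_features, database):
    """Nearest-neighbour search over fractal-code feature vectors
    (per-range brightness offset, contrast factor, orientation and
    domain address, concatenated). Plain Euclidean distance is an
    assumption; the paper may weight or compare parameters differently."""
    best_name, best_dist = None, float("inf")
    for name, feats in database.items():
        d = float(np.linalg.norm(query_features - feats))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name, best_dist

# Hypothetical pre-computed features for two enrolled faces.
database = {
    "alice": np.array([0.10, 1.20, 0.0, 3.0]),
    "bob":   np.array([0.90, 0.40, 2.0, 7.0]),
}
query = np.array([0.12, 1.10, 0.1, 3.2])
name, dist = match_face(query, database)
```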
Wavelet Based Image Coding Schemes : A Recent Survey
A variety of new and powerful algorithms have been developed for image
compression over the years. Among them the wavelet-based image compression
schemes have gained much popularity due to their overlapping nature, which
reduces the blocking artifacts that are a common phenomenon in JPEG
compression, and their multiresolution character, which leads to superior
energy compaction with high-quality reconstructed images. This paper provides a detailed survey on
some of the popular wavelet coding techniques such as the Embedded Zerotree
Wavelet (EZW) coding, Set Partitioning in Hierarchical Tree (SPIHT) coding, the
Set Partitioned Embedded Block (SPECK) Coder, and the Embedded Block Coding
with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding
techniques like the Wavelet Difference Reduction (WDR) and the Adaptive Scanned
Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency
Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder
(EPWIC), Compression with Reversible Embedded Wavelet (CREW), the Stack-Run
(SR) coding and the recent Geometric Wavelet (GW) coding are also discussed.
Based on the review, recommendations and discussions are presented for
algorithm development and implementation.
Comment: 18 pages, 7 figures, journal
Lossless gray image compression using logic minimization
A novel approach for the lossless compression of gray images is presented. A prediction process is performed, followed by the mapping of prediction residuals. The prediction residuals are then split into bit-planes. A two-dimensional (2D) differencing operation is applied to the bit-planes prior to segmentation and classification. Performing an Exclusive-OR logic operation between neighboring pixels in the bit-planes creates the difference image. The difference image can be coded more efficiently than the original image whenever the average run length of black pixels in the original image is greater than two. The 2D difference bit-plane is divided into windows, or blocks, of size 16×16 pixels. The segmented 2D difference image is partitioned into non-overlapping rectangular regions of all-white and mixed 16×16 blocks. Each partitioned block is transformed into a Boolean switching function in cubical form, treating the pixel values as the output of the function. Compression is performed by minimizing these switching functions with the Quine-McCluskey minimization algorithm.
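The XOR differencing step can be sketched as follows. The exact neighbour scheme (a horizontal pass followed by a vertical pass) is an assumption, since the abstract only says that neighbouring pixels in the bit-planes are XORed:

```python
import numpy as np

def xor_difference_bitplane(plane):
    """2D XOR differencing of a binary bit-plane: XOR each bit with its
    left neighbour, then with the neighbour above, so long runs of
    identical bits collapse to sparse transition marks. The operation
    is invertible via cumulative XOR along the same axes."""
    plane = np.asarray(plane, dtype=np.uint8)
    h = plane.copy()
    h[:, 1:] ^= plane[:, :-1]   # horizontal pass: XOR with left neighbour
    d = h.copy()
    d[1:, :] ^= h[:-1, :]       # vertical pass: XOR with neighbour above
    return d

# A constant 4x4 block of ones collapses to a single set bit in the corner,
# illustrating why long runs become cheap to code.
plane = np.ones((4, 4), dtype=np.uint8)
diff = xor_difference_bitplane(plane)
```

Decoding reverses the two passes with `np.bitwise_xor.accumulate`, first along axis 0 and then along axis 1, so no information is lost.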
Wavelet-Based Embedded Rate Scalable Still Image Coders: A review
Embedded scalable image coding algorithms based on the wavelet transform have received considerable attention lately in academia and in industry, in terms of both coding algorithms and standards activity. In addition to providing very good coding performance, an embedded coder has the property that the bit stream can be truncated at any point and still decode a reasonably good image. In this paper we present some state-of-the-art wavelet-based embedded rate scalable still image coders. In addition, the JPEG2000 still image compression standard is presented.
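The embedded (truncatable) property can be illustrated with a toy bit-plane coder: magnitudes are emitted most significant plane first, so cutting the stream anywhere still yields a coarse reconstruction. This is a sketch of the principle only; real embedded coders such as SPIHT or EBCOT add significance trees, sign coding and entropy coding:

```python
import numpy as np

def bitplane_encode(coeffs, nplanes=8):
    """Toy embedded code: emit coefficient magnitudes bit-plane by
    bit-plane, most significant plane first. Signs and entropy
    coding are omitted on purpose."""
    c = np.asarray(coeffs, dtype=np.int64)
    bits = []
    for p in range(nplanes - 1, -1, -1):
        bits.extend(((c >> p) & 1).tolist())
    return bits

def bitplane_decode(bits, n, nplanes=8):
    """Rebuild coefficients from however many bits were received."""
    c = np.zeros(n, dtype=np.int64)
    for idx, b in enumerate(bits):
        plane = nplanes - 1 - idx // n
        c[idx % n] |= b << plane
    return c

coeffs = [200, 35, 7, 1]
bits = bitplane_encode(coeffs)
full = bitplane_decode(bits, 4)          # all bits: lossless
coarse = bitplane_decode(bits[:16], 4)   # truncated to the 4 top planes
```

Truncating to the top four planes keeps the large coefficients approximately (200 becomes 192, 35 becomes 32) and drops the small ones, which is exactly the graceful degradation the embedded property promises.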
A bitwise clique detection approach for accelerating power graph computation and clustering dense graphs
Graphs are at the essence of many data representations. Visual analytics over graphs is usually difficult due to their size, which makes their visual display challenging, and their fundamental algorithms, which are often classified as NP-hard problems. Power Graph Analysis (PGA) is a method that simplifies networks using reduced representations for complete subgraphs (cliques) and complete bipartite subgraphs (bicliques), in both cases with edge reductions. The benefits of a power graph are the preservation of information and its capacity to show essential information about the original network. However, finding an optimal representation (maximum edge reduction) is also an NP-hard problem. In this work, we propose BCD, a greedy algorithm that uses a Bitwise Clique Detection approach to find power graphs. BCD is faster than competing strategies and allows the analysis of bigger graphs. For the display of larger power graphs, we propose an orthogonal layout to prevent overlapping of edges and vertices. Finally, we describe how the structure induced by the power graph is used for the clustering analysis of dense graphs. We demonstrate the results obtained by our proposal on several datasets and compare against competing strategies.
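The core idea of bitwise clique detection, representing each adjacency row as a machine word so that candidate sets shrink with single AND operations, can be sketched as a greedy clique grower. The seeding and lowest-bit extension order below are simplifying assumptions, not BCD's exact heuristic:

```python
def greedy_clique(adj):
    """Greedy clique growth with integer bitmasks: adj[v] is an int
    whose bit u is set iff {u, v} is an edge (no self-loops). Each
    extension step intersects the candidate set with one bitwise AND."""
    n, best = len(adj), 0
    for seed in range(n):
        clique, cand = 1 << seed, adj[seed]
        while cand:
            v = (cand & -cand).bit_length() - 1  # lowest-index candidate
            clique |= 1 << v
            cand &= adj[v]   # keep only common neighbours of the clique
        if bin(clique).count("1") > bin(best).count("1"):
            best = clique
    return [v for v in range(n) if best >> v & 1]

# Triangle {0, 1, 2} plus a pendant vertex 3 attached to vertex 0.
adj = [0b1110, 0b0101, 0b0011, 0b0001]
```

Because every intersection is one word-level AND, the inner loop touches 64 potential neighbours per machine instruction, which is where the speed-up over set-based implementations comes from.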
Graph Wedgelets: Adaptive Data Compression on Graphs based on Binary Wedge Partitioning Trees and Geometric Wavelets
We introduce graph wedgelets - a tool for data compression on graphs based on
the representation of signals by piecewise constant functions on adaptively
generated binary graph partitionings. The adaptivity of the partitionings, a
key ingredient to obtain sparse representations of a graph signal, is realized
in terms of recursive wedge splits adapted to the signal. For this, we transfer
adaptive partitioning and compression techniques known for 2D images to general
graph structures and develop discrete variants of continuous wedgelets and
binary space partitionings. We prove that continuous results on best m-term
approximation with geometric wavelets can be transferred to the discrete graph
setting and show that our wedgelet representation of graph signals can be
encoded and implemented in a simple way. Finally, we illustrate that this
graph-based method can be applied for the compression of images as well.
Comment: 12 pages, 10 figures
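A single wedge split, bipartitioning vertices by which of two centre vertices is nearer and approximating the signal by a constant on each part, can be sketched as below. The exhaustive search over centre pairs is a toy stand-in for the paper's recursive, adaptively grown partitioning:

```python
import numpy as np

def best_wedge_split(coords, signal):
    """One wedge split: bipartition vertices by proximity to two centre
    vertices and score the split by the squared error of a per-part
    constant (mean) approximation of the signal."""
    coords = np.asarray(coords, dtype=float)
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    best_err, best_part = None, None
    for i in range(n):
        for j in range(i + 1, n):
            di = np.linalg.norm(coords - coords[i], axis=1)
            dj = np.linalg.norm(coords - coords[j], axis=1)
            part = di <= dj                      # the "wedge" bipartition
            err = 0.0
            for mask in (part, ~part):
                if mask.any():
                    err += ((signal[mask] - signal[mask].mean()) ** 2).sum()
            if best_err is None or err < best_err:
                best_err, best_part = err, part
    return best_err, best_part

# Two spatial clusters carrying distinct signal values: one split suffices.
coords = [[0, 0], [0, 1], [5, 0], [5, 1]]
signal = [1.0, 1.0, 9.0, 9.0]
err, part = best_wedge_split(coords, signal)
```

Applying such splits recursively to whichever part has the largest residual yields the adaptive binary partitioning tree that the wedgelet representation encodes.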
Overlap Removal of Dimensionality Reduction Scatterplot Layouts
Dimensionality Reduction (DR) scatterplot layouts have become a ubiquitous
visualization tool for analyzing multidimensional data items across many
application areas. Despite their popularity, scatterplots suffer from occlusion,
especially when markers convey information, making it troublesome for users to
estimate the sizes of item groups and, more importantly, potentially obfuscating
critical items for the analysis at hand. Different strategies have been
devised to address this issue, either producing overlap-free layouts that lack
the powerful capabilities of contemporary DR techniques in uncovering
interesting data patterns, or eliminating overlaps as a post-processing step. Despite
the good results of post-processing techniques, the best methods typically
expand or distort the scatterplot area, thus reducing marker sizes, sometimes
to unreadable dimensions, defeating the purpose of removing overlaps. This
paper presents a novel post-processing strategy to remove DR layouts' overlaps
that faithfully preserves the original layout's characteristics and markers'
sizes. We show that the proposed strategy surpasses the state-of-the-art in
overlap removal through an extensive comparative evaluation considering
multiple different metrics, while being two to three orders of magnitude faster
for large datasets.
Comment: 11 pages, 9 figures
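For contrast with the paper's approach, the simplest post-processing baseline just pushes overlapping markers apart pairwise. This naive sketch is NOT the proposed method (which additionally preserves the original layout's characteristics and marker sizes), but it shows what "overlap removal as post-processing" means:

```python
import numpy as np

def remove_overlaps(points, radius, iterations=100):
    """Naive iterative overlap removal: repeatedly push apart every
    pair of circular markers closer than 2 * radius. O(n^2) per pass
    and oblivious to layout preservation -- a toy baseline only."""
    pts = np.array(points, dtype=float)
    for _ in range(iterations):
        moved = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = pts[j] - pts[i]
                dist = float(np.hypot(*d))
                if dist < 2 * radius:
                    # Push along the connecting direction; pick an
                    # arbitrary direction for coincident markers.
                    direction = d / dist if dist > 0 else np.array([1.0, 0.0])
                    shift = (2 * radius - dist) / 2 * direction
                    pts[i] -= shift
                    pts[j] += shift
                    moved = True
        if not moved:
            break
    return pts

# Two unit-radius markers at distance 1 get separated to distance 2.
pts = remove_overlaps([[0.0, 0.0], [1.0, 0.0]], radius=1.0)
```

Each push can reintroduce overlaps elsewhere and the layout drifts, which is precisely the distortion problem the paper's strategy is designed to avoid.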