144 research outputs found
Architectural Implications of GNN Aggregation Programming Abstractions
Graph neural networks (GNNs) have gained significant popularity due to the
powerful capability to extract useful representations from graph data. As the
need for efficient GNN computation intensifies, a variety of programming
abstractions designed for optimizing GNN Aggregation have emerged to facilitate
acceleration. However, there has been no comprehensive evaluation and analysis
of existing abstractions, and thus no clear consensus on which approach is better. In
this letter, we classify existing programming abstractions for GNN Aggregation
by the dimension of data organization and propagation method. By constructing
these abstractions on a state-of-the-art GNN library, we perform a thorough and
detailed characterization study to compare their performance and efficiency,
and provide several insights on future GNN acceleration based on our analysis.Comment: 4 pages, to be published in IEEE Computer Architecture Letters (CAL
Lossy and Lossless (L²) Post-training Model Size Compression
Deep neural networks have delivered remarkable performance and have been
widely used in various visual tasks. However, their huge size causes
significant inconvenience for transmission and storage. Many previous studies
have explored model size compression. However, these studies often approach
various lossy and lossless compression methods in isolation, leading to
challenges in achieving high compression ratios efficiently. This work proposes
a post-training model size compression method that combines lossy and lossless
compression in a unified way. We first propose a unified parametric weight
transformation, which ensures different lossy compression methods can be
performed jointly in a post-training manner. Then, a dedicated differentiable
counter is introduced to guide the optimization of lossy compression to arrive
at a more suitable point for later lossless compression. Additionally, our
method can easily control a desired global compression ratio and allocate
adaptive ratios for different layers. Finally, our method can achieve a stable
compression ratio without sacrificing accuracy, and a higher
compression ratio with minor accuracy loss, in a short time. Our code is
available at https://github.com/ModelTC/L2_Compression
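To make the lossy-plus-lossless pipeline concrete, here is a minimal sketch of the general idea: a lossy quantization stage followed by a lossless entropy-coding stage, with the overall ratio measured end to end. Uniform int8 quantization stands in for the paper's parametric weight transformation, and `zlib` stands in for a real entropy coder; the random "weights" are synthetic.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10_000).astype(np.float32)  # stand-in weight tensor

# Lossy stage: uniform quantization to 8-bit symbols (a stand-in for the
# paper's unified parametric transformation; step derived from the range).
step = (w.max() - w.min()) / 255.0
q = np.round((w - w.min()) / step).astype(np.uint8)

# Lossless stage: entropy-code the quantized symbols (zlib as a proxy for
# a dedicated coder). The paper's differentiable counter steers the lossy
# stage toward symbol distributions that this stage compresses well.
compressed = zlib.compress(q.tobytes(), level=9)

ratio = w.nbytes / len(compressed)
print(f"compression ratio: {ratio:.1f}x")
```

The key point the paper makes is that optimizing the two stages jointly, rather than in isolation as here, yields better ratio/accuracy trade-offs.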
Measuring Perceptual Color Differences of Smartphone Photographs
Measuring perceptual color differences (CDs) is of great importance in modern
smartphone photography. Despite the long history, most CD measures have been
constrained by psychophysical data of homogeneous color patches or a limited
number of simplistic natural photographic images. It is thus questionable
whether existing CD measures generalize in the age of smartphone photography
characterized by greater content complexities and learning-based image signal
processors. In this paper, we put together the largest image dataset to date for
perceptual CD assessment, in which the photographic images are 1) captured by
six flagship smartphones, 2) altered by Photoshop, 3) post-processed by
built-in filters of the smartphones, and 4) reproduced with incorrect color
profiles. We then conduct a large-scale psychophysical experiment to gather
perceptual CDs of 30,000 image pairs in a carefully controlled laboratory
environment. Based on the newly established dataset, we make one of the first
attempts to construct an end-to-end learnable CD formula based on a lightweight
neural network, as a generalization of several previous metrics. Extensive
experiments demonstrate that the optimized formula outperforms 33 existing CD
measures by a large margin, offers reasonable local CD maps without the use of
dense supervision, generalizes well to homogeneous color patch data, and
empirically behaves as a proper metric in the mathematical sense. Our dataset
and code are publicly available at https://github.com/hellooks/CDNet.
Comment: 10 figures, 8 tables, 14 pages
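For context on what a learnable CD formula generalizes, here is a sketch of the classic CIE76 color difference (Euclidean distance in CIELAB), computed per pixel and pooled into a global score. The 2x2 Lab "images" and the mean pooling are illustrative choices, not the paper's actual formula or data.

```python
import numpy as np

def delta_e76(lab1, lab2):
    """Classic CIE76 color difference: Euclidean distance in CIELAB.
    Learned CD formulas like the paper's can be viewed as generalizing
    fixed per-pixel metrics such as this one."""
    return np.sqrt(((lab1 - lab2) ** 2).sum(axis=-1))

# Two hypothetical 2x2 Lab images; b is a uniform shift of a.
a = np.array([[[50.0, 0.0, 0.0], [60.0, 10.0, -10.0]],
              [[30.0, 5.0, 5.0], [70.0, -20.0, 20.0]]])
b = a + np.array([1.0, 2.0, -2.0])

cd_map = delta_e76(a, b)    # local CD map, one value per pixel
global_cd = cd_map.mean()   # pooled global score: sqrt(1+4+4) = 3.0
print(global_cd)            # prints 3.0
```

A fixed formula like this ignores content and spatial context, which is precisely the gap the dataset and learned metric in the paper aim to close.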