How is Gaze Influenced by Image Transformations? Dataset and Model
Data size is the bottleneck for developing deep saliency models, because
collecting eye-movement data is very time-consuming and expensive. Most
current studies on human attention and saliency modeling have used
high-quality, stereotypical stimuli. In the real world, however, captured
images undergo various types of transformations. Can we use these
transformations to augment existing saliency datasets? Here, we first create
a novel saliency dataset containing the fixations of 10 observers over 1900
images degraded by 19 types of transformations. Second, by analyzing eye
movements, we find that observers look at different locations over
transformed versus original images. Third, we utilize the new data over
transformed images, called data augmentation transformations (DATs), to train
deep saliency models. We find that label-preserving DATs with negligible
impact on human gaze boost saliency prediction, whereas some other DATs that
severely impact human gaze degrade performance. These valid, label-preserving
augmentation transformations provide a solution for enlarging existing
saliency datasets. Finally, we introduce a novel saliency model based on a
generative adversarial network (dubbed GazeGAN). A modified U-Net is proposed
as the generator of GazeGAN, which combines classic skip connections with a
novel center-surround connection (CSC) in order to leverage multi-level
features. We also propose a histogram loss based on the Alternative
Chi-Square Distance (ACS HistLoss) to refine the saliency map in terms of
luminance distribution. Extensive experiments and comparisons over 3 datasets
indicate that GazeGAN achieves the best performance on popular saliency
evaluation metrics and is more robust to various perturbations. Our code and
data are available at:
https://github.com/CZHQuality/Sal-CFS-GAN
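A chi-square-style histogram loss like the one the abstract describes can be
sketched as follows. This is a minimal illustration, not the paper's code:
the exact "Alternative Chi-Square" variant used by GazeGAN, the bin count,
and the function names are assumptions.

```python
import numpy as np

def luminance_histogram(img, bins=256):
    # Normalized luminance histogram of a saliency map with values in [0, 1].
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def acs_hist_loss(pred, target, bins=256, eps=1e-8):
    # Chi-square-style distance between the luminance histograms of the
    # predicted and ground-truth saliency maps; small distance means the
    # two maps have similar luminance distributions.
    h1 = luminance_histogram(pred, bins)
    h2 = luminance_histogram(target, bins)
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))
```

Because the loss compares histograms rather than pixels, it penalizes global
luminance-distribution mismatches that a pixel-wise loss can miss.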
Non-local Attention Optimized Deep Image Compression
This paper proposes a novel Non-Local Attention Optimized Deep Image
Compression (NLAIC) framework, built on top of the popular variational
auto-encoder (VAE) structure. Our NLAIC framework embeds non-local operations
in the encoders and decoders for both the image and the latent-feature
probability information (known as the hyperprior) to capture both local and
global correlations. It applies an attention mechanism to generate masks that
weigh the features of the image and hyperprior, implicitly adapting bit
allocation to different features based on their importance. Furthermore, both
the hyperpriors and the spatial-channel neighbors of the latent features are
used to improve entropy coding. The proposed model outperforms existing
methods on the Kodak dataset, including learned (e.g., Balle2019, Balle2018)
and conventional (e.g., BPG, JPEG2000, JPEG) image compression methods, on
both the PSNR and MS-SSIM distortion metrics.
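The idea of a non-local operation followed by a sigmoid mask that reweights
features can be sketched in a few lines. This is a simplified stand-in for
the NLAIC block, assuming a plain dot-product self-attention over spatial
positions; the paper's actual block structure and layer shapes will differ.

```python
import numpy as np

def nonlocal_attention_mask(feats):
    # feats: (C, H, W) latent features. Computes a non-local response by
    # attending over all spatial positions, squashes it with a sigmoid to
    # form a mask, and reweights the features -- a rough sketch of how an
    # attention mask can implicitly allocate more "bits" (signal energy)
    # to important positions.
    c, h, w = feats.shape
    x = feats.reshape(c, h * w)                       # (C, N)
    affinity = x.T @ x                                # (N, N) similarities
    affinity = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    affinity /= affinity.sum(axis=1, keepdims=True)   # softmax over positions
    response = x @ affinity.T                         # global context per pos.
    mask = 1.0 / (1.0 + np.exp(-response))            # sigmoid mask in (0, 1)
    return (mask * x).reshape(c, h, w)
```

Since every output position aggregates information from all positions, the
mask reflects global structure, unlike a purely convolutional (local) gate.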
Understanding user experience of mobile video: Framework, measurement, and optimization
Since users have become the focus of product/service design in the last decade, the term User eXperience (UX) has been frequently used in the field of Human-Computer Interaction (HCI). Research on UX facilitates a better understanding of the various aspects of the user's interaction with a product or service. Mobile video, as a new and promising service and research field, has attracted great attention. Due to the significance of UX in the success of mobile video (Jordan, 2002), many researchers have centered on this area, examining users' expectations, motivations, requirements, and usage contexts. As a result, many influencing factors have been explored (Buchinger, Kriglstein, Brandt & Hlavacs, 2011; Buchinger, Kriglstein & Hlavacs, 2009). However, a general framework for the specific mobile video service is lacking for structuring such a great number of factors. To measure the user experience of multimedia services such as mobile video, quality of experience (QoE) has recently become a prominent concept. In contrast to the traditionally used concept of quality of service (QoS), QoE not only involves objectively measuring the delivered service but also takes into account the user's needs and desires when using the service, emphasizing the user's overall acceptability of the service. Many QoE metrics are able to estimate the user-perceived quality or acceptability of mobile video, but may not be accurate enough for overall UX prediction due to the complexity of UX. Only a few QoE frameworks have addressed broader aspects of UX for mobile multimedia applications, and these still need to be transformed into practical measures. The challenge of optimizing UX remains adapting to resource constraints (e.g., network conditions, mobile device capabilities, and heterogeneous usage contexts) as well as meeting complicated user requirements (e.g., usage purposes and personal preferences).
In this chapter, we investigate the existing important UX frameworks, compare their similarities, and discuss some important features that fit the mobile video service. Based on previous research, we propose a simple UX framework for mobile video applications by mapping a variety of influencing factors of UX onto a typical mobile video delivery system. Each component and its factors are explored with comprehensive literature reviews. The proposed framework may benefit the user-centred design of mobile video by taking complete consideration of UX influences, and may help improve mobile video service quality by adjusting the values of certain factors to produce a positive user experience. It may also facilitate related research by locating important issues to study, clarifying research scopes, and setting up proper study procedures. We then review a great deal of research on UX measurement, including QoE metrics and QoE frameworks for mobile multimedia. Finally, we discuss how to achieve an optimal quality of user experience by focusing on various aspects of the UX of mobile video. In the conclusion, we suggest some open issues for future study.
CONTENT BASED IMAGE RETRIEVAL (CBIR) SYSTEM
Advancement in hardware and telecommunication technology has boosted the
creation and distribution of digital visual content. However, this rapid
growth of visual content creation has not been matched by the simultaneous
emergence of technologies to support efficient image analysis and retrieval.
Although there have been attempts to solve this problem using meta-data text
annotation, this approach is not practical for large data collections.
This system uses 7 different feature vectors focusing on 3 main low-level
feature groups (color, shape, and texture). The system takes the image that
the user feeds in and searches the database for images with similar
features, subject to a threshold value. One of the most important aspects of
CBIR is determining the correct threshold value. Setting it too low will
result in fewer images being retrieved, which might exclude relevant data;
setting it too high might result in irrelevant data being retrieved and
increase the search time for image retrieval.
Results show that this system is able to raise the average retrieval
accuracy to 70% by combining the 7 different feature vectors at the correct
threshold value.
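The threshold-based matching the abstract describes can be sketched as
follows. This is an illustrative reconstruction, not the thesis's code: the
distance metric (Euclidean over concatenated descriptors) and the function
names are assumptions.

```python
import numpy as np

def retrieve(query_vec, db_vecs, threshold):
    # query_vec: combined feature vector of the query image (e.g., the 7
    # color/shape/texture descriptors concatenated).
    # db_vecs: (n_images, dim) matrix of database feature vectors.
    # Returns indices of database images whose distance to the query falls
    # below the threshold, nearest first.
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    hits = np.where(dists <= threshold)[0]
    return hits[np.argsort(dists[hits])]
```

With a distance threshold, lowering the value shrinks the result set (risking
missed relevant images) while raising it admits more distant, possibly
irrelevant matches, which is exactly the trade-off the abstract describes.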
Point Cloud Distortion Quantification based on Potential Energy for Human and Machine Perception
Distortion quantification of point clouds plays a subtle yet vital role in a
wide range of human and machine perception tasks. For human perception
tasks, a distortion quantification can substitute for subjective experiments
to guide 3D visualization, while for machine perception tasks it can work as
a loss function to guide the training of deep neural networks for
unsupervised learning tasks. To handle the variety of demands across many
applications, a distortion quantification needs to be
distortion-discriminable, differentiable, and of low computational
complexity. Currently, however, there is no general distortion
quantification that satisfies all three conditions. To fill this gap, this
work proposes the multiscale potential energy discrepancy (MPED), a
distortion quantification that measures point cloud geometry and color
difference. By evaluating at various neighborhood sizes, the proposed MPED
achieves a global-local tradeoff, capturing distortion in a multiscale
fashion. Extensive experimental studies validate MPED's superiority for both
human and machine perception tasks.
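The multiscale energy-comparison idea can be sketched as below. This is a
simplified stand-in under stated assumptions: the per-point "energy" here is
just the sum of squared distances to the k nearest neighbors, and the scale
set is arbitrary; the paper's actual MPED formulation (and its color term)
differs.

```python
import numpy as np

def potential_energy(points, k):
    # points: (n, 3) coordinates. Per-point energy from its k nearest
    # neighbors: sum of squared neighbor distances (a gravity-like term).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point from itself
    knn = np.sort(d2, axis=1)[:, :k]      # k smallest squared distances
    return knn.sum(axis=1)

def mped(ref, dist, scales=(4, 8, 16)):
    # Multiscale discrepancy: compare energies at several neighborhood
    # sizes (local to global) and average the absolute gaps.
    return float(np.mean([np.abs(potential_energy(ref, k)
                                 - potential_energy(dist, k)).mean()
                          for k in scales]))
```

Small k values probe local geometric damage while large k values capture
coarser structural change, which is how varying the neighborhood size yields
the global-local tradeoff the abstract mentions.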