Developing A Digital Contents Valuing Model: How Users Appreciate Their Values
The principal objective of this paper was to propose and verify a digital content valuing model, which is expected to play a significant role in future research and to provide novel and practical implications. To build an effective model for evaluating digital content value, this study reviewed digital content value and categorized it into intrinsic, interaction, and business value. Based on the research model, we attempted to identify and assess the effects of intrinsic digital content value on digital content interaction value, and to characterize the relationship between digital content interaction value and digital content business value. Consequently, this study finds strong interrelations among the different types of value, and these interactions add value to digital content usage. We hope that the proposed digital content valuing model will prove useful, provide further research insights, and increase our understanding of the digital content valuing process.
ASP Effects in the Small Size Enterprise: A Case of Bizmeka of Korea Telecom
An application service provider (ASP) is a business model providing information technology (IT) enabled solutions to customers over the Internet. Using an ASP model, small and medium-sized enterprises (SMEs) can acquire affordable IT solutions. Korea Telecom (KT), a leading IT company in Korea, provides an ASP service called Bizmeka to SMEs. In this study, we identify factors influencing the perceived benefits of Bizmeka usage and examine the relationship between these perceived benefits and customer satisfaction. Based on the results of our survey, we found the cost and time impact of the ASP, the maintenance impact of the ASP, and the security risks involved with the ASP to be important factors affecting customer satisfaction.
A generalized finite element method with global–local enrichments for the 3D simulation of propagating cohesive fractures
A novel numerical framework based on the generalized finite element method with global-local enrichments (GFEMgl) is developed for the efficient 3D modeling of propagating fractures. A non-linear cohesive law is adopted to capture objectively the amount of energy dissipated during material degradation, without the need for adaptive remeshing at the macroscale or artificial regularization parameters. In the proposed scale-bridging strategy, the fine-scale solution provides the coarse-scale problem with information on localized damage states as well as scale-bridging enrichment functions, thus enabling the accurate solution of the nonlinear global problem on coarse meshes. This contrasts with the original GFEMgl approach based on linear elastic fracture mechanics, in which the local solution field contributes only to the kinematic description of the global solution. The cohesive crack can propagate through the interior of finite elements by virtue of the partition-of-unity concept employed in the generalized finite element method (GFEM), thus eliminating the need for interfacial surface elements to represent the geometry of discontinuities and the requirement that finite element meshes fit the cohesive crack surface.
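To make the scale-bridging idea concrete, the generic GFEMgl approximation from the literature is sketched below in LaTeX; the notation is illustrative and not taken from this particular paper. The coarse global space is augmented with the fine-scale local solution, which enters as an enrichment function through the partition of unity:

% Generic GFEM-gl approximation (illustrative notation, not this paper's):
u_h(\boldsymbol{x}) = \sum_{\alpha \in I_h} \varphi_\alpha(\boldsymbol{x})\, \hat{u}_\alpha
                    + \sum_{\alpha \in I_{gl}} \varphi_\alpha(\boldsymbol{x})\, \tilde{u}_\alpha\, u_L(\boldsymbol{x})

Here \varphi_\alpha are the coarse-mesh partition-of-unity shape functions, \hat{u}_\alpha and \tilde{u}_\alpha are standard and enrichment degrees of freedom, u_L is the solution of the fine-scale local boundary value problem (carrying the cohesive law in the present setting), and I_{gl} \subset I_h collects the global nodes whose patches cover the local domain.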
A cross cultural study of corporate blogs in the U.S. and in Korea
Corporate blogging is now worldwide due to the potential benefits of blogging. The purpose of this study is to investigate corporate blogging in the U.S. and Korea. The framework used to compare corporate blogging in the two countries is the set of corporate blogging strategies developed by Lee et al. (2006). Compared with corporate blogging strategies in the U.S., the top-down corporate blogging strategy IV (promotion) is the one most commonly adopted in Korea, and most Korean companies host their blogs at third-party sites. Unlike in the U.S., promotion blogs have gained popularity and high readership in Korea.
Pluralistic Ignorance in the Personal Use of the Internet and System Monitoring
Previous research suggests that computer security countermeasures should be effective in preventing computer abuse in organizations. However, computer abuse problems persist despite these efforts. This study proposes a new model of computer abuse that explains the causal link between abusive behavior and the psychological state toward this behavior, drawing on the theory of pluralistic ignorance. Pluralistic ignorance is a form of erroneous social inference that is both an immediate cause and a consequence of the literal inconsistency between private attitudes and public behavior. Under pluralistic ignorance, mistakenly perceived social norms overwhelm personal attitudes and subsequently lead to overt behavior contrary to an actor's attitude. This new model contributes to the theoretical body of knowledge on computer abuse by providing a new angle for approaching the problem. In addition, it suggests to practitioners that social solutions should be considered, along with technical countermeasures, to reduce pervasive computer abuse problems.
M2m: Imbalanced Classification via Major-to-minor Translation
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them generalize poorly under a balanced testing criterion. In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes via translating samples (e.g., images) from more-frequent classes. This simple approach enables a classifier to learn more generalizable features of minority classes by transferring and leveraging the diversity of the majority information. Our experimental results on a variety of class-imbalanced datasets show that the proposed method improves generalization on minority classes significantly compared to other existing re-sampling or re-weighting methods. The performance of our method even surpasses that of previous state-of-the-art methods for imbalanced classification.
Comment: 12 pages; CVPR 202
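As a rough illustration of the major-to-minor translation idea, the sketch below perturbs majority-class samples with a pretrained classifier's gradients until they are assigned to a target minority class, and the results are then reused as synthetic minority data. This is a simplified sketch, not the authors' implementation: the classifier, step count, step size, and the omission of the paper's rejection criterion and regularization are all assumptions.

import torch
import torch.nn.functional as F

def m2m_translate(pretrained_clf, x_major, target_minor, steps=10, step_size=0.1):
    """Translate majority-class samples so the classifier assigns them to
    `target_minor`; the output is used as synthetic minority training data."""
    x = x_major.clone().detach().requires_grad_(True)
    target = torch.full((x.size(0),), target_minor, dtype=torch.long)
    for _ in range(steps):
        logits = pretrained_clf(x)
        # Descend the cross-entropy toward the target minority class.
        loss = F.cross_entropy(logits, target)
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x -= step_size * grad.sign()  # simple signed-gradient step (illustrative)
            x.clamp_(0.0, 1.0)            # keep images in a valid range
    return x.detach()

# Toy usage with a hypothetical classifier and random "images".
clf = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
majority_batch = torch.rand(8, 3, 32, 32)
synthetic_minority = m2m_translate(clf, majority_batch, target_minor=7)
print(synthetic_minority.shape)  # torch.Size([8, 3, 32, 32])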
WinCLIP: Zero-/Few-Shot Anomaly Classification and Segmentation
Visual anomaly classification and segmentation are vital for automating industrial quality inspection. The focus of prior research in the field has been on training custom models for each quality inspection task, which requires task-specific images and annotation. In this paper, we move away from this regime, addressing zero-shot and few-normal-shot anomaly classification and segmentation. Recently, CLIP, a vision-language model, has shown revolutionary generality, with competitive zero-/few-shot performance in comparison to full supervision. But CLIP falls short on anomaly classification and segmentation tasks. Hence, we propose window-based CLIP (WinCLIP) with (1) a compositional ensemble on state words and prompt templates and (2) efficient extraction and aggregation of window/patch/image-level features aligned with text. We also propose its few-normal-shot extension WinCLIP+, which uses complementary information from normal images. On MVTec-AD (and VisA), without further tuning, WinCLIP achieves 91.8%/85.1% (78.1%/79.6%) AUROC in zero-shot anomaly classification and segmentation, while WinCLIP+ achieves 93.1%/95.2% (83.8%/96.4%) in the 1-normal-shot setting, surpassing the state of the art by large margins.
Comment: Accepted to Conference on Computer Vision and Pattern Recognition (CVPR) 202
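A minimal sketch of the zero-shot classification side of this recipe, assuming the OpenCLIP library, is shown below: prompts are composed from state words and templates, their text embeddings are averaged per class, and an image is scored by a softmax over its similarity to the "normal" and "anomalous" ensembles. The state words, templates, object name, and checkpoint choice are illustrative assumptions, and the window-based segmentation part of WinCLIP is not covered.

import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-16", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

object_name = "metal nut"  # inspected object category (assumption)
templates = ["a photo of a {}", "a cropped photo of the {}"]
normal_states = ["flawless {}", "{} without defect"]
anomalous_states = ["damaged {}", "{} with a defect"]

def class_embedding(states):
    """Average normalized text embeddings over all state x template prompts."""
    prompts = [t.format(s.format(object_name)) for s in states for t in templates]
    with torch.no_grad():
        emb = model.encode_text(tokenizer(prompts))
    emb = emb / emb.norm(dim=-1, keepdim=True)
    mean = emb.mean(dim=0)
    return mean / mean.norm()

def anomaly_score(image_path):
    """Softmax over (normal, anomalous) text similarities -> P(anomalous)."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        img = model.encode_image(image)
    img = img / img.norm(dim=-1, keepdim=True)
    text = torch.stack([class_embedding(normal_states), class_embedding(anomalous_states)])
    probs = (100.0 * img @ text.T).softmax(dim=-1)
    return probs[0, 1].item()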
Collaborative Score Distillation for Consistent Visual Synthesis
Generative priors of large-scale text-to-image diffusion models enable a wide range of new generation and editing applications on diverse visual modalities. However, when adapting these priors to complex visual modalities, often represented as multiple images (e.g., video), achieving consistency across a set of images is challenging. In this paper, we address this challenge with a novel method, Collaborative Score Distillation (CSD). CSD is based on the Stein Variational Gradient Descent (SVGD). Specifically, we propose to consider multiple samples as "particles" in the SVGD update and combine their score functions to distill generative priors over a set of images synchronously. Thus, CSD facilitates seamless integration of information across 2D images, leading to a consistent visual synthesis across multiple samples. We show the effectiveness of CSD in a variety of tasks, encompassing the visual editing of panorama images, videos, and 3D scenes. Our results underline the competency of CSD as a versatile method for enhancing inter-sample consistency, thereby broadening the applicability of text-to-image diffusion models.
Comment: Project page with visuals: https://subin-kim-cv.github.io/CSD
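To make the SVGD mechanism concrete, the sketch below implements one generic SVGD update over a set of particles with an RBF kernel: the kernel-weighted sum of scores shares information across samples, while the analytic kernel gradient acts as a repulsive diversity term. The median-heuristic bandwidth, step size, and toy Gaussian target are illustrative assumptions, not the paper's setup in image or latent space.

import torch

def svgd_step(particles, score_fn, step_size=0.1):
    """One SVGD update:
    phi(x_i) = (1/n) sum_j [ k(x_j, x_i) * score(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    x = particles
    n = x.shape[0]
    sq_dist = torch.cdist(x, x) ** 2
    # Median heuristic for the RBF bandwidth h in k = exp(-||.||^2 / (2h)).
    h = (sq_dist.median() / (2.0 * torch.log(torch.tensor(n + 1.0)))).clamp_min(1e-8)
    k = torch.exp(-sq_dist / (2.0 * h))
    scores = score_fn(x)  # (n, d) score estimates, one per particle
    # k @ scores couples the particles' scores; the RBF kernel gradient
    # sum_j (x_i - x_j)/h * k_ij pushes particles apart (repulsion).
    repulsion = (x * k.sum(dim=1, keepdim=True) - k @ x) / h
    phi = (k @ scores + repulsion) / n
    return x + step_size * phi

# Toy usage: particles flow toward a standard Gaussian, whose score is -x.
particles = torch.randn(16, 2) * 3.0 + 5.0
for _ in range(500):
    particles = svgd_step(particles, score_fn=lambda x: -x)
print(particles.mean(dim=0))  # should approach (0, 0)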
Confidence-aware Reward Optimization for Fine-tuning Text-to-Image Models
Fine-tuning text-to-image models with reward functions trained on human feedback data has proven effective for aligning model behavior with human intent. However, excessive optimization with such reward models, which serve as mere proxy objectives, can compromise the performance of fine-tuned models, a phenomenon known as reward overoptimization. To investigate this issue in depth, we introduce the Text-Image Alignment Assessment (TIA2) benchmark, which comprises a diverse collection of text prompts, images, and human annotations. Our evaluation of several state-of-the-art reward models on this benchmark reveals their frequent misalignment with human assessment. We empirically demonstrate that overoptimization occurs notably when a poorly aligned reward model is used as the fine-tuning objective. To address this, we propose TextNorm, a simple method that enhances alignment based on a measure of reward model confidence estimated across a set of semantically contrastive text prompts. We demonstrate that incorporating the confidence-calibrated rewards in fine-tuning effectively reduces overoptimization, resulting in twice as many wins in human evaluation for text-image alignment compared against the baseline reward models.
Comment: ICLR 202
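In the spirit of the confidence-calibration idea, the sketch below normalizes a reward over a set of semantically contrastive prompts with a softmax, so the calibrated score reflects how strongly the reward model prefers the intended prompt over its alternatives. The reward-model interface, temperature, and toy reward are illustrative assumptions, not the paper's exact formulation.

import math
from typing import Callable, Sequence

def calibrated_reward(reward_fn: Callable[[object, str], float],
                      image: object,
                      prompt: str,
                      contrastive_prompts: Sequence[str],
                      temperature: float = 1.0) -> float:
    """Softmax-normalize the reward of `prompt` against contrastive prompts,
    so the score reflects the reward model's relative confidence that the
    image matches the intended prompt rather than a semantically opposed one."""
    prompts = [prompt] + list(contrastive_prompts)
    raw = [reward_fn(image, p) / temperature for p in prompts]
    m = max(raw)  # subtract the max for numerical stability
    exp = [math.exp(r - m) for r in raw]
    return exp[0] / sum(exp)

# Toy usage with a hypothetical reward model that scores keyword overlap;
# a caption string stands in for the image.
def toy_reward(image_caption: str, prompt: str) -> float:
    return float(len(set(image_caption.split()) & set(prompt.split())))

score = calibrated_reward(
    toy_reward,
    image="a green apple on a table",
    prompt="a green apple on a table",
    contrastive_prompts=["a red apple on a table", "a green pear on a table"],
)
print(round(score, 3))  # ~0.58; higher means more confident text-image alignment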