The Characteristics, Harm and Anti-monopoly Measures of Digital Enterprise Monopolistic Behavior in Digital Economy: A Case Study of Amazon
With the wave of artificial intelligence, blockchain, cloud computing, big data, and other digital technologies becoming new general-purpose technologies, digital enterprises exhibit new industrial-organization characteristics: non-rivalry of information products, zero marginal cost of information, digital markets that can exist without a physical offline presence, and big data replacing physical materials as a key input. These characteristics give rise to new types of monopolistic behavior by digital enterprises. This paper selects self-preferencing behavior for detailed analysis, taking the well-known digital platform Amazon as an example: (1) it summarizes the foundation (the development model) on which self-preferencing is realized; (2) it traces the full chain of self-preferencing across pricing, product selection, procurement, after-sales service, and inventory; (3) it identifies four kinds of behavior harmful to competition: weakening competitors' advantages, raising competitors' costs, reducing incentives to innovate, and damaging consumer welfare. Finally, the paper provides a reference for identifying this behavior as monopolistic from three aspects: broadening the definition of digital economy markets, refining the standards for identifying a dominant market position, and carefully identifying abuse of a dominant market position through self-preferencing; it then puts forward regulatory suggestions.
Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning
Large Language Models (LLMs) possess the capability to engage in In-Context Learning (ICL) by leveraging a few demonstrations pertaining to a new downstream task as conditions. However, this learning paradigm suffers from high instability stemming from substantial variance induced by factors such as the input distribution of selected examples, their ordering, and prompt formats. In this work, we demonstrate that even when all these factors are held constant, the random selection of examples still results in high variance. Consequently, we aim to explore the informative ability of data examples by quantifying the Information Gain (IG) obtained in prediction after observing a given example candidate. We then propose to sample those with maximum IG. Additionally, we identify the presence of template bias, which can lead to unfair evaluations of IG during the sampling process. To mitigate this bias, we introduce a Calibration Before Sampling strategy. The experimental results illustrate that our proposed method yields an average relative improvement of 14.3% across six classification tasks using three LLMs.
Comment: Accepted to the Findings of EMNLP 202
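A minimal sketch of the selection idea described in this abstract, assuming entropy reduction on a small probe set as the IG proxy and a caller-supplied `lm_class_probs` function for querying an LLM; these names, the calibration-by-division step, and the probe-set averaging are illustrative assumptions, not the paper's exact formulation.

```python
import math
from typing import Callable, Dict, List, Sequence

def entropy(probs: Sequence[float]) -> float:
    """Shannon entropy of a class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def calibrate(probs: Sequence[float], template_bias: Sequence[float]) -> List[float]:
    """Divide out the bias measured on a content-free input, then renormalize."""
    scaled = [p / max(b, 1e-12) for p, b in zip(probs, template_bias)]
    z = sum(scaled)
    return [s / z for s in scaled]

def select_max_ig_examples(
    candidates: List[str],
    probe_inputs: List[str],
    lm_class_probs: Callable[[str, str], List[float]],  # (demonstration, query) -> label probabilities
    template_bias: Sequence[float],                     # probabilities for a content-free query, e.g. "N/A"
    k: int = 4,
) -> List[str]:
    """Rank candidate demonstrations by the average entropy reduction (an IG proxy)
    they induce on a small set of probe queries, after calibrating for template bias."""
    baseline = [entropy(calibrate(lm_class_probs("", q), template_bias)) for q in probe_inputs]
    scores: Dict[str, float] = {}
    for demo in candidates:
        conditioned = [entropy(calibrate(lm_class_probs(demo, q), template_bias)) for q in probe_inputs]
        scores[demo] = sum(b - c for b, c in zip(baseline, conditioned)) / len(probe_inputs)
    return sorted(candidates, key=lambda d: scores[d], reverse=True)[:k]
```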
Effects of Autonomous Learning Software on Chinese Learners’ English Performance and Course Assessment
Based on the construction of an autonomous learning platform for college English learners, a one-year teaching reform experiment was carried out with 204 subjects. Data were collected mainly through questionnaires and through the subjects' autonomous learning achievements, regular grades, final exam performance, and English listening achievements, and were analyzed with SPSS 17.0. The results reveal that the experimental class's achievements on the self-learning platform are positively correlated with its achievements in the final examination. In addition, the correlation between the experimental class's regular grades and final exam performance is more statistically significant than that of the control class; moreover, the experimental class performed significantly better than the control class on the English listening test. The vast majority of students in the experimental class hold positive attitudes toward the software; however, there is still some room to improve it.
Bi-level Guided Diffusion Models for Zero-Shot Medical Imaging Inverse Problems
In the realm of medical imaging, inverse problems aim to infer high-quality
images from incomplete, noisy measurements, with the objective of minimizing
expenses and risks to patients in clinical settings. The Diffusion Models have
recently emerged as a promising approach to such practical challenges, proving
particularly useful for the zero-shot inference of images from partially
acquired measurements in Magnetic Resonance Imaging (MRI) and Computed
Tomography (CT). A central challenge in this approach, however, is how to guide
an unconditional prediction to conform to the measurement information. Existing
methods rely on deficient projection or inefficient posterior score
approximation guidance, which often leads to suboptimal performance. In this
paper, we propose Bi-level Guided Diffusion Models (BGDM), a zero-shot imaging framework that efficiently steers the initial unconditional prediction through a bi-level guidance strategy. Specifically, BGDM first approximates an inner-level conditional posterior mean as an initial measurement-consistent reference point and then solves an outer-level proximal optimization objective to reinforce the measurement consistency. Our experimental findings, using publicly available MRI and CT medical datasets, reveal that BGDM is more effective and efficient than the baselines, faithfully generating high-fidelity medical images and substantially reducing hallucinatory artifacts in cases of severe degradation.
Comment: 19 pages, 14 figures
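A minimal sketch of the two guidance levels described above for a linear measurement model y ≈ A x, illustrating an inner-level measurement-consistent reference point followed by an outer-level proximal solve; the step size, the weight `rho`, and the closed-form solve are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def bilevel_guidance_step(
    x0_uncond: np.ndarray,   # unconditional posterior-mean prediction from the diffusion model
    y: np.ndarray,           # partial, noisy measurement
    A: np.ndarray,           # linear forward operator (e.g. subsampled Fourier/Radon rows)
    step: float = 1.0,       # inner-level gradient step size (assumed)
    rho: float = 0.5,        # outer-level proximal weight (assumed)
) -> np.ndarray:
    """One illustrative bi-level guidance update.

    Inner level: move the unconditional prediction toward measurement consistency
    with a gradient step on ||A x - y||^2, giving a conditional reference point.
    Outer level: solve the proximal objective
        min_x ||A x - y||^2 + rho * ||x - x_ref||^2
    in closed form, reinforcing measurement consistency around that reference.
    """
    # Inner level: measurement-consistent reference point.
    residual = A @ x0_uncond - y
    x_ref = x0_uncond - step * (A.T @ residual)

    # Outer level: closed-form proximal solution of (A^T A + rho I) x = A^T y + rho x_ref.
    n = x0_uncond.shape[0]
    lhs = A.T @ A + rho * np.eye(n)
    rhs = A.T @ y + rho * x_ref
    return np.linalg.solve(lhs, rhs)
```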
Dual Node and Edge Fairness-Aware Graph Partition
Fair graph partition of social networks is a crucial step toward ensuring
fair and non-discriminatory treatments in unsupervised user analysis. Current
fair partition methods typically consider node balance, a notion pursuing a
proportionally balanced number of nodes from all demographic groups, but ignore
the bias induced by imbalanced edges in each cluster. To address this gap, we
propose a notion of edge balance to measure the proportion of edges connecting different demographic groups within clusters. We analyze the relations between node
balance and edge balance, then with line graph transformations, we propose a
co-embedding framework to learn dual node and edge fairness-aware
representations for graph partition. We validate our framework through several
social network datasets and observe balanced partition in terms of both nodes
and edges, along with good utility. Moreover, we demonstrate that our fair partition can be used as pseudo labels to help graph neural networks behave fairly in node classification and link prediction tasks.
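A small sketch of how the two balance notions might be measured on a partitioned graph; the min/max ratio for node balance and the cross-group edge fraction for edge balance are assumed metric choices, not necessarily the paper's definitions.

```python
from collections import Counter
from typing import Dict, Hashable, List, Tuple

def node_balance(clusters: Dict[Hashable, int], groups: Dict[Hashable, int]) -> float:
    """Worst-case min/max ratio of demographic group counts within any cluster.
    1.0 means every cluster contains all groups in equal proportion."""
    per_cluster: Dict[int, Counter] = {}
    for node, c in clusters.items():
        per_cluster.setdefault(c, Counter())[groups[node]] += 1
    n_groups = len(set(groups.values()))
    ratios = []
    for counts in per_cluster.values():
        if len(counts) < n_groups:
            return 0.0  # some group is missing entirely from this cluster
        ratios.append(min(counts.values()) / max(counts.values()))
    return min(ratios)

def edge_balance(edges: List[Tuple[Hashable, Hashable]],
                 clusters: Dict[Hashable, int],
                 groups: Dict[Hashable, int]) -> float:
    """Fraction of intra-cluster edges connecting different demographic groups,
    reported as the minimum over clusters (higher is more balanced)."""
    cross, total = Counter(), Counter()
    for u, v in edges:
        if clusters[u] == clusters[v]:
            c = clusters[u]
            total[c] += 1
            if groups[u] != groups[v]:
                cross[c] += 1
    return min((cross[c] / total[c] for c in total if total[c]), default=0.0)
```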
Mining Label Distribution Drift in Unsupervised Domain Adaptation
Unsupervised domain adaptation aims to transfer task knowledge from a labeled source domain to a related yet unlabeled target domain, and is attracting extensive interest from academia and industry. Although tremendous efforts along this direction have been made to minimize the domain divergence, most existing methods unfortunately address only part of the picture by aligning feature representations from different domains. Beyond the discrepancy in feature space, the gap between the known source label distribution and the unknown target label distribution, recognized as label distribution drift, is another crucial factor raising domain divergence, and it has not received enough attention or been well explored. From this point, in this paper we first experimentally reveal how label distribution drift brings negative effects to current domain adaptation methods. Next, we propose the Label distribution Matching Domain Adversarial Network (LMDAN) to handle data distribution shift and label distribution drift jointly. In LMDAN, the label distribution drift problem is addressed by the proposed source-sample weighting strategy, which selects samples that contribute to positive adaptation and avoids the negative effects brought by the mismatch in label distribution. Finally, unlike general domain adaptation experiments, we modify domain adaptation datasets to create considerable label distribution drift between the source and target domains. Numerical results and empirical model analysis show that LMDAN delivers superior performance compared to other state-of-the-art domain adaptation methods under such scenarios.
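A minimal sketch of one way the source-sample weighting idea could be realized: estimate the target label distribution from pseudo-labels and reweight source samples by the resulting class ratio. The pseudo-label estimator and the smoothing constant are assumptions, not the LMDAN formulation.

```python
import numpy as np

def class_ratio_weights(source_labels: np.ndarray,
                        target_pseudo_labels: np.ndarray,
                        num_classes: int,
                        smoothing: float = 1e-3) -> np.ndarray:
    """Per-sample weights for source data under label distribution drift.

    The weight for a source sample of class c is p_target(c) / p_source(c), so classes
    over-represented in the source relative to the estimated target distribution are
    down-weighted and under-represented classes are up-weighted.
    """
    p_src = np.bincount(source_labels, minlength=num_classes) + smoothing
    p_tgt = np.bincount(target_pseudo_labels, minlength=num_classes) + smoothing
    p_src = p_src / p_src.sum()
    p_tgt = p_tgt / p_tgt.sum()
    ratio = p_tgt / p_src
    return ratio[source_labels]

# Usage note: such weights could multiply per-sample classification or adversarial losses
# during training, so that the weighted source label distribution matches the target estimate.
```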