
    The Characteristics, Harm and Anti-monopoly Measures of Digital Enterprise Monopolistic Behavior in Digital Economy: A Case Study of Amazon

    As artificial intelligence, blockchain, cloud computing, big data, and other digital technologies become new general-purpose technologies, digital enterprises exhibit new industrial-organization characteristics: non-contestability of information products, zero marginal cost of information, markets that can operate entirely online, and big data replacing physical materials as a key input. These characteristics give rise to new types of monopolistic behavior by digital enterprises. This paper selects self-preferencing for detailed analysis, taking the well-known Amazon platform as an example: (1) it summarizes the foundation (development model) that enables self-preferencing; (2) it traces the full chain of self-preferential treatment across pricing, product selection, procurement, after-sales service, and inventory; and (3) it identifies four kinds of behavior harmful to competition: weakening competitors' advantages, raising competitors' costs, reducing incentives to innovate, and damaging consumer welfare. Finally, the paper offers a reference for identifying this behavior as monopolistic from three angles: broadening the definition of digital-economy markets, refining the standards for establishing a dominant market position, and carefully assessing whether self-preferential treatment abuses that position, and it further puts forward regulatory suggestions.

    Towards Informative Few-Shot Prompt with Maximum Information Gain for In-Context Learning

    Large Language Models (LLMs) can perform In-Context Learning (ICL) by conditioning on a few demonstrations of a new downstream task. However, this learning paradigm suffers from high instability, stemming from substantial variance induced by factors such as the input distribution of the selected examples, their ordering, and the prompt format. In this work, we demonstrate that even when all these factors are held constant, the random selection of examples still results in high variance. Consequently, we explore the informativeness of data examples by quantifying the Information Gain (IG) in prediction after observing a given example candidate, and we propose to sample those with maximum IG. Additionally, we identify a template bias that can lead to unfair evaluations of IG during sampling; to mitigate it, we introduce a Calibration Before Sampling strategy. Experimental results show that the proposed method yields an average relative improvement of 14.3% across six classification tasks using three LLMs.
    Comment: Accepted to the Findings of EMNLP 2023.
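    The selection criterion can be pictured with a small sketch: score each candidate demonstration by how much it reduces the model's predictive entropy over a pool of queries, then keep the highest scorer. The scorer below is a hypothetical stand-in for a real LLM call, and the exact IG estimator in the paper may differ; this only illustrates the entropy-reduction idea.

```python
# Minimal sketch of IG-based demonstration selection for in-context learning.
# `label_probs` is a HYPOTHETICAL stand-in for an LLM that returns class
# probabilities for a query given a prompt; replace it with a real model call.
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def label_probs(prompt, query, rng):
    """Toy 3-class scorer standing in for an LLM; NOT a real API."""
    logits = rng.normal(size=3) + 0.1 * len(prompt)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def information_gain(candidate, queries, rng):
    """Mean entropy drop over `queries` when `candidate` joins the prompt."""
    gains = []
    for q in queries:
        h_before = entropy(label_probs("", q, rng))
        h_after = entropy(label_probs(candidate, q, rng))
        gains.append(h_before - h_after)
    return float(np.mean(gains))

rng = np.random.default_rng(0)
candidates = ["example A", "example B", "example C"]
queries = ["query 1", "query 2"]
best = max(candidates, key=lambda c: information_gain(c, queries, rng))
print("selected demonstration:", best)
```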

    Dual Node and Edge Fairness-Aware Graph Partition

    Fair graph partitioning of social networks is a crucial step toward ensuring fair and non-discriminatory treatment in unsupervised user analysis. Current fair partitioning methods typically consider node balance, a notion pursuing a proportionally balanced number of nodes from each demographic group per cluster, but ignore the bias induced by imbalanced edges within each cluster. To address this gap, we propose a notion of edge balance that measures the proportion of edges connecting different demographic groups within clusters. We analyze the relation between node balance and edge balance, and then, via line-graph transformations, propose a co-embedding framework to learn dual node- and edge-fairness-aware representations for graph partitioning. We validate our framework on several social network datasets and observe balanced partitions in terms of both nodes and edges, along with good utility. Moreover, we demonstrate that our fair partitions can serve as pseudo labels that help graph neural networks behave fairly in node classification and link prediction tasks.
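    To make the two fairness notions concrete, here is a small sketch computing both for a single cluster, under the illustrative assumption that each balance is the min/max ratio over group counts (for nodes) or over group-pair counts of intra-cluster edges (for edges); the paper's exact definitions may differ.

```python
# Illustrative node-balance and edge-balance measures for one cluster.
from collections import Counter

def node_balance(cluster_nodes, group):
    """min/max ratio of demographic-group counts among the cluster's nodes."""
    counts = Counter(group[v] for v in cluster_nodes)
    return min(counts.values()) / max(counts.values())

def edge_balance(cluster_nodes, edges, group):
    """min/max ratio over group pairs of edges lying inside the cluster."""
    members = set(cluster_nodes)
    pair_counts = Counter(
        tuple(sorted((group[u], group[v])))
        for u, v in edges if u in members and v in members
    )
    return min(pair_counts.values()) / max(pair_counts.values())

# Toy graph: six nodes, two demographic groups "a" and "b".
group = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
edges = [(0, 1), (0, 3), (1, 4), (2, 5), (3, 4), (2, 3)]
cluster = [0, 1, 2, 3, 4, 5]
print("node balance:", node_balance(cluster, group))          # 1.0 (3 vs 3)
print("edge balance:", edge_balance(cluster, edges, group))   # 0.25 (skewed pairs)
```

    The toy cluster is perfectly node-balanced yet badly edge-imbalanced, which is exactly the gap the edge-balance notion is meant to expose.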

    Mining Label Distribution Drift in Unsupervised Domain Adaptation

    Unsupervised domain adaptation aims to transfer task knowledge from a labeled source domain to a related yet unlabeled target domain, and is attracting extensive interest from academia and industry. Although tremendous efforts have been made to minimize domain divergence, most existing methods address only part of the picture by aligning feature representations across domains. Beyond the discrepancy in feature space, the gap between the known source label distribution and the unknown target label distribution, recognized as label distribution drift, is another crucial factor raising domain divergence, and it has not received enough attention or thorough exploration. From this point, we first experimentally reveal how label distribution drift degrades current domain adaptation methods. Next, we propose the Label distribution Matching Domain Adversarial Network (LMDAN) to handle data distribution shift and label distribution drift jointly. In LMDAN, label distribution drift is addressed by the proposed source-sample weighting strategy, which selects samples that contribute to positive adaptation and avoids the negative effects brought by mismatched label distributions. Finally, unlike standard domain adaptation experiments, we modify domain adaptation datasets to create considerable label distribution drift between the source and target domains. Numerical results and empirical model analysis show that LMDAN delivers superior performance compared with other state-of-the-art domain adaptation methods under such scenarios.
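    One plausible, much-simplified form of source-sample weighting is a static class-frequency ratio: down-weight source classes that are over-represented relative to an estimate of the target label distribution. LMDAN's actual weights are learned jointly with the adversarial objective, so the sketch below is only an illustration of the underlying intuition, with target frequencies assumed to come from pseudo-labels.

```python
# Hedged sketch: static ratio weighting for label distribution drift.
# w_i = p_target(y_i) / p_source(y_i); NOT the learned LMDAN strategy.
import numpy as np

def class_ratio_weights(source_labels, target_pseudo_labels, n_classes, eps=1e-8):
    """Per-sample weights from estimated class-frequency ratios."""
    p_src = np.bincount(source_labels, minlength=n_classes) / len(source_labels)
    p_tgt = np.bincount(target_pseudo_labels, minlength=n_classes) / len(target_pseudo_labels)
    ratio = p_tgt / (p_src + eps)
    return ratio[source_labels]

src_y = np.array([0, 0, 0, 0, 1, 1, 2])   # source skewed toward class 0
tgt_y = np.array([0, 1, 1, 1, 2, 2])      # pseudo-labels on the target
w = class_ratio_weights(src_y, tgt_y, n_classes=3)
print(np.round(w, 2))  # class-0 samples down-weighted, classes 1 and 2 up-weighted
```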

    Tactical Trajectory Planning for Stealth Unmanned Aerial Vehicle to Win the Radar Game

    In this paper, the problem of planning a tactical trajectory for a stealth unmanned aerial vehicle (UAV) to win the radar game is studied. Three principles of how to win the radar game are presented, and their use by a stealth UAV to evade radar tracking is analysed. The problem is formulated by integrating the model of the stealth UAV, the constraints of radar detection, and the multiple objectives of the game. A pseudospectral multi-phase optimal-control-based trajectory planning algorithm is developed to solve the formulated problem; the pseudospectral method is employed to seek the optimal solution with satisfactory convergence speed. Experimental results show that the proposed method is feasible and effective: by following the planned trajectory, with several switches between exposure and stealth, the stealth UAV can win the radar game.
    Defence Science Journal, 2012, 62(6), pp. 375-381, DOI: http://dx.doi.org/10.14429/dsj.62.268
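    The computational core of any pseudospectral transcription is the collocation grid and its differentiation matrix: the dynamics x' = f(x, u) are enforced as D @ X = f(X, U) at the collocation points and handed to an NLP solver. The sketch below builds the standard Chebyshev-Gauss-Lobatto matrix (Trefethen's construction) and verifies its spectral accuracy; the full multi-phase radar-game formulation is omitted.

```python
# Chebyshev-Gauss-Lobatto differentiation matrix, the building block of
# pseudospectral (collocation) optimal control transcriptions.
import numpy as np

def cheb(n):
    """Differentiation matrix D and nodes x for n+1 Chebyshev-Lobatto points on [-1, 1]."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative row sums on diagonal
    return D, x

# Sanity check: differentiate sin(x) spectrally and compare with cos(x).
D, x = cheb(16)
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
print(f"max derivative error: {err:.2e}")  # near machine precision
```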

    Knowledge Reused Outlier Detection

    Tremendous efforts have been invested in unsupervised outlier detection research, which is conducted on unlabeled data sets under abnormality assumptions. With abundant related labeled data available as auxiliary information, we consider transferring knowledge from the labeled source data to facilitate unsupervised outlier detection on a target data set. To make full use of the source knowledge, the source and target data are put together for joint clustering and outlier detection, using the source data's cluster structure as a constraint. To achieve this, the categorical utility function is employed to regularize the partition of the target data to be consistent with the source labels. With an augmented matrix, the problem is solved by a K-means-based method with a rigorous mathematical formulation and a theoretical convergence guarantee. We conducted extensive experiments and comparisons on four real-world data sets with eight outlier detection methods of different kinds. The results demonstrate the effectiveness and significant improvements of the proposed method in terms of outlier detection and cluster validity metrics. Moreover, a parameter analysis is provided as a practical guide, and a noisy-source-label analysis shows that the proposed method can handle real applications where source labels are noisy.
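    The augmented-matrix idea can be sketched crudely: stack source and target features, append one-hot source labels (zeros for target rows) scaled by a trade-off weight, and run K-means so the partition is pulled toward label consistency; target points far from their centroid are flagged. This is an illustration of the general construction, not the paper's exact categorical-utility formulation, and the names below are made up for the demo.

```python
# Hedged sketch: joint clustering with source labels as a soft constraint,
# using distance-to-centroid as the outlier score on the target portion.
import numpy as np
from sklearn.cluster import KMeans

def knowledge_reused_outliers(X_src, y_src, X_tgt, k, lam=1.0, top_frac=0.05):
    n_src = len(X_src)
    onehot = np.zeros((n_src + len(X_tgt), k))
    onehot[np.arange(n_src), y_src] = lam                 # label columns, source rows only
    A = np.hstack([np.vstack([X_src, X_tgt]), onehot])    # augmented matrix
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(A)
    d = np.linalg.norm(A - km.cluster_centers_[km.labels_], axis=1)[n_src:]
    return d, d >= np.quantile(d, 1.0 - top_frac)

rng = np.random.default_rng(0)
X_src = rng.normal(size=(100, 2)); y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = np.vstack([rng.normal(size=(95, 2)), rng.normal(6.0, 1.0, size=(5, 2))])
scores, flags = knowledge_reused_outliers(X_src, y_src, X_tgt, k=2)
print("flagged target outliers:", np.where(flags)[0])     # the 5 shifted points
```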

    Affine Transformation Edited and Refined Deep Neural Network for Quantitative Susceptibility Mapping

    Deep neural networks have demonstrated great potential in solving dipole inversion for Quantitative Susceptibility Mapping (QSM). However, the performance of most existing deep learning methods degrades drastically with mismatched sequence parameters such as acquisition orientation and spatial resolution. We propose an end-to-end AFfine Transformation Edited and Refined (AFTER) deep neural network for QSM that is robust against arbitrary acquisition orientations and spatial resolutions up to 0.6 mm isotropic at the finest. The AFTER-QSM neural network starts with a forward affine-transformation layer, followed by a U-Net for dipole inversion, then an inverse affine-transformation layer, followed by a Residual Dense Network (RDN) for QSM refinement. Simulation and in-vivo experiments demonstrated that the proposed AFTER-QSM network architecture has excellent generalizability: it can successfully reconstruct susceptibility maps from highly oblique and anisotropic scans, yielding the best image-quality assessments in simulation tests and suppressed streaking artifacts and noise levels in in-vivo experiments compared with other methods. Furthermore, ablation studies showed that the RDN refinement network significantly reduces the image blurring and susceptibility underestimation caused by affine transformations. In addition, the AFTER-QSM network shortens the reconstruction time from minutes with conventional methods to only a few seconds.
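    The four-stage pipeline reads naturally as a module skeleton: resample into a canonical frame, invert the dipole, resample back, then refine residually. The PyTorch sketch below mirrors that flow; the small conv stacks are deliberately crude stand-ins for the actual U-Net and RDN, and the affine matrices are assumed given.

```python
# Skeletal sketch of the AFTER-QSM pipeline; conv stacks are simplified
# placeholders, NOT the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

def affine_resample(x, theta):
    """Resample volume x (N, C, D, H, W) under a 3x4 affine matrix theta."""
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

class AfterQSM(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # Stand-in for the dipole-inversion U-Net.
        self.inversion = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )
        # Stand-in for the RDN refinement network.
        self.refine = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, field, theta_fwd, theta_inv):
        x = affine_resample(field, theta_fwd)  # rotate into the training frame
        x = self.inversion(x)                  # dipole inversion
        x = affine_resample(x, theta_inv)      # rotate back to the scanner frame
        return x + self.refine(x)              # residual refinement vs. blurring

field = torch.randn(1, 1, 32, 32, 32)          # toy local-field volume
theta = torch.eye(3, 4).unsqueeze(0)           # identity affine for the demo
print(AfterQSM()(field, theta, theta).shape)   # torch.Size([1, 1, 32, 32, 32])
```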

    On the Inflation of KNN-Shapley Value

    Shapley-value-based data valuation methods, originating from cooperative game theory, quantify the usefulness of each individual sample by considering its contribution to all possible training subsets. Despite their extensive applications, these methods encounter the challenge of value inflation: while samples with negative Shapley values are detrimental, some with positive values can also be harmful. This challenge prompts two fundamental questions: whether zero is a suitable threshold for distinguishing detrimental from beneficial samples, and how to determine an appropriate threshold. To address these questions, we focus on KNN-Shapley and propose Calibrated KNN-Shapley (CKNN-Shapley), which calibrates zero as the threshold distinguishing detrimental samples from beneficial ones by mitigating the negative effects of small training subsets. Through extensive experiments, we demonstrate the effectiveness of CKNN-Shapley in alleviating data valuation inflation, detecting detrimental samples, and assessing data quality. We also extend our approach beyond conventional classification settings to diverse and practical scenarios such as learning with mislabeled data, online learning with streaming data, and active learning for label annotation.
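    For context, the standard (uncalibrated) KNN-Shapley that CKNN-Shapley builds on admits an exact closed-form recursion (Jia et al., 2019): sort training points by distance to a test point and sweep values inward from the farthest point. The sketch below implements that baseline recursion; the inflated positive values it can assign are precisely the failure mode the calibrated variant targets.

```python
# Exact KNN-Shapley recursion for a single test point (Jia et al., 2019).
import numpy as np

def knn_shapley(X_train, y_train, x_test, y_test, K):
    """Shapley value of each training point under a K-NN utility."""
    n = len(X_train)
    order = np.argsort(np.linalg.norm(X_train - x_test, axis=1))  # nearest first
    match = (y_train[order] == y_test).astype(float)
    s = np.zeros(n)
    s[n - 1] = match[n - 1] / n                     # farthest point
    for i in range(n - 2, -1, -1):                  # sweep inward
        s[i] = s[i + 1] + (match[i] - match[i + 1]) / K * min(K, i + 1) / (i + 1)
    values = np.empty(n)
    values[order] = s                               # map back to original indices
    return values

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2)); y = (X[:, 0] > 0).astype(int)
print(np.round(knn_shapley(X, y, np.array([1.0, 0.0]), 1, K=3), 3))
```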