
    Power Scaling of Uplink Massive MIMO Systems with Arbitrary-Rank Channel Means

    Full text link
    This paper investigates the uplink achievable rates of massive multiple-input multiple-output (MIMO) antenna systems in Ricean fading channels, using maximal-ratio combining (MRC) and zero-forcing (ZF) receivers, assuming perfect and imperfect channel state information (CSI). In contrast to previous relevant works, the fast-fading MIMO channel matrix is assumed to have an arbitrary-rank deterministic component as well as a Rayleigh-distributed random component. We derive tractable expressions for the achievable uplink rate in the large-antenna limit, along with approximations that hold for any finite number of antennas. Based on these analytical results, we obtain the scaling law that the users' transmit power should satisfy while maintaining a desirable quality of service. In particular, it is found that, regardless of the Ricean K-factor, in the case of perfect CSI the approximations converge to the same constant value as the exact results as the number of base station antennas, M, grows large, while the transmit power of each user can be scaled down proportionally to 1/M. If CSI is estimated with uncertainty, the same result holds true, but only when the Ricean K-factor is non-zero. Otherwise, if the channel experiences Rayleigh fading, the transmit power of each user can only be cut proportionally to 1/√M. In addition, we show that with an increasing Ricean K-factor, the uplink rates converge to fixed values for both MRC and ZF receivers.
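    The central claim here is that, with perfect CSI, the per-user transmit power can be cut as 1/M without the rate collapsing. Below is a minimal Monte Carlo sketch (not the paper's analysis) that illustrates this for a single-user uplink with an M-antenna MRC receiver; the values of E_u, the Ricean K-factor, and the trial count are arbitrary choices for illustration.

```python
# Illustrative check of the 1/M power-scaling law (not the paper's code).
# Single-user uplink, M-antenna MRC receiver, Ricean fading, perfect CSI;
# the per-user transmit power is cut as p = E_u / M.
import numpy as np

rng = np.random.default_rng(0)

def mrc_rate(M, E_u=10.0, K=3.0, trials=2000):
    """Average achievable rate log2(1 + p*||h||^2) with p = E_u / M."""
    p = E_u / M
    # Ricean channel: deterministic LoS part plus a Rayleigh-distributed part.
    h_bar = np.ones((trials, M)) * np.sqrt(K / (K + 1.0))
    h_w = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2.0)
    h = h_bar + np.sqrt(1.0 / (K + 1.0)) * h_w
    snr = p * np.sum(np.abs(h) ** 2, axis=1)   # MRC output SNR (unit noise power)
    return np.mean(np.log2(1.0 + snr))

for M in [16, 64, 256, 1024]:
    print(M, round(mrc_rate(M), 3))
# As M grows, the rate approaches log2(1 + E_u) ~ 3.46 bit/s/Hz, i.e. scaling the
# transmit power down proportionally to 1/M still sustains a non-vanishing rate.
```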

    Power Allocation Schemes for Multicell Massive MIMO Systems

    Full text link
    This paper investigates the sum-rate gains brought by power allocation strategies in multicell massive multiple-input multiple-output (MIMO) systems, assuming time-division duplex transmission. For both uplink and downlink, we derive tractable expressions for the achievable rate with zero-forcing receivers and precoders, respectively. To avoid high-complexity joint optimization across the network, we propose a scheduling mechanism for power allocation, where in a single time slot only cells that do not interfere with each other adjust their transmit powers. Based on this, corresponding transmit power allocation strategies are derived, aimed at maximizing the per-cell sum rate. These schemes are shown to bring considerable gains over equal power allocation for practical antenna configurations (e.g., up to a few hundred antennas). However, with a fixed number of users N, these gains diminish as M tends to infinity, and equal power allocation becomes optimal. A different conclusion is drawn for the case where both M and N grow large together, in which case: (i) improved rates are achieved as M grows with a fixed M/N ratio, and (ii) the relative gains over equal power allocation diminish as M/N grows. Moreover, we provide the values of M/N corresponding to an acceptable power allocation gain threshold, which can be used to determine when the proposed power allocation schemes yield appreciable gains and when they do not. From the network point of view, the proposed scheduling approach achieves almost the same performance as joint power allocation after one scheduling round, with much reduced complexity.
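    For intuition on why per-cell power allocation helps at practical antenna counts, the following sketch compares an equal power split with classical water-filling for a sum rate of the form Σ_k log2(1 + p_k g_k), which is the shape per-user ZF rates take. The gains g_k and the budget P are invented numbers, and water-filling is used only as a stand-in; it is not necessarily the exact strategy derived in the paper.

```python
# Hedged sketch: water-filling vs. equal power split for sum_k log2(1 + p_k * g_k)
# subject to sum_k p_k <= P.  A standard textbook allocation, not the paper's scheme.
import numpy as np

def waterfill(g, P):
    """Return p maximizing sum(log2(1 + p*g)) with sum(p) = P, p >= 0."""
    g = np.asarray(g, dtype=float)
    order = np.argsort(-g)                    # strongest users first
    inv = 1.0 / g[order]
    p = np.zeros_like(g)
    for n in range(len(g), 0, -1):            # try supporting the n strongest users
        mu = (P + inv[:n].sum()) / n          # common water level
        if mu > inv[n - 1]:                   # all n powers come out positive
            p[order[:n]] = mu - inv[:n]
            break
    return p

g = np.array([5.0, 2.0, 0.8, 0.1])            # effective ZF channel gains (assumed)
P = 4.0                                       # per-cell power budget (assumed)
p_eq = np.full_like(g, P / len(g))
p_wf = waterfill(g, P)
rate = lambda p: np.log2(1.0 + p * g).sum()
print("equal:", round(rate(p_eq), 3), "water-filling:", round(rate(p_wf), 3))
```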

    Heterformer: Transformer-based Deep Node Representation Learning on Heterogeneous Text-Rich Networks

    Full text link
    Representation learning on networks aims to derive a meaningful vector representation for each node, thereby facilitating downstream tasks such as link prediction, node classification, and node clustering. In heterogeneous text-rich networks, this task is more challenging due to (1) presence or absence of text: some nodes are associated with rich textual information, while others are not; and (2) diversity of types: nodes and edges of multiple types form a heterogeneous network structure. As pretrained language models (PLMs) have demonstrated their effectiveness in obtaining widely generalizable text representations, substantial effort has been made to incorporate PLMs into representation learning on text-rich networks. However, few of these approaches can effectively and jointly model the heterogeneous network structure and the rich textual semantics of each node. In this paper, we propose Heterformer, a Heterogeneous Network-Empowered Transformer that performs contextualized text encoding and heterogeneous structure encoding in a unified model. Specifically, we inject heterogeneous structure information into each Transformer layer when encoding node texts. Meanwhile, Heterformer is capable of characterizing node/edge type heterogeneity and of encoding nodes with or without texts. We conduct comprehensive experiments on three tasks (i.e., link prediction, node classification, and node clustering) on three large-scale datasets from different domains, where Heterformer outperforms competitive baselines significantly and consistently.
    Comment: KDD 2023. Code: https://github.com/PeterGriffinJin/Heterformer
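    As a rough illustration of what injecting structure information into a Transformer text encoder can look like, the toy PyTorch module below prepends a pooled neighbor embedding as a virtual network token before self-attention over a node's text tokens. This is a conceptual sketch only; the module name, dimensions, and pooling are invented and do not reproduce the actual Heterformer architecture.

```python
# Toy sketch of structure-aware text encoding: prepend a "virtual neighbor token"
# built from network embeddings, then run a standard Transformer encoder layer.
import torch
import torch.nn as nn

class StructureAwareTextEncoder(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, nhead=4):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.neigh_proj = nn.Linear(d_model, d_model)   # projects pooled neighbor embeddings
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)

    def forward(self, token_ids, neighbor_embs):
        # token_ids: (batch, seq_len); neighbor_embs: (batch, n_neighbors, d_model)
        text = self.tok_emb(token_ids)                       # (batch, seq_len, d)
        struct = self.neigh_proj(neighbor_embs.mean(dim=1))  # (batch, d) pooled neighbors
        x = torch.cat([struct.unsqueeze(1), text], dim=1)    # prepend virtual network token
        h = self.layer(x)
        return h[:, 0]                                       # virtual token as node embedding

enc = StructureAwareTextEncoder()
ids = torch.randint(0, 1000, (2, 16))          # two nodes, 16 text tokens each
neigh = torch.randn(2, 5, 64)                  # five neighbor embeddings per node
print(enc(ids, neigh).shape)                   # torch.Size([2, 64])
```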

    Robust offline reinforcement learning with heavy-tailed rewards

    Get PDF
    This paper endeavors to augment the robustness of offline reinforcement learning (RL) in scenarios laden with heavy-tailed rewards, a prevalent circumstance in real-world applications. We propose two algorithmic frameworks, ROAM and ROOM, for robust off-policy evaluation (OPE) and offline policy optimization (OPO), respectively. Central to our frameworks is the strategic incorporation of the median-of-means method with offline RL, enabling straightforward uncertainty estimation for the value function estimator. This not only adheres to the principle of pessimism in OPO but also adeptly manages heavy-tailed rewards. Theoretical results and extensive experiments demonstrate that our two frameworks outperform existing methods when the logged dataset exhibits heavy-tailed reward distributions. The implementation of the proposal is available at https://github.com/Mamba413/ROOM.
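    The median-of-means idea at the core of ROAM and ROOM is easy to state: split the reward (or return) samples into blocks, average within each block, and take the median of the block means. The toy sketch below (not the authors' code) contrasts it with the naive sample mean on heavy-tailed Pareto rewards; the block count and distribution parameters are arbitrary.

```python
# Median-of-means (MoM) vs. naive mean on heavy-tailed rewards (illustrative only).
import numpy as np

def median_of_means(x, n_blocks=10):
    x = np.random.default_rng(1).permutation(x)   # shuffle before blocking
    blocks = np.array_split(x, n_blocks)
    return np.median([b.mean() for b in blocks])

rng = np.random.default_rng(0)
rewards = rng.pareto(1.5, size=5000) + 1.0        # heavy-tailed rewards, true mean = 3
print("naive mean     :", rewards.mean())
print("median of means:", median_of_means(rewards))
```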

    Design concept evaluation based on rough number and information entropy theory

    Get PDF
    Concept evaluation at the early phase of product development plays a crucial role in new product development, as it determines the direction of the subsequent design activities. However, the evaluation information at this stage mainly comes from experts' judgments, which are subjective and imprecise. How to manage this subjectivity to reduce evaluation bias is a major challenge in design concept evaluation. This paper proposes a comprehensive evaluation method which combines information entropy theory and rough numbers. Rough numbers are first used to aggregate individual judgments and priorities and to handle the vagueness of a group decision-making environment. A rough-number-based information entropy method is then proposed to determine the relative weights of the evaluation criteria. The composite performance values based on rough numbers are then calculated to rank the candidate design concepts. The results of a practical case study on the concept evaluation of an industrial robot design show that the integrated evaluation model can effectively strengthen the objectivity of the decision-making process.
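    To make the two ingredients concrete, the sketch below shows (a) how a rough-number interval can be formed from a group of expert ratings and (b) how entropy weights can be computed for the criteria. The expert ratings are invented, and taking the interval midpoint as a crisp score is a simplifying assumption rather than the paper's exact procedure.

```python
# Hedged sketch of rough-number aggregation plus entropy-based criteria weights.
import numpy as np

def rough_interval(ratings):
    """Rough-number interval [lower, upper] aggregating one group of expert ratings."""
    r = np.asarray(ratings, dtype=float)
    lower = np.mean([r[r <= v].mean() for v in r])   # mean of the lower approximations
    upper = np.mean([r[r >= v].mean() for v in r])   # mean of the upper approximations
    return lower, upper

def entropy_weights(X):
    """Entropy weights of the criteria (columns) of a concepts-by-criteria matrix."""
    P = X / X.sum(axis=0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
    return (1.0 - e) / (1.0 - e).sum()

# Three expert ratings per (concept, criterion); 2 concepts x 3 criteria (invented).
ratings = [[[7, 8, 6], [5, 6, 5], [8, 9, 7]],
           [[6, 6, 7], [8, 7, 9], [5, 4, 6]]]
crisp = np.array([[np.mean(rough_interval(c)) for c in concept] for concept in ratings])
w = entropy_weights(crisp)
print("criteria weights:", np.round(w, 3))
print("concept scores  :", np.round(crisp @ w, 3))
```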

    Robust Offline Policy Evaluation and Optimization with Heavy-Tailed Rewards

    Full text link
    This paper endeavors to augment the robustness of offline reinforcement learning (RL) in scenarios laden with heavy-tailed rewards, a prevalent circumstance in real-world applications. We propose two algorithmic frameworks, ROAM and ROOM, for robust off-policy evaluation (OPE) and offline policy optimization (OPO), respectively. Central to our frameworks is the strategic incorporation of the median-of-means method with offline RL, enabling straightforward uncertainty estimation for the value function estimator. This not only adheres to the principle of pessimism in OPO but also adeptly manages heavy-tailed rewards. Theoretical results and extensive experiments demonstrate that our two frameworks outperform existing methods when the logged dataset exhibits heavy-tailed reward distributions.
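    Compared with the earlier entry, this version emphasizes the pessimism side: in offline policy optimization, a candidate policy can be scored by a lower quantile of its block means rather than by the naive average. The sketch below is an illustrative construction, not the paper's estimator; the candidate return samples and the 25% quantile are arbitrary choices.

```python
# Pessimistic policy selection from block means (illustrative, heavily simplified).
import numpy as np

def pessimistic_value(returns, n_blocks=10, q=0.25):
    """Lower quantile of the block means of a policy's return samples."""
    blocks = np.array_split(np.asarray(returns, dtype=float), n_blocks)
    return np.quantile([b.mean() for b in blocks], q)

rng = np.random.default_rng(0)
candidates = {                                      # hypothetical candidate policies
    "policy_a": rng.pareto(1.5, 2000) + 1.0,        # heavy-tailed returns, true mean 3
    "policy_b": rng.normal(2.5, 1.0, 2000),         # lighter-tailed returns, true mean 2.5
}
scores = {name: pessimistic_value(r) for name, r in candidates.items()}
print(scores, "->", max(scores, key=scores.get))    # keep the best pessimistic score
```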