
    Zero-Delay Joint Source-Channel Coding in the Presence of Interference Known at the Encoder

    Zero-delay transmission of a Gaussian source over an additive white Gaussian noise (AWGN) channel is considered in the presence of an additive Gaussian interference signal. The mean squared error (MSE) distortion is minimized under an average power constraint, assuming that the interference signal is known at the transmitter. Optimality of simple linear transmission does not hold in this setting due to the presence of the known interference signal. While the optimal encoder-decoder pair remains an open problem, various non-linear transmission schemes are proposed in this paper. In particular, interference concentration (ICO) and one-dimensional lattice (1DL) strategies, using both uniform and non-uniform quantization of the interference signal, are studied. It is shown that, in contrast to typical scalar quantization of Gaussian sources, a non-uniform quantizer whose quantization intervals become smaller farther from zero improves the performance. Given that the optimal decoder is the minimum MSE (MMSE) estimator, a necessary condition for the optimality of the encoder is derived, and a numerically optimized encoder (NOE) satisfying this condition is obtained. The numerical results show that 1DL with non-uniform quantization performs closest to the numerically optimized encoder among the proposed schemes, while requiring significantly lower complexity.
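
    As a rough, hedged illustration of the lattice idea above (not the paper's optimized ICO or 1DL construction), the Python sketch below precodes a scalar Gaussian source against a known interference sample with a one-dimensional modulo-lattice operation and a shared dither; the step size, scalings, and noise levels are assumptions chosen only to make the simulation run.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000
        P, var_s, var_z = 1.0, 4.0, 0.25             # power budget, interference and noise variances (assumed)

        v = rng.normal(size=n)                       # unit-variance Gaussian source
        s = rng.normal(scale=np.sqrt(var_s), size=n) # interference known at the encoder only
        z = rng.normal(scale=np.sqrt(var_z), size=n) # AWGN

        Delta = np.sqrt(12 * P)                      # cell width so the folded signal has power P
        d = rng.uniform(-Delta / 2, Delta / 2, n)    # dither shared with the receiver
        alpha = P / (P + var_z)                      # MMSE-style scaling
        beta = 0.3 * Delta                           # source scaling inside the cell (not optimized)

        def fold(u):                                 # modulo reduction to [-Delta/2, Delta/2)
            return np.mod(u + Delta / 2, Delta) - Delta / 2

        x = fold(beta * v - alpha * s + d)           # transmitted signal, E[x^2] close to P
        y = x + s + z                                # AWGN channel with additive interference
        v_hat = fold(alpha * y - d) / beta           # receiver: scale, remove dither, fold, rescale
        print("MSE:", np.mean((v - v_hat) ** 2))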

    Dynamic resource constrained multi-project scheduling problem with weighted earliness/tardiness costs

    In this study, a conceptual framework is given for the dynamic resource-constrained multi-project scheduling problem with weighted earliness/tardiness costs (DRCMPSPWET), and a mathematical programming formulation of the problem is provided. In DRCMPSPWET, a project arrives on top of an existing project portfolio, and a due date has to be quoted for the new project while minimizing the costs of schedule changes. The objective function consists of the weighted earliness/tardiness costs of the activities of the existing projects in the current baseline schedule, plus a term that increases linearly with the anticipated completion time of the new project. An approach based on iterated local search is developed for large instances of this problem. In order to analyze the performance and behavior of the proposed method, a new multi-project data set is created by controlling the total number of activities, the due date tightness, the due date range, the number of resource types, and the completion time factor in each instance. A series of computational experiments is carried out to test the performance of the local search approach. Exact solutions are provided for the small instances. The results indicate that the local search heuristic performs well in terms of both solution quality and solution time.
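
    The sketch below spells out the objective described above on toy data: weighted earliness/tardiness costs of the existing activities plus a term growing linearly with the new project's completion time. The field names and weights are hypothetical, not the paper's formulation or data set.

        # Hedged sketch of the DRCMPSPWET objective on made-up data.
        def schedule_cost(activities, new_project_completion, completion_weight):
            """activities: dicts with finish time, due date, and earliness/tardiness weights."""
            cost = 0.0
            for a in activities:
                earliness = max(0, a["due"] - a["finish"])
                tardiness = max(0, a["finish"] - a["due"])
                cost += a["w_early"] * earliness + a["w_tardy"] * tardiness
            return cost + completion_weight * new_project_completion

        existing = [
            {"finish": 12, "due": 10, "w_early": 1.0, "w_tardy": 3.0},  # 2 periods tardy
            {"finish": 7,  "due": 9,  "w_early": 2.0, "w_tardy": 2.0},  # 2 periods early
        ]
        print(schedule_cost(existing, new_project_completion=20, completion_weight=0.5))  # 20.0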

    Improved delivery rate-cache capacity trade-off for centralized coded caching

    The centralized coded caching problem is considered, in which a server with N distinct files, each of size F bits, serves K users, each equipped with a cache of capacity MF bits. The server is allowed to proactively cache contents into the user terminals during the placement phase, without knowing the particular user requests. After the placement phase, each user requests one of the N files from the server, and all the users' requests are satisfied simultaneously by the server through an error-free shared link during the delivery phase. A novel coded caching algorithm is proposed, which is shown to achieve a smaller delivery rate than the existing coded caching schemes in the literature for a range of N and K values, particularly when the number of files is larger than the number of users in the system.
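
    For context, the sketch below evaluates the delivery rate of the classical centralized coded caching scheme of Maddah-Ali and Niesen (with memory sharing between integer cache-ratio points), the kind of baseline the proposed algorithm improves upon for some N and K. It is not the new algorithm itself, and it assumes N >= K.

        import numpy as np

        def man_delivery_rate(N, K, M):
            """Baseline centralized coded caching delivery rate (normalized by F bits), for N >= K."""
            t = K * M / N                               # cache-ratio parameter of the placement
            t0, t1 = int(np.floor(t)), int(np.ceil(t))
            rate_at = lambda tt: (K - tt) / (tt + 1)    # rate at an integer cache-ratio point
            if t0 == t1:
                return rate_at(t0)
            lam = t1 - t                                # memory-sharing weight toward the lower point
            return lam * rate_at(t0) + (1 - lam) * rate_at(t1)

        # Example: N = 20 files, K = 10 users, cache size M = 4 files per user
        print(man_delivery_rate(N=20, K=10, M=4))       # 8/3 files' worth of bits over the shared link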

    Caching and coded delivery over Gaussian broadcast channels for energy efficiency

    A cache-aided K-user Gaussian broadcast channel (BC) is considered. The transmitter has a library of N equal-rate files, from which each user demands one. The impact of equal-capacity receiver cache memories on the minimum transmit power required to satisfy all user demands is studied. Considering uniformly random demands across the library, both the minimum average power (averaged over all demand combinations) and the minimum peak power (the minimum power required to satisfy all demand combinations) are studied. Upper bounds are presented on the minimum required average and peak transmit power as a function of the cache capacity, considering both centralized and decentralized caching. Lower bounds on the minimum required average and peak power are also derived under the assumption of uncoded cache placement. The bounds for both the peak and average power are shown to be tight in the centralized scenario through numerical simulations. The results in this paper show that proactive caching and coded delivery can provide significant energy savings in wireless networks.
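
    The toy computation below conveys why smaller delivery rates translate into lower transmit power on a Gaussian BC: it simply inverts the point-to-point capacity 0.5*log2(1 + P/sigma^2) for the worst user to find the power needed to multicast a common coded message. It is a simplified stand-in for, not a reproduction of, the paper's centralized and decentralized delivery schemes; all numbers are assumptions.

        def multicast_power(rate_bits_per_use, noise_vars):
            """Power needed to multicast a common message at the given rate (bits per real
            channel use) to every user, i.e. enough power for the worst noise variance."""
            return max(noise_vars) * (2 ** (2 * rate_bits_per_use) - 1)

        # As caching shrinks the delivery rate, the required peak power drops sharply
        for delivery_rate in (2.0, 1.0, 0.5):
            print(delivery_rate, multicast_power(delivery_rate, noise_vars=[0.5, 1.0, 2.0]))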

    Federated learning over wireless fading channels

    We study federated machine learning at the wireless network edge, where limited-power wireless devices, each with its own dataset, build a joint model with the help of a remote parameter server (PS). We consider a bandwidth-limited fading multiple access channel (MAC) from the wireless devices to the PS, and propose various techniques to implement distributed stochastic gradient descent (DSGD) over this shared noisy wireless channel. We first propose a digital DSGD (D-DSGD) scheme, in which one device is selected opportunistically for transmission at each iteration based on the channel conditions; the scheduled device quantizes its gradient estimate to a finite number of bits imposed by the channel condition, and transmits these bits to the PS in a reliable manner. Next, motivated by the additive nature of the wireless MAC, we propose a novel analog communication scheme, referred to as the compressed analog DSGD (CA-DSGD), where the devices first sparsify their gradient estimates while accumulating error from previous iterations, and project the resultant sparse vector into a low-dimensional vector for bandwidth reduction. We also design a power allocation scheme to align the received gradient vectors at the PS in an efficient manner. Numerical results show that D-DSGD outperforms other digital approaches in the literature; however, in general the proposed CA-DSGD algorithm converges faster than the D-DSGD scheme, and reaches a higher level of accuracy. We have observed that the gap between the analog and digital schemes increases when the datasets of devices are not independent and identically distributed (i.i.d.). Furthermore, the performance of the CA-DSGD scheme is shown to be robust against imperfect channel state information (CSI) at the devices. Overall, these results show clear advantages for the proposed analog over-the-air DSGD scheme, which suggests that learning and communication algorithms should be designed jointly to achieve the best end-to-end performance in machine learning applications at the wireless edge.
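
    The sketch below illustrates, at one device, the two ingredients of the analog scheme described above: top-k sparsification with error accumulation, followed by a random linear projection to fit the channel bandwidth. The dimensions, sparsity level, and projection matrix are illustrative assumptions rather than the exact CA-DSGD construction.

        import numpy as np

        rng = np.random.default_rng(1)
        d, s, m = 10_000, 100, 500                   # gradient size, sparsity level, channel bandwidth (assumed)
        A = rng.normal(size=(m, d)) / np.sqrt(m)     # projection shared by the devices and the PS

        error = np.zeros(d)                          # accumulated compression error at this device

        def ca_dsgd_transmit(grad, error):
            """Sparsify the error-compensated gradient, keep the residual, project to m symbols."""
            g = grad + error
            keep = np.argsort(np.abs(g))[-s:]        # indices of the s largest-magnitude entries
            sparse = np.zeros_like(g)
            sparse[keep] = g[keep]
            return A @ sparse, g - sparse            # analog symbols to send, error carried forward

        grad = rng.normal(size=d)                    # stand-in for a local stochastic gradient
        tx, error = ca_dsgd_transmit(grad, error)
        print(tx.shape)                              # (500,): what is power-scaled and sent over the MAC

    Over the MAC, the PS would receive the noisy sum of the devices' projected vectors, from which it estimates the aggregate gradient; the power allocation that aligns these contributions is omitted here.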

    Cache-aided content delivery over erasure broadcast channels

    A cache-aided broadcast network is studied, in which a server delivers contents to a group of receivers over a packet erasure broadcast channel (BC). The receivers are divided into two sets with regard to their channel qualities: the weak and strong receivers, where all the weak receivers have statistically worse channel qualities than all the strong receivers. The weak receivers, in order to compensate for the high erasure probability they encounter over the channel, are equipped with cache memories of equal size, while the receivers in the strong set have no caches. Data can be pre-delivered to the weak receivers' caches over the off-peak traffic period, before the receivers reveal their demands. Allowing arbitrary erasure probabilities for the weak and strong receivers, a joint caching and channel coding scheme is proposed, which divides each file into several subfiles and applies a different caching and delivery scheme to each subfile. It is shown that all the receivers, even those without any cache memories, benefit from the presence of caches across the network. An information-theoretic trade-off between the cache size and the achievable rate is formulated, and the proposed scheme is shown to improve upon the state of the art in terms of the achievable trade-off.
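
    As a back-of-the-envelope illustration (not the paper's joint caching and channel coding scheme), the snippet below counts the channel uses needed to deliver the uncached part of a file over a packet erasure channel of capacity 1 minus the erasure probability, showing why caches at the weak receivers free up channel resources that can also benefit the strong receivers.

        def channel_uses_to_deliver(file_bits, cached_bits, erasure_prob):
            """Channel uses to reliably deliver the uncached part of one file to a receiver
            whose packet erasure channel has capacity (1 - erasure_prob)."""
            return (file_bits - cached_bits) / (1.0 - erasure_prob)

        # A weak receiver with erasure probability 0.6: half the file cached vs. no cache
        print(channel_uses_to_deliver(1e6, 5e5, 0.6))   # 1.25e6 channel uses
        print(channel_uses_to_deliver(1e6, 0.0, 0.6))   # 2.5e6 channel uses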

    Diagnosis of central venous catheter-related thrombus by transesophageal echocardiography


    AirNet: Neural Network Transmission over the Air

    State-of-the-art performance for many emerging edge applications is achieved by deep neural networks (DNNs). Often, the employed DNNs are location- and time-dependent, and the parameters of a specific DNN must be delivered from an edge server to the edge device rapidly and efficiently to carry out time-sensitive inference tasks. This can be considered a joint source-channel coding (JSCC) problem, in which the goal is not to recover the DNN coefficients with minimum distortion, but in a manner that provides the highest accuracy in the downstream task. For this purpose, we introduce AirNet, a novel training and analog transmission method to deliver DNNs over the air. We first train the DNN with noise injection to counter the wireless channel noise. We also employ pruning to identify the most significant DNN parameters that can be delivered within the available channel bandwidth, together with knowledge distillation and nonlinear bandwidth expansion to provide better error protection for the most important network parameters. We show that AirNet achieves significantly higher test accuracy than the separation-based alternative, and exhibits graceful degradation with channel quality.
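
    The sketch below mimics one slice of the pipeline described above: prune the smallest-magnitude parameters of a weight matrix, power-normalize the survivors, and send them as analog symbols over an AWGN channel. It is a toy stand-in under assumed sizes and SNR; the actual AirNet additionally relies on noise-injection training, knowledge distillation, and bandwidth expansion.

        import numpy as np

        rng = np.random.default_rng(2)

        def transmit_params_over_awgn(params, keep_ratio, snr_db):
            """Prune, power-normalize, and send parameters as analog symbols over AWGN."""
            flat = params.ravel().copy()
            k = int(keep_ratio * flat.size)
            flat[np.argsort(np.abs(flat))[:-k]] = 0.0          # zero out the pruned parameters
            mask = flat != 0.0
            sent = flat[mask]
            scale = np.sqrt(sent.size / np.sum(sent ** 2))     # unit average transmit power
            noise = rng.normal(scale=10 ** (-snr_db / 20), size=sent.size)
            received = sent * scale + noise
            recovered = np.zeros_like(flat)
            recovered[mask] = received / scale                 # receiver undoes the power scaling
            return recovered.reshape(params.shape)

        W = rng.normal(size=(64, 64))                          # stand-in for a DNN weight matrix
        W_hat = transmit_params_over_awgn(W, keep_ratio=0.25, snr_db=10)
        print("parameter MSE:", np.mean((W - W_hat) ** 2))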

    Remote contextual bandits

    We consider a remote contextual multi-armed bandit (CMAB) problem, in which the decision-maker observes the context and the reward, but must communicate the actions to be taken by the agents over a rate-limited communication channel. This can model, for example, a personalized ad placement application, where the content owner observes the individual visitors to its website, and hence has the context information, but must convey the ads to be shown to each visitor to a separate entity that manages the marketing content. In this remote CMAB (R-CMAB) problem, the constraint on the communication rate between the decision-maker and the agents imposes a trade-off between the number of bits sent per agent and the acquired average reward. We are particularly interested in characterizing the rate required to achieve sub-linear regret. Consequently, this can be considered a policy compression problem, where the distortion metric is induced by the learning objectives. We first study the fundamental information-theoretic limits of this problem by letting the number of agents go to infinity, and study the regret achieved when the Thompson sampling strategy is adopted. In particular, we identify two distinct rate regions resulting in linear and sub-linear regret behavior, respectively. Then, we provide upper bounds on the achievable regret when the decision-maker can reliably transmit the policy without distortion.
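
    The sketch below runs Thompson sampling at the decision-maker for a small synthetic contextual bandit and compares the naive log2(number of actions) bits per agent with the empirical entropy of the induced per-context action distribution, a crude proxy for how compressible the policy is. All sizes and reward parameters are made up, and the entropy comparison only illustrates the rate question studied above; it is not the paper's analysis.

        import numpy as np

        rng = np.random.default_rng(3)
        n_contexts, n_actions, T = 4, 8, 5000
        true_p = rng.uniform(0.1, 0.9, size=(n_contexts, n_actions))  # hypothetical Bernoulli reward means

        alpha = np.ones((n_contexts, n_actions))    # Beta posteriors kept at the decision-maker
        beta = np.ones((n_contexts, n_actions))
        counts = np.zeros((n_contexts, n_actions))  # how often each action is conveyed per context

        for _ in range(T):
            c = rng.integers(n_contexts)                 # observed context
            theta = rng.beta(alpha[c], beta[c])          # Thompson sample of each arm's mean
            a = int(np.argmax(theta))                    # action that must be sent to the remote agent
            r = rng.random() < true_p[c, a]              # reward observed at the decision-maker
            alpha[c, a] += r
            beta[c, a] += 1 - r
            counts[c, a] += 1

        freq = counts / counts.sum(axis=1, keepdims=True)
        logf = np.log2(freq, where=freq > 0, out=np.zeros_like(freq))
        print("naive rate:", np.log2(n_actions), "bits/agent")
        print("empirical policy entropy per context:", -(freq * logf).sum(axis=1))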
