
    Zero-Delay Joint Source-Channel Coding in the Presence of Interference Known at the Encoder

    Zero-delay transmission of a Gaussian source over an additive white Gaussian noise (AWGN) channel is considered in the presence of an additive Gaussian interference signal. The mean squared error (MSE) distortion is minimized under an average power constraint, assuming that the interference signal is known at the transmitter. Optimality of simple linear transmission does not hold in this setting due to the presence of the known interference signal. While the optimal encoder-decoder pair remains an open problem, various non-linear transmission schemes are proposed in this paper. In particular, interference concentration (ICO) and one-dimensional lattice (1DL) strategies, using both uniform and non-uniform quantization of the interference signal, are studied. It is shown that, in contrast to typical scalar quantization of Gaussian sources, a non-uniform quantizer whose quantization intervals become smaller as we move further from zero improves the performance. Given that the optimal decoder is the minimum MSE (MMSE) estimator, a necessary condition for the optimality of the encoder is derived, and the numerically optimized encoder (NOE) satisfying this condition is obtained. Based on the numerical results, it is shown that 1DL with non-uniform quantization performs closer to the numerically optimized encoder than the other schemes, while requiring significantly lower complexity.
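The non-uniform quantizer described above can be illustrated with a short sketch. The square-root placement of cell edges, the level count, and the clipping range are our own illustrative assumptions, not the paper's optimized design; the only property carried over is that cells narrow as the interference magnitude grows.

```python
import numpy as np

def nonuniform_quantize(s, num_levels=8, s_max=4.0):
    """Quantize a scalar interference sample; cells narrow as |s| grows."""
    half = num_levels // 2
    u = np.linspace(0.0, 1.0, half + 1)
    # Concave (square-root) warp: one wide cell near zero, narrow cells far out.
    edges = s_max * np.sqrt(u)
    mids = 0.5 * (edges[:-1] + edges[1:])     # reconstruction levels
    mag = min(abs(s), edges[-1] - 1e-12)      # clip into the outermost cell
    idx = int(np.searchsorted(edges, mag, side="right")) - 1
    sign = 1.0 if s >= 0 else -1.0
    return sign * float(mids[idx])
```

With the default settings the cell widths are roughly 2.0, 0.83, 0.64, and 0.54 on each side of zero, i.e., the opposite of a Lloyd-Max quantizer for a Gaussian source.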

    Distortion Minimization in Gaussian Layered Broadcast Coding with Successive Refinement

    A transmitter without channel state information (CSI) wishes to send a delay-limited Gaussian source over a slowly fading channel. The source is coded in superimposed layers, with each layer successively refining the description in the previous one. The receiver decodes the layers that are supported by the channel realization and reconstructs the source up to a distortion. The expected distortion is minimized by optimally allocating the transmit power among the source layers. For two source layers, the allocation is optimal when power is first assigned to the higher layer up to a power ceiling that depends only on the channel fading distribution; all remaining power, if any, is allocated to the lower layer. For convex distortion cost functions with convex constraints, the minimization is formulated as a convex optimization problem. In the limit of a continuum of infinite layers, the minimum expected distortion is given by the solution to a set of linear differential equations in terms of the density of the fading distribution. As the bandwidth ratio b (channel uses per source symbol) tends to zero, the power distribution that minimizes expected distortion converges to the one that maximizes expected capacity. While expected distortion can be improved by acquiring CSI at the transmitter (CSIT) or by increasing diversity from the realization of independent fading paths, at high SNR the performance benefit from diversity exceeds that from CSIT, especially when b is large. Comment: Accepted for publication in IEEE Transactions on Information Theory.
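The two-layer allocation rule described above is simple enough to state in code. The sketch below takes the power ceiling as a given parameter; the paper derives it from the fading distribution, and nothing here reproduces that closed form.

```python
def two_layer_power_allocation(p_total, p_ceiling):
    """Fill the higher (refinement) layer first, capped at p_ceiling;
    any remaining power goes to the lower (base) layer."""
    p_high = min(p_total, p_ceiling)
    p_low = p_total - p_high
    return p_high, p_low
```

For example, with a total power budget of 10 and a ceiling of 4, the higher layer receives 4 and the lower layer the remaining 6; below the ceiling, all power goes to the higher layer.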

    Minimum Expected Distortion in Gaussian Layered Broadcast Coding with Successive Refinement

    A transmitter without channel state information (CSI) wishes to send a delay-limited Gaussian source over a slowly fading channel. The source is coded in superimposed layers, with each layer successively refining the description in the previous one. The receiver decodes the layers that are supported by the channel realization and reconstructs the source up to a distortion. In the limit of a continuum of infinite layers, the optimal power distribution that minimizes the expected distortion is given by the solution to a set of linear differential equations in terms of the density of the fading distribution. In the optimal power distribution, as SNR increases, the allocation over the higher layers remains unchanged; rather, the extra power is allocated towards the lower layers. On the other hand, as the bandwidth ratio b (channel uses per source symbol) tends to zero, the power distribution that minimizes expected distortion converges to the power distribution that maximizes expected capacity. While expected distortion can be improved by acquiring CSI at the transmitter (CSIT) or by increasing diversity from the realization of independent fading paths, at high SNR the performance benefit from diversity exceeds that from CSIT, especially when b is large. Comment: To appear in the proceedings of the 2007 IEEE International Symposium on Information Theory, Nice, France, June 24-29, 2007.

    DeepWiVe: deep-learning-aided wireless video transmission

    We present DeepWiVe, the first-ever end-to-end joint source-channel coding (JSCC) video transmission scheme that leverages the power of deep neural networks (DNNs) to directly map video signals to channel symbols, combining video compression, channel coding, and modulation steps into a single neural transform. Our DNN decoder predicts residuals without distortion feedback, which improves the video quality by accounting for occlusion/disocclusion and camera movements. To allow variable-bandwidth transmission, we train a bandwidth allocation network using reinforcement learning (RL) that optimizes the allocation of the limited available channel bandwidth among video frames to maximize the overall visual quality. Our results show that DeepWiVe can overcome the cliff effect, which is prevalent in conventional separation-based digital communication schemes, and achieve graceful degradation with the mismatch between the estimated and actual channel qualities. DeepWiVe outperforms H.264 video compression followed by low-density parity check (LDPC) codes in all channel conditions by up to 0.0485 in terms of the multi-scale structural similarity index measure (MS-SSIM), and H.265 + LDPC by up to 0.0069 on average. We also illustrate the importance of optimizing bandwidth allocation in JSCC video transmission by showing that our optimal bandwidth allocation policy is superior to uniform allocation as well as a heuristic policy benchmark.
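The RL-trained allocator itself is beyond a few lines, but the underlying budgeting problem, dividing a fixed channel-bandwidth budget among frames so that total visual quality is maximized, can be illustrated with a greedy marginal-gain heuristic. The per-frame quality model (difficulty times log(1 + chunks)) is a made-up diminishing-returns curve standing in for DeepWiVe's learned quality estimate.

```python
import heapq
import math

def allocate_bandwidth(frame_difficulty, total_chunks):
    """Greedily assign each bandwidth chunk to the frame with the
    largest marginal quality gain under the assumed model
    quality_i(k) = d_i * log(1 + k)."""
    alloc = [0] * len(frame_difficulty)
    # max-heap of (negative marginal gain of the next chunk, frame index)
    heap = [(-d * math.log(2.0), i) for i, d in enumerate(frame_difficulty)]
    heapq.heapify(heap)
    for _ in range(total_chunks):
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        d = frame_difficulty[i]
        gain = d * (math.log(2 + alloc[i]) - math.log(1 + alloc[i]))
        heapq.heappush(heap, (-gain, i))
    return alloc
```

Equally difficult frames end up with equal shares, while a markedly harder frame absorbs the budget first, which is the qualitative behaviour a non-uniform policy is meant to capture.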

    Genistein-induced mir-23b expression inhibits the growth of breast cancer cells

    Aim of the study: Genistein, an isoflavonoid, plays roles in the inhibition of protein tyrosine kinase phosphorylation, induction of apoptosis, and cell differentiation in breast cancer. This study aims to induce cellular stress by exposing cells to genistein, in order to determine alterations of miRNA expression profiles in MCF-7 cells. Material and methods: XTT and trypan blue dye exclusion assays were performed to examine the cytotoxic effects of genistein treatment. Expression of miRNAs was quantified using real-time RT-PCR. Results: The IC50 dose of genistein was 175 μM in the MCF-7 cell line, and the cytotoxic effect of genistein was detected after 48 hours. miR-23b was found to be up-regulated 56.69-fold following genistein treatment in MCF-7 breast cancer cells. Conclusions: Up-regulated expression of miR-23b might be a putative biomarker for use in the therapy of breast cancer patients. miR-23b up-regulation might be important in terms of response to genistein. © 2015, Termedia Publishing House Ltd. All rights reserved.

    Hierarchical Over-the-Air Federated Edge Learning

    Federated learning (FL) over wireless communication channels, specifically an over-the-air (OTA) model aggregation framework, is considered. In OTA wireless setups, the adverse channel effects can be alleviated by increasing the number of receive antennas at the parameter server (PS), which performs model aggregation. However, the performance of OTA FL is severely limited by the presence of mobile users (MUs) located far away from the PS. In this paper, to mitigate this limitation, we propose hierarchical over-the-air federated learning (HOTAFL), which utilizes intermediary servers (IS) to form clusters near MUs. We provide a convergence analysis for the proposed setup, and demonstrate through experimental results that local aggregation in each cluster before global aggregation leads to better performance and faster convergence than OTA FL.
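The cluster-then-global aggregation step can be sketched in a few lines. The noiseless, unweighted averaging below is a simplifying assumption of ours: in the actual scheme the per-cluster sums happen via over-the-air (analog superposition) transmission, and aggregation would typically weight clusters by their number of users.

```python
def hierarchical_aggregate(clusters):
    """Two-stage model aggregation.

    clusters: list of clusters, each a list of equal-length model vectors.
    Each intermediary server averages its own cluster's models; the
    parameter server then averages the cluster-level results.
    """
    cluster_means = [
        [sum(vals) / len(vals) for vals in zip(*cluster)]
        for cluster in clusters
    ]
    return [sum(vals) / len(vals) for vals in zip(*cluster_means)]
```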

    Attributed relational graphs for cell nucleus segmentation in fluorescence microscopy images

    More rapid and accurate high-throughput screening in molecular cellular biology research has become possible with the development of automated microscopy imaging, for which cell nucleus segmentation commonly constitutes the core step. Although several promising methods exist for segmenting the nuclei of monolayer isolated and less-confluent cells, it still remains an open problem to segment the nuclei of more-confluent cells, which tend to grow in overlayers. To address this problem, we propose a new model-based nucleus segmentation algorithm. This algorithm models how a human locates a nucleus by identifying the nucleus boundaries and piecing them together. In this algorithm, we define four types of primitives to represent nucleus boundaries at different orientations and construct an attributed relational graph on the primitives to represent their spatial relations. Then, we reduce the nucleus identification problem to finding predefined structural patterns in the constructed graph, and also use the primitives in region growing to delineate the nucleus borders. Working with fluorescence microscopy images, our experiments demonstrate that the proposed algorithm identifies nuclei better than previous nucleus segmentation algorithms.
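The pattern-search idea above can be illustrated with a toy version: primitives become labelled points for four boundary orientations, the spatial relation collapses to a distance threshold, and a nucleus is declared when the pattern N → E → S → W closes into a cycle. The relation, threshold, and primitive representation are stand-ins for the paper's attributed relational graph, chosen only to show the structural search.

```python
def find_nuclei(primitives, max_gap=5.0):
    """primitives: list of (orientation, x, y) with orientation in 'NESW'.

    Returns candidate quadruples (N, E, S, W) whose consecutive members,
    including W back to N, lie within max_gap of each other.
    """
    by_type = {}
    for p in primitives:
        by_type.setdefault(p[0], []).append(p)

    def near(a, b):
        return abs(a[1] - b[1]) <= max_gap and abs(a[2] - b[2]) <= max_gap

    hits = []
    for n in by_type.get('N', []):
        for e in by_type.get('E', []):
            if not near(n, e):
                continue
            for s in by_type.get('S', []):
                if not near(e, s):
                    continue
                for w in by_type.get('W', []):
                    if near(s, w) and near(w, n):
                        hits.append((n, e, s, w))
    return hits
```

The brute-force nested loops are acceptable here only because the sketch is tiny; a real implementation would index primitives spatially before matching.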

    Sparse random networks for communication-efficient federated learning

    One main challenge in federated learning is the large communication cost of exchanging weight updates from clients to the server at each round. While prior work has made great progress in compressing the weight updates through gradient compression methods, we propose a radically different approach that does not update the weights at all. Instead, our method freezes the weights at their initial random values and learns how to sparsify the random network for the best performance. To this end, the clients collaborate in training a stochastic binary mask to find the optimal sparse random network within the original one. At the end of the training, the final model is a sparse network with random weights – or a sub-network inside the dense random network. We show improvements in accuracy, communication (less than 1 bit per parameter (bpp)), convergence speed, and final model size (less than 1 bpp) over relevant baselines on the MNIST, EMNIST, CIFAR-10, and CIFAR-100 datasets, in the low-bitrate regime.
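The core mechanism, frozen random weights gated by a learned stochastic binary mask, can be sketched for a single linear layer. Each weight gets a real-valued score whose sigmoid is its keep-probability; the forward pass samples a Bernoulli mask from those probabilities, so only the (roughly one-bit) mask ever needs to be communicated. The shapes and the single-layer setting are our own illustrative choices, not the paper's training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def masked_forward(x, frozen_w, scores):
    """Linear layer with frozen random weights gated by a sampled binary mask."""
    keep_prob = sigmoid(scores)                 # per-weight keep probability
    mask = (rng.random(frozen_w.shape) < keep_prob).astype(frozen_w.dtype)
    return x @ (frozen_w * mask)                # frozen_w is never updated
```

Training would update only `scores` (e.g., with a straight-through gradient estimator); at convergence, strongly positive scores keep a weight and strongly negative scores prune it.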

    DeepJSCC-Q: Channel Input Constrained Deep Joint Source-Channel Coding

    Recent works have shown that the task of wireless transmission of images can be learned with the use of machine learning techniques. Very promising results in end-to-end image quality, superior to popular digital schemes that utilize source and channel coding separation, have been demonstrated through the training of an autoencoder with a non-trainable channel layer in the middle. However, these methods assume that any complex value can be transmitted over the channel, which can prevent the application of the algorithm in scenarios where the hardware or protocol can only admit certain sets of channel inputs, such as the use of a digital constellation. Herein, we propose DeepJSCC-Q, an end-to-end optimized joint source-channel coding scheme for wireless image transmission, which is able to operate with a fixed channel input alphabet. We show that DeepJSCC-Q can achieve similar performance to models that use continuous-valued channel inputs. Importantly, it preserves the graceful degradation of image quality observed in prior work when channel conditions worsen, making DeepJSCC-Q much more attractive for deployment in practical systems.
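The channel-input constraint can be illustrated by projecting the encoder's continuous complex outputs onto the nearest point of a fixed constellation, 4-QAM in this sketch. This shows only the inference-time mapping; the choice of constellation and the hard nearest-point rule are our assumptions, and the actual scheme trains through this step end-to-end.

```python
import numpy as np

# Unit-energy 4-QAM alphabet (an illustrative choice of fixed channel inputs).
QAM4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def quantize_to_constellation(z):
    """Map each complex channel input to its nearest constellation point."""
    z = np.asarray(z)
    dist = np.abs(z[..., None] - QAM4)   # distance to every candidate point
    return QAM4[np.argmin(dist, axis=-1)]
```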

    Semi-automatic segmentation of subcutaneous tumours from micro-computed tomography images

    This paper outlines the first attempt to segment the boundary of preclinical subcutaneous tumours, which are frequently used in cancer research, from micro-computed tomography (microCT) image data. MicroCT images provide low tissue contrast, and the tumour-to-muscle interface is hard to determine; however, faint features exist which enable the boundary to be located. These are used as the basis of our semi-automatic segmentation algorithm. Local phase feature detection is used to highlight the faint boundary features, and a level set-based active contour is used to generate smooth contours that fit the sparse boundary features. The algorithm is validated against manually drawn contours and micro-positron emission tomography (microPET) images. When compared against manual expert segmentations, it was consistently able to segment at least 70% of the tumour region (n = 39) in both easy and difficult cases, and over a broad range of tumour volumes. When compared against tumour microPET data, it was able to capture over 80% of the functional microPET volume. Based on these results, we demonstrate the feasibility of subcutaneous tumour segmentation from microCT image data without the assistance of exogenous contrast agents. Our approach is a proof-of-concept that can be used as the foundation for further research, and to facilitate this, the code is open-source and available from www.setuvo.com. © 2013 Institute of Physics and Engineering in Medicine.