
    Clustering based space-time network coding

    Abstract—Many-to-one communication is a challenging problem in practice due to channel fading and multi-user interference. In this work, a new protocol that leverages spatial diversity through space-time network coding is proposed. The N source nodes are first divided into K clusters of Q nodes each, and the clusters transmit successively in a time-division multiple-access (TDMA) manner. Each node acts as a decode-and-forward relay for the other clusters and uses linear coding to combine its local symbol with the relayed symbols. To separate the multi-source signals, each node is assigned a unique signature waveform, and a linear decorrelator is used at the receivers. Both the exact Symbol Error Rate (SER) and the asymptotic SER at high signal-to-noise ratios are then derived for the M-ary phase-shift keying signal. It is shown that a diversity order of (N − Q + 1) can be achieved with a low transmission delay of K time slots, which is more bandwidth efficient than existing protocols. Simulation results also confirm the performance gains.
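    The cluster partition and the resulting diversity/delay counting described in the abstract can be sketched as follows. This is a toy illustration of the parameters N, K, Q and the stated diversity order, not the paper's transmission protocol; the function name is hypothetical:

```python
def stnc_schedule(n_nodes, cluster_size):
    """Partition N source nodes into K = N // Q clusters of Q nodes each;
    each cluster transmits in its own TDMA slot, so the transmission
    delay is K slots. The abstract states a diversity order of N - Q + 1."""
    assert n_nodes % cluster_size == 0, "N must be a multiple of Q"
    n_clusters = n_nodes // cluster_size
    clusters = [list(range(c * cluster_size, (c + 1) * cluster_size))
                for c in range(n_clusters)]
    diversity_order = n_nodes - cluster_size + 1
    return clusters, n_clusters, diversity_order
```

    For example, N = 12 nodes in clusters of Q = 3 give K = 4 TDMA slots and a diversity order of 10.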

    Weightless: Lossy Weight Encoding For Deep Neural Network Compression

    The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding which complements conventional compression techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, Weightless, can compress DNN weights by up to 496x with the same model accuracy. This results in up to a 1.51x improvement over the state-of-the-art.
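    The Bloomier filter underlying this approach can be illustrated with a minimal XOR-based variant: each key hashes to k table slots, and a lookup is the XOR of those slots, so unknown keys return essentially arbitrary values — the kind of random error the network is retrained to tolerate. This is a generic sketch of the data structure (greedy "peeling" construction), not the paper's encoder:

```python
import hashlib

def slot_hashes(key, m, k, seed):
    """k table slots for a key, derived from a keyed hash."""
    return [int.from_bytes(
        hashlib.blake2b(f"{seed}|{i}|{key}".encode(), digest_size=8).digest(),
        "big") % m for i in range(k)]

def build_bloomier(pairs, m, k=3, max_seed=200):
    """Greedy peeling construction; retries hash seeds until it succeeds."""
    for seed in range(max_seed):
        slots = {key: slot_hashes(key, m, k, seed) for key, _ in pairs}
        if any(len(set(s)) < k for s in slots.values()):
            continue  # a key hashed to duplicate slots; try another seed
        remaining = {key for key, _ in pairs}
        occ = {}  # how many remaining keys touch each slot
        for key in remaining:
            for s in slots[key]:
                occ[s] = occ.get(s, 0) + 1
        order = []  # (key, critical slot) in removal order
        while remaining:
            found = None
            for key in remaining:
                crit = next((s for s in slots[key] if occ[s] == 1), None)
                if crit is not None:
                    found = (key, crit)
                    break
            if found is None:
                break  # peeling stuck; try another seed
            key, crit = found
            order.append((key, crit))
            remaining.remove(key)
            for s in slots[key]:
                occ[s] -= 1
        if remaining:
            continue
        # Assign in reverse removal order: each key's critical slot is
        # untouched by keys assigned after it, so its equation stays satisfied.
        table = [0] * m
        values = dict(pairs)
        for key, crit in reversed(order):
            acc = values[key]
            for s in slots[key]:
                if s != crit:
                    acc ^= table[s]
            table[crit] = acc
        return table, seed
    return None

def query(table, key, k, seed):
    """XOR of the key's slots; stored keys get exact values back,
    unknown keys get arbitrary (erroneous) values."""
    acc = 0
    for s in slot_hashes(key, len(table), k, seed):
        acc ^= table[s]
    return acc
```

    Storing quantized weight values (small integers) this way trades exactness for space, which is the trade-off Weightless exploits by retraining around the introduced errors.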

    Scalable Compression of Deep Neural Networks

    Deep neural networks generally involve layers with millions of parameters, making them difficult to deploy and update on devices with limited resources, such as mobile phones and other smart embedded systems. In this paper, we propose a scalable representation of the network parameters, so that different applications can select the most suitable bit rate of the network based on their own storage constraints. Moreover, when a device needs to upgrade to a higher-rate network, the existing low-rate network can be reused, and only incremental data need to be downloaded. We first hierarchically quantize the weights of a pre-trained deep neural network to enforce weight sharing. Next, we adaptively select the bits assigned to each layer given the total bit budget. After that, we retrain the network to fine-tune the quantized centroids. Experimental results show that our method achieves scalable compression with graceful degradation in performance. Comment: 5 pages, 4 figures, ACM Multimedia 201
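    The weight-sharing step can be illustrated with plain 1-D k-means over a layer's weights: each weight is replaced by the nearest of 2^b centroids, so only the centroid index (b bits) is stored per weight. This is a generic sketch of quantization-based weight sharing, not the paper's hierarchical quantizer or bit-allocation procedure:

```python
import numpy as np

def kmeans_quantize(weights, bits, iters=20, seed=0):
    """Cluster a weight array into 2**bits centroids with 1-D k-means.
    Returns (indices, centroids); the quantized layer is centroids[indices]."""
    rng = np.random.default_rng(seed)
    w = weights.ravel()
    n_centroids = 2 ** bits
    centroids = rng.choice(w, size=n_centroids, replace=False)
    for _ in range(iters):
        # assign each weight to its nearest centroid, then update centroids
        idx = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(n_centroids):
            members = w[idx == j]
            if members.size:
                centroids[j] = members.mean()
    idx = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
    return idx.reshape(weights.shape), centroids
```

    A higher bit budget yields more centroids and a smaller reconstruction error, which is the graceful rate/accuracy trade-off the scalable representation exposes.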

    Distributed Space Time Coding for Wireless Two-way Relaying

    We consider the wireless two-way relay channel, in which two-way data transfer takes place between the end nodes with the help of a relay. For the Denoise-And-Forward (DNF) protocol, it was shown by Koike-Akino et al. that adaptively changing the network coding map used at the relay greatly reduces the impact of multiple-access interference at the relay. The harmful effect of deep channel fade conditions can be effectively mitigated by a proper choice of these network coding maps at the relay. Alternatively, in this paper we propose a Distributed Space Time Coding (DSTC) scheme, which effectively removes most of the deep-fade channel conditions at the transmitting nodes themselves, without any CSIT and without any need to adaptively change the network coding map used at the relay. It is shown that the deep fades occur when the channel fade coefficient vector falls in a finite number of vector subspaces of ℂ^2, which are referred to as the singular fade subspaces. A DSTC design criterion, referred to as the singularity minimization criterion, under which the number of such vector subspaces is minimized, is obtained. Also, a criterion to maximize the coding gain of the DSTC is obtained. Explicit low-decoding-complexity DSTC designs that satisfy the singularity minimization criterion and maximize the coding gain for QAM and PSK signal sets are provided. Simulation results show that at high Signal-to-Noise Ratio, the DSTC scheme provides large gains compared to the conventional Exclusive-OR network code and performs slightly better than the adaptive network coding scheme proposed by Koike-Akino et al. Comment: 27 pages, 4 figures. A mistake in the proof of Proposition 3 given in Appendix B corrected
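    The conventional Exclusive-OR network code that the DSTC scheme is compared against works as follows: the relay decodes both end nodes' bits, broadcasts their XOR, and each end node recovers the other's data by cancelling its own contribution. A minimal sketch of this baseline (not of the DSTC scheme itself):

```python
def relay_xor(bits_a, bits_b):
    """Relay stage of the XOR network code: decode both packets
    and broadcast their bitwise XOR in a single slot."""
    return [a ^ b for a, b in zip(bits_a, bits_b)]

def recover(broadcast, own_bits):
    """End-node stage: XOR out one's own bits to obtain the other's."""
    return [r ^ x for r, x in zip(broadcast, own_bits)]
```

    Two-way exchange thus takes fewer slots than routing each packet separately; the DSTC contribution is orthogonal, protecting this exchange against the singular fade subspaces at the physical layer.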

    Measuring spike train synchrony

    Estimating the degree of synchrony or reliability between two or more spike trains is a frequent task in both experimental and computational neuroscience. In recent years, many different methods have been proposed that typically compare the timing of spikes on a certain time scale that has to be fixed beforehand. Here, we propose the ISI-distance, a simple complementary approach that extracts information from the interspike intervals by evaluating the ratio of the instantaneous frequencies. The method is parameter-free, time-scale independent and easy to visualize, as illustrated by an application to real neuronal spike trains obtained in vitro from rat slices. In a comparison with existing approaches on spike trains extracted from a simulated Hindmarsh-Rose network, the ISI-distance performs as well as the best time-scale-optimized measure based on spike timing. Comment: 11 pages, 13 figures; v2: minor modifications; v3: minor modifications, added link to webpage that includes the Matlab source code for the method (http://inls.ucsd.edu/~kreuz/Source-Code/Spike-Sync.html)
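    The idea behind the ISI-distance can be sketched directly from the abstract: at each time t, compare the instantaneous interspike intervals of the two trains via their ratio, and average the resulting dissimilarity over the analysis window. A minimal sampled implementation (a sketch of the concept, assuming both sorted spike trains have spikes bracketing the window; see the authors' linked Matlab code for the reference version):

```python
import bisect

def isi_at(spikes, t):
    """Instantaneous interspike interval containing time t
    (spikes must be sorted and bracket t)."""
    i = bisect.bisect_right(spikes, t)
    return spikes[i] - spikes[i - 1]

def isi_distance(train1, train2, t0, t1, dt=1e-3):
    """Time-average of |I(t)|, where I(t) is the normalized ratio of the
    two instantaneous ISIs: I = x1/x2 - 1 if x1 <= x2, else -(x2/x1 - 1).
    Identical trains give 0; larger values mean less similar ISI profiles."""
    total, n = 0.0, 0
    t = t0
    while t < t1:
        x1, x2 = isi_at(train1, t), isi_at(train2, t)
        if x1 <= x2:
            total += abs(x1 / x2 - 1.0)
        else:
            total += abs(-(x2 / x1 - 1.0))
        n += 1
        t += dt
    return total / n
```

    Because only ISI ratios enter the measure, no time scale has to be fixed beforehand, which is the property that distinguishes it from spike-timing-based measures.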