
    Maximum-Likelihood Sequence Detection of Multiple Antenna Systems over Dispersive Channels via Sphere Decoding

    Multiple antenna systems are capable of providing high data rate transmissions over wireless channels. When the channels are dispersive, the signal at each receive antenna is a combination of both the current and past symbols sent from all transmit antennas, corrupted by noise. The optimal receiver is a maximum-likelihood sequence detector, which is often considered practically infeasible because its computational complexity is exponential in the number of antennas and the channel memory. In practice, one therefore often settles for a less complex suboptimal receiver, typically an equalizer meant to suppress both the intersymbol and interuser interference, followed by a decoder. We propose sphere decoding for sequence detection in multiple antenna communication systems over dispersive channels. Sphere decoding provides the maximum-likelihood estimate with computational complexity comparable to that of standard space-time decision-feedback equalization (DFE) algorithms. The performance and complexity of sphere decoding are compared with those of the DFE algorithm by means of simulations.
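
    Sphere decoding here refers to solving the integer least-squares problem behind maximum-likelihood detection by searching only candidate symbol vectors whose partial distance stays inside a shrinking radius. A minimal sketch, assuming a toy real-valued flat channel y = Hx + n with BPSK symbols rather than the paper's space-time dispersive-channel model:

```python
import numpy as np

def sphere_decode(y, H, alphabet):
    """Depth-first sphere decoder for min_x ||y - H x||^2 with x_i in `alphabet`.

    Toy illustration (flat channel), not the paper's space-time formulation.
    """
    n = H.shape[1]
    Q, R = np.linalg.qr(H)              # H = Q R, R upper triangular
    z = Q.T @ y                         # rotated observation
    best = {"x": None, "d": np.inf}

    def search(level, partial, dist):
        if dist >= best["d"]:           # prune: partial distance already too large
            return
        if level < 0:                   # all symbols fixed -> better candidate found
            best["x"], best["d"] = partial.copy(), dist
            return
        for s in alphabet:              # fix symbols from the last level upwards
            partial[level] = s
            resid = z[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, dist + resid ** 2)

    search(n - 1, np.zeros(n), 0.0)
    return best["x"]

# Toy example: 4 transmit streams, BPSK symbols, light noise.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
x_true = rng.choice([-1.0, 1.0], size=4)
y = H @ x_true + 0.1 * rng.standard_normal(4)
print(sphere_decode(y, H, alphabet=(-1.0, 1.0)), "vs true", x_true)
```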

    Improving Energy Efficiency for IoT Communications in 5G Networks

    The increase in the number of Internet of Things (IoT) devices is quickly changing how mobile networks are used, shifting usage from downlink towards uplink transmissions. Current mobile network uplinks use Single Carrier Frequency Division Multiple Access (SC-FDMA) because of its low Peak to Average Power Ratio (PAPR) compared to Orthogonal Frequency Division Multiple Access (OFDMA). From an IoT perspective, power ratios are critical to effective battery usage, since devices are typically resource-constrained. Fifth Generation (5G) mobile networks are expected to become the standard that handles the influx of IoT device uplinks while preserving the quality of service (QoS) that current Long Term Evolution Advanced (LTE-A) networks provide. In this paper, the Enhanced OEA algorithm is proposed; simulations show a reduction in device energy consumption and an increase in the power efficiency of uplink transmissions while preserving the QoS rate provided by SC-FDMA in 5G networks. Furthermore, the computational complexity is reduced by inserting a sorting step prior to resource allocation.
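
    The argument for SC-FDMA uplinks rests on their lower PAPR relative to plain OFDMA. A minimal sketch, assuming QPSK data and an idealized localized DFT-spread mapping (not the paper's Enhanced OEA resource allocation), that estimates both PAPRs empirically:

```python
import numpy as np

# Illustrative modulator models only; not the paper's Enhanced OEA scheme.
rng = np.random.default_rng(1)
n_sub, n_fft, n_blocks = 64, 512, 2000   # occupied subcarriers, IFFT size, blocks

def papr_db(x):
    """Peak-to-average power ratio of one complex baseband block, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def qpsk(n):
    return (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)

ofdma, scfdma = [], []
for _ in range(n_blocks):
    d = qpsk(n_sub)
    # OFDMA: map data symbols straight onto subcarriers, then IFFT.
    grid = np.zeros(n_fft, complex)
    grid[:n_sub] = d
    ofdma.append(papr_db(np.fft.ifft(grid)))
    # SC-FDMA: DFT-spread the block first (localized mapping), then IFFT.
    grid = np.zeros(n_fft, complex)
    grid[:n_sub] = np.fft.fft(d)
    scfdma.append(papr_db(np.fft.ifft(grid)))

print(f"median PAPR  OFDMA: {np.median(ofdma):.1f} dB   SC-FDMA: {np.median(scfdma):.1f} dB")
```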

    On Computing Centroids According to the p-Norms of Hamming Distance Vectors

    In this paper we consider the p-Norm Hamming Centroid problem, which asks whether some given strings have a centroid with a bounded p-norm of its Hamming distances to the strings. Specifically, given a set S of strings and a real number k, we consider the problem of determining whether there exists a string $s^*$ with $\left(\sum_{s \in S} d^{p}(s^*,s)\right)^{1/p} \le k$, where $d(\cdot,\cdot)$ denotes the Hamming distance metric. This problem has important applications in data clustering and multi-winner committee elections, and it is a generalization of the well-known polynomial-time solvable Consensus String problem ($p=1$) as well as the NP-hard Closest String problem ($p=\infty$). Our main result shows that the problem is NP-hard for all fixed rational $p > 1$, closing the gap for all rational values of p between 1 and $\infty$. Under standard complexity assumptions the reduction also implies that, for any fixed $p > 1$, the problem has no $2^{o(n+m)}$-time or $2^{o(k^{p/(p+1)})}$-time algorithm, where m denotes the number of input strings and n the length of each string. The first bound matches a straightforward brute-force algorithm. The second bound is tight in the sense that for each fixed $\epsilon > 0$ we provide a $2^{k^{p/(p+1)+\epsilon}}$-time algorithm. In the last part of the paper, we complement our hardness result by presenting a fixed-parameter algorithm and a factor-2 approximation algorithm for the problem.
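
    A minimal brute-force sketch of the decision problem, matching the straightforward exponential-time search mentioned above; the binary alphabet, helper names, and example strings are illustrative only:

```python
from itertools import product

def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def p_norm_centroid(strings, p, k, alphabet="01"):
    """Brute-force check: is there a string s* with (sum_s d(s*, s)^p)^(1/p) <= k?

    Exponential in the string length n; only feasible for tiny instances.
    """
    n = len(strings[0])
    best = None
    for cand in product(alphabet, repeat=n):
        cost = sum(hamming(cand, s) ** p for s in strings) ** (1 / p)
        if best is None or cost < best[1]:
            best = ("".join(cand), cost)
    return best if best[1] <= k else None

# Illustrative instance: three binary strings, p = 2 (Euclidean norm of distances).
S = ["0011", "0101", "0110"]
print(p_norm_centroid(S, p=2, k=3.0))
```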

    Slow Adaptive OFDMA Systems Through Chance Constrained Programming

    Adaptive OFDMA has recently been recognized as a promising technique for providing high spectral efficiency in future broadband wireless systems. Research over the last decade on adaptive OFDMA systems has focused on adapting the allocation of radio resources, such as subcarriers and power, to the instantaneous channel conditions of all users. However, such "fast" adaptation requires high computational complexity and excessive signaling overhead, which hinders the deployment of adaptive OFDMA systems. This paper proposes a slow adaptive OFDMA scheme, in which the subcarrier allocation is updated on a much slower timescale than that of the fluctuation of instantaneous channel conditions. Meanwhile, the data rate requirements of individual users are accommodated on the fast timescale with high probability, so that the requirements are met except for occasional outages. Such an objective has a natural chance constrained programming formulation, which is known to be intractable. To circumvent this difficulty, we formulate safe tractable constraints for the problem based on recent advances in chance constrained programming. We then develop a polynomial-time algorithm for computing an optimal solution to the reformulated problem. Our results show that the proposed slow adaptation scheme drastically reduces both computational cost and control signaling overhead compared with conventional fast adaptive OFDMA. Our work can be viewed as an initial attempt to apply the chance constrained programming methodology to wireless system designs. Given that most wireless systems can tolerate an occasional dip in the quality of service, we hope that the proposed methodology will find further applications in wireless communications.
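
    The core object is a per-user chance constraint of the form Pr{achieved rate >= demand} >= 1 - epsilon under a fixed, slowly updated subcarrier allocation. A minimal Monte Carlo sketch that checks such a constraint for a given allocation, assuming illustrative Rayleigh-fading per-subcarrier rates rather than the paper's safe tractable reformulation:

```python
import numpy as np

# Illustrative fading and rate model; not the paper's safe tractable constraints.
rng = np.random.default_rng(2)
n_users, n_sub, n_draws = 3, 16, 20000
epsilon = 0.05                       # tolerated outage probability
demand = np.array([8.0, 6.0, 6.0])   # per-user rate requirements (bits/symbol)

# A fixed ("slow") subcarrier allocation: allocation[j] = user owning subcarrier j.
allocation = rng.integers(0, n_users, size=n_sub)

# Fast fading: per-draw, per-subcarrier SNR under unit-mean Rayleigh fading.
snr = 10.0 * rng.exponential(1.0, size=(n_draws, n_sub))
rate = np.log2(1.0 + snr)            # instantaneous per-subcarrier rate

outage = np.zeros(n_users)
for u in range(n_users):
    user_rate = rate[:, allocation == u].sum(axis=1)   # total rate per fading draw
    outage[u] = np.mean(user_rate < demand[u])

print("per-user outage:", np.round(outage, 3),
      "| chance constraint satisfied:", bool(np.all(outage <= epsilon)))
```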

    Constraining the Parameters of High-Dimensional Models with Active Learning

    Constraining the parameters of physical models with more than 5-10 parameters is a widespread problem in fields like particle physics and astronomy. The generation of data to explore this parameter space often requires large amounts of computational resources. The commonly used solution of reducing the number of relevant physical parameters hampers the generality of the results. In this paper we show that this problem can be alleviated by the use of active learning. We illustrate this with examples from high energy physics, a field where simulations are often expensive and parameter spaces are high-dimensional. We show that the active learning techniques query-by-committee and query-by-dropout-committee allow for the identification of model points in interesting regions of high-dimensional parameter spaces (e.g. around decision boundaries). This makes it possible to constrain model parameters more efficiently than is currently done with the most common sampling algorithms and to train better performing machine learning models on the same amount of data. Code implementing the experiments in this paper can be found on GitHub.
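
    A minimal query-by-committee sketch, assuming a committee of randomized scikit-learn classifiers, a vote-entropy disagreement score, and a toy 2D stand-in for the expensive simulation; none of this reproduces the paper's high energy physics setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

def expensive_label(X):
    """Stand-in for a costly simulation: 1 inside an elliptical 'allowed' region."""
    return (X[:, 0] ** 2 + 0.5 * X[:, 1] ** 2 < 1.0).astype(int)

# Small labelled seed set plus a large pool of unlabelled parameter points.
X_lab = rng.uniform(-2, 2, size=(40, 2))
y_lab = expensive_label(X_lab)
X_pool = rng.uniform(-2, 2, size=(5000, 2))

for _ in range(10):
    # Committee of randomized classifiers trained on the labelled data so far.
    committee = [RandomForestClassifier(n_estimators=25, random_state=s).fit(X_lab, y_lab)
                 for s in range(5)]
    votes = np.stack([m.predict(X_pool) for m in committee])   # shape (5, n_pool)
    p1 = votes.mean(axis=0)                                    # fraction voting class 1
    # Vote entropy: large where the committee disagrees, i.e. near the decision boundary.
    entropy = -(p1 * np.log(p1 + 1e-12) + (1 - p1) * np.log(1 - p1 + 1e-12))
    query = np.argsort(entropy)[-20:]                          # most contested points
    X_lab = np.vstack([X_lab, X_pool[query]])
    y_lab = np.concatenate([y_lab, expensive_label(X_pool[query])])

print("labelled points after active learning:", len(X_lab))
```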

    Perturbation damage indicators based on complex modes

    Papers on dynamic identification of structural damage usually rely on the comparison of two or more responses of the structure, with the measure of damage related to differences in the vibration signals. Almost all methods in the literature assume damping proportional to mass and stiffness; this is acceptable for new, undamaged structures, but not for existing, potentially damaged ones, especially when localised damage occurs. It is well known that in non-proportionally damped systems the modes are no longer the same as those of the undamped system; some authors have therefore proposed using modal complexity as a damage indicator. This contribution presents a perturbation approach that can easily reveal such modal complexity.
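
    A minimal sketch of the underlying phenomenon, assuming a toy 2-DOF system with damping concentrated at one degree of freedom (hence non-proportional); it computes the complex modes from the first-order eigenproblem and a simple phase-spread indicator of modal complexity, not the paper's perturbation formulation:

```python
import numpy as np
from scipy.linalg import eig

# Toy 2-DOF system with a damper on one DOF only, so C is not of the
# proportional form a*M + b*K and the mode shapes become complex.
# (Illustrative numbers; not taken from the paper.)
M = np.diag([1.0, 1.0])
K = np.array([[20.0, -10.0],
              [-10.0, 10.0]])
C = np.array([[0.8, 0.0],
              [0.0, 0.0]])

# First-order form A z_dot + B z = 0 with state z = [x, x_dot].
n = M.shape[0]
Z = np.zeros((n, n))
A = np.block([[C, M], [M, Z]])
B = np.block([[K, Z], [Z, -M]])
lam, phi = eig(-B, A)                      # complex eigenvalues / eigenvectors

# Keep one eigenvalue of each conjugate pair; inspect the displacement partition.
for i in np.where(lam.imag > 1e-9)[0]:
    shape = phi[:n, i]
    shape = shape / shape[np.argmax(np.abs(shape))]   # normalize to largest entry
    phase_spread = np.ptp(np.angle(shape, deg=True))  # 0 deg for a purely real mode
    print(f"|lambda| ~ {abs(lam[i]):.3f} rad/s, modal phase spread = {phase_spread:.1f} deg")
```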