
    Resource allocation for transmit hybrid beamforming in decoupled millimeter wave multiuser-MIMO downlink

    This paper presents a study on joint radio resource allocation and hybrid precoding in multicarrier massive multiple-input multiple-output (MIMO) communications for 5G cellular networks. We present a resource allocation algorithm that maximizes the proportional fairness (PF) spectral efficiency under per-subchannel power and beamforming rank constraints. Two heuristic algorithms are designed. The proportional fairness hybrid beamforming algorithm provides a transmit precoder with proportionally fair spectral efficiency among users for the desired number of radio-frequency (RF) chains. We then transform the RF-chain (rank) constrained optimization problem into a convex semidefinite programming (SDP) problem, which can be solved by standard techniques. Building on this convex SDP formulation, a low-complexity, two-step, PF-relaxed optimization algorithm is provided. Simulation results show that the proposed suboptimal solution to the relaxed optimization problem is near-optimal for signal-to-noise ratios (SNR) of at most 10 dB and has a performance gap of no more than 2.33 b/s/Hz over the 0-25 dB SNR range. It also outperforms the maximum-throughput and PF-based hybrid beamforming schemes in sum spectral efficiency, individual spectral efficiency, and fairness index.
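    The abstract does not state the optimization problem explicitly, so the following is only a rough, hedged sketch of the kind of convex SDP relaxation described: a proportional-fairness-style sum of log-rates is maximized over PSD transmit covariance matrices, with the rank (RF-chain) constraint dropped and the per-subchannel power constraints simplified to a single total power budget. All names (H, Q, n_tx, p_max, sigma2) and the toy real-valued channels are assumptions, not the paper's formulation.

```python
# Hedged sketch of a rank-relaxed, PF-style SDP (not the paper's exact problem).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_tx, n_users, sigma2, p_max = 8, 4, 1.0, 10.0                # assumed parameters
H = [rng.standard_normal((2, n_tx)) for _ in range(n_users)]  # toy real-valued channels

# One PSD transmit covariance per user; the rank constraint is dropped (relaxed).
Q = [cp.Variable((n_tx, n_tx), PSD=True) for _ in range(n_users)]
rates = [cp.log(1 + cp.trace(H[u] @ Q[u] @ H[u].T) / sigma2) for u in range(n_users)]
power_budget = sum(cp.trace(Q[u]) for u in range(n_users)) <= p_max

prob = cp.Problem(cp.Maximize(sum(rates)), [power_budget])
prob.solve(solver=cp.SCS)  # SCS handles the mixed PSD / exponential-cone problem
print("relaxed PF objective:", prob.value)
```

    A rank-constrained precoder would then have to be recovered from the relaxed solution, e.g. from the leading eigenvectors of each Q[u]; the paper's two-step algorithm is not reproduced here.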

    Positive Semidefinite Metric Learning Using Boosting-like Algorithms

    The success of many machine learning and pattern recognition methods relies heavily upon the identification of an appropriate distance metric on the input data. It is often beneficial to learn such a metric from the input training data, instead of using a default one such as the Euclidean distance. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance metric requires enforcing the constraint that the matrix parameter to the metric remains positive semidefinite. Semidefinite programming is often used to enforce this constraint, but it does not scale well and is not easy to implement. BoostMetric is instead based on the observation that any positive semidefinite matrix can be decomposed into a linear combination of trace-one rank-one matrices. BoostMetric thus uses rank-one positive semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting methods are easy to implement, efficient, and can accommodate various types of constraints. We extend traditional boosting algorithms in that the weak learner is a positive semidefinite matrix with trace and rank equal to one, rather than a classifier or regressor. Experiments on various datasets demonstrate that the proposed algorithms compare favorably to state-of-the-art methods in terms of classification accuracy and running time. Comment: 30 pages, appearing in the Journal of Machine Learning Research.
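    As a concrete but hedged illustration of that observation, the sketch below builds each weak learner as a trace-one rank-one matrix uu^T from the leading eigenvector of a weighted combination of triplet-constraint matrices, and accumulates the weak learners into the metric matrix X. The constraint matrices A_r, the fixed step size, and the exponential re-weighting are simplifying assumptions standing in for the paper's actual boosting updates.

```python
# Hedged sketch of the BoostMetric idea: weak learners are trace-one rank-one
# PSD matrices, so their nonnegative combination X is automatically PSD.
import numpy as np

def weak_learner(A_list, w):
    """Trace-one rank-one PSD matrix most aligned with the weighted constraint sum."""
    M = sum(w_r * A_r for w_r, A_r in zip(w, A_list))
    _, eigvecs = np.linalg.eigh(M)
    u = eigvecs[:, -1]            # leading eigenvector (eigh sorts ascending)
    return np.outer(u, u)         # trace(u u^T) = 1, rank one

def boost_metric(A_list, n_rounds=50, alpha=0.1):
    """Accumulate X = sum_t alpha * Z_t over boosting rounds (fixed step size)."""
    d = A_list[0].shape[0]
    X = np.zeros((d, d))
    w = np.ones(len(A_list)) / len(A_list)   # weights over triplet constraints
    for _ in range(n_rounds):
        Z = weak_learner(A_list, w)
        X += alpha * Z
        margins = np.array([np.trace(A @ X) for A in A_list])
        w = np.exp(-margins)                 # emphasize violated constraints
        w /= w.sum()
    return X
```

    Here each A_r would typically encode a relative-distance (triplet) constraint, e.g. A_r = (x_i - x_k)(x_i - x_k)^T - (x_i - x_j)(x_i - x_j)^T, so that trace(A_r X) > 0 means the constraint is satisfied under the learned metric.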

    An Efficient Dual Approach to Distance Metric Learning

    Distance metric learning is of fundamental interest in machine learning because the distance metric employed can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. Standard interior-point SDP solvers typically have a complexity of O(D^{6.5}) (with D the dimension of the input data), and can thus only practically solve problems with fewer than a few thousand variables. Since the number of variables is D(D+1)/2, this implies a limit on the size of problem that can practically be solved of around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here we propose a significantly more efficient approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of datasets demonstrate that the proposed method achieves an accuracy comparable to the state-of-the-art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius-norm regularized SDP problems approximately.
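    The abstract does not spell out the dual formulation, but the quoted O(D^3) cost is characteristic of a full eigendecomposition, for instance when a valid (positive semidefinite) Mahalanobis matrix is recovered by projecting a symmetric matrix onto the PSD cone. The sketch below shows only that assumed O(D^3) building block, not the authors' dual algorithm.

```python
# Hedged sketch of the assumed O(D^3) core operation: Euclidean (Frobenius-norm)
# projection of a symmetric matrix onto the positive semidefinite cone.
import numpy as np

def project_psd(S):
    """Nearest PSD matrix to the symmetric matrix S in Frobenius norm."""
    S = 0.5 * (S + S.T)                    # symmetrize to guard against round-off
    eigvals, eigvecs = np.linalg.eigh(S)   # full eigendecomposition: O(D^3)
    eigvals = np.clip(eigvals, 0.0, None)  # zero out negative eigenvalues
    return (eigvecs * eigvals) @ eigvecs.T

# Example: turn an indefinite update into a valid Mahalanobis matrix.
rng = np.random.default_rng(0)
S = rng.standard_normal((5, 5))
X = project_psd(S + S.T)
print(np.linalg.eigvalsh(X).min() >= -1e-10)  # True: X is positive semidefinite
```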