
    Quantized Multimode Precoding in Spatially Correlated Multi-Antenna Channels

    Multimode precoding, where the number of independent data streams is adapted optimally, can be used to maximize the achievable throughput in multi-antenna communication systems. Motivated by standardization efforts embraced by the industry, the focus of this work is on systematic precoder design with realistic assumptions on the spatial correlation, channel state information (CSI) at the transmitter and the receiver, and implementation complexity. For the spatial correlation of the channel matrix, we assume a general channel model, based on physical principles, that has been verified by many recent measurement campaigns. We also assume a coherent receiver and knowledge of the spatial statistics at the transmitter, along with the presence of an ideal, low-rate feedback link from the receiver to the transmitter. The reverse link is used for codebook-index feedback, and the goal of this work is to construct precoder codebooks, adaptable in response to the statistical information, such that the achievable throughput is significantly enhanced over that of a fixed, non-adaptive, i.i.d. codebook design. We illustrate how a codebook of semi-unitary precoder matrices localized around some fixed center on the Grassmann manifold can be skewed in response to the spatial correlation via low-complexity maps that can rotate and scale submanifolds on the Grassmann manifold. The skewed codebook, in combination with a low-complexity statistical power allocation scheme, is then shown to bridge the gap in performance between a perfect-CSI benchmark and an i.i.d. codebook design.
    Comment: 30 pages, 4 figures. Preprint to be submitted to IEEE Transactions on Signal Processing
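    The skewing idea can be sketched concretely. Below is a minimal illustration (assuming NumPy; the rotation and scaling maps in the paper are more elaborate than this, and the exponential correlation model used here is only a toy example): scale an i.i.d. semi-unitary codeword by the square root of the transmit correlation matrix, then re-orthonormalize with a thin QR factorization so the result stays on the Grassmann manifold.

    ```python
    import numpy as np

    def skew_codeword(W, R):
        """Skew a semi-unitary precoder W (Nt x M) toward the transmit
        correlation R (Nt x Nt): scale by R^{1/2}, then re-orthonormalize
        with a thin QR so the codeword remains semi-unitary."""
        # matrix square root of the Hermitian PSD correlation matrix
        eigval, eigvec = np.linalg.eigh(R)
        R_half = eigvec @ np.diag(np.sqrt(np.clip(eigval, 0.0, None))) @ eigvec.conj().T
        Q, _ = np.linalg.qr(R_half @ W)
        return Q

    rng = np.random.default_rng(0)
    Nt, M = 4, 2
    # toy exponential correlation model (illustrative only)
    rho = 0.9
    R = rho ** np.abs(np.subtract.outer(np.arange(Nt), np.arange(Nt)))
    W, _ = np.linalg.qr(rng.standard_normal((Nt, M)))  # i.i.d. codeword
    W_skewed = skew_codeword(W, R)  # columns remain orthonormal
    ```

    Applying the same map to every codeword concentrates the codebook along the dominant eigendirections of R while preserving semi-unitarity, which is the property the power allocation step relies on.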

    DMFSGD: A Decentralized Matrix Factorization Algorithm for Network Distance Prediction

    The knowledge of end-to-end network distances is essential to many Internet applications. As active probing of all pairwise distances is infeasible in large-scale networks, a natural idea is to measure a few pairs and to predict the others without actually measuring them. This paper formulates the distance prediction problem as matrix completion, where the unknown entries of an incomplete matrix of pairwise distances are to be predicted. The problem is solvable because strong correlations among network distances exist and cause the constructed distance matrix to be low rank. The new formulation circumvents the well-known drawbacks of existing approaches based on Euclidean embedding. A new algorithm, called Decentralized Matrix Factorization by Stochastic Gradient Descent (DMFSGD), is proposed to solve the network distance prediction problem. By letting network nodes exchange messages with each other, the algorithm is fully decentralized and only requires each node to collect and process local measurements, with neither explicit matrix constructions nor special nodes such as landmarks and central servers. In addition, we comprehensively compare matrix factorization and Euclidean embedding to demonstrate the suitability of the former for network distance prediction. We further study the incorporation of a robust loss function and of non-negativity constraints. Extensive experiments on various publicly available datasets of network delays show not only the scalability and the accuracy of our approach but also its usability in real Internet applications.
    Comment: submitted to IEEE/ACM Transactions on Networking on Nov. 201
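    The core update can be sketched as follows. This is a minimal, centralized illustration of matrix factorization by stochastic gradient descent on observed entries only (assuming NumPy; the actual DMFSGD algorithm distributes these updates across nodes via message exchange, and its robust-loss and non-negativity variants are omitted):

    ```python
    import numpy as np

    def mf_sgd(D, mask, rank=3, lr=0.01, lam=0.1, epochs=200, seed=0):
        """Approximate D ~= X @ Y.T using only measured entries (mask).
        Each observed (i, j) triggers a gradient step on row i of X and
        row j of Y -- in the decentralized setting, node i owns X[i], Y[i]
        and performs these updates from its local measurements."""
        rng = np.random.default_rng(seed)
        n = D.shape[0]
        X = rng.standard_normal((n, rank)) * 0.1
        Y = rng.standard_normal((n, rank)) * 0.1
        obs = np.argwhere(mask)
        for _ in range(epochs):
            rng.shuffle(obs)
            for i, j in obs:
                err = D[i, j] - X[i] @ Y[j]          # residual on one measurement
                X[i] += lr * (err * Y[j] - lam * X[i])  # regularized SGD step
                Y[j] += lr * (err * X[i] - lam * Y[j])
        return X, Y

    # toy low-rank "distance" matrix with roughly 60% of entries observed
    rng = np.random.default_rng(1)
    A = np.abs(rng.standard_normal((20, 3)))
    D = A @ A.T
    mask = rng.random(D.shape) < 0.6
    X, Y = mf_sgd(D, mask)
    # predictions for the unmeasured pairs are read off (X @ Y.T)[~mask]
    ```

    The point of the formulation is visible here: no node ever needs the full matrix D, only the rows of X and Y touched by its own measurements.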

    Issue Report Resolution Time Prediction

    Abstract: Prediction of the resolution time of an issue report has always been an important but difficult task. The primary purpose of this study is to build a model that predicts the resolution time of incoming issue reports based on past issue report data. Additional goals of the research are to determine which existing approaches to resolution time prediction yield the highest levels of accuracy, and which features of issue reports are essential for prediction. The approach chosen for building an issue resolution time prediction model was to improve currently existing models by applying additional pre-processing of the reports. The project was designed to analyse, combine, compare and improve different techniques of resolution time prediction, including k-means clustering, k-nearest-neighbour classification, Naïve Bayes classification, decision trees, random forests and others, in order to achieve the best prediction accuracy. For this research, data was collected from a repository of the Estonian company Fortumo OÜ. The data contained the actual resolution times of 2125 issues from 25 Apr 2011 to 1 Jan 2015, along with the initial time estimates made by Fortumo employees. The data indicates that around 50% of the employees' estimates fall within ±10% of the actual resolution time; in addition, 67% of the experts' estimates have an absolute error of at most 0.5 hours. The existing approaches evaluated did not improve on this baseline; on the contrary, they produced worse results: Random Forest and Ordered Logistic Regression, the best among them, still yielded predictions 12-20% worse than the experts' estimates. After improvement of the best-performing approaches, meta-information-based models achieved up to 5% better accuracy than the originally proposed models, while text-based models produced a prediction quality approximately up to 20% better than the experts' estimates.
Keywords: Machine learning, data mining, prediction, k-means, k-nearest neighbours, random forest, ordered logistic regression, Naïve Bayes classifier, latent semantic analysis, issue report, resolution time
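    Of the techniques listed above, k-nearest-neighbour prediction is the simplest to sketch. Below is a minimal illustration in plain Python, predicting a resolution time as the mean over the k nearest historical issues; the feature encoding and all values are invented for illustration and are not Fortumo's data:

    ```python
    import math

    def knn_predict(train, query, k=3):
        """Predict an issue's resolution time (hours) as the mean resolution
        time of its k nearest neighbours in a numeric feature space.
        `train` is a list of (feature_vector, resolution_hours) pairs."""
        nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
        return sum(hours for _, hours in nearest) / k

    # hypothetical features: (priority, description length / 100, comment count)
    history = [
        ((1, 0.5, 2), 0.5),
        ((1, 0.7, 3), 1.0),
        ((2, 2.0, 8), 4.0),
        ((3, 3.5, 12), 8.0),
        ((3, 4.0, 15), 10.0),
    ]
    estimate = knn_predict(history, (3, 3.8, 13), k=3)  # mean of 3 closest issues
    ```

    A real model of this kind hinges on the feature encoding and distance scaling far more than on the averaging step, which is one reason the study compares several feature sets.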

    Challenges in 3D scanning: Focusing on Ears and Multiple View Stereopsis


    Angular Momentum preserving cell-centered Lagrangian and Eulerian schemes on arbitrary grids

    We address the conservation of angular momentum for cell-centered discretizations of compressible fluid dynamics on general grids. We concentrate on the Lagrangian step, which is also sufficient for Eulerian discretization using Lagrange+Remap. Starting from the conservative equation for the angular momentum, we show that a standard Riemann solver (a nodal one in our case) can easily be extended to update the new variable. This new variable makes it possible to reconstruct all solid displacements in a cell, and is analogous to a partial Discontinuous Galerkin (DG) discretization. We detail the coupling with a second-order MUSCL extension. All numerical tests show an important enhancement of accuracy for rotation problems, and a reduction of mesh imprint for implosion problems. The generalization to the axisymmetric case is also detailed.
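    For reference, in two dimensions the conservative equation for the angular momentum takes the following standard form (generic notation; not necessarily the paper's own):

    ```latex
    \partial_t(\rho\, m) + \nabla\!\cdot(\rho\, m\, \mathbf{u})
      + \partial_x(-y\, p) + \partial_y(x\, p) = 0,
    \qquad m = x\, u_y - y\, u_x,
    ```

    where \(\rho\) is the density, \(\mathbf{u} = (u_x, u_y)\) the velocity, \(p\) the pressure, and \(\rho m\) the angular momentum density about the origin. It is this extra conservative variable that the nodal Riemann solver is extended to update.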

    Calibration of scanning laser range cameras with applications for machine vision

    Range images differ from conventional reflectance images because they give direct 3-D information about a scene. The last five years have seen a substantial increase in the use of range imaging technology in the areas of robotics, hazardous materials handling, and manufacturing. This has been fostered by a cost reduction of reliable range scanning products, resulting primarily from advanced development of computing resources. In addition, the improved performance of modern range cameras has spurred an interest in new calibrations that take account of their unconventional design. Calibration implies both modeling and a numerical technique for finding parameters within the model. Researchers often refer to spherical coordinates when modeling range cameras; spherical coordinates, however, only approximate the behavior of the cameras. We seek, therefore, a more analytical approach based on analysis of the internal scanning mechanisms of the cameras. This research demonstrates that the Householder matrix [14] is a better tool for modeling these devices. We develop a general calibration technique that is both accurate and simple to implement. The method proposed here compares target points taken from range images to the known geometry of the target. The calibration is considered complete if the two point sets can be made to match closely in a least-squares sense by iteratively modifying model parameters. The literature, fortunately, is replete with numerical algorithms suited to this task; we have selected the simplex algorithm because it is particularly well suited to solving systems with many unknown parameters. In the course of this research, we implement the proposed calibration and find that the error in the range image data can be reduced from more than 60 mm per point rms to less than 10 mm per point.
    We consider this result a success because analysis shows that the residual 10 mm error is due solely to random noise in the range values, not to the calibration. This implies that accuracy is limited only by the quality of the range measuring device inside the camera.
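    The role of the Householder matrix can be illustrated directly: it models a flat scanning mirror as an exact reflection. A minimal sketch follows (assuming NumPy; the full camera model in the thesis chains such reflections with the scanner's rotation geometry):

    ```python
    import numpy as np

    def householder(normal):
        """Householder reflection H = I - 2 n n^T (n a unit vector):
        reflects any vector about the plane through the origin with
        normal n, which is how a flat scanning mirror redirects a beam."""
        n = np.asarray(normal, dtype=float)
        n = n / np.linalg.norm(n)
        return np.eye(3) - 2.0 * np.outer(n, n)

    # mirror tilted 45 degrees: normal halfway between the z and x axes
    H = householder([1.0, 0.0, 1.0])
    beam = np.array([0.0, 0.0, 1.0])  # laser beam travelling along +z
    reflected = H @ beam              # redirected along -x
    ```

    Because H is its own inverse (reflecting twice restores the beam), chaining a few such matrices with rotations about the scan axes gives an analytical ray model whose free parameters (mirror tilts, offsets) are exactly what the simplex search adjusts.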