
    Seeing the Unobservable: Channel Learning for Wireless Communication Networks

    Wireless communication networks rely heavily on channel state information (CSI) to make informed decisions for signal processing and network operations. However, traditional CSI acquisition methods face several difficulties: pilot-aided channel training consumes a great deal of channel resources and reduces the opportunities for energy saving, while location-aided channel estimation suffers from inaccurate and insufficient location information. In this paper, we propose a novel channel learning framework that tackles these difficulties by inferring unobservable CSI from observable CSI. We formulate this framework theoretically and illustrate a special case in which the learnability of the unobservable CSI can be guaranteed. Possible applications of channel learning are then described, including cell selection in multi-tier networks, device discovery for device-to-device (D2D) communications, and end-to-end user association for load balancing. We also propose a neural-network-based algorithm for the cell selection problem in multi-tier networks. The performance of this algorithm is evaluated using a geometry-based stochastic channel model (GSCM). In settings with 5 small cells, the average cell-selection accuracy is 73%, only 3.9% lower than that of a location-aided algorithm that requires genuine location information.
    Comment: 6 pages, 4 figures, accepted by GlobeCom'1
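
    As a rough illustration of this cell-selection setup (not the authors' code), the sketch below trains a small neural network to map observable CSI features to the index of the best small cell. All specifics are assumptions: the GSCM measurements are replaced by synthetic data, and the feature dimension, labelling rule, and network shape are placeholders.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)

        # Hypothetical stand-in for GSCM-generated data: each row is the
        # observable CSI (e.g. large-scale gains to the observable tier) and
        # the label is the index of the best of 5 small cells.
        n_samples, n_observable, n_small_cells = 2000, 8, 5
        X = rng.normal(size=(n_samples, n_observable))
        # Placeholder labelling rule so the synthetic task is learnable at all:
        # the best cell depends (noisily) on a hidden map of the observables.
        W = rng.normal(size=(n_observable, n_small_cells))
        y = np.argmax(X @ W + 0.5 * rng.normal(size=(n_samples, n_small_cells)), axis=1)

        X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

        # The network infers the unobservable "best cell" from observable
        # CSI alone, with no location information.
        clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
        clf.fit(X_train, y_train)
        print("cell-selection accuracy:", clf.score(X_test, y_test))

    On real GSCM traces the inputs would be measured channel gains rather than random features; the point is only that cell selection reduces to a standard multi-class learning problem once the unobservable CSI is treated as the target.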

    "Mental Rotation" by Optimizing Transforming Distance

    The human visual system is able to recognize objects despite transformations that can drastically alter their appearance. To this end, much effort has been devoted to the invariance properties of recognition systems. Invariance can be engineered (e.g. convolutional nets), or learned from data explicitly (e.g. temporal coherence) or implicitly (e.g. by data augmentation). One idea that has not, to date, been explored is the integration of latent variables that permit a search over a learned space of transformations. Motivated by evidence that people mentally simulate transformations in space while comparing examples, so-called "mental rotation", we propose a transforming distance. Here, a trained relational model actively transforms pairs of examples so that they are maximally similar in some feature space yet respect the learned transformational constraints. We apply our method to nearest-neighbour problems on the Toronto Face Database and NORB.
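
    A minimal sketch of the transforming-distance idea, under two large assumptions: the transformation space is a fixed planar rotation group rather than a learned relational model, and the "feature space" is the raw point coordinates. A grid search over the rotation angle stands in for the paper's optimization.

        import numpy as np

        def rotate(points, theta):
            # Planar rotation; a stand-in for the learned transformation model.
            c, s = np.cos(theta), np.sin(theta)
            return points @ np.array([[c, -s], [s, c]]).T

        def transforming_distance(a, b, n_angles=360):
            # Search the transformation space for the pose of `a` that is
            # maximally similar to `b`; the remaining residual is the distance.
            thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
            return min(np.linalg.norm(rotate(a, t) - b) for t in thetas)

        def nn_classify(query, gallery, labels):
            # Nearest neighbour under the transforming distance.
            d = [transforming_distance(query, g) for g in gallery]
            return labels[int(np.argmin(d))]

        rng = np.random.default_rng(0)
        shape = rng.normal(size=(10, 2))
        gallery = [shape, rng.normal(size=(10, 2))]
        # A rotated copy of `shape` still matches "same": the search undoes
        # the transformation before the distance is measured.
        print(nn_classify(rotate(shape, 1.2), gallery, ["same", "other"]))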

    Improving Sparse Representation-Based Classification Using Local Principal Component Analysis

    Sparse representation-based classification (SRC), proposed by Wright et al., seeks the sparsest decomposition of a test sample over the dictionary of training samples, and assigns the sample to the class that contributes most to that decomposition. Because it assumes test samples can be written as linear combinations of their same-class training samples, the success of SRC depends on the size and representativeness of the training set. Our proposed classification algorithm enlarges the training set by using local principal component analysis to approximate the basis vectors of the tangent hyperplane of the class manifold at each training sample. The dictionary in SRC is replaced by a local dictionary that adapts to the test sample and includes training samples and their corresponding tangent basis vectors. We use a synthetic data set and three face databases to demonstrate that this method can achieve higher classification accuracy than SRC in cases of sparse sampling, nonlinear class manifolds, and stringent dimension reduction.
    Comment: Published in "Computational Intelligence for Pattern Recognition," editors Shyi-Ming Chen and Witold Pedrycz. The original publication is available at http://www.springerlink.co
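
    A rough sketch of this pipeline, with several assumptions: local PCA is computed as an SVD over each sample's k nearest same-class neighbours, the dictionary is augmented once globally rather than adapted per test sample, and scikit-learn's orthogonal matching pursuit stands in for whichever sparse solver the chapter uses.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def tangent_basis(X_class, anchor, k=6, d=2):
            # Local PCA: approximate the tangent hyperplane of the class
            # manifold at `anchor` by the top-d principal directions of its
            # k nearest same-class neighbours.
            idx = np.argsort(np.linalg.norm(X_class - anchor, axis=1))[:k]
            nbrs = X_class[idx]
            _, _, Vt = np.linalg.svd(nbrs - nbrs.mean(axis=0), full_matrices=False)
            return Vt[:d]

        def src_local_pca(x, X_train, y_train, n_nonzero=10, k=6, d=2):
            # Enlarged dictionary: every training sample plus its tangent
            # basis vectors, each atom keeping its class label.
            atoms, labels = [], []
            for xi, yi in zip(X_train, y_train):
                atoms.append(xi); labels.append(yi)
                for t in tangent_basis(X_train[y_train == yi], xi, k=k, d=d):
                    atoms.append(t); labels.append(yi)
            D = np.array(atoms).T
            D = D / np.linalg.norm(D, axis=0)   # unit-norm atoms
            labels = np.array(labels)
            # Sparse decomposition of the test sample over the dictionary.
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                            fit_intercept=False)
            omp.fit(D, x)
            # Assign to the class whose atoms best reconstruct the sample.
            res = {c: np.linalg.norm(x - D[:, labels == c] @ omp.coef_[labels == c])
                   for c in np.unique(y_train)}
            return min(res, key=res.get)

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(40, 20))
        y_train = np.repeat([0, 1], 20)
        x_test = X_train[3] + 0.05 * rng.normal(size=20)
        print(src_local_pca(x_test, X_train, y_train))  # expected class: 0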