
    Performance analysis for cooperative wireless communications

    Cooperative relaying has been proposed as a promising solution to mitigate the deleterious effects of fading by sending and receiving independent copies of the same signal at different nodes, and it has attracted considerable attention from both industry and academia. The purpose of this thesis is to provide an analytical performance evaluation of cooperative wireless systems under realistic conditions. To achieve this, first, the performance of amplify-and-forward (AF) relaying using pilot-aided maximum likelihood estimation is studied. Both disintegrated channel estimation (DCE) and cascaded channel estimation (CCE) are considered, and based on this analysis an optimal energy allocation is proposed. Then, the performance of AF relaying corrupted by interferers is investigated. Both randomly distributed and fixed interferers are considered: for random interferers both the number and the locations of the interferers are random, while for fixed interferers both are deterministic. Next, multihop relaying and multiple scattering channels over α-μ fading are analyzed, with and without interference. Exact results in the form of one-dimensional integrals are derived, along with approximate results that have simplified structures and closed-form expressions. Finally, a new hard decision fusion rule that combines arbitrary numbers of bits for different samples taken at different nodes is proposed. The best thresholds for the fusion rules using 2 bits, 3 bits and 4 bits are obtained through simulation, and the bit error rate (BER) for the hard fusion rule with 1 bit is provided. Numerical results are presented to verify the accuracy of our analysis and provide insights. First, they show that our optimal energy allocation methods outperform the conventional system without optimal energy allocation, with gains as large as several dB in some cases. Second, as the signal-to-interference-plus-noise ratio (SINR) increases for AF relaying with interference, the outage probability decreases for both random and fixed interferers. However, when the interference-to-noise ratio (INR) changes with the SINR fixed, the outage probability for random interferers changes correspondingly while that for fixed interferers remains almost the same. Third, our newly derived approximate expressions are shown to approximate the outage probability well in wireless multihop relaying systems and multiple scattering channels, both with and without interference. Last, our new hard decision fusion rule is shown to achieve better performance with higher energy efficiency, although there is a tradeoff between performance and energy penalty in the hard decision fusion rule.
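    As a rough illustration of the kind of outage analysis described above, the following sketch estimates the outage probability of a dual-hop amplify-and-forward link over Rayleigh fading by Monte Carlo simulation. It is not the thesis's analytical derivation; the Rayleigh model, the per-hop SNR values, and the standard end-to-end SNR expression gamma = g1*g2/(g1+g2+1) are illustrative assumptions.

```python
import numpy as np

# Minimal Monte Carlo sketch of the outage probability of a dual-hop
# amplify-and-forward (AF) link over Rayleigh fading.  The per-hop SNRs,
# the target rate, and the end-to-end SNR formula are assumptions for
# illustration, not the thesis's analytical results.

def af_outage(snr1_db=10.0, snr2_db=10.0, rate=1.0, trials=100_000, seed=None):
    rng = np.random.default_rng(seed)
    g1 = rng.exponential(10**(snr1_db / 10), trials)   # S->R instantaneous SNR (Rayleigh)
    g2 = rng.exponential(10**(snr2_db / 10), trials)   # R->D instantaneous SNR (Rayleigh)
    gamma_e2e = g1 * g2 / (g1 + g2 + 1.0)              # standard AF end-to-end SNR
    threshold = 2**(2 * rate) - 1                      # half-duplex relaying uses two time slots
    return np.mean(gamma_e2e < threshold)              # fraction of trials in outage

print(af_outage(snr1_db=15, snr2_db=15, seed=0))
```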

    Monte Carlo optimization of decentralized estimation networks over directed acyclic graphs under communication constraints

    Motivated by the vision of sensor networks, we consider decentralized estimation networks over bandwidth-limited communication links and are particularly interested in the tradeoff between the estimation accuracy and the cost of communications due to, e.g., energy consumption. We employ a class of in-network processing strategies that admits directed acyclic graph representations and yields a tractable Bayesian risk that comprises the cost of communications and an estimation error penalty. This perspective captures a broad range of possibilities for processing under network constraints and enables a rigorous design problem in the form of constrained optimization. A similar scheme and the structures exhibited by the solutions have been previously studied in the context of decentralized detection. Under reasonable assumptions, the optimization can be carried out in a message passing fashion. We adopt this framework for estimation; however, the corresponding optimization scheme involves integral operators that cannot be evaluated exactly in general. We develop an approximation framework using Monte Carlo methods and obtain particle representations and approximate computational schemes for both the in-network processing strategies and their optimization. The proposed Monte Carlo optimization procedure operates in a scalable and efficient fashion and, owing to its non-parametric nature, can produce results for any distributions provided that samples can be produced from the marginals. In addition, this approach exhibits graceful degradation of the estimation accuracy asymptotically as the communication becomes more costly, through a parameterized Bayesian risk.
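    To make the parameterized Bayesian risk concrete, the sketch below forms a particle (Monte Carlo) estimate of a risk of the form J = lambda * (communication cost) + E[(x - xhat)^2] for a toy two-node network. The Gaussian model, the one-bit message, and the fusion rule are illustrative assumptions and do not reproduce the paper's in-network processing strategies or their optimization.

```python
import numpy as np

# Hedged sketch: a particle estimate of a parameterized Bayesian risk that
# trades communication cost against squared estimation error.  The two-node
# chain, the Gaussian model, and the heuristic one-bit fusion rule are
# assumptions for illustration only.

def estimate_risk(lam=0.5, n_particles=50_000, seed=None):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)           # latent state samples (particles)
    y1 = x + rng.normal(0.0, 0.5, n_particles)      # node 1 observation
    y2 = x + rng.normal(0.0, 0.5, n_particles)      # fusion node observation

    u = (y1 > 0).astype(float)                      # node 1 transmits a single bit
    comm_cost = 1.0                                 # one bit per particle, by construction

    # the fusion node forms a crude estimate from its own observation and the bit
    xhat = 0.8 * y2 + 0.4 * (2 * u - 1)
    return lam * comm_cost + np.mean((x - xhat) ** 2)

print(estimate_risk(lam=0.5, seed=0))
```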

    Monte Carlo optimization approach for decentralized estimation networks under communication constraints

    We consider designing decentralized estimation schemes over bandwidth-limited communication links, with a particular interest in the tradeoff between the estimation accuracy and the cost of communications due to, e.g., energy consumption. We take into account two classes of in-network processing strategies that yield graph representations, modeling the sensor platforms as vertices and the communication links as edges, together with a tractable Bayesian risk that comprises the cost of transmissions and a penalty for the estimation errors. This approach captures a broad range of possibilities for “online” processing of observations as well as the constraints imposed, and enables a rigorous design setting in the form of a constrained optimization problem. Similar schemes, as well as the structures exhibited by the solutions to the design problem, have been studied previously in the context of decentralized detection. Under reasonable assumptions, the optimization can be carried out in a message passing fashion. We adopt this framework for estimation; however, the corresponding optimization schemes involve integral operators that cannot be evaluated exactly in general. We develop an approximation framework using Monte Carlo methods and obtain particle representations and approximate computational schemes for both classes of in-network processing strategies and their optimization. The proposed Monte Carlo optimization procedures operate in a scalable and efficient fashion and, owing to their non-parametric nature, can produce results for any distributions provided that samples can be produced from the marginals. In addition, this approach exhibits graceful degradation of the estimation accuracy asymptotically as the communication becomes more costly, through a parameterized Bayesian risk.
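    The accuracy-versus-bandwidth tradeoff mentioned above can be illustrated with the following toy sketch, in which a sensor quantizes its observation to b bits before sending it to a fusion node and the mean squared error grows as b shrinks. The uniform quantizer and the Gaussian model are assumptions for illustration, not the paper's design.

```python
import numpy as np

# Hedged sketch of the estimation-accuracy vs. bandwidth tradeoff: the sensor
# sends a b-bit uniformly quantized version of its observation, and the fusion
# node uses the cell center as its estimate.  All model choices are assumptions.

def mse_vs_bits(bits, n=200_000, seed=None):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)                  # parameter to estimate
    y = x + rng.normal(0.0, 0.3, n)              # sensor observation
    levels = 2 ** bits
    edges = np.linspace(-4, 4, levels + 1)       # uniform quantizer on [-4, 4]
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(y, edges) - 1, 0, levels - 1)
    xhat = centers[idx]                          # fusion node's estimate
    return np.mean((x - xhat) ** 2)

for b in (1, 2, 4, 8):
    print(b, "bits ->", round(mse_vs_bits(b, seed=0), 4))
```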

    Energy Harvesting Wireless Communications: A Review of Recent Advances

    This article summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits to transmission scheduling policies and resource allocation, medium access and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail, covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes. Comment: To appear in the IEEE Journal on Selected Areas in Communications (Special Issue: Wireless Communications Powered by Energy Harvesting and Wireless Energy Transfer).

    Binary Biometric Representation through Pairwise Adaptive Phase Quantization

    Extracting binary strings from real-valued biometric templates is a fundamental step in template compression and protection systems, such as fuzzy commitment, fuzzy extractor, secure sketch, and helper data systems. Quantization and coding are the straightforward way to extract binary representations from arbitrary real-valued biometric modalities. In this paper, we propose a pairwise adaptive phase quantization (APQ) method, together with a long-short (LS) pairing strategy, which aims to maximize the overall detection rate. Experimental results on the FVC2000 fingerprint and the FRGC face databases show reasonably good verification performance.
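    A minimal sketch of the pairwise phase quantization idea is given below: two real-valued features are treated as a point in the plane, and the angle of that point is quantized into 2**bits sectors, yielding `bits` binary symbols per pair. The naive adjacent-feature pairing here is an assumption; the paper's long-short (LS) pairing strategy and adaptive sector placement are not reproduced.

```python
import numpy as np

# Hedged sketch of pairwise phase quantization: each pair of real features is
# mapped to a phase angle, and the angle is quantized into 2**bits sectors.
# The pairing rule below is a simplistic assumption, not the paper's LS strategy.

def phase_quantize(features, bits=2):
    features = np.asarray(features, dtype=float)
    pairs = features[: len(features) // 2 * 2].reshape(-1, 2)   # adjacent-feature pairing
    angles = np.arctan2(pairs[:, 1], pairs[:, 0])               # phase in (-pi, pi]
    sectors = np.floor((angles + np.pi) / (2 * np.pi) * 2**bits).astype(int)
    sectors = np.clip(sectors, 0, 2**bits - 1)
    # unpack each sector index into `bits` bits (MSB first) to form the binary string
    return np.array([(s >> k) & 1 for s in sectors for k in range(bits - 1, -1, -1)])

template = np.random.default_rng(0).normal(size=8)
print(phase_quantize(template, bits=2))
```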

    An Efficient Reconfigurable Architecture for Fingerprint Recognition

    Fingerprint identification is an efficient biometric technique for authenticating human beings in real-time big data analytics. In this paper, we propose an efficient Finite State Machine (FSM) based reconfigurable architecture for fingerprint recognition. The fingerprint image is resized, and the Compound Linear Binary Pattern (CLBP) is applied to it, followed by a histogram to obtain histogram CLBP features. Discrete Wavelet Transform (DWT) level-2 features are obtained by the same methodology. The novel CLBP matching score is computed using the histogram CLBP features of the test image and of the fingerprint images in the database. Similarly, the DWT matching score is computed using the DWT features of the test image and of the fingerprint images in the database. Further, the CLBP and DWT matching scores are fused with an arithmetic equation using an improvement factor. The performance parameters, such as TSR (Total Success Rate), FAR (False Acceptance Rate), and FRR (False Rejection Rate), are computed from the fusion scores with a correlation matching technique on the FVC2004 DB3 database. The proposed fusion-based VLSI architecture is synthesized on a Virtex xc5vlx30T-3 FPGA board using a Finite State Machine, resulting in optimized parameters.
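    A hedged sketch of the matching pipeline described above follows, using plain LBP histograms as a stand-in for the paper's Compound Linear Binary Pattern (CLBP) and two levels of Haar averaging as a stand-in for the level-2 DWT features; the fusion weight alpha plays the role of the improvement factor, and its value is an assumption.

```python
import numpy as np

# Hedged sketch of texture-feature matching with weighted score fusion.
# Plain LBP and Haar averaging are stand-ins for CLBP and level-2 DWT;
# the fusion weight alpha is an illustrative assumption.

def lbp_histogram(img):
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    neighbors = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                 img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = sum((n >= c).astype(int) << k for k, n in enumerate(neighbors))
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()                       # normalized 256-bin histogram

def haar_ll2(img):
    ll = np.asarray(img, dtype=float)
    for _ in range(2):                             # two levels of 2x2 averaging (LL band)
        ll = 0.25 * (ll[::2, ::2] + ll[1::2, ::2] + ll[::2, 1::2] + ll[1::2, 1::2])
    return ll.ravel() / np.linalg.norm(ll)

def fused_score(test, enrolled, alpha=0.6):
    s_lbp = 1.0 - 0.5 * np.abs(lbp_histogram(test) - lbp_histogram(enrolled)).sum()
    s_dwt = float(np.dot(haar_ll2(test), haar_ll2(enrolled)))
    return alpha * s_lbp + (1.0 - alpha) * s_dwt   # weighted arithmetic fusion

rng = np.random.default_rng(1)
a, b = rng.integers(0, 256, (2, 64, 64))
print(round(fused_score(a, b), 3))
```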