
    Spectrum and energy efficient multi-antenna spectrum sensing for green UAV communication

    Unmanned Aerial Vehicle (UAV) communication is a promising technology that provides swift, flexible, on-demand wireless connectivity for devices without infrastructure support. With recent developments in UAVs, spectrum- and energy-efficient green UAV communication has become crucial. To address this issue, a Spectrum Sharing Policy (SSP) is introduced to support green UAV communication. Spectrum sensing in the SSP must be carefully formulated to control interference to primary users and ground communications. In this paper, we propose spectrum sensing for opportunistic spectrum access in green UAV communication. Unlike most existing works, we focus on the problem of sensing randomly arriving primary signals in the presence of non-Gaussian noise/interference, and propose a novel p-norm-based spectrum sensing scheme to improve spectrum utilization efficiency. First, we construct the p-norm decision statistic under the assumption that the random signal arrivals follow a Poisson process. Then, we derive approximate analytical expressions for the false-alarm and detection probabilities using the central limit theorem. Simulation results illustrate the validity and superiority of the proposed scheme when the primary signals are corrupted by additive non-Gaussian noise and arrive randomly during spectrum sensing.
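
As a rough illustration of the core idea, a p-norm decision statistic can be sketched in a few lines of NumPy. This is a simplified stand-in, not the paper's scheme: the Student's t noise, the toy Gaussian signal, the sample size, and the choice p = 1.5 are all illustrative assumptions, and the Poisson arrival model and analytical threshold expressions are omitted.

```python
import numpy as np

def p_norm_statistic(x, p=1.5):
    """p-norm decision statistic: the sample mean of |x|^p.
    p = 2 recovers the classical energy detector; 1 <= p < 2 de-emphasizes
    the impulsive outliers typical of heavy-tailed non-Gaussian noise."""
    return np.mean(np.abs(x) ** p)

rng = np.random.default_rng(0)
n = 4096
noise = rng.standard_t(df=3, size=n)              # heavy-tailed noise only (H0)
signal = np.sqrt(2.0) * rng.standard_normal(n)    # toy primary-user signal (H1)

t_h0 = p_norm_statistic(noise)
t_h1 = p_norm_statistic(noise + signal)
print(t_h0, t_h1)   # the statistic grows when a signal is present
```

In a full detector the statistic would be compared against a threshold derived from the false-alarm analysis; here only the statistic itself is shown.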

    Spectrum Sensing in Cognitive Radio: Bootstrap and Sequential Detection Approaches

    In this thesis, advanced techniques for spectrum sensing in cognitive radio are addressed. The problem of small sample size in spectrum sensing is considered, and resampling-based methods are developed for local and collaborative spectrum sensing. A method to deal with unknown parameters in sequential testing for spectrum sensing is proposed. Moreover, techniques are developed for multiband sensing, spectrum sensing at low signal-to-noise ratio, and two-bit hard decision combining for collaborative spectrum sensing. The assumption of a large sample size in spectrum sensing often raises a problem when the devised test statistic is implemented with a small sample size: the asymptotic approximation for the distribution of the test statistic under the null hypothesis fails to model the true distribution, so the probability of false alarm or missed detection is poor. In this respect, we propose to use bootstrap methods, where the distribution of the test statistic is estimated by resampling the observed data. For local spectrum sensing, we propose the null-resampling bootstrap test, which exhibits better performance than the common approaches of the pivot bootstrap test and the asymptotic test. For collaborative spectrum sensing, a resampling-based Chair-Varshney fusion rule is developed. At the cognitive radio user, a combination of independent resampling and moving-block resampling is proposed to estimate the local probability of detection; at the fusion center, the parametric bootstrap is applied when the number of cognitive radio users is large. The sequential probability ratio test (SPRT) is designed to test a simple hypothesis against a simple alternative. However, the more realistic scenario in spectrum sensing involves composite hypotheses, where the parameters are not uniquely defined.
In this thesis, we generalize the SPRT to cope with composite hypotheses, wherein the thresholds are updated adaptively using the parametric bootstrap. The resulting test avoids the asymptotic assumption made in earlier works. The proposed bootstrap-based SPRT minimizes decision errors induced by employing maximum likelihood estimators in the generalized SPRT, and hence achieves the sensing objective. The average sample number (ASN) of the proposed method is better than that of the conventional method, which relies on the asymptotic assumption. Furthermore, we propose a mechanism to reduce the computational cost incurred by the bootstrap, using a convex combination of the latest K bootstrap distributions. The reduction in computational cost does not significantly increase the ASN, while the protection against decision errors is even better. This work is motivated by the fact that the SPRT yields a smaller sensing time than its fixed-sample-size counterpart; a smaller sensing time is preferable for improving the throughput of the cognitive radio network. Moreover, multiband spectrum sensing is addressed using multiple testing procedures. In the fixed-sample-size context, an adaptive Benjamini-Hochberg procedure is suggested, since it balances the familywise error rate and the familywise missed detection better than the conventional Benjamini-Hochberg procedure. For the SPRT, we devise a method based on ordered stopping times; the results show that it attains smaller ASNs than the Bonferroni procedure. Another issue in spectrum sensing is detecting a signal when the signal-to-noise ratio is very low. In this case, we derive a locally optimum detector based on the assumption that the underlying noise is Student's t-distributed; the resulting scheme outperforms the energy detector in all scenarios. Last but not least, we extend hard decision combining in collaborative spectrum sensing to include a quality information bit, with the multiple thresholds determined by a distance measure criterion. Hard decision combining with quality information performs better than conventional hard decision combining.
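
The small-sample resampling idea at the heart of the thesis can be sketched generically: estimate the null distribution of the test statistic by resampling the observed data and set the threshold at an empirical quantile. This is a plain bootstrap-threshold sketch under illustrative assumptions (energy statistic, Gaussian noise, 64 samples), not the thesis's null-resampling or pivot bootstrap test.

```python
import numpy as np

def bootstrap_threshold(x, statistic, alpha=0.05, n_boot=2000, seed=None):
    """Set a detection threshold from the data itself: resample x with
    replacement, recompute the statistic on each resample, and take the
    (1 - alpha) empirical quantile of the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    boot = [statistic(rng.choice(x, size=len(x), replace=True))
            for _ in range(n_boot)]
    return float(np.quantile(boot, 1.0 - alpha))

energy = lambda s: float(np.mean(s ** 2))

rng = np.random.default_rng(1)
noise = rng.standard_normal(64)   # small sample: asymptotic thresholds are unreliable
thr = bootstrap_threshold(noise, energy, alpha=0.05, seed=2)
print(energy(noise), thr)
```

The appeal for small samples is that the threshold adapts to the observed data rather than relying on a central-limit approximation that may not yet hold.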

    Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications where perturbations appear both in the data vector and in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and related dictionary learning) challenge when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also allow for perturbations in the regression matrix of the least-absolute shrinkage and selection operator (Lasso), and endow TLS approaches with the ability to cope with sparse, under-determined "errors-in-variables" models. Interesting generalizations can further exploit prior knowledge on the perturbations to obtain novel weighted and structured S-TLS solvers. Analysis and simulations demonstrate the practical impact of S-TLS in calibrating the mismatch effects of contemporary grid-based approaches to cognitive radio sensing and robust direction-of-arrival estimation using antenna arrays. Comment: 30 pages, 10 figures, submitted to IEEE Transactions on Signal Processing.
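
For reference, the classical TLS criterion that S-TLS builds on admits a closed-form solution via the SVD of the augmented matrix [A | b]. The sketch below shows plain TLS only, on an illustrative errors-in-variables model; the paper's sparsity-cognizant S-TLS solvers are not reproduced here.

```python
import numpy as np

def tls(A, b):
    """Total least-squares solution of A x ~ b via the SVD of [A | b];
    unlike ordinary least squares, perturbations are allowed in A as well
    as in b.  Classical TLS only -- S-TLS adds a sparsity penalty on x."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                 # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]     # from [A | b] v = 0: x = -v_A / v_b

rng = np.random.default_rng(0)
A_true = rng.standard_normal((100, 3))
x_true = np.array([1.0, -2.0, 0.5])
A = A_true + 0.01 * rng.standard_normal((100, 3))     # perturbed regression matrix
b = A_true @ x_true + 0.01 * rng.standard_normal(100) # noisy data vector
x_hat = tls(A, b)
print(np.round(x_hat, 2))
```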

    Spectrum sensing for cognitive radio and radar systems

    The use of the radio frequency spectrum is increasing at a rapid rate. Reliable and efficient operation in a crowded radio spectrum requires innovative solutions and techniques. Future wireless communication and radar systems should be aware of their surrounding radio environment in order to adapt their operation to the prevailing situation. Spectrum sensing techniques such as detection, waveform recognition, and specific emitter identification are key sources of information for characterizing the surrounding radio environment, extracting valuable information, and consequently adjusting transceiver parameters to facilitate flexible, efficient, and reliable operation. In this thesis, spectrum sensing algorithms for cognitive radios and radar intercept receivers are proposed. Single-user and collaborative cyclostationarity-based detection algorithms are developed: multicycle detectors, as well as robust nonparametric fixed-sample-size and sequential detectors based on the spatial sign cyclic correlation. Asymptotic distributions of the test statistics under the null hypothesis are established. For collaborative detection, a censoring scheme in which only informative test statistics are transmitted to the fusion center is proposed. The proposed detectors and methods have the following benefits: employing cyclostationarity enables distinction among different systems, collaboration mitigates the effects of shadowing and multipath fading, using multiple strong cyclic frequencies improves performance, robust detection provides reliable performance in heavy-tailed non-Gaussian noise, sequential detection reduces the average detection time, and censoring improves energy efficiency. In addition, a radar waveform recognition system for classifying common pulse compression waveforms is developed.
The proposed supervised classification system assigns an intercepted radar pulse to one of eight classes based on its pulse compression waveform: linear frequency modulation, Costas frequency codes, binary codes, and the Frank, P1, P2, P3, and P4 polyphase codes. A robust M-estimation based method for radar emitter identification is proposed as well: a common modulation profile is estimated from a group of intercepted pulses and used to identify the radar emitter. The M-estimation based approach provides robustness against preprocessing errors and deviations from the assumed noise model.
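
A minimal sketch of the spatial sign cyclic correlation idea: normalize complex samples to unit modulus (the robustifying step), then estimate the cyclic autocorrelation at a known cyclic frequency. The BPSK-like signal, the 8-sample symbol period (giving cyclic frequency 1/8), and the chosen lag are illustrative assumptions, not the thesis's exact test statistics.

```python
import numpy as np

def spatial_sign(x):
    """Project complex samples onto the unit circle; this nonparametric
    normalization is what makes the detector robust to heavy-tailed noise."""
    out = np.zeros_like(x)
    nz = x != 0
    out[nz] = x[nz] / np.abs(x[nz])
    return out

def cyclic_correlation(x, alpha, tau):
    """Estimate the cyclic autocorrelation at cyclic frequency alpha, lag tau."""
    n = np.arange(len(x) - tau)
    return np.mean(x[:len(x) - tau] * np.conj(x[tau:]) * np.exp(-2j * np.pi * alpha * n))

rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=256)
sig = np.repeat(bits, 8).astype(complex)    # BPSK, 8 samples/symbol -> cyclic freq 1/8
x = sig + 0.3 * (rng.standard_normal(2048) + 1j * rng.standard_normal(2048))
w = rng.standard_normal(2048) + 1j * rng.standard_normal(2048)   # noise only

t_sig = abs(cyclic_correlation(spatial_sign(x), alpha=1 / 8, tau=4))
t_noise = abs(cyclic_correlation(spatial_sign(w), alpha=1 / 8, tau=4))
print(t_sig, t_noise)
```

Because the statistic is evaluated at a signal-specific cyclic frequency, it distinguishes the target system from other transmissions as well as from stationary noise.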

    Signal Detection: An Overview of Parametric Models Using the Neyman-Pearson Criterion

    Signal detection is implemented in many sophisticated signal processing systems; for example, in the signal processing subsystem of a surveillance radar it serves to detect and track targets. One recent application of signal detection is spectrum sensing in Cognitive Radio. Signal detection can be defined as a binary hypothesis testing problem: deciding between two conditions, noise only or signal absent (the null hypothesis), and signal present (the alternative hypothesis). Since signal detection theory is a wide area, this paper focuses on the parametric approach using the Neyman-Pearson theorem. The two hypotheses are modeled by random variables having the same distribution but different parameters. Derivations of the test statistics (detectors) are shown for two scenarios: partially known and fully known distributions. The simulation section shows that the analytical performance of the detectors closely matches Monte Carlo simulations. Keywords: signal detection, Neyman-Pearson, hypothesis testing, spectrum sensing, radar.
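
A minimal worked instance of the Neyman-Pearson setup described above: for a known positive DC level in white Gaussian noise, the likelihood ratio test reduces to thresholding the sample mean, with the threshold set by the target false-alarm probability. The signal level, noise variance, and sample size below are illustrative, not taken from the paper.

```python
import numpy as np
from statistics import NormalDist

def np_detector(x, sigma, pfa):
    """Neyman-Pearson detector for a known positive DC level in white
    Gaussian noise of known variance sigma^2.  The likelihood ratio test
    reduces to comparing the sample mean against a threshold chosen so
    that the false-alarm probability equals pfa:
        gamma = sigma / sqrt(N) * Q^{-1}(pfa)."""
    gamma = sigma / np.sqrt(len(x)) * NormalDist().inv_cdf(1.0 - pfa)
    return bool(np.mean(x) > gamma), gamma

rng = np.random.default_rng(0)
sigma, A, n = 1.0, 0.5, 400
h0 = sigma * rng.standard_normal(n)        # null hypothesis: noise only
h1 = A + sigma * rng.standard_normal(n)    # alternative: signal present
d0, gamma = np_detector(h0, sigma, pfa=0.01)
d1, _ = np_detector(h1, sigma, pfa=0.01)
print(d0, d1, round(gamma, 4))
```

Note that the threshold depends only on the null distribution, which is exactly the Neyman-Pearson design principle: fix the false-alarm rate, then the test maximizes detection probability.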

    Spectrum sensing for cognitive radios: Algorithms, performance, and limitations

    Inefficient use of the radio spectrum is becoming a serious problem as more and more wireless systems are developed to operate in crowded spectrum bands. Cognitive radio offers a novel solution to the underutilization problem by allowing secondary usage of spectrum resources along with highly reliable communication. Spectrum sensing is a key enabler for cognitive radios: it identifies idle spectrum and provides awareness of the radio environment, both of which are essential for efficient secondary use of the spectrum and coexistence of different wireless systems. The focus of this thesis is on local and cooperative spectrum sensing algorithms. Local sensing algorithms are proposed for detecting orthogonal frequency division multiplexing (OFDM) based primary user (PU) transmissions using their autocorrelation property. The proposed autocorrelation detectors are simple and computationally efficient. The algorithms are then extended to cooperative sensing, where multiple secondary users (SUs) collaborate to detect a PU transmission. For cooperation, each SU sends a local decision statistic, such as a log-likelihood ratio (LLR), to the fusion center (FC), which makes the final decision. Cooperative sensing algorithms are also proposed using sequential and censoring methods: sequential detection minimizes the average detection time, while the censoring scheme improves energy efficiency. The performance of the proposed algorithms is studied through rigorous theoretical analysis and extensive simulations. The distributions of the decision statistic at the SU and the test statistic at the FC are established under either hypothesis. The effects of quantization and reporting channel errors are then considered.
The main aim in studying the effects of quantization and channel errors on cooperative sensing is to provide a framework for designers to choose operating values for the number of quantization bits and the target bit error probability (BEP) of the reporting channel such that the performance loss caused by these non-idealities is negligible. A performance limitation in the form of a BEP wall is then established for cooperative sensing schemes in the presence of reporting channel errors. The BEP wall phenomenon is important because it provides the feasible values of the reporting channel BEP for designing communication schemes between the SUs and the FC.
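
The autocorrelation property exploited by the local detectors comes from the OFDM cyclic prefix: the first samples of each transmitted symbol repeat its last samples, so the received signal correlates with itself at a lag equal to the useful symbol length. A hedged sketch of such a statistic follows, with illustrative symbol and prefix lengths not tied to any particular standard or to the thesis's exact detector.

```python
import numpy as np

def cp_statistic(x, nd):
    """Normalized autocorrelation at lag nd (the useful OFDM symbol length);
    the cyclic prefix makes x[n] correlate with x[n + nd] when a PU is active."""
    num = np.abs(np.sum(x[:-nd] * np.conj(x[nd:])))
    den = np.sum(np.abs(x) ** 2)
    return num / den

rng = np.random.default_rng(0)
nd, ncp, nsym = 64, 16, 50
body = rng.standard_normal((nsym, nd)) + 1j * rng.standard_normal((nsym, nd))
ofdm = np.hstack([body[:, -ncp:], body]).ravel()   # prepend cyclic prefix, serialize
noise = 0.5 * (rng.standard_normal(ofdm.size) + 1j * rng.standard_normal(ofdm.size))

t_h1 = cp_statistic(ofdm + noise, nd)   # PU transmission present
t_h0 = cp_statistic(noise, nd)          # noise only
print(t_h1, t_h0)
```

The statistic requires no knowledge of the transmitted data, only of the symbol timing parameters, which is what keeps the detector simple and computationally efficient.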

    A Simulation Study on Interference in CSMA/CA Ad-Hoc Networks Using Point Processes

    The performance of wireless ad-hoc networks is essentially degraded by co-channel interference. Since the interference at a receiver crucially depends on the distribution of the interfering transmitters, a mathematical technique is needed to model the network geometry in which a number of nodes are randomly spread; this is why a stochastic geometry approach is required. In this thesis, we study stochastic point processes such as the Poisson Point Process, the Matérn Point Process, and the Simple Sequential Inhibition Point Process. The interference distributions resulting from the different point processes are compared, and limitations of the point processes in CSMA/CA networks, such as under-estimation of the node density, are discussed. Moreover, we show that the interference distribution estimated with Network Simulator 2 differs depending on the point process. Although there is a gap between the distributions from the point processes and those from the simulator due to active factors, they all exhibit a similar shape: a peak followed by an asymmetric, more or less heavy tail. This observation has motivated characterizing the distribution of the aggregate interference with the Log-normal, Alpha-stable, and Weibull distributions as a family of heavy-tailed distributions. Hypothesis tests have mostly rejected the null assumption that the interference distribution produced by the simulator is a random sample from these heavy-tailed distributions, except for the Alpha-stable distribution at high density. Nevertheless, the test statistics systematically agree on the choice of the best approximation, and the log-probability process makes it possible to reliably select the heavy-tailed distribution most similar to the empirical data as the node density varies.
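
A minimal sketch of the stochastic geometry viewpoint: draw one realization of a homogeneous Poisson Point Process of transmitters and sum their path-loss-weighted powers at the origin. The node density, annulus radii, and path-loss exponent are illustrative assumptions; the Matérn and Simple Sequential Inhibition processes and the NS-2 comparison are beyond this sketch.

```python
import numpy as np

def ppp_interference(lam, radius, alpha=4.0, rmin=1.0, seed=None):
    """One realization of the aggregate interference at the origin from a
    homogeneous Poisson Point Process of transmitter density lam on the
    annulus [rmin, radius], with power-law path loss r^(-alpha) and unit
    transmit power per node."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam * np.pi * (radius**2 - rmin**2))
    # radii of points uniform on the annulus: r = sqrt(U), U ~ Unif[rmin^2, radius^2]
    r = np.sqrt(rng.uniform(rmin**2, radius**2, size=n))
    return float(np.sum(r ** (-alpha)))

samples = np.array([ppp_interference(0.01, 50.0, seed=s) for s in range(500)])
print(samples.mean(), np.median(samples))   # right-skewed: a few near interferers dominate
```

The pronounced right skew of the resulting samples is exactly what motivates fitting heavy-tailed families such as the Log-normal, Alpha-stable, and Weibull distributions.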

    Real-time change-point detection under false discovery rate and communication constraints

    In a quickest detection problem, the objective is to detect abrupt changes in a stochastic sequence as quickly as possible while limiting the rate of false alarms. The development of algorithms that, after each observation, decide either to stop and declare that a change has happened or to continue monitoring has been an active line of research in mathematical statistics. The algorithms seek to optimally balance the inherent trade-off between the average detection delay in declaring a change and the likelihood of declaring a change prematurely. Change-point detection methods have applications in numerous domains, including monitoring the environment or the radio spectrum, target detection, financial markets, and others. Classical quickest detection theory focuses on settings where only a single data stream is observed. In modern applications, facilitated by developments in sensing technology, one may be tasked with monitoring multiple streams of data for changes simultaneously. Wireless sensor networks and mobile phones are examples of technologies where devices sense their local environment and transmit data sequentially to a common fusion center (FC) or cloud for inference. When performing quickest detection tasks on multiple data streams in parallel, the classical tools of quickest detection theory, which focus on controlling the false alarm probability, may become insufficient. Instead, controlling the false discovery rate (FDR), the expected proportion of false discoveries (false alarms) among all discoveries, has recently been proposed as a more useful and scalable error criterion. In this thesis, novel methods and theory related to quickest detection in multiple parallel data streams are presented. The methods aim to minimize detection delay while controlling the FDR. In addition, scenarios are considered where not all of the devices communicating with the FC can remain operational and transmitting at all times.
The FC must choose the subset of data streams from which it receives observations at a given time instant. Intelligently choosing which devices to turn on and off may extend the devices' battery life, which can be important in real-life applications, while affecting the detection performance only slightly. Numerical simulations demonstrate that the performance of the proposed methods is superior to existing approaches. Additionally, the topic of multiple hypothesis testing in spatial domains is briefly addressed. In a multiple hypothesis testing problem, one tests multiple null hypotheses at once while trying to control a suitable error criterion, such as the FDR. In a spatial multiple hypothesis problem, each tested hypothesis corresponds to, e.g., a geographical location, and the non-null hypotheses may appear in spatially localized clusters. It is demonstrated that a Bayesian approach accounting for the spatial dependency between the hypotheses can greatly improve testing accuracy.
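
The single-stream building block of quickest detection is Page's CUSUM procedure, sketched below for a known Gaussian mean shift. This is classical background rather than the thesis's FDR-controlled multi-stream method; the change time, shift size, and threshold are illustrative assumptions.

```python
import numpy as np

def cusum(x, mu0, mu1, sigma, h):
    """Page's CUSUM for a shift in mean from mu0 to mu1 in i.i.d. Gaussian
    data: accumulate the log-likelihood ratio of each sample, clip the sum
    at zero, and declare a change once it exceeds the threshold h."""
    s = 0.0
    for n, xn in enumerate(x):
        llr = (mu1 - mu0) / sigma**2 * (xn - 0.5 * (mu0 + mu1))
        s = max(0.0, s + llr)
        if s > h:
            return n          # stopping index: change declared here
    return None               # no change declared within the sequence

rng = np.random.default_rng(0)
pre = rng.standard_normal(200)            # in control: mean 0
post = 1.0 + rng.standard_normal(200)     # change at index 200: mean 1
stop = cusum(np.concatenate([pre, post]), mu0=0.0, mu1=1.0, sigma=1.0, h=10.0)
print(stop)
```

Raising h lowers the false-alarm rate at the cost of a longer detection delay, which is precisely the delay-versus-false-alarm trade-off the abstract describes; the multi-stream FDR methods of the thesis manage this trade-off across many such statistics at once.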