
    A Novel Detection Scheme with Multiple Observations for Sparse Signal Based on Likelihood Ratio Test with Sparse Estimation

    Recently, the problem of detecting unknown and arbitrary sparse signals has attracted much attention from researchers in various fields. However, significant difficulties and challenges remain, since the key information is contained in only a small fraction of the signal and no prior information is available. In this paper, we consider a more general and practical scenario of multiple observations with no prior information except for the sparsity of the signal. A new detection scheme, referred to as the likelihood ratio test with sparse estimation (LRT-SE), is presented. Under the Neyman-Pearson testing framework, LRT-SE estimates the unknown signal by employing the l1-minimization technique from compressive sensing theory. The detection performance of LRT-SE is analyzed in terms of error probabilities in the finite-size regime and Chernoff consistency in the high-dimensional regime. The error exponent is introduced to describe the decay rate of the error probability as the number of observations grows. Finally, these properties of LRT-SE are demonstrated on synthetic sparse signals and on sparse signals from real satellite telemetry data. The results show that the proposed detection scheme performs very close to the optimal detector.
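    The core idea of the abstract above can be sketched in toy form: obtain a sparse estimate of the unknown signal via l1 minimization, then plug it into a likelihood-ratio-style statistic. The sketch below is a minimal illustration under assumed simplifications (identity sensing matrix, ISTA solver, arbitrary regularization and signal parameters); it is not the paper's actual LRT-SE algorithm.

```python
import numpy as np

def soft_threshold(v, lam):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_estimate(y, lam=1.0, n_iter=50):
    # Iterative soft-thresholding (ISTA) for min_x 0.5*||y - x||^2 + lam*||x||_1,
    # with an identity sensing matrix as a simplification of the paper's
    # l1-minimization step.
    x = np.zeros_like(y)
    for _ in range(n_iter):
        x = soft_threshold(x + (y - x), lam)
    return x

def lrt_se_statistic(y, lam=1.0):
    # GLRT-style statistic: correlate the observation with the sparse estimate.
    return float(y @ sparse_estimate(y, lam))

rng = np.random.default_rng(0)
n = 256
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = 5.0           # unknown sparse signal
t0 = lrt_se_statistic(rng.standard_normal(n))      # H0: noise only
t1 = lrt_se_statistic(x + rng.standard_normal(n))  # H1: signal present
print(t0, t1)  # the statistic separates the two hypotheses
```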

    Stochastic resonance in binary composite hypothesis-testing problems in the Neyman-Pearson framework

    Performance of some suboptimal detectors can be enhanced by adding independent noise to their inputs via the stochastic resonance (SR) effect. In this paper, the effects of SR are studied for binary composite hypothesis-testing problems. A Neyman-Pearson framework is considered, and the maximization of detection performance under a constraint on the maximum probability of false alarm is studied. The detection performance is quantified in terms of the sum, the minimum, and the maximum of the detection probabilities corresponding to possible parameter values under the alternative hypothesis. Sufficient conditions under which detection performance can or cannot be improved are derived for each case. Also, a statistical characterization of optimal additive noise is provided, and the resulting false-alarm probabilities and bounds on detection performance are investigated. In addition, optimization-theoretic approaches to obtaining the probability distribution of optimal additive noise are discussed. Finally, a detection example is presented to investigate the theoretical results. © 2012 Elsevier Inc. All rights reserved.
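    A toy Monte Carlo experiment illustrates the stochastic resonance effect described above: a hard-limiter detector whose threshold sits above a weak DC signal is useless on its own, but adding independent noise lets the signal modulate the threshold-crossing rate. All parameters (signal level, threshold, noise variance, count threshold) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 100, 2000
A, tau, gamma = 0.3, 1.0, 20   # weak DC signal, limiter threshold (tau > A), count threshold

def detect_rate(added_noise_std, signal):
    # One-bit sensors fire when sample + added noise crosses tau;
    # decide H1 when more than gamma of the N sensors fire.
    x = signal + added_noise_std * rng.standard_normal((trials, N))
    counts = (x > tau).sum(axis=1)
    return float((counts > gamma).mean())

pd_no_noise = detect_rate(0.0, A)    # signal never crosses tau: detector blind
pfa_no_noise = detect_rate(0.0, 0.0)
pd_sr = detect_rate(1.0, A)          # added noise makes the weak signal visible
pfa_sr = detect_rate(1.0, 0.0)
print(pfa_no_noise, pd_no_noise, pfa_sr, pd_sr)
```

    Without added noise both rates are zero (the detector never fires); with unit-variance noise the detection probability rises well above the false-alarm probability.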

    Asynchronous device detection for cognitive device-to-device communications

    Dynamic spectrum sharing will facilitate interference coordination in device-to-device (D2D) communications. In the absence of network-level coordination, timing synchronization among D2D users is unavailable, leading to inaccurate channel state estimation and device detection, especially in time-varying fading environments. In this study, we design an asynchronous device detection/discovery framework for cognitive-D2D applications, which acquires timing drifts and dynamic fading channels while directly detecting the existence of a proximate D2D device (e.g. a primary user). To model and analyze this, a new dynamical system model is established, in which the unknown timing deviation follows a random process while the fading channel is governed by a discrete-state Markov chain. To cope with the resulting mixed estimation and detection (MED) problem, a novel sequential estimation scheme is proposed, using concepts from Bayesian statistical inference and random finite sets. By tracking the unknown states (i.e. varying timing deviations and fading gains) and suppressing the link uncertainty, the proposed scheme effectively enhances detection performance. The general framework, as a complement to the network-aided case with coordinated signaling, provides a foundation for the development of flexible D2D communications with proximity-based spectrum sharing.
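    The discrete-state Markov fading model described above lends itself to a recursive Bayesian (HMM forward) filter. The sketch below is a simplified stand-in for the paper's sequential scheme: it tracks only a two-state fading gain (no timing drift, no random finite sets), and every parameter value is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
gains = np.array([0.3, 1.0])             # assumed deep-fade and good-channel gains
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # Markov transition probabilities
sigma, s, T = 0.5, 1.0, 200              # noise std, pilot amplitude, horizon

# Simulate a fading-state trajectory and observations y_k = gains[state_k]*s + n_k.
states = [0]
for _ in range(T - 1):
    states.append(rng.choice(2, p=P[states[-1]]))
states = np.array(states)
y = gains[states] * s + sigma * rng.standard_normal(T)

# HMM forward filter: predict through the chain, update with the likelihood.
belief = np.array([0.5, 0.5])
estimates = []
for yk in y:
    belief = P.T @ belief                                      # prediction step
    belief = belief * np.exp(-0.5 * ((yk - gains * s) / sigma) ** 2)  # update
    belief = belief / belief.sum()
    estimates.append(int(belief.argmax()))
accuracy = float((np.array(estimates) == states).mean())
print(accuracy)   # MAP state-tracking accuracy
```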

    Distributed Inference and Learning with Byzantine Data

    Get PDF
    We are living in an increasingly networked world with sensing networks of varying shapes and sizes: the network often comprises several tiny devices (or nodes) communicating with each other via different topologies. To make the problem even more complicated, the nodes in the network can be unreliable due to a variety of reasons: noise, faults, and attacks, thus providing corrupted data. Although statistical inference has long been an active area of research, distributed learning and inference in a networked setup with potentially unreliable components has only gained attention recently. The emergence of the big and dirty data era demands new distributed learning and inference solutions to tackle the problem of inference with corrupted data. Distributed inference networks (DINs) consist of a group of networked entities that acquire observations regarding a phenomenon of interest (POI) and collaborate with other entities in the network by sharing their inferences via different topologies to make a global inference. The central goal of this thesis is to analyze the effect of corrupted (or falsified) data on the inference performance of DINs and to design robust strategies that ensure reliable overall performance for several practical network architectures. Specifically, the inference (or learning) process can be detection, estimation, or classification, and the topology of the system can be parallel, hierarchical, or fully decentralized (peer to peer). Note that the corrupted-data model may seem similar to the scenario where local decisions are transmitted over a Binary Symmetric Channel (BSC) with a certain crossover probability; however, there are fundamental differences. Over the last three decades, the research community has extensively studied the impact of transmission channels or faults on distributed detection systems and related problems, due to their importance in several applications.
However, the corrupted (Byzantine) data models considered in this thesis are philosophically different from the BSC or faulty-sensor cases. Byzantines are intentional and intelligent; therefore, they can optimize over the data corruption parameters. Thus, in contrast to channel-aware detection, both the FC and the Byzantines can optimize their utility by choosing their actions based on knowledge of their opponent's behavior. The study of these practically motivated scenarios in the presence of Byzantines is of utmost importance, and is missing from the channel-aware detection and fault-tolerant detection literature. This thesis advances the distributed inference literature by providing fundamental limits of distributed inference with Byzantine data and provides optimal countermeasures (using the insights provided by these fundamental limits) from a network designer's perspective. Note that the analysis of problems involving strategic interaction between Byzantines and the network designer is very challenging (NP-hard in many cases). However, we show that by utilizing the properties of the network architecture, efficient solutions can be obtained. Specifically, we found that several problems related to the design of optimal countermeasures in the inference context are, in fact, special cases of these NP-hard problems that can be solved in polynomial time. First, we consider the problem of distributed Bayesian detection in the presence of data falsification (or Byzantine) attacks in the parallel topology. The Byzantines considered in this thesis are those nodes that are compromised and reprogrammed by an adversary to transmit false information to a centralized fusion center (FC) to degrade detection performance. We show that above a certain fraction of Byzantine attackers in the network, the detection scheme becomes completely incapable of utilizing the sensor data for detection (i.e., blind).
When the fraction of Byzantines is not sufficient to blind the FC, we also provide closed-form expressions for the optimal attacking strategies for the Byzantines that most degrade the detection performance. Optimal attacking strategies in certain cases have the minimax property; therefore, knowledge of these strategies has practical significance and can be used to implement a robust detector at the FC. In several practical situations, the parallel topology cannot be implemented due to limiting factors, such as the FC being outside the communication range of the nodes or the limited energy budget of the nodes. In such scenarios, a multi-hop network is employed, where nodes are organized hierarchically into multiple levels (tree networks). Next, we study the problem of distributed inference in tree topologies in the presence of Byzantines under several practical scenarios. We analytically characterize the effect of Byzantines on the inference performance of the system. We also examine possible countermeasures from the FC's perspective to protect the network from these Byzantines. These countermeasures are of two kinds: Byzantine identification schemes and Byzantine-tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. For scenarios where this is not possible, Byzantine-tolerant schemes, which use game theory and error-correcting codes, are developed that tolerate the effect of Byzantines while maintaining reasonably good inference performance in the network. Going a step further, we also consider scenarios where a centralized FC is not available. In such scenarios, one solution is to employ detection approaches based on fully distributed consensus algorithms, where all of the nodes exchange information only with their neighbors.
For such networks, we analytically characterize the negative effect of Byzantines on the steady-state and transient detection performance of conventional consensus-based detection schemes. To avoid performance deterioration, we propose a distributed weighted-average consensus algorithm that is robust to Byzantine attacks. Next, we exploit the statistical distribution of the nodes' data to devise techniques for mitigating the influence of data-falsifying Byzantines on the distributed detection system. Since some parameters of the statistical distribution of the nodes' data might not be known a priori, we propose learning-based techniques to enable an adaptive design of the local fusion or update rules. The above considerations highlight the negative effect of corrupted data on inference performance. However, it is possible for a system designer to utilize corrupted data for the network's benefit. Finally, we consider the problem of detecting a high-dimensional signal based on compressed measurements with secrecy guarantees. We consider a scenario where the network operates in the presence of an eavesdropper who wants to discover the state of nature being monitored by the system. To keep the data secret from the eavesdropper, we propose to use cooperating trustworthy nodes that assist the FC by injecting corrupted data into the system to deceive the eavesdropper. We also design the system by determining the optimal values of parameters that maximize the detection performance at the FC while ensuring perfect secrecy against the eavesdropper.
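    The blinding phenomenon described in this abstract can be reproduced with a small simulation: honest sensors report binary decisions to an FC using majority-rule fusion, while a fraction of Byzantine nodes always flip their reports. The network size and local operating points below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, trials = 101, 2000
pd_local, pfa_local = 0.8, 0.2   # assumed local sensor operating point

def fc_error(alpha):
    # Average error probability of majority-rule fusion when a fraction alpha
    # of the N nodes are Byzantine and always flip their local decisions.
    n_byz = int(alpha * N)
    err = 0.0
    for h1 in (False, True):
        p = pd_local if h1 else pfa_local
        bits = rng.random((trials, N)) < p    # honest local decisions
        bits[:, :n_byz] ^= True               # Byzantines flip their reports
        decisions = bits.sum(axis=1) > N // 2
        err += float(np.mean(decisions != h1))
    return err / 2

e0, e3, e5 = fc_error(0.0), fc_error(0.3), fc_error(0.5)
print(e0, e3, e5)   # error grows with alpha; near 0.5 the FC is essentially blind
```

    At alpha = 0.5 the flipped and honest report distributions cancel, so the fused decision is no better than a coin flip, matching the blinding fraction discussed above.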

    Spectrum Sensing Algorithms for Cognitive Radio Applications

    Future wireless communications systems are expected to be extremely dynamic, smart, and capable of interacting with the surrounding radio environment. To implement such advanced devices, cognitive radio (CR) is a promising paradigm, focusing on strategies for acquiring information and learning. The first task of a cognitive system is spectrum sensing, which has been mainly studied in the context of opportunistic spectrum access, in which cognitive nodes must implement signal detection techniques to identify unused bands for transmission. In the present work, we study different spectrum sensing algorithms, focusing on their statistical description and on evaluation of their detection performance. Moving from traditional sensing approaches, we consider the presence of practical impairments and analyze algorithm design. Far from the ambition of covering the broad field of spectrum sensing, we aim at providing contributions to the main classes of sensing techniques. In particular, in the context of energy detection we study the practical design of the test, considering the case in which the noise power is estimated at the receiver. This analysis allows us to examine the SNR-wall phenomenon in depth, providing the conditions for its existence and showing that the presence of the SNR wall is determined by the accuracy of the noise power estimation process. In the context of eigenvalue-based detectors, which can be adopted by multi-sensor systems, we study the practical situation of imbalances in the noise power across receivers. Then, we shift the focus from single-band detection to wideband sensing, proposing a new approach based on information-theoretic criteria. This technique is blind and, requiring no threshold setting, can be adopted even if the statistical distribution of the observed data is not known exactly. In the last part of the thesis we analyze some simple cooperative localization techniques based on weighted centroid strategies.
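    The interplay between noise-power estimation accuracy and energy detection discussed above can be illustrated with a short simulation: the detector's threshold is designed as if the estimated noise power were exact, and a short estimation window visibly inflates the false-alarm rate. All parameters are assumptions for illustration, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
N, trials = 1000, 500   # samples per sensing slot, Monte Carlo runs

def false_alarm_rate(M):
    # Energy detector under H0 (noise only, unit power). The noise power is
    # estimated from M auxiliary noise-only samples, but the threshold is set
    # as if the estimate were exact (a ~3-sigma design point for N samples).
    thresh = 1 + 3 * np.sqrt(2 / N)
    sigma2_hat = (rng.standard_normal((trials, M)) ** 2).mean(axis=1)
    energy = (rng.standard_normal((trials, N)) ** 2).mean(axis=1)
    return float((energy / sigma2_hat > thresh).mean())

pfa_long = false_alarm_rate(20000)   # accurate noise estimate: near design value
pfa_short = false_alarm_rate(50)     # short window: estimation error inflates P_fa
print(pfa_long, pfa_short)
```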

    Signal Processing for Compressed Sensing Multiuser Detection

    The era of human-based communication was long believed to be the main driver for the development of communication systems. Nowadays, however, we observe that other types of communication shape the discussion of what future communication systems will look like. One emerging technology in this direction is machine-to-machine (M2M) communication. M2M addresses communication between autonomous entities without human interaction in mind. A very challenging aspect is the fact that M2M traffic differs strongly from what communication systems were designed for. Compared to human-based communication, M2M is often characterized by small and sporadic uplink transmissions with limited data-rate constraints. While current communication systems can cope with several hundred transmissions, M2M envisions a massive number of devices that simultaneously communicate with a central base station. Therefore, future communication systems need to be equipped with novel technologies facilitating the aggregation of massive M2M traffic. The key design challenge lies in the efficient design of medium access technologies that allow for efficient communication with small data packets. Further, novel physical-layer aspects have to be considered in order to reliably detect the massive uplink communication. Within this thesis, physical-layer concepts are introduced for a novel medium access technology tailored to the demands of sporadic M2M. This concept combines advances from the fields of sparse signal processing and communications. The main idea is to exploit the sporadic structure of the M2M traffic to design physical-layer algorithms utilizing this side information. This concept considers that the base station has to jointly detect the activity and the data of the M2M nodes. The whole framework of joint activity and data detection in sporadic M2M is known as Compressed Sensing Multiuser Detection (CS-MUD). This thesis introduces new physical-layer concepts for CS-MUD.
One important aspect is the question of how activity detection impacts data detection. It is shown that activity errors have a fundamentally different impact on the underlying communication system than data errors do. To address this impact, this thesis introduces new algorithms that aim at controlling, or even avoiding, activity errors in a system. It is shown that separate activity and data detection is a possible approach to controlling activity errors in M2M. This becomes possible by considering the activity detection task in a Bayesian framework based on soft activity information. This concept allows maintaining a constant and predictable activity error rate in a system. Beyond separate activity and data detection, the joint activity and data detection problem is addressed. Here, a novel detector based on message passing is introduced. The main driver for this concept is the extrinsic information exchange between different entities that are part of a graphical representation of the whole estimation problem. It can be shown that this detector is superior to state-of-the-art concepts for CS-MUD. Besides analyzing the introduced concepts via simulation, this thesis also presents an implementation of CS-MUD on a hardware demonstrator platform using the algorithms developed within this thesis. This implementation validates the advantages of CS-MUD via over-the-air transmissions and measurements under practical constraints.
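    The CS-MUD idea of joint activity and data detection can be sketched with a standard sparse-recovery routine: active users form the support of a sparse multiuser symbol vector, which the base station recovers from an underdetermined observation. The sketch below uses plain Orthogonal Matching Pursuit with a known sparsity level and illustrative dimensions; it is not the message-passing or Bayesian detector developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
K, M, n_active = 64, 48, 4   # users, chips per symbol, active users (illustrative)

A = rng.standard_normal((M, K)) / np.sqrt(M)   # random spreading sequences
x = np.zeros(K)
active = rng.choice(K, n_active, replace=False)
x[active] = rng.choice([-1.0, 1.0], n_active)  # BPSK symbols of the active users
y = A @ x + 0.05 * rng.standard_normal(M)      # underdetermined noisy observation

# Orthogonal Matching Pursuit: greedily detect active users, then fit symbols.
support, r = [], y.copy()
for _ in range(n_active):                      # sparsity level assumed known
    support.append(int(np.argmax(np.abs(A.T @ r))))
    sub = A[:, support]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    r = y - sub @ coef
x_hat = np.zeros(K)
x_hat[support] = np.sign(coef)                 # hard BPSK decisions on the support
print(sorted(support), sorted(active.tolist()))
```

    With these dimensions and low noise, the greedy recovery typically identifies the active set and their symbols correctly.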

    Distributed Detection and Estimation in Wireless Sensor Networks

    In this article we consider the problems of distributed detection and estimation in wireless sensor networks. In the first part, we provide a general framework aimed at showing how the efficient design of a sensor network requires a joint organization of in-network processing and communication. Then, we recall the basic features of consensus algorithms, a basic tool for reaching globally optimal decisions through a distributed approach. The main part of the paper begins by addressing the distributed estimation problem. We first show an entirely decentralized approach, where observations and estimations are performed without the intervention of a fusion center. Then, we consider the case where the estimation is performed at a fusion center, showing how to allocate quantization bits and transmit powers on the links between the nodes and the fusion center in order to accommodate the requirement on the maximum estimation variance, under a constraint on the global transmit power. We extend the approach to the detection problem. Also in this case, we consider the distributed approach, where every node can achieve a globally optimal decision, and the case where the decision is taken at a central node. In the latter case, we show how to allocate coding bits and transmit power in order to maximize the detection probability, under constraints on the false-alarm rate and the global transmit power. Then, we generalize consensus algorithms by illustrating a distributed procedure that converges to the projection of the observation vector onto a signal subspace. We then address the issue of energy consumption in sensor networks, showing how to optimize the network topology in order to minimize the energy necessary to achieve a global consensus. Finally, we address the problem of matching the topology of the network to the graph describing the statistical dependencies among the observed variables.
    Comment: 92 pages, 24 figures. To appear in E-Reference Signal Processing, R. Chellapa and S. Theodoridis, Eds., Elsevier, 201
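    The consensus algorithm recalled in this abstract can be demonstrated in a few lines: each node repeatedly replaces its value with a weighted average of its neighbors' values, and all nodes converge to the global average without a fusion center. The ring topology and weights below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8                                   # nodes on a ring (illustrative topology)
x = 5.0 + rng.standard_normal(n)        # local measurements
target = x.mean()                       # the value a fusion center would compute

# Doubly stochastic weights for a ring: each node averages itself and its
# two neighbors with weight 1/3, so the average is preserved at every step.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

for _ in range(200):
    x = W @ x                           # purely local neighbor exchanges
print(np.max(np.abs(x - target)))       # all nodes reach the global average
```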

    Communication-theoretic Approach for Skin Cancer Detection using Dynamic Thermal Imaging

    Skin cancer is the most common cancer in the United States, with over 3.5M annual cases. Statistics from the American Cancer Society indicate that 20% of the American population will develop this disease during their lifetime. Presently, visual inspection by a dermatologist has good sensitivity (>90%) but poor specificity (<10%), especially for melanoma, which is the most dangerous type of skin cancer, with a five-year survival rate between 16% and 62%. Over the past few decades, several studies have evaluated the use of infrared imaging to diagnose skin cancer. Here we use dynamic thermal imaging (DTI) to demonstrate a rapid, accurate, and non-invasive imaging and processing technique to diagnose melanoma and non-melanoma skin cancer lesions. In DTI, the suspicious lesion is cooled down and the thermal recovery of the skin is monitored with an infrared camera. The proposed algorithm exploits the intrinsic order present in the time evolution of the thermal recoveries of the skin of human subjects to diagnose malignancy, and it achieves outstanding performance in discriminating between benign and malignant skin lesions. In this dissertation we propose a stochastic parametric representation of the thermal recovery curve, which is derived from a heat equation. The statistics of the random parameters associated with the proposed stochastic model are estimated from measured thermal recovery curves of subjects with known conditions. The stochastic model is, in turn, utilized to derive an analytical autocorrelation function (ACF) of the stochastic recovery curves. The analytical ACF is utilized in the context of continuous-time detection theory to define an optimal statistical decision rule such that the sensitivity of the algorithm is guaranteed to be at a maximum for every prescribed false-alarm probability.
The proposed algorithm was tested in a pilot study including 140 human subjects, and we have demonstrated sensitivity in excess of 99% for a prescribed false-alarm probability of 1% (specificity in excess of 99%) for the detection of skin cancer. To the best of our knowledge, this is the highest reported accuracy for any non-invasive skin cancer diagnosis method. The proposed algorithm is studied in detail for different patient permutations, demonstrating robustness in maximizing the probability of detecting those subjects with a malignant condition. Moreover, the proposed method is further generalized to include thermal recovery curves of the tissue that surrounds the suspicious lesion as a local reference. Such a local reference permits the compensation of any possible anomalous behavior in the lesion's thermal recovery, which, in turn, improves both the theoretical and empirical performance of the method. As a final contribution, we develop a novel edge-detection algorithm--specifically targeted at multispectral (MS) and hyperspectral (HS) imagery--which performs edge detection based solely on spectral (color) information. More precisely, this algorithm fuses the process of detecting edges through ratios of pixels with critical information resulting from spectral classification of the very image whose edges are to be identified. This algorithm is tested on multicolor (spectral) imagery, achieving superior results compared with other alternatives. The edge-detection algorithm is subsequently utilized in the skin-cancer detection context to define the lesion boundary from a visible color image by exploiting the color contrast between the pigmented tissue and the surrounding skin. With this automated lesion selection, we develop a method to extract spatial features equivalent to those utilized by dermatologists in diagnosing malignant conditions.
These spatial features are fused with the temporal features obtained from the thermal-recovery method to yield a spatio-temporal method for skin-cancer detection. While providing a rigorous mathematical foundation for the viability of the dynamic thermal recovery approach for skin-cancer detection, the research completed in this dissertation also provides the first reliable, accurate, and non-invasive diagnosis method for preliminary skin-cancer detection. This dissertation, therefore, paves the way for future clinical studies to produce new skin-cancer diagnosis practices that minimize the need for unnecessary biopsies without sacrificing reliability.
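    The thermal-recovery idea can be caricatured with a deliberately simplified sketch: model the recovery as a single exponential approach to baseline, estimate its time constant by a log-linear fit, and threshold the estimate. The exponential model, the "malignant recovers faster" assumption, and every numerical value below are illustrative assumptions; the dissertation's actual method uses a stochastic model and a continuous-time optimal decision rule.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 60.0, 60)     # seconds after removing the cooling stimulus
T_inf, dT = 33.0, 5.0              # assumed skin baseline and cooling depth (C)

def recovery_curve(tau):
    # Single-exponential recovery toward baseline, plus measurement noise.
    return T_inf - dT * np.exp(-t / tau) + 0.05 * rng.standard_normal(t.size)

def estimate_tau(curve):
    # Log-linear fit: log(T_inf - T(t)) = log(dT) - t/tau.
    z = np.log(np.clip(T_inf - curve, 1e-3, None))
    slope, _ = np.polyfit(t, z, 1)
    return -1.0 / slope

tau_benign, tau_malignant = 40.0, 20.0     # assumed: malignant recovers faster
est_b = estimate_tau(recovery_curve(tau_benign))
est_m = estimate_tau(recovery_curve(tau_malignant))
print(est_b, est_m, est_m < 30.0)          # threshold the estimated time constant
```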