
    Stable Throughput Region of Cognitive-Relay Networks with Imperfect Sensing and Finite Relaying Buffer

    In this letter, we obtain the stable throughput region for a cognitive relaying scheme with a finite relaying buffer and imperfect sensing. The analysis investigates the effect of the secondary user's finite relaying capabilities under different scenarios of primary, secondary, and relaying link outages. Furthermore, we demonstrate the effect of the miss-detection and false-alarm probabilities on the achievable throughput of the primary and secondary users.
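
    A minimal Monte Carlo sketch of the setup described above: a slotted system in which primary packets that fail on the direct link can be queued in the secondary user's finite relaying buffer, with sensing errors modeled through miss-detection and false-alarm probabilities. All numerical parameters and the simplified collision rule are illustrative assumptions, not values or rules taken from the letter.

```python
import random

# Toy slotted simulation of a cognitive relaying scheme with a finite relay
# buffer and imperfect sensing. All parameter values are illustrative.
random.seed(0)

LAMBDA_P = 0.3   # primary packet arrival probability per slot
P_OUT_PD = 0.4   # outage probability of the primary -> destination link
P_OUT_PS = 0.1   # outage probability of the primary -> secondary (overhearing) link
P_OUT_SD = 0.2   # outage probability of the secondary -> destination (relaying) link
P_MD     = 0.1   # miss-detection probability (secondary collides with the primary)
P_FA     = 0.1   # false-alarm probability (secondary stays silent on an idle channel)
BUFFER_K = 5     # finite relaying buffer size (packets)
SLOTS    = 200_000

relay_buffer = 0
pu_delivered = su_relayed = 0

for _ in range(SLOTS):
    if random.random() < LAMBDA_P:                # primary user transmits
        if random.random() < P_MD:
            pass                                  # collision: slot wasted (simplified rule)
        elif random.random() > P_OUT_PD:
            pu_delivered += 1                     # delivered on the direct link
        elif random.random() > P_OUT_PS and relay_buffer < BUFFER_K:
            relay_buffer += 1                     # overheard and queued at the relay
    else:                                         # idle slot: relay a queued packet
        if relay_buffer > 0 and random.random() > P_FA:
            if random.random() > P_OUT_SD:
                relay_buffer -= 1
                su_relayed += 1

print("Primary throughput (direct + relayed):", (pu_delivered + su_relayed) / SLOTS)
```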

    Collective Classification of Textual Documents by Guided Self-Organization in T-Cell Cross-Regulation Dynamics

    We present and study an agent-based model of T-cell cross-regulation in the adaptive immune system, which we apply to binary classification. Our method expands an existing analytical model of T-cell cross-regulation (Carneiro et al. in Immunol Rev 216(1):48-68, 2007) that was used to study the self-organizing dynamics of a single population of T-cells in interaction with an idealized antigen-presenting cell capable of presenting a single antigen. With agent-based modeling we are able to study the self-organizing dynamics of multiple populations of distinct T-cells which interact via antigen-presenting cells that present hundreds of distinct antigens. Moreover, we show that such self-organizing dynamics can be guided to produce an effective binary classification of antigens, which is competitive with existing machine learning methods when applied to biomedical text classification. More specifically, here we test our model on a dataset of publicly available full-text biomedical articles provided by the BioCreative challenge (Krallinger in The BioCreative II.5 challenge overview, p 19, 2009). We study the robustness of our model's parameter configurations and show that it leads to encouraging results comparable to state-of-the-art classifiers. Our results help us understand both T-cell cross-regulation as a general principle of guided self-organization and its applicability to document classification. Therefore, we show that our bio-inspired algorithm is a promising novel method for biomedical article classification and for binary document classification in general.
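
    The sketch below is a deliberately simplified caricature of the underlying idea: each textual feature ("antigen") carries competing effector and regulatory cell counts, and a document is labeled by which population dominates across its features. The update and decision rules are invented for illustration and are not the dynamics of Carneiro et al. or of the agent-based extension described above.

```python
from collections import defaultdict

# Toy cross-regulation-style document classifier: effector (E) and regulatory
# (R) cell counts per feature, with the label decided by which population
# dominates over a document's features. Rules are a simplified caricature.

E = defaultdict(lambda: 1.0)   # effector cells per feature
R = defaultdict(lambda: 1.0)   # regulatory cells per feature

def train(documents):
    """documents: iterable of (set_of_features, label) with label in {0, 1}."""
    for features, label in documents:
        for f in features:
            if label == 1:      # "self-like" (relevant) document
                R[f] += 1.0     # regulatory cells proliferate
            else:               # "nonself-like" (irrelevant) document
                E[f] += 1.0     # effector cells proliferate

def classify(features):
    score_r = sum(R[f] for f in features)
    score_e = sum(E[f] for f in features)
    return 1 if score_r >= score_e else 0

train([({"kinase", "binding"}, 1), ({"stock", "market"}, 0)])
print(classify({"kinase", "market"}))
```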

    Globally Optimal Cooperation in Dense Cognitive Radio Networks

    The problem of calculating the local and global decision thresholds in hard-decision-based cooperative spectrum sensing is well known for its mathematical intractability. Previous work relied on simple suboptimal counting rules for decision fusion in order to avoid the exhaustive numerical search required for obtaining the optimal thresholds. However, these simple rules are not globally optimal, as they do not maximize the overall global detection probability by jointly selecting the local and global thresholds; instead, they maximize the detection probability for a specific global threshold. In this paper, a globally optimal decision fusion rule for primary user signal detection based on the Neyman-Pearson (NP) criterion is derived. The algorithm is based on a novel representation of the global performance metrics in terms of the regularized incomplete beta function. Based on this mathematical representation, it is shown that the globally optimal NP hard-decision fusion test can be put in the form of a conventional one-dimensional convex optimization problem. A binary search for the global threshold can be applied, yielding a complexity of O(log2(N)), where N represents the number of cooperating users. The logarithmic complexity is appreciated because we are concerned with dense networks, and thus N is expected to be large. The proposed optimal scheme outperforms conventional counting rules, such as the OR, AND, and MAJORITY rules. It is shown via simulations that, although the optimal rule tends to the simple OR rule when the number of cooperating secondary users is small, it offers significant SNR gain in dense cognitive radio networks with a large number of cooperating users.
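
    A sketch of the joint threshold selection described above, assuming a hypothetical energy-detector ROC for the local sensors. It uses the identity P(X >= k) = I_p(k, N-k+1) for X ~ Binomial(N, p), i.e. the regularized incomplete beta function, to express the global false-alarm and detection probabilities of a k-out-of-N fusion rule; for clarity the global threshold k is scanned linearly rather than by the binary search that the paper's convexity result permits.

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import norm
from scipy.optimize import brentq

def global_prob(p_local, N, k):
    """P(at least k of N local decisions are '1') via the regularized
    incomplete beta function: I_p(k, N - k + 1)."""
    return betainc(k, N - k + 1, p_local)

def local_pd(p_f, snr, samples):
    """Assumed local ROC of an energy detector (Gaussian approximation)."""
    return norm.sf(norm.isf(p_f) - np.sqrt(samples * snr))

def best_rule(N, alpha, snr, samples):
    """For each global threshold k, pick the local false-alarm probability
    that meets the global budget alpha, then keep the k with the highest
    global detection probability (Neyman-Pearson sense)."""
    best = (0.0, None, None)
    for k in range(1, N + 1):
        # Solve global_prob(p_f, N, k) = alpha for the local false alarm p_f.
        p_f = brentq(lambda p: global_prob(p, N, k) - alpha, 1e-12, 1 - 1e-12)
        q_d = global_prob(local_pd(p_f, snr, samples), N, k)
        if q_d > best[0]:
            best = (q_d, k, p_f)
    return best   # (global detection, global threshold k, local false alarm)

print(best_rule(N=50, alpha=0.05, snr=0.01, samples=500))
```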

    On the Capacity of the Underwater Acoustic Channel with Dominant Noise Sources

    This paper provides an upper bound for the capacity of the underwater acoustic (UWA) channel with dominant noise sources and generalized fading environments. Previous works have shown that UWA channel noise statistics are not necessarily Gaussian, especially in a shallow water environment dominated by impulsive noise sources. In this case, noise is best represented by the Generalized Gaussian (GG) noise model with a shaping parameter $\beta$. On the other hand, fading in the UWA channel is generally represented using an $\alpha$-$\mu$ distribution, which is a generalization of a wide range of well-known fading distributions. We show that the Additive White Generalized Gaussian Noise (AWGGN) channel capacity is upper bounded by the AWGN capacity plus a constant gap of $\frac{1}{2} \log \left(\frac{\beta^{2} \pi e^{1-\frac{2}{\beta}} \Gamma(\frac{3}{\beta})}{2(\Gamma(\frac{1}{\beta}))^{3}} \right)$ bits. The same gap also exists when comparing the ergodic capacity of AWGGN channels with $\alpha$-$\mu$ fading to the faded AWGN channel capacity. We justify our results by revisiting the sphere-packing problem, which provides a geometric interpretation of the channel capacity. Moreover, UWA channel secrecy rates are characterized, and the dependency of UWA channel secrecy on the shaping parameters of the legitimate and eavesdropper channels is highlighted.
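
    A short numerical sketch of the constant gap quoted above, assuming the logarithm is taken base 2 (so the gap is in bits). For $\beta = 2$ the generalized Gaussian reduces to the Gaussian and the gap evaluates to zero, which is a convenient sanity check.

```python
import math

# Evaluate the constant capacity gap between the AWGGN and AWGN channels for a
# few shaping parameters beta, assuming a base-2 logarithm (gap in bits).
def awggn_gap_bits(beta):
    num = beta**2 * math.pi * math.exp(1 - 2.0 / beta) * math.gamma(3.0 / beta)
    den = 2.0 * math.gamma(1.0 / beta)**3
    return 0.5 * math.log2(num / den)

for beta in (1.0, 1.5, 2.0, 3.0):
    print(f"beta = {beta}: gap = {awggn_gap_bits(beta):.4f} bits")  # beta = 2 -> 0
```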

    Bayesian Inference of Individualized Treatment Effects using Multi-task Gaussian Processes

    Predicated on the increasing abundance of electronic health records, we investigate the problem of inferring individualized treatment effects using observational data. Stemming from the potential outcomes model, we propose a novel multi-task learning framework in which factual and counterfactual outcomes are modeled as the outputs of a function in a vector-valued reproducing kernel Hilbert space (vvRKHS). We develop a nonparametric Bayesian method for learning the treatment effects using a multi-task Gaussian process (GP) with a linear coregionalization kernel as a prior over the vvRKHS. The Bayesian approach allows us to compute individualized measures of confidence in our estimates via pointwise credible intervals, which are crucial for realizing the full potential of precision medicine. The impact of selection bias is alleviated via a risk-based empirical Bayes method for adapting the multi-task GP prior, which jointly minimizes the empirical error in factual outcomes and the uncertainty in (unobserved) counterfactual outcomes. We conduct experiments on observational datasets for an interventional social program applied to premature infants, and a left ventricular assist device applied to cardiac patients wait-listed for a heart transplant. In both experiments, we show that our method significantly outperforms the state-of-the-art.
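
    A minimal sketch, in plain NumPy, of treatment-effect estimation with a two-output GP under an intrinsic coregionalization kernel K((x,t),(x',t')) = B[t,t'] k(x,x'), the simplest member of the linear coregionalization family. The task-covariance matrix, length scale, noise level, and synthetic data are assumptions; the paper's risk-based empirical Bayes adaptation of the prior is not reproduced here.

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    """Squared-exponential kernel between two sets of covariate vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def fit_predict_ite(X, t, y, X_test, B=None, noise=0.1):
    """X: covariates, t: binary treatment indicators, y: factual outcomes.
    Returns the estimated individualized treatment effect at X_test."""
    if B is None:
        B = np.array([[1.0, 0.8], [0.8, 1.0]])   # assumed task covariance
    K = B[t][:, t] * rbf(X, X)                    # ICM train covariance
    L = np.linalg.cholesky(K + noise**2 * np.eye(len(y)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = []
    for task in (0, 1):                           # predict both potential outcomes
        Ks = B[np.full(len(X_test), task)][:, t] * rbf(X_test, X)
        mu.append(Ks @ alpha)
    return mu[1] - mu[0]                          # posterior-mean treatment effect

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2)); t = rng.integers(0, 2, 60)
y = X[:, 0] + t * (1.0 + X[:, 1]) + 0.1 * rng.normal(size=60)  # synthetic outcomes
print(fit_predict_ite(X, t, y, X[:5]).round(2))
```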

    Forecasting Individualized Disease Trajectories using Interpretable Deep Learning

    Disease progression models are instrumental in predicting individual-level health trajectories and understanding disease dynamics. Existing models are capable of providing either accurate predictions of patients' prognoses or clinically interpretable representations of disease pathophysiology, but not both. In this paper, we develop the phased attentive state space (PASS) model of disease progression, a deep probabilistic model that captures complex representations for disease progression while maintaining clinical interpretability. Unlike Markovian state space models, which assume memoryless dynamics, PASS uses an attention mechanism to induce "memoryful" state transitions, whereby repeatedly updated attention weights are used to focus on past state realizations that best predict future states. This gives rise to complex, non-stationary state dynamics that remain interpretable through the generated attention weights, which designate the relationships between the realized state variables for individual patients. PASS uses phased LSTM units (with time gates controlled by parametrized oscillations) to generate the attention weights in continuous time, which enables handling irregularly sampled and potentially missing medical observations. Experiments on data from a real-world cohort of patients show that PASS successfully balances the tradeoff between accuracy and interpretability: it demonstrates superior predictive accuracy and learns insightful individual-level representations of disease progression.
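
    A conceptual sketch of the "memoryful" transition mechanism described above: the next-state distribution is an attention-weighted mixture of per-state transition rows over the realized history. The transition rows and the exponential-decay scoring rule are placeholders; PASS generates the attention weights with phased LSTM units in continuous time, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
S = 3                                          # number of latent states
T_rows = np.array([[0.7, 0.2, 0.1],            # assumed per-state transition rows
                   [0.1, 0.8, 0.1],
                   [0.2, 0.3, 0.5]])

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def next_state_distribution(history, decay=0.5):
    """history: list of past realized states, most recent last. Attention over
    the history mixes the corresponding transition rows."""
    ages = np.arange(len(history))[::-1]       # 0 = most recent realization
    attn = softmax(-decay * ages)              # placeholder attention scores
    return attn @ T_rows[np.array(history)]    # attention-weighted mixture

history = [0]
for _ in range(10):
    p = next_state_distribution(history)
    history.append(rng.choice(S, p=p))
print(history)
```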

    Bayesian Nonparametric Causal Inference: Information Rates and Learning Algorithms

    We investigate the problem of estimating the causal effect of a treatment on individual subjects from observational data; this is a central problem in various application domains, including healthcare, social sciences, and online advertising. Within the Neyman-Rubin potential outcomes model, we use the Kullback-Leibler (KL) divergence between the estimated and true distributions as a measure of accuracy of the estimate, and we define the information rate of the Bayesian causal inference procedure as the (asymptotic equivalence class of the) expected value of the KL divergence between the estimated and true distributions as a function of the number of samples. Using Fano's method, we establish a fundamental limit on the information rate that can be achieved by any Bayesian estimator, and show that this fundamental limit is independent of the selection bias in the observational data. We characterize the Bayesian priors on the potential (factual and counterfactual) outcomes that achieve the optimal information rate. As a consequence, we show that a particular class of priors that have been widely used in the causal inference literature cannot achieve the optimal information rate, whereas a broader class of priors can. We go on to propose a prior adaptation procedure (which we call the information-based empirical Bayes procedure) that optimizes the Bayesian prior by maximizing an information-theoretic criterion on the recovered causal effects rather than maximizing the marginal likelihood of the observed (factual) data. Building on our analysis, we construct an information-optimal Bayesian causal inference algorithm.
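
    As a sketch of the central quantity, with notation assumed for illustration, the information rate is the expected Kullback-Leibler divergence between the true distribution of the potential outcomes and its Bayesian estimate, viewed as a function of the number of observational samples n up to asymptotic equivalence:

```latex
% Assumed notation: P is the true distribution of the (factual and
% counterfactual) potential outcomes and \hat{P}_n the Bayesian estimate
% computed from n observational samples.
\[
  \mathcal{I}(n) \;\triangleq\; \mathbb{E}\!\left[
    D_{\mathrm{KL}}\!\left( P \,\big\|\, \hat{P}_n \right)
  \right].
\]
```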

    Encoding Distortion Modeling For DWT-Based Wireless EEG Monitoring System

    Recent advances in wireless body area sensor networks leverage wireless and mobile communication technologies to facilitate the development of innovative medical applications that can significantly enhance healthcare services and improve quality of life. Specifically, Electroencephalography (EEG)-based applications lie at the heart of these promising technologies. However, the design and operation of such applications is challenging: the power consumption requirements of the sensor nodes may render some of these applications impractical. Hence, implementing efficient encoding schemes is essential to reduce power consumption in such applications. In this paper, we propose an analytical distortion model for EEG-based encoding systems. Using this model, the encoder can effectively reconfigure its complexity by adjusting its control parameters to satisfy application constraints while maintaining reconstruction accuracy at the receiver side. The simulation results illustrate that the main parameters affecting the distortion are the compression ratio and the filter length of the considered DWT-based encoder. Furthermore, it is found that the wireless channel variations have a significant influence on the estimated distortion at the receiver side.
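
    A small empirical sketch of the two parameters highlighted above, compression ratio and DWT filter length (varied here through the Daubechies wavelet order), using PyWavelets on a synthetic EEG-like signal. The signal, the hard-thresholding compression step, and the mean-squared-error metric are illustrative assumptions; the paper derives an analytical distortion model rather than measuring distortion by simulation.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 1024)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 22 * t) \
      + 0.2 * rng.normal(size=t.size)             # toy EEG-like signal

def dwt_distortion(signal, wavelet, compression_ratio, level=5):
    """Compress by keeping only the largest DWT coefficients, then measure
    the mean-squared reconstruction distortion."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    keep = max(1, int(flat.size / compression_ratio))
    thr = np.sort(np.abs(flat))[-keep]            # keep the largest coefficients
    coeffs = [np.where(np.abs(c) >= thr, c, 0.0) for c in coeffs]
    rec = pywt.waverec(coeffs, wavelet)[: signal.size]
    return np.mean((signal - rec) ** 2)

for wavelet in ("db2", "db8"):                    # short vs. long filter length
    for cr in (2, 4, 8):                          # compression ratio
        print(wavelet, "CR =", cr, "MSE =", round(dwt_distortion(eeg, wavelet, cr), 5))
```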

    Random Aerial Beamforming for Underlay Cognitive Radio with Exposed Secondary Users

    In this paper, we introduce the exposed secondary users problem in underlay cognitive radio systems, where both the secondary-to-primary and primary-to-secondary channels have a Line-of-Sight (LoS) component. Based on a Rician model for the LoS channels, we show, analytically and numerically, that LoS interference hinders the achievable secondary user capacity when interference constraints are imposed at the primary user receiver. This is caused by the poor dynamic range of the interference channel fluctuations when a dominant LoS component exists. In order to improve the capacity of such a system, we propose the use of Electronically Steerable Parasitic Array Radiator (ESPAR) antennas at the secondary terminals. An ESPAR antenna involves a single RF chain and has a reconfigurable radiation pattern that is controlled by assigning arbitrary weights to M orthonormal basis radiation patterns via altering a set of reactive loads. By viewing the orthonormal patterns as multiple virtual dumb antennas, we randomly vary their weights over time, creating artificial channel fluctuations that can perfectly eliminate the undesired impact of LoS interference. This scheme, termed Random Aerial Beamforming (RAB), is well suited for compact and low-cost mobile terminals as it uses a single RF chain. Moreover, we investigate the exposed secondary users problem in a multiuser setting, showing that LoS interference hinders multiuser interference diversity and affects the growth rate of the SU capacity as a function of the number of users. Using RAB, we show that LoS interference can actually be exploited to improve multiuser diversity via opportunistic nulling.
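
    A toy sketch of the artificial channel fluctuations created by Random Aerial Beamforming: per-pattern interference coefficients are drawn from a Rician model with a strong LoS component, and random unit-norm weights over the M orthonormal basis patterns are redrawn every slot. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K_RICE, SLOTS = 4, 10.0, 10_000               # patterns, Rician K-factor, slots

# Static Rician interference coefficient seen on each basis radiation pattern.
los = np.sqrt(K_RICE / (K_RICE + 1))
nlos = np.sqrt(1 / (2 * (K_RICE + 1))) * (rng.normal(size=M) + 1j * rng.normal(size=M))
h = los + nlos                                   # per-pattern channel coefficients

# Without RAB: a fixed beam -> essentially constant interference power.
fixed_w = np.ones(M) / np.sqrt(M)
static_gain = np.abs(fixed_w @ h) ** 2

# With RAB: random unit-norm weights every slot -> artificial fading that the
# secondary user can exploit when the interference gain happens to be low.
w = rng.normal(size=(SLOTS, M)) + 1j * rng.normal(size=(SLOTS, M))
w /= np.linalg.norm(w, axis=1, keepdims=True)
rab_gain = np.abs(w @ h) ** 2

print("static interference gain:", round(static_gain, 3))
print("RAB gain mean/std:", round(rab_gain.mean(), 3), round(rab_gain.std(), 3))
```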

    Some Characterizations on the Normalized Lommel, Struve and Bessel Functions of the First Kind

    In this paper, we introduce a new technique for determining some necessary and sufficient conditions for the normalized Bessel functions $j_{\nu}$, normalized Struve functions $h_{\nu}$, and normalized Lommel functions $s_{\mu,\nu}$ of the first kind to be in the subclasses of starlike and convex functions of order $\alpha$ and type $\beta$.

    Comment: arXiv admin note: text overlap with arXiv:1610.03233 by other authors.
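
    For reference, a sketch of the standard geometric-function-theory definitions behind the subclasses mentioned above (the exact "type $\beta$" refinement is as defined in the paper): a normalized analytic function $f(z) = z + a_2 z^2 + \dots$ on the unit disk is starlike of order $\alpha$, respectively convex of order $\alpha$, for $0 \le \alpha < 1$, when

```latex
\[
  \operatorname{Re}\!\left( \frac{z f'(z)}{f(z)} \right) > \alpha ,
  \qquad\text{respectively}\qquad
  \operatorname{Re}\!\left( 1 + \frac{z f''(z)}{f'(z)} \right) > \alpha ,
  \qquad |z| < 1 .
\]
```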