
    Distributed Detection over Gaussian Multiple Access Channels with Constant Modulus Signaling

    A distributed detection scheme where the sensors transmit with constant modulus signals over a Gaussian multiple access channel is considered. The deflection coefficient of the proposed scheme is shown to depend on the characteristic function of the sensing noise, and the error exponent for the system is derived using large deviation theory. Optimization of the deflection coefficient and error exponent is considered with respect to a transmission phase parameter for a variety of sensing noise distributions, including impulsive ones. The proposed scheme is also compared favorably with existing amplify-and-forward and detect-and-forward schemes. The effect of fading is shown to be detrimental to the detection performance through a reduction in the deflection coefficient depending on the fading statistics. Simulations corroborate that the deflection coefficient and error exponent can be effectively used to optimize the error probability for a wide variety of sensing noise distributions. (Comment: 30 pages, 12 figures)
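    A minimal Monte Carlo sketch of the idea (not the paper's derivation): each sensor maps its noisy observation to a constant-modulus phase, the Gaussian MAC sums the transmissions, and a grid search over the phase parameter picks the value maximizing an empirical deflection coefficient. All parameter names and values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fusion_statistic(theta, omega, n_sensors=20, noise_std=1.0,
                     channel_std=1.0, n_trials=10000):
    """Simulate the fusion-center statistic under one hypothesis.

    Each sensor observes theta plus sensing noise and transmits the
    constant-modulus signal exp(j * omega * observation); the Gaussian
    MAC sums the transmissions and adds complex channel noise.
    """
    sensing = rng.normal(0.0, noise_std, size=(n_trials, n_sensors))
    tx = np.exp(1j * omega * (theta + sensing))
    mac_noise = (rng.normal(0.0, channel_std, n_trials)
                 + 1j * rng.normal(0.0, channel_std, n_trials)) / np.sqrt(2)
    return tx.sum(axis=1) + mac_noise

def deflection(omega, theta1=1.0):
    """Monte Carlo estimate of the deflection coefficient at phase omega."""
    y0 = fusion_statistic(0.0, omega)
    y1 = fusion_statistic(theta1, omega)
    return np.abs(y1.mean() - y0.mean()) ** 2 / np.var(y0)

# Grid search over the transmission phase parameter.
omegas = np.linspace(0.1, 3.0, 30)
best = max(omegas, key=deflection)
print(f"phase maximizing the estimated deflection coefficient: {best:.2f}")
```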

    Statistical Modeling of SAR Images: A Survey

    Statistical modeling is essential to SAR (Synthetic Aperture Radar) image interpretation. It aims to describe SAR images through statistical methods and to reveal the characteristics of these images. Moreover, statistical modeling can provide technical support for a comprehensive understanding of terrain scattering mechanisms, which helps in developing algorithms for effective image interpretation and credible image simulation. Numerous statistical models have been developed to describe SAR image data, and the purpose of this paper is to categorize and evaluate these models. We first summarize the development history and the current state of research in statistical modeling, and then discuss in detail the various SAR image models derived from the product model. Relevant issues are also discussed, and several promising directions for future research are outlined at the end.
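    As an illustrative aside, the product model mentioned above can be sketched numerically: a Gamma-distributed texture multiplied by Gamma-distributed multi-look speckle yields K-distributed intensity, one of the classical models derived from the product model. The shape parameters below are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Product model: observed intensity = texture * speckle.
# Gamma-distributed texture (shape alpha, mean 1) times Gamma-distributed
# multi-look speckle (shape L, mean 1) gives K-distributed intensity.
alpha, L, n = 3.0, 4.0, 100_000
texture = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
speckle = rng.gamma(shape=L, scale=1.0 / L, size=n)
intensity = texture * speckle

print("sample mean:", intensity.mean())  # ~1 by construction
print("coefficient of variation:", intensity.std() / intensity.mean())
```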

    Independent component analysis applications in CDMA systems

    Thesis (Master)--Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2004. Includes bibliographical references (leaves: 56). Text in English; abstract in Turkish and English. xi, 96 leaves.
    Blind source separation (BSS) methods, namely independent component analysis (ICA) and independent factor analysis (IFA), are used for detecting the signal arriving at a mobile user that is subject to multiple access interference in a CDMA downlink communication. When CDMA models are studied for different channel characteristics, it is seen that they are similar to BSS/ICA models. It is also shown that if ICA is applied to these CDMA models, the desired user's signal can be estimated successfully without channel information or the other users' code sequences. The ICA detector is compared with the matched filter detector and other conventional detectors using simulation results, and it is seen that ICA has some advantages over the other methods. The other BSS method, IFA, is applied to the basic CDMA downlink model. Since IFA has convergence and speed problems when the number of sources is large, the basic CDMA model with an ideal channel assumption is used first in the IFA application. With simulation of the ideal CDMA channel, IFA is compared with ICA and the matched filter. Furthermore, the Pearson System-based ICA (PS-ICA) method is used for estimating non-Gaussian multipath fading channel coefficients. Considering fading channel measurements showing that the fading channel coefficients may have an impulsive nature, these coefficients are modeled with an α-stable distribution whose shape parameter takes values close to 2, which makes the distribution slightly impulsive. Simulation results are obtained to compare PS-ICA with classical ICA. IFA is also applied to the single-path CDMA downlink model to estimate the fading channel, exploiting IFA's capability to estimate sources with a wide class of distributions.
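    A hedged toy example of the basic idea (ideal-channel downlink, not the thesis's exact models): BPSK symbols for a few users are spread with random codes, and FastICA from scikit-learn (an assumed dependency, not mentioned in the thesis) recovers the per-user symbol streams blindly, up to sign and ordering.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)

# Toy CDMA downlink: K users, spreading factor N, BPSK symbols, ideal channel.
K, N, n_symbols = 3, 8, 2000
codes = rng.choice([-1.0, 1.0], size=(K, N))            # random spreading codes
symbols = rng.choice([-1.0, 1.0], size=(n_symbols, K))  # BPSK data per user
chips = symbols @ codes                                  # superimposed chip vectors
chips += 0.1 * rng.normal(size=chips.shape)              # additive channel noise

# Each received chip vector is a linear mixture of the users' symbols, so ICA
# can recover the symbol streams blindly, without the codes, up to sign/order.
ica = FastICA(n_components=K, random_state=0)
estimated_sources = ica.fit_transform(chips)
print("estimated source block shape:", estimated_sources.shape)  # (2000, 3)
```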

    Can we identify non-stationary dynamics of trial-to-trial variability?

    Identifying sources of the apparent variability in non-stationary scenarios is a fundamental problem in many biological data analysis settings. For instance, neurophysiological responses to the same task often vary from one repetition of the same experiment (trial) to the next. The origin and functional role of this observed variability is one of the fundamental questions in neuroscience. The nature of such trial-to-trial dynamics, however, remains largely elusive to current data analysis approaches. A range of strategies have been proposed in modalities such as electro-encephalography, but gaining a fundamental insight into latent sources of trial-to-trial variability in neural recordings is still a major challenge. In this paper, we present a proof-of-concept study of the analysis of trial-to-trial variability dynamics founded on non-autonomous dynamical systems. At this initial stage, we evaluate the capacity of a simple statistic based on the behaviour of trajectories in classification settings, the trajectory coherence, to identify trial-to-trial dynamics. First, we derive the conditions leading to observable changes in datasets generated by a compact dynamical system (the Duffing equation). This canonical system plays the role of a ubiquitous model of non-stationary supervised classification problems. Second, we estimate the coherence of class trajectories in the empirically reconstructed space of system states. We show how this analysis can discern variations attributable to non-autonomous deterministic processes from stochastic fluctuations. The analyses are benchmarked using simulated data and two different real datasets which have been shown to exhibit attractor dynamics. As an illustrative example, we focus on the analysis of the rat's frontal cortex ensemble dynamics during a decision-making task. Results suggest that, in line with recent hypotheses, rather than internal noise, it is a deterministic trend which most likely underlies the observed trial-to-trial variability. Thus, the empirical tool developed within this study potentially allows us to infer the source of variability in in-vivo neural recordings.
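    For illustration only, a forced Duffing oscillator with either a deterministic drift in its forcing amplitude or added stochastic noise can generate the two kinds of trial-to-trial variability the study contrasts. The equation follows the standard Duffing form; the specific parameter values are arbitrary assumptions, not those used in the paper.

```python
import numpy as np

def duffing_trial(drift=0.0, noise_std=0.0, n_steps=5000, dt=0.01,
                  delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2,
                  rng=None):
    """Euler integration of one 'trial' of a forced Duffing oscillator:
    x'' + delta*x' + alpha*x + beta*x**3 = (gamma + drift) * cos(omega*t).

    `drift` stands in for a deterministic non-autonomous trend across trials;
    `noise_std` injects stochastic fluctuations instead.
    """
    rng = rng or np.random.default_rng()
    x, v = np.zeros(n_steps), np.zeros(n_steps)
    x[0] = 0.1
    for t in range(1, n_steps):
        force = (gamma + drift) * np.cos(omega * t * dt)
        accel = force - delta * v[t - 1] - alpha * x[t - 1] - beta * x[t - 1] ** 3
        v[t] = v[t - 1] + dt * (accel + noise_std * rng.normal())
        x[t] = x[t - 1] + dt * v[t - 1]
    return x

# Trials whose variability stems from a deterministic drift vs. pure noise.
drift_trials = [duffing_trial(drift=0.02 * k) for k in range(10)]
noise_trials = [duffing_trial(noise_std=0.5) for _ in range(10)]
print("final-state spread, drift trials:", np.std([t[-1] for t in drift_trials]))
print("final-state spread, noise trials:", np.std([t[-1] for t in noise_trials]))
```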

    On optimum sensing time over fading channels for Cognitive Radio system

    Cognitive Radio (CR) is widely expected to be the next Big Bang in wireless communications. In a CR network, the secondary users are allowed to utilize the frequency bands of primary users when these bands are not currently being used. For this, the secondary user should be able to detect the presence of the primary user. Therefore, spectrum sensing is of significant importance in CR networks. In this thesis, we consider the antenna selection problem over fading channels to optimize the trade-off between probability of detection and power efficiency of CR systems. We mathematically formulate a target function consisting of detection probability and power efficiency, and use an energy detection sensing scheme to prove that the formulated problem indeed has one optimal sensing time that yields the highest target function value. Two modelling techniques are used to model the Rayleigh fading channels: one without correlations and one with correlations in the temporal and frequency domains. For each model, we provide two scenarios for the average SNRs of the channels. In the first scenario, the channels have clearly distinguishable levels of average SNR. The second scenario provides a condition in which the channels have similar average SNRs. The antenna selection criterion is based on the received signal strength; each simulation is compared with the worst-case simulation, in which the antennas are selected randomly. Numerical results show that the proposed antenna selection criterion enhances the detection probability and shortens the optimal sensing time. The target function achieves a higher value while maintaining a detection probability of 0.9, compared with the worst-case simulation. The optimal sensing time also varies with other parameters, such as the weighting factor of the target function.
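    A simplified sketch of the trade-off described above (not the thesis's formulation): an energy detector's detection probability is estimated by Monte Carlo for a given number of sensing samples, and a weighted target function balances it against the fraction of the frame left for transmission; scanning over sensing times locates the maximizing value. All names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def detection_probability(n_samples, snr, p_fa=0.1, n_trials=5000):
    """Monte Carlo energy detector; the threshold is set empirically for p_fa."""
    noise_energy = (rng.normal(size=(n_trials, n_samples)) ** 2).sum(axis=1)
    threshold = np.quantile(noise_energy, 1.0 - p_fa)
    received = (np.sqrt(snr) * rng.normal(size=(n_trials, n_samples))
                + rng.normal(size=(n_trials, n_samples)))
    return ((received ** 2).sum(axis=1) > threshold).mean()

def target(sensing_time, frame_time=100.0, snr=0.05, weight=0.7, fs=1.0):
    """Weighted sum of detection probability and the fraction of the frame
    left for secondary transmission (a stand-in for power efficiency)."""
    n = max(1, int(sensing_time * fs))
    return (weight * detection_probability(n, snr)
            + (1.0 - weight) * (1.0 - sensing_time / frame_time))

sensing_times = np.arange(1, 60)
best = max(sensing_times, key=target)
print("sensing time maximizing the illustrative target function:", best)
```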

    A Study of the Structure of Light Tin Isotopes via Single-Neutron Knockout Reactions

    The region around 100Sn is important because of the close proximity to the N=Z=50 magic numbers, the rp process, and the proton drip line. Alpha decay measurements show a reversal in the spin-parity assignments of the ground and first excited states in 101Sn compared to 105Te. However, the lightest odd-mass tin isotope with a firm spin-parity assignment is 109Sn. The d5/2 and g7/2 single-particle states above N=50 are nearly degenerate, as evidenced by the excitation energy of the first excited state in 101Sn at only 172 keV. The correct ordering of these single-particle states and the degree of neutron configuration mixing have been the subject of debate. Spectroscopic studies have been performed close to 100Sn, utilizing the S800 and CAESAR at the NSCL. These studies make use of a single-neutron knockout reaction on beams of 108Sn and 106Sn. The momentum distributions of the resulting residues reflect the l-value of the removed neutron. Additionally, γ-rays were measured in coincidence with the momentum distributions, allowing the knockout channel in which the residue is left in an excited state to be separated from the channel to the ground state. The odd-mass residue can then be characterized in terms of a hole in the d- or g-orbital with reference to the even-mass nucleus. The relative populations of final states in the odd-mass residue are indicative of the mixing in the ground states of 106,108Sn. Comparing the momentum distributions with reaction calculations shows that both 105Sn and 107Sn have a Jπ = 5/2+ ground state and a Jπ = 7/2+ first excited state at 200 keV and 151 keV, respectively. The exclusive cross sections for one-neutron knockout from 106Sn and 108Sn show that the ground states are dominated by the d5/2 single-particle state.

    Modern Data Acquisition, System Design, and Analysis Techniques and their Impact on the Physics-based Understanding of Neutron Coincidence Counters used for International Safeguards (draft)

    Neutron coincidence counting is a technique widely used in the field of international safeguards for the mass quantification of a fissioning item. It can exploit either passive or active interrogation techniques to assay a wide range of plutonium, uranium, and mixed oxide items present in nuclear facilities worldwide. Because neutrons are highly penetrating and the time correlation between events provides an identifiable signature, the technique, often combined with gamma spectroscopy, has been used for nondestructive assay of special nuclear material for decades. When neutron coincidence counting was first established, a few system designs emerged as standards for assaying common containers. Over successive decades, new systems were developed for a wider variety of inspection assays. Simultaneously, new system characterization procedures, data acquisition technologies, and performance optimizations were introduced. The International Atomic Energy Agency has been using many of these original counters for decades, despite the large technological growth in recent years. This is both a testament and an opportunity. This dissertation explores several topics in which the performance of neutron coincidence counting systems is studied such that their behavior may be better understood from physical models and their applications may be expanded to a greater field of interest. Using modern list mode data acquisition and analysis, procedures are developed, implemented, and exploited to expand the information obtained about both these systems and the sources in question in a common measurement. System parameters such as coincidence time windows, dead time, efficiency, die-away time, and non-ideal double pulsing are explored in new ways that are not possible using traditional shift register logic. In addition, modern amplifier electronics are retrofitted in one model, the Uranium Neutron Coincidence Collar, to allow a count rate-based source spatial response matrix to be measured, ultimately for the identification of diversion in a fresh fuel assembly. The testing, evaluation, and optimization of these electronics are described; they may serve as a more capable alternative to existing electronics used in IAEA systems. Finally, with a thorough understanding of the system characteristics and performance, neutron coincidence counters may be used to self-certify calibration sources with precision superior to that of national metrological laboratories.
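    As a rough illustration of list-mode coincidence analysis (not the dissertation's implementation), the sketch below mimics shift-register logic in software: for each trigger pulse it counts neutrons in a prompt (reals-plus-accidentals) gate opened after a predelay and in a delayed accidentals gate, and takes the difference as a doubles estimate. Gate widths and delays are placeholder values.

```python
import numpy as np

def doubles_rate(timestamps, gate=64e-6, predelay=4.5e-6, long_delay=1e-3):
    """Crude list-mode doubles estimate mimicking shift-register logic.

    For each trigger pulse, count neutrons in a prompt (R+A) gate opened
    `predelay` after the trigger and in a delayed accidentals (A) gate
    opened `long_delay` later; (R+A) - A is proportional to real coincidences.
    """
    timestamps = np.sort(np.asarray(timestamps))
    prompt = accidental = 0
    for t in timestamps:
        prompt += (np.searchsorted(timestamps, t + predelay + gate)
                   - np.searchsorted(timestamps, t + predelay))
        accidental += (np.searchsorted(timestamps, t + long_delay + gate)
                       - np.searchsorted(timestamps, t + long_delay))
    live_time = timestamps[-1] - timestamps[0]
    return (prompt - accidental) / live_time

# Purely random (Poisson) pulse train: the doubles estimate should be ~0.
rng = np.random.default_rng(4)
pulses = np.cumsum(rng.exponential(1e-3, size=20000))
print("estimated doubles rate (1/s):", doubles_rate(pulses))
```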