    Time-frequency representation of radar signals using Doppler-Lag block searching Wigner-Ville distribution

    Radar signals are time-varying signals whose parameters change over time. For these signals, the Quadratic Time-Frequency Distribution (QTFD) offers advantages over classical spectrum estimation in terms of frequency and time resolution, but it suffers heavily from cross-terms. To generate an accurate Time-Frequency Representation (TFR), a kernel function must suppress cross-terms while preserving auto-term energy, especially in a non-cooperative environment where the parameters of the actual signal are unknown. Thus, a new signal-dependent QTFD is proposed that adaptively estimates the kernel parameters for a wide class of radar signals. An adaptive procedure, Doppler-Lag Block Searching (DLBS) kernel estimation, was developed for this purpose. Accurate TFRs are produced for all simulated radar signals, and Monte Carlo simulation verifies that the Instantaneous Frequency (IF) estimation performance meets the Cramer-Rao Lower Bound (CRLB) at SNR > 6 dB.
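    The DLBS kernel estimation itself is the paper's contribution and is not reproduced here; as a point of reference, below is a minimal NumPy sketch of the plain discrete Wigner-Ville distribution that such signal-dependent kernels smooth. Function and variable names are illustrative, not from the paper.

        import numpy as np

        def wigner_ville(z):
            # Discrete Wigner-Ville distribution of an analytic signal z.
            # Rows index time n; columns index frequency bins k, where the
            # physical frequency is k / (2N) cycles per sample.
            N = len(z)
            W = np.zeros((N, N))
            for n in range(N):
                m_max = min(n, N - 1 - n)          # lags limited by signal ends
                m = np.arange(-m_max, m_max + 1)
                r = z[n + m] * np.conj(z[n - m])   # instantaneous autocorrelation
                row = np.zeros(N, dtype=complex)
                row[m % N] = r                     # wrap negative lags around
                W[n] = np.fft.fft(row).real        # Hermitian in m -> real FFT
            return W

        # Example: analytic linear FM chirp; the WVD ridge tracks its
        # instantaneous frequency, which cross-terms would obscure for
        # multicomponent signals.
        N = 256
        n = np.arange(N)
        z = np.exp(1j * np.pi * (0.4 / N) * n ** 2)
        tfr = wigner_ville(z)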

    Image embedding and user multi-preference modeling for data collection sampling

    This work proposes an end-to-end user-centric sampling method aimed at selecting the images from an image collection that maximize the information perceived by a given user. As our main contributions, we first introduce novel metrics that assess the amount of perceived information retained by the user when experiencing a set of images. We define the actual information present in a set of images as the volume spanned by the set in the corresponding latent space, and show how to incorporate the user's preferences into this volume calculation to build a user-centric metric for perceived information. Finally, we propose a sampling strategy seeking the minimum set of images that maximizes the information perceived by a given user. Experiments on the COCO dataset show the ability of the proposed approach to accurately integrate user preference while keeping reasonable diversity in the sampled image set.
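    The abstract leaves the metric at a high level; the sketch below shows one plausible reading, assuming "volume spanned in latent space" means the Gram-determinant volume of the embedding vectors and that user preference enters as non-negative per-image weights. All names are illustrative and not the paper's API.

        import numpy as np

        def spanned_volume(E):
            # Volume of the parallelotope spanned by the rows of E (k x d):
            # sqrt(det(E E^T)) generalizes length/area/volume to k <= d vectors.
            G = E @ E.T
            return np.sqrt(max(np.linalg.det(G), 0.0))

        def perceived_information(E, w):
            # User-centric variant: scale each embedding by the user's
            # preference weight before measuring the spanned volume.
            return spanned_volume(E * w[:, None])

        def greedy_sample(E, w, k):
            # Greedily grow the subset that maximizes perceived information.
            chosen = []
            for _ in range(k):
                rest = [i for i in range(len(E)) if i not in chosen]
                best = max(rest, key=lambda i:
                           perceived_information(E[chosen + [i]], w[chosen + [i]]))
                chosen.append(best)
            return chosen

        emb = np.random.default_rng(0).standard_normal((100, 16))
        pref = np.ones(100)      # uniform user -> pure diversity sampling
        print(greedy_sample(emb, pref, 5))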

    Signal design and processing for noise radar

    Efficient and secure use of the electromagnetic spectrum by different telecommunications and radar systems is today a focal research point, as the coexistence of different radio-frequency sources at the same time and in the same frequency band requires the solution of a non-trivial interference problem. Normally, this is addressed with diversity in frequency, space, time, polarization, or code. In some radar applications, secure use of the spectrum calls for the design of a set of transmitted waveforms highly resilient to interception and exploitation, i.e., with low probability of intercept/exploitation. In this context, noise radar technology (NRT) transmits noise-like waveforms and uses correlation processing of radar echoes for their optimal reception. After a review of NRT as developed in recent decades, the aim of this paper is to show that NRT can represent a valid solution to the aforesaid problems.
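    As a toy illustration of the correlation processing the paper builds on, the sketch below transmits a Gaussian noise waveform and recovers the delay of a weak echo by matched-filter correlation; all parameters are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)

        # Transmit a noise-like waveform: its sharp autocorrelation is what
        # makes correlation reception work and interception hard.
        N = 4096
        tx = rng.standard_normal(N)

        # Simulated echo: delayed, attenuated copy buried in receiver noise
        # (about -20 dB SNR per sample before correlation gain).
        delay, atten = 357, 0.1
        rx = np.zeros(2 * N)
        rx[delay:delay + N] += atten * tx
        rx += rng.standard_normal(2 * N)

        # Correlation processing: slide the stored transmit replica over
        # the received signal; the peak location estimates the delay.
        corr = np.correlate(rx, tx, mode='valid')
        est_delay = int(np.argmax(np.abs(corr)))
        print(est_delay)    # ~357 with high probability at this SNR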

    Performance analysis of feedback-free collision resolution NDMA protocol

    To support communications of a large number of deployed devices while guaranteeing limited signaling load, low energy consumption, and high reliability, future cellular systems require efficient random access protocols. However, collision resolution at the receiver is still the main bottleneck of these protocols. The network-assisted diversity multiple access (NDMA) protocol solves the issue and attains the highest potential throughput, at the cost of keeping devices active to acquire feedback and repeating transmissions until successful decoding. In contrast, another potential approach is the feedback-free NDMA (FF-NDMA) protocol, in which devices repeat packets in a pre-defined number of consecutive time slots without waiting for feedback associated with the repetitions. Here, we investigate the FF-NDMA protocol from a cellular network perspective in order to elucidate under what circumstances this scheme is more energy efficient than NDMA. We characterize the FF-NDMA protocol analytically with a multipacket reception model and a finite Markov chain. Analytic expressions for throughput, delay, capture probability, energy, and energy efficiency are derived. Then, clues for system design are established according to the different trade-offs studied. Simulation results show that FF-NDMA is more energy efficient than classical NDMA and HARQ-NDMA at low signal-to-noise ratio (SNR), and at medium SNR when the load increases.
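    The paper's analysis rests on a multipacket reception model and a finite Markov chain; the toy Monte Carlo below only illustrates the basic trade-off, assuming idealized reception in which k colliding packets are decodable iff they are repeated in at least k slots (no capture effect, no channel errors), and ignoring the feedback-listening energy that the full analysis accounts for.

        import numpy as np

        rng = np.random.default_rng(1)

        def compare(lam, B, epochs=100_000):
            # Number of users colliding per contention epoch ~ Poisson(lam),
            # empty epochs excluded.
            k = rng.poisson(lam, epochs)
            k = k[k > 0]
            # NDMA: feedback tells the k users to repeat exactly k times,
            # so every epoch is eventually decoded.
            ndma_tx = k.mean()            # transmissions per user
            # FF-NDMA: blind repetition in B fixed slots; decoding succeeds
            # only when k <= B, but no feedback must be received.
            ff_success = (k <= B).mean()
            return ndma_tx, B, ff_success

        for lam in (0.5, 1.0, 2.0):
            print(lam, compare(lam, B=3))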

    Distributed Adaptive Learning of Graph Signals

    The aim of this paper is to propose distributed strategies for adaptive learning of signals defined over graphs. Assuming the graph signal to be bandlimited, the method enables distributed reconstruction, with guaranteed performance in terms of mean-square error, and tracking from a limited number of sampled observations taken from a subset of vertices. A detailed mean-square analysis is carried out and illustrates the role played by the sampling strategy in the performance of the proposed method. Finally, some useful strategies for distributed selection of the sampling set are provided. Several numerical results validate our theoretical findings and illustrate the performance of the proposed method for distributed adaptive learning of signals defined over graphs.
    Comment: To appear in IEEE Transactions on Signal Processing, 201
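    Below is a minimal, non-distributed sketch of the kind of LMS reconstruction the paper makes distributed: assuming the signal is bandlimited to the first F Laplacian eigenvectors, noisy samples taken on a vertex subset are projected back onto the bandlimited subspace at each step. The graph, step size, and sample set are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(2)

        # Random undirected graph and its Laplacian eigenbasis
        N, F = 30, 5                                # nodes, signal bandwidth
        A = (rng.random((N, N)) < 0.2).astype(float)
        A = np.triu(A, 1); A = A + A.T
        L = np.diag(A.sum(1)) - A
        _, U = np.linalg.eigh(L)
        UF = U[:, :F]                               # smoothest F eigenvectors

        x_true = UF @ rng.standard_normal(F)        # bandlimited graph signal
        S = rng.choice(N, size=12, replace=False)   # sampled vertices
        D = np.zeros((N, N)); D[S, S] = 1.0         # sampling mask
        B = UF @ UF.T                               # bandlimiting projector

        # LMS-style adaptive reconstruction from noisy streaming samples
        x, mu = np.zeros(N), 0.5
        for _ in range(2000):
            y = x_true + 0.1 * rng.standard_normal(N)  # noisy observation
            x = x + mu * B @ D @ (y - x)   # correct on sampled vertices only
        print(np.linalg.norm(x - x_true))  # small if S enables recovery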