16 research outputs found

    Multiuser detection of synchronous code-division multiple-access signals by perfect sampling

    No full text

    Gaussian particle filtering

    No full text

    Model Selection by MCMC Computation

    No full text
    MCMC sampling is a methodology that is becoming increasingly important in statistical signal processing. It has been of particular importance to the Bayesian-based approaches to signal processing since it extends significantly the range of problems that they can address. MCMC techniques generate samples from desired distributions by embedding them as limiting distributions of Markov chains. There are many ways of categorizing MCMC methods, but the simplest one is to classify them in one of two groups: the first is used in estimation problems where the unknowns are typically parameters of a model, which is assumed to have generated the observed data; the second is employed in more general scenarios where the unknowns are not only model parameters, but models as well. In this paper, we address the MCMC methods from the second group, which allow for generation of samples from probability distributions defined on unions of disjoint spaces of different dimensions. More specifically, we show why …
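    The abstract's core idea, generating samples from a desired distribution by making it the limiting distribution of a Markov chain, can be illustrated with a minimal random-walk Metropolis-Hastings sketch. This is a generic textbook example, not the paper's model-selection (trans-dimensional) sampler; the target density and step size are arbitrary choices for illustration.

    ```python
    import math
    import random

    def metropolis_hastings(log_target, n_samples, x0=0.0, step=1.0, seed=0):
        """Random-walk Metropolis-Hastings: the chain's limiting
        distribution is proportional to exp(log_target(x))."""
        rng = random.Random(seed)
        x = x0
        samples = []
        for _ in range(n_samples):
            proposal = x + rng.gauss(0.0, step)   # symmetric proposal
            # Accept with probability min(1, target(proposal) / target(x)).
            if math.log(rng.random() + 1e-300) < log_target(proposal) - log_target(x):
                x = proposal
            samples.append(x)
        return samples

    # Illustrative target: a standard normal, known only up to a constant.
    samples = metropolis_hastings(lambda x: -0.5 * x * x, 20000)
    burned = samples[5000:]                        # discard burn-in
    mean = sum(burned) / len(burned)
    var = sum((s - mean) ** 2 for s in burned) / len(burned)
    ```

    After burn-in, the sample mean and variance should approximate those of the standard normal target (0 and 1).
    
    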

    An MCMC sampling approach to estimation of nonstationary hidden Markov models

    No full text

    Dynamic Radar Networks of UAVs: A Tutorial Overview and Tracking Performance Comparison with Terrestrial Radar Networks

    No full text
    In the coming years, low aerial space will be crowded with unmanned aerial vehicles (UAVs) providing various services. In this context, an emerging problem is to detect and track unauthorized or malicious mini/micro UAVs. In contrast to current solutions mainly based on fixed terrestrial radars, this tutorial puts forth the idea of a dynamic radar network (DRN) composed of UAVs able to smartly adapt their formation and navigation strategy to best track malicious UAVs in real time with high accuracy and in a distributed fashion. To this end, the main methods for target detection and tracking are described, and an optimized navigation scheme according to an information-seeking approach is developed. Further, some examples of simulation results and future directions of work are presented, highlighting the advantages of dynamic and reconfigurable networks over static ones.

    Perfect sampling: a review and applications to signal processing

    No full text

    Reinforcement Learning for UAV Autonomous Navigation, Mapping and Target Detection

    No full text
    In this paper, we study a joint detection, mapping and navigation problem for a single unmanned aerial vehicle (UAV) equipped with a low complexity radar and flying in an unknown environment. The goal is to optimize its trajectory with the purpose of maximizing the mapping accuracy and, at the same time, avoiding areas where measurements might not be sufficiently informative from the perspective of target detection. This problem is formulated as a Markov decision process (MDP) where the UAV is an agent that runs both a state estimator for target detection and environment mapping, and a reinforcement learning (RL) algorithm to infer its own navigation policy (i.e., the control law). Numerical results show the feasibility of the proposed idea, highlighting the UAV's capability of autonomously exploring areas with high probability of target detection while reconstructing the surrounding environment.
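    The MDP-plus-RL formulation described above can be sketched with a minimal tabular Q-learning loop. This is a hypothetical toy stand-in (a one-dimensional corridor with a goal state), not the paper's actual UAV environment, state space, or reward design; all constants below are illustrative.

    ```python
    import random

    # Toy MDP: states 0..4 along a corridor, goal at state 4.
    # Actions: 0 = move left, 1 = move right.
    N_STATES = 5
    MOVES = (-1, +1)
    GOAL = N_STATES - 1

    def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
        """Tabular Q-learning with epsilon-greedy exploration."""
        rng = random.Random(seed)
        Q = [[0.0, 0.0] for _ in range(N_STATES)]
        for _ in range(episodes):
            s = 0
            while s != GOAL:
                # Epsilon-greedy action selection.
                if rng.random() < eps:
                    a = rng.randrange(2)
                else:
                    a = max((0, 1), key=lambda i: Q[s][i])
                s2 = min(max(s + MOVES[a], 0), N_STATES - 1)
                r = 1.0 if s2 == GOAL else -0.01   # small step cost, goal reward
                # Standard Q-learning temporal-difference update.
                Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
                s = s2
        return Q

    Q = q_learning()
    # Greedy policy extracted from the learned Q-table.
    policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)]
    ```

    After training, the greedy policy should select "move right" in every non-goal state, i.e. the agent has inferred the control law that reaches the goal.
    
    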

    Study of the Wavelet Basis Selections

    No full text