198 research outputs found

    Distributed M-ary hypothesis testing for decision fusion in multiple-input multiple-output wireless sensor networks

    Get PDF
    In this study, the authors investigate binary decision fusion over a shared Rayleigh fading channel with multiple antennas at the decision fusion centre (DFC) in wireless sensor networks. Three fusion rules are derived for the DFC in the case of distributed M-ary hypothesis testing, where M is the number of hypotheses to be classified: the optimum maximum a posteriori (MAP) rule, the augmented quadratic discriminant analysis (A-QDA) rule, and a MAP observation bound. A comparative simulation study of the proposed fusion rules is carried out in terms of detection performance and receiver operating characteristic (ROC) curves, taking into account several parameters such as the number of antennas, the number of local detectors, the number of hypotheses, and the signal-to-noise ratio. Simulation results show that the optimum MAP rule has better detection performance than the A-QDA rule. In addition, increasing the number of antennas improves the detection performance up to a saturation level, while increasing the number of hypotheses degrades it.
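
    As a rough illustration of the MAP fusion rule mentioned above, the following minimal sketch selects the hypothesis maximizing the posterior probability given the vector received at the DFC. The Gaussian channel likelihoods and all parameter values are illustrative assumptions, not the authors' channel model.

    import numpy as np

    def map_fusion(y, priors, means, cov):
        # MAP rule: argmax_m  p(H_m) * p(y | H_m).
        # y: vector received at the DFC (one entry per antenna);
        # means[m]: assumed mean of y under hypothesis H_m (illustrative);
        # cov: assumed Gaussian noise covariance (illustrative).
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        log_post = []
        for m, mu in enumerate(means):
            r = y - mu
            loglik = -0.5 * (r @ inv @ r + logdet + len(y) * np.log(2 * np.pi))
            log_post.append(np.log(priors[m]) + loglik)
        return int(np.argmax(log_post))

    # Toy run: M = 3 hypotheses observed through N = 4 receive antennas.
    rng = np.random.default_rng(0)
    means = [np.full(4, float(m)) for m in range(3)]
    y = means[2] + rng.normal(scale=0.5, size=4)
    print(map_fusion(y, [1/3, 1/3, 1/3], means, 0.25 * np.eye(4)))  # expected: 2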

    A Survey of Stochastic Simulation and Optimization Methods in Signal Processing

    Get PDF
    Modern signal processing (SP) methods rely very heavily on probability and statistics to solve challenging SP problems. SP methods are now expected to deal with ever more complex models, requiring ever more sophisticated computational inference techniques. This has driven the development of statistical SP methods based on stochastic simulation and optimization. Stochastic simulation and optimization algorithms are computationally intensive tools for performing statistical inference in models that are analytically intractable and beyond the scope of deterministic inference methods. They have recently been applied successfully to many difficult problems involving complex statistical models and sophisticated (often Bayesian) statistical inference techniques. This survey paper offers an introduction to stochastic simulation and optimization methods in signal and image processing. The paper addresses a variety of high-dimensional Markov chain Monte Carlo (MCMC) methods as well as deterministic surrogate methods, such as variational Bayes, the Bethe approach, belief and expectation propagation, and approximate message passing algorithms. It also discusses a range of optimization methods that have been adopted to solve stochastic problems, as well as stochastic methods for deterministic optimization. Finally, areas of overlap between simulation and optimization, in particular optimization-within-MCMC and MCMC-driven optimization, are discussed.
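
    For readers new to the MCMC family surveyed here, a minimal random-walk Metropolis-Hastings sketch is given below; it targets an arbitrary unnormalized log-density, and the Gaussian example target and step size are illustrative choices rather than anything specific to the paper.

    import numpy as np

    def metropolis_hastings(log_target, x0, n_samples, step=0.5, seed=0):
        # Random-walk Metropolis-Hastings: propose x' = x + step * N(0, I)
        # and accept with probability min(1, pi(x') / pi(x)).
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        logp = log_target(x)
        samples = np.empty((n_samples, x.size))
        for i in range(n_samples):
            prop = x + step * rng.normal(size=x.size)
            logp_prop = log_target(prop)
            if np.log(rng.uniform()) < logp_prop - logp:   # MH acceptance test
                x, logp = prop, logp_prop
            samples[i] = x
        return samples

    # Illustrative target: a standard 2-D Gaussian.
    chain = metropolis_hastings(lambda x: -0.5 * x @ x, [3.0, -3.0], 5000)
    print(chain[1000:].mean(axis=0))  # close to [0, 0] after burn-in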

    Sampling and Reconstruction of Sparse Signals on Circulant Graphs - An Introduction to Graph-FRI

    Full text link
    With the objective of employing graphs toward a more generalized theory of signal processing, we present a novel sampling framework for (wavelet-)sparse signals defined on circulant graphs which extends basic properties of Finite Rate of Innovation (FRI) theory to the graph domain, and can be applied to arbitrary graphs via suitable approximation schemes. At its core, the introduced Graph-FRI framework states that any K-sparse signal on the vertices of a circulant graph can be perfectly reconstructed from its dimensionality-reduced representation in the graph spectral domain, the Graph Fourier Transform (GFT), of minimum size 2K. By leveraging the recently developed theory of e-splines and e-spline wavelets on graphs, one can decompose this graph spectral transformation into a multiresolution low-pass filtering operation with a graph e-spline filter, followed by a transformation to the graph spectral domain; this allows one to infer a distinct sampling pattern and, ultimately, the structure of an associated coarsened graph, which preserves essential properties of the original, including circularity and, where applicable, the graph generating set. (To appear in Appl. Comput. Harmon. Anal., 2017.)
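
    To make the 2K-sample claim concrete: circulant matrices are diagonalized by the DFT, so the GFT of a circulant graph reduces to an ordinary DFT, and a K-sparse vertex signal can then be recovered from 2K consecutive spectral samples with a standard annihilating-filter (Prony-type) step. The sketch below illustrates only this simplified core, not the paper's multiresolution e-spline pipeline.

    import numpy as np

    def recover_sparse_vertex_signal(X, K, N):
        # X: 2K consecutive GFT samples (= DFT samples for a circulant graph)
        # of a K-sparse signal living on the N vertices of the graph.
        # Step 1: annihilating filter h from the nullspace of a Toeplitz system.
        T = np.array([[X[K + i - l] for l in range(K + 1)] for i in range(K)])
        h = np.linalg.svd(T)[2].conj()[-1]
        # Step 2: the filter roots encode the active vertex locations.
        roots = np.roots(h)
        locs = np.sort(np.round(-np.angle(roots) * N / (2 * np.pi)).astype(int) % N)
        # Step 3: amplitudes via a least-squares Vandermonde fit.
        V = np.exp(-2j * np.pi * np.outer(np.arange(2 * K), locs) / N)
        amps = np.linalg.lstsq(V, X, rcond=None)[0]
        return locs, amps.real

    # Toy check: N = 16 vertices, K = 2 active vertices.
    N, K = 16, 2
    x = np.zeros(N); x[3], x[11] = 1.5, -0.7
    X = np.fft.fft(x)                 # GFT of a circulant graph is the DFT
    print(recover_sparse_vertex_signal(X[:2 * K], K, N))  # ([3, 11], [1.5, -0.7])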

    A Comparison of Nonlinear Mixing Models for Vegetated Areas Using Simulated and Real Hyperspectral Data

    Get PDF
    Spectral unmixing (SU) is a crucial processing step when analyzing hyperspectral data. In such analysis, most of the work in the literature relies on the widely acknowledged linear mixing model to describe the observed pixels. Unfortunately, this model has been shown to be of limited interest for specific scenes, in particular those acquired over vegetated areas. Consequently, in the past few years, several nonlinear mixing models have been introduced to take nonlinear effects into account while performing SU. However, these models have been proposed empirically, without thorough validation. In this paper, the authors take advantage of two sets of real and physics-based simulated data to validate the accuracy of various nonlinear models in vegetated areas. These physics-based models, and their corresponding unmixing algorithms, are evaluated with respect to their ability to fit the measured spectra and to provide an accurate estimation of the abundance coefficients, interpreted as the spatial distribution of the materials in each pixel.
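
    For reference, the sketch below contrasts the linear mixing model with one representative nonlinear extension, a generalized-bilinear-style term that adds pairwise endmember interactions; the endmember spectra, abundances, and interaction coefficient are synthetic placeholders, not data from the paper.

    import numpy as np

    def linear_mix(E, a):
        # Linear mixing model: pixel = sum_r a_r * e_r, abundances summing to one.
        return E @ a

    def bilinear_mix(E, a, gamma=0.5):
        # Generalized-bilinear-style model: linear part plus pairwise
        # interaction terms gamma * a_i * a_j * (e_i * e_j), elementwise, i < j.
        y = E @ a
        R = E.shape[1]
        for i in range(R):
            for j in range(i + 1, R):
                y = y + gamma * a[i] * a[j] * E[:, i] * E[:, j]
        return y

    # Synthetic pixel: 3 endmembers over 50 spectral bands.
    rng = np.random.default_rng(1)
    E = rng.uniform(0.1, 0.9, size=(50, 3))   # endmember spectra (columns)
    a = np.array([0.5, 0.3, 0.2])             # abundances, sum to one
    y_lin, y_bil = linear_mix(E, a), bilinear_mix(E, a)
    print("nonlinear term RMS:", np.sqrt(np.mean((y_bil - y_lin) ** 2)))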

    Compressive Sensing in Communication Systems

    Get PDF

    Explainable methods for knowledge graph refinement and exploration via symbolic reasoning

    Get PDF
    Knowledge Graphs (KGs) have applications in many domains such as Finance, Manufacturing, and Healthcare. While recent efforts have created large KGs, their content is far from complete and sometimes includes invalid statements. Therefore, it is crucial to refine the constructed KGs to enhance their coverage and accuracy via KG completion and KG validation. It is also vital to provide human-comprehensible explanations for such refinements, so that humans can trust the KG quality. Enabling KG exploration, by search and browsing, is also essential for users to understand the value and limitations of a KG for downstream applications. However, the large size of KGs makes KG exploration very challenging. While the type taxonomy of KGs is a useful asset along these lines, it remains insufficient for deep exploration. In this dissertation we tackle the aforementioned challenges of KG refinement and KG exploration by combining logical reasoning over the KG with other techniques such as KG embedding models and text mining. Through such combinations, we introduce methods that provide human-understandable output. Concretely, we introduce methods to tackle KG incompleteness by learning exception-aware rules over the existing KG; the learned rules are then used to accurately infer missing links. Furthermore, we propose a framework for constructing human-comprehensible explanations for candidate facts from both the KG and text; the extracted explanations are used to ensure the validity of KG facts. Finally, to facilitate KG exploration, we introduce a method that combines KG embeddings with rule mining to compute informative entity clusters with explanations. In detail, the dissertation makes the following contributions:
    ‱ For KG completion, we present ExRuL, a method that revises Horn rules by adding exception conditions to their bodies. The revised rules can infer new facts and thereby close gaps in the KG. Experiments on large KGs show that the method substantially reduces errors in the inferred facts and yields user-friendly explanations.
    ‱ With RuLES, we introduce a rule-learning method based on probabilistic representations of missing facts. The approach iteratively extends rules induced from a KG by combining KG embeddings with information from text corpora, using new rule-quality metrics during rule generation. Experiments show that RuLES substantially improves the quality of the learned rules and of their predictions.
    ‱ To support KG validation, we present ExFaKT, a framework for constructing explanations for candidate facts. The method uses rules to rewrite a candidate fact into a set of statements that are easier to find and to validate or refute. ExFaKT outputs a set of semantic evidence items for candidate facts, extracted from text corpora and from the KG. Experiments show that these rewritings significantly improve the yield and quality of the discovered explanations; the generated explanations support both manual KG validation by curators and automatic validation.
    ‱ To support KG exploration, we present ExCut, a method that computes informative entity clusters with explanations, using KG embeddings together with automatically induced rules. A cluster explanation consists of a combination of relations among the entities that identifies the cluster. ExCut simultaneously improves cluster quality and cluster explainability by iteratively interleaving the learning of embeddings and rules. Experiments show that ExCut computes high-quality clusters and that the cluster explanations are informative for users.
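
    As a toy illustration of exception-aware rule inference of the kind described above (not the dissertation's actual ExRuL implementation), the sketch below applies a Horn rule with one negated-atom exception to a small set of triples in order to propose missing links.

    # Toy KG as a set of (subject, predicate, object) triples.
    kg = {
        ("alice", "worksAt", "acme"), ("alice", "livesIn", "paris"),
        ("bob", "worksAt", "acme"),
        ("carol", "worksAt", "acme"), ("carol", "isRemote", "true"),
    }

    def infer_lives_in(kg):
        # Hypothetical exception-aware rule:
        #   livesIn(X, C) <- worksAt(X, O), worksAt(Y, O), livesIn(Y, C),
        #                    not isRemote(X, true)
        inferred = set()
        for x, p, o in kg:
            if p != "worksAt" or (x, "isRemote", "true") in kg:
                continue                     # exception blocks remote workers
            for y, p2, o2 in kg:
                if p2 == "worksAt" and o2 == o and y != x:
                    for y2, p3, c in kg:
                        if y2 == y and p3 == "livesIn" and (x, "livesIn", c) not in kg:
                            inferred.add((x, "livesIn", c))
        return inferred

    print(infer_lives_in(kg))  # bob gets paris; carol is blocked by the exception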

    Bayesian inference in inverse, myopic and blind problems in signal and image processing

    Get PDF
    Les activitĂ©s de recherche prĂ©sentĂ©es concernent la rĂ©solution de problĂšmes inverses, myopes et aveugles rencontrĂ©s en traitement du signal et des images. Les mĂ©thodes de rĂ©solution privilĂ©giĂ©es reposent sur une dĂ©marche d'infĂ©rence bayĂ©sienne. Celle-ci offre un cadre d'Ă©tude gĂ©nĂ©rique pour rĂ©gulariser les problĂšmes gĂ©nĂ©ralement mal posĂ©s en exploitant les contraintes inhĂ©rentes aux modĂšles d'observation. L'estimation des paramĂštres d'intĂ©rĂȘt est menĂ©e Ă  l'aide d'algorithmes de Monte Carlo qui permettent d'explorer l'espace des solutions admissibles. Un des domaines d'application visĂ© par ces travaux est l'imagerie hyperspectrale et, plus spĂ©cifiquement, le dĂ©mĂ©lange spectral. Le second travail prĂ©sentĂ© concerne la reconstruction d'images parcimonieuses acquises par un microscope MRFM
    • 
