
    Rainfall Map from Attenuation Data Fusion of Satellite Broadcast and Commercial Microwave Links

    The demand for accurate rainfall rate maps keeps growing. This paper proposes a novel algorithm to estimate a rainfall rate map from attenuation measurements coming from both broadcast satellite links (BSLs) and commercial microwave links (CMLs). The approach is based on an iterative procedure that extends the well-known GMZ algorithm to fuse attenuation data from different links in a three-dimensional scenario, while also accounting for the virga phenomenon through a vertical rain attenuation model. We experimentally prove the convergence of the procedure, showing how the estimation error decreases at every iteration. The numerical results show that adding BSLs to a pre-existing CML network boosts the accuracy of the estimated rainfall map, improving the correlation metrics by up to 50%. Moreover, the algorithm is shown to be robust to errors in the virga parametrization, demonstrating that good estimation performance can be obtained without precise, real-time estimation of the virga parameters.
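    For background, the sketch below shows the standard single-link power-law step that underlies rainfall retrieval from microwave-link attenuation; it is only a minimal illustration with placeholder coefficients and a made-up link, not the paper's data-fusion or GMZ-extension algorithm.

        # Minimal sketch: invert the power law A = a * R**b * L for one link.
        # a and b are frequency/polarization-dependent coefficients; the values
        # below are placeholders, not taken from the paper.
        a, b = 0.12, 1.1
        link_length_km = 8.0                 # hypothetical CML path length
        measured_attenuation_db = 14.0       # hypothetical rain-induced attenuation

        specific_attenuation = measured_attenuation_db / link_length_km   # dB/km
        rain_rate = (specific_attenuation / a) ** (1.0 / b)               # mm/h
        print(f"estimated path-averaged rain rate: {rain_rate:.1f} mm/h")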

    Extreme Values of the Fiedler Vector on Trees

    Let G be a connected tree on n vertices and let L = D − A denote the Laplacian matrix on G. The second-smallest eigenvalue λ₂(G) > 0, also known as the algebraic connectivity, as well as the associated eigenvector φ₂, have been of substantial interest. We investigate the question of when the maxima and minima of φ₂ are attained at the endpoints of the longest path in G. Our results also apply to more general graphs that "behave globally" like a tree but can exhibit more complicated local structure. The crucial new ingredient is a reproducing formula for the eigenvector φ_k.
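    As a quick numerical illustration of the objects involved (not of the paper's proof technique), the sketch below builds the Laplacian L = D − A of a small hand-made tree, computes λ₂ and the Fiedler vector φ₂ with NumPy, and reports where the extrema of φ₂ fall; the tree and vertex labels are arbitrary choices.

        import numpy as np

        # Small tree: path 0-1-2-3-4 with an extra leaf 5 attached to vertex 2.
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (2, 5)]
        n = 6
        A = np.zeros((n, n))
        for u, v in edges:
            A[u, v] = A[v, u] = 1.0
        L = np.diag(A.sum(axis=1)) - A      # graph Laplacian L = D - A

        vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
        lam2, phi2 = vals[1], vecs[:, 1]    # algebraic connectivity, Fiedler vector

        print("lambda_2 =", round(float(lam2), 4))
        print("phi_2    =", np.round(phi2, 3))
        # The longest path in this tree runs from vertex 0 to vertex 4, so one can
        # compare its endpoints with the positions of the extrema of phi_2.
        print("argmin/argmax of phi_2:", phi2.argmin(), phi2.argmax())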

    Bayesian Activity Estimation and Uncertainty Quantification of Spent Nuclear Fuel Using Passive Gamma Emission Tomography

    In this paper, we address the problem of activity estimation in passive gamma emission tomography (PGET) of spent nuclear fuel. Two different noise models are considered and compared, namely the isotropic Gaussian and the Poisson noise models. The problem is formulated within a Bayesian framework as a linear inverse problem, and prior distributions are assigned to the unknown model parameters. In particular, a Bernoulli-truncated Gaussian prior model is considered to promote sparse pin configurations. A Markov chain Monte Carlo (MCMC) method, based on a split-and-augmented Gibbs sampler, is then used to sample the posterior distribution of the unknown parameters. The proposed algorithm is first validated on simulations with synthetic data generated from the nominal models. We then consider more realistic data simulated with a bespoke simulator whose forward model is non-linear and not available analytically. In that case, the linear models are mis-specified, and we analyse their robustness for activity estimation. Compared with existing methods, the results demonstrate the superior performance of the proposed approach in estimating the pin activities in different assembly patterns, in addition to its ability to quantify their uncertainty.
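    For orientation only, the sketch below implements a generic Bernoulli-Gaussian (spike-and-slab) Gibbs sampler for a toy linear model y = Hx + e with known noise variance; it is not the paper's split-and-augmented sampler or its Bernoulli-truncated-Gaussian prior, and H, y, and the "pin" dimensions are made-up placeholders.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy linear inverse problem y = H x + e with a sparse x.
        n_obs, n_pins = 80, 30
        H = rng.normal(size=(n_obs, n_pins))
        x_true = np.zeros(n_pins)
        active = rng.choice(n_pins, size=6, replace=False)
        x_true[active] = rng.uniform(1.0, 2.0, size=6)
        sigma2 = 0.05 ** 2                        # known noise variance (assumption)
        y = H @ x_true + rng.normal(scale=np.sqrt(sigma2), size=n_obs)

        pi, tau2 = 0.2, 1.0                       # prior inclusion prob., slab variance
        x = np.zeros(n_pins)
        n_iter, burn_in, samples = 2000, 500, []

        for it in range(n_iter):
            for i in range(n_pins):
                r = y - H @ x + H[:, i] * x[i]    # residual without component i
                hh = H[:, i] @ H[:, i]
                post_var = 1.0 / (hh / sigma2 + 1.0 / tau2)
                post_mean = post_var * (H[:, i] @ r) / sigma2
                # Log-odds of z_i = 1 vs 0 after integrating x_i out analytically.
                log_odds = (np.log(pi / (1 - pi))
                            + 0.5 * np.log(post_var / tau2)
                            + 0.5 * post_mean ** 2 / post_var)
                p_on = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -35, 35)))
                x[i] = post_mean + np.sqrt(post_var) * rng.normal() if rng.random() < p_on else 0.0
            if it >= burn_in:
                samples.append(x.copy())

        post_mean_x = np.array(samples).mean(axis=0)
        print("posterior mean:", np.round(post_mean_x, 2))
        print("true x        :", np.round(x_true, 2))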

    Explainable methods for knowledge graph refinement and exploration via symbolic reasoning

    Knowledge Graphs (KGs) have applications in many domains such as Finance, Manufacturing, and Healthcare. While recent efforts have created large KGs, their content is far from complete and sometimes includes invalid statements. Therefore, it is crucial to refine the constructed KGs to enhance their coverage and accuracy via KG completion and KG validation. It is also vital to provide human-comprehensible explanations for such refinements, so that humans can trust the KG quality. Enabling KG exploration, by search and browsing, is also essential for users to understand the KG's value and limitations with respect to downstream applications. However, the large size of KGs makes KG exploration very challenging. While the type taxonomy of KGs is a useful asset along these lines, it remains insufficient for deep exploration. In this dissertation we tackle the aforementioned challenges of KG refinement and KG exploration by combining logical reasoning over the KG with other techniques such as KG embedding models and text mining. Through this combination, we introduce methods that provide human-understandable output. Concretely, we introduce methods to tackle KG incompleteness by learning exception-aware rules over the existing KG; the learned rules are then used to accurately infer missing links in the KG. Furthermore, we propose a framework for constructing human-comprehensible explanations for candidate facts from both the KG and text. The extracted explanations are used to ensure the validity of KG facts. Finally, to facilitate KG exploration, we introduce a method that combines KG embeddings with rule mining to compute informative entity clusters with explanations. The dissertation makes the following contributions:
    • For KG completion, we present ExRuL, a method that revises Horn rules by adding exception conditions to the rule bodies. The revised rules can infer new facts and thus close gaps in the KG. Experiments on large KGs show that this method substantially reduces errors in the inferred facts and provides user-friendly explanations.
    • With RuLES, we introduce a rule-learning method based on probabilistic representations of missing facts. The procedure iteratively extends rules induced from a KG by combining neural KG embeddings with information from text corpora, using new rule-quality metrics during rule generation. Experiments show that RuLES substantially improves the quality of the learned rules and of their predictions.
    • To support KG validation, we present ExFaKT, a framework for constructing explanations for candidate facts. Using rules, the method rewrites a candidate fact into a set of statements that are easier to find and to validate or refute. The output of ExFaKT is a set of semantic evidences for candidate facts, extracted from text corpora and the KG. Experiments show that these rewritings clearly improve the yield and quality of the discovered explanations, which support both manual KG validation by curators and automatic validation.
    • To support KG exploration, we present ExCut, a method for computing informative entity clusters with explanations, using KG embeddings and automatically induced rules. A cluster explanation is a combination of relations between the entities that identifies the cluster. ExCut jointly improves cluster quality and cluster explainability by iteratively interleaving the learning of embeddings and rules. Experiments show that ExCut computes high-quality clusters and that the cluster explanations are informative for users.
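    As a toy illustration of rule quality over a KG (not of any of the systems above), the sketch below computes support and confidence for the Horn rule livesIn(X, Y) ← marriedTo(X, Z), livesIn(Z, Y) over a handful of invented triples; the counterexample it surfaces is the kind of case that exception-aware rule revision targets.

        # Toy KG as a set of (subject, predicate, object) triples; all data invented.
        kg = {
            ("alice", "marriedTo", "bob"), ("bob", "marriedTo", "alice"),
            ("bob", "livesIn", "berlin"), ("alice", "livesIn", "berlin"),
            ("carol", "marriedTo", "dan"), ("dan", "livesIn", "paris"),
            ("eve", "marriedTo", "frank"), ("frank", "livesIn", "rome"),
            ("eve", "livesIn", "madrid"),
        }

        def pairs(predicate):
            return {(s, o) for s, p, o in kg if p == predicate}

        # Rule: livesIn(X, Y) <- marriedTo(X, Z), livesIn(Z, Y)
        married, lives = pairs("marriedTo"), pairs("livesIn")
        predicted = {(x, y) for x, z in married for z2, y in lives if z == z2}
        support = len(predicted & lives)
        confidence = support / len(predicted) if predicted else 0.0

        print("predictions:", sorted(predicted))
        print("support =", support, " confidence =", round(confidence, 2))
        # (eve, rome) is predicted although eve livesIn madrid in the KG: a
        # counterexample that an exception-aware rule could exclude.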

    Advances in Autism Research

    This book represents one of the most up-to-date collections of articles on clinical practice and research in the field of Autism Spectrum Disorders (ASD). The scholars who contributed to this book are experts in their field, carrying out cutting-edge research in prestigious institutes worldwide (e.g., Harvard Medical School, University of California, MIND Institute, King’s College, Karolinska Institute, and many others). The book addresses many topics, including (1) The COVID-19 pandemic; (2) Epidemiology and prevalence; (3) Screening and early behavioral markers; (4) Diagnostic and phenotypic profile; (5) Treatment and intervention; (6) Etiopathogenesis (biomarkers, biology, and genetic, epigenetic, and risk factors); (7) Comorbidity; (8) Adulthood; and (9) Broader Autism Phenotype (BAP). This book testifies to the complexity of performing research in the field of ASD. The published contributions underline areas of progress as well as ongoing challenges for which firmer data are expected in the coming years. It is hoped that experts, clinicians, researchers, and trainees will have the opportunity to read this updated text describing the challenging heterogeneity of Autism Spectrum Disorder.

    Remote sensing technology applications in forestry and REDD+

    Advances in close-range and remote sensing technologies are driving innovations in forest resource assessment and monitoring on varying scales. Data acquired with airborne and spaceborne platforms provide high(er) spatial resolution, more frequent coverage, and more spectral information. Recent developments in ground-based sensors have advanced 3D measurements, low-cost permanent systems, and community-based monitoring of forests. The UNFCCC REDD+ mechanism has advanced the remote sensing community and the development of forest geospatial products that countries can use for international reporting and national forest monitoring. However, an urgent need remains to better understand the options and limitations of remote and close-range sensing techniques in the field of forest degradation and forest change. Therefore, we invite scientists working on remote sensing technologies, close-range sensing, and field data to contribute to this Special Issue. Topics of interest include: (1) novel remote sensing applications that can meet the needs of forest resource information and REDD+ MRV, (2) case studies of applying remote sensing data for REDD+ MRV, (3) time-series algorithms and methodologies for forest resource assessment on different spatial scales, varying from the tree to the national level, and (4) novel close-range sensing applications that can support sustainable forestry and REDD+ MRV. We particularly welcome submissions on data fusion.

    100% Renewable Energy Transition: Pathways and Implementation

    Energy markets are already undergoing considerable transitions to accommodate new (renewable) energy forms, new (decentral) energy players, and new system requirements, e.g., flexibility and resilience. Traditional energy markets for fossil fuels are therefore under pressure, while not-yet-mature (renewable) energy markets are emerging. As a consequence, investments in large-scale and capital-intensive (traditional) energy production projects are surrounded by high uncertainty and are difficult for private entities to hedge. Traditional energy production companies are transforming into energy service suppliers, and companies aggregating numerous potential market players are emerging, while regulation and system management are playing an increasing role. To address these increasing uncertainties and complexities, economic analysis, forecasting, modeling, and investment assessment require fresh approaches and views. Novel research is thus required to simulate the interplay of multiple actors and their idiosyncratic behavior. The required approaches cannot deal with energy supply alone but must also include active demand and cover systemic aspects. Energy market transitions challenge policy-making: market coordination failure, the removal of barriers hindering restructuring, and the combination of market signals with command-and-control policy measures are some of the new aims of policies. The aim of this Special Issue is to collect research papers that address the above issues using novel methods from any adequate perspective, including economic analysis, systems modeling, behavioral forecasting, and policy assessment. The issue will include, but is not limited to: local control schemes and algorithms for distributed generation systems; centralized and decentralized sustainable energy management strategies; communication architectures, protocols, and properties of practical applications; topologies of distributed generation systems improving flexibility, efficiency, and power quality; practical issues in the control design and implementation of distributed generation systems; and energy transition studies for optimized pathway options aiming for high levels of sustainability.

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades, with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
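    As a generic, textbook-style illustration (not an example taken from the paper), consider a description-logic TBox whose axioms entail an unwanted subsumption:

        \[
          \mathcal{T} = \{\, \mathit{Penguin} \sqsubseteq \mathit{Bird},\;\; \mathit{Bird} \sqsubseteq \mathit{Flies} \,\}
          \qquad\models\qquad \mathit{Penguin} \sqsubseteq \mathit{Flies}.
        \]
        A classical repair deletes $\mathit{Bird} \sqsubseteq \mathit{Flies}$ entirely, whereas a gentle repair or pseudo-contraction replaces it with the weaker axiom
        \[
          \mathit{Bird} \sqcap \lnot\mathit{Penguin} \sqsubseteq \mathit{Flies},
        \]
        which no longer entails the unwanted consequence while still stating that every non-penguin bird flies.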