Toward RADSCAT measurements over the sea and their interpretation
Investigations into several areas essential to the execution and interpretation of suborbital observations by the composite radiometer-scatterometer sensor (RADSCAT) are reported. Experiments and theory were developed to demonstrate the remote anemometric capability of the sensor over the sea in various weather conditions. It is shown that the weather situations found in extratropical cyclones are useful for demonstrating the all-weather capability of the composite sensor. The large-scale fluctuations of the wind over the sea dictate the observational coverage required to correlate measurements with the mean surface wind speed. Various theoretical investigations were performed to establish a premise for the joint interpretation of the experimental data. The effects of clouds and rain on downward radiometric observations over the sea were computed, and a method of predicting atmospheric attenuation from joint observations is developed. In other theoretical efforts, the emission and scattering characteristics of the sea were derived, employing composite surface theories with both coherent and noncoherent assumptions.
Joint Causal Inference from Multiple Contexts
The gold standard for discovering causal relations is by means of
experimentation. Over the last decades, alternative methods have been proposed
that can infer causal relations between variables from certain statistical
patterns in purely observational data. We introduce Joint Causal Inference
(JCI), a novel approach to causal discovery from multiple data sets from
different contexts that elegantly unifies both approaches. JCI is a causal
modeling framework rather than a specific algorithm, and it can be implemented
using any causal discovery algorithm that can take into account certain
background knowledge. JCI can deal with different types of interventions (e.g.,
perfect, imperfect, stochastic, etc.) in a unified fashion, and does not
require knowledge of intervention targets or types in case of interventional
data. We explain how several well-known causal discovery algorithms can be seen
as addressing special cases of the JCI framework, and we also propose novel
implementations that extend existing causal discovery methods for purely
observational data to the JCI setting. We evaluate different JCI
implementations on synthetic data and on flow cytometry protein expression data
and conclude that JCI implementations can considerably outperform
state-of-the-art causal discovery algorithms.
Comment: Final version, as published by JMLR.
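The core JCI construction — pooling the data sets from all contexts and adding auxiliary context variables to the system before applying a standard causal discovery algorithm — can be sketched as follows. The structural model, the two regimes, and the residual-based dependence check standing in for a full discovery algorithm are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical data from two contexts: an observational regime and a
# regime in which X's mechanism is (imperfectly) shifted. The causal
# mechanism X -> Y is the same in both contexts.
n = 2000
x_obs = rng.normal(size=n)                       # context 0: natural mechanism for X
x_int = rng.normal(loc=2.0, scale=0.5, size=n)   # context 1: shifted mechanism for X
y_obs = 1.5 * x_obs + rng.normal(size=n)
y_int = 1.5 * x_int + rng.normal(size=n)

# JCI-style pooling: concatenate the data sets and add a context
# indicator C as an extra, exogenous variable of the joint system.
pooled = pd.DataFrame({
    "C": np.repeat([0, 1], n),
    "X": np.concatenate([x_obs, x_int]),
    "Y": np.concatenate([y_obs, y_int]),
})

# A causal discovery algorithm with the background knowledge "C is
# exogenous" would now run on `pooled`. As a stand-in, check marginal
# dependence on C: X depends on C directly (its mechanism changed),
# while Y depends on C only through X.
C = pooled["C"].to_numpy()
X = pooled["X"].to_numpy()
Y = pooled["Y"].to_numpy()
slope, intercept = np.polyfit(X, Y, 1)
resid = Y - (slope * X + intercept)              # Y with X's influence removed

dep_CX = abs(np.corrcoef(C, X)[0, 1])            # clearly nonzero
dep_CY_given_X = abs(np.corrcoef(C, resid)[0, 1])  # near zero
print(f"corr(C, X)     = {dep_CX:.2f}")
print(f"corr(C, Y | X) = {dep_CY_given_X:.2f}")
```

The pattern "C dependent on X, C independent of Y given X" is what lets a JCI-style analysis recover the intervention target without being told it in advance.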
ac Losses in a Finite Z Stack Using an Anisotropic Homogeneous-Medium Approximation
A finite stack of thin superconducting tapes, all carrying a fixed current I,
can be approximated by an anisotropic superconducting bar with critical current
density Jc=Ic/2aD, where Ic is the critical current of each tape, 2a is the
tape width, and D is the tape-to-tape periodicity. The current density J must
obey the constraint \int J dx = I/D, where the tapes lie parallel to the x axis
and are stacked along the z axis. We suppose that Jc is independent of field
(Bean approximation) and look for a solution to the critical state for
arbitrary height 2b of the stack. For c<|x|<a we have J=Jc, and for |x|<c the
critical state requires that Bz=0. We show that this implies \partial
J/\partial x=0 in the central region. Setting c as a constant (independent of
z) results in field profiles remarkably close to the desired one (Bz=0 for
|x|<c) as long as the aspect ratio b/a is not too small. We evaluate various
criteria for choosing c, and we show that the calculated hysteretic losses
depend only weakly on how c is chosen. We argue that for small D/a the
anisotropic homogeneous-medium approximation gives a reasonably accurate
estimate of the ac losses in a finite Z stack. The results for a Z stack can be
used to calculate the transport losses in a pancake coil wound with
superconducting tape.
Comment: 21 pages, 17 figures, accepted by Supercond. Sci. Technol.
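With the notation of the abstract, the effective critical current density of the homogenized bar follows directly from Jc = Ic/2aD. The numerical values below are illustrative, and the closed-form estimate of the flux-front position c is a simplified slab-like Bean argument (assuming zero net current in the central region), not the paper's finite-stack calculation:

```python
# Anisotropic homogeneous-medium approximation for a stack of
# superconducting tapes (illustrative numbers, not from the paper).
Ic = 100.0    # critical current of each tape [A]
a  = 2.0e-3   # half tape width, i.e. tape width 2a = 4 mm [m]
D  = 0.3e-3   # tape-to-tape periodicity [m]

# Effective critical current density of the homogenized bar:
Jc = Ic / (2 * a * D)           # Jc = Ic / (2 a D)

# Each sheet of the bar must carry I/D per unit height. If the
# critical regions c < |x| < a carry J = Jc and the central region
# |x| < c carried no net transport current, the constraint
# \int J dx = I/D would give the slab-like Bean estimate
# c = a (1 - I/Ic).
I = 60.0                        # transport current per tape [A]
c = a * (1 - I / Ic)

print(f"Jc = {Jc:.3e} A/m^2")
print(f"c  = {c * 1e3:.2f} mm")
```

In the paper's critical state the central region instead satisfies Bz = 0 with ∂J/∂x = 0, so this estimate only indicates the order of magnitude of c.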
Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models
Shapley values underlie one of the most popular model-agnostic methods within
explainable artificial intelligence. These values are designed to attribute the
difference between a model's prediction and an average baseline to the
different features used as input to the model. Being based on solid
game-theoretic principles, Shapley values uniquely satisfy several desirable
properties, which is why they are increasingly used to explain the predictions
of possibly complex and highly non-linear machine learning models. Shapley
values are well calibrated to a user's intuition when features are independent,
but may lead to undesirable, counterintuitive explanations when the
independence assumption is violated.
In this paper, we propose a novel framework for computing Shapley values that
generalizes recent work that aims to circumvent the independence assumption. By
employing Pearl's do-calculus, we show how these 'causal' Shapley values can be
derived for general causal graphs without sacrificing any of their desirable
properties. Moreover, causal Shapley values enable us to separate the
contribution of direct and indirect effects. We provide a practical
implementation for computing causal Shapley values based on causal chain graphs
when only partial information is available and illustrate their utility on a
real-world example.
Comment: Accepted at the 34th Conference on Neural Information Processing Systems (NeurIPS 2020).
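As background, the Shapley attribution itself — averaging a feature's marginal contribution over all coalitions — can be computed exactly for small models. The linear model and the independence assumption below are illustrative only; the paper's contribution is precisely to replace these conditional-expectation value functions with interventional, do-calculus-based ones:

```python
import itertools
from math import factorial

# Illustrative linear model f(x) = 3 x1 + 2 x2 + x3 with independent,
# zero-mean features, so E[f(X) | X_S = x_S] is just the sum of the
# terms for features in S (not the paper's setting).
coefs = {"x1": 3.0, "x2": 2.0, "x3": 1.0}
x = {"x1": 1.0, "x2": -1.0, "x3": 2.0}   # instance to explain
features = list(coefs)
n = len(features)

def value(S):
    """v(S) = E[f(X) | X_S = x_S]; the baseline E[f(X)] is 0 here."""
    return sum(coefs[f] * x[f] for f in S)

def shapley(i):
    """Exact Shapley value of feature i by enumerating all coalitions."""
    others = [f for f in features if f != i]
    phi = 0.0
    for k in range(len(others) + 1):
        for S in itertools.combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += w * (value(S + (i,)) - value(S))
    return phi

phi = {f: shapley(f) for f in features}
print(phi)
# Efficiency property: attributions sum to f(x) - E[f(X)].
print(sum(phi.values()), value(features))
```

For a linear model with independent features the Shapley value of feature i reduces to coef_i * x_i, which the enumeration above reproduces; the causal Shapley values of the paper deviate from this once the causal graph induces dependence between features.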