Context dependent revocation in delegated XACML
The XACML standard defines an XML-based language for specifying access control policies and a related processing model. Recent work aims to add delegation to XACML in order to express the right to administer XACML policies within XACML itself. The delegation profile draft explains how to validate the right to issue a policy, but makes no provision for removing a policy. This paper proposes a revocation model for delegated XACML. A novel feature of this model is that whether a revocation is valid depends not only on who issued it, but also on the context in which an attempt to use the revoked policy is made.
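A minimal sketch of the idea, not XACML syntax and with all names hypothetical: whether a revoked policy may still be used is decided per request, from the request context, rather than once and for all at revocation time.

```python
# Hypothetical sketch of context-dependent revocation: a revocation takes
# effect only if it is valid in the context of the access request being
# evaluated (here: the revoker holds admin rights in that context).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Revocation:
    policy_id: str
    issuer: str

@dataclass
class PolicyStore:
    policies: dict                      # policy_id -> delegating issuer
    revocations: list = field(default_factory=list)

    def is_policy_usable(self, policy_id: str, context: dict) -> bool:
        """A revoked policy is unusable only if some revocation is valid
        in the *current* request context."""
        for rev in self.revocations:
            if rev.policy_id == policy_id and rev.issuer in context.get("admins", ()):
                return False
        return policy_id in self.policies

store = PolicyStore(policies={"p1": "alice"})
store.revocations.append(Revocation("p1", "bob"))

# bob holds admin rights in this context, so the revocation takes effect:
print(store.is_policy_usable("p1", {"admins": {"bob"}}))    # False
# in a context where bob lacks admin rights, the policy remains usable:
print(store.is_policy_usable("p1", {"admins": {"carol"}}))  # True
```

The same revocation is thus honored in one request context and ignored in another, which is the paper's central point.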
Magnetic non-contact friction from domain wall dynamics actuated by oscillatory mechanical motion
Magnetic friction is a form of non-contact friction arising from the
dissipation of energy in a magnet due to spin reorientation in a magnetic
field. In this paper we study magnetic friction in the context of
micromagnetics, using our recent implementation of smooth spring-driven motion
[Phys. Rev. E. 97, 053301 (2018)] to simulate ring-down measurements in two
setups where domain wall dynamics is induced by mechanical motion. These
include a single thin film with a domain wall in an external field and a setup
mimicking a magnetic cantilever tip and substrate, in which the two magnets
interact through dipolar interactions. We investigate how various micromagnetic
parameters influence the domain wall dynamics actuated by the oscillatory
spring-driven mechanical motion and the resulting damping coefficient. Our
simulations show that the magnitude of magnetic friction can be comparable to
other forms of non-contact friction. For oscillation frequencies lower than
those inducing excitations of the internal structure of the domain walls, the
damping coefficient is found to be independent of frequency. Hence, our results
obtained in the frequency range from 8 to 112 MHz are expected to be relevant
also for typical experimental setups operating in the 100 kHz range.
Comment: 19 pages, 8 figures
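The ring-down analysis can be sketched outside micromagnetics: fit the exponential envelope of a damped oscillation to recover a damping coefficient. This toy example uses a synthetic signal and is not the paper's simulation code.

```python
# Sketch: estimate a decay rate gamma from a ring-down signal
# x(t) ~ exp(-gamma*t) * cos(w*t) by a log-linear fit to its peaks.
import numpy as np

def ring_down_damping(t, x):
    """Fit the local maxima of |x| and return the envelope decay rate."""
    mag = np.abs(x)
    is_peak = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])  # local maxima
    idx = np.where(is_peak)[0] + 1
    slope, _ = np.polyfit(t[idx], np.log(mag[idx]), 1)        # log-linear fit
    return -slope

t = np.linspace(0, 10, 5000)
gamma_true = 0.35
x = np.exp(-gamma_true * t) * np.cos(2 * np.pi * 3 * t)
gamma_est = ring_down_damping(t, x)   # close to gamma_true
```

In a real ring-down measurement the same fit is applied to the measured oscillation amplitude, and the damping coefficient follows from the decay rate and the effective mass.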
Estimation of AR and ARMA models by stochastic complexity
In this paper the stochastic complexity criterion is applied to estimation of
the order in AR and ARMA models. The power of the criterion for short strings
is illustrated by simulations. It requires an integral of the square root of
Fisher information, which is done by Monte Carlo technique. The stochastic
complexity, which is the negative logarithm of the Normalized Maximum
Likelihood universal density function, is given. Also, exact asymptotic
formulas for the Fisher information matrix are derived.
Comment: Published at http://dx.doi.org/10.1214/074921706000000941 in the IMS Lecture Notes Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org)
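A rough sketch of the order-selection idea, keeping only the asymptotic terms of the stochastic complexity (the maximized likelihood plus a (k/2) log n penalty) and omitting the Fisher-information integral that the paper evaluates by Monte Carlo:

```python
# Simplified stochastic-complexity criterion for AR order selection.
# This is a BIC-like approximation, not the paper's full NML criterion.
import numpy as np

def ar_stochastic_complexity(x, k):
    """Least-squares AR(k) fit and the approximate two-term code length."""
    n = len(x) - k
    y = x[k:]
    if k:
        X = np.column_stack([x[k - 1 - j : len(x) - 1 - j] for j in range(k)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
    else:
        resid = y
    sigma2 = np.mean(resid ** 2)
    # -log maximized Gaussian likelihood (up to constants) + parameter penalty
    return 0.5 * n * np.log(sigma2) + 0.5 * (k + 1) * np.log(n)

rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, 2000):  # simulate AR(2): 0.6 x_{t-1} - 0.3 x_{t-2} + noise
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
best_order = min(range(5), key=lambda k: ar_stochastic_complexity(x, k))
# best_order typically recovers the true order 2 for this series
```

The full criterion replaces the (k/2) log n term with the exact NML normalizer, which is where the Monte Carlo integral of the square root of the Fisher information enters.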
Determining Principal Component Cardinality through the Principle of Minimum Description Length
PCA (Principal Component Analysis) and its variants are ubiquitous techniques for matrix dimension reduction and reduced-dimension latent-factor extraction. One significant challenge in using PCA is the choice of the number of principal components. The information-theoretic MDL (Minimum Description Length) principle gives objective compression-based criteria for model selection, but it is difficult to analytically apply its modern definition - NML (Normalized Maximum Likelihood) - to the problem of PCA. This work shows a general reduction of NML problems to lower-dimension problems. Applying this reduction, it bounds the NML of PCA in terms of the NML of linear regression, which is known.
Comment: LOD 201
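A crude two-part MDL criterion for the number of components (not the paper's NML bound, which goes through linear regression) can be sketched as follows; the formulas below are illustrative choices, not the paper's.

```python
# Crude two-part MDL sketch: trade reconstruction error against the cost
# of encoding the retained principal components.
import numpy as np

def mdl_num_components(X):
    """Pick k minimizing an illustrative two-part code length over ranks."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    eig = s ** 2 / n                        # covariance eigenvalues
    best_k, best_len = 0, np.inf
    for k in range(d):
        noise = eig[k:].mean()              # residual variance per dimension
        data_len = 0.5 * n * (d - k) * np.log(noise)   # cost of residuals
        param_len = 0.5 * k * (d + 1) * np.log(n)      # cost of k components
        if data_len + param_len < best_len:
            best_k, best_len = k, data_len + param_len
    return best_k

rng = np.random.default_rng(1)
Z = rng.standard_normal((400, 3))           # 3 latent factors
W = rng.standard_normal((3, 10))            # loadings into 10 dimensions
X = Z @ W + 0.05 * rng.standard_normal((400, 10))
k_hat = mdl_num_components(X)               # typically recovers 3
```

The NML refinement of this two-part code is exactly what is hard to compute for PCA, motivating the paper's reduction to linear regression.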
MDL Denoising Revisited
We refine and extend an earlier MDL denoising criterion for wavelet-based
denoising. We start by showing that the denoising problem can be reformulated
as a clustering problem, where the goal is to obtain separate clusters for
informative and non-informative wavelet coefficients, respectively. This
suggests two refinements, adding a code-length for the model index, and
extending the model in order to account for subband-dependent coefficient
distributions. A third refinement is derivation of soft thresholding inspired
by predictive universal coding with weighted mixtures. We propose a practical
method incorporating all three refinements, which is shown to achieve good
performance and robustness in denoising both artificial and natural signals.
Comment: Submitted to IEEE Transactions on Information Theory, June 200
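Soft thresholding itself is easy to illustrate. The sketch below uses a one-level Haar transform and a fixed threshold; the paper instead derives the shrinkage from weighted mixtures and chooses thresholds by MDL.

```python
# Soft thresholding of wavelet detail coefficients (one-level Haar).
import numpy as np

def soft_threshold(c, lam):
    """Shrink coefficients toward zero instead of keep-or-kill."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def haar_denoise(x, lam):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = soft_threshold(d, lam)             # shrink only the details
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)         # inverse Haar transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 4 * t)
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = haar_denoise(noisy, lam=0.5)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

The informative/non-informative clustering view in the abstract corresponds to which detail coefficients survive the shrinkage.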
The Minimum Description Length Principle and Model Selection in Spectropolarimetry
It is shown that the two-part Minimum Description Length Principle can be
used to discriminate among different models that can explain a given observed
dataset. The description length is chosen to be the sum of the lengths of the
message needed to encode the model plus the message needed to encode the data
when the model is applied to the dataset. It is verified that the proposed
principle can efficiently distinguish the model that correctly fits the
observations while avoiding over-fitting. The capabilities of this criterion
are shown in two simple problems for the analysis of observed
spectropolarimetric signals. The first is the de-noising of observations with
the aid of the PCA technique. The second is the selection of the optimal number
of parameters in LTE inversions. We propose this criterion as a quantitative approach for distinguishing the most plausible model among a set of proposed models. This quantity is very easy to implement as an additional output in existing inversion codes.
Comment: Accepted for publication in the Astrophysical Journal
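The two-part principle can be sketched on a toy polynomial-order problem, a simplified stand-in for the spectropolarimetric models in the paper: description length = cost of encoding the model parameters + cost of encoding the residuals, minimized over model order.

```python
# Two-part MDL on a toy model-selection problem (illustrative formulas).
import numpy as np

def two_part_dl(x, y, degree):
    """Parameter cost plus Gaussian residual cost, in bits."""
    n = len(x)
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    model_len = 0.5 * (degree + 1) * np.log2(n)         # bits for parameters
    data_len = 0.5 * n * np.log2(np.mean(resid ** 2))   # bits for residuals
    return model_len + data_len

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 1000)
y = 1.0 - 2.0 * x + 0.5 * x ** 2 + 0.1 * rng.standard_normal(1000)
best_deg = min(range(1, 6), key=lambda d: two_part_dl(x, y, d))
# best_deg typically recovers the true degree 2: higher orders shrink the
# residual term less than they inflate the parameter term
```

Over-fitting is penalized automatically: each extra parameter must pay for itself in shorter residual code length.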
Complexity Through Nonextensivity
The problem of defining and studying complexity of a time series has
interested people for years. In the context of dynamical systems, Grassberger
has suggested that a slow approach of the entropy to its extensive asymptotic
limit is a sign of complexity. We investigate this idea further by information
theoretic and statistical mechanics techniques and show that these arguments
can be made precise, and that they generalize many previous approaches to
complexity, in particular unifying ideas from the physics literature with ideas
from learning and coding theory; there are even connections of this statistical
approach to algorithmic or Kolmogorov complexity. Moreover, a set of simple
axioms similar to those used by Shannon in his development of information
theory allows us to prove that the divergent part of the subextensive component
of the entropy is a unique complexity measure. We classify time series by their
complexities and demonstrate that beyond the 'logarithmic' complexity classes widely anticipated in the literature there are qualitatively more complex, 'power-law' classes which deserve more attention.
Comment: summarizes and extends physics/000707
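The subextensive component can be probed numerically. For a simple Markov chain the block entropy grows as S(N) = S(1) + (N-1)h, so the subextensive part S(N) - Nh saturates at a constant and the divergent part is zero, placing such processes in the simplest class. A sketch:

```python
# Estimate the subextensive part of the block entropy for a two-state
# Markov chain; it should be roughly constant in the block length N.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
p, steps = 0.2, 100000                       # flip probability, chain length
seq = np.cumsum(rng.random(steps) < p) % 2   # two-state Markov chain

def block_entropy(seq, N):
    """Plug-in estimate of the entropy of length-N blocks, in bits."""
    counts = Counter(tuple(seq[i:i + N]) for i in range(len(seq) - N + 1))
    probs = np.array(list(counts.values())) / sum(counts.values())
    return -np.sum(probs * np.log2(probs))

h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))    # entropy rate, bits/symbol
sub = [block_entropy(seq, N) - N * h for N in (2, 4, 6)]
# sub stays near the constant S(1) - h = 1 - h for every N
```

Complex processes in the paper's sense are those where this subextensive part keeps growing, logarithmically or as a power law, instead of saturating.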
Statistical inference optimized with respect to the observed sample for single or multiple comparisons
The normalized maximum likelihood (NML) is a recent penalized likelihood that
has properties that justify defining the amount of discrimination information
(DI) in the data supporting an alternative hypothesis over a null hypothesis as
the logarithm of an NML ratio, namely, the alternative hypothesis NML divided
by the null hypothesis NML. The resulting DI, like the Bayes factor but unlike
the p-value, measures the strength of evidence for an alternative hypothesis
over a null hypothesis such that the probability of misleading evidence
vanishes asymptotically under weak regularity conditions and such that evidence
can support a simple null hypothesis. Unlike the Bayes factor, the DI does not
require a prior distribution and is minimax optimal in a sense that does not
involve averaging over outcomes that did not occur. Replacing a (possibly
pseudo-) likelihood function with its weighted counterpart extends the scope of
the DI to models for which the unweighted NML is undefined. The likelihood
weights leverage side information, either in data associated with comparisons
other than the comparison at hand or in the parameter value of a simple null
hypothesis. Two case studies, one involving multiple populations and the other
involving multiple biological features, indicate that the DI is robust to the
type of side information used when that information is assigned the weight of a
single observation. Such robustness suggests that very little adjustment for
multiple comparisons is warranted if the sample size is at least moderate.
Comment: Typo in equation (7) of v2 corrected in equation (6) of v3; clarity improved
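A toy version of the DI (illustrative only, with the alternative's mean restricted to [-a, a] so that the NML normalizer is finite): for the sufficient statistic x̄ ~ N(θ, σ²/n), the maximized likelihood is constant in x̄, so the alternative's NML is the uniform density 1/(2a).

```python
# DI = log(NML of alternative / NML of simple null) for a Gaussian mean,
# with the alternative mean restricted to [-a, a] (toy construction).
import numpy as np

def di_gaussian_mean(xbar, n, sigma=1.0, a=3.0):
    se = sigma / np.sqrt(n)
    nml_alt = 1.0 / (2 * a)                 # uniform NML under the restriction
    nml_null = np.exp(-0.5 * (xbar / se) ** 2) / (se * np.sqrt(2 * np.pi))
    return np.log(nml_alt / nml_null)       # DI in nats

# positive DI: evidence against theta = 0 when xbar is many SEs from 0
print(di_gaussian_mean(0.5, n=100) > 0)     # True: xbar is 5 SEs away
# negative DI: the data support the simple null
print(di_gaussian_mean(0.0, n=100) < 0)     # True
```

Unlike a p-value, the DI can come out negative, i.e. it can positively support the simple null hypothesis, as the abstract emphasizes.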