Markov processes follow from the principle of Maximum Caliber
Markov models are widely used to describe processes of stochastic dynamics.
Here, we show that Markov models are a natural consequence of the dynamical
principle of Maximum Caliber. First, we show that when there are different
possible dynamical trajectories in a time-homogeneous process, then the only
type of process that maximizes the path entropy, for any given singlet
statistics, is a sequence of identical, independently distributed (i.i.d.)
random variables, which is the simplest Markov process. If the data are in the
form of sequential pairwise statistics, then maximizing the caliber dictates
that the process is Markovian with a uniform initial distribution. Furthermore,
if an initial non-uniform dynamical distribution is known, or multiple
trajectories are conditioned on an initial state, then the Markov process is
still the only one that maximizes the caliber. Second, given a model, MaxCal
can be used to compute the parameters of that model. We show that this
procedure is equivalent to the maximum-likelihood method of inference in the
theory of statistics.
Comment: 4 pages
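The stated equivalence between MaxCal inference and maximum likelihood is easy to see for transition probabilities: given pairwise statistics, both prescriptions reduce to normalized pair counts. A minimal sketch (the function name and toy trajectory are illustrative, not from the paper):

```python
from collections import Counter

def mle_transition_matrix(traj, states):
    """Maximum-likelihood transition probabilities from one trajectory.

    Maximizing the likelihood (equivalently, the caliber subject to
    pairwise statistics) gives p(j|i) = N_ij / N_i, the normalized
    observed pair counts.
    """
    pairs = Counter(zip(traj, traj[1:]))            # N_ij: i -> j transitions
    row = {i: sum(pairs[(i, k)] for k in states) for i in states}
    return {(i, j): pairs[(i, j)] / row[i] if row[i] else 0.0
            for i in states for j in states}

P = mle_transition_matrix("AABABBBAAB", "AB")       # e.g. P[("A", "B")] == 0.6
```

Three of the five steps leaving A land in B, so p(B|A) = 3/5, exactly the count ratio a caliber maximization with pairwise constraints would produce.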
A New Approach to Time Domain Classification of Broadband Noise in Gravitational Wave Data
Broadband noise in gravitational wave (GW) detectors, also known as triggers,
can often be a deterrent to the efficiency with which astrophysical search
pipelines detect sources. It is important to understand their instrumental or
environmental origin so that they could be eliminated or accounted for in the
data. Since the number of triggers is large, data mining approaches such as
clustering and classification are useful tools for this task. Classification of
triggers based on a handful of discrete properties has been done in the past. A
rich information content is available in the waveform or 'shape' of the
triggers that has had a rather restricted exploration so far. This paper
presents a new way to classify triggers deriving information from both trigger
waveforms as well as their discrete physical properties using a sequential
combination of the Longest Common Sub-Sequence (LCSS) and LCSS coupled with
Fast Time Series Evaluation (FTSE) for waveform classification and the
multidimensional hierarchical classification (MHC) analysis for the grouping
based on physical properties. A generalized k-means algorithm is used with the
LCSS (and LCSS+FTSE) for clustering the triggers, with a validity measure to
determine the correct number of clusters in the absence of any prior knowledge. The
results have been demonstrated by simulations and by application to a segment
of real LIGO data from the sixth science run.
Comment: 16 pages, 16 figures
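The LCSS similarity at the core of the clustering stage can be sketched with the standard dynamic program (a generic illustration with a tolerance parameter eps; the paper's FTSE acceleration and MHC stage are not reproduced here):

```python
def lcss(x, y, eps):
    """Longest Common Sub-Sequence length for real-valued series.

    Two samples match when they differ by less than eps; the recursion
    is the classic LCS dynamic program with that relaxed equality.
    """
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(x[i - 1] - y[j - 1]) < eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def lcss_distance(x, y, eps):
    """Normalized dissimilarity in [0, 1]; 0 means identical up to eps."""
    return 1.0 - lcss(x, y, eps) / min(len(x), len(y))
```

A generalized k-means can then cluster trigger waveforms using `lcss_distance` as the dissimilarity measure.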
'Social Darwinism' and the Question of Assistance
Against the backdrop of 19th-century misery and pauperism, the critical purpose of this essay is the influence of Darwinian theory on the social question. After a brief framing of those reflections within the problem of poverty, the emphasis is placed on the thought of Herbert Spencer, who advocated what he considered the positive aspects of poverty as an instrument for the selection of the less capable. What is at issue, for the author of this article, is to demonstrate how these same Spencerian arguments emerged in defence of a critical position toward any kind of assistance intervention.
Statistical Consequences of Devroye Inequality for Processes. Applications to a Class of Non-Uniformly Hyperbolic Dynamical Systems
In this paper, we apply Devroye inequality to study various statistical
estimators and fluctuations of observables for processes. Most of these
observables are suggested by dynamical systems. These applications concern the
covariance function, the integrated periodogram, the correlation dimension,
the kernel density estimator, the speed of convergence of empirical measure,
the shadowing property and the almost-sure central limit theorem. We proved in
\cite{CCS} that Devroye inequality holds for a class of non-uniformly
hyperbolic dynamical systems introduced in \cite{young}. In the second appendix
we prove that, if the decay of correlations holds with a common rate for all
pairs of functions, then it holds uniformly in the function spaces. In the last
appendix we prove that for the subclass of one-dimensional systems studied in
\cite{young} the density of the absolutely continuous invariant measure belongs
to a Besov space.
Comment: 33 pages; companion of the paper math.DS/0412166; corrected version;
to appear in Nonlinearity
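As a concrete instance of the first application, the empirical covariance function of a stationary process is the textbook estimator below (a generic sketch, not tied to the paper's bounds); Devroye-type concentration inequalities then control the fluctuations of such estimators around their expectations.

```python
def autocovariance(x, lag):
    """Empirical autocovariance C(k) = (1/n) * sum (x_t - m)(x_{t+k} - m).

    m is the sample mean; the 1/n normalization (rather than 1/(n-k))
    is the usual biased estimator.
    """
    n = len(x)
    m = sum(x) / n
    return sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag)) / n
```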
Precursors of extreme increments
We investigate precursors and predictability of extreme increments in a time
series. The events we are focusing on consist of large increments between
successive time steps. We are especially interested in understanding how the
quality of the predictions depends on the strategy to choose precursors, on the
size of the event and on the correlation strength. We study the prediction of
extreme increments analytically in an AR(1) process, and numerically in wind
speed recordings and long-range correlated ARMA data. We evaluate the success
of predictions via receiver operating characteristic (ROC) curves. Furthermore,
we observe an increase of the quality of predictions with increasing event size
and with decreasing correlation in all examples. Both effects can be understood
by using the likelihood ratio as a summary index for smooth ROC curves.
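The ROC evaluation of increment precursors can be sketched as follows (hypothetical precursor rule and function names; the paper's likelihood-ratio analysis is not reproduced): alarm when the current value lies below a threshold, since for a mean-reverting AR(1) large upward increments tend to follow low values.

```python
import random

def ar1(n, a, seed=0):
    """Simulate x_{t+1} = a * x_t + Gaussian noise."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(a * x[-1] + rng.gauss(0.0, 1.0))
    return x

def roc_points(x, eta, thresholds):
    """(false-alarm rate, hit rate) pairs for predicting x_{t+1}-x_t >= eta.

    Precursor strategy: raise an alarm at time t when x_t <= delta.
    Sweeping delta over `thresholds` traces out the ROC curve.
    """
    obs = [(x[t], x[t + 1] - x[t] >= eta) for t in range(len(x) - 1)]
    n_event = sum(1 for _, e in obs if e)
    n_quiet = len(obs) - n_event
    pts = []
    for d in thresholds:
        hits = sum(1 for v, e in obs if e and v <= d)
        alarms = sum(1 for v, e in obs if not e and v <= d)
        pts.append((alarms / n_quiet, hits / n_event))
    return pts
```

The area under the resulting curve summarizes predictability; in the paper's setting it grows with the event size eta and shrinks with the correlation strength a.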
Unfolding dynamics of proteins under applied force
Understanding the mechanisms of protein folding is a major challenge that is being addressed effectively by collaboration between researchers in the physical and life sciences. Recently, it has become possible to mechanically unfold proteins by pulling on their two termini using local force probes such as the atomic force microscope. Here, we present data from experiments in which synthetic protein polymers designed to mimic naturally occurring polyproteins have been mechanically unfolded. For many years protein folding dynamics have been studied using chemical denaturation, and we therefore first discuss our mechanical unfolding data in the context of such experiments and show that the two unfolding mechanisms are not the same, at least for the proteins studied here. We also report unexpected observations that indicate a history effect in the observed unfolding forces of polymeric proteins and explain this in terms of the changing number of domains remaining to unfold and the increasing compliance of the lengthening unstructured polypeptide chain produced each time a domain unfolds.
Emission-aware Energy Storage Scheduling for a Greener Grid
Reducing our reliance on carbon-intensive energy sources is vital for
reducing the carbon footprint of the electric grid. Although the grid is seeing
increasing deployments of clean, renewable sources of energy, a significant
portion of the grid demand is still met using traditional carbon-intensive
energy sources. In this paper, we study the problem of using energy storage
deployed in the grid to reduce the grid's carbon emissions. While energy
storage has previously been used for grid optimizations such as peak shaving
and smoothing intermittent sources, our insight is to use distributed storage
to enable utilities to reduce their reliance on their least efficient and most
carbon-intensive power plants and thereby reduce their overall emission
footprint. We formulate the problem of emission-aware scheduling of distributed
energy storage as an optimization problem, and use a robust optimization
approach that is well-suited for handling the uncertainty in load predictions,
especially in the presence of intermittent renewables such as solar and wind.
We evaluate our approach using a state-of-the-art neural network load
forecasting technique and real load traces from a distribution grid with 1,341
homes. Our results show a reduction of >0.5 million kg in annual carbon
emissions -- equivalent to a drop of 23.3% in our electric grid emissions.
Comment: 11 pages, 7 figures. This paper will appear in the Proceedings of the
ACM International Conference on Future Energy Systems (e-Energy 20), June
2020, Australia
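The emission-aware scheduling idea can be illustrated with a deliberately simplified greedy heuristic (hypothetical function and parameters; the paper instead formulates a robust optimization, and unlike this sketch also respects state-of-charge ordering over time): charge during the cleanest hours and discharge during the dirtiest ones.

```python
def schedule_storage(emission, power, capacity, eff=1.0):
    """Greedy emission-aware battery schedule over one horizon.

    emission[t] is the marginal emission factor (kg CO2/kWh) in hour t,
    power the hourly charge/discharge limit (kWh), capacity the battery
    size (kWh), eff the round-trip efficiency. NOTE: this toy ignores
    the requirement that energy be charged before it is discharged.
    """
    hours = sorted(range(len(emission)), key=lambda t: emission[t])
    n = int(capacity // power)              # hours of charging that fit
    plan = [0.0] * len(emission)
    for t in hours[:n]:
        plan[t] = power                     # cleanest hours: grid -> battery
    for t in hours[-n:]:
        plan[t] = -power * eff              # dirtiest hours: battery -> grid
    saved = sum(-plan[t] * emission[t] for t in range(len(emission)))
    return plan, saved
```

For an emission profile [0.2, 0.9, 0.3, 0.8] with a 1 kWh/h, 2 kWh battery, the plan charges in hours 0 and 2 and discharges in hours 1 and 3, saving (0.9 + 0.8) - (0.2 + 0.3) = 1.2 kg per cycle.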
On the entropy production of time series with unidirectional linearity
There are non-Gaussian time series that admit a causal linear autoregressive
moving average (ARMA) model when regressing the future on the past, but not
when regressing the past on the future. The reason is that, in the latter case,
the regression residuals are only uncorrelated but not statistically
independent of the future. In previous work, we have experimentally verified
that many empirical time series indeed show such a time inversion asymmetry.
For various physical systems, it is known that time-inversion asymmetries are
linked to the thermodynamic entropy production in non-equilibrium states. Here
we show that such a link also exists for the above unidirectional linearity.
We study the dynamical evolution of a physical toy system with linear
coupling to an infinite environment and show that the linearity of the dynamics
is inherited by the forward-time conditional probabilities, but not by the
backward-time conditionals. The reason for this asymmetry between past and
future is that the environment permanently provides particles that are in a
product state before they interact with the system, but show statistical
dependencies afterwards. From a coarse-grained perspective, the interaction
thus generates entropy. We quantitatively relate the strength of the
non-linearity of the backward conditionals to the minimal amount of entropy
generation.
Comment: 16 pages
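The forward/backward asymmetry can be reproduced numerically (a self-contained sketch with a non-Gaussian AR(1); function and variable names are illustrative): fit a linear autoregression in both time directions and check whether the squared residuals are correlated with the squared regressor. Forward in time the residuals are (approximately) independent of the regressor; backward they are merely uncorrelated.

```python
import random

def fit_ar1(x):
    """Least-squares regression of x[t+1] on x[t]; returns slope, residuals."""
    xs, ys = x[:-1], x[1:]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) \
        / sum((u - mx) ** 2 for u in xs)
    resid = [(v - my) - b * (u - mx) for u, v in zip(xs, ys)]
    return b, resid

def corr(u, v):
    """Pearson correlation coefficient."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u)
           * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

def sq_dependence(x):
    """|corr| between squared residuals and squared regressor.

    Near zero when the residuals are independent of the regressor;
    clearly nonzero when they are only uncorrelated.
    """
    _, r = fit_ar1(x)
    return abs(corr([e * e for e in r], [u * u for u in x[:-1]]))

# Non-Gaussian AR(1): linear forward in time, but not backward.
rng = random.Random(1)
x = [0.0]
for _ in range(20000):
    x.append(0.8 * x[-1] + rng.uniform(-1.0, 1.0))

forward = sq_dependence(x)         # ~0: residuals ~ independent
backward = sq_dependence(x[::-1])  # clearly > 0: only uncorrelated
```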
Random walks - a sequential approach
In this paper sequential monitoring schemes to detect nonparametric drifts
are studied for the random walk case. The procedure is based on a kernel
smoother. As a by-product we obtain the asymptotics of the Nadaraya-Watson
estimator and its associated sequential partial sum process under
non-standard sampling. The asymptotic behavior differs substantially from the
stationary situation, if there is a unit root (random walk component). To
obtain meaningful asymptotic results we consider local nonparametric
alternatives for the drift component. It turns out that the rate of convergence
at which the drift vanishes determines whether the asymptotic properties of the
monitoring procedure are determined by a deterministic or random function.
Further, we provide a theoretical result about the optimal kernel for a given
alternative.
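The kernel smoother underlying the monitoring scheme is the Nadaraya-Watson estimator, which with a Gaussian kernel reads as follows (a generic sketch, not the paper's sequential procedure):

```python
import math

def nadaraya_watson(xs, ys, x0, h):
    """Kernel regression estimate at x0 with bandwidth h.

    m(x0) = sum_i K_h(x0 - x_i) y_i / sum_i K_h(x0 - x_i),
    here with the Gaussian kernel K_h(u) = exp(-0.5 * (u/h)^2).
    """
    w = [math.exp(-0.5 * ((x0 - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
```

A sequential scheme would update this estimate as observations arrive and flag a drift when it leaves a prescribed band.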
Of `Cocktail Parties' and Exoplanets
The characterisation of ever smaller and fainter extrasolar planets requires
an intricate understanding of one's data and the analysis techniques used.
Correcting the raw data at the 10^-4 level of accuracy in flux is one of the
central challenges. This can be difficult for instruments that do not feature a
calibration plan for such high precision measurements. Here, it is not always
obvious how to de-correlate the data using auxiliary information of the
instrument and it becomes paramount to know how well one can disentangle
instrument systematics from one's data, given nothing but the data itself. We
propose a non-parametric machine learning algorithm, based on the concept of
independent component analysis, to de-convolve the systematic noise and all
non-Gaussian signals from the desired astrophysical signal. Such a `blind'
signal de-mixing is commonly known as the `Cocktail Party problem' in
signal-processing. Given multiple simultaneous observations of the same
exoplanetary eclipse, as in the case of spectrophotometry, we show that we can
often disentangle systematic noise from the original light curve signal without
the use of any complementary information of the instrument. In this paper, we
explore these signal extraction techniques using simulated data and two data
sets observed with the Hubble-NICMOS instrument. Another important application
is the de-correlation of the exoplanetary signal from time-correlated stellar
variability. Using data obtained by the Kepler mission we show that the desired
signal can be de-convolved from the stellar noise using a single time series
spanning several eclipse events. Such non-parametric techniques can provide
important confirmations of the existent parametric corrections reported in the
literature, and their associated results. Additionally they can substantially
improve the precision of exoplanetary light curve analysis in the future.
Comment: ApJ accepted
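The blind de-mixing idea can be illustrated in the two-channel case (a minimal kurtosis-based sketch using numpy; the paper proposes a more general non-parametric ICA algorithm): whiten the observed channels, then rotate to maximize non-Gaussianity of the outputs.

```python
import numpy as np

def demix_two(x1, x2, n_angles=180):
    """Blind two-channel source separation: a minimal ICA-style sketch.

    Whiten the observed channels, then grid-search the rotation angle
    that maximizes the absolute excess kurtosis of both outputs.
    Illustrative only; production pipelines use FastICA or similar.
    """
    X = np.vstack([x1, x2]).astype(float)
    X -= X.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    Z = vecs @ np.diag(vals ** -0.5) @ vecs.T @ X   # whitened channels

    def kurt(s):
        return abs((s ** 4).mean() / (s ** 2).mean() ** 2 - 3.0)

    def score(theta):
        c, s = np.cos(theta), np.sin(theta)
        Y = np.array([[c, -s], [s, c]]) @ Z
        return kurt(Y[0]) + kurt(Y[1])

    best = max(np.linspace(0.0, np.pi / 2, n_angles), key=score)
    c, s = np.cos(best), np.sin(best)
    return np.array([[c, -s], [s, c]]) @ Z
```

Given two simultaneous observations of the same eclipse contaminated by a shared systematic, the returned rows recover the underlying components up to permutation, sign, and scale.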