Dynamical system analysis and forecasting of deformation produced by an earthquake fault
We present a method of constructing low-dimensional nonlinear models
describing the main dynamical features of a discrete 2D cellular fault zone,
with many degrees of freedom, embedded in a 3D elastic solid. A given fault
system is characterized by a set of parameters that describe the dynamics,
rheology, property disorder, and fault geometry. Depending on the location in
the system parameter space we show that the coarse dynamics of the fault can be
confined to an attractor whose dimension is significantly smaller than the
space in which the dynamics takes place. Our strategy of system reduction is to
search for a few coherent structures that dominate the dynamics and to capture
the interaction between these coherent structures. The identification of the
basic interacting structures is obtained by applying the Proper Orthogonal
Decomposition (POD) to the surface deformations fields that accompany
strike-slip faulting accumulated over equal time intervals. We use a
feed-forward artificial neural network (ANN) architecture for the
identification of the system dynamics projected onto the subspace (model space)
spanned by the most energetic coherent structures. The ANN is trained using a
standard back-propagation algorithm to predict (map) the values of the observed
model state at a future time given the observed model state at the present
time. This ANN provides an approximate, large scale, dynamical model for the
fault.
Comment: 30 pages, 12 figures
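The reduction strategy described in this abstract can be sketched numerically. The following toy example is an assumption-laden illustration, not the authors' fault model: the snapshots are synthetic, the 99% energy cutoff is an arbitrary choice, and a least-squares linear map stands in for the back-propagation-trained feed-forward ANN.

```python
import numpy as np

# Synthetic "surface deformation" snapshots (n_time x n_points),
# built from two hidden coherent structures. Purely illustrative.
t = np.linspace(0, 10, 200)
x = np.linspace(0, 1, 50)
snapshots = (np.outer(np.sin(t), np.sin(np.pi * x))
             + 0.3 * np.outer(np.cos(2 * t), np.sin(2 * np.pi * x)))

# Proper Orthogonal Decomposition = SVD of the snapshot matrix;
# keep the most energetic modes (here: 99% of the energy).
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
n_modes = int(np.searchsorted(np.cumsum(energy), 0.99) + 1)

# Project the dynamics onto the subspace (model space) spanned
# by those modes: modal coordinates a(t).
a = snapshots @ Vt[:n_modes].T

# One-step predictor a(t) -> a(t+1). A least-squares linear map is
# used here as a stand-in for the feed-forward ANN in the paper.
A, *_ = np.linalg.lstsq(a[:-1], a[1:], rcond=None)
pred = a[:-1] @ A
err = np.linalg.norm(pred - a[1:]) / np.linalg.norm(a[1:])
print(n_modes, err)
```

With two coherent structures in the data, the SVD recovers a two-dimensional model space, and even the crude linear surrogate predicts the next modal state with small relative error.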
Investigating Cepheid ℓ Carinae's Cycle-to-cycle Variations via Contemporaneous Velocimetry and Interferometry
Baade-Wesselink-type (BW) techniques enable geometric distance measurements
of Cepheid variable stars in the Galaxy and the Magellanic Clouds. The leading
uncertainties involved concern projection factors required to translate
observed radial velocities (RVs) to pulsational velocities and recently
discovered modulated variability. We carried out an unprecedented observational
campaign involving long-baseline interferometry (VLTI/PIONIER) and spectroscopy
(Euler/Coralie) to search for modulated variability in the long-period (P ≈
35.5 d) Cepheid ℓ Carinae. We determine highly precise angular diameters
from squared visibilities and investigate possible differences between two
consecutive maximal diameters, Δθ_max. We characterize the
modulated variability along the line-of-sight using 360 high-precision RVs.
Here we report tentative evidence for modulated angular variability and confirm
cycle-to-cycle differences of ℓ Carinae's RV variability. Two successive
maxima yield Δθ_max = 13.1 ± 0.7 (stat.) μas for
uniform-disk models and 22.5 ± 1.4 (stat.) μas (4% of the total angular
variation) for limb-darkened models. By comparing new RVs with 2014 RVs we show
modulation to vary in strength. Barring confirmation, our results suggest the
optical continuum (traced by interferometry) to be differently affected by
modulation than gas motions (traced by spectroscopy). This implies a previously
unknown time-dependence of projection factors, which can vary by 5% between
consecutive cycles of expansion and contraction. Additional interferometric
data are required to confirm modulated angular diameter variations. By
understanding the origin of modulated variability and monitoring its long-term
behavior, we aim to improve the accuracy of BW distances and further the
understanding of stellar pulsations.
Comment: Accepted for publication in MNRAS. 19 pages, 13 figures, 10 tables
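The Baade-Wesselink idea behind this abstract can be illustrated with a short numerical sketch: a projection factor p converts observed radial velocities into pulsation velocities, and comparing the resulting linear radius variation with the angular diameter variation yields a geometric distance. All numbers below are synthetic assumptions (p = 1.27, a 500 pc distance, a sinusoidal RV curve), not measurements of ℓ Carinae.

```python
import numpy as np

p = 1.27            # assumed projection factor
d_true_pc = 500.0   # synthetic distance used to generate the data

t = np.linspace(0.0, 35.5, 400)            # days, one ~35.5 d cycle
rv = -15.0 * np.sin(2 * np.pi * t / 35.5)  # km/s, zero-mean pulsation RV

# Linear radius variation: dR/dt = -p * RV (crude cumulative integration)
km_per_day = 86400.0
dR_km = -p * np.cumsum(rv) * (t[1] - t[0]) * km_per_day

# Angular *diameter* variation consistent with d_true (small-angle limit)
pc_km = 3.0857e13
theta_var_rad = 2 * dR_km / (d_true_pc * pc_km)

# Recover the distance by regressing radius variation on angular variation
d_est_pc = 2 * np.sum(dR_km * theta_var_rad) / np.sum(theta_var_rad**2) / pc_km
print(round(d_est_pc))  # recovers the synthetic distance, 500
```

A cycle-to-cycle modulation that affects the interferometric diameters and the spectroscopic RVs differently, as reported above, would appear in this picture as a time-dependent p, directly biasing d_est_pc.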
Exploratory Analysis of Functional Data via Clustering and Optimal Segmentation
We propose in this paper an exploratory analysis algorithm for functional
data. The method partitions a set of functions into clusters and represents
each cluster by a simple prototype (e.g., piecewise constant). The total number
of segments in the prototypes is chosen by the user and optimally
distributed among the clusters via two dynamic programming algorithms. The
practical relevance of the method is shown on two real-world datasets.
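The dynamic-programming step underlying such optimal segmentation can be sketched as follows. This is a simplified, assumed illustration: it optimally places K constant segments on a single series by minimizing the within-segment sum of squared errors, whereas the paper additionally distributes a segment budget across cluster prototypes.

```python
import numpy as np

def optimal_segmentation(y, K):
    """Optimal piecewise-constant fit of y with K segments (min SSE)."""
    n = len(y)
    pre = np.concatenate([[0.0], np.cumsum(y)])
    pre2 = np.concatenate([[0.0], np.cumsum(np.asarray(y, float) ** 2)])

    def sse(i, j):  # squared error of one constant on y[i..j] inclusive
        m = j - i + 1
        s = pre[j + 1] - pre[i]
        return pre2[j + 1] - pre2[i] - s * s / m

    INF = float("inf")
    cost = [[INF] * (K + 1) for _ in range(n + 1)]
    cut = [[0] * (K + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for k in range(1, min(K, i) + 1):
            for j in range(k - 1, i):      # last segment is y[j..i-1]
                c = cost[j][k - 1] + sse(j, i - 1)
                if c < cost[i][k]:
                    cost[i][k], cut[i][k] = c, j
    bps, i, k = [], n, K                   # recover the breakpoints
    while k > 0:
        j = cut[i][k]
        bps.append(j)
        i, k = j, k - 1
    return cost[n][K], sorted(bps)[1:]     # drop the leading 0

err, breaks = optimal_segmentation([0, 0, 0, 5, 5, 5, 9, 9], 3)
print(err, breaks)  # three exact plateaus: error 0.0, breaks [3, 6]
```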
An Optimal Linear Time Algorithm for Quasi-Monotonic Segmentation
Monotonicity is a simple yet significant qualitative characteristic. We
consider the problem of segmenting a sequence in up to K segments. We want
segments to be as monotonic as possible and to alternate signs. We propose a
quality metric for this problem using the l_inf norm, and we present an optimal
linear-time algorithm based on a novel formalism. Moreover, given a
precomputation in time O(n log n) consisting of a labeling of all extrema, we
compute any optimal segmentation in constant time. We compare experimentally
its performance to two piecewise linear segmentation heuristics (top-down and
bottom-up). We show that our algorithm is faster and more accurate.
Applications include pattern recognition and qualitative modeling.
Comment: This is the extended version of our ICDM'05 paper (arXiv:cs/0702142)
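One building block of an l_inf quality metric for quasi-monotonic segments can be sketched directly. This is a hedged fragment, not the paper's full algorithm (which labels extrema and alternates segment signs): for a single candidate "increasing" segment, the best achievable l_inf error of any nondecreasing fit equals half the largest drop, computable in one linear pass.

```python
def linf_increasing_error(y):
    """Half the largest drop max_{i<j} (y[i] - y[j]) / 2, in O(n)."""
    run_max = float("-inf")
    worst = 0.0
    for v in y:
        run_max = max(run_max, v)       # highest value seen so far
        worst = max(worst, run_max - v) # deepest drop below that high
    return worst / 2.0

def linf_decreasing_error(y):
    """Same quality for a candidate decreasing segment."""
    return linf_increasing_error([-v for v in y])

print(linf_increasing_error([1, 2, 5, 3, 4, 6]))  # largest drop 5 -> 3, so 1.0
```

A perfectly monotonic segment scores 0, so smaller values mean "more monotonic", matching the qualitative goal described in the abstract.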
A Model-Based Frequency Constraint for Mining Associations from Transaction Data
Mining frequent itemsets is a popular method for finding associated items in
databases. For this method, support, the co-occurrence frequency of the items
which form an association, is used as the primary indicator of the
association's significance. A single user-specified support threshold is used
to decide whether associations should be further investigated. Support has some
known problems with rare items, favors shorter itemsets, and sometimes produces
misleading associations.
In this paper we develop a novel model-based frequency constraint as an
alternative to a single, user-specified minimum support. The constraint
utilizes knowledge of the process generating transaction data by applying a
simple stochastic mixture model (the NB model) which allows for transaction
data's typically highly skewed item frequency distribution. A user-specified
precision threshold is used together with the model to find local frequency
thresholds for groups of itemsets. Based on the constraint we develop the
notion of NB-frequent itemsets and adapt a mining algorithm to find all
NB-frequent itemsets in a database. In experiments with publicly available
transaction databases we show that the new constraint provides improvements
over a single minimum support threshold and that the precision threshold is
more robust and easier for the user to set and interpret.
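The core idea of a model-based frequency constraint can be sketched in a heavily simplified form: fit a negative binomial (NB) model to the skewed item frequency distribution by the method of moments, then derive a count threshold from the model's upper tail. The tail level, the synthetic data, and the single global threshold are all assumptions here; the paper's NB-frequent machinery derives local thresholds for groups of itemsets from a precision parameter.

```python
import numpy as np

def nb_fit_moments(counts):
    """Method-of-moments NB fit; requires overdispersion (var > mean)."""
    m, v = np.mean(counts), np.var(counts)
    r = m * m / (v - m)
    p = m / v
    return r, p

def nb_tail_threshold(r, p, alpha):
    """Smallest c with P(X >= c) <= alpha, via the NB pmf recurrence."""
    pk = p ** r                  # P(X = 0)
    tail, k = 1.0, 0             # tail = P(X >= k)
    while tail - pk > alpha:
        tail -= pk               # now tail = P(X >= k + 1)
        pk *= (k + r) / (k + 1) * (1 - p)
        k += 1
    return k + 1

rng = np.random.default_rng(1)
counts = rng.negative_binomial(2.0, 0.1, size=5000)  # skewed item counts
r, p = nb_fit_moments(counts)
thr = nb_tail_threshold(r, p, 0.01)
print(r, p, thr)
```

Counts above `thr` are then surprisingly frequent under the fitted noise model; a fixed minimum support, by contrast, ignores the frequency distribution entirely.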
Measuring information-transfer delays
In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener's principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
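The delay-scanning idea can be sketched on binary data. This is a minimal assumed illustration, not the authors' estimator: we evaluate a delayed transfer entropy TE_u(X → Y) = I(Y_t ; X_{t-u} | Y_{t-1}) with a plug-in count estimator (and a one-step target history instead of a full embedding), and check that the scan peaks at the true interaction delay.

```python
import numpy as np
from collections import Counter

def cond_mutual_info(a, b, c):
    """Plug-in estimate of I(A; B | C) in bits from discrete samples."""
    n = len(a)
    pabc = Counter(zip(a, b, c))
    pac = Counter(zip(a, c))
    pbc = Counter(zip(b, c))
    pc = Counter(c)
    mi = 0.0
    for (x, y, z), n_xyz in pabc.items():
        mi += n_xyz / n * np.log2(n_xyz * pc[z] / (pac[(x, z)] * pbc[(y, z)]))
    return mi

rng = np.random.default_rng(2)
x = rng.integers(0, 2, 20000)
noise = rng.random(20000) < 0.1
y = np.zeros_like(x)
y[3:] = x[:-3] ^ noise[3:]          # Y copies X with delay 3, 10% bit flips

delays = range(1, 7)
te = []
for u in delays:
    # align Y_t, X_{t-u}, and the previous target state Y_{t-1}
    a, b, c = y[u:], x[:-u], np.roll(y, 1)[u:]
    te.append(cond_mutual_info(a.tolist(), b.tolist(), c.tolist()))
best = list(delays)[int(np.argmax(te))]
print(best)  # the scan peaks at the true delay, 3
```

Conditioning on the target's own past is what distinguishes this from a plain delayed mutual information: information already predictable from Y's history is not attributed to X.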
LABOR MARKET BEHAVIOR IN WASHINGTON: A COINTEGRATION APPROACH
In recent years, research investigating the impact of employment on other labor-related variables has held a prominent place in regional science. It is generally well understood that new business investment brings changes in population, an increased labor force participation rate, and migration of new residents. Research results are mixed regarding the extent to which new migrants account for new employment. Bartik (1993) found that about one-quarter of new jobs go to local workers because of the increase in the labor force participation rates of local residents in the long run. He considered the long-run effects by estimating the effects of 1% job growth in a certain period on the labor force participation rate seventeen years after the period. In contrast, Blanchard and Katz's (1992) research reaches the opposite conclusion: within five to seven years the employment response consists entirely of the migration of new migrants. Their finding is that the long-run effect of job growth on the labor force participation rate is negligible. In this study, using cointegration time series analysis, we found a long-run equilibrium relationship among population, labor force participation rate, and employment, in which population is positively related to employment and negatively related to the labor force participation rate. The long-run effect of a unit change in the labor force participation rate (1%) is a decrease of 73,880 in population, and the long-run effect of a unit change in employment (1,000) is an increase of 2,190 in population. We decomposed the time series into stationary and non-stationary components. The pattern of the stationary component of population is quite similar to that of the labor force participation rate, while that of employment shows a different fluctuation.
From the decomposition, it was obvious that the patterns of the stationary components of employment and net migration are quite similar, which means net migration is the short-run, temporary response to employment change. The stationary component of population, delayed by three years, shows a pattern similar to those of employment and net migration, and the plots correspond to changing economic conditions. According to the change in economic conditions, population responds three years later than employment and net migration. We interpret the non-stationary component of the labor force participation rate as reflecting its increasing trend in Washington, mainly due to a considerable increase in female labor force participation. The impulse responses of population, employment, and the labor force participation rate to a one-standard-deviation shock in employment show permanent increase effects; they settle at different equilibrium values in the long run. The response of the labor force participation rate to an impulse in employment supports Bartik's finding and is the opposite of Blanchard and Katz's finding that the long-run effect of job growth on the labor force participation rate is negligible. However, since the effect on population is also significantly high, we doubt that the increase in the labor force participation rate in response to the employment shock involves only the local resident labor force.
Labor and Human Capital
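The cointegration logic summarized above can be illustrated with one standard textbook procedure, the Engle-Granger two-step test, on synthetic data; this is an assumed sketch, not necessarily the estimator used in the study, and the -3.34 critical value is an approximate textbook figure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
common = np.cumsum(rng.normal(size=n))   # shared stochastic trend (I(1))
pop = 2.0 * common + rng.normal(size=n)  # synthetic "population"
emp = 1.0 * common + rng.normal(size=n)  # synthetic "employment"

# Step 1: long-run regression pop = a + b*emp + resid
X = np.column_stack([np.ones(n), emp])
beta, *_ = np.linalg.lstsq(X, pop, rcond=None)
resid = pop - X @ beta

# Step 2: Dickey-Fuller style regression d(resid)_t = rho * resid_{t-1} + e_t;
# a strongly negative t-statistic means the residuals mean-revert, i.e.
# the two series share a long-run equilibrium relationship.
d = np.diff(resid)
lag = resid[:-1]
rho = np.sum(lag * d) / np.sum(lag * lag)
se = np.sqrt(np.sum((d - rho * lag) ** 2) / (n - 2) / np.sum(lag * lag))
t_stat = rho / se
print(beta[1], t_stat < -3.34)
```

In the study's terms, the long-run coefficients of such an equilibrium relation are what translate a unit change in the labor force participation rate or in employment into the reported population effects.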
Practical implementation of nonlinear time series methods: The TISEAN package
Nonlinear time series analysis is becoming a more and more reliable tool for
the study of complicated dynamics from measurements. The concept of
low-dimensional chaos has proven to be fruitful in the understanding of many
complex phenomena despite the fact that very few natural systems have actually
been found to be low dimensional deterministic in the sense of the theory. In
order to evaluate the long term usefulness of the nonlinear time series
approach as inspired by chaos theory, it will be important that the
corresponding methods become more widely accessible. This paper, while not a
proper review on nonlinear time series analysis, tries to make a contribution
to this process by describing the actual implementation of the algorithms, and
their proper usage. Most of the methods require the choice of certain
parameters for each specific time series application. We will try to give
guidance in this respect. The scope and selection of topics in this article, as
well as the implementational choices that have been made, correspond to the
contents of the software package TISEAN which is publicly available from
http://www.mpipks-dresden.mpg.de/~tisean . In fact, this paper can be seen as
an extended manual for the TISEAN programs. It fills the gap between the
technical documentation and the existing literature, providing the necessary
entry points for a more thorough study of the theoretical background.
Comment: 27 pages, 21 figures, downloadable software at
http://www.mpipks-dresden.mpg.de/~tisean
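A small sketch of the delay-embedding step that underlies many of the TISEAN tools (e.g. the `delay` program): reconstruct an m-dimensional state space from a scalar measurement series with delay tau. The parameter choices below are illustrative assumptions; TISEAN itself provides mutual-information and false-nearest-neighbor tools to guide them, as discussed in the paper.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Delay-embed scalar series x into m dimensions with delay tau
    (in samples): rows are (x_t, x_{t+tau}, ..., x_{t+(m-1)tau})."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

t = np.arange(0, 50, 0.05)
x = np.sin(t)                       # clean periodic "measurement"
emb = delay_embed(x, m=2, tau=31)   # tau ~ a quarter period of sin
print(emb.shape)
```

For this sine wave a quarter-period delay unfolds the signal into a near-circular limit cycle in the embedding plane, the simplest example of the state-space reconstruction on which the nonlinear methods in the package operate.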