A Variant of Earley Parsing
The Earley algorithm is a widely used parsing method in natural language
processing applications. We introduce a variant of Earley parsing that is based
on a "delayed" recognition of constituents. This allows us to start the
recognition of a constituent only in cases in which all of its subconstituents
have been found within the input string. This is particularly advantageous in
several cases in which partial analysis of a constituent cannot be completed
and in general in all cases of productions sharing some suffix of their
right-hand sides (even for different left-hand side nonterminals). Although the
two algorithms result in the same asymptotic time and space complexity, from a
practical perspective our algorithm improves the time and space requirements of
the original method, as shown by reported experimental results.
Comment: 12 pages, 1 Postscript figure, uses psfig.tex and llncs.st
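For contrast with the delayed variant described above, a minimal sketch of the standard Earley recognizer (the predictor/scanner/completer loop over a chart) can look as follows; the grammar encoding, names, and item representation are illustrative assumptions, not the paper's formulation:

```python
# Minimal Earley recognizer sketch. A grammar is a dict mapping each
# nonterminal to a list of right-hand sides (tuples of symbols); any
# symbol that is a key of the dict is a nonterminal, anything else a
# terminal. Items are (lhs, rhs, dot, origin) tuples.

def earley_recognize(grammar, start, tokens):
    """Return True if `tokens` is derivable from `start` in `grammar`."""
    n = len(tokens)
    chart = [set() for _ in range(n + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))

    for i in range(n + 1):
        changed = True
        while changed:
            changed = False
            for (lhs, rhs, dot, origin) in list(chart[i]):
                if dot < len(rhs):
                    sym = rhs[dot]
                    if sym in grammar:
                        # Predictor: expand the nonterminal after the dot.
                        for prod in grammar[sym]:
                            item = (sym, prod, 0, i)
                            if item not in chart[i]:
                                chart[i].add(item)
                                changed = True
                    elif i < n and tokens[i] == sym:
                        # Scanner: consume a matching terminal.
                        chart[i + 1].add((lhs, rhs, dot + 1, origin))
                else:
                    # Completer: a finished constituent advances every
                    # item that was waiting for it at its origin position.
                    for (plhs, prhs, pdot, porigin) in list(chart[origin]):
                        if pdot < len(prhs) and prhs[pdot] == lhs:
                            item = (plhs, prhs, pdot + 1, porigin)
                            if item not in chart[i]:
                                chart[i].add(item)
                                changed = True

    return any(item == (start, rhs, len(rhs), 0)
               for rhs in grammar[start]
               for item in chart[n])
```

For example, with the left-recursive grammar `{"S": [("S", "+", "n"), ("n",)]}`, the recognizer accepts `["n", "+", "n"]` and rejects `["n", "+"]`. Note how the completer repeatedly rescans items at the origin position; the delayed-recognition variant above avoids starting constituents whose subconstituents have not all been found.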
Dynamic capabilities, creative action and poetics
Research on dynamic capabilities explores how business change enables enterprises to remain competitive. However, theory on dynamic capabilities still struggles to capture novelty, the essence of change. This study argues that a full understanding of strategic change requires us to sharpen our focus on real people and experiences; in turn, we must incorporate other faculties, which almost always operate alongside our logical ones, into our theory. We must pay more attention to the "non-rational" sides of ourselves, including, but not limited to, our imaginations, intuitions, attractions, biographies, preferences, and aesthetic faculties and capabilities. We argue that all such faculties, on the one hand, are central to our abilities to comprehend and cope with complexity and, on the other hand, foster novel understandings, potential responses, and social creativity. This study introduces the possibility of an alternative form of inquiry that highlights the role of poetic faculties in strategic behavior and change.
A penalty method for PDE-constrained optimization in inverse problems
Many inverse and parameter estimation problems can be written as
PDE-constrained optimization problems. The goal, then, is to infer the
parameters, typically coefficients of the PDE, from partial measurements of the
solutions of the PDE for several right-hand-sides. Such PDE-constrained
problems can be solved by finding a stationary point of the Lagrangian, which
entails simultaneously updating the parameters and the (adjoint) state
variables. For large-scale problems, such an all-at-once approach is not
feasible as it requires storing all the state variables. In this case one
usually resorts to a reduced approach where the constraints are explicitly
eliminated (at each iteration) by solving the PDEs. These two approaches, and
variations thereof, are the main workhorses for solving PDE-constrained
optimization problems arising from inverse problems. In this paper, we present
an alternative method that aims to combine the advantages of both approaches.
Our method is based on a quadratic penalty formulation of the constrained
optimization problem. By eliminating the state variable, we develop an
efficient algorithm that has roughly the same computational complexity as the
conventional reduced approach while exploiting a larger search space. Numerical
results show that this method indeed reduces some of the non-linearity of the
problem and is less sensitive to the initial iterate.
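The quadratic penalty idea can be sketched in generic notation; the symbols below (sampling operator P, PDE operator A(m), source q, data d, penalty weight λ) are illustrative and not necessarily the paper's:

```latex
% Constrained formulation of the inverse problem:
\min_{m,u}\ \tfrac12\,\|Pu - d\|^2 \quad \text{s.t.} \quad A(m)\,u = q
% Quadratic penalty formulation with weight \lambda > 0:
\phi_\lambda(m,u) = \tfrac12\,\|Pu - d\|^2 + \tfrac{\lambda}{2}\,\|A(m)\,u - q\|^2
% For fixed m, the minimizing state solves the linear normal equations
\bigl(P^{\top}P + \lambda\,A(m)^{\top}A(m)\bigr)\,\bar{u}(m)
    = P^{\top}d + \lambda\,A(m)^{\top}q
% and one minimizes the reduced objective
\bar{\phi}_\lambda(m) = \phi_\lambda\bigl(m,\bar{u}(m)\bigr)
```

Eliminating the state this way keeps the per-iteration cost comparable to the conventional reduced approach, while the state is only required to satisfy the PDE approximately, which enlarges the search space.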
Optimal and fast detection of spatial clusters with scan statistics
We consider the detection of multivariate spatial clusters in the Bernoulli
model with n locations, where the design distribution has weakly dependent
marginals. The locations are scanned with a rectangular window with sides
parallel to the axes and with varying sizes and aspect ratios. Multivariate
scan statistics pose a statistical problem due to the multiple testing over
many scan windows, as well as a computational problem because statistics have
to be evaluated on many windows. This paper introduces methodology that leads
to both statistically optimal inference and computationally efficient
algorithms. The main difference to the traditional calibration of scan
statistics is the concept of grouping scan windows according to their sizes,
and then applying different critical values to different groups. It is shown
that this calibration of the scan statistic results in optimal inference for
spatial clusters on both small scales and on large scales, as well as in the
case where the cluster lives on one of the marginals. Methodology is introduced
that allows for an efficient approximation of the set of all rectangles while
still guaranteeing the statistical optimality results described above. It is
shown that the resulting scan statistic has a computational complexity that is
almost linear in the number of locations.
Comment: Published at http://dx.doi.org/10.1214/09-AOS732 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
eCCH: A novel color cooccurrence histogram
In this paper, a novel color cooccurrence histogram method, named eCCH, which stands for color cooccurrence histogram at edge points, is proposed to describe the spatial-color joint distribution of images. Unlike existing approaches, we investigate only the color distribution of pixels located at the two sides of edge points along gradient direction lines. When measuring the similarity of two eCCHs, the Gaussian weighted histogram intersection method is adopted, where both identical and similar color pairs are considered to compensate for color variations. Comparative experimental results demonstrate the performance of the proposed eCCH in terms of robustness to color variance and low computational complexity. ©2010 IEEE
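The Gaussian-weighted histogram intersection step can be sketched as follows; the 1D bin layout and the bandwidth `sigma` are illustrative simplifications (the actual eCCH compares histograms of color pairs at edge points), so treat this as a sketch of the weighting idea only:

```python
import numpy as np

def gaussian_weighted_intersection(h1, h2, sigma=1.0):
    """Compare two 1D color histograms, crediting not only identical bins
    but also nearby (similar-color) bins, weighted by a Gaussian of the
    bin distance. This compensates for small color variations that would
    defeat a plain histogram intersection."""
    n = len(h1)
    idx = np.arange(n)
    # w[i, j] = exp(-(i - j)^2 / (2 sigma^2)): 1 on the diagonal
    # (identical colors), decaying for similar-but-different bins.
    w = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2.0 * sigma ** 2))
    # Weighted sum of pairwise bin-mass intersections.
    return np.sum(w * np.minimum(h1[:, None], h2[None, :]))
```

With a plain intersection, two one-hot histograms in adjacent bins score 0; here they score exp(-1/(2σ²)) > 0, which is the "similar color pairs" compensation the abstract describes.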
Entropy and Complexity of Polygonal Billiards with Spy Mirrors
We prove that a polygonal billiard with one-sided mirrors has zero
topological entropy. In certain cases we show subexponential, and in other
cases polynomial, estimates on the complexity.
Title pages and table of contents
Seismic waveform inversion aims at obtaining detailed estimates of subsurface medium parameters, such as the spatial distribution of sound speed, from multi-experiment seismic data. A formulation of this inverse problem in the frequency domain leads to an optimization problem constrained by a Helmholtz equation with many right-hand sides. Application of this technique to industry-scale problems faces several challenges: First, we need to solve the Helmholtz equation for high wave numbers over large computational domains. Second, the data consist of many independent experiments, leading to a large number of PDE solves. This results in high computational complexity both in terms of memory and CPU time as well as input/output costs. Finally, the inverse problem is highly nonlinear and a lot of art goes into preprocessing and regularization. Ideally, an inversion needs to be run several times with different initial guesses and/or tuning parameters. In this paper, we discuss the requirements of the various components (PDE solver, optimization method, etc.) when applied to large-scale three-dimensional seismic waveform inversion and combine several existing approaches into a flexible inversion scheme for seismic waveform inversion. The scheme is based on the idea that in the early stages of the inversion we do not need all the data or very accurate PDE solves. We base our method on an existing preconditioned Krylov solver (CARP-CG) and use ideas from stochastic optimization to formulate a gradient-based (quasi-Newton) optimization algorithm that works with small subsets of the right-hand sides and uses inexact PDE solves for the gradient calculations. We propose novel heuristics to adaptively control both the accuracy and the number of right-hand sides. We illustrate the algorithms on synthetic benchmark models for which significant computational gains can be made without being sensitive to noise and without losing the accuracy of the inverted model.
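The source-subsampling idea can be sketched on a toy problem. The forward map `m * q` below is a cheap stand-in for an (inexact) Helmholtz solve, plain gradient descent stands in for the quasi-Newton method, and the batch schedule, step size, and all names are illustrative assumptions rather than the paper's algorithm:

```python
import numpy as np

def subset_gradient(m, sources, data, batch, rng):
    """Monte-Carlo estimate of the full-data misfit gradient using `batch`
    randomly chosen right-hand sides (sources), rescaled to the full sum.
    Toy misfit: 0.5 * sum_s ||m * q_s - d_s||^2 for a scalar parameter m."""
    idx = rng.choice(len(sources), size=batch, replace=False)
    g = 0.0
    for s in idx:
        residual = m * sources[s] - data[s]  # toy forward map F(m, q) = m*q
        g += sources[s] @ residual           # gradient term for one source
    return g * (len(sources) / batch)        # rescale to estimate full sum

def invert(sources, data, iters=50, m0=0.0, seed=0):
    """Gradient descent that grows the source batch over the iterations,
    mimicking the 'few sources early, more sources later' strategy."""
    rng = np.random.default_rng(seed)
    n = len(sources)
    m = m0
    for k in range(iters):
        batch = min(n, 1 + k * n // iters)   # adaptively grow the batch
        g = subset_gradient(m, sources, data, batch, rng)
        step = 1.0 / sum(q @ q for q in sources)  # crude Lipschitz scaling
        m = m - step * g
    return m
```

Early iterations touch only a few right-hand sides (and, in the real scheme, tolerate inexact PDE solves), so most of the cost is deferred until the iterate is close enough that accuracy pays off; the paper's heuristics additionally adapt the solver accuracy, which this toy omits.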