Evaluating two methods for Treebank grammar compaction
Treebanks, such as the Penn Treebank, provide a basis for the automatic creation of broad coverage grammars. In the simplest case, rules can simply be "read off" the parse-annotations of the corpus, producing either a simple or probabilistic context-free grammar. Such grammars, however, can be very large, presenting problems for the subsequent computational costs of parsing under the grammar.
In this paper, we explore ways by which a treebank grammar can be reduced in size, or "compacted", which involve the use of two kinds of technique: (i) thresholding of rules by their number of occurrences; and (ii) a method of rule-parsing, which has both probabilistic and non-probabilistic variants. Our results show that by a combined use of these two techniques, a probabilistic context-free grammar can be reduced in size by 62% without any loss in parsing performance, and by 71% to give a gain in recall, but some loss in precision.
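The first compaction technique, frequency thresholding, is straightforward to sketch. The following is an illustrative reconstruction, not the authors' code: the nested-tuple tree encoding, the toy treebank, and the threshold value are all assumptions.

```python
from collections import Counter

def read_off_rules(tree, rules):
    """Read off CFG rules from a parse tree encoded as nested tuples:
    internal nodes are (label, child, ...), leaves are plain strings."""
    label, children = tree[0], tree[1:]
    rhs = []
    for child in children:
        if isinstance(child, str):      # terminal symbol
            rhs.append(child)
        else:                           # non-terminal: record its label, recurse
            rhs.append(child[0])
            read_off_rules(child, rules)
    rules[(label, tuple(rhs))] += 1

def threshold_rules(rules, min_count):
    """Technique (i): discard rules occurring fewer than min_count times."""
    return {rule: n for rule, n in rules.items() if n >= min_count}

# Tiny two-tree "treebank" (hypothetical example)
treebank = [
    ("S", ("NP", "she"), ("VP", ("V", "runs"))),
    ("S", ("NP", "he"), ("VP", ("V", "walks"))),
]
counts = Counter()
for tree in treebank:
    read_off_rules(tree, counts)
compact = threshold_rules(counts, min_count=2)
```

With this toy threshold, only the rules attested in both trees (S -> NP VP and VP -> V) survive; the lexical rules, each seen once, are pruned.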
Dialogue-based interfaces for universal access
Conversation provides an excellent means of communication for almost all people. Consequently, a conversational interface is an excellent mechanism for allowing people to interact with systems. Conversational systems remain an active research area, but a wide range of systems can already be developed with current technology. More sophisticated interfaces can take considerable effort, but simple interfaces can be developed quite rapidly. This paper gives an introduction to the current state of the art of conversational systems and interfaces. It describes a methodology for developing conversational interfaces and gives an example of an interface for a state benefits web site. The paper discusses how this interface could improve access for a wide range of people, and how further development of this interface would allow a larger range of people to use the system and give them more functionality.
Electromagnetic energy penetration in the self-induced transparency regime of relativistic laser-plasma interactions
Two scenarios for the penetration of relativistically intense laser radiation
into an overdense plasma, accessible by self-induced transparency, are
presented. For supercritical densities less than 1.5 times the critical one,
penetration of laser energy occurs via soliton-like structures moving into the
plasma. At higher background densities, laser light penetrates only over a
finite length, which increases with the incident intensity. In this regime the
plasma-field structures consist of alternating electron layers separated by
depleted regions about half a wavelength wide.
Comment: 9 pages, 4 figures, submitted for publication to PR
A spectrum deconvolution method based on grey relational analysis
The extensive usage of X-ray spectroscopies in studying complex material systems is not only intended to reveal underlying mechanisms that govern physical phenomena, but is also used in applied studies focused on an insight-driven performance improvement of a wide range of devices. However, the traditional analysis methods for X-ray spectroscopic data are rather time-consuming and sensitive to errors in data pre-processing (e.g., normalization or background subtraction). In this study, a method based on grey relational analysis, a multi-variable statistical method, is proposed to analyze and extract information from X-ray spectroscopic data. As a showcase, the valence bands of microcrystalline silicon suboxides probed by hard X-ray photoelectron spectroscopy (HAXPES) were investigated. The results obtained by the proposed method agree well with conventionally derived composition information (e.g., a curve fit of the Si 2p core level of the silicon suboxides). Furthermore, the uncertainty of chemical compositions derived by the proposed method is smaller than that of traditional analysis methods (e.g., the least-squares fit) when artificial linear functions are introduced to simulate the errors in data pre-processing. This suggests that the proposed method is capable of providing more reliable and accurate results, especially for data containing significant noise contributions or subject to inconsistent pre-processing. Since the proposed method is less experience-driven and error-prone, it offers a novel approach for automated data analysis, which is of great interest for various applications, such as studying combinatorial material libraries.
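The core of grey relational analysis is a per-point relational coefficient averaged into a grade. Below is a minimal sketch of generic GRA, not the authors' exact pipeline: the distinguishing coefficient zeta = 0.5 and the toy "spectra" are assumptions.

```python
def grey_relational_grades(reference, candidates, zeta=0.5):
    """Grade each candidate sequence against the reference sequence.
    Per-point coefficient: (d_min + zeta*d_max) / (d + zeta*d_max),
    with d_min/d_max taken over all candidates; the grade is the mean."""
    deltas = [[abs(r - c) for r, c in zip(reference, cand)] for cand in candidates]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    if d_max == 0:                      # every candidate identical to the reference
        return [1.0] * len(candidates)
    grades = []
    for row in deltas:
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Toy "spectra": the closer a candidate tracks the reference, the higher its grade
reference = [1.0, 2.0, 3.0, 2.0, 1.0]
candidates = [
    [1.0, 2.0, 3.0, 2.0, 1.0],   # identical -> grade 1.0
    [3.0, 2.0, 1.0, 2.0, 3.0],   # poor match -> lower grade
]
grades = grey_relational_grades(reference, candidates)
```

Because the grade depends only on pointwise absolute differences, it is less sensitive to the pre-processing choices (normalization, background subtraction) that plague curve fitting, which is the property the abstract exploits.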
Wishart and Anti-Wishart random matrices
We provide a compact exact representation for the distribution of the matrix
elements of the Wishart-type random matrices A^\dagger A, for any finite
number of rows and columns of A, without any large N approximations. In
particular we treat the case when the Wishart-type random matrix contains
redundant, non-random information, which is a new result. This representation
is of interest for a procedure of reconstructing the redundant information
hidden in Wishart matrices, with potential applications to numerous models
based on biological, social and artificial intelligence networks.
Comment: 11 pages; v2: references updated + some clarifications added; v3: version to appear in J. Phys. A, Special Issue on Random Matrix Theory
Large Deviations of the Maximum Eigenvalue in Wishart Random Matrices
We compute analytically the probability of large fluctuations to the left of
the mean of the largest eigenvalue in the Wishart (Laguerre) ensemble of
positive definite random matrices. We show that the probability that all the
eigenvalues of a (N x N) Wishart matrix W=X^T X (where X is a rectangular M x N
matrix with independent Gaussian entries) are smaller than the mean value
<\lambda>=N/c decreases for large N as ~exp[-\beta N^2 \Phi_{-}(x;c)], where
\beta=1,2 correspond respectively to real and complex Wishart matrices,
c=N/M < 1 and \Phi_{-}(x;c) is a large
deviation function that we compute explicitly. The result for the Anti-Wishart
case (M < N) simply follows by exchanging M and N. We also analytically
determine the average spectral density of an ensemble of constrained Wishart
matrices whose eigenvalues are forced to be smaller than a fixed barrier. The
numerical simulations are in excellent agreement with the analytical
predictions.
Comment: Published version. References and appendix added
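For small N the left-tail probability can be checked directly by Monte Carlo. The sketch below handles the real (beta = 1) case with N = 2, where the eigenvalues of W = X^T X follow from the quadratic formula; the sample size, trial count, and seed are arbitrary choices, and this estimates the raw probability rather than the large-deviation function itself.

```python
import random

def wishart_2x2_eigenvalues(M, rng):
    """Eigenvalues of W = X^T X for an M x 2 matrix X of independent
    standard Gaussian entries (real Wishart, N = 2)."""
    X = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(M)]
    a = sum(x * x for x, _ in X)        # W[0][0]
    b = sum(x * y for x, y in X)        # W[0][1] = W[1][0]
    d = sum(y * y for _, y in X)        # W[1][1]
    tr, det = a + d, a * d - b * b
    disc = max(tr * tr - 4 * det, 0.0) ** 0.5
    return (tr - disc) / 2, (tr + disc) / 2

def prob_all_eigs_below_mean(M, trials=5000, seed=1):
    """Estimate P(lambda_max < <lambda>), where <lambda> = N/c = M
    since c = N/M for W = X^T X."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        _, lam_max = wishart_2x2_eigenvalues(M, rng)
        if lam_max < M:
            hits += 1
    return hits / trials

p = prob_all_eigs_below_mean(M=4)
```

Since lambda_max typically exceeds the mean eigenvalue, this probability is well below one half even at N = 2, and by the abstract's result it shrinks like exp(-beta N^2 Phi_-) as N grows, which is why direct sampling only works for very small matrices.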
Scattering of second sound waves by quantum vorticity
A new method of detection and measurement of quantum vorticity by scattering
second sound off quantized vortices in superfluid Helium is suggested.
Theoretical calculations of the relative amplitude of the scattered second
sound waves from a single quantum vortex, a vortex ring, and bulk vorticity are
presented. The relevant estimates show that an experimental verification of the
method is feasible. Moreover, it can even be used for the detection of a single
quantum vortex.
Comment: Latex file, 9 pages
Surface Oscillations in Overdense Plasmas Irradiated by Ultrashort Laser Pulses
The generation of electron surface oscillations in overdense plasmas
irradiated at normal incidence by an intense laser pulse is investigated.
Two-dimensional (2D) particle-in-cell simulations show a transition from a
planar, electrostatic oscillation at 2\omega, with \omega the laser
frequency, to a 2D electromagnetic oscillation at frequency \omega and
wavevector k. A new electron parametric instability, involving the
decay of a 1D electrostatic oscillation into two surface waves, is introduced
to explain the basic features of the 2D oscillations. This effect leads to the
rippling of the plasma surface within a few laser cycles, and is likely to have
a strong impact on laser interaction with solid targets.
Comment: 9 pages (LaTeX, Revtex4), 4 GIF color figures, accepted for publication in Phys. Rev. Lett.
Setting limits on Effective Field Theories: the case of Dark Matter
The usage of Effective Field Theories (EFT) for LHC new physics searches is
receiving increasing attention. It is thus important to clarify all the aspects
related with the applicability of the EFT formalism in the LHC environment,
where the large available energy can produce reactions that overcome the
maximal range of validity, i.e. the cutoff, of the theory. We show that this
does not prevent setting rigorous limits on the EFT parameter space through a
modified version of the ordinary binned likelihood hypothesis test, which we
design and validate. Our limit-setting strategy can be carried out in its
full-fledged form by the LHC experimental collaborations, or performed
externally to the collaborations, through the Simplified Likelihood approach,
by relying on certain approximations. We apply it to the recent CMS mono-jet
analysis and derive limits on a Dark Matter (DM) EFT model. DM is selected as a
case study because the limited reach on the DM production EFT Wilson
coefficient and the structure of the theory suggest that the cutoff might be
dangerously low, well within the LHC reach. However, our strategy can also be
applied to EFTs parametrising the indirect effects of heavy new physics in the
Electroweak and Higgs sectors.
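The ordinary binned Poisson likelihood test, the starting point that the abstract's modified procedure builds on, can be sketched as follows. This is illustrative only: the bin contents are invented, and a simple Wilks-theorem 95% CL scan stands in for the paper's modified procedure and the Simplified Likelihood machinery.

```python
import math

def nll(mu, obs, bkg, sig):
    """Binned Poisson negative log-likelihood for signal strength mu
    (constant n! terms dropped); expected counts are bkg + mu*sig per bin."""
    total = 0.0
    for n, b, s in zip(obs, bkg, sig):
        lam = b + mu * s
        total += lam - n * math.log(lam)
    return total

def upper_limit_95(obs, bkg, sig, mu_max=20.0, step=0.001):
    """Scan mu >= 0; return the smallest mu above the best-fit point where
    2*(nll - nll_min) crosses 3.84 (Wilks' theorem, one parameter)."""
    grid = [i * step for i in range(int(mu_max / step) + 1)]
    values = [nll(mu, obs, bkg, sig) for mu in grid]
    best = min(values)
    best_mu = grid[values.index(best)]
    for mu, v in zip(grid, values):
        if mu > best_mu and 2.0 * (v - best) >= 3.84:
            return mu
    return None  # no crossing within the scan range

# Invented counts: data exactly matching background, plus a signal template
obs, bkg, sig = [10, 8, 5], [10.0, 8.0, 5.0], [2.0, 1.0, 0.5]
limit = upper_limit_95(obs, bkg, sig)
```

The EFT subtlety discussed in the abstract is that events above the cutoff should not be allowed to strengthen such a limit; the modified test restricts which events the likelihood may claim, a restriction this plain version does not implement.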
Assessment of ion kinetic effects in shock-driven inertial confinement fusion implosions using fusion burn imaging
The significance and nature of ion kinetic effects in D3He-filled, shock-driven inertial confinement
fusion implosions are assessed through measurements of fusion burn profiles. Over this series of
experiments, the ratio of ion-ion mean free path to minimum shell radius (the Knudsen number,
NK) was varied from 0.3 to 9 in order to probe hydrodynamic-like to strongly kinetic plasma
conditions; as the Knudsen number increased, hydrodynamic models increasingly failed to match
measured yields, while an empirically-tuned, first-step model of ion kinetic effects better captured
the observed yield trends [Rosenberg et al., Phys. Rev. Lett. 112, 185001 (2014)]. Here, spatially
resolved measurements of the fusion burn are used to examine kinetic ion transport effects in
greater detail, adding an additional dimension of understanding that goes beyond zero-dimensional
integrated quantities to one-dimensional profiles. In agreement with the previous findings, a comparison
of measured and simulated burn profiles shows that models including ion transport effects
are able to better match the experimental results. In implosions characterized by large Knudsen
numbers (NK ≳ 3), the fusion burn profiles predicted by hydrodynamics simulations that exclude
ion mean free path effects are peaked far from the origin, in stark disagreement with the experimentally
observed profiles, which are centrally peaked. In contrast, a hydrodynamics simulation that
includes a model of ion diffusion is able to qualitatively match the measured profile shapes.
Therefore, ion diffusion or diffusion-like processes are identified as a plausible explanation of the
observed trends, though further refinement of the models is needed for a more complete and
quantitative understanding of ion kinetic effects.
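The Knudsen number used above is just the ratio of ion-ion mean free path to minimum shell radius; a trivial sketch, with the regime boundary at NK ~ 1 as an illustrative rule of thumb rather than a value from the text:

```python
def knudsen_number(mfp_ion_ion, shell_radius_min):
    """NK = lambda_ii / R_shell_min, as defined in the abstract above
    (both lengths in the same units)."""
    return mfp_ion_ion / shell_radius_min

def regime(nk):
    """Rough classification: NK << 1 hydrodynamic-like, NK >> 1 kinetic
    (the NK = 1 boundary is an assumption for illustration)."""
    return "hydrodynamic-like" if nk < 1.0 else "kinetic"

# The experiments spanned NK = 0.3 to 9 for a fixed minimum shell radius
low = knudsen_number(0.3, 1.0)
high = knudsen_number(9.0, 1.0)
```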