110,300 research outputs found
Data processing techniques used with MST radars: A review
The data processing methods used in high-power radar probing of the middle atmosphere are examined. The radar acts as a spatial filter on the small-scale refractivity fluctuations in the medium. The characteristics of the received signals are related to the statistical properties of these fluctuations. A functional outline of the components of a radar system is given. Most computation-intensive tasks are carried out by the processor, which computes a statistical function of the received signals simultaneously for a large number of ranges. The slow fading of atmospheric signals is used to reduce the data input rate to the processor by coherent integration. The inherent range resolution of the radar experiments can be improved significantly with the use of pseudonoise phase codes to modulate the transmitted pulses and a corresponding decoding operation on the received signals. The commutativity of the decoding and coherent integration operations is exploited to obtain a significant reduction in computations. The limitations of the processors are outlined. At the next level of data reduction, the measured function is parameterized by a few spectral moments that can be related to physical processes in the medium. The problems encountered in estimating the spectral moments in the presence of strong ground clutter, external interference, and noise are discussed. The graphical and statistical analysis of the inferred parameters is outlined, and the requirements for special-purpose processors for MST radars are discussed.
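The coherent integration step described above can be sketched in a few lines: consecutive complex voltage samples from each range gate are summed, so a slowly fading signal adds in phase while uncorrelated noise grows only as the square root of the count. This is a minimal illustration, not the paper's implementation; the array shapes and function name are assumptions.

```python
import numpy as np

def coherent_integrate(samples, n):
    """Coherently integrate complex radar samples.

    samples: 2D array (pulses x range_gates) of complex voltages.
    n: number of consecutive pulses summed per output point.
    The data rate into the processor drops by a factor of n.
    """
    pulses, gates = samples.shape
    usable = (pulses // n) * n  # drop any incomplete final group
    return samples[:usable].reshape(-1, n, gates).sum(axis=1)

# Toy example: a constant (i.e. very slowly fading) signal plus noise.
rng = np.random.default_rng(0)
sig = np.ones((64, 4), dtype=complex)
noise = rng.normal(size=(64, 4)) + 1j * rng.normal(size=(64, 4))
out = coherent_integrate(sig + noise, 8)
print(out.shape)  # (8, 4): 64 pulses reduced to 8 integrated samples
```

After integration the signal component of each output sample is n times the input amplitude, while the noise standard deviation has grown only by sqrt(n), improving the effective signal-to-noise ratio.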
Hierarchical fractional-step approximations and parallel kinetic Monte Carlo algorithms
We present a mathematical framework for constructing and analyzing parallel
algorithms for lattice Kinetic Monte Carlo (KMC) simulations. The resulting
algorithms have the capacity to simulate a wide range of spatio-temporal scales
in spatially distributed, non-equilibrium physicochemical processes with complex
chemistry and transport micro-mechanisms. The algorithms can be tailored to
specific hierarchical parallel architectures such as multi-core processors or
clusters of Graphical Processing Units (GPUs). The proposed parallel algorithms
are controlled-error approximations of kinetic Monte Carlo algorithms,
departing from the predominant paradigm of creating parallel KMC algorithms
with exactly the same master equation as the serial one.
Our methodology relies on a spatial decomposition of the Markov operator
underlying the KMC algorithm into a hierarchy of operators corresponding to the
processors' structure in the parallel architecture. Based on this operator
decomposition, we formulate Fractional Step Approximation schemes by employing
the Trotter Theorem and its random variants; these schemes (a) determine the
communication schedule between processors, and (b) are run independently on
each processor through a serial KMC simulation, called a kernel, on each
fractional step time-window.
Furthermore, the proposed mathematical framework allows us to rigorously
justify the numerical and statistical consistency of the proposed algorithms,
showing the convergence of our approximating schemes to the original serial
KMC. The approach also provides a systematic evaluation of different processor
communication schedules.
Comment: 34 pages, 9 figures.
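The fractional-step idea behind these schemes can be illustrated on a toy continuous-time Markov chain: split the generator into two sub-generators (think of events owned by two different processors), then alternate short time-windows in which each sub-generator acts alone. By the Lie-Trotter theorem the product converges to the exact evolution as the windows shrink. This sketch uses a 3-state generator, not the paper's lattice KMC; the matrices and the series-based `expm` are illustrative assumptions.

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential via truncated Taylor series (adequate for
    the small, well-scaled generators in this toy example)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Two sub-generators, e.g. transitions handled by two processors.
Q1 = np.array([[-1.0, 1.0, 0.0],
               [ 0.0, 0.0, 0.0],
               [ 0.5, 0.0, -0.5]])
Q2 = np.array([[ 0.0, 0.0, 0.0],
               [ 1.0, -2.0, 1.0],
               [ 0.0, 0.0, 0.0]])

t = 1.0
exact = expm((Q1 + Q2) * t)  # evolution under the full generator

def trotter(n):
    """Lie-Trotter fractional step: alternate the two kernels over
    n time-windows of length t/n each."""
    step = expm(Q1 * t / n) @ expm(Q2 * t / n)
    P = np.eye(3)
    for _ in range(n):
        P = P @ step
    return P

errors = [np.abs(trotter(n) - exact).max() for n in (1, 10, 100)]
print(errors)  # error shrinks roughly like O(1/n)
```

In the parallel setting each `expm(Qi * t/n)` factor corresponds to a serial KMC kernel run independently on one processor during a fractional-step window, with communication only at the window boundaries.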
Automatic generation of named entity taggers leveraging parallel corpora
The lack of hand-curated data is a major impediment to developing statistical semantic
processors for many of the world's languages. A major issue with semantic processors in
Natural Language Processing (NLP) is that they require manually annotated data to perform
accurately. Our work aims to address this issue by leveraging existing annotations and
semantic processors from multiple source languages by projecting their annotations via
statistical word alignments traditionally used in Machine Translation. Taking the Named
Entity Recognition (NER) task as a use case of semantic processing, this work presents
a method to automatically induce Named Entity taggers using parallel data, without any
manual intervention. Our method leverages existing semantic processors and annotations
to overcome the lack of annotation data for a given language. The intuition is to transfer
or project semantic annotations, from multiple sources to a target language, by statistical
word alignment methods applied to parallel texts (Och and Ney, 2000; Liang et al., 2006).
The projected annotations can then be used to automatically generate semantic processors
for the target language. In this way we would be able to provide NLP processors without
training data for the target language. The experiments focus on four languages:
German, English, Spanish and Italian, and our empirical evaluation results show that our
method obtains competitive results when compared with models trained on gold-standard
out-of-domain data. This shows that our projection algorithm is effective at transporting
NER annotations across languages via parallel data, thus providing a fully automatic method
to obtain NER taggers for as many languages as are aligned via parallel corpora.
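The core projection step can be sketched simply: given token-level NER tags on the source side of a parallel sentence and a set of word-alignment pairs, copy each aligned token's tag to its target-side counterpart. The alignments below are hand-made for illustration (a real pipeline would obtain them from a statistical aligner); the function name and BIO labels are assumptions of this sketch.

```python
def project_tags(src_tags, alignment, tgt_len):
    """Project token-level NER tags from source to target.

    src_tags:  BIO labels for the source sentence.
    alignment: set of (src_idx, tgt_idx) word-alignment pairs.
    tgt_len:   number of target tokens.
    Unaligned target tokens default to the outside tag 'O'.
    """
    tgt_tags = ["O"] * tgt_len
    for s, t in sorted(alignment):
        tgt_tags[t] = src_tags[s]
    return tgt_tags

# English -> Spanish toy example with a monotone 1:1 alignment.
src = ["Obama", "visited", "Madrid"]
src_tags = ["B-PER", "O", "B-LOC"]
alignment = {(0, 0), (1, 1), (2, 2)}
print(project_tags(src_tags, alignment, 3))
# ['B-PER', 'O', 'B-LOC']
```

The projected tags on the target side can then serve as (noisy) training data for a standard NER model, which is what makes the overall method fully automatic.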
Data dependent energy modelling for worst case energy consumption analysis
Safely meeting Worst Case Energy Consumption (WCEC) criteria requires
accurate energy modeling of software. We investigate the impact of instruction
operand values upon energy consumption in cacheless embedded processors.
Existing instruction-level energy models typically use measurements from random
input data, providing estimates unsuitable for safe WCEC analysis.
We examine probabilistic energy distributions of instructions and propose a
model for composing instruction sequences using distributions, enabling WCEC
analysis on program basic blocks. The worst case is predicted with statistical
analysis. Further, we verify that the energy of embedded benchmarks can be
characterised as a distribution, and compare our proposed technique with other
methods of estimating energy consumption.
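Composing per-instruction energy distributions over a basic block can be sketched as repeated convolution: if each instruction's energy cost is an independent discrete random variable, the block's total cost is the convolution of the individual distributions, and a high quantile of the result gives a probabilistic worst-case bound. The independence assumption, the toy cost distributions, and the function names are all assumptions of this sketch, not the paper's model.

```python
import numpy as np

def convolve_energy(dists):
    """Compose independent per-instruction energy distributions.

    Each dist is a 1D array: dists[i][e] = P(instruction i costs
    e energy units). The block distribution is their convolution.
    """
    block = np.array([1.0])
    for d in dists:
        block = np.convolve(block, d)
    return block

def quantile(dist, q):
    """Smallest energy level whose CDF reaches q: a probabilistic
    worst-case energy bound for the block."""
    return int(np.searchsorted(np.cumsum(dist), q))

# Two toy instructions with data-dependent costs in {0, 1, 2} units.
add = np.array([0.1, 0.8, 0.1])
mul = np.array([0.0, 0.2, 0.8])
block = convolve_energy([add, mul])
print(quantile(block, 0.999))  # 4, the absolute maximum here
```

Choosing the quantile level trades tightness of the bound against the probability of exceeding it; an absolute worst case corresponds to the top of the distribution's support.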