Algorithms to Compute the Lyndon Array
We first describe three algorithms for computing the Lyndon array that have
been suggested in the literature, but for which no structured exposition has
been given. Two of these algorithms execute in quadratic time in the worst
case; the third achieves linear time, but at the expense of prior computation
of both the suffix array and the inverse suffix array of x. We then go on to
describe two variants of a new algorithm that avoids prior computation of
global data structures and executes in worst-case O(n log n) time. Experimental
evidence suggests that all but one of these five algorithms require only linear
execution time in practice, with the two new algorithms faster by a small
factor. We conjecture that there exists a fast and worst-case linear-time
algorithm to compute the Lyndon array that is also elementary (making no use of
global data structures such as the suffix array).
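For concreteness, a quadratic-time baseline can be sketched directly from the standard suffix-based characterization of the Lyndon array. This is a minimal illustration of that characterization, not one of the five algorithms discussed above, and the function name is ours:

```python
def lyndon_array(x):
    """Compute the Lyndon array of x in quadratic time.

    lam[i] is the length of the longest Lyndon word starting at
    position i. It equals j - i, where j is the smallest index > i
    whose suffix x[j:] is lexicographically smaller than x[i:]
    (or n if no such index exists).
    """
    n = len(x)
    lam = [0] * n
    for i in range(n):
        j = i + 1
        # advance j while the suffix at j is still larger than the suffix at i
        while j < n and x[j:] > x[i:]:
            j += 1
        lam[i] = j - i
    return lam

# lyndon_array("banana") -> [1, 2, 1, 2, 1, 1]
```

Each suffix comparison may take linear time, giving the quadratic worst case the abstract mentions; the linear- and n log n-time algorithms avoid these repeated full comparisons.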
Ground-state energy of biquadratic spin systems (S = 3/2) in the (1/z)^1-approximation
Corrections to the molecular-field ground-state energies of the Heisenberg model with isotropic biquadratic interactions (spin S = 3/2) are calculated in the (1/z)^1-approximation using the diagrammatic technique based on the Wick reduction theorem (z is the number of spins interacting with any given spin). The present results for the antiferri- and antiferromagnetic phases complete the previously obtained data for the antiquadrupolar, ferri- and ferromagnetic phases. Of the boundaries between the different
ground states, only that between the antiferri- and antiferromagnetic phases is shifted with respect to its molecular-field value.
k-Dirac operator and parabolic geometries
The principal group of a Klein geometry has a canonical left action on the
homogeneous space of the geometry, and this action induces an action on the
spaces of sections of vector bundles over the homogeneous space. This paper
concerns the construction of differential operators invariant with respect to
the induced action of the principal group of a particular type of parabolic
geometry. These operators form sequences which are related to the minimal
resolutions of the k-Dirac operators studied in Clifford analysis.
The BaBar Event Building and Level-3 Trigger Farm Upgrade
The BaBar experiment is the particle detector at the PEP-II B-factory
facility at the Stanford Linear Accelerator Center. During the summer shutdown
2002, the BaBar Event Building and Level-3 trigger farm were upgraded from 60
Sun Ultra-5 machines and 100 Mbit/s Ethernet to 50 dual-CPU 1.4 GHz Pentium III
systems with Gigabit Ethernet. Combined with an upgrade to Gigabit Ethernet on
the source side and a major feature-extraction software speedup, this pushes
the performance of the BaBar event builder and L3 filter to 5.5 kHz at current
background levels, almost three times the original design rate of 2 kHz. For our
specific application the new farm provides 8.5 times the CPU power of the old
system.

Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
conference (CHEP03), La Jolla, CA, USA, March 2003; 4 pages, 1 EPS figure; PSN MOGT00.
Model Checking a C++ Software Framework, a Case Study
This paper presents a case study on applying two model checkers, SPIN and
DIVINE, to verify key properties of a C++ software framework, known as ADAPRO,
originally developed at CERN. SPIN was used for verifying properties on the
design level. DIVINE was used for verifying simple test applications that
interacted with the implementation. Both model checkers were found to have
their own respective sets of pros and cons, but the overall experience was
positive. Used in a complementary manner, the two model checkers provided
valuable new insights into the framework, which would arguably have been hard
to gain with traditional testing and analysis tools alone. Translating
the C++ source code into the modeling language of the SPIN model checker helped
to find flaws in the original design. With DIVINE, defects were found in parts
of the code base that had already been subject to hundreds of hours of unit
tests, integration tests, and acceptance tests. Most importantly, model
checking was found to be easy to integrate into the workflow of the software
project and to bring added value, not only as a verification but also as a
validation methodology. Therefore, using model checking for developing
library-level code seems realistic and worth the effort.

Comment: In Proceedings of the 27th ACM Joint European Software Engineering
Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE
'19), August 26-30, 2019, Tallinn, Estonia. ACM, New York, NY, USA, 11 pages.
Efficient Online Timed Pattern Matching by Automata-Based Skipping
The timed pattern matching problem is an actively studied topic because of
its relevance in the monitoring of real-time systems. There one is given a log
and a specification (given by a timed word and a timed automaton in this
paper), and one wishes to return the set of intervals for which the log, when
restricted to the interval, satisfies the specification.
. In our previous work we presented an efficient timed pattern
matching algorithm: it adopts a skipping mechanism inspired by the classic
Boyer-Moore (BM) string matching algorithm. In this work we tackle the problem
of online timed pattern matching, towards embedded applications where it is
vital to process a vast amount of incoming data in a timely manner.
Specifically, we start with the Franek-Jennings-Smyth (FJS) string matching
algorithm, a recent variant of the BM algorithm, and extend it to timed
pattern matching. Our experiments indicate the efficiency of our FJS-type
algorithm in both online and offline timed pattern matching.
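The skipping mechanism referenced above is easiest to see in the plain, untimed string-matching setting. Below is a minimal sketch of Sunday-style bad-character skipping, the kind of shift table that BM-family algorithms build on; it illustrates the general idea only, not the paper's timed algorithm or the full FJS combination of shifts, and the function name is ours:

```python
def quick_search(text, pat):
    """Find all occurrences of pat in text using a bad-character
    shift table (Sunday's quick-search, a BM-family algorithm)."""
    n, m = len(text), len(pat)
    if m == 0 or m > n:
        return []
    # shift[c]: how far to slide the window when the character just
    # past the current window is c; characters absent from pat allow
    # the maximal skip of m + 1.
    shift = {c: m - i for i, c in enumerate(pat)}
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pat:
            hits.append(i)
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)
    return hits

# quick_search("abracadabra", "abra") -> [0, 7]
```

The skip table lets whole windows of the input be passed over without inspection, which is what makes this family attractive when a vast amount of incoming data must be processed online.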
Generalised median of a set of correspondences based on the Hamming distance
A correspondence is a set of mappings that establishes a relation between the elements of two data structures (e.g. sets of points, strings, trees or graphs). If we consider several correspondences between the same two structures, one option for defining a representative of them is the generalised median correspondence. In general, computing the generalised median is an NP-complete task. In this paper, we present two methods to calculate the generalised median correspondence of multiple correspondences. The first obtains the optimal solution in cubic time, but is restricted to the Hamming distance. The second obtains a sub-optimal solution through an iterative approach, but places no restriction on the distance used. We compare both proposals in terms of their distance to the true generalised median and their runtime.
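To make the setting concrete, here is a minimal sketch assuming each correspondence is an equal-length tuple mapping element i of the first structure to an element of the second. Under the Hamming distance and with no one-to-one constraint on the result, the generalised median reduces to a position-wise majority vote; the paper's cubic-time method additionally keeps the output a valid one-to-one correspondence, which this sketch does not attempt, and both function names are ours:

```python
from collections import Counter

def hamming(f, g):
    """Number of elements the two correspondences map differently."""
    return sum(a != b for a, b in zip(f, g))

def median_correspondence(corrs):
    """Unconstrained generalised median under the Hamming distance:
    for each position, pick the target chosen by the most
    correspondences (a per-position majority vote)."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*corrs))

# median_correspondence([(0, 1, 2), (0, 2, 2), (1, 1, 2)]) -> (0, 1, 2)
```

Each position can be decided independently because the Hamming distance is a sum of per-position disagreements; it is the bijectivity constraint that couples the positions and motivates the assignment-style cubic-time method.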