LNCS
We provide a procedure for detecting the sub-segments of an incrementally observed Boolean signal ω that match a given temporal pattern ϕ. As a pattern specification language, we use timed regular expressions, a formalism well-suited for expressing properties of concurrent asynchronous behaviors embedded in metric time. We construct a timed automaton accepting the timed language denoted by ϕ and modify it slightly for the purpose of matching. We then apply zone-based reachability computation to this automaton while it reads ω, and retrieve all the matching segments from the results. Since the procedure is automaton-based, it can be applied to patterns specified by other formalisms, such as timed temporal logics reducible to timed automata, or patterns directly encoded as timed automata. The procedure has been implemented and its performance on synthetic examples is demonstrated.
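The core idea can be illustrated in miniature. The sketch below is not the paper's zone-based algorithm; it is a toy analogue (with an interface we made up) that scans a piecewise-constant Boolean signal and reports the maximal segments on which the proposition holds whose duration lies in a bound [lo, hi], a simplified stand-in for matching a duration-constrained timed expression.

```python
# Toy analogue of timed pattern matching (hypothetical API, not the
# paper's zone-based reachability computation): find each maximal
# sub-segment on which the signal is True, and report those whose
# duration lies within [lo, hi].

def match_segments(signal, lo, hi):
    """signal: list of (value: bool, duration: float) pieces."""
    matches = []
    t = 0.0          # current absolute time
    start = None     # start time of the current True segment, if any
    for value, dur in signal:
        if value and start is None:
            start = t                      # a True segment begins
        if not value and start is not None:
            if lo <= t - start <= hi:      # segment just ended; check bound
                matches.append((start, t))
            start = None
        t += dur
    if start is not None and lo <= t - start <= hi:
        matches.append((start, t))         # signal ended inside a segment
    return matches

sig = [(False, 1.0), (True, 2.0), (False, 0.5), (True, 5.0)]
print(match_segments(sig, 1.0, 3.0))  # [(1.0, 3.0)]
```

Note that the real procedure computes the full two-dimensional set of matching (begin, end) pairs symbolically via zones; this sketch only reports maximal segments.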
Two is Not Always Better Than One: Modeling Evidence for a Single Structure-Building System
A challenge for grammatical theories and models of language processing alike is to explain conflicting online and offline judgments about the acceptability of sentences. A prominent example of the online/offline mismatch involves “agreement attraction” in sentences like *The key to the cabinets were rusty, which are often erroneously treated as acceptable in time-restricted “online” measures, but judged as less acceptable in untimed “offline” tasks. The prevailing assumption is that online/offline mismatches are the product of two linguistic analyzers: one analyzer for rapid communication (the “parser”) and another, slower analyzer that classifies grammaticality (the “grammar”). A competing hypothesis states that online/offline mismatches reflect a single linguistic analyzer implemented in a noisy memory architecture that creates the opportunity for errors and conflicting judgments at different points in time. A challenge for the single-analyzer account is to explain why online and offline tasks sometimes yield conflicting responses if they are mediated by the same analyzer. The current study addresses this challenge by showing how agreement attraction effects might come and go over time in a single-analyzer architecture. Experiments 1 and 2 use an agreement attraction paradigm to directly compare online and offline judgments, and confirm that the online/offline contrast reflects the time restriction in online tasks. Experiment 3 then uses computational modeling to capture the mapping from online to offline responses as a process of sequential memory sampling in a single-analyzer framework. This demonstration provides a proof of concept for the single-analyzer account and offers an explicit process model for the mapping between online and offline responses.
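The sequential-sampling intuition can be made concrete with a toy simulation (all names and parameter values below are our own illustrative assumptions, not the paper's model): a single analyzer probes a noisy memory; a speeded judgment rests on one probe, while an untimed judgment aggregates many, so occasional misretrievals of the plural attractor drive online errors that largely wash out offline.

```python
# Hypothetical sketch of sequential memory sampling in a single-analyzer
# architecture. With probability p_misretrieve a memory probe retrieves
# the plural attractor ("cabinets") instead of the head noun ("key"),
# making ungrammatical "were" look acceptable on that probe.
import random

def probe(p_misretrieve):
    # One noisy memory access: True = attractor misretrieved.
    return random.random() < p_misretrieve

def judge_acceptable(n_samples, p_misretrieve=0.3):
    # Majority vote over n_samples probes: the sentence is judged
    # acceptable iff most probes misretrieve the attractor.
    hits = sum(probe(p_misretrieve) for _ in range(n_samples))
    return hits > n_samples / 2

random.seed(0)
trials = 10_000
online = sum(judge_acceptable(1) for _ in range(trials)) / trials   # speeded
offline = sum(judge_acceptable(15) for _ in range(trials)) / trials # untimed
print(online, offline)  # the attraction effect shrinks as samples accumulate
```

With one sample the error rate tracks the misretrieval probability; with many samples the majority vote suppresses it, so the same analyzer yields conflicting online and offline responses.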
Formal Design of Asynchronous Fault Detection and Identification Components using Temporal Epistemic Logic
Autonomous critical systems, such as satellites and space rovers, must be
able to detect the occurrence of faults in order to ensure correct operation.
This task is carried out by Fault Detection and Identification (FDI)
components, which are embedded in those systems and are in charge of detecting
faults in an automated and timely manner by reading data from sensors and
triggering predefined alarms. The design of effective FDI components is an
extremely hard problem, due in part to the lack of a complete theoretical
foundation and of precise specification and validation techniques. In this
paper, we present the first formal approach to the design of FDI components for
discrete event systems, both in a synchronous and asynchronous setting. We
propose a logical language for the specification of FDI requirements that
accounts for a wide class of practical cases, and includes novel aspects such
as maximality and trace-diagnosability. The language is equipped with a clear
semantics based on temporal epistemic logic, and is proved to enjoy suitable
properties. We discuss how to validate the requirements and how to verify that
a given FDI component satisfies them. We propose an algorithm for the synthesis
of correct-by-construction FDI components, and report on the applicability of
the design approach on an industrial case study from aerospace.
Comment: 33 pages, 20 figures
Design, Commissioning and Performance of the PIBETA Detector at PSI
We describe the design, construction and performance of the PIBETA detector
built for the precise measurement of the branching ratio of pion beta decay,
pi+ -> pi0 e+ nu, at the Paul Scherrer Institute. The central part of the
detector is a 240-module spherical pure CsI calorimeter covering 3*pi sr solid
angle. The calorimeter is supplemented with an active collimator/beam degrader
system, an active segmented plastic target, a pair of low-mass cylindrical wire
chambers and a 20-element cylindrical plastic scintillator hodoscope. The whole
detector system is housed inside a temperature-controlled lead brick enclosure
which in turn is lined with cosmic muon plastic veto counters. Commissioning
and calibration data were taken during two three-month beam periods in
1999/2000 with pi+ stopping rates between 1.3x10^3 pi+/s and 1.3x10^6 pi+/s. We
examine the timing, energy and angular detector resolution for photons,
positrons and protons in the energy range of 5-150 MeV, as well as the response
of the detector to cosmic muons. We illustrate the detector signatures for the
assorted rare pion and muon decays and their associated backgrounds.
Comment: 117 pages, 48 Postscript figures, 5 tables, Elsevier LaTeX, submitted
to Nucl. Instrum. Meth.
Real-Time RGB-D Camera Pose Estimation in Novel Scenes using a Relocalisation Cascade
Camera pose estimation is an important problem in computer vision. Common
techniques either match the current image against keyframes with known poses,
directly regress the pose, or establish correspondences between keypoints in
the image and points in the scene to estimate the pose. In recent years,
regression forests have become a popular alternative to establish such
correspondences. They achieve accurate results, but have traditionally needed
to be trained offline on the target scene, preventing relocalisation in new
environments. Recently, we showed how to circumvent this limitation by adapting
a pre-trained forest to a new scene on the fly. The adapted forests achieved
relocalisation performance that was on par with that of offline forests, and
our approach was able to estimate the camera pose in close to real time. In
this paper, we present an extension of this work that achieves significantly
better relocalisation performance whilst running fully in real time. To achieve
this, we make several changes to the original approach: (i) instead of
accepting the camera pose hypothesis without question, we make it possible to
score the final few hypotheses using a geometric approach and select the most
promising; (ii) we chain several instantiations of our relocaliser together in
a cascade, allowing us to try faster but less accurate relocalisation first,
only falling back to slower, more accurate relocalisation as necessary; and
(iii) we tune the parameters of our cascade to achieve effective overall
performance. These changes allow us to significantly improve upon the
performance our original state-of-the-art method was able to achieve on the
well-known 7-Scenes and Stanford 4 Scenes benchmarks. As additional
contributions, we present a way of visualising the internal behaviour of our
forests and show how to entirely circumvent the need to pre-train a forest on a
generic scene.
Comment: Tommaso Cavallari, Stuart Golodetz, Nicholas Lord and Julien Valentin
assert joint first authorship
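The cascade idea described in (i)-(iii) can be sketched in a few lines (the interfaces below are hypothetical stand-ins, not the authors' code): try the fast relocaliser first, geometrically score its best hypothesis, and fall back to slower, more accurate stages only when that score is too poor.

```python
# Illustrative sketch of a relocalisation cascade (hypothetical interface,
# not the authors' implementation): stages are ordered fast -> slow, each
# returning candidate pose hypotheses; score() is a geometric quality
# measure used to pick the most promising hypothesis per stage.

def relocalise_cascade(frame, relocalisers, score, threshold):
    best = None
    for reloc in relocalisers:
        hypotheses = reloc(frame)
        # Score the final few hypotheses and keep the most promising.
        best = max(hypotheses, key=lambda h: score(frame, h))
        if score(frame, best) >= threshold:
            return best  # good enough: skip the slower stages
    return best          # all stages tried; return the best pose found

# Tiny usage example with numeric stand-ins in place of poses:
fast = lambda frame: [0.4, 0.6]   # cheap, mediocre hypotheses
slow = lambda frame: [0.9]        # expensive, accurate hypothesis
print(relocalise_cascade(None, [fast, slow], lambda f, h: h, 0.8))  # 0.9
```

Tuning `threshold` trades speed against accuracy across the cascade, which is the parameter-tuning step mentioned in (iii).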
Testing Beam-Induced Quench Levels of LHC Superconducting Magnets
From 2009 to 2013 the Large Hadron Collider (LHC) was operated at top beam
energies of 3.5 TeV and, from 2012, 4 TeV per proton instead of
the nominal 7 TeV. The currents in the superconducting magnets were reduced
accordingly. To date only seventeen beam-induced quenches have occurred; eight
of them during specially designed quench tests, the others during injection.
There has not been a single beam-induced quench during normal collider
operation with stored beam. The conditions, however, are expected to become
much more challenging after the long LHC shutdown. The magnets will be
operating at near nominal currents, and in the presence of high energy and high
intensity beams with a stored energy of up to 362 MJ per beam. In this paper we
summarize our efforts to understand the quench levels of LHC superconducting
magnets. We describe beam-loss events and dedicated experiments with beam, as
well as the simulation methods used to reproduce the observable signals. The
simulated energy deposition in the coils is compared to the quench levels
predicted by electro-thermal models, thus allowing us to validate and improve
the models used to set beam-dump thresholds on beam-loss monitors for Run 2.
Comment: 19 pages