Hybrid measurement-based WCET analysis at the source level using object-level traces
Hybrid measurement-based approaches to worst-case execution time (WCET) analysis combine measured execution times of small program segments with a static analysis of the larger software structure. In order to make the necessary measurements, instrumentation code is added to generate a timestamped trace from the running program. The intrusive presence of this instrumentation code incurs a timing penalty, widely referred to as the probe effect. However, recent years have seen the emergence of trace capability at the hardware level, effectively opening the door to probe-free analysis.
Relying on hardware support forces the WCET analysis to the object-code level, since that is all that is known by the hardware. A major disadvantage of this is that it is expensive for a typical software engineer to interpret the results, since most engineers are familiar with the source code but not the object code. Meaningful WCET analysis involves not just running a tool to obtain an overall WCET value but also understanding which sections of code consume most of the WCET in order that corrective actions, such as optimisation, can be applied if the WCET value is too large.
The main contribution of this paper is a mechanism by which hybrid WCET analysis can still be performed at the source level when the timestamped trace has been collected at the object level by state-of-the-art hardware. This allows existing commercial tools, such as RapiTime, to operate without the need for intrusive instrumentation and thus without the probe effect.
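The mapping step behind this kind of hybrid analysis can be sketched as follows. This is a hedged illustration, not the paper's tool: the trace format, the address-to-block map, and all numbers are invented. Each timed interval in an object-level trace is attributed to a source-level block, and per-block worst observations are combined with statically derived loop bounds.

```python
# Hypothetical sketch of hybrid WCET analysis: object-level timestamps are
# mapped back to source-level blocks via a debug-info-style address map.
# All names and numbers are illustrative, not from the paper.

def map_trace_to_source(trace, addr_to_block):
    """Attribute each timed interval in an object-level trace to a source block.

    trace: list of (address, timestamp) pairs, timestamps in cycles.
    addr_to_block: dict mapping instruction address -> source block id.
    Returns dict: block id -> worst observed execution time of that block.
    """
    worst = {}
    for (addr, t0), (_, t1) in zip(trace, trace[1:]):
        block = addr_to_block[addr]
        worst[block] = max(worst.get(block, 0), t1 - t0)
    return worst

def wcet_estimate(worst, loop_bounds):
    """Combine per-block worst times with static loop bounds (the hybrid step)."""
    return sum(worst[b] * loop_bounds.get(b, 1) for b in worst)

trace = [(0x100, 0), (0x120, 40), (0x100, 55), (0x120, 105), (0x180, 120)]
addr_to_block = {0x100: "loop_body", 0x120: "update", 0x180: "exit"}
worst = map_trace_to_source(trace, addr_to_block)
print(worst)  # worst observed time per source block
print(wcet_estimate(worst, {"loop_body": 10, "update": 10}))
```

The key point is that the hardware trace never needs to mention source constructs; the address map alone carries the results back to the source level.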
Message Flow Analysis with Complex Causal Links for Distributed ROS 2 Systems
Distributed robotic systems rely heavily on the publish-subscribe
communication paradigm and middleware frameworks that support it, such as the
Robot Operating System (ROS), to efficiently implement modular computation
graphs. The ROS 2 executor, a high-level task scheduler which handles ROS 2
messages, is a performance bottleneck. We extend ros2_tracing, a framework with
instrumentation and tools for real-time tracing of ROS 2, with the analysis and
visualization of the flow of messages across distributed ROS 2 systems. Our
method detects one-to-many and many-to-many causal links between input and
output messages, including indirect causal links through simple user-level
annotations. We validate our method on both synthetic and real robotic systems,
and demonstrate its low runtime overhead. Moreover, the underlying intermediate
execution representation database can be further leveraged to extract
additional metrics and high-level results. This can provide valuable timing and
scheduling information to further study and improve the ROS 2 executor as well
as optimize any ROS 2 system. The source code is available at:
https://github.com/christophebedard/ros2-message-flow-analysis.
Comment: 14 pages, 12 figures
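The one-to-many link detection can be illustrated with a toy reconstruction. This is a hedged sketch, not the ros2_tracing implementation; the event fields and message ids are invented. Messages received and published within the same callback instance are treated as causally linked.

```python
# Illustrative sketch (not the ros2_tracing code): reconstruct one-to-many
# causal links between input and output messages from trace events,
# keyed by the callback instance that handled them.
from collections import defaultdict

def causal_links(events):
    """events: list of dicts with keys 'kind' ('recv' or 'pub'),
    'callback' (callback-instance id), and 'msg' (message id).
    Returns dict: input msg -> set of output msgs it causally precedes."""
    inputs = defaultdict(set)   # callback instance -> received messages
    outputs = defaultdict(set)  # callback instance -> published messages
    for e in events:
        (inputs if e["kind"] == "recv" else outputs)[e["callback"]].add(e["msg"])
    links = defaultdict(set)
    for cb, ins in inputs.items():
        for m_in in ins:
            links[m_in] |= outputs[cb]
    return dict(links)

events = [
    {"kind": "recv", "callback": 1, "msg": "camera/42"},
    {"kind": "pub",  "callback": 1, "msg": "detections/7"},
    {"kind": "pub",  "callback": 1, "msg": "debug/7"},   # one-to-many link
]
print(causal_links(events))
```

Indirect links, per the abstract, would be added by user-level annotations that declare an output depends on inputs handled by a different callback; the same set-union step then applies across the annotated pair.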
The timing and funding of CHAPS sterling payments
Real-time gross settlement (RTGS) systems such as CHAPS Sterling require large amounts of liquidity to support payment activity. To meet their liquidity needs, RTGS participants borrow from the central bank or rely on incoming payments from other participants. Both options can prove costly -- the latter in particular if participants delay outgoing payments until incoming ones arrive. This article presents an empirical analysis of the timing and funding of payments in CHAPS. The authors seek to identify the factors driving the intraday profile of payment activity and the extent to which incoming funds are used as a funding source, a process known as liquidity recycling. They show that the level of liquidity recycling in CHAPS is high and stable throughout the day, and attribute this result to several features of the system. First, the settlement of time-critical payments provides liquidity to the system early in the settlement day; this liquidity can be recycled for the funding of less urgent payments. Second, CHAPS throughput guidelines provide a centralised coordination mechanism, in effect limiting any tendency toward payment delay. Third, the relatively small direct membership of CHAPS facilitates coordination between members, for example, through the use of bilateral net sender limits. Coordination encourages banks to maintain a relatively constant flux of payments throughout the day. The authors also argue that the high level of recycling helps to reduce liquidity risk, and that the relatively smooth intraday distribution of payments serves to mitigate operational risk associated with highly concentrated payment activity. They note, however, that the benefits of liquidity recycling are not evenly distributed between members of CHAPS.
Keywords: Payment systems; Bank liquidity; Risk; Electronic funds transfers
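Liquidity recycling can be made concrete with a toy calculation. The metric below is one plausible formalisation, not necessarily the authors' measure, and the numbers are invented rather than CHAPS data: a bank's own-liquidity need is its peak intraday net debit position, and the rest of its outgoing value is funded by recycled incoming payments.

```python
# Hedged illustration of liquidity recycling in an RTGS system (toy numbers,
# and an assumed definition of the recycling metric, not the article's).

def recycling_ratio(payments):
    """payments: chronological list of signed amounts from one bank's view
    (negative = outgoing, positive = incoming). Returns the fraction of
    outgoing value funded by incoming payments rather than own liquidity."""
    sent = sum(-p for p in payments if p < 0)
    balance, peak_need = 0, 0
    for p in payments:
        balance += p
        peak_need = max(peak_need, -balance)  # deepest net debit position
    return 1 - peak_need / sent if sent else 0.0

# Bank pays 100 early, receives 100, then pays 100 again: only 100 of own
# liquidity covers 200 of outgoing payments -> 50% recycled.
print(recycling_ratio([-100, +100, -100]))  # 0.5
```

Under this definition, delaying payments until funds arrive drives the ratio toward 1 at the cost of settlement delay, which is exactly the trade-off the throughput guidelines are said to limit.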
Dynamic safety analysis of decommissioning and abandonment of offshore oil and gas installations
The global oil and gas industry has seen an increase in the number of installations moving towards decommissioning. Offshore decommissioning is a complex, challenging and costly activity, making safety one of the major concerns. The decommissioning operation is, therefore, riskier than capital projects, partly due to the uniqueness of every offshore installation, and mainly because these installations were not designed for removal during their development phases. The extent of associated risks is deep and wide due to limited data and incomplete knowledge of the equipment conditions. For this reason, it is important to capture every uncertainty that can be introduced at the operational level, or existing hazards due to the hostile environment, technical difficulties, and the timing of the decommissioning operations. Conventional accident modelling techniques cannot capture the complex interactions among contributing elements. To assess the safety risks, a dynamic safety analysis of the accident is, thus, necessary. In this thesis, a dynamic integrated safety analysis model is proposed and developed to capture both planned and evolving risks during the various stages of decommissioning. First, the failure data are obtained from source-to-source and are processed utilizing Hierarchical Bayesian Analysis. Then, the system failure and potential accident scenarios are built on a bowtie model, which is mapped into a Bayesian network with advanced relaxation techniques. The Dynamic Integrated Safety Analysis (DISA) allows for the combination of reliability tools to identify safety-critical causal factors and their evolution into a single undesirable failure through the utilisation of source-to-source variability, time-dependent prediction, diagnostics, and economic risk assessment to support effective recommendations and decision-making. The DISA framework is applied to the Elgin platform well abandonment and the Brent Alpha jacket structure decommissioning, and the results are validated through sensitivity analysis. Through a dynamic-diagnostic and multi-factor regression analysis, the loss values of accident contributory factors are also presented. The study shows that integrating Hierarchical Bayesian Analysis (HBA) and dynamic Bayesian network (DBN) applications to model time-variant risks is essential to achieve a well-informed decommissioning decision through the identification of safety-critical barriers that can be mitigated to drive down the cost of remediation.
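The bowtie-to-Bayesian-network idea can be sketched with simple OR/AND gates over independent basic events. All probabilities below are hypothetical, not Elgin or Brent Alpha data, and a real DBN would add time-dependence and conditional tables rather than these closed-form gates.

```python
# Minimal sketch of the bowtie-to-Bayesian-network mapping (hypothetical
# probabilities; a DBN generalises these gates with time-indexed nodes).

def or_gate(ps):
    """Probability that at least one independent cause occurs."""
    q = 1.0
    for p in ps:
        q *= (1 - p)
    return 1 - q

def and_gate(ps):
    """Probability that all independent causes occur."""
    q = 1.0
    for p in ps:
        q *= p
    return q

# Threat side: dropped object OR (structural degradation AND lifting error).
p_top = or_gate([0.02, and_gate([0.1, 0.05])])
# Consequence side: escalation only if the safety barrier also fails (P = 0.1).
p_escalation = p_top * 0.1
print(round(p_top, 4), round(p_escalation, 5))
```

Diagnostic use runs the same network in reverse: conditioning on the top event and asking which basic event most raises its posterior identifies the safety-critical barrier to reinforce.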
Modeling and visualizing networked multi-core embedded software energy consumption
In this report we present a network-level multi-core energy model and a
software development process workflow that allows software developers to
estimate the energy consumption of multi-core embedded programs. This work
focuses on a high performance, cache-less and timing predictable embedded
processor architecture, XS1. Prior modelling work is improved to increase
accuracy, then extended to be parametric with respect to voltage and frequency
scaling (VFS) and then integrated into a larger scale model of a network of
interconnected cores. The modelling is supported by enhancements to an open
source instruction set simulator to provide the first network timing aware
simulations of the target architecture. Simulation based modelling techniques
are combined with methods of results presentation to demonstrate how such work
can be integrated into a software developer's workflow, enabling the developer
to make informed, energy-aware coding decisions. A set of single-,
multi-threaded and multi-core benchmarks is used to exercise and evaluate the
models and provide use-case examples for how results can be presented and
interpreted. The models all yield accuracy within an average +/-5% error margin.
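The shape of such a VFS-parametric model can be sketched as follows. The constants here are invented, not the calibrated XS1 model from the report: dynamic power scales with C·V²·f, leakage roughly with V, and network-level energy sums per-core energies plus a per-message communication cost.

```python
# Illustrative per-core energy model, parametric in voltage and frequency
# (all constants are invented, not the report's calibrated values).

def core_energy(v, f_hz, t_s, c=1.0e-9, i_leak=0.01):
    dynamic = c * v**2 * f_hz   # switching power, proportional to V^2 * f
    static = v * i_leak         # leakage, roughly proportional to V
    return (dynamic + static) * t_s

def network_energy(cores, comms, e_per_msg=1e-6):
    """cores: list of (V, f, active_time) per core; comms: message count."""
    return sum(core_energy(v, f, t) for v, f, t in cores) + comms * e_per_msg

# Two cores at 1.0 V / 500 MHz for 1 s, exchanging 1000 messages:
total_j = network_energy([(1.0, 500e6, 1.0), (1.0, 500e6, 1.0)], comms=1000)
print(total_j)
```

Making V and f explicit parameters is what lets a developer compare VFS operating points for the same workload before committing to one.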
A Further Drop Into Quiescence By The Eclipsing Neutron Star 4U 2129+47
The low-mass X-ray binary 4U 2129+47 was discovered during a previous X-ray outburst phase and was classified as an accretion disk corona source. A 1% delay between two mid-eclipse epochs measured ~22 days apart was reported from two XMM-Newton observations taken in 2005, providing support to the previous suggestion that 4U 2129+47 might be in a hierarchical triple system. In this work, we present timing and spectral analysis of three recent XMM-Newton observations of 4U 2129+47, carried out between 2007 November and 2008 January. We found that, except for the two 2005 XMM-Newton observations, all other observations are consistent with a linear ephemeris with a constant period of 18857.63 s; however, we confirm the time delay reported for the two 2005 XMM-Newton observations. Compared to a Chandra observation taken in 2000, these new observations also confirm the disappearance of the sinusoidal modulation of the light curve as reported from the two 2005 XMM-Newton observations. We further show that, compared to the Chandra observation, all of the XMM-Newton observations have 40% lower 0.5-2 keV absorbed fluxes, and the most recent XMM-Newton observations have a combined 2-6 keV flux that is nearly 80% lower. Taken as a whole, the timing results support the hypothesis that the system is in a hierarchical triple system (with a third-body period of at least 175 days). The spectral results raise the question of whether the drop in soft X-ray flux is solely attributable to the loss of the hard X-ray tail (which might be related to the loss of sinusoidal orbital modulation), or is indicative of further cooling of the quiescent neutron star after cessation of residual, low-level accretion.
Funding: United States. National Aeronautics and Space Administration (Grant NNX08AC66G); United States. National Aeronautics and Space Administration (Grant SV3-73016)
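The linear-ephemeris test described above amounts to comparing observed mid-eclipse times against T0 + n·P, with P = 18857.63 s from the abstract. A sketch with hypothetical observation times (only P is taken from the text):

```python
# Observed-minus-calculated (O-C) residuals against a linear eclipse
# ephemeris. P is from the abstract; all other numbers are hypothetical.

P = 18857.63  # orbital period in seconds

def o_minus_c(t0, observed):
    """observed: list of (cycle number n, observed mid-eclipse time in s).
    Returns O-C residuals in seconds against the linear ephemeris T0 + n*P."""
    return [t_obs - (t0 + n * P) for n, t_obs in observed]

# A ~22-day gap is roughly 100 orbital cycles; a residual near zero is
# consistent with a constant period, while a positive one signals a delay.
observed = [(0, 0.0), (100, 100 * P + 12.0)]  # second eclipse 12 s late
print(o_minus_c(0.0, observed))  # [0.0, 12.0]
```

A periodic trend in such residuals, rather than scatter, is what points to light-travel-time variation from a third body rather than timing noise.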
The Sleeping Monster: NuSTAR observations of SGR 1806-20, 11 years after the Giant Flare
We report the analysis of 5 NuSTAR observations of SGR 1806-20 spread over a
year from April 2015 to April 2016, more than 11 years following its Giant
Flare (GF) of 2004. The source spin frequency during the NuSTAR observations
follows a linear trend with a frequency derivative
Hz s, implying a surface dipole
equatorial magnetic field G. Thus, SGR 1806-20 has
finally returned to its historical minimum torque level measured between 1993
and 1998. The source showed strong timing noise for at least 12 years starting
in 2000, with increasing one order of magnitude between 2005 and
2011, following its 2004 major bursting episode and GF. SGR 1806-20 has not
shown strong transient activity since 2009 and we do not find short bursts in
the NuSTAR data. The pulse profile is complex with a pulsed fraction of
with no indication of energy dependence. The NuSTAR spectra are well
fit with an absorbed blackbody, keV, plus a power-law,
. We find no evidence for variability among the 5
observations, indicating that SGR 1806-20 has reached a persistent and
potentially its quiescent X-ray flux level after its 2004 major bursting
episode. Extrapolating the NuSTAR model to lower energies, we find that the
0.5-10 keV flux decay follows an exponential form with a characteristic
timescale days. Interestingly, the NuSTAR flux in this energy
range is a factor of weaker than the long-term average measured between
1993 and 2003, a behavior also exhibited in SGR . We discuss our
findings in the context of the magnetar model.
Comment: 10 pages, 5 figures, accepted for publication in ApJ
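The exponential flux-decay form stated above, F(t) = F0·exp(-t/τ), can be recovered from flux samples by a log-ratio. The values below are invented, not the SGR 1806-20 fluxes; only the functional form comes from the abstract.

```python
# Exponential flux decay F(t) = F0 * exp(-t / tau): the characteristic
# timescale follows from two (time, flux) samples. Numbers are illustrative.
import math

def decay_timescale(t1, f1, t2, f2):
    """e-folding time from two flux measurements on an exponential decay."""
    return (t2 - t1) / math.log(f1 / f2)

def flux(t, f0, tau):
    return f0 * math.exp(-t / tau)

tau = decay_timescale(0.0, 10.0, 1000.0, 5.0)  # flux halves over 1000 days
print(round(tau, 1))                           # e-folding time in days
print(round(flux(2000.0, 10.0, tau), 2))       # extrapolated later flux
```

In practice one fits all epochs at once rather than two points, but the two-point version shows why a characteristic timescale in days falls straight out of the flux ratio.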
A Model-Derivation Framework for Software Analysis
Model-based verification allows one to express behavioral correctness
conditions, such as the validity of execution states, bounds on variables, or
timing, at a high level of abstraction, and to affirm that they are satisfied
by a software system. However, this requires expressive models, which are
difficult and cumbersome to create and maintain by hand. This paper presents a
framework that
automatically derives behavioral models from real-sized Java programs. Our
framework builds on the EMF/ECore technology and provides a tool that creates
an initial model from Java bytecode, as well as a series of transformations
that simplify the model and eventually output a timed-automata model that can
be processed by a model checker such as UPPAAL. The framework has the following
properties: (1) consistency of models with software, (2) extensibility of the
model derivation process, (3) scalability and (4) expressiveness of models. We
report several case studies to validate how our framework satisfies these
properties.
Comment: In Proceedings MARS 2017, arXiv:1703.0581
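The overall shape of such a derivation pipeline, a chain of model-to-model transformations, can be sketched as follows. The toy model and transformation names are invented and are not the tool's EMF/ECore API.

```python
# Sketch of a model-derivation pipeline: an initial model (here extracted
# from bytecode in the real framework) flows through simplifying
# transformations. Model structure and names are hypothetical.
from functools import reduce

def derive(model, transformations):
    """Apply a chain of model-to-model transformations in order."""
    return reduce(lambda m, t: t(m), transformations, model)

def drop_unreachable(model):
    """Remove states unreachable from 'init' (a typical simplification)."""
    reachable, frontier = set(), ["init"]
    while frontier:
        s = frontier.pop()
        if s not in reachable:
            reachable.add(s)
            frontier += model["edges"].get(s, [])
    return {"edges": {s: e for s, e in model["edges"].items() if s in reachable}}

# Toy 'model': states with transition edges; 'dead' is never reached.
model = {"edges": {"init": ["a"], "a": ["init"], "dead": ["a"]}}
print(derive(model, [drop_unreachable]))
```

Expressing each simplification as a pure model-to-model function is what makes the process extensible: a new transformation slots into the list without touching the others, and the final stage can emit a UPPAAL-style timed-automata model.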
Performance of two Askaryan Radio Array stations and first results in the search for ultra-high energy neutrinos
Ultra-high energy neutrinos are interesting messenger particles since, if
detected, they can transmit exclusive information about ultra-high energy
processes in the Universe. These particles, with energies above
, interact very rarely. Therefore, detectors that
instrument several gigatons of matter are needed to discover them. The ARA
detector is currently being constructed at South Pole. It is designed to use
the Askaryan effect, the emission of radio waves from neutrino-induced cascades
in the South Pole ice, to detect neutrino interactions at very high energies.
With antennas distributed among 37 widely-separated stations in the ice, such
interactions can be observed in a volume of several hundred cubic kilometers.
Currently 3 deep ARA stations are deployed in the ice, of which two have been
taking data since the beginning of the year 2013. In this publication, the ARA
detector "as-built" and calibrations are described. Furthermore, the data
reduction methods used to distinguish the rare radio signals from overwhelming
backgrounds of thermal and anthropogenic origin are presented. Using data from
only two stations over a short exposure time of 10 months, a neutrino flux
limit of is
calculated for a particle energy of 10^18 eV, which offers promise for the
full ARA detector.
Comment: 21 pages, 34 figures, 1 table, includes supplementary material
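For a counting search with zero surviving events, a 90% C.L. flux limit follows from an upper bound of about 2.44 events (the Feldman-Cousins value for zero observed, zero background) divided by the exposure. This is a generic sketch of that arithmetic, not the ARA analysis; the exposure value is hypothetical.

```python
# Generic null-result flux-limit arithmetic (hypothetical exposure, not
# the ARA value): limit = N_90 / exposure for zero observed events.

N_90 = 2.44  # Feldman-Cousins 90% C.L. upper bound for 0 observed, 0 background

def flux_limit(exposure_cm2_s_sr, n_up=N_90):
    """Upper limit on a per-energy flux for a zero-event counting search."""
    return n_up / exposure_cm2_s_sr

# Hypothetical effective exposure for two stations over ~10 months:
limit = flux_limit(1.0e16)
print(limit)  # events / (cm^2 s sr) per unit of the assumed flux model
```

The scaling is the main point: doubling stations or livetime halves the limit, which is why even a 10-month, two-station exposure "offers promise" for the full array.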
Spectral and timing properties of IGR J00291+5934 during its 2015 outburst
We report on the spectral and timing properties of the accreting millisecond
X-ray pulsar IGR J00291+5934 observed by XMM-Newton and NuSTAR during its 2015
outburst. The source is in a hard state dominated at high energies by a
comptonization of soft photons ( keV) by an electron population with
kT keV, and at lower energies by a blackbody component with
kT keV. A moderately broad, neutral Fe emission line and four narrow
absorption lines are also found. By investigating the pulse phase evolution, we
derived the best-fitting orbital solution for the 2015 outburst. Comparing the
updated ephemeris with those of the previous outbursts, we set a
confidence level interval s/s s/s on the orbital period derivative. Moreover, we
investigated the pulse profile dependence on energy finding a peculiar
behaviour of the pulse fractional amplitude and lags as a function of energy.
We performed phase-resolved spectroscopy, showing that the blackbody component
tracks the pulse profile remarkably well, indicating that this component
resides at the neutron star surface (hot spot).
Comment: 9 pages, 7 figures. Accepted for publication in MNRAS
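Pulse-profile analyses of this kind fold photon arrival times on the spin period and bin them in phase. A minimal sketch with illustrative numbers (the period and event times below are invented, not IGR J00291+5934 data):

```python
# Epoch folding: fold event arrival times on a trial period into a binned
# pulse profile. All numbers are illustrative.

def fold(times, period, nbins=8):
    """Return a pulse profile: event counts per phase bin."""
    profile = [0] * nbins
    for t in times:
        phase = (t % period) / period        # phase in [0, 1)
        profile[int(phase * nbins)] += 1
    return profile

# Events clustered near phase 0.25 of a 2 s trial period:
times = [0.5, 2.5, 4.5, 6.5, 1.9]
print(fold(times, 2.0, nbins=4))  # [0, 4, 0, 1]
```

Repeating the fold per energy band, as the abstract describes, is what exposes energy-dependent fractional amplitudes and phase lags; repeating it per phase bin with a spectral fit gives the phase-resolved spectroscopy.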