Combining hardware and software instrumentation to classify program executions
Several research efforts have studied ways to infer properties of software systems from program spectra gathered from the running systems, usually with software-level instrumentation. While these efforts appear to produce accurate classifications, a detailed understanding of their costs and potential cost-benefit tradeoffs is lacking. In this work we present a hybrid instrumentation approach that uses hardware performance counters to gather program spectra at very low cost. This underlying data is further augmented with data captured by minimal amounts of software-level instrumentation. We also
evaluate this hybrid approach by comparing it to other existing approaches. We conclude that these hybrid spectra can reliably distinguish failed executions from successful executions at a fraction of the runtime overhead of software-based execution data.
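The classification step the abstract describes can be illustrated with a minimal sketch: treat each execution's spectrum as a numeric vector of event counts and label a new execution by its nearer class centroid. This is an illustrative nearest-centroid scheme, not the paper's actual classifier, and the spectra values are invented toy data.

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length spectra vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def distance(a, b):
    """Euclidean distance between two spectra vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(spectrum, passing, failing):
    """Label a new execution's spectrum by its nearer class centroid."""
    d_pass = distance(spectrum, centroid(passing))
    d_fail = distance(spectrum, centroid(failing))
    return "fail" if d_fail < d_pass else "pass"

# Toy spectra: counts per instrumentation point (hypothetical data).
passing = [[10, 2, 0], [12, 1, 0]]
failing = [[3, 9, 5], [2, 8, 6]]
print(classify([11, 2, 1], passing, failing))  # -> pass
```

In the hybrid setting, most vector components would come cheaply from hardware performance counters, with a few components supplied by targeted software probes.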
Software Engineering as Instrumentation for the Long Tail of Scientific Software
The vast majority of the long tail of scientific software, the myriads of
tools that implement the many analysis and visualization methods for different
scientific fields, is highly specialized, purpose-built for a research project,
and has to rely on community uptake and reuse for its continued development and
maintenance. Although uptake cannot be controlled or even guaranteed, some of
the key factors that influence whether new users or developers decide to adopt
an existing tool or start a new one are about how easy or difficult it is to
use or enhance a tool for a purpose for which it was not originally designed.
The science of software engineering has produced techniques and practices that
would reduce or remove a variety of barriers to community uptake of software,
but for a variety of reasons employing trained software engineers as part of
the development of long tail scientific software has proven to be challenging.
As a consequence, community uptake of long tail tools is often far more
difficult than it would need to be, even though opportunities for reuse abound.
We discuss likely reasons why employing software engineering in the long tail
is challenging, and propose that many of those obstacles could be addressed in
the form of a cross-cutting non-profit center of excellence that makes software
engineering broadly accessible as a shared service, conceptually and in its
effect similar to shared instrumentation.
Inherent Limitations of Hybrid Transactional Memory
Several Hybrid Transactional Memory (HyTM) schemes have recently been
proposed to complement the fast, but best-effort, nature of Hardware
Transactional Memory (HTM) with a slow, reliable software backup. However, the
fundamental limitations of building a HyTM with nontrivial concurrency between
hardware and software transactions are still not well understood.
In this paper, we propose a general model for HyTM implementations, which
captures the ability of hardware transactions to buffer memory accesses, and
allows us to formally quantify and analyze the amount of overhead
(instrumentation) of a HyTM scheme. We prove the following: (1) it is
impossible to build a strictly serializable HyTM implementation that has both
uninstrumented reads and writes, even for weak progress guarantees, and (2)
under reasonable assumptions, in any opaque progressive HyTM, a hardware
transaction must incur instrumentation costs linear in the size of its data
set. We further provide two upper bound implementations whose instrumentation
costs are optimal with respect to their progress guarantees. In sum, this paper
captures for the first time an inherent trade-off between the degree of
concurrency a HyTM provides between hardware and software transactions, and the
amount of instrumentation overhead the implementation must incur.
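The linear instrumentation lower bound can be pictured with a toy cost model: in a progressive HyTM, every location a hardware transaction reads or writes must also touch per-location metadata (e.g. an ownership record) so that concurrent software transactions stay consistent. The model below is an illustrative accounting sketch of that claim, not the paper's formal construction.

```python
class HyTMCostModel:
    """Toy model: each hardware-transaction data access also performs
    one metadata access, so instrumentation cost grows linearly with
    the transaction's data set (illustrative, not the formal proof)."""

    def __init__(self):
        self.metadata_accesses = 0
        self.data_set = set()

    def tx_read(self, addr):
        self.data_set.add(addr)
        self.metadata_accesses += 1  # consult ownership record for addr

    def tx_write(self, addr):
        self.data_set.add(addr)
        self.metadata_accesses += 1  # update ownership record for addr

tx = HyTMCostModel()
for addr in range(8):
    tx.tx_read(addr)
# Instrumentation cost tracks the data-set size one-for-one.
assert tx.metadata_accesses == len(tx.data_set)
```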
LO-FAT: Low-Overhead Control Flow ATtestation in Hardware
Attacks targeting software on embedded systems are becoming increasingly
prevalent. Remote attestation is a mechanism that allows establishing trust in
embedded devices. However, existing attestation schemes are either static and
cannot detect control-flow attacks, or require instrumentation of software
incurring high performance overheads. To overcome these limitations, we present
LO-FAT, the first practical hardware-based approach to control-flow
attestation. By leveraging existing processor hardware features and
commonly-used IP blocks, our approach enables efficient control-flow
attestation without requiring software instrumentation. We show that our
proof-of-concept implementation based on a RISC-V SoC incurs no processor
stalls and requires reasonable area overhead. (Authors' pre-print version, to appear in the DAC 2017 proceedings.)
Using a graphical programming language to write CAMAC/GPIB instrument drivers
To reduce the complexities of conventional programming, graphical software was used in the development of instrumentation drivers. The graphical software provides a standard set of tools (graphical subroutines) which are sufficient to program the most sophisticated CAMAC/GPIB drivers. These tools were used to successfully develop instrumentation drivers for operating CAMAC/GPIB hardware from two different manufacturers, LeCroy and DSP. The use of these tools is presented for programming a LeCroy A/D Waveform Analyzer.
A software control system for the ACTS high-burst-rate link evaluation terminal
Control and performance monitoring of NASA's High Burst Rate Link Evaluation Terminal (HBR-LET) is accomplished by using several software control modules. Different software modules are responsible for controlling remote radio frequency (RF) instrumentation, supporting communication between a host and a remote computer, controlling the output power of the Link Evaluation Terminal, and displaying data. Remote commanding of microwave RF instrumentation and the LET digital ground terminal allows computer control of various experiments, including bit error rate measurements. Computer communication allows system operators to transmit and receive from the Advanced Communications Technology Satellite (ACTS). Finally, the output power control software dynamically controls the uplink output power of the terminal to compensate for signal loss due to rain fade. Included is a discussion of each software module and its applications.
EACOF: A Framework for Providing Energy Transparency to enable Energy-Aware Software Development
Making energy consumption data accessible to software developers is an
essential step towards energy efficient software engineering. The presence of
various different, bespoke and incompatible, methods of instrumentation to
obtain energy readings is currently limiting the widespread use of energy data
in software development. This paper presents EACOF, a modular Energy-Aware
Computing Framework that provides a layer of abstraction between sources of
energy data and the applications that exploit them. EACOF replaces
platform-specific instrumentation with two APIs: one accepts energy data
into the framework, while the other exposes that data to application
software. This allows developers
to profile their code for energy consumption in an easy and portable manner
using simple API calls. We outline the design of our framework and provide
details of the API functionality. In a use case, where we investigate the
impact of data bit width on the energy consumption of various sorting
algorithms, we demonstrate that the data obtained using EACOF provides
interesting, sometimes counter-intuitive, insights. All the code is available
online under an open source license. http://github.com/eaco
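The two-sided abstraction the abstract describes can be sketched minimally: providers publish readings through one interface, applications sample through the other, and neither side knows how the other is implemented. All names below are illustrative and are not EACOF's actual API.

```python
class EnergyFramework:
    """Minimal sketch of an EACOF-style abstraction layer: energy
    providers publish readings through one API, applications query
    through another. Names are illustrative, not EACOF's real API."""

    def __init__(self):
        self._readings = {}  # source name -> latest cumulative joules

    # Provider-side API: e.g. a RAPL- or meter-based backend.
    def publish(self, source, joules):
        self._readings[source] = joules

    # Consumer-side API: application code profiling its own energy use.
    def sample(self, source):
        return self._readings.get(source)

fw = EnergyFramework()
fw.publish("cpu_package", 12.5)
before = fw.sample("cpu_package")
# ... run the code region being profiled ...
fw.publish("cpu_package", 13.1)
consumed = fw.sample("cpu_package") - before  # ~0.6 J for the region
```

Because the application only calls the consumer-side API, the same profiling code ports across platforms once a provider exists for each.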
Mixing Hardware and Software Reversibility for Speculative Parallel Discrete Event Simulation
Speculative parallel discrete event simulation requires support for reversing processed events, also called state recovery, when causal inconsistencies are revealed. In this article we present an approach in which state recovery relies on a mix of hardware- and software-based techniques. We exploit the Hardware Transactional Memory (HTM) support offered by Intel Haswell CPUs to process events as in-memory transactions, which are committed only after their causal consistency is verified. At the same time, we exploit an innovative software-based reversibility technique, fully relying on transparent software instrumentation targeting x86/ELF objects, which enables undoing the side effects of events with no actual backward re-computation. Each thread within our speculative processing engine dynamically selects, on a per-event basis, which recovery mode to rely on (hardware vs. software) depending on varying runtime dynamics. These dynamics are captured by a lightweight analytic model indicating to what extent the HTM support (which pays no instrumentation cost) is efficient, and beyond what level of event parallelism its performance starts to degrade, e.g., due to excessive data conflicts while manipulating causality metadata within HTM-based transactions. We release our implementation as open source software and provide experimental results assessing its effectiveness. © Springer International Publishing Switzerland 2016
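The per-event mode selection can be sketched as a simple expected-cost comparison: HTM pays no instrumentation but wastes work on aborts under contention, while software reversibility pays a fixed instrumentation cost per event. The model and parameter values below are illustrative assumptions, not the paper's actual analytic model.

```python
def choose_recovery_mode(conflict_rate, instr_overhead, abort_cost):
    """Pick the recovery mode for one event (illustrative sketch).

    conflict_rate  -- estimated probability an HTM transaction aborts
    instr_overhead -- fixed cost of software instrumentation per event
    abort_cost     -- work wasted when an HTM transaction aborts
    """
    expected_htm_cost = conflict_rate * abort_cost
    return "htm" if expected_htm_cost < instr_overhead else "software"

# Low contention: HTM's abort risk is cheaper than instrumenting.
assert choose_recovery_mode(0.05, instr_overhead=1.0, abort_cost=10.0) == "htm"
# High contention: fall back to software-based undo of side effects.
assert choose_recovery_mode(0.40, instr_overhead=1.0, abort_cost=10.0) == "software"
```

In the described engine, each thread would re-evaluate such a decision per event as runtime conflict statistics evolve.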
Control System for the LEDA 6.7-MeV Proton Beam Halo Experiment
Measurement of high-power proton beam-halo formation is the ongoing
scientific experiment for the Low Energy Demonstration Accelerator (LEDA)
facility. To attain this measurement goal, a 52-magnet beam line containing
several types of beam diagnostic instrumentation is being installed. The
Experimental Physics and Industrial Control System (EPICS) and commercial
software applications are presently being integrated to provide a real-time,
synchronous data acquisition and control system. This system is comprised of
magnet control, vacuum control, motor control, data acquisition, and data
analysis. Unique requirements led to the development and integration of
customized software and hardware. EPICS real-time databases, Interactive Data
Language (IDL) programs, LabVIEW Virtual Instruments (VI), and State Notation
Language (SNL) sequences are hosted on VXI, PC, and UNIX-based platforms which
interact using the EPICS Channel Access (CA) communication protocol.
Acquisition and control hardware technology ranges from DSP-based diagnostic
instrumentation to the PLC-controlled vacuum system. This paper describes the
control system hardware and software design and implementation. (LINAC2000 Conference, 4 pages.)