Characterization and Verification Environment for the RD53A Pixel Readout Chip in 65 nm CMOS
The RD53 collaboration is currently designing a large scale prototype pixel
readout chip in 65 nm CMOS technology for the phase 2 upgrades at the HL-LHC.
The RD53A chip will be available by the end of 2017 and will be
extensively tested to confirm whether the circuit and the architecture provide
a solid foundation for the final pixel readout chips for the experiments at the HL-LHC.
A test and data acquisition system for the RD53A chip is currently under
development to perform single-chip and multi-chip module measurements. In
addition, the verification of the RD53A design is performed in a dedicated
simulation environment. The concept and the implementation of the test and data
acquisition system and the simulation environment, which are based on a modular
data acquisition and system testing framework, are presented in this work
Advances in Architectures and Tools for FPGAs and their Impact on the Design of Complex Systems for Particle Physics
The continual improvement of semiconductor technology has provided rapid advancements in device frequency and density. Designers of electronics systems for high-energy physics (HEP) have benefited from these advancements, transitioning many designs from fixed-function ASICs to more flexible FPGA-based platforms. Today’s FPGA devices provide a significantly higher amount of resources than those available during the initial Large Hadron Collider design phase. To take advantage of the capabilities of future FPGAs in the next generation of HEP experiments, designers must not only anticipate further improvements in FPGA hardware, but must also adopt design tools and methodologies that can scale along with that hardware. In this paper, we outline the major trends in FPGA hardware, describe the design challenges these trends will present to developers of HEP electronics, and discuss a range of techniques that can be adopted to overcome these challenges.
Real-time data acquisition and processing system for MHz repetition rate image sensors
An electro-optic detector is one of the diagnostic setups used in particle accelerators. It employs an electro-optic crystal to encode the longitudinal beam charge profile in the spectrum of a light pulse. The charge distribution is then reconstructed from data captured by a fast spectrometer. The measurement repetition rate should match or exceed the machine bunching frequency, which is often in the range of several MHz. The high-speed optical line detector (HOLD) is a linear camera designed for easy integration with scientific experiments. Modern FPGA circuits enable efficient collection and processing of the data. The solution is based on Xilinx 7-Series FPGA circuits and implements a custom latency-optimized architecture utilizing the AXI4 family of interfaces. HOLD is one of the fastest line cameras in the world: thanks to its hardware architecture and the powerful KALYPSO sensor from KIT, it outperforms the fastest comparable commercial devices.
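The spectral-encoding principle can be illustrated with a toy calculation: each spectrometer pixel corresponds to a time slice of the bunch, and the charge profile shows up as a modulation relative to an unmodulated reference spectrum. The linear wavelength-to-time mapping and the simple normalization below are simplifying assumptions for illustration, not the HOLD implementation.

```python
# Hedged sketch: recovering a longitudinal charge profile from one line-camera
# frame, assuming a linear wavelength-to-time mapping and a known reference
# (unmodulated) spectrum. All numbers are illustrative.

def reconstruct_profile(frame, reference, dt_per_pixel):
    """Return (time_axis, profile): relative spectral modulation per pixel."""
    profile, time_axis = [], []
    for i, (s, r) in enumerate(zip(frame, reference)):
        mod = (s - r) / r if r else 0.0   # relative modulation at this pixel
        profile.append(mod)
        time_axis.append(i * dt_per_pixel)
    return time_axis, profile

reference = [100.0] * 8                              # flat reference spectrum
frame = [100, 110, 130, 160, 130, 110, 100, 100]     # a bunch-like bump
t, p = reconstruct_profile(frame, reference, dt_per_pixel=0.5e-12)
assert p.index(max(p)) == 3                          # peak at the third slice
```

At MHz repetition rates this per-frame arithmetic must run in the FPGA data path rather than in software, which motivates the latency-optimized AXI4 architecture described above.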
When a Patch is Not Enough - HardFails: Software-Exploitable Hardware Bugs
In this paper, we take a deep dive into microarchitectural security from a
hardware designer's perspective by reviewing the existing approaches to detect
hardware vulnerabilities during the design phase. We show that a protection gap
currently exists in practice that leaves chip designs vulnerable to
software-based attacks. In particular, existing verification approaches fail to
detect specific classes of vulnerabilities, which we call HardFails: these bugs
evade detection by current verification techniques while being exploitable from
software. We demonstrate such vulnerabilities in real-world SoCs using RISC-V
to showcase and analyze concrete instantiations of HardFails. Patching these
hardware bugs may not always be possible and can potentially result in a
product recall. We base our findings on two extensive case studies: the recent
Hack@DAC 2018 hardware security competition, where 54 independent teams of
researchers competed world-wide over a period of 12 weeks to catch inserted
security bugs in SoC RTL designs, and an in-depth systematic evaluation of
state-of-the-art verification approaches. Our findings indicate that even
combinations of techniques will miss high-impact bugs due to the large number
of modules with complex interdependencies and fundamental limitations of
current detection approaches. We also craft a real-world software attack that
exploits one of the RTL bugs from Hack@DAC that evaded detection and discuss
novel approaches to mitigate the growing problem of cross-layer bugs at design
time.
RIPencapsulation: Defeating IP Encapsulation on TI MSP Devices
Internet of Things (IoT) devices sit at the intersection of unwieldy software
complexity and unprecedented attacker access. This unique position comes with a
daunting security challenge: how can I protect both proprietary code and
confidential data on a device that the attacker has unfettered access to?
Trusted Execution Environments (TEEs) promise to solve this challenge through
hardware-based separation of trusted and untrusted computation and data. While
TEEs do an adequate job of protecting secrets on desktop-class devices, we
reveal that trade-offs made in one of the most widely-used commercial IoT
devices undermine their TEE's security.
This paper uncovers two fundamental weaknesses in IP Encapsulation (IPE), the
TEE deployed by Texas Instruments for MSP430 and MSP432 devices. We observe
that lack of call site enforcement and residual state after unexpected TEE
exits enable an attacker to reveal all proprietary code and secret data within
the IPE. We design and implement an attack called RIPencapsulation, which
systematically executes portions of code within the IPE and uses the partial
state revealed through the register file to exfiltrate secret data and to
identify gadget instructions. The attack then uses gadget instructions to
reveal all proprietary code within the IPE. Our evaluation with commodity
devices and a production compiler and settings shows that -- even after
following all manufacturer-recommended secure coding practices --
RIPencapsulation reveals, within minutes, both the code and keys from
third-party cryptographic implementations protected by the IPE.
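The core weakness (residual register state after an unexpected TEE exit) can be modelled abstractly. The following Python toy is a conceptual sketch of the attack idea only, not MSP430 code; the one-step "TEE", the register names, and the secret are all invented.

```python
# Conceptual sketch: if a TEE leaves register state behind after an unexpected
# exit, an attacker who can start execution at arbitrary points inside the
# protected region can collect the partial states and reassemble the secret.

SECRET = [0xDE, 0xAD, 0xBE, 0xEF]   # data the toy TEE is supposed to protect

def run_until_interrupt(entry_index):
    """Toy 'TEE': executes from entry_index, is interrupted after one step,
    and (insecurely) leaves the just-loaded secret byte in a register."""
    r4 = SECRET[entry_index]          # protected code loads a secret byte
    # ... interrupt fires here; the register file is NOT cleared on exit ...
    return {"r4": r4}                 # residual state visible to the attacker

def exfiltrate(num_bytes):
    """Sweep entry points and read the leaked register after each exit."""
    return [run_until_interrupt(i)["r4"] for i in range(num_bytes)]

assert exfiltrate(4) == SECRET        # secret fully recovered byte by byte
```

In the real attack the same sweep additionally identifies gadget instructions inside the protected region, which are then reused to dump the proprietary code itself.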
SW-VHDL Co-Verification Environment Using Open Source Tools
The verification of complex digital designs often involves the use of expensive simulators.
The present paper proposes an approach to verify a specific family of complex hardware/software
systems, whose hardware part, running on an FPGA, communicates with a software counterpart
executed on an external processor, such as a user/operator software running on an external PC.
The hardware is described in VHDL and the software may be described in any computer language
that can be interpreted or compiled into a (Linux) executable file. The presented approach uses open
source tools, avoiding expensive license costs and usage restrictions.
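The hardware/software link in such a co-verification setup can be pictured as a plain byte stream between the two sides. In the Python sketch below, a local socket pair stands in for the FPGA/simulator link, and a thread stands in for the simulated VHDL design; the 2-byte command protocol is invented for the example and is not the paper's actual interface.

```python
# Illustrative co-simulation sketch: software exchanges messages with the
# simulated hardware over a byte stream. The hardware_model thread plays the
# role of the VHDL design; the protocol (little-endian 16-bit words) is made up.

import socket
import struct
import threading

def hardware_model(conn):
    """Stand-in for the simulated design: replies with (value + 1) mod 2^16."""
    data = conn.recv(2)
    (value,) = struct.unpack("<H", data)
    conn.sendall(struct.pack("<H", (value + 1) & 0xFFFF))
    conn.close()

sw_side, hw_side = socket.socketpair()        # stands in for the FPGA link
t = threading.Thread(target=hardware_model, args=(hw_side,))
t.start()

sw_side.sendall(struct.pack("<H", 41))        # software sends a stimulus
(result,) = struct.unpack("<H", sw_side.recv(2))
t.join()
sw_side.close()
assert result == 42                           # response checked on the SW side
```

Because only the byte stream is shared, either end can be swapped out: the software side for any Linux executable, the hardware side for an open-source VHDL simulator driving the same channel.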
The state of adoption and the challenges of systematic variability management in industry
Handling large-scale software variability is still a challenge for many organizations. After decades of research on variability management concepts, many industrial organizations have introduced techniques known from research, but still lament that pure textbook approaches are not applicable or efficient. For instance, software product line engineering—an approach to systematically develop portfolios of products—is difficult to adopt given the high upfront investments; and even when adopted, organizations are challenged by evolving their complex product lines. Consequently, the research community now mainly focuses on re-engineering and evolution techniques for product lines; yet, understanding the current state of adoption and the industrial challenges for organizations is necessary to conceive effective techniques. In this multiple-case study, we analyze the current adoption of variability management techniques in twelve medium- to large-scale industrial cases in domains such as automotive, aerospace, and railway systems. We identify the current state of variability management, emphasizing the techniques and concepts they adopted. We elicit the needs and challenges expressed for these cases, triangulated with results from a literature review. We believe our results help to understand the current state of adoption and shed light on gaps to address in industrial practice. This work is supported by Vinnova Sweden, Fonds Unique Interministériel (FUI) France, and the Swedish Research Council.
Open access funding provided by the University of Gothenburg.
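The kind of systematic variability management discussed above is often grounded in a feature model: a set of features plus constraints that a product configuration must satisfy. The Python sketch below checks a configuration against a toy model; the feature names and constraints are invented for illustration.

```python
# Minimal feature-model sketch: a product configuration (a set of selected
# features) is validated against requires/excludes constraints. All names
# are hypothetical examples, not from the study.

FEATURES = {"engine_petrol", "engine_electric", "cruise_control", "radar"}
REQUIRES = {"cruise_control": {"radar"}}           # cruise control needs radar
EXCLUDES = [("engine_petrol", "engine_electric")]  # mutually exclusive engines

def is_valid(selection):
    """Return True if the selection is a consistent product configuration."""
    if not selection <= FEATURES:                  # only known features allowed
        return False
    for feat, deps in REQUIRES.items():
        if feat in selection and not deps <= selection:
            return False                           # a dependency is missing
    for a, b in EXCLUDES:
        if a in selection and b in selection:
            return False                           # exclusion violated
    return True

assert is_valid({"engine_electric", "cruise_control", "radar"})
assert not is_valid({"cruise_control"})                    # radar missing
assert not is_valid({"engine_petrol", "engine_electric"})  # exclusion
```

Industrial feature models contain thousands of features and far richer constraints, which is one reason the upfront investment mentioned in the abstract is high; dedicated solvers replace this naive check at scale.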