Extending systems-on-chip to the third dimension: performance, cost and technological tradeoffs
Because of today's market demand for high-performance, high-density portable hand-held applications, electronic system design has shifted its focus from 2-D planar single-chip SoC solutions to alternative options such as tiled silicon, single-level embedded modules, and 3-D integration. Among these choices, finding an optimal system implementation usually involves cost, performance, and other technological trade-off analyses at the system conceptual level. It has been observed that decisions made within the first 20% of the total design cycle ultimately determine up to 80% of the final product cost. In this paper, we discuss appropriate and realistic metrics for performance and cost trade-off analysis in three-dimensional integration, both at the system conceptual level (up-front in the design phase) and at the implementation phase for verification. To validate the methodology, two ubiquitous electronic systems are analyzed under various implementation schemes, and the pros and cons of each are discussed.
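A cost/performance trade-off analysis of the kind described above can be sketched as a simple figure-of-merit comparison across implementation options. The option names, performance/cost figures, and the weighted-ratio metric below are illustrative assumptions, not the paper's actual metrics or data:

```python
# Hypothetical figure-of-merit for comparing implementation options
# (2-D planar SoC, tiled silicon, 3-D stack). All numbers are made-up
# placeholders for illustration, normalised to the 2-D baseline.

def figure_of_merit(performance, cost, perf_weight=0.5):
    """Weighted ratio: higher performance and lower cost both raise the score."""
    return (performance ** perf_weight) / (cost ** (1.0 - perf_weight))

options = {
    "2-D planar SoC": {"performance": 1.0, "cost": 1.0},
    "tiled silicon":  {"performance": 1.3, "cost": 1.2},
    "3-D stack":      {"performance": 1.8, "cost": 1.5},
}

# Pick the option with the best performance-per-cost score
best = max(options, key=lambda name: figure_of_merit(**options[name]))
```

With these placeholder numbers the 3-D stack wins despite its higher cost, because its performance gain outweighs the cost penalty under an equal weighting; shifting `perf_weight` toward 0 would favour the cheaper options instead.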
Construction and commissioning of a technological prototype of a high-granularity semi-digital hadronic calorimeter
A large prototype of 1.3 m³ was designed and built as a demonstrator of the semi-digital hadronic calorimeter (SDHCAL) concept proposed for the future ILC experiments. The prototype is a sampling hadronic calorimeter of 48 units. Each unit is built of an active layer made of a 1 m² Glass Resistive Plate Chamber (GRPC) detector placed inside a cassette whose walls are made of stainless steel. The cassette also contains the electronics used to read out the GRPC detector. The lateral granularity of the active layer is provided by the electronics pick-up pads of 1 cm² each. The cassettes are inserted into a self-supporting mechanical structure, also built of stainless steel plates, which, together with the cassette walls, plays the role of the absorber. The prototype was designed to be very compact, and important efforts were made to minimize the number of service cables in order to optimize the efficiency of the Particle Flow Algorithm techniques to be used in the future ILC experiments. The different components of the SDHCAL prototype were studied individually, and strict criteria were applied for the final selection of these components. Basic calibration procedures were performed after the prototype assembly. The prototype is the first of a series of new-generation detectors equipped with a power-pulsing mode intended to reduce the power consumption of this highly granular detector. A dedicated acquisition system was developed to deal with the output of more than 440,000 electronics channels in both trigger and triggerless modes. After its completion in 2011, the prototype was commissioned using cosmic rays and particle beams at CERN. (49 pages, 41 figures)
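The quoted channel count follows from the granularity figures in the abstract. As a back-of-envelope check (assuming, as in the actual SDHCAL readout, that each 1 m² layer is instrumented as a 96 × 96 matrix of 1 cm² pads; the exact matrix size is an assumption here, not stated in the abstract):

```python
# Channel-count estimate for the SDHCAL prototype:
# 48 active layers, each ~1 m^2 read out by 1 cm^2 pick-up pads
# arranged (by assumption) as a 96 x 96 pad matrix.

pads_per_layer = 96 * 96        # 1 cm^2 pads instrumenting ~1 m^2
layers = 48
total_channels = layers * pads_per_layer
# 48 * 9216 = 442368 channels, i.e. "more than 440,000" as quoted
```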
Desynchronization: Synthesis of asynchronous circuits from synchronous specifications
Asynchronous implementation techniques, which measure logic delays at run time and activate registers accordingly, are inherently more robust than their synchronous counterparts, which estimate worst-case delays at design time and constrain the clock cycle accordingly. Desynchronization is a new paradigm for automating the design of asynchronous circuits from synchronous specifications, thus permitting widespread adoption of asynchronicity without requiring special design skills or tools. In this paper, we first study different protocols for desynchronization and formally prove their correctness, using techniques originally developed for distributed deployment of synchronous language specifications. We also provide a taxonomy of existing protocols for asynchronous latch controllers, covering in particular the four-phase handshake protocols devised in the literature for micropipelines. We then propose a new controller which exhibits provably maximal concurrency, and analyze the performance of desynchronized circuits with respect to the original synchronous optimized implementation. We finally demonstrate the feasibility and effectiveness of our approach by showing its application to a set of real designs, including a complete implementation of the DLX microprocessor architecture.
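The four-phase (return-to-zero) handshake mentioned above can be illustrated with a minimal event-ordering model. This is a pedagogical sketch of the generic protocol, not any of the paper's specific latch controllers, and the dictionary-based channel is an assumption for illustration:

```python
# Minimal model of a four-phase handshake on a bundled-data channel:
# one transfer is the event sequence req+ -> ack+ -> req- -> ack-,
# after which both wires have returned to zero, ready for the next transfer.

def four_phase_transfer(data, channel):
    """Perform one four-phase transfer over `channel`; return (data, event trace)."""
    events = []
    channel["data"] = data
    channel["req"] = True;  events.append("req+")   # sender asserts request
    channel["ack"] = True;  events.append("ack+")   # receiver latches data, acknowledges
    received = channel["data"]                      # data is valid while req is high
    channel["req"] = False; events.append("req-")   # sender returns to zero
    channel["ack"] = False; events.append("ack-")   # receiver returns to zero
    return received, events

channel = {"req": False, "ack": False, "data": None}
value, trace = four_phase_transfer(42, channel)
```

The two return-to-zero events are pure overhead, which is why the literature also explores two-phase (transition-signalling) variants and, as in this paper, controllers that overlap phases to maximize concurrency.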
Efficient FPGA implementation and power modelling of image and signal processing IP cores
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Field Programmable Gate Arrays (FPGAs) are the technology of choice in a number of image and signal processing application areas, such as consumer electronics, instrumentation, medical data processing and avionics, due to their reasonable energy consumption, high performance, security, low design turnaround time and reconfigurability. Low-power FPGA devices are also emerging as competitive solutions for mobile and thermally constrained platforms. Most computationally intensive image and signal processing algorithms also consume a lot of power, leading to a number of issues including reduced mobility, reliability concerns and increased design cost, among others. Power dissipation has become one of the most important challenges, particularly for FPGAs. Addressing this problem requires optimisation and awareness at all levels of the design flow. The key achievements of the work presented in this thesis are summarised here. Behavioural-level optimisation strategies have been used for implementing matrix products and inner products through the use of mathematical techniques such as Distributed Arithmetic (DA) and its variations, including offset binary coding, sparse factorisation and novel vector-level transformations. Applications used to test the impact of these algorithmic and arithmetic transformations include the fast Hadamard/Walsh transforms and Gaussian mixture models. Complete design space exploration has been performed on these cores and, where appropriate, they have been shown to clearly outperform comparable existing implementations. At the architectural level, strategies such as parallelism, pipelining and systolisation have been successfully applied to the design and optimisation of a number of cores, including colour space conversion, the finite Radon transform, the finite ridgelet transform and circular convolution. A pioneering study into the influence of supply voltage scaling for FPGA-based designs, used in conjunction with performance-enhancing strategies such as parallelism and pipelining, has been performed. Initial results are very promising and indicate significant potential for future research in this area.
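The Distributed Arithmetic technique mentioned above replaces the multipliers in an inner product with a precomputed look-up table addressed by one bit plane of the inputs at a time. The following is a behavioural sketch of the basic (unsigned, non-offset-binary) form, not the thesis's optimised hardware variants:

```python
# Distributed-arithmetic inner product y = sum_k c_k * x_k for unsigned
# B-bit inputs: precompute a LUT over all 2^K coefficient subsets, then
# shift-accumulate one LUT read per bit plane (no multipliers needed).

def da_inner_product(coeffs, xs, bits=8):
    """Multiplier-free inner product of fixed coeffs with unsigned B-bit inputs."""
    K = len(coeffs)
    # LUT entry j = sum of coefficients c_k for which bit k of j is set
    lut = [sum(c for k, c in enumerate(coeffs) if (j >> k) & 1)
           for j in range(1 << K)]
    acc = 0
    for b in range(bits):
        # Form the LUT address from bit b of every input word
        addr = sum(((x >> b) & 1) << k for k, x in enumerate(xs))
        acc += lut[addr] << b          # shift-accumulate this bit plane
    return acc
```

Correctness follows from exchanging the order of summation: summing `2**b * lut[addr_b]` over bit planes reproduces `sum(c * x for c, x in zip(coeffs, xs))`. The hardware cost is one `2**K`-entry ROM and an accumulator, which is why DA suits FPGA LUT fabrics so well; offset binary coding, as used in the thesis, halves the table size.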
A key contribution of this work is the development of a novel high-level power macromodelling technique for design space exploration and characterisation of custom IP cores for FPGAs, called Functional Level Power Analysis and Modelling (FLPAM). FLPAM is scalable, platform independent and compares favourably with existing approaches. A hybrid, top-down design flow paradigm integrating FLPAM with commercially available design tools for the systematic optimisation of IP cores has also been developed.
The Heavy Photon Search test detector
The Heavy Photon Search (HPS), an experiment to search for a hidden-sector photon in fixed-target electroproduction, is preparing for installation at the Thomas Jefferson National Accelerator Facility (JLab) in the Fall of 2014. As the first stage of this project, the HPS Test Run apparatus was constructed and operated in 2012 to demonstrate the experiment's technical feasibility and to confirm that the trigger rates and occupancies are as expected. This paper describes the HPS Test Run apparatus and readout electronics and their performance. In this setting, a heavy photon can be identified as a narrow peak in the e+e− invariant mass spectrum above the trident background, or as a narrow invariant mass peak with a decay vertex displaced from the production target, so charged-particle tracking and vertexing are needed for its detection. In the HPS Test Run, charged particles are measured with a compact forward silicon microstrip tracker inside a dipole magnet. Electromagnetic showers are detected in a PbWO4 crystal calorimeter situated behind the magnet, and are used to trigger the experiment and identify electrons and positrons. Both detectors are placed close to the beam line and split top-bottom. This arrangement provides sensitivity to low-mass heavy photons, allows clear passage of the unscattered beam, and avoids the spray of degraded electrons coming from the target. The discrimination between prompt and displaced e+e− pairs requires the first layer of silicon sensors to be placed only 10 cm downstream of the target. The expected signal is small and the trident background huge, so the experiment requires very large statistics. Accordingly, the HPS Test Run utilizes high-rate readout and data acquisition electronics and a fast trigger to exploit the essentially 100% duty cycle of the CEBAF accelerator at JLab.
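The invariant-mass peak search described above rests on standard relativistic kinematics: for an e+e− pair, m² = (E₁ + E₂)² − |p₁ + p₂|². The sketch below computes this from track momenta in the massless-electron approximation; the example momenta are illustrative values in GeV, not HPS data:

```python
# Invariant mass of an e+e- pair from the two track three-momenta,
# neglecting the electron mass (E ~= |p|), as appropriate at GeV scales.

import math

def invariant_mass(p1, p2):
    """p1, p2: (px, py, pz) three-momenta in GeV; returns pair mass in GeV."""
    e1 = math.sqrt(sum(c * c for c in p1))          # E ~= |p| for m_e ~ 0
    e2 = math.sqrt(sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))    # total three-momentum
    m2 = (e1 + e2) ** 2 - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))                  # guard against round-off

# Back-to-back 0.5 GeV tracks: pair mass equals the full 1 GeV of energy
m = invariant_mass((0.5, 0.0, 0.0), (-0.5, 0.0, 0.0))
```

A heavy-photon signal would appear as an excess of pairs clustered at one value of this mass, on top of the smooth trident continuum; the displaced-vertex search additionally requires the reconstructed pair origin to lie downstream of the target.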