AutoAccel: Automated Accelerator Generation and Optimization with Composable, Parallel and Pipeline Architecture
CPU-FPGA heterogeneous architectures are attracting ever-increasing attention
in an attempt to advance computational capabilities and energy efficiency in
today's datacenters. These architectures provide programmers with the ability
to reprogram the FPGAs for flexible acceleration of many workloads.
Nonetheless, this advantage is often overshadowed by the poor programmability
of FPGAs, whose programming is conventionally an RTL design practice. Although
recent advances in high-level synthesis (HLS) significantly improve FPGA
programmability, programmers still face the challenge of
identifying the optimal design configuration in a tremendous design space.
This paper aims to address this challenge and pave the path from software
programs towards high-quality FPGA accelerators. Specifically, we first propose
the composable, parallel and pipeline (CPP) microarchitecture as a template of
accelerator designs. Such a well-defined template is able to support efficient
accelerator designs for a broad class of computation kernels, and more
importantly, drastically reduce the design space. Also, we introduce an
analytical model to capture the performance and resource trade-offs among
different design configurations of the CPP microarchitecture, which lays the
foundation for fast design space exploration. On top of the CPP
microarchitecture and its analytical model, we develop the AutoAccel framework
to make the entire accelerator generation automated. AutoAccel accepts a
software program as an input and performs a series of code transformations
based on the result of the analytical-model-based design space exploration to
construct the desired CPP microarchitecture. Our experiments show that the
AutoAccel-generated accelerators outperform their corresponding software
implementations by an average of 72x for a broad class of computation kernels.
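The abstract describes enumerating design configurations against an analytical performance/resource model. As a rough illustration of that idea, here is a minimal sketch of analytical-model-based design space exploration; the model, the parameters (PE count, pipeline initiation interval), and the resource numbers are all illustrative assumptions, not the paper's actual CPP model.

```python
# Hedged sketch: exhaustive design space exploration over a tiny
# analytical model, in the spirit of AutoAccel's model-based DSE.
# All constants below are assumed for illustration only.
from itertools import product

TOTAL_OPS = 1_000_000      # operations in the kernel (assumed)
DSP_BUDGET = 512           # FPGA DSP slices available (assumed)
DSP_PER_PE = 8             # DSP cost per processing element (assumed)

def latency_cycles(num_pes: int, pipeline_ii: int) -> float:
    """Analytical latency: work divided across processing elements,
    scaled by the pipeline initiation interval (II)."""
    return TOTAL_OPS / num_pes * pipeline_ii

def dsp_usage(num_pes: int) -> int:
    """Analytical resource model: DSP cost grows linearly with PEs."""
    return num_pes * DSP_PER_PE

def explore():
    """Enumerate the (small) design space and keep the fastest
    configuration that still fits the resource budget."""
    best = None
    for num_pes, ii in product([1, 2, 4, 8, 16, 32, 64], [1, 2, 4]):
        if dsp_usage(num_pes) > DSP_BUDGET:
            continue  # prune configurations that exceed the FPGA budget
        lat = latency_cycles(num_pes, ii)
        if best is None or lat < best[0]:
            best = (lat, num_pes, ii)
    return best

if __name__ == "__main__":
    lat, pes, ii = explore()
    print(f"best: {pes} PEs, II={ii}, latency={lat:.0f} cycles")
```

Because the analytical model makes each candidate's cost a closed-form expression, the whole space can be swept in microseconds rather than hours of synthesis runs, which is the point the abstract makes about fast design space exploration.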
Offshoring and the onshore composition of tasks and skills
We analyze the relationship between offshoring and the onshore workforce composition in
German multinational enterprises (MNEs), using plant data that allow us to discern tasks,
occupations, and workforce skills. Offshoring is associated with a statistically significant shift
towards more non-routine and more interactive tasks, and with a shift towards highly educated
workers. The shift towards highly educated workers is in excess of what is implied by changes
in either the task or the occupational composition. Offshoring to low-income countries, with
the exception of Central and Eastern European countries, is associated with stronger onshore
responses. We find offshoring to predict between 10 and 15 percent of observed changes
in wage-bill shares of highly educated workers and measures of non-routine and interactive
tasks.
The engineering design integration (EDIN) system
A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user-established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.
The National Transport Data Framework
Report by Professor Peter Landshoff (Cambridge University) and
Professor John Polak (Imperial College London) on a project for
the Department for Transport.
NTDF is designed to be a resource for data owners to deposit descriptions
into a central catalogue, so that people can search for data, find it,
and understand its characteristics. Its value extends to individuals, to
commercial organizations, and to public bodies. For example, services that
provide better information to travellers will help to make their journey
less stressful and persuade them to make more use of public transport.
Transport operators need very diverse information to help them
plan developments to their services: demographic, geographical, economic etc.
And policy makers need a similar range of information to help them decide
how to divide their budget and afterwards to evaluate how valuable it has
been. This work was supported by the Department for Transport (DfT).
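The catalogue the report describes, where data owners deposit dataset descriptions and users search them, can be sketched in a few lines. The field names, dataset names, and keyword-matching logic below are illustrative assumptions, not NTDF's actual design.

```python
# Hedged sketch of a central metadata catalogue: owners deposit
# descriptions of their datasets (not the data itself), and users
# search those descriptions by keyword. All entries are invented.
from dataclasses import dataclass, field

@dataclass
class Catalogue:
    entries: list = field(default_factory=list)

    def deposit(self, name: str, owner: str, description: str, keywords: list):
        """A data owner registers a description of a dataset."""
        self.entries.append({
            "name": name,
            "owner": owner,
            "description": description,
            "keywords": [k.lower() for k in keywords],
        })

    def search(self, term: str):
        """Return entries whose description or keywords mention the term."""
        t = term.lower()
        return [e for e in self.entries
                if t in e["description"].lower() or t in e["keywords"]]

# Example use with made-up entries:
cat = Catalogue()
cat.deposit("road-counts-2006", "Highways Agency",
            "Hourly traffic counts on the strategic road network",
            ["traffic", "roads"])
cat.deposit("rail-punctuality", "Network Rail",
            "Train punctuality statistics by route",
            ["rail", "punctuality"])
print([e["name"] for e in cat.search("traffic")])
```

The key design point the report makes is that only descriptions live in the central catalogue; the data itself stays with its owners, so diverse sources (demographic, geographical, economic) can be discovered through one search interface.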
Air Traffic Safety: Continued Evolution or a New Paradigm?
The context here is Transport Risk Management. Is the philosophy of Air Traffic Safety different from that of other modes of transport? Yes, in many ways it is. The focus is on Air Traffic Management (ATM), covering (e.g.) air traffic control and airspace structures, which is the part of the aviation system most likely to be developed through new paradigms. The primary goal of the ATM system is to control accident risk. ATM safety has improved over the decades for many reasons, from better equipment to additional safety defences. But ATM safety targets, improving on current performance, are now extremely demanding. What are the past and current methodologies for ATM risk assessment, and will they work effectively for the kinds of future systems that people are now imagining and planning? The title contrasts 'Continued Evolution' with a 'New Paradigm'. How will system designers/operators assure safety with traffic growth and operational/technical changes that go beyond continued evolution of the current system? What are the design implications for 'new paradigms', such as the USA's 'Next Generation Air Transportation System' (NextGen) and Europe's Single European Sky ATM Research Programme (SESAR)? Achieving and proving safety for NextGen and SESAR is an enormously tough challenge. For example, it will need to cover system resilience, human/automation issues, software/hardware performance, and ground/air protection systems. There will be a need for confidence-building programmes regarding system design/resilience, e.g. Human-in-the-Loop simulations with 'seeded errors'.
A decision support environment for behavioral synthesis
We present a specification of a general environment for behavioral synthesis centered around the user/designer as the primary motivator for decisions in design development. At each stage of the design process, the user can perform transformations on the design description through graphical user interfaces. Quality measures, physical estimates, and design hints are given to the user at each stage.
Hardware/Software Codesign
The current state-of-the-art technology in integrated circuits allows the incorporation of multiple processor cores and memory arrays, in addition to application-specific hardware, on a single substrate. As silicon technology has advanced, allowing the implementation of more complex designs, systems have begun to incorporate considerable amounts of embedded software [3]. It thus becomes increasingly necessary for system designers to have knowledge of both hardware and software in order to make efficient design trade-offs. This is where hardware/software codesign comes in.