6 research outputs found
Floating-point exponential functions for DSP-enabled FPGAs
This article presents a floating-point exponential operator generator targeting recent FPGAs with embedded memories and DSP blocks. A single-precision operator consumes just one DSP block, 18 Kbits of dual-port memory, and 392 slices on Virtex-4. For larger precisions, a generic approach based on polynomial approximation is used, and it proves more resource-efficient than operators from the literature. For instance, a double-precision operator consumes 5 BlockRAMs and 12 DSP48 blocks on Virtex-5, or 10 M9K blocks and 22 18x18 multipliers on Stratix III. This approach is flexible, scales well beyond double precision, and enables frequencies close to the FPGA's nominal frequency. All the proposed architectures are last-bit accurate over the whole floating-point range. They are available in the open-source FloPoCo framework.
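The scheme behind such operators, range reduction to a small interval followed by a low-degree polynomial, can be illustrated in software. The sketch below is a minimal floating-point analogue, not the article's fixed-point hardware datapath; the degree-6 Taylor polynomial stands in for the table-driven minimax polynomials a generator like FloPoCo would actually build:

```python
import math

LN2 = math.log(2.0)

def exp_approx(x):
    # Range reduction: write x = k*ln(2) + r with |r| <= ln(2)/2,
    # so that e^x = 2^k * e^r and only e^r needs a polynomial.
    k = round(x / LN2)
    r = x - k * LN2
    # Degree-6 Taylor polynomial for e^r, evaluated in Horner form.
    # A hardware generator would use a minimax polynomial plus tables.
    p = 1.0
    for i in range(6, 0, -1):
        p = 1.0 + p * r / i
    return math.ldexp(p, k)  # reconstruct the result as p * 2^k
```

Because |r| is at most ln(2)/2 ≈ 0.347, the polynomial's truncation error stays far below single-precision resolution, which is why a short polynomial suffices once the range is reduced.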
Computing floating-point logarithms with fixed-point operations
Elementary functions from the mathematical library input and output floating-point numbers. However, it is possible to implement them purely with integer/fixed-point arithmetic. This option was not attractive between 1985 and 2005, because mainstream processor hardware supported 64-bit floating-point but only 32-bit integers, and conversions between floating-point and integer were costly. This has changed in recent years, in particular with the generalization of native 64-bit integer support. The purpose of this article is therefore to reevaluate the relevance of computing floating-point functions in fixed point. To this end, several variants of the double-precision logarithm function are implemented and evaluated. Formulating the problem as a fixed-point one is easy after the range has been (classically) reduced. Then, 64-bit integers provide slightly more accuracy than the 53-bit mantissa, which helps speed up the evaluation. Finally, multi-word arithmetic, critical for accurate implementations, is much faster in fixed point and natively supported by recent compilers. Novel techniques for argument reduction and the rounding test are introduced in this context. Thanks to all this, a purely integer implementation of the correctly rounded double-precision logarithm outperforms the previous state of the art, with the worst-case execution time reduced by a factor of 5. This work also introduces variants of the logarithm that input a floating-point number and output the result in fixed point. These are shown to be both more accurate and more efficient than the traditional floating-point functions for some applications.
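The general idea, classical range reduction followed by a purely integer evaluation of the reduced function, can be sketched as follows. This is a generic illustration, not the article's algorithm: Python's arbitrary-precision integers stand in for the 64-bit and double-word operations a C implementation would use, and the atanh-based series stands in for a more carefully engineered polynomial:

```python
import math

F = 60                              # fraction bits of the fixed-point format
LN2 = round(math.log(2.0) * 2**F)   # ln(2) as a fixed-point constant

def fixed_log(x):
    # Classical range reduction: x = 2^e * m with m in [1, 2),
    # so that ln(x) = e*ln(2) + ln(m).
    m, e = math.frexp(x)            # m in [0.5, 1)
    m *= 2.0
    e -= 1                          # now m in [1, 2)
    # Switch to fixed point: z = m - 1 in [0, 1), scaled by 2^F.
    z = round((m - 1.0) * 2**F)
    # ln(1+z) = 2*(s + s^3/3 + s^5/5 + ...) with s = z/(2+z),
    # evaluated entirely in integer arithmetic.
    s = (z << F) // ((2 << F) + z)
    s2 = (s * s) >> F
    term, acc, k = s, s, 3
    while term:
        term = (term * s2) >> F
        acc += term // k
        k += 2
    return (e * LN2 + 2 * acc) / 2**F
```

Since s stays below 1/3, each series term shrinks by roughly two orders of magnitude, so only a handful of integer multiplications are needed; the single float conversion at the end replaces the careful rounding test an actual correctly rounded implementation would perform.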
Customizing floating-point units for FPGAs: Area-performance-standard trade-offs
The high integration density of current nanometer technologies allows the implementation of complex floating-point applications in a single FPGA. This work addresses the intrinsic complexity of floating-point operators on configurable devices, making design decisions that provide the most suitable trade-offs between area, performance, and standard compliance. A set of floating-point libraries comprising adder/subtracter, multiplier, divider, square root, exponential, logarithm, and power functions is presented. Each library has been designed taking into account special characteristics of current FPGAs; to this end, the IEEE floating-point standard (software-oriented) has been adapted to a custom FPGA-oriented format. Extended experimental results validate the design decisions made and prove the usefulness of reducing the format complexity.
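The details of the article's custom format are not reproduced here; as a generic illustration of what an FPGA-oriented simplification of IEEE 754 can look like, the sketch below packs and unpacks positive values in a parameterized (wE, wF) format with no subnormals and no sign bit (both simplifications are assumptions for this example, not the article's actual choices):

```python
import math

def fp_encode(x, wE=8, wF=23):
    # Pack a positive real into a custom (wE, wF) format: biased
    # exponent, hidden leading 1, no subnormals, no sign bit.
    bias = (1 << (wE - 1)) - 1
    m, e = math.frexp(x)             # x = m * 2^e with m in [0.5, 1)
    E = e - 1                        # x = (2m) * 2^E with 2m in [1, 2)
    frac = round((2 * m - 1) * (1 << wF))
    if frac == (1 << wF):            # rounding overflowed the hidden bit
        frac, E = 0, E + 1
    return ((E + bias) << wF) | frac

def fp_decode(bits, wE=8, wF=23):
    bias = (1 << (wE - 1)) - 1
    frac = bits & ((1 << wF) - 1)
    E = (bits >> wF) - bias
    return (1.0 + frac / (1 << wF)) * 2.0 ** E
```

Dropping subnormals and exception encodings is a common FPGA-oriented trade: it removes the costliest normalization logic while keeping the dynamic range of the format.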
Suitability of FPGA-based computing for cyber-physical systems
Cyber-Physical Systems theory is a new concept that is about to revolutionize the way computers interact with the physical world, by integrating physical knowledge into computing systems and tailoring those systems to be more compatible with the way processes happen in the physical world. In this master's thesis, Field-Programmable Gate Arrays (FPGAs) are studied as a potential technological asset that may contribute to enabling the Cyber-Physical paradigm. As an example application that may benefit from cyber-physical system support, the Electro-Slag Remelting process, a process for remelting metals into better alloys, has been chosen due to the maturity of its related physical models and controller designs. In particular, the particle filter that estimates the state of the process is studied as a candidate for FPGA-based computing enhancements. Through the designs and experiments carried out in this study, the FPGA reveals itself as a serious contender among computing means for Cyber-Physical Systems, in comparison with CPUs, due to its capacity to mimic the ubiquitous parallelism of physical processes.
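The parallelism the thesis exploits is easy to see in the structure of a bootstrap particle filter: the predict and weight stages are fully independent across particles, while resampling is the sequential bottleneck. The sketch below is a generic one-dimensional bootstrap filter step, not the thesis's design; the function names, noise parameters, and models are illustrative assumptions:

```python
import math
import random

def particle_filter_step(particles, weights, z, f, h, q_sd, r_sd):
    # One bootstrap-filter step: propagate, weight, resample.
    # The first two stages are independent per particle -- this is the
    # data parallelism an FPGA implementation can exploit directly.
    particles = [f(p) + random.gauss(0.0, q_sd) for p in particles]     # predict
    weights = [w * math.exp(-0.5 * ((z - h(p)) / r_sd) ** 2)
               for p, w in zip(particles, weights)]                     # update
    s = sum(weights) or 1.0
    weights = [w / s for w in weights]                                  # normalize
    # Multinomial resampling: the step that serializes on hardware.
    particles = random.choices(particles, weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights
```

With identity dynamics and observation models, repeatedly feeding the same measurement pulls the particle cloud onto the observed state, which is how the filter tracks the remelting process state from noisy sensors.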
Uso eficiente de aritmética redundante en FPGAs (Efficient use of redundant arithmetic in FPGAs)
Until a few years ago, the use of redundant arithmetic on FPGAs had been dismissed for two main reasons. First, carry-propagate adders already performed well, thanks to the dedicated carry logic built into the devices and the small operand sizes of typical FPGA applications. Second, synthesis tools consumed an excessive amount of area when mapping units that work in carry-save.
This work shows that carry-save redundant arithmetic can be used efficiently on FPGAs, achieving a significant speed improvement at a reasonable resource cost. A new redundant format, double carry-save, is introduced, and it is shown that the optimal way to implement large-word-length multipliers is to combine embedded multipliers with carry-save adders.
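The carry-save idea itself can be illustrated in software: a 3:2 compressor reduces three addends to a sum word and a carry word using only bitwise operations, so no carry ripples through the word, and the single carry-propagate addition is deferred to the very end. This is a generic sketch of ordinary carry-save reduction, not the thesis's double carry-save format:

```python
def carry_save_add(a, b, c):
    # 3:2 compressor: three addends in, a (sum, carry) pair out.
    # Every bit position is computed independently -- no carry chain.
    s = a ^ b ^ c
    cy = ((a & b) | (a & c) | (b & c)) << 1
    return s, cy

def csa_sum(values):
    # Reduce a list of integers to a redundant (sum, carry) pair,
    # then perform exactly one carry-propagate addition at the end.
    s, cy = values[0], 0
    for v in values[1:]:
        s, cy = carry_save_add(s, cy, v)
    return s + cy
```

Because the compressor's delay is independent of the word length, chains of such adders keep multi-operand additions fast even for the large word widths that arise when combining embedded multipliers.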
Return of the hardware floating-point elementary function
The study of specific hardware circuits for the evaluation of floating-point elementary functions was once an active research area, until it was realized that these functions were not frequent enough to justify dedicating silicon to them. Research then turned to software functions. This situation may be about to change again with the advent of reconfigurable co-processors based on field-programmable gate arrays. Such co-processors now have a capacity that allows them to accommodate double-precision floating-point computing. Hardware operators for elementary functions targeted to such platforms have the potential to vastly outperform software functions, without permanently wasting silicon resources. This article studies the optimization, for this target technology, of operators for the exponential and logarithm functions up to double precision. Keywords: floating-point elementary functions, hardware operator, FPGA, exponential, logarithm.