40 research outputs found

    Issues with rounding in the GCC implementation of the ISO 18037:2008 standard fixed-point arithmetic

    We describe various issues caused by the lack of a round-to-nearest mode in the gcc compiler's implementation of the fixed-point arithmetic data types and operations. We demonstrate that round-to-nearest is not performed in the conversion of constants, in conversion from one numerical type to a less precise type, or in the results of multiplications. Furthermore, we show that mixed-precision operations in fixed-point arithmetic lose precision on arguments even before carrying out arithmetic operations. The ISO 18037:2008 standard was created to standardize C language extensions, including fixed-point arithmetic, for embedded systems. Embedded systems are usually based on ARM processors, of which approximately 100 billion have been manufactured to date. Therefore, the numerical issues that we discuss in this paper can be rather dangerous and are important to address, given the wide-ranging types of applications that these embedded systems are running.
    Comment: To appear in the proceedings of the 27th IEEE Symposium on Computer Arithmetic
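
    As a rough illustration of the rounding behaviour described above, the following Python sketch emulates conversion of a constant into a hypothetical 16-bit signed fixed-point format with 15 fraction bits (an assumption for illustration, not gcc's actual code path), comparing truncation, which the paper reports gcc performs, with round-to-nearest.

        FRAC_BITS = 15                 # assumed s.15 format, similar in spirit to a signed _Fract
        SCALE = 1 << FRAC_BITS

        def to_fixed_truncate(x):
            """Chop toward zero, the behaviour reported for gcc's conversions."""
            return int(x * SCALE) / SCALE

        def to_fixed_nearest(x):
            """Round to the nearest representable fixed-point value."""
            return round(x * SCALE) / SCALE

        x = 0.100003                   # arbitrary constant used for illustration
        print(to_fixed_truncate(x))    # 0.0999755859375   (error ~2.7e-5)
        print(to_fixed_nearest(x))     # 0.100006103515625 (error ~3.1e-6)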

    Stochastic rounding and reduced-precision fixed-point arithmetic for solving neural ordinary differential equations

    Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency and in memory footprint and bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of reduced-precision fixed-point arithmetic types, using examples from an important domain for numerical computation in neuroscience: the solution of Ordinary Differential Equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, fixed-point arithmetic with stochastic rounding consistently results in smaller errors than single-precision floating-point and fixed-point arithmetic with round-to-nearest, across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither, a widely understood mechanism for providing resolution below the least significant bit (LSB) in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by Partial Differential Equations (PDEs).
    Comment: Submitted to Philosophical Transactions of the Royal Society
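
    The following Python sketch illustrates the general idea on a much simpler problem than the Izhikevich model used in the paper: explicit Euler integration of dy/dt = -y in an assumed low-precision fixed-point format with 7 fraction bits. With round-to-nearest the per-step updates fall below half a unit in the last place and are lost, while stochastic rounding preserves the decay on average. All formats and constants here are illustrative assumptions.

        import math
        import random

        FRAC_BITS = 7                  # assumed fixed-point format with 7 fraction bits
        SCALE = 1 << FRAC_BITS

        def round_nearest(x):
            """Round to the nearest point on the fixed-point grid."""
            return round(x * SCALE) / SCALE

        def round_stochastic(x):
            """Round up with probability equal to the fractional part, else down."""
            scaled = x * SCALE
            low = math.floor(scaled)
            return (low + (random.random() < scaled - low)) / SCALE

        def euler(rounder, y0=1.0, h=1.0 / 512, steps=2048):
            """Explicit Euler for dy/dt = -y, rounding the state after every step."""
            y = y0
            for _ in range(steps):
                y = rounder(y + h * (-y))
            return y

        random.seed(0)
        print(math.exp(-2048 / 512))   # exact solution at t = 4: ~0.0183
        print(euler(round_nearest))    # stays at 1.0: every update is below half an ULP
        print(euler(round_stochastic)) # decays close to the exact value on average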

    A Team-based Experiential Learning in Supply Chain Sourcing: Purchasing and Negotiation Exercises

    This paper presents a web-hosted Google Sheets model of a supply chain sourcing game, focused on negotiation activities, that has been used in the classroom. The negotiation exercise was designed to place students in a position where they must make team-based purchasing and selling decisions. The exercise is broken into three separate games. The first is a group exercise on purchasing strategies, in which students learn the basics of making purchasing decisions while considering demand, holding and ordering costs, and variable (tier-based) pricing. The next two games are round-based. In class, an even number of teams is formed; in the first round, half of the teams are assigned to be purchasers while the other half are suppliers. Each team must make decisions that will leave it better off than the other teams. Once the first round is complete, the teams switch roles with a different scenario and continue negotiating. The team with the most profit wins the exercise. The purpose of this article is to describe the structure of the exercise, explain how we deliver it as a pedagogical game in our classes, and share these easy-to-apply activities, which capture the dynamics of negotiation and the resulting benefits to supply chain members.
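
    As a purely hypothetical illustration of the kind of purchasing trade-off the first game exercises (not the authors' Google Sheets model), the Python sketch below totals purchase, ordering, and holding costs for a few candidate order quantities under tier-based pricing; all figures are made up.

        DEMAND = 1200                  # hypothetical annual demand (units)
        ORDER_COST = 50.0              # hypothetical cost per order placed
        HOLDING_RATE = 0.20            # hypothetical annual holding cost, as a fraction of unit price
        TIERS = [(1, 10.00), (200, 9.50), (500, 9.00)]   # (minimum quantity, unit price)

        def unit_price(q):
            """Price of the highest tier whose minimum order quantity is met."""
            return max((min_q, price) for min_q, price in TIERS if q >= min_q)[1]

        def annual_cost(q):
            """Purchase cost + ordering cost + holding cost for order quantity q."""
            p = unit_price(q)
            return DEMAND * p + ORDER_COST * DEMAND / q + HOLDING_RATE * p * q / 2

        for q in (100, 200, 500):      # compare a small order with the tier breakpoints
            print(q, round(annual_cost(q), 2))   # 12700.0, 11890.0, 11370.0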

    MATLAB Simulator of Level-Index Arithmetic


    Neuromodulated synaptic plasticity on the SpiNNaker neuromorphic system

    SpiNNaker is a digital neuromorphic architecture, designed specifically for the low-power simulation of large-scale spiking neural networks at speeds close to biological real-time. Unlike other neuromorphic systems, SpiNNaker allows users to develop their own neuron and synapse models as well as specify arbitrary connectivity. As a result, SpiNNaker has proved to be a powerful tool for studying different neuron models as well as synaptic plasticity, which is believed to be one of the main mechanisms behind learning and memory in the brain. A number of Spike-Timing-Dependent Plasticity (STDP) rules have already been implemented on SpiNNaker and have been shown to be capable of solving various learning tasks in real-time. However, while STDP is an important biological theory of learning, it is a form of Hebbian or unsupervised learning and therefore does not explain behaviors that depend on feedback from the environment. Instead, learning rules based on neuromodulated STDP (three-factor learning rules) have been shown to be capable of solving reinforcement learning tasks in a biologically plausible manner. In this paper we demonstrate for the first time how a model of three-factor STDP, with the third factor representing spikes from dopaminergic neurons, can be implemented on the SpiNNaker neuromorphic system. Using this learning rule, we first show how reward and punishment signals can be delivered to a single synapse, before going on to demonstrate it in a larger network which solves the credit assignment problem in a Pavlovian conditioning experiment. Because of its extra complexity, we find that our three-factor learning rule requires approximately 2× as much processing time as the existing SpiNNaker STDP learning rules. However, we show that it is still possible to run our Pavlovian conditioning model with up to 1 × 10⁴ neurons in real-time, opening up new research opportunities for modeling behavioral learning on SpiNNaker.
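
    A generic three-factor rule of this kind can be sketched in a few lines of Python (an illustrative model with made-up constants, not the SpiNNaker implementation): pre/post spike pairings update an eligibility trace via the usual STDP curve, and the weight only changes when a dopamine signal multiplies that trace.

        import math

        A_PLUS, A_MINUS = 0.01, 0.012     # assumed STDP magnitudes
        TAU_PLUS = TAU_MINUS = 20.0       # assumed STDP time constants (ms)
        TAU_ELIG = 1000.0                 # assumed eligibility-trace decay (ms)
        LEARN_RATE = 0.5

        def stdp(dt):
            """Classic pairwise STDP curve: dt = t_post - t_pre (ms)."""
            if dt >= 0:
                return A_PLUS * math.exp(-dt / TAU_PLUS)
            return -A_MINUS * math.exp(dt / TAU_MINUS)

        w, elig = 0.5, 0.0
        events = [                         # (time ms, kind, payload)
            (10.0, "pair", 5.0),           # post follows pre by 5 ms: potentiating pairing
            (400.0, "dopamine", +1.0),     # reward: the stored eligibility is applied
            (900.0, "pair", -5.0),         # pre follows post: depressing pairing
            (1300.0, "dopamine", -1.0),    # punishment: the sign of the change is reversed
        ]
        t_last = 0.0
        for t, kind, value in events:
            elig *= math.exp(-(t - t_last) / TAU_ELIG)   # decay the trace over elapsed time
            t_last = t
            if kind == "pair":
                elig += stdp(value)                      # spike pairing tags the synapse
            else:
                w += LEARN_RATE * value * elig           # third factor gates the weight update
            print(f"t={t:7.1f} ms  eligibility={elig:+.4f}  weight={w:.4f}")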

    "Do not resuscitate' orders and end-of-life care planning


    Monotonicity of Multi-Term Floating-Point Adders

    In the literature on algorithms for computing the multi-term addition s_n = ∑_{i=1}^n x_i in floating-point arithmetic, it is often shown that a hardware unit with a single normalization and rounding step improves precision, area, latency, and power consumption compared with the use of standard add or fused multiply-add units. However, non-monotonicity can appear when computing sums with a subclass of multi-term addition units, which is currently not explored in the literature. We prove that computing multi-term floating-point addition with n ≥ 4, without normalization of intermediate quantities, can result in non-monotonicity: increasing one of the addends x_i can decrease the sum s_n. Summation is required in dot product and matrix multiplication operations, which increasingly appear in the hardware of high-performance computers, and knowing where monotonicity is preserved can be of interest to developers and users. Non-monotonicity of summation in existing hardware devices that implement a specific class of multi-term adders may have appeared unintentionally, as a consequence of design choices that reduce circuit area and other metrics. To demonstrate our findings, we simulate non-monotonic multi-term adders in MATLAB using the CPFloat custom-precision floating-point simulator.
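
    The effect can be reproduced with a toy model in Python (a simplified sketch of one possible adder design, not the exact hardware analysed in the paper): all addends are aligned to the exponent of the largest one, bits falling outside a fixed window are discarded without normalizing intermediate quantities, and the exact window sum is rounded once at the end. Increasing one addend can then push the other addends' bits out of the window and decrease the result.

        import math

        P = 3            # significand bits of the toy floating-point format (an assumption)
        WINDOW = 3       # bits kept per addend after alignment, no guard bits (an assumption)

        def multi_term_add(addends):
            """Toy multi-term adder for nonnegative addends, no intermediate normalization."""
            if all(x == 0 for x in addends):
                return 0.0
            # exponent of the leading bit of the largest addend
            e_max = max(math.floor(math.log2(x)) for x in addends if x > 0)
            # weight of the least significant bit kept inside the alignment window
            lsb = 2.0 ** (e_max - WINDOW + 1)
            # align each addend, discard the bits below the window, and add exactly
            total = sum(math.floor(x / lsb) * lsb for x in addends)
            if total == 0:
                return 0.0
            # single final normalization and round-to-nearest to a P-bit significand
            ulp = 2.0 ** (math.floor(math.log2(total)) - P + 1)
            return round(total / ulp) * ulp

        a = [7.0, 1.75, 1.75, 1.75, 1.75]   # exact sum 14.0
        b = [8.0, 1.75, 1.75, 1.75, 1.75]   # exact sum 15.0, first addend increased
        print(multi_term_add(a))            # 12.0
        print(multi_term_add(b))            # 8.0: a smaller result from larger inputs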

    Stochastic Rounding: Algorithms and Hardware Accelerator
