871 research outputs found

    Integer multipliers with overflow detection

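    The record above lists only a title, so the following is a minimal, hypothetical sketch of the technique the title names: flagging overflow in fixed-width signed integer multiplication. The function name, the 32-bit default, and the two's-complement wrapping model are illustrative assumptions, not details from the paper; hardware designs usually derive the flag from the discarded upper half of the full product, while this software model checks the product range directly.

```python
# Hypothetical sketch: emulate fixed-width signed multiplication and
# flag overflow by checking whether the exact product fits the range.
def mul_overflow(a, b, bits=32):
    """Multiply two `bits`-wide two's-complement ints, flagging overflow."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    full = a * b                                 # exact (Python ints are unbounded)
    wrapped = ((full - lo) % (1 << bits)) + lo   # two's-complement wraparound
    return wrapped, not (lo <= full <= hi)

# Example: 70000 * 70000 exceeds the signed 32-bit range, so the flag is set.
print(mul_overflow(70000, 70000))
```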

    Training deep neural networks with low precision multiplications

    Multipliers are the most space- and power-hungry arithmetic operators in digital implementations of deep neural networks. We train a set of state-of-the-art neural networks (Maxout networks) on three benchmark datasets: MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats: floating point, fixed point and dynamic fixed point. For each dataset and each format, we assess the impact of the precision of the multiplications on the final error after training. We find that very low precision is sufficient not just for running trained networks but also for training them. For example, it is possible to train Maxout networks with 10-bit multiplications.
    Comment: 10 pages, 5 figures, accepted as a workshop contribution at ICLR 2015
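
    As a rough illustration of what low-precision multiplication means here, the sketch below emulates a matrix product whose multiplication operands are rounded to a signed fixed-point grid. The bit widths, the fixed integer/fractional split, and the function names are assumptions for illustration; the dynamic fixed point format studied in the paper instead adjusts the scaling per group of values during training.

```python
import numpy as np

# Sketch under assumptions: plain fixed point with a fixed fractional split.
def quantize(x, bits=10, frac=8):
    """Round to a signed fixed-point grid with `bits` total bits."""
    scale = 1 << frac
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

def lp_matmul(a, b, bits=10, frac=8):
    """Emulate a matmul whose multiplication operands are quantized."""
    return quantize(a, bits, frac) @ quantize(b, bits, frac)

# Example: compare a full-precision product against its 10-bit emulation.
a, b = np.random.randn(4, 8), np.random.randn(8, 3)
print(np.max(np.abs(a @ b - lp_matmul(a, b))))
```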

    Development of Urban Electric Bus Drivetrain

    The development of the drivetrain for a new series of urban electric buses is presented in the paper. The traction and design properties of several drive variants are compared. The efficiency of the drive was evaluated through simulations of vehicle rides based on data from real bus lines in Prague. The results of the design work and of the simulation calculations are presented.
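
    As a hedged sketch of the kind of ride simulation the abstract mentions, the following integrates the battery energy needed to follow a recorded speed trace using a basic longitudinal vehicle model. All names and parameters (vehicle mass, drag area, rolling resistance, drivetrain and recuperation efficiencies) are illustrative placeholders, not values from the paper.

```python
# Sketch under assumptions: basic longitudinal model, placeholder bus data.
def ride_energy(speeds, dt=1.0, mass=18000.0, cd_a=6.5, c_rr=0.008,
                eta_drive=0.9, eta_regen=0.6, rho=1.2, g=9.81):
    """Battery energy (J) to follow a speed trace (m/s) sampled every dt s."""
    energy = 0.0
    for v0, v1 in zip(speeds, speeds[1:]):
        v = 0.5 * (v0 + v1)                      # mean speed over the step
        force = (mass * (v1 - v0) / dt           # inertia
                 + mass * g * c_rr               # rolling resistance
                 + 0.5 * rho * cd_a * v * v)     # aerodynamic drag
        p_wheel = force * v
        if p_wheel >= 0:
            energy += p_wheel / eta_drive * dt   # traction draws extra for losses
        else:
            energy += p_wheel * eta_regen * dt   # braking recuperates a fraction
    return energy

# Example: accelerate to 10 m/s, cruise, then brake to a stop.
trace = [0, 2, 4, 6, 8, 10, 10, 10, 10, 6, 3, 0]
print(ride_energy(trace) / 3.6e6, "kWh")
```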

    Low Power RTTY and PSK31 Decoder for Ham Radio Applications

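    This record also lists only a title. As a minimal, hypothetical sketch of one building block such a decoder needs, the code below uses the Goertzel algorithm to compare signal energy at the RTTY mark and space tones and classify one bit period. The sample rate and tone parameters are common amateur conventions (45.45 Bd, 2125/2295 Hz), not values from the paper, and a real decoder would add bit synchronization, Baudot decoding for RTTY, and a separate BPSK demodulator for PSK31.

```python
import math

# Hypothetical sketch: Goertzel tone detection for one RTTY bit period.
def goertzel_power(samples, freq, sample_rate):
    """Signal power at `freq` via the Goertzel algorithm."""
    n = len(samples)
    w = 2.0 * math.pi * round(n * freq / sample_rate) / n
    coeff = 2.0 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def detect_bit(samples, fs=8000, mark=2125.0, space=2295.0):
    """Return 1 (mark) or 0 (space) for one bit period of audio."""
    return int(goertzel_power(samples, mark, fs) >
               goertzel_power(samples, space, fs))

# Example: one 45.45 Bd bit (~176 samples at 8 kHz) of pure mark tone.
fs, n = 8000, round(8000 / 45.45)
tone = [math.sin(2 * math.pi * 2125 * i / fs) for i in range(n)]
print(detect_bit(tone))  # -> 1
```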