
    Measuring Human Perception to Improve Handwritten Document Transcription

    The subtleties of human perception, as measured by vision scientists through the use of psychophysics, are important clues to the internal workings of visual recognition. For instance, measured reaction time can indicate whether a visual stimulus is easy for a subject to recognize, or whether it is hard. In this paper, we consider how to incorporate psychophysical measurements of visual perception into the loss function of a deep neural network being trained for a recognition task, under the assumption that such information can enforce consistency with human behavior. As a case study to assess the viability of this approach, we look at the problem of handwritten document transcription. While good progress has been made towards automatically transcribing modern handwriting, significant challenges remain in transcribing historical documents. Here we describe a general enhancement strategy, underpinned by the new loss formulation, which can be applied to the training regime of any deep learning-based document transcription system. Through experimentation, reliable performance improvement is demonstrated for the standard IAM and RIMES datasets for three different network architectures. Further, we go on to show feasibility for our approach on a new dataset of digitized Latin manuscripts, originally produced by scribes in the Cloister of St. Gall in the 9th century.
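    The abstract describes folding psychophysical measurements such as reaction time into the training loss. Below is a minimal sketch of one way this could look in a PyTorch transcription pipeline; the class name, the linear reaction-time weighting, and the `alpha` parameter are illustrative assumptions, not the paper's published formulation.

    ```python
    import torch
    import torch.nn as nn

    class PsychophysicalLoss(nn.Module):
        """Hypothetical sketch: scale a per-sample transcription loss by a
        weight derived from measured human reaction time, so samples that
        humans found hard contribute more to the gradient."""

        def __init__(self, alpha=1.0):
            super().__init__()
            # CTC is the typical per-sample loss for handwriting transcription.
            self.base_loss = nn.CTCLoss(reduction="none", zero_infinity=True)
            self.alpha = alpha  # strength of the psychophysical penalty (assumption)

        def forward(self, log_probs, targets, input_lengths, target_lengths,
                    reaction_times):
            # Per-sample CTC loss, shape (batch,).
            per_sample = self.base_loss(log_probs, targets,
                                        input_lengths, target_lengths)
            # Normalize reaction times to [0, 1]; slower human responses mark
            # harder samples, which here receive proportionally larger weight.
            rt = reaction_times / (reaction_times.max() + 1e-8)
            weights = 1.0 + self.alpha * rt
            return (weights * per_sample).mean()
    ```

    Because the weighting only rescales an existing per-sample loss, a wrapper of this kind can be dropped into the training loop of any architecture that already uses CTC, matching the abstract's claim of a generic enhancement strategy.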

    DLBC: A Deep Learning-Based Consensus in Blockchains for Deep Learning Services

    As artificial intelligence applications proliferate, training deep neural networks (DNNs) has become a common but demanding task, incurring enormous computation cost and energy consumption. Meanwhile, blockchains have been widely adopted, and their Proof of Work (PoW) consensus wastes a huge amount of computation resources. In this paper, we propose DLBC, which exploits the computation power of miners for deep learning training as proof of useful work instead of hash calculation. It distinguishes itself from recent proof-of-useful-work mechanisms by addressing several of their limitations. Specifically, DLBC handles multiple tasks and larger models and training datasets, and introduces a comprehensive ranking mechanism that considers task difficulty (e.g., model complexity, network burden, data size, queue length). We also apply DNN watermarking [1] to improve robustness. In Section V, the average overhead of the digital signature is 1.25, 0.001, 0.002 and 0.98 seconds, respectively, and the average network overhead is 3.77, 3.01, 0.37 and 0.41 seconds, respectively. Embedding a watermark takes 3 epochs, while removing one takes 30 epochs; this removal penalty deters attackers from stealing, improving, and resubmitting DL models from honest miners.
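    The abstract names the factors the ranking mechanism weighs but not how they combine. A plausible minimal sketch is a weighted sum of normalized factors; the field names, normalizers, and weights below are illustrative assumptions, not DLBC's specification.

    ```python
    from dataclasses import dataclass

    @dataclass
    class TaskStats:
        model_params: int    # model complexity, as a parameter count
        network_mb: float    # network burden: data moved across the network
        dataset_mb: float    # training-data size
        queue_length: int    # tasks already waiting in the queue

    def difficulty_score(t: TaskStats,
                         weights=(0.4, 0.2, 0.3, 0.1),
                         norms=(1e8, 1e3, 1e4, 100)) -> float:
        """Weighted sum of normalized difficulty factors; higher is harder.
        Weights and normalizers are assumed values for illustration."""
        factors = (t.model_params, t.network_mb, t.dataset_mb, t.queue_length)
        return sum(w * f / n for w, f, n in zip(weights, factors, norms))

    # Rank pending tasks so the hardest work sorts first.
    tasks = [TaskStats(50_000_000, 120.0, 2_500.0, 3),
             TaskStats(100_000_000, 40.0, 800.0, 10)]
    ranked = sorted(tasks, key=difficulty_score, reverse=True)
    ```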