Comprehensive Evaluation of OpenCL-based Convolutional Neural Network Accelerators in Xilinx and Altera FPGAs
Deep learning has significantly advanced the state of the art in artificial intelligence, gaining wide popularity in both industry and academia. Special interest surrounds Convolutional Neural Networks (CNNs), which take inspiration from the hierarchical structure of the visual cortex to form deep layers of convolutional operations along with fully connected classifiers. Hardware implementations of these deep CNN architectures face memory bottlenecks: the many convolutional and fully connected layers demand large amounts of communication for parallel computation. Multi-core CPU solutions have proved inadequate for this problem due to the memory wall and low parallelism. Many-core GPU architectures show superior performance, but they consume high power and also have memory constraints due to inconsistencies between cache and main memory. FPGA design solutions are also being actively explored; they allow the memory hierarchy to be implemented using embedded BlockRAM, which boosts the parallel use of shared memory elements between multiple processing units while avoiding data replication and inconsistencies. This makes FPGAs potentially powerful solutions for real-time CNN classification. Both Altera and Xilinx have adopted the OpenCL co-design framework from GPUs as a pseudo-automatic development solution for FPGA designs. In this paper, a comprehensive evaluation and comparison of the Altera and Xilinx OpenCL frameworks for a 5-layer deep CNN is presented. Hardware resources, temporal performance and the OpenCL architecture for CNNs are discussed. Xilinx demonstrates faster synthesis, better FPGA resource utilization and more compact boards; Altera provides multi-platform tools, a mature design community and better execution times.
Learning Representations from Persian Handwriting for Offline Signature Verification, a Deep Transfer Learning Approach
Offline Signature Verification (OSV) is a challenging pattern recognition
task, especially when it is expected to generalize well to skilled
forgeries that are not available during training. Its challenges also
include small training samples and large intra-class variations. Considering these
limitations, we suggest a novel transfer learning approach from Persian
handwriting domain to the multi-language OSV domain. We train two Residual CNNs
on the source domain separately, one for each of two tasks: word
classification and writer identification. Since identifying a person's signature
resembles identifying their handwriting, it is natural to use
handwriting for the feature learning phase. The learned representation on the
more varied and plentiful handwriting dataset can compensate for the lack of
training data in the original task, i.e. OSV, without sacrificing the
generalizability. Our proposed OSV system includes two steps: learning
representation and verification of the input signature. For the first step, the
signature images are fed into the trained Residual CNNs. The output
representations are then used to train SVMs for the verification. We test our
OSV system on three different signature datasets, including MCYT (a Spanish
signature dataset), UTSig (a Persian one) and GPDS-Synthetic (an artificial
dataset). On UTSig, we achieved a 9.80% Equal Error Rate (EER), a
substantial improvement over the best EER in the literature, 17.45%. Our
proposed method surpassed the state of the art by 6% on GPDS-Synthetic, achieving
an EER of 6.81%. On MCYT, an EER of 3.98% was obtained, which is comparable to the
best previously reported results.
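The two-step pipeline described in this abstract (learned CNN representations, then SVM-based verification) can be sketched roughly as follows. This is an illustrative sketch only: the random projection standing in for the trained Residual CNN, the synthetic signature arrays, and all dimensions are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for the trained Residual CNN: in the paper, signature
# images are fed through the network and an intermediate representation is
# extracted. Here a fixed random projection fakes that feature-learning step.
W = rng.normal(size=(64 * 64, 128))  # flattened 64x64 image -> 128-d feature

def extract_features(images):
    """Map a batch of signature images to fixed-length feature vectors."""
    return images.reshape(len(images), -1) @ W

# Synthetic "genuine" and "forged" signatures for one writer (illustrative only).
genuine = rng.normal(loc=0.5, scale=0.1, size=(20, 64, 64))
forged = rng.normal(loc=-0.5, scale=0.1, size=(20, 64, 64))

X = extract_features(np.concatenate([genuine, forged]))
y = np.array([1] * 20 + [0] * 20)  # 1 = genuine, 0 = forgery

# Step 2 of the pipeline: a writer-dependent SVM trained on the CNN features.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

query = rng.normal(loc=0.5, scale=0.1, size=(1, 64, 64))  # a genuine-like query
pred = clf.predict(extract_features(query))
```

The design point the abstract makes is that the expensive representation learner is trained once on plentiful handwriting data, while only the cheap per-writer SVM needs the scarce signature data.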
Deep learning for extracting protein-protein interactions from biomedical literature
State-of-the-art methods for protein-protein interaction (PPI) extraction are
primarily feature-based or kernel-based by leveraging lexical and syntactic
information, but how to incorporate such knowledge into recent deep learning
methods remains an open question. In this paper, we propose a multichannel
dependency-based convolutional neural network model (McDepCNN). It applies one
channel to the embedding vector of each word in the sentence, and another
channel to the embedding vector of the head of the corresponding word.
Therefore, the model can use richer information obtained from different
channels. Experiments on two public benchmarking datasets, AIMed and BioInfer,
demonstrate that McDepCNN compares favorably to the state-of-the-art
rich-feature and single-kernel based methods. In addition, McDepCNN achieves
24.4% relative improvement in F1-score over the state-of-the-art methods on
cross-corpus evaluation and 12% improvement in F1-score over kernel-based
methods on "difficult" instances. These results suggest that McDepCNN
generalizes more easily over different corpora, and is capable of capturing
long-distance features in the sentences.

Comment: Accepted for publication in Proceedings of the 2017 Workshop on Biomedical Natural Language Processing, 10 pages, 2 figures, 6 tables
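The multichannel input that the abstract describes (one channel for each word's embedding, one for the embedding of its dependency head) can be sketched as follows. The toy vocabulary, the dependency parse, and all dimensions are illustrative assumptions, not the McDepCNN configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy embedding table (hypothetical; real systems use pre-trained embeddings).
vocab = {"protein": 0, "A": 1, "binds": 2, "B": 3}
emb_dim, width, n_filters = 8, 3, 6
embeddings = rng.normal(size=(len(vocab), emb_dim))

sentence = ["protein", "A", "binds", "B"]
# Dependency head of each token (hypothetical parse: all tokens attach to "binds").
heads = ["binds", "binds", "binds", "binds"]

# Channel 1: each word's own embedding; channel 2: its syntactic head's embedding.
ch_word = np.stack([embeddings[vocab[w]] for w in sentence])  # (seq_len, emb_dim)
ch_head = np.stack([embeddings[vocab[h]] for h in heads])     # (seq_len, emb_dim)
x = np.stack([ch_word, ch_head])                              # (2, seq_len, emb_dim)

# Filters span `width` consecutive tokens across BOTH channels, so each filter
# sees lexical and dependency information jointly.
filters = rng.normal(size=(n_filters, 2, width, emb_dim))

def conv_and_pool(x, filters):
    """Slide each filter over token windows, then max-pool over positions."""
    n_pos = x.shape[1] - filters.shape[2] + 1
    feats = np.empty((len(filters), n_pos))
    for f, w in enumerate(filters):
        for p in range(n_pos):
            feats[f, p] = np.sum(w * x[:, p:p + w.shape[1], :])
    return feats.max(axis=1)  # fixed-length sentence representation

sentence_vec = conv_and_pool(x, filters)
```

The head channel is what lets the convolution pick up long-distance dependency structure that a purely sequential window would miss.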