
    Computation-Performance Optimization of Convolutional Neural Networks with Redundant Kernel Removal

    Deep Convolutional Neural Networks (CNNs) are widely employed in modern computer vision algorithms, where the input image is convolved iteratively by many kernels to extract its features. However, as convolutional layers have grown deeper in recent years, the enormous computational complexity makes such networks difficult to deploy on embedded systems with limited hardware resources. In this paper, we propose two computation-performance optimization methods that reduce the redundant convolution kernels of a CNN under performance and architecture constraints, and apply them to a network for super resolution (SR). Using the PSNR drop relative to the original network as the performance criterion, our method obtains the best PSNR under a given computation budget; conversely, it can also minimize the computation required for a given allowable PSNR drop. Comment: This paper was accepted by the 2018 IEEE International Symposium on Circuits and Systems (ISCAS).
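    The core idea of budget-constrained kernel removal can be illustrated with plain magnitude pruning: rank kernels by L1 norm and keep only as many as the budget allows. This is a minimal sketch of that generic technique, not the authors' optimization method; the function and parameter names are illustrative.

```python
import numpy as np

def prune_kernels(weights, budget):
    """Keep the highest-L1-norm kernels within a computation budget.

    weights : array of shape (num_kernels, in_channels, kh, kw)
    budget  : fraction of kernels to retain (0 < budget <= 1)

    Returns indices of the kept kernels, highest norm first.  Plain
    magnitude pruning, shown only to illustrate budget-constrained
    kernel removal; it is not the paper's method.
    """
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    keep = max(1, int(round(budget * weights.shape[0])))
    return np.argsort(norms)[::-1][:keep]

# Toy layer: 8 kernels of shape 3x3x3; retain half the computation.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
kept = prune_kernels(w, budget=0.5)
print(len(kept))  # 4 kernels survive the 50% budget
```

    In practice one would re-evaluate PSNR after pruning and iterate, since kernel norm is only a proxy for a kernel's contribution to output quality.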

    Orderly Spanning Trees with Applications

    We introduce and study the {\em orderly spanning trees} of plane graphs. This algorithmic tool generalizes {\em canonical orderings}, which exist only for triconnected plane graphs. Although not every plane graph admits an orderly spanning tree, we provide an algorithm to compute an {\em orderly pair} for any connected planar graph G, consisting of a plane graph H of G, and an orderly spanning tree of H. We also present several applications of orderly spanning trees: (1) a new constructive proof for Schnyder's Realizer Theorem, (2) the first area-optimal 2-visibility drawing of G, and (3) the best known encodings of G with O(1)-time query support. All algorithms in this paper run in linear time. Comment: 25 pages, 7 figures. A preliminary version appeared in Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2001), Washington D.C., USA, January 7-9, 2001, pp. 506-51
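    For context, the object being generalized is an ordinary rooted spanning tree of a connected graph. The sketch below builds a plain DFS spanning tree of a small planar graph; the orderly property additionally constrains how non-tree edges attach around each vertex in the plane embedding, which requires the paper's algorithm and is not attempted here.

```python
def dfs_spanning_tree(adj, root):
    """Return the parent map of a DFS spanning tree of a connected graph.

    adj  : dict mapping vertex -> list of neighbours
    root : starting vertex

    A plain DFS tree only; it is generally NOT an orderly spanning
    tree in the sense of the paper.
    """
    parent = {root: None}
    stack = [root]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                stack.append(v)
    return parent

# The octahedron graph: planar and triconnected, 6 vertices.
octahedron = {
    0: [1, 2, 3, 4], 1: [0, 2, 4, 5], 2: [0, 1, 3, 5],
    3: [0, 2, 4, 5], 4: [0, 1, 3, 5], 5: [1, 2, 3, 4],
}
tree = dfs_spanning_tree(octahedron, root=0)
print(len(tree) - 1)  # a spanning tree of 6 vertices has 5 edges
```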

    Calcified amorphous tumor of left atrium


    Why people adopt VR English language learning systems: An extended perspective of task-technology fit

    Virtual Reality (VR) techniques, characterized by immersion, interaction, and imagination, can not only improve conventional teaching methods but also enhance the delivery of educational training content through VR's interactive and simulation capabilities. Incorporating information technology (IT) into English teaching has become an important issue in the academic field. Emerging after computer-assisted teaching, interactive network learning, distance education, and mobile learning, virtual reality techniques are regarded as a new trend in merging technology with education. To explore the factors affecting users' intention to adopt VR English language learning systems (VRELLS), this study builds a theoretical framework on the task-technology fit theory (extrinsic motivation), combining users' needs (internal and external) and satisfaction into an integrated research model (the perceived needs-technology fit model) that explicates people's adoption behaviors toward VRELLS. An online questionnaire was employed to collect empirical data, and 291 samples were analyzed using a structural equation modeling (SEM) approach. The results show that both perceived needs-technology fit and satisfaction play a significant role in users' intention to adopt VRELLS services. In addition, utilitarian and hedonic needs have a positive impact on users' perceived needs-technology fit. It was also found that relative advantage, service compatibility, and complexity are important factors influencing individuals' perceived needs-technology fit. The implications of these findings are discussed along with suggestions for future research.

    Exposing the Functionalities of Neurons for Gated Recurrent Unit Based Sequence-to-Sequence Model

    The goal of this paper is to report certain scientific discoveries about a Seq2Seq model. Analyzing the behavior of RNN-based models at the neuron level is considered more challenging than analyzing DNN or CNN models because of their recursive nature. This paper provides a neuron-level analysis to explain why a vanilla GRU-based Seq2Seq model without attention can achieve token positioning. We found four different types of neurons: storing, counting, triggering, and outputting, and we further uncover the mechanism by which these neurons work together to produce the right token in the right position. Comment: 9 pages (excluding references), 10 figures
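    Neuron-level analysis of a GRU amounts to recording the hidden state and gate activations at every time step and inspecting individual units across a sequence. Below is a minimal NumPy sketch of a single vanilla GRU cell with such a trace; the weight names and dimensions are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One step of a vanilla GRU; returns the new state plus gate values.

    Exposing the update gate z and reset gate r per step is the kind
    of neuron-level trace this style of analysis relies on.
    """
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    h_new = (1 - z) * h + z * h_tilde
    return h_new, z, r

rng = np.random.default_rng(1)
d_in, d_h = 4, 3
W = [rng.normal(scale=0.5, size=(d_h, d_in)) for _ in range(3)]
U = [rng.normal(scale=0.5, size=(d_h, d_h)) for _ in range(3)]
h = np.zeros(d_h)
trace = []                       # per-token neuron activations
for t in range(5):               # a 5-token dummy input sequence
    x = rng.normal(size=d_in)
    h, z, r = gru_cell(x, h, W[0], U[0], W[1], U[1], W[2], U[2])
    trace.append((h.copy(), z, r))
print(len(trace))  # 5 recorded steps
```

    Plotting a single coordinate of h, z, or r over the token positions in such a trace is how one would look for storing, counting, or triggering behavior in individual units.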
