
    3D numerical model of a confined fracture test in concrete

    The paper deals with the numerical simulation of a confined fracture test in concrete. The test is part of the experimental work carried out at ETSECCPB-UPC in order to elucidate the existence of a second mode of fracture under shear and high compression, and to evaluate the associated fracture energy. The specimen is a short cylinder with cylindrical coaxial notches, similar to the one proposed by Luong (1990), which is introduced in a large-capacity triaxial cell, protected with membranes, and subjected to different levels of confining pressure prior to vertical loading. In the experiments, the main crack follows the pre-established cylindrical notch path, which is in itself a significant achievement. The load-displacement curves for various confining pressures also seem to follow the expected trend according to the underlying conceptual model. The FE model developed includes zero-thickness interface elements with fracture-based constitutive laws, which are pre-inserted along the cylindrical ligament and the potential radial crack plane. The results reproduce reasonably well the overall force-displacement curves of the test for various confinement levels, and make it possible to identify the fracture parameters, including the fracture energies in modes I and IIa.
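    The fracture-based constitutive laws on zero-thickness interface elements mentioned above relate traction across the interface to its opening, with the area under the softening curve equal to the fracture energy. A minimal mode-I sketch of that idea, using a simple linear-softening cohesive law with illustrative parameter values (tensile strength and fracture energy are assumptions, not the paper's calibrated values):

    ```python
    import numpy as np

    def cohesive_traction(w, ft=3.0, Gf=0.1):
        """Linear-softening mode-I cohesive law for a zero-thickness interface.
        Traction decays from tensile strength ft (MPa) to zero at the critical
        opening wc = 2*Gf/ft, so the area under the curve equals the fracture
        energy Gf (N/mm). Values here are illustrative placeholders."""
        wc = 2.0 * Gf / ft
        return ft * max(0.0, 1.0 - w / wc)

    # Numerically integrate the traction-separation curve: the recovered
    # area should match the fracture energy Gf used to define the law.
    ft, Gf = 3.0, 0.1
    ws = np.linspace(0.0, 2.0 * Gf / ft, 10001)
    ts = [cohesive_traction(w, ft, Gf) for w in ws]
    area = sum(0.5 * (ts[i] + ts[i + 1]) * (ws[i + 1] - ws[i])
               for i in range(len(ws) - 1))
    ```

    In an FE setting, one such law governs each pre-inserted interface element (here along the cylindrical ligament and radial crack plane), and the mode-II/IIa laws follow the same energy-based construction with shear tractions.
    
    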

    Effective Approaches to Attention-based Neural Machine Translation

    An attentional mechanism has lately been used to improve neural machine translation (NMT) by selectively focusing on parts of the source sentence during translation. However, there has been little work exploring useful architectures for attention-based NMT. This paper examines two simple and effective classes of attentional mechanism: a global approach which always attends to all source words, and a local one that only looks at a subset of source words at a time. We demonstrate the effectiveness of both approaches on the WMT translation tasks between English and German in both directions. With local attention, we achieve a significant gain of 5.0 BLEU points over non-attentional systems which already incorporate known techniques such as dropout. Our ensemble model using different attention architectures has established a new state-of-the-art result on the WMT'15 English-to-German translation task with 25.9 BLEU points, an improvement of 1.0 BLEU points over the existing best system backed by NMT and an n-gram reranker. Comment: 11 pages, 7 figures, EMNLP 2015 camera-ready version, more training detail
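    The global/local distinction described above can be sketched numerically: global attention scores the decoder state against every encoder state, while local attention restricts scoring to a window around an aligned source position. This is a minimal NumPy sketch of the idea (dot-product scoring, a given window center `p_t`), not the paper's full model:

    ```python
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def global_attention(h_t, enc_states):
        """Global attention: score the decoder state h_t against ALL encoder
        states (dot-product scoring), softmax, then form a context vector."""
        scores = enc_states @ h_t          # one score per source position
        weights = softmax(scores)
        context = weights @ enc_states     # weighted sum of encoder states
        return context, weights

    def local_attention(h_t, enc_states, p_t, D=2):
        """Local attention: only a window [p_t - D, p_t + D] of source
        positions is scored; p_t is given here (monotonic alignment) for
        simplicity, rather than predicted as in the paper."""
        lo, hi = max(0, p_t - D), min(len(enc_states), p_t + D + 1)
        window = enc_states[lo:hi]
        weights = softmax(window @ h_t)
        context = weights @ window
        return context, weights

    rng = np.random.default_rng(0)
    enc = rng.normal(size=(10, 4))   # 10 source positions, hidden dim 4
    h = rng.normal(size=4)           # current decoder state
    ctx_g, w_g = global_attention(h, enc)
    ctx_l, w_l = local_attention(h, enc, p_t=5, D=2)
    ```

    The local variant trades a small amount of coverage for cost that is independent of source length, which is where its practical gains come from.
    
    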

    Deep Neural Machine Translation with Linear Associative Unit

    Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art in Neural Machine Translation (NMT) through their capability to model complex functions and capture complex linguistic structures. However, NMT systems with deep architectures in their encoder or decoder RNNs often suffer from severe gradient diffusion due to the non-linear recurrent activations, which makes optimization much more difficult. To address this problem we propose novel linear associative units (LAU) to reduce the gradient propagation length inside the recurrent unit. Unlike conventional approaches (the LSTM unit and GRU), LAUs utilize linear associative connections between the input and output of the recurrent unit, which allow unimpeded information flow in both the space and time directions. The model is quite simple, but it is surprisingly effective. Our empirical study on Chinese-English translation shows that our model with proper configuration can improve by 11.7 BLEU over Groundhog and the best reported results in the same setting. On the WMT14 English-German task and the larger WMT14 English-French task, our model achieves results comparable with the state-of-the-art. Comment: 10 pages, ACL 201
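    The core mechanism described above, a gated linear path from input to output alongside the usual non-linear recurrent candidate, can be sketched as follows. Weight names and the exact gating layout are illustrative assumptions, not the paper's equations:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class LinearAssociativeCell:
        """Sketch of a linear-associative recurrent cell: a gate mixes the
        usual tanh recurrent candidate with a LINEAR transform of the input,
        so information (and gradients) can pass without repeated squashing."""
        def __init__(self, dim, seed=0):
            rng = np.random.default_rng(seed)
            s = 0.1
            self.Wh = rng.normal(scale=s, size=(dim, dim))  # recurrent weights
            self.Wx = rng.normal(scale=s, size=(dim, dim))  # input weights
            self.Wl = rng.normal(scale=s, size=(dim, dim))  # linear path
            self.Wg = rng.normal(scale=s, size=(dim, dim))  # gate weights

        def step(self, x, h_prev):
            g = sigmoid(self.Wg @ x)                          # path gate
            cand = np.tanh(self.Wh @ h_prev + self.Wx @ x)    # non-linear path
            # gated linear connection: when g is near 1, the new state is an
            # (almost) linear function of the input, shortening gradient paths
            return (1.0 - g) * cand + g * (self.Wl @ x)

    cell = LinearAssociativeCell(dim=4)
    h = np.zeros(4)
    rng = np.random.default_rng(1)
    for _ in range(5):                 # unroll a few timesteps
        h = cell.step(rng.normal(size=4), h)
    ```

    The gated linear term plays a role analogous to a residual/highway connection through time, which is what keeps gradients from vanishing in deep recurrent stacks.
    
    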