
    Design pipe bracket for vessel with using Titanium metal in marine environment

    This research investigates the design of a titanium pipe bracket for an ocean-going vessel operating in a marine environment. The first aim of the report is to study the performance of titanium compared with other materials used in marine environments; the second is to design a pipe bracket for the vessel and then simulate and calculate the loads applied to it. These aims were pursued through every phase of the project. The first phase gathered information on the primary mechanical properties of titanium (light weight, flexibility, and strong resistance to corrosion), on the corrosion behaviour of the pipe material, and on how it interacts with titanium and sea water. The second phase compared three support types (rigid, adjustable, and elastic) and selected the adjustable type, because its nuts and bolts can be rearranged to adjust the support during use on the vessel; this phase also identified the standard pipe sizes used in different locations and produced a design drawing of the bracket. The next phase carried out a mechanical analysis of the bracket model in SolidWorks, and the maximum loads applied to the bracket were calculated using the relevant formulas. The final phase considered the manufacturing process for the pipe bracket and estimated the primary cost of making and selling it.
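The load calculation described in the abstract can be illustrated with a minimal sketch of a cantilever-style bracket check. All numbers, section dimensions, and the load case below are hypothetical placeholders, not the report's actual design values.

```python
# Hypothetical static check of a pipe bracket treated as a cantilever.
# All values are illustrative assumptions, not the report's design data.
pipe_mass = 12.0   # kg of pipe carried by the bracket (assumed)
g = 9.81           # m/s^2, gravitational acceleration
arm = 0.15         # m, distance from mounting face to load point (assumed)

load = pipe_mass * g      # N, weight carried by the bracket
moment = load * arm       # N*m, bending moment at the mounting face

# Bending stress in a rectangular bracket section: sigma = M*c/I,
# with second moment of area I = b*h^3/12 and c = h/2.
b, h = 0.04, 0.006        # m, assumed section width and thickness
I = b * h**3 / 12
sigma = moment * (h / 2) / I   # Pa, maximum bending stress

# Compare against a typical yield strength for a titanium alloy
# (~830 MPa for Ti-6Al-4V, a commonly quoted figure).
safety_factor = 830e6 / sigma
print(f"stress = {sigma / 1e6:.1f} MPa, safety factor = {safety_factor:.1f}")
```

A finite-element analysis in SolidWorks, as used in the report, would refine this hand calculation with the bracket's real geometry and boundary conditions.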

    Hunting for New Physics with Unitarity Boomerangs

    Although the unitarity triangles (UTs) carry information about the Kobayashi-Maskawa (KM) quark mixing matrix, each of them explicitly contains just three parameters, one short of completely fixing the KM matrix. We have recently shown that a unitarity boomerang (UB), formed from two UTs with a common inner angle, can completely determine the KM matrix and therefore better represents quark mixing. Here, we study detailed properties of the UBs, of which there are 18 in total. Among them, only one does not involve very small angles and is thus the ideal one for practical use. Although the UBs have different areas, there is a quantity, invariant across all UBs, which is equal to a quarter of the Jarlskog parameter J squared. Hunting for new physics with a unitarity boomerang can reveal more information than using a unitarity triangle alone.
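The quoted invariant has a simple geometric reading: the area of every unitarity triangle equals half the magnitude of the Jarlskog invariant J, a standard result. On that reading, the product of the areas of the two triangles forming a boomerang reproduces the stated value (identifying the invariant with this product is our interpretation of the abstract, not a statement from the paper itself):

```latex
A_{UT} = \frac{|J|}{2}
\quad\Longrightarrow\quad
A_{UT_1}\, A_{UT_2} = \frac{J^2}{4},
```

independent of which pair of unitarity triangles is chosen to form the boomerang.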

    Question-Answering with Grammatically-Interpretable Representations

    We introduce an architecture, the Tensor Product Recurrent Network (TPRN). In our application of TPRN, internal representations learned by end-to-end optimization in a deep neural network performing a textual question-answering (QA) task can be interpreted using basic concepts from linguistic theory. No performance penalty need be paid for this increased interpretability: the proposed model performs comparably to a state-of-the-art system on the SQuAD QA task. The internal representation which is interpreted is a Tensor Product Representation: for each input word, the model selects a symbol to encode the word and a role in which to place the symbol, and binds the two together. The selection is via soft attention. The overall interpretation is built from interpretations of the symbols, as recruited by the trained model, and interpretations of the roles as used by the model. We find support for our initial hypothesis that symbols can be interpreted as lexical-semantic word meanings, while roles can be interpreted as approximations of grammatical roles (or categories) such as subject, wh-word, determiner, etc. Fine-grained analysis reveals specific correspondences between the learned roles and parts of speech as assigned by a standard tagger (Toutanova et al., 2003), and finds several discrepancies in the model's favor. In this sense, the model learns significant aspects of grammar after having been exposed solely to linguistically unannotated text, questions, and answers: no prior linguistic knowledge is given to the model. What is given is the means to build representations using symbols and roles, with an inductive bias favoring use of these in an approximately discrete manner.
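The symbol/role binding mechanism the abstract describes can be sketched with a toy Tensor Product Representation. The vectors, dimensions, and words below are made-up illustrative values, not the model's learned embeddings, and real TPRN fillers and roles are selected by soft attention rather than hand-assigned.

```python
# Minimal sketch of TPR binding: each filler (symbol) vector is bound
# to a role vector by an outer product, and bindings are summed.

def outer(filler, role):
    """Outer product of a filler vector and a role vector."""
    return [[f * r for r in role] for f in filler]

def add(a, b):
    """Element-wise sum of two matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# Toy filler (word-symbol) and role vectors, chosen orthonormal.
fillers = {"cat": [1.0, 0.0], "sat": [0.0, 1.0]}
roles = {"subject": [1.0, 0.0, 0.0], "verb": [0.0, 1.0, 0.0]}

# The TPR of a structure is the sum of its filler-role bindings.
tpr = add(outer(fillers["cat"], roles["subject"]),
          outer(fillers["sat"], roles["verb"]))

def unbind(tpr, role):
    """With orthonormal roles, multiplying the TPR by a role vector
    recovers the filler bound to that role."""
    return [sum(x * r for x, r in zip(row, role)) for row in tpr]

print(unbind(tpr, roles["subject"]))  # recovers the "cat" filler [1.0, 0.0]
```

The unbinding step is what makes the learned representation inspectable: querying the TPR with a role vector reveals which symbol the model placed in that role.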

    EEF: Exponentially Embedded Families with Class-Specific Features for Classification

    In this letter, we present a novel exponentially embedded families (EEF) based classification method, in which the probability density function (PDF) on raw data is estimated from the PDF on features. With this PDF construction, we show that class-specific features can be used in the proposed classification method, instead of a common feature subset for all classes as used in conventional approaches. We apply the proposed EEF classifier to text categorization as a case study and derive an optimal Bayesian classification rule with class-specific feature selection based on the Information Gain (IG) score. The promising performance on real-life data sets demonstrates the effectiveness of the proposed approach and indicates its wide potential applications.
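The IG score used for feature selection can be sketched in a few lines. This is the standard information-gain formula for a binary term in text categorization, not the paper's EEF construction itself, and the toy counts are invented for illustration.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(n_ct, n):
    """IG score of a binary feature t: IG(t) = H(C) - H(C|t).

    n_ct[c] = (docs of class c without t, docs of class c with t);
    n = total number of documents.
    """
    h_c = entropy([sum(row) / n for row in n_ct])   # class prior entropy H(C)
    h_c_given_t = 0.0                               # conditional entropy H(C|t)
    for t in (0, 1):
        n_t = sum(row[t] for row in n_ct)
        if n_t:
            h_c_given_t += (n_t / n) * entropy([row[t] / n_t for row in n_ct])
    return h_c - h_c_given_t

# Toy corpus of 20 documents: a term concentrated in one class is informative.
print(information_gain([[8, 2], [2, 8]], 20))   # ~0.278 bits
```

In a class-specific scheme like the one the letter proposes, each class would rank features by such a score and retain its own top subset, rather than all classes sharing one subset.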

    Tensor Product Generation Networks for Deep NLP Modeling

    We present a new approach to the design of deep networks for natural language processing (NLP), based on the general technique of Tensor Product Representations (TPRs) for encoding and processing symbol structures in distributed neural networks. A network architecture, the Tensor Product Generation Network (TPGN), is proposed that is capable in principle of carrying out TPR computation but uses unconstrained deep learning to design its internal representations. Instantiated in a model for image-caption generation, TPGN outperforms LSTM baselines when evaluated on the COCO dataset. The TPR-capable structure enables interpretation of internal representations and operations, which prove to contain considerable grammatical content. Our caption-generation model can be interpreted as generating sequences of grammatical categories and retrieving words by their categories from a plan encoded as a distributed representation.
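The idea of retrieving words from a distributed plan can be sketched with a toy decoding loop: a caption is encoded as a sum of word-vector and position-role outer products, and decoding unbinds one position at a time. The vocabulary, vectors, and role inventory below are hypothetical toy values, not TPGN's learned representations.

```python
# Toy TPR-style generation: encode a caption into one "plan" matrix,
# then decode it position by position. All vectors are invented values.

words = {"a": [1.0, 0.0, 0.0], "dog": [0.0, 1.0, 0.0], "runs": [0.0, 0.0, 1.0]}
positions = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # orthonormal roles

def outer(f, r):
    return [[fi * rj for rj in r] for fi in f]

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# Encode "a dog runs" as a sum of word-by-position bindings.
plan = [[0.0] * 3 for _ in range(3)]
for word, role in zip(["a", "dog", "runs"], positions):
    plan = add(plan, outer(words[word], role))

def unbind(plan, role):
    """Query the plan with a position role to recover that word's vector."""
    return [sum(x * r for x, r in zip(row, role)) for row in plan]

def nearest_word(vec):
    """Retrieve the vocabulary word whose vector best matches vec."""
    return max(words, key=lambda w: sum(a * b for a, b in zip(words[w], vec)))

caption = [nearest_word(unbind(plan, role)) for role in positions]
print(" ".join(caption))  # "a dog runs"
```

In TPGN the analogue of the role sequence is generated by the network and interpreted as a sequence of grammatical categories, with word retrieval conditioned on each category.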