
    Accelerating Random Kaczmarz Algorithm Based on Clustering Information

    The Kaczmarz algorithm is an efficient iterative method for solving overdetermined consistent systems of linear equations. In each update step, Kaczmarz selects the hyperplane defined by a single equation and projects the current estimate of the exact solution onto that hyperplane to obtain a new estimate. Many variants of the Kaczmarz algorithm have been proposed that choose better hyperplanes. Exploiting the properties of randomly sampled data in high-dimensional space, we propose an accelerated algorithm based on clustering information that improves block Kaczmarz and Kaczmarz via the Johnson-Lindenstrauss lemma. Additionally, we theoretically demonstrate the convergence improvement for the block Kaczmarz algorithm.
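    As a hedged sketch of the baseline iteration the abstract builds on (not the paper's accelerated clustering-based method), the classical randomized Kaczmarz projection step can be written as:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """At each step, sample one equation a_i^T x = b_i and project the
    current estimate onto the hyperplane it defines."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Sample rows with probability proportional to their squared norm
    # (the standard Strohmer-Vershynin sampling scheme).
    norms2 = np.einsum("ij,ij->i", A, A)
    probs = norms2 / norms2.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Orthogonal projection onto {x : a_i^T x = b_i}
        x += (b[i] - A[i] @ x) / norms2[i] * A[i]
    return x

# Consistent overdetermined system: b = A @ x_true by construction
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x = randomized_kaczmarz(A, b)
print(np.linalg.norm(x - x_true))  # converges toward the exact solution
```

    For a well-conditioned consistent system like this one, the iterate converges linearly in expectation to the exact solution.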

    Gas pressure sintering of BN/Si3N4 wave-transparent material with Y2O3–MgO nanopowders addition

    BN/Si3N4 ceramics serving as a wave-transparent material in spacecraft were fabricated from boron nitride powders, silicon nitride powders and Y2O3–MgO nanopowders by gas pressure sintering at 1700°C under 6 MPa in an N2 atmosphere. The effects of the Y2O3–MgO nanopowders on the densification, phase evolution, microstructure and mechanical properties of the BN/Si3N4 material were investigated. The addition of Y2O3–MgO nanopowders was found to benefit the mechanical properties of the BN/Si3N4 composites. The BN/Si3N4 ceramics with 8 wt% Y2O3–MgO nanopowders showed a relative density of 80.2%, combining a fracture toughness of 4.6 MPa·m^1/2 with an acceptable flexural strength of 396.5 MPa.

    Kirchhoff index of composite graphs

    Let G1+G2, G1∘G2 and G1{G2} be the join, corona and cluster of graphs G1 and G2, respectively. In this paper, Kirchhoff index formulae for these composite graphs are given.
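    As a hedged illustration of the quantity the paper studies (not its composite-graph formulae), the Kirchhoff index of a connected graph can be computed from the nonzero eigenvalues of its Laplacian:

```python
import numpy as np

def kirchhoff_index(adj):
    """Kirchhoff index Kf(G) = n * sum of reciprocals of the nonzero
    Laplacian eigenvalues; equivalently, the sum of effective
    resistances over all vertex pairs of a connected graph."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian L = D - A
    mu = np.linalg.eigvalsh(L)       # eigenvalues in ascending order
    nonzero = mu[mu > 1e-9]          # drop the single zero eigenvalue
    return n * np.sum(1.0 / nonzero)

# Complete graph K_3: Laplacian eigenvalues are 0, 3, 3,
# so Kf(K_3) = 3 * (1/3 + 1/3) = 2.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(kirchhoff_index(K3))  # ≈ 2.0
```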

    Whole and skimmed cheese whey for finishing pigs

    An experiment with a completely randomized design was carried out on 40 castrated males to compare the growth response and feed conversion of whole cheese whey (1% fat) versus skimmed whey in the feeding of finishing pigs. The composition of the base whey, in g/litre, was: dry extract 60; lactose 49.7; protein 6.9; fat 1.0; minerals 9; calcium 0.39; phosphorus 0.13. Both groups received a supplement of sorghum grain (dry matter 91%, crude protein 11.3%, fat 3%, fibre 2.1%, calcium 0.04%, phosphorus 0.32%) at 1.5 kg/head from 60 to 80 kg of live weight, and 2 kg/head from 80 kg until the end of the trial at 44 days. The whey was offered daily at 10 litres per animal during the first period and 20 litres per animal in the second. Using the growth data collected for the groups, a test of parallelism between regression lines was performed on the following equations: Y1 = 59.99 + 0.681x (skimmed whey) and Y2 = 62.28 + 0.889x (whole whey), with differences found at the P < 0.10 level in favour of the whole whey. The overall feed conversion of the groups on a dry-matter basis was 3.63:1 (skimmed whey) and 3.13:1 (whole whey).
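    A minimal sketch of the arithmetic behind the two fitted growth lines, assuming x is days on trial, Y is live weight in kg, and that both intercepts are positive (consistent with the roughly 60 kg starting weight; the slopes are the reported daily gains):

```python
# Fitted growth lines reported in the abstract (illustrative evaluation,
# assuming positive intercepts near the 60 kg starting weight).
def skimmed(x):   # skimmed whey group: Y1 = 59.99 + 0.681x
    return 59.99 + 0.681 * x

def whole(x):     # whole whey group: Y2 = 62.28 + 0.889x
    return 62.28 + 0.889 * x

days = 44  # trial length
gain_skim = skimmed(days) - skimmed(0)   # 0.681 kg/day over 44 days
gain_whole = whole(days) - whole(0)      # 0.889 kg/day over 44 days
print(round(gain_skim, 2), round(gain_whole, 2))  # 29.96 39.12
```

    The roughly 0.2 kg/day difference in slope is what the parallelism test found significant at P < 0.10.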

    Learning to Prove Trigonometric Identities

    Automatic theorem proving with deep learning methods has attracted attention recently. In this paper, we construct an automatic proof system for trigonometric identities. We define the normalized form of trigonometric identities, design a set of rules for the proofs, and put forward a method that can generate a theoretically infinite number of trigonometric identities. Our goal is not only to complete the proof, but to complete it in as few steps as possible. To this end, we design a model that learns from proof data generated by random BFS (rBFS), and it is proved theoretically and experimentally that the model can outperform rBFS after simple imitation learning. After further improvement through reinforcement learning, we obtain AutoTrig, which can give proof steps for identities in almost as few steps as BFS (the theoretically shortest method), at only one-thousandth of the time cost. In addition, AutoTrig beats Sympy, Matlab and humans on the synthetic dataset, and performs well on many generalization tasks.
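    A toy sketch of the rule-based BFS proof search the abstract describes, with an illustrative rule set and plain string rewriting (the paper's actual rules and normalized form are richer than this):

```python
from collections import deque

# Illustrative rewrite rules (not the paper's rule set): each pair
# (lhs, rhs) means any occurrence of lhs may be replaced by rhs.
RULES = [
    ("sin(2x)", "2sin(x)cos(x)"),
    ("cos(2x)", "cos(x)^2-sin(x)^2"),
    ("sin(x)^2", "1-cos(x)^2"),
]

def bfs_prove(start, goal, max_depth=5):
    """Breadth-first search for a shortest rewrite sequence from start
    to goal; returns the list of applied rules, or None if not found."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        expr, path = queue.popleft()
        if expr == goal:
            return path
        if len(path) >= max_depth:
            continue
        for lhs, rhs in RULES:
            i = expr.find(lhs)
            while i != -1:
                # Apply the rule at one occurrence to get a new state
                new = expr[:i] + rhs + expr[i + len(lhs):]
                if new not in seen:
                    seen.add(new)
                    queue.append((new, path + [(lhs, rhs)]))
                i = expr.find(lhs, i + 1)
    return None

proof = bfs_prove("sin(2x)", "2sin(x)cos(x)")
print(len(proof))  # 1 rewrite step
```

    Because BFS explores proofs in order of length, the first proof found is a shortest one; the paper's learned model aims to match this step count while avoiding BFS's exhaustive search cost.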