
    Note On Certain Inequalities for Neuman Means

    In this paper, we give explicit formulas for the Neuman means $N_{AH}$, $N_{HA}$, $N_{AC}$, and $N_{CA}$, and present the best possible upper and lower bounds for these means in terms of combinations of the harmonic mean $H$, the arithmetic mean $A$, and the contraharmonic mean $C$. Comment: 9 pages.
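    For reference, the three classical means named above have standard textbook definitions (these are background facts, not results of the paper); the Neuman means are then bounded by combinations of them:

```latex
% Classical bivariate means for a, b > 0 (standard definitions):
H(a,b) = \frac{2ab}{a+b}, \qquad
A(a,b) = \frac{a+b}{2}, \qquad
C(a,b) = \frac{a^2 + b^2}{a+b}.
% For a \neq b they satisfy the strict chain H(a,b) < A(a,b) < C(a,b),
% which makes combinations of H, A, and C natural bounding candidates.
```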

    Monotonicity of the Ratio of the Power and Second Seiffert Means with Applications

    We present a necessary and sufficient condition for the monotonicity of the ratio of the power and second Seiffert means. As applications, we obtain sharp upper and lower bounds for the second Seiffert mean in terms of the power mean.
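    For context, the two means being compared are usually defined as follows (standard definitions, stated here as background rather than taken from the paper):

```latex
% Power mean of order p and second Seiffert mean T, for a, b > 0, a \neq b:
M_p(a,b) = \left( \frac{a^p + b^p}{2} \right)^{1/p} \quad (p \neq 0),
\qquad M_0(a,b) = \sqrt{ab},
\qquad
T(a,b) = \frac{a - b}{2 \arctan\!\left( \frac{a - b}{a + b} \right)}.
% Sharp bounds of the advertised kind take the form
% M_p(a,b) < T(a,b) < M_q(a,b) for the best possible exponents p and q.
```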

    Preparation of FeO(OH) Modified with Polyethylene Glycol and Its Catalytic Activity on the Reduction of Nitrobenzene with Hydrazine Hydrate

    Iron oxyhydroxide was prepared by adding aqueous ammonia dropwise to Fe(NO3)3·9H2O dispersed in polyethylene glycol (PEG) 1000. The catalyst was characterized by X-ray powder diffraction, Fourier transform infrared spectroscopy, and laser particle size analysis. The results showed that the catalyst modified with polyethylene glycol was amorphous, and that adding PEG during preparation made the catalyst particles smaller and more uniform in size. The catalytic performance was tested in the reduction of nitroarenes to the corresponding amines with hydrazine hydrate, and the catalyst showed excellent activity and stability.
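    The reaction being catalyzed is the transfer hydrogenation of a nitroarene by hydrazine. For nitrobenzene, the commonly cited overall stoichiometry is the textbook balance below (the paper's exact conditions and side products may differ):

```latex
% Overall reduction of nitrobenzene by hydrazine hydrate over FeO(OH):
2\,\mathrm{C_6H_5NO_2} + 3\,\mathrm{N_2H_4}
  \;\longrightarrow\;
2\,\mathrm{C_6H_5NH_2} + 3\,\mathrm{N_2} + 4\,\mathrm{H_2O}
```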

    A HINT from Arithmetic: On Systematic Generalization of Perception, Syntax, and Semantics

    Inspired by humans' remarkable ability to master arithmetic and generalize to unseen problems, we present a new dataset, HINT, to study machines' capability of learning generalizable concepts at three different levels: perception, syntax, and semantics. In particular, concepts in HINT, covering both digits and operators, must be learned in a weakly supervised fashion: only the final results of handwritten expressions are provided as supervision. Learning agents need to infer how concepts are perceived from raw signals such as images (i.e., perception), how multiple concepts are structurally combined to form a valid expression (i.e., syntax), and how concepts are realized to afford various reasoning tasks (i.e., semantics). With a focus on systematic generalization, we carefully design a five-fold test set to evaluate both the interpolation and the extrapolation of learned concepts. To tackle this challenging problem, we propose a neural-symbolic system that integrates neural networks with grammar parsing and program synthesis, learned by a novel deduction-abduction strategy. In experiments, the proposed neural-symbolic system demonstrates strong generalization capability and significantly outperforms end-to-end neural methods such as RNNs and Transformers. The results also indicate the significance of recursive priors for extrapolation on syntax and semantics. Comment: Preliminary work.
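    To make the three levels concrete, here is a minimal toy sketch in Python of the perception-syntax-semantics decomposition the abstract describes. All names are hypothetical; the "perception" step is a lookup standing in for a learned image recognizer, and only the final value is checked, mirroring HINT's weak supervision:

```python
# Toy illustration (not the paper's code) of the three levels HINT probes:
# perception maps raw inputs to symbols, syntax builds an expression tree,
# semantics evaluates it. Supervision is weak: only the final value is known.

from dataclasses import dataclass
from typing import Union

# --- Perception: stand-in for a neural recognizer over handwritten glyphs.
def perceive(glyphs: list[str]) -> list[str]:
    vocab = {"one": "1", "two": "2", "three": "3", "plus": "+", "times": "*"}
    return [vocab[g] for g in glyphs]

# --- Syntax: expression tree built with the usual precedence (* binds tighter).
@dataclass
class Node:
    op: str
    left: Union["Node", int]
    right: Union["Node", int]

def parse(tokens: list[str]) -> Union[Node, int]:
    def parse_term(pos):
        value, pos = int(tokens[pos]), pos + 1
        while pos < len(tokens) and tokens[pos] == "*":
            rhs = int(tokens[pos + 1])
            value, pos = Node("*", value, rhs), pos + 2
        return value, pos
    value, pos = parse_term(0)
    while pos < len(tokens) and tokens[pos] == "+":
        rhs, pos = parse_term(pos + 1)
        value = Node("+", value, rhs)
    return value

# --- Semantics: recursive evaluation of the tree.
def evaluate(tree: Union[Node, int]) -> int:
    if isinstance(tree, int):
        return tree
    left, right = evaluate(tree.left), evaluate(tree.right)
    return left + right if tree.op == "+" else left * right

# Weak supervision: only the final result (11) is given for the whole input.
tokens = perceive(["two", "plus", "three", "times", "three"])
assert evaluate(parse(tokens)) == 11
```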

    Neural-Symbolic Recursive Machine for Systematic Generalization

    Despite their tremendous success, existing machine learning models still fall short of human-like systematic generalization: learning compositional rules from limited data and applying them to unseen combinations in various domains. We propose the Neural-Symbolic Recursive Machine (NSR) to tackle this deficiency. The core representation of NSR is a Grounded Symbol System (GSS) with combinatorial syntax and semantics, which emerges entirely from training data. Echoing neuroscience studies that suggest separate brain systems for perceptual, syntactic, and semantic processing, NSR implements analogous separate modules of neural perception, syntactic parsing, and semantic reasoning, which are jointly learned by a deduction-abduction algorithm. We prove that NSR is expressive enough to model various sequence-to-sequence tasks. Superior systematic generalization is achieved via the inductive biases of equivariance and recursiveness embedded in NSR. In experiments, NSR achieves state-of-the-art performance on three benchmarks from different domains: SCAN for semantic parsing, PCFG for string manipulation, and HINT for arithmetic reasoning. Specifically, NSR achieves 100% generalization accuracy on SCAN and PCFG and outperforms state-of-the-art models on HINT by about 23%. NSR demonstrates stronger generalization than pure neural networks due to its symbolic representation and inductive biases, and better transferability than existing neural-symbolic approaches because it requires less domain-specific knowledge.
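    To illustrate the flavor of deduction-abduction learning (a hypothetical toy, not the authors' algorithm), the sketch below abduces a one-token correction to a misperceived symbol sequence so that it evaluates to the known final answer; the corrected sequence could then serve as pseudo-supervision for the perception module. Python arithmetic stands in for the semantics:

```python
# Toy abduction step: when predicted symbols evaluate to the wrong answer,
# search nearby symbol sequences that do evaluate to the target.
# All names here are hypothetical.

VOCAB = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "*"]

def result(tokens: list[str]) -> int | None:
    try:
        return eval(" ".join(tokens))  # toy semantics: Python arithmetic
    except SyntaxError:
        return None  # e.g. two adjacent digits or a leading operator

def abduce(predicted: list[str], target: int) -> list[str] | None:
    """Return a one-token revision of `predicted` that evaluates to `target`."""
    if result(predicted) == target:
        return predicted
    for i in range(len(predicted)):
        for symbol in VOCAB:
            candidate = predicted[:i] + [symbol] + predicted[i + 1:]
            if result(candidate) == target:
                return candidate  # pseudo-label for the perception network
    return None  # no single-token fix; a real system would search deeper

# Perception misread "4" as "1"; abduction recovers a consistent reading.
print(abduce(["2", "+", "1", "*", "3"], target=14))  # ['2', '+', '4', '*', '3']
```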