22 research outputs found

    Green finance and industrial pollution: Empirical research based on spatial perspective

    Get PDF
    Green finance is an important means of promoting industrial pollution reduction. This paper uses the entropy method and the TOPSIS model to calculate green finance development and industrial pollution indices for 30 Chinese provinces from 2011 to 2019, and then applies a panel regression model and a spatial Durbin model (SDM) to test the impact of green finance on industrial pollution. The study finds a significant positive spatial correlation between green finance and industrial pollution; at the same time, green finance generally inhibits industrial pollution. The effect decomposition results show that green finance has a significant negative direct effect and spatial spillover effect on industrial pollution.
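    The entropy-weighting and TOPSIS steps mentioned in the abstract follow a standard pipeline; the sketch below uses hypothetical indicator data (the paper's actual indicator system is not reproduced here) and assumes all indicators are positive and larger-is-better.

    ```python
    import numpy as np

    def entropy_weights(X):
        # X: (n_samples, n_indicators), positive values, larger = better.
        # Column-normalize to proportions, compute information entropy per
        # indicator, then weight indicators by their divergence (1 - entropy).
        P = X / X.sum(axis=0)
        n = X.shape[0]
        E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(n)
        d = 1.0 - E
        return d / d.sum()

    def topsis_scores(X, w):
        # Vector-normalize each column, apply entropy weights, then score each
        # sample by relative closeness to the ideal (best) point.
        V = w * (X / np.linalg.norm(X, axis=0))
        best, worst = V.max(axis=0), V.min(axis=0)
        d_best = np.linalg.norm(V - best, axis=1)
        d_worst = np.linalg.norm(V - worst, axis=1)
        return d_worst / (d_best + d_worst)

    # Hypothetical data: 3 provinces, 3 indicators.
    X = np.array([[3.0, 200.0, 0.5],
                  [5.0, 150.0, 0.9],
                  [4.0, 300.0, 0.7]])
    w = entropy_weights(X)
    scores = topsis_scores(X, w)
    ```

    Each score lies in [0, 1], with higher values indicating a province closer to the ideal alternative under the entropy-derived weights.
    
    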

    Long-term Leap Attention, Short-term Periodic Shift for Video Classification

    Full text link
    A video transformer naturally incurs a heavier computation burden than a static vision transformer, as the former processes a sequence T times longer than the latter under standard attention of quadratic complexity, O(T²N²). Existing works treat the temporal axis as a simple extension of the spatial axes, focusing on shortening the spatio-temporal sequence by either generic pooling or local windowing, without utilizing temporal redundancy. However, videos naturally contain redundant information between neighboring frames; thereby, we could potentially suppress attention on visually similar frames in a dilated manner. Based on this hypothesis, we propose LAPS, a long-term "Leap Attention" (LA), short-term "Periodic Shift" (P-Shift) module for video transformers, with O(2TN²) complexity. Specifically, LA groups long-term frames into pairs, then refactors each discrete pair via attention. P-Shift exchanges features between temporal neighbors to confront the loss of short-term dynamics. By replacing a vanilla 2D attention with LAPS, we could adapt a static transformer into a video one, with zero extra parameters and negligible computation overhead (~2.6%). Experiments on the standard Kinetics-400 benchmark demonstrate that our LAPS transformer achieves competitive performance in terms of accuracy, FLOPs, and Params among CNN and transformer SOTAs. We open-source our project at https://github.com/VideoNetworks/LAPS-transformer. Comment: Accepted by ACM Multimedia 2022, 10 pages, 4 figures.
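    The short-term shift idea resembles TSM-style temporal channel shifting: a fraction of channels is exchanged with neighboring frames at zero parameter cost. The sketch below is a generic temporal-shift illustration, not the authors' exact P-Shift implementation; the (T, N, C) tensor layout and the 1/8 fold ratio are assumptions.

    ```python
    import numpy as np

    def temporal_shift(x, fold_div=8):
        # x: (T, N, C) = frames, tokens, channels (assumed layout).
        # Shift the first `fold` channels one frame backward in time and the
        # next `fold` channels one frame forward; the rest stay in place.
        T, N, C = x.shape
        fold = C // fold_div
        out = x.copy()
        out[:-1, :, :fold] = x[1:, :, :fold]          # pull from next frame
        out[-1, :, :fold] = 0                         # zero-pad the boundary
        out[1:, :, fold:2 * fold] = x[:-1, :, fold:2 * fold]  # pull from previous
        out[0, :, fold:2 * fold] = 0
        return out

    x = np.random.rand(4, 16, 32)   # 4 frames, 16 tokens, 32 channels
    y = temporal_shift(x)
    fold = 32 // 8
    ```

    Because the shift is a pure memory rearrangement, it mixes short-term dynamics into each frame's features before the (spatial) attention runs, without adding learnable weights.
    
    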

    Cross-Modality High-Frequency Transformer for MR Image Super-Resolution

    Full text link
    Improving the resolution of magnetic resonance (MR) image data is critical to computer-aided diagnosis and brain function analysis. Higher resolution helps to capture more detailed content, but typically leads to a lower signal-to-noise ratio and longer scanning times. To this end, MR image super-resolution has become a topic of wide interest in recent times. Existing works establish extensive deep models with conventional architectures based on convolutional neural networks (CNNs). In this work, to further advance this research field, we make an early effort to build a Transformer-based MR image super-resolution framework, with careful designs for exploiting valuable domain prior knowledge. Specifically, we consider two-fold domain priors, the high-frequency structure prior and the inter-modality context prior, and establish a novel Transformer architecture, called Cross-modality high-frequency Transformer (Cohf-T), to introduce such priors into super-resolving low-resolution (LR) MR images. Comprehensive experiments on two datasets indicate that Cohf-T achieves new state-of-the-art performance.

    NLPBench: Evaluating Large Language Models on Solving NLP Problems

    Full text link
    Recent developments in large language models (LLMs) have shown promise in enhancing the capabilities of natural language processing (NLP). Despite these successes, there remains a dearth of research dedicated to the NLP problem-solving abilities of LLMs. To fill this gap, we present a unique benchmarking dataset, NLPBench, comprising 378 college-level NLP questions spanning various NLP topics sourced from Yale University's prior final exams. NLPBench includes questions with context, in which multiple sub-questions share the same public information, and diverse question types, including multiple choice, short answer, and math. Our evaluation, centered on LLMs such as GPT-3.5/4, PaLM-2, and LLAMA-2, incorporates advanced prompting strategies like chain-of-thought (CoT) and tree-of-thought (ToT). Our study reveals that the effectiveness of advanced prompting strategies can be inconsistent, occasionally damaging LLM performance, especially in smaller models like LLAMA-2 (13b). Furthermore, our manual assessment illuminated specific shortcomings in LLMs' scientific problem-solving skills, with weaknesses in logical decomposition and reasoning notably affecting results.

    A Survey of Neural Trees

    Full text link
    Neural networks (NNs) and decision trees (DTs) are both popular models of machine learning, yet they come with mutually exclusive advantages and limitations. To bring the best of the two worlds together, a variety of approaches have been proposed to integrate NNs and DTs explicitly or implicitly. In this survey, these approaches are organized into a school of methods that we term neural trees (NTs). This survey aims to present a comprehensive review of NTs and attempts to identify how they enhance model interpretability. We first propose a thorough taxonomy of NTs that expresses the gradual integration and co-evolution of NNs and DTs. Afterward, we analyze NTs in terms of their interpretability and performance, and suggest possible solutions to the remaining challenges. Finally, this survey concludes with a discussion of other considerations, such as conditional computation, and promising directions for this field. A list of papers reviewed in this survey, along with their corresponding code, is available at: https://github.com/zju-vipa/awesome-neural-trees. Comment: 35 pages, 7 figures and 1 table.

    An investigation of the Brazier effect of a cylindrical tube under pure elastic-plastic bending

    No full text

    The elastic wrinkling of an annular plate under uniform tension on its inner edge

    No full text
    This paper analyses the elastic wrinkling of an annular plate subjected to in-plane uniform tensile stress on its inner edge with the combined use of the Kantorovich method and the Galerkin method, and discusses the appearance of wrinkles on the flange of a metal circular sheet during its axisymmetric deep-drawing operation.

    The plastic wrinkling of an annular plate under uniform tension on its inner edge

    No full text
    This paper analyses the plastic wrinkling of an annular plate subjected to in-plane uniform tensile stress on its inner edge with the combined use of the Kantorovich method and the Galerkin method, and discusses the appearance of wrinkles on the flange of a metal circular sheet during its axisymmetric deep-drawing operation. It is shown that the method provided in this paper is simple, convenient, and well suited to engineering applications.

    An experimental investigation into the stamping of elastic-plastic circular plates

    No full text
    This paper reports the results of an experimental investigation into the deformation processes of circular plates pressed by cylindrical/hemispherical punches into conical dies, showing the variations of strain distribution, the development of deformation, the relationship between wrinkling loads and plate/punch dimensions, the springback, and the wrinkling modes. It presents some useful information for manufacturing engineers on the design of forming tools for sheet metals.