
    Monobutyl phthalate induces the expression change of G-Protein-Coupled Receptor 30 in rat testicular Sertoli cells

    The aim of the study was to explore whether G-Protein-Coupled Receptor 30 (GPR30) is expressed in rat testicular Sertoli cells and to assess the impact of monobutyl phthalate (MBP) on GPR30 expression in these cells. Using RT-PCR, Western blot and immunofluorescence microscopy, GPR30 expression was detected in rat Sertoli cells at both the gene and protein levels. Cultures of Sertoli cells were exposed to MBP (10–1000 mM) or a vehicle. The results indicated that GPR30 expression increased at both the gene and protein levels in Sertoli cells following administration of MBP, even at relatively low concentrations. We suggest that changes in GPR30 expression may play an important role in the effects of the xenoestrogen MBP on Sertoli cell function. (Folia Histochemica et Cytobiologica 2013, Vol. 51, No. 1, 18–24)

    Analysis on the vibration modes of the electric vehicle motor stator

    The lightweight design of electric vehicle motors has brought about more serious vibration and noise problems. An accurate modal calculation is the basis for studying the vibration and noise characteristics of an electric vehicle motor. The finite element method was used to perform a modal simulation of the PMSM. Through reasonable simplification and equivalence of the motor stator model, the first seven natural frequencies and corresponding mode shapes of the motor stator in the free state were calculated. The accuracy of the finite element model was then verified by a hammer-impact modal test on a prototype. These results provide a theoretical basis for vibration control and NVH improvement of electric vehicle motors.
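    The core of such a modal calculation is the generalized eigenvalue problem K·φ = ω²·M·φ for the stiffness and mass matrices. As a minimal sketch of the idea, assuming a toy 3-DOF lumped mass-spring chain in place of the paper's full finite element stator model (all parameter values are illustrative, not from the paper):

    ```python
    import numpy as np

    # Hypothetical 3-DOF lumped mass-spring chain standing in for a stator FE model.
    m = 1.0   # kg, lumped mass (illustrative value)
    k = 1e4   # N/m, spring stiffness (illustrative value)

    # Mass and stiffness matrices for a fixed-free chain of 3 masses.
    M = m * np.eye(3)
    K = k * np.array([[ 2, -1,  0],
                      [-1,  2, -1],
                      [ 0, -1,  1]], dtype=float)

    # Undamped modal analysis: solve K phi = omega^2 M phi.
    # With M = m*I this reduces to a standard symmetric eigenproblem on K/m.
    eigvals, eigvecs = np.linalg.eigh(K / m)
    omegas = np.sqrt(eigvals)          # rad/s, returned in ascending order
    freqs_hz = omegas / (2 * np.pi)    # natural frequencies in Hz

    for i, f in enumerate(freqs_hz, start=1):
        print(f"mode {i}: {f:.1f} Hz")
    ```

    A real FE model assembles much larger sparse K and M matrices and extracts only the lowest modes, but the eigenproblem being solved is the same.
    
    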

    Carbon Market Regulation Mechanism Research Based on Carbon Accumulation Model with Jump Diffusion

    To explore the carbon market regulation mechanism more effectively, this paper studies the carbon price from the two perspectives of a quantity instrument and a price instrument, based on a carbon accumulation model with jump diffusion, and quantitatively simulates carbon price regulation mechanisms in light of the actual operation of the EU carbon market. The results show that both the quantity instrument and the price instrument have certain effects on the carbon market. A comparison of the elasticity of the expected carbon price shows that the comparative advantage of each instrument depends on the price level in the carbon finance market: where the carbon price is excessively high, the price instrument is superior to the quantity instrument; where it is excessively low, the quantity instrument is better. Therefore, when regulating the carbon market based on the expected carbon price, the price instrument should prevail if the carbon price is too high, and the quantity instrument should prevail if it is too low.
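    The paper's specific carbon accumulation model is not reproduced in the abstract; as a hedged illustration of what "jump diffusion" means for a price process, the sketch below simulates a generic Merton-style jump-diffusion path (drift plus Brownian noise plus Poisson-arriving jumps), with all parameter values assumed for illustration only:

    ```python
    import numpy as np

    # Illustrative Merton-style jump-diffusion price path; the paper's actual
    # carbon accumulation model and parameters are not reproduced here.
    rng = np.random.default_rng(0)

    S0, mu, sigma = 10.0, 0.02, 0.3           # initial price, drift, volatility (assumed)
    lam, jump_mu, jump_sigma = 0.5, 0.0, 0.2  # jump intensity and jump-size law (assumed)
    T, n = 1.0, 252                           # one year of daily steps
    dt = T / n

    log_S = np.empty(n + 1)
    log_S[0] = np.log(S0)
    for t in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))      # Brownian increment
        n_jumps = rng.poisson(lam * dt)        # Poisson jump count in this step
        jumps = rng.normal(jump_mu, jump_sigma, n_jumps).sum()
        log_S[t + 1] = log_S[t] + (mu - 0.5 * sigma**2) * dt + sigma * dW + jumps

    S = np.exp(log_S)
    print(f"final price: {S[-1]:.2f}, max: {S.max():.2f}, min: {S.min():.2f}")
    ```

    A regulation-mechanism study would then layer a quantity or price instrument on top of such paths, e.g. capping allowance supply or triggering a price floor/ceiling, and compare the resulting expected-price elasticities.
    
    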

    Split, Encode and Aggregate for Long Code Search

    Code search with natural language plays a crucial role in reusing existing code snippets and accelerating software development. Thanks to Transformer-based pretraining models, the performance of code search has improved significantly compared to traditional information retrieval (IR) based models. However, due to the quadratic complexity of multi-head self-attention, there is a limit on the input token length. For efficient training on standard GPUs like the V100, existing pretrained code models, including GraphCodeBERT, CodeBERT and RoBERTa (code), take only the first 256 tokens by default, which makes them unable to represent the complete information of long code exceeding 256 tokens. Unlike a long text paragraph, which can be regarded as a whole with complete semantics, the semantics of long code are discontinuous, as a piece of long code may contain different code modules. It is therefore unreasonable to apply long-text processing methods directly to long code. To tackle the long code problem, we propose SEA (Split, Encode and Aggregate for Long Code Search), which splits long code into code blocks, encodes these blocks into embeddings, and aggregates them to obtain a comprehensive long code representation. With SEA, we can directly use Transformer-based pretraining models to model long code without changing their internal structure or re-pretraining. Leveraging abstract syntax tree (AST) based splitting and attention-based aggregation methods, SEA achieves significant improvements in long code search performance. We also compare SEA with two sparse Transformer methods. With GraphCodeBERT as the encoder, SEA achieves an overall mean reciprocal ranking score of 0.785, which is 10.1% higher than GraphCodeBERT on the CodeSearchNet benchmark. Comment: 9 pages
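    The split-encode-aggregate pipeline can be sketched in a few lines. The toy below uses fixed-size splitting and a hash-seeded pseudo-encoder in place of the paper's AST-based splitting and GraphCodeBERT; only the three-step structure (split, encode per block, attention-weighted aggregation) reflects the described method:

    ```python
    import zlib
    import numpy as np

    DIM, BLOCK = 16, 64  # embedding size and max tokens per block (assumed)

    def encode_block(tokens):
        """Toy encoder: average of hash-seeded per-token vectors (model stand-in)."""
        vecs = [np.random.default_rng(zlib.crc32(t.encode())).normal(size=DIM)
                for t in tokens]
        return np.mean(vecs, axis=0)

    def sea_embed(code_tokens, query_vec):
        # 1) Split long code into blocks (the paper uses AST-aware splitting).
        blocks = [code_tokens[i:i + BLOCK] for i in range(0, len(code_tokens), BLOCK)]
        # 2) Encode each block independently, so no block exceeds the length limit.
        embs = np.stack([encode_block(b) for b in blocks])
        # 3) Aggregate with attention: softmax over query-block similarities.
        scores = embs @ query_vec
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ embs  # one fixed-size representation of the whole code

    tokens = [f"tok{i}" for i in range(300)]  # a "long" code of 300 tokens
    query = np.random.default_rng(1).normal(size=DIM)
    rep = sea_embed(tokens, query)
    print(rep.shape)
    ```

    The key property is that the output is a single fixed-size vector regardless of code length, so the underlying encoder never sees more than BLOCK tokens at once.
    
    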

    Revisiting Code Search in a Two-Stage Paradigm

    With a good code search engine, developers can reuse existing code snippets and accelerate the software development process. Current code search methods fall into two categories: traditional information retrieval (IR) based and deep learning (DL) based approaches. DL-based approaches include the cross-encoder paradigm and the bi-encoder paradigm. Both have limitations: the inference of IR-based and bi-encoder models is fast but not accurate enough, while cross-encoder models achieve higher search accuracy but consume more time. In this work, we propose TOSS, a two-stage fusion code search framework that combines the advantages of different code search methods. TOSS first uses IR-based and bi-encoder models to efficiently recall a small number of top-k code candidates, and then uses fine-grained cross-encoders for finer ranking. We conduct extensive experiments on different code candidate volumes and multiple programming languages to verify the effectiveness of TOSS, and also compare TOSS with six data fusion methods. Experimental results show that TOSS is not only efficient but also achieves state-of-the-art accuracy, with an overall mean reciprocal ranking (MRR) score of 0.763, compared to the best baseline result of 0.713 on the CodeSearchNet benchmark. Our source code and experimental data are available at: https://github.com/fly-dragon211/TOSS. Comment: Accepted by WSDM 202
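    The recall-then-rerank pattern and the MRR metric can be illustrated with a small sketch. Both scorers below are toy stand-ins (dot product for the fast stage, cosine for the "expensive" stage), not the paper's actual bi-encoder and cross-encoder models:

    ```python
    import numpy as np

    # Toy two-stage search in the spirit of TOSS: a cheap recall stage over
    # dot-product similarities, then a finer reranker over the top-K candidates.
    rng = np.random.default_rng(0)
    N, DIM, K = 1000, 32, 10

    codes = rng.normal(size=(N, DIM))               # pretend code embeddings
    query = codes[42] + 0.1 * rng.normal(size=DIM)  # query near item 42

    # Stage 1: fast recall of top-K candidates by dot product.
    recall_scores = codes @ query
    topk = np.argsort(recall_scores)[::-1][:K]

    # Stage 2: rerank the K candidates with a finer (here: cosine) scorer.
    def rerank_score(c, q):
        return float(c @ q / (np.linalg.norm(c) * np.linalg.norm(q)))

    ranked = sorted(topk, key=lambda i: rerank_score(codes[i], query), reverse=True)

    # Reciprocal rank for this single query (ground truth: item 42); MRR is
    # the mean of this quantity over a full query set.
    rank = ranked.index(42) + 1 if 42 in ranked else None
    mrr = 1.0 / rank if rank else 0.0
    print(f"top result: {ranked[0]}, reciprocal rank: {mrr:.3f}")
    ```

    The efficiency argument is visible in the structure: the expensive scorer runs on only K candidates instead of all N, so its cost is amortized by the cheap first stage.
    
    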