
    The lexico-semantic group of verbs of interpersonal relations in Russian and Chinese: based on the translation of F.M. Dostoevsky's novel Crime and Punishment

    This study analyzes the lexico-semantic group of verbs that express an attitude toward someone in Russian and the ways they are translated into Chinese. The group of emotional-evaluative verbs belonging to the lexico-semantic field of interpersonal relations is examined. The material was chosen because this group of verbs is among the most frequent and is widely represented in F.M. Dostoevsky's novel Crime and Punishment, occurring 561 times. The significance of this research lies in the absence of a dedicated systematic study of this lexico-semantic group on literary material in Russian and Chinese, as well as in the need to develop a comprehensive research methodology combining comparative and contextual analysis. The study reveals the semantic features of these verbs in Russian and Chinese. It is established that the group under study consists of verbs united by the categorical-lexical seme 'relation', which can carry either positive or negative semantic meaning. The semes 'positive attitude', 'love', 'faith', 'respect', 'compassion', 'pity' and 'negative attitude', 'suffering', 'doubt', 'fear' are examined. These features determine the structure of the group in question within the lexico-semantic systems of Russian and Chinese and expand the understanding of the group's content and structure. The analysis of interlingual gaps reveals cases of incomplete lexical correspondence to the foreign-language word. The analyzed linguistic material made it possible to identify similarities and differences in the semantics of the verbs when the text of the novel is translated into Chinese.

    Characterization of severe fever with thrombocytopenia syndrome in rural regions of Zhejiang, China.

    Severe fever with thrombocytopenia syndrome virus (SFTSV) infections have recently been found in rural regions of Zhejiang, China. A severe fever with thrombocytopenia syndrome (SFTS) surveillance and sero-epidemiological investigation was conducted in the districts with outbreaks. During the study period of 2011-2014, a total of 51 SFTSV infection cases were identified, and the case fatality rate was 12% (6/51). Ninety-two percent of the patients (47/51) were over 50 years of age, and 63% (32/51) of laboratory-confirmed cases occurred from May to July. Nine percent (11/120) of serum samples from local healthy people without symptoms were positive for antibodies to the SFTS virus. SFTSV strains were isolated in Vero cell culture, and the whole genomic sequences of two SFTSV strains (01 and Zhao) were sequenced and submitted to GenBank. Homology analysis showed that the similarity of the target nucleocapsid gene among SFTSV strains from different geographic areas was 94.2-100%. The constructed phylogenetic tree showed that all SFTSV strains diverged into two main clusters. Only the SFTSV strains from the Zhejiang (Daishan) region of China and the Yamaguchi and Miyazaki regions of Japan clustered into lineage II, consistent with both of these regions being isolated areas with similar geographic features. Two of eight predicted linear B-cell epitopes from the nucleocapsid protein showed mutations between SFTSV strains of different clusters, but these mutations did not affect the binding ability of the specific SFTSV antibodies. This study confirmed that SFTSV has been circulating naturally and causes a seasonal prevalence in Daishan, China. The results also suggest that the molecular characteristics of SFTSV are associated with geographic region and that all SFTSV strains can be divided into two genotypes.

    Learning Accurate Entropy Model with Global Reference for Image Compression

    In recent deep image compression neural networks, the entropy model plays a critical role in estimating the prior distribution of deep image encodings. Existing methods combine the hyperprior with local context in the entropy estimation function, which greatly limits their performance due to the absence of a global view. In this work, we propose a novel Global Reference Model for image compression that effectively leverages both local and global context information, leading to an enhanced compression rate. The proposed method scans the decoded latents and finds the most relevant latent to assist in estimating the distribution of the current latent. A by-product of this work is a mean-shifting GDN module that further improves performance. Experimental results demonstrate that the proposed model outperforms the rate-distortion performance of most state-of-the-art methods in the industry.
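    The global-reference lookup described above can be sketched as a nearest-neighbor search over previously decoded latents. The cosine-similarity criterion below is a hypothetical stand-in for the paper's learned relevance score, which is trained end-to-end:

```python
import numpy as np

def find_global_reference(current: np.ndarray, decoded: list) -> np.ndarray:
    """Return the previously decoded latent most similar to the current one.

    Illustrative sketch only: the actual model learns the matching function;
    plain cosine similarity is used here as an assumed proxy.
    """
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a.ravel() @ b.ravel() /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    scores = [cos(current, d) for d in decoded]  # relevance of each candidate
    return decoded[int(np.argmax(scores))]       # best match assists entropy estimation
```

    The selected reference would then be fed, together with the hyperprior and local context, into the entropy model's distribution estimator.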

    Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation

    Large language models (LLMs) have emerged as a new paradigm for the Text-to-SQL task. However, the absence of a systematic benchmark inhibits the development of effective, efficient and economical LLM-based Text-to-SQL solutions. To address this challenge, in this paper we first conduct a systematic and extensive comparison of existing prompt engineering methods, including question representation, example selection and example organization, and with these experimental results we elaborate their pros and cons. Based on these findings, we propose a new integrated solution, named DAIL-SQL, which refreshes the Spider leaderboard with 86.6% execution accuracy and sets a new bar. To explore the potential of open-source LLMs, we investigate them in various scenarios, and further enhance their performance with supervised fine-tuning. Our explorations highlight open-source LLMs' potential in Text-to-SQL, as well as the advantages and disadvantages of supervised fine-tuning. Additionally, towards an efficient and economical LLM-based Text-to-SQL solution, we emphasize token efficiency in prompt engineering and compare the prior studies under this metric. We hope that our work provides a deeper understanding of Text-to-SQL with LLMs, and inspires further investigations and broad applications.
    Comment: We have released code on https://github.com/BeachWang/DAIL-SQ
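    The three prompt-engineering dimensions the abstract names (question representation, example selection, example organization) can be sketched as a simple prompt builder. This is a minimal, hypothetical template, not the exact DAIL-SQL format:

```python
def build_text_to_sql_prompt(schema: str, examples: list, question: str) -> str:
    """Assemble a Text-to-SQL prompt from a schema, few-shot examples,
    and the target question.

    Sketch under assumptions: a code-style (SQL-comment) question
    representation and full question-SQL pairs as example organization.
    """
    parts = [f"/* Database schema */\n{schema}", "/* Examples */"]
    for q, sql in examples:                 # example organization: full pairs
        parts.append(f"-- {q}\n{sql}")
    parts.append(f"-- {question}\nSELECT")  # prime the model to emit SQL
    return "\n\n".join(parts)
```

    Token efficiency, which the paper measures, would here trade off the number and length of the few-shot pairs against accuracy.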

    Q-Diffusion: Quantizing Diffusion Models

    Diffusion models have achieved great success in image synthesis through iterative noise estimation using deep neural networks. However, the slow inference, high memory consumption, and computation intensity of the noise estimation model hinder the efficient adoption of diffusion models. Although post-training quantization (PTQ) is considered a go-to compression method for other tasks, it does not work out-of-the-box on diffusion models. We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture of the diffusion models, which compresses the noise estimation network to accelerate the generation process. We identify the key difficulty of diffusion model quantization as the changing output distributions of noise estimation networks over multiple time steps and the bimodal activation distribution of the shortcut layers within the noise estimation network. We tackle these challenges with timestep-aware calibration and split shortcut quantization in this work. Experimental results show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance (small FID change of at most 2.34 compared to >100 for traditional PTQ) in a training-free manner. Our approach can also be applied to text-guided image generation, where we can run stable diffusion in 4-bit weights with high generation quality for the first time.
    Comment: The code is available at https://github.com/Xiuyu-Li/q-diffusio
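    The timestep-aware calibration idea above can be sketched as follows: because the noise-estimation network's activation distribution drifts across timesteps, the clipping range for quantization is estimated from activations pooled over many sampled timesteps rather than a single one. This is a hedged illustration, not the paper's exact procedure:

```python
import numpy as np

def timestep_aware_range(activations_per_step: dict, num_samples: int = 4) -> tuple:
    """Estimate a quantization clipping range from activations gathered
    across uniformly sampled diffusion timesteps.

    `activations_per_step` maps timestep -> flat activation array; the
    sampling stride and percentile clipping here are assumptions.
    """
    steps = sorted(activations_per_step)
    picked = steps[:: max(1, len(steps) // num_samples)]  # sample timesteps uniformly
    pooled = np.concatenate([activations_per_step[t].ravel() for t in picked])
    lo, hi = np.percentile(pooled, [0.1, 99.9])           # clip extreme outliers
    return float(lo), float(hi)
```

    The resulting range would then parameterize a uniform 4-bit quantizer shared across timesteps; the split quantization of bimodal shortcut activations is a separate mechanism not shown here.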

    SqueezeLLM: Dense-and-Sparse Quantization

    Generative Large Language Models (LLMs) have demonstrated remarkable results for a wide range of tasks. However, deploying these models for inference has been a significant challenge due to their unprecedented resource requirements. This has forced existing deployment frameworks to use multi-GPU inference pipelines, which are often complex and costly, or to use smaller and less performant models. In this work, we demonstrate that the main bottleneck for generative inference with LLMs is memory bandwidth, rather than compute, specifically for single batch inference. While quantization has emerged as a promising solution by representing model weights with reduced precision, previous efforts have often resulted in notable performance degradation. To address this, we introduce SqueezeLLM, a post-training quantization framework that not only enables lossless compression to ultra-low precisions of up to 3-bit, but also achieves higher quantization performance under the same memory constraint. Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition that stores outliers and sensitive weight values in an efficient sparse format. When applied to the LLaMA models, our 3-bit quantization significantly reduces the perplexity gap from the FP16 baseline by up to 2.1x as compared to the state-of-the-art methods with the same memory requirement. Furthermore, when deployed on an A6000 GPU, our quantized models achieve up to 2.3x speedup compared to the baseline. Our code is open-sourced and available online.
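    The Dense-and-Sparse decomposition described in (ii) can be sketched as splitting a weight matrix into a dense part that gets quantized and a sparse matrix keeping the largest-magnitude outliers in full precision. This simplified sketch keeps only magnitude outliers; the paper additionally keeps *sensitive* weights chosen by a second-order criterion, which is omitted here:

```python
import numpy as np

def dense_and_sparse(weights: np.ndarray, outlier_pct: float = 0.5) -> tuple:
    """Split `weights` into a dense matrix (to be quantized) and a sparse
    matrix of full-precision outliers; dense + sparse == weights.

    `outlier_pct` (percent of entries kept sparse) is an assumed knob.
    """
    k = max(1, int(weights.size * outlier_pct / 100))
    flat = np.abs(weights).ravel()
    thresh = np.partition(flat, -k)[-k]            # magnitude cutoff for top-k outliers
    outlier_mask = np.abs(weights) >= thresh
    sparse = np.where(outlier_mask, weights, 0.0)  # stored in an efficient sparse format
    dense = np.where(outlier_mask, 0.0, weights)   # quantized at ultra-low precision
    return dense, sparse
```

    Removing the outliers narrows the dense matrix's value range, which is what lets the non-uniform low-bit quantizer fit the remaining weights with less error.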

    TorchSparse++: Efficient Training and Inference Framework for Sparse Convolution on GPUs

    Sparse convolution plays a pivotal role in emerging workloads, including point cloud processing in AR/VR, autonomous driving, and graph understanding in recommendation systems. Since the computation pattern is sparse and irregular, specialized high-performance kernels are required. Existing GPU libraries offer two dataflow types for sparse convolution. The gather-GEMM-scatter dataflow is easy to implement but not optimal in performance, while the dataflows with overlapped computation and memory access (e.g., implicit GEMM) are highly performant but have very high engineering costs. In this paper, we introduce TorchSparse++, a new GPU library that achieves the best of both worlds. We create a highly efficient Sparse Kernel Generator that generates performant sparse convolution kernels at less than one-tenth of the engineering cost of the current state-of-the-art system. On top of this, we design the Sparse Autotuner, which extends the design space of existing sparse convolution libraries and searches for the best dataflow configurations for training and inference workloads. Consequently, TorchSparse++ achieves 2.9x, 3.3x, 2.2x and 1.7x measured end-to-end speedup on an NVIDIA A100 GPU over the state-of-the-art MinkowskiEngine, SpConv 1.2, TorchSparse and SpConv v2 in inference; and is 1.2-1.3x faster than SpConv v2 in mixed precision training across seven representative autonomous driving benchmarks. It also seamlessly supports graph convolutions, achieving 2.6-7.6x faster inference speed compared with state-of-the-art graph deep learning libraries.
    Comment: MICRO 2023; Haotian Tang and Shang Yang contributed equally to this project
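    The gather-GEMM-scatter dataflow mentioned above can be sketched for a single kernel offset: gather the matched input rows, multiply them by that offset's weight slice with one dense GEMM, and scatter-accumulate the partial results into the output rows. The index arrays below stand in for the precomputed input/output pairs (the "rulebook"); a real library fuses and parallelizes these stages on the GPU:

```python
import numpy as np

def gather_gemm_scatter(features: np.ndarray, weight: np.ndarray,
                        in_idx: np.ndarray, out_idx: np.ndarray,
                        num_out: int) -> np.ndarray:
    """One kernel-offset step of gather-GEMM-scatter sparse convolution.

    `in_idx[i]` / `out_idx[i]` pair an input point with the output point it
    contributes to under this kernel offset (hypothetical rulebook slice).
    """
    out = np.zeros((num_out, weight.shape[1]), dtype=features.dtype)
    gathered = features[in_idx]        # gather: collect matched input rows
    partial = gathered @ weight        # GEMM: one dense matrix multiply
    np.add.at(out, out_idx, partial)   # scatter: accumulate into output rows
    return out
```

    The implicit-GEMM dataflows the abstract contrasts with avoid materializing `gathered` at all, overlapping the index lookups with the multiply, which is where their performance and engineering cost both come from.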

    Fault location of multi-point hybrid transmission line based on HHT

    Due to the discontinuous wave impedance, uneven line parameters, and complex, changeable fault transient traveling waves of overhead-cable hybrid transmission lines, the traditional double-ended traveling-wave ranging method produces large errors. To address this problem, this paper proposes a fault location method for multi-point hybrid transmission lines based on the Hilbert-Huang transform (HHT). First, the traveling-wave head is identified at both ends of the line and at the connection points between the overhead line and the cable; then HHT is used to extract the time at which the fault traveling-wave head reaches each measurement point; finally, these times are substituted into the multi-point ranging equation to compute the fault location. MATLAB/PSCAD simulation results show that the proposed method avoids the influence of traveling-wave velocity on ranging accuracy and is not affected by the line structure. Compared with the traditional double-ended ranging method, its ranging accuracy is higher, and it meets the engineering requirement of positioning accuracy within 200 m.
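    For context, the classic double-ended ranging baseline the paper improves on locates the fault from the first wavefront arrival times at the two line ends: with line length L, wave velocity v, and arrival times t_M and t_N, the distance from end M is x_M = (L + v*(t_M - t_N)) / 2. On hybrid overhead-cable lines v differs per segment, which is exactly why this single-velocity formula errs and the multi-point HHT method is needed:

```python
def double_ended_fault_distance(line_length_km: float, v_km_per_ms: float,
                                t_arrival_m_ms: float, t_arrival_n_ms: float) -> float:
    """Classic double-ended traveling-wave ranging (the baseline method).

    Assumes a single uniform wave velocity along the whole line, which
    does not hold for overhead-cable hybrid lines.
    """
    # x_M = (L + v * (t_M - t_N)) / 2, distance measured from end M
    return (line_length_km + v_km_per_ms * (t_arrival_m_ms - t_arrival_n_ms)) / 2.0
```

    The multi-point method instead writes one such equation per homogeneous segment, using the HHT-extracted arrival times at the junction points, so no single velocity assumption is required.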