
    Granular computing and optimization model-based method for large-scale group decision-making and its application

    In large-scale group decision-making, some decision makers hesitate among several linguistic terms and cannot compare certain alternatives, so they often express their evaluations as incomplete hesitant fuzzy linguistic preference relations. How to obtain suitable large-scale group decision-making results from incomplete preference information is therefore an important and interesting issue. After analyzing existing research, we find that: i) the premise that a complete preference relation is perfectly consistent is too strict; ii) deleting all incomplete linguistic preference relations that cannot be fully completed discards valid assessment information; iii) the semantics given by decision makers are likely to be changed during the consistency-improving process. To address these issues, this work proposes a novel method based on granular computing and an optimization model for large-scale group decision-making, which considers the original consistency of an incomplete hesitant fuzzy linguistic preference relation and improves its consistency without changing semantics during the completion process. An illustrative example and simulation experiments demonstrate the rationality and advantages of the proposed method: i) semantics are not changed during the consistency-improving process; ii) the completion process does not significantly alter the inherent quality of the information; iii) the complete preference relations are globally consistent; iv) the final large-scale group decision-making result is acquired by fusing complete preference relations with different weights.
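    The abstract does not reproduce the optimization model itself; as a flavor of consistency-based completion, below is a minimal sketch (hypothetical, not the authors' method) that estimates a missing entry of a reciprocal fuzzy preference relation from the additive-consistency condition p_ik = p_ij + p_jk - 0.5.

```python
# Minimal sketch: estimate one missing entry of a fuzzy preference
# relation via additive consistency, averaged over all intermediate
# alternatives j with known values. Not the paper's optimization model.
import numpy as np

def estimate_missing(P, i, k):
    """P: n x n preference matrix with np.nan for unknown entries."""
    n = P.shape[0]
    candidates = [P[i, j] + P[j, k] - 0.5
                  for j in range(n)
                  if j not in (i, k)
                  and not np.isnan(P[i, j]) and not np.isnan(P[j, k])]
    return float(np.clip(np.mean(candidates), 0.0, 1.0)) if candidates else np.nan

# Example: the pair of alternatives (0, 2) was left unassessed.
P = np.array([[0.5, 0.7, np.nan],
              [0.3, 0.5, 0.6],
              [np.nan, 0.4, 0.5]])
P[0, 2] = estimate_missing(P, 0, 2)  # -> 0.8
P[2, 0] = 1.0 - P[0, 2]              # restore reciprocity
print(P)
```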

    When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods

    With increasing privacy concerns on data, recent studies have made significant progress using federated learning (FL) on privacy-sensitive natural language processing (NLP) tasks. Much literature suggests that fully fine-tuning pre-trained language models (PLMs) in the FL paradigm can mitigate the data heterogeneity problem and close the performance gap with centralized training. However, large PLMs bring the curse of prohibitive communication overhead and local model adaptation costs for the FL system. To this end, we introduce various parameter-efficient tuning (PETuning) methods into federated learning. Specifically, we provide a holistic empirical study of representative PLM tuning methods in FL. The experimental results cover the analysis of data heterogeneity levels, data scales, and different FL scenarios. Overall communication overhead can be significantly reduced by locally tuning and globally aggregating lightweight model parameters while maintaining acceptable performance in various FL settings. To facilitate research on PETuning in FL, we also develop a federated tuning framework, FedPETuning, which allows practitioners to conveniently exploit different PETuning methods under the FL training paradigm. The source code is available at \url{https://github.com/iezhuozhuo/FedETuning/tree/deltaTuning}.
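    As a rough illustration of where the communication saving comes from (a sketch under assumed shapes, not the FedPETuning API), the server below averages only the lightweight PETuning tensors that clients send back, while the frozen PLM never leaves each client.

```python
# Minimal sketch: weighted FedAvg over the small set of trainable
# PETuning parameters (e.g., LoRA or adapter weights) only.
from typing import Dict, List
import torch

def aggregate_petuning_params(
    client_updates: List[Dict[str, torch.Tensor]],
    client_weights: List[float],
) -> Dict[str, torch.Tensor]:
    """Average the lightweight parameters, weighted per client."""
    total = sum(client_weights)
    return {
        name: sum(w / total * update[name]
                  for update, w in zip(client_updates, client_weights))
        for name in client_updates[0]
    }

# Each round: the server broadcasts the aggregated tensors; clients load
# them into their PETuning modules, fine-tune locally, and send back only
# these tensors -- a tiny fraction of the full PLM's size.
```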

    Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling

    Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI). These models generally perform cross-attention over input pairs, leading to prohibitive computational costs. Recent studies propose dual-encoder and late-interaction architectures for faster computation. However, the balance between the expressiveness of cross-attention and computational speedup still needs better coordination. To this end, this paper introduces a novel paradigm, MixEncoder, for efficient sentence pair modeling. MixEncoder involves a light-weight cross-attention mechanism. It conducts query encoding only once while modeling the query-candidate interaction in parallel. Extensive experiments conducted on four tasks demonstrate that our MixEncoder can speed up sentence pairing by over 113x while achieving comparable performance to the more expensive cross-attention models. Comment: Accepted to EMNLP 202
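    A minimal sketch of the paradigm (assumed shapes and module names, not the authors' implementation): the query is encoded once, its hidden states are cached, and every candidate attends to that cache through one lightweight cross-attention layer, in parallel over the candidate batch.

```python
# Minimal sketch: encode-once query states, lightweight cross-attention
# from each candidate to the cached states.
import torch
import torch.nn as nn

class LightCrossAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, cand_states, cached_query_states):
        # Candidates act as attention queries; the pre-computed query
        # representation supplies keys and values.
        out, _ = self.attn(cand_states, cached_query_states, cached_query_states)
        return out.mean(dim=1)  # pooled pair representation

dim = 64
query_states = torch.randn(1, 32, dim)   # encoded exactly once
cand_states = torch.randn(100, 20, dim)  # 100 candidates, in parallel
layer = LightCrossAttention(dim)
pair_repr = layer(cand_states, query_states.expand(100, -1, -1))
print(pair_repr.shape)  # torch.Size([100, 64])
```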

    Enabling Efficient Interaction between an Algorithm Agent and an LLM: A Reinforcement Learning Approach

    Large language models (LLMs) encode a vast amount of world knowledge acquired from massive text datasets. Recent studies have demonstrated that LLMs can assist an algorithm agent in solving complex sequential decision-making tasks in embodied environments by providing high-level instructions. However, interacting with LLMs can be time-consuming: in many practical scenarios, they require so much storage that they can only be deployed on remote cloud server nodes. Additionally, using commercial LLMs can be costly, since they may charge based on usage frequency. In this paper, we explore how to enable efficient and cost-effective interactions between the agent and an LLM. We propose a reinforcement learning-based mediator model that determines when it is necessary to consult the LLM for high-level instructions to accomplish a target task. Experiments on four MiniGrid environments that entail planning sub-goals demonstrate that our method can learn to solve target tasks with only a few necessary interactions with an LLM, significantly reducing interaction costs in testing environments compared with baseline methods. Experimental results also suggest that by learning a mediator model to interact with the LLM, the agent's performance becomes more robust against both exploratory and stochastic environments. Comment: 10 pages
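    A minimal sketch of the mediator idea (all names and costs are hypothetical, not the paper's code): a small policy chooses between reusing the previous high-level instruction and paying the cost of a fresh LLM query; training would fold the query penalty into the environment reward.

```python
# Minimal sketch: a learned mediator decides when to consult the LLM.
import random

ASK_LLM, REUSE = 0, 1
QUERY_COST = 0.1  # assumed penalty per remote LLM call

def mediator_step(policy, state, last_instruction, llm_query):
    action = policy(state)
    if action == ASK_LLM or last_instruction is None:
        instruction = llm_query(state)   # expensive remote call
        reward_penalty = -QUERY_COST
    else:
        instruction = last_instruction   # free: keep following the old plan
        reward_penalty = 0.0
    return instruction, reward_penalty

# Training would add the environment reward to reward_penalty and update
# `policy` with any standard RL algorithm (e.g., PPO), so the mediator
# learns to query the LLM only when the old instruction stops helping.
random_policy = lambda s: random.choice([ASK_LLM, REUSE])
instr, pen = mediator_step(random_policy, state={"pos": (0, 0)},
                           last_instruction=None,
                           llm_query=lambda s: "go to the red door")
print(instr, pen)
```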

    Dilated FCN: Listening Longer to Hear Better

    Deep neural network solutions have emerged as a new and powerful paradigm for speech enhancement (SE). The capabilities to capture long context and extract multi-scale patterns are crucial to designing effective SE networks. Such capabilities, however, are often in conflict with the goal of maintaining compact networks that ensure good system generalization. In this paper, we explore dilation operations and apply them to fully convolutional networks (FCNs) to address this issue. Dilations equip the networks with greatly expanded receptive fields without increasing the number of parameters. Different strategies to fuse multi-scale dilations, as well as to install the dilation modules, are explored in this work. Using the Noisy VCTK and AzBio sentences datasets, we demonstrate that the proposed dilation models significantly improve over the baseline FCN and outperform state-of-the-art SE solutions. Comment: 5 pages; will appear in the WASPAA conference
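    To see why dilation grows the receptive field for free, here is a minimal sketch (not the paper's architecture) of a stack of 1-D convolutions with exponentially increasing dilation; the parameter count matches an undilated stack of the same depth.

```python
# Minimal sketch: dilated conv stack with a 'same'-length output.
import torch
import torch.nn as nn

def dilated_block(channels: int, num_layers: int = 4) -> nn.Sequential:
    layers = []
    for i in range(num_layers):
        d = 2 ** i  # dilation 1, 2, 4, 8 ...
        layers += [nn.Conv1d(channels, channels, kernel_size=3,
                             dilation=d, padding=d),  # keeps sequence length
                   nn.ReLU()]
    return nn.Sequential(*layers)

# Receptive field of this stack: 1 + 2*(1+2+4+8) = 31 samples, versus 9
# for four undilated 3-tap layers -- same parameter count either way.
x = torch.randn(1, 16, 16000)  # (batch, channels, time)
y = dilated_block(16)(x)
print(y.shape)  # torch.Size([1, 16, 16000])
```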

    Multi-scale feature fusion for pavement crack detection based on Transformer

    Automated pavement crack image segmentation presents a significant challenge due to the difficulty of detecting slender cracks on complex pavement backgrounds, as well as the significant impact of lighting conditions. In this paper, we propose a novel approach for automated pavement crack detection using a multi-scale feature fusion network based on the Transformer architecture, leveraging an encoding-decoding structure. In the encoding phase, the Transformer is leveraged as a substitute for the convolution operation, using global modeling to enhance feature extraction capabilities and address long-distance dependence. Then, dilated convolution is employed to increase the receptive field of the feature map while maintaining resolution, thereby further improving context information acquisition. In the decoding phase, linear layers are employed to adjust the lengths of the feature sequences output by the different encoder blocks, and multi-scale feature maps are obtained after dimension conversion. Detailed crack information can be restored by fusing the multi-scale features, thereby improving the accuracy of crack detection. Our proposed method achieves an F1 score of 70.84% on the Crack500 dataset and 84.50% on the DeepCrack dataset, improvements of 1.42% and 2.07% over the state-of-the-art method, respectively. The experimental results show that the proposed method achieves higher detection accuracy and better generalization, and obtains better crack detection results under both high- and low-brightness conditions.
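    A minimal sketch of the decoding step described above (assumed shapes, not the paper's code): linear layers adjust the channel width of token sequences from different encoder blocks, the sequences are reshaped into 2-D feature maps, and the coarser map is upsampled and fused with the finer one.

```python
# Minimal sketch: token sequences -> multi-scale 2-D maps -> fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

def tokens_to_map(tokens, proj, hw):
    # tokens: (B, N, C) with N == hw*hw  ->  (B, C', hw, hw)
    x = proj(tokens)                      # adjust channel width
    B, N, C = x.shape
    return x.transpose(1, 2).reshape(B, C, hw, hw)

B, C = 1, 64
stages = [torch.randn(B, 56 * 56, C), torch.randn(B, 28 * 28, C)]
projs = [nn.Linear(C, 32), nn.Linear(C, 32)]
maps = [tokens_to_map(t, p, hw) for t, p, hw in zip(stages, projs, [56, 28])]
# Upsample the coarser map to a common resolution, then fuse by concat.
fused = torch.cat([maps[0],
                   F.interpolate(maps[1], size=(56, 56),
                                 mode="bilinear", align_corners=False)], dim=1)
print(fused.shape)  # torch.Size([1, 64, 56, 56])
```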

    Regulating Glucose and pH, and Monitoring Oxygen in a Bioreactor

    A system that automatically regulates the concentration of glucose or the pH of a liquid culture medium circulated through a rotating-wall perfused bioreactor is described. Another system monitors the concentration of oxygen in the culture medium.
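    The brief gives no control law; purely as a generic illustration of closed-loop regulation (all constants are hypothetical), a proportional feedback step that doses glucose toward a setpoint might look like this:

```python
# Minimal sketch of one proportional-control cycle; all values assumed.
SETPOINT_G_PER_L = 1.0
GAIN = 0.5  # dose (g) per (g/L) of error

def control_step(measured_g_per_l: float) -> float:
    """Return the glucose dose to add this cycle (never negative)."""
    error = SETPOINT_G_PER_L - measured_g_per_l
    return max(0.0, GAIN * error)

for reading in (0.6, 0.8, 1.05):
    print(f"measured {reading:.2f} g/L -> dose {control_step(reading):.3f} g")
```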

    Juvenile idiopathic arthritis and primary ovarian failure: a two-sample Mendelian randomization analysis in a mixed-gender cohort

    Background: The causal relationship between juvenile idiopathic arthritis (JIA) and primary ovarian failure (POF) remains uncertain. To elucidate this relationship, we employed a two-sample Mendelian randomization analysis. Methods: The single nucleotide polymorphisms (SNPs) associated with JIA were obtained from a previously published genome-wide association study (GWAS), while the pooled data for POF originated from the FinnGen consortium. The study populations consisted exclusively of individuals of European descent. In our Mendelian randomization analysis, we performed inverse-variance weighted (IVW) analysis, weighted-median analysis, weighted-mode analysis and Mendelian randomization-Egger regression analysis, supplemented by sensitivity analyses to validate the accuracy and robustness of the findings. Results: The IVW (OR = 1.23, 95% CI 1.06-1.43; P = 0.007) and weighted-median (OR = 1.25, 95% CI 1.06-1.47; P = 0.009) analyses, supported by the sensitivity analyses, provide compelling evidence of a significant causal association between JIA and POF. Conclusion: The study revealed a significant causal association between genetically predicted JIA and POF, indicating that JIA significantly elevates the risk of developing POF. Therefore, screening for premature ovarian failure is recommended in women diagnosed with JIA.
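    For reference, the fixed-effect IVW estimator used in two-sample Mendelian randomization combines per-SNP ratio estimates weighted by precision; a minimal sketch with toy numbers (not the study's data):

```python
# Minimal sketch of the fixed-effect IVW estimator.
import math

def ivw(beta_exposure, beta_outcome, se_outcome):
    # Per-SNP weight: beta_X^2 / se_Y^2 (precision of the ratio estimate).
    weights = [bx**2 / se**2 for bx, se in zip(beta_exposure, se_outcome)]
    num = sum(bx * by / se**2 for bx, by, se in
              zip(beta_exposure, beta_outcome, se_outcome))
    beta = num / sum(weights)
    se_beta = math.sqrt(1.0 / sum(weights))
    return beta, se_beta

# Toy SNP effects: exposure (JIA) betas, outcome (POF) betas, outcome SEs.
beta, se = ivw([0.12, 0.08, 0.15], [0.025, 0.020, 0.030], [0.01, 0.012, 0.011])
print(f"IVW beta = {beta:.3f}, OR = {math.exp(beta):.2f}, SE = {se:.3f}")
```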