
    FedRec+: Enhancing Privacy and Addressing Heterogeneity in Federated Recommendation Systems

    Preserving privacy and reducing communication costs for edge users pose significant challenges in recommendation systems. Although federated learning has proven effective in protecting privacy by avoiding data exchange between clients and servers, it has been shown that the server can infer user ratings from the non-zero gradients uploaded in two consecutive rounds. Moreover, federated recommendation systems (FRS) face the challenge of heterogeneity, which degrades recommendation performance. In this paper, we propose FedRec+, an ensemble framework for FRS that enhances privacy while addressing the heterogeneity challenge. FedRec+ employs optimal subset selection based on feature similarity to generate near-optimal virtual ratings for pseudo items, using only the user's local information. This approach reduces noise without incurring additional communication costs. Furthermore, we use the Wasserstein distance to estimate the heterogeneity and contribution of each client and derive optimal aggregation weights by solving a defined optimization problem. Experimental results demonstrate the state-of-the-art performance of FedRec+ across various reference datasets. Comment: Accepted by the 59th Annual Allerton Conference on Communication, Control, and Computing.
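
    As a rough illustration of the heterogeneity-weighting step described above, the sketch below computes Wasserstein distances between per-client rating histograms and a reference distribution and turns them into normalized aggregation weights. The inverse-distance rule, the histogram summaries, and all names are illustrative assumptions; the paper derives its weights from a defined optimization problem that is not reproduced here.

```python
# Hypothetical sketch: estimate client heterogeneity with the Wasserstein
# distance and convert it into normalized aggregation weights.
# The inverse-distance weighting below is an illustrative stand-in, not the
# paper's optimization-based solution.
import numpy as np
from scipy.stats import wasserstein_distance

def aggregation_weights(client_hists, global_hist, support, eps=1e-8):
    """client_hists: per-client rating histograms (assumed local summaries).
    global_hist: reference distribution (e.g., averaged histogram).
    support: rating values, e.g., np.arange(1, 6)."""
    dists = np.array([
        wasserstein_distance(support, support, h, global_hist)
        for h in client_hists
    ])
    # Clients far from the reference contribute less (inverse-distance heuristic).
    raw = 1.0 / (dists + eps)
    return raw / raw.sum()

# Toy example: three clients with 1-5 star rating histograms.
support = np.arange(1, 6)
clients = [np.array([5, 10, 20, 30, 35], float),
           np.array([40, 30, 15, 10, 5], float),
           np.array([20, 20, 20, 20, 20], float)]
global_hist = np.mean(clients, axis=0)
print(aggregation_weights(clients, global_hist, support))
```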

    FedEBA+: Towards Fair and Effective Federated Learning via Entropy-Based Model

    Ensuring fairness is a crucial aspect of Federated Learning (FL), as it enables the model to perform consistently across all clients. However, designing an FL algorithm that simultaneously improves global model performance and promotes fairness remains a formidable challenge, since achieving the latter often requires a trade-off with the former. To address this challenge, we propose a new FL algorithm, FedEBA+, which enhances fairness while simultaneously improving global model performance. FedEBA+ incorporates a fair aggregation scheme that assigns higher weights to underperforming clients, together with an alignment update method. In addition, we provide a theoretical convergence analysis and show the fairness of FedEBA+. Extensive experiments demonstrate that FedEBA+ outperforms other state-of-the-art fairness-oriented FL methods in terms of both fairness and global model performance.
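
    A minimal sketch of the kind of aggregation rule the abstract describes, assuming the weights come from a softmax over per-client losses so that underperforming clients receive larger weights. The temperature, the loss inputs, and the flattened-parameter averaging are illustrative assumptions, not the exact FedEBA+ scheme or its alignment update.

```python
# Illustrative softmax/entropy-style fair aggregation: higher client loss ->
# larger aggregation weight. Not the paper's exact formulation.
import numpy as np

def fair_weights(client_losses, temperature=1.0):
    """Softmax over client losses (max subtracted for numerical stability)."""
    losses = np.asarray(client_losses, dtype=float)
    z = (losses - losses.max()) / temperature
    w = np.exp(z)
    return w / w.sum()

def aggregate(client_models, weights):
    """Weighted average of client parameter vectors (flattened for simplicity)."""
    stacked = np.stack(client_models)
    return (weights[:, None] * stacked).sum(axis=0)

losses = [0.9, 0.4, 1.6]                       # hypothetical per-client losses
models = [np.random.randn(4) for _ in losses]  # toy parameter vectors
w = fair_weights(losses)
print(w, aggregate(models, w))
```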

    Network analysis on cortical morphometry in first-episode schizophrenia

    First-episode schizophrenia (FES) results in abnormality of brain connectivity at different levels. Despite some successful findings on functional and structural connectivity in FES, relatively few studies have focused on morphological connectivity, which may provide a potential biomarker for FES. In this study, we aim to investigate cortical morphological connectivity in FES. T1-weighted magnetic resonance image data from 92 FES patients and 106 healthy controls (HCs) are analyzed. We parcellate the brain into 68 cortical regions, calculate the averaged thickness and surface area of each region, construct undirected networks by correlating cortical thickness or surface area measures across the 68 regions for each group, and finally compute a variety of network-related topological characteristics. Our experimental results show that both the cortical thickness network and the surface area network in the two groups are small-world networks; that is, these networks have high clustering coefficients and low characteristic path lengths. At certain network sparsity levels, both the cortical thickness network and the surface area network of FES have significantly lower clustering coefficients and local efficiencies than those of HCs, indicating FES-related abnormalities in local connectivity and small-worldness. These abnormalities mainly involve the frontal, parietal, and temporal lobes. Further regional analyses confirm significant group differences in the node betweenness of the posterior cingulate gyrus for both the cortical thickness network and the surface area network. Our work supports that cortical morphological connectivity, constructed from correlations of cortical thickness across subjects, may serve as a tool to study topological abnormalities in neurological disorders.
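
    The network construction described above can be illustrated with a short sketch: correlate regional measures across subjects, threshold to a target sparsity, and compute small-world metrics. The random data, sparsity level, and thresholding rule are placeholders, not the study's actual pipeline.

```python
# Rough sketch of correlation-based morphological network construction and
# small-world metrics, assuming a (subjects x 68 regions) thickness matrix.
import numpy as np
import networkx as nx

def morph_network(thickness, sparsity=0.2):
    """Correlate regional thickness across subjects and keep the strongest edges."""
    corr = np.corrcoef(thickness.T)          # 68 x 68 inter-regional correlations
    np.fill_diagonal(corr, 0.0)
    n = corr.shape[0]
    k = int(sparsity * n * (n - 1) / 2)      # number of edges to keep
    iu = np.triu_indices(n, 1)
    thresh = np.sort(np.abs(corr[iu]))[-k]   # correlation cutoff for target sparsity
    adj = (np.abs(corr) >= thresh).astype(int)
    np.fill_diagonal(adj, 0)
    return nx.from_numpy_array(adj)

rng = np.random.default_rng(0)
G = morph_network(rng.normal(size=(92, 68)))  # e.g., 92 FES subjects x 68 regions
print(nx.average_clustering(G),
      nx.average_shortest_path_length(G) if nx.is_connected(G) else "disconnected")
```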

    RNA-Targeting Splicing Modifiers: Drug Development and Screening Assays

    RNA splicing is an essential step in producing mature messenger RNA (mRNA) and other RNA species. Harnessing RNA splicing modifiers as a new pharmacological modality is promising for the treatment of diseases caused by aberrant splicing. This drug modality can also be used for infectious diseases by disrupting the splicing of essential pathogenic genes. Several antisense oligonucleotide splicing modifiers were approved by the U.S. Food and Drug Administration (FDA) for the treatment of spinal muscular atrophy (SMA) and Duchenne muscular dystrophy (DMD). Recently, a small-molecule splicing modifier, risdiplam, was also approved for the treatment of SMA, highlighting small molecules as important warheads in the arsenal for regulating RNA splicing. The cellular targets of these approved drugs are all mRNA precursors (pre-mRNAs) in human cells. The development of novel RNA-targeting splicing modifiers can not only expand the scope of drug targets to include many previously considered “undruggable” genes but also enrich the chemical-genetic toolbox for basic biomedical research. In this review, we summarize known splicing modifiers, screening methods for novel splicing modifiers, and the chemical space occupied by small-molecule splicing modifiers.

    Numerical Simulation Analysis of Mechanical Properties on Rock Brittle–Ductility Transformation Under Different Loading Rates

    At present, many physical tests and numerical simulations have been carried out to study the effect of confining pressure on rock deformation mechanisms, and some progress has been made. However, the mechanism of rock deformation in actual mine engineering needs further study; for example, rock-burst is essentially a unilateral unloading process of the rock mass, and this process cannot be reproduced by physical tests. In this paper, RFPA3D was used to simulate the brittle-ductility transformation behaviour of rock under different confining pressures. The damage constitutive equation of rock was derived from continuum damage mechanics, and the damage coefficients of different rocks were determined from the numerical stress and acoustic-emission results, verifying the correctness of the damage constitutive equation. According to the derived brittle-ductility damage equation and the fitted cumulative ductile-damage data, the development trend in the brittle stage was almost the same for all rocks, while the curves separated after entering the ductile stage. The larger the Poisson's ratio, the longer the ductile stage; the smaller the Poisson's ratio, the shorter the ductile stage but the larger the bearing capacity. At the late loading stage, the cumulative ductile damage of the rock increased linearly, the bearing capacity dropped sharply, stability failure of the rock occurred, and the ductile damage coefficient increased gradually. Studying the brittle-ductile mechanical properties of rock can aid rock-burst prediction and prevention in deep mines and has significant engineering value.
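
    The abstract does not state the damage constitutive equation itself; shown below is a commonly used continuum-damage form with a Weibull-type damage variable, purely as an illustration of what such an equation typically looks like, not the authors' specific derivation.

```latex
% Illustrative continuum-damage form (assumption, not the paper's exact equation):
% effective stress with a Weibull-type damage evolution law.
\[
  \sigma = E\,\varepsilon\,(1 - D),
  \qquad
  D = 1 - \exp\!\left[-\left(\frac{\varepsilon}{\varepsilon_0}\right)^{m}\right],
\]
% where $E$ is Young's modulus, $\varepsilon_0$ a scale parameter, and $m$ a
% shape (homogeneity) parameter that would be fitted from stress and
% acoustic-emission data.
```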

    Coverage Goal Selector for Combining Multiple Criteria in Search-Based Unit Test Generation

    Unit testing is critical to the software development process, ensuring the correctness of basic programming units in a program (e.g., a method). Search-based software testing (SBST) is an automated approach to generating test cases. SBST generates test cases with genetic algorithms guided by a specified coverage criterion (e.g., branch coverage). However, a good test suite must have different properties, which cannot be captured by an individual coverage criterion. Therefore, the state-of-the-art approach combines multiple criteria to generate test cases. Since combining multiple coverage criteria introduces multiple optimization objectives, it hurts the test suites' coverage of certain criteria compared with using a single criterion. To cope with this problem, we propose a novel approach named smart selection. Based on the coverage correlations among criteria and the subsumption relationships among coverage goals, smart selection selects a subset of coverage goals to reduce the number of optimization objectives while avoiding the loss of any property captured by the full set of criteria. We conduct experiments to evaluate smart selection on 400 Java classes with three state-of-the-art genetic algorithms under a 2-minute budget. On average, smart selection outperforms combining all goals on 65.1% of the classes having significant differences between the two approaches. Secondly, we conduct experiments to verify our assumptions about coverage criteria relationships. Furthermore, we experiment with budgets of 5, 8, and 10 minutes, confirming the advantage of smart selection over combining all goals. Comment: arXiv admin note: substantial text overlap with arXiv:2208.0409
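
    As a toy illustration of the goal-subset idea, the sketch below drops coverage goals that are subsumed by another retained goal, shrinking the set of optimization objectives handed to the genetic algorithm. The goal identifiers and subsumption pairs are made up; the paper's smart selection additionally relies on coverage correlations among criteria, which are not modeled here.

```python
# Hypothetical sketch of goal-subset selection via subsumption relations:
# if covering goal A always covers goal B, B can be removed from the
# optimization objectives without losing the property it represents.
def select_goals(goals, subsumes):
    """goals: iterable of goal ids; subsumes: set of (stronger, weaker) pairs."""
    subsumed = {weak for strong, weak in subsumes if strong in goals}
    return [g for g in goals if g not in subsumed]

# Made-up goal ids from different criteria (branch, line, method, output).
goals = ["branch:if@12", "line:12", "line:13", "method:foo", "output:foo"]
subsumes = {("branch:if@12", "line:12"), ("line:13", "method:foo")}
print(select_goals(goals, subsumes))  # keeps only goals not implied by a kept one
```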

    ChatGPT vs SBST: a comparative assessment of unit test suite generation

    Recent advances in large language models (LLMs) have achieved exceptional success in a wide range of general-domain tasks, such as question answering and instruction following. Moreover, LLMs have shown potential in various software engineering applications. In this study, we present a systematic comparison of test suites generated by the ChatGPT LLM and the state-of-the-art SBST tool EvoSuite. Our comparison is based on several critical factors, including correctness, readability, code coverage, and bug detection capability. By highlighting the strengths and weaknesses of LLMs (specifically ChatGPT) in generating unit test cases compared to EvoSuite, this work provides valuable insights into the performance of LLMs in solving software engineering problems. Overall, our findings underscore the potential of LLMs in software engineering and pave the way for further research in this area.

    Streaming Voice Conversion Via Intermediate Bottleneck Features And Non-streaming Teacher Guidance

    Streaming voice conversion (VC) is the task of converting the voice of one person to another in real time. Previous streaming VC methods use phonetic posteriorgrams (PPGs) extracted from automatic speech recognition (ASR) systems to represent speaker-independent information. However, PPGs lack the prosody and vocalization information of the source speaker, and streaming PPGs contain undesired leaked timbre of the source speaker. In this paper, we propose to use intermediate bottleneck features (IBFs) to replace PPGs. VC systems trained with IBFs retain more prosody and vocalization information of the source speaker. Furthermore, we propose a non-streaming teacher guidance (TG) framework that addresses the timbre leakage problem. Experiments show that our proposed IBFs and the TG framework achieve a state-of-the-art streaming VC naturalness of 3.85, a content consistency of 3.77, and a timbre similarity of 3.77 under a future receptive field of 160 ms, significantly outperforming previous streaming VC systems. Comment: The paper has been submitted to ICASSP202
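
    A minimal sketch of the intermediate-bottleneck-feature idea, assuming a toy streaming ASR encoder: instead of taking the final phonetic posteriorgrams, an intermediate layer's hidden states are tapped as the speaker-independent representation. The architecture, layer index, and dimensions are placeholders, not the system described in the paper.

```python
# Toy ASR encoder showing where intermediate bottleneck features (IBFs) would
# be tapped, versus the conventional final-layer PPGs. Purely illustrative.
import torch
import torch.nn as nn

class TinyASREncoder(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_layers=6, n_phones=72):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.LSTM(n_mels if i == 0 else hidden, hidden, batch_first=True)
             for i in range(n_layers)])
        self.ppg_head = nn.Linear(hidden, n_phones)  # final PPG logits

    def forward(self, mel, ibf_layer=3):
        x, ibf = mel, None
        for i, rnn in enumerate(self.layers):
            x, _ = rnn(x)
            if i == ibf_layer:
                ibf = x                        # intermediate bottleneck features
        ppg = self.ppg_head(x).softmax(-1)     # conventional PPGs, for contrast
        return ibf, ppg

enc = TinyASREncoder()
ibf, ppg = enc(torch.randn(1, 200, 80))        # (batch, frames, mel bins)
print(ibf.shape, ppg.shape)
```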