
    Significant wave height forecasting based on the hybrid EMD-SVM method

    Prediction of significant wave height (SWH) is an important tool in marine engineering and in the prevention of marine disasters. The support vector machine (SVM) model has limitations in processing nonlinear and non-stationary SWH time series; fortunately, empirical mode decomposition (EMD) can effectively handle such complicated series. An SWH prediction method based on EMD and SVM is therefore proposed to combine the advantages of both. A statistical analysis was carried out to compare the two models, i.e., the hybrid EMD-SVM and the plain SVM, and both were used to forecast SWH at lead times of 3, 6, 12 and 24 hours. The hybrid model attains high R values across the different prediction times. Results indicate that SWH prediction with the hybrid EMD-SVM model is superior to the SVM model.
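
    As a rough illustration of the hybrid idea described above (not the authors' code), the sketch below decomposes a synthetic wave-height series into intrinsic mode functions, fits one support vector regressor per IMF on lagged values, and sums the per-IMF one-step forecasts. It assumes the PyEMD and scikit-learn packages; the lag length and SVR settings are arbitrary choices.

```python
# Hypothetical sketch of the hybrid EMD-SVM idea: decompose, fit per-IMF SVRs,
# sum the per-IMF forecasts. Assumes PyEMD (EMD-signal) and scikit-learn.
import numpy as np
from PyEMD import EMD
from sklearn.svm import SVR

def make_lagged(x, n_lags=6):
    """Build (samples, n_lags) inputs and one-step-ahead targets."""
    X = np.stack([x[i:len(x) - n_lags + i] for i in range(n_lags)], axis=1)
    y = x[n_lags:]
    return X, y

def emd_svm_forecast(swh, n_lags=6):
    imfs = EMD().emd(swh)               # intrinsic mode functions + residue
    forecast = 0.0
    for imf in imfs:
        X, y = make_lagged(imf, n_lags)
        model = SVR(kernel="rbf", C=10.0).fit(X, y)
        forecast += model.predict(imf[-n_lags:][None, :])[0]
    return forecast                      # sum of per-IMF one-step forecasts

# Synthetic data standing in for a significant-wave-height series
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 600)
swh = 1.5 + 0.5 * np.sin(0.8 * t) + 0.2 * rng.standard_normal(t.size)
print(emd_svm_forecast(swh))
```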

    Algebraic Cryptanalysis Scheme of AES-256 Using Gröbner Basis

    The zero-dimensional Gröbner basis construction is a crucial step in Gröbner basis cryptanalysis on AES-256. In this paper, after performing an in-depth study on the linear transformation and the system of multivariate polynomial equations of AES-256, the zero-dimensional Gröbner basis construction method is proposed by choosing suitable term order and variable order. After giving a detailed construction process of the zero-dimensional Gröbner basis, the necessary theoretical proof is presented. Based on this, an algebraic cryptanalysis scheme of AES-256 using Gröbner basis is proposed. Analysis shows that the complexity of our scheme is lower than that of the exhaustive attack
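
    A toy SymPy illustration of the kind of computation involved (not the paper's construction, far smaller than any AES-derived system, and over the rationals rather than GF(2)): the term order chosen for a zero-dimensional Gröbner basis strongly shapes the basis obtained, which is the kind of choice the construction above has to make.

```python
# Groebner basis of a classic zero-dimensional system under two term orders;
# lex yields a triangular, elimination-friendly basis. Toy example only.
from sympy import symbols, groebner

x, y, z = symbols("x y z")
system = [x**2 + y + z - 1,
          x + y**2 + z - 1,
          x + y + z**2 - 1]            # textbook zero-dimensional system

for order in ("grevlex", "lex"):
    gb = groebner(system, x, y, z, order=order)
    print(f"{order}: {len(gb.exprs)} basis polynomials")
    for p in gb.exprs:
        print("   ", p)
```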

    Solving Continual Offline Reinforcement Learning with Decision Transformer

    Continual offline reinforcement learning (CORL) combines continual and offline reinforcement learning, enabling agents to learn multiple tasks from static datasets without forgetting prior tasks. However, CORL faces challenges in balancing stability and plasticity. Existing methods, which employ Actor-Critic structures and experience replay (ER), suffer from distribution shift, low efficiency, and weak knowledge sharing. We investigate whether Decision Transformer (DT), another offline RL paradigm, can serve as a more suitable offline continual learner to address these issues. We first compare AC-based offline algorithms with DT in the CORL framework. DT offers advantages in learning efficiency, distribution shift mitigation, and zero-shot generalization, but exacerbates forgetting during supervised parameter updates. We introduce multi-head DT (MH-DT) and low-rank adaptation DT (LoRA-DT) to mitigate DT's forgetting problem. MH-DT stores task-specific knowledge in multiple heads, facilitating knowledge sharing through common components, and employs distillation and selective rehearsal to enhance current-task learning when a replay buffer is available. In buffer-unavailable scenarios, LoRA-DT merges less influential weights and fine-tunes DT's decisive MLP layer to adapt to the current task. Extensive experiments on MuJoCo and Meta-World benchmarks demonstrate that our methods outperform SOTA CORL baselines, with enhanced learning capability and superior memory efficiency. Comment: 11 pages, 6 figures
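
    A minimal PyTorch sketch of the low-rank-adaptation idea behind LoRA-DT: freeze a pretrained linear layer and learn a small low-rank update per task. This is an illustration of generic LoRA applied to one layer, not the authors' implementation or their choice of which DT layer to adapt.

```python
# LoRA adapter around a frozen linear layer: output = base(x) + scale * x A^T B^T
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # frozen pretrained weights
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.zeros(rank, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        nn.init.normal_(self.lora_A, std=0.02)  # B stays zero -> no initial change
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Wrap a (hypothetical) decisive MLP layer of a transformer block
mlp = nn.Linear(128, 128)
adapted = LoRALinear(mlp, rank=4)
out = adapted(torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 128])
```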

    Comparison of gemcitabine/carboplatin versus paclitaxel/cisplatin for the management of non-small cell lung cancer

    Purpose: To determine the comparative efficacy and toxicity of gemcitabine/carboplatin and paclitaxel/cisplatin in patients with completely resected stage IIa - IIIa non-small cell lung cancer (NSCLC). Methods: Sixty eligible NSCLC patients treated in Funan County People's Hospital were enrolled and assigned to two groups by randomization (n = 30 each). One group (CG group) received the combination of gemcitabine and carboplatin, while the second group (CP group) received a combination of cisplatin and paclitaxel. Efficacy was assessed based on 2-year progression-free survival, while adverse reactions were recorded to assess the toxicity of the chemotherapy treatments. Results: No marked difference was found in the 2-year relapse-free survival in the two groups with similar clinical baseline characteristics after follow-up (60 % in CG group vs. 56.67 % in CP group, p = 0.826). Specifically, no significant difference was found between the two groups with regard to incidence of local metastases, distant metastases, or brain tissue metastases within 2 years, and there were no treatment-related deaths. CG group was more likely to develop leukopenia (93.33 % vs. 63.33 % for CP group, p = 0.04), but no significant difference was observed for other adverse effects such as anemia, vomiting, and nausea. Conclusion: This study shows that adjuvant treatment using carboplatin and gemcitabine produces the same therapeutic efficacy as cisplatin and paclitaxel, but exhibits higher toxicity levels than the latter

    Inflammo-immune perspective on the association of eight migraine risk factors with migraine: a multi-omics Mendelian randomization study

    Background: Migraine risk factors are associated with migraine susceptibility, yet their mechanisms are unclear. Evidence suggests a role for inflammatory proteins and immune cells in migraine pathogenesis. This study aimed to examine the inflammo-immune association between eight migraine risk factors and the disorder. Methods: This study utilized the inverse variance weighted (IVW) method and colocalization analysis to explore potential causal relationships between eight migraine risk factors, migraine, 731 immune cells, and 91 circulating inflammatory proteins. Mediation Mendelian randomization (MR) was further used to assess the mediating role of circulating inflammatory proteins and immune cells between the eight migraine risk factors and migraine. Results: Migraine risk factors are linked to 276 immune cells and inflammatory proteins, with cigarettes smoked per day strongly co-localized with CD33-HLA DR+ cells. Despite no co-localization, 23 immune cells/inflammatory proteins are related to migraine. Depression, all anxiety disorders, and sleep apnea are correlated with migraine, and all anxiety disorders are supported by strong co-localization evidence. However, a mediating effect of inflammatory proteins and immune cells between the eight migraine risk factors and migraine was not confirmed. Conclusion: We elucidate potential causal relationships between eight migraine risk factors, migraine, immune cells, and inflammatory proteins, enhancing understanding of the molecular etiology of migraine from an inflammatory-immune perspective.
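
    For reference, a minimal numpy sketch of the fixed-effect inverse-variance-weighted estimator that IVW Mendelian randomization rests on; the SNP-level effect sizes below are invented for illustration and are not from this study.

```python
# Fixed-effect IVW estimate: weighted regression of outcome effects on
# exposure effects through the origin, with weights 1 / se_outcome**2.
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    w = 1.0 / se_outcome**2
    beta = np.sum(w * beta_exposure * beta_outcome) / np.sum(w * beta_exposure**2)
    se = np.sqrt(1.0 / np.sum(w * beta_exposure**2))
    return beta, se

# Hypothetical instruments for one risk-factor -> migraine analysis
bx = np.array([0.12, 0.08, 0.15, 0.10])      # SNP effects on the risk factor
by = np.array([0.030, 0.018, 0.040, 0.022])  # SNP effects on migraine
sy = np.array([0.010, 0.012, 0.011, 0.009])  # standard errors of by
beta, se = ivw_estimate(bx, by, sy)
print(f"IVW beta = {beta:.3f}, SE = {se:.3f}")
```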

    MS-MT: Multi-Scale Mean Teacher with Contrastive Unpaired Translation for Cross-Modality Vestibular Schwannoma and Cochlea Segmentation

    Domain shift has been a long-standing issue for medical image segmentation. Recently, unsupervised domain adaptation (UDA) methods have achieved promising cross-modality segmentation performance by distilling knowledge from a label-rich source domain to an unlabeled target domain. In this work, we propose a multi-scale self-ensembling based UDA framework for automatic segmentation of two key brain structures, i.e., the vestibular schwannoma (VS) and the cochlea, on high-resolution T2 images. First, a segmentation-enhanced contrastive unpaired image translation module is designed for image-level domain adaptation from source T1 to target T2. Next, multi-scale deep supervision and consistency regularization are introduced into a mean teacher network for self-ensemble learning to further close the domain gap. Furthermore, self-training and intensity augmentation techniques are utilized to mitigate label scarcity and boost cross-modality segmentation performance. Our method demonstrates promising segmentation performance, with mean Dice scores of 83.8% and 81.4% and average symmetric surface distances (ASSD) of 0.55 mm and 0.26 mm for the VS and cochlea, respectively, in the validation phase of the crossMoDA 2022 challenge. Comment: Accepted by BrainLes MICCAI proceedings (5th solution for the MICCAI 2022 Cross-Modality Domain Adaptation (crossMoDA) Challenge)
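
    A minimal PyTorch sketch of the mean-teacher self-ensembling step described above, under the assumption of a generic segmentation network and a plain MSE consistency loss; the actual framework adds multi-scale deep supervision, image translation, and self-training on top of this.

```python
# Mean teacher: the teacher is an EMA copy of the student, and a consistency
# loss aligns student and teacher predictions on unlabeled target-domain images.
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.99):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

student = torch.nn.Conv2d(1, 2, 3, padding=1)   # stand-in for a segmentation net
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad = False

x = torch.randn(4, 1, 32, 32)                   # unlabeled target-domain batch
student_logits = student(x)
with torch.no_grad():
    teacher_logits = teacher(x)                 # typically on an augmented view
consistency = F.mse_loss(student_logits.softmax(1), teacher_logits.softmax(1))
consistency.backward()
ema_update(teacher, student)
print(float(consistency))
```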

    Dense X Retrieval: What Retrieval Granularity Should We Use?

    Dense retrieval has become a prominent method to obtain relevant context or world knowledge in open-domain NLP tasks. When we use a learned dense retriever on a retrieval corpus at inference time, an often-overlooked design choice is the retrieval unit in which the corpus is indexed, e.g. document, passage, or sentence. We discover that the choice of retrieval unit significantly impacts the performance of both retrieval and downstream tasks. Distinct from the typical approach of using passages or sentences, we introduce a novel retrieval unit, the proposition, for dense retrieval. Propositions are defined as atomic expressions within text, each encapsulating a distinct factoid and presented in a concise, self-contained natural language format. We conduct an empirical comparison of different retrieval granularities. Our results reveal that proposition-based retrieval significantly outperforms traditional passage- or sentence-based methods in dense retrieval. Moreover, retrieval by proposition also enhances the performance of downstream QA tasks, since the retrieved texts are more condensed with question-relevant information, reducing the need for lengthy input tokens and minimizing the inclusion of extraneous, irrelevant information.
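
    A hedged sketch of how the retrieval unit changes the index: the same passage is indexed whole or split into finer units (sentences here stand in for propositions, which the paper generates with a dedicated model), and a query is matched by dense inner-product search. It assumes the sentence-transformers package and the all-MiniLM-L6-v2 checkpoint, neither of which is specified by the paper.

```python
# Index the same text at two granularities and retrieve by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

passage = ("The Leaning Tower of Pisa was completed in 1372. "
           "Its tilt began during construction in the 12th century.")
units = {
    "passage": [passage],
    "sentence": passage.split(". "),
}

query = "When did the tower's tilt begin?"
q = model.encode([query], normalize_embeddings=True)
for granularity, texts in units.items():
    emb = model.encode(texts, normalize_embeddings=True)
    scores = (q @ emb.T)[0]                  # cosine similarity via dot product
    best = texts[int(np.argmax(scores))]
    print(f"{granularity}: {best!r} (score={scores.max():.3f})")
```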

    InstructCoder: Empowering Language Models for Code Editing

    Code editing encompasses a variety of pragmatic tasks that developers deal with daily. Despite its relevance and practical usefulness, automatic code editing remains an underexplored area in the evolution of deep learning models, partly due to data scarcity. In this work, we explore the use of large language models (LLMs) to edit code based on user instructions, covering a broad range of implicit tasks such as comment insertion, code optimization, and code refactoring. To facilitate this, we introduce InstructCoder, the first dataset designed to adapt LLMs for general-purpose code editing, containing high-diversity code-editing tasks. It consists of over 114,000 instruction-input-output triplets and covers multiple distinct code editing scenarios. The dataset is systematically expanded through an iterative process that commences with code editing data sourced from GitHub commits as seed tasks. Seed and generated tasks are subsequently used to prompt ChatGPT for more task data. Our experiments demonstrate that open-source LLMs fine-tuned on InstructCoder can edit code correctly based on users' instructions most of the time, exhibiting unprecedented code-editing performance levels. Such results suggest that proficient instruction-finetuning can lead to significant amelioration in code editing abilities. The dataset and the source code are available at https://github.com/qishenghu/CodeInstruct
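
    For concreteness, a hypothetical example of an instruction-input-output triplet in the style described above; the field names and content are assumptions for illustration and are not taken from the InstructCoder dataset itself.

```python
# Illustrative code-editing triplet; such records can be serialized as JSON
# lines for supervised instruction fine-tuning.
import json

triplet = {
    "instruction": "Add a docstring and rename the loop variable for clarity.",
    "input": (
        "def total(xs):\n"
        "    s = 0\n"
        "    for i in xs:\n"
        "        s += i\n"
        "    return s\n"
    ),
    "output": (
        "def total(values):\n"
        '    """Return the sum of the given values."""\n'
        "    s = 0\n"
        "    for value in values:\n"
        "        s += value\n"
        "    return s\n"
    ),
}
print(json.dumps(triplet, indent=2))
```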

    Giant Enhancement of Magnonic Frequency Combs by Exceptional Points

    With their incomparable time-frequency accuracy, frequency combs have significantly advanced precision spectroscopy, ultra-sensitive detection, and atomic clocks. Traditional methods to create photonic, phononic, and magnonic frequency combs hinge on material nonlinearities, which are often weak, necessitating high power densities to surpass their initiation thresholds and thereby limiting their applications. Here, we introduce a novel nonlinear process to efficiently generate magnonic frequency combs (MFCs) by exploiting exceptional points (EPs) in a coupled system comprising a pump-induced magnon mode and a Kittel mode. Even without any cavity, our method greatly improves the efficiency of nonlinear frequency conversion and achieves optimal MFCs at low pump power. Additionally, our novel nonlinear process enables excellent tunability of the EPs using the polarization and power of the pump, simplifying MFC generation and manipulation. Our work establishes a synergistic relationship between non-Hermitian physics and MFCs, which is advantageous for coherent/quantum information processing and ultra-sensitive detection. Comment: 7 pages, 4 figures
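
    As background on exceptional points, a toy numpy sketch (unrelated to the paper's specific magnonic model) showing how the two eigenvalues of a lossy two-mode coupled system coalesce when the coupling reaches half the damping rate of the lossy mode.

```python
# Eigenvalues of H = [[w0 - i*gamma, g], [g, w0]] coalesce at the exceptional
# point g = gamma / 2.
import numpy as np

w0, gamma = 1.0, 0.2
for g in (0.05, 0.10, 0.20):                 # below, at, and above the EP
    H = np.array([[w0 - 1j * gamma, g],
                  [g,               w0]])
    ev = np.linalg.eigvals(H)
    print(f"g={g:.2f}  eigenvalues={np.round(ev, 4)}")
```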

    KD_ConvNeXt: knowledge distillation-based image classification of lung tumor surgical specimen sections

    Introduction: Lung cancer is currently among the cancers with the highest incidence and fatality rates worldwide. In clinical practice, identifying the specific subtype of lung cancer is essential for diagnosing and treating lung lesions. Methods: This paper collects histopathological section images of lung tumor surgical specimens to construct a clinical dataset for studying the classification of specific lung tumor subtypes. We propose a teacher-student network architecture based on a knowledge distillation mechanism, namely KD_ConvNeXt, for classifying lung tumor histopathological section images into specific subtypes to assist clinical applications. The approach enables the student network (ConvNeXt) to extract knowledge from the intermediate feature layers of the teacher network (Swin Transformer), improving the feature extraction and fitting capabilities of ConvNeXt. Meanwhile, Swin Transformer provides soft labels containing information about the distribution of images across categories, making the model focus more during training on the information carried by classes with smaller sample sizes. Results: In extensive experiments on a clinical lung tumor image dataset, KD_ConvNeXt achieved a superior classification accuracy of 85.64% and an F1-score of 0.7717 compared with other advanced image classification methods.
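
    A generic PyTorch sketch of the knowledge-distillation objective behind teacher-student training of this kind; the temperature, loss weighting, and five-class setup are assumptions for illustration, not the paper's exact configuration.

```python
# Soft targets from the teacher's temperature-scaled logits plus the usual
# hard-label cross-entropy loss for the student.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                               # rescale to keep gradient magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(8, 5, requires_grad=True)  # e.g. 5 tumor subtypes
teacher_logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```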