133 research outputs found

    Improvement of Market Economy Management Measures for Innovative Enterprises under Blockchain Technology

    In order to solve the financing difficulties of innovative Small and Medium-sized Enterprises (SMEs) in the financial and economic field, this research proposes a market economy management measure for innovative enterprises, namely an enterprise credit information sharing model based on blockchain technology. Firstly, the problems of the existing blockchain-based sharing model are analyzed, and the basic blockchain framework is adopted to improve the sharing model. Secondly, a simulation experiment for the credit information sharing model of enterprise market economy management measures is designed according to an improved Practical Byzantine Fault Tolerance (PBFT) consensus mechanism. Finally, the improved sharing model proposed in this research is evaluated in terms of fault tolerance and throughput. The results show that the improved blockchain-based market economy management measures meet the required fault tolerance rate, and that throughput remains relatively stable. To some extent, the model can meet the needs of credit information trading and sharing, and alleviate the difficulty of enterprise information sharing and the low efficiency of data exchange.
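
    The fault-tolerance claim rests on the standard PBFT bound: a committee of n consensus nodes tolerates at most f = (n - 1) // 3 Byzantine members, and a record is committed once 2f + 1 matching votes arrive. The snippet below is a minimal sketch of that bound only; the node count, vote format, and function names are illustrative assumptions, not the paper's protocol implementation.

```python
# Minimal sketch of the PBFT fault-tolerance bound referenced by the abstract.
# Names and the vote format are hypothetical, not the paper's implementation.

from collections import Counter

def max_faulty(n: int) -> int:
    """Largest number of Byzantine nodes a committee of n nodes can tolerate."""
    return (n - 1) // 3

def commit(votes: list[str], n: int) -> str | None:
    """Return the committed value if some value reaches the 2f + 1 quorum."""
    quorum = 2 * max_faulty(n) + 1
    value, count = Counter(votes).most_common(1)[0]
    return value if count >= quorum else None

if __name__ == "__main__":
    n = 7                                   # 7 sharing-platform nodes -> f = 2
    votes = ["tx-ok"] * 5 + ["tx-bad"] * 2  # two Byzantine votes
    print(max_faulty(n))                    # 2
    print(commit(votes, n))                 # "tx-ok": quorum of 5 reached
```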

    Unimodal Training-Multimodal Prediction: Cross-modal Federated Learning with Hierarchical Aggregation

    Multimodal learning has seen great success in mining data features from multiple modalities, with remarkable improvements in model performance. Meanwhile, federated learning (FL) addresses the data sharing problem, enabling privacy-preserving collaborative training that provides access to sufficient valuable data. Great potential therefore arises from their confluence, known as multimodal federated learning. However, the predominant approaches are limited in that they often assume each local dataset records samples from all modalities. In this paper, we aim to bridge this gap by proposing an Unimodal Training - Multimodal Prediction (UTMP) framework in the context of multimodal federated learning. We design HA-Fedformer, a novel transformer-based model that enables unimodal training with only a unimodal dataset at each client and multimodal testing by aggregating multiple clients' knowledge for better accuracy. The key advantages are twofold. Firstly, to alleviate the impact of non-IID data, we develop an uncertainty-aware aggregation method for the local encoders with layer-wise Markov Chain Monte Carlo sampling. Secondly, to overcome the challenge of unaligned language sequences, we implement a cross-modal decoder aggregation to capture the hidden signal correlation between decoders trained on data from different modalities. Our experiments on popular sentiment analysis benchmarks, CMU-MOSI and CMU-MOSEI, demonstrate that HA-Fedformer significantly outperforms state-of-the-art multimodal models under the UTMP federated learning framework, with 15%-20% improvement on most attributes. Comment: 10 pages, 5 figures.
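
    One plausible reading of the uncertainty-aware aggregation is that each client's layer update is weighted by the inverse of the variance of its MCMC weight samples, so less certain clients contribute less. The sketch below illustrates that reading only; the weighting rule and names such as client_samples are assumptions, not HA-Fedformer's released code.

```python
# Hedged sketch of layer-wise, uncertainty-weighted aggregation of client
# updates. Each client contributes MCMC samples of one layer's weights; the
# server weights each client's mean by inverse sample variance. This is an
# assumed reading of the abstract, not the paper's actual implementation.

import numpy as np

def aggregate_layer(client_samples: list, eps: float = 1e-8) -> np.ndarray:
    """client_samples[i] has shape (num_mcmc_samples, *layer_shape)."""
    means = [s.mean(axis=0) for s in client_samples]
    # Scalar uncertainty per client: average variance across the layer.
    uncert = np.array([s.var(axis=0).mean() for s in client_samples])
    weights = 1.0 / (uncert + eps)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, means))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    confident = rng.normal(1.0, 0.01, size=(20, 4, 4))  # low-variance client
    noisy = rng.normal(0.0, 1.00, size=(20, 4, 4))      # high-variance client
    agg = aggregate_layer([confident, noisy])
    print(agg.round(2))  # dominated by the confident client's mean (~1.0)
```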

    Multi-level Personalized Federated Learning on Heterogeneous and Long-Tailed Data

    Federated learning (FL) offers a privacy-centric distributed learning framework, enabling model training on individual clients and central aggregation without requiring data exchange. Nonetheless, FL implementations often suffer from non-i.i.d. and long-tailed class distributions across mobile applications, e.g., autonomous vehicles, which leads models to overfit as local training may converge to sub-optimal solutions. In our study, we explore the impact of data heterogeneity on model bias and introduce an innovative personalized FL framework, Multi-level Personalized Federated Learning (MuPFL), which leverages the hierarchical architecture of FL to fully harness computational resources at various levels. This framework integrates three pivotal modules: Biased Activation Value Dropout (BAVD) to mitigate overfitting and accelerate training; Adaptive Cluster-based Model Update (ACMU) to refine local models and ensure coherent global aggregation; and Prior Knowledge-assisted Classifier Fine-tuning (PKCF) to bolster classification and personalize models in accordance with skewed local data using shared knowledge. Extensive experiments on diverse real-world datasets for image classification and semantic segmentation validate that MuPFL consistently outperforms state-of-the-art baselines, even under extreme non-i.i.d. and long-tail conditions, enhancing accuracy by as much as 7.39% and accelerating training by up to 80%, marking significant advancements in both efficiency and effectiveness. Comment: 14 pages, 10 figures.
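
    The abstract does not spell out BAVD's mechanics; one plausible interpretation is that activations with unusually large magnitudes, which are likely to encode client-specific bias, are dropped with higher probability. The sketch below illustrates only that interpretation; the drop rule and hyperparameters are assumptions, not MuPFL's published code.

```python
# Illustrative sketch of a "biased activation value dropout": activations whose
# magnitude is far above the batch average are dropped preferentially. The
# interpretation and hyperparameters are assumptions, not MuPFL's code.

import torch

def biased_activation_dropout(x: torch.Tensor, base_p: float = 0.1,
                              scale: float = 0.4) -> torch.Tensor:
    """Drop each activation with probability growing with its relative magnitude."""
    rel = x.abs() / (x.abs().mean() + 1e-8)           # relative magnitude
    p_drop = (base_p + scale * (rel - 1.0)).clamp(0.0, 0.9)
    mask = (torch.rand_like(x) >= p_drop).float()
    return x * mask / (1.0 - p_drop + 1e-8)           # rescale kept activations

if __name__ == "__main__":
    torch.manual_seed(0)
    acts = torch.randn(2, 8) * torch.tensor([1.0] * 7 + [10.0])  # one biased unit
    print(biased_activation_dropout(acts))
```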

    Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient Finetuning

    Large Language Models (LLMs) have demonstrated significant potential in performing multiple tasks in multimedia applications, ranging from content generation to interactive entertainment and artistic creation. However, the diversity of downstream tasks in multitask scenarios presents substantial adaptation challenges for LLMs. While traditional methods often succumb to knowledge confusion on their monolithic dense models, Mixture-of-Experts (MoE) has emerged as a promising solution, with its sparse architecture enabling effective task decoupling. Inspired by principles of human cognitive neuroscience, we design a novel framework, Intuition-MoR1E, that leverages the inherent semantic clustering of instances to mimic how the human brain handles multiple tasks, offering implicit guidance to the router for optimized feature allocation. Moreover, we introduce a cutting-edge Rank-1 Expert formulation designed to manage a spectrum of intuitions, demonstrating enhanced parameter efficiency and effectiveness in multitask LLM finetuning. Extensive experiments demonstrate that Intuition-MoR1E achieves superior efficiency and a 2.15% overall accuracy improvement across 14 public datasets compared with other state-of-the-art baselines. Comment: 13 pages, 5 figures.
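
    A rank-1 expert can be written as an outer product u vᵀ, so a mixture of such experts adds a routed low-rank update on top of a frozen base weight, in the spirit of LoRA with rank 1 per expert. The sketch below shows that generic formulation under stated assumptions; it is not the Intuition-MoR1E implementation, and the module and parameter names are hypothetical.

```python
# Generic sketch of a mixture of rank-1 experts: each expert is an outer
# product u_i v_i^T, mixed by a router on top of a frozen base weight.
# Illustrative formulation only, not the paper's code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfRank1Experts(nn.Module):
    def __init__(self, d_in: int, d_out: int, n_experts: int):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)               # frozen pretrained weight
        self.u = nn.Parameter(torch.zeros(n_experts, d_out))  # zero-init update
        self.v = nn.Parameter(torch.randn(n_experts, d_in) * 0.02)
        self.router = nn.Linear(d_in, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = F.softmax(self.router(x), dim=-1)   # (batch, n_experts)
        proj = x @ self.v.t()                       # v_i . x for every expert
        update = (gates * proj) @ self.u            # sum_i g_i (v_i . x) u_i
        return self.base(x) + update

if __name__ == "__main__":
    layer = MixtureOfRank1Experts(d_in=16, d_out=8, n_experts=4)
    print(layer(torch.randn(2, 16)).shape)          # torch.Size([2, 8])
```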

    Coix lachryma-jobi extract ameliorates inflammation and oxidative stress in a complete Freund's adjuvant-induced rheumatoid arthritis model.

    Context: Adlay seed [Job's tears, Coix lachryma-jobi L. var. ma-yuen Stapf (Poaceae)] is a Traditional Chinese Medicine that has been investigated for the treatment of inflammatory diseases and rheumatism. Objective: This study evaluates the ameliorative effects of adlay seed extract (ASE) in complete Freund's adjuvant (CFA)-induced rheumatoid arthritis (RA) rats. Materials and methods: The RA Sprague-Dawley rat model was induced and the animals were randomly divided into six groups with or without ASE treatment (50, 100 or 200 mg/kg). After 28 d of administration, the symptoms, biochemical parameters and molecular mechanisms were investigated. Results: In the HASE group, the values of paw oedema, PGE2 and MMP-3 decreased from 1.46 ± 0.04 to 0.66 ± 0.07 cm³, from 126.2 ± 11.48 to 79.71 ± 6.8 pg/mL and from 142.7 ± 8.36 to 86.51 ± 5.95 ng/mL, respectively, while body weight increased from 177.25 ± 5.94 to 205 ± 6.52 g. In addition, treatment with ASE reduced the levels of pro-inflammatory cytokines (IL-1β, TNF-α, IL-6, MCP-1) and increased the activities of antioxidant enzymes (GSH-Px, SOD, and CAT). Furthermore, ASE suppressed the mRNA expression of COX-2 and CHI3L1 and improved the mRNA expression of CAT and GPx-1 in the ankle tissues of RA rats. Discussion and conclusions: For the first time, our results indicate that ASE exerts anti-RA effects by inhibiting pro-inflammatory factors and alleviating oxidative stress. Our findings shed light on the research and development of anti-RA functional foods from adlay seed.

    A New Energy-Aware Flexible Job Shop Scheduling Method Using Modified Biogeography-Based Optimization

    Industry accounts for approximately half of total worldwide energy usage. With energy costs rising in recent years, it is critically important to consider one of the most widely used forms of energy, electricity, during production planning. We propose a new mathematical model that determines efficient schedules minimizing the makespan and the electricity consumption cost (ECC) for the flexible job shop scheduling problem (FJSSP) under a time-of-use (TOU) policy. In addition to the two traditional subtasks in the FJSSP, a new subtask called speed selection, which represents the selection of variable operating speeds, is added. A modified biogeography-based optimization (MBBO) algorithm combined with variable neighborhood search (VNS) is then proposed to solve the bi-objective problem. Experiments verify the effectiveness of the proposed MBBO algorithm in obtaining improved scheduling solutions compared to the basic biogeography-based optimization (BBO) algorithm, the genetic algorithm (GA), and harmony search (HS).
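
    Under a TOU policy, the electricity cost of a schedule depends on which tariff periods each operation overlaps. The sketch below shows that ECC computation in its simplest form; the tariff bands, prices, and operation data are made-up illustrations, not values from the paper.

```python
# Minimal sketch of an electricity consumption cost (ECC) under a time-of-use
# (TOU) tariff: each operation's cost is its power draw integrated over the
# tariff periods it overlaps. Tariff bands and operation data are assumptions.

def overlap(a_start: float, a_end: float, b_start: float, b_end: float) -> float:
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

# (period_start_hour, period_end_hour, price per kWh) -- assumed TOU bands
TOU_TARIFF = [(0, 8, 0.30), (8, 20, 0.80), (20, 24, 0.50)]

def ecc(operations: list) -> float:
    """operations: [{'start': h, 'end': h, 'power_kw': p}, ...] within one day."""
    cost = 0.0
    for op in operations:
        for p_start, p_end, price in TOU_TARIFF:
            hours = overlap(op["start"], op["end"], p_start, p_end)
            cost += hours * op["power_kw"] * price
    return cost

if __name__ == "__main__":
    schedule = [
        {"start": 6, "end": 10, "power_kw": 5.0},   # straddles off-peak and peak
        {"start": 21, "end": 23, "power_kw": 3.0},  # shoulder period
    ]
    print(round(ecc(schedule), 2))  # 5*2*0.30 + 5*2*0.80 + 3*2*0.50 = 14.0
```

    A lower-speed operation trades a longer processing time (possibly spilling into a cheaper tariff band) against a lower power draw, which is exactly the trade-off the speed-selection subtask exposes to the optimizer.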

    Visualization of the entire process of rice spikelet infection by Ustilaginoidea virens through nondestructive inoculation

    Introduction: Rice false smut, caused by Ustilaginoidea virens, is a destructive fungal disease encountered in many rice-producing areas worldwide. The aim of this study was to determine the process by which U. virens infects rice spikelets in the field. Methods: Green fluorescent protein-labeled U. virens was used as the inoculum for artificial inoculation of rice at the booting stage via non-destructive panicle sheath instillation inoculation. Results: The conidia of U. virens germinated on the surface of rice glumes and produced hyphae, which clustered at the mouth of the glumes and entered them through the gap between the palea and lemma. The conidia colonized the rice floral organs, leading to pollen abortion. U. virens wrapped the whole floral organ, and the floral organ-hyphae complex gradually expanded, opening the glumes to form a rice false smut ball two to three times larger than that observed in normal rice. Discussion: Panicle sheath instillation inoculation was shown to be a non-destructive inoculation method that can simulate the natural infection of U. virens in the field. The entire infection process of U. virens was visualized, providing a theoretical reference for formulating strategies to control rice false smut in the field.

    M²Chat: Empowering VLM for Multimodal LLM Interleaved Text-Image Generation

    While current LLM chatbots like GPT-4V bridge the gap between human instructions and visual representations to enable text-image generation, they still lack efficient alignment methods for high-fidelity performance on multiple downstream tasks. In this paper, we propose M²Chat, a novel unified multimodal LLM framework for generating interleaved text-image conversations across various scenarios. Specifically, we propose M³Adapter, which efficiently integrates granular low-level visual information and high-level semantic features from multi-modality prompts. On top of the well-aligned fused features, M³Adapter tailors a learnable gating strategy to adaptively balance model creativity and consistency across various tasks. Moreover, to further enhance the effectiveness of M³Adapter while preserving the coherence of semantic context comprehension, we introduce a two-stage M³FT fine-tuning strategy, which optimizes disjoint groups of parameters for image-text alignment and visual instruction, respectively. Extensive experiments demonstrate that M²Chat surpasses state-of-the-art counterparts across diverse benchmarks, showcasing its prowess in interleaved generation, storytelling, and multimodal dialogue systems. The demo and code are available at https://mattie-e.github.io/M2Chat.github.io
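
    The learnable gating described for M³Adapter can be pictured as a per-feature gate that blends low-level visual features with high-level semantic features before they reach the decoder. The sketch below illustrates that idea generically; the architecture, dimensions, and names are assumptions, not the released M²Chat code.

```python
# Generic sketch of a learnable gating adapter that fuses low-level visual
# features with high-level semantic features, as the abstract describes for
# M^3Adapter. The design here is an illustrative assumption, not the paper's code.

import torch
import torch.nn as nn

class GatedFusionAdapter(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, low_level: torch.Tensor, semantic: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([low_level, semantic], dim=-1))  # per-feature gate
        fused = g * low_level + (1.0 - g) * semantic             # convex blend
        return self.proj(fused)

if __name__ == "__main__":
    adapter = GatedFusionAdapter(d_model=32)
    vis = torch.randn(2, 77, 32)      # low-level visual tokens (hypothetical)
    sem = torch.randn(2, 77, 32)      # high-level semantic tokens (hypothetical)
    print(adapter(vis, sem).shape)    # torch.Size([2, 77, 32])
```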

    The negative interplay between Aurora A/B and BRCA1/2 controls cancer cell growth and tumorigenesis via distinct regulation of cell cycle progression, cytokinesis, and tetraploidy

    It is well known that activation of Aurora A/B (Aur A/B) or inactivation of BRCA1/2 induces tumor formation. We and others have reported that the mutual suppression between Aur A/B and BRCA1/2 may modulate cancer cell growth and tumorigenesis; however, the interactive regulation and the mechanisms linking these molecules are still elusive. In this study, by consecutive silencing of Aur A/B or/and BRCA1/2 with specific shRNAs, we showed that, in the BRCA2-deficient pancreatic cancer cell line Capan-1 and in the ovarian cancer cell line OVCA433, Aur A/B and BRCA1/2 inversely regulated each other's expression, likely through proteasome-mediated proteolysis rather than gene transcription. Aur A/B and BRCA1/2 conversely regulated cell cycle progression, mainly through control of p53 and cyclin A. Moreover, disruption of Aur A/B blocked abnormal cytokinesis and decreased cell multinuclearity and chromosome tetraploidy, whereas deprivation of BRCA1/2 promoted abnormal cytokinesis and enhanced cell multinuclearity and tetraploidy. Furthermore, animal assays showed that depletion of Aur A/B inhibited tumor growth of both cell lines, while knockdown of BRCA1/2 promoted tumor growth. However, concurrent silencing of Aur A/B and BRCA1/2 diminished the effects of these molecules on the regulation of cell cycle, cytokinesis, and tetraploidy, leading to tumor burdens similar to those induced by scrambled shRNA-treated control cells. In summary, our study revealed that the negative interplay between Aur A/B and BRCA1/2 inversely controls cell proliferation, cell cycle progression, cell multinuclearity, and tetraploidization to modulate tumorigenesis.