
    Development of an in situ polymeric hydrogel implant of methylprednisolone for spinal injuries

    Purpose: To prepare and characterize in situ gel-forming implants of methylprednisolone for the treatment of spinal cord injuries. Methods: In situ hydrogels of methylprednisolone were prepared by dispersing poly(lactic-co-glycolic acid) (PLGA) polymer and methylprednisolone in N-methyl-pyrrolidone solvent, followed by membrane sterilization. Hydrogels were prepared using varying concentrations of PLGA polymer. The physicochemical properties of the hydrogels, including visual appearance, clarity, pH, viscosity, drug content, and in vitro drug release, were characterized. In vivo studies were performed to examine anti-inflammatory activity (paw edema test) and motor function activity in a rat spinal injury model after injecting the hydrogels into rats. Results: The physicochemical properties of the gels were satisfactory. The F1, F2, F3, and F4 formulations showed 99.67, 95.29, 88.89, and 88.20 % drug release, respectively, at the end of 7 days. In vivo anti-inflammatory activity was highest for F1 (62.85 %). Motor function activity scores (arbitrary scale) for the F1, F2, F3, and F4 formulations were 4.82 ± 0.12, 4.70 ± 0.12, 4.68 ± 0.02, and 4.60 ± 0.05, respectively, and were higher (p < 0.05) for F1, F2, and F3 than for the standard (methylprednisolone, 30 mg/kg body weight, iv; activity score, 4.59 ± 0.20). Conclusions: The in situ hydrogels of methylprednisolone developed here may be useful for the effective management of spinal cord injuries in patients. However, further investigations are required to ascertain their suitability for clinical use. Keywords: Methylprednisolone, In situ hydrogel, Spinal injury, Motor activity, Implant

    Chrion: Optimizing Recurrent Neural Network Inference by Collaboratively Utilizing CPUs and GPUs

    Deploying deep learning models in cloud clusters provides efficient and prompt inference services to accommodate the widespread application of deep learning. These clusters are usually equipped with host CPUs and accelerators with distinct responsibilities for handling serving requests, i.e., general-purpose CPUs for input preprocessing and domain-specific GPUs for forward computation. Recurrent neural networks play an essential role in handling temporal inputs and display distinctive computation characteristics because of their high inter-operator parallelism. Hence, we propose Chrion to optimize recurrent neural network inference by collaboratively utilizing CPUs and GPUs. We formulate model deployment in the CPU-GPU cluster as an NP-hard scheduling problem of directed acyclic graphs on heterogeneous devices. Given an input model in the ONNX format and a user-defined SLO requirement, Chrion first preprocesses the model by parsing and profiling it, and then partitions the graph to select an execution device for each operator. When an online request arrives, Chrion performs forward computation according to the graph partition by executing the operators on the CPU and GPU in parallel. Our experimental results show that execution time can be reduced by up to 19.4% in the latency-optimal pattern and the GPU memory footprint by 67.5% in the memory-optimal pattern, compared with execution on the GPU.
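    The DAG-on-heterogeneous-devices formulation above can be made concrete with a small, hypothetical earliest-finish-time list scheduler: each operator is greedily placed on whichever device (CPU or GPU) lets it finish soonest, given its predecessors. This is only an illustrative sketch of the scheduling problem; the operator names, costs, and the schedule function below are invented, and Chrion's actual partitioner additionally considers SLO targets, memory limits, and CPU-GPU transfer costs.

```python
# Hypothetical greedy earliest-finish-time scheduling of an operator DAG onto two devices.
# All names and cost numbers are illustrative, not taken from Chrion.

def schedule(ops, deps, cost):
    """ops: operator names in topological order.
    deps: op -> list of predecessor ops.
    cost: (op, device) -> execution time on that device."""
    finish = {}                              # op -> (device, finish time)
    device_free = {"cpu": 0.0, "gpu": 0.0}   # when each device becomes idle
    placement = {}
    for op in ops:
        ready = max((finish[p][1] for p in deps.get(op, [])), default=0.0)
        # Place the operator on the device where it finishes earliest
        # (ignores CPU-GPU transfer costs for simplicity).
        best = min(("cpu", "gpu"), key=lambda d: max(ready, device_free[d]) + cost[(op, d)])
        start = max(ready, device_free[best])
        device_free[best] = start + cost[(op, best)]
        finish[op] = (best, device_free[best])
        placement[op] = best
    return placement, max(t for _, t in finish.values())

if __name__ == "__main__":
    ops = ["embed", "rnn_cell", "attn", "proj"]
    deps = {"rnn_cell": ["embed"], "attn": ["embed"], "proj": ["rnn_cell", "attn"]}
    cost = {("embed", "cpu"): 2, ("embed", "gpu"): 1,
            ("rnn_cell", "cpu"): 3, ("rnn_cell", "gpu"): 1,
            ("attn", "cpu"): 2, ("attn", "gpu"): 4,   # small op dominated by GPU launch overhead
            ("proj", "cpu"): 2, ("proj", "gpu"): 1}
    print(schedule(ops, deps, cost))
```

    Because the two independent middle operators land on different devices, they overlap in time, which is the inter-operator parallelism the abstract refers to.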

    FedALA: Adaptive Local Aggregation for Personalized Federated Learning

    A key challenge in federated learning (FL) is statistical heterogeneity, which impairs the generalization of the global model on each client. To address this, we propose Federated learning with Adaptive Local Aggregation (FedALA), which captures the desired information in the global model for client models in personalized FL. The key component of FedALA is the Adaptive Local Aggregation (ALA) module, which adaptively aggregates the downloaded global model and the local model towards the local objective on each client to initialize the local model before training in each iteration. To evaluate the effectiveness of FedALA, we conduct extensive experiments with five benchmark datasets in the computer vision and natural language processing domains. FedALA outperforms eleven state-of-the-art baselines by up to 3.27% in test accuracy. Furthermore, we also apply the ALA module to other federated learning methods and achieve up to 24.19% improvement in test accuracy. Comment: Accepted by AAAI 2023.
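    Read literally, the ALA step learns element-wise weights that blend the downloaded global model into the local model before each round of local training. The sketch below is a minimal, simplified reading of that idea, not the authors' released implementation: it learns one weight tensor per parameter on a single local batch (the real method restricts ALA to higher layers and uses a fraction of the local data), and it assumes PyTorch >= 2.0 for torch.func.functional_call.

```python
# Minimal ALA-style sketch: theta_init = theta_local + sigmoid(w) * (theta_global - theta_local),
# with the element-wise weights w trained on one local batch. Simplified reading of the
# abstract, not FedALA's released code. Requires PyTorch >= 2.0.
import torch
import torch.nn as nn
from torch.func import functional_call

def ala_initialize(model, local_sd, global_sd, x, y, loss_fn, steps=5, lr=1.0):
    local_sd = {k: v.detach() for k, v in local_sd.items()}
    global_sd = {k: v.detach() for k, v in global_sd.items()}
    w = {k: torch.zeros_like(v, requires_grad=True) for k, v in local_sd.items()}
    opt = torch.optim.SGD(list(w.values()), lr=lr)
    for _ in range(steps):
        mixed = {k: local_sd[k] + torch.sigmoid(w[k]) * (global_sd[k] - local_sd[k])
                 for k in local_sd}
        loss = loss_fn(functional_call(model, mixed, (x,)), y)  # evaluate the blend on local data
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Return the aggregated parameters used to initialize local training.
    return {k: (local_sd[k] + torch.sigmoid(w[k]) * (global_sd[k] - local_sd[k])).detach()
            for k in local_sd}

if __name__ == "__main__":
    torch.manual_seed(0)
    local_model, global_model = nn.Linear(10, 2), nn.Linear(10, 2)
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
    init_sd = ala_initialize(local_model, local_model.state_dict(), global_model.state_dict(),
                             x, y, nn.CrossEntropyLoss())
    local_model.load_state_dict(init_sd)  # local training would start from this blend
```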

    Hepatitis B virus infection and replication in a new cell culture system established by fusing HepG2 cells with primary human hepatocytes

    Background: Hepatitis B virus (HBV) infection is strictly species- and tissue-specific; therefore, none of the previously established cell models can reproduce the natural infection process of HBV in vitro. The aim of this study was to establish a new cell line that is susceptible to HBV and can support its replication. Methods: A hybrid cell line was established by fusing primary human hepatocytes with HepG2 cells. The hybrid cells were incubated with HBV-positive serum for 12 hours. HBV DNA was detected by quantitative fluorescence polymerase chain reaction (QF-PCR). HBsAg (surface antigen) and HBeAg (extracellular form of core antigen) were detected by electrochemiluminescence (ECL). HBcAg (core antigen) was detected by the indirect immunofluorescence technique. HBV covalently closed circular DNA (cccDNA) was analyzed by Southern blot hybridization and quantified using real-time PCR. Results: A new cell line was established and named HepCHLine-7. Extracellular HBV DNA was observed from Day 2, and its levels ranged from 9.80 (± 0.32) × 10² copies/mL to 3.12 (± 0.03) × 10⁴ copies/mL. Intracellular HBV DNA was detected at Day 2 after infection, and its levels ranged from 7.92 (± 1.08) × 10³ copies/mL to 5.63 (± 0.11) × 10⁵ copies/mL. HBsAg in the culture medium was detected from Day 4 to Day 20. HBeAg secretion was positive from Day 5 to Day 20. HBcAg constantly showed positive signals in approximately 20% (± 0.82%) of the hybrid cells. Intracellular HBV cccDNA could be detected as early as 2 days postinfection, and the highest level was 15.76 (± 0.26) copies/cell. Conclusion: HepCHLine-7 cells were susceptible to HBV and supported its replication. They are therefore suitable for studying the complete life cycle of HBV.

    FedCP: Separating Feature Information for Personalized Federated Learning via Conditional Policy

    Recently, personalized federated learning (pFL) has attracted increasing attention for its ability to protect privacy, enable collaborative learning, and tackle statistical heterogeneity among clients, e.g., hospitals and smartphones. Most existing pFL methods focus on exploiting the global and personalized information in the client-level model parameters while neglecting that data is the source of both kinds of information. To address this, we propose the Federated Conditional Policy (FedCP) method, which generates a conditional policy for each sample to separate the global information and personalized information in its features, and then processes them by a global head and a personalized head, respectively. By considering personalization in a sample-specific manner, FedCP is more fine-grained than existing pFL methods. Extensive experiments in the computer vision and natural language processing domains show that FedCP outperforms eleven state-of-the-art methods by up to 6.69%. Furthermore, FedCP maintains its superiority when some clients accidentally drop out, which frequently happens in mobile settings. Our code is public at https://github.com/TsingZ0/FedCP. Comment: Accepted by KDD 2023.
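    The per-sample separation can be pictured as a learned gate over the feature vector: one portion of each sample's features goes to a head that is shared across clients, the rest to a head kept on the client. The sketch below is a loose illustration of that routing, with made-up module names and a simple sigmoid gate; FedCP's actual conditional policy network is more involved.

```python
# Loose sketch of sample-conditioned feature routing between a global and a personalized head.
# Names and the sigmoid-gate design are illustrative, not FedCP's exact architecture.
import torch
import torch.nn as nn

class GatedTwoHead(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.global_head = nn.Linear(feat_dim, num_classes)    # would be aggregated by the server
        self.personal_head = nn.Linear(feat_dim, num_classes)  # would stay on the client

    def forward(self, features):
        gate = self.policy(features)              # per-sample, per-dimension split in [0, 1]
        return self.global_head(gate * features) + self.personal_head((1 - gate) * features)

if __name__ == "__main__":
    head = GatedTwoHead(feat_dim=64, num_classes=10)
    feats = torch.randn(8, 64)                    # features from some backbone
    print(head(feats).shape)                      # torch.Size([8, 10])
```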

    Seroconversion to Pandemic (H1N1) 2009 Virus and Cross-Reactive Immunity to Other Swine Influenza Viruses

    To assess herd immunity to swine influenza viruses, we determined antibodies in 28 paired serum samples from participants in a prospective serologic cohort study in Hong Kong who had seroconverted to pandemic (H1N1) 2009 virus. Results indicated that infection with pandemic (H1N1) 2009 virus broadens cross-reactive immunity to other recent subtype H1 swine viruses.

    GPFL: Simultaneously Learning Global and Personalized Feature Information for Personalized Federated Learning

    Federated Learning (FL) is popular for its privacy-preserving and collaborative learning capabilities. Recently, personalized FL (pFL) has received attention for its ability to address statistical heterogeneity and achieve personalization in FL. However, from the perspective of feature extraction, most existing pFL methods focus only on extracting global or personalized feature information during local training, which fails to meet the collaborative learning and personalization goals of pFL. To address this, we propose a new pFL method, named GPFL, that simultaneously learns global and personalized feature information on each client. We conduct extensive experiments on six datasets in three statistically heterogeneous settings and show the superiority of GPFL over ten state-of-the-art methods regarding effectiveness, scalability, fairness, stability, and privacy. Moreover, GPFL mitigates overfitting and outperforms the baselines by up to 8.99% in accuracy. Comment: Accepted by ICCV 2023.
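    The abstract does not detail GPFL's architecture, so the snippet below is only a generic, hypothetical picture of what learning global and personalized feature information at the same time can look like on a client: one branch whose parameters would be aggregated by the server and one branch that never leaves the client, trained jointly on the same backbone features. It should not be read as GPFL's actual design.

```python
# Generic, hypothetical two-branch client model: a shared ("global") branch and a local
# ("personalized") branch trained jointly on the same features. Not GPFL's actual design.
import torch
import torch.nn as nn

class TwoBranchClient(nn.Module):
    def __init__(self, in_dim=32, feat_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.global_branch = nn.Linear(feat_dim, num_classes)    # sent to the server each round
        self.personal_branch = nn.Linear(feat_dim, num_classes)  # kept on the client

    def forward(self, x):
        h = self.backbone(x)
        return 0.5 * (self.global_branch(h) + self.personal_branch(h))

if __name__ == "__main__":
    model = TwoBranchClient()
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    nn.CrossEntropyLoss()(model(x), y).backward()  # one local step; both branches get gradients
```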

    Eliminating Domain Bias for Federated Learning in Representation Space

    Federated learning (FL) has recently become popular for its privacy-preserving and collaborative learning abilities. However, in statistically heterogeneous scenarios, we observe that biased data domains on clients cause a representation bias phenomenon and further degrade generic representations during local training, i.e., the representation degeneration phenomenon. To address these issues, we propose a general framework, Domain Bias Eliminator (DBE), for FL. Our theoretical analysis reveals that DBE promotes bi-directional knowledge transfer between server and client, as it reduces the domain discrepancy between them in representation space. Moreover, extensive experiments on four datasets show that DBE can greatly improve existing FL methods in both generalization and personalization abilities. A DBE-equipped FL method can outperform ten state-of-the-art personalized FL methods by a large margin. Our code is public at https://github.com/TsingZ0/DBE. Comment: Accepted by NeurIPS 2023, 24 pages.
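    As one concrete (and deliberately simplified) picture of "reducing the domain discrepancy between server and client in representation space", the sketch below adds a penalty that pulls each client's mean representation toward a consensus mean broadcast by the server; the consensus_mean tensor and the lam weight are assumptions for illustration, not DBE's exact mechanism.

```python
# Hypothetical representation-alignment penalty: task loss plus the squared distance between
# the client's batch-mean representation and a server-provided consensus mean.
# Illustrative only; not DBE's exact mechanism.
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_step_loss(backbone, head, x, y, consensus_mean, lam=0.1):
    h = backbone(x)                                        # per-sample representations
    task_loss = F.cross_entropy(head(h), y)
    align_loss = (h.mean(dim=0) - consensus_mean).pow(2).sum()
    return task_loss + lam * align_loss

if __name__ == "__main__":
    backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
    head = nn.Linear(64, 10)
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    consensus_mean = torch.zeros(64)                       # in practice, broadcast by the server
    local_step_loss(backbone, head, x, y, consensus_mean).backward()
```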