
    Alendronate (ALN) combined with Osteoprotegerin (OPG) improves the mechanical properties of long bone more significantly than ALN or OPG alone in ovariectomized rats

    Abstract. Background: Alendronate (ALN) is the most common bisphosphonate used for the treatment of osteoporosis. Osteoprotegerin (OPG) has also been shown to reduce osteoporotic changes in both humans and experimental animals after systemic administration. The aim of the current study was to test whether the anti-resorption effects of ALN can be enhanced when it is used in combination with OPG. Objectives: To investigate the effects of ALN, OPG, or their combination on bone mass and bone mechanical properties in ovariectomized (OVX) rats. Methods: OVX rats were treated with ALN, OPG-Fc, or OPG-Fc plus ALN. Biochemical markers, trabecular bone mass, biomechanics, histomorphometry, and RANKL expression in the bone tissues were examined following the treatments. Results: The ALN, OPG-Fc, and ALN+OPG-Fc treatments all prevented bone loss in the OVX rats; there was no statistical difference among the three treatment groups in vertebral BMD, mineralizing surfaces, mineral apposition rate, or BFR/BS. The ALN+OPG-Fc group showed significantly greater mechanical strength of the lumbar vertebral bodies and femoral shafts than the ALN and OPG-Fc groups. RANKL protein expression in the vertebral bones was significantly decreased in the ALN and ALN+OPG-Fc groups, suggesting that the combined use of OPG-Fc and ALN may amplify the inhibition of bone resorption by suppressing RANKL-dependent osteoclastogenesis. Conclusion: The combined use of OPG-Fc and ALN may be a new treatment strategy for reversing bone loss and restoring bone quality in osteoporotic disorders.

    Can Large Pre-trained Models Help Vision Models on Perception Tasks?

    The recent upsurge in pre-trained large models (e.g. GPT-4) has swept across the entire deep learning community. Such powerful large language models (LLMs) demonstrate advanced generative ability and multimodal understanding capability, and quickly achieve new state-of-the-art performance on a variety of benchmarks. A pre-trained LLM usually plays the role of a universal AI model that can conduct various tasks, including context reasoning, article analysis, and image content comprehension. However, given the prohibitively high memory and computational cost of deploying such a large model, conventional models (such as CNNs and ViTs) remain essential for many visual perception tasks. In this paper, we propose to enhance the representation ability of ordinary vision models for perception tasks (e.g. image classification) by taking advantage of large pre-trained models. We present a new learning paradigm in which the knowledge extracted from large pre-trained models is utilized to help models like CNN and ViT learn enhanced representations and achieve better performance. First, we curate a high-quality description set by prompting a multimodal LLM to generate descriptive text for all training images. We then feed these detailed descriptions into a pre-trained encoder to extract text embeddings with rich semantic information that encode the content of the images. During training, the text embeddings serve as extra supervisory signals and are aligned with the image representations learned by the vision models. This alignment process helps the vision models learn better and achieve higher accuracy with the assistance of pre-trained LLMs. We conduct extensive experiments to verify that the proposed algorithm consistently improves the performance of various vision models with heterogeneous architectures. Comment: 9 pages, 5 figures
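The alignment step described in this abstract can be illustrated with a minimal sketch. The paper's exact objective is not given here, so the cosine-based alignment loss below is an assumption; `img_feats` and `txt_embeds` stand in for the vision-model representations and the LLM-derived text embeddings of the same batch of images.

```python
import numpy as np

def alignment_loss(img_feats, txt_embeds):
    """Mean (1 - cosine similarity) between paired image and text features.

    Minimizing this pulls each image representation toward the text
    embedding of its description, i.e. the extra supervisory signal.
    """
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_embeds / np.linalg.norm(txt_embeds, axis=1, keepdims=True)
    cos = np.sum(img * txt, axis=1)          # per-pair cosine similarity
    return float(np.mean(1.0 - cos))

# Perfectly aligned pairs (same direction) give zero loss.
feats = np.array([[1.0, 0.0], [0.0, 2.0]])
print(alignment_loss(feats, feats))
```

In practice such a term would be added to the vision model's task loss (e.g. cross-entropy) with a weighting coefficient.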

    Evaluating the Vegetation Recovery in the Damage Area of Wenchuan Earthquake Using MODIS Data

    The catastrophic magnitude 8.0 earthquake that occurred on 12 May 2008 in Wenchuan, China caused extensive damage to vegetation through widespread landslides and debris flows. Over the following five years, the Chinese government implemented a series of measures to restore vegetation in the severely afflicted area. How well has the vegetation recovered? It is both necessary and important to evaluate vegetation recovery in earthquake-stricken areas. Based on MODIS NDVI data from 2005 to 2013, the vegetation damage area was extracted with a quantified threshold detection method. The vegetation recovery rate five years after the earthquake was evaluated with respect to counties, altitude, fault zones, earthquake intensity, soil texture, and vegetation types, and was assessed over time. We propose a new method to obtain the vegetation damage threshold quantitatively, and conclude that: (1) the vegetation damage threshold was 13.47%, and 62.09% of the field points were located in the extracted damage area; (2) the total vegetation damage area was 475,688 ha, accounting for 14.34% of the study area, and was primarily distributed along the central fault zone, in the southwest mountainous areas, and along rivers in the midwest of the study area; (3) vegetation recovery in the damage area was better in the northeast of the study area and in the western portion of the Wenchuan-Maoxian fracture; recovery improved with increasing altitude; there was no obvious relationship between the clay content of the topsoil and vegetation recovery; (4) meadows recovered best, and mixed coniferous broad-leaved forest recovered worst; (5) 81,338 ha of vegetation in the damage area is currently undergoing degradation, and the main vegetation types in the degradation area are coniferous forest (31.39%) and scrub (34.17%); (6) from 2009 to 2013, 41% of the damage area was restored to its pre-earthquake level, 9% had not recovered, and 50% will continue to recover. The Chinese government usually sets five years as the period for post-disaster reconstruction. This paper can serve as guidance for Chinese government departments, encouraging additional investment in vegetation recovery.
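The threshold-detection step in this abstract can be sketched as follows. The abstract reports a damage threshold of 13.47% but does not define the change metric, so the relative post-earthquake NDVI drop used below is an assumption for illustration.

```python
import numpy as np

def damaged_mask(ndvi_pre, ndvi_post, threshold=0.1347):
    """Flag pixels whose relative NDVI drop exceeds the damage threshold.

    ndvi_pre / ndvi_post: per-pixel NDVI before and after the earthquake.
    threshold: fractional drop treated as vegetation damage (13.47% here,
    the value the paper derives quantitatively).
    """
    drop = (ndvi_pre - ndvi_post) / ndvi_pre
    return drop > threshold

pre = np.array([0.80, 0.70, 0.60])
post = np.array([0.50, 0.68, 0.58])
print(damaged_mask(pre, post))  # only the first pixel exceeds the threshold
```

Applying such a mask to the full MODIS NDVI grid yields the damage area, which can then be tracked year by year to measure recovery.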

    Does Graph Distillation See Like Vision Dataset Counterpart?

    Training on large-scale graphs has achieved remarkable results in graph representation learning, but its cost and storage have attracted increasing concern. Existing graph condensation methods primarily focus on optimizing the feature matrices of condensed graphs while overlooking the structure information of the original graphs. To investigate the impact of this structure information, we conduct an analysis in the spectral domain and empirically identify substantial Laplacian Energy Distribution (LED) shifts in previous works. Such shifts lead to poor performance in cross-architecture generalization and in specific tasks, including anomaly detection and link prediction. In this paper, we propose a novel Structure-broadcasting Graph Dataset Distillation (SGDD) scheme that broadcasts the original structure information into the generation of the synthetic graphs, explicitly preventing the original structure information from being overlooked. Theoretically, the synthetic graphs produced by SGDD are expected to have smaller LED shifts than those of previous works, leading to superior performance in both cross-architecture settings and specific tasks. We validate SGDD across 9 datasets and achieve state-of-the-art results on all of them: for example, on the YelpChi dataset, our approach maintains 98.6% of the test accuracy of training on the original graph dataset with a 1,000-fold reduction in graph scale. Moreover, we empirically observe reductions of 17.6% to 31.4% in LED shift across the 9 datasets. Extensive experiments and analysis verify the effectiveness and necessity of the proposed designs. The code is available in the GitHub repository: https://github.com/RingBDStack/SGDD. Comment: Accepted by NeurIPS 202
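The Laplacian Energy Distribution that this abstract compares between original and synthetic graphs is derived from the Laplacian spectrum. The sketch below computes that spectrum for a small graph; it assumes the symmetric normalized Laplacian and dense adjacency matrices (the paper's exact LED definition may differ), so it is illustrative only.

```python
import numpy as np

def laplacian_spectrum(adj):
    """Sorted eigenvalues of the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}.

    Comparing this spectrum (or an energy distribution over it) for the
    original and condensed graphs gives an LED-shift-style measure.
    Assumes no isolated nodes (all degrees > 0).
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = deg ** -0.5
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(lap))

# Triangle graph: eigenvalues of the normalized Laplacian lie in [0, 2].
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
eigs = laplacian_spectrum(tri)
print(eigs)  # smallest eigenvalue is 0 for a connected graph
```

A shift metric could then be any distance (e.g. Wasserstein) between the two graphs' spectral distributions.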