
    Association of Alternative Dietary Patterns with Osteoporosis and Fracture Risk in Older People: A Scoping Review

    Purpose: Although the Mediterranean diet has been associated with a lower risk of hip fracture, the effect of other dietary patterns on bone density and fracture risk is unknown. This scoping review investigates the association between adherence to alternative dietary patterns (other than the traditional Mediterranean diet) and osteoporosis or osteoporotic fracture risk in older people. Methods: A systematic search was carried out on three electronic databases (Medline, EMBASE, and Scopus) to identify original papers studying the association between alternative dietary patterns (e.g., the Baltic Sea Diet (BSD), modified/alternative Mediterranean diets in non-Mediterranean populations, and Dietary Approaches to Stop Hypertension (DASH)), assessed using a priori methods (validated scores), and the risk of osteoporotic fracture or Bone Mineral Density (BMD) in people aged ≥50 years (or with a reported average participant age ≥60). Results from the included studies were synthesized narratively. Results: Six observational studies (four prospective cohort and two cross-sectional) were included. There was no significant association between BMD and BSD or DASH scores. Higher adherence to DASH was associated with a lower risk of lumbar spine osteoporosis in women in one study, although it was not associated with the risk of hip fracture in another study of men and women. Higher adherence to aMED (alternative Mediterranean diet) was associated with a lower risk of hip fracture in one study, whereas higher adherence to mMED (modified Mediterranean diet) was associated with a lower risk of hip fracture in one study but showed no significant association in another. However, diet scores were heterogeneous across the cohort studies. Conclusions: There is some evidence that modified and alternative Mediterranean diets may reduce the risk of hip fracture, and that DASH may improve lumbar spine BMD. Larger cohort studies are needed to validate these findings.

    4′-Chloro-3′,5′-dimethoxyacetanilide

    The title compound, C10H12ClNO3, crystallizes with four independent molecules in the asymmetric unit, which are linked by intermolecular N—H⋯O hydrogen bonds.

    Unleashing the Power of Visual Prompting At the Pixel Level

    This paper presents a simple and effective visual prompting method for adapting pre-trained models to downstream recognition tasks. Our method includes two key designs. First, rather than directly adding the prompt and the image together, we treat the prompt as an extra, independent learnable component. We show that the strategy for reconciling the prompt and the image matters, and find that wrapping the prompt around a properly shrunken image empirically works best. Second, we re-introduce two "old tricks" commonly used in building transferable adversarial examples, i.e., input diversity and gradient normalization, into visual prompting. These techniques improve optimization and enable the prompt to generalize better. We provide extensive experimental results to demonstrate the effectiveness of our method. Using a CLIP model, our prompting method sets a new record of 82.8% average accuracy across 12 popular classification datasets, substantially surpassing the prior art by +5.6%. Notably, this prompting performance already outperforms linear probing by +2.1% and can even match full fine-tuning on certain datasets. In addition, our prompting method shows competitive performance across different data scales and under distribution shifts. The code is publicly available at https://github.com/UCSC-VLAA/EVP
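    The first design above, wrapping a learnable prompt around a shrunken image rather than adding the two together, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the border width `pad`, the nearest-neighbour resize, and the function name are all assumptions made for the sketch; in training, `prompt` would be a learnable parameter optimized by backpropagation.

    ```python
    import numpy as np

    def apply_border_prompt(image, prompt, pad=16):
        """Place a shrunken image inside a learnable border prompt (sketch).

        image:  (C, H, W) input image
        prompt: (C, H, W) learnable prompt, same size as the padded output
        """
        c, h, w = image.shape
        # Shrink the image so the prompt occupies a border of width `pad`.
        small_h, small_w = h - 2 * pad, w - 2 * pad
        # Nearest-neighbour resize via index sampling (stand-in for a real resize).
        rows = np.arange(small_h) * h // small_h
        cols = np.arange(small_w) * w // small_w
        shrunk = image[:, rows][:, :, cols]
        # The prompt and image occupy disjoint pixels: the image fills the
        # centre, the prompt keeps the border.
        out = prompt.copy()
        out[:, pad:h - pad, pad:w - pad] = shrunk
        return out

    rng = np.random.default_rng(0)
    img = rng.random((3, 224, 224))
    prompt = np.zeros((3, 224, 224))  # stands in for a learned parameter
    prompted = apply_border_prompt(img, prompt)
    print(prompted.shape)  # (3, 224, 224)
    ```

    Keeping the prompt and the image in disjoint pixel regions is what makes the prompt an "extra and independent" component, in contrast to additive prompting where the two are summed at every pixel.
    
    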

    In Defense of Image Pre-Training for Spatiotemporal Recognition

    Image pre-training, the current de facto paradigm for a wide range of visual tasks, is generally less favored in the field of video recognition. By contrast, a common strategy is to train spatiotemporal convolutional neural networks (CNNs) from scratch. Interestingly, however, by taking a closer look at these from-scratch-learned CNNs, we note that certain 3D kernels exhibit much stronger appearance modeling ability than others, suggesting that appearance information is already well disentangled during learning. Inspired by this observation, we hypothesize that the key to effectively leveraging image pre-training lies in decomposing the learning of spatial and temporal features, revisiting image pre-training as an appearance prior for initializing 3D kernels. In addition, we propose Spatial-Temporal Separable (STS) convolution, which explicitly splits the feature channels into spatial and temporal groups, to enable a more thorough decomposition of spatiotemporal features when fine-tuning 3D CNNs. Our experiments show that simply replacing 3D convolution with STS notably improves a wide range of 3D CNNs, without increasing parameters or computation, on both Kinetics-400 and Something-Something V2. Moreover, this new training pipeline consistently achieves better results on video recognition with significant speedup. For instance, we achieve +0.6% top-1 with SlowFast on Kinetics-400 over the strong 256-epoch 128-GPU baseline while fine-tuning for only 50 epochs with 4 GPUs. The code and models are available at https://github.com/UCSC-VLAA/Image-Pretraining-for-Video
    Comment: Published as a conference paper at ECCV 2022
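    The channel split at the heart of STS convolution can be sketched as below. This is an illustrative toy, not the paper's implementation: the split ratio, the use of depthwise box filters in place of learned kernels, and the function name are all assumptions. The point is the structure: one channel group is convolved only over (H, W) per frame, the other only over T, and the groups are then concatenated back.

    ```python
    import numpy as np

    def sts_conv(x, spatial_ratio=0.5):
        """Toy Spatial-Temporal Separable (STS) convolution on a (C, T, H, W) map.

        Channels are split into a spatial group, filtered with a 1x3x3 kernel
        per frame, and a temporal group, filtered with a 3x1x1 kernel across
        frames. Depthwise box filters stand in for learned weights.
        """
        c, t, h, w = x.shape
        cs = int(c * spatial_ratio)            # size of the spatial group
        xs, xt = x[:cs], x[cs:]                # explicit channel split

        # Spatial group: 3x3 box filter over (H, W), applied independently per frame.
        p = np.pad(xs, ((0, 0), (0, 0), (1, 1), (1, 1)), mode="edge")
        ys = sum(p[:, :, i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0

        # Temporal group: length-3 box filter over T, applied independently per pixel.
        q = np.pad(xt, ((0, 0), (1, 1), (0, 0), (0, 0)), mode="edge")
        yt = sum(q[:, i:i + t] for i in range(3)) / 3.0

        # Regroup: output has the same channel count and shape as the input,
        # so STS can drop in for a 3D convolution without adding parameters.
        return np.concatenate([ys, yt], axis=0)

    feat = np.random.default_rng(1).random((8, 4, 16, 16))
    out = sts_conv(feat)
    print(out.shape)  # (8, 4, 16, 16)
    ```

    Because each group only filters along its own axes, the spatial group can be initialized directly from a 2D image-pre-trained kernel, which is how the decomposition lets image pre-training act as an appearance prior.
    
    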