57 research outputs found

    Association between composite dietary antioxidant index and handgrip strength in American adults: Data from National Health and Nutrition Examination Survey (NHANES, 2011-2014)

    Background: The Composite Dietary Antioxidant Index (CDAI), a composite score of multiple dietary antioxidants (vitamins A, C, and E, selenium, zinc, and carotenoids), represents an individual's overall dietary antioxidant intake profile. The CDAI was developed based on the combined effects of these antioxidants on the inflammatory markers tumor necrosis factor-α (TNF-α) and interleukin-1β (IL-1β), and it has been associated with many health outcomes, including depression, all-cause mortality, and colorectal cancer. Handgrip strength (HGS) is used as a simple measure of muscle strength: it is highly correlated with overall muscle strength and also serves as a diagnostic tool for many adverse health outcomes, including sarcopenia and frailty syndromes.
    Purpose: The association between CDAI and HGS is currently unclear. This study investigated the association between CDAI (including its components) and HGS in 6,019 American adults.
    Methods: The data were drawn from the 2011–2014 National Health and Nutrition Examination Survey (NHANES), from which a total of 6,019 American adults were screened and included. Weighted generalized linear regression models were used to evaluate the association between CDAI (including its components) and HGS.
    Results: (1) CDAI was significantly positively correlated with HGS (β = 0.009, 0.005∼0.013, P < 0.001); compared with the lowest CDAI quartile, the highest quartile was positively correlated with HGS (β = 0.084, 0.042∼0.126, P = 0.002), and the trend test was significant (P for trend < 0.0100). In the gender subgroup analysis, CDAI in males was significantly positively correlated with HGS (β = 0.015, 0.007∼0.023, P = 0.002); compared with the lowest quartile, the highest quartile was positively correlated with HGS (β = 0.131, 0.049∼0.213, P = 0.006), and the trend test was significant (P for trend < 0.0100). In females, CDAI was not correlated with HGS, and the trend test was not statistically significant (P > 0.05). (2) Dietary intakes of vitamin E, zinc, and selenium showed significant positive correlations with HGS (β = 0.004, 0.002∼0.007, P = 0.006; β = 0.007, 0.004∼0.009, P < 0.001; β = 0.001, 0.001∼0.001, P < 0.001). Vitamin A, vitamin C, and carotenoids were significantly associated with HGS in the crude model, but these associations disappeared in the fully adjusted model as control variables were added. In the gender subgroup analysis (model 3), male dietary intakes of vitamin E, zinc, and selenium were significantly positively correlated with HGS (β = 0.005, 0.002∼0.009, P = 0.011; β = 0.007, 0.004∼0.011, P = 0.001; β = 0.001, 0.001∼0.001, P = 0.004); the remaining indicators showed no significant correlation with HGS. Among females, dietary zinc intake was significantly positively correlated with HGS (β = 0.005, 0.001∼0.008, P = 0.008), and no other indicator was significantly correlated with HGS (P > 0.05).
    Conclusion: The CDAI was associated with HGS, but with a gender difference: the association was present in males but not significant in females. Among the dietary antioxidants, intakes of vitamin E, selenium, and zinc were associated with HGS in males, whereas only zinc was associated with HGS in females.
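
    The abstract does not spell out how the CDAI is computed. As a minimal sketch, assuming the approach commonly used in the NHANES-based CDAI literature (standardize each of the six antioxidant intakes as a z-score against the sample mean and standard deviation, then sum the z-scores), the calculation could look like the following; the column names and intake values are hypothetical, not the study's actual variables.

```python
# Minimal sketch of a CDAI-style composite score: per-nutrient z-scores summed
# across the six dietary antioxidants. Data and column names are illustrative.
import pandas as pd

ANTIOXIDANTS = ["vitamin_a", "vitamin_c", "vitamin_e", "selenium", "zinc", "carotenoids"]

def compute_cdai(df: pd.DataFrame) -> pd.Series:
    """Sum of per-nutrient z-scores across the six dietary antioxidants."""
    z = (df[ANTIOXIDANTS] - df[ANTIOXIDANTS].mean()) / df[ANTIOXIDANTS].std()
    return z.sum(axis=1)

# Example usage with made-up intakes for three participants
intakes = pd.DataFrame({
    "vitamin_a": [600, 900, 450],       # mcg RAE/day
    "vitamin_c": [70, 120, 40],         # mg/day
    "vitamin_e": [8, 15, 5],            # mg/day
    "selenium": [100, 140, 80],         # mcg/day
    "zinc": [9, 14, 7],                 # mg/day
    "carotenoids": [9000, 15000, 4000]  # mcg/day
})
print(compute_cdai(intakes))
```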

    Experimental study of dynamic characteristics of an ultra-large jacket offshore wind turbine under wind and wave loads using aero-hydro-structural elastic similarities

    Owing to the difficulties of designing scaled rotor-nacelle assembly (RNA) and support-structure models and of alleviating small-scale effects, only limited dynamic model tests have been conducted for jacket offshore wind turbines (OWTs), which are extensively constructed in offshore wind farms at water depths of 40–50 m. To address this limitation, an integrated test method based on aero-hydro-structural elastic similarities is proposed in this study. It comprises a performance-scaled RNA model and a scaled support-structure model. A redesigned blade model is adopted in the scaled RNA model to ensure similarity of the aerodynamic thrust loads without modifying the scaled test winds. Moreover, an auxiliary scaled drivetrain and blade pitch control are designed to simulate the operational states of a practical OWT. The scaled model of the OWT support structure is fabricated based on the joint hydro-structural elastic similarity, and small-scale effects are mitigated by introducing sectional bending-stiffness similarities. Subsequently, dynamic model tests of an ultra-large jacket OWT under wind-only, wave-only, and combined wind and wave conditions are carried out. The accuracy of the fabricated OWT test model is validated against the recorded responses, and the influence of the dominant frequencies on the dynamic responses of the OWT model is quantitatively evaluated using a wavelet packet-based energy analysis method. Furthermore, the coupling mechanisms of the scaled OWT model under typical wind and wave loads are investigated, and the interactions between the environmental loads and the OWT motions are demonstrated.
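
    The abstract refers to aero-hydro-structural elastic similarities without listing the scale relations. As a minimal sketch, assuming the conventional Froude-similarity factors typically used when designing scaled offshore wind turbine model tests (the study's exact similarity laws, including the sectional bending-stiffness matching it mentions, may differ), the prototype-to-model ratios could be tabulated as follows.

```python
# Conventional Froude-scaling factors for a geometric scale ratio "lam";
# an illustrative sketch, not the authors' specific similarity design.

def froude_scale_factors(length_ratio: float) -> dict:
    """Return prototype-to-model ratios for common quantities under Froude scaling."""
    lam = length_ratio
    return {
        "length": lam,
        "time / period": lam ** 0.5,
        "velocity": lam ** 0.5,
        "acceleration": 1.0,
        "frequency": lam ** -0.5,
        "mass": lam ** 3,              # assumes matched density ratios
        "force (e.g. thrust)": lam ** 3,
        "bending moment": lam ** 4,
        "bending stiffness EI": lam ** 5,
    }

# Example: a hypothetical 1:50 scaled jacket OWT model
for quantity, ratio in froude_scale_factors(50).items():
    print(f"{quantity:25s} prototype/model = {ratio:g}")
```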

    Experimental Study of Ultra-Large Jacket Offshore Wind Turbine under Different Operational States Based on Joint Aero-Hydro-Structural Elastic Similarities

    The jacket substructure is widely adopted for offshore wind farms in the southeastern offshore regions of China. The dynamic characteristics and coupling mechanisms of jacket offshore wind turbines (OWTs) have been extensively investigated using numerical simulation tools. However, few dynamic model tests have been designed and performed for this type of OWT. Therefore, the coupling mechanisms of jacket OWTs determined using numerical methods require further validation through experimental tests. Accordingly, an integrated scaled jacket OWT physical test model is designed in this study. It consists of a scaled rotor-nacelle assembly (RNA) and a scaled support-structure model. For the scaled RNA model, a redesigned blade model is adopted to ensure similarity of the aerodynamic thrust loads without modifying the scaled test winds. Auxiliary scaled drivetrain and blade pitch control system models are designed to simulate the operational states of a practical OWT. The scaled model of the OWT support structure is fabricated on the basis of the joint hydro-structural elastic similarities. A sensor arrangement involving a three-component load cell and acceleration sensors is used to record the OWT thrust loads and model motions, respectively. Dynamic model tests under typical scaled wind fields are then implemented. Furthermore, the coupling mechanisms of the OWT model under various test winds are investigated using the wavelet packet method, and the influences of inflow winds, operational states, and mechanical strategies are discussed.
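
    Both of these studies quantify how dominant frequency bands contribute to the measured responses using a wavelet packet method. As a minimal sketch of one common way to do this, assuming the PyWavelets library with an illustrative wavelet basis and decomposition level (not necessarily the authors' settings), the relative energy in each terminal frequency band of a response signal could be computed as follows.

```python
# Wavelet packet energy analysis sketch: decompose a response signal and report
# the relative energy of each terminal node, ordered by frequency band.
import numpy as np
import pywt

def wavelet_packet_energy(signal: np.ndarray, wavelet: str = "db4", level: int = 4):
    """Relative energy of each terminal wavelet-packet node, ordered by frequency."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return energies / energies.sum()

# Example: a synthetic acceleration response with two dominant frequencies
fs = 100.0                                  # sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
acc = np.sin(2 * np.pi * 0.3 * t) + 0.5 * np.sin(2 * np.pi * 2.0 * t)
print(wavelet_packet_energy(acc))           # energy concentrated in the low-frequency bands
```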

    Web text-aided image classification

    Image classification is often solved as a machine-learning problem: a classifier is first learned from training data, and class labels are then assigned to unlabeled testing data based on the outputs of the classifier. To train an image classifier with good generalization capability, conventional methods often require a large number of human-labeled training images. However, large numbers of well-labeled training images may not always be available. With the exponential growth of web data, exploiting multimodal online sources via standard search engines has become a trend in visual recognition, as it can effectively alleviate the shortage of training data. However, web data such as text is often uncooperative due to its unstructured and noisy nature. Therefore, how to represent and utilize web text data to aid image classification is the focus of this thesis.
    Since the target image data and web text data usually come from different domains whose representations lie in different feature spaces, we first investigate the two modalities of data separately and then combine the bimodal information at the decision level. In particular, low-level text modeling approaches, including class-tag occurrence and bag-of-words vectorization, and image modeling approaches such as dense SIFT are employed to learn separate classifiers, whose decision scores are aggregated adaptively. On the other hand, we believe that the correlation between the image modality and the web text modality is also very important. In order to explore this cross-modal correlation, we also investigate feature-level multimodal fusion models in this PhD thesis. Learning dense, real-valued text representations in a manner similar to learning image representations is the keystone of feature-level multimodal fusion. In this thesis, we propose the novel task-specific semantic matching network and task-generic semantic convolutional network models to learn semantic text features. These text feature learning methods are motivated by the transferable mid-level image representations learned by convolutional neural networks (CNNs). Besides the traditional supervised learning setting, we find that the web text-aided strategy also makes a difference in the weakly supervised setting, when only a little labeled data is available. Specifically, we investigate web text-aided one-shot learning, which is able to identify unlabeled data from novel classes based on a single observation using an adaptive attention mechanism.
    This thesis is organized as follows. Chapter 1 introduces the motivation behind web resources-aided image classification. Chapter 2 reviews the related work in this field, including image representation learning, text representation learning, and multimodal fusion learning. Chapter 3 investigates decision-level data fusion for web-aided image classification: an adaptive combiner for two separate bimodal classifiers is developed at the decision level. This adaptive fusion algorithm is inspired by the multisensory integration mechanism of humans, and its adaptability is achieved by a reliability-dependent weighting of the different sensory modalities. In Chapter 4, a novel text modeling approach, namely the semantic matching neural network (SMNN), is proposed; it is quantified by cosine similarity measures between the embedded text input and task-specific semantic filters, and it is capable of learning semantic features from the text associated with web images. The SMNN text features have improved reliability and applicability compared to text features obtained by other methods. The SMNN text features and convolutional neural network visual features are then jointly learned in a shared representation, which aims to capture the correlations between the two modalities at the feature level. Improving upon the task-specific filters of the SMNN, Chapter 5 presents a novel semantic CNN (s-CNN) model for high-level text representation learning that encodes semantic correlation based on task-generic semantic filters. However, the s-CNN model inevitably introduces surplus semantic filters in order to achieve better applicability and generalization across universal tasks, and these surplus filters may lead to semantic overlap and feature redundancy. To address this issue, the s-CNN Clustered (s-CNNC) models, which use filter clusters instead of individual filters, are presented. Interacting with image CNN models, the s-CNNC models can further boost image classification under a multimodal framework that can be trained end-to-end. Chapter 6 develops an adaptive encoder-decoder attention network that uses web text to aid one-shot image classification. Without any ground-truth semantic clues, e.g., class tag information, our model is able to extract useful information from web-sourced data instead. To address the noisy nature of web text, an adaptive mechanism is introduced to determine when to attend to text-inferred visual features and when to rely on the original visual features. A summary and the future prospects of this PhD work are discussed in Chapter 7.
    Doctor of Philosophy
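
    The abstract describes the SMNN as computing cosine similarities between embedded text input and task-specific semantic filters. As a minimal sketch of one plausible reading of that description (the thesis's actual architecture, dimensions, and pooling are not given here and may differ), a semantic-matching-style text layer could be written in PyTorch as follows.

```python
# Sketch of a semantic-matching-style layer: cosine similarity between each word
# embedding of the web text and a bank of learnable semantic filters, max-pooled
# over the words. Dimensions and pooling choices are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticMatchingLayer(nn.Module):
    def __init__(self, embed_dim: int = 300, num_filters: int = 256):
        super().__init__()
        # Each row is a "semantic filter" living in the word-embedding space.
        self.filters = nn.Parameter(torch.randn(num_filters, embed_dim))

    def forward(self, word_embeddings: torch.Tensor) -> torch.Tensor:
        # word_embeddings: (batch, num_words, embed_dim)
        w = F.normalize(word_embeddings, dim=-1)
        f = F.normalize(self.filters, dim=-1)
        cosine = w @ f.t()                 # (batch, num_words, num_filters)
        return cosine.max(dim=1).values    # max-pool over words -> (batch, num_filters)

# Example: a batch of 2 web-text snippets, 20 words each, 300-d embeddings
features = SemanticMatchingLayer()(torch.randn(2, 20, 300))
print(features.shape)  # torch.Size([2, 256])
```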

    Task-generic semantic convolutional neural network for web text-aided image classification

    In this work, we explore how to use external, auxiliary web text to improve image classification. The keystone of web text-aided image classification is representation learning for these two modalities of data. In the recent decade, convolutional neural networks (CNNs) have become the core representation method for images and a commodity in the computer vision community. On the other hand, word vectors have had a similarly wide-ranging impact on representation learning in NLP. Based on pre-trained word vectors, we propose a novel semantic CNN (s-CNN) model for high-level text representation learning using task-generic semantic filters. However, the s-CNN model inevitably introduces surplus semantic filters in order to achieve better applicability and generalization across universal tasks, and these surplus filters may lead to semantic overlap and feature redundancy. To address this issue, we develop the s-CNN Clustered (s-CNNC) models, which use filter clusters instead of individual filters. Interacting with image CNN models, the s-CNNC models can further boost image classification under a multimodal framework (mm-CNN). In addition, we propose to use the external text information selectively in the mm-CNN network to alleviate the noise problem inherent in web text. We validate the effectiveness of the proposed models on six benchmark datasets, and the results show that our approaches achieve remarkable improvements.
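
    The s-CNNC variant replaces individual semantic filters with filter clusters to reduce semantic overlap. As a minimal sketch of that idea, assuming the filters are vectors in the word-embedding space that can be grouped with an off-the-shelf k-means step and represented by their centroids (the clustering method and sizes here are illustrative assumptions, not the paper's procedure), the compaction could look like this.

```python
# Sketch of filter clustering: group a surplus bank of task-generic semantic
# filters with k-means and keep one centroid per cluster as the compact bank.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
semantic_filters = rng.standard_normal((1000, 300))    # surplus filters (illustrative)

def cluster_filters(filters: np.ndarray, num_clusters: int = 128) -> np.ndarray:
    """Group redundant filters and keep one centroid per cluster."""
    km = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit(filters)
    return km.cluster_centers_                           # (num_clusters, embed_dim)

filter_clusters = cluster_filters(semantic_filters)
print(filter_clusters.shape)  # (128, 300) -- compact filter bank for an s-CNNC-style model
```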