3,611 research outputs found

    The relationship between exercise and cognition in diabetes mellitus

    The increasing prevalence and incidence of type 2 diabetes mellitus (T2D) have been referred to as a global epidemic. This thesis first aimed to synthesise, via two systematic reviews, the evidence base in both animal models and human studies that exercise exposure is related to better cognition in diabetes. Second, we investigated the efficacy of a novel form of exercise, POWER training (high-velocity PRT), for cognitive function in this cohort. We hypothesised that 12 months of high-intensity POWER training would significantly improve cognitive function in a cohort of older adults with T2D and multiple co-morbidities. The GREAT2DO study was the first RCT to evaluate the effects of a one-year POWER training intervention, compared with a SHAM exercise control condition, on insulin resistance, HbA1c, body composition, physical performance, inflammation, adipokines, cardiovascular health status, and quality of life, and to explore relationships between these domains in response to the intervention in this cohort. In this GREAT2DO cognitive sub-study, we assessed global cognition and several cognitive domains at baseline in relation to physical and psychological health, fitness, and functional performance, as well as changes over time in cognitive outcomes in response to the intervention. We found that cognitive function improved in both the POWER and sham exercise groups over time, although unexpectedly without a group effect. However, we showed for the first time that increases in skeletal muscle mass, total muscle strength, total static balance time, and total adiponectin levels were significantly and directly related to improvements in cognitive function, and that these relationships existed only in the POWER group, as hypothesised. Further study is needed, in particular exploration of the persistence, clinical relevance, and mechanisms underlying attenuation of the rate of cognitive decline and incident dementia in this high-risk cohort.

    SCOPE: Scalable Composite Optimization for Learning on Spark

    Many machine learning models, such as logistic regression (LR) and support vector machines (SVM), can be formulated as composite optimization problems. Recently, many distributed stochastic optimization (DSO) methods have been proposed to solve large-scale composite optimization problems, and they have shown better performance than traditional batch methods. However, most of these DSO methods are not scalable enough. In this paper, we propose a novel DSO method, called Scalable Composite OPtimization for lEarning (SCOPE), and implement it on the fault-tolerant distributed platform Spark. SCOPE is both computation-efficient and communication-efficient. Theoretical analysis shows that SCOPE converges at a linear rate when the objective function is convex. Furthermore, empirical results on real datasets show that SCOPE can outperform other state-of-the-art distributed learning methods on Spark, including both batch learning methods and DSO methods.
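A composite optimization problem has the form min_w f(w) + g(w), with f a smooth data-fitting loss and g a possibly non-smooth regularizer. The abstract does not give SCOPE's algorithm, so as a minimal single-machine illustration of the problem class (not the distributed method itself), here is proximal stochastic gradient descent on L1-regularized logistic regression; all names and hyperparameters are illustrative:

```python
import numpy as np

def prox_l1(w, t):
    """Soft-thresholding: proximal operator of t * ||w||_1."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def proximal_sgd(X, y, lam=0.01, lr=0.1, epochs=50, seed=0):
    """Minimise logistic loss + lam * ||w||_1 with proximal stochastic gradient steps."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w)
            grad = -y[i] * X[i] / (1.0 + np.exp(margin))  # gradient of smooth part f
            w = prox_l1(w - lr * grad, lr * lam)          # proximal step handles non-smooth g
    return w

# Toy data: two Gaussian classes with labels in {-1, +1}.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w = proximal_sgd(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

Distributed methods like SCOPE parallelize this kind of update across workers while controlling communication; the proximal step is what makes the non-smooth regularizer tractable.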

    Measuring the similarity of PML documents with RFID-based sensors

    The Electronic Product Code (EPC) Network is an important part of the Internet of Things. The Physical Mark-Up Language (PML) is used to represent and describe data related to objects in the EPC Network. The PML documents that components exchange in an EPC Network system are XML documents based on the PML Core schema. To manage the huge number of PML documents for tags captured by Radio Frequency Identification (RFID) readers, it is necessary to develop high-performance technology for filtering and integrating these tag data. In this paper, we therefore propose an approach for measuring the similarity of PML documents from several sensors based on a Bayesian network. With respect to the features of PML, before measuring the similarity we first remove redundant data, retaining only the EPC information. On this basis, a Bayesian network model derived from the structure of the PML documents being compared is constructed. Comment: International Journal of Ad Hoc and Ubiquitous Computing
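The abstract describes deriving a model from the structure of the compared XML documents. The Bayesian-network scoring itself is not specified, so as a simplified stand-in the sketch below compares only element structure: it collects each document's set of tag paths and computes their Jaccard similarity. The sample documents and the similarity function are assumptions for illustration, not the paper's method:

```python
import xml.etree.ElementTree as ET

def element_paths(xml_text):
    """Collect the set of tag paths in an XML document (structure only, values ignored)."""
    root = ET.fromstring(xml_text)
    paths = set()
    def walk(node, prefix):
        path = f"{prefix}/{node.tag}"
        paths.add(path)
        for child in node:
            walk(child, path)
    walk(root, "")
    return paths

def structural_similarity(doc_a, doc_b):
    """Jaccard similarity over element paths -- a crude stand-in for a learned structural score."""
    a, b = element_paths(doc_a), element_paths(doc_b)
    return len(a & b) / len(a | b)

# Hypothetical PML-like fragments; real PML Core documents are richer.
doc1 = "<pml><tag><epc>urn:epc:1</epc><time>t1</time></tag></pml>"
doc2 = "<pml><tag><epc>urn:epc:2</epc></tag></pml>"
sim = structural_similarity(doc1, doc2)
```

Because only structure is compared, the differing EPC values do not affect the score; the extra `<time>` element in the first document does.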

    Distill the Image to Nowhere: Inversion Knowledge Distillation for Multimodal Machine Translation

    Past works on multimodal machine translation (MMT) improve on the bilingual setup by incorporating additional aligned visual information. However, the image-must requirement of multimodal datasets largely hinders MMT's development, namely that it demands aligned [image, source text, target text] triples. This limitation is especially troublesome during the inference phase, when an aligned image is not provided, as in the normal NMT setup. Thus, in this work, we introduce IKD-MMT, a novel MMT framework that supports an image-free inference phase via an inversion knowledge distillation scheme. In particular, a multimodal feature generator is trained with a knowledge distillation module to generate the multimodal feature directly from (only) the source text as input. While a few prior works have entertained the possibility of supporting image-free inference for machine translation, their performance has yet to rival image-must translation. In our experiments, we identify our method as the first image-free approach to comprehensively rival or even surpass (almost) all image-must frameworks, achieving state-of-the-art results on the often-used Multi30k benchmark. Our code and data are available at: https://github.com/pengr/IKD-mmt/tree/master. Comment: Long paper accepted by EMNLP 2022 main conference
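The core idea of the feature-generator distillation can be sketched in miniature: at training time, text features are paired with teacher image features; a student generator is trained to regress the image feature from the text feature alone, so that at inference time no image is needed. The linear student, dimensions, and synthetic data below are assumptions, far simpler than the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d_text, d_img, n = 16, 8, 200

# Training-time data: text features with aligned teacher image features.
text_feats = rng.normal(size=(n, d_text))
true_map = rng.normal(size=(d_text, d_img)) / np.sqrt(d_text)
image_feats = text_feats @ true_map + 0.05 * rng.normal(size=(n, d_img))

# Student "multimodal feature generator": predicts image features from text alone.
W = np.zeros((d_text, d_img))
lr = 0.05
for _ in range(300):
    pred = text_feats @ W
    grad = text_feats.T @ (pred - image_feats) / n  # gradient of mean-squared distillation loss
    W -= lr * grad

# Inference time: no image available; hallucinate the visual feature from text.
test_text = rng.normal(size=(1, d_text))
visual_feature = test_text @ W
mse = np.mean((text_feats @ W - image_feats) ** 2)
```

The generated feature would then be fed to the translation decoder in place of the real image encoding, which is what makes the inference phase image-free.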

    Better Sign Language Translation with Monolingual Data

    Sign language translation (SLT) systems, which are often decomposed into video-to-gloss (V2G) recognition and gloss-to-text (G2T) translation through a pivot gloss, rely heavily on the availability of large-scale parallel G2T pairs. However, the manual annotation of the pivot gloss, a sequence of transcribed written-language words in the order in which they are signed, further exacerbates the scarcity of data for SLT. To address this issue, this paper proposes a simple and efficient rule-based transformation method that automatically transcribes large-scale target-side monolingual data into pseudo glosses to enhance SLT. Empirical results show that the proposed approach significantly improves SLT performance, achieving state-of-the-art results on the two SLT benchmark datasets PHOENIX-Weather 2014T and ASLG-PC12. Our code has been released at: https://github.com/pengr/Mono_SLT
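A rule-based text-to-pseudo-gloss transformation of this kind can be illustrated with a toy rule set. The specific rules below (strip punctuation, drop function words, uppercase content words) are assumptions for illustration; the paper's actual rules are not given in the abstract:

```python
import re

# Hypothetical function-word list; real rule sets are language-specific and larger.
STOPWORDS = {"a", "an", "the", "is", "are", "was", "were", "of", "to"}

def text_to_pseudo_gloss(sentence):
    """Toy rules: tokenize, drop punctuation and function words, uppercase the rest."""
    tokens = re.findall(r"[a-zA-Z']+", sentence.lower())
    content = [t for t in tokens if t not in STOPWORDS]
    return " ".join(t.upper() for t in content)

gloss = text_to_pseudo_gloss("The weather is cold in the north.")
# -> "WEATHER COLD IN NORTH"
```

Pairing each monolingual target sentence with its generated pseudo gloss yields synthetic G2T training pairs at scale, which is the data-augmentation effect the abstract describes.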