
    CT-SRCNN: Cascade Trained and Trimmed Deep Convolutional Neural Networks for Image Super Resolution

    We propose methodologies for training highly accurate and efficient deep convolutional neural networks (CNNs) for image super resolution (SR). First, a cascade training approach to deep learning improves the accuracy of the network while gradually increasing the number of network layers. Next, we explore how to improve SR efficiency by making the network slimmer, and propose two methodologies: one-shot trimming and cascade trimming. With cascade trimming, the network's size is gradually reduced layer by layer, without significant loss of discriminative ability. Experiments on benchmark image datasets show that our proposed SR network achieves state-of-the-art super resolution accuracy while being more than 4 times faster than existing deep super resolution networks.
    Comment: Accepted to IEEE Winter Conf. on Applications of Computer Vision (WACV) 2018, Lake Tahoe, US
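The cascade-trimming idea of shrinking a network layer by layer can be illustrated with a minimal numpy sketch, assuming the common heuristic of dropping the output filters with the smallest L2 norms; this is an illustration of the general technique, not the paper's implementation, and the channel bookkeeping between consecutive layers is omitted for brevity.

```python
import numpy as np

def filter_norms(w):
    # w: conv weights of shape (out_channels, in_channels, k, k).
    return np.sqrt((w ** 2).reshape(w.shape[0], -1).sum(axis=1))

def trim_layer(w, keep):
    # Keep the `keep` output filters with the largest L2 norms.
    idx = np.argsort(filter_norms(w))[::-1][:keep]
    return w[np.sort(idx)]

def cascade_trim(layers, keep_per_layer, retrain=None):
    # Trim one layer at a time; a real cascade-trimming pipeline
    # would fine-tune (retrain) between steps to recover accuracy,
    # whereas one-shot trimming would cut all layers at once.
    trimmed = list(layers)
    for i, keep in enumerate(keep_per_layer):
        trimmed[i] = trim_layer(trimmed[i], keep)
        if retrain is not None:
            trimmed = retrain(trimmed)
    return trimmed

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 3, 3, 3)),
          rng.standard_normal((32, 64, 3, 3))]
slim = cascade_trim(layers, keep_per_layer=[48, 24])
print([w.shape[0] for w in slim])  # [48, 24]
```

Trimming output filters of one layer would, in a full implementation, also remove the corresponding input channels of the next layer.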

    A note on maximal operators for the Schr\"{o}dinger equation on $\mathbb{T}^1$.

    Motivated by the study of the maximal operator for the Schr\"{o}dinger equation on the one-dimensional torus $\mathbb{T}^1$, it is conjectured that for any complex sequence $\{b_n\}_{n=1}^N$, $$\left\| \sup_{t\in [0,N^2]} \left|\sum_{n=1}^N b_n e\left(x\frac{n}{N} + t\frac{n^2}{N^2}\right)\right| \right\|_{L^4([0,N])} \leq C_\epsilon N^{\epsilon} N^{\frac{1}{2}} \|b_n\|_{\ell^2}.$$ In this note, we show that if we replace the sequence $\{\frac{n^2}{N^2}\}_{n=1}^N$ by an arbitrary sequence $\{a_n\}_{n=1}^N$ satisfying only some convexity properties, then $$\left\| \sup_{t\in [0,N^2]} \left|\sum_{n=1}^N b_n e\left(x\frac{n}{N} + ta_n\right)\right| \right\|_{L^4([0,N])} \leq C_\epsilon N^\epsilon N^{\frac{7}{12}} \|b_n\|_{\ell^2}.$$ We further show that this bound is sharp up to a $C_\epsilon N^\epsilon$ factor.
    Comment: 13 pages
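The quantity in the conjecture can be explored numerically. The sketch below, a toy discretization and not anything from the note itself, approximates $\| \sup_t |\sum_n b_n e(xn/N + tn^2/N^2)| \|_{L^4([0,N])}$ with $e(z)=e^{2\pi i z}$ on finite grids; the grid sizes `nx`, `nt` are illustrative choices.

```python
import numpy as np

def maximal_l4_norm(b, nx=64, nt=64):
    # Grid approximation of || sup_{t in [0,N^2]} |S(x,t)| ||_{L^4([0,N])}
    # where S(x,t) = sum_n b_n exp(2*pi*i*(x*n/N + t*n^2/N^2)).
    N = len(b)
    n = np.arange(1, N + 1)
    x = np.linspace(0.0, N, nx)
    t = np.linspace(0.0, N ** 2, nt)
    phase = x[:, None, None] * n / N + t[None, :, None] * n ** 2 / N ** 2
    S = np.abs((b * np.exp(2j * np.pi * phase)).sum(axis=2))  # (nx, nt)
    sup_t = S.max(axis=1)                  # sup over the t-grid, per x
    # Riemann approximation of the L^4 norm over x in [0, N].
    return ((sup_t ** 4).mean() * N) ** 0.25

N = 16
b = np.ones(N) / np.sqrt(N)  # normalized so ||b||_{l^2} = 1
print(maximal_l4_norm(b), N ** 0.5)
```

Comparing the output against $N^{1/2}$ (the conjectured growth rate, up to $N^\epsilon$) for increasing $N$ gives a quick sanity check on the scaling.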

    TinyReptile: TinyML with Federated Meta-Learning

    Tiny machine learning (TinyML) is a rapidly growing field that aims to democratize machine learning (ML) for resource-constrained microcontrollers (MCUs). Given the pervasiveness of these tiny devices, it is natural to ask whether TinyML applications can benefit from aggregating their knowledge. Federated learning (FL) enables decentralized agents to jointly learn a global model without sharing sensitive local data. However, a common global model may not work for all devices due to the complexity of the actual deployment environment and the heterogeneity of the data available on each device. In addition, TinyML hardware imposes significant computational and communication constraints that traditional ML fails to address. Considering these challenges, we propose TinyReptile, a simple but efficient algorithm inspired by meta-learning and online learning, to collaboratively learn across tiny devices a solid initialization for a neural network (NN) that can be quickly adapted to a new device with respect to its data. We demonstrate TinyReptile on a Raspberry Pi 4 and a Cortex-M4 MCU with only 256 KB of RAM. Evaluations on various TinyML use cases confirm resource and training-time savings of at least a factor of two compared with baseline algorithms of comparable performance.
    Comment: Accepted by The International Joint Conference on Neural Networks (IJCNN) 202
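The Reptile meta-learning rule that inspires TinyReptile moves a shared initialization toward each task-adapted solution: $\theta \leftarrow \theta + \varepsilon(\theta_{\text{task}} - \theta)$. A minimal numpy sketch on toy 1-D regression tasks, purely illustrative of that update and not the TinyReptile algorithm or its federated deployment:

```python
import numpy as np

def sgd_task_adapt(theta, task, inner_steps=32, lr=0.02):
    # A few SGD steps on one device's local loss (fit y = a*x + b).
    a, b = task
    x = np.linspace(-1.0, 1.0, 16)
    y = a * x + b
    for _ in range(inner_steps):
        pred = theta[0] * x + theta[1]
        grad = np.array([(2 * (pred - y) * x).mean(),
                         (2 * (pred - y)).mean()])
        theta = theta - lr * grad
    return theta

def reptile(theta, tasks, epochs=100, outer_lr=0.5):
    # Reptile outer loop: nudge the shared initialization toward
    # each task-adapted parameter vector.
    for _ in range(epochs):
        for task in tasks:
            adapted = sgd_task_adapt(theta.copy(), task)
            theta = theta + outer_lr * (adapted - theta)
    return theta

# Tasks share slope 1.0 but differ in intercept, mimicking
# related-but-heterogeneous per-device data.
tasks = [(1.0, 0.5), (1.0, -0.5), (1.0, 0.0)]
theta0 = reptile(np.zeros(2), tasks)
print(theta0)
```

The learned initialization captures the structure shared across tasks (slope near 1, intercept near the task average), so a few local SGD steps suffice to adapt it to a new device.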

    A Semi-Parametric Model Simultaneously Handling Unmeasured Confounding, Informative Cluster Size, and Truncation by Death with a Data Application in Medicare Claims

    Nearly 300,000 older adults experience a hip fracture every year, the majority of which occur following a fall. Unfortunately, recovery after fall-related trauma such as hip fracture is poor, and older adults diagnosed with Alzheimer's Disease and Related Dementias (ADRD) spend a particularly long time in hospitals or rehabilitation facilities during the post-operative recuperation period. Because older adults value functional recovery and spending time at home rather than in facilities as key outcomes after hospitalization, identifying factors that influence days spent at home after hospitalization is imperative. While several individual-level factors have been identified, characteristics of the treating hospital have recently been identified as contributors as well. However, few methodologically rigorous approaches are available to overcome potential sources of bias such as hospital-level unmeasured confounders, informative hospital size, and loss to follow-up due to death. This article develops a tool, equipped with unsupervised learning, that simultaneously handles statistical complexities often encountered in health services research, especially when using large administrative claims databases. The proposed estimator has a closed form and thus requires only a light computational load in a large-scale study. We further derive its asymptotic properties, which can be used for statistical inference in practice. Extensive simulation studies demonstrate the superiority of the proposed estimator over existing estimators.
    Comment: Contact Emails: [email protected]
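One of the biases the abstract names, informative cluster (hospital) size, can be illustrated with a standard textbook remedy: weight each observation by the inverse of its cluster's size so that every hospital contributes equally. This sketch is a generic illustration of that phenomenon, not the paper's semi-parametric estimator; the data are invented.

```python
import numpy as np

def observation_mean(values):
    # Naive observation-level mean: large clusters dominate, which is
    # biased when cluster size is correlated with outcomes.
    return float(np.mean(values))

def cluster_weighted_mean(values, cluster_ids):
    # Weight each observation by 1 / (its cluster's size), so every
    # cluster contributes equally regardless of patient count.
    values = np.asarray(values, dtype=float)
    cluster_ids = np.asarray(cluster_ids)
    sizes = {c: int(np.sum(cluster_ids == c)) for c in np.unique(cluster_ids)}
    w = np.array([1.0 / sizes[c] for c in cluster_ids])
    return float(np.sum(w * values) / np.sum(w))

# Toy data: the large hospital has systematically longer stays.
values = [10, 10, 10, 10, 2]            # 4 patients in hospital A, 1 in B
clusters = ["A", "A", "A", "A", "B"]
print(observation_mean(values))           # 8.4
print(cluster_weighted_mean(values, clusters))  # 6.0
```

The gap between the two means (8.4 vs. 6.0, the average of the per-hospital means 10 and 2) is exactly the distortion that informative cluster size introduces when hospitals are weighted by volume.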