
    Fixed-point Factorized Networks

    In recent years, Deep Neural Network (DNN) based methods have achieved remarkable performance on a wide range of tasks and have been among the most powerful and widely used techniques in computer vision. However, DNN-based methods are both computationally intensive and resource-consuming, which hinders their application on embedded systems such as smartphones. To alleviate this problem, we introduce novel Fixed-point Factorized Networks (FFN) for pretrained models to reduce the computational complexity as well as the storage requirements of networks. The resulting networks have weights of only -1, 0, and 1, which largely eliminates the most resource-consuming multiply-accumulate operations (MACs). Extensive experiments on the large-scale ImageNet classification task show that the proposed FFN requires only one-thousandth of the multiply operations while achieving comparable accuracy.
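    As a rough illustration of what weights restricted to -1, 0, and 1 look like, the sketch below ternarizes a weight matrix with a simple magnitude threshold (the 0.7 factor is a common heuristic, assumed here, not taken from the paper); FFN itself obtains its ternary weights through a fixed-point factorization of the pretrained weights, which this toy rule does not reproduce.

```python
import numpy as np

def ternarize(W, thresh_factor=0.7):
    """Map a full-precision weight matrix to {-1, 0, +1} plus one scale.

    Illustrative heuristic only: FFN derives its ternary weights from a
    fixed-point factorization of W, not from this threshold rule.
    """
    delta = thresh_factor * np.mean(np.abs(W))   # sparsity threshold
    T = np.zeros_like(W)
    T[W > delta] = 1.0
    T[W < -delta] = -1.0
    mask = T != 0
    # Least-squares optimal per-layer scale for the surviving entries.
    alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
    return T, alpha

W = np.random.randn(64, 128).astype(np.float32)
T, alpha = ternarize(W)
W_hat = alpha * T   # multiplication-free apart from the single scale
print("relative reconstruction error:",
      np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```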

    Estimating Marginal Hazard Ratios by Simultaneously Using A Set of Propensity Score Models: A Multiply Robust Approach

    The inverse probability weighted Cox model is frequently used to estimate marginal hazard ratios. Its validity requires the crucial condition that the propensity score model is correctly specified. To provide protection against misspecification of the propensity score model, we propose a weighted estimation method rooted in empirical likelihood theory. The proposed estimator is multiply robust in that it is guaranteed to be consistent whenever a set of postulated propensity score models contains a correctly specified model. Our simulation studies demonstrate satisfactory finite-sample performance of the proposed method in terms of consistency and efficiency. We apply the proposed method to compare the risk of postoperative hospitalization between sleeve gastrectomy and Roux-en-Y gastric bypass using data from a large medical claims and billing database. We further extend the development to multi-site studies, enabling each site to postulate multiple site-specific propensity score models.
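    For orientation, here is a minimal sketch of the single-model IPW Cox estimator that the multiply robust method generalizes; the empirical-likelihood weighting over a set of candidate propensity score models is not reproduced. All data, coefficients, and the censoring time are simulated assumptions, and lifelines/scikit-learn stand in for whatever software the authors used.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                      # a single confounder
p_true = 1 / (1 + np.exp(-0.8 * x))         # true propensity
z = rng.binomial(1, p_true)                 # treatment assignment
# Exponential event times whose hazard depends on treatment and confounder.
time = rng.exponential(1 / np.exp(0.5 * z + 0.3 * x))
event = (time < 3.0).astype(int)            # administrative censoring at t = 3
time = np.minimum(time, 3.0)

# Step 1: fit a (single) propensity score model.
ps = LogisticRegression().fit(x.reshape(-1, 1), z).predict_proba(
    x.reshape(-1, 1))[:, 1]
# Step 2: inverse probability of treatment weights.
w = z / ps + (1 - z) / (1 - ps)

# Step 3: weighted Cox model with treatment only -> marginal hazard ratio.
df = pd.DataFrame({"time": time, "event": event, "z": z, "w": w})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event",
        weights_col="w", robust=True)       # robust SEs for weighted fit
print("marginal HR estimate:", np.exp(cph.params_["z"]))
```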

    Total thyroidectomy may be more reasonable as initial surgery in unilateral multifocal papillary thyroid microcarcinoma: a single-center experience

    The ethics statement of our study, approved by the Ethics Committee of the First Hospital of Jilin University. (DOC 58 kb)

    Feature Distilled Tracking

    Feature extraction and representation are among the most important components of fast, accurate, and robust visual tracking. Very deep convolutional neural networks (CNNs) provide effective tools for feature extraction with good generalization ability. However, extracting features with very deep CNN models requires high-performance hardware due to their large computational complexity, which prohibits their use in real-time applications. To alleviate this problem, we aim to obtain small, fast-to-execute shallow models via model compression for visual tracking. Specifically, we propose a small feature distilled network (FDN) for tracking that imitates the intermediate representations of a much deeper network. The FDN extracts rich visual features at higher speed than the original deeper network. To further speed up inference, we introduce a shift-and-stitch method that reduces arithmetic operations while keeping the spatial resolution of the distilled feature maps unchanged. Finally, a scale-adaptive discriminative correlation filter is learned on the distilled features to handle scale variation of the target. Comprehensive experimental results on object tracking benchmark datasets show that the proposed approach achieves a 5x speed-up with performance competitive with state-of-the-art deep trackers.
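    A minimal sketch of the feature-imitation step is given below, assuming toy stand-in networks: the student (plus a 1x1 regressor to match channel widths) is trained to reproduce a frozen teacher's intermediate feature maps with an L2 loss. The actual FDN architecture, the shift-and-stitch trick, and the correlation-filter tracker are not reproduced.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a frozen deep teacher and a small student backbone.
teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())
student = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
# 1x1 "regressor" maps student features to the teacher's channel width.
regressor = nn.Conv2d(32, 128, 1)

for p in teacher.parameters():      # the teacher stays fixed during distillation
    p.requires_grad = False

opt = torch.optim.SGD(list(student.parameters()) + list(regressor.parameters()),
                      lr=1e-2)
mse = nn.MSELoss()

x = torch.randn(8, 3, 64, 64)       # dummy image batch
with torch.no_grad():
    t_feat = teacher(x)             # intermediate representation to imitate
s_feat = regressor(student(x))
loss = mse(s_feat, t_feat)          # feature-imitation (distillation) loss
opt.zero_grad()
loss.backward()
opt.step()
print("distillation loss:", loss.item())
```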

    Portal Vein Thrombosis in Liver Cirrhosis

    In liver cirrhosis, portal vein thrombosis (PVT), defined as thrombosis that occurs within the main portal vein and the intrahepatic portal branches, is one of the most common complications. The high incidence of PVT in the setting of liver cirrhosis is mainly due to a hypercoagulable state and altered dynamics of blood flow in the portal vein. The clinical manifestations of PVT vary among patients, so its diagnosis depends mainly on imaging examinations such as ultrasound, computed tomography, and magnetic resonance imaging. The overall goals of treatment for PVT are to reduce its risk factors, thereby preventing further expansion of the thrombus and maintaining portal patency, and to prevent and treat its symptoms with anticoagulants, local thrombolysis, transjugular intrahepatic portosystemic shunt, and/or surgery. In the future, with progress in vascular imaging and innovations in clinical antithrombotic drugs, PVT could be prevented and treated effectively.

    One Fits All: Power General Time Series Analysis by Pretrained LM

    Although we have witnessed great success of pre-trained models in natural language processing (NLP) and computer vision (CV), limited progress has been made in general time series analysis. Unlike NLP and CV, where a unified model can be used to perform different tasks, specially designed approaches still dominate each time series analysis task, such as classification, anomaly detection, forecasting, and few-shot learning. The main challenge blocking the development of pre-trained models for time series analysis is the lack of a large amount of training data. In this work, we address this challenge by leveraging language or CV models, pre-trained on billions of tokens, for time series analysis. Specifically, we refrain from altering the self-attention and feedforward layers of the residual blocks in the pre-trained language or image model. This model, known as the Frozen Pretrained Transformer (FPT), is evaluated by fine-tuning on all major types of time series tasks. Our results demonstrate that models pre-trained on natural language or images can deliver comparable or state-of-the-art performance on all main time series analysis tasks. We also find, both theoretically and empirically, that the self-attention module behaves similarly to principal component analysis (PCA), an observation that helps explain how the transformer bridges the domain gap and that is a crucial step towards understanding the universality of pre-trained transformers. The code is publicly available at https://github.com/DAMO-DI-ML/One_Fits_All. Comment: NeurIPS 2023 Spotlight.
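    A minimal sketch of the frozen-backbone idea, assuming GPT-2 via Hugging Face transformers: all parameters are frozen except (as an assumption about the recipe) the layer norms and positional embeddings, and small trainable projections map time series patches in and out of the model. The patch length and all training details are placeholders; the linked repository contains the authors' actual implementation.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model

gpt2 = GPT2Model.from_pretrained("gpt2")    # pretrained language backbone

# Freeze everything, then re-enable layer norms ("ln") and positional
# embeddings ("wpe"); which parameters stay trainable is our assumption,
# not a verbatim reproduction of the paper's code.
for name, p in gpt2.named_parameters():
    p.requires_grad = ("ln" in name) or ("wpe" in name)

patch_len, n_hidden = 16, gpt2.config.n_embd
in_proj = nn.Linear(patch_len, n_hidden)    # trainable input projection
out_proj = nn.Linear(n_hidden, patch_len)   # trainable output projection

# Dummy batch: 4 series, each split into 24 patches of length 16.
series = torch.randn(4, 24, patch_len)
h = gpt2(inputs_embeds=in_proj(series)).last_hidden_state
y = out_proj(h)                             # e.g. next-patch forecasts
print(y.shape)                              # torch.Size([4, 24, 16])
```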

    One Fits All: Universal Time Series Analysis by Pretrained LM and Specially Designed Adaptors

    Despite the impressive achievements of pre-trained models in natural language processing (NLP) and computer vision (CV), progress in time series analysis has been limited. In contrast to NLP and CV, where a single model can handle various tasks, time series analysis still relies heavily on task-specific methods for activities such as classification, anomaly detection, forecasting, and few-shot learning. The primary obstacle to developing a pre-trained model for time series analysis is the scarcity of training data. In our research, we overcome this obstacle by utilizing pre-trained language or CV models, trained on billions of data points, and applying them to time series analysis. We assess the effectiveness of the pre-trained transformer model in two ways. First, we maintain the original structure of the self-attention and feedforward layers in the residual blocks of the pre-trained language or image model, using this Frozen Pre-trained Transformer (FPT) for time series analysis with the addition of projection matrices for input and output. Additionally, we introduce four unique adapters, designed specifically for downstream tasks based on the pre-trained model, including forecasting and anomaly detection. These adapters are further enhanced with efficient parameter tuning, resulting in superior performance compared to all state-of-the-art methods. Our comprehensive experimental studies reveal that (a) the simple FPT achieves top-tier performance across various time series analysis tasks; and (b) fine-tuning the FPT with the custom-designed adapters can further elevate its performance, outshining specialized task-specific models. Comment: this article draws heavily from arXiv:2302.1193
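    As a sketch of what an adapter on top of a frozen backbone can look like, the block below implements a generic residual bottleneck adapter, a common design assumed here for illustration; the paper's four task-specific adapters and their efficient parameter tuning scheme are not reproduced.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter inserted after a frozen transformer block.

    A generic, hypothetical design: the paper's four task-specific adapters
    (e.g. for forecasting and anomaly detection) are not specified here.
    """
    def __init__(self, d_model: int, d_bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(d_bottleneck, d_model)
        nn.init.zeros_(self.up.weight)      # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

adapter = BottleneckAdapter(d_model=768)
h = torch.randn(4, 24, 768)                 # frozen-block output (dummy)
print(adapter(h).shape)                     # torch.Size([4, 24, 768])
# Only the adapter's few parameters are trained; the backbone stays frozen.
print(sum(p.numel() for p in adapter.parameters()), "trainable parameters")
```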