
    Comparison of gemcitabine/carboplatin versus paclitaxel/cisplatin for the management of non-small cell lung cancer

    Purpose: To determine the comparative efficacy and toxicity of gemcitabine/carboplatin and paclitaxel/cisplatin in patients with completely resected stage IIa-IIIa non-small cell lung cancer (NSCLC). Methods: Sixty eligible NSCLC patients treated in Funan County People's Hospital were enrolled and randomized into two groups (n = 30 each). One group (CG group) received the combination of gemcitabine and carboplatin, while the other (CP group) received the combination of cisplatin and paclitaxel. Efficacy was assessed based on 2-year progression-free survival, while adverse reactions were recorded to assess the toxicity of the chemotherapy treatments. Results: After follow-up, no marked difference was found in 2-year relapse-free survival between the two groups, which had similar baseline clinical characteristics (60% in the CG group vs. 56.67% in the CP group, p = 0.826). Specifically, no significant difference was found between the two groups with regard to the incidence of local metastases, distant metastases, or brain metastases within 2 years, and there were no treatment-related deaths. The CG group was more likely to develop leukopenia (93.33% vs. 63.33% for the CP group, p = 0.04), but no significant difference was observed for other adverse effects such as anemia, vomiting, and nausea. Conclusion: This study shows that adjuvant treatment with carboplatin and gemcitabine produces the same therapeutic efficacy as cisplatin and paclitaxel, but exhibits higher toxicity than the latter.
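
    The relapse-free survival comparison above (60 % vs. 56.67 % of 30 patients per arm, p = 0.826) can be illustrated with a simple two-proportion test on the implied counts (18/30 vs. 17/30). The sketch below is only a rough check under that assumption; the study's own p-value may come from a different test, such as a log-rank test on the survival curves.

        # Hedged sketch: compare 2-year relapse-free proportions between the two arms.
        # Counts are derived from the reported percentages (60% and 56.67% of n = 30 each);
        # this is not necessarily the exact test used in the study.
        from scipy.stats import chi2_contingency, fisher_exact

        table = [[18, 12],   # CG group: relapse-free, relapsed
                 [17, 13]]   # CP group: relapse-free, relapsed

        chi2, p_chi2, dof, expected = chi2_contingency(table)
        odds_ratio, p_fisher = fisher_exact(table)
        print(f"chi-square p = {p_chi2:.3f}, Fisher exact p = {p_fisher:.3f}")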

    Visit-to-visit HbA1c variability is associated with cardiovascular disease and microvascular complications in patients with newly diagnosed type 2 diabetes

    OBJECTIVE To investigate the association between visit-to-visit HbA1c variability and cardiovascular events and microvascular complications in patients with newly diagnosed type 2 diabetes. RESEARCH DESIGN AND METHODS This retrospective cohort study analyzed patients from Tayside and Fife in the Scottish Care Information–Diabetes Collaboration (SCI-DC) who were observable from the diagnosis of diabetes and had at least five HbA1c measurements before the outcomes were evaluated. We used the previously reported HbA1c variability score (HVS), calculated as the percentage of changes in HbA1c >0.5% (5.5 mmol/mol) among all HbA1c measurements within an individual. The association between HVS and 10 outcomes was assessed using Cox proportional hazards models. RESULTS We included 13,111–19,883 patients in the analyses of each outcome. Patients with HVS >60% had elevated risks of all outcomes compared with the lowest quintile (for example, hazard ratios and 95% CIs [HVS >80 to ≤100 vs. HVS ≥0 to ≤20]: 2.38 [1.61–3.53] for major adverse cardiovascular events, 2.4 [1.72–3.33] for all-cause mortality, 2.4 [1.13–5.11] for atherosclerotic cardiovascular death, 2.63 [1.81–3.84] for coronary artery disease, 2.04 [1.12–3.73] for ischemic stroke, 3.23 [1.76–5.93] for heart failure, 7.4 [3.84–14.27] for diabetic retinopathy, 3.07 [2.23–4.22] for diabetic peripheral neuropathy, 5.24 [2.61–10.49] for diabetic foot ulcer, and 3.49 [2.47–4.95] for new-onset chronic kidney disease). Four sensitivity analyses, including adjustment for time-weighted average HbA1c, confirmed the robustness of the results. CONCLUSIONS Our study shows that higher HbA1c variability is associated with increased risks of all-cause mortality, cardiovascular events, and microvascular complications of diabetes, independently of high HbA1c.
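
    The HbA1c variability score used above is easy to reproduce: it is the percentage of visit-to-visit HbA1c changes larger than 0.5 % (5.5 mmol/mol) within one patient's series. A minimal sketch follows; it assumes chronologically ordered values in percent units and uses the number of successive changes as the denominator, which is one reading of the definition.

        def hba1c_variability_score(values, threshold=0.5):
            """HbA1c variability score (HVS): percentage of visit-to-visit changes
            in HbA1c exceeding `threshold` (0.5% ~ 5.5 mmol/mol). `values` must be
            in chronological order; the study required at least five measurements."""
            if len(values) < 5:
                raise ValueError("HVS was computed on patients with >= 5 HbA1c measurements")
            changes = [abs(b - a) for a, b in zip(values, values[1:])]
            return 100.0 * sum(c > threshold for c in changes) / len(changes)

        # Example: three of the five successive changes exceed 0.5%, so HVS = 60%.
        print(hba1c_variability_score([7.0, 7.8, 7.6, 8.4, 8.3, 7.5]))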

    TENSILE: A Tensor granularity dynamic GPU memory scheduling method towards multiple dynamic workloads system

    Recently, deep learning has been an area of intense research. However, as a computing-intensive task, deep learning relies heavily on GPU memory, which is usually scarce and prohibitively expensive. Although extensive work has been proposed for dynamic GPU memory management, it is hard to apply to systems with multiple dynamic workloads, such as in-database machine learning systems. In this paper, we demonstrate TENSILE, a method that manages GPU memory at tensor granularity to reduce the GPU memory peak while accounting for multiple dynamic workloads. TENSILE tackles the cold-start and cross-iteration scheduling problems left open by previous work. We implemented TENSILE on a deep learning framework built by ourselves and evaluated its performance. The experimental results show that TENSILE saves more GPU memory with less extra time overhead than prior work in both single and multiple dynamic workload scenarios.
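
    "Tensor granularity" above means that scheduling decisions are made per tensor, based on when each tensor is next needed, so that tensors can be offloaded to host memory and brought back before reuse to lower the peak footprint. The sketch below illustrates that general idea only; it is not TENSILE's actual algorithm, and it ignores prefetching and transfer cost.

        # Hedged sketch of tensor-granularity scheduling: simulate per-step residency
        # and offload the resident tensor whose next use is farthest away whenever the
        # footprint exceeds a memory budget.
        def plan_offloads(tensor_sizes, uses, budget):
            """tensor_sizes: {name: bytes}; uses: {name: sorted step indices};
            budget: max GPU bytes. Returns a list of (step, tensor) offload decisions."""
            resident, used_bytes, offloads = set(), 0, []
            last_step = max(s for steps in uses.values() for s in steps)
            for step in range(last_step + 1):
                for name, steps in uses.items():
                    if step in steps and name not in resident:
                        resident.add(name)
                        used_bytes += tensor_sizes[name]
                while used_bytes > budget:
                    # A real scheduler must also keep tensors needed at the current step
                    # and prefetch offloaded tensors back before their next use.
                    victim = max(resident,
                                 key=lambda n: min([s for s in uses[n] if s > step],
                                                   default=float("inf")))
                    resident.remove(victim)
                    used_bytes -= tensor_sizes[victim]
                    offloads.append((step, victim))
            return offloads

        # Toy trace: tensor "b" is offloaded at step 2 because it is never used again.
        print(plan_offloads({"a": 4, "b": 6, "c": 6}, {"a": [0, 3], "b": [1], "c": [2, 3]}, budget=10))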

    GaitFormer: Revisiting Intrinsic Periodicity for Gait Recognition

    Gait recognition aims to distinguish different walking patterns by analyzing video-level human silhouettes, rather than relying on appearance information. Previous research on gait recognition has primarily focused on extracting local or global spatial-temporal representations, while overlooking the intrinsic periodic features of gait sequences, which, when fully utilized, can significantly enhance performance. In this work, we propose a plug-and-play strategy, called Temporal Periodic Alignment (TPA), which leverages the periodic nature and fine-grained temporal dependencies of gait patterns. The TPA strategy comprises two key components. The first component is Adaptive Fourier-transform Position Encoding (AFPE), which adaptively converts features and discrete-time signals into embeddings that are sensitive to periodic walking patterns. The second component is the Temporal Aggregation Module (TAM), which separates embeddings into trend and seasonal components, and extracts meaningful temporal correlations to identify primary components while filtering out random noise. We present a simple and effective baseline method for gait recognition based on the TPA strategy. Extensive experiments conducted on three popular public datasets (CASIA-B, OU-MVLP, and GREW) demonstrate that our proposed method achieves state-of-the-art performance on multiple benchmarks.
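
    The trend/seasonal separation performed by TAM can be pictured with a standard decomposition: a moving average along time captures the slowly varying trend, and the residual carries the periodic (seasonal) part of the walking pattern. The snippet below is only an illustrative sketch of such a decomposition, not the paper's exact module.

        # Illustrative trend/seasonal decomposition over a temporal feature sequence,
        # in the spirit of the Temporal Aggregation Module; the actual TAM may differ.
        import torch
        import torch.nn.functional as F

        def decompose(x, kernel_size=5):
            """x: (batch, time, channels). Returns (trend, seasonal) of the same shape."""
            pad = kernel_size // 2
            xt = x.transpose(1, 2)                              # (batch, channels, time)
            # Moving average along time -> slowly varying trend.
            trend = F.avg_pool1d(F.pad(xt, (pad, pad), mode="replicate"),
                                 kernel_size, stride=1).transpose(1, 2)
            seasonal = x - trend                                # residual carries periodicity
            return trend, seasonal

        # Toy periodic signal with noise, shaped (batch=1, time=30, channels=1).
        x = torch.sin(torch.linspace(0, 12.56, 30)).view(1, 30, 1) + 0.1 * torch.randn(1, 30, 1)
        trend, seasonal = decompose(x)
        print(trend.shape, seasonal.shape)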

    Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks

    In this paper, we propose a novel layer-adaptive weight-pruning approach for Deep Neural Networks (DNNs) that minimizes output distortion while adhering to a target pruning-ratio constraint. Our approach takes into account the collective influence of all layers to design a layer-adaptive pruning scheme. We discover and exploit an important additivity property of the output distortion caused by pruning weights across multiple layers. This property enables us to formulate pruning as a combinatorial optimization problem and solve it efficiently through dynamic programming. By decomposing the problem into sub-problems, we achieve linear time complexity, making our optimization algorithm fast and feasible to run on CPUs. Our extensive experiments demonstrate the superiority of our approach over existing methods on the ImageNet and CIFAR-10 datasets. On CIFAR-10, our method outperforms others by up to 1.0% for ResNet-32, 0.5% for VGG-16, and 0.7% for DenseNet-121 in top-1 accuracy. On ImageNet, we achieve up to 4.7% and 4.6% higher top-1 accuracy than other methods for VGG-16 and ResNet-50, respectively. These results highlight the effectiveness and practicality of our approach for enhancing DNN performance through layer-adaptive weight pruning. Code will be available at https://github.com/Akimoto-Cris/RD_VIT_PRUNE
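
    Because the output distortion from pruning different layers is (approximately) additive, choosing how many weights to prune in each layer under a global budget becomes a knapsack-style problem that dynamic programming can solve. The sketch below assumes a pre-computed table distortion[i][k] (distortion from pruning k weights in layer i); measuring that table, and the paper's faster linear-time decomposition, are not shown here.

        # Hedged sketch: pick per-layer prune counts so the total meets the budget and
        # the summed (additive) distortion is minimal, via dynamic programming.
        def layer_adaptive_pruning(distortion, layer_sizes, prune_budget):
            """distortion[i][k]: distortion caused by pruning k weights in layer i.
            Returns (minimal total distortion, per-layer prune counts)."""
            INF = float("inf")
            dp = [0.0] + [INF] * prune_budget      # dp[b]: best distortion with b weights pruned
            back = []                              # back[i][b]: k chosen for layer i at budget b
            for i, size in enumerate(layer_sizes):
                new_dp = [INF] * (prune_budget + 1)
                back_i = [0] * (prune_budget + 1)
                for b in range(prune_budget + 1):
                    for k in range(min(size, b) + 1):
                        cand = dp[b - k] + distortion[i][k]
                        if cand < new_dp[b]:
                            new_dp[b], back_i[b] = cand, k
                dp, back = new_dp, back + [back_i]
            # Recover the per-layer choices for exactly `prune_budget` pruned weights.
            counts, b = [], prune_budget
            for i in reversed(range(len(layer_sizes))):
                counts.append(back[i][b])
                b -= back[i][b]
            return dp[prune_budget], list(reversed(counts))

        # Toy example: two layers of 3 weights each, prune 3 weights in total.
        d = [[0.0, 0.1, 0.4, 1.0], [0.0, 0.3, 0.5, 0.9]]
        print(layer_adaptive_pruning(d, [3, 3], prune_budget=3))   # -> (0.6, [1, 2])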

    Neuroticism vulnerability factors of anxiety symptoms in adolescents and early adults: an analysis using the bi-factor model and multi-wave longitudinal model

    Background: Neuroticism and stress are important vulnerability factors in the development and outcome of anxiety symptoms. However, as neuroticism is a heterogeneous trait, it is still unclear how its different factors contribute to anxiety symptoms independently or in conjunction with stress. Thus, different factors of neuroticism were extracted in the present longitudinal study using the bi-factor model, and the predictive effect of these factors on anxiety symptoms, as well as their combined effects with stress, was examined in both adolescent and early adult samples. Method: Participants (592 adolescents and 638 young adults) from Hunan, China were included. At the initial assessment of our longitudinal study, participants completed measurements of neuroticism, stress, and anxiety symptoms. Stress and anxiety symptoms were then assessed monthly for the subsequent 6 months. The bi-factor model was used to extract different factors of neuroticism, and the hierarchical linear model was used to analyze the longitudinal multi-wave data. Result: Several model fit indices were used to evaluate the bi-factor model fit for neuroticism (adolescents: Tucker-Lewis index (TLI) = 0.957, comparative fit index (CFI) = 0.973, RMSEA = 0.040, chi-square = 80.471; early adults: TLI = 0.957, CFI = 0.973, RMSEA = 0.042, chi-square = 88.465). The hierarchical linear modeling analyses indicated that the general factor of neuroticism predicted anxiety symptoms (adolescents: F = 36.77, p 0.05; early adults: F = 4.84, p 0.05; early adults: F = 0.02, p > 0.05); the interactive effects of the general factor and stress on anxiety symptoms were found only in early adulthood (adolescents: F = 0.13, p > 0.05; early adults: F = 11.55, p < 0.01). Conclusion: Our results suggest that the bi-factor model achieved a satisfactory fit for the neuroticism measurement, and that anxiety symptoms were predicted by the main effect of the general factor in both age samples and by the negative factor only in adults. The general factor of neuroticism, but not the negative factor, had an additive effect on anxiety symptoms in the face of stress, which means that the homogeneity of neuroticism plays a more significant role than its heterogeneity in subsequent anxiety symptoms when coping with stress.
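
    The multi-wave design described above (monthly anxiety and stress scores nested within participants, with person-level neuroticism factors and their interaction with stress as predictors) corresponds to a standard mixed-effects specification. The sketch below is a generic illustration using statsmodels; the file name and variable names are hypothetical placeholders, and the authors' exact model may differ.

        # Illustrative hierarchical (mixed-effects) model for the multi-wave design:
        # monthly anxiety scores nested within participants, predicted by stress,
        # person-level neuroticism factors, and their interactions with stress.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("longitudinal_waves.csv")   # hypothetical long-format file:
                                                     # one row per participant per month
        model = smf.mixedlm(
            "anxiety ~ stress * general_factor + stress * negative_factor",
            data=df,
            groups=df["participant_id"],
            re_formula="~stress",                    # random intercept and stress slope
        )
        result = model.fit()
        print(result.summary())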

    Duet: efficient and scalable hybriD neUral rElation undersTanding

    Learned cardinality estimation methods have achieved high precision compared with traditional methods. Among learned methods, query-driven approaches have long faced the data and workload drift problem. Although both data-driven and hybrid methods have been proposed to avoid this problem, even the state of the art among them suffers from high training and estimation costs, limited scalability, instability, and a long-tailed distribution problem on high-cardinality and high-dimensional tables, which seriously affects the practical application of learned cardinality estimators. In this paper, we prove that most of these problems are directly caused by the widely used progressive sampling. We solve this problem by introducing predicate information into the autoregressive model and propose Duet, a stable, efficient, and scalable hybrid method that estimates cardinality directly, without sampling or any non-differentiable process. Compared to Naru and UAE, Duet not only reduces the inference complexity from O(n) to O(1) but also achieves higher accuracy on high-cardinality and high-dimensional tables. Experimental results show that Duet achieves all the design goals above, is much more practical, and even has a lower inference cost on CPU than most learned methods have on GPU.
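
    The abstract's central point is that encoding the query's predicates as model inputs lets the estimator emit per-column conditional selectivities in a single forward pass, so the cardinality is the table size times their product and no progressive sampling is needed. The sketch below only illustrates that interface with a hypothetical model; it is not Duet's actual architecture or training procedure.

        # Hedged sketch: an autoregressive-style estimator that consumes encoded
        # per-column predicates and emits per-column conditional selectivities in a
        # single forward pass; estimated rows = table size * product of selectivities.
        import torch
        import torch.nn as nn

        class PredicateAwareEstimator(nn.Module):
            def __init__(self, n_columns, pred_dim=8, hidden=64):
                super().__init__()
                self.encoder = nn.GRU(pred_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 1)    # per-column conditional selectivity

            def forward(self, predicate_encodings):
                # predicate_encodings: (batch, n_columns, pred_dim), e.g. an encoded
                # (operator, lower bound, upper bound) triple per column.
                h, _ = self.encoder(predicate_encodings)
                sel = torch.sigmoid(self.head(h)).squeeze(-1)          # (batch, n_columns)
                return sel.clamp(1e-6, 1.0).log().sum(dim=-1).exp()    # product of selectivities

        model = PredicateAwareEstimator(n_columns=4)
        selectivity = model(torch.randn(2, 4, 8))    # one forward pass per query, no sampling
        estimated_rows = 1_000_000 * selectivity     # hypothetical table size
        print(estimated_rows)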

    Teach-DETR: Better Training DETR with Teachers

    In this paper, we present a novel training scheme, namely Teach-DETR, for learning better DETR-based detectors from versatile teacher detectors. We show that the predicted boxes from teacher detectors, which can be either RCNN-based or DETR-based, are an effective medium for transferring their knowledge to train a more accurate and robust DETR model. This training scheme can easily incorporate the predicted boxes from multiple teacher detectors, each of which provides parallel supervision to the student DETR. Our strategy introduces no additional parameters and adds negligible computational cost to the original detector during training. During inference, Teach-DETR brings zero additional overhead and retains the merit of requiring no non-maximum suppression. Extensive experiments show that our method leads to consistent improvements for various DETR-based detectors. Specifically, we improve the state-of-the-art detector DINO with a Swin-Large backbone, 4 scales of feature maps, and a 36-epoch training schedule from 57.8% to 58.9% mean average precision on the MSCOCO 2017 validation set. Code will be available at https://github.com/LeonHLJ/Teach-DETR
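
    In such a scheme, each teacher's predicted boxes act as extra pseudo ground truth: they are matched one-to-one to the student's query predictions and contribute an auxiliary loss that is added to the normal training loss and dropped at inference. The sketch below shows one simple way to realize this (L1-cost Hungarian matching, box regression only); the paper's actual matching costs and loss weighting may differ.

        # Hedged sketch of teacher-box auxiliary supervision for a DETR-style student:
        # match each teacher's boxes to student query predictions with Hungarian
        # matching on an L1 box cost and add the matched regression loss.
        import torch
        from scipy.optimize import linear_sum_assignment

        def teacher_box_loss(student_boxes, teacher_box_sets):
            """student_boxes: (num_queries, 4) in cxcywh; teacher_box_sets: list of
            (num_teacher_boxes, 4) tensors, one per teacher detector."""
            total = student_boxes.new_zeros(())
            for t_boxes in teacher_box_sets:
                # Pairwise L1 cost between every query prediction and every teacher box.
                cost = torch.cdist(student_boxes, t_boxes, p=1)            # (Q, T)
                row, col = linear_sum_assignment(cost.detach().cpu().numpy())
                total = total + torch.nn.functional.l1_loss(
                    student_boxes[row], t_boxes[col], reduction="mean")
            return total

        # Usage: add this term to the ordinary DETR loss during training only.
        student_boxes = torch.rand(100, 4, requires_grad=True)
        teachers = [torch.rand(7, 4), torch.rand(12, 4)]
        aux = teacher_box_loss(student_boxes, teachers)
        aux.backward()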