147 research outputs found
MECHANISM AND IMPLICATIONS OF DESTABILIZING TIP60, A TUMOUR SUPPRESSOR
Ph.D. in Cancer Biology
A novel signature based on microvascular invasion predicts the recurrence of HCC.
BACKGROUND AND OBJECTIVES: In hepatocellular carcinoma (HCC) patients, microvascular invasion (MVI) is associated with worse outcomes regardless of treatment. No single reliable preoperative factor exists to predict MVI. The aim of the work described here was to develop a new MVI-based mRNA biomarker to differentiate between high- and low-risk patients.
METHODS: Using The Cancer Genome Atlas (TCGA) database, we collected data from 315 HCC patients, including mRNA expression and complete clinical data. We generated a seven-mRNA signature to predict patient outcomes. The mRNA signature was validated using the GSE36376 cohort. Finally, we tested the formula in our own cohort of 53 HCC patients using qPCR for the seven mRNAs and analyzing the computed tomography (CT) features.
RESULTS: This seven-mRNA signature significantly correlated with recurrence-free survival (RFS) and overall survival (OS) in both the training and validation groups: RFS and OS were shorter in high-risk than in low-risk patients. Kaplan-Meier analysis likewise indicated that survival time was significantly shortened in the high-risk group. Time-dependent receiver operating characteristic analysis demonstrated good predictive performance for the seven-mRNA signature, and the signature was an independent prognostic factor in multivariate analysis. Results in our own cohort were consistent with the seven-mRNA risk score.
CONCLUSION: Our research identified a novel seven-mRNA biomarker based on MVI that predicts RFS and OS in HCC patients. This mRNA signature can stratify patients into subgroups based on their risk of recurrence to help guide individualized treatment and precision management in HCC.
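The kind of signature described above is typically a weighted sum of expression values, with weights taken from Cox regression, and a cutoff (often the median score) splitting patients into high- and low-risk groups. A minimal sketch follows; the gene names and coefficients are illustrative placeholders, not the actual seven-mRNA signature.

```python
# Hypothetical multi-gene risk score: weighted sum of mRNA expression values.
# Gene names and weights below are illustrative, NOT the published signature.

def risk_score(expression, coefficients):
    """Linear risk score: sum of coefficient * expression over the gene panel."""
    return sum(coefficients[g] * expression[g] for g in coefficients)

def stratify(patients, coefficients, cutoff):
    """Split patients into high/low risk at a cutoff (often the median score)."""
    groups = {"high": [], "low": []}
    for pid, expr in patients.items():
        score = risk_score(expr, coefficients)
        groups["high" if score >= cutoff else "low"].append(pid)
    return groups

coeffs = {"GENE1": 0.42, "GENE2": -0.17, "GENE3": 0.08}  # illustrative only
patients = {
    "P01": {"GENE1": 2.1, "GENE2": 1.0, "GENE3": 3.3},
    "P02": {"GENE1": 0.4, "GENE2": 2.5, "GENE3": 1.1},
}
print(stratify(patients, coeffs, cutoff=0.5))
```

In practice the coefficients come from a multivariate Cox model fitted on the training cohort, and the cutoff is chosen on the same cohort before being applied unchanged to validation data.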
Dapagliflozin in Heart Failure with Reduced Ejection Fraction: A Real-World Study
Aims: We aimed to observe the improvements in cardiac function indexes and the occurrence
of adverse events in patients with heart failure with reduced ejection fraction (HFrEF)
after dapagliflozin administration in a real-world setting.
Methods: We retrospectively included 201 patients with HFrEF who were treated at a tertiary
hospital in Zhengzhou and started to take dapagliflozin (10 mg/d) from March 2020
to June 2021. Their New York Heart Association (NYHA) functional class, cardiac ultrasound
indexes, laboratory indexes, and vital signs between baseline and the last follow-up
visit were compared, and their adverse events during the follow-up period were recorded.
Results: The follow-up period was 173 (67–210) days. The cardiac function indexes of patients
at follow-up, compared with baseline, indicated significant improvement (proportion
of NYHA functional class I and II: 40.8% vs. 56.2%; left ventricular ejection fraction:
28.4 ± 5.3% vs. 34.7 ± 5.9%; left ventricular end-diastolic diameter: 70.1 ± 6.4 mm
vs. 64.7 ± 5.6 mm; N-terminal pro-B-type natriuretic peptide: 5421.9 ± 2864.4 pg/mL
vs. 2842.8 ± 1703.4 pg/mL at baseline vs. at follow-up; all P < 0.05). The rates of
hypotension, deterioration of renal function, and genital infection during the follow-up
period were 6.5%, 4.0%, and 3.5%, respectively.
Conclusions: We believe that dapagliflozin is safe and effective in patients with HFrEF in the
real world.
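The baseline-versus-follow-up comparisons above are paired within patients. A minimal sketch of such a paired comparison, on made-up LVEF values (not the study data), is shown below; it returns the paired t statistic, and the P value would come from the t distribution with n-1 degrees of freedom (e.g. via scipy.stats.ttest_rel).

```python
# Paired baseline-vs-follow-up comparison on illustrative LVEF (%) values.
# These numbers are made up for demonstration; they are not the study data.
import math
from statistics import mean, stdev

def paired_t(baseline, followup):
    """Paired t statistic: mean within-patient change over its standard error."""
    diffs = [f - b for b, f in zip(baseline, followup)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

lvef_baseline = [27.0, 29.5, 31.0, 25.5, 28.0]   # illustrative %
lvef_followup = [33.0, 36.5, 35.0, 32.0, 34.5]
t = paired_t(lvef_baseline, lvef_followup)
print(f"paired t = {t:.2f} on {len(lvef_baseline) - 1} df")
```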
The Use of Solution-Focused Brief Therapy in Chinese Schools: A Qualitative Analysis of Practitioner Perceptions
Solution-focused brief therapy (SFBT) is a strengths-based, future-oriented approach that has shown promising results over the past decade. The SFBT literature has demonstrated the approach's ability to meet the unique needs of various client populations while adapting to a variety of service delivery settings. Schools are one setting in which SFBT has been successfully utilized in the United States. With the growing popularity of SFBT, countries outside the United States are beginning to implement SFBT in their schools. This article explores perceptions of the use of SFBT in schools among Chinese mental health practitioners. A survey was conducted by the Chinese government and included 134 participants. The qualitative results showed that the Chinese practitioners have a strong interest in the strengths-based approach and feel that SFBT is culturally adaptive to the Chinese student population. However, the practitioners did not feel confident in their ability to utilize SFBT techniques, attributing this lack of confidence to a shortage of SFBT-focused training and professional development opportunities. As SFBT research and practice continue to grow in China, so does the need for affordable, accessible SFBT training and supervision.
Scalable Scheduling for Industrial Time-Sensitive Networking: A Hyper-flow Graph Based Scheme
Industrial Time-Sensitive Networking (TSN) provides deterministic mechanisms
for real-time and reliable flow transmission. Increasing attention has been
paid to efficient scheduling for time-sensitive flows with stringent
requirements such as ultra-low latency and jitter. In TSN, the fine-grained
traffic shaping protocol, cyclic queuing and forwarding (CQF), eliminates
uncertain delay and frame loss by cyclic traffic forwarding and queuing.
However, it inevitably causes high scheduling complexity. Moreover, complexity
is quite sensitive to flow attributes and network scale. The problem stems in
part from the lack of an attribute mining mechanism in existing frame-based
scheduling. For time-critical industrial networks with large-scale complex flows, we propose a hyper-flow graph based scheduling scheme to improve scheduling scalability in terms of schedulability, scheduling efficiency, and latency and jitter. The hyper-flow graph is built by aggregating similar flow sets into hyper-flow nodes and designing a hierarchical scheduling framework. Flow attribute-sensitive scheduling information is embedded into condensed maximal cliques, which are then mapped back precisely to congested flow portions for re-scheduling. Parallel scheduling reduces the complexity induced by network scale. The scheme is realized in its entirety as a comprehensive scheduling algorithm, GH^2, which improves the three scalability criteria along a Pareto front. Extensive simulation studies demonstrate its superiority. Notably, GH^2 is verified to schedule stably, with a runtime of less than 100 ms for 1000 flows and nearly 1/430 that of the SOTA FITS method for 2000 flows.
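The aggregation step behind hyper-flow scheduling can be sketched in a drastically simplified form: flows sharing the same attributes (here, period and path) are merged into one hyper-flow node, so the scheduler assigns CQF cycle slots to far fewer units than per-flow scheduling would. This is only an illustration of the aggregation idea, not the actual GH^2 algorithm.

```python
# Simplified illustration of hyper-flow aggregation for CQF-style scheduling.
# Flows with identical (period, path) attributes collapse into one hyper-flow
# node; the toy scheduler then assigns whole hyper-flows to cycle slots.
from collections import defaultdict

def build_hyper_flows(flows):
    """Group flows sharing (period, path) into hyper-flow nodes."""
    groups = defaultdict(list)
    for fid, attrs in flows.items():
        key = (attrs["period"], tuple(attrs["path"]))
        groups[key].append(fid)
    return dict(groups)

def schedule_hyper_flows(hyper_flows, slots_per_cycle):
    """Greedily assign each hyper-flow (all member flows together) to a
    CQF time slot, round-robin over the sending cycle."""
    schedule = {}
    for i, (key, members) in enumerate(sorted(hyper_flows.items())):
        slot = i % slots_per_cycle
        for fid in members:
            schedule[fid] = slot
    return schedule

flows = {
    "f1": {"period": 2, "path": ["A", "B"]},
    "f2": {"period": 2, "path": ["A", "B"]},   # same attributes as f1
    "f3": {"period": 4, "path": ["A", "C"]},
}
hf = build_hyper_flows(flows)
print(len(hf), "hyper-flows for", len(flows), "flows")
print(schedule_hyper_flows(hf, slots_per_cycle=2))
```

The real scheme works on a graph of such nodes with maximal-clique analysis and hierarchical re-scheduling of congested portions, but the complexity gain from collapsing similar flows is already visible here.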
An Empirical Study on the Effectiveness of Noisy Label Learning for Program Understanding
Recently, deep learning models have been widely applied in program
understanding tasks, and these models achieve state-of-the-art results on many
benchmark datasets. A major challenge of deep learning for program
understanding is that the effectiveness of these approaches depends on the
quality of their datasets, and these datasets often contain noisy data samples.
A typical kind of noise in program understanding datasets is label noise, meaning that the target outputs for some inputs are mislabeled.
Label noise may negatively impact the performance of deep learning models, so researchers have proposed various approaches to alleviate it, forming a new research topic: noisy label learning (NLL).
In this paper, we conduct an empirical study on the effectiveness of noisy
label learning on deep learning for program understanding datasets. We evaluate
various noisy label learning approaches and deep learning models on two tasks:
program classification and code summarization. From the evaluation results, we
find that the impact of label noise and of NLL approaches differs between small deep learning models and large pre-trained models: small models are prone to label noise in program classification, and NLL approaches can improve their robustness, while large pre-trained models are robust against label noise and NLL does not significantly improve their performance. On the other hand, NLL approaches show satisfying results in identifying noisy labeled samples for both tasks, indicating that these techniques can help researchers build high-quality program understanding datasets.
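One common family of NLL techniques evaluated in studies like this is the "small-loss" criterion: samples on which the model's per-sample loss is lowest are treated as likely clean, and the rest are flagged as potentially mislabeled. A minimal sketch, with made-up loss values rather than real model outputs:

```python
# Small-loss selection, a common noisy-label-learning heuristic: rank samples
# by per-sample training loss and treat the lowest-loss fraction as clean.
# The loss values below are made-up numbers, not outputs of a real model.

def small_loss_selection(losses, clean_fraction):
    """Return (likely_clean_ids, suspect_ids) by ranking per-sample loss."""
    ranked = sorted(losses, key=losses.get)          # ascending loss
    k = int(len(ranked) * clean_fraction)
    return ranked[:k], ranked[k:]

losses = {"s1": 0.05, "s2": 2.3, "s3": 0.11, "s4": 1.8, "s5": 0.07}
clean, suspect = small_loss_selection(losses, clean_fraction=0.6)
print("likely clean:", clean)
print("suspect (possibly mislabeled):", suspect)
```

In a full NLL pipeline the suspect set would be relabeled, down-weighted, or dropped before retraining; the clean_fraction is usually tied to an estimate of the dataset's noise rate.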
Performance of criteria for selecting evolutionary models in phylogenetics: a comprehensive study based on simulated datasets
<p>Abstract</p> <p>Background</p> <p>Explicit evolutionary models are required in maximum-likelihood and Bayesian inference, the two methods that are overwhelmingly used in phylogenetic studies of DNA sequence data. Appropriate selection of nucleotide substitution models is important because the use of incorrect models can mislead phylogenetic inference. To better understand the performance of different model-selection criteria, we used 33,600 simulated datasets to analyse the accuracy, precision, dissimilarity, and biases of the hierarchical likelihood-ratio test, Akaike information criterion, Bayesian information criterion, and decision theory.</p> <p>Results</p> <p>We demonstrate that the Bayesian information criterion and decision theory are the most appropriate model-selection criteria because of their high accuracy and precision. Our results also indicate that in some situations different models are selected by different criteria for the same dataset. Such dissimilarity was the highest between the hierarchical likelihood-ratio test and Akaike information criterion, and lowest between the Bayesian information criterion and decision theory. The hierarchical likelihood-ratio test performed poorly when the true model included a proportion of invariable sites, while the Bayesian information criterion and decision theory generally exhibited similar performance to each other.</p> <p>Conclusions</p> <p>Our results indicate that the Bayesian information criterion and decision theory should be preferred for model selection. Together with model-adequacy tests, accurate model selection will serve to improve the reliability of phylogenetic inference and related analyses.</p>
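The criteria compared above are closed-form penalized scores of each model's maximized log-likelihood: AIC = -2 ln L + 2k and BIC = -2 ln L + k ln n, with the lowest score winning. A sketch on hypothetical numbers (not from the study; parameter counts cover substitution-model parameters only, ignoring branch lengths for simplicity) also shows how the two criteria can disagree on the same data:

```python
# AIC/BIC model selection on hypothetical log-likelihoods for three
# nucleotide substitution models. Numbers are illustrative, not study data;
# k counts only substitution-model parameters (branch lengths ignored).
import math

def aic(log_l, k):
    """Akaike information criterion: -2 ln L + 2k."""
    return -2.0 * log_l + 2.0 * k

def bic(log_l, k, n):
    """Bayesian information criterion: -2 ln L + k ln n."""
    return -2.0 * log_l + k * math.log(n)

# (model name, maximized log-likelihood, free parameters)
models = [("JC69", -5120.4, 0), ("HKY85", -5044.9, 4), ("GTR+G", -5038.0, 9)]
n_sites = 1200  # alignment length, used as the BIC sample size

best_bic = min(models, key=lambda m: bic(m[1], m[2], n_sites))
best_aic = min(models, key=lambda m: aic(m[1], m[2]))
print("BIC picks:", best_bic[0], "| AIC picks:", best_aic[0])
```

With these illustrative numbers BIC's heavier ln(n) penalty favours the leaner HKY85 while AIC favours the richer GTR+G, mirroring the dissimilarity between criteria that the study quantifies.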
Enumeration and representation of spin space groups
Fundamental properties of magnetically ordered materials, such as phase transitions, Weyl fermions, and spin excitations, were ultimately believed to rely on the symmetry theory of magnetic space groups. Recently, it has come to light that a more comprehensive group, known as the spin space group (SSG), which combines separate spin and spatial operations, is necessary to fully characterize the geometry and physical properties of magnetically ordered materials such as altermagnets. However, the basic theory of SSG has seldom been developed. In this work, we present a systematic study of the enumeration and
the representation theory of SSG. Starting from the 230 crystallographic space
groups and finite translational groups with a maximum order of 8, we establish
an extensive collection of over 80,000 SSGs under a four-segment nomenclature.
We then identify inequivalent SSGs specifically applicable to collinear,
coplanar, and noncoplanar magnetic configurations. Moreover, we derive the
irreducible co-representations of the little group in momentum space within the
SSG framework. Finally, we illustrate the SSGs and band degeneracies resulting
from SSG symmetries through several representative material examples, including
a well-known altermagnet, RuO2, and a spiral magnet, CeAuAl3. Our work advances the field of group theory in describing magnetically ordered materials, opening up avenues for deeper comprehension and further exploration of emergent phenomena in magnetic materials.
Comment: 29 pages, 1 table, 5 figures, and a supplementary table with 1508 pages
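The structural idea at the heart of the abstract above is that an SSG element pairs a spin-space operation with a spatial operation, and the two parts compose independently. A toy sketch of that pairing, with the spin part reduced to a rotation angle about z and the spatial part to identity/inversion (a drastic simplification of real SSG elements):

```python
# Toy model of spin-space-group elements: each element is a pair
# (spin operation, spatial operation), composed component-wise.
# Spin part: rotation about z in degrees mod 360; spatial part: +1
# (identity) or -1 (inversion). Real SSG elements are far richer.

def compose(g1, g2):
    """Component-wise composition of two (spin, spatial) pairs."""
    (s1, r1), (s2, r2) = g1, g2
    return ((s1 + s2) % 360, r1 * r2)

def is_closed(elements):
    """Check group closure: every product must land back in the set."""
    return all(compose(a, b) in elements for a in elements for b in elements)

# A small SSG-like group: spin rotation by 180 deg paired with spatial inversion.
G = {(0, 1), (180, -1)}
print("closed:", is_closed(G))   # (180,-1)*(180,-1) = (0,1), which is in G
```

Enumeration in the actual work proceeds over the 230 crystallographic space groups combined with finite translational groups, but the same closure requirement on paired operations underlies the construction.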