88 research outputs found
Up-regulation and clinical relevance of novel helicase homologue DHX32 in colorectal cancer
Funding: Xiamen Bureau for Science and Technology [A0000033]
Background: This study aimed to identify novel biomarkers for colorectal cancer. Methods: Fluorescent mRNA differential display PCR (DD-PCR) was used to screen genes differentially expressed between colorectal cancer tissues and their adjacent tissues. The differentially expressed genes were confirmed by real-time PCR, and their clinical relevance (such as association with tumor location and lymph node metastasis) was then investigated. Results: By DD-PCR we identified a novel RNA helicase, DHX32, which showed higher expression in colorectal cancer tissues than in adjacent tissues; this result was confirmed by real-time RT-PCR. In addition, we found that the level of DHX32 expression in colorectal cancer was significantly associated with cancer location, lymph node metastasis, nodal status, differentiation grade, and Dukes' stage. Conclusion: DHX32 may play an important role in the development of colorectal cancer and, after additional investigation, could serve as a novel biomarker for the disease.
Stress corrosion behavior of X80 pipeline steel in natural seawater with different dissolved oxygen
Slow strain rate stress corrosion tests of X80 steel in natural seawater were carried out to study the effect of dissolved oxygen on stress corrosion sensitivity. Scanning electron microscopy (SEM) combined with electrochemical measurements was used to analyze the mechanism and influencing factors of stress corrosion cracking (SCC). The results show that with increasing dissolved oxygen, the SCC sensitivity in natural seawater increases and the fracture mode gradually transforms from ductile to quasi-brittle. Tafel polarization and electrochemical impedance spectroscopy of X80 show that dissolved oxygen aggravates electrochemical corrosion and reduces corrosion resistance. Corrosion pits and micro-cracks on the lateral and fracture surfaces trigger stress concentration and promote anodic dissolution under stress, thereby accelerating stress corrosion cracking of X80 steel in seawater.
Dual-Channel Multiplex Graph Neural Networks for Recommendation
Efficient recommender systems play a crucial role in accurately capturing
user and item attributes that mirror individual preferences. Some existing
recommendation techniques have started to shift their focus towards modeling
various types of interaction relations between users and items in real-world
recommendation scenarios, such as clicks, marking favorites, and purchases on
online shopping platforms. Nevertheless, these approaches still grapple with
two significant shortcomings: (1) Insufficient modeling and exploitation of the
impact of various behavior patterns formed by multiplex relations between users
and items on representation learning, and (2) ignoring the effect of different
relations in the behavior patterns on the target relation in recommender system
scenarios. In this study, we introduce a novel recommendation framework,
Dual-Channel Multiplex Graph Neural Network (DCMGNN), which addresses the
aforementioned challenges. It incorporates an explicit behavior pattern
representation learner to capture the behavior patterns composed of multiplex
user-item interaction relations, and includes a relation-chain representation
learner and a relation chain-aware encoder to discover the impact of various
auxiliary relations on the target relation and the dependencies between
different relations, and to mine the appropriate order of relations in a
behavior pattern.
Extensive experiments on three real-world datasets demonstrate that our DCMGNN
surpasses various state-of-the-art recommendation methods. It outperforms the
best baselines by 10.06% and 12.15% on average across all datasets in terms
of R@10 and N@10, respectively.
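The R@10 and N@10 figures above are the standard top-k ranking metrics Recall@10 and NDCG@10. A minimal sketch of how they are computed for a single user; the item IDs and ground-truth set are hypothetical, not drawn from the paper's datasets:

```python
import math

def recall_at_k(ranked, relevant, k=10):
    """Fraction of the relevant items that appear in the top-k ranking."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked, relevant, k=10):
    """Binary-relevance NDCG: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant), k)))
    return dcg / ideal

# One user: the model's ranked items and the ground-truth interactions
# under the target relation (e.g., purchases)
ranked = ["i3", "i7", "i1", "i9", "i2", "i5", "i8", "i4", "i6", "i0"]
relevant = {"i1", "i2", "i4"}
print(recall_at_k(ranked, relevant))
print(ndcg_at_k(ranked, relevant))
```

Reported numbers are then averaged over all test users.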
Prediabetes and the incidence of Parkinson’s disease: A meta-analysis
Diabetes has been associated with an elevated risk of Parkinson’s disease (PD), yet the relationship between prediabetes (PreD) and the incidence of PD in the adult population remains unclear. Therefore, a systematic review and meta-analysis was conducted to evaluate whether PreD is also associated with a higher risk of PD. We conducted comprehensive searches of the PubMed, Embase, and Web of Science databases to identify relevant observational studies with longitudinal follow-up. A random-effects model was employed to synthesize the data, mitigating the potential impact of study heterogeneity on the outcomes. Our analysis incorporated seven datasets from five cohort studies, encompassing 18,170,592 adult participants without a PD diagnosis at baseline. Among them, 2,432,148 (13.3%) had PreD. During follow-up, a total of 46,682 patients were diagnosed with PD. The pooled results indicated that PreD was associated with an increased incidence of PD (risk ratio [RR] 1.09, 95% confidence interval [CI] 1.02-1.16; P = 0.02; I² = 52%) after adjusting for potential confounding factors such as age, sex, body mass index (BMI), and smoking. Subsequent pilot subgroup analyses suggested that the association between PreD and PD might not be significantly influenced by country, study design, participant age or sex, definition of PreD, or study quality scores (P for subgroup difference all > 0.05). In conclusion, adults with PreD may have a mildly increased risk of developing PD compared to those with normoglycemia.
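The pooled RR, its 95% CI, and the I² statistic reported above come from a random-effects synthesis. A minimal sketch of the common DerSimonian-Laird procedure on log risk ratios; the per-study estimates below are hypothetical illustrations, not the five cohorts analyzed in the review:

```python
import math

def dersimonian_laird(log_rr, se):
    """Pool log risk ratios with the DerSimonian-Laird random-effects model."""
    w = [1 / s**2 for s in se]                      # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))  # Cochran's Q
    df = len(log_rr) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_re = [1 / (s**2 + tau2) for s in se]          # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    rr = math.exp(pooled)
    ci = (math.exp(pooled - 1.96 * se_pooled),
          math.exp(pooled + 1.96 * se_pooled))
    return rr, ci, i2

# Hypothetical per-study RRs and standard errors of log(RR)
log_rr = [math.log(x) for x in (1.04, 1.21, 1.02, 1.15, 1.07)]
se = [0.03, 0.06, 0.04, 0.05, 0.04]
rr, (lo, hi), i2 = dersimonian_laird(log_rr, se)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")
```

When the between-study variance tau² is zero, the result reduces to the fixed-effect inverse-variance estimate.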
Enhancing Medical Task Performance in GPT-4V: A Comprehensive Study on Prompt Engineering Strategies
OpenAI's latest large vision-language model (LVLM), GPT-4V(ision), has piqued
considerable interest for its potential in medical applications. Despite its
promise, recent studies and internal reviews highlight its underperformance in
specialized medical tasks. This paper explores the boundary of GPT-4V's
capabilities in medicine, particularly in processing complex imaging data from
endoscopy, CT, and MRI examinations. Leveraging open-source datasets, we
assessed its foundational competencies, identifying substantial areas for
enhancement. Our research emphasizes prompt engineering, an often-underutilized
strategy for improving AI responsiveness. Through iterative testing, we refined
the model's prompts, significantly improving its interpretative accuracy and
relevance in medical imaging. From our comprehensive evaluations, we distilled
10 effective prompt engineering techniques, each fortifying GPT-4V's medical
acumen. These methodical enhancements facilitate more reliable, precise, and
clinically valuable insights from GPT-4V, advancing its operability in critical
healthcare environments. Our findings are pivotal for those employing AI in
medicine, providing clear, actionable guidance on harnessing GPT-4V's full
diagnostic potential.
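As an illustration of the kind of structured prompting the abstract refers to, the sketch below combines three widely used techniques: role assignment, a step-by-step reasoning cue, and an output-format constraint. The function and schema are hypothetical examples, not the ten techniques distilled in the paper:

```python
def build_medical_prompt(modality, question, findings_schema):
    """Assemble a structured prompt for a vision-language model.
    Illustrative only; not the specific prompts evaluated in the paper."""
    return "\n".join([
        "You are an experienced radiologist.",                 # role assignment
        f"You are shown a {modality} image.",
        f"Task: {question}",
        "Reason step by step before answering.",               # reasoning cue
        f"Answer strictly in this format: {findings_schema}",  # format constraint
    ])

prompt = build_medical_prompt(
    modality="contrast-enhanced abdominal CT",
    question="Identify any visible lesions and their locations.",
    findings_schema="finding: <text>; location: <organ>; confidence: <low|medium|high>",
)
print(prompt)
```

The assembled string would be sent alongside the image; the format constraint makes the response machine-parseable for downstream evaluation.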
A centi-pc-scale compact radio core in the nearby galaxy M60
M60, an elliptical galaxy located 16.5 Mpc away, has an active nucleus with a
very low luminosity and an extremely low accretion rate. To investigate the
nature of the innermost radio nucleus around its central supermassive black
hole, data from the Very Long Baseline Array (VLBA) at 4.4 and 7.6 GHz were
reduced. The VLBA images reveal a compact component with total flux densities
of 20 mJy at both frequencies, a size of 0.27 mas (99.7% confidence level),
about 0.022 pc at 7.6 GHz, and a high brightness temperature. This suggests that the
observed centi-parsec-scale compact core could be attributed to a nonthermal
jet base or an advection-dominated accretion flow (ADAF) with nonthermal
electrons. The extremely compact structure also supports the presence of an
SMBH in the center. Our results indicate that M60 is a promising target for
broad-band VLBI observations at millimeter wavelengths to probe ADAF scenarios
and tightly constrain the potential photon ring (about 28 μas) around its
SMBH.
Comment: 15 pages, 5 figures, 3 tables; accepted for publication in the
Astrophysical Journal.
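For reference, the brightness temperature of such a compact VLBI component is commonly estimated from the fitted flux density and angular size with the standard relation (this formula is not quoted in the abstract itself):

```latex
T_\mathrm{B} \approx 1.22\times10^{12}\,(1+z)\,
\frac{S_\nu}{\nu^{2}\,\theta_\mathrm{maj}\,\theta_\mathrm{min}}\ \mathrm{K},
```

where $S_\nu$ is the component flux density in Jy, $\nu$ the observing frequency in GHz, $z$ the redshift, and $\theta_\mathrm{maj}$, $\theta_\mathrm{min}$ the fitted major and minor axes in mas.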
STU-Net: Scalable and Transferable Medical Image Segmentation Models Empowered by Large-Scale Supervised Pre-training
Large-scale models pre-trained on large-scale datasets have profoundly
advanced the development of deep learning. However, the state-of-the-art models
for medical image segmentation are still small-scale, with their parameters
only in the tens of millions. Further scaling them up to higher orders of
magnitude is rarely explored. An overarching goal of exploring large-scale
models is to train them on large-scale medical segmentation datasets for better
transfer capacities. In this work, we design a series of Scalable and
Transferable U-Net (STU-Net) models, with parameter sizes ranging from 14
million to 1.4 billion. Notably, the 1.4B STU-Net is the largest medical image
segmentation model to date. Our STU-Net is based on the nnU-Net framework due to
its popularity and impressive performance. We first refine the default
convolutional blocks in nnU-Net to make them scalable. Then, we empirically
evaluate different scaling combinations of network depth and width, discovering
that it is optimal to scale model depth and width together. We train our
scalable STU-Net models on the large-scale TotalSegmentator dataset and find that
increasing model size brings a stronger performance gain. This observation
reveals the promise of large models in medical image segmentation.
Furthermore, we evaluate the transferability of our model on 14 downstream
datasets for direct inference and 3 datasets for further fine-tuning, covering
various modalities and segmentation targets. We observe good performance of our
pre-trained model in both direct inference and fine-tuning. The code and
pre-trained models are available at https://github.com/Ziyan-Huang/STU-Net
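The depth-versus-width trade-off discussed above can be made concrete with a toy parameter count: widening a convolutional stage grows its parameters quadratically, while deepening it grows them only linearly, which is why scaling both together balances capacity. The stage layout below is a hypothetical simplification, not the actual STU-Net configuration:

```python
def conv_layer_params(c_in, c_out, k=3):
    """Parameters of one 3D convolution (k^3 kernel) plus per-channel bias."""
    return k**3 * c_in * c_out + c_out

def stage_params(width, depth, k=3):
    """Rough parameter count of one encoder stage: `depth` conv layers,
    each with `width` input and output channels (toy architecture)."""
    return sum(conv_layer_params(width, width, k) for _ in range(depth))

base   = stage_params(width=32, depth=2)
deeper = stage_params(width=32, depth=8)   # depth x4 -> params x4 (linear)
wider  = stage_params(width=64, depth=2)   # width x2 -> params ~x4 (quadratic)
both   = stage_params(width=64, depth=4)   # scale depth and width together
print(base, deeper, wider, both)
```

Doubling width alone already quadruples the stage, so joint scaling reaches a target budget with more balanced depth and width.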
SAM-Med3D
Although the Segment Anything Model (SAM) has demonstrated impressive
performance in 2D natural image segmentation, its application to 3D volumetric
medical images reveals significant shortcomings, namely suboptimal performance
and unstable prediction, necessitating an excessive number of prompt points to
attain the desired outcomes. These issues can hardly be addressed by
fine-tuning SAM on medical data because the original 2D structure of SAM
neglects 3D spatial information. In this paper, we introduce SAM-Med3D, the
most comprehensive effort to date to adapt SAM to 3D medical images. Our approach is
characterized by its comprehensiveness in two primary aspects: firstly, by
comprehensively reformulating SAM into a thorough 3D architecture trained on a
comprehensively processed large-scale volumetric medical dataset; and secondly,
by providing a comprehensive evaluation of its performance. Specifically, we
train SAM-Med3D with over 131K 3D masks and 247 categories. Our SAM-Med3D
excels at capturing 3D spatial information, exhibiting competitive performance
with significantly fewer prompt points than the top-performing fine-tuned SAM
in the medical domain. We then evaluate its capabilities across 15 datasets and
analyze it from multiple perspectives, including anatomical structures,
modalities, targets, and generalization abilities. Our approach, compared with
SAM, showcases pronouncedly enhanced efficiency and broad segmentation
capabilities for 3D volumetric medical images. Our code is released at
https://github.com/uni-medical/SAM-Med3D
A-Eval: A Benchmark for Cross-Dataset Evaluation of Abdominal Multi-Organ Segmentation
Although deep learning has revolutionized abdominal multi-organ
segmentation, models often struggle with generalization due to training on
small, specific datasets. With the recent emergence of large-scale datasets,
some important questions arise: Can models trained on these datasets
generalize well to different ones? If not, how can their generalizability
be further improved? To address these questions, we introduce A-Eval, a benchmark
for the cross-dataset Evaluation ('Eval') of Abdominal ('A') multi-organ
segmentation. We employ training sets from four large-scale public datasets:
FLARE22, AMOS, WORD, and TotalSegmentator, each providing extensive labels for
abdominal multi-organ segmentation. For evaluation, we incorporate the
validation sets from these datasets along with the training set from the BTCV
dataset, forming a robust benchmark comprising five distinct datasets. We
evaluate the generalizability of various models using the A-Eval benchmark,
with a focus on diverse data usage scenarios: training on individual datasets
independently, utilizing unlabeled data via pseudo-labeling, mixing different
modalities, and joint training across all available datasets. Additionally, we
explore the impact of model sizes on cross-dataset generalizability. Through
these analyses, we underline the importance of effective data usage in
enhancing models' generalization capabilities, offering valuable insights for
assembling large-scale datasets and improving training strategies. The code and
pre-trained models are available at
https://github.com/uni-medical/A-Eval
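One of the data-usage scenarios above, pseudo-labeling, can be sketched in miniature: fit on labeled data, predict on unlabeled data, keep only the confident predictions, and refit on the enlarged set. The nearest-centroid classifier and 2D toy points below are hypothetical stand-ins for a segmentation model and image data:

```python
import math

def centroid(points):
    """Mean of a list of equal-length coordinate tuples."""
    dims = len(points[0])
    return tuple(sum(p[d] for p in points) / len(points) for d in range(dims))

def predict(x, centroids):
    """Return (label, margin); the margin between the two nearest
    centroids serves as a confidence score."""
    dists = sorted((math.dist(x, c), lab) for lab, c in centroids.items())
    (d0, lab0), (d1, _) = dists[0], dists[1]
    return lab0, d1 - d0

# Toy labeled/unlabeled 2D data standing in for labeled/unlabeled scans
labeled = {0: [(0.0, 0.0), (0.2, 0.1)], 1: [(1.0, 1.0), (0.9, 1.1)]}
unlabeled = [(0.1, 0.0), (0.95, 1.05), (0.5, 0.5)]

cents = {lab: centroid(pts) for lab, pts in labeled.items()}
# Adopt only confident pseudo-labels, then refit on the enlarged training set
for x in unlabeled:
    lab, margin = predict(x, cents)
    if margin > 0.5:  # confidence threshold; the ambiguous mid-point is skipped
        labeled[lab].append(x)
cents = {lab: centroid(pts) for lab, pts in labeled.items()}
print({lab: len(pts) for lab, pts in labeled.items()})
```

In segmentation practice the same loop uses model confidence (e.g., voxel-wise probabilities) as the margin, with the threshold tuned on a validation set.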
SA-Med2D-20M Dataset: Segment Anything in 2D Medical Imaging with 20 Million masks
Segment Anything Model (SAM) has achieved impressive results for natural
image segmentation with input prompts such as points and bounding boxes. Its
success owes largely to massive labeled training data. However, directly
applying SAM to medical image segmentation does not perform well because SAM
lacks medical knowledge: it was not trained on medical images. To
incorporate medical knowledge into SAM, we introduce SA-Med2D-20M, a
large-scale segmentation dataset of 2D medical images built upon numerous
public and private datasets. It consists of 4.6 million 2D medical images and
19.7 million corresponding masks, covering almost the whole body and showing
significant diversity. This paper describes all the datasets collected in
SA-Med2D-20M and details how to process these datasets. Furthermore,
comprehensive statistics of SA-Med2D-20M are presented to facilitate better
use of our dataset, helping researchers build medical vision
foundation models or apply their models to downstream medical applications. We
hope that the large scale and diversity of SA-Med2D-20M can be leveraged to
develop medical artificial intelligence for enhancing diagnosis, medical image
analysis, knowledge sharing, and education. The data with a redistribution
license are publicly available at https://github.com/OpenGVLab/SAM-Med2D