
    Deep learning-based dynamic forecasting method and application for ultra-deep fractured reservoir production

    To address the complex challenges of dynamic production forecasting for the deep and ultra-deep fractured carbonate reservoirs of the Tarim Basin's Tahe Oilfield, which involve numerous influencing factors, strong temporal variation, high non-linearity, and prediction difficulty, we propose a prediction method based on gated recurrent unit (GRU) networks. First, the production data and influencing factors are reduced in dimensionality using the Pearson correlation coefficient and principal component analysis to obtain multi-attribute time-series data. Next, the time-series data are modeled with a GRU network. The model is then optimized with the Optuna algorithm and applied to dynamic production forecasting for the deep and ultra-deep fractured carbonate reservoirs of the Tahe Oilfield. The results demonstrate that the Optuna-optimized GRU model excels at dynamic production forecasting for the Tahe fractured carbonate reservoirs. Compared with the traditional method, the mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) are reduced by 0.04, 0.1, and 1.1, respectively. The method is better adapted to the production forecasting challenges of deep fractured reservoirs and provides an effective means of improving model performance, with significant practical value for guiding the development of fractured reservoirs.
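
    The pipeline described above (Pearson filtering, PCA compression, GRU forecasting, Optuna tuning) can be sketched compactly. The following is a minimal illustration, not the authors' code: the correlation threshold, component count, training budget, and search ranges are all assumptions.

```python
# Sketch of the described pipeline: Pearson/PCA feature reduction,
# a GRU forecaster, and Optuna hyperparameter search. All thresholds
# and ranges below are illustrative assumptions.
import numpy as np
import optuna
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def reduce_features(X, y, corr_threshold=0.3, n_components=4):
    """Keep features whose |Pearson r| with production exceeds the
    threshold, then compress them with PCA."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    X_sel = X[:, np.abs(r) >= corr_threshold]
    return PCA(n_components=min(n_components, X_sel.shape[1])).fit_transform(X_sel)

class GRUForecaster(nn.Module):
    def __init__(self, n_features, hidden_size, n_layers):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden_size, n_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])   # predict the next production value

def objective(trial, X_seq, y_seq):
    # X_seq: (n, window, n_features) tensor; y_seq: (n, 1) tensor.
    hidden = trial.suggest_int("hidden_size", 16, 128)
    layers = trial.suggest_int("n_layers", 1, 3)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    model = GRUForecaster(X_seq.shape[-1], hidden, layers)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(50):                   # short training budget per trial
        opt.zero_grad()
        loss = loss_fn(model(X_seq), y_seq)
        loss.backward()
        opt.step()
    return loss.item()

# study = optuna.create_study(direction="minimize")
# study.optimize(lambda t: objective(t, X_seq, y_seq), n_trials=30)
```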

    Comprehensive analysis of LRR-RLKs and key gene identification in Pinus massoniana resistant to pine wood nematode

    Pinus massoniana is a pioneer tree widely planted for afforestation on barren hills in southern China, where the total planted area is 8.04 million ha. The invasive pine wood nematode (Bursaphelenchus xylophilus) poses a serious threat to the survival of P. massoniana. Plant resistance genes encoding leucine-rich repeat-containing transmembrane receptor proteins play important roles in plant defense. Leucine-rich repeat receptor-like kinases (LRR-RLKs), the largest subfamily of the RLK protein family, play an important role in sensing stress signals in plants. However, the LRR-RLKs of P. massoniana have not been characterized previously, and their role in resistance to B. xylophilus is unknown. In this study, 185 members of the LRR-RLK subfamily were identified in P. massoniana and categorized into 14 subgroups. Transcriptomic and quantitative real-time RT-PCR analyses showed that PmRLKs32 was highly expressed in stem tissue after inoculation with B. xylophilus. The gene exhibited high homology with AtFLS2 of Arabidopsis thaliana. PmRLKs32 was localized to the plasma membrane and was significantly upregulated in both nematode-resistant and nematode-susceptible individuals. Transient expression of PmRLKs32 resulted in a burst of reactive oxygen species production in P. massoniana and Nicotiana benthamiana seedlings. These results lay a foundation for further exploration of the regulatory mechanism of LRR-RLKs in response to biotic stress in P. massoniana.

    Enhancing Medical Task Performance in GPT-4V: A Comprehensive Study on Prompt Engineering Strategies

    OpenAI's latest large vision-language model (LVLM), GPT-4V(ision), has attracted considerable interest for its potential in medical applications. Despite its promise, recent studies and internal reviews highlight its underperformance on specialized medical tasks. This paper explores the boundaries of GPT-4V's capabilities in medicine, particularly in processing complex imaging data from endoscopy, CT, and MRI. Leveraging open-source datasets, we assessed its foundational competencies and identified substantial areas for enhancement. Our research emphasizes prompt engineering, an often-underutilized strategy for improving AI responsiveness. Through iterative testing, we refined the model's prompts, significantly improving its interpretative accuracy and relevance in medical imaging. From our comprehensive evaluations, we distilled 10 effective prompt engineering techniques, each fortifying GPT-4V's medical acumen. These methodical enhancements enable more reliable, precise, and clinically valuable insights from GPT-4V, advancing its operability in critical healthcare environments. Our findings provide clear, actionable guidance for those employing AI in medicine on harnessing GPT-4V's full diagnostic potential.
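
    As a hedged illustration of the kind of structured prompting the paper studies, the sketch below sends a role-framed, step-by-step prompt to GPT-4V through the OpenAI Python SDK. The prompt text and image URL are placeholders of my own, not one of the paper's 10 distilled techniques.

```python
# Illustrative only: role framing + task decomposition in a GPT-4V call.
# The prompt wording and URL are assumptions, not the paper's techniques.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": (
                "You are a radiology assistant. Describe the anatomy visible "
                "in this CT slice step by step, then list any abnormalities "
                "with your confidence for each."
            )},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/ct_slice.png"}},
        ],
    }],
    max_tokens=400,
)
print(response.choices[0].message.content)
```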

    STU-Net: Scalable and Transferable Medical Image Segmentation Models Empowered by Large-Scale Supervised Pre-training

    Large-scale models pre-trained on large-scale datasets have profoundly advanced the development of deep learning. However, the state-of-the-art models for medical image segmentation are still small-scale, with parameters only in the tens of millions, and scaling them up to higher orders of magnitude has rarely been explored. An overarching goal of exploring large-scale models is to train them on large-scale medical segmentation datasets for better transfer capacity. In this work, we design a series of Scalable and Transferable U-Net (STU-Net) models, with parameter sizes ranging from 14 million to 1.4 billion. Notably, the 1.4B STU-Net is the largest medical image segmentation model to date. STU-Net is based on the nnU-Net framework owing to its popularity and impressive performance. We first refine the default convolutional blocks in nnU-Net to make them scalable. Then, we empirically evaluate different scaling combinations of network depth and width, finding that it is optimal to scale depth and width together. We train our scalable STU-Net models on the large-scale TotalSegmentator dataset and find that increasing model size brings stronger performance gains. This observation suggests that large models are promising for medical image segmentation. Furthermore, we evaluate the transferability of our model on 14 downstream datasets for direct inference and 3 datasets for further fine-tuning, covering various modalities and segmentation targets, and observe good performance of the pre-trained model in both settings. The code and pre-trained models are available at https://github.com/Ziyan-Huang/STU-Net.
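
    The core scaling idea, growing depth and width together, can be sketched as follows. This is an assumption-laden illustration, not the nnU-Net or STU-Net code: one factor multiplies both the channel count and the number of convolutional blocks per stage.

```python
# Sketch (assumption, not the authors' code) of joint depth/width scaling:
# a single factor grows both channels and block count in a 3D conv stage.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

def make_stage(in_ch, base_ch=32, base_depth=2, scale=1.0):
    """Scale width (channels) and depth (block count) with one factor."""
    out_ch = int(base_ch * scale)
    depth = max(1, round(base_depth * scale))
    blocks = [ConvBlock(in_ch, out_ch)]
    blocks += [ConvBlock(out_ch, out_ch) for _ in range(depth - 1)]
    return nn.Sequential(*blocks)

stage = make_stage(in_ch=1, scale=2.0)  # 2x wider and 2x deeper stage
print(stage(torch.randn(1, 1, 16, 32, 32)).shape)  # (1, 64, 16, 32, 32)
```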

    Experimental Study on the Peeling Characteristics of Wax on the Surface of Flexible Composite Pipe and Plastic Alloy Tube

    In this paper, wax-deposition peeling experiments were carried out with medical paraffin on the surfaces of a flexible composite pipe and a plastic alloy pipe. The peeling force of the wax on the flexible composite pipe surface and on the plastic alloy pipe lining was examined at different temperatures and wax thicknesses. The experimental conclusions provide specific guidance for wax-cleaning technology and device design for nonmetallic pipes.

    A-Eval: A Benchmark for Cross-Dataset Evaluation of Abdominal Multi-Organ Segmentation

    Although deep learning has revolutionized abdominal multi-organ segmentation, models often struggle to generalize because they are trained on small, specific datasets. With the recent emergence of large-scale datasets, some important questions arise: can models trained on these datasets generalize well to different ones, and how can their generalizability be further improved? To address these questions, we introduce A-Eval, a benchmark for the cross-dataset Evaluation ('Eval') of Abdominal ('A') multi-organ segmentation. We employ training sets from four large-scale public datasets: FLARE22, AMOS, WORD, and TotalSegmentator, each providing extensive labels for abdominal multi-organ segmentation. For evaluation, we incorporate the validation sets from these datasets along with the training set from the BTCV dataset, forming a robust benchmark comprising five distinct datasets. We evaluate the generalizability of various models on the A-Eval benchmark, focusing on diverse data usage scenarios: training on individual datasets independently, utilizing unlabeled data via pseudo-labeling, mixing different modalities, and joint training across all available datasets. Additionally, we explore the impact of model size on cross-dataset generalizability. Through these analyses, we underline the importance of effective data usage in enhancing models' generalization capabilities, offering valuable insights for assembling large-scale datasets and improving training strategies. The code and pre-trained models are available at https://github.com/uni-medical/A-Eval.
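
    A cross-dataset evaluation loop in the spirit of A-Eval is easy to sketch. The snippet below is illustrative only; the model and datasets objects are hypothetical placeholders, and the metric is a plain per-case Dice coefficient.

```python
# Hedged sketch of cross-dataset evaluation: one model, several held-out
# datasets, a mean Dice score per dataset. `model` and `datasets` are
# hypothetical placeholders, not A-Eval's actual API.
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def evaluate(model, datasets):
    scores = {}
    for name, samples in datasets.items():  # e.g. "FLARE22", "AMOS", ...
        per_case = [dice(model(img), gt) for img, gt in samples]
        scores[name] = float(np.mean(per_case))
    return scores  # per-dataset mean Dice reveals generalization gaps
```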

    SAM-Med3D

    Although the Segment Anything Model (SAM) has demonstrated impressive performance in 2D natural image segmentation, its application to 3D volumetric medical images reveals significant shortcomings: suboptimal performance and unstable predictions that require an excessive number of prompt points to attain the desired outcomes. These issues can hardly be addressed by fine-tuning SAM on medical data, because SAM's original 2D structure neglects 3D spatial information. In this paper, we introduce SAM-Med3D, the most comprehensive study to date on modifying SAM for 3D medical images. Our approach is comprehensive in two primary aspects: first, we reformulate SAM as a thorough 3D architecture trained on a carefully processed, large-scale volumetric medical dataset; second, we provide a comprehensive evaluation of its performance. Specifically, we train SAM-Med3D with over 131K 3D masks spanning 247 categories. SAM-Med3D excels at capturing 3D spatial information, exhibiting competitive performance with significantly fewer prompt points than the top-performing fine-tuned SAM in the medical domain. We then evaluate its capabilities across 15 datasets and analyze it from multiple perspectives, including anatomical structures, modalities, targets, and generalization ability. Compared with SAM, our approach shows pronouncedly enhanced efficiency and broad segmentation capabilities for 3D volumetric medical images. Our code is released at https://github.com/uni-medical/SAM-Med3D.
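
    The essence of such a 2D-to-3D reformulation can be illustrated with the patch embedding: replacing SAM's 2D convolutional embedding with a volumetric one, so that tokens, and hence prompts, carry 3D spatial context. The sketch below is an assumption about the general approach, not the released SAM-Med3D code.

```python
# Hedged sketch (not the authors' code) of a volumetric patch embedding,
# the 3D analogue of SAM's 2D ViT patch embedding.
import torch
import torch.nn as nn

class PatchEmbed3D(nn.Module):
    """Volumetric patch embedding: one token per 3D patch."""
    def __init__(self, patch_size=16, in_ch=1, embed_dim=384):
        super().__init__()
        self.proj = nn.Conv3d(in_ch, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (B, C, D, H, W)
        x = self.proj(x)                     # (B, E, D/p, H/p, W/p)
        return x.flatten(2).transpose(1, 2)  # (B, n_tokens, E)

tokens = PatchEmbed3D()(torch.randn(1, 1, 64, 128, 128))
print(tokens.shape)  # torch.Size([1, 256, 384])
```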

    SA-Med2D-20M Dataset: Segment Anything in 2D Medical Imaging with 20 Million masks

    The Segment Anything Model (SAM) has achieved impressive results for natural image segmentation with input prompts such as points and bounding boxes. Its success owes largely to massive labeled training data. However, SAM does not perform well when applied directly to medical image segmentation, because it lacks medical knowledge: it was not trained on medical images. To incorporate medical knowledge into SAM, we introduce SA-Med2D-20M, a large-scale segmentation dataset of 2D medical images built upon numerous public and private datasets. It consists of 4.6 million 2D medical images and 19.7 million corresponding masks, covering almost the whole body and showing significant diversity. This paper describes all the datasets collected in SA-Med2D-20M and details how these datasets were processed. Furthermore, comprehensive statistics of SA-Med2D-20M are presented to facilitate better use of the dataset, helping researchers build medical vision foundation models or apply their models to downstream medical applications. We hope that the large scale and diversity of SA-Med2D-20M can be leveraged to develop medical artificial intelligence that enhances diagnosis, medical image analysis, knowledge sharing, and education. The data, with a redistribution license, is publicly available at https://github.com/OpenGVLab/SAM-Med2D.