Acoustic impedance inversion in coal strata using the priori constraint-based TCN-BiGRU method
Acoustic impedance inversion is a key technique in the seismic exploration of coalfields, as it can determine subsurface lithological changes and coal seam distribution. Traditional methods are highly subjective, generalize poorly, and make interpretation time- and labor-consuming. Owing to the powerful nonlinear mapping and feature extraction capabilities of neural networks, deep learning has demonstrated potential in geophysical exploration. To predict acoustic impedance accurately and efficiently, this study proposes using the initial geological model as an a priori constraint during training. The low-frequency feature extraction capability of a bidirectional gated recurrent unit (BiGRU) network and the high-frequency feature extraction capability of a temporal convolutional network (TCN) are combined to establish a new acoustic impedance inversion method for coal strata with a priori constraint data. The TCN-BiGRU method was applied to data from the Xinjing Mining Area in Shanxi Province, northern China. The results showed good precision, accurately predicting the distribution and thickness variation of local coal seams. Compared with the traditional model-based method and the plain TCN-BiGRU method, the proposed a priori constraint-based TCN-BiGRU network has better feature expression capability and provides more detailed coal seam information. In conclusion, the new method can improve the accuracy of acoustic impedance inversion, which is of great significance for coalfield seismic exploration.
Document Type: Original article
Cited as: Shi, S., Qi, Y., Chang, W., Li, L., Yao, X., Shi, J. Acoustic impedance inversion in coal strata using the priori constraint-based TCN-BiGRU method. Advances in Geo-Energy Research, 2023, 9(1): 13-24. https://doi.org/10.46690/ager.2023.07.0
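The role of the a priori constraint can be sketched in a few lines. This is a hypothetical simplification, not the authors' code: it assumes the low-frequency initial geological model supplies the trend that seismic data alone cannot resolve, while the network contributes the high-frequency residual.

```python
# Minimal sketch (hypothetical, not the paper's implementation): the network
# predicts a high-frequency residual that is added to a low-frequency initial
# geological model acting as the a priori constraint on acoustic impedance.

def lowpass(trace, window=5):
    """Crude moving-average low-pass filter standing in for the initial model."""
    half = window // 2
    out = []
    for i in range(len(trace)):
        seg = trace[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def constrained_inversion(network_residual, prior_model):
    """Combine the network's high-frequency estimate with the prior constraint."""
    return [p + r for p, r in zip(prior_model, network_residual)]

# Synthetic trace with a coal-seam impedance contrast (illustrative values)
true_impedance = [5.0, 5.2, 5.1, 8.4, 8.6, 8.5, 5.3, 5.2]
prior = lowpass(true_impedance)                            # low-frequency trend
residual = [t - p for t, p in zip(true_impedance, prior)]  # what the network would learn
recovered = constrained_inversion(residual, prior)
```

In this toy setting the residual is exact, so the reconstruction matches the true trace; in practice the network only approximates it, and the prior keeps the low-frequency background physically plausible.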
The natural history of EGFR and EGFRvIII in glioblastoma patients
BACKGROUND: The epidermal growth factor receptor (EGFR) is overexpressed in approximately 50–60% of glioblastoma (GBM) tumors, and the most common EGFR mutant, EGFRvIII, is expressed in 24–67% of cases. This study was designed to address whether overexpressed EGFR or EGFRvIII is an independent prognostic indicator of overall survival in a uniform body of patients in whom gross total surgical resection (GTR; ≥ 95% resection) was not attempted or achieved. METHODS: Biopsied or partially/subtotally resected GBM patients (N = 54) underwent adjuvant conformal radiation and chemotherapy. Their EGFR and EGFRvIII status was determined by immunohistochemistry, and Kaplan-Meier estimates of overall survival were obtained. RESULTS: In our study of GBM patients with less than GTR, 42.6% (n = 23) failed to express EGFR, 25.9% (n = 14) had overexpression of the wild-type EGFR only, and 31.5% (n = 17) expressed EGFRvIII. Patients in the groups expressing EGFR, EGFRvIII, or lacking EGFR expression did not differ in age, Karnofsky Performance Scale (KPS) score, or extent of tumor resection, and all had received postoperative radiation and chemotherapy. The median overall survival times for patients with tumors having no EGFR expression, overexpressed EGFR only, or EGFRvIII were 12.3 (95% CI, 8.04–16.56), 11.03 (95% CI, 10.18–11.89) and 14.07 (95% CI, 7.39–20.74) months, respectively (log-rank test, p > 0.05). Patients with tumors that overexpressed EGFR or EGFRvIII were more likely to present with ependymal spread (21.4% and 35.3%, respectively) than patients whose GBM failed to express either marker (13.0%), although the difference was not statistically significant. There was no significant difference in multifocal disease or gliomatosis cerebri among EGFR expression groups.
CONCLUSION: Overexpressed wild-type EGFR and EGFRvIII are not independent predictors of median overall survival in this cohort of patients who did not undergo extensive tumor resection.
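The Kaplan-Meier estimates underlying the reported medians can be illustrated with a minimal sketch. This is illustrative only, using hypothetical toy data rather than the cohort's: the estimator multiplies, at each event time, the fraction of at-risk patients who survive that time.

```python
# Minimal sketch (illustrative, not the study's analysis): a Kaplan-Meier
# estimator of the survival function from (time, event) pairs, where event=1
# denotes death and event=0 denotes censoring.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct death time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        n_with_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= n_with_t
        i += n_with_t
    return curve

# Hypothetical follow-up times in months (NOT the cohort's data)
times = [3, 6, 6, 9, 12, 15, 18, 24]
events = [1, 1, 0, 1, 1, 0, 1, 1]
curve = kaplan_meier(times, events)
median_survival = next(t for t, s in curve if s <= 0.5)
```

The median overall survival is then read off as the first time at which the estimated survival probability drops to 0.5 or below.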
EMScore: Evaluating Video Captioning via Coarse-Grained and Fine-Grained Embedding Matching
Current metrics for video captioning are mostly based on the text-level
comparison between reference and candidate captions. However, they have some
insuperable drawbacks, e.g., they cannot handle videos without references, and
they may result in biased evaluation due to the one-to-many nature of
video-to-text and the neglect of visual relevance. From the human evaluator's
viewpoint, a high-quality caption should be consistent with the provided video,
but need not be literally or semantically similar to the reference.
Inspired by human evaluation, we propose EMScore (Embedding Matching-based
score), a novel reference-free metric for video captioning, which directly
measures the similarity between the video and candidate captions. Benefiting
from recent advances in large-scale pre-training models, we exploit a well
pre-trained vision-language model to extract visual and linguistic embeddings
for computing EMScore. Specifically, EMScore combines matching scores of both
coarse-grained (video and caption) and fine-grained (frames and words) levels,
which takes the overall understanding and detailed characteristics of the video
into account. Furthermore, considering the potential information gain, EMScore
can be flexibly extended to the conditions where human-labeled references are
available. Last but not least, we collect VATEX-EVAL and ActivityNet-FOIL
datasets to systematically evaluate the existing metrics. VATEX-EVAL
experiments demonstrate that EMScore has higher human correlation and lower
reference dependency. ActivityNet-FOIL experiment verifies that EMScore can
effectively identify "hallucinating" captions. The datasets will be released to
facilitate the development of video captioning metrics. The code is available
at: https://github.com/ShiYaya/emscore.
Comment: cvpr202
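The two-level matching idea can be sketched concretely. The following is an assumption about the general shape of such a metric, not the official implementation from the repository above: a coarse score compares whole-video and whole-caption embeddings, and a fine score greedily matches frames to words, combined via an F1-style average.

```python
# Minimal sketch (hypothetical shape of an embedding-matching metric, not the
# official EMScore code): coarse-grained (video vs caption) plus fine-grained
# (frames vs words) cosine-similarity matching.
import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def em_score(video_emb, caption_emb, frame_embs, word_embs):
    # Coarse-grained: global video embedding vs global caption embedding
    coarse = cos(video_emb, caption_emb)
    # Fine-grained: greedy matching, each word takes its best frame and vice versa
    precision = sum(max(cos(w, f) for f in frame_embs) for w in word_embs) / len(word_embs)
    recall = sum(max(cos(f, w) for w in word_embs) for f in frame_embs) / len(frame_embs)
    fine = 2 * precision * recall / (precision + recall)
    return (coarse + fine) / 2

# Toy 2-D embeddings: a perfectly matched video/caption pair scores 1.0
score = em_score([1.0, 0.0], [1.0, 0.0],
                 [[1.0, 0.0], [0.0, 1.0]],
                 [[1.0, 0.0], [0.0, 1.0]])
```

In the real metric the embeddings would come from a pre-trained vision-language model (e.g. a CLIP-style encoder) rather than toy vectors.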
Brain-Controlled Multi-Robot at Servo-Control Level Based on Nonlinear Model Predictive Control
Using a brain-computer interface (BCI) rather than limbs to control multiple robots (i.e., brain-controlled multi-robots) can better assist people with disabilities in daily life than a brain-controlled single robot. For example, a person with disabilities can move using a brain-controlled wheelchair (leader robot) while follower robots simultaneously transport objects. In this paper, we explore for the first time how to control the direction, speed, and formation of a brain-controlled multi-robot system (consisting of leader and follower robots), and propose a novel multi-robot predictive control framework (MRPCF) that can track users' control intents and ensure the safety of multiple robots. The MRPCF consists of a leader controller, a follower controller, and a formation planner. We build a complete brain-controlled multi-robot physical system for the first time and test the proposed system through human-in-the-loop experiments. The experimental results indicate that the proposed system can track users' direction, speed, and formation control intents while guaranteeing the safety of multiple robots. This work can promote the study of brain-controlled robots and multi-robot systems and offers novel perspectives on human-machine collaboration and integration.
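The formation-planning geometry behind a leader-follower system can be illustrated briefly. This is an illustrative assumption, not the paper's MRPCF: it only shows how a follower's target position, defined as an offset in the leader's body frame, rotates with the leader's heading.

```python
# Minimal sketch (illustrative assumption, not the MRPCF controller): the
# follower's desired position is a body-frame offset from the leader, rotated
# into the world frame by the leader's heading angle.
import math

def follower_target(leader_x, leader_y, leader_theta, offset_forward, offset_left):
    """World-frame target for a follower holding a body-frame formation offset."""
    dx = offset_forward * math.cos(leader_theta) - offset_left * math.sin(leader_theta)
    dy = offset_forward * math.sin(leader_theta) + offset_left * math.cos(leader_theta)
    return leader_x + dx, leader_y + dy

# Leader at the origin heading along +x; follower sits 1 m behind, 0.5 m left.
tx, ty = follower_target(0.0, 0.0, 0.0, -1.0, 0.5)
```

A predictive controller such as the one proposed would then optimize the follower's inputs over a horizon to track this target while enforcing safety constraints.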
A Roadmap for Lowering Crop Nitrogen Requirement.
Increasing nitrogen fertilizer applications sustained a growing world population in the 20th century. However, to avoid further associated environmental damage, new sustainable agronomic practices together with new cultivars must be developed. To date, the concept of nitrogen use efficiency (NUE) has been useful in quantifying the processes of nitrogen uptake and utilization, but we propose a shift in focus to nitrogen responsiveness as a more appropriate trait for selecting varieties with lower nitrogen requirements. We provide a roadmap for integrating the regulation of nitrogen uptake and assimilation into varietal selection and crop breeding programs. The overall goal is to reduce nitrogen inputs by farmers growing crops in contrasting cropping systems around the world, while sustaining yields and reducing greenhouse gas (GHG) emissions.
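The distinction between the two traits can be made concrete with simple working definitions. These formulas are illustrative assumptions, not the authors' formal trait definitions: NUE as yield per unit of nitrogen supplied, and nitrogen responsiveness as the extra yield gained per unit of fertilizer relative to an unfertilized control.

```python
# Minimal sketch (hypothetical working definitions, not the article's formal
# trait definitions): nitrogen use efficiency vs nitrogen responsiveness.

def nue(yield_kg_ha, n_supplied_kg_ha):
    """Nitrogen use efficiency: grain yield per unit nitrogen supplied."""
    return yield_kg_ha / n_supplied_kg_ha

def n_responsiveness(yield_fertilized, yield_control, n_applied_kg_ha):
    """Yield gained per extra kg of applied N, relative to an unfertilized plot."""
    return (yield_fertilized - yield_control) / n_applied_kg_ha

# Hypothetical plot data: 8 t/ha with 150 kg N/ha, 5 t/ha unfertilized
nue_val = nue(8000, 150)
resp = n_responsiveness(8000, 5000, 150)
```

Under these definitions, a variety with high NUE on low-input plots but a flat response to added fertilizer is exactly the kind of low-requirement candidate the roadmap argues breeders should select for.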
Semantics-enhanced Cross-modal Masked Image Modeling for Vision-Language Pre-training
In vision-language pre-training (VLP), masked image modeling (MIM) has
recently been introduced for fine-grained cross-modal alignment. However, in
most existing methods, the reconstruction targets for MIM lack high-level
semantics, and text is not sufficiently involved in masked modeling. These two
drawbacks limit the effect of MIM in facilitating cross-modal semantic
alignment. In this work, we propose a semantics-enhanced cross-modal MIM
framework (SemMIM) for vision-language representation learning. Specifically,
to provide more semantically meaningful supervision for MIM, we propose a local
semantics enhancing approach, which harvests high-level semantics from global
image features via self-supervised agreement learning and transfers them to
local patch encodings by sharing the encoding space. Moreover, to achieve deep
involvement of text during the entire MIM process, we propose a text-guided
masking strategy and devise an efficient way of injecting textual information
in both masked modeling and reconstruction target acquisition. Experimental
results validate that our method improves the effectiveness of the MIM task in
facilitating cross-modal semantic alignment. Compared to previous VLP models
with similar model size and data scale, our SemMIM model achieves
state-of-the-art or competitive performance on multiple downstream
vision-language tasks.
Comment: Accepted to LREC-COLING 202
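The text-guided masking strategy can be sketched in miniature. This is a hypothetical reading of the idea, not SemMIM's code: patches are ranked by similarity to the caption embedding, and the most text-relevant ones are masked so the model must reconstruct them from cross-modal context.

```python
# Minimal sketch (hypothetical interpretation, not SemMIM's implementation):
# text-guided masking selects the image patches most similar to the text
# embedding as the ones to mask.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def text_guided_mask(patch_embs, text_emb, mask_ratio=0.5):
    """Return indices of patches to mask, most text-relevant first."""
    ranked = sorted(range(len(patch_embs)),
                    key=lambda i: cosine(patch_embs[i], text_emb),
                    reverse=True)
    k = max(1, int(len(patch_embs) * mask_ratio))
    return ranked[:k]

# Toy 2-D patch embeddings and a text embedding aligned with patch 0
patches = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-1.0, 0.0]]
text = [1.0, 0.0]
masked = text_guided_mask(patches, text)
```

Masking the text-relevant patches (rather than random ones) is what forces the reconstruction task to exercise cross-modal alignment instead of purely visual inpainting.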