85 research outputs found

    Productivity Prediction Approach of Complex Tight Gas Reservoir in Yingtai Area

    Productivity is central to the evaluation of undeveloped reserves, and productivity prediction underpins oilfield plan deployment, development planning, dynamic analysis, production allocation for oil and gas wells, and adjustment of development plans. Taking the tight gas reservoir in the Yingtai area of Jilin Oilfield as an example, and considering the internal factors governing productivity prediction together with the region's lithology of volcanic and clastic rock, this paper proposes two combination parameters for predicting productivity. The prediction results of the two methods are compared and analyzed. Verification on two wells shows that the "quasi-formation coefficient" gives higher precision, with an average relative error of 4%. The approach offers a reference for productivity prediction in gas reservoirs of the same type with similar geologic conditions. Key words: Tight gas reservoir; Quasi-formation coefficient; Gas well; New well productivity; Prediction
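    To make the verification step concrete, the following is a minimal sketch (not taken from the paper) of how a combination-parameter correlation could be fitted and then checked against test wells: a linear fit of tested gas rate against a combined reservoir parameter, followed by the average relative error on hold-out wells. The choice of permeability times net pay as the combined parameter, and all numbers, are illustrative assumptions; the abstract does not define the "quasi-formation coefficient".

```python
# Hypothetical sketch of the verification step: fit a linear correlation
# between a combined reservoir parameter and tested productivity, then
# report the average relative error on two hold-out wells.
import numpy as np

# hypothetical training wells: combined parameter vs. tested gas rate
quasi_formation_coeff = np.array([1.2, 2.5, 3.1, 4.8, 6.0])   # assumed K*h, md*m
tested_rate = np.array([0.9, 1.8, 2.3, 3.6, 4.5])             # assumed 10^4 m^3/d

# least-squares fit of rate = a * coefficient + b
a, b = np.polyfit(quasi_formation_coeff, tested_rate, deg=1)

# two hypothetical verification wells, mirroring the abstract's setup
verify_coeff = np.array([2.0, 5.0])
verify_rate = np.array([1.5, 3.8])

predicted = a * verify_coeff + b
relative_error = np.abs(predicted - verify_rate) / verify_rate
print(f"average relative error: {relative_error.mean():.1%}")
```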

    TextNet: Irregular Text Reading from Images with an End-to-End Trainable Network

    Reading text from images remains challenging due to multi-orientation, perspective distortion, and especially the curved nature of irregular text. Most existing approaches attempt to solve the problem in two or more stages, which is considered a bottleneck for optimizing overall performance. To address this issue, we propose an end-to-end trainable network architecture, named TextNet, which is able to simultaneously localize and recognize irregular text in images. Specifically, we develop a scale-aware attention mechanism to learn multi-scale image features as a backbone network, sharing fully convolutional features and computation between localization and recognition. In the text detection branch, we directly generate text proposals as quadrangles, covering oriented, perspective, and curved text regions. To preserve text features for recognition, we introduce a perspective RoI transform layer, which aligns quadrangle proposals into small feature maps. Furthermore, to extract effective features for recognition, we encode the aligned RoI features into context information with an RNN, combined with a spatial attention mechanism to generate text sequences. This overall pipeline handles both regular and irregular cases. Finally, text localization and recognition are jointly trained end-to-end with a designed multi-task loss. Experiments on standard benchmarks show that the proposed TextNet achieves state-of-the-art performance and outperforms existing approaches on irregular datasets by a large margin. Comment: Asian Conference on Computer Vision, 2018, oral presentation.
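    As a rough illustration of the perspective RoI transform idea described above (warping a quadrangle proposal into a small axis-aligned map for the recognition branch), here is a hedged sketch using OpenCV. It operates on a single-channel array for clarity, whereas TextNet applies the analogous warp to shared convolutional feature maps; the quadrangle coordinates and the 8x32 output size are illustrative assumptions.

```python
# Warp an arbitrary quadrangle region into a small axis-aligned map so a
# recognition head can consume it. Simplified stand-in for a feature-level
# perspective RoI transform.
import cv2
import numpy as np

def perspective_roi(feature_map: np.ndarray,
                    quad: np.ndarray,
                    out_h: int = 8,
                    out_w: int = 32) -> np.ndarray:
    """Warp a quadrangle proposal (4x2, clockwise from top-left) to out_h x out_w."""
    dst = np.array([[0, 0], [out_w - 1, 0],
                    [out_w - 1, out_h - 1], [0, out_h - 1]], dtype=np.float32)
    homography = cv2.getPerspectiveTransform(quad.astype(np.float32), dst)
    return cv2.warpPerspective(feature_map, homography, (out_w, out_h))

# toy single-channel "feature map" and a tilted quadrangle proposal
fmap = np.random.rand(64, 128).astype(np.float32)
quad = np.array([[20, 10], [90, 18], [88, 34], [18, 26]], dtype=np.float32)
aligned = perspective_roi(fmap, quad)
print(aligned.shape)  # (8, 32)
```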

    PGNet: Real-time Arbitrarily-Shaped Text Spotting with Point Gathering Network

    The reading of arbitrarily-shaped text has received increasing research attention. However, existing text spotters are mostly built on two-stage frameworks or character-based methods, which suffer from Non-Maximum Suppression (NMS), Region-of-Interest (RoI) operations, or character-level annotations. To address these problems, we propose a novel fully convolutional Point Gathering Network (PGNet) for reading arbitrarily-shaped text in real time. PGNet is a single-shot text spotter in which the pixel-level character classification map is learned with the proposed PG-CTC loss, avoiding the use of character-level annotations. With a PG-CTC decoder, we gather high-level character classification vectors from two-dimensional space and decode them into text symbols without NMS or RoI operations, which guarantees high efficiency. Additionally, by reasoning about the relations between each character and its neighbors, a graph refinement module (GRM) is proposed to optimize the coarse recognition and improve end-to-end performance. Experiments show that the proposed method achieves competitive accuracy while significantly improving running speed. In particular, on Total-Text it runs at 46.7 FPS, surpassing previous spotters by a large margin. Comment: 10 pages, 8 figures, AAAI 2021.
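    The point-gathering and CTC-style decoding idea can be sketched in a few lines: sample the pixel-level character classification map along a predicted text center line, take the greedy label at each point, then collapse repeats and drop the blank. The alphabet, blank index, and center-line points below are illustrative assumptions, not the paper's exact interface.

```python
# Gather per-pixel character classification vectors along a text center line,
# then decode them greedily with a CTC-style collapse (no NMS, no RoI ops).
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
BLANK = len(ALPHABET)  # last class index reserved for the CTC blank (assumed)

def pg_ctc_decode(char_map: np.ndarray, center_points: np.ndarray) -> str:
    """char_map: (H, W, C) per-pixel class scores; center_points: (N, 2) as (y, x)."""
    # gather classification vectors at the sampled center-line points
    gathered = char_map[center_points[:, 0], center_points[:, 1]]   # (N, C)
    best = gathered.argmax(axis=1)                                   # greedy label per point
    # CTC-style collapse: merge consecutive duplicates, then remove blanks
    collapsed = [int(c) for i, c in enumerate(best)
                 if (i == 0 or c != best[i - 1]) and c != BLANK]
    return "".join(ALPHABET[c] for c in collapsed)

# toy example: a 16x64 map with 27 classes and a straight center line
rng = np.random.default_rng(0)
char_map = rng.random((16, 64, len(ALPHABET) + 1)).astype(np.float32)
center_points = np.stack([np.full(20, 8), np.linspace(4, 60, 20).astype(int)], axis=1)
print(pg_ctc_decode(char_map, center_points))
```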

    MaskOCR: Text Recognition with Masked Encoder-Decoder Pretraining

    Text images contain both visual and linguistic information. However, existing pre-training techniques for text recognition focus mainly on either visual representation learning or linguistic knowledge learning. In this paper, we propose a novel approach, MaskOCR, to unify vision and language pre-training in the classical encoder-decoder recognition framework. We adopt the masked image modeling approach to pre-train the feature encoder on a large set of unlabeled real text images, which allows us to learn strong visual representations. Instead of introducing linguistic knowledge through an additional language model, we directly pre-train the sequence decoder. Specifically, we transform text data into synthesized text images to unify the data modalities of vision and language, and enhance the language modeling capability of the sequence decoder with a proposed masked image-language modeling scheme. Notably, the encoder is frozen during the pre-training phase of the sequence decoder. Experimental results demonstrate that the proposed method achieves superior performance on benchmark datasets, including Chinese and English text images.
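    A minimal PyTorch sketch of the training schedule described above, under heavy simplification: phase 1 pre-trains the visual encoder with a masked image modeling objective, and phase 2 freezes that encoder while the sequence decoder is pre-trained on synthesized text images. The toy encoder and decoder modules, shapes, and optimizer settings are illustrative assumptions, and the masking and reconstruction objectives are only indicated in comments; this is not MaskOCR's actual architecture.

```python
import torch
import torch.nn as nn

# toy stand-ins for the feature encoder and sequence decoder
encoder = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d((1, 32)))          # (B, 64, 1, 32)
decoder = nn.GRU(input_size=64, hidden_size=128, batch_first=True)

# --- phase 1: masked image modeling on unlabeled real text images ---------
# (mask a fraction of patches and reconstruct them; the reconstruction head
#  and masking logic are omitted in this sketch)
phase1_optim = torch.optim.AdamW(encoder.parameters(), lr=1e-4)

# --- phase 2: pre-train the sequence decoder on synthesized text images ---
# the encoder is frozen here, as stated in the abstract
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()
phase2_optim = torch.optim.AdamW(decoder.parameters(), lr=1e-4)

images = torch.randn(2, 3, 32, 128)                       # toy synthesized text images
with torch.no_grad():                                      # frozen encoder forward
    feats = encoder(images).squeeze(2).transpose(1, 2)     # (B, 32, 64)
seq_features, _ = decoder(feats)                           # features for a recognition head
print(seq_features.shape)                                  # torch.Size([2, 32, 128])
```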

    AMP-EBiLSTM: employing novel deep learning strategies for the accurate prediction of antimicrobial peptides

    Antimicrobial peptides are present ubiquitously in intra- and extra-biological environments and display considerable antibacterial and antifungal activity. Clinically, they have shown good antibacterial effects in the treatment of diabetic foot and its complications. However, the discovery and screening of antimicrobial peptides rely primarily on wet-lab experiments, which are inefficient. This study aims to create a precise and efficient method for predicting antimicrobial peptides by incorporating novel machine learning technologies. We propose a deep learning strategy named AMP-EBiLSTM to predict them accurately, and compare its performance with ensemble learning and baseline models. We use Binary Profile Feature (BPF) and Pseudo Amino Acid Composition (PSEAAC) for effective local sequence capture and amino acid information extraction, respectively, in the deep learning and ensemble learning models. Each model was cross-validated and tested externally and independently. The results demonstrate that the Enhanced Bi-directional Long Short-Term Memory (EBiLSTM) deep learning model outperformed the others, with an accuracy of 92.39% and an AUC of 0.9771 on the test set. The ensemble learning models, on the other hand, were cost-effective in terms of training time on a T4 server with 16 GB of GPU memory and 8 vCPUs, with training durations ranging from 0 to 30 s. The proposed strategy is therefore expected to enable more accurate prediction of antimicrobial peptides in the future.
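    To make the two named ingredients concrete, here is a hedged sketch of a Binary Profile Feature (one-hot) encoding of the first residues of a peptide feeding a bidirectional LSTM binary classifier. The window length, hidden size, classification head, and example sequence are illustrative assumptions rather than the paper's exact EBiLSTM configuration.

```python
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def binary_profile_feature(seq: str, window: int = 20) -> torch.Tensor:
    """One-hot encode the first `window` residues (zero-padded) -> (window, 20)."""
    bpf = torch.zeros(window, len(AMINO_ACIDS))
    for i, aa in enumerate(seq[:window]):
        if aa in AMINO_ACIDS:
            bpf[i, AMINO_ACIDS.index(aa)] = 1.0
    return bpf

class BiLSTMClassifier(nn.Module):
    """Bidirectional LSTM over the residue-level one-hot profile, binary output."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(len(AMINO_ACIDS), hidden, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                                  # x: (B, window, 20)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))        # probability of "antimicrobial"

model = BiLSTMClassifier()
x = binary_profile_feature("GIGKFLHSAKKFGKAFVGEIMNS").unsqueeze(0)  # toy peptide sequence
print(model(x).item())
```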

    Description of the final-instar larva and pupa of Acanthacorydalis orientalis (McLachlan, 1899) (Megaloptera: Corydalidae) with some life history notes

    Cao, Chengquan, Liu, Xingyue (2013): Description of the final-instar larva and pupa of Acanthacorydalis orientalis (McLachlan, 1899) (Megaloptera: Corydalidae) with some life history notes. Zootaxa 3691 (1): 145-152, DOI: 10.11646/zootaxa.3691.1.