
    Vertical Stratification of Sediment Microbial Communities Along Geochemical Gradients of a Subterranean Estuary Located at the Gloucester Beach of Virginia, United States

    Subterranean estuaries (STEs) have been recognized as important ecosystems for the exchange of materials between land and sea, but the microbial players in their biogeochemical processes have not been well examined. In this study, we investigated the bacterial and archaeal communities within 10 cm depth intervals of a permeable sediment core (100 cm in length) collected from a STE located at Gloucester Point (GP-STE), VA, United States. High-throughput sequencing of 16S rRNA genes and subsequent bioinformatics analyses were conducted to examine the composition, diversity, and potential functions of the sediment communities. The community composition varied significantly from the surface to a depth of 100 cm, with up to 13,000 operational taxonomic units (OTUs) defined at 97% sequence identity. More than 95% of the sequences consisted of bacterial OTUs, while the relative abundance of archaea, dominated by Crenarchaea, gradually increased with sediment core depth. Along the redox gradients of the GP-STE, differential distributions of ammonia- and methane-oxidizing, denitrifying, and sulfate-reducing bacteria, as well as methanogenic archaea, were observed based on predicted microbial functions. The aerobic-anaerobic transition zone (AATZ) had the highest diversity and abundance of microorganisms, matching the predicted functional diversity. This indicates that the AATZ is a hotspot of biogeochemical processes in STEs. The physical and geochemical gradients at different depths have contributed to the vertical stratification of microbial community composition and function in the GP-STE.
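
    As a minimal illustration of the downstream analysis such 16S rRNA surveys enable (a sketch only; the file name, column layout, and depth labels are hypothetical, not from this study), per-depth relative abundances and Shannon diversity could be computed as:

        import numpy as np
        import pandas as pd

        # Hypothetical OTU count table: one row per 10 cm depth interval,
        # one column per OTU (97%-identity cluster).
        counts = pd.read_csv("gp_ste_otu_counts.csv", index_col="depth_cm")

        # Relative abundance of each OTU within its depth interval.
        rel_abund = counts.div(counts.sum(axis=1), axis=0)

        # Shannon diversity H' = -sum(p * ln p); a maximum at mid depths
        # would correspond to the AATZ diversity peak reported above.
        def shannon(p):
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())

        diversity = rel_abund.apply(shannon, axis=1)
        print(diversity.sort_values(ascending=False).head())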

    Improved bounds for the sunflower lemma

    A sunflower with $r$ petals is a collection of $r$ sets such that the intersection of each pair is equal to the intersection of all. Erd\H{o}s and Rado proved the sunflower lemma: for any fixed $r$, any family of sets of size $w$, with at least about $w^w$ sets, must contain a sunflower. The famous sunflower conjecture is that the bound on the number of sets can be improved to $c^w$ for some constant $c$. In this paper, we improve the bound to about $(\log w)^w$. In fact, we prove the result for a robust notion of sunflowers, for which the bound we obtain is tight up to lower-order terms.

    Comment: Revised preprint, added sections on applications and rainbow sunflowers
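
    For reference, a minimal formal statement of what the abstract describes (standard definitions; the $o(1)$ form of the new bound is our paraphrase of "about $(\log w)^w$" for fixed $r$):

        % Sets S_1, ..., S_r form a sunflower with core Y when every
        % pairwise intersection equals the common intersection:
        \[
        S_i \cap S_j \;=\; Y \;=\; \bigcap_{k=1}^{r} S_k
        \qquad \text{for all } i \neq j .
        \]
        % Erd\H{o}s--Rado sunflower lemma: any family $\mathcal{F}$ of
        % $w$-element sets with
        \[
        |\mathcal{F}| \;>\; w!\,(r-1)^{w} \;\approx\; w^{w}
        \]
        % contains a sunflower with $r$ petals; this paper lowers the
        % threshold, for fixed $r$, to
        \[
        |\mathcal{F}| \;\geq\; (\log w)^{w(1+o(1))} .
        \]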

    MotionBEV: Attention-Aware Online LiDAR Moving Object Segmentation with Bird's Eye View based Appearance and Motion Features

    Identifying moving objects is an essential capability for autonomous systems, as it provides critical information for pose estimation, navigation, collision avoidance, and static map construction. In this paper, we present MotionBEV, a fast and accurate framework for LiDAR moving object segmentation, which segments moving objects with appearance and motion features in the bird's eye view (BEV) domain. Our approach converts 3D LiDAR scans into a 2D polar BEV representation to improve computational efficiency. Specifically, we learn appearance features with a simplified PointNet and compute motion features from the height differences between consecutive frames of point clouds projected onto vertical columns in the polar BEV coordinate system. We employ a dual-branch network bridged by the Appearance-Motion Co-attention Module (AMCM) to adaptively fuse the spatio-temporal information from appearance and motion features. Our approach achieves state-of-the-art performance on the SemanticKITTI-MOS benchmark. Furthermore, to demonstrate the practical effectiveness of our method, we provide a LiDAR-MOS dataset recorded by a solid-state LiDAR, which features non-repetitive scanning patterns and a small field of view.
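
    The motion-feature construction lends itself to a compact sketch (a rough NumPy illustration under our own assumptions; the grid size, range limit, and the premise that consecutive scans are already ego-motion aligned are ours, not the paper's exact settings):

        import numpy as np

        def polar_bev_height(points, r_max=50.0, n_r=256, n_theta=256):
            """Max-height map over a polar BEV grid from an (N, 3) xyz cloud."""
            x, y, z = points[:, 0], points[:, 1], points[:, 2]
            r = np.hypot(x, y)
            theta = np.arctan2(y, x) + np.pi              # range [0, 2*pi]
            keep = r < r_max
            ri = (r[keep] / r_max * n_r).astype(int).clip(0, n_r - 1)
            ti = (theta[keep] / (2 * np.pi) * n_theta).astype(int).clip(0, n_theta - 1)
            h = np.full((n_r, n_theta), -np.inf)
            np.maximum.at(h, (ri, ti), z[keep])            # max z per polar cell
            return h

        def motion_feature(scan_t, scan_t1):
            """Per-cell height difference between consecutive, aligned scans."""
            h0, h1 = polar_bev_height(scan_t), polar_bev_height(scan_t1)
            valid = np.isfinite(h0) & np.isfinite(h1)      # cells hit in both
            diff = np.zeros_like(h0)
            diff[valid] = np.abs(h1[valid] - h0[valid])
            return diff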

    Joint Layout Analysis, Character Detection and Recognition for Historical Document Digitization

    In this paper, we propose an end-to-end trainable framework for restoring historical document content in the correct reading order. In this framework, two branches, the character branch and the layout branch, are added behind the feature extraction network. The character branch localizes individual characters in a document image and recognizes them simultaneously; a post-processing method then groups them into text lines. The layout branch, based on a fully convolutional network, outputs a binary mask. We then use the Hough transform for line detection on the binary mask and combine the character results with the layout information to restore the document content. The two branches can be trained in parallel and are easy to train. Furthermore, we propose a re-score mechanism to minimize recognition error. Experimental results on the extended Chinese historical document dataset MTHv2 demonstrate the effectiveness of the proposed framework.

    Comment: 6 pages, 6 figures
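
    The line-detection step on the layout branch's binary mask could look roughly as follows with OpenCV's probabilistic Hough transform (the thresholds and file name are illustrative assumptions, not the paper's settings):

        import cv2
        import numpy as np

        # Binary layout mask (uint8, values 0 or 255), e.g. the branch output.
        mask = cv2.imread("layout_mask.png", cv2.IMREAD_GRAYSCALE)

        # Probabilistic Hough transform: returns segments as (x1, y1, x2, y2).
        lines = cv2.HoughLinesP(
            mask,
            rho=1,                  # distance resolution in pixels
            theta=np.pi / 180,      # angular resolution in radians
            threshold=100,          # min accumulator votes for a line
            minLineLength=200,      # discard short segments
            maxLineGap=10,          # bridge small gaps along a line
        )
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                print((x1, y1), "->", (x2, y2))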

    Towards Robust Visual Information Extraction in Real World: New Dataset and Novel Solution

    Visual information extraction (VIE) has attracted considerable attention recently owing to its various advanced applications such as document understanding, automatic marking and intelligent education. Most existing works decouple this problem into several independent sub-tasks of text spotting (text detection and recognition) and information extraction, which completely ignores the high correlation among them during optimization. In this paper, we propose a robust visual information extraction system (VIES) towards real-world scenarios, which is a unified end-to-end trainable framework for simultaneous text detection, recognition and information extraction, taking a single document image as input and outputting the structured information. Specifically, the information extraction branch collects abundant visual and semantic representations from text spotting for multimodal feature fusion and, conversely, provides higher-level semantic clues to contribute to the optimization of text spotting. Moreover, regarding the shortage of public benchmarks, we construct a fully-annotated dataset called EPHOIE (https://github.com/HCIILAB/EPHOIE), which is the first Chinese benchmark for both text spotting and visual information extraction. EPHOIE consists of 1,494 images of examination paper heads with complex layouts and backgrounds, including a total of 15,771 Chinese handwritten or printed text instances. Compared with the state-of-the-art methods, our VIES shows significantly superior performance on the EPHOIE dataset and achieves a 9.01% F-score gain on the widely used SROIE dataset under the end-to-end scenario.

    Comment: 8 pages, 5 figures, to be published in AAAI 2021
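
    Purely as an illustration of fusing visual and semantic representations per text instance (the module, dimensions, and gating mechanism below are our own sketch, not the actual VIES architecture):

        import torch
        import torch.nn as nn

        class VisualSemanticFusion(nn.Module):
            """Illustrative gated fusion of per-text-instance features."""
            def __init__(self, dim=256):
                super().__init__()
                self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
                self.proj = nn.Linear(2 * dim, dim)

            def forward(self, visual, semantic):
                # visual, semantic: (num_text_instances, dim) features
                joint = torch.cat([visual, semantic], dim=-1)
                g = self.gate(joint)          # per-channel trust in the visual cue
                return g * visual + (1 - g) * self.proj(joint)

        # Usage: fuse hypothetical features for 8 detected text instances.
        fused = VisualSemanticFusion()(torch.randn(8, 256), torch.randn(8, 256))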