
    DEM study on the segregation of a non‐spherical intruder in a vibrated granular bed

    The segregation of a single large intruder in a vibrated bed of small particles has been widely studied, but most previous studies focused on spherical intruders. In this work, the discrete element method was used to study the effects of vibration conditions and intruder shape on the dimensionless ascending velocity (va) of the intruder. The intruder was prolate, with its aspect ratio varied and its equivalent diameter fixed. Three equivalent diameters were used: the volume-equivalent diameter, the surface-area-equivalent diameter, and the Sauter diameter. It was found that va first increases and then decreases as the dimensionless vibration amplitude (Ad) and the dimensionless vibration frequency (fd) rise, and that va increases as the sphericity of the intruder (Φ) decreases. Moreover, the porosity variation in the vibrated bed and the granular temperature were analyzed and linked to the changes in va. It was further found that va can be uniformly correlated to Ad·fd^0.5, while the critical change in the response of va to Ad and fd occurs at Γ = 4.83, where Γ is the vibration intensity. Based on these findings, a piecewise equation was proposed to predict va as a function of Ad, fd, and Φ.
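
    The reported correlation lends itself to a compact implementation. The sketch below illustrates only the structure implied by the abstract: va is driven by the combined variable Ad·fd^0.5, the response switches regime at Γ = 4.83, and va grows as the sphericity Φ falls. The coefficients and functional forms are placeholders, not the paper's fitted piecewise equation.

```python
import numpy as np

def ascending_velocity(A_d, f_d, phi, gamma, coeffs=(1.0, 0.8, 0.9, 0.5)):
    """Hypothetical piecewise predictor for the dimensionless ascending
    velocity v_a. Structure only: v_a is driven by A_d * f_d**0.5, the
    regime switches at Gamma = 4.83, and v_a rises as sphericity falls.
    The coefficients are illustrative, not the paper's fitted values."""
    x = A_d * np.sqrt(f_d)            # combined vibration variable A_d * f_d^0.5
    c1, c2, c3, c4 = coeffs
    shape_factor = (1.0 / phi) ** c4  # lower sphericity -> larger v_a
    if gamma < 4.83:                  # sub-critical vibration intensity
        return c1 * x * shape_factor
    return c2 * x ** c3 * shape_factor  # super-critical regime

print(ascending_velocity(A_d=0.5, f_d=2.0, phi=0.8, gamma=3.0))
```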

    Voxel or Pillar: Exploring Efficient Point Cloud Representation for 3D Object Detection

    Efficient representation of point clouds is fundamental for LiDAR-based 3D object detection. While recent grid-based detectors often encode point clouds into either voxels or pillars, the distinctions between these approaches remain underexplored. In this paper, we quantify the differences between the current encoding paradigms and highlight the limited vertical learning within them. To tackle these limitations, we introduce a hybrid Voxel-Pillar Fusion network (VPF), which synergistically combines the unique strengths of both voxels and pillars. Specifically, we first develop a sparse voxel-pillar encoder that encodes point clouds into voxel and pillar features through 3D and 2D sparse convolutions, respectively, and then introduce the Sparse Fusion Layer (SFL), which facilitates bidirectional interaction between sparse voxel and pillar features. Our efficient, fully sparse method can be seamlessly integrated into both dense and sparse detectors. Leveraging this powerful yet straightforward framework, VPF delivers competitive performance, achieving real-time inference speeds on nuScenes and the Waymo Open Dataset. The code will be available. Comment: Accepted by AAAI-202
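
    To make the contrast between the two encodings concrete, the sketch below groups a raw point cloud into voxels (3D cells) and pillars (2D columns, i.e., voxels with the vertical axis collapsed). It covers only the grouping step, not the paper's sparse convolutional encoders or the Sparse Fusion Layer; the grid sizes are arbitrary.

```python
import numpy as np

def grid_indices(points, cell, collapse_z=False):
    """Assign each point to a grid cell: 3D cells give voxels,
    collapsing the z axis gives pillars (2D columns)."""
    idx = np.floor(points[:, :3] / cell).astype(np.int64)
    if collapse_z:
        idx = idx[:, :2]  # pillar: keep only (x, y)
    return idx

points = np.random.rand(1000, 3) * np.array([70.0, 70.0, 4.0])  # toy LiDAR sweep
voxel_ids = grid_indices(points, cell=np.array([0.1, 0.1, 0.2]))
pillar_ids = grid_indices(points, cell=np.array([0.2, 0.2, 4.0]), collapse_z=True)

# Voxels retain vertical structure; pillars trade it for cheaper 2D processing,
# which is the limited "vertical learning" the paper quantifies.
print(len(np.unique(voxel_ids, axis=0)), len(np.unique(pillar_ids, axis=0)))
```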

    Observer and Command-Filter-Based Adaptive Fuzzy Output Feedback Control of Uncertain Nonlinear Systems


    Third-Party Aligner for Neural Word Alignments

    Word alignment is the task of finding translationally equivalent words between source and target sentences. Previous work has demonstrated that self-training can achieve competitive word alignment results. In this paper, we propose to use word alignments generated by a third-party word aligner to supervise neural word alignment training. Specifically, the source word and target word of each word pair aligned by the third-party aligner are trained to be close neighbors in the contextualized embedding space when fine-tuning a pre-trained cross-lingual language model. Experiments on benchmarks covering various language pairs show that our approach can, surprisingly, self-correct the third-party supervision by finding more accurate word alignments and deleting wrong ones, leading to better performance than various third-party word aligners, including the current best one. When we integrate the supervision from all third-party aligners, we achieve state-of-the-art word alignment performance, with alignment error rates on average more than two points lower than those of the best third-party aligner. We released our code at https://github.com/sdongchuanqi/Third-Party-Supervised-Aligner. Comment: 12 pages, 4 figures, findings of EMNLP 202
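
    The supervision signal is simple to state: for each word pair produced by the third-party aligner, pull the two contextualized embeddings together during fine-tuning. Below is a minimal sketch of such an objective over precomputed embeddings, using cosine similarity; the paper's actual loss may differ.

```python
import torch
import torch.nn.functional as F

def alignment_loss(src_emb, tgt_emb, pairs):
    """Encourage aligned source/target tokens to be close neighbors.
    src_emb: (S, d), tgt_emb: (T, d) contextualized embeddings;
    pairs: list of (src_index, tgt_index) from the third-party aligner."""
    s = src_emb[[i for i, _ in pairs]]
    t = tgt_emb[[j for _, j in pairs]]
    # Maximize cosine similarity of each aligned pair.
    return (1.0 - F.cosine_similarity(s, t, dim=-1)).mean()

src = torch.randn(7, 768, requires_grad=True)  # toy embeddings
tgt = torch.randn(9, 768, requires_grad=True)
loss = alignment_loss(src, tgt, pairs=[(0, 0), (2, 3), (5, 8)])
loss.backward()
```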

    Polyp-PVT: Polyp Segmentation with Pyramid Vision Transformers

    Most polyp segmentation methods use CNNs as their backbone, leading to two key issues when exchanging information between the encoder and decoder: 1) accounting for the differences in contribution between different-level features; and 2) designing an effective mechanism for fusing these features. Unlike existing CNN-based methods, we adopt a transformer encoder, which learns more powerful and robust representations. In addition, considering the influence of image acquisition and the elusive properties of polyps, we introduce three novel modules: a cascaded fusion module (CFM), a camouflage identification module (CIM), and a similarity aggregation module (SAM). Among these, the CFM collects the semantic and location information of polyps from high-level features, while the CIM captures polyp information disguised in low-level features. With the help of the SAM, we extend the pixel features of the polyp area with high-level semantic position information to the entire polyp area, thereby effectively fusing cross-level features. The proposed model, named Polyp-PVT, effectively suppresses noise in the features and significantly improves their expressive capability. Extensive experiments on five widely adopted datasets show that the proposed model is more robust to various challenging situations (e.g., appearance changes, small objects) than existing methods and achieves new state-of-the-art performance. The proposed model is available at https://github.com/DengPingFan/Polyp-PVT. Comment: Technical Report
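
    The abstract fixes the data flow but not the module internals, so the skeleton below mirrors only the described wiring: a transformer encoder yields multi-level features, the CFM aggregates the high-level ones, the CIM mines the low-level ones, and the SAM fuses the two streams. Every block here is a placeholder standing in for the paper's actual designs.

```python
import torch
import torch.nn as nn

class PolypPVTSkeleton(nn.Module):
    """Structural sketch of the described pipeline; every block is a
    placeholder, not the paper's CFM/CIM/SAM implementations."""
    def __init__(self, dims=(64, 128, 320, 512)):
        super().__init__()
        self.cfm = nn.Conv2d(sum(dims[1:]), 64, 1)  # placeholder cascaded fusion
        self.cim = nn.Conv2d(dims[0], 64, 1)        # placeholder camouflage identification
        self.sam = nn.Conv2d(128, 1, 1)             # placeholder similarity aggregation

    def forward(self, feats):
        low, *high = feats                           # f1 (low) and f2..f4 (high)
        size = low.shape[-2:]
        high = [nn.functional.interpolate(h, size=size) for h in high]
        semantic = self.cfm(torch.cat(high, dim=1))  # semantics/location from high levels
        detail = self.cim(low)                       # polyp cues hidden in low levels
        return self.sam(torch.cat([semantic, detail], dim=1))  # fused prediction map

feats = [torch.randn(1, c, s, s) for c, s in zip((64, 128, 320, 512), (88, 44, 22, 11))]
print(PolypPVTSkeleton()(feats).shape)  # torch.Size([1, 1, 88, 88])
```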

    SimpleX: A Simple and Strong Baseline for Collaborative Filtering

    Collaborative filtering (CF) is a widely studied research topic in recommender systems. Learning a CF model generally depends on three major components: the interaction encoder, the loss function, and negative sampling. While many existing studies focus on designing more powerful interaction encoders, the impact of the loss function and the negative sampling ratio has not yet been well explored. In this work, we show that the choice of loss function and negative sampling ratio is equally important. More specifically, we propose the cosine contrastive loss (CCL) and incorporate it into a simple unified CF model, dubbed SimpleX. Extensive experiments were conducted on 11 benchmark datasets against a total of 29 existing CF models. Surprisingly, the results show that, with our CCL loss and a large negative sampling ratio, SimpleX can surpass most sophisticated state-of-the-art models by a large margin (e.g., a maximum 48.5% improvement in NDCG@20 over LightGCN). We believe that SimpleX can not only serve as a simple, strong baseline to foster future research on CF, but also shed light on a potential research direction: improving the loss function and negative sampling. Our source code will be available at https://reczoo.github.io/SimpleX. Comment: Accepted by CIKM 2021. Code available at https://reczoo.github.io/SimpleX
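
    The cosine contrastive loss is the paper's central ingredient. The sketch below shows one plausible formulation, in which negatives contribute only above a margin and are weighted against the positive term; the parameter names and exact form are assumptions here, so consult the paper and released code for the definitive loss.

```python
import torch
import torch.nn.functional as F

def cosine_contrastive_loss(user, pos_item, neg_items, margin=0.8, neg_weight=1.0):
    """Sketch of a CCL-style loss: push the positive pair's cosine
    similarity up, and penalize negatives only above a margin.
    user: (d,), pos_item: (d,), neg_items: (K, d)."""
    pos = F.cosine_similarity(user, pos_item, dim=-1)
    neg = F.cosine_similarity(user.unsqueeze(0), neg_items, dim=-1)
    return (1.0 - pos) + neg_weight * F.relu(neg - margin).mean()

u, i = torch.randn(64), torch.randn(64)
negs = torch.randn(100, 64)  # large negative sampling ratio
print(cosine_contrastive_loss(u, i, negs).item())
```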

    Disambiguated Lexically Constrained Neural Machine Translation

    Lexically constrained neural machine translation (LCNMT), which controls translation generation with pre-specified constraints, is important in many practical applications. Current approaches to LCNMT typically assume that the pre-specified lexical constraints are contextually appropriate. This assumption limits their applicability in real-world scenarios, where a source lexical item may have multiple target constraints and disambiguation is needed to select the most suitable one. In this paper, we propose disambiguated LCNMT (D-LCNMT) to solve this problem. D-LCNMT is a robust and effective two-stage framework that first disambiguates the constraints based on context and then integrates the disambiguated constraints into LCNMT. Experimental results show that our approach outperforms strong baselines, including existing data-augmentation-based approaches, on benchmark datasets, and comprehensive experiments in scenarios where a source lexical item corresponds to multiple target constraints demonstrate the superior constraint disambiguation of our approach. Comment: Accepted at ACL 2023 as a long paper (Findings), 12 pages, 3 figures
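
    The two-stage idea can be made concrete: first score each candidate target constraint against the source context and keep the best, then hand the winner to an off-the-shelf constrained decoder. The scorer below uses cosine similarity over hypothetical precomputed embeddings; the paper's actual disambiguation model is surely richer.

```python
import numpy as np

def disambiguate(src_context_vec, candidate_vecs):
    """Stage 1: pick the target constraint whose embedding best matches
    the source context (cosine similarity). Embeddings are assumed to
    come from some encoder; this is not the paper's model."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    scores = [cos(src_context_vec, v) for v in candidate_vecs]
    return int(np.argmax(scores))

# Stage 2 (not shown): pass the selected constraint to a lexically
# constrained decoder, e.g. one implementing constrained beam search.
ctx = np.random.rand(32)
candidates = [np.random.rand(32) for _ in range(3)]  # multiple target constraints
print(disambiguate(ctx, candidates))
```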

    Advances in Flexible Graphene Field-Effect Transistors for Biomolecule Sensing

    With the increasing demand for biomarker detection in wearable electronic devices, flexible biosensors have garnered significant attention. Graphene field-effect transistors (GFETs) have emerged as key components for constructing biosensors, owing to their high sensitivity, multifunctionality, rapid response, and low cost. Leveraging the advantages of flexible substrates, such as biocompatibility, adaptability to complex environments, and fabrication flexibility, flexible GFET sensors show promise for detecting a variety of biomarkers. This review provides a concise summary of design strategies for flexible GFET biosensors, including non-encapsulated gate designs without dielectric-layer coverage and external-gate designs. Furthermore, notable advances in sensing applications for biomolecules such as proteins, glucose, and ions are highlighted. Finally, we discuss the remaining challenges and future prospects in this field, aiming to inspire researchers to address these issues in further investigations.