102 research outputs found

    Bis[1-benzyl-3-(quinolin-8-ylmethyl)-2,3-dihydro-1H-imidazol-2-yl]dibromidopalladium(II) acetonitrile disolvate

    In the title compound, [PdBr2(C20H17N3)2]·2CH3CN, the Pd atom, which lies on an inversion center, is four-coordinated in a square-planar geometry. The two imidazole rings are coplanar and nearly perpendicular to the plane formed by Pd, the coordinated imidazole C atom and one of the Br atoms, making a dihedral angle of 75.1 (2)°.

    Simultaneous Efficiency, NOx, and Smoke Improvements through Diesel/Gasoline Dual-Fuel Operation in a Diesel Engine

    Diesel/gasoline dual-fuel combustion uses both gasoline and diesel fuel in diesel engines to exploit their different reactivities. This operation combines the advantages of diesel fuel and gasoline while avoiding their disadvantages, attains spatially stratified low-temperature combustion (LTC), and yields very low NOx and PM emissions while maintaining good efficiency. It is promising for solving the problems of conventional LTC through better control of ignition and combustion. The benefits of dual-fuel operation, and the potential of gasoline fumigation to realize them, provide the major motivation for this research. The aim is to use gasoline fumigation in a medium-duty diesel engine to identify and quantify the factors through which diesel/gasoline dual-fuel LTC influences engine efficiency and emissions: gasoline fraction, injection settings, rail pressure, intake pressure, and EGR level. This objective was pursued through a series of experimental tests at 1400 rpm and three loads, comprising both diesel baseline tests and dual-fuel tests. First, design of experiments and related statistical techniques were applied. Twenty-three best-fit models relating six factors (intake pressure, rail pressure, SOI for diesel baseline tests, SOI for dual-fuel tests, EGR level, and gasoline fraction) to five targets (efficiency, NOx, smoke number, HC, and CO) were obtained by regression of the test data, and confirmation tests were run against the best models. In general, NOx and smoke emissions improved, while efficiency, HC, and CO emissions were unchanged or deteriorated. The optimization effort achieved partial success but needs further refinement; the influence of each factor is analyzed, and measures for obtaining better models are discussed. Second, parametric studies of gasoline fraction and injection timing were conducted to determine their influence on efficiency and emissions. Efficiency generally decreases slightly as the gasoline fraction increases or injection timing is retarded. Increasing the gasoline fraction generally benefits NOx and smoke emissions, but HC and CO emissions deteriorate; advancing injection timing has the opposite influence. Finally, individual-cycle data were analyzed to study cyclic variability (CV) and its influence on dual-fuel efficiency and emissions, and the factors causing or influencing CV were identified. The CV in dual-fuel operation is larger in magnitude than in diesel-only operation. Most of the test data studied do not show strong determinism, and the influence of gasoline addition is small.
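The single-factor trends above (e.g., efficiency falling slightly with gasoline fraction) can be quantified with an ordinary least-squares fit. A minimal sketch, using made-up placeholder values rather than the thesis's actual test data, and a single factor instead of the six-factor regression models described above:

```python
def ols_slope_intercept(x, y):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical placeholder data: gasoline energy fraction vs. brake
# thermal efficiency (NOT measurements from the thesis).
gasoline_fraction = [0.0, 0.2, 0.4, 0.6]
efficiency = [0.40, 0.39, 0.38, 0.37]

slope, intercept = ols_slope_intercept(gasoline_fraction, efficiency)
# A negative slope reflects the reported trend: efficiency decreases
# slightly as the gasoline fraction increases.
```

The full study fits multi-factor models; the one-factor form above only illustrates the regression idea.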

    Feature-Enhanced Network with Hybrid Debiasing Strategies for Unbiased Learning to Rank

    Unbiased learning to rank (ULTR) aims to mitigate the various biases present in user clicks, such as position bias, trust bias, and presentation bias, and to learn an effective ranker. In this paper, we introduce our winning approach for the "Unbiased Learning to Rank" task in WSDM Cup 2023. We find that the provided data are severely biased, so neural models trained directly on the top-10 results with click information are unsatisfactory. We therefore extract multiple heuristic-based features for multiple fields of the results, adjust the click labels, add true negatives, and re-weight the samples during model training. Since the propensities learned by existing ULTR methods are not decreasing with respect to position, we also calibrate the propensities according to the click ratios and ensemble models trained in two different ways. Our method won 3rd prize with a DCG@10 score of 9.80, which is 1.1% below the 2nd-place score and 25.3% above the 4th. Comment: 5 pages, 1 figure, WSDM Cup 2023
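For reference, the DCG@10 metric used to score the task, together with a simple monotone clamp illustrating the idea of forcing propensities to be non-increasing with position, can be sketched as follows. The linear-gain formulation and the clamp are assumptions for illustration, not the cup's exact scoring code or the authors' calibration procedure:

```python
import math

def dcg_at_k(gains, k=10):
    """DCG@k with linear gains: sum of rel_i / log2(i + 1), ranks from 1.
    (Some evaluations use (2**rel - 1) in the numerator instead.)"""
    return sum(rel / math.log2(i + 1)
               for i, rel in enumerate(gains[:k], start=1))

def monotone_propensities(props):
    """Clamp learned propensities so they never increase with position,
    mirroring the calibration idea described in the abstract."""
    out = []
    for p in props:
        out.append(p if not out else min(p, out[-1]))
    return out
```

The clamp is the simplest possible fix; isotonic regression would be a smoother alternative for the same constraint.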

    Visual Named Entity Linking: A New Dataset and A Baseline

    Visual Entity Linking (VEL) is the task of linking regions of images to their corresponding entities in Knowledge Bases (KBs), which benefits many computer vision tasks such as image retrieval, image captioning, and visual question answering. Existing VEL tasks either rely on textual data to complement multi-modal linking or only link objects to general entities, and thus fail to perform named entity linking on large amounts of image data. In this paper, we consider a purely Visual-based Named Entity Linking (VNEL) task, where the input consists only of an image. The task is to identify objects of interest (i.e., visual entity mentions) in images and link them to the corresponding named entities in KBs. Since each entity often carries rich visual and textual information in KBs, we propose three sub-tasks: visual to visual entity linking (V2VEL), visual to textual entity linking (V2TEL), and visual to visual-textual entity linking (V2VTEL). In addition, we present a high-quality human-annotated visual person linking dataset named WIKIPerson. Based on WIKIPerson, we establish a series of baseline algorithms for each sub-task and conduct experiments to verify the quality of the proposed dataset and the effectiveness of the baseline methods. We envision this work encouraging more research on VNEL in the future. The code and datasets are publicly available at https://github.com/ict-bigdatalab/VNEL. Comment: 13 pages, 11 figures, published in Findings of EMNLP 2022
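As an illustration of the V2VEL setting, linking a detected mention to the KB entity with the most similar embedding can be sketched as a nearest-neighbor search under cosine similarity. The embeddings and entity names below are hypothetical, not part of the WIKIPerson pipeline:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def link_mention(mention_vec, kb_embeddings):
    """Return the KB entity whose embedding is most similar to the
    visual mention embedding (the matching step of V2VEL, in essence)."""
    return max(kb_embeddings,
               key=lambda e: cosine(mention_vec, kb_embeddings[e]))
```

In practice the embeddings come from a vision encoder and the search uses an approximate-nearest-neighbor index rather than a full scan.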

    CofeNet: Context and Former-Label Enhanced Net for Complicated Quotation Extraction

    Quotation extraction aims to extract quotations from written text. A quotation has three components: the source is the holder of the quotation, the cue is the trigger word(s), and the content is the main body. Existing solutions for quotation extraction mainly use rule-based approaches and sequence labeling models. Rule-based approaches often suffer from low recall, while sequence labeling models cannot handle quotations with complicated structures well. In this paper, we propose the Context and Former-Label Enhanced Net (CofeNet) for quotation extraction. CofeNet can extract complicated quotations whose components have variable lengths and complicated structures. On two public datasets (PolNeAR and Riqua) and one proprietary dataset (PoliticsZH), we show that CofeNet achieves state-of-the-art performance on complicated quotation extraction. Comment: Accepted by COLING 202
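Sequence labeling models for quotation extraction typically emit BIO tags over tokens, which are then decoded into source/cue/content spans. A minimal decoder sketch; the BIO scheme is a common convention assumed here, not necessarily CofeNet's exact tag set:

```python
def decode_bio(tokens, labels):
    """Group BIO-tagged tokens into (component, text) spans, e.g.
    B-source / I-source / B-cue / B-content / I-content / O."""
    spans, cur_type, cur_toks = [], None, []
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if cur_type:
                spans.append((cur_type, " ".join(cur_toks)))
            cur_type, cur_toks = lab[2:], [tok]
        elif lab.startswith("I-") and cur_type == lab[2:]:
            cur_toks.append(tok)
        else:  # "O" tag or an inconsistent I- tag closes the span
            if cur_type:
                spans.append((cur_type, " ".join(cur_toks)))
            cur_type, cur_toks = None, []
    if cur_type:
        spans.append((cur_type, " ".join(cur_toks)))
    return spans
```

The abstract's point is that plain per-token decoding like this struggles with long, nested quotations, which is what CofeNet's context and former-label mechanisms target.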

    Pre-training with Aspect-Content Text Mutual Prediction for Multi-Aspect Dense Retrieval

    Grounded in pre-trained language models (PLMs), dense retrieval has been studied extensively on plain text. In contrast, there has been little research on retrieving data with multiple aspects using dense models. In scenarios such as product search, aspect information plays an essential role in relevance matching, e.g., category: Electronics, Computers, and Pet Supplies. A common way of leveraging aspect information for multi-aspect retrieval is to introduce an auxiliary classification objective, i.e., using item contents to predict the annotated value IDs of item aspects. However, by learning the value embeddings from scratch, this approach may not sufficiently capture the various semantic similarities between values. To address this limitation, we leverage the aspect information as text strings rather than class IDs during pre-training so that their semantic similarities are naturally captured in the PLMs. To facilitate effective retrieval with the aspect strings, we propose mutual prediction objectives between the text of the item aspect and the item content. In this way, our model makes fuller use of aspect information than undifferentiated masked language modeling (MLM) on the concatenated text of aspects and content. Extensive experiments on two real-world datasets (product and mini-program search) show that our approach outperforms competitive baselines that either treat aspect values as classes or apply the same MLM to aspect and content strings. Code and the related dataset will be available at https://github.com/sunxiaojie99/ATTEMPT. Comment: accepted by CIKM 202
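One way to picture the mutual prediction idea is as two masking directions over the paired aspect and content strings. The sketch below builds both directions for a single item; the 15% masking rate and the instance format are assumptions for illustration, not the paper's exact pre-training objective:

```python
import random

MASK = "[MASK]"

def mutual_prediction_instances(aspect_tokens, content_tokens, seed=0):
    """Build the two masked-prediction directions:
    (1) mask the whole aspect string and predict it from the content;
    (2) mask a random 15% of content tokens (at least one) and predict
        them with the aspect string visible.
    Each instance is (visible_aspect, visible_content, targets)."""
    rng = random.Random(seed)
    aspect_masked = ([MASK] * len(aspect_tokens),
                     list(content_tokens),
                     list(aspect_tokens))
    n_mask = max(1, len(content_tokens) * 15 // 100)
    idx = set(rng.sample(range(len(content_tokens)), n_mask))
    masked = [MASK if i in idx else t for i, t in enumerate(content_tokens)]
    targets = [t for i, t in enumerate(content_tokens) if i in idx]
    content_masked = (list(aspect_tokens), masked, targets)
    return aspect_masked, content_masked

# Hypothetical item: aspect "category: electronics" paired with content text.
a_inst, c_inst = mutual_prediction_instances(
    ["category", "electronics"],
    ["wireless", "mouse", "with", "usb", "receiver"])
```

The contrast with undifferentiated MLM is that each direction conditions one field on the other, rather than masking uniformly across the concatenation.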