
    On the Cultural Compensation Strategies in The Deer and the Cauldron

    The translation of martial arts novels has long been a difficult area of translation studies: these works contain so many traditional Chinese elements and specialized technical terms that translators are often overwhelmed by the variety of martial arts movements and characters. The Deer and the Cauldron, a world-famous martial arts novel, is a masterpiece written by Louis Cha and translated by John Minford. Had the translator not compensated for its cultural vacancies, the novel would have been unintelligible to target readers. By analyzing how cultural vacancies are handled in the English version of The Deer and the Cauldron, this study identifies compensation strategies applicable to the translation of martial arts novels, including annotation, contextual amplification, and adaptation. The aim is to provide a reference for translating and introducing martial arts novels, a literary category with distinctly Chinese characteristics, and thereby bring Chinese martial arts culture to the world.

    Artificial and natural duplicates in pyrosequencing reads of metagenomic data

    Background: Artificial duplicates in pyrosequencing reads may lead to incorrect estimates of the abundance of species and genes in metagenomic studies, and duplicated reads have therefore been filtered out in many metagenomic projects. However, because the duplicates observed in a pyrosequencing run also include natural (non-artificial) duplicates, simply removing all duplicates can underestimate the abundance associated with natural duplicates.
    Results: We implemented a method for identifying exact and nearly identical duplicates in pyrosequencing reads. The method performs an all-against-all sequence comparison and clusters the duplicates into groups using an algorithm modified from our previous sequence clustering method, cd-hit. It can process a typical dataset in about 10 minutes and also provides a consensus sequence for each group of duplicates. We applied the method to the raw reads of 39 genomic projects and 10 metagenomic projects that used pyrosequencing and compared the duplicates identified by our method with the natural duplicates produced by independent simulations. Duplicates, both artificial and natural, make up 4-44% of reads. The number of natural duplicates correlates strongly with a sample's read density (number of reads divided by genome size). For high-complexity metagenomic samples lacking dominant species, natural duplicates make up less than 1% of all duplicates, but for other samples, such as transcriptomic samples, the majority of observed duplicates may be natural.
    Conclusions: Our method is available from http://cd-hit.org as a downloadable program and a web server. It is important not only to identify duplicates in metagenomic datasets but also to distinguish artificial from natural duplicates. We provide a tool that estimates the number of natural duplicates for user-defined sample types, so users can decide whether to retain or remove duplicates in their projects.
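
    The core operation described above is grouping reads that are exact or near-exact duplicates of one another. The Python fragment below is a simplified sketch of that idea, not the actual cd-hit-454 implementation: reads are bucketed by their leading bases (artificial duplicates start at the same position) and then grouped within each bucket under a small mismatch tolerance; the prefix length and tolerance are illustrative assumptions.

    from collections import defaultdict

    def group_duplicates(reads, prefix_len=20, max_mismatch_frac=0.04):
        """reads: dict of read_id -> sequence. Returns lists of duplicate read ids."""
        # Bucket reads by their leading bases; candidate duplicates share a start.
        buckets = defaultdict(list)
        for rid, seq in reads.items():
            buckets[seq[:prefix_len]].append(rid)

        groups = []
        for rids in buckets.values():
            remaining = list(rids)
            while remaining:
                seed = remaining.pop(0)
                group, rest = [seed], []
                for rid in remaining:
                    a, b = reads[seed], reads[rid]
                    n = min(len(a), len(b))
                    mismatches = sum(x != y for x, y in zip(a[:n], b[:n]))
                    # Near-identical over the shared length -> same duplicate group.
                    if n and mismatches <= max_mismatch_frac * n:
                        group.append(rid)
                    else:
                        rest.append(rid)
                remaining = rest
                groups.append(group)
        return groups

    Groups of size one are unique reads; larger groups are the candidate duplicates whose artificial-versus-natural status is then judged against the simulated expectation.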

    Synthesis and Luminescence Properties of Nine Novel Carbazolyl Diacylhydrazone Schiff-bases

    Nine novel carbazolyl diacylhydrazone Schiff bases were synthesized from carbazole and hydrazide via alkylation, Friedel-Crafts acylation, and condensation reactions. The title Schiff bases were characterized by 1H NMR, MS, IR, and elemental analysis. The synthetic conditions were optimized, and the best yield of the title Schiff bases reached 92.3%. The relationships between the luminescence properties and the structures of the title Schiff bases were studied. The results showed that introducing the naphthalen-2-yloxy group forms a large planar conjugated structure that improves their luminescence properties.

    Exploiting orthologue diversity for systematic detection of gain-of-function phenotypes

    Background: Systematically searching for genes whose gain of function through exogenous expression confers an advantage in cell-based selective screenings is a powerful method for unbiased functional exploration of the genome and has the potential to reveal new targets for cancer therapy. A major limitation of this approach is the labor-intensive cloning of resistant cells, identification of the integrated genes, and validation of their ability to confer a selective advantage. Moreover, the selection has to be drastic, so genes conferring a limited advantage are typically missed.
    Results: We developed a new functional screening strategy based on transducing mammalian cells of one species with an expression library from another species, followed by one-shot quantitative tracing, with DNA microarrays, of all library-derived transcripts before and after selection. In this way, exogenous transcripts enriched after selection, and therefore likely to confer resistance, are readily detected. We transduced a retroviral cDNA expression library from mouse testis into human and canine cells and optimized the use of commercial murine gene expression arrays for species-specific detection of library-derived transcripts. We then conducted a functional screening by growing library-transduced canine MDCK cells in suspension to enrich for cDNAs conferring anchorage independence. Notably, these cells show partial resistance to loss of anchorage, and the selection can only be of limited stringency, compromising approaches based on clonal selection or otherwise requiring high stringency. Microarray analysis revealed reproducible enrichment, after three weeks of growth on polyHEMA, of seven genes, among them the Hras proto-oncogene and Sox5. When individually transduced into MDCK cells, Sox5 specifically promoted anchorage-independent growth, confirming the validity and specificity of the approach.
    Conclusion: The procedure described here brings substantial advantages to the field of expression cloning, being faster, more systematic, and more sensitive. Indeed, this strategy allowed the identification and validation of genes promoting anchorage-independent growth of epithelial cells under selection conditions not amenable to conventional expression cloning.
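
    The readout step described above reduces to comparing, for each library-derived probe, the microarray signal before and after selection and keeping transcripts that are clearly enriched. A minimal Python sketch of that comparison follows; the probe dictionaries, threshold, and log2 fold-change criterion are illustrative assumptions, not the authors' exact analysis.

    import math

    def enriched_transcripts(before, after, min_log2_fc=1.0):
        """before/after: dict probe_id -> normalized intensity (averaged over replicates)."""
        hits = {}
        for probe, pre in before.items():
            post = after.get(probe)
            if post is None or pre <= 0 or post <= 0:
                continue
            log2_fc = math.log2(post / pre)  # enrichment after selection
            if log2_fc >= min_log2_fc:
                hits[probe] = log2_fc
        # Most strongly enriched exogenous transcripts first.
        return dict(sorted(hits.items(), key=lambda kv: -kv[1]))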

    Pave the Way to Grasp Anything: Transferring Foundation Models for Universal Pick-Place Robots

    Improving the generalization capabilities of general-purpose robotic agents has long been a significant challenge actively pursued by the research community. Existing approaches often rely on collecting large-scale real-world robotic data, such as the RT-1 dataset. However, these approaches typically suffer from low efficiency, limiting their capability in open-domain scenarios with new objects and diverse backgrounds. In this paper, we propose a novel paradigm that effectively leverages language-grounded segmentation masks generated by state-of-the-art foundation models to address a wide range of pick-and-place robot manipulation tasks in everyday scenarios. By integrating the precise semantics and geometries conveyed by the masks into our multi-view policy model, our approach perceives accurate object poses and enables sample-efficient learning. This design also facilitates effective generalization to grasping new objects whose shapes resemble those observed during training. Our approach consists of two distinct steps. First, we introduce a series of foundation models to accurately ground natural language demands across multiple tasks. Second, we develop a Multi-modal Multi-view Policy Model that incorporates inputs such as RGB images, semantic masks, and robot proprioception states to jointly predict precise and executable robot actions. Extensive real-world experiments conducted on a Franka Emika robot arm validate the effectiveness of the proposed paradigm. Real-world demos are available on YouTube (https://www.youtube.com/watch?v=1m9wNzfp_4E) and Bilibili (https://www.bilibili.com/video/BV178411Z7H2/).
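
    As a rough illustration of the second step, the PyTorch sketch below fuses per-view RGB images and segmentation masks with proprioception to predict an action. The layer sizes, view count, and 7-DoF action head are assumptions chosen for the example, not the paper's architecture.

    import torch
    import torch.nn as nn

    class MultiViewPolicy(nn.Module):
        def __init__(self, num_views=3, proprio_dim=8, action_dim=7):
            super().__init__()
            # Shared encoder over a 4-channel input: RGB plus a binary mask per view.
            self.encoder = nn.Sequential(
                nn.Conv2d(4, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Sequential(
                nn.Linear(64 * num_views + proprio_dim, 256), nn.ReLU(),
                nn.Linear(256, action_dim),
            )

        def forward(self, rgbs, masks, proprio):
            # rgbs: (B, V, 3, H, W); masks: (B, V, 1, H, W); proprio: (B, proprio_dim)
            feats = [self.encoder(torch.cat([rgbs[:, v], masks[:, v]], dim=1))
                     for v in range(rgbs.shape[1])]
            fused = torch.cat(feats + [proprio], dim=-1)
            return self.head(fused)

    Concatenating the mask as a fourth input channel is one simple way to inject the language-grounded segmentation into the policy; the paper's actual fusion mechanism may differ.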

    AlphaBlock: Embodied Finetuning for Vision-Language Reasoning in Robot Manipulation

    We propose a novel framework for learning high-level cognitive capabilities in robot manipulation tasks, such as making a smiley face out of building blocks. These tasks often involve complex multi-step reasoning and present significant challenges due to the limited paired data connecting human instructions (e.g., making a smiley face) and robot actions (e.g., end-effector movements). Existing approaches mitigate this challenge by adopting an open-loop paradigm that decomposes high-level instructions into simple sub-task plans and executes them step by step using low-level control models. However, these approaches lack up-to-date observations during multi-step reasoning, leading to sub-optimal results. To address this issue, we automatically collect a cognitive robot dataset using Large Language Models (LLMs). The resulting dataset, AlphaBlock, consists of 35 comprehensive high-level tasks with multi-step text plans and paired observation sequences. To enable efficient data acquisition, we employ elaborate multi-round prompt designs that substantially reduce the need for extensive human involvement. We further propose a closed-loop multi-modal embodied planning model that autoregressively generates plans while taking image observations as input. To facilitate effective learning, we build on MiniGPT-4 with a frozen visual encoder and LLM, and finetune an additional vision adapter and Q-Former to enable the fine-grained spatial perception required for manipulation tasks. Experiments verify the superiority of our method over existing open- and closed-loop approaches, with success-rate improvements of 21.4% and 14.5% over ChatGPT- and GPT-4-based robot baselines, respectively. Real-world demos are shown at https://www.youtube.com/watch?v=ayAzID1_qQk
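
    The closed-loop behavior described above amounts to re-planning from the latest observation at every step rather than committing to a full plan up front. Below is a minimal Python sketch of that loop, where planner, controller, and camera are hypothetical callables standing in for the finetuned vision-language planner, the low-level control model, and the robot camera.

    def run_closed_loop(instruction, planner, controller, camera, max_steps=20):
        history = []
        for _ in range(max_steps):
            observation = camera()                              # latest RGB frame
            step = planner(instruction, observation, history)   # next text sub-plan
            if step is None or step.strip().lower() == "done":
                break
            controller(step, observation)                       # execute the sub-plan
            history.append(step)
        return history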

    Two-color atom guide and 1D optical lattice using evanescent fields of high-order transverse modes

    We propose a two-color scheme for an atom guide and a 1D optical lattice using the evanescent light fields of different transverse modes. The optical waveguide carries a red-detuned light and a blue-detuned light, with both modes far from resonance. The atom-guide and 1D-optical-lattice potentials can be transformed into each other by using a Mach-Zehnder interferometer to accurately control the mode transformation. This might provide a new approach to realizing a flexible transition between the guiding and trapping states of atoms.
    Comment: 18 pages, 12 figures, 1 table
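
    For context, a commonly used form of a two-color evanescent-field potential is sketched below in LaTeX; the coefficients and decay constants are generic assumptions standing in for the mode-dependent intensities and propagation constants, not the paper's exact expressions.

    U(z) \approx C_{\mathrm{blue}}\, e^{-2\kappa_{\mathrm{blue}} z}
               - C_{\mathrm{red}}\, e^{-2\kappa_{\mathrm{red}} z},
    \qquad
    \kappa_i = \sqrt{\beta_i^{2} - \left(2\pi/\lambda_i\right)^{2}}

    Here z is the distance from the waveguide surface and \beta_i is the propagation constant of the guided mode carrying color i. Because the two terms decay at different rates, their superposition can form a potential minimum at a finite distance from the surface.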