
    Indole contributes to tetracycline resistance via the outer membrane protein OmpN in Vibrio splendidus

    As an interspecies and interkingdom signaling molecule, indole has recently received attention for its diverse effects on the physiology of both bacteria and their hosts. In this study, indole increased the tetracycline resistance of Vibrio splendidus. The minimal inhibitory concentration of tetracycline was 10 µg/mL, and the OD600 of V. splendidus decreased by 94.5% in the presence of 20 µg/mL tetracycline; however, the OD600 of V. splendidus grown with a mixture of 20 µg/mL tetracycline and 125 µM indole was 10- and 4.5-fold higher than that with 20 µg/mL tetracycline alone, depending on the time point. The percentage of cells resistant to 10 µg/mL tetracycline was 600-fold higher in a culture at an OD600 of approximately 2.0 (higher indole level) than in a culture at an OD600 of 0.5, indicating that the indole level was correlated with the tetracycline resistance of V. splendidus. Furthermore, one differentially expressed protein, identified as the outer membrane porin OmpN by SDS-PAGE combined with MALDI-TOF/TOF MS, was upregulated. Correspondingly, expression of the ompN gene was upregulated 1.8-, 2.54-, and 6.01-fold relative to the control samples in the presence of tetracycline alone, indole alone, and both together, respectively. Taken together, these results demonstrate that indole enhances the tetracycline resistance of V. splendidus, probably through upregulation of the outer membrane porin OmpN.

    Patch-based 3D Natural Scene Generation from a Single Example

    We target a 3D generative model for general natural scenes, which are typically unique and intricate. The lack of sufficient training data, together with the difficulty of crafting ad hoc designs for varying scene characteristics, renders existing setups intractable. Inspired by classical patch-based image models, we advocate synthesizing 3D scenes at the patch level, given a single example. At the core of this work lie important algorithmic designs with respect to the scene representation and the generative patch nearest-neighbor module, which address the unique challenges that arise when lifting the classical 2D patch-based framework to 3D generation. Collectively, these design choices contribute to a robust, effective, and efficient model that can generate high-quality general natural scenes, with both realistic geometric structure and visual appearance, in large quantities and varieties, as demonstrated on a variety of exemplar scenes.
    Comment: 23 pages, 26 figures, accepted by CVPR 2023. Project page: http://weiyuli.xyz/Sin3DGen
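    The abstract does not give implementation details, but the core idea of a generative patch nearest-neighbor pass can be illustrated on a plain scalar 3D field. Below is a minimal, hypothetical sketch (the paper actually operates on richer scene representations): every overlapping patch of a synthesized volume is replaced by its closest exemplar patch, and the overlapping votes are averaged back; `extract_patches_3d` and `patch_nn_step` are names invented here for illustration.

```python
import numpy as np

def extract_patches_3d(vol, p):
    """Collect all overlapping p*p*p patches of a 3D volume as flat vectors."""
    D, H, W = vol.shape
    patches = [
        vol[z:z+p, y:y+p, x:x+p].ravel()
        for z in range(D - p + 1)
        for y in range(H - p + 1)
        for x in range(W - p + 1)
    ]
    return np.stack(patches)

def patch_nn_step(synth, exemplar, p=3):
    """One nearest-neighbor refinement pass: replace every patch of the
    synthesized volume with its closest exemplar patch, then average the
    overlapping votes back into a volume."""
    ex = extract_patches_3d(exemplar, p)
    out = np.zeros_like(synth, dtype=np.float64)
    weight = np.zeros_like(synth, dtype=np.float64)
    D, H, W = synth.shape
    for z in range(D - p + 1):
        for y in range(H - p + 1):
            for x in range(W - p + 1):
                q = synth[z:z+p, y:y+p, x:x+p].ravel()
                # exact L2 nearest neighbor over all exemplar patches
                j = np.argmin(((ex - q) ** 2).sum(axis=1))
                out[z:z+p, y:y+p, x:x+p] += ex[j].reshape(p, p, p)
                weight[z:z+p, y:y+p, x:x+p] += 1.0
    return out / weight

# Toy coarse-to-fine loop: start from a noised exemplar and refine.
rng = np.random.default_rng(0)
exemplar = rng.random((12, 12, 12))
synth = exemplar + 0.3 * rng.standard_normal(exemplar.shape)
for _ in range(3):
    synth = patch_nn_step(synth, exemplar)
```

    Because every output patch is copied from the exemplar, the synthesis inherits the exemplar's local statistics; randomness in the initial guess is what yields variety across runs.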

    Example-based Motion Synthesis via Generative Motion Matching

    We present GenMM, a generative model that "mines" as many diverse motions as possible from a single or a few example sequences. In stark contrast to existing data-driven methods, which typically require long offline training, are prone to visual artifacts, and tend to fail on large and complex skeletons, GenMM inherits the training-free nature and superior quality of the well-known Motion Matching method. GenMM can synthesize a high-quality motion within a fraction of a second, even for highly complex and large skeletal structures. At the heart of our generative framework lies the generative motion matching module, which uses bidirectional visual similarity as a generative cost function for motion matching and operates in a multi-stage framework to progressively refine a random guess using exemplar motion matches. Beyond diverse motion generation, we show the versatility of our generative framework by extending it to a number of scenarios that are not possible with motion matching alone, including motion completion, keyframe-guided generation, infinite looping, and motion reassembly. Code and data for this paper are at https://wyysf-98.github.io/GenMM/
    Comment: SIGGRAPH 2023. Project page: https://wyysf-98.github.io/GenMM/, Video: https://www.youtube.com/watch?v=lehnxcade4
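    To make the "bidirectional similarity as a generative cost" idea concrete, here is a toy sketch on raw pose vectors; it is not GenMM's actual implementation (which works on skeleton-aware motion patches in a multi-stage pyramid), and `windows`, `min_dists`, and the window length are assumptions made here. The cost sums a coherence term (every synthesized window should appear in the exemplar, suppressing artifacts) and a completeness term (every exemplar window should be covered, suppressing mode collapse).

```python
import numpy as np

def windows(seq, w):
    """All overlapping length-w windows of a motion sequence (T x D poses),
    flattened to vectors."""
    return np.stack([seq[t:t+w].ravel() for t in range(len(seq) - w + 1)])

def min_dists(a, b):
    """For each window in a, squared L2 distance to its nearest window in b."""
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1)

def bidirectional_similarity(synth, exemplar, w=8):
    """Lower is better: coherence (synthesized windows exist in the exemplar)
    plus completeness (exemplar windows are covered by the synthesis)."""
    s, e = windows(synth, w), windows(exemplar, w)
    return min_dists(s, e).mean() + min_dists(e, s).mean()

# Toy usage: 120 frames of 23 joints x 3 coordinates.
rng = np.random.default_rng(0)
exemplar = rng.standard_normal((120, 69))
synth = exemplar[20:100] + 0.05 * rng.standard_normal((80, 69))
cost = bidirectional_similarity(synth, exemplar)
```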

    Rethinking Person Re-identification from a Projection-on-Prototypes Perspective

    Person Re-IDentification (Re-ID), as a retrieval task, has achieved tremendous development over the past decade. Existing state-of-the-art methods follow an analogous framework: first extract features from the input images, then categorize them with a classifier. However, since there is no identity overlap between the training and testing sets, the classifier is typically discarded during inference, and only the extracted features are used for person retrieval via distance metrics. In this paper, we rethink the role of the classifier in person Re-ID and advocate a new perspective that conceives of the classifier as a projection from image features onto class prototypes, where the prototypes are exactly the learned parameters of the classifier. In this light, we describe the identity of an input image by its similarities to all prototypes, which are then used as more discriminative features for person Re-ID. We thereby propose a new baseline, ProNet, which innovatively retains the function of the classifier at the inference stage. To facilitate the learning of class prototypes, both a triplet loss and an identity classification loss are applied to the features after projection by the classifier. An improved version, ProNet++, further incorporates multi-granularity designs. Experiments on four benchmarks demonstrate that our proposed ProNet is simple yet effective and significantly beats previous baselines; ProNet++ also achieves competitive or even better results than transformer-based competitors.
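    The projection-on-prototypes view translates into very little code. The sketch below is a minimal, assumed rendering of the idea (not ProNet's published implementation): the classifier's weight matrix is kept at inference, and the similarity vector to all prototypes serves both as classification logits during training and as the retrieval descriptor; `PrototypeHead` and the dimensions are illustrative choices.

```python
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    """Treat classifier weights as class prototypes and keep them at
    inference: the similarity vector to all prototypes becomes the
    retrieval feature, instead of the backbone embedding itself."""
    def __init__(self, feat_dim, num_ids):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_ids, feat_dim))

    def forward(self, feats):
        # Cosine similarity of each image feature to every prototype.
        f = nn.functional.normalize(feats, dim=1)
        p = nn.functional.normalize(self.prototypes, dim=1)
        return f @ p.t()   # (batch, num_ids): logits for CE, features for retrieval

backbone_feats = torch.randn(4, 2048)          # e.g. pooled ResNet-50 features
head = PrototypeHead(feat_dim=2048, num_ids=751)
sims = head(backbone_feats)                    # usable as logits and as descriptors
```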

    Luminance Prediction of Paper Model Surface Based on Non-Contact Measurement

    Overall appearance perception is affected chiefly by the accuracy and efficiency of luminance perception. We predicted surface luminance as a function of surface angle and surface tone value by measuring and modeling the luminance of a paper model surface. First, we designed a rotating bracket to make setting the paper surface angle easy. Then, using this bracket, we set the surface angle from 5° to 85° in 5° intervals. The four primary color scales (cyan, magenta, yellow, and black) were printed and set at each designed angle, and the angle-dependent and tone-dependent luminance was measured with a spectroradiometer (CS-2000). Finally, we proposed and evaluated a mathematical model relating luminance to surface angle and surface tone, fitted by the least squares method. The results indicated that the proposed prediction model can quickly and accurately predict the surface luminance of the paper model for any surface angle and surface tone value.
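    The abstract does not state the model's functional form, so the following is only a sketch of the least-squares workflow under an assumed bilinear-with-cosine form; the measurement values, tone grid, and basis functions are all placeholders, not the paper's data or model.

```python
import numpy as np

# Hypothetical measurements: luminance L at surface angle theta (degrees)
# and tone value t (0-100%), for one primary ink.
theta = np.repeat(np.arange(5, 90, 5), 5)               # 5..85 deg in 5 deg steps
tone = np.tile(np.array([0, 25, 50, 75, 100]), 17)
L = np.random.default_rng(0).random(theta.size) * 100    # placeholder readings

# Assumed model: L(theta, t) = a*cos(theta) + b*t + c*t*cos(theta) + d
X = np.column_stack([
    np.cos(np.deg2rad(theta)),
    tone,
    tone * np.cos(np.deg2rad(theta)),
    np.ones_like(theta, dtype=float),
])
coef, *_ = np.linalg.lstsq(X, L, rcond=None)   # least-squares fit

def predict(theta_deg, t):
    """Predict luminance at any angle/tone from the fitted coefficients."""
    c = np.cos(np.deg2rad(theta_deg))
    return coef @ np.array([c, t, t * c, 1.0])
```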

    The research and development of ChemGrid in CGSP

    With the rapid development of computing and network technologies, Grid technology has emerged as a solution for high-performance computing. Recently, service-oriented grids have become a hot topic in this research area. In this paper, we propose an architecture for ChemGrid in CGSP (China Grid Support Platform). The effectiveness of the proposed architecture is demonstrated by an example developed as a Web service based on CGSP; the Web service is used for searching elements in the periodic table. An improvement of the application user interface is proposed so that results can be obtained interactively. Finally, an extension of ChemGrid is discussed that would integrate different types of resources and provide specialized services.
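    CGSP itself is grid middleware with its own service stack, which the abstract does not detail; as a language-agnostic illustration only, a periodic-table lookup exposed as a minimal HTTP service might look like the following sketch (the element table, route, and handler are all invented here).

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Tiny stand-in element table; a real service would query a chemistry database.
ELEMENTS = {
    "H":  {"name": "Hydrogen", "number": 1,  "mass": 1.008},
    "He": {"name": "Helium",   "number": 2,  "mass": 4.0026},
    "Fe": {"name": "Iron",     "number": 26, "mass": 55.845},
}

class ElementHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /element?symbol=Fe
        query = parse_qs(urlparse(self.path).query)
        symbol = query.get("symbol", [""])[0]
        hit = ELEMENTS.get(symbol)
        body = json.dumps(hit if hit else {"error": "not found"}).encode()
        self.send_response(200 if hit else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ElementHandler).serve_forever()
```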

    Semi-automatic Data Annotation System for Multi-Target Multi-Camera Vehicle Tracking

    Multi-target multi-camera tracking (MTMCT) plays an important role in intelligent video analysis, surveillance video retrieval, and other application scenarios. Deep-learning-based MTMCT is now the mainstream approach and has achieved impressive improvements in tracking accuracy and efficiency. However, according to our investigation, the lack of datasets focused on real-world application scenarios limits further improvement of current learning-based MTMCT models; specifically, models trained on common datasets usually cannot achieve satisfactory results in real-world application scenarios. Motivated by this, this paper presents a semi-automatic data annotation system to facilitate the establishment of real-world MTMCT datasets. The proposed system first employs a deep-learning-based single-camera trajectory generation method to automatically extract trajectories from surveillance videos. The system then provides a recommendation list for the subsequent manual cross-camera trajectory matching; the list is generated from side information, including camera location, timestamp relations, and background scene. In the experimental stage, extensive results further demonstrate the efficiency of the proposed system.
    Comment: 9 pages, 10 figures
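    The paper does not publish the scoring formula, but the camera-location and timestamp side information suggests a spatio-temporal plausibility ranking of the following flavor; `Trajectory`, `transition_score`, the assumed travel speed, and the Gaussian tolerance are illustrative assumptions, and the real system additionally uses background-scene cues.

```python
from dataclasses import dataclass
import math

@dataclass
class Trajectory:
    camera_xy: tuple      # camera location (meters, map coordinates)
    t_exit: float         # when the target left this camera (seconds)
    t_enter: float        # when the target entered this camera (seconds)

def transition_score(src, dst, speed_mps=15.0, sigma=10.0):
    """Score how plausible it is that dst continues src: compare the
    observed time gap with the travel time implied by camera distance."""
    dist = math.dist(src.camera_xy, dst.camera_xy)
    expected_gap = dist / speed_mps
    observed_gap = dst.t_enter - src.t_exit
    if observed_gap < 0:          # dst started before src ended: implausible
        return 0.0
    return math.exp(-((observed_gap - expected_gap) ** 2) / (2 * sigma ** 2))

def recommend(src, candidates, top_k=5):
    """Rank candidate cross-camera trajectories for the human annotator."""
    ranked = sorted(candidates, key=lambda c: transition_score(src, c), reverse=True)
    return ranked[:top_k]
```

    The annotator then only confirms or rejects the top-ranked candidates instead of searching all cameras, which is where the reported efficiency gain comes from.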

    Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification

    Cloth-changing person Re-IDentification (Re-ID) is a particularly challenging task that suffers from two limitations: inferior identity-relevant features and limited training samples. Existing methods mainly leverage auxiliary information to facilitate discriminative feature learning, including soft-biometric features such as shape and gait, and additional clothing labels. However, this information may be unavailable in real-world applications. In this paper, we propose a novel FIne-grained Representation and Recomposition (FIRe²) framework that tackles both limitations without any auxiliary information. Specifically, we first design a Fine-grained Feature Mining (FFM) module to cluster the images of each person separately, so that images with similar fine-grained attributes (e.g., clothes and viewpoints) are encouraged to cluster together. An attribute-aware classification loss then performs fine-grained learning based on the cluster labels, which are not shared among different people, promoting the model to learn identity-relevant features. Furthermore, taking full advantage of the clustered fine-grained attributes, we present a Fine-grained Attribute Recomposition (FAR) module that recomposes image features with different attributes in the latent space, significantly enhancing the representations for robust feature learning. Extensive experiments demonstrate that FIRe² achieves state-of-the-art performance on five widely used cloth-changing person Re-ID benchmarks.
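    The per-person clustering that produces the attribute-aware targets can be sketched in a few lines; this is a minimal assumed rendering (FFM's actual clustering and feature space may differ), with `fine_grained_labels`, the cluster count, and the toy data all invented here. Offsetting labels per identity keeps clusters unshared across people, as the loss requires.

```python
import numpy as np
from sklearn.cluster import KMeans

def fine_grained_labels(features, identity_labels, k=4):
    """Cluster each identity's image features separately so clusters
    capture intra-person attributes (clothes, viewpoint); cluster labels
    are offset per identity so they are never shared across people."""
    fine_labels = np.empty(len(features), dtype=int)
    offset = 0
    for pid in np.unique(identity_labels):
        idx = np.where(identity_labels == pid)[0]
        n_clusters = min(k, len(idx))
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        fine_labels[idx] = km.fit_predict(features[idx]) + offset
        offset += n_clusters
    return fine_labels

# Toy usage: 2048-D backbone features for 3 identities, 20 images each.
rng = np.random.default_rng(0)
feats = rng.standard_normal((60, 2048)).astype(np.float32)
pids = np.repeat([0, 1, 2], 20)
labels = fine_grained_labels(feats, pids)   # targets for the attribute-aware loss
```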