
    Detection of gfp expression from gfp-labelled bacteria spot inoculated onto sugarcane tissues

    Green fluorescent protein (GFP) as a marker gene has facilitated biological research in plant-microbe interactions. However, one major limiting factor in the detection of GFP in living organisms is background autofluorescence emitted by their cells. In this study, Herbaspirillum sp. B501gfp1 bacterial cells were spot inoculated onto 5-month-old sterile micro-propagated sugarcane tissues to determine whether GFP fluorescence could be distinguished from the tissue's background fluorescence. Stem tissues and leaf sections mounted on glass slides were directly inoculated with a single touch using the tip of a syringe previously dipped into an inoculum containing 10^8 bacterial cells/ml. We observed that GFP fluorescence could be more easily distinguished in the stem than in the leaf tissues. However, the brightness of the fluorescence varied with time as a result of fluctuations in the bacterial cell density. The presence of chloroplasts in the leaf tissues of sugarcane requires the use of bright GFP variants when monitoring bacteria-plant interactions using GFP-labelled bacteria.

    Visio-Linguistic Brain Encoding

    Enabling effective brain-computer interfaces requires understanding how the human brain encodes stimuli across modalities such as vision and language (text). Brain encoding aims at predicting fMRI brain activity given a stimulus. There exists a plethora of neural encoding models that study brain encoding for single-mode stimuli: visual (pretrained CNNs) or text (pretrained language models). A few recent papers have also obtained separate visual and text representations and performed late fusion using simple heuristics. However, previous work has failed to explore: (a) the effectiveness of image Transformer models for encoding visual stimuli, and (b) co-attentive multi-modal modeling for visual and text reasoning. In this paper, we systematically explore the efficacy of image Transformers (ViT, DEiT, and BEiT) and multi-modal Transformers (VisualBERT, LXMERT, and CLIP) for brain encoding. Extensive experiments on two popular datasets, BOLD5000 and Pereira, provide the following insights. (1) To the best of our knowledge, we are the first to investigate the effectiveness of image and multi-modal Transformers for brain encoding. (2) We find that VisualBERT, a multi-modal Transformer, significantly outperforms previously proposed single-mode CNNs, image Transformers, and other previously proposed multi-modal models, thereby establishing a new state of the art. The supremacy of visio-linguistic models raises the question of whether the responses elicited in the visual regions are affected implicitly by linguistic processing even when passively viewing images. Future fMRI tasks can verify this computational insight in an appropriate experimental setting. (Comment: 18 pages, 13 figures)
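    The standard pipeline behind such encoding models is a regularized linear map from stimulus features to voxel responses. The sketch below illustrates the idea with ridge regression on synthetic data; the feature matrix is a hypothetical stand-in for pooled Transformer embeddings (VisualBERT, ViT, etc.), and the per-voxel Pearson correlation on held-out stimuli is the usual evaluation metric — this is a generic illustration, not the paper's exact setup.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical stimulus features: 200 stimuli x 64-dim embeddings
    # (standing in for pooled Transformer representations of images/text).
    X = rng.standard_normal((200, 64))

    # Synthetic "voxel" responses: a linear map of the features plus noise.
    true_W = rng.standard_normal((64, 10))
    Y = X @ true_W + 0.1 * rng.standard_normal((200, 10))

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

    # The encoder: ridge regression predicting voxel activity from features.
    encoder = Ridge(alpha=1.0).fit(X_tr, Y_tr)
    Y_pred = encoder.predict(X_te)

    # Evaluation: Pearson correlation per voxel between predicted and
    # held-out measured responses, averaged across voxels.
    corrs = [np.corrcoef(Y_te[:, v], Y_pred[:, v])[0, 1] for v in range(Y.shape[1])]
    print(round(float(np.mean(corrs)), 3))
    ```

    In practice the features would come from a frozen pretrained network, with one ridge model fit per subject and the regularization strength chosen by cross-validation.
    
    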

    Closed conformal Killing-Yano tensor and geodesic integrability

    Assuming the existence of a single rank-2 closed conformal Killing-Yano tensor with a certain symmetry, we show that there exist mutually commuting rank-2 Killing tensors and Killing vectors. We also discuss the condition of separation of variables for the geodesic Hamilton-Jacobi equations. (Comment: 17 pages, no figures, LaTeX)
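    For reference, the standard defining equation for the object in the title (a textbook definition, not taken from this abstract) can be written as follows. A rank-2 conformal Killing-Yano tensor is an antisymmetric tensor $k_{ab}$ satisfying

    ```latex
    % Conformal Killing-Yano equation for antisymmetric k_{ab},
    % with associated vector \xi defined from the divergence of k:
    \nabla_{(a} k_{b)c} = g_{ab}\,\xi_c - g_{c(a}\,\xi_{b)},
    \qquad
    \xi_c = \frac{1}{D-1}\,\nabla^{b} k_{bc}.
    % The "closed" condition dk = 0 strengthens this to
    \nabla_{a} k_{bc} = g_{ab}\,\xi_c - g_{ac}\,\xi_b .
    ```

    The closed case is the one relevant here: squares and products of such tensors generate the rank-2 Killing tensors whose mutual commutation underlies geodesic integrability.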

    On the Classification of Brane Tilings

    We present a computationally efficient algorithm that can be used to generate all possible brane tilings. Brane tilings represent the largest class of superconformal theories with known AdS duals in 3+1 and also 2+1 dimensions, and have proved useful for describing the physics of both D3-branes and M2-branes probing Calabi-Yau singularities. This algorithm has been implemented and is used to generate all possible brane tilings with at most 6 superpotential terms, including both consistent and inconsistent brane tilings. The collection of inconsistent tilings found in this work forms the most comprehensive study of such objects to date. (Comment: 33 pages, 12 figures, 15 tables)