
    Kernelized Similarity Learning and Embedding for Dynamic Texture Synthesis

    Dynamic texture (DT) exhibits statistical stationarity in the spatial domain and stochastic repetitiveness in the temporal dimension, indicating that different frames of a DT are highly correlated; this correlation is critical prior knowledge. However, existing methods cannot effectively learn a promising synthesis model for high-dimensional DT from a small amount of training data. In this paper, we propose a novel DT synthesis method that makes full use of this similarity prior knowledge to address the issue. Our method is based on the proposed kernel similarity embedding, which not only mitigates the high-dimensionality and small-sample issues but also models nonlinear feature relationships. Specifically, we first raise two hypotheses that are essential for a DT model to generate new frames using similarity correlation. Then, we integrate kernel learning and the extreme learning machine into a unified synthesis model to learn a kernel similarity embedding for representing DT. Extensive experiments on DT videos collected from the internet and two benchmark datasets, i.e., Gatech Graphcut Textures and DynTex, demonstrate that the learned kernel similarity embedding provides an effective discriminative representation for DT. Accordingly, our method preserves the long-term temporal continuity of the synthesized DT sequences with excellent sustainability and generalization. Meanwhile, it generates realistic DT videos with faster speed and lower computation than state-of-the-art methods. The code and more synthesis videos are available at our project page https://shiming-chen.github.io/Similarity-page/Similarit.html. Comment: 13 pages, 12 figures, 2 tables.
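    The abstract does not spell out the synthesis model. As a rough, hypothetical illustration of kernel-based frame synthesis under a similarity prior, the sketch below fits a kernel ridge regressor that maps each frame to its successor and then rolls it forward, so every generated frame is a kernel-weighted combination of training frames. The RBF kernel, the regularization, and all names are assumptions, not the paper's formulation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1e-3):
    # Pairwise RBF (Gaussian) kernel between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_synthesizer(frames, reg=1e-2, gamma=1e-3):
    """Illustrative kernel ridge regression mapping frame t -> frame t+1.

    frames: (T, D) array, each row a flattened video frame.
    Returns the training inputs and the dual coefficients.
    """
    X, Y = frames[:-1], frames[1:]
    K = rbf_kernel(X, X, gamma)                       # frame-to-frame similarity matrix
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), Y)
    return X, alpha

def synthesize(X, alpha, start, steps, gamma=1e-3):
    # Roll the model forward: each new frame is a kernel-weighted
    # combination of the training frames (the similarity prior).
    frame, out = start, []
    for _ in range(steps):
        k = rbf_kernel(frame[None, :], X, gamma)      # (1, T-1) similarities
        frame = (k @ alpha)[0]
        out.append(frame)
    return np.stack(out)
```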

    Efficient Privacy Preserving Viola-Jones Type Object Detection via Random Base Image Representation

    A cloud server spends a great deal of time, energy, and money to train a Viola-Jones type object detector with high accuracy. Clients can upload their photos to the cloud server to detect objects, but a client does not want the content of his/her photos leaked. At the same time, the cloud server is reluctant to leak any parameters of the trained object detector. Ten years ago, Avidan & Butman introduced Blind Vision, a method for securely evaluating a Viola-Jones type object detector. Blind Vision uses standard cryptographic tools and is painfully slow to compute, taking a couple of hours to scan a single image. The purpose of this work is to explore an efficient method that can speed up the process. We propose the Random Base Image (RBI) representation: the original image is divided into random base images, and only the base images are submitted, in random order, to the cloud server, so the content of the image cannot be leaked. Meanwhile, a random vector and the secure Millionaire protocol are leveraged to protect the parameters of the trained object detector. The RBI representation re-enables the integral image, yielding a large acceleration. The experimental results reveal that our method retains the detection accuracy of the plain vision algorithm and is significantly faster than traditional Blind Vision, with only a very low theoretical probability of information leakage. Comment: 6 pages, 3 figures. To appear in the proceedings of the IEEE International Conference on Multimedia and Expo (ICME), Jul 10-14, 2017, Hong Kong.
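    The abstract says only that the image is divided into random base images that are uploaded separately and that sum back to the original; because the integral image is linear, it can then be computed part by part. One plausible additive splitting, purely an assumption for illustration and not necessarily the paper's exact scheme, is sketched below.

```python
import numpy as np

def split_into_base_images(img, k, seed=None):
    """Split an image into k random base images that sum back to the original.

    Illustrative additive splitting only; the paper's decomposition and its
    security analysis may differ. Each part on its own looks noise-like.
    """
    rng = np.random.default_rng(seed)
    img = img.astype(np.float64)
    bases = [rng.uniform(-255.0, 255.0, size=img.shape) for _ in range(k - 1)]
    bases.append(img - sum(bases))       # last part makes the sum exact
    return bases

def reconstruct(bases):
    # Summing all base images recovers the original image.
    return sum(bases)

img = np.random.randint(0, 256, size=(64, 64)).astype(np.float64)
parts = split_into_base_images(img, k=4, seed=0)
assert np.allclose(reconstruct(parts), img)
```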

    Predicting Aesthetic Score Distribution through Cumulative Jensen-Shannon Divergence

    Aesthetic quality prediction is a challenging task in the computer vision community because of the complex interplay of semantic contents and photographic technologies. Recent studies on powerful deep learning based aesthetic quality assessment usually use a binary high/low label or a numerical score to represent aesthetic quality. However, such scalar representations cannot describe well the underlying variety of human aesthetic perception. In this work, we propose to predict the aesthetic score distribution (i.e., a score distribution vector over the ordinal basic human ratings) using a Deep Convolutional Neural Network (DCNN). Conventional DCNNs, which aim to minimize the difference between predicted scalar numbers or vectors and the ground truth, cannot be directly used for the ordinal basic rating distribution. Thus, a novel CNN based on the Cumulative distribution with Jensen-Shannon divergence (CJS-CNN) is presented to predict the aesthetic score distribution of human ratings, together with a new reliability-sensitive learning method based on the kurtosis of the score distribution, which eliminates the need for the original full data of human ratings (without normalization). Experimental results on a large-scale aesthetic dataset demonstrate the effectiveness of the proposed CJS-CNN on this task. Comment: AAAI Conference on Artificial Intelligence (AAAI), New Orleans, Louisiana, USA, 2-7 Feb. 2018.
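    The abstract does not give the exact loss. The sketch below shows one common way to apply the Jensen-Shannon divergence to cumulative score distributions, which is the idea the "CJS" name points to; the paper's precise normalization and its kurtosis-based reliability weighting are not reproduced here.

```python
import numpy as np

def cumulative_js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence applied to cumulative score distributions.

    p, q: 1-D arrays of rating probabilities over ordinal score bins
          (e.g. ratings 1..10), each summing to 1. Sketch of the CJS idea;
    the paper's exact formulation may differ.
    """
    P, Q = np.cumsum(p), np.cumsum(q)         # cumulative distributions
    M = 0.5 * (P + Q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(P, M) + 0.5 * kl(Q, M)

# Example: predicted vs. ground-truth distributions over 10 score bins.
pred = np.full(10, 0.1)
true = np.array([0.0, 0.0, 0.05, 0.1, 0.2, 0.3, 0.2, 0.1, 0.05, 0.0])
print(cumulative_js_divergence(pred, true))
```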

    BGM: Building a Dynamic Guidance Map without Visual Images for Trajectory Prediction

    Visual images usually contain informative context about the environment and thereby help to predict agents' behaviors. However, because their semantics are fixed, they can hardly capture the dynamic effects on agents' actual behaviors. To solve this problem, we propose a deterministic model named BGM that constructs a guidance map to represent the dynamic semantics, avoiding the use of visual images for each agent while reflecting how activities differ across periods. We first record all agents' activities in the scene within a period close to the current time to construct the guidance map, and then feed it to a Context CNN to obtain their context features. We adopt a Historical Trajectory Encoder to extract trajectory features and combine them with the context features as the input of a social-energy-based trajectory decoder, thus obtaining predictions that obey social rules. Experiments demonstrate that BGM achieves state-of-the-art prediction accuracy on the two widely used ETH and UCY datasets and handles more complex scenarios.
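    As a hypothetical illustration of the guidance-map idea described above, the sketch below rasterizes agents' recently observed positions into a normalized 2-D occupancy grid that a context CNN could consume. The grid size, scene extent, and normalization are assumptions, not the paper's settings.

```python
import numpy as np

def build_guidance_map(recent_trajectories, grid_size=(64, 64),
                       scene_extent=((0.0, 20.0), (0.0, 20.0))):
    """Rasterize agents' recent positions into a 2-D occupancy ("guidance") map.

    recent_trajectories: list of (T_i, 2) arrays of (x, y) positions observed
    in the period just before the current time. Illustrative only.
    """
    (xmin, xmax), (ymin, ymax) = scene_extent
    H, W = grid_size
    gmap = np.zeros(grid_size, dtype=np.float64)
    for traj in recent_trajectories:
        xs = np.clip(((traj[:, 0] - xmin) / (xmax - xmin) * (W - 1)).astype(int), 0, W - 1)
        ys = np.clip(((traj[:, 1] - ymin) / (ymax - ymin) * (H - 1)).astype(int), 0, H - 1)
        for x, y in zip(xs, ys):
            gmap[y, x] += 1.0                 # accumulate visit counts
    if gmap.max() > 0:
        gmap /= gmap.max()                    # normalize to [0, 1]
    return gmap
```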

    FIZ1 is part of the regulatory protein complex on active photoreceptor-specific gene promoters in vivo

    Background: FIZ1 (Flt-3 Interacting Zinc-finger) is a broadly expressed protein of unknown function. We reported previously that in the mammalian retina, FIZ1 interacts with NRL (Neural-Retina Leucine-zipper), an essential transcriptional activator of rod photoreceptor-specific genes. The concentration of FIZ1 in the retina increases during photoreceptor terminal maturation, when two key transcription factors, NRL and CRX (Cone-Rod Homeobox), become detectable on the promoters of photoreceptor-specific genes (i.e. Rhodopsin, Pde6b). To determine if FIZ1 is involved in regulating CRX-mediated transcriptional activation, we examined FIZ1 subcellular location in mouse neural retina, its ability to interact with CRX, and its association with CRX/NRL target genes.
    Results: FIZ1 is present in the nucleus of adult photoreceptors as well as other retinal neurons, as shown by transmission electron microscopy with nano-gold labeling. FIZ1 and CRX were co-precipitated from retinal nuclear extracts with antibodies to either protein. Chromatin immunoprecipitation (ChIP) assays revealed that FIZ1 is part of the protein complex on several rod and cone gene promoters within photoreceptor cells of the mouse retina. FIZ1 complexes with CRX or NRL on known NRL- and CRX-responsive elements, as shown by electrophoretic mobility shift assays with FIZ1 antibody. FIZ1 can directly bind to CRX, as demonstrated using yeast two-hybrid and GST pull-down assays. Co-transfection assays demonstrated that FIZ1 increases CRX-mediated activation of Opsin test promoters. Quantitative ChIP analysis revealed an increased association of FIZ1 with the Rhodopsin promoter in adult (P-25) neural retina versus immature (P-3) neural retina. The quantity of transcriptionally active RNA Polymerase-II within the Rhodopsin gene (Rho) was significantly increased in the adult neural retina compared to the immature retina.
    Conclusion: FIZ1 directly interacts with CRX to enhance CRX's transactivation activity for target genes. Developmentally, in neural retina tissue, the increased association of FIZ1 with CRX target genes corresponds to an increased association of transcriptionally active Pol-II within the Rho gene. Together with previous findings, our results suggest that FIZ1 may act as a transcriptional co-regulator of photoreceptor-specific genes, recruited by at least two photoreceptor-specific transcription factors, CRX and NRL. Further studies are underway to elucidate the exact role of FIZ1 in photoreceptor gene expression, development and maintenance.

    Expanding Language-Image Pretrained Models for General Video Recognition

    Contrastive language-image pretraining has shown great success in learning visual-textual joint representations from web-scale data, demonstrating remarkable "zero-shot" generalization ability for various image tasks. However, how to effectively expand such new language-image pretraining methods to video domains is still an open problem. In this work, we present a simple yet effective approach that adapts the pretrained language-image models to video recognition directly, instead of pretraining a new model from scratch. More concretely, to capture the long-range dependencies of frames along the temporal dimension, we propose a cross-frame attention mechanism that explicitly exchanges information across frames. This module is lightweight and can be plugged into pretrained language-image models seamlessly. Moreover, we propose a video-specific prompting scheme, which leverages video content information to generate discriminative textual prompts. Extensive experiments demonstrate that our approach is effective and can be generalized to different video recognition scenarios. In particular, under fully-supervised settings, our approach achieves a top-1 accuracy of 87.1% on Kinetics-400, while using 12 times fewer FLOPs than Swin-L and ViViT-H. In zero-shot experiments, our approach surpasses the current state-of-the-art methods by +7.6% and +14.9% in terms of top-1 accuracy under two popular protocols. In few-shot scenarios, our approach outperforms previous best methods by +32.1% and +23.1% when labeled data is extremely limited. Code and models are available at https://aka.ms/X-CLIP. Comment: Accepted by ECCV 2022 (Oral).
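    As a minimal, assumption-laden sketch of what "exchanging information across frames" can look like, the code below applies single-head attention over per-frame embeddings with a residual connection. The actual X-CLIP module is multi-head, operates inside the pretrained transformer, and differs in detail.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_frame_attention(frame_tokens, Wq, Wk, Wv):
    """Single-head attention letting every frame attend to all other frames.

    frame_tokens: (T, D) per-frame embeddings (e.g. per-frame [CLS] tokens).
    Wq, Wk, Wv:   (D, D) projection matrices. Illustrative sketch only.
    """
    Q, K, V = frame_tokens @ Wq, frame_tokens @ Wk, frame_tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # (T, T) frame-to-frame weights
    return frame_tokens + attn @ V                   # residual message passing

T, D = 8, 64
rng = np.random.default_rng(0)
tokens = rng.normal(size=(T, D))
Wq, Wk, Wv = (rng.normal(size=(D, D)) * 0.05 for _ in range(3))
out = cross_frame_attention(tokens, Wq, Wk, Wv)      # (8, 64) temporally mixed tokens
```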

    CRISPR/Cas9-Based Gene Editing Using Egg Cell-Specific Promoters in Arabidopsis and Soybean

    CRISPR/Cas9-based systems are efficient genome editing tools in a variety of plant species including soybean. Most gene edits in soybean plants are somatic and non-transmissible when Cas9 is expressed under the control of constitutive promoters. Tremendous effort, therefore, must be spent to identify the inheritable edits occurring at lower frequencies in plants of successive generations. Here, we report the development and validation of genome editing systems in soybean and Arabidopsis based on Cas9 driven by four different egg cell-specific promoters. A soybean ubiquitin gene promoter driving expression of green fluorescent protein (GFP) is incorporated in the CRISPR/Cas9 constructs for visually selecting transgenic plants and transgene-evicted edited lines. In Arabidopsis, all four systems produced a collection of mutations in the T2 generation at frequencies ranging from 8.3 to 42.9%, with the egg cell-specific promoter AtEC1.2e1.1p being the highest. In soybean, the function of the gRNAs and of Cas9 expressed under the control of the CaMV double 35S promoter (2x35S) was tested in soybean hairy roots prior to making stable transgenic plants. The 2x35S:Cas9 constructs yielded a high somatic mutation frequency in soybean hairy roots. In stable transgenic soybean T1 plants, AtEC1.2e1.1p:Cas9 yielded a mutation rate of 26.8%, while Cas9 expression driven by the other three egg cell-specific promoters did not produce any detected mutations. Furthermore, the mutations were inheritable in the T2 generation. Our study provides CRISPR gene-editing platforms to generate inheritable mutants of Arabidopsis and soybean without the complication of somatic mutagenesis, which can be used to characterize genes of interest in Arabidopsis and soybean.
