
    Feature-aware uniform tessellations on video manifold for content-sensitive supervoxels

    Over-segmenting a video into supervoxels has strong potential to reduce the complexity of computer vision applications. Content-sensitive supervoxels (CSS) are typically smaller in content-dense regions and larger in content-sparse regions. In this paper, we propose to compute feature-aware CSS (FCSS): regularly shaped 3D primitive volumes that are well aligned with local object/region/motion boundaries in video. To compute FCSS, we map a video to a 3-dimensional manifold, in which the volume elements of the video manifold give a good measure of the video content density. Any uniform tessellation on this manifold then induces CSS. Our idea is that, among all possible uniform tessellations, FCSS finds one whose cell boundaries align well with local video boundaries. To achieve this goal, we propose a novel tessellation method that simultaneously minimizes the tessellation energy and maximizes the average boundary distance. Theoretically, our method has an optimal competitive ratio of O(1). We also present a simple extension of FCSS to streaming FCSS for processing long videos that cannot be loaded into main memory at once. We evaluate FCSS, streaming FCSS, and ten representative supervoxel methods on four video datasets and two novel video applications. The results show that our method simultaneously achieves state-of-the-art performance with respect to various evaluation criteria.
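    The core idea — cells shrink where content density is high and grow where it is low — can be illustrated with a toy 1-D analogue: splitting a density profile into segments of equal cumulative content. This is only a sketch of the content-sensitive principle; the paper's FCSS method operates on a 3-D video manifold and additionally optimizes boundary alignment.

    ```python
    import numpy as np

    def content_sensitive_segments(density, k):
        """Split a 1-D density profile into k segments of (roughly) equal
        cumulative content, so segments are shorter where density is high.
        A toy stand-in for content-sensitive tessellation (names are ours)."""
        cum = np.cumsum(density, dtype=float)
        # place boundaries at equal fractions of the total content
        targets = cum[-1] * np.arange(1, k) / k
        bounds = np.searchsorted(cum, targets)
        return np.concatenate(([0], bounds, [len(density)]))

    # content-dense first half -> shorter segments there
    density = np.concatenate([np.full(50, 4.0), np.full(50, 1.0)])
    seg = content_sensitive_segments(density, 5)
    lengths = np.diff(seg)
    ```

    With this input, the dense half is covered by several short segments while the sparse half falls into one long segment.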

    Ranking-preserving cross-source learning for image retargeting quality assessment

    Image retargeting techniques adjust images into different sizes and have attracted much attention recently. Objective quality assessment (OQA) of image retargeting results is often desired to automatically select the best results. Existing OQA methods train a model using benchmarks (e.g., RetargetMe) in which subjective scores evaluated by users are provided. Observing that it is challenging even for human subjects to give consistent scores for retargeting results of different source images (diff-source-results), in this paper we propose a learning-based OQA method that trains a General Regression Neural Network (GRNN) model based on relative scores — which preserve the ranking — of retargeting results of the same source image (same-source-results). In particular, we develop a novel training scheme with provable convergence that learns a common base scalar for same-source-results. With this source-specific offset, our computed scores not only preserve the ranking of subjective scores for same-source-results, but also provide a reference for comparing diff-source-results. We train and evaluate our GRNN model using human preference data collected in RetargetMe. We further introduce a subjective benchmark to evaluate the generalizability of different OQA methods. Experimental results demonstrate that our method outperforms ten representative OQA methods in ranking prediction and has better generalizability to different datasets.
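    The key property of a source-specific offset is easy to see in isolation: shifting every score of a source image by one constant leaves the within-source ranking untouched while moving scores of different sources onto a comparable scale. The sketch below illustrates only that property; the function name and data are ours, not the paper's API.

    ```python
    import numpy as np

    def add_source_offsets(scores, source_ids, offsets):
        """Shift each predicted score by a per-source scalar offset.
        A constant shift per source preserves the ranking of
        same-source results (toy illustration of the paper's
        source-specific base scalar)."""
        scores = np.asarray(scores, dtype=float)
        return scores + np.array([offsets[s] for s in source_ids])

    scores  = [0.2, 0.8, 0.5, 0.1, 0.9]
    sources = ['a', 'a', 'a', 'b', 'b']
    offsets = {'a': 1.0, 'b': -0.5}
    shifted = add_source_offsets(scores, sources, offsets)
    ```

    The three source-'a' results keep their order (0.2 < 0.5 < 0.8 becomes 1.2 < 1.5 < 1.8), and likewise for source 'b'.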

    Chinese National Income, ca. 1661–1933

    In recent decades, national income has become increasingly important as a measure of a nation’s economic health. In this study, we used a wide array of primary and secondary sources to arrive at values of the Chinese per capita gross domestic product (GDP) during the period of 1661–1933. We found a persistent decline in per capita GDP between the 17th and 19th centuries, followed by a period of stagnation. This pattern, which shows up in many Asian countries with the exception of Japan, provides a basis for improving our understanding of the patterns of global economic convergence and divergence.

    Animating portrait line drawings from a single face photo and a speech signal

    Animating a single face photo is an important research topic which receives considerable attention in computer vision and graphics. Yet portrait line drawings, a longstanding and popular art form, have not been explored much in this area. Simply concatenating a realistic talking-face video generation model with a photo-to-drawing style transfer module suffers from severe inter-frame discontinuity issues. To address this new challenge, we propose a novel framework to generate artistic talking portrait-line-drawing video, given a single face photo and a speech signal. After predicting facial landmark movements from the input speech signal, we propose a novel GAN model to simultaneously handle domain transfer (from photo to drawing) and facial geometry change (according to the predicted facial landmarks). To address the inter-frame discontinuity issues, we propose two novel temporal coherence losses: one based on warping and the other based on a temporal coherence discriminator. Experiments show that our model produces high-quality artistic talking portrait-line-drawing videos and outperforms baseline methods. We also show that our method can be easily extended to other artistic styles and generates good results. The source code is available at https://github.com/AnimatePortrait/AnimatePortrait.
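    A warping-based temporal coherence loss penalizes the difference between the current frame and the previous frame warped by motion. The toy sketch below uses an integer whole-image shift in place of dense optical flow and operates on raw arrays rather than generated drawing frames; the function name and setup are illustrative, not the paper's implementation.

    ```python
    import numpy as np

    def warp_temporal_loss(prev_frame, cur_frame, flow):
        """Warp the previous frame by an integer (dy, dx) shift and
        penalize its mean L1 distance to the current frame -- a toy
        version of a warping-based temporal coherence loss."""
        dy, dx = flow
        warped = np.roll(prev_frame, shift=(dy, dx), axis=(0, 1))
        return float(np.abs(cur_frame - warped).mean())

    prev = np.arange(16, dtype=float).reshape(4, 4)
    cur = np.roll(prev, shift=(1, 2), axis=(0, 1))   # content moved by (1, 2)
    zero_loss = warp_temporal_loss(prev, cur, (1, 2))  # correct motion
    bad_loss = warp_temporal_loss(prev, cur, (0, 0))   # motion ignored
    ```

    When the warp matches the true motion the loss vanishes; ignoring motion leaves a residual, which is exactly what drives the generator toward temporally coherent output.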

    GAN-based multi-style photo cartoonization

    Cartoon is a common form of art in our daily life, and automatic generation of cartoon images from photos is highly desirable. However, state-of-the-art single-style methods can only generate one style of cartoon images from photos, and existing multi-style image style transfer methods still struggle to produce high-quality cartoon images due to the highly simplified and abstract nature of cartoons. In this paper, we propose a novel multi-style generative adversarial network (GAN) architecture, called MS-CartoonGAN, which can transform photos into multiple cartoon styles. We develop a multi-domain architecture, where the generator consists of a shared encoder and multiple decoders for different cartoon styles, along with multiple discriminators for individual styles. Observing that cartoon images drawn by different artists have their unique styles while sharing some common characteristics, our shared network architecture exploits the common characteristics of cartoon styles, achieving better cartoonization and being more efficient than single-style cartoonization. We show that our multi-domain architecture can theoretically guarantee outputting the desired multiple cartoon styles. Through extensive experiments including a user study, we demonstrate the superiority of the proposed method, outperforming state-of-the-art single-style and multi-style image style transfer methods.
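    The shared-encoder / per-style-decoder layout can be sketched functionally: one map produces style-agnostic features, and each style gets its own decoding map. Linear maps stand in for the GAN's convolutional stacks here, and all names and style labels are ours.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Shared encoder + one decoder per cartoon style (toy linear stand-ins).
    ENC = rng.standard_normal((8, 16))           # shared across all styles
    DECODERS = {s: rng.standard_normal((16, 8))  # style-specific heads
                for s in ("style_A", "style_B", "style_C")}

    def cartoonize(photo, style):
        code = photo @ ENC            # common features, computed once per photo
        return code @ DECODERS[style] # decoded differently per style

    photo = rng.standard_normal(8)
    outs = {s: cartoonize(photo, s) for s in DECODERS}
    ```

    The design motivation mirrors the abstract: the encoder amortizes what all cartoon styles share, so adding a style costs only one extra decoder rather than a whole new network.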

    Identifying Hub Genes for Heat Tolerance in Water Buffalo (Bubalus bubalis) Using Transcriptome Data

    Heat stress has a detrimental effect on the physiological and production performance of buffaloes. Elucidating the underlying mechanisms of heat stress is challenging; therefore, identifying candidate genes is urgent and necessary. We evaluated the response of buffaloes (n = 30) to heat stress using physiological parameters, ELISA indexes, and hematological parameters. We then performed mRNA and microRNA (miRNA) expression profile analysis between heat-tolerant (HT, n = 4) and non-heat-tolerant (NHT, n = 4) buffaloes, and identified the specific modules, significant genes, and miRNAs related to heat tolerance using weighted gene co-expression network analysis (WGCNA). The results indicated that the HT buffaloes had a significantly lower rectal temperature (RT) and respiratory rate (RR) and displayed higher plasma heat shock protein (HSP70 and HSP90) and cortisol (COR) levels than the NHT buffaloes. Differential expression analysis identified a total of 753 differentially expressed genes (DEGs) and 16 differentially expressed miRNAs (DEmiRNAs) between HT and NHT. Using WGCNA, these DEGs were assigned to 5 modules, 4 of which were significantly correlated with the heat stress indexes. Interestingly, 158 DEGs associated with heat tolerance were identified in the turquoise module, 35 of which were found within the protein-protein interaction network. Several hub genes (IL18RAP, IL6R, CCR1, PPBP, IL1B, and IL1R1) were identified that were significantly enriched in the cytokine-cytokine receptor interaction pathway. These findings may help further elucidate the underlying mechanisms of heat tolerance in buffaloes.
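    Hub-gene selection in a co-expression network amounts to finding the most highly connected nodes. The sketch below is a drastically simplified stand-in for WGCNA's module and connectivity analysis: it thresholds a gene-gene correlation matrix and ranks genes by degree. Threshold, data, and function name are all illustrative assumptions.

    ```python
    import numpy as np

    def hub_genes(expr, thresh=0.8, top=2):
        """Rank genes by degree in a thresholded gene-gene correlation
        network and return the 'top' most connected (hub) genes.
        expr: (samples, genes) expression matrix. A toy stand-in for
        WGCNA connectivity analysis."""
        corr = np.corrcoef(expr, rowvar=False)
        np.fill_diagonal(corr, 0.0)          # ignore self-correlation
        degree = (np.abs(corr) >= thresh).sum(axis=0)
        return list(np.argsort(degree)[::-1][:top])

    # genes 0-2 co-vary (one module); gene 3 is unrelated
    g0 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    expr = np.column_stack([
        g0,
        2 * g0,
        g0 + np.array([0.1, -0.1, 0.1, -0.1, 0.1]),
        np.array([5.0, 1.0, 4.0, 2.0, 3.0]),
    ])
    hubs = hub_genes(expr)
    ```

    The unrelated gene never appears among the hubs, while the co-varying module members do — the same intuition behind picking IL1B-like hubs from the turquoise module.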