
    The Tissue Response and Degradation of Electrospun Poly(ε-caprolactone)/Poly(trimethylene carbonate) Scaffolds

    Because their mechanical properties and degradation rates can be controlled, electrospun PCL/PTMC nanofibrous scaffolds could be appropriate for vascular tissue engineering. However, the in vivo tissue response and degradation of electrospun PCL/PTMC scaffolds have never been evaluated in detail. Therefore, electrospun PCL/PTMC scaffolds with different blend ratios were prepared in this study. Subcutaneous implantation in mice showed that the continuous degradation of the PCL/PTMC scaffolds induced a lasting macrophage-mediated foreign body reaction, which could favor tissue regeneration in grafts.

    Image Captioning through Image Transformer

    Automatic captioning of images is a task that combines the challenges of image analysis and text generation. One important aspect of captioning is the notion of attention: how to decide what to describe and in which order. Inspired by successes in text analysis and translation, previous work has proposed the transformer architecture for image captioning. However, the structure of the semantic units in images (usually the regions detected by an object detection model) differs from that of sentences (individual words), and limited work has been done to adapt the transformer's internal architecture to images. In this work, we introduce the image transformer, which consists of a modified encoding transformer and an implicit decoding transformer, motivated by the relative spatial relationships between image regions. Our design widens the original transformer layer's inner architecture to adapt to the structure of images. With only region features as inputs, our model achieves new state-of-the-art performance on both the MSCOCO offline and online testing benchmarks. The code is available at https://github.com/wtliao/ImageTransformer
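
    The abstract above describes widening a transformer encoder layer so that self-attention over detected regions reflects their relative spatial relationships. The PyTorch sketch below illustrates that general idea only: several attention branches share the same region features but use different spatial masks, and their outputs are fused. The module name, branch count and mask semantics are assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
# Illustrative sketch, not the authors' code: a "widened" transformer encoder
# layer over detected-region features. Several self-attention branches share
# the same input but use different spatial masks, and their outputs are fused.
import torch
import torch.nn as nn

class SpatialEncoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_branches=3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_branches)
        )
        self.fuse = nn.Linear(n_branches * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, regions, spatial_masks):
        # regions: (B, N, d_model) features of N detected regions
        # spatial_masks: one (N, N) boolean mask per branch; True entries block
        # attention, so each branch attends only over one assumed spatial
        # relation between regions (e.g. neighbouring regions)
        outs = []
        for attn, mask in zip(self.branches, spatial_masks):
            out, _ = attn(regions, regions, regions, attn_mask=mask)
            outs.append(out)
        fused = self.fuse(torch.cat(outs, dim=-1))
        return self.norm(regions + fused)   # residual connection + layer norm
```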

    Geochemical Characteristics of the Lower Cretaceous Hengtongshan Formation in the Tonghua Basin, Northeast China: Implications for Depositional Environment and Shale Oil Potential Evaluation

    The Tonghua Basin in Northeast China potentially contains shale oil and gas resources, but the exploration and development of these resources have been limited. The Sankeyushu depression represents the sedimentary center of the Tonghua Basin, and a large thickness of shale, the Hengtongshan Formation, was deposited in this depression. Exploratory engineering discoveries in recent years have confirmed that the Hengtongshan Formation has the potential to produce oil and gas. A series of methods, including inorganic and organic geochemistry and organic petrology, have been used to study the source material, organic matter maturity, depositional environment and oil-generating potential of the Hengtongshan Formation. Investigation of drill core samples has revealed that the Hengtongshan Formation in the Sankeyushu depression is mainly composed of black shale, with a small amount of plant fossils and thin volcanic rocks, and the content of brittle minerals (quartz + carbonate minerals) is high. The provenance of organic matter in the source rocks of the Hengtongshan Formation is a mixture of aquatic organisms (algae and bacteria) and higher plants, and some marine organic components may be present in some strata. The organic matter was deposited and preserved in a saline reducing environment. Volcanism may have promoted the formation of a reducing environment by stratification of the lake bottom water, and the lake may have experienced a short-term marine ingression with an increase in salinity. The maturity of the organic matter in all the source rocks of the Hengtongshan Formation is relatively high, and hydrocarbons have been generated. Some source rocks may have been affected by volcanism, and the organic matter in these rocks is overmature. In terms of shale oil resource potential, the second member of the Hengtongshan Formation is clearly superior to the other members, with an average total organic carbon (TOC) content of 1.37% and an average hydrogen index (HI) of 560.93 mg HC/g TOC. Most of the samples can be classified as good to very good source rocks with good resource potential, and the second member can be regarded as a potential production stratum. According to the results of geochemical analysis and observations of shale oil and natural gas during drilling, it is predicted that the shale oil is present as a self-sourced reservoir, but the migration range of the natural gas is likely relatively large.
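
    For readers less familiar with Rock-Eval terminology, the hydrogen index quoted above is conventionally derived from the pyrolysis S2 yield normalized by total organic carbon; the standard relation (general background, not a formula stated in this abstract) is

    \[ \mathrm{HI} = 100 \times \frac{S_2\ (\mathrm{mg\ HC/g\ rock})}{\mathrm{TOC}\ (\mathrm{wt.\%})} \quad (\mathrm{mg\ HC/g\ TOC}) \]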

    Disentangled lifespan face synthesis

    A lifespan face synthesis (LFS) model aims to generate a set of photo-realistic face images of a person's whole life, given only one snapshot as reference. The generated face image for a target age code is expected to be age-sensitive, as reflected in bio-plausible transformations of shape and texture, while being identity-preserving. This is extremely challenging because the shape and texture characteristics of a face undergo separate and highly nonlinear transformations with age. Most recent LFS models are based on generative adversarial networks (GANs), whereby age-code-conditional transformations are applied to a latent face representation, and they benefit greatly from recent advances in GANs. However, without explicitly disentangling their latent representations into texture, shape and identity factors, they are fundamentally limited in modeling the nonlinear age-related transformations of texture and shape whilst preserving identity. In this work, a novel LFS model is proposed to disentangle the key face characteristics, including shape, texture and identity, so that the unique shape and texture age transformations can be modeled effectively. This is achieved by extracting shape, texture and identity features separately from an encoder. Critically, two transformation modules, one based on conditional convolution and the other on channel attention, are designed to model the nonlinear shape and texture feature transformations respectively. This accommodates their rather distinct aging processes and ensures that our synthesized images are both age-sensitive and identity-preserving. Extensive experiments show that our LFS model is clearly superior to the state-of-the-art alternatives. Code and a demo are available on our project website: https://senhe.github.io/projects/iccv_2021_lifespan_face
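
    The disentangled design summarized above (separate shape, texture and identity features, with a conditional-convolution module transforming shape and a channel-attention module transforming texture under an age code) can be sketched as follows. This PyTorch snippet illustrates the layout only; the module names, feature sizes and wiring are assumptions, not the authors' implementation (their code is linked above).

```python
# Minimal sketch, assuming 2D feature maps and a learned age code; this is an
# illustration of the described layout, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondConvShape(nn.Module):
    """Age-conditional 1x1 convolution: the kernel is generated from the age
    code and applied per sample, transforming the shape features."""
    def __init__(self, channels=256, age_dim=32):
        super().__init__()
        self.kernel_gen = nn.Linear(age_dim, channels * channels)

    def forward(self, shape_feat, age_code):
        # shape_feat: (B, C, H, W); age_code: (B, age_dim)
        b, c, h, w = shape_feat.shape
        kernels = self.kernel_gen(age_code).view(b * c, c, 1, 1)
        # grouped conv applies a different generated kernel to each sample
        out = F.conv2d(shape_feat.reshape(1, b * c, h, w), kernels, groups=b)
        return out.reshape(b, c, h, w)

class ChannelAttnTexture(nn.Module):
    """Age-conditional channel attention: the age code re-weights texture channels."""
    def __init__(self, channels=256, age_dim=32):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(age_dim, channels), nn.Sigmoid())

    def forward(self, texture_feat, age_code):
        # texture_feat: (B, C, H, W)
        weights = self.gate(age_code).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return texture_feat * weights

# Identity features would be passed to the decoder unchanged, alongside the
# age-transformed shape and texture features.
```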

    Evolution of Biomarker Maturity Parameters and Feedback to the Pyrolysis Process for In Situ Conversion of Nongan Oil Shale in Songliao Basin

    In oil shale in situ conversion projects, an urgent problem is that the degree of reaction of the organic matter cannot be determined directly. The yield and composition of the organic products change regularly at each stage of the oil shale pyrolysis reaction, so understanding the progress of the pyrolysis reaction and the accompanying reservoir changes is very important for in situ conversion projects. In such projects it is difficult to obtain cores by drilling for kerogen maturity testing, and research on judging the progress of subsurface pyrolysis from the maturity of the produced oil has not been carried out in depth. The simulation experiments and geochemical analyses in this study are based on the oil shale of the Nenjiang Formation in the Songliao Basin and on pyrolysis oil samples produced by the in situ conversion project. The study aims to clarify how maturity parameters, such as effective biomarker compounds, evolve in the hydrocarbon products during oil shale pyrolysis and to fit them to the kerogen maturity of the Nenjiang Formation. A response relationship with the pyrolysis process of the oil shale is established, laying a theoretical foundation for the efficient, economical and stable operation of oil shale in situ conversion projects.
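
    As general background on the kind of biomarker maturity parameter referred to above (not necessarily the specific parameters used in this study), a widely used example is the C29 sterane isomerization ratio, which increases with thermal maturity toward an equilibrium value of roughly 0.52-0.55:

    \[ \mathrm{C_{29}\ sterane\ ratio} = \frac{20S}{20S + 20R} \]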

    Rheological Characteristics Evaluation of Bitumen Composites Containing Rock Asphalt and Diatomite

    Previous studies have shown that rock asphalt (RA) or diatomite can be used to modify petroleum bitumen. This paper presents the findings of a study conducted to evaluate the potential impact of RA and diatomite on the rheological characteristics of bitumen composites. RA and diatomite were added to the petroleum bitumen in three formulations by weight: 18% RA, 13% RA + 7% diatomite, and 16% RA + 9% diatomite. The rheological characteristics of the RA- and diatomite-modified bitumens were evaluated using temperature sweep and frequency sweep tests with a dynamic shear rheometer (DSR), a Brookfield rotational viscosity test, and scanning electron microscopy. The research showed that adding RA and diatomite to petroleum bitumen considerably increased the apparent viscosity, dynamic shear modulus, and rutting resistance of the bitumen specimens. However, the DSR tests indicated a slight reduction in the fatigue performance of the RA- and diatomite-modified bitumens. Overall, RA and diatomite are good modifiers for improving the performance of petroleum bitumen.
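
    As background for the DSR results summarized above (general Superpave practice, not formulas stated in this abstract), rutting resistance is commonly characterized by the parameter |G*|/sin δ and fatigue performance by |G*|·sin δ, where G* is the complex shear modulus and δ the phase angle obtained from the temperature and frequency sweeps:

    \[ \text{rutting parameter} = \frac{|G^{*}|}{\sin\delta}, \qquad \text{fatigue parameter} = |G^{*}|\,\sin\delta \]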

    Context-aware layout to image generation with enhanced object appearance

    A layout to image (L2I) generation model aims to generate a complicated image containing multiple objects (things) against a natural background (stuff), conditioned on a given layout. Built upon recent advances in generative adversarial networks (GANs), existing L2I models have made great progress. However, a close inspection of their generated images reveals two major limitations: (1) the object-to-object as well as object-to-stuff relations are often broken, and (2) each object's appearance is typically distorted, lacking the key defining characteristics associated with the object class. We argue that these are caused by the lack of context-aware object and stuff feature encoding in their generators, and of location-sensitive appearance representation in their discriminators. To address these limitations, two new modules are proposed in this work. First, a context-aware feature transformation module is introduced in the generator to ensure that the generated feature encoding of either object or stuff is aware of other coexisting objects/stuff in the scene. Second, instead of feeding location-insensitive image features to the discriminator, we use the Gram matrix computed from the feature maps of the generated object images to preserve location-sensitive information, resulting in much enhanced object appearance. Extensive experiments show that the proposed method achieves state-of-the-art performance on the COCO-Thing-Stuff and Visual Genome benchmarks. Code available at: https://github.com/wtliao/layout2img
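
    The Gram-matrix step described above is concrete enough to sketch: channel correlations are computed from the feature maps of each generated object before being passed to the discriminator. The snippet below shows the standard Gram-matrix computation as a general illustration; it is not the authors' code, which is available at the linked repository.

```python
# Standard Gram-matrix computation over an object's feature maps
# (general technique only, not the authors' implementation).
import torch

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # feat: (B, C, H, W) feature maps of B generated object crops
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    gram = torch.bmm(f, f.transpose(1, 2))   # (B, C, C) channel correlations
    return gram / (c * h * w)                # normalize by feature-map size

# Example: 4 object crops, 256 channels, 16x16 spatial grid
feats = torch.randn(4, 256, 16, 16)
print(gram_matrix(feats).shape)              # torch.Size([4, 256, 256])
```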
