
    MAMBA: Multi-level Aggregation via Memory Bank for Video Object Detection

    State-of-the-art video object detection methods maintain a memory structure, either a sliding window or a memory queue, to enhance the current frame using attention mechanisms. However, we argue that these memory structures are neither efficient nor sufficient because of two implied operations: (1) concatenating all features in memory for enhancement, leading to a heavy computational cost; (2) frame-wise memory updating, preventing the memory from capturing more temporal information. In this paper, we propose a multi-level aggregation architecture via memory bank called MAMBA. Specifically, our memory bank employs two novel operations to eliminate the disadvantages of existing methods: (1) lightweight key-set construction, which significantly reduces the computational cost; (2) a fine-grained feature-wise updating strategy, which enables our method to utilize knowledge from the whole video. To better enhance features from complementary levels, i.e., feature maps and proposals, we further propose a generalized enhancement operation (GEO) to aggregate multi-level features in a unified manner. We conduct extensive evaluations on the challenging ImageNetVID dataset. Compared with existing state-of-the-art methods, our method achieves superior performance in terms of both speed and accuracy. More remarkably, MAMBA achieves an mAP of 83.7/84.6% at 12.6/9.1 FPS with ResNet-101. Code is available at https://github.com/guanxiongsun/vfe.pytorch.
    Comment: update code URL https://github.com/guanxiongsun/vfe.pytorch
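The memory-bank idea described above (attending to a small sampled key set instead of all of memory, and updating individual feature slots rather than evicting whole frames) can be sketched roughly as follows. The class, parameter names, and random sampling policy here are hypothetical illustrations; the actual MAMBA operates on learned CNN features, not random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_enhance(query, keys, values):
    # Scaled dot-product attention: enhance query features with memory features.
    scores = query @ keys.T / np.sqrt(query.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

class MemoryBank:
    def __init__(self, capacity, dim, key_set_size):
        self.capacity = capacity
        self.key_set_size = key_set_size
        self.memory = np.zeros((0, dim))

    def enhance(self, frame_feats):
        if len(self.memory) == 0:
            return frame_feats
        # Lightweight key set: attend to a sampled subset of memory,
        # not the full concatenation, to cut computational cost.
        n = min(self.key_set_size, len(self.memory))
        idx = rng.choice(len(self.memory), size=n, replace=False)
        keys = self.memory[idx]
        return frame_feats + attention_enhance(frame_feats, keys, keys)

    def update(self, frame_feats):
        # Fine-grained feature-wise update: overwrite random individual
        # slots instead of dropping an entire oldest frame, so knowledge
        # from the whole video can persist in memory.
        if len(self.memory) < self.capacity:
            self.memory = np.vstack([self.memory, frame_feats])[: self.capacity]
        else:
            slots = rng.choice(self.capacity, size=len(frame_feats), replace=False)
            self.memory[slots] = frame_feats
```

A per-frame loop would call `enhance` first and then `update` with the current frame's features.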

    A Dark Target Algorithm for the GOSAT TANSO-CAI Sensor in Aerosol Optical Depth Retrieval over Land

    Cloud and Aerosol Imager (CAI) onboard the Greenhouse Gases Observing Satellite (GOSAT) is a multi-band sensor designed to observe and acquire information on clouds and aerosols. In order to retrieve aerosol optical depth (AOD) over land from the CAI sensor, a Dark Target (DT) algorithm for GOSAT CAI was developed based on the strategy of the Moderate Resolution Imaging Spectroradiometer (MODIS) DT algorithm. When retrieving AOD from satellite platforms, determining surface contributions is a major challenge. In the MODIS DT algorithm, surface signals in the visible wavelengths are estimated based on the relationships between visible channels and shortwave infrared (SWIR) near the 2.1 µm channel. However, the CAI only has a 1.6 µm band to cover the SWIR wavelengths. To resolve the difficulties in determining surface reflectance caused by the lack of 2.1 µm band data, we attempted to analyze the relationship between reflectance at 1.6 µm and at 2.1 µm. We did this using the MODIS surface reflectance product and then connecting the reflectances at 1.6 µm and the visible bands based on the empirical relationship between reflectances at 2.1 µm and the visible bands. We found that the reflectance relationship between 1.6 µm and 2.1 µm is typically dependent on the vegetation conditions, and that reflectances at 2.1 µm can be parameterized as a function of 1.6 µm reflectance and the Vegetation Index (VI). Based on our experimental results, an Aerosol Free Vegetation Index (AFRI2.1)-based regression function connecting the 1.6 µm and 2.1 µm bands was summarized. Under light aerosol loading (AOD at 0.55 µm < 0.1), the 2.1 µm reflectance derived by our method has an extremely high correlation with the true 2.1 µm reflectance (r-value = 0.928). Similar to the MODIS DT algorithms (Collection 5 and Collection 6), a CAI-applicable approach that uses AFRI2.1 and the scattering angle to account for the visible surface signals was proposed. It was then applied to the CAI sensor for AOD retrieval; the retrievals were validated by comparisons with ground-level measurements from Aerosol Robotic Network (AERONET) sites. Validations show that retrievals from the CAI have high agreement with the AERONET measurements, with an r-value of 0.922, and 69.2% of the AOD retrieved data falling within the expected error envelope of ±(0.1 + 15% × AOD_AERONET).
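The surface-reflectance step described above, parameterizing 2.1 µm reflectance as a function of 1.6 µm reflectance and a vegetation index, can be illustrated with a toy least-squares fit. The linear form, coefficients, and synthetic samples below are assumptions for illustration only; the paper's actual AFRI2.1-based regression is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical co-located surface reflectance samples standing in for
# the MODIS surface reflectance product used in the paper.
rho_1_6 = rng.uniform(0.05, 0.4, 500)          # 1.6 um reflectance
vi = rng.uniform(0.1, 0.8, 500)                # vegetation index
# Assumed linear relationship plus noise (illustrative coefficients).
rho_2_1_true = 0.7 * rho_1_6 - 0.05 * vi + 0.02 + rng.normal(0, 0.005, 500)

# Fit rho_2.1 ~ a * rho_1.6 + b * VI + c by ordinary least squares.
X = np.column_stack([rho_1_6, vi, np.ones_like(rho_1_6)])
coef, *_ = np.linalg.lstsq(X, rho_2_1_true, rcond=None)

def predict_rho_2_1(r16, v):
    # Estimate the missing 2.1 um band from 1.6 um reflectance and VI.
    return coef[0] * r16 + coef[1] * v + coef[2]
```

With a relationship this clean, the fitted prediction correlates with the "true" 2.1 µm reflectance at r well above the 0.928 the paper reports for light aerosol loading; real data are, of course, noisier.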

    Enhancing the acoustic-to-electrical conversion efficiency of nanofibrous membrane-based triboelectric nanogenerators by nanocomposite composition

    Acoustic energy is difficult to capture and utilise in general. The current work proposes a novel nanofibrous membrane-based (NFM) triboelectric nanogenerator (TENG) that can harvest acoustic energy from the environment. The device is ultra-thin, lightweight, and compact. The electrospun NFM used in the TENG contains three nanocomponents: polyacrylonitrile (PAN), polyvinylidene fluoride (PVDF), and multi-walled carbon nanotubes (MWCNTs). The optimal concentration ratio of the three nanocomponents has been identified for the first time, resulting in higher electric output than a single-component NFM TENG. For an incident sound pressure level of 116 dB at 200 Hz, the optimised NFM TENG can output a maximum open-circuit voltage of over 120 V and a short-circuit current of 30 μA, corresponding to a maximum areal power density of 2.25 W/m². The specific power reached 259 μW/g. The ability to power digital devices is illustrated by lighting up 62 light-emitting diodes in series and powering other devices. The findings may inspire the design of acoustic NFM TENGs comprising multiple nanocomponents, and show that the NFM TENG can promote the utilisation of acoustic energy for many applications, such as microelectronic devices and the Internet of Things.

    3D Cinemagraphy from a Single Image

    We present 3D Cinemagraphy, a new technique that marries 2D image animation with 3D photography. Given a single still image as input, our goal is to generate a video that contains both visual content animation and camera motion. We empirically find that naively combining existing 2D image animation and 3D photography methods leads to obvious artifacts or inconsistent animation. Our key insight is that representing and animating the scene in 3D space offers a natural solution to this task. To this end, we first convert the input image into feature-based layered depth images using predicted depth values, followed by unprojecting them to a feature point cloud. To animate the scene, we perform motion estimation and lift the 2D motion into the 3D scene flow. Finally, to resolve the problem of hole emergence as points move forward, we propose to bidirectionally displace the point cloud as per the scene flow and synthesize novel views by separately projecting them into target image planes and blending the results.        Extensive experiments demonstrate the effectiveness of our method. A user study is also conducted to validate the compelling rendering results of our method.
    Comment: Accepted by CVPR 2023. Project page: https://xingyi-li.github.io/3d-cinemagraphy
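The first two geometric steps above, unprojecting per-pixel depth into a point cloud and then bidirectionally displacing it along the scene flow, can be sketched as follows. The function names, the simple pinhole model, and the linear blend of forward/backward copies are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def unproject(depth, K):
    # Lift each pixel (u, v) with depth d to a 3D point: d * K^{-1} [u, v, 1]^T.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T
    return rays * depth.reshape(-1, 1)

def bidirectional_points(points, flow_3d, t, n_frames):
    # Bidirectional displacement: a copy moved forward by t steps of scene
    # flow and a copy moved backward by (n_frames - t) steps. Holes opened
    # behind one copy tend to be covered by the other when the two
    # projections are blended.
    fwd = points + t * flow_3d
    bwd = points + (t - n_frames) * flow_3d
    return fwd, bwd
```

A renderer would project both point sets into the target camera and blend the two images, weighting by how close `t` is to each end of the clip.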

    Make-It-4D: Synthesizing a Consistent Long-Term Dynamic Scene Video from a Single Image

    We study the problem of synthesizing a long-term dynamic video from only a single image. This is challenging since it requires consistent visual content movements given large camera motions. Existing methods either hallucinate inconsistent perpetual views or struggle with long camera trajectories. To address these issues, it is essential to estimate the underlying 4D (including 3D geometry and scene motion) and fill in the occluded regions. To this end, we present Make-It-4D, a novel method that can generate a consistent long-term dynamic video from a single image. On the one hand, we utilize layered depth images (LDIs) to represent a scene, and they are then unprojected to form a feature point cloud. To animate the visual content, the feature point cloud is displaced based on the scene flow derived from motion estimation and the corresponding camera pose. Such 4D representation enables our method to maintain the global consistency of the generated dynamic video. On the other hand, we fill in the occluded regions by using a pretrained diffusion model to inpaint and outpaint the input image. This enables our method to work under large camera motions. Benefiting from our design, our method can be training-free which saves a significant amount of training time. Experimental results demonstrate the effectiveness of our approach, which showcases compelling rendering results.
    Comment: accepted by ACM MM'2
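The rendering side of such a pipeline, projecting the displaced point cloud into a target camera and flagging the uncovered pixels that a pretrained diffusion model would then inpaint or outpaint, might look like the minimal sketch below. The function names and the nearest-pixel splatting are illustrative assumptions.

```python
import numpy as np

def project(points_world, R, t, K):
    # Project 3D points into a target camera: x_cam = R p + t, x_pix = K x_cam,
    # then divide by depth (pinhole model).
    cam = points_world @ R.T + t
    pix = cam @ K.T
    return pix[:, :2] / pix[:, 2:3], cam[:, 2]

def occlusion_mask(pix, depth, h, w):
    # Nearest-pixel splat: mark image pixels that receive no projected point.
    # These holes are exactly what a diffusion-based inpainting/outpainting
    # step would fill in.
    covered = np.zeros((h, w), dtype=bool)
    u = np.round(pix[:, 0]).astype(int)
    v = np.round(pix[:, 1]).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (depth > 0)
    covered[v[ok], u[ok]] = True
    return ~covered
```

A full renderer would also z-buffer points that land on the same pixel; this sketch only identifies which regions need hallucinated content.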

    De novo sequencing and comparative transcriptome analysis of white petals and red labella in Phalaenopsis for discovery of genes related to flower color and floral differentiation

    Phalaenopsis is one of the world’s most popular and important epiphytic monopodial orchids. The extraordinary floral diversity of Phalaenopsis is a reflection of its evolutionary success. As a consequence of this diversity, and of the complexity of flower color development in Phalaenopsis, this species is a valuable research material for developmental biology studies. Nevertheless, research on the molecular mechanisms underlying flower color and floral organ formation in Phalaenopsis is still in the early phases. In this study, we generated large amounts of data from Phalaenopsis flowers by combining Illumina sequencing with differentially expressed gene (DEG) analysis. We obtained 37 723 and 34 020 unigenes from petals and labella, respectively. A total of 2736 DEGs were identified, and the functions of many DEGs were annotated by BLAST-searching against several public databases. We mapped 837 up-regulated DEGs (432 from petals and 405 from labella) to 102 Kyoto Encyclopedia of Genes and Genomes pathways. Almost all pathways were represented in both petals (102 pathways) and labella (99 pathways). DEGs involved in energy metabolism were significantly differentially distributed between labella and petals, and various DEGs related to flower color and floral differentiation were found in the two organs. Interestingly, we also identified genes encoding several key enzymes involved in carotenoid synthesis. These genes were differentially expressed between petals and labella, suggesting that carotenoids may influence Phalaenopsis flower color. We thus conclude that a combination of anthocyanins and/or carotenoids determines flower color formation in Phalaenopsis. These results broaden our understanding of the mechanisms controlling flower color and floral organ differentiation in Phalaenopsis and other orchids
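The DEG-screening step mentioned above can be illustrated with a minimal counts-per-million fold-change filter. This is only a toy sketch with hypothetical thresholds; real DEG analyses (e.g., edgeR or DESeq2, commonly used with Illumina data) also model biological dispersion and test statistical significance.

```python
import numpy as np

def find_degs(counts_petal, counts_labellum, min_log2fc=1.0):
    # Normalize raw read counts to counts-per-million (CPM) so libraries
    # of different sequencing depth are comparable.
    cpm_p = counts_petal / counts_petal.sum() * 1e6
    cpm_l = counts_labellum / counts_labellum.sum() * 1e6
    # Log2 fold change with a pseudocount of 1 to avoid division by zero.
    log2fc = np.log2((cpm_l + 1) / (cpm_p + 1))
    # Keep genes whose expression differs at least 2-fold between organs.
    deg_idx = np.where(np.abs(log2fc) >= min_log2fc)[0]
    return deg_idx, log2fc
```

Genes passing the filter would then be annotated (e.g., by BLAST against public databases) and mapped to KEGG pathways, as described in the abstract.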
