231 research outputs found

    STM32-based music player design

    Get PDF
    This design studies a music player based on the STM32 microcontroller. It is built around the STM32F103C8T6 single-chip microcomputer together with an LCD1602 display, an LM386 power-amplifier module, a blue-and-white potentiometer, an XY-V7B serial-port voice module, a TF card, a speaker, and key buttons. In addition to the hardware, software is required for system control. By combining software and hardware, the system implements status display, music playback, track switching, and volume up/down. The chapter details the individual software and hardware components and how to use them. The design streamlines the display interaction, making the functions more user-friendly and aiming at convenience and cost-effectiveness, and thus provides a more convenient form of music player

    Take a Prior from Other Tasks for Severe Blur Removal

    Full text link
    Recovering clear structures from severely blurred inputs is a challenging problem due to the large movements between the camera and the scene. Although some works apply segmentation maps to human face images for deblurring, they cannot handle natural scenes, where objects and degradation are more complex and inaccurate segmentation maps lead to a loss of detail. For general scene deblurring, the features of a blurry image and its corresponding sharp image are closer to each other under a high-level vision task, which inspires us to rely on other tasks (e.g. classification) to learn a comprehensive prior for severe blur removal. We propose a cross-level feature learning strategy based on knowledge distillation to learn the priors, which include global contexts and sharp local structures for recovering potential details. In addition, we propose a semantic prior embedding layer with multi-level aggregation and semantic attention transformation to integrate the priors effectively. We introduce the proposed priors into various models, including UNet and other mainstream deblurring baselines, leading to better performance on severe blur removal. Extensive experiments on natural image deblurring benchmarks and real-world images, such as the GoPro and RealBlur datasets, demonstrate our method's effectiveness and generalization ability
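    The cross-level distillation idea above can be sketched as a feature-matching loss: a fixed, pretrained classification network (the teacher) extracts features from the sharp ground truth, and the deblurring network (the student) is penalized for deviating from them at each level. This is a minimal NumPy sketch of that generic loss, not the paper's exact formulation; the function name and pyramid shapes are illustrative assumptions.

```python
import numpy as np

def distillation_prior_loss(student_feats, teacher_feats):
    """Cross-level feature-matching (distillation) loss.

    student_feats: per-level features the deblurring network computes
    from the blurry input; teacher_feats: per-level features a frozen,
    pretrained classifier computes from the sharp image. Both are
    lists of arrays with matching shapes, one entry per pyramid level.
    """
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        loss += np.mean((s - t) ** 2)  # L2 match at this level
    return loss / len(student_feats)   # average over levels

# toy check: identical features give zero loss
rng = np.random.default_rng(0)
feats = [rng.standard_normal((4, 8, 8)) for _ in range(3)]
print(distillation_prior_loss(feats, feats))  # -> 0.0
```

    In practice the teacher is frozen and only the student receives gradients from this term, alongside the usual reconstruction loss.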

    Band Structure Engineering of Interfacial Semiconductors Based on Atomically Thin Lead Iodide Crystals

    Full text link
    Exploring new constituents of two-dimensional materials and combining their best properties in van der Waals heterostructures are in great demand, as such heterostructures are a unique platform for discovering new physical phenomena and designing novel functionalities in interface-based devices. Herein, PbI2 crystals as thin as a few layers are synthesized for the first time, through a facile low-temperature solution approach that yields crystals of large size, regular shape, varied thickness, and high yield. As a prototypical demonstration of flexible band engineering of PbI2-based interfacial semiconductors, these PbI2 crystals are subsequently assembled with several transition metal dichalcogenide monolayers. The photoluminescence of MoS2 is strongly enhanced in MoS2/PbI2 stacks, while a dramatic photoluminescence quenching of WS2 and WSe2 is revealed in WS2/PbI2 and WSe2/PbI2 stacks. This is attributed to effective heterojunction formation between PbI2 and these monolayers: type I band alignment in MoS2/PbI2 stacks, where fast-transferred charge carriers accumulate in MoS2 with high emission efficiency, and type II alignment in WS2/PbI2 and WSe2/PbI2 stacks, with separated electrons and holes suitable for light harvesting. Our results demonstrate that MoS2, WS2, and WSe2 monolayers, despite their very similar electronic structures, show completely distinct light-matter interactions when interfacing with PbI2, providing unprecedented capabilities to engineer the device performance of two-dimensional heterostructures. Comment: 36 pages, 5 figures

    Learning from Multi-Perception Features for Real-World Image Super-resolution

    Full text link
    Currently, there are two popular approaches for addressing real-world image super-resolution problems: degradation-estimation-based and blind-based methods. However, degradation-estimation-based methods may be inaccurate in estimating the degradation, making them less applicable to real-world LR images. On the other hand, blind-based methods are often limited by their fixed single perception information, which hinders their ability to handle diverse perceptual characteristics. To overcome this limitation, we propose a novel SR method called MPF-Net that leverages multiple perceptual features of input images. Our method incorporates a Multi-Perception Feature Extraction (MPFE) module to extract diverse perceptual information and a series of newly-designed Cross-Perception Blocks (CPB) to combine this information for effective super-resolution reconstruction. Additionally, we introduce a contrastive regularization term (CR) that improves the model's learning capability by using newly generated HR and LR images as positive and negative samples for ground truth HR. Experimental results on challenging real-world SR datasets demonstrate that our approach significantly outperforms existing state-of-the-art methods in both qualitative and quantitative measures
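    The contrastive regularization term described above can be illustrated in its common ratio form: the loss is small when the reconstruction sits close to the HR positive and far from the LR negative. This is a hedged NumPy sketch under simplifying assumptions; the paper presumably measures distances in a learned feature space, which this sketch replaces with plain L1 on pixels.

```python
import numpy as np

def contrastive_regularization(sr, hr, lr_up, eps=1e-8):
    """Ratio-form contrastive term for super-resolution.

    sr:    the network's reconstruction (the anchor)
    hr:    high-resolution positive sample
    lr_up: upsampled low-resolution negative sample
    Small when sr is near hr and far from lr_up.
    """
    d_pos = np.mean(np.abs(sr - hr))     # pull toward the positive
    d_neg = np.mean(np.abs(sr - lr_up))  # push away from the negative
    return d_pos / (d_neg + eps)
```

    A usage check: a reconstruction near HR scores lower (better) than one near the LR negative, which is exactly the ordering the regularizer rewards.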

    Fumigant activity and transcriptomic analysis of two plant essential oils against the tea green leafhopper, Empoasca onukii Matsuda

    Get PDF
    Introduction: The tea green leafhopper, Empoasca (Matsumurasca) onukii Matsuda, R., 1952 (Hemiptera: Cicadellidae), is currently one of the most devastating pests of the Chinese tea industry. The long-term use of chemical pesticides has a negative impact on human health and impedes the healthy and sustainable development of the tea industry in this region. Therefore, non-chemical insecticides are needed to control E. onukii in tea plants. Plant essential oils have been identified for their potential insecticidal ability; however, little is known about the effect of plant essential oils on E. onukii and its gene expression. Methods: To address these knowledge gaps, the components of Pogostemon cablin and Cinnamomum camphora essential oils were analyzed in the present study using gas chromatography-mass spectrometry. The fumigation toxicity of the two essential oils against E. onukii was tested in sealed conical flasks. In addition, we performed comparative transcriptome analyses of E. onukii treated with or without P. cablin essential oil. Results: The 36-h median lethal concentration (LC50) values for E. onukii treated with P. cablin and C. camphora essential oils were 0.474 and 1.204 μL mL⁻¹, respectively. Both essential oils showed potential to control E. onukii, but the fumigation activity of P. cablin essential oil was more effective. A total of 2,309 differentially expressed genes were obtained by transcriptome sequencing of E. onukii treated with P. cablin essential oil. Conclusion: Many of the differentially expressed genes belong to detoxification gene families, indicating that these families may have played an important role when E. onukii was exposed to essential oil stress. We also found differential expression of redox-related gene families, suggesting the upregulation of genes associated with possible development of drug and stress resistance. This work offers new insights into the prevention and management of E. onukii in the future
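    LC50 values like those reported above are estimated from dose-mortality data. The paper likely uses a formal probit or logit analysis; as a rough illustration of the underlying idea only, one can linearly interpolate the 50% mortality point on the log-dose scale. The function name and the toy dose-response numbers below are invented for demonstration.

```python
import numpy as np

def lc50_log_interp(doses, mortality):
    """Estimate LC50 by linear interpolation on log10(dose).

    doses:     increasing concentrations tested
    mortality: observed mortality fractions (must be increasing)
    Interpolates where mortality crosses 0.5 and maps back from
    the log scale to a concentration.
    """
    log_doses = np.log10(doses)
    return 10 ** np.interp(0.5, mortality, log_doses)

# toy dose-response data (illustrative, not from the paper)
doses = np.array([0.1, 0.3, 1.0, 3.0])
mortality = np.array([0.10, 0.35, 0.70, 0.95])
lc50 = lc50_log_interp(doses, mortality)  # falls between 0.3 and 1.0
```

    Probit analysis additionally fits the whole dose-response curve and yields confidence intervals, which simple interpolation cannot provide.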

    Learning to Fuse Monocular and Multi-view Cues for Multi-frame Depth Estimation in Dynamic Scenes

    Full text link
    Multi-frame depth estimation generally achieves high accuracy relying on the multi-view geometric consistency. When applied in dynamic scenes, e.g., autonomous driving, this consistency is usually violated in the dynamic areas, leading to corrupted estimations. Many multi-frame methods handle dynamic areas by identifying them with explicit masks and compensating the multi-view cues with monocular cues represented as local monocular depth or features. The improvements are limited due to the uncontrolled quality of the masks and the underutilized benefits of the fusion of the two types of cues. In this paper, we propose a novel method to learn to fuse the multi-view and monocular cues encoded as volumes without needing the heuristically crafted masks. As unveiled in our analyses, the multi-view cues capture more accurate geometric information in static areas, and the monocular cues capture more useful contexts in dynamic areas. To let the geometric perception learned from multi-view cues in static areas propagate to the monocular representation in dynamic areas and let monocular cues enhance the representation of multi-view cost volume, we propose a cross-cue fusion (CCF) module, which includes the cross-cue attention (CCA) to encode the spatially non-local relative intra-relations from each source to enhance the representation of the other. Experiments on real-world datasets prove the significant effectiveness and generalization ability of the proposed method. Comment: Accepted by CVPR 2023. Code and models are available at: https://github.com/ruili3/dynamic-multiframe-dept
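    The cross-cue attention described above can be pictured as one cue querying the other: features from the monocular volume attend over features from the multi-view cost volume (and vice versa in the other direction). This is a schematic single-head dot-product attention sketch over flattened token matrices, assuming invented shapes and names; the paper's CCA operates on full depth volumes and is more elaborate.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_cue_attention(feats_a, feats_b):
    """One direction of cross-cue attention.

    feats_a: (N, d) tokens from cue A (e.g. the monocular volume)
    feats_b: (M, d) tokens from cue B (e.g. the multi-view cost volume)
    Each A position aggregates non-local information from all B
    positions, weighted by scaled dot-product similarity.
    """
    d = feats_b.shape[1]
    attn = softmax(feats_a @ feats_b.T / np.sqrt(d))  # (N, M) weights
    return attn @ feats_b                             # (N, d) enhanced A

rng = np.random.default_rng(1)
mono = rng.standard_normal((5, 4))   # 5 positions, 4 channels
multi = rng.standard_normal((7, 4))  # 7 positions, 4 channels
enhanced = cross_cue_attention(mono, multi)  # shape (5, 4)
```

    Running the module in both directions and fusing the two enhanced representations mirrors the mutual-enhancement role the CCF module plays in the paper.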