
    Certifying randomness in quantum state collapse

    The unpredictable process of state collapse caused by quantum measurements makes the generation of quantum randomness possible. In this paper, we explore the quantitative connection between randomness generation and state collapse, and provide a randomness verification protocol under two assumptions: (I) independence between the source and the measurement devices, and (II) the Lüders rule for the collapsing state. Without involving heavy mathematical machinery, the amount of generated quantum randomness can be directly estimated from the disturbance effect originating from the state collapse. In the protocol, we can employ general measurements that are not fully trusted. Equipped with trusted projection measurements, we can further optimize the randomness generation performance. Our protocol also shows high efficiency and yields a higher randomness generation rate than the one based on the uncertainty relation. We expect our results to provide new insights into understanding and generating quantum randomness.
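    For reference, assumption (II) is the textbook Lüders update rule for the post-measurement state; the sketch below states it for a projective measurement (standard background, not a reproduction of the paper's protocol quantities):

```latex
% Lüders rule: conditional state after obtaining outcome k of a projective
% measurement {\Pi_k}, together with the outcome probability p(k).
\rho \;\longmapsto\; \rho_k
  = \frac{\Pi_k\, \rho\, \Pi_k}{\operatorname{Tr}(\Pi_k\, \rho)},
\qquad
p(k) = \operatorname{Tr}(\Pi_k\, \rho).
```

    One natural measure of the collapse-induced disturbance (the paper's precise estimator is not reproduced here) compares \(\rho\) with the unread post-measurement ensemble \(\sum_k \Pi_k \rho\, \Pi_k\).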

    MoVideo: Motion-Aware Video Generation with Diffusion Models

    While recent years have witnessed great progress in using diffusion models for video generation, most of them are simple extensions of image generation frameworks that fail to explicitly consider one of the key differences between videos and images: motion. In this paper, we propose a novel motion-aware video generation (MoVideo) framework that takes motion into consideration from two aspects: video depth and optical flow. The former regulates motion via per-frame object distances and spatial layouts, while the latter describes motion via cross-frame correspondences that help preserve fine details and improve temporal consistency. More specifically, given a key frame that exists or is generated from a text prompt, we first design a diffusion model with spatio-temporal modules to generate the video depth and the corresponding optical flows. Then, the video is generated in the latent space by another spatio-temporal diffusion model under the guidance of the depth, the optical flow-based warped latent video, and the calculated occlusion mask. Lastly, we use the optical flows again to align and refine different frames for better video decoding from the latent space to the pixel space. In experiments, MoVideo achieves state-of-the-art results in both text-to-video and image-to-video generation, showing promising prompt consistency, frame consistency and visual quality.
    Comment: project homepage: https://jingyunliang.github.io/MoVide
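    A minimal sketch of the two guidance signals the abstract names: backward warping of a latent frame by optical flow, and an occlusion mask from forward-backward flow consistency. All shapes and names here are assumptions for illustration, not the authors' released code:

```python
# Sketch (assumed shapes): warp a latent frame by optical flow and estimate an
# occlusion mask via forward-backward flow consistency.
import torch
import torch.nn.functional as F

def backward_warp(latent, flow):
    """latent: (B,C,H,W); flow: (B,2,H,W) pixel offsets, target -> source."""
    b, _, h, w = latent.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(latent.device)  # (2,H,W)
    coords = grid.unsqueeze(0) + flow                  # absolute sample positions
    # normalize to [-1, 1] as required by grid_sample
    cx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    cy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_n = torch.stack((cx, cy), dim=-1)             # (B,H,W,2)
    return F.grid_sample(latent, grid_n, mode="bilinear", align_corners=True)

def occlusion_mask(flow_fw, flow_bw, thresh=1.0):
    """Mark pixels where forward and backward flows disagree (likely occluded)."""
    flow_bw_warped = backward_warp(flow_bw, flow_fw)
    err = (flow_fw + flow_bw_warped).norm(dim=1, keepdim=True)
    return (err < thresh).float()                      # 1 = visible, 0 = occluded
```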

    Concurrent and lagged effects of drought on grassland net primary productivity: a case study in Xinjiang, China

    Xinjiang grasslands play a crucial role in regulating the regional carbon cycle and maintaining ecosystem stability, and grassland net primary productivity (NPP) is highly vulnerable to drought. Drought events are frequent in Xinjiang due to the impact of global warming. However, systematic results on how Xinjiang grassland NPP responds to drought, and on how heterogeneous that response is, are still lacking. In this study, the CASA (Carnegie Ames Stanford Application) model was used to simulate Xinjiang grassland NPP for 1982–2020, and the Standardized Precipitation Evapotranspiration Index (SPEI) was calculated from meteorological station data to characterize drought. The spatial and temporal variability of NPP and drought in Xinjiang grasslands from 1982 to 2020 was analyzed with Sen's trend method and the Mann-Kendall test, and the response of NPP to drought was investigated by correlation analysis. The results showed that (1) the overall trend of NPP in Xinjiang grassland was increasing, with values ordered growing season > summer > spring > autumn. Mild drought occurred most frequently in the growing season and autumn, and moderate drought occurred most frequently in spring. (2) For 64.63% of the grassland area, drought had a mainly concurrent effect on NPP; these grasslands were primarily located in northern Xinjiang. The concurrent effect of drought on NPP was strongest in plain grassland and weakest in alpine-subalpine grassland. (3) The lagged effect occurred mainly in the southern grasslands: the NPP of alpine-subalpine meadows, meadows, and alpine-subalpine grasslands responded to drought mainly with a 1-month lag, while desert grassland NPP responded mainly with a 3-month lag. This research can provide a reliable theoretical basis for regional sustainable development.
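    The concurrent-versus-lagged analysis can be pictured with a small sketch: correlate a monthly NPP series against SPEI shifted by 0-3 months and report the lag with the strongest Pearson correlation (lag 0 indicating a concurrent effect). The series names and the 0-3 month window are illustrative assumptions, not the study's code:

```python
# Sketch: find the dominant lag of the NPP response to drought by maximizing
# the Pearson correlation between monthly NPP and lag-shifted SPEI.
import numpy as np

def dominant_lag(npp, spei, max_lag=3):
    """npp, spei: 1-D monthly series of equal length. Returns (best_lag, r)."""
    best = (0, -np.inf)
    for lag in range(max_lag + 1):
        x = spei[: len(spei) - lag]        # drought series, shifted back by `lag`
        y = npp[lag:]                      # NPP observed `lag` months later
        r = np.corrcoef(x, y)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best                            # lag 0 => concurrent effect dominates

# Example with synthetic data: NPP tracking SPEI with a 1-month delay.
rng = np.random.default_rng(0)
spei = rng.normal(size=240)                          # 20 years of months
npp = np.r_[0.0, spei[:-1]] + 0.3 * rng.normal(size=240)
print(dominant_lag(npp, spei))                       # expect best_lag == 1
```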

    STAIBT: Blockchain and CP-ABE Empowered Secure and Trusted Agricultural IoT Blockchain Terminal

    The integration of the agricultural Internet of Things (IoT) and blockchain has become a key technology of precision agriculture. How to protect data privacy and security from the data source onward is one of the difficult issues in agricultural IoT research. This work integrates cryptography, blockchain and InterPlanetary File System (IPFS) technologies and proposes a general IoT blockchain terminal system architecture that strongly supports the integration of IoT and blockchain technology. The research innovatively designs a fine-grained and flexible terminal data access control scheme based on the ciphertext-policy attribute-based encryption (CP-ABE) algorithm. Based on the CP-ABE and DES algorithms, a hybrid data encryption scheme is designed to realize 1-to-N encrypted data sharing. A "horizontal + vertical" IoT data segmentation scheme under blockchain technology is proposed to realize the classified release of different types of data on the blockchain. The experimental results show that the design can ensure data access control security, the confidentiality of private data, and high data availability. The solution significantly reduces the complexity of key management, enables efficient sharing of encrypted data, allows access control policies to be set flexibly, and supports the storage of large data files in the agricultural IoT.
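    A structural sketch of the hybrid 1-to-N pattern described above, assuming the usual division of labor: the payload is encrypted under a fresh symmetric key (DES, per the abstract), and that key is then wrapped under a CP-ABE access policy so every user whose attributes satisfy the policy can recover it. `cpabe_encrypt` and `encrypt_record` are hypothetical names, and the CP-ABE step is a placeholder since it requires a dedicated library:

```python
# Sketch of hybrid CP-ABE + symmetric encryption (assumed shape, not the
# paper's implementation). Uses pycryptodome for the symmetric part.
from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad

def cpabe_encrypt(sym_key: bytes, policy: str) -> bytes:
    """Hypothetical placeholder: a real deployment would call a CP-ABE
    library (e.g. charm-crypto) to encrypt sym_key under `policy`."""
    raise NotImplementedError("requires a CP-ABE library")

def encrypt_record(plaintext: bytes, policy: str) -> dict:
    sym_key = get_random_bytes(8)                    # DES uses 64-bit keys
    cipher = DES.new(sym_key, DES.MODE_CBC)
    ct = cipher.encrypt(pad(plaintext, DES.block_size))
    wrapped_key = cpabe_encrypt(sym_key, policy)     # one wrap serves N readers
    # In the described architecture the ciphertext would be pinned to IPFS
    # and only its content hash published on the blockchain.
    return {"iv": cipher.iv, "ct": ct, "wrapped_key": wrapped_key}
```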

    Learning Task-Oriented Flows to Mutually Guide Feature Alignment in Synthesized and Real Video Denoising

    Video denoising aims at removing noise from videos to recover clean ones. Some existing works show that optical flow can help denoising by exploiting additional spatial-temporal clues from nearby frames. However, flow estimation itself is also sensitive to noise and can become unusable at large noise levels. To this end, we propose a new multi-scale refined optical flow-guided video denoising method that is more robust to different noise levels. Our method mainly consists of a denoising-oriented flow refinement (DFR) module and a flow-guided mutual denoising propagation (FMDP) module. Unlike previous works that directly use off-the-shelf flow solutions, DFR first learns robust multi-scale optical flows, and FMDP makes use of the flow guidance by progressively introducing and refining more flow information from low resolution to high resolution. Together with real noise degradation synthesis, the proposed multi-scale flow-guided denoising network achieves state-of-the-art performance on both synthetic Gaussian denoising and real video denoising. The code will be made publicly available.
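    One way to picture the "progressively introducing and refining more flow information from low resolution to high resolution" step is the standard coarse-to-fine flow upsampling below (a generic sketch under assumed shapes, not the DFR/FMDP internals):

```python
# Sketch: coarse-to-fine optical flow guidance. A flow estimated at low
# resolution is upsampled and rescaled, then refined at the next level.
import torch
import torch.nn.functional as F

def upsample_flow(flow, scale=2):
    """flow: (B,2,h,w) in pixels; returns (B,2,h*scale,w*scale).
    Flow vectors are multiplied by `scale` because pixel displacements
    grow with resolution."""
    up = F.interpolate(flow, scale_factor=scale, mode="bilinear",
                       align_corners=False)
    return up * scale

# At each pyramid level a refinement network (omitted here) would predict a
# residual: flow_fine = upsample_flow(flow_coarse) + residual.
```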

    Towards Interpretable Video Super-Resolution via Alternating Optimization

    In this paper, we study a practical space-time video super-resolution (STVSR) problem which aims at generating a high-framerate, high-resolution sharp video from a low-framerate, low-resolution blurry video. Such a problem often occurs when recording a fast dynamic event with a low-framerate, low-resolution camera, and the captured video suffers from three typical issues: i) motion blur occurs due to object/camera motions during the exposure time; ii) motion aliasing is unavoidable when the event's temporal frequency exceeds the Nyquist limit of the temporal sampling; iii) high-frequency details are lost because of the low spatial sampling rate. These issues can be alleviated by a cascade of three separate sub-tasks, namely video deblurring, frame interpolation, and super-resolution, which, however, fails to capture the spatial and temporal correlations within video sequences. To address this, we propose an interpretable STVSR framework that leverages both model-based and learning-based methods. Specifically, we formulate STVSR as a joint video deblurring, frame interpolation, and super-resolution problem, and solve it as two sub-problems in an alternating way. For the first sub-problem, we derive an interpretable analytical solution and use it as a Fourier data transform layer. Then, we propose a recurrent video enhancement layer for the second sub-problem to further recover high-frequency details. Extensive experiments demonstrate the superiority of our method in terms of quantitative metrics and visual quality.
    Comment: ECCV 202
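    For the flavor of an "interpretable analytical solution" used as a Fourier data transform layer, the closed-form solution of a generic Fourier-domain data subproblem looks as follows (a textbook sketch; the paper's actual formulation, which also couples interpolation and super-resolution, may differ):

```python
# Closed-form solution of  min_x ||k * x - y||^2 + lam * ||x - z||^2,
# where * is circular convolution, obtained by solving the normal equations
# in the Fourier domain.
import numpy as np

def fourier_data_solve(y, k, z, lam):
    """y: blurry frame, k: blur kernel zero-padded to y.shape,
    z: current estimate from the other sub-problem, lam: penalty weight."""
    Fk = np.fft.fft2(k)
    Fy = np.fft.fft2(y)
    Fz = np.fft.fft2(z)
    Fx = (np.conj(Fk) * Fy + lam * Fz) / (np.abs(Fk) ** 2 + lam)
    return np.real(np.fft.ifft2(Fx))
```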

    Practical Blind Denoising via Swin-Conv-UNet and Data Synthesis

    While recent years have witnessed a dramatic upsurge in exploiting deep neural networks for image denoising, existing methods mostly rely on simple noise assumptions, such as additive white Gaussian noise (AWGN), JPEG compression noise and camera sensor noise, and a general-purpose blind denoising method for real images remains unsolved. In this paper, we attempt to solve this problem from the perspectives of network architecture design and training data synthesis. Specifically, for the network architecture design, we propose a swin-conv block that incorporates the local modeling ability of the residual convolutional layer and the non-local modeling ability of the swin transformer block, and plug it as the main building block into the widely used image-to-image translation UNet architecture. For the training data synthesis, we design a practical noise degradation model that takes into consideration different kinds of noise (including Gaussian, Poisson, speckle, JPEG compression, and processed camera sensor noises) and resizing, and also involves a random shuffle strategy and a double degradation strategy. Extensive experiments on AWGN removal and real image denoising demonstrate that the new network architecture design achieves state-of-the-art performance and the new degradation model can help to significantly improve practicability. We believe our work can provide useful insights into current denoising research.
    Comment: Codes: https://github.com/cszn/SCUNe
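    A hedged sketch of what such a degradation pipeline with a random shuffle strategy can look like; the noise types follow the abstract, while the parameter ranges and the per-corruption probability are illustrative assumptions:

```python
# Sketch: synthesize realistic noise by applying several corruptions in a
# random order (the "random shuffle" strategy), each with some probability.
import random
import numpy as np
import cv2

def gaussian_noise(img, sigma=0.1):
    return img + np.random.normal(0, sigma, img.shape).astype(np.float32)

def poisson_noise(img, scale=255.0):
    return np.random.poisson(np.clip(img, 0, 1) * scale) / scale

def speckle_noise(img, sigma=0.1):
    return img + img * np.random.normal(0, sigma, img.shape).astype(np.float32)

def jpeg_noise(img, quality=30):
    ok, buf = cv2.imencode(".jpg", (np.clip(img, 0, 1) * 255).astype(np.uint8),
                           [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR).astype(np.float32) / 255.0

def degrade(img, rng=random):
    """img: float32 array in [0,1], HxWx3. Returns a degraded copy."""
    ops = [gaussian_noise, poisson_noise, speckle_noise, jpeg_noise]
    rng.shuffle(ops)                       # randomize the corruption order
    for op in ops:
        if rng.random() < 0.5:             # apply each with probability 0.5
            img = op(img)
    return np.clip(img, 0, 1)
```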

    Type I interferons suppress viral replication but contribute to T cell depletion and dysfunction during chronic HIV-1 infection

    The direct link between sustained type I interferon (IFN-I) signaling and HIV-1-induced immunopathogenesis during chronic infection remains unclear. Here we report studies using a monoclonal antibody to block IFN-α/β receptor 1 (IFNAR1) signaling during persistent HIV-1 infection in humanized mice (hu-mice). We discovered that, during chronic HIV-1 infection, IFNAR blockade increased viral replication, which correlated with elevated T cell activation. Thus, IFN-Is suppress HIV-1 replication during the chronic phase but are not essential for HIV-1-induced aberrant immune activation. Surprisingly, IFNAR blockade rescued both total human T cell and HIV-specific T cell numbers despite elevated HIV-1 replication and immune activation. We showed that IFNAR blockade reduced HIV-1-induced apoptosis of CD4+ T cells. Importantly, IFNAR blockade also rescued the function of human T cells, including HIV-1-specific CD8+ and CD4+ T cells. We conclude that, during persistent HIV-1 infection, IFN-Is suppress HIV-1 replication but contribute to the depletion and dysfunction of T cells.

    Event-Based Fusion for Motion Deblurring with Cross-modal Attention

    Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times. As a kind of bio-inspired camera, the event camera records intensity changes asynchronously with high temporal resolution, providing valid image degradation information within the exposure time. In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network. To effectively fuse event and image features, we design an event-image cross-modal attention module applied at multiple levels of our network, which allows the network to focus on relevant features from the event branch and filter out noise. We also introduce a novel symmetric cumulative event representation specifically for image deblurring, as well as an event mask gated connection between the two stages of our network that helps avoid information loss. At the dataset level, to foster event-based motion deblurring and to facilitate evaluation on challenging real-world images, we introduce the Real Event Blur (REBlur) dataset, captured with an event camera in an illumination-controlled optical laboratory. Our Event Fusion Network (EFNet) sets the new state of the art in motion deblurring, surpassing both the prior best-performing image-based method and all event-based methods with public implementations on the GoPro dataset (by up to 2.47 dB) and on our REBlur dataset, even under extreme blur conditions. The code and our REBlur dataset will be made publicly available.
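    A minimal sketch of an event-image cross-modal attention block in the spirit described above (assumed shapes and names; EFNet's actual module will differ in detail): image features supply the queries and event features supply the keys and values, so the image branch attends to the degradation cues recorded by the event camera:

```python
# Sketch: cross-modal attention where image tokens query event tokens.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_feat, ev_feat):
        """img_feat, ev_feat: (B, N, C) token sequences (flattened H*W)."""
        out, _ = self.attn(query=img_feat, key=ev_feat, value=ev_feat)
        return self.norm(img_feat + out)   # residual connection + norm
```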