
    Regularized Fourier ptychography using an online plug-and-play algorithm

    The plug-and-play priors (PnP) framework has recently been shown to achieve state-of-the-art results in regularized image reconstruction by leveraging a sophisticated denoiser within an iterative algorithm. In this paper, we propose a new online PnP algorithm for Fourier ptychographic microscopy (FPM) based on the accelerated proximal gradient method (APGM). Specifically, the proposed algorithm uses only a subset of the measurements at each iteration, which makes it scalable to large measurement sets. We validate the algorithm by showing that it leads to significant performance gains on both simulated and experimental data. https://arxiv.org/abs/1811.00120 (Published version)
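    The online update described in the abstract can be sketched as follows. This is a hedged toy illustration only: a linear forward model and a soft-thresholding denoiser stand in for the actual FPM physics and the learned denoiser prior, and all names and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-in for the FPM forward model: y_i = A_i x + noise.
n, m, num_views = 32, 16, 20
x_true = np.zeros(n)
x_true[rng.choice(n, 4, replace=False)] = 1.0
A = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(num_views)]
y = [Ai @ x_true + 0.01 * rng.standard_normal(m) for Ai in A]

def denoise(v, tau=0.01):
    # Soft-thresholding: a simple stand-in for the sophisticated denoiser prior.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_pnp_apgm(num_iter=600, batch=5, gamma=0.04):
    x = np.zeros(n)
    s = x.copy()
    t = 1.0
    for _ in range(num_iter):
        # Online step: data-fit gradient computed on a random subset of views only.
        idx = rng.choice(num_views, batch, replace=False)
        grad = sum(A[i].T @ (A[i] @ s - y[i]) for i in idx) / batch
        x_new = denoise(s - gamma * grad)                # proximal/denoising step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        s = x_new + ((t - 1.0) / t_new) * (x_new - x)    # Nesterov acceleration
        x, t = x_new, t_new
    return x

x_hat = online_pnp_apgm()
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative error: {rel_err:.3f}")
```

    Because each random subset of views already over-determines the toy unknown, the stochastic gradients largely agree and the accelerated iteration behaves much like its full-batch counterpart, which is the scalability argument the abstract makes.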

    Regularized Fourier ptychographic microscopy

    Quantitative phase imaging (QPI) is a popular microscopy technique for studying cell morphology. Recently, Fourier ptychographic microscopy (FPM) has emerged as a low-cost computational microscopy technique that forms high-resolution, wide-field QPI images by taking multiple images from different illumination angles. However, the applicability of FPM to dynamic imaging is limited by its high data requirement. In this thesis, we propose new methods for highly compressive FPM imaging using data-adaptive sparse coding and an online plug-and-play (PnP) method with non-local priors based on the fast iterative shrinkage/thresholding algorithm (FISTA). We validate the proposed methods on both simulated and experimental data and show that they can reconstruct images at a significantly lower data rate.

    Unsupervised Multi-view Pedestrian Detection

    With the prosperity of video surveillance, multiple cameras have been applied to accurately locate pedestrians in a specific area. However, previous methods rely on human-labeled annotations in every video frame and camera view, adding a heavy labeling burden on top of the already necessary camera calibration and synchronization. Therefore, in this paper we propose an Unsupervised Multi-view Pedestrian Detection approach (UMPD) that eliminates the need for annotations when learning a multi-view pedestrian detector via 2D-3D mapping. 1) First, Semantic-aware Iterative Segmentation (SIS) is proposed to extract unsupervised representations of multi-view images, which are converted into 2D pedestrian masks as pseudo labels via our proposed iterative PCA and zero-shot semantic classes from vision-language models. 2) Second, we propose a Geometry-aware Volume-based Detector (GVD) that encodes multi-view 2D images end-to-end into a 3D volume to predict voxel-wise density and color via 2D-to-3D geometric projection, trained by 3D-to-2D rendering losses with SIS pseudo labels. 3) Third, to improve the detection results, i.e., the 3D density projected onto the bird's-eye view from GVD, we propose Vertical-aware BEV Regularization (VBR) to constrain them to be vertical, like natural pedestrian poses. Extensive experiments on the popular multi-view pedestrian detection benchmarks Wildtrack, Terrace, and MultiviewX show that our proposed UMPD approach, to the best of our knowledge the first fully unsupervised method, performs competitively with previous state-of-the-art supervised techniques. Code will be available.

    Are Diffusion Models Vulnerable to Membership Inference Attacks?

    Diffusion-based generative models have shown great potential for image synthesis, but there is a lack of research on the security and privacy risks they may pose. In this paper, we investigate the vulnerability of diffusion models to Membership Inference Attacks (MIAs), a common privacy concern. Our results indicate that existing MIAs designed for GANs or VAEs are largely ineffective on diffusion models, either due to inapplicable scenarios (e.g., requiring the discriminator of GANs) or inappropriate assumptions (e.g., closer distances between synthetic samples and member samples). To address this gap, we propose Step-wise Error Comparing Membership Inference (SecMI), a query-based MIA that infers membership by assessing the matching of the forward-process posterior estimation at each timestep. SecMI follows the common overfitting assumption in MIA, where member samples normally have smaller estimation errors than hold-out samples. We consider both standard diffusion models, e.g., DDPM, and text-to-image diffusion models, e.g., Latent Diffusion Models and Stable Diffusion. Experimental results demonstrate that our method precisely infers membership with high confidence in both scenarios across multiple datasets. Code is available at https://github.com/jinhaoduan/SecMI. Comment: To appear in ICML 202
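    The step-wise error idea can be illustrated with a small toy: diffuse a sample to timestep t, then score how well a noise predictor recovers the injected noise; under the overfitting assumption, member samples score lower. The nearest-neighbor "model" below is a deliberately crude stand-in for an overfitted diffusion model, not the paper's actual estimator, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 8
members = rng.standard_normal((10, d))   # training ("member") samples
holdout = rng.standard_normal((10, d))   # unseen ("hold-out") samples
alpha_bar = 0.9                          # cumulative noise schedule at timestep t

def eps_model(x_t):
    # Crude stand-in for an overfitted diffusion model: it "remembers" the
    # member set and inverts the forward process against the nearest member.
    dists = np.linalg.norm(x_t - np.sqrt(alpha_bar) * members, axis=1)
    x0_nn = members[np.argmin(dists)]
    return (x_t - np.sqrt(alpha_bar) * x0_nn) / np.sqrt(1.0 - alpha_bar)

def t_error(x0):
    # Diffuse x0 to timestep t, then score the model's noise prediction
    # (a proxy for SecMI's step-wise posterior-estimation error).
    eps = rng.standard_normal(d)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return float(np.mean((eps_model(x_t) - eps) ** 2))

member_err = np.mean([t_error(x0) for x0 in members])
holdout_err = np.mean([t_error(x0) for x0 in holdout])
print(member_err, holdout_err)
```

    A threshold on this per-sample error then separates members from hold-outs, which is the query-based decision rule the abstract describes.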

    Let Segment Anything Help Image Dehaze

    Large language models and high-level vision models have achieved impressive performance improvements with large datasets and model sizes. However, low-level computer vision tasks, such as image dehazing and blur removal, still rely on a small number of datasets and small-sized models, which generally leads to overfitting and local optima. Therefore, we propose a framework to integrate large-model priors into low-level computer vision tasks. Just as with the task of image segmentation, the degradation caused by haze is also texture-related. So we propose grayscale coding, network channel expansion, and pre-dehaze structures to integrate large-model prior knowledge into any low-level dehazing network. We demonstrate the effectiveness and applicability of large models in guiding low-level visual tasks through comparison experiments across different datasets and algorithms. Finally, we demonstrate the effects of grayscale coding, network channel expansion, and recurrent network structures through ablation experiments. Without requiring additional data or training resources, we show that integrating large-model prior knowledge improves dehazing performance and saves training time for low-level visual tasks.

    Mapping global research on shadow education: Trends and future agenda

    This study aimed to analyze the journals, authors, and research topics of shadow education research using the Scopus database. The bibliometric analysis focuses on the metadata of journals, authors, and topics, which is visualized and analyzed to produce a road map, research trends, and a future agenda. The data were obtained from 207 articles published on Scopus, downloaded on 29/8/2021 using the keywords "shadow education" or "shadow curriculum". Descriptive statistical methods and bibliometric analysis were then applied using Biblioshiny, an R-based application that generates bibliometric maps. Shadow education research has not been widely developed; therefore, this bibliographic study may form the basis for future developments. Shadow education is the most frequent topic, followed by education and policy, high-stakes testing, teacher education, curriculum, academic achievement, and private tutoring. This study provides an overview of trends in journals, authors, and research topics related to shadow education. Specifically, it provides relevant information for developing related themes in the future.