
    Simultaneous image transformation and sparse representation recovery

    Sparse representation in compressive sensing is gaining increasing attention due to its success in various applications. As we demonstrate in this paper, however, image sparse representation is sensitive to image-plane transformations, so existing approaches cannot reconstruct the sparse representation of a geometrically transformed image. We introduce a simple technique for obtaining a transformation-invariant image sparse representation. It is rooted in two observations: 1) if the aligned model images of an object span a linear subspace, their transformed versions with respect to some group of transformations still span a linear subspace, albeit in a higher dimension; 2) if a target (or test) image, aligned with the model images, lives in that subspace, then its pre-alignment versions move closer to the subspace as estimated transformations with increasingly accurate parameters are applied. These observations motivate us to project a potentially unaligned target image onto random projection manifolds defined by the model images and the transformation model. Each projection is then separated into the aligned projection target and a residue due to misalignment. The desired aligned projection target is iteratively optimized by gradually diminishing the residue. In this framework, we simultaneously recover the sparse representation of a target image and the image-plane transformation between the target and the model images. We have applied the proposed methodology to two applications: face recognition and dynamic texture registration. The improved performance over previous methods demonstrates the effectiveness of the proposed approach.
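    To make the joint recovery idea concrete, below is a minimal sketch (not the authors' algorithm, which uses random projection manifolds) of a simplified alternating scheme: sparse-code the currently warped target against the model images, then re-estimate the transformation that shrinks the misalignment residue. The restriction to translations, the ISTA sparse step, and the Nelder-Mead alignment step are all assumptions made for illustration.

```python
# Simplified alternating sketch of joint alignment + sparse coding.
# Assumptions: D's columns are vectorized, pre-aligned model images;
# the transformation group is 2D translation only.
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def ista(D, y, lam=0.1, iters=200):
    """Sparse-code y ~ D @ x via iterative soft thresholding (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ x - y)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

def align_and_code(D, target_img, lam=0.1, outer_iters=5):
    """Alternate: (1) sparse representation of the warped target,
    (2) translation update that diminishes the misalignment residue."""
    t = np.zeros(2)                        # translation parameters (dy, dx)
    x = np.zeros(D.shape[1])
    for _ in range(outer_iters):
        y = nd_shift(target_img, t).ravel()
        x = ista(D, y, lam)                # sparse coding step
        def residue(params):               # alignment step: shrink the residue
            return np.linalg.norm(nd_shift(target_img, params).ravel() - D @ x)
        t = minimize(residue, t, method="Nelder-Mead").x
    return t, x
```

    The alternation mirrors the abstract's two observations: the sparse step assumes the subspace spanned by the model images, while the transformation step drives the pre-alignment target toward that subspace.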

    AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks

    In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details in different subregions of the image by paying attention to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14% on the CUB dataset and 170.25% on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. For the first time, it shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.
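    The core mechanism described here is word-level attention: each image subregion attends over word embeddings so different words condition different parts of the image. The sketch below is a hedged, self-contained illustration of that computation, not the released AttnGAN code; the shapes and the dot-product scoring are assumptions.

```python
# Minimal word-level attention sketch: subregions attend over words.
import numpy as np

def word_attention(regions, words):
    """regions: (N, D) image subregion features; words: (T, D) word features.
    Returns per-region word-context vectors and the attention weights."""
    scores = regions @ words.T                     # (N, T) similarity
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)        # softmax over words
    return attn @ words, attn                      # (N, D) context, (N, T) weights

# Toy usage with illustrative sizes: 64 subregions, 12 words, 256-dim features.
rng = np.random.default_rng(0)
ctx, attn = word_attention(rng.standard_normal((64, 256)),
                           rng.standard_normal((12, 256)))
```

    Visualizing `attn` per subregion is, in spirit, what the paper's attention-layer analysis does: it reveals which word each generated image part is conditioned on.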

    Snapshot 3D tracking of insulin granules in live cells

    Rapid and accurate volumetric imaging remains a challenge, yet has the potential to enhance understanding of cell function. We developed and used a multifocal microscope (MFM) for 3D snapshot imaging, enabling 3D tracking of insulin granules labeled with mCherry in MIN6 cells. MFM employs a special diffractive optical element (DOE) to image multiple focal planes simultaneously. This simultaneous acquisition of information determines the 3D location of single objects at a speed limited only by the frame rate of the array detector. We validated the accuracy of MFM imaging and tracking with fluorescent beads; the 3D positions and trajectories of single fluorescent beads can be determined accurately over a wide range of spatial and temporal scales. The 3D positions and trajectories of single insulin granules in a 3.2-micrometer-deep volume were determined with image processing that combines 3D deconvolution, shift correction, and finally tracking using the Imaris software package. We find that the motion of the granules is super-diffusive, but less so in 3D than in 2D for cells grown on coverslip surfaces, suggesting an anisotropy in the cytoskeleton (e.g., microtubules and actin).
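    The super-diffusive classification typically comes from the mean squared displacement: fitting MSD(τ) ∝ τ^α, where α > 1 indicates super-diffusion (α = 1 is ordinary diffusion). The sketch below shows one standard way to compute this from tracked positions; it is an illustrative assumption, not the paper's Imaris-based pipeline, and runs identically on 3D (x, y, z) or 2D (x, y) trajectories for the comparison the abstract draws.

```python
# MSD-based anomalous-diffusion sketch for tracked granule positions.
import numpy as np

def msd(traj, max_lag):
    """traj: (T, d) positions over time; MSD for lags 1..max_lag."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                     for lag in range(1, max_lag + 1)])

def anomalous_exponent(traj, max_lag=20, dt=1.0):
    """Slope of log MSD vs. log lag; alpha > 1 means super-diffusive."""
    lags = np.arange(1, max_lag + 1) * dt
    alpha, _ = np.polyfit(np.log(lags), np.log(msd(traj, max_lag)), 1)
    return alpha

# Toy usage: a persistent random walk in 3D comes out with alpha > 1.
rng = np.random.default_rng(1)
steps = np.cumsum(rng.standard_normal((200, 3)) * 0.1, axis=0)  # drifting steps
alpha = anomalous_exponent(np.cumsum(steps, axis=0))
```

    Comparing α fitted on the full (x, y, z) trajectory against α fitted on its (x, y) projection is one way to quantify the 3D-versus-2D anisotropy the abstract reports.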