363 research outputs found

    Logical gaps in the approximate solutions of the social learning game and an exact solution

    Once the social learning models were proposed, finding the solutions of the corresponding games became a well-defined mathematical question. However, almost all papers on these games and their applications rely on solutions built upon either an ad-hoc argument or a twisted Bayesian analysis of the games. Here, we present the logical gaps in those solutions and an exact solution of our own. We also introduce a minor extension of the original game so that not only the logical differences but also the differences in action outcomes among those solutions become visible. Comment: A major revision
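    To make the setting concrete, the sketch below simulates the textbook sequential social-learning (information cascade) game with a binary state and binary private signals, where each agent acts after observing its own signal and all predecessors' actions. It is only an illustration of the kind of game being analyzed, not the paper's extension or its exact solution; the tie-breaking convention and parameter names are assumptions.

```python
import random

def simulate_cascade(n_agents=20, signal_accuracy=0.7, true_state=1, seed=0):
    """Textbook sequential social-learning (information cascade) simulation:
    binary state, binary private signals, agents act in turn after seeing all
    earlier actions. Illustration only; not the paper's game or solution."""
    rng = random.Random(seed)
    actions = []
    net_public = 0  # net count of "state = 1" signals revealed by earlier actions
    for _ in range(n_agents):
        # Private signal is correct with probability `signal_accuracy`.
        signal = true_state if rng.random() < signal_accuracy else 1 - true_state
        # With a uniform prior, the posterior log-odds are proportional to the
        # net revealed count plus the agent's own signal vote.
        posterior_votes = net_public + (1 if signal == 1 else -1)
        # Follow the sign of the posterior; break ties with the private signal
        # (one common convention among several).
        action = 1 if posterior_votes > 0 else 0 if posterior_votes < 0 else signal
        actions.append(action)
        # The action reveals the signal only while no cascade has started, i.e.
        # while the public history alone does not already dictate the action.
        if abs(net_public) <= 1:
            net_public += 1 if action == 1 else -1
    return actions

if __name__ == "__main__":
    print(simulate_cascade())
```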

    Linear Convergence of Adaptively Iterative Thresholding Algorithms for Compressed Sensing

    This paper studies the convergence of the adaptively iterative thresholding (AIT) algorithm for compressed sensing. We first introduce a generalized restricted isometry property (gRIP). We then prove that the AIT algorithm converges to the original sparse solution at a linear rate under a certain gRIP condition in the noise-free case. In the noisy case, its convergence rate remains linear until a certain error bound is attained. Moreover, as by-products, we provide sufficient conditions for the convergence of the AIT algorithm based on two well-known properties, namely the coherence property and the restricted isometry property (RIP); both are special cases of gRIP. The resulting improvements over known theoretical results are demonstrated through comparison. Finally, we provide a series of simulations to verify the correctness of the theoretical assertions as well as the effectiveness of the AIT algorithm. Comment: 15 pages, 5 figures
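    For intuition, here is a minimal sketch of an iterative thresholding recovery loop of the kind analyzed above. The adaptive rule shown (threshold at the (s+1)-th largest magnitude of the gradient-updated iterate so that exactly s entries survive) and the unit step size are illustrative assumptions, not necessarily the precise rule studied in the paper.

```python
import numpy as np

def ait_recover(A, y, sparsity, step=1.0, n_iters=200, tol=1e-8):
    """Sketch of an adaptively iterative thresholding recovery loop. The adaptive
    rule used here (threshold at the (s+1)-th largest magnitude of the
    gradient-updated iterate, so exactly `sparsity` entries survive) and the unit
    step size are illustrative choices, not necessarily those analyzed in the paper."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iters):
        # Gradient step on the least-squares data-fit term.
        z = x + step * A.T @ (y - A @ x)
        # Adaptive threshold: magnitude of the (s+1)-th largest entry of z.
        tau = np.sort(np.abs(z))[::-1][sparsity] if sparsity < n else 0.0
        # Hard thresholding: keep only entries strictly above the threshold.
        x_new = np.where(np.abs(z) > tau, z, 0.0)
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1.0):
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    # Tiny noise-free synthetic example.
    rng = np.random.default_rng(0)
    n, m, s = 100, 40, 4
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    y = A @ x_true
    x_hat = ait_recover(A, y, sparsity=s)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```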

    Integration of 18O Labeling and Solution Isoelectric Focusing in a Shotgun Analysis of Mitochondrial Proteins

    The coupling of efficient separations with mass spectrometry instrumentation is highly desirable for global proteomic analysis. When quantitative comparisons are part of the strategy, separation and analytical methods should be selected that optimize the isotope labeling procedure. Enzyme-catalyzed 18O labeling is considered the labeling method most compatible with the analysis of proteins from tissue and other limited samples. Introducing the label at the peptide stage mandates that protein manipulation be minimized in favor of peptide fractionation post-labeling. In the present study, forward and reverse 18O labeling are integrated with solution isoelectric focusing and capillary LC-tandem mass spectrometry to study changes in mitochondrial proteins associated with drug resistance in human cancer cells. A total of 637 peptides corresponding to 278 proteins were identified in this analysis. Of these, twelve proteins were shown by the forward and reverse labeling experiments to have abundances altered by more than a factor of two between the drug-susceptible MCF-7 cell line and the MCF-7 cell line selected for resistance to mitoxantrone. Galectin-3 binding protein precursor was detected in the resistant cell line but not in the drug-susceptible line. Such proteins are challenging for 18O and other isotope labeling strategies, and a solution based on reverse labeling is offered. These twelve proteins play roles in several pathways, including apoptosis, oxidative phosphorylation, fatty acid metabolism, and amino acid metabolism. For some of these proteins, possible functions in drug resistance are proposed.
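    As a rough illustration of how forward and reverse 18O labeling translate into protein-level fold changes, the sketch below combines per-peptide heavy/light ratios from the two label orientations on a log2 scale. The label-orientation assignment, field names, and factor-of-two cutoff are assumptions for illustration, not the study's actual data-processing pipeline.

```python
from dataclasses import dataclass
from math import log2

@dataclass
class PeptideRatios:
    """Heavy/light intensity ratios for one peptide from the forward experiment
    (resistant sample 18O-labeled, assumed here) and the reverse experiment
    (labels swapped). Field names are hypothetical."""
    forward_heavy_over_light: float
    reverse_heavy_over_light: float

def protein_fold_change(peptides):
    """Combine forward and reverse 18O ratios into one resistant/susceptible fold
    change. Because the reverse experiment swaps the labels, its ratio is inverted
    before averaging on a log2 scale (a common convention, assumed here)."""
    logs = []
    for p in peptides:
        logs.append(log2(p.forward_heavy_over_light))
        logs.append(-log2(p.reverse_heavy_over_light))
    return 2 ** (sum(logs) / len(logs))

if __name__ == "__main__":
    # Hypothetical measurements for two peptides of one protein.
    measurements = [PeptideRatios(2.3, 0.45), PeptideRatios(2.0, 0.52)]
    fc = protein_fold_change(measurements)
    # Flag proteins altered by more than a factor of two, as in the study.
    print(f"fold change = {fc:.2f}, altered = {abs(log2(fc)) > 1}")
```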

    Structural Prior Guided Generative Adversarial Transformers for Low-Light Image Enhancement

    We propose an effective Structural Prior guided Generative Adversarial Transformer (SPGAT) for low-light image enhancement. SPGAT mainly contains a generator with two discriminators and a structural prior estimator (SPE). The generator is based on a U-shaped Transformer and exploits non-local information for clearer image restoration. The SPE extracts useful structures from images to guide the generator toward better structural detail estimation. To generate more realistic images, we develop a new structural prior guided adversarial learning method that builds skip connections between the generator and the discriminators so that the discriminators can better distinguish real from fake features. Finally, we propose a parallel window-based Swin Transformer block to aggregate hierarchical features at different levels for high-quality image restoration. Experimental results demonstrate that the proposed SPGAT performs favorably against recent state-of-the-art methods on both synthetic and real-world datasets.
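    The schematic PyTorch sketch below shows the general shape of adversarial training with one image discriminator and one structure discriminator guided by an edge map. The tiny convolutional generator and discriminators, the Sobel edge map standing in for the learned structural prior estimator, and the loss weights are all placeholders; the actual SPGAT uses a U-shaped Transformer generator, a learned SPE, feature-level skip connections to the discriminators, and parallel window-based Swin Transformer blocks, none of which are reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-ins: the paper's U-shaped Transformer generator, learned structural
# prior estimator (SPE), and feature-level skip connections to the discriminators
# are replaced by small placeholders so the adversarial training logic is runnable.
class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, padding=1))

    def forward(self, x):
        return torch.sigmoid(self.net(x))

class TinyDiscriminator(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)  # patch-wise real/fake logits

def sobel_edges(img):
    """Sobel edge map, used here as a stand-in structural prior."""
    gray = img.mean(dim=1, keepdim=True)
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, kx.transpose(2, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def training_step(G, D_img, D_struct, opt_g, opt_d, low, normal):
    """One adversarial step with an image discriminator and a structure
    discriminator; loss weights are illustrative."""
    # Update the discriminators on real vs. generated images and their edge maps.
    with torch.no_grad():
        fake = G(low)

    def d_term(D, real_in, fake_in):
        real_pred, fake_pred = D(real_in), D(fake_in)
        return (F.binary_cross_entropy_with_logits(real_pred, torch.ones_like(real_pred)) +
                F.binary_cross_entropy_with_logits(fake_pred, torch.zeros_like(fake_pred)))

    d_loss = d_term(D_img, normal, fake) + d_term(D_struct, sobel_edges(normal), sobel_edges(fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Update the generator: reconstruction loss plus both adversarial terms.
    fake = G(low)
    adv = sum(F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
              for pred in (D_img(fake), D_struct(sobel_edges(fake))))
    g_loss = F.l1_loss(fake, normal) + 0.01 * adv
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    G, D_i, D_s = TinyGenerator(), TinyDiscriminator(3), TinyDiscriminator(1)
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(list(D_i.parameters()) + list(D_s.parameters()), lr=1e-4)
    low, normal = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    print(training_step(G, D_i, D_s, opt_g, opt_d, low, normal))
```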

    LSTM Pose Machines

    We observed that recent state-of-the-art results on single-image human pose estimation were achieved by multi-stage Convolutional Neural Networks (CNNs). Notwithstanding the superior performance on static images, applying these models to videos is not only computationally intensive but also suffers from performance degeneration and flickering. Such suboptimal results are mainly attributable to the inability to impose sequential geometric consistency, to handle severe image quality degradation (e.g. motion blur and occlusion), and to capture the temporal correlation among video frames. In this paper, we proposed a novel recurrent network to tackle these problems. We showed that if we impose a weight sharing scheme on the multi-stage CNN, it can be rewritten as a Recurrent Neural Network (RNN). This property decouples the relationship among multiple network stages and results in significantly faster speed when invoking the network on videos. It also enables the adoption of Long Short-Term Memory (LSTM) units between video frames. We found that such a memory-augmented RNN is very effective in imposing geometric consistency among frames. It also handles input quality degradation in videos well while successfully stabilizing the sequential outputs. The experiments showed that our approach significantly outperformed current state-of-the-art methods on two large-scale video pose estimation benchmarks. We also explored the memory cells inside the LSTM and provided insights on why such a mechanism benefits prediction for video-based pose estimation. Comment: Poster in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 201
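    As a minimal sketch of the idea of running a shared-weight stage on every frame with an LSTM carrying information across frames, the PyTorch snippet below wires a standard convolutional LSTM cell between a small per-frame feature extractor and a heatmap head. The feature extractor, channel sizes, and single-stage design are placeholder assumptions; the paper's architecture is a multi-stage pose machine and its exact LSTM formulation may differ.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell (standard formulation; the paper's exact
    cell and gating details may differ)."""
    def __init__(self, in_ch, hidden_ch, k=3):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

class RecurrentPoseMachine(nn.Module):
    """Schematic video pose estimator: one shared-weight convolutional stage runs
    on every frame, and a ConvLSTM carries information across frames. The feature
    extractor, channel sizes, and single-stage design are placeholders."""
    def __init__(self, n_joints=14, feat_ch=32, hidden_ch=32):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.lstm = ConvLSTMCell(feat_ch, hidden_ch)
        self.head = nn.Conv2d(hidden_ch, n_joints, 1)  # per-joint heatmaps

    def forward(self, video):                 # video: (B, T, 3, H, W)
        b, t, _, hgt, wid = video.shape
        h = video.new_zeros(b, self.lstm.hidden_ch, hgt, wid)
        c = torch.zeros_like(h)
        heatmaps = []
        for i in range(t):                    # the same weights serve every frame
            feat = self.features(video[:, i])
            h, c = self.lstm(feat, (h, c))
            heatmaps.append(self.head(h))
        return torch.stack(heatmaps, dim=1)   # (B, T, n_joints, H, W)

if __name__ == "__main__":
    clip = torch.randn(1, 4, 3, 64, 64)
    print(RecurrentPoseMachine()(clip).shape)  # torch.Size([1, 4, 14, 64, 64])
```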

    Learning A Coarse-to-Fine Diffusion Transformer for Image Restoration

    Recent years have witnessed the remarkable performance of diffusion models in various vision tasks. However, for image restoration, which aims to recover clear images with sharper details from degraded observations, diffusion-based methods may fail to produce promising results due to inaccurate noise estimation. Moreover, simply constraining the noise cannot effectively model complex degradation information, which in turn limits model capacity. To solve these problems, we propose a coarse-to-fine diffusion Transformer (C2F-DFT) for image restoration. Specifically, C2F-DFT contains diffusion self-attention (DFSA) and a diffusion feed-forward network (DFN) within a new coarse-to-fine training scheme. The DFSA and DFN respectively capture long-range diffusion dependencies and learn hierarchical diffusion representations to facilitate better restoration. In the coarse training stage, C2F-DFT estimates noise and then generates the final clean image via a sampling algorithm. To further improve restoration quality, we propose a simple yet effective fine training scheme: it first exploits the coarse-trained diffusion model with fixed sampling steps to generate restoration results, which are then constrained by the corresponding ground-truth images to optimize the model and remedy unsatisfactory results caused by inaccurate noise estimation. Extensive experiments show that C2F-DFT significantly outperforms the diffusion-based restoration method IR-SDE and achieves competitive performance compared with Transformer-based state-of-the-art methods on 3 tasks, including deraining, deblurring, and real denoising. Comment: 9 pages, 8 figures
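    The sketch below illustrates what a coarse-to-fine training scheme of this kind could look like for a conditional diffusion restorer: a coarse stage with the usual noise-prediction loss, and a fine stage that runs a fixed small number of deterministic sampling steps and penalizes the restored image against the ground truth. The noise schedule, DDIM-style sampler, conditioning by channel concatenation, step count, and losses are all assumptions for illustration; the placeholder `model(x, t)` stands in for the C2F-DFT Transformer, whose DFSA/DFN blocks are not implemented here.

```python
import torch
import torch.nn.functional as F

# Hypothetical linear noise schedule; the paper's schedule, step counts, and
# network are not reproduced here.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def coarse_step(model, clean, degraded):
    """Coarse stage: standard noise-prediction loss, with the degraded image
    supplied as conditioning by channel concatenation (an assumed design)."""
    b = clean.shape[0]
    t = torch.randint(0, T, (b,))
    a = alpha_bar[t].view(b, 1, 1, 1)
    noise = torch.randn_like(clean)
    x_t = a.sqrt() * clean + (1 - a).sqrt() * noise
    eps_hat = model(torch.cat([x_t, degraded], dim=1), t)
    return F.l1_loss(eps_hat, noise)

def sample_fixed_steps(model, degraded, n_steps=4):
    """Deterministic DDIM-style sampling with a small fixed number of steps.
    Gradients flow through these steps so the fine stage can optimize the model."""
    x = torch.randn_like(degraded)
    ts = torch.linspace(T - 1, 0, n_steps).long()
    for i, t in enumerate(ts):
        a_t = alpha_bar[t]
        a_prev = alpha_bar[ts[i + 1]] if i + 1 < n_steps else torch.tensor(1.0)
        eps_hat = model(torch.cat([x, degraded], dim=1), t.repeat(x.shape[0]))
        x0_hat = (x - (1 - a_t).sqrt() * eps_hat) / a_t.sqrt()  # predicted clean image
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps_hat
    return x

def fine_step(model, clean, degraded, n_steps=4):
    """Fine stage: run the coarse-trained sampler for a fixed number of steps and
    constrain the restored image directly with the ground truth."""
    restored = sample_fixed_steps(model, degraded, n_steps)
    return F.l1_loss(restored, clean)

if __name__ == "__main__":
    # Tiny placeholder standing in for the C2F-DFT Transformer (DFSA/DFN omitted).
    net = torch.nn.Conv2d(6, 3, 3, padding=1)
    model = lambda x, t: net(x)  # ignores the timestep for brevity
    clean, degraded = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
    print(coarse_step(model, clean, degraded).item(), fine_step(model, clean, degraded).item())
```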