193 research outputs found

    Characterizing Photophysics of Photoconvertible-fluorescent Protein mEos3.2 for Quantitative Fluorescence Microscopy

    Get PDF
    Photoconvertible fluorescent proteins (PCFPs) are widely used in super-resolution microscopy and studies of cellular dynamics. Their photoconversion properties have enabled single-molecule localization microscopy (SMLM) by temporally separating closely spaced molecules. However, our understanding of their photophysics is still limited, hampering their quantitative application. For example, counting fluorescently tagged fusion proteins from the discrete localizations of individual molecules is still difficult. The green-to-red photoconvertible fluorescent protein mEos3.2 is favored by many because of its monomeric character, high brightness, photostability, compatibility with live cells, and 1:1 labeling stoichiometry. The coding sequence of mEos3.2 is fused to that of a protein of interest in the genome for endogenous expression, or the fusion is expressed exogenously and transiently in cells. Irradiation at 405 nm photoconverts mEos3.2 molecules from their native green state, with an emission peak at 516 nm, to their red state, with an emission peak at 580 nm. Sparsely distributed photoconverted red mEos3.2 molecules are excited at 561 nm and then localized for SMLM imaging. Understanding the factors that affect mEos3.2 photophysics can greatly strengthen its applications in imaging and quantitative measurements. However, we still do not know 1) how the behavior of mEos3.2 in live cells compares with fixed cells, and how the imaging buffer influences mEos3.2 photophysics in fixed cells; 2) how different imaging methods and laser intensities affect the behavior of mEos3.2; and 3) whether there are unknown dark states of mEos3.2 that can further complicate its imaging and quantitative applications.

    In this body of work, I first reviewed the use of photoconvertible fluorescent proteins in SMLM, with a focus on their quantitative applications. I discussed the significance, advantages, and challenges of counting molecules of interest tagged with mEos3.2 by SMLM, and highlighted how our limited understanding of mEos3.2 photophysics hampers its application in quantitative SMLM, thus requiring further investigation. Parts of this chapter are taken from Sun et al., 2021.

    In Chapter 2, I combined quantitative fluorescence microscopy and mathematical modeling to estimate the photophysical parameters of mEos3.2 in fission yeast cells. I measured the time-integrated fluorescence signal per cell and estimated rate constants for photoconversion and photobleaching by fitting a 3-state model of photoconversion and photobleaching to the measured time courses of the mEos3.2 fluorescence signal per cell (a minimal sketch of such a fit appears after this abstract). This method can be applied to study the photophysical properties of other photoactivatable and photoconvertible fluorescent proteins quantitatively, an approach complementary to conventional single-molecule experiments. This chapter is taken from Sun et al., 2021.

    In Chapter 3, I investigated how fixation affects the photophysical properties of mEos3.2, so that experiments conducted in live and fixed yeast cells could be compared. Light fixation has been used to preserve cellular structures and eliminate protein movement, simplifying imaging and quantification in quantitative SMLM. I discovered that formaldehyde fixation permeabilizes S. pombe cells to small molecules, making the photophysical properties of mEos3.2 sensitive to the extracellular buffer conditions. To find conditions where the photophysical parameters of mEos3.2 are comparable in live and fixed yeast cells, I investigated how the pH and reducing agent in the imaging buffer affect mEos3.2 photophysics in fixed cells. I discovered that imaging mEos3.2 in fixed cells in a buffer at pH 8.5 with 1 mM DTT gave photophysical parameters similar to those in live cells. My results strongly suggest that formaldehyde fixation did not destroy mEos3.2 molecules but partially permeabilized the yeast cell membrane to small molecules. This chapter is taken from Sun et al., 2021.

    In Chapter 4, I investigated the effects of fixation and imaging buffer on mEos3.2 photophysics over a wide range of laser intensities by point-scanning and widefield microscopy, as well as by SMLM. This chapter is taken from Sun et al., 2021.

    In Chapter 5, I alternated 405- and 561-nm illumination to investigate the effects of each wavelength separately. I discovered that 405-nm irradiation drove some red-state mEos3.2 molecules into an intermediate dark state, which can be converted back to the red fluorescent state by 561-nm illumination. I established a "positive" switching behavior (off-switching by 405-nm and on-switching by 561-nm illumination) of red mEos3.2, in addition to the previously reported "negative" switching behavior (off-switching by 561-nm and on-switching by 405-nm illumination), which could affect counting the number of localizations of red mEos3.2 by quantitative SMLM. This chapter is taken from Sun et al., 2021.

    In Chapter 6, I described my ongoing progress toward developing a method to count molecules with SMLM using internal standards tagged with mEos3.2, and summarized preliminary data on the internal calibration standards I have tried. Further work is needed to optimize the standards and to test their robustness and reproducibility. Ultimately, this work can be applied to count the number of molecules in diffraction-limited subcellular structures with SMLM by converting the number of localizations to the number of molecules.
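    A minimal Python sketch of the kind of 3-state fit described in Chapter 2 is shown below, assuming a linear green → red → bleached model with first-order rate constants. The function name, rate constants, and synthetic data are hypothetical illustrations, not the dissertation's code or measurements.

```python
# Sketch of a 3-state photoconversion/photobleaching model
# (green -> red -> bleached) fit to a red-fluorescence time course.
# Illustrative only: k_conv, k_bleach, and the synthetic data are
# hypothetical, not values from the dissertation.
import numpy as np
from scipy.optimize import curve_fit

def red_signal(t, k_conv, k_bleach, amplitude):
    """Analytic red-state population of the linear 3-state model
    G --k_conv--> R --k_bleach--> B, with all molecules green at t=0."""
    return amplitude * k_conv / (k_bleach - k_conv) * (
        np.exp(-k_conv * t) - np.exp(-k_bleach * t))

# Synthetic "measured" time course for demonstration.
t = np.linspace(0, 60, 120)                      # seconds
rng = np.random.default_rng(0)
data = red_signal(t, 0.15, 0.05, 1000.0) + rng.normal(scale=10.0, size=t.size)

popt, pcov = curve_fit(red_signal, t, data, p0=(0.1, 0.01, 500.0))
print("k_conv=%.3f  k_bleach=%.3f  amplitude=%.1f" % tuple(popt))
```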

    Efficient parallel implementation of the multiplicative weight update method for graph-based linear programs

    Full text link
    Positive linear programs (LPs) model many graph and operations-research problems. One can compute a (1+ε)-approximation for positive LPs, for any chosen ε, in polylogarithmic depth and near-linear work via variations of the multiplicative weight update (MWU) method. Despite extensive theoretical work on these algorithms over the decades, their empirical performance is not well understood. In this work, we implement and test an efficient parallel algorithm for solving positive LP relaxations and apply it to graph problems such as densest subgraph, bipartite matching, vertex cover, and dominating set. We accelerate the algorithm via a new step-size search heuristic. Our implementation uses sparse linear algebra optimization techniques such as fusion of vector operations and the use of sparse formats. Furthermore, we devise an implicit representation for graph incidence constraints. We demonstrate parallel scalability using OpenMP threading and MPI on the Stampede2 supercomputer. We compare this implementation with exact libraries and specialized libraries for the above problems in order to evaluate MWU's practical standing, for both accuracy and performance, among other methods. Our results show this implementation is faster than general-purpose LP solvers (IBM CPLEX, Gurobi) in all of our experiments and, in some instances, outperforms state-of-the-art specialized parallel graph algorithms.
    Comment: Pre-print. 13 pages, comments welcome.
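    To make the MWU framework concrete, here is a hedged Python sketch that approximately tests feasibility of a small covering LP Ax ≄ b over the probability simplex, using the classic weights-on-constraints loop. The instance, step size eta, iteration count T, and single-coordinate oracle are simplified assumptions for exposition; the paper's optimized parallel implementation is substantially more involved.

```python
# Hedge-style multiplicative weight update (MWU) sketch for approximately
# testing feasibility of a covering LP  Ax >= b  with x on the simplex.
# Simplified illustration of the generic framework, not the paper's solver.
import numpy as np

def mwu_covering(A, b, eta=0.1, T=2000):
    m, n = A.shape
    w = np.ones(m)                        # one weight per constraint
    x_avg = np.zeros(n)
    rho = np.max(np.abs(A - b[:, None]))  # width bound for normalization
    for _ in range(T):
        p = w / w.sum()
        j = np.argmax(p @ A)              # oracle: best single coordinate
        x = np.zeros(n); x[j] = 1.0
        x_avg += x
        losses = (A @ x - b) / rho        # negative on violated constraints
        w *= (1.0 - eta * losses)         # boost weight of violated rows
    return x_avg / T

A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
b = np.array([0.4, 0.4, 0.4])
x = mwu_covering(A, b)
print(x, A @ x)                           # approximately satisfies Ax >= b
```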

    Self-Refining Deep Symmetry Enhanced Network for Rain Removal

    Full text link
    Rain removal aims to remove the rain streaks in rain images. The state-of-the-art methods are mostly based on convolutional neural networks (CNNs). However, because CNNs are not equivariant to object rotation, these methods are ill-suited to tilted rain streaks. To tackle this problem, we propose the Deep Symmetry Enhanced Network (DSEN), which explicitly extracts rotation-equivariant features from rain images. In addition, we design a self-refining mechanism to remove accumulated rain streaks in a coarse-to-fine manner. This mechanism reuses DSEN with a novel information link that passes the gradient flow to the higher stages. Extensive experiments on both synthetic and real-world rain images show that our self-refining DSEN yields the top performance.
    Comment: Accepted by ICIP 19. Corresponding and contact author: Hanrong Ye.
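    The following toy Python sketch illustrates the general idea of rotation-aware feature extraction: convolving an image with rotated copies of a single kernel and max-pooling over orientations. It is only a conceptual illustration, not the DSEN architecture; the kernel, angle count, and image are arbitrary assumptions.

```python
# Toy rotation-aware filtering: convolve with rotated copies of one
# kernel, then max-pool over orientations. Not the paper's DSEN.
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import convolve2d

def orientation_pooled_response(image, kernel, n_angles=8):
    responses = []
    for k in range(n_angles):
        angle = 180.0 * k / n_angles   # streak orientations span 0-180 deg
        rk = rotate(kernel, angle, reshape=False, order=1)
        responses.append(convolve2d(image, rk, mode="same"))
    return np.max(responses, axis=0)   # pool over orientations

# A vertical-edge kernel responds to streaks at any tilt after pooling.
kernel = np.array([[-1.0, 2.0, -1.0]] * 5)
image = np.random.rand(32, 32)
print(orientation_pooled_response(image, kernel).shape)  # (32, 32)
```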

    Action classification by exploring directional co-occurrence of weighted STIPs

    Get PDF
    Human action recognition is challenging mainly due to intra-class variation, inter-class ambiguity, and cluttered backgrounds in real videos. The bag-of-visual-words model utilizes spatio-temporal interest points (STIPs) and represents an action by the distribution of points, which ignores the visual context among points. To add more contextual information, we propose a method that encodes the spatio-temporal distribution of weighted pairwise points. First, STIPs are extracted from an action sequence and clustered into visual words. Then, each word is weighted in both the temporal and spatial domains to capture its relationships with other words. Finally, the directional relationships between co-occurring pairwise words are used to encode visual contexts. We report state-of-the-art results on the Rochester and UT-Interaction datasets, validating that our method can classify human actions with high accuracy. © 2014 IEEE.
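    A minimal Python sketch of this kind of directional co-occurrence encoding: for each pair of points assigned different visual words, the direction of their displacement is quantized into an angular bin of a weighted histogram. The bin count, weights, and toy data are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: encode directional co-occurrence between visual words as a
# weighted histogram over quantized pairwise displacement directions.
import numpy as np
from itertools import combinations

def directional_cooccurrence(points, labels, weights, n_words, n_bins=8):
    hist = np.zeros((n_words, n_words, n_bins))
    for i, j in combinations(range(len(points)), 2):
        if labels[i] == labels[j]:
            continue                      # only pairs of different words
        dx, dy = points[j] - points[i]
        angle = np.arctan2(dy, dx) % (2 * np.pi)
        b = int(angle / (2 * np.pi) * n_bins) % n_bins
        hist[labels[i], labels[j], b] += weights[i] * weights[j]
    return hist / max(hist.sum(), 1e-12)  # normalize to a probability histogram

pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
print(directional_cooccurrence(pts, [0, 1, 0], [1.0, 0.5, 1.0], 2).shape)
```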

    Learning directional co-occurrence for human action classification

    Get PDF
    Spatio-temporal interest point (STIP) based methods have shown promising results for human action classification. However, state-of-the-art works typically utilize the bag-of-visual-words (BoVW) model, which focuses on the statistical distribution of features but ignores their inherent structural relationships. To solve this problem, we propose a descriptor, the directional pairwise feature (DPF), which encodes the mutual direction information between pairwise words, aiming to add more spatial discriminative power to BoVW. First, STIP features are extracted and classified into a set of labeled words. Then, in each frame, the DPF is constructed for every pair of words with different labels, according to their assigned directional vector. Finally, DPFs are quantized into a probability histogram as a representation of the human action. The proposed method is evaluated on two challenging datasets, Rochester and UT-Interaction, and the results based on chi-squared kernel SVM classifiers confirm that our method can classify human actions with high accuracy.
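    The final classification step, a chi-squared kernel SVM over probability histograms, can be sketched with scikit-learn as follows; the random histograms and labels merely stand in for real DPF features.

```python
# Chi-squared kernel SVM over normalized histograms (placeholder data).
import numpy as np
from sklearn.metrics.pairwise import chi2_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.random((40, 64))            # 40 clips, 64-bin histograms
X_train /= X_train.sum(axis=1, keepdims=True)
y_train = rng.integers(0, 2, size=40)     # placeholder action labels

K_train = chi2_kernel(X_train, gamma=1.0)
clf = SVC(kernel="precomputed").fit(K_train, y_train)

X_test = rng.random((5, 64))
X_test /= X_test.sum(axis=1, keepdims=True)
K_test = chi2_kernel(X_test, X_train, gamma=1.0)
print(clf.predict(K_test))
```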

    Human activity prediction by mapping grouplets to recurrent self-organizing map

    Get PDF
    Human activity prediction is defined as inferring the high-level activity category from the observation of only a few action units. It is valuable for time-critical applications such as emergency surveillance. For efficient prediction, we represent the ongoing human activity by body part movements, taking full advantage of their inherent sequentiality, and then find the best-matching activity template with an appropriate alignment measure. In streaming videos, dense spatio-temporal interest points (STIPs) are first extracted as low-level descriptors for their high detection efficiency. Then, sparse grouplets, i.e., clustered point groups, are located to represent body part movements, for which we propose a scale-adaptive mean shift method that can determine the grouplet number and scale for each frame adaptively. To learn the sequentiality, located grouplets are successively mapped to a Recurrent Self-Organizing Map (RSOM), which has been pre-trained to preserve the temporal topology of grouplet sequences. During this mapping, a growing RSOM trajectory, which represents the ongoing activity, is obtained. For the special structure of RSOM trajectories, a combination of dynamic time warping (DTW) distance and edit distance, called the DTW-E distance, is designed for similarity measurement. Four activity datasets with different characteristics, such as complex scenes and inter-class ambiguities, serve for performance evaluation. Experimental results confirm that our method is very efficient at predicting human activity and yields better performance than state-of-the-art works. (C) 2015 Elsevier B.V. All rights reserved.
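    Below is a plain dynamic time warping (DTW) routine in Python, shown as the alignment backbone that the proposed DTW-E distance builds on. The edit-distance component of DTW-E follows the paper's own definition and is not reproduced here; the toy trajectories are hypothetical stand-ins for RSOM node coordinates.

```python
# Standard DTW between two point sequences (the generic recursion only;
# the paper's DTW-E adds an edit-distance component not reproduced here).
import numpy as np

def dtw(a, b):
    """a, b: sequences of points (e.g., RSOM node coordinates)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = [(0, 0), (1, 1), (2, 2)]   # hypothetical activity template
ongoing = [(0, 0), (1, 2)]            # hypothetical ongoing trajectory
print(dtw(template, ongoing))
```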

    A compact representation of human actions by sliding coordinate coding

    Get PDF
    Human action recognition remains challenging in realistic videos, where scale and viewpoint changes make the problem complicated. Many complex models have been developed to overcome these difficulties, while we explore using low-level features and typical classifiers to achieve state-of-the-art performance. The baseline feature-encoding model for action recognition is the bag-of-words model, which is highly efficient but ignores the arrangement of local features. Refined methods compensate for this problem by using a large number of co-occurrence descriptors or a concatenation of the local distributions in designed segments. In contrast, this article proposes to encode the relative position of visual words using a simple but very compact method called sliding coordinate coding (SCC). The SCC vector of each kind of word is only an eight-dimensional vector, which is more compact than many of the spatial or spatial–temporal pooling methods in the literature. Our key observation is that relative position is robust to variations in video scale and view angle. Additionally, we design a temporal cutting scheme to bound the range of coding within video clips, since visual words far from each other have little relationship. In experiments, four action datasets, KTH, Rochester Activities, IXMAS, and UCF YouTube, are used for performance evaluation. Results show that our method achieves comparable or better performance than the state of the art, while using more compact and less complex models.
    Published version
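    The exact SCC construction is defined in the article; as one plausible reading, the sketch below accumulates, for each visual word, the directions toward temporally nearby words into eight angular bins, yielding one eight-dimensional vector per word. The windowing rule and normalization here are assumptions for illustration, not the article's definition.

```python
# Hedged sketch of an eight-dimensional relative-position code per word:
# accumulate directions toward temporally nearby words into 8 angle bins.
import numpy as np

def scc_vectors(points, times, labels, n_words, window=10.0):
    codes = np.zeros((n_words, 8))
    for i in range(len(points)):
        for j in range(len(points)):
            if i == j or abs(times[i] - times[j]) > window:
                continue                  # temporal cutting: skip far pairs
            dx, dy = points[j] - points[i]
            b = int((np.arctan2(dy, dx) % (2 * np.pi)) / (np.pi / 4)) % 8
            codes[labels[i], b] += 1.0
    norms = np.linalg.norm(codes, axis=1, keepdims=True)
    return codes / np.maximum(norms, 1e-12)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(scc_vectors(pts, [0.0, 1.0, 2.0], [0, 1, 0], 2))
```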