
    Mechanisms and Kinetics for Sorption of CO2 on Bicontinuous Mesoporous Silica Modified with n-Propylamine

    We studied equilibrium adsorption and uptake kinetics and identified the molecular species that formed during sorption of carbon dioxide on amine-modified silica. Bicontinuous silicas (AMS-6 and MCM-48) were postsynthetically modified with (3-aminopropyl)triethoxysilane or (3-aminopropyl)methyldiethoxysilane, and amine-modified AMS-6 adsorbed more CO2 than did amine-modified MCM-48. By in situ FTIR spectroscopy, we showed that the amine groups reacted with CO2 and formed ammonium carbamate ion pairs as well as carbamic acids under both dry and moist conditions. The carbamic acid was stabilized by hydrogen bonds, and ammonium carbamate ion pairs formed preferentially on sorbents with high densities of amine groups. Under dry conditions, silylpropylcarbamate formed slowly by condensation of carbamic acid with silanol groups. The ratio of ammonium carbamate ion pairs to silylpropylcarbamate was higher for samples with high amine contents than for samples with low amine contents. Neither bicarbonates nor carbonates formed under dry or moist conditions. The uptake of CO2 was enhanced in the presence of water, which was rationalized by the observed release of additional amine groups under these conditions and the related formation of ammonium carbamate ion pairs. Distinct evidence for a fourth, irreversibly formed moiety was observed during sorption of CO2 under dry conditions. Significant amounts of physisorbed, linear CO2 were detected at relatively high partial pressures of CO2; these species could adsorb only after the reactive amine groups were consumed.
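The abstract reports uptake kinetics without specifying a rate model. As a minimal illustration of how such uptake data are commonly analyzed (an assumption here, not the authors' method), the sketch below fits a pseudo-first-order uptake curve, q(t) = q_e(1 - exp(-kt)), to a hypothetical dataset; the parameter values and units are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, q_e, k):
    # uptake q(t) rising toward equilibrium capacity q_e at rate k
    return q_e * (1.0 - np.exp(-k * t))

# Hypothetical, noise-free uptake curve (units assumed: mmol/g vs. min)
t = np.linspace(0.0, 60.0, 50)
q_obs = pseudo_first_order(t, 2.1, 0.15)

# The fit recovers the generating parameters
(q_e_fit, k_fit), _ = curve_fit(pseudo_first_order, t, q_obs, p0=(1.0, 0.1))
print(round(q_e_fit, 3), round(k_fit, 3))  # 2.1 0.15
```

With real uptake data the same call would return best-fit capacity and rate constants, which is one way differences between AMS-6 and MCM-48 sorbents could be quantified.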

    Background subtraction using multi-channel fused lasso

    Abstract Background subtraction is a fundamental problem in computer vision. Despite significant progress over the past decade, accurate foreground extraction in complex scenarios remains challenging. Recently, sparse signal recovery has attracted considerable attention because moving objects in videos are sparse. Considering the coherence of the foreground in the spatial and temporal domains, many works use structured sparsity or fused sparsity to regularize the foreground signals. However, existing methods ignore the group prior of foreground signals across multiple channels (such as RGB). In fact, a pixel should be treated as a multi-channel signal: if a pixel equals its neighbors, all three of its RGB coefficients should be equal. In this paper, we propose a Multi-Channel Fused Lasso regularizer to exploit the smoothness of multi-channel signals. The proposed method is validated on various challenging video sequences. Experiments demonstrate that our approach works effectively on a wide range of complex scenarios and achieves state-of-the-art performance.
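The abstract does not give the regularizer's exact form, but the idea of grouping a pixel's channels can be sketched as a penalty that combines per-pixel group sparsity with a fused term coupling all channels of neighboring pixels through one group norm. The function name and weights below are hypothetical, not the paper's notation.

```python
import numpy as np

def multichannel_fused_lasso_penalty(f, lam1=1.0, lam2=1.0):
    """Penalty on an H x W x C foreground estimate f: per-pixel group
    sparsity across channels plus a fused term that couples all C
    channels of neighboring pixels through one group norm."""
    sparsity = np.linalg.norm(f, axis=2).sum()               # group sparsity
    dh = np.linalg.norm(f[:, 1:] - f[:, :-1], axis=2).sum()  # horizontal fusion
    dv = np.linalg.norm(f[1:] - f[:-1], axis=2).sum()        # vertical fusion
    return lam1 * sparsity + lam2 * (dh + dv)

# A flat 4 x 4 patch of identical RGB pixels incurs no fused penalty:
# only the group-sparsity term remains (16 pixels x ||(1, 2, 2)|| = 48)
f = np.tile(np.array([1.0, 2.0, 2.0]), (4, 4, 1))
print(multichannel_fused_lasso_penalty(f))  # 48.0
```

Because the difference between adjacent pixels enters as a single Euclidean norm over all channels, the penalty encourages the three RGB coefficients to become equal together, matching the group prior described above.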

    Face hallucination via coarse-to-fine recursive kernel regression structure

    Abstract In recent years, patch-based face hallucination algorithms have attracted considerable interest due to their effectiveness. These approaches produce a high-resolution (HR) face image from the corresponding low-resolution (LR) input by learning a reconstruction model from a given training image set. The critical problem in these algorithms is establishing the underlying relationship between LR and HR patch pairs. Most previous methods represent each input LR patch as a linear combination of the training set in the LR space and reuse the combination weights to reconstruct the target HR patch. However, this assumes that the same combination weights are shared across resolution spaces, which is difficult to satisfy in practice because of the one-to-many mapping between LR and HR patches. In this paper, we instead directly train a series of adaptive kernel regression mappings that predict the lost high-frequency information from the LR patch, which avoids this difficult assumption. During training, we first establish a local optimization function on each LR/HR training pair according to the geometric structure of neighboring patches. The local optimization has two objectives: 1) ensure reconstruction consistency between each LR patch and the corresponding HR patch, and 2) preserve the intrinsic geometry between each HR training patch and its original neighbors after reconstruction. The local optimizations are then incorporated into a global optimization for calculating the optimal kernel regression function. To better approximate the target HR patch, we further propose a recursive structure that compensates for the residual reconstruction error of high-frequency details through a series of regression mappings. The proposed method is fast yet very effective in producing HR face images. Experimental results show that the proposed approach achieves superior performance with reasonable computational time compared with state-of-the-art methods.
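The abstract does not give the exact regression form; a common closed-form baseline for such patch-to-patch mappings (an assumption here, not the paper's model) is kernel ridge regression, sketched below on toy feature vectors standing in for LR patches. All names and data are illustrative.

```python
import numpy as np

def gaussian_kernel(a, b, gamma=0.5):
    # pairwise Gaussian kernel between the rows of a and b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(x_train, y_train, lam=1e-3, gamma=0.5):
    """Closed-form kernel ridge regression: alpha = (K + lam*I)^-1 y."""
    k = gaussian_kernel(x_train, x_train, gamma)
    return np.linalg.solve(k + lam * np.eye(len(x_train)), y_train)

def predict_krr(x_train, alpha, x_new, gamma=0.5):
    return gaussian_kernel(x_new, x_train, gamma) @ alpha

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 4))             # toy "LR patch" feature vectors
y = np.sin(x).sum(1, keepdims=True)      # toy "high-frequency" targets
alpha = fit_krr(x, y)
pred = predict_krr(x, alpha, x[:1])
print(abs(pred[0, 0] - y[0, 0]) < 0.1)   # True: near-interpolation on training data
```

A recursive structure like the one described above could then fit a second mapping to the residual `y - predict_krr(x, alpha, x)` to compensate for remaining high-frequency error.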

    Self-supervised learning via multi-view facial rendezvous for 3D/4D affect recognition

    Abstract In this paper, we present Multi-view Facial Rendezvous (MiFaR): a novel multi-view self-supervised learning model for 3D/4D facial affect recognition. Our self-supervised architecture learns collaboratively across multiple views. For each view, the model computes embeddings via different encoders and aims to robustly correlate two distorted versions of the input batch. We additionally present a novel loss function that not only leverages the correlation among the underlying facial patterns across views but is also robust and consistent across different batch sizes. Finally, our model is equipped with distributed training to ensure better learning along with computational convenience. We conduct extensive experiments and report ablations to validate the competence of our model on widely used datasets for 3D/4D FER.
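The abstract does not define the loss, but "correlating two distorted versions of the input batch" with batch-size robustness can be sketched with a decorrelation-style objective in the spirit of Barlow Twins; the function name and weighting below are hypothetical, not MiFaR's actual formulation.

```python
import numpy as np

def cross_correlation_loss(z1, z2, lam=5e-3):
    """Decorrelation-style objective between the embeddings of two views:
    drive the feature cross-correlation matrix toward the identity.
    Normalizing by batch statistics keeps the scale consistent across
    batch sizes."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / z1.shape[0]          # D x D cross-correlation
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag

# For identical "views", the on-diagonal (invariance) term vanishes
z = np.random.default_rng(1).normal(size=(8, 4))
print(cross_correlation_loss(z, z, lam=0.0) < 1e-8)  # True
```

Because the correlation matrix is computed from batch-normalized embeddings, the objective's scale does not drift with batch size, which mirrors the consistency property the abstract claims.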

    3D skeletal gesture recognition via sparse coding of time-warping invariant riemannian trajectories

    Abstract 3D skeleton based human representation for gesture recognition has attracted increasing attention due to its invariance to camera view and environment dynamics. Existing methods typically use absolute coordinates to represent human motion features. However, gestures are independent of the performer’s location, and the features should be invariant to the performer’s body size. Moreover, temporal dynamics can significantly distort the distance metric when comparing and identifying gestures. In this paper, we represent each skeleton as a point in the product space of the special orthogonal group SO(3), which explicitly models the 3D geometric relationships between body parts. A gesture skeletal sequence can then be characterized as a trajectory on a Riemannian manifold. Next, we generalize the transported square-root vector field to obtain a re-parametrization invariant metric on the product space of SO(3), so that trajectories can be compared in a time-warping invariant manner. Furthermore, we present a sparse coding of skeletal trajectories that explicitly considers the labeling information of each atom to enforce the discriminative power of the dictionary. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on three challenging gesture recognition benchmarks.
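Trajectories on the product space of SO(3) are built from per-frame relative rotations between body parts, and comparing them requires the manifold's geodesic distance. The sketch below shows the standard SO(3) geodesic distance (the rotation angle of r1ᵀr2); the helper names are illustrative, not the paper's code.

```python
import numpy as np

def rot_z(theta):
    # rotation by theta about the z-axis, an element of SO(3)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def geodesic_distance(r1, r2):
    """Geodesic distance on SO(3): the rotation angle of r1^T r2,
    recovered from the trace via cos(theta) = (tr(R) - 1) / 2."""
    cos_theta = (np.trace(r1.T @ r2) - 1.0) / 2.0
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Two z-rotations are separated by exactly their angle difference
d = geodesic_distance(rot_z(0.2), rot_z(0.9))
print(round(d, 6))  # 0.7
```

On the product space, a skeleton's distance would sum (or aggregate) this quantity over all body-part rotations, and the transported square-root vector field makes the resulting trajectory metric invariant to re-parametrizations such as time warps.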
