
    Recent advances in computational study and design of MOF catalysts for CO2 conversion

    Catalytic conversion of the greenhouse gas CO2 into value-added chemicals and fuels is highly beneficial to the environment, the economy, and the global energy supply. Metal–organic frameworks (MOFs) are promising catalysts for this purpose due to their uniquely high structural and chemical tunability. In the catalyst discovery process, computational chemistry has emerged as an essential tool, as it can not only aid in the interpretation of experimental observations but also provide atomistic-level insights into the catalytic mechanism. This Mini Review summarizes recent computational studies on MOF-catalyzed CO2 conversion through different types of reactions, discusses the use of various computational methods in those works, and offers a brief perspective on future work in this field.

    ABC-CNN: An Attention Based Convolutional Neural Network for Visual Question Answering

    We propose a novel attention-based deep learning architecture for the visual question answering (VQA) task. Given an image and a related natural language question, VQA generates a natural language answer to the question. Generating correct answers requires the model's attention to focus on the regions relevant to the question, because different questions inquire about the attributes of different image regions. We introduce an attention-based configurable convolutional neural network (ABC-CNN) to learn such question-guided attention. ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics. We evaluate the ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR, and the VQA dataset. The ABC-CNN model achieves significant improvements over state-of-the-art methods on these datasets. The question-guided attention generated by ABC-CNN is also shown to reflect the regions that are highly relevant to the questions.
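    As a rough illustration of the question-guided attention described above, the sketch below convolves an image feature map with a 1x1 kernel configured by a question embedding. All shapes, the projection matrix, and the function names are hypothetical stand-ins; the paper's actual configurable kernels and surrounding network are considerably richer:

    ```python
    import numpy as np

    def question_guided_attention(image_feat, question_emb, proj):
        """Sketch of question-configured attention (hypothetical shapes).

        image_feat:   (C, H, W) image feature map
        question_emb: (D,) question embedding
        proj:         (C, D) projection producing a 1x1 "configurable" kernel
        """
        kernel = proj @ question_emb                         # (C,) kernel from question semantics
        scores = np.einsum('c,chw->hw', kernel, image_feat)  # 1x1 convolution over the map
        scores -= scores.max()                               # numerical stability for softmax
        attn = np.exp(scores)
        attn /= attn.sum()                                   # attention map sums to 1
        return attn

    rng = np.random.default_rng(0)
    attn = question_guided_attention(rng.normal(size=(8, 4, 4)),
                                     rng.normal(size=(16,)),
                                     rng.normal(size=(8, 16)))
    print(attn.shape)  # (4, 4)
    ```

    The resulting map weights the (H, W) positions by their relevance to the question; in the full model it would modulate the feature map before answer prediction.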

    Impedance-based Stability Analysis of Metro Traction Power System Considering Regenerative Braking


    A novel decomposed-ensemble time series forecasting framework: capturing underlying volatility information

    Time series forecasting is a significant and challenging task across many fields. Recently, methods based on mode decomposition have dominated the forecasting of complex time series because of their ability to capture local characteristics and extract intrinsic modes from data. Unfortunately, most such models fail to capture the implied volatilities, which contain significant information. To improve the prediction of today's diverse and complex time series, we propose a novel forecasting paradigm that integrates decomposition with the ability to capture the series' underlying fluctuation information. In our methodology, we apply the Variational Mode Decomposition (VMD) algorithm to decompose the time series into K distinct sub-modes. Following this decomposition, we apply the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model to extract the volatility information in these sub-modes. Both the numerical data and the volatility information of each sub-mode are then used to train a neural network that predicts the sub-mode, and we aggregate the predictions of all sub-modes to generate the final output. By integrating econometric and artificial intelligence methods, and by taking into account both the numerical and the volatility information of the time series, the proposed framework demonstrates superior forecasting performance, as evidenced by the significant decreases in MSE, RMSE, and MAPE in our comparative experiments.
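    The decompose-then-forecast pipeline described above can be sketched end to end. Heavy hedging applies: real VMD is stood in for by iterative moving-average detrending, GARCH by a rolling standard deviation, and the per-mode neural network by a naive last-value forecaster. Only the pipeline shape (decompose into k sub-modes, attach a volatility feature, predict per mode, aggregate) mirrors the abstract:

    ```python
    import numpy as np

    def decomposed_ensemble_forecast(series, k=3, window=5):
        # Stand-in decomposition (the paper uses VMD): peel off k-1 moving-average
        # trends, leaving a residual, so the modes sum exactly to the series.
        modes, residual = [], np.asarray(series, dtype=float)
        for i in range(k - 1):
            w = window * (k - 1 - i)
            trend = np.convolve(residual, np.ones(w) / w, mode='same')
            modes.append(trend)
            residual = residual - trend
        modes.append(residual)

        # Per-mode forecast (the paper trains a neural net on values + GARCH
        # volatility); here a naive last-value forecast, with rolling std as a
        # volatility proxy that the naive stand-in computes but does not use.
        forecasts = []
        for m in modes:
            vol = float(np.std(m[-window:]))  # stand-in for GARCH volatility feature
            forecasts.append(m[-1])
        return float(sum(forecasts))          # aggregate sub-mode predictions

    t = np.linspace(0, 4 * np.pi, 200)
    y = np.sin(t) + 0.1 * t
    forecast = decomposed_ensemble_forecast(y)
    ```

    Because the stand-in modes sum exactly to the series, the naive aggregate reduces to the last observed value; swapping in VMD, a fitted GARCH(1,1), and a trained per-mode network upgrades each stage independently without changing the pipeline's structure.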

    Upcycling Mask Waste to Carbon Capture Sorbents: A Combined Experimental and Computational Study

    Massive plastic pollution and large-scale emission of CO2 into the atmosphere represent two major and deeply connected societal challenges, which can have adverse impacts on climate, human health, and marine ecosystems. In particular, the COVID-19 pandemic led to substantially increased production, use, and discarding of disposable masks, a problem that requires urgent and effective technological solutions to mitigate its negative environmental impacts. Over the years, significant research efforts have sought to address the challenges of plastic waste and CO2 emission, for example through the development of chemical upcycling methods and of low-cost CO2 capture sorbents at scale, respectively. In this work, we introduce a simple and scalable method for directly converting surgical polypropylene mask waste into sulfur-doped carbon fibers, which exhibit a high CO2 sorption capacity of up to 3.11 mmol/g and high selectivity (>45) against N2 gas. This excellent performance is attributed to the high affinity between the sulfur heteroatoms in the carbon framework and CO2 gas molecules, as confirmed by combined experimental and simulation investigations. This work provides an industrially viable approach for upcycling plastic waste into higher-value carbon-based products, which can in turn be employed to address the environmental challenge of CO2 remediation.

    Triplet Attention Transformer for Spatiotemporal Predictive Learning

    Spatiotemporal predictive learning offers a self-supervised learning paradigm in which models learn both spatial and temporal patterns by predicting future sequences from historical ones. Mainstream methods are dominated by recurrent units, yet they are limited by their lack of parallelization and often underperform in real-world scenarios. To improve prediction quality while maintaining computational efficiency, we propose an innovative triplet attention transformer designed to capture both inter-frame dynamics and intra-frame static features. Specifically, the model incorporates the Triplet Attention Module (TAM), which replaces traditional recurrent units by exploiting self-attention mechanisms along the temporal, spatial, and channel dimensions. In this configuration: (i) temporal tokens contain abstract representations of inter-frame information, facilitating the capture of inherent temporal dependencies; (ii) spatial and channel attention combine to refine the intra-frame representation through fine-grained interactions across the spatial and channel dimensions. Alternating temporal, spatial, and channel-level attention allows our approach to learn more complex short- and long-range spatiotemporal dependencies. Extensive experiments demonstrate performance surpassing existing recurrent-based and recurrent-free methods, achieving state-of-the-art results under multi-scenario evaluation including moving object trajectory prediction, traffic flow prediction, driving scene prediction, and human motion capture. Comment: Accepted to WACV 202
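    The idea of alternating attention over the temporal, spatial, and channel axes can be illustrated with plain single-head self-attention applied along each axis in turn. This uses identity projections and omits the real TAM's learned projections, gating, and feed-forward blocks; the tensor layout and function names are assumptions, not the paper's implementation:

    ```python
    import numpy as np

    def self_attention(x):
        """Single-head self-attention over a token matrix x of shape (n, d),
        with identity Q/K/V projections for brevity."""
        scores = x @ x.T / np.sqrt(x.shape[1])
        scores -= scores.max(axis=1, keepdims=True)  # numerical stability
        w = np.exp(scores)
        w /= w.sum(axis=1, keepdims=True)            # row-wise softmax
        return w @ x

    def triplet_attention(video):
        """Attend along temporal, spatial, and channel axes in turn.
        video: (T, C, H, W), axis names following the abstract."""
        T, C, H, W = video.shape
        # (i) temporal: each token is one flattened frame
        x = video.reshape(T, C * H * W)
        video = self_attention(x).reshape(T, C, H, W)
        # (ii) spatial: tokens are the H*W positions
        x = video.transpose(2, 3, 0, 1).reshape(H * W, T * C)
        video = self_attention(x).reshape(H, W, T, C).transpose(2, 3, 0, 1)
        # (iii) channel: tokens are the C channels
        x = video.transpose(1, 0, 2, 3).reshape(C, T * H * W)
        video = self_attention(x).reshape(C, T, H, W).transpose(1, 0, 2, 3)
        return video

    rng = np.random.default_rng(1)
    out = triplet_attention(rng.normal(size=(4, 3, 8, 8)))
    print(out.shape)  # (4, 3, 8, 8)
    ```

    Each stage mixes information along exactly one axis while treating the others as feature dimensions, which is what lets the module replace recurrence without sequential dependencies.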