Catalytic oxidation of chlorinated organics over lanthanide perovskites: effects of phosphoric acid etching and water vapor on chlorine desorption behavior
In this article, the underlying effects of phosphoric acid etching and additional water vapor on chlorine desorption behavior over a model catalyst, La3Mn2O7, were explored. Acid treatment led to the formation of LaPO4 and enhanced the mobility of the lattice oxygen of La3Mn2O7, as evidenced by a range of characterization techniques (e.g., X-ray diffraction, temperature-programmed analyses, and NH3-IR). The former introduced thermally stable Brønsted acidic sites that enhanced dichloromethane (DCM) hydrolysis, while the latter facilitated desorption of accumulated chlorine at elevated temperatures. The acid-modified catalyst displayed superior catalytic activity in DCM oxidation compared with the untreated sample, which was ascribed to the abundance of proton donors and Mn(IV) species. The addition of water vapor to the reaction favored the formation and desorption of HCl and avoided surface chlorination at low temperatures. This resulted in a further reduction in reaction temperature under humid conditions (T90 of 380 °C for the modified catalyst). These results provide an in-depth interpretation of chlorine desorption behavior during DCM oxidation, which should aid the future design of industrial catalysts for the durable catalytic combustion of chlorinated organics.
Adversarial Attention-Based Variational Graph Autoencoder
Autoencoders have been successfully used for graph embedding, and many variants have been proven to effectively express graph data and conduct graph analysis in low-dimensional space. However, previous methods either ignore the structure and properties of the reconstructed graph or do not consider the potential data distribution in the graph, which typically leads to unsatisfactory graph embedding performance. In this paper, we propose the adversarial attention variational graph autoencoder (AAVGA), a novel framework that incorporates attention networks into the encoder and uses an adversarial mechanism during embedding training. The encoder involves node neighbors in the representation of each node by stacking attention layers, which further improves the graph embedding performance of the encoder. At the same time, owing to the adversarial mechanism, the distribution of the latent features generated by the encoder is closer to the actual distribution of the original graph data; thus, the decoder generates a graph that is closer to the original graph. Experimental results show that AAVGA performs competitively with popular state-of-the-art graph encoders on three citation datasets.
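The encoder-decoder pipeline the abstract describes (attention-based aggregation over neighbors, a variational latent layer sampled via the reparameterization trick, and an inner-product decoder that reconstructs the adjacency matrix) can be sketched in plain numpy. This is a minimal illustrative sketch, not the authors' implementation: all weights are random, the adversarial discriminator that regularizes the latent distribution is omitted for brevity, and every function and parameter name here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_layer(X, A, W, a):
    """One graph attention layer: each node aggregates projected neighbor
    features, weighted by attention scores masked to the adjacency."""
    H = X @ W                           # (N, d) projected node features
    d = H.shape[1]
    # pairwise attention logits, decomposed as a_src^T h_i + a_dst^T h_j
    logits = (H @ a[:d])[:, None] + (H @ a[d:])[None, :]
    logits = np.where(A > 0, logits, -1e9)   # attend only over existing edges
    alpha = softmax(logits, axis=1)          # per-node attention coefficients
    return np.tanh(alpha @ H)

def encode(X, A, params):
    """Attention encoder producing mean and log-variance heads; the
    reparameterization trick then samples the latent node embeddings."""
    H = attention_layer(X, A, params["W1"], params["a1"])
    mu = H @ params["Wmu"]
    logvar = H @ params["Wvar"]
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    return z, mu, logvar

def decode(Z):
    """Inner-product decoder: sigmoid(Z Z^T) gives edge probabilities."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# toy 4-node graph with self-loops so each node also attends to itself
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
X = rng.standard_normal((4, 5))
params = {"W1":  rng.standard_normal((5, 8)) * 0.1,
          "a1":  rng.standard_normal(16) * 0.1,
          "Wmu": rng.standard_normal((8, 2)) * 0.1,
          "Wvar": rng.standard_normal((8, 2)) * 0.1}

Z, mu, logvar = encode(X, A, params)
A_hat = decode(Z)   # reconstructed adjacency probabilities, shape (4, 4)
```

In a full model, the reconstruction loss on `A_hat`, the KL term on `(mu, logvar)`, and the adversarial loss from a discriminator on `Z` would be optimized jointly; here only the forward pass is shown.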