High Volumetric Performance Supercapacitors with Controlled Nanomorphology
Supercapacitors are promising energy storage devices: they offer higher energy density than dielectric capacitors, and higher power density and far longer cycle life (more than a million cycles) than conventional batteries. To satisfy the requirements of emerging energy technologies, supercapacitors with still higher energy and power densities are needed. In this chapter, we substantially improve the electrochemical performance over commercial products by controlling the nanomorphology of the cells. While much past research has focused mainly on gravimetric energy density, we also devote effort to developing nanomorphologic structures that realize high volumetric energy and power densities, since device volume is another critical performance parameter. Moreover, fundamental studies are carried out on mobile-ion transport and storage in the nanostructures developed in this chapter.
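The gravimetric and volumetric figures of merit discussed in the abstract follow from the standard textbook relations E = ½CV² and P_max = V²/(4·ESR). A minimal sketch, using hypothetical cell parameters (not values from the chapter):

```python
# Gravimetric vs. volumetric figures of merit for a supercapacitor cell.
# Standard textbook relations (not taken from this chapter):
#   E = 1/2 * C * V^2          stored energy
#   P_max = V^2 / (4 * R_esr)  maximum deliverable power

def supercap_metrics(capacitance_f, voltage_v, esr_ohm, mass_kg, volume_l):
    """Return (Wh/kg, Wh/L, kW/kg, kW/L) for a single cell."""
    energy_j = 0.5 * capacitance_f * voltage_v**2
    power_w = voltage_v**2 / (4.0 * esr_ohm)
    energy_wh = energy_j / 3600.0
    return (energy_wh / mass_kg,          # gravimetric energy density
            energy_wh / volume_l,         # volumetric energy density
            power_w / 1000.0 / mass_kg,   # gravimetric power density
            power_w / 1000.0 / volume_l)  # volumetric power density

# Hypothetical 100 F, 2.7 V cell: 20 g, 15 mL, 10 mOhm ESR.
e_kg, e_l, p_kg, p_l = supercap_metrics(100.0, 2.7, 0.010, 0.020, 0.015)
print(f"{e_kg:.1f} Wh/kg, {e_l:.1f} Wh/L, {p_kg:.1f} kW/kg, {p_l:.1f} kW/L")
```

The split between per-mass and per-volume normalization is exactly why device volume matters: a low-density electrode can score well gravimetrically while remaining poor volumetrically.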
The Dynamics Analysis of Two Delayed Epidemic Spreading Models with Latent Period on Heterogeneous Network
Two novel delayed epidemic spreading models with latent period on scale-free networks are presented. The formula for the basic reproductive number is derived, and the dynamical behaviors of the models are analyzed. Numerical simulations verify the main results.
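The abstract does not reproduce the basic reproductive number itself. For orientation only, the classical degree-based mean-field result for SIS spreading on annealed scale-free networks (Pastor-Satorras and Vespignani, not the paper's delayed models) gives the epidemic threshold λc = ⟨k⟩/⟨k²⟩ and R0 = (λ/μ)·⟨k²⟩/⟨k⟩, sketched here with a hypothetical power-law degree distribution:

```python
import numpy as np

# Classical heterogeneous mean-field result for SIS on an annealed
# scale-free network, for orientation only -- the delayed models in the
# paper derive their own basic reproductive number.
#   epidemic threshold: lambda_c = <k> / <k^2>
#   R0 = (lambda / mu) * <k^2> / <k>

# Hypothetical power-law degree distribution P(k) ~ k^(-3), k in [3, 1000].
k = np.arange(3, 1001)
p = k.astype(float) ** -3.0
p /= p.sum()
k_mean = (k * p).sum()
k2_mean = (k**2 * p).sum()

lam, mu = 0.05, 0.2          # per-contact infection rate, recovery rate
threshold = k_mean / k2_mean
r0 = (lam / mu) * k2_mean / k_mean
print(f"<k>={k_mean:.2f}, <k^2>={k2_mean:.2f}, "
      f"lambda_c={threshold:.4f}, R0={r0:.2f}")
```

Because ⟨k²⟩ grows with the degree cutoff on scale-free networks, the threshold shrinks toward zero as the network grows, which is why heterogeneity matters in these models.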
VSA: Learning Varied-Size Window Attention in Vision Transformers
Attention within windows has been widely explored in vision transformers to
balance the performance, computation complexity, and memory footprint. However,
current models adopt a hand-crafted fixed-size window design, which restricts
their capacity of modeling long-term dependencies and adapting to objects of
different sizes. To address this drawback, we propose
\textbf{V}aried-\textbf{S}ize Window \textbf{A}ttention (VSA) to learn adaptive
window configurations from data. Specifically, based on the tokens within each
default window, VSA employs a window regression module to predict the size and
location of the target window, i.e., the attention area where the key and value
tokens are sampled. Adopting VSA independently for each attention head lets the
model capture long-term dependencies and rich context from diverse windows,
and promote information exchange among overlapped windows. VSA is an
easy-to-implement module that can replace the window attention in
state-of-the-art representative models with minor modifications and negligible
extra computational cost while improving their performance by a large margin,
e.g., 1.1\% for Swin-T on ImageNet classification. In addition, the performance
gain increases when larger images are used for training and testing. Experimental
results on more downstream tasks, including object detection, instance
segmentation, and semantic segmentation, further demonstrate the superiority of
VSA over the vanilla window attention in dealing with objects of different
sizes. The code will be released at
https://github.com/ViTAE-Transformer/ViTAE-VSA.
Comment: 23 pages, 13 tables, and 5 figures
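The window-regression idea described above can be sketched in a few lines: tokens in each default window are pooled and fed to a small head that predicts a per-head scale and offset, and key/value tokens are then sampled inside the resulting target window. The weights below are random placeholders and the layout is illustrative only, not the authors' implementation:

```python
import numpy as np

# Sketch of varied-size window attention (VSA): predict a target window
# per head from the default window's tokens, then resample K/V positions
# inside it. Random weights stand in for a learned regression head.

rng = np.random.default_rng(0)
C, W = 32, 7                       # channels, default window size (W x W)
tokens = rng.normal(size=(W * W, C))

# Window regression head: pooled feature -> (scale_x, scale_y, off_x, off_y).
w_reg = rng.normal(scale=0.01, size=(C, 4))
pooled = tokens.mean(axis=0)
scale_x, scale_y, off_x, off_y = pooled @ w_reg
scale_x, scale_y = np.exp(scale_x), np.exp(scale_y)   # keep scales positive

# Default window centered at (cx, cy) with half-extent W/2; the target
# window is the scaled and shifted attention area for this head.
cx = cy = half = W / 2.0
target = (cx + off_x - half * scale_x, cy + off_y - half * scale_y,
          cx + off_x + half * scale_x, cy + off_y + half * scale_y)
print("target window (x0, y0, x1, y1):", np.round(target, 3))

# Key/value positions: a uniform W x W grid resampled inside the target
# window (the paper samples token features here, e.g. bilinearly).
xs = np.linspace(target[0], target[2], W)
ys = np.linspace(target[1], target[3], W)
kv_coords = np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)
```

Because each head regresses its own window, heads attending to the same default window can cover differently sized and shifted regions, which is what allows windows to overlap and exchange information.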
Grapy-ML: Graph Pyramid Mutual Learning for Cross-dataset Human Parsing
Human parsing, or human body part semantic segmentation, has been an active
research topic due to its wide potential applications. In this paper, we
propose a novel GRAph PYramid Mutual Learning (Grapy-ML) method to address the
cross-dataset human parsing problem, where the annotations are at different
granularities. Starting from the prior knowledge of the human body hierarchical
structure, we devise a graph pyramid module (GPM) by stacking three levels of
graph structures, from coarse granularity to fine granularity. At
each level, GPM utilizes the self-attention mechanism to model the correlations
between context nodes. Then, it adopts a top-down mechanism to progressively
refine the hierarchical features through all the levels. GPM also enables
efficient mutual learning. Specifically, the network weights of the first two
levels are shared to exchange the learned coarse-granularity information across
different datasets. By making use of the multi-granularity labels, Grapy-ML
learns a more discriminative feature representation and achieves
state-of-the-art performance, which is demonstrated by extensive experiments on
the three popular benchmarks, e.g., the CIHP dataset. The source code is publicly
available at https://github.com/Charleshhy/Grapy-ML.
Comment: Accepted as an oral paper at AAAI 2020. 9 pages, 4 figures.
https://www.aaai.org/Papers/AAAI/2020GB/AAAI-HeH.2317.pd
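The per-level operation the abstract describes, self-attention that models correlations between context nodes, can be sketched with scaled dot-product attention over a handful of body-part nodes. Names and sizes here are hypothetical, not the authors' code:

```python
import numpy as np

# Sketch of self-attention among the context nodes of one pyramid level
# in a Grapy-ML-style module: each node aggregates features from every
# other node, weighted by scaled dot-product attention. Random weights
# stand in for learned projections.

rng = np.random.default_rng(0)
n_nodes, d = 6, 16                 # e.g. 6 coarse body-part nodes
x = rng.normal(size=(n_nodes, d))  # node features from the backbone

wq, wk, wv = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))
q, k, v = x @ wq, x @ wk, x @ wv

scores = q @ k.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)   # softmax over nodes
refined = attn @ v                          # correlation-aware node features
```

In the full method this refinement is stacked over three granularity levels, with a top-down pass propagating coarse-level features to the finer levels.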
Biochar Adsorption Treatment for Typical Pollutants Removal in Livestock Wastewater: A Review
Biochar, a high-efficiency, environmentally friendly, and low-cost adsorbent, is commonly used as a soil conditioner, bio-fuel, and carbon sequestration agent. Recently, biochar has attracted much attention in the wastewater treatment field. Many studies have applied biochar to adsorb pollutants in wastewater because of its low-cost preparation, high surface area, large pore volume, plentiful functional groups, and environmental stability. Furthermore, it can be reused owing to its high treatment efficiency and resource recovery potential. Since biochar can adsorb the typical pollutants in livestock wastewater, it is a promising material for treating livestock wastewater. This review introduces the preparation methods, including pyrolysis, hydrothermal carbonization, and gasification, and presents applications of biochar to adsorb typical pollutants in livestock wastewater, such as organic pollutants, heavy metals, and nutrients. The organic structures, surface functional groups, surface electricity, and mineral components of biochar are examined to explain the adsorption mechanisms for these pollutants. Finally, outlooks are given for the better use of biochar in the future: the relationship among preparation parameters, structure, and adsorption performance should be discussed; quantitative analysis of the adsorption contributions of organic structures, surface functional groups, surface electricity, and mineral components should be performed; and the disposal of post-sorption biochar should be investigated.
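Adsorption performance of the kind surveyed here is commonly summarized with equilibrium isotherm models such as the Langmuir equation q_e = q_max·K_L·C_e/(1 + K_L·C_e). A minimal sketch with hypothetical parameter values (not data from the review):

```python
import numpy as np

# Langmuir isotherm, a standard model for the pollutant-uptake data this
# kind of review compares across biochars. Parameters are hypothetical.

def langmuir(c_e, q_max, k_l):
    """Equilibrium uptake q_e (mg/g) at equilibrium concentration c_e (mg/L)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

c = np.array([1.0, 5.0, 20.0, 100.0])        # mg/L
q = langmuir(c, q_max=80.0, k_l=0.05)        # hypothetical biochar sorbent
print(np.round(q, 2))   # uptake rises toward q_max as C_e grows
```

Fitting q_max and K_L to batch-adsorption data is one way the structure–performance relationships called for in the outlook could be quantified.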
Rethinking Hierarchies in Pre-trained Plain Vision Transformer
Self-supervised pre-training of vision transformers (ViTs) via masked image
modeling (MIM) has proven very effective. However, customized algorithms
should be carefully designed for the hierarchical ViTs, e.g., GreenMIM, instead
of using the vanilla and simple MAE for the plain ViT. More importantly, since
these hierarchical ViTs cannot reuse the off-the-shelf pre-trained weights of
the plain ViTs, the requirement of pre-training them leads to a massive amount
of computational cost, thereby incurring both algorithmic and computational
complexity. In this paper, we address this problem by proposing a novel idea of
disentangling the hierarchical architecture design from the self-supervised
pre-training. We transform the plain ViT into a hierarchical one with minimal
changes. Technically, we change the stride of the linear embedding layer from 16 to
4 and add convolution (or simple average) pooling layers between the
transformer blocks, thereby sequentially reducing the feature resolution from
1/4 to 1/32 of the input. Despite its simplicity, the resulting model
outperforms the plain ViT baseline in
classification, detection, and segmentation tasks on ImageNet, MS COCO,
Cityscapes, and ADE20K benchmarks, respectively. We hope this preliminary study
could draw more attention from the community on developing effective
(hierarchical) ViTs while avoiding the pre-training cost by leveraging the
off-the-shelf checkpoints. The code and models will be released at
https://github.com/ViTAE-Transformer/HPViT.
Comment: Tech report, work in progress
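The architectural change above is concrete enough for a shape walk: replacing the plain ViT's stride-16 patch embedding with stride 4, then halving resolution between stages, yields the 1/4 → 1/8 → 1/16 → 1/32 pyramid that hierarchical ViTs expect. A sketch of the bookkeeping, not the released code:

```python
# Shape walk for the described modification: stride-4 linear embedding
# plus a 2x2 average pool between transformer stages, turning a plain
# ViT's single 1/16 feature map into a 1/4 -> 1/32 feature pyramid.

img = 224
plain_vit = img // 16                 # plain ViT: one 14x14 token grid
print("plain ViT tokens:", plain_vit, "x", plain_vit)

size = img // 4                       # stride-4 embedding: 1/4 resolution
pyramid = [size]
for _ in range(3):                    # pooling between the stages
    size //= 2                        # each 2x2 average pool halves H and W
    pyramid.append(size)
print("hierarchical feature sizes:", pyramid)   # [56, 28, 14, 7]
```

Since the transformer blocks themselves are untouched, the pre-trained plain-ViT weights can be reused directly, which is the point of disentangling hierarchy from pre-training.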