Biologically Inspired Design Concept Generation Using Generative Pre-Trained Transformers
Biological systems in nature have evolved over millions of years to adapt to
and survive in their environments. Many of the features they have developed can
be inspirational and
beneficial for solving technical problems in modern industries. This leads to a
specific form of design-by-analogy called bio-inspired design (BID). Although
BID as a design method has been proven beneficial, the gap between biology and
engineering continuously hinders designers from effectively applying the
method. Therefore, we explore the recent advance of artificial intelligence
(AI) for a data-driven approach to bridge the gap. This paper proposes a
generative design approach based on the generative pre-trained language model
(PLM) to automatically retrieve and map biological analogies and generate BID in
the form of natural language. The latest generative pre-trained transformer,
namely GPT-3, is used as the base PLM. Three types of design concept generators
are identified and fine-tuned from the PLM according to the looseness of the
problem space representation. Machine evaluators are also fine-tuned to assess
the mapping relevancy between the domains within the generated BID concepts.
The approach is evaluated and then employed in a real-world project of
designing lightweight flying cars during its conceptual design phase. The
results show our approach can generate BID concepts with good performance.
Comment: Accepted by J. Mech. Des. arXiv admin note: substantial text overlap
with arXiv:2204.0971
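A minimal sketch of the generation step, assuming the legacy OpenAI completions API that was current for GPT-3 fine-tunes; the model name, prompt format, and sampling settings are illustrative assumptions, not the authors' actual artifacts:

```python
# Hedged sketch: querying a GPT-3 model fine-tuned as a BID concept generator.
# "davinci:ft-bid-generator" is a hypothetical fine-tune name.
import openai

openai.api_key = "sk-..."  # your API key

def generate_bid_concept(problem: str, model: str = "davinci:ft-bid-generator") -> str:
    """Map an engineering problem to a biological analogy stated in natural language."""
    prompt = f"Design problem: {problem}\nBio-inspired concept:"
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=120,
        temperature=0.8,   # looser sampling encourages concept diversity
        stop=["\n\n"],
    )
    return response["choices"][0]["text"].strip()

print(generate_bid_concept("reduce the structural weight of a flying car"))
```

A fine-tuned evaluator model could then score the relevancy of the biology-to-engineering mapping in the returned text, as the paper describes.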
On optimal block resampling for Gaussian-subordinated long-range dependent processes
Block-based resampling estimators have been intensively investigated for
weakly dependent time processes, which has helped to inform implementation
(e.g., best block sizes). However, little is known about resampling performance
and block sizes under strong or long-range dependence. To establish guideposts
in block selection, we consider a broad class of strongly dependent time
processes, formed by a transformation of a stationary long-memory Gaussian
series, and examine block-based resampling estimators for the variance of the
prototypical sample mean; extensions to general statistical functionals are
also considered. Unlike weak dependence, the properties of resampling
estimators under strong dependence are shown to depend intricately on the
nature of non-linearity in the time series (beyond Hermite ranks) in addition
to the long-memory coefficient and block size. Additionally, the intuition has
often been that optimal block sizes should be larger under strong dependence
(say, for a sample size $n$) than the optimal order $O(n^{1/3})$
known under weak dependence. This intuition turns out to be largely incorrect,
though a block order $O(n^{1/3})$ may be reasonable (and even optimal) in many
cases, owing to non-linearity in a long-memory time series. While optimal block
sizes are more complex under long-range dependence compared to short-range, we
provide a consistent data-driven rule for block selection, and numerical
studies illustrate that the guides for block selection perform well in other
block-based problems with long-memory time series, such as distribution
estimation and strategies for testing Hermite rank.
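For concreteness, here is an illustrative moving-block bootstrap estimator of the sample-mean variance, not the authors' implementation; the $n^{1/3}$ default mirrors the classical weak-dependence block order discussed above, while the paper's data-driven rule is more involved:

```python
# Toy sketch of a moving-block bootstrap estimate of n * Var(sample mean).
import numpy as np

def block_bootstrap_variance(x, block_len=None, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    l = block_len or max(1, round(n ** (1 / 3)))  # classical weak-dependence order
    n_starts = n - l + 1                          # overlapping block start points
    k = n // l                                    # blocks per bootstrap series
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n_starts, size=k)
        resample = np.concatenate([x[s:s + l] for s in starts])
        means[b] = resample.mean()
    return (k * l) * means.var()

# Toy subordinated series: a non-linear (even) transform of a Gaussian AR(1)
# process; a true long-memory Gaussian series would replace this in practice.
rng = np.random.default_rng(1)
g = np.zeros(2000)
for t in range(1, 2000):
    g[t] = 0.7 * g[t - 1] + rng.standard_normal()
x = np.abs(g)
print(block_bootstrap_variance(x))
```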
Generalized Equivariance and Preferential Labeling for GNN Node Classification
Existing graph neural networks (GNNs) largely rely on node embeddings, which
represent a node as a vector by its identity, type, or content. However, graphs
with unattributed nodes widely exist in real-world applications (e.g.,
anonymized social networks). Previous GNNs either assign random labels to nodes
(which introduces artefacts to the GNN) or assign one embedding to all nodes
(which fails to explicitly distinguish one node from another). Further, when
these GNNs are applied to unattributed node classification problems, they have
an undesired equivariance property, which makes them fundamentally unable to
address data with multiple possible outputs. In this paper, we analyze the
limitations of existing approaches to node classification problems. Inspired by
our analysis, we propose a generalized equivariance property and a Preferential
Labeling technique that satisfies the desired property asymptotically.
Experimental results show that our method achieves high performance in several
unattributed node classification tasks.
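A hedged sketch of the permutation idea behind such a labeling scheme: for unattributed nodes, try several random label assignments and keep the one the model itself scores highest, rather than committing to a single random labeling. The function names, the `gnn(adj, x)` signature, and the confidence proxy are all illustrative assumptions, not the authors' code:

```python
# Illustrative preferential selection over m candidate node labelings.
import torch

def preferential_forward(gnn, adj, num_nodes, emb, m=10):
    """gnn(adj, x) -> per-node logits; emb is a learned label-embedding table."""
    best_logits, best_score = None, -float("inf")
    for _ in range(m):
        perm = torch.randperm(num_nodes)        # one candidate labeling
        x = emb(perm)                           # label id -> embedding vector
        logits = gnn(adj, x)
        score = logits.max(dim=-1).values.sum() # crude model-confidence proxy
        if score > best_score:
            best_score, best_logits = score, logits
    return best_logits
```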
MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition
Recently, multi-expert methods have led to significant improvements in
long-tail recognition (LTR). We summarize two aspects that need further
enhancement to contribute to LTR boosting: (1) More diverse experts; (2) Lower
model variance. However, previous methods did not handle these well. To this
end, we propose More Diverse experts with Consistency Self-distillation (MDCS)
to bridge the gap left by earlier methods. Our MDCS approach consists of two
core components: Diversity Loss (DL) and Consistency Self-distillation (CS). In
detail, DL promotes diversity among experts by controlling their focus on
different categories. To reduce the model variance, we employ KL divergence to
distill the richer knowledge of weakly augmented instances for the experts'
self-distillation. In particular, we design Confident Instance Sampling (CIS)
to select the correctly classified instances for CS to avoid biased/noisy
knowledge. In the analysis and ablation study, we demonstrate that our method
compared with previous work can effectively increase the diversity of experts,
significantly reduce the variance of the model, and improve recognition
accuracy. Moreover, the roles of our DL and CS are mutually reinforcing and
coupled: the diversity of experts benefits from the CS, and the CS cannot
achieve remarkable results without the DL. Experiments show our MDCS
outperforms the state-of-the-art by 1% to 2% on five popular long-tailed
benchmarks, including CIFAR10-LT, CIFAR100-LT, ImageNet-LT, Places-LT, and
iNaturalist 2018. The code is available at https://github.com/fistyee/MDCS.
Comment: Accepted by ICCV 2023. 13 pages.
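The consistency self-distillation step can be sketched as a masked KL loss between two augmented views; this is an illustration under stated assumptions (temperature scaling, CIS as a correctness mask on the weak view), not the official MDCS code, which is in the linked repository:

```python
# Hedged sketch: KL distillation from a weakly augmented view into a strongly
# augmented view, gated by Confident Instance Sampling (CIS).
import torch
import torch.nn.functional as F

def consistency_self_distillation(logits_weak, logits_strong, targets, T=2.0):
    with torch.no_grad():
        p_weak = F.softmax(logits_weak / T, dim=1)           # teacher signal
        confident = logits_weak.argmax(dim=1).eq(targets)    # CIS mask
    log_p_strong = F.log_softmax(logits_strong / T, dim=1)
    kl = F.kl_div(log_p_strong, p_weak, reduction="none").sum(dim=1)
    # Average only over correctly classified (confident) instances.
    return (kl * confident).sum() / confident.sum().clamp(min=1) * T * T
```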
Efficient Folded Attention for 3D Medical Image Reconstruction and Segmentation
Recently, 3D medical image reconstruction (MIR) and segmentation (MIS) based
on deep neural networks have been developed with promising results, and
attention mechanism has been further designed to capture global contextual
information for performance enhancement. However, the large size of 3D volume
images poses a great computational challenge to traditional attention methods.
In this paper, we propose a folded attention (FA) approach to improve the
computational efficiency of traditional attention methods on 3D medical images.
The main idea is that we apply tensor folding and unfolding operations with
four permutations to build four small sub-affinity matrices to approximate the
original affinity matrix. Through four consecutive sub-attention modules of FA,
each element in the feature tensor can aggregate spatial-channel information
from all other elements. Compared to traditional attention methods, with
moderate improvement of accuracy, FA can substantially reduce the computational
complexity and GPU memory consumption. We demonstrate the superiority of our
method on two challenging tasks for 3D MIR and MIS, which are quantitative
susceptibility mapping and multiple sclerosis lesion segmentation.
Comment: 9 pages, 7 figures
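The folding idea can be illustrated as follows: instead of one huge (DHW x DHW) affinity matrix, attend along a single axis at a time by folding the remaining axes into the batch dimension. The sketch below shows one such sub-attention; the paper chains four permuted modules so every element aggregates spatial-channel information. Shapes and names are illustrative assumptions:

```python
# One axis-wise sub-attention over a 3D feature volume.
import torch

def axis_attention(x, dim):
    """x: (B, C, D, H, W); self-attention along spatial axis `dim` (2, 3, or 4)."""
    perm = [0] + [d for d in (2, 3, 4) if d != dim] + [dim, 1]
    y = x.permute(*perm)                       # (B, ax1, ax2, L, C)
    B, a1, a2, L, C = y.shape
    y = y.reshape(B * a1 * a2, L, C)           # fold other axes into batch
    attn = torch.softmax(y @ y.transpose(1, 2) / C ** 0.5, dim=-1)  # (N, L, L)
    y = (attn @ y).reshape(B, a1, a2, L, C)
    inv = [perm.index(i) for i in range(5)]    # undo the permutation
    return y.permute(*inv)

x = torch.randn(1, 8, 16, 16, 16)
out = axis_attention(x, dim=2)  # attend along the depth axis only
```

Each sub-affinity matrix here is L x L rather than DHW x DHW, which is the source of the memory and compute savings.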
Biomass Straw Based Activated Porous Carbon Materials for High-Performance Supercapacitors
Biomass straws are often regarded as agricultural waste and are usually burned off in rural areas, which results in severe resource waste and air pollution. In this work, a biomass-based porous carbon material with a lamellar microstructure is obtained via a simple hydrothermal treatment and subsequent KOH activation, and the optimal activation process is determined by varying the proportion of activator. Scanning electron microscopy (SEM) and nitrogen adsorption techniques are used to investigate the physical properties of the materials. Cyclic voltammetry and constant-current charge/discharge measurements in a three-electrode system and in symmetric double-layer capacitors indicate that the SCA-1.5 electrode material delivers the best electrochemical performance, with a specific capacitance of 250.0 F g-1 at 1.0 A g-1. Notably, it also retains high cycling stability at a high rate of 1.0 A g-1 after 18,000 cycles.
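As a worked example of how such a figure is typically obtained, the standard galvanostatic formula C = I * dt / (m * dV) converts a constant-current discharge into a specific capacitance; this is the conventional relation, not taken from the paper, and the numbers below are illustrative only:

```python
# Specific capacitance from a constant-current discharge: C = I * dt / (m * dV).
def specific_capacitance(current_a, dt_s, mass_g, dv_v):
    return current_a * dt_s / (mass_g * dv_v)   # farads per gram if mass is in grams

# e.g., a 1 mA discharge of a 1 mg electrode taking 250 s over a 1.0 V window
print(specific_capacitance(current_a=1.0e-3, dt_s=250.0, mass_g=1.0e-3, dv_v=1.0))
# -> 250.0, i.e., 250.0 F g-1 at 1.0 A g-1
```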