Additive Manufacturing of Ti6Al4V Alloy: A Review
In this paper, the recent progress on Ti6Al4V fabricated by the three most developed additive manufacturing (AM) techniques, namely directed energy deposition (DED), selective laser melting (SLM) and electron beam melting (EBM), is thoroughly investigated and compared. Fundamental knowledge is provided for the creation of links between processing parameters, resultant microstructures and associated mechanical properties. Room temperature tensile and fatigue properties are also reviewed and compared to traditionally manufactured Ti6Al4V parts. The presence of defects in as-built AM Ti6Al4V components and the influence of these defects on mechanical performance are also critically discussed.
Aspirin inhibits proliferation of gastric cancer cells via the IL-6/STAT3 signaling pathway
Purpose: To study the effect of aspirin on the proliferation and apoptosis of gastric cancer cells, and its key molecular mechanism of action.
Methods: Gastric cancer SGC7901 cells were treated with aspirin at concentrations of 0, 1, 2 and 4 mmol/L. Cell proliferation was measured using cell counting kit (CCK)-8 assay, while messenger ribonucleic acid (mRNA) expressions of interleukin (IL)-6, B-cell lymphoma 2 (Bcl-2) and Bcl-2 associated X protein (Bax) were assessed by reverse transcription-polymerase chain reaction (RT-PCR). Cell apoptosis was determined by terminal deoxynucleotidyl transferase-mediated dUTP nick end labeling (TUNEL). Furthermore, the protein expression levels of the signal transducer and activator of transcription 3 (STAT3), phosphorylated STAT3 (p-STAT3), Bcl-2 and Bax were evaluated by Western blotting.
Results: Compared with the control group, the 1, 2 and 4 mmol/L aspirin groups showed lower cell proliferation, decreased mRNA expression of Bcl-2 and Bax, and reduced IL-6 release at 24, 48 and 72 h (p < 0.05). Cell apoptosis in the aspirin groups was higher than in the control group. Compared with the control group, the 1 mmol/L aspirin group did not exhibit significant changes in the expression of STAT3 and p-STAT3 at 72 h, whereas the 2 mmol/L aspirin group at 72 h and the 4 mmol/L aspirin group exhibited significant increases in the expression of STAT3 and p-STAT3 (p < 0.05). Furthermore, the levels of Bcl-2 and Bax declined in the aspirin groups compared with the control group (p < 0.05).
Conclusion: Aspirin inhibits the proliferation of gastric cancer SGC7901 cells and induces their apoptosis in vitro via the IL-6/STAT3 signaling pathway. The results of the current study may provide new insight into the treatment of gastric cancer.
Lookaround Optimizer: k steps around, 1 step average
Weight Average (WA) is an active research topic due to its simplicity in ensembling deep networks and its effectiveness in promoting generalization. Existing weight average approaches, however, are often carried out along only one training trajectory in a post-hoc manner (i.e., the weights are averaged after the entire training process is finished), which significantly degrades the diversity between networks and thus impairs the effectiveness of ensembling. In this paper, inspired by weight average, we propose Lookaround, a straightforward yet effective SGD-based optimizer leading to flatter minima with better generalization. Specifically, Lookaround iterates two steps during the whole training period: the around step and the average step. In each iteration, 1) the around step starts from a common point and trains multiple networks simultaneously, each on data transformed by a different data augmentation, and 2) the average step averages these trained networks to obtain the averaged network, which serves as the starting point for the next iteration. The around step improves the functionality diversity while the average step guarantees the weight locality of these networks during the whole training, which is essential for WA to work. We theoretically explain the superiority of Lookaround through a convergence analysis, and conduct extensive experiments to evaluate Lookaround on popular benchmarks including CIFAR and ImageNet with both CNNs and ViTs, demonstrating clear superiority over state-of-the-art methods. Our code is available at https://github.com/Ardcy/Lookaround.
Comment: 18 pages, 9 figures
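As a reading aid, the around/average loop described above can be pictured with a short PyTorch-style sketch. This is not the authors' implementation: the function name, the single inner SGD step per copy, and the plain data-augmentation callables are illustrative assumptions.

import copy
import torch

def lookaround_iteration(model, augmentations, loss_fn, batch, lr=0.1):
    """One Lookaround iteration (sketch): the around step trains k copies of the
    common starting point, each on a differently augmented view of the batch,
    and the average step averages their weights into the next starting point."""
    x, y = batch
    trained = []
    for aug in augmentations:                       # around step
        net = copy.deepcopy(model)                  # every copy starts from the common point
        opt = torch.optim.SGD(net.parameters(), lr=lr)
        loss = loss_fn(net(aug(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        trained.append(net.state_dict())
    # average step: element-wise mean of the trained weights
    avg = {k: torch.stack([s[k].float() for s in trained]).mean(0) for k in trained[0]}
    model.load_state_dict(avg)
    return model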
Laser Assisted Manufacturing: A Comparison of Mechanical Properties Between LAM and Conventional Manufacturing Techniques
Laser assisted manufacturing methods, such as direct metal deposition (DMD) and laser beam welding (LBW), are promising because of their higher precision and greater productivity compared to traditional manufacturing methods. Because these methods are relatively new, the mechanical properties of samples produced by laser assisted manufacturing are not well understood. In this study, the mechanical properties of samples produced by laser assisted manufacturing methods are analyzed and compared with data obtained from traditional manufacturing methods. The DMD process used Fe-TiC and Ti-TiC metal matrix composites, while LBW used AISI 304 stainless steel. The results vary widely with the materials and processes used. Although their use is highly dependent upon the individual applications and their needs, laser assisted manufacturing methods present an alternative to conventional techniques. This study can serve as a guide for comparing the results of various manufacturing methods and choosing the appropriate technique for the desired results.
Large Language Model for Multi-objective Evolutionary Optimization
Multiobjective evolutionary algorithms (MOEAs) are major methods for solving multiobjective optimization problems (MOPs). Many MOEAs have been proposed in the past decades, whose search operators need a carefully handcrafted design with domain knowledge. Recently, some attempts have been made to replace the manually designed operators in MOEAs with learning-based operators (e.g., neural network models). However, much effort is still required for designing and training such models, and the learned operators might not generalize well to new problems. To tackle the above challenges, this work investigates a novel approach that leverages a powerful large language model (LLM) to design MOEA operators. With proper prompt engineering, we successfully let a general LLM serve as a black-box search operator for the decomposition-based MOEA (MOEA/D) in a zero-shot manner. In addition, by learning from the LLM behavior, we further design an explicit white-box operator with randomness and propose a new version of the decomposition-based MOEA, termed MOEA/D-LO. Experimental studies on different test benchmarks show that our proposed method can achieve competitive performance with widely used MOEAs. It is also promising to see that an operator learned from only a few instances can generalize robustly to unseen problems with quite different patterns and settings. The results reveal the potential benefits of using pre-trained LLMs in the design of MOEAs.
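To make the zero-shot operator idea concrete, the following is a minimal sketch of prompting an LLM as a black-box search operator for one scalarized subproblem. The prompt wording, the llm_complete text-completion callable, and the [0, 1] decision space are assumptions for illustration, not the paper's actual prompt or code.

import re

def llm_search_operator(llm_complete, parents, fitnesses, n_vars):
    """Sketch: describe parent solutions and their scalarized fitness in a
    prompt, ask the LLM for a new candidate, and parse it back into a vector."""
    listing = "\n".join(
        f"solution: {p}, weighted fitness: {f:.4f}" for p, f in zip(parents, fitnesses)
    )
    prompt = (
        "You are a search operator for multiobjective optimization. Given the "
        "candidate solutions and their weighted fitness values (lower is better), "
        f"propose one new solution as {n_vars} comma-separated numbers in [0, 1].\n"
        f"{listing}\nNew solution:"
    )
    reply = llm_complete(prompt)
    values = [float(v) for v in re.findall(r"-?\d+(?:\.\d+)?", reply)][:n_vars]
    values = [min(max(v, 0.0), 1.0) for v in values]   # clip to the feasible box
    values += [0.5] * (n_vars - len(values))           # pad if the reply was too short
    return values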
Let's Rectify Step by Step: Improving Aspect-based Sentiment Analysis with Diffusion Models
Aspect-Based Sentiment Analysis (ABSA) stands as a crucial task in predicting the sentiment polarity associated with identified aspects within text. However, a notable challenge in ABSA lies in precisely determining the aspects' boundaries (start and end indices), especially for long ones, due to users' colloquial expressions. We propose DiffusionABSA, a novel diffusion model tailored for ABSA, which extracts the aspects progressively step by step. In particular, DiffusionABSA gradually adds noise to the aspect terms during training and subsequently learns a denoising process that progressively restores these terms in a reverse manner. To estimate the boundaries, we design a denoising neural network enhanced by a syntax-aware temporal attention mechanism to chronologically capture the interplay between aspects and the surrounding text. Empirical evaluations conducted on eight benchmark datasets underscore the compelling advantages offered by DiffusionABSA when compared against robust baseline models. Our code is publicly available at https://github.com/Qlb6x/DiffusionABSA.
Comment: Accepted to LREC-COLING 2024, submission version
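For intuition, the forward (noising) step on aspect boundaries could look like the sketch below, which perturbs normalized (start, end) indices under an assumed cosine noise schedule; the schedule, tensor shapes, and function name are illustrative assumptions rather than the released DiffusionABSA code.

import math
import torch

def noise_boundaries(boundaries, t, T):
    """Sketch of a forward diffusion step: `boundaries` is a (batch, 2) tensor
    of aspect (start, end) indices normalized to [0, 1]; Gaussian noise is mixed
    in according to a cosine schedule at timestep t of T."""
    alpha_bar = math.cos((t / T) * math.pi / 2) ** 2
    noise = torch.randn_like(boundaries)
    noisy = math.sqrt(alpha_bar) * boundaries + math.sqrt(1 - alpha_bar) * noise
    return noisy, noise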
Interaction Pattern Disentangling for Multi-Agent Reinforcement Learning
Deep cooperative multi-agent reinforcement learning has demonstrated its remarkable success over a wide spectrum of complex control tasks. However, recent advances in multi-agent learning mainly focus on value decomposition while leaving entity interactions still intertwined, which easily leads to over-fitting on noisy interactions between entities. In this work, we introduce a novel interactiOn Pattern disenTangling (OPT) method, to disentangle not only the joint value function into agent-wise value functions for decentralized execution, but also the entity interactions into interaction prototypes, each of which represents an underlying interaction pattern within a subgroup of the entities. OPT facilitates filtering the noisy interactions between irrelevant entities and thus significantly improves generalizability as well as interpretability. Specifically, OPT introduces a sparse disagreement mechanism to encourage sparsity and diversity among discovered interaction prototypes. Then the model selectively restructures these prototypes into a compact interaction pattern by an aggregator with learnable weights. To alleviate the training instability issue caused by partial observability, we propose to maximize the mutual information between the aggregation weights and the history behaviors of each agent. Experiments on both single-task and multi-task benchmarks demonstrate that the proposed method yields results superior to the state-of-the-art counterparts. Our code is available at https://github.com/liushunyu/OPT.
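The prototype aggregation step can be pictured with the short sketch below: a small set of learnable prototype embeddings is combined through attention-style aggregation weights computed from an agent's observation. The module name, dimensions, and softmax weighting are illustrative assumptions, not the released OPT code.

import torch
import torch.nn as nn

class PrototypeAggregator(nn.Module):
    """Sketch: restructure interaction prototypes into a compact interaction
    pattern using learnable aggregation weights."""
    def __init__(self, obs_dim, proto_dim, n_prototypes):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, proto_dim))
        self.query = nn.Linear(obs_dim, proto_dim)

    def forward(self, obs):
        q = self.query(obs)                       # (batch, proto_dim)
        logits = q @ self.prototypes.t()          # (batch, n_prototypes)
        weights = torch.softmax(logits, dim=-1)   # aggregation weights
        pattern = weights @ self.prototypes       # compact interaction pattern
        return pattern, weights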
Agent-Aware Training for Agent-Agnostic Action Advising in Deep Reinforcement Learning
Action advising endeavors to leverage supplementary guidance from expert teachers to alleviate the issue of sample inefficiency in Deep Reinforcement Learning (DRL). Previous agent-specific action advising methods are hindered by imperfections in the agent itself, while agent-agnostic approaches exhibit limited adaptability to the learning agent. In this study, we propose a novel framework called Agent-Aware trAining yet Agent-Agnostic Action Advising (A7) to strike a balance between the two. The underlying concept of A7 revolves around utilizing the similarity of state features as an indicator for soliciting advice. However, unlike prior methodologies, the measurement of state feature similarity is performed by neither the error-prone learning agent nor the agent-agnostic advisor. Instead, we employ a proxy model to extract state features that are both discriminative (adaptive to the agent) and generally applicable (robust to agent noise). Furthermore, we utilize behavior cloning to train a model for reusing advice and introduce an intrinsic reward for the advised samples to incentivize the utilization of expert guidance. Experiments are conducted on GridWorld, LunarLander, and six prominent scenarios from Atari games. The results demonstrate that A7 significantly accelerates the learning process and surpasses existing methods (both agent-specific and agent-agnostic) by a substantial margin. Our code will be made publicly available.
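The similarity-based trigger for soliciting advice can be sketched as follows: the current state is embedded by a proxy feature extractor, and the teacher is queried only when no previously advised state is sufficiently similar. The proxy-model interface, cosine similarity, and the threshold value are assumptions for illustration, not the A7 implementation.

import torch

def should_solicit_advice(proxy_model, state, advised_features, threshold=0.9):
    """Sketch: ask the teacher only when the current state's proxy features are
    not close to any state that has already been advised."""
    with torch.no_grad():
        feat = proxy_model(state.unsqueeze(0))        # (1, d) proxy features
    if not advised_features:
        return True
    bank = torch.stack(advised_features)              # (n, d) previously advised states
    sims = torch.cosine_similarity(feat, bank)        # (n,)
    return sims.max().item() < threshold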
Is Centralized Training with Decentralized Execution Framework Centralized Enough for MARL?
Centralized Training with Decentralized Execution (CTDE) has recently emerged as a popular framework for cooperative Multi-Agent Reinforcement Learning (MARL), where agents can use additional global state information to guide training in a centralized way and make their own decisions based only on decentralized local policies. Despite the encouraging results achieved, CTDE makes an independence assumption on agent policies, which prevents agents from adopting global cooperative information from each other during centralized training. Therefore, we argue that existing CTDE methods cannot fully utilize global information for training, leading to inefficient joint-policy exploration and even suboptimal results. In this paper, we introduce a novel Centralized Advising and Decentralized Pruning (CADP) framework for multi-agent reinforcement learning, which not only enables efficacious message exchange among agents during training but also guarantees independent policies for execution. First, CADP endows agents with an explicit communication channel to seek and take advice from other agents for more centralized training. To further ensure decentralized execution, we propose a smooth model pruning mechanism to progressively constrain the agent communication into a closed one, without degradation in agent cooperation capability. Empirical evaluations on StarCraft II micromanagement and Google Research Football benchmarks demonstrate that the proposed framework achieves superior performance compared with state-of-the-art counterparts. Our code will be made publicly available.
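One way to picture the smooth pruning of the training-time communication channel is the sketch below, which linearly anneals cross-agent attention toward a purely self-attending (decentralized) pattern as training progresses. The interpolation schedule, shapes, and function name are illustrative assumptions, not the CADP implementation.

import torch

def prune_communication(attn_logits, progress):
    """Sketch: `attn_logits` has shape (batch, n_agents, n_agents); `progress`
    runs from 0 (full communication) to 1 (agents attend only to themselves)."""
    n_agents = attn_logits.size(-1)
    weights = torch.softmax(attn_logits, dim=-1)                # full communication
    eye = torch.eye(n_agents, device=attn_logits.device)
    mask = (1.0 - progress) * torch.ones_like(eye) + progress * eye
    pruned = weights * mask
    return pruned / pruned.sum(dim=-1, keepdim=True)            # renormalize each row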
- …