Meso-scale modelling of 3D woven composite T-joints with weave variations
A meso-scale modelling framework is proposed to simulate the 3D woven fibre architectures and the mechanical performance of composite T-joints subjected to quasi-static tensile pull-off loading. The proposed method starts by building realistic reinforcement geometries of the 3D woven T-joints at the meso-scale, using a modelling strategy that is also applicable to other geometries with weave variations at the T-joint junction. Damage modelling incorporates both interface and constituent material damage, in conjunction with a continuum damage mechanics approach to account for the progressive failure behaviour. A voxel-based cohesive zone model allows mode I delamination to be captured directly on the voxel mesh, which simplifies mesh generation. Predicted results are in good agreement with experimental data beyond initial failure, in terms of load-displacement responses, failure events, and damage initiation and propagation. The significant effect of fibre architecture variations on mechanical behaviour is successfully predicted by this modelling method without any further correlation of the input parameters of the damage model. This predictive method will facilitate the design and optimisation of 3D woven T-joint preforms.
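The continuum damage mechanics idea referred to in the abstract can be illustrated with a standard stiffness-degradation law. The sketch below is not the paper's actual damage model (its variables and parameters are not given here); it uses a generic linear-softening damage variable, with hypothetical initiation and failure strains, purely to show the mechanism.

```python
# Illustrative continuum damage mechanics sketch (NOT the paper's model):
# a scalar damage variable d in [0, 1] degrades the stiffness as (1 - d) * E.

def damage_variable(strain, eps_0, eps_f):
    """Linear-softening damage variable.

    eps_0: (hypothetical) strain at damage initiation
    eps_f: (hypothetical) strain at complete failure
    """
    if strain <= eps_0:
        return 0.0          # no damage below initiation
    if strain >= eps_f:
        return 1.0          # fully failed
    # Classic linear-softening form: stress decays linearly from
    # E*eps_0 at initiation to zero at eps_f.
    return eps_f * (strain - eps_0) / (strain * (eps_f - eps_0))

def degraded_stress(strain, E, eps_0, eps_f):
    """Stress with degraded secant stiffness: (1 - d) * E * strain."""
    d = damage_variable(strain, eps_0, eps_f)
    return (1.0 - d) * E * strain
```

With these definitions the stress rises elastically to `E * eps_0`, then decays linearly to zero at `eps_f`, which is the progressive-failure behaviour a CDM approach is meant to capture.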
Experimental assessment of the mechanical behaviour of 3D woven composite T-joints
To understand the influence of the fibre architecture of 3D woven composite T-joints on mechanical performance, as well as the benefits that 3D woven T-joints can offer over equivalent 2D laminates, experimental testing is performed on two types of 3D woven T-joint that differ only in the weave at the junction, and on one type of 2D woven laminate T-joint. Quasi-static tensile pull-off loading is selected in this work because this out-of-plane load case is one of the typical loading conditions for such T-joint structures. The testing identified significant advantages of 3D woven composite T-joints over the 2D alternative in terms of ultimate strength and damage tolerance. More importantly, this work showed that varying the fibre architecture can considerably enhance properties such as delamination resistance and total energy absorption to failure, as well as slightly increasing the stiffness and initial failure load. This experimental assessment has demonstrated that 3D woven reinforcements are an effective way to improve the load-bearing capability of composite T-joints over laminates, and that this improvement can be optimised with respect to fibre architecture.
Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder
In this paper, we present a hierarchical path planning framework called SG-RL (subgoal graphs-reinforcement learning) to plan rational paths for agents maneuvering in continuous and uncertain environments. By "rational", we mean (1) efficient path planning that eliminates first-move lags, and (2) collision-free, smooth paths for agents with kinematic constraints satisfied. SG-RL works in a two-level manner. At the first level, SG-RL uses a geometric path-planning method, Simple Subgoal Graphs (SSG), to efficiently find optimal abstract paths, also called subgoal sequences. At the second level, SG-RL uses a reinforcement learning method, Least-Squares Policy Iteration (LSPI), to learn near-optimal motion-planning policies that generate kinematically feasible and collision-free trajectories between adjacent subgoals. The first advantage of the proposed method is that SSG overcomes the sparse-reward and local-minima limitations of RL agents, so LSPI can be used to generate paths in complex environments. The second advantage is that, when the environment changes slightly (e.g., unexpected obstacles appear), SG-RL does not need to reconstruct subgoal graphs or replan subgoal sequences with SSG, since LSPI can deal with uncertainties by exploiting its generalization ability to handle changes in the environment. Simulation experiments in representative scenarios demonstrate that, compared with existing methods, SG-RL works well on large-scale maps with relatively low action-switching frequencies and shorter path lengths, and that SG-RL can deal with small changes in environments. We further demonstrate that the design of reward functions and the types of training environments are important factors for learning feasible policies.
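The two-level structure the abstract describes can be sketched in a few lines. This is an illustrative stand-in, not the authors' implementation: BFS on a grid replaces Simple Subgoal Graphs, and a greedy axis-aligned stepper replaces the learned LSPI policy; all function names here are hypothetical.

```python
# Illustrative two-level planner in the spirit of SG-RL (stand-ins only):
# high level produces a subgoal sequence, low level steers between subgoals.
from collections import deque

def ssg_like_plan(free_cells, start, goal, stride=2):
    """High level (stand-in for SSG): BFS over free grid cells, then
    downsample the shortest path into a subgoal sequence."""
    parent = {start: None}
    q = deque([start])
    while q:
        x, y = q.popleft()
        if (x, y) == goal:
            break
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free_cells and nxt not in parent:
                parent[nxt] = (x, y)
                q.append(nxt)
    path, node = [], goal
    while node is not None:          # backtrack from goal to start
        path.append(node)
        node = parent[node]
    path.reverse()
    subgoals = path[::stride]
    if subgoals[-1] != goal:
        subgoals.append(goal)
    return subgoals

def policy_step(state, subgoal):
    """Low level (stand-in for the learned LSPI policy):
    one greedy axis-aligned step toward the current subgoal."""
    (x, y), (gx, gy) = state, subgoal
    if x != gx:
        return (x + (1 if gx > x else -1), y)
    if y != gy:
        return (x, y + (1 if gy > y else -1))
    return state

def sg_rl_like(free_cells, start, goal):
    """Two-level loop: track each subgoal in turn with the low-level policy."""
    trajectory, state = [start], start
    for sub in ssg_like_plan(free_cells, start, goal):
        while state != sub:
            state = policy_step(state, sub)
            trajectory.append(state)
    return trajectory
```

The point of the split is visible even in this toy: the high level only runs once to produce subgoals, while the reactive low level absorbs small local deviations, which mirrors the abstract's claim that SG-RL need not replan subgoal sequences under minor environment changes.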