Closed-Loop Control of Direct Ink Writing via Reinforcement Learning
Enabling additive manufacturing to employ a wide range of novel, functional
materials would be a major boost to the technology. However, making such
materials printable requires painstaking trial and error by an expert operator,
because they typically exhibit peculiar rheological or hysteresis properties.
Even when suitable process parameters are found, there is no guarantee of
print-to-print consistency, owing to material differences between batches.
These challenges make closed-loop feedback, in which the process parameters are
adjusted on the fly, an attractive option. There are several challenges in
designing an efficient controller: the deposition parameters are
complex and highly coupled, artifacts occur after long time horizons,
simulating the deposition is computationally costly, and learning on hardware
is intractable. In this work, we demonstrate the feasibility of learning a
closed-loop control policy for additive manufacturing using reinforcement
learning. We show that approximate, but efficient, numerical simulation is
sufficient, as long as it captures the behavioral patterns of deposition
that carry over to the real-world process. In combination with reinforcement
learning, our model can be used to discover control policies that outperform
baseline controllers. Furthermore, the recovered policies have a minimal
sim-to-real gap. We showcase this by deploying our control policy in situ on a
single-layer, direct ink writing printer.
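Since the abstract describes the control scheme only at a high level, the idea can be sketched with a toy example: a hypothetical one-parameter feedback policy is trained against a crude deposition simulator by random search, standing in for the paper's approximate numerical simulation and reinforcement learning. All dynamics, constants, and names below are illustrative assumptions, not the authors' model.

```python
import random

def simulate(policy_gain, steps=200, seed=0):
    """Toy deposition simulator (hypothetical stand-in for the paper's
    approximate simulation): the deposited track width lags the commanded
    flow rate, loosely mimicking material hysteresis. Returns mean
    tracking error against a step-changing target width."""
    rng = random.Random(seed)
    width, flow = 0.0, 0.5
    total_error = 0.0
    for t in range(steps):
        target = 1.0 if (t // 50) % 2 == 0 else 0.6  # target width steps
        error = target - width
        flow += policy_gain * error          # closed-loop flow adjustment
        flow = min(max(flow, 0.0), 2.0)      # physical flow-rate limits
        # first-order lag plus noise: the track width chases the flow slowly
        width += 0.1 * (flow - width) + rng.gauss(0, 0.005)
        total_error += abs(error)
    return total_error / steps

# "Train" the one-parameter policy by random search in simulation,
# a minimal stand-in for the reinforcement learning used in the paper.
best_gain, best_cost = 0.0, simulate(0.0)
rng = random.Random(1)
for _ in range(300):
    gain = rng.uniform(0.0, 1.0)
    cost = simulate(gain)
    if cost < best_cost:
        best_gain, best_cost = gain, cost

open_loop_cost = simulate(0.0)  # fixed flow rate, no feedback (baseline)
print(f"open-loop error {open_loop_cost:.3f}, learned error {best_cost:.3f}")
```

The learned feedback gain tracks the changing target far better than the fixed open-loop baseline, which is the qualitative point of the closed-loop approach; the real system replaces this scalar gain with a policy over complex, coupled deposition parameters.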