3 research outputs found

    The effects of cavitation position on the velocity of a laser-induced microjet extracted using explainable artificial intelligence

    The control of the velocity of a high-speed laser-induced microjet is crucial in applications such as needle-free injection. Previous studies have indicated that the jet velocity is strongly influenced by the volumes of secondary cavitation bubbles generated through laser absorption. However, the relationship between the positions of the cavitation bubbles and the jet velocity has not been investigated. In this study, we use explainable artificial intelligence (XAI) to investigate the effects of cavitation bubble positions on the velocity of laser-induced microjets. The XAI is used to classify the jet velocity from images of cavitation bubbles and to extract features from the images by visualizing the classification process. For this purpose, we run 1000 experiments and collect the corresponding images. The XAI model, a feedforward neural network (FNN), is trained to classify the jet velocity from the images of cavitation bubbles. After achieving a high classification accuracy, we analyze the classification process of the FNN. The predictions of the FNN show a higher correlation with the jet velocity when the cavitation positions are considered than when only the cavitation volumes are considered. Further investigation suggests that cavitation occurring closer to the laser focus position has a stronger accelerating effect. These results suggest that the velocity of a high-speed microjet is also affected by the cavitation position.
    Comment: 11 pages, 13 figures, 4 tables
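    The classify-then-visualize pipeline described above can be sketched as follows. This is not the authors' code: the image resolution, hidden width, number of jet-velocity classes, and the linear weight-projection used for visualization are all assumptions standing in for the trained model.

    ```python
    import numpy as np

    # Minimal sketch: a single-hidden-layer FNN classifies cavitation-bubble
    # images into jet-velocity classes, and the weights of one output class
    # are projected back to pixel space to show which regions drive it.
    rng = np.random.default_rng(0)

    H, W = 32, 32        # assumed image resolution
    n_classes = 3        # assumed number of jet-velocity bins
    hidden = 64          # assumed hidden-layer width

    # Randomly initialized weights stand in for a trained model.
    W1 = rng.normal(0.0, 0.01, (H * W, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.01, (hidden, n_classes))
    b2 = np.zeros(n_classes)

    def classify(image):
        """Forward pass: pixels -> ReLU hidden layer -> softmax class probabilities."""
        x = image.reshape(-1)
        h = np.maximum(0.0, x @ W1 + b1)
        scores = h @ W2 + b2
        p = np.exp(scores - scores.max())
        return p / p.sum()

    def saliency_map(class_idx):
        """Project one output class's weights back to image space --
        a simple linear stand-in for the paper's weight visualization."""
        return (W1 @ W2[:, class_idx]).reshape(H, W)

    probs = classify(rng.random((H, W)))   # probabilities over velocity classes
    pixel_importance = saliency_map(0)     # H x W map for class 0
    ```

    With a trained network, averaging such maps over correctly classified images would highlight the bubble positions the classifier relies on.
    
    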

    Image features of a splashing drop on a solid surface extracted using a feedforward neural network

    This article reports a nonintuitive characteristic of a splashing drop on a solid surface, discovered by extracting image features using a feedforward neural network (FNN). Ethanol drops of area-equivalent radius about 1.29 mm were released from impact heights ranging from 4 cm to 60 cm (splashing threshold 20 cm) onto a hydrophilic surface. The images captured when half of the drop had impacted the surface were labeled according to their outcome, splashing or nonsplashing, and were used to train an FNN. A classification accuracy higher than 96% was achieved. To extract the image features that the FNN uses for classification, the weight matrix of the trained FNN was visualized. Remarkably, the visualization showed that the trained FNN identified the contour height of the main body of the impacting drop as an important characteristic differentiating between splashing and nonsplashing drops, which had not been reported in previous studies. This feature was found throughout the impact, even when one and three-quarters of the drop had impacted the surface. To confirm the importance of this image feature, the FNN was retrained to classify using only the main body, without checking for the presence of ejected secondary droplets. The accuracy remained higher than 82%, confirming that the contour height is an important feature distinguishing splashing from nonsplashing drops. Several aspects of drop impact are analyzed and discussed with the aim of identifying the possible mechanism underlying the difference in contour height between splashing and nonsplashing drops.
    Comment: 19 pages, 17 figures. Source code is available on GitHub: https://github.com/yeejingzuTUAT/Image-features-of-a-splashing-drop-on-a-solid-surface-extracted-using-a-feedforward-neural-networ
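    The contour-height feature the FNN was found to rely on can be computed directly from a binarized drop image: for each column, the height of the topmost drop pixel above the surface. The sketch below is an illustrative reimplementation, not the authors' code; the frame size and the toy half-disc "drop" are assumptions.

    ```python
    import numpy as np

    def contour_height(binary_image):
        """For each column of a binarized frame (nonzero = drop pixel),
        return the height in pixels of the topmost drop pixel, measured
        from the bottom row (the solid surface). Empty columns give 0."""
        H, _ = binary_image.shape
        occupied = binary_image > 0
        # Row index of the topmost occupied pixel per column; H if column is empty.
        top = np.where(occupied.any(axis=0), occupied.argmax(axis=0), H)
        return H - top

    # Toy drop: a half-disc of radius 10 resting on the bottom of a 20x40 frame.
    H, W = 20, 40
    y, x = np.mgrid[0:H, 0:W]
    drop = (((x - 20) ** 2 + (y - 20) ** 2) <= 10 ** 2).astype(np.uint8)

    heights = contour_height(drop)   # one height value per column
    ```

    A profile like `heights` can then replace raw pixels as the classifier input, which is one way to test whether contour height alone separates splashing from nonsplashing drops.
    
    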

    Prediction of the morphological evolution of a splashing drop using an encoder–decoder

    The impact of a drop on a solid surface is an important phenomenon that has various implications and applications. However, the multiphase nature of this phenomenon complicates the prediction of its morphological evolution, especially when the drop splashes. While most machine-learning-based drop-impact studies have centred on physical parameters, this study used a computer-vision strategy: an encoder–decoder was trained to predict drop morphologies from image data. Herein, we show that this trained encoder–decoder successfully generates videos showing the morphologies of splashing and non-splashing drops. Remarkably, in each frame of these generated videos, the spreading diameter of the drop was found to be in good agreement with that of the actual videos. Moreover, splashing/non-splashing prediction was also highly accurate. These findings demonstrate the ability of the trained encoder–decoder to generate videos that accurately represent the drop morphologies. This approach provides a faster and cheaper alternative to experimental and numerical studies.
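    The encoder–decoder idea above can be sketched minimally: compress a frame into a low-dimensional latent code, then decode it into a predicted frame of the same resolution. The architecture, frame size, and latent width below are assumptions, not the paper's model; a trained version would be applied recursively to generate a video.

    ```python
    import numpy as np

    # Minimal encoder-decoder sketch for frame prediction (sizes are assumptions).
    rng = np.random.default_rng(1)
    H, W, latent = 16, 16, 8   # assumed frame resolution and latent width

    # Randomly initialized weights stand in for a trained model.
    W_enc = rng.normal(0.0, 0.05, (H * W, latent))
    W_dec = rng.normal(0.0, 0.05, (latent, H * W))

    def predict_next_frame(frame):
        """Encode a frame to a latent vector, then decode it into the
        predicted next frame, with sigmoid pixel intensities in (0, 1)."""
        z = np.tanh(frame.reshape(-1) @ W_enc)      # encoder
        out = 1.0 / (1.0 + np.exp(-(z @ W_dec)))    # decoder
        return out.reshape(H, W)

    frame0 = rng.random((H, W))        # stand-in for an observed frame
    frame1 = predict_next_frame(frame0)

    # Chaining the prediction yields a generated video:
    video = [frame0]
    for _ in range(3):
        video.append(predict_next_frame(video[-1]))
    ```

    Quantities such as the spreading diameter can then be measured on each generated frame and compared against the experimental video, as the study does.
    
    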