
    Analysis and prevention of dent defects formed during strip casting of twin-induced plasticity steels

    Rapid-solidification experiments were conducted to understand the dent defects formed during strip casting of twin-induced plasticity (TWIP) steels. The experiments reproduced the dent defects observed on these steels, which were generally located in the valleys of the shot-blasted roughness on the substrate. The results reveal that the number of dips, the Mn content of the steel, and the surface roughness of the substrate affect the depth and size of the dents formed on the solidified-shell surfaces, whereas the composition of the atmosphere gases and the carbon content of the steel are not significant factors. The formation of dents was attributed to the entrapment of gases inside the roughness valleys of the substrate surface and their volume expansion caused by heating from the steel melt and the latent heat. The dents could be prevented when this thermal expansion was relieved by machining longitudinal grooves into the substrate surface, which allowed the entrapped gases to escape. Sound solidified shells were obtained by optimizing the width and depth of the longitudinal grooves and by controlling the shot-blasting conditions.
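    As a rough illustration of the mechanism described in the abstract (not a calculation from the study), the ideal gas law gives the expansion a pocket of entrapped gas undergoes when heated from ambient temperature toward the melt temperature; every number below is an assumed, illustrative value.

    # Illustrative estimate (not from the paper): expansion of gas trapped in a
    # roughness valley when heated by the steel melt, via the ideal gas law.
    # All temperatures and pressures are assumed for illustration only.
    T_ambient_K = 300.0            # gas temperature before contact with the melt (assumed)
    T_melt_K = 1450.0 + 273.15     # assumed TWIP-steel melt temperature, in kelvin
    P_atm_kPa = 101.325            # pressure at which the gas is entrapped (assumed)

    # At constant pressure, volume scales with absolute temperature (V2/V1 = T2/T1);
    # if the gas is confined instead, its pressure rises by the same factor.
    expansion_factor = T_melt_K / T_ambient_K
    print(f"Entrapped gas would expand (or its pressure rise) by ~{expansion_factor:.1f}x,")
    print(f"e.g. from {P_atm_kPa:.1f} kPa to ~{P_atm_kPa * expansion_factor:.0f} kPa if confined.")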

    Future Transformer for Long-term Action Anticipation

    The task of predicting future actions from a video is crucial for a real-world agent interacting with others. When anticipating actions in the distant future, we humans typically consider long-term relations over the whole sequence of actions, i.e., not only observed actions in the past but also potential actions in the future. In a similar spirit, we propose an end-to-end attention model for action anticipation, dubbed Future Transformer (FUTR), that leverages global attention over all input frames and output tokens to predict a minutes-long sequence of future actions. Unlike previous autoregressive models, the proposed method learns to predict the whole sequence of future actions through parallel decoding, enabling more accurate and faster inference for long-term anticipation. We evaluate our method on two standard benchmarks for long-term action anticipation, Breakfast and 50 Salads, achieving state-of-the-art results. Comment: Accepted to CVPR 2022.
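    The abstract includes no code; as a rough, hypothetical sketch of the parallel-decoding idea it describes (not the authors' FUTR implementation), a transformer encoder-decoder can attend over all observed-frame features and decode a fixed set of learned query tokens, one per future action segment, in a single pass. All names, layer counts, and sizes below are assumptions for illustration.

    # Minimal sketch of non-autoregressive action anticipation with a
    # transformer encoder-decoder; hypothetical sizes throughout.
    import torch
    import torch.nn as nn

    class ParallelAnticipationSketch(nn.Module):
        def __init__(self, feat_dim=2048, d_model=256, num_classes=48,
                     num_queries=20, nhead=8, num_layers=2):
            super().__init__()
            self.input_proj = nn.Linear(feat_dim, d_model)
            self.transformer = nn.Transformer(
                d_model=d_model, nhead=nhead,
                num_encoder_layers=num_layers, num_decoder_layers=num_layers,
                batch_first=True)
            # Learned query tokens: one slot per future action segment,
            # decoded jointly instead of one step at a time.
            self.queries = nn.Parameter(torch.randn(num_queries, d_model))
            self.action_head = nn.Linear(d_model, num_classes)  # class per slot
            self.duration_head = nn.Linear(d_model, 1)           # relative length per slot

        def forward(self, frame_feats):
            # frame_feats: (batch, num_frames, feat_dim) features of observed frames
            memory_in = self.input_proj(frame_feats)
            queries = self.queries.unsqueeze(0).expand(frame_feats.size(0), -1, -1)
            # Global attention over all input frames and all output tokens;
            # the whole future sequence is produced in one decoder pass.
            decoded = self.transformer(memory_in, queries)
            return self.action_head(decoded), self.duration_head(decoded).sigmoid()

    # Usage: predict 20 future action segments from 64 observed frames.
    model = ParallelAnticipationSketch()
    logits, durations = model(torch.randn(2, 64, 2048))
    print(logits.shape, durations.shape)  # (2, 20, 48) and (2, 20, 1)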