7 research outputs found

    Learning with Noisy Labels by Efficient Transition Matrix Estimation to Combat Label Miscorrection

    Full text link
    Recent studies on learning with noisy labels have shown remarkable performance by exploiting a small clean dataset. In particular, model-agnostic meta-learning-based label correction methods further improve performance by correcting noisy labels on the fly. However, there is no safeguard against label miscorrection, resulting in unavoidable performance degradation. Moreover, every training step requires at least three back-propagations, significantly slowing down training. To mitigate these issues, we propose a robust and efficient method that learns a label transition matrix on the fly. Employing the transition matrix makes the classifier skeptical about all the corrected samples, which alleviates the miscorrection issue. We also introduce a two-head architecture to efficiently estimate the label transition matrix every iteration within a single back-propagation, so that the estimated matrix closely follows the shifting noise distribution induced by label correction. Extensive experiments demonstrate that our approach achieves the best training efficiency while matching or exceeding the accuracy of existing methods. Comment: ECCV202
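
    The core idea of training against a label transition matrix can be sketched in a few lines. This is a generic forward-correction loss, not the paper's exact two-head implementation; the matrix values here are illustrative.

```python
import numpy as np

def forward_corrected_nll(probs, noisy_label, T):
    """Negative log-likelihood of an observed (possibly miscorrected) label
    under a transition matrix.

    probs: (C,) classifier posterior over true labels.
    T:     (C, C) row-stochastic matrix, T[i, j] = P(observed=j | true=i).
    The classifier stays "skeptical": it is trained to explain the observed
    label through T rather than trust it outright.
    """
    noisy_probs = probs @ T  # posterior over observed labels
    return -np.log(noisy_probs[noisy_label] + 1e-12)

p = np.array([0.7, 0.2, 0.1])

# Identity T reduces to standard cross-entropy on the observed label.
T_id = np.eye(3)
loss_id = forward_corrected_nll(p, 0, T_id)

# A noisier T softens the penalty when the observed label disagrees
# with the classifier's belief.
T_noisy = np.array([[0.8, 0.1, 0.1],
                    [0.1, 0.8, 0.1],
                    [0.1, 0.1, 0.8]])
loss_noisy = forward_corrected_nll(p, 1, T_noisy)
```

    With the identity matrix the loss is the usual cross-entropy; with a uniform-noise matrix, disagreement between prediction and observed label is penalized less, which is what protects the model from miscorrected labels.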

    Anthropomorphic Grasping of Complex-Shaped Objects Using Imitation Learning

    No full text
    This paper presents an autonomous grasping approach for complex-shaped objects using an anthropomorphic robotic hand. Although human-like robotic hands have a number of distinctive advantages, most current autonomous robotic pickup systems still use relatively simple gripper setups such as a two-finger gripper or even a suction gripper. The main difficulty of utilizing human-like robotic hands lies in the sheer complexity of the system; it is inherently hard to plan and control the motions of a high-degree-of-freedom (DOF) system. Although data-driven approaches have recently been used successfully for motion planning of various robotic systems, it is hard to apply them directly to high-DOF systems due to the difficulty of acquiring training data. In this paper, we propose a novel approach for grasping complex-shaped objects using a high-DOF robotic manipulation system consisting of a seven-DOF manipulator and a four-fingered robotic hand with 16 DOFs. Human demonstration data are first acquired using a virtual reality controller with 6D pose tracking and individual capacitive finger sensors. Then, the 3D shape of the manipulation target object is reconstructed from multiple depth images recorded using the wrist-mounted RGBD camera. The grasping pose for the object is estimated using a residual neural network (ResNet), K-means clustering, and a point-set registration algorithm. Then, the manipulator moves to the grasping pose following the trajectory created by dynamic movement primitives (DMPs). Finally, the robot performs one of the object-specific grasping motions learned from human demonstration. The suggested system is evaluated by an official tester using five objects with promising results.
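
    The K-means step above can be illustrated with a minimal sketch: demonstrated grasp poses are clustered into a few candidate grasps, from which one is selected. The poses and cluster count below are hypothetical, not the paper's actual data or module.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means (Lloyd's algorithm), a minimal stand-in for the
    clustering step that groups demonstrated grasp poses into a few
    candidate grasps."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each pose to its nearest center
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        # recompute each center as the mean of its assigned poses
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Hypothetical 3-D grasp positions from demonstrations, two clear groups.
poses = np.array([[0.00, 0.00, 0.10], [0.02, 0.01, 0.10],
                  [0.50, 0.50, 0.20], [0.51, 0.49, 0.21]])
centers, labels = kmeans(poses, k=2)
```

    In a full pipeline the cluster centers would then be refined against the reconstructed object shape by the point-set registration step.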

    Artistic Robotic Arm: Drawing Portraits on Physical Canvas under 80 Seconds

    No full text
    In recent years, the field of robotic portrait drawing has garnered considerable interest, as evidenced by the growing number of researchers focusing on either the speed or the quality of the output drawing. However, pursuing either objective alone has resulted in a trade-off between the two. Therefore, in this paper, we propose a new approach that combines both objectives by leveraging advanced machine learning techniques and a variable-line-width Chinese calligraphy pen. Our proposed system emulates the human drawing process, which entails planning the sketch and creating it on the canvas, thus providing a realistic and high-quality output. One of the main challenges in portrait drawing is preserving the facial features, such as the eyes, mouth, nose, and hair, which are crucial for capturing the essence of a person. To overcome this challenge, we employ CycleGAN, a powerful technique that retains important facial details while transferring the visualized sketch onto the canvas. Moreover, we introduce the Drawing Motion Generation and Robot Motion Control Modules to transfer the visualized sketch onto a physical canvas. These modules enable our system to create high-quality portraits within seconds, surpassing existing methods in both time efficiency and detail quality. Our proposed system was evaluated through extensive real-life experiments and showcased at the RoboWorld 2022 exhibition. During the exhibition, our system drew portraits of more than 40 visitors, yielding a 95% satisfaction rate in a visitor survey. This result indicates the effectiveness of our approach in creating high-quality portraits that are not only visually pleasing but also accurate.
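
    A variable-width pen implies that stroke appearance must be translated into motion commands. As a toy illustration of that idea (the function name, linear mapping, and width range are all hypothetical, not the paper's Drawing Motion Generation module):

```python
def stroke_width_commands(darkness, min_w=0.2, max_w=1.5):
    """Map normalized stroke darkness values (0..1) to pen line widths in mm.

    A hypothetical linear mapping: darker sketch strokes are drawn with a
    wider line, which a variable-width calligraphy pen can realize by
    varying pressure.
    """
    return [min_w + d * (max_w - min_w) for d in darkness]

# Light, medium, and fully dark strokes.
widths = stroke_width_commands([0.0, 0.5, 1.0])
```

    A real system would additionally account for pen dynamics and stroke speed, but the principle of mapping sketch intensity to an actuation parameter is the same.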

    RoboCup@Home 2021 Domestic Standard Platform League Winner

    No full text
    Adoption of the World Robot Summit (WRS) rules and simulated environments for the RoboCup@Home Leagues in 2021 poses significant challenges for the perception, manipulation, and autonomy of the robot. In particular, the randomized item placement and longer task time highlight the need for robust long-term autonomy that can recover from various failure cases. In this paper, we present how we prepared our software for these challenges, which helped us achieve the highest score among all teams that participated in RoboCup@Home 2021.

    Comparison of Phase States of PM2.5 over Megacities, Seoul and Beijing, and Their Implications on Particle Size Distribution

    No full text
    Although the particle phase state is an important property, there is scant information on it, especially for real-world aerosols. To explore the phase state of fine-mode aerosols (PM2.5) in two megacities, Seoul and Beijing, we collected PM2.5 filter samples daily from Dec 2020 to Jan 2021. Using optical microscopy combined with the poke-and-flow technique, the phase states of the bulk of PM2.5 as a function of relative humidity (RH) were determined and compared to the ambient RH ranges in the two cities. PM2.5 was found to be liquid to semisolid in Seoul but mostly semisolid to solid in Beijing. The liquid state was dominant on polluted days, while a semisolid state was dominant on clean days in Seoul. These findings can be explained by the aerosol liquid water content related to the chemical compositions of the aerosols at ambient RH; the water content of PM2.5 was much higher in Seoul than in Beijing. Furthermore, the overall phase states of PM2.5 observed in Seoul and Beijing were interrelated with the particle size distribution. The results of this study aid in better understanding the fundamental physical properties of aerosols and in examining how these are linked to PM2.5 in polluted urban atmospheres.