
    Action-conditioned generation of bimanual object manipulation sequences

    The generation of bimanual object manipulation sequences given a semantic action label has broad applications in collaborative robotics and augmented reality. This relatively new problem differs from existing works that generate whole-body motions without any object interaction, as it requires the model to additionally learn the spatio-temporal relationship between the human joints and the object motion given said label. To tackle this task, we leverage the varying degree to which each muscle or joint is involved during object manipulation. For instance, the wrists act as the prime movers for the objects, while the finger joints are angled to provide a firm grip. The remaining body joints are the least involved, in that they are positioned as naturally and comfortably as possible. We thus design an architecture that comprises three main components: (i) a graph recurrent network that generates the wrist and object motion, (ii) an attention-based recurrent network that estimates the required finger joint angles given the graph configuration, and (iii) a recurrent network that reconstructs the body pose given the wrist locations. We evaluate our approach on the KIT Motion Capture and KIT RGBD Bi-manual Manipulation datasets and show improvements over a simplified approach that treats the entire body as a single entity, as well as over existing whole-body-only methods.
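    The three-component decomposition above can be sketched as a simple pipeline. This is an illustrative stub, not the authors' code: every module, weight, and dimension here is an assumption, with linear maps standing in for the recurrent networks.

    ```python
    import numpy as np

    # Assumed joint layouts: 2 wrists x 3D, 30 finger angles, 17 body joints x 3D.
    N_WRIST, N_FINGER, N_BODY = 2 * 3, 30, 17 * 3
    rng = np.random.default_rng(0)

    def wrist_object_net(action_label_emb, prev_state):
        """Stage (i): graph recurrent net -> wrist + object motion (stub)."""
        x = np.concatenate([action_label_emb, prev_state])
        W = rng.standard_normal((N_WRIST + 3, x.size)) * 0.01
        return W @ x  # 6 wrist coords + 3 object coords

    def finger_net(wrist_object):
        """Stage (ii): attention-based recurrent net -> finger angles (stub)."""
        W = rng.standard_normal((N_FINGER, wrist_object.size)) * 0.01
        return W @ wrist_object

    def body_net(wrist_object):
        """Stage (iii): recurrent net reconstructing body pose from wrists (stub)."""
        W = rng.standard_normal((N_BODY, wrist_object.size)) * 0.01
        return W @ wrist_object

    action_emb = rng.standard_normal(16)           # assumed label embedding size
    state = rng.standard_normal(N_WRIST + 3)       # previous wrist/object state

    wo = wrist_object_net(action_emb, state)
    fingers = finger_net(wo)
    body = body_net(wo)
    print(wo.shape, fingers.shape, body.shape)
    ```

    The point of the decomposition is that stages (ii) and (iii) both condition only on the stage (i) output, mirroring the prime-mover role the abstract assigns to the wrists.
    
    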

    Using eye-gaze to forecast human pose in everyday pick and place actions

    Collaborative robots that operate alongside humans require the ability to understand their intent and forecast their pose. Among the various indicators of intent, the eye gaze is particularly important as it signals action towards the gazed object. By observing a person’s gaze, one can effectively predict the object of interest and subsequently forecast the person’s pose. We leverage this and present a method that forecasts the human pose using gaze information for everyday pick and place actions in a home environment. Our method first attends to fixations to locate the coordinates of the object of interest before inputting said coordinates to a pose forecasting network. Experiments on the MoGaze dataset show that our gaze network lowers the errors of existing pose forecasting methods and that incorporating a prior in the form of textual instructions lowers the errors further by a significant amount. Furthermore, the use of eye gaze now allows a simple multilayer perceptron network to directly forecast the keypose.
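    The two-step pipeline described above (gaze fixations locate the object of interest, whose coordinates then condition the pose forecast) can be sketched as follows. The scene objects, placeholder weights, and pose dimension are all assumptions for illustration only.

    ```python
    import numpy as np

    objects = {                       # assumed scene objects with 3D positions
        "cup":   np.array([0.4, 0.1, 0.9]),
        "plate": np.array([0.7, -0.2, 0.9]),
    }

    def object_of_interest(gaze_points):
        """Pick the object closest to the mean gaze fixation point."""
        fixation = np.mean(gaze_points, axis=0)
        return min(objects, key=lambda k: np.linalg.norm(objects[k] - fixation))

    def forecast_keypose(current_pose, object_xyz):
        """Stand-in for the pose forecasting network: pose + object coords in."""
        x = np.concatenate([current_pose, object_xyz])
        W = np.full((current_pose.size, x.size), 0.01)  # placeholder weights
        return W @ x

    gaze = np.array([[0.41, 0.12, 0.88],   # gaze fixation samples (assumed)
                     [0.39, 0.09, 0.91]])
    target = object_of_interest(gaze)
    pose = np.zeros(66)                    # e.g. 22 joints x 3 (assumed)
    keypose = forecast_keypose(pose, objects[target])
    print(target, keypose.shape)
    ```

    In this sketch the fixations fall near the cup, so its coordinates are the ones fed to the forecaster, reflecting the claim that gaze alone is enough to select the object of interest.
    
    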

    Using a single input to forecast human action keystates in everyday pick and place actions

    We define action keystates as the start or end of an action, containing information such as the human pose and time. Existing methods that forecast the human pose use recurrent networks that input and output a sequence of poses. In this paper, we present a method tailored for everyday pick and place actions where the object of interest is known. In contrast to existing methods, ours uses an input from a single timestep to directly forecast (i) the key pose the instant the pick or place action is performed and (ii) the time it takes to get to the predicted key pose. Experimental results show that our method outperforms the state-of-the-art for key pose forecasting and is comparable for time forecasting, while running at least an order of magnitude faster. Further ablation studies reveal the significance of the object of interest in enabling the total number of parameters across all existing methods to be reduced by at least 90% without any degradation in performance.
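    The single-input idea above can be sketched as one feed-forward pass that maps (current pose, object position) to (key pose, time-to-keypose), with no recurrence over a pose sequence. All sizes and weights below are illustrative assumptions, not the paper's network.

    ```python
    import numpy as np

    POSE_DIM = 66  # assumed: 22 joints x 3

    def forecast_keystate(pose_t, object_xyz):
        """Single-timestep keystate forecast: key pose + time, in one pass."""
        x = np.concatenate([pose_t, object_xyz])     # single-timestep input
        W = np.full((POSE_DIM + 1, x.size), 0.01)    # placeholder MLP weights
        out = W @ x
        keypose, dt = out[:POSE_DIM], out[POSE_DIM]  # pose at keystate, time to it
        return keypose, float(dt)

    keypose, dt = forecast_keystate(np.ones(POSE_DIM), np.array([0.5, 0.0, 0.9]))
    print(keypose.shape, dt > 0)
    ```

    Because the network is a plain feed-forward map rather than a sequence model, its parameter count and runtime are independent of the forecasting horizon, which is consistent with the order-of-magnitude speedup the abstract reports.
    
    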

    In Vitro and In Vivo Biological Assessments of 3D-Bioprinted Scaffolds for Dental Applications

    Three-dimensional (3D) bioprinting is a unique combination of technological advances in 3D printing and tissue engineering. It has emerged as a promising approach to the dilemma clinicians face in current dental treatments when repairing or replacing injured and diseased tissues. 3D bioprinting technology offers high reproducibility and precise control over the architectural and dimensional features of scaffolds, using bioinks containing the desired cells and biomaterials, to fabricate functional tissue constructs specific to a patient's treatment needs. In recent years, the dental applications of different 3D bioprinting techniques, types of novel bioinks, and types of cells used have been extensively explored. Most findings noted significant challenges, compared with the non-biological 3D printing approach, in constructing bioscaffolds that mimic native tissues. Hence, this review focuses solely on the implementation of 3D bioprinting techniques and strategies based on cell-laden bioinks. It discusses the in vitro effects of 3D-bioprinted scaffolds on cell viability, cell functionality, differentiation ability, and marker expression, as well as the in vivo evaluation of the implanted bioscaffolds in animal models for bone, periodontal, dentin, and pulp tissue regeneration. Finally, it outlines some perspectives for future developments in dental applications.