
    Mechatronic design of the Twente humanoid head

    This paper describes the mechatronic design of the Twente humanoid head, which has been realized to serve as a research platform for human-machine interaction. The design features a fast, four-degree-of-freedom neck with a long range of motion, and a vision system with three degrees of freedom, mimicking the eyes. To achieve fast target tracking, two degrees of freedom in the neck are combined in a differential drive, resulting in a low moving mass and the possibility to use powerful actuators. The performance of the neck has been optimized by minimizing backlash in the mechanisms and by using gravity compensation. The vision system is based on a saliency algorithm that uses the camera images to determine where the humanoid head should look, i.e. the focus of attention, computed in accordance with biological studies. The motion control algorithm receives the output of the vision algorithm as input and controls the humanoid head to focus on and follow the target point. The control architecture exploits the redundancy of the system to show human-like motions while looking at a target. The head has a translucent plastic cover onto which an internal LED system projects the mouth and the eyebrows, realizing human-like facial expressions.
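
    As a rough illustration of such a saliency-driven gaze loop, the sketch below picks the focus of attention from a saliency map and splits the resulting image error between fast eye joints and the slower neck; the function names, gains, and the eye/neck split are assumptions for illustration, not the published controller.

        import numpy as np

        def focus_of_attention(saliency_map):
            """Pick the most salient pixel as the gaze target (illustrative)."""
            idx = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
            return np.array(idx[::-1], dtype=float)       # (u, v) pixel coordinates

        def gaze_step(saliency_map, image_size, k_eye=0.8, k_neck=0.2):
            """One control step: image error -> eye and neck velocity commands.

            The redundancy between eyes and neck is resolved with a simple split:
            the fast, low-mass eye joints take most of the error while the slower
            neck follows and re-centres the eyes (assumed gains, not the paper's).
            image_size is (width, height)."""
            target = focus_of_attention(saliency_map)
            centre = np.array(image_size, dtype=float) / 2.0
            pixel_error = target - centre                 # error in the image plane
            eye_cmd = k_eye * pixel_error                 # fast eye motion
            neck_cmd = k_neck * pixel_error               # slow neck motion
            return eye_cmd, neck_cmd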

    On-the-fly adaptivity for nonlinear twoscale simulations using artificial neural networks and reduced order modeling

    A multi-fidelity surrogate model for highly nonlinear multiscale problems is proposed. It is based on the introduction of two different surrogate models and an adaptive on-the-fly switching between them. The two concurrent surrogates are built incrementally, starting from a moderate set of evaluations of the full order model. To this end, a reduced order model (ROM) is generated first. Using a hybrid ROM-preconditioned FE solver, additional effective stress-strain data is simulated, while the number of samples is kept to a moderate level by a dedicated, physics-guided sampling technique. Machine learning (ML) is subsequently used to build the second surrogate by means of artificial neural networks (ANN). Different ANN architectures are explored, and the features used as inputs of the ANN are fine-tuned in order to improve the overall quality of the ML model. Additional ANN surrogates for the stress errors are generated, and conservative design guidelines for these error surrogates are presented by adapting the loss functions of the ANN training in pure regression or pure classification settings. The error surrogates can be used as quality indicators in order to adaptively select the appropriate -- i.e. efficient yet accurate -- surrogate. Two strategies for the on-the-fly switching are investigated, and a practicable and robust algorithm is proposed that eliminates the relevant technical difficulties attributed to model switching. The provided algorithms and ANN design guidelines can easily be adopted for different problem settings and thereby enable generalization of the employed machine learning techniques to a wide range of applications. The resulting hybrid surrogate is employed in challenging multilevel FE simulations for a three-phase composite with pseudo-plastic micro-constituents. Numerical examples highlight the performance of the proposed approach.
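
    A minimal sketch of the on-the-fly switching idea, assuming the ANN error surrogate is available as a callable quality indicator; the thresholding logic, tolerance, and the dummy surrogates are illustrative assumptions, not the paper's algorithm.

        import numpy as np

        def evaluate_stress(strain, rom, ann, ann_error_surrogate, tol=1e-3):
            """Choose between the ROM and the ANN surrogate for one macro point.

            The error surrogate predicts (conservatively) the expected stress error
            of the cheap ANN model; if it exceeds the tolerance, fall back to the
            more expensive ROM."""
            predicted_error = ann_error_surrogate(strain)
            if predicted_error <= tol:
                return ann(strain), "ANN"      # cheap surrogate is accurate enough
            return rom(strain), "ROM"          # fall back to the reduced order model

        # illustrative usage with placeholder surrogates
        rom = lambda e: 210e3 * e                         # placeholder "ROM"
        ann = lambda e: 209e3 * e                         # placeholder ANN prediction
        err = lambda e: np.abs(ann(e) - rom(e)).max()     # placeholder error estimate
        stress, used = evaluate_stress(np.array([1e-4, 0.0, 0.0]), rom, ann, err)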

    Online estimation of the hand-eye transformation from surgical scenes

    Hand-eye calibration algorithms are mature and provide accurate transformation estimations for an effective camera-robot link, but they rely on a sufficiently wide range of calibration data to avoid errors and degenerate configurations. Our goal is to solve the hand-eye problem in robotic-assisted minimally invasive surgery and, at the same time, to simplify the calibration procedure by using a neural network method combined with a new objective function. We present a neural network-based solution that estimates the transformation from a sequence of images and kinematic data, which significantly simplifies the calibration procedure. The network utilises the long short-term memory architecture to extract temporal information from the data and solve the hand-eye problem. The objective function is derived from a linear combination of the remote centre of motion constraint, the re-projection error, and its derivative, to induce a small change in the hand-eye transformation. The method is validated with data from the da Vinci Si, and the results show that the estimated hand-eye matrix is able to re-project the end-effector from the robot coordinate frame to the camera coordinate frame within 10 to 20 pixels of accuracy on both testing datasets. The calibration performance is also superior to the previous neural network-based hand-eye method. The proposed algorithm shows that the calibration procedure can be simplified by using deep learning techniques and that the performance is improved by the assumption of non-static hand-eye transformations. Comment: 6 pages, 4 main figures.
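
    A minimal PyTorch-style sketch of an LSTM model that maps a sequence of per-frame features (image plus kinematics) to a hand-eye pose; the feature dimension, hidden size, and quaternion-plus-translation output parameterisation are assumptions for illustration and may differ from the paper's exact architecture.

        import torch
        import torch.nn as nn

        class HandEyeLSTM(nn.Module):
            """Maps a sequence of per-frame features to a hand-eye pose,
            parameterised here as a unit quaternion and a translation."""
            def __init__(self, feature_dim=128, hidden_dim=256):
                super().__init__()
                self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
                self.head = nn.Linear(hidden_dim, 7)   # 4 quaternion + 3 translation

            def forward(self, seq):                    # seq: (batch, time, feature_dim)
                out, _ = self.lstm(seq)
                pose = self.head(out[:, -1])           # use the last time step
                quat = nn.functional.normalize(pose[:, :4], dim=1)
                trans = pose[:, 4:]
                return quat, trans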

    Hand-eye calibration for robotic assisted minimally invasive surgery without a calibration object

    In a robot-mounted camera arrangement, hand-eye calibration estimates the rigid relationship between the robot and camera coordinate frames. Most hand-eye calibration techniques use a calibration object to estimate the relative transformation of the camera in several views of the calibration object and link these to the forward kinematics of the robot to compute the hand-eye transformation. Such approaches achieve good accuracy for general use, but for applications such as robotic-assisted minimally invasive surgery, acquiring a calibration sequence multiple times during a procedure is not practical. In this paper, we present a new approach that tackles the problem by using the robotic surgical instruments as the calibration object, with well-known geometry from the CAD models used for manufacturing. Our approach removes the requirement for a custom sterile calibration object in the operating room, and it simplifies the process of acquiring calibration data when the laparoscope is constrained to move around a remote centre of motion. This is the first demonstration of the feasibility of performing hand-eye calibration using components of the robotic system itself, and we show promising validation results on synthetic data as well as data acquired with the da Vinci Research Kit.
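
    For context, a compact sketch of the generic AX = XB hand-eye formulation that such methods build on (a Park-and-Martin-style least-squares solution from paired relative robot and camera motions); this is the standard baseline, not the instrument-based method proposed in the paper.

        import numpy as np
        from scipy.spatial.transform import Rotation

        def hand_eye_AX_XB(robot_motions, camera_motions):
            """Solve A X = X B from paired relative motions.

            robot_motions[i] and camera_motions[i] are (3x3 rotation, 3-vector)
            pairs describing the relative motions A_i (robot) and B_i (camera)."""
            M = np.zeros((3, 3))
            for (Ra, _), (Rb, _) in zip(robot_motions, camera_motions):
                alpha = Rotation.from_matrix(Ra).as_rotvec()   # log map of A_i
                beta = Rotation.from_matrix(Rb).as_rotvec()    # log map of B_i
                M += np.outer(beta, alpha)
            # rotation: Rx = (M^T M)^(-1/2) M^T
            w, V = np.linalg.eigh(M.T @ M)
            Rx = V @ np.diag(w ** -0.5) @ V.T @ M.T
            # translation from the stacked system (Ra - I) t_x = Rx t_b - t_a
            C = np.vstack([Ra - np.eye(3) for Ra, _ in robot_motions])
            d = np.hstack([Rx @ tb - ta
                           for (_, ta), (_, tb) in zip(robot_motions, camera_motions)])
            tx, *_ = np.linalg.lstsq(C, d, rcond=None)
            return Rx, tx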

    Learning to Calibrate - Estimating the Hand-eye Transformation without Calibration Objects

    Hand-eye calibration is a method to determine the transformation linking the robot and camera coordinate systems. Conventional calibration algorithms use a calibration grid to determine the camera poses corresponding to the robot poses, both of which are used in the main calibration procedure. Although such methods yield good calibration accuracy and are suitable for offline applications, they are not applicable in a dynamic environment such as robotic-assisted minimally invasive surgery (RMIS), because any change in the setup requires yet another calibration procedure, which is disruptive and time-consuming for the workflow. In this paper, we propose a neural network-based hand-eye calibration method that does not require camera poses from a calibration grid, but only uses the motion of surgical instruments in the camera frame and their corresponding robot poses as input to recover the hand-eye matrix. The advantages of using a neural network are that the method is not limited to a single rigid transformation alignment and that it can learn dynamic changes correlated with the kinematics and tool motion/interactions. Its loss function is derived from the original hand-eye transformation, the re-projection error, and the pose error with respect to the remote centre of motion. The proposed method is validated with data from the da Vinci Si, and the results indicate that the designed network architecture can extract the relevant information and estimate the hand-eye matrix. Unlike conventional hand-eye approaches, it does not require camera pose estimation, which significantly simplifies the hand-eye problem in the RMIS context: the hand-eye relationship can be updated with a trained network and a sequence of images, introducing the potential for on-the-fly hand-eye calibration.
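
    A sketch of how such a composite loss could be assembled from the individual terms described above (a supervised hand-eye pose term, a remote-centre-of-motion consistency term, and a re-projection term); the weights, shapes, and helper functions are assumptions for illustration, not the paper's exact loss.

        import torch

        def reprojection_loss(pts_robot, pts_image, R, t, K):
            """Project 3D instrument points from the robot frame through the
            current hand-eye estimate (R, t) and intrinsics K, then compare
            against their detected image locations."""
            cam = (R @ pts_robot.T + t.unsqueeze(1)).T        # robot -> camera frame
            proj = (K @ cam.T).T
            uv = proj[:, :2] / proj[:, 2:3]                   # perspective division
            return torch.mean((uv - pts_image) ** 2)

        def hand_eye_loss(pred, target, pts_robot, pts_image, K,
                          w_rcm=0.1, w_proj=0.01):
            """Composite loss with illustrative weights."""
            R, t = pred                                       # predicted hand-eye pose
            R_gt, t_gt, rcm_cam, rcm_robot = target
            pose_term = torch.norm(R - R_gt) ** 2 + torch.norm(t - t_gt) ** 2
            rcm_term = torch.norm((R @ rcm_robot + t) - rcm_cam) ** 2  # RCM consistency
            proj_term = reprojection_loss(pts_robot, pts_image, R, t, K)
            return pose_term + w_rcm * rcm_term + w_proj * proj_term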

    Hand-eye calibration, constraints and source synchronisation for robotic-assisted minimally invasive surgery

    In robotic-assisted minimally invasive surgery (RMIS), the robotic system allows surgeons to remotely control articulated instruments to perform surgical interventions, and it introduces the potential to implement computer-assisted interventions (CAI). However, the information in the camera frame must be correctly transformed into the robot coordinate frame, since the instrument movement is controlled through the robot kinematics. Determining the rigid transformation connecting the two coordinate frames is therefore necessary; this process is called hand-eye calibration. One of the challenges in solving the hand-eye problem in the RMIS setup is data asynchronicity, which occurs when tracking equipment is integrated into a robotic system and creates temporal misalignment. For the calibration itself, noise in the robot and camera motions propagates to the calibrated result, and because of the limited motion range the error cannot be fully suppressed. Finally, the calibration procedure must be adaptive and simple, so that disruption to the surgical workflow is minimal, since any change in the setup may require another calibration procedure. We propose solutions to deal with the asynchronicity, the noise sensitivity, and the limited motion range. We also propose using a surgical instrument as the calibration target to reduce the complexity of the calibration procedure. The proposed algorithms are validated through extensive experiments with synthetic and real data from the da Vinci Research Kit and KUKA robot arms. The calibration performance is compared with existing hand-eye algorithms and shows promising results. Although calibration using a surgical instrument as the target still requires further development, the results indicate that the proposed methods increase the calibration performance and contribute to finding an optimal solution to the hand-eye problem in robotic surgery.
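
    For the asynchronicity issue, a common generic remedy is to estimate the constant time offset between the robot and camera streams by cross-correlating a shared motion signal, e.g. the angular-speed magnitude of the relative motion; the sketch below illustrates that generic idea and is not the thesis' specific synchronisation method.

        import numpy as np

        def estimate_time_offset(robot_speed, camera_speed, dt):
            """Estimate the lag (in seconds) between two equally sampled 1-D
            motion-magnitude signals via normalised cross-correlation."""
            a = (robot_speed - robot_speed.mean()) / (robot_speed.std() + 1e-12)
            b = (camera_speed - camera_speed.mean()) / (camera_speed.std() + 1e-12)
            corr = np.correlate(a, b, mode="full")    # cross-correlation over all lags
            lag = np.argmax(corr) - (len(b) - 1)      # lag in samples
            return lag * dt

        # usage idea: shift one stream by the estimated offset before pairing
        # robot poses with camera measurements for calibration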

    Machine-Learning-Augmented Predictive Modeling of Turbulent Separated Flows over Airfoils

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/143090/1/1.J055595.pd

    Cluster-based reduced-order modelling of a mixing layer

    We propose a novel cluster-based reduced-order modelling (CROM) strategy for unsteady flows. CROM combines the cluster analysis pioneered in Gunzburger's group (Burkardt et al. 2006) and the transition matrix models introduced in fluid dynamics by Eckhardt's group (Schneider et al. 2007). CROM constitutes a potential alternative to POD models and generalises the Ulam-Galerkin method classically used in dynamical systems to determine a finite-rank approximation of the Perron-Frobenius operator. The proposed strategy processes a time-resolved sequence of flow snapshots in two steps. First, the snapshot data are clustered into a small number of representative states, called centroids, in the state space. These centroids partition the state space into complementary non-overlapping regions (centroidal Voronoi cells). Departing from the standard algorithm, the probabilities of the clusters are determined, and the states are sorted by an analysis of the transition matrix. Second, the transitions between the states are dynamically modelled using a Markov process. Physical mechanisms are then distilled by a refined analysis of the Markov process, e.g. using finite-time Lyapunov exponents and entropic methods. This CROM framework is applied to the Lorenz attractor (as an illustrative example), to velocity fields of a spatially evolving incompressible mixing layer, and to the three-dimensional turbulent wake of a bluff body. For these examples, CROM is shown to identify non-trivial quasi-attractors and transition processes in an unsupervised manner. CROM has numerous potential applications for the systematic identification of physical mechanisms of complex dynamics, for the comparison of flow evolution models, for the identification of precursors to desirable and undesirable events, and for flow control applications exploiting nonlinear actuation dynamics. Comment: 48 pages, 30 figures. Revised version with additional material. Accepted for publication in Journal of Fluid Mechanics.
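
    A minimal sketch of the two CROM steps on snapshot data: clustering the snapshots into centroids and estimating a Markov transition matrix from the resulting cluster labels. The library choice (scikit-learn k-means), cluster count, and row-stochastic convention are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def crom(snapshots, n_clusters=10):
            """snapshots: (n_time, n_dof) array of time-resolved flow states.

            Step 1: cluster the snapshots into centroids (centroidal Voronoi cells).
            Step 2: count transitions between consecutive cluster labels and
            normalise rows to obtain a Markov transition matrix."""
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(snapshots)
            P = np.zeros((n_clusters, n_clusters))
            for k, k_next in zip(labels[:-1], labels[1:]):
                P[k, k_next] += 1.0
            P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)   # row-stochastic
            return labels, P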