
    Mechatronic design of the Twente humanoid head

    This paper describes the mechatronic design of the Twente humanoid head, which was realized to provide a research platform for human–machine interaction. The design features a fast, four-degree-of-freedom neck with a long range of motion, and a vision system with three degrees of freedom that mimics the eyes. To achieve fast target tracking, two degrees of freedom in the neck are combined in a differential drive, resulting in a low moving mass and the possibility of using powerful actuators. The performance of the neck has been optimized by minimizing backlash in the mechanisms and by using gravity compensation. The vision system is based on a saliency algorithm that uses the camera images to determine where the humanoid head should look, i.e. the focus of attention, computed according to biological studies. The motion-control algorithm receives the output of the vision algorithm as input and controls the humanoid head to focus on and follow the target point. The control architecture exploits the redundancy of the system to produce human-like motions while looking at a target. The head has a translucent plastic cover onto which an internal LED system projects the mouth and the eyebrows, realizing human-like facial expressions.
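The differential drive described above couples two actuators so that their combined motion drives two neck axes at once. A minimal sketch of such a mapping follows; the gear ratio, axis assignment, and function names are illustrative assumptions, not the paper's actual kinematics.

```python
# Illustrative differential-drive mapping for a 2-DOF neck stage.
# Two motor angles (theta1, theta2) jointly produce pan and tilt:
# their mean drives one axis, their half-difference the other.
# The gear ratio and axis assignment are assumptions, not taken
# from the Twente head design.

def differential_to_pan_tilt(theta1: float, theta2: float, ratio: float = 1.0):
    """Map two motor angles to (pan, tilt) of the differential stage."""
    pan = ratio * (theta1 + theta2) / 2.0
    tilt = ratio * (theta1 - theta2) / 2.0
    return pan, tilt

def pan_tilt_to_differential(pan: float, tilt: float, ratio: float = 1.0):
    """Inverse mapping: desired (pan, tilt) back to motor angles."""
    theta1 = (pan + tilt) / ratio
    theta2 = (pan - tilt) / ratio
    return theta1, theta2
```

Because both motors contribute to both axes, each axis sees the combined torque of two actuators, which is one way such a design achieves fast tracking with a low moving mass.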

    Capture, Learning, and Synthesis of 3D Speaking Styles

    Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address this, we introduce a unique 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio from 12 speakers. We then train a neural network on our dataset that factors identity from facial motion. The learned model, VOCA (Voice Operated Character Animation), takes any speech signal as input, even speech in languages other than English, and realistically animates a wide range of adult faces. Conditioning on subject labels during training allows the model to learn a variety of realistic speaking styles. VOCA also provides animator controls to alter speaking style, identity-dependent facial shape, and pose (i.e. head, jaw, and eyeball rotations) during animation. To our knowledge, VOCA is the only realistic 3D facial animation model that is readily applicable to unseen subjects without retargeting. This makes VOCA suitable for tasks like in-game video, virtual reality avatars, or any scenario in which the speaker, speech, or language is not known in advance. We make the dataset and model available for research purposes at http://voca.is.tue.mpg.de.
    Comment: To appear in CVPR 201

    Towards the production of radiotherapy treatment shells on 3D printers using data derived from DICOM CT and MRI: preclinical feasibility studies

    Background: Immobilisation for patients undergoing brain or head and neck radiotherapy is achieved using Perspex or thermoplastic devices that require direct moulding to patient anatomy. The mould room visit can be distressing for patients, and the shells do not always fit perfectly. In addition, the mould room process can be time consuming. With recent developments in three-dimensional (3D) printing technologies comes the potential to generate a treatment shell directly from a computer model of a patient. Typically, a patient requiring radiotherapy treatment will have had a computed tomography (CT) scan, and if a computer model of a shell could be obtained directly from the CT data it would reduce patient distress, reduce visits, provide a close-fitting shell, and possibly enable the patient to start radiotherapy treatment more quickly. Purpose: This paper focuses on the first stage of generating the front part of the shell and investigates the dosimetric properties of the materials to show the feasibility of 3D printer materials for the production of a radiotherapy treatment shell. Materials and methods: Computer algorithms are used to segment the surface of the patient’s head from CT and MRI datasets. After segmentation, approaches are used to construct a 3D model suitable for printing on a 3D printer. To ensure that 3D printing is feasible, the properties of a set of 3D printing materials are tested. Conclusions: The majority of the candidate 3D printing materials tested attenuate a therapeutic radiotherapy beam very similarly to the Orfit soft-drape masks currently in use in many UK radiotherapy centres. The costs involved in 3D printing are falling and its applications to medicine are becoming more widely adopted. In this paper we show that 3D printing of bespoke radiotherapy masks is feasible and warrants further investigation.
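The first step the abstract describes, segmenting the patient's surface from a CT volume, can be sketched as a simple intensity threshold followed by surface-voxel extraction. The -300 HU threshold and the toy volume below are illustrative assumptions; the paper's actual pipeline is more involved.

```python
import numpy as np

# Minimal sketch of surface segmentation from a CT volume in
# Hounsfield units (HU): threshold to separate tissue from air,
# then mark voxels that sit on the tissue/air boundary.
# The -300 HU threshold is an assumed, illustrative value.

def segment_surface(ct_hu: np.ndarray, threshold: float = -300.0) -> np.ndarray:
    """Return a boolean mask of voxels on the tissue/air boundary."""
    tissue = ct_hu > threshold              # True inside the patient
    surface = np.zeros_like(tissue)
    # A voxel is on the surface if it is tissue with an air neighbour.
    for axis in range(ct_hu.ndim):
        for shift in (-1, 1):
            neighbour = np.roll(tissue, shift, axis=axis)
            surface |= tissue & ~neighbour
    return surface

# Toy example: a solid block of soft tissue (~40 HU) surrounded by
# air (-1000 HU), standing in for a head in a real CT scan.
vol = np.full((20, 20, 20), -1000.0)
vol[5:15, 5:15, 5:15] = 40.0
mask = segment_surface(vol)
```

In practice the resulting surface voxels would then be converted to a printable mesh (e.g. via a marching-cubes step), which is the "construct a 3D model suitable for printing" stage the abstract mentions.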

    Single camera pose estimation using Bayesian filtering and Kinect motion priors

    Traditional approaches to upper body pose estimation using monocular vision rely on complex body models and a large variety of geometric constraints. We argue that this is not ideal and somewhat inelegant, as it results in large processing burdens, and instead attempt to incorporate these constraints through priors obtained directly from training data. A prior distribution covering the probability of a human pose occurring is used to incorporate likely human poses. This distribution is obtained offline, by fitting a Gaussian mixture model to a large dataset of recorded human body poses, tracked using a Kinect sensor. We combine this prior information with a random walk transition model to obtain an upper body model, suitable for use within a recursive Bayesian filtering framework. Our model can be viewed as a mixture of discrete Ornstein-Uhlenbeck processes, in that states behave as random walks, but drift towards a set of typically observed poses. This model is combined with measurements of the human head and hand positions, using recursive Bayesian estimation to incorporate temporal information. Measurements are obtained using face detection and a simple skin colour hand detector, trained using the detected face. The suggested model is designed with analytical tractability in mind, and we show that the pose tracking can be Rao-Blackwellised using the mixture Kalman filter, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. In addition, the use of the proposed upper body model allows reliable three-dimensional pose estimates to be obtained indirectly for a number of joints that are often difficult to detect using traditional object recognition strategies. Comparisons with Kinect sensor results and the state of the art in 2D pose estimation highlight the efficacy of the proposed approach.
    Comment: 25 pages, Technical report, related to Burke and Lasenby, AMDO 2014 conference paper. Code sample: https://github.com/mgb45/SignerBodyPose Video: https://www.youtube.com/watch?v=dJMTSo7-uF
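The transition model described above, random walks that drift towards typically observed poses, can be sketched in a few lines. The 2-D toy pose space, the two mixture modes, and the drift/noise parameters below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Sketch of a discrete Ornstein-Uhlenbeck-style transition model:
# the state behaves as a random walk but drifts towards a set of
# typical poses (here, means of a mixture assumed to have been
# fitted offline to Kinect pose data). A 2-D toy pose space stands
# in for full joint-position vectors.

rng = np.random.default_rng(0)

# Assumed mixture modes, standing in for a GMM prior over poses.
modes = np.array([[0.0, 0.0], [1.0, 1.0]])

def ou_step(pose, modes, drift=0.1, noise=0.02):
    """One transition: random walk with drift towards the nearest mode."""
    nearest = modes[np.argmin(np.linalg.norm(modes - pose, axis=1))]
    return pose + drift * (nearest - pose) + noise * rng.standard_normal(2)

pose = np.array([0.9, 0.8])
for _ in range(100):
    pose = ou_step(pose, modes)
# After many steps the state hovers near the nearest typical pose.
```

Because each step is linear-Gaussian given the chosen mode, a conditionally linear structure like this is what makes Rao-Blackwellisation with a mixture Kalman filter tractable.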

    Aerospace Medicine and Biology: A continuing bibliography, supplement 191

    A bibliographical list of 182 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1979 is presented.

    Aerospace Medicine and Biology: A continuing bibliography, supplement 180, May 1978

    This special bibliography lists 201 reports, articles, and other documents introduced into the NASA scientific and technical information system in April 1978.

    Endoscopic Camera Control by Head Movements for Thoracic Surgery

    In current video-assisted thoracic surgery, the endoscopic camera is operated by an assistant of the surgeon, which has several disadvantages. This paper describes a system which enables the surgeon to control the endoscopic camera without the help of an assistant. The system is controlled using head movements, so the surgeon can use his/her hands to operate the instruments. The system is based on a flexible endoscope, which leaves more space for the surgeon to operate his/her instruments compared to a rigid endoscope. The endoscopic image is shown either on a monitor or by means of a head-mounted display. Several trial sessions were performed with an anatomical model. Results indicate that the developed concept may provide a solution to some of the problems currently encountered in video-assisted thoracic surgery. The use of a head-mounted display turned out to be a valuable addition, since it ensures the image is always in front of the surgeon’s eyes.