    Antimicrobial Peptide Evolution in the Asiatic Honey Bee Apis cerana

    The Asiatic honeybee, Apis cerana Fabricius, is an important honeybee species in Asian countries. It is still found in the wild, but it is also one of the few bee species that can be domesticated. Compared with other Apis species it has acquired distinct genetic advantages and significantly different biological characteristics, yet it has been studied far less and, over the past two decades, has become a threatened species in China. We designed primers against the sequences of the four antimicrobial peptide cDNA gene families (abaecin, defensin, apidaecin, and hymenoptaecin) of the Western honeybee, Apis mellifera L., and identified all of the corresponding antimicrobial peptide cDNA genes in the Asiatic honeybee for the first time. All sequences were amplified by reverse transcriptase-polymerase chain reaction (RT-PCR). In all, 29 different defensin cDNA genes encoding 7 different defensin peptides, 11 different abaecin cDNA genes encoding 2 different abaecin peptides, 13 different apidaecin cDNA genes encoding 4 apidaecin peptides, and 34 different hymenoptaecin cDNA genes encoding 13 different hymenoptaecin peptides were cloned and identified from Asiatic honeybee adult workers. Detailed comparison of these four antimicrobial peptide gene families with those of the Western honeybee revealed many similarities in the number and amino acid composition of peptides in the abaecin, defensin, and apidaecin families, whereas far more hymenoptaecin peptides were found in the Asiatic honeybee than in the Western honeybee (13 versus 1). The results indicate that, when stimulated by pathogens or injury, Asiatic honeybee adults generate more variable antimicrobial peptides, especially hymenoptaecin peptides, than the Western honeybee. This suggests that, compared with the Western honeybee and its longer history of domestication, selection on the Asiatic honeybee has favored the generation of more variable antimicrobial peptides as protection against pathogens.
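    To make the many-genes-to-fewer-peptides tallies above concrete, here is a minimal Python sketch (using Biopython) of how distinct cDNA clones can be translated and collapsed to the set of distinct peptides they encode. The toy sequences are purely hypothetical stand-ins, not the cloned A. cerana genes.

```python
# Toy illustration: several distinct cDNA clones can encode the same peptide
# because of synonymous (silent) nucleotide changes. Requires Biopython.
from Bio.Seq import Seq

# Hypothetical in-frame cDNA coding sequences (assumed toy data, not real clones).
clones = {
    "hym_clone_01": "ATGGGTAAATTCTCAGTTGTTAAA",
    "hym_clone_02": "ATGGGTAAATTCTCTGTTGTTAAA",  # silent TCA->TCT (both Ser)
    "hym_clone_03": "ATGGGCAAATTCTCAGTTGTTAAA",  # silent GGT->GGC (both Gly)
    "hym_clone_04": "ATGGGTAAATTCAGAGTTGTTAAA",  # non-synonymous Ser->Arg
}

# Translate each clone and collapse to distinct peptides.
peptides = {name: str(Seq(dna).translate()) for name, dna in clones.items()}
distinct = set(peptides.values())
print(f"{len(clones)} cDNA clones encode {len(distinct)} distinct peptides")
# -> 4 cDNA clones encode 2 distinct peptides
```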

    High-Detail Temporally Consistent 3D Capture of Facial Performance.

    Capturing a realistic digital copy of a facial performance is of great importance to film and television production. It allows high-quality replay of the performance under different conditions, such as new illumination or a new viewpoint. The performance model can be altered by space-time editing, or it can be used to build and drive a facial animation rig. This thesis presents a novel system for capturing high-detail 4D models of facial performances. A geometric model without appearance is reconstructed from videos of an actor's face recorded from multiple views in a controlled studio environment. The focus is on achieving temporal consistency and a high level of detail in the 4D performance model, both crucial for use in film production. A baseline method for dense surface tracking in multi-view image sequences is investigated for facial performance capture. Evaluation shows the limitations of previous sequential methods, which provide accurate temporal alignment only for faces with a painted random pattern. A novel, robust sequential tracking method is proposed to handle weak skin texture and rapid non-rigid facial motions. However, gradual accumulation of frame-to-frame alignment errors still results in significant drift of the tracked mesh. A non-sequential tracking framework is therefore introduced which processes an input sequence according to a tree derived from a measure of dissimilarity between all pairs of frames. A novel cluster tree enables a balance between sequential drift and non-sequential jump artefacts. Comprehensive evaluation shows temporally consistent mesh sequences with very little drift for highly dynamic facial performances. Improvements are also demonstrated on whole-body performances and cloth deformation. Photometric stereo with colour lights is used to capture pore-level skin detail, and an original error analysis of the technique is conducted for image noise and calibration errors. The proposed markerless capture system for facial performances combines photometric stereo with non-sequential surface tracking based on the cluster tree. A practical capture setup is constructed from standard video equipment without active illumination or high-speed recording. Errors in the photometric normals are corrected using the temporally aligned mesh sequence, and the resulting 3D models, enhanced by the normal maps, capture fine skin dynamics such as skin wrinkling. High-quality temporal consistency of the models is also demonstrated, with minimal drift in comparison to previous approaches. Qualitative and quantitative comparison with the best state-of-the-art system shows comparable results.
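    As a rough illustration of the non-sequential idea described above, the following Python sketch builds a minimum spanning tree over pairwise frame dissimilarities and traverses it breadth-first to obtain frame/parent alignment pairs, so alignment follows low-dissimilarity edges instead of strict temporal order. The feature vectors and the Euclidean dissimilarity are assumptions for illustration; the thesis's cluster tree and its dissimilarity measure are more elaborate.

```python
# Minimal sketch of non-sequential traversal: align each frame against its
# parent in a minimum spanning tree of the frame-dissimilarity graph.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order
from scipy.spatial.distance import pdist, squareform

def traversal_order(frame_features: np.ndarray, root: int = 0):
    """Return (frame, parent) alignment pairs from a BFS of the MST."""
    dissim = squareform(pdist(frame_features))   # dense pairwise dissimilarity
    mst = minimum_spanning_tree(dissim)          # sparse spanning tree
    order, parents = breadth_first_order(mst, i_start=root, directed=False)
    # Skip the root, whose predecessor is a negative sentinel value.
    return [(int(f), int(parents[f])) for f in order if parents[f] >= 0]

# Toy example: 6 "frames" described by hypothetical 4-D feature vectors.
rng = np.random.default_rng(0)
features = rng.normal(size=(6, 4))
for frame, parent in traversal_order(features):
    print(f"align frame {frame} against frame {parent}")
```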

    Error analysis of photometric stereo with colour lights

    This paper presents a comprehensive error analysis of photometric stereo with colour lights (PSCL) for surface normal estimation. An analytic formulation is introduced for the error in albedo-scaled normal estimation with respect to all inputs to photometric stereo: pixel colour, light directions, and the light-sensor-material interaction. This characterises the error in the estimated normal for all possible normal directions relative to the light setup, given discrepancies in the inputs. The theoretical formulation is validated by an extensive set of experiments with synthetic data. Example discrepancies in each input to the photometric stereo calculation show a complex distribution of the error of an albedo-scaled normal over the space of possible orientations. This is generalised in an empirical sensitivity analysis, which demonstrates that the magnitude of the error propagated from the light directions and the light-sensor-material interaction depends on the surface orientation, whereas image noise is propagated uniformly to all normal directions. There is a linear relationship between the uncertainty in the individual inputs and the uncertainty in the output normals. The theoretical and experimental findings provide several recommendations for designing a capture setup that is least sensitive to inaccuracies in the pixel colours, the light directions, and the light-sensor-material interaction. An example shows how to assess the inaccuracies in the PSCL calculation for a real-world setup. © 2014 Elsevier B.V. All rights reserved.
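    The error propagation discussed above can be reproduced in miniature. In the common three-light colour photometric stereo model, a single 3x3 matrix M folds together the light directions and the light-sensor-material interaction, so an albedo-scaled normal is recovered per pixel as n = M^-1 c from one RGB measurement c. The Python sketch below, with light directions, gains, and noise levels all assumed for illustration rather than taken from the paper, perturbs the pixel colour with Gaussian noise and measures the resulting angular error empirically.

```python
# Monte Carlo sensitivity sketch for colour photometric stereo.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],       # assumed unit light directions (rows)
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])
S = np.diag([1.0, 0.9, 1.1])         # assumed light-sensor-material gains
M = S @ L                            # combined 3x3 mixing matrix

def scaled_normal(c: np.ndarray) -> np.ndarray:
    """Albedo-scaled normal recovered from one RGB pixel colour c."""
    return np.linalg.solve(M, c)

# Simulate one pixel: a true normal with albedo 0.7, then image noise.
n_true = 0.7 * np.array([0.2, 0.3, 1.0]) / np.linalg.norm([0.2, 0.3, 1.0])
c_clean = M @ n_true
rng = np.random.default_rng(1)
errors = []
for _ in range(10_000):
    n_est = scaled_normal(c_clean + rng.normal(scale=0.01, size=3))
    cos = n_est @ n_true / (np.linalg.norm(n_est) * np.linalg.norm(n_true))
    errors.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
print(f"mean angular error: {np.mean(errors):.2f} deg")
```

    Repeating this loop while varying the surface orientation, or while perturbing L or S instead of the pixel colour, gives the kind of empirical sensitivity map the paper generalises analytically.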

    High-fidelity facial performance capture with non-sequential temporal alignment.


    High-Detail 3D Capture and Non-sequential Alignment of Facial Performance

    This paper presents a novel system for the 3D capture of facial performance using standard video and lighting equipment. The mesh of an actor's face is tracked non-sequentially throughout a performance using multi-view image sequences. A minimum spanning tree computed in expression dissimilarity space defines a traversal of the sequences that is optimal with respect to error accumulation. A robust patch-based frame-to-frame surface alignment combined with this optimal traversal significantly reduces drift compared to previous sequential techniques. Multi-path temporal fusion resolves inconsistencies between different alignment paths and yields a final mesh sequence that is temporally consistent. The surface tracking framework is coupled with photometric stereo using colour lights, which captures metrically correct skin geometry. High-detail UV normal maps, corrected for shadow and bias artefacts, augment the temporally consistent mesh sequence. Evaluation on challenging performances by several actors demonstrates the acquisition of subtle skin dynamics and minimal drift over long sequences. A quantitative comparison to a state-of-the-art system shows similar quality of temporal alignment. © 2012 IEEE
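    One way to realise the bias correction of the normal maps mentioned above is a frequency split: keep the high-frequency detail of the photometric normals but replace their low-frequency component with that of the coarse tracked-mesh normals. The Gaussian-blur decomposition below is an assumption for illustration; the paper's actual correction may differ.

```python
# Sketch: correct low-frequency bias in a photometric normal map using the
# coarse normals of the temporally aligned mesh.
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_normals(photometric: np.ndarray, mesh: np.ndarray,
                    sigma: float = 8.0) -> np.ndarray:
    """Blend HxWx3 normal maps: coarse mesh base + fine photometric detail."""
    blur = lambda n: np.stack(
        [gaussian_filter(n[..., k], sigma) for k in range(3)], axis=-1)
    detail = photometric - blur(photometric)   # high-frequency skin detail
    corrected = blur(mesh) + detail            # low frequencies from the mesh
    norms = np.linalg.norm(corrected, axis=-1, keepdims=True)
    return corrected / np.clip(norms, 1e-8, None)  # re-normalise to unit length

# Toy usage with flat upward-facing normal maps (shapes only).
h, w = 64, 64
photo = np.dstack([np.zeros((h, w)), np.zeros((h, w)), np.ones((h, w))])
print(correct_normals(photo, photo.copy()).shape)  # (64, 64, 3)
```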
