36 research outputs found

    Antimicrobial Peptide Evolution in the Asiatic Honey Bee Apis cerana

    The Asiatic honeybee, Apis cerana Fabricius, is an important honeybee species in Asian countries. It is still found in the wild, but is also one of the few bee species that can be domesticated. It has acquired some genetic advantages and significantly different biological characteristics compared with other Apis species. However, it has been less studied and, over the past two decades, has become a threatened species in China. We designed primers based on the sequences of the four antimicrobial peptide cDNA gene families (abaecin, defensin, apidaecin, and hymenoptaecin) of the Western honeybee, Apis mellifera L., and identified all the antimicrobial peptide cDNA genes in the Asiatic honeybee for the first time. All sequences were amplified by reverse transcriptase-polymerase chain reaction (RT-PCR). In all, 29 different defensin cDNA genes encoding 7 different defensin peptides, 11 different abaecin cDNA genes encoding 2 different abaecin peptides, 13 different apidaecin cDNA genes encoding 4 apidaecin peptides, and 34 different hymenoptaecin cDNA genes encoding 13 different hymenoptaecin peptides were cloned and identified from Asiatic honeybee adult workers. Detailed comparison of these four antimicrobial peptide gene families with those of the Western honeybee revealed many similarities in the number and amino acid composition of peptides in the abaecin, defensin and apidaecin families, while many more hymenoptaecin peptides are found in the Asiatic honeybee than in the Western honeybee (13 versus 1). The results indicate that, when stimulated by pathogens or injury, Asiatic honeybee adults generate more variable antimicrobial peptides, especially hymenoptaecin peptides, than the Western honeybee. This suggests that, compared to the Western honeybee, which has a longer history of domestication, selection on the Asiatic honeybee has favored the generation of more variable antimicrobial peptides as protection against pathogens.
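
    To make the relationship between cDNA genes and the peptides they encode concrete, the following is a minimal Python sketch: it translates cloned cDNA open reading frames grouped by gene family and counts the distinct peptides per family, mirroring the "N cDNA genes encoding M peptides" summaries above. It assumes Biopython is available; the sequences are hypothetical placeholders, not real Apis cerana data.

        # Minimal illustrative sketch (not the authors' pipeline); sequences are hypothetical.
        from Bio.Seq import Seq

        cdna_by_family = {
            "defensin":      ["ATGAAATTCTTCGTTTTAGTTTAA", "ATGAAATTCTTCGTTTTAGTTTAA"],
            "hymenoptaecin": ["ATGAAGTACCTGATCACCGCTTAA"],
        }

        def unique_peptides(cdna_list):
            """Translate each cDNA ORF and return the set of distinct peptides."""
            return {str(Seq(c).translate(to_stop=True)) for c in cdna_list}

        for family, cdnas in cdna_by_family.items():
            peptides = unique_peptides(cdnas)
            print(f"{family}: {len(cdnas)} cDNA genes encode {len(peptides)} distinct peptide(s)")

    Several cDNA variants can encode the same peptide, which is why the gene counts reported above exceed the peptide counts.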

    Error analysis of photometric stereo with colour lights

    This paper presents a comprehensive error analysis of photometric stereo with colour lights (PSCL) for surface normal estimation. An analytic formulation is introduced for the error in albedo-scaled normal estimation with respect to all inputs to the photometric stereo calculation: pixel colour, light directions, and the light-sensor-material interaction. This characterises the error in the estimated normal for all possible directions with respect to the light setup, given discrepancies in the inputs. The theoretical formulation is validated by an extensive set of experiments with synthetic data. Example discrepancies in each input to the photometric stereo calculation show a complex distribution of the error of an albedo-scaled normal over the space of possible orientations. This is generalised in the empirical sensitivity analysis, which demonstrates that the magnitude of the error propagated from the light directions and the light-sensor-material interaction depends on the surface orientation, whereas image noise is propagated uniformly to all normal directions. There is a linear relationship between the uncertainty in the individual inputs and in the output normals. The theoretical and experimental findings provide several recommendations on designing a capture setup that is least sensitive to inaccuracies in the pixel colours, the light directions and the light-sensor-material interaction. An example is provided showing how to assess the inaccuracies in the PSCL calculation for a real-world setup. © 2014 Elsevier B.V. All rights reserved.
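
    As a rough numerical companion to the analysis above, the following Python sketch models the PSCL imaging equation c = M b, where c is the pixel colour, b the albedo-scaled normal, and the 3x3 matrix M folds together the light directions and the light-sensor-material interaction. It is an illustrative toy under stated assumptions (the matrix, noise level and random orientations are made up, and shadowing and sensor clipping are ignored), not the paper's analytic derivation; it only shows how pixel-colour noise can be propagated empirically to angular error in the recovered normal.

        # Illustrative sketch only; M and the noise level are assumptions, not calibrated values.
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical calibrated matrix mixing light directions and colour response.
        M = np.array([[0.9, 0.1, 0.3],
                      [0.2, 0.8, 0.4],
                      [0.1, 0.3, 0.7]])

        def estimate_albedo_scaled_normal(c, M):
            """Invert the PSCL imaging equation c = M b for one RGB pixel."""
            return np.linalg.solve(M, c)

        def angular_error_deg(n_est, n_true):
            """Angle in degrees between the estimated and true normal directions."""
            cos = np.dot(n_est, n_true) / (np.linalg.norm(n_est) * np.linalg.norm(n_true))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        # Empirical sensitivity check: propagate pixel noise for several surface orientations.
        for _ in range(5):
            n_true = rng.normal(size=3)
            n_true /= np.linalg.norm(n_true)
            c_noisy = M @ n_true + rng.normal(scale=0.01, size=3)  # assumed pixel noise
            err = angular_error_deg(estimate_albedo_scaled_normal(c_noisy, M), n_true)
            print(f"orientation {np.round(n_true, 2)} -> angular error {err:.2f} deg")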

    High-fidelity facial performance capture with non-sequential temporal alignment.


    High-Detail 3D Capture and Non-sequential Alignment of Facial Performance

    This paper presents a novel system for the 3D capture of facial performance using standard video and lighting equipment. The mesh of an actor's face is tracked non-sequentially throughout a performance using multi-view image sequences. A minimum spanning tree computed in expression dissimilarity space defines a traversal of the sequences that is optimal with respect to error accumulation. A robust patch-based frame-to-frame surface alignment, combined with the optimal traversal, significantly reduces drift compared to previous sequential techniques. Multi-path temporal fusion resolves inconsistencies between different alignment paths and yields a final mesh sequence that is temporally consistent. The surface tracking framework is coupled with photometric stereo using colour lights, which captures metrically correct skin geometry. High-detail UV normal maps, corrected for shadow and bias artefacts, augment the temporally consistent mesh sequence. Evaluation on challenging performances by several actors demonstrates the acquisition of subtle skin dynamics and minimal drift over long sequences. A quantitative comparison to a state-of-the-art system shows similar quality of temporal alignment. © 2012 IEEE.
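
    The non-sequential traversal idea can be sketched numerically. The Python snippet below builds a minimum spanning tree over a frame-to-frame dissimilarity matrix and walks it breadth-first from a root frame, so that each frame is aligned to its most similar already-tracked neighbour rather than simply to the previous frame. The per-frame feature vectors and the Euclidean dissimilarity used here are illustrative stand-ins for the paper's expression dissimilarity measure.

        # Illustrative sketch; the dissimilarity measure and features are assumptions.
        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order
        from scipy.spatial.distance import pdist, squareform

        def traversal_order(frame_features, root=0):
            """Return (parents, order): alignment source per frame and the visit order."""
            dissimilarity = squareform(pdist(frame_features))   # dense N x N matrix
            mst = minimum_spanning_tree(dissimilarity)           # sparse, directed edges
            sym = mst + mst.T                                    # treat the tree as undirected
            order, parents = breadth_first_order(sym, i_start=root, directed=False)
            return parents, order

        # Hypothetical per-frame descriptors (e.g. pooled appearance features).
        features = np.random.default_rng(1).normal(size=(8, 16))
        parents, order = traversal_order(features)
        print("visit order:", order)
        print("align each frame to parent:", parents)

    Walking the tree this way bounds error accumulation by the depth of the tree rather than the length of the sequence, which is the intuition behind the reduced drift reported above.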


    Cooperative patch-based 3D surface tracking

    This paper presents a novel dense motion capture technique which creates a temporally consistent mesh sequence from several calibrated and synchronised video sequences of a dynamic object. A surface patch model based on the topology of a user-specified reference mesh is employed to track the surface of the object over time. Multi-view 3D matching of surface patches using a novel cooperative minimisation approach provides initial motion estimates which are robust to large, rapid non-rigid changes of shape. A Laplacian deformation subsequently regularises the motion of the whole mesh, using the weighted vertex displacements as soft constraints. An unregistered surface geometry, independently reconstructed at each frame, is incorporated as a shape prior to improve the quality of tracking. The method is evaluated in the challenging scenario of facial performance capture. Results demonstrate accurate tracking of fast, complex expressions over long sequences without the use of markers or a pattern. © 2011 IEEE.
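
    The Laplacian regularisation step lends itself to a small worked example. The Python sketch below solves, per coordinate, a least-squares system that keeps the deformed mesh's Laplacian coordinates close to those of the reference while treating weighted vertex displacements (standing in here for the patch-matching estimates) as soft constraints. The mesh, Laplacian, constraint indices and weights are hypothetical placeholders, not the paper's actual formulation or data.

        # Illustrative sketch of Laplacian deformation with weighted soft constraints.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import lsqr

        def laplacian_deform(V, L, constrained_idx, targets, weights):
            """Least squares: keep Laplacian coords of X near those of V,
            with weighted soft constraints X_i ~= targets_i."""
            n = V.shape[0]
            # Soft-constraint rows: weight w_i on vertex i, stacked under the Laplacian.
            C = sp.csr_matrix((weights, (np.arange(len(constrained_idx)), constrained_idx)),
                              shape=(len(constrained_idx), n))
            A = sp.vstack([L, C]).tocsr()
            X = np.zeros_like(V)
            for d in range(3):                          # solve x, y, z independently
                b = np.concatenate([L @ V[:, d], weights * targets[:, d]])
                X[:, d] = lsqr(A, b)[0]
            return X

        # Tiny usage example: a 4-vertex chain with a uniform graph Laplacian.
        V = np.array([[0., 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
        L = sp.csr_matrix(np.array([[ 1., -1,  0,  0],
                                    [-1,  2, -1,  0],
                                    [ 0, -1,  2, -1],
                                    [ 0,  0, -1,  1]]))
        new_V = laplacian_deform(V, L,
                                 constrained_idx=np.array([0, 3]),
                                 targets=np.array([[0., 0, 0], [3, 1, 0]]),
                                 weights=np.array([10., 10.]))
        print(new_V.round(2))   # unconstrained vertices follow the constraints smoothly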
