32 research outputs found

    Between-male variation in semen characteristics and preliminary results on the dilution of semen in the ostrich

    Abstract: This study is part of an ongoing project on artificial insemination in ostriches. The physical output of neat semen from four ostrich males was investigated, together with the effect of reconstituting semen with: 1) seminal plasma of the same male (SPS); 2) seminal plasma of another male (SPD); and 3) Dulbecco's Modified Eagle's Medium (DMEM). Semen was collected daily from one or two pairs of males using the dummy-female method, each pair being replicated twice. Spermatozoa viability in neat semen, SPS, SPD and DMEM was assessed using nigrosin-eosin staining, and the proportions of live normal, live abnormal and dead sperm were determined. Semen volume (mean ± SE) was 1.27 ± 0.13 mL, the concentration of spermatozoa 3.68 ± 0.17 × 10⁹/mL, and the number of spermatozoa 4.92 ± 0.64 × 10⁹/ejaculate. The proportions of live normal, live abnormal and dead spermatozoa in neat semen were 61.2 ± 4.5%, 21.2 ± 2.7% and 17.7 ± 4.3%, respectively. Ejaculate volume and the proportion of dead spermatozoa were not affected by collection time; however, the proportion of live abnormal spermatozoa increased through the day, causing a reduction in live normal spermatozoa. Re-suspending spermatozoa in DMEM reduced the proportions of live normal (31.4 ± 4.6%) and live abnormal spermatozoa (11.0 ± 2.7%) and increased the proportion of dead spermatozoa (57.6 ± 4.4%). In contrast, proportions of live spermatozoa were higher when sperm were suspended in seminal plasma, and similar in SPS (53.9 ± 4.6%) and SPD (50.7 ± 4.6%). These are the first crucial steps towards determining the optimum semen collection time and improving the viability of diluted spermatozoa.
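    As a quick arithmetic aside, the per-ejaculate count is the product of volume and concentration; the short Python sketch below (ours, not part of the study) relates the reported means:

```python
# Illustration only: relate mean volume and mean concentration to the
# per-ejaculate sperm count. Variable names are ours, not the study's.
volume_ml = 1.27        # mean ejaculate volume (mL)
conc_per_ml = 3.68e9    # mean concentration (spermatozoa/mL)

print(f"{volume_ml * conc_per_ml:.2e} spermatozoa")  # ~4.67e9
# The reported 4.92e9/ejaculate is the mean of per-ejaculate counts,
# which need not equal the product of the two means.
```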

    Analysis of pattern recognition techniques for in-air signature biometrics

    As a result of advances in mobile technology, new services which benefit from the ubiquity of these devices are appearing. Some of these services require identification of the subject, since they may access private user information. In this paper, we propose to identify each user by the handwritten signature he or she draws in the air (in-air signature). In order to assess the feasibility of the in-air signature as a biometric feature, we have analysed the performance of several well-known pattern recognition techniques (Hidden Markov Models, Bayes classifiers and dynamic time warping) on this problem. Each technique has been tested on the identification of the signatures of 96 individuals. Furthermore, the robustness of each method against spoofing attacks has been analysed using six impostors who attempted to emulate every signature. The best results in both experiments were obtained with a technique based on dynamic time warping, which performs the recognition by computing distances to an average template extracted from several training instances. Finally, a permanence analysis has been carried out in order to assess the stability of the in-air signature over time.
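    The distance-to-average-template scheme that performed best here can be sketched as follows; this is a minimal illustration assuming signatures arrive as sequences of feature vectors (the names and the template-averaging shortcut are our assumptions, not the paper's code):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping distance between two feature-vector sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def identify(probe: np.ndarray, templates: dict) -> str:
    """Return the user whose template is closest (by DTW) to the probe.

    `templates` maps each user to one template, e.g. the mean of several
    time-normalized training signatures: one simple way to build the
    "average template" the abstract mentions.
    """
    return min(templates, key=lambda user: dtw_distance(probe, templates[user]))
```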

    Contribution to the modelling and exploitation of hybrid multi-neural-network systems (application to intelligent information processing)

    For a great number of currently encountered problems (modelling of complex processes, pattern recognition, medical diagnosis support, fault detection), the data is presented in the form of databases. This data is then transformed and processed. This work concentrates on the development of a semi-automatic data processing structure. The proposed approach is built on techniques based on the successive (iterative) decomposition of the initial problem. The basic idea originates in part from decomposing an initially complex processing task by dividing it, in order to obtain simplification both at the structural level and at the level of the processing means. The guiding idea of this work is thus related to task decomposition techniques, also called "divide and conquer". A key point of our approach is the integration of complexity estimation techniques.
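    As we read the abstract, the decomposition loop could look roughly like the following Python sketch; the complexity estimator and the split rule below are placeholders for illustration, not the thesis's actual methods:

```python
import numpy as np

def complexity(y: np.ndarray) -> float:
    """Placeholder complexity estimator: class-label entropy of the subset."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def decompose(X, y, threshold=0.5, depth=0, max_depth=5):
    """Recursively split the data until the estimated complexity is low enough."""
    if len(y) < 2 or depth >= max_depth or complexity(y) <= threshold:
        return {"leaf": True, "X": X, "y": y}   # fit a simple local model here
    f = int(np.argmax(X.var(axis=0)))           # placeholder split rule:
    mask = X[:, f] > np.median(X[:, f])         # threshold the most spread feature
    return {"leaf": False, "feature": f,
            "hi": decompose(X[mask], y[mask], threshold, depth + 1, max_depth),
            "lo": decompose(X[~mask], y[~mask], threshold, depth + 1, max_depth)}
```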

    Tuning of a Knowledge-Driven Harmonization Model for Tonal Music

    Part 5: Algorithms and Data Management. The paper presents and discusses direct and indirect tuning of a knowledge-driven harmonization model for tonal music. Automatic harmonization is a data analysis problem: an algorithm processes a music notation document and generates specific metadata (harmonic functions). The proposed model can be seen as an expert system with manually selected weights, based largely on music theory. It emphasizes universality: the possibility of obtaining varied but controllable harmonies. It is directly tunable by changing the internal parameters of the harmonization mechanisms, as well as an importance weight corresponding to each mechanism. The authors also propose indirect model tuning, using supervised learning with a preselected set of examples; the indirect tuning algorithms are evaluated experimentally and discussed. The proposed harmonization model is amenable both to direct (expert-based) and indirect (data-driven) modification, which allows for mixed learning and relatively easy interpretation of the internal knowledge.
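    A toy sketch of what such a directly tunable, weighted rule-based model implies in code; the rules, weights and harmonic-function labels below are invented for illustration and are not the authors' actual mechanisms:

```python
# Toy sketch: score candidate harmonic functions (T/S/D) for one chord
# position by a weighted sum of rule scores. The per-mechanism weights
# are exactly the kind of directly tunable parameters the paper describes.

RULES = {
    # mechanism name -> scoring function(previous_function, candidate) -> [0, 1]
    "avoid_repetition": lambda prev, cand: 0.0 if cand == prev else 1.0,
    "prefer_dominant_to_tonic": lambda prev, cand:
        1.0 if (prev == "D" and cand == "T") else 0.5,
}

WEIGHTS = {"avoid_repetition": 1.0, "prefer_dominant_to_tonic": 2.0}  # tunable

def best_function(prev: str, candidates=("T", "S", "D")) -> str:
    score = lambda c: sum(WEIGHTS[r] * f(prev, c) for r, f in RULES.items())
    return max(candidates, key=score)

print(best_function("D"))  # -> "T": the dominant resolves to the tonic
                           #    under these weights
```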

    Modelling Human Cognitive Processes

    Part 5: Modelling and Optimization. The article presents an application of fuzzy sets with triangular norms, and of balanced fuzzy sets with balanced norms, to the modelling of decision making. We elaborate a vector-based method for decision problem representation, where each element of a vector corresponds to an argument analysed by the decision maker. The vectors gather the information that influences a given decision-making task, and a decision is the outcome of aggregating the information gathered in them. We capitalize on the inherent ability of balanced norms to aggregate positive and negative premises of different intensity, and contrast the properties of the resulting bipolar model with those of a unipolar model based on triangular norms and standard fuzzy sets. We also propose several aggregation schemes that illustrate different real-life decision-making situations, and show the suitability of the proposed model for representing complex and biased decision-making cases.
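    A minimal sketch of the unipolar-versus-bipolar contrast described above; the clipped-sum operator below is one simple balanced-style aggregation chosen for illustration, not necessarily the authors' exact norm:

```python
# Unipolar aggregation on [0, 1] with the product t-norm vs. a bipolar
# aggregation on [-1, 1] in which positive and negative premises offset
# each other, as balanced norms allow.
from math import prod

def t_norm_product(premises):           # premises in [0, 1]
    return prod(premises)

def balanced_aggregate(premises):       # premises in [-1, 1]
    return max(-1.0, min(1.0, sum(premises)))

pro_and_con = [0.7, 0.4, -0.6]          # two arguments for, one against
print(balanced_aggregate(pro_and_con))  # 0.5: the negative premise weakens it
print(t_norm_product([0.7, 0.4]))       # 0.28: unipolar norms cannot encode "against"
```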

    FE8R - A Universal Method for Face Expression Recognition

    Part 9: Biometrics, Identification, Security. This paper proposes a new method for the recognition of facial expressions, called FE8R. We studied the six standard expressions (anger, disgust, fear, happiness, sadness, surprise) and two additional ones: cry and natural. For experimental evaluation, samples from the MUG Facial Expression Database and the color FERET Database were taken, with the addition of the cry expression. The proposed method is based on the extraction of characteristic objects from images by gradient transformation, depending on the coordinates of the minimum and maximum points of each object in the face area. The gradient orientation is restricted to the range [-15, +35] degrees. The essential objects are studied in two ways: the first incorporates slant tracking; the second is based on feature encoding using the BPCC algorithm, with classification by backpropagation artificial neural networks. The achieved classification rates reached 95%. The second method proved to be fast and to produce satisfactory results compared to other approaches.
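    The orientation-range filtering step can be sketched as follows; this is our minimal reading of it, keeping only pixels whose gradient direction falls in [-15, +35] degrees, with everything else in the FE8R pipeline (object extraction, BPCC encoding, classification) omitted:

```python
import numpy as np

def orientation_mask(gray: np.ndarray) -> np.ndarray:
    """Keep pixels whose gradient orientation lies in [-15, +35] degrees.

    A sketch of the orientation-range filtering the abstract describes,
    using plain finite differences rather than the paper's exact operator.
    """
    gy, gx = np.gradient(gray.astype(float))   # gradients along rows, columns
    angle = np.degrees(np.arctan2(gy, gx))     # orientation in (-180, 180]
    return (angle >= -15.0) & (angle <= 35.0)
```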

    K3M: A universal algorithm for image skeletonization and a review of thinning techniques

    This paper addresses three closely related aspects. First, it presents the state of the art in thinning methodologies, describing the general ideas behind the most significant algorithms and comparing them. Second, it proposes a new thinning algorithm with interesting properties in terms of processing quality and algorithmic clarity, enriched with examples. Third, it considers parallelization issues for intrinsically sequential thinning algorithms. The main advantage of the proposed algorithm is its universality, which makes it useful and versatile for a variety of applications.
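    For readers unfamiliar with thinning, the sketch below implements Zhang-Suen, a classic sequential thinning algorithm of the kind the review covers; it is shown purely for illustration and is not K3M itself:

```python
import numpy as np

def zhang_suen_thinning(img: np.ndarray) -> np.ndarray:
    """Zhang-Suen thinning of a binary image (1 = foreground, 0 = background)."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                    # the two alternating sub-iterations
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # Neighbours P2..P9, clockwise from the pixel above.
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                 # number of foreground neighbours
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if not (2 <= b <= 6 and a == 1):
                        continue
                    if step == 0 and p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0:
                        to_delete.append((y, x))
                    if step == 1 and p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0:
                        to_delete.append((y, x))
            for y, x in to_delete:             # delete only after the full scan
                img[y, x] = 0
                changed = True
    return img
```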