20 research outputs found

    Spoken Language Understanding in a Latent Topic-based Subspace

    The performance of spoken language understanding applications declines when spoken documents are automatically transcribed in noisy conditions, due to high Word Error Rates (WER). To improve robustness to transcription errors, recent solutions propose mapping these automatic transcriptions into a latent space, and have compared classical topic-based representations such as Latent Dirichlet Allocation (LDA), supervised LDA and author-topic (AT) models. An original compact representation, called c-vector, was recently introduced to work around the tricky choice of the number of latent topics in these topic-based representations. Moreover, c-vectors increase the robustness of document classification to transcription errors by compacting different LDA representations of the same speech document into a reduced space, which compensates for most of the noise in the document representation. The main drawback of this method is the number of sub-tasks needed to build the c-vector space. This paper proposes both to improve this compact representation (c-vector) of spoken documents and to reduce the number of sub-tasks required, using an original framework that builds a robust low-dimensional feature space from a set of AT models, called the "Latent Topic-based Sub-space" (LTS). In comparison to LDA, the AT model considers not only the dialogue content (words) but also the class associated with the document. Experiments are conducted on the DECODA corpus, which contains speech conversations from the call center of the RATP Paris transportation company. Results show that the original LTS representation outperforms the best previous compact representation (c-vector), with a substantial gain of more than 2.5% in terms of correctly labeled conversations.
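
    As a rough illustration of the topic-based pipeline this line of work builds on (the LDA baseline, not the c-vector or LTS method itself), the sketch below maps bag-of-words transcript representations into an LDA topic space and feeds the topic posteriors to a linear classifier. The tiny corpus, labels and topic count are invented placeholders.

        # Minimal sketch of a topic-based document classification pipeline,
        # in the spirit of the LDA baselines discussed above (not the LTS method).
        # The toy transcripts and labels below are purely illustrative.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        transcripts = [
            "lost my travel card on line four",
            "how much is a monthly pass",
            "the bus never showed up this morning",
            "refund request for an unused ticket",
        ]
        themes = ["lost_item", "fares", "traffic", "fares"]  # placeholder classes

        # Bag-of-words -> latent topic posteriors -> linear classifier.
        pipeline = make_pipeline(
            CountVectorizer(),
            LatentDirichletAllocation(n_components=2, random_state=0),
            LinearSVC(),
        )
        pipeline.fit(transcripts, themes)
        print(pipeline.predict(["price of a weekly pass"]))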

    Titanium Particles Modulate Lymphocyte and Macrophage Polarization in Peri-Implant Gingival Tissues

    Titanium dental implants are one of the modalities used to replace missing teeth. The release of titanium particles from the implant's surface may modulate immune cells, resulting in implant failure. However, little is known about the immune microenvironment that plays a role in peri-implant inflammation as a consequence of titanium particles. In this study, peri-implant gingival tissues were collected from patients with failed implants, successful implants and no implants, and a whole transcriptome analysis was performed. The gene set enrichment analysis confirmed that macrophage M1/M2 polarization and lymphocyte proliferation were differentially expressed between the study groups. The functional clustering and pathway analysis of the differentially expressed genes between the failed implants and successful implants versus no implants revealed that immune response pathways were the most common in both comparisons, implying a critical role for infiltrating immune cells in the peri-implant tissues. H&E and IHC staining confirmed the presence of titanium particles and immune cells in the tissue samples, with increased infiltration of lymphocytes and macrophages in the failed implant samples. The in vitro validation showed a significant increase in the level of IL-1β, IL-8 and IL-18 expression by macrophages. Our findings provide evidence that titanium particles modulate lymphocyte and macrophage polarization in peri-implant gingival tissues, which can help in understanding the imbalance in osteoblast–osteoclast activity and the failure of dental implant osseointegration.

    Speaker recognition in noisy environments

    Speaker recognition has witnessed considerable progress over the last decade, achieving very low error rates in controlled conditions. However, the deployment of this technology in real applications is hampered by the severe degradation of performance in the presence of acoustic nuisances. The research community has invested a lot of effort in the design of nuisance compensation techniques in recent years. These algorithms operate at different levels: signal, acoustic parameters, models or scores. With the development of the "total variability" paradigm, new possibilities can be explored thanks to the simple statistical properties of the i-vector space. Our work falls within this framework and presents new compensation techniques that operate directly in the i-vector space. These algorithms use simple relationships between corrupted i-vectors and their clean counterparts and abstract away the real effect of nuisances in this domain. To implement this methodology, pairs of clean and corrupted data are artificially generated and then used to develop the nuisance compensation algorithms; this avoids derivations that would otherwise be complex or very approximate. The techniques developed in this thesis fall into two classes. The first class is based on a distortion model in the i-vector space: a relationship between the clean version of an i-vector and its corrupted version is posited, and an estimator is built to transform a corrupted test i-vector into its clean counterpart. The second class does not use any distortion model in the i-vector domain; it takes into account the distributions of the clean and corrupted i-vectors as well as their joint distribution. Experiments are carried out on noisy data and short utterances: NIST SRE 2008 data that were artificially noised and shortened, and naturally short and noisy segments from the SITW challenge.
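
    The first class of techniques can be caricatured as learning a regression from corrupted i-vectors to their clean counterparts on artificially generated pairs. The sketch below illustrates that general idea with a multi-output ridge regression on synthetic vectors; the dimensions, the toy distortion and the regressor choice are assumptions, not the estimator actually derived in the thesis.

        # Minimal sketch: learn a mapping from corrupted to clean i-vectors
        # on artificially generated pairs, then "denoise" a test i-vector.
        # Dimensions, noise model and data are illustrative, not from the thesis.
        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        dim, n_pairs = 400, 5000          # typical i-vector size, placeholder pair count

        clean = rng.standard_normal((n_pairs, dim))
        corrupted = 0.8 * clean + 0.3 * rng.standard_normal((n_pairs, dim))  # toy distortion

        # Multi-output ridge regression plays the role of the clean-i-vector estimator.
        estimator = Ridge(alpha=1.0).fit(corrupted, clean)

        test_corrupted = 0.8 * rng.standard_normal((1, dim))
        test_clean_hat = estimator.predict(test_corrupted)   # compensated i-vector
        print(test_clean_hat.shape)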

    Assessing The Position And Angulation Of Single Implants Restored In The Predoctoral Dentistry Program

    Assessing the Position and Angulation of Single Implants Restored in the Predoctoral Dentistry Program. Waad M. Kheder, Master of Science, Graduate Prosthodontics, University of Toronto, 2014. Abstract. Objective: To assess whether single implants restored in the undergraduate clinic at the Faculty of Dentistry, University of Toronto, are placed in a compromised position and angulation relative to the adjacent natural teeth. Materials and Methods: The study sample consists of 108 patients treated with single implants placed in the Implants Placement Unit and restored by predoctoral students at the Faculty of Dentistry, University of Toronto. The angulation and 3D position of each implant relative to the adjacent teeth were assessed using the measurement tool of the 3D scanner. Results: The highest percentage of non-ideal implant position was found for the mesiodistal position, and the lowest percentage for non-ideal buccolingual angulation. Conclusion: Placement of an implant in a non-ideal position or angulation may be due to gingival biotype, buccal cortical plate concavity, the selected implant diameter, and the relation of the implant site to vital anatomical structures and the roots of adjacent teeth.

    Web Application Security: Analysis, Modeling and Detection of Attacks Using Machine Learning

    Web applications are the backbone of modern information systems. Their exposure on the Internet continually generates new forms of threats that can jeopardize the security of the entire information system. Robust, feature-rich solutions exist to counter these threats; they are based on well-proven attack detection models, each with its own advantages and limitations. Our work consists in integrating the capabilities of several models into a single solution in order to increase detection capacity. To achieve this objective, our first contribution defines a classification of threats adapted to the context of Web applications. This classification also helps solve scheduling problems for the analysis operations performed during the attack detection phase. In a second contribution, we propose a Web application firewall architecture based on two analysis models: a behavioral analysis module and a signature inspection module. The main challenge raised by this architecture is adapting the behavioral analysis model to the context of Web applications. We address this challenge by modeling malicious behavior, so that a dedicated model of abnormal behavior can be built for each attack class. To construct these models, we use classifiers based on supervised machine learning, which learn the deviant behaviors of each attack class from training datasets. This lifts a second obstacle, the availability of training data: in a final contribution, we define and design a platform for the automatic generation of training datasets. The data generated by this platform are normalized and categorized for each attack class, and the data generation model we have developed is able to learn "from its own errors" continuously in order to produce higher-quality machine learning datasets.
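
    To make the per-attack-class supervised classification concrete, here is a minimal sketch in which labeled HTTP request strings are turned into character n-gram features and fed to a supervised classifier. The requests, class labels and model choice are invented for illustration and do not reflect the thesis's actual architecture or datasets.

        # Minimal sketch: character n-gram features over raw HTTP request strings,
        # plus one supervised classifier over attack classes.
        # The requests and labels below are invented placeholders.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        requests = [
            "GET /index.php?id=1",
            "GET /index.php?id=1' OR '1'='1",
            "GET /search?q=<script>alert(1)</script>",
            "GET /download?file=../../etc/passwd",
        ]
        labels = ["benign", "sqli", "xss", "path_traversal"]

        clf = make_pipeline(
            TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
            LogisticRegression(max_iter=1000),
        )
        clf.fit(requests, labels)
        print(clf.predict(["GET /item?id=2 UNION SELECT password FROM users"]))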

    Automatic Prediction of Speech Evaluation Metrics for Dysarthric Speech

    Over the last decades, automatic speech processing systems have made important progress and achieved remarkable reliability. As a result, such technologies have been exploited in new areas and applications, including medical practice. In the context of disordered speech evaluation, perceptual evaluation is still the most common method used in clinical practice for diagnosing patients and following the progression of their condition, despite its well-documented limitations (such as subjectivity). In this paper, we propose an automatic approach for the prediction of dysarthric speech evaluation metrics (intelligibility, severity, articulation impairment) based on a representation of the speech acoustics in the total variability subspace, following the i-vector paradigm. The proposed approach, evaluated on 129 French dysarthric speakers from the DesPhoAPady and VML databases, proves effective for modeling a patient's production and capable of detecting the evolution of speech quality. Low RMSE and high correlation measures are also obtained between the automatically predicted metrics and the perceptual evaluations.
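
    A minimal sketch of the kind of setup described above, regressing perceptual scores on fixed-dimensional speech features and reporting RMSE and Pearson correlation; the random features, scores and the SVR regressor are placeholders rather than the paper's actual i-vector extraction or model.

        # Minimal sketch: regress perceptual scores (e.g. intelligibility) on
        # fixed-dimensional features and report RMSE / Pearson correlation.
        # Features, scores and the SVR regressor are illustrative assumptions.
        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVR
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        X = rng.standard_normal((129, 400))        # placeholder "i-vectors"
        y = rng.uniform(0, 10, size=129)           # placeholder perceptual scores

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = SVR().fit(X_tr, y_tr)
        pred = model.predict(X_te)

        rmse = mean_squared_error(y_te, pred) ** 0.5
        r, _ = pearsonr(y_te, pred)
        print(f"RMSE={rmse:.2f}  Pearson r={r:.2f}")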

    Marginal bone loss around platform-switched and platform-matched implants following immediate dental implant placement – Systematic Review

    Objective: This study aimed to examine marginal bone loss (MBL) around immediately placed platform-switched (PS) implants compared to platform-matched (PM) implants, and to critically appraise the available literature on this topic. Materials and Methods: Randomized controlled trials (RCTs), non-randomized controlled trials (NRCTs) and case series on immediately placed platform-switched and platform-matched implants, published in English, were included in the study. Two databases, namely Medline and PubMed, covering the period between July 1966 and July 2023, were searched. A total of five case series, five RCTs and one NRCT were included in this systematic review, using pre-defined study selection criteria and following the PRISMA protocol. A critical appraisal of the selected studies was completed using standardized appraisal checklists: the CASP tool for the critical appraisal of RCTs, the Downs and Black checklist for the NRCT, and the CEBMa checklist for case series studies. Results: Five studies showed a statistically significant difference in MBL (PS: 0.18–0.78 mm, PM: 0.51–1.19 mm). The studies featured small sample sizes and substantial methodological variability in patient selection criteria, implant and abutment designs, connection types and surgical protocols. A high risk of bias was identified, especially in the case series studies. Conclusion: The use of PS implants in immediate placement protocols can lead to a statistically significant reduction in MBL compared to PM implants. However, the results need to be interpreted with caution, given the numerous confounding variables and the clinical heterogeneity existing between the studies.

    A Unified Joint Model to Deal With Nuisance Variabilities in the i-Vector Space


    Er:YAG Laser Debonding of Lithium Disilicate Laminate Veneers: Effect of Laser Power Settings and Veneer Thickness on the Debonding Time and Pulpal Temperature

    Aim: This study aimed to investigate the influence of different laser power outputs on the pulpal temperature and the time required to debond lithium disilicate laminate veneers of two different thicknesses. Methods: The labial enamel of forty-eight maxillary central incisors was flattened and polished. The teeth were restored with flat lithium disilicate ceramic veneers (4.0 mm × 6.0 mm) of one of two thicknesses (0.5 and 1.0 mm). Veneer debonding was performed using an Er:YAG laser (Fidelis AT, Fotona) with a 2940 nm wavelength, a 100 μs pulse duration (VSP mode), a 10 Hz repetition rate and one of three laser power output settings: 1.5 W (150 mJ), 3 W (300 mJ) and 5.4 W (360 mJ) (n=8). The veneer debonding time and the intra-pulpal temperature changes (ΔT) were measured. Statistical analysis was performed using two-way ANOVA and the Bonferroni post-hoc test (α = 0.05). The correlation between debonding time and temperature change was computed using Pearson's correlation. Results: Debonding 1.0 mm veneers at 1.5 W took the longest time (p<0.05), while debonding 0.5 mm veneers at 3 W and 5.4 W took the shortest times (p<0.05). ΔT decreased significantly as laser power increased. A low correlation was found between debonding time and ΔT (R² = 0.113). Conclusions: Laser power and veneer thickness are crucial factors in debonding, with thinner veneers being faster to remove. When debonding thick veneers, a laser power output of 5.4 W is more efficient and less harmful to the pulpal tissues.
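
    For readers unfamiliar with the reported analysis, the toy sketch below runs a two-way ANOVA (power × thickness) on debonding time and a Pearson correlation with ΔT, using statsmodels and scipy; the data frame is fabricated for illustration and does not reproduce the study's measurements.

        # Toy sketch of the reported analysis: two-way ANOVA (power x thickness)
        # on debonding time, plus Pearson correlation with pulpal temperature change.
        # The data frame below is fabricated for illustration only.
        import numpy as np
        import pandas as pd
        from scipy.stats import pearsonr
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "power_w": np.repeat([1.5, 3.0, 5.4], 16),
            "thickness_mm": np.tile(np.repeat([0.5, 1.0], 8), 3),
        })
        df["time_s"] = 120 - 15 * df["power_w"] + 30 * df["thickness_mm"] + rng.normal(0, 5, len(df))
        df["delta_t"] = 2 + 0.01 * df["time_s"] + rng.normal(0, 0.3, len(df))

        model = ols("time_s ~ C(power_w) * C(thickness_mm)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))          # two-way ANOVA table

        r, p = pearsonr(df["time_s"], df["delta_t"])    # correlation time vs. ΔT
        print(f"Pearson r={r:.2f}, p={p:.3f}")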