    Verbing and nouning in French: toward an ecologically valid approach to sentence processing

    The present thesis uses event-related potentials (ERPs) to investigate the neurocognitive mechanisms underlying sentence comprehension. In particular, two experiments seek to clarify the interplay between syntactic and semantic processes in native speakers and second language (L2) learners. Friederici’s (2002, 2011) “syntax-first” model predicts that syntactic categories are analyzed at the earliest stages of speech perception, a stage reflected by the ELAN (early left anterior negativity) reported for syntactic category violations. Further, syntactic category violations seem to prevent the appearance of N400s (linked to lexical-semantic processing), a phenomenon known as “semantic blocking” (Friederici et al., 1999). However, a review article by Steinhauer and Drury (2012) argued that most ELAN studies used flawed designs, in which pre-target context differences may have caused both ELAN-like artifacts and the absence of N400s. The first study reevaluates syntax-first approaches to sentence processing by implementing a novel paradigm in French that included correct sentences, pure syntactic category violations, lexical-semantic anomalies, and combined anomalies. This balanced design systematically controlled for the target word (noun vs. verb) and the context immediately preceding it. Group results from native speakers of Quebec French revealed an N400-P600 complex in response to all anomalous conditions, providing strong evidence against the syntax-first and semantic blocking hypotheses. Additive effects of syntactic category and lexical-semantic anomalies on the N400 may reflect detection of a mismatch between a predicted word stem and the actual target, in parallel with lexical-semantic retrieval. An interactive rather than additive effect on the P600 suggests that the same neurocognitive resources are recruited for syntactic and semantic integration. Analyses of individual data showed that participants did not rely on a single cognitive mechanism reflected by either the N400 or the P600 effect but on both, suggesting that the biphasic N400-P600 wave can indeed be considered an index of phrase-structure violation processing in most individuals. The second study investigates the mechanisms underlying phrase-structure building in late second language learners of French. The convergence hypothesis (Green, 2003; Steinhauer, 2014) predicts that L2 learners can achieve native-like online processing with sufficient proficiency. However, considering together the many factors that relate to proficiency, exposure, and age of acquisition has proven challenging. This study therefore models the individual ERP data with a Random Forests approach, which revealed that daily usage (percentage of exposure to French) and proficiency are the most reliable predictors of the ERP responses, with N400 and P600 effects growing larger as these variables increase, partly confirming and extending the convergence hypothesis. This thesis demonstrates that the “syntax-first” model is not viable and should be replaced. A new account is suggested, based on predictive approaches, in which semantic and syntactic information is first used in parallel to facilitate retrieval, and controlled mechanisms are then recruited to analyze sentences at the interface of syntax and semantics. These mechanisms are mediated by inter-individual abilities reflected by language exposure and performance.
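    The Random Forests analysis mentioned above ranks learner background variables by how well they predict individual ERP effects. The snippet below is only a minimal sketch of that kind of analysis, not the authors' actual pipeline: the variable names and the simulated data are assumptions, and scikit-learn's permutation importance is used as one common way to rank predictors.

```python
# Minimal sketch: rank hypothetical learner variables by how well they predict
# a simulated per-participant ERP effect (e.g., P600 amplitude in microvolts).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 60  # hypothetical number of L2 participants

# Hypothetical predictors (names are assumptions, not the study's variables)
X = pd.DataFrame({
    "proficiency": rng.uniform(0, 100, n),        # proficiency score
    "exposure_pct": rng.uniform(0, 100, n),       # % daily exposure to French
    "age_of_acquisition": rng.integers(10, 40, n) # age of L2 acquisition (years)
})
# Simulated outcome: ERP effect size increasing with proficiency and exposure
y = 0.03 * X["proficiency"] + 0.02 * X["exposure_pct"] + rng.normal(0, 1, n)

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=50, random_state=0)

# Print predictors from most to least important
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```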

    The priming of priming: Evidence that the N400 reflects context-dependent post-retrieval word integration in working memory

    Which cognitive processes are reflected by the N400 in ERPs is still controversial. Various recent articles (Lau et al., 2008; Brouwer et al., 2012) have revived the idea that only lexical pre-activation processes (such as automatic spreading activation, ASA) are strongly supported, while post-lexical integrative processes are not. Challenging this view, the present ERP study replicates a behavioral study by McKoon and Ratcliff (1995) who demonstrated that a prime-target pair such as finger − hand shows stronger priming when a majority of other pairs in the list share the analogous semantic relationship (here: part-whole), even at short stimulus onset asynchronies (250 ms). We created lists with four different types of semantic relationship (synonyms, part-whole, category-member, and opposites) and compared priming for pairs in a consistent list with those in an inconsistent list as well as unrelated items. Highly significant N400 reductions were found for both relatedness priming (unrelated vs. inconsistent) and relational priming (inconsistent vs. consistent). These data are taken as strong evidence that N400 priming effects are not exclusively carried by ASA-like mechanisms during lexical retrieval but also include post-lexical integration in working memory. We link the present findings to a neurocomputational model for relational reasoning (Knowlton et al., 2012) and to recent discussions of context-dependent conceptual activations (Yee and Thompson-Schill, 2016).

    Beacon v2 and Beacon networks: A "lingua franca" for federated data discovery in biomedical genomics, and beyond

    Beacon is a basic data discovery protocol issued by the Global Alliance for Genomics and Health (GA4GH). The main goal addressed by version 1 of the Beacon protocol was to test the feasibility of broadly sharing human genomic data by providing simple "yes" or "no" responses to queries about the presence of a given variant in datasets hosted by Beacon providers. The popularity of this concept has fostered the design of version 2, which better serves real-world requirements and addresses the needs of clinical genomics research and healthcare, as assessed by several contributing projects and organizations. In particular, rare disease genetics and cancer research will benefit from new case-level and genomic variant-level requests, the enabling of richer phenotype and clinical queries, and support for fuzzy searches. Beacon is designed as a "lingua franca" to bridge data collections hosted in software solutions with different and rich interfaces. Beacon version 2 works alongside popular standards like Phenopackets, OMOP, or FHIR, allowing implementing consortia to return matches in Beacon responses and provide a handover to their preferred data exchange format. The protocol is being explored by other research domains and is being tested in several international projects.
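    To make the query model concrete, here is a minimal sketch of a Beacon-style variant presence query over HTTP. The base URL is hypothetical, and the endpoint path, parameter names, and response fields follow the general shape of the Beacon v2 specification but should be verified against the official GA4GH documentation for any real deployment.

```python
# Sketch of a "yes"/"no" variant presence query against a hypothetical Beacon v2 endpoint.
import requests

BEACON_BASE = "https://beacon.example.org/api"  # hypothetical deployment URL

def variant_exists(chrom: str, start: int, ref: str, alt: str,
                   assembly: str = "GRCh38") -> bool:
    """Ask a Beacon whether any hosted dataset contains the given variant."""
    params = {
        "referenceName": chrom,      # chromosome, e.g. "1"
        "start": start,              # 0-based start position
        "referenceBases": ref,
        "alternateBases": alt,
        "assemblyId": assembly,
    }
    resp = requests.get(f"{BEACON_BASE}/g_variants", params=params, timeout=30)
    resp.raise_for_status()
    body = resp.json()
    # Beacon v2 boolean responses typically carry an overall "exists" flag
    # in the responseSummary object.
    return bool(body.get("responseSummary", {}).get("exists", False))

if __name__ == "__main__":
    # Illustrative coordinates only
    print(variant_exists("1", 12345, "G", "A"))
```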

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare, i.e., Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account in proof-of-concept stages to facilitate future translation of medical AI towards clinical practice.

    GA4GH: International policies and standards for data sharing across genomic research and healthcare.

    The Global Alliance for Genomics and Health (GA4GH) aims to accelerate biomedical advances by enabling the responsible sharing of clinical and genomic data through both harmonized data aggregation and federated approaches. The decreasing cost of genomic sequencing (along with other genome-wide molecular assays) and increasing evidence of its clinical utility will soon drive the generation of sequence data from tens of millions of humans, with increasing levels of diversity. In this perspective, we present the GA4GH strategies for addressing the major challenges of this data revolution. We describe the GA4GH organization, which is fueled by the development efforts of eight Work Streams and informed by the needs of 24 Driver Projects and other key stakeholders. We present the GA4GH suite of secure, interoperable technical standards and policy frameworks and review the current status of standards, their relevance to key domains of research and clinical care, and future plans of GA4GH. Broad international participation in building, adopting, and deploying GA4GH standards and frameworks will catalyze an unprecedented effort in data sharing that will be critical to advancing genomic medicine and ensuring that all populations can access its benefits.

    Multidimensional signals and analytic flexibility: Estimating degrees of freedom in human speech analyses

    Recent empirical studies have highlighted the large degree of analytic flexibility in data analysis, which can lead to substantially different conclusions based on the same data set. Thus, researchers have expressed concerns that these researcher degrees of freedom might facilitate bias and can lead to claims that do not stand the test of time. Even greater flexibility is to be expected in fields in which the primary data lend themselves to a variety of possible operationalizations. The multidimensional, temporally extended nature of speech constitutes an ideal testing ground for assessing the variability in analytic approaches, which derives not only from aspects of statistical modeling but also from decisions regarding the quantification of the measured behavior. In the present study, we gave the same speech production data set to 46 teams of researchers and asked them to answer the same research question, resulting in substantial variability in reported effect sizes and their interpretation. Using Bayesian meta-analytic tools, we further find little to no evidence that the observed variability can be explained by analysts’ prior beliefs, expertise, or the perceived quality of their analyses. In light of this idiosyncratic variability, we recommend that researchers more transparently share details of their analysis, strengthen the link between theoretical construct and quantitative system, and calibrate their (un)certainty in their conclusions.
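    As a rough illustration of how variability in reported effect sizes across analysis teams can be quantified, the sketch below fits a simple random-effects meta-analysis; this is a frequentist DerSimonian-Laird analogue, not the Bayesian meta-analytic tools used in the study, and the per-team estimates and standard errors are invented for illustration. The between-team variance tau² summarizes how much teams disagree beyond sampling error.

```python
# Simplified random-effects meta-analysis over hypothetical per-team effect sizes.
import numpy as np

effects = np.array([0.10, -0.05, 0.30, 0.12, 0.00, 0.25])  # invented team estimates
se = np.array([0.08, 0.10, 0.09, 0.07, 0.12, 0.11])         # invented standard errors

w = 1.0 / se**2                          # fixed-effect (inverse-variance) weights
mu_fe = np.sum(w * effects) / np.sum(w)  # fixed-effect pooled estimate
Q = np.sum(w * (effects - mu_fe) ** 2)   # Cochran's Q heterogeneity statistic
df = len(effects) - 1
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)            # DerSimonian-Laird between-team variance

w_re = 1.0 / (se**2 + tau2)              # random-effects weights
mu_re = np.sum(w_re * effects) / np.sum(w_re)
print(f"pooled effect = {mu_re:.3f}, between-team tau^2 = {tau2:.3f}")
```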

    Definiteness and Maximality in French Language Acquisition, More Adult-Like Than You Would Expect

    This study considers the mastery of maximality, or domain restrictions, in a group of 47 children acquiring French (aged 4.06–8.09), as well as a control group of young adults. Singular definite (le “the”) and indefinite (un “a/one”), plural (des “some,” les “the”), and explicitly maximal contexts (tous les “all the”) were provided to participants. Animals were arranged in groups of three. Participants were asked to select one or more animals from these groups and give them to the experimenter (similar to Munn et al., 2006). Following Munn et al., we expected children to make maximality errors on the singular definite items. However, we did not observe this pattern. On the contrary, we observed more errors on plurals generally. Further, the developmental patterns show that participants become less maximal in their responses to indefinite plurals (an adult-like pattern, also found in Caponigro et al., 2012), with no important changes on definite types: no strong age effects are observed on maximality patterns. These results point to the importance of cross-linguistic data for the understanding of child language acquisition and error patterns in psycholinguistic theory.