31 research outputs found

    Verbing and nouning in French : toward an ecologically valid approach to sentence processing

    Full text link
    La preĢsente theĢ€se utilise la technique des potentiels eĢvoqueĢs afin dā€™eĢtudier les meĢchanismes neurocognitifs qui sous-tendent la compreĢhension de la phrase. Plus particulieĢ€rement, cette recherche vise aĢ€ clarifier lā€™interaction entre les processus syntaxiques et seĢmantiques chez les locuteurs natifs et les apprenants dā€™une deuxieĢ€me langue (L2). Le modeĢ€le ā€œsyntaxe en premierā€ (Friederici, 2002, 2011) preĢdit que les cateĢgories syntaxiques sont analyseĢes de facĢ§on preĢcoce: ce stade est refleĢteĢ par la composante ELAN (Early anterior negativity, NeĢgativiteĢ anteĢrieure gauche), qui est induite par les erreurs de cateĢgorie syntaxique. De plus, ces erreurs semblent empeĢ‚cher lā€™apparition de la composante N400 qui refleĢ€te les processus lexico-seĢmantiques. Ce pheĢnomeĢ€ne est deĢfini comme le bloquage seĢmantique (Friederici et al., 1999). Cependant, la plupart des eĢtudes qui observent la ELAN utilisent des protocoles expeĢrimentaux probleĢmatiques dans lesquels les diffeĢrences entre les contextes qui preĢceĢ€dent la cible pourraient eĢ‚tre aĢ€ lā€™origine de reĢsultats fallacieux expliquant aĢ€ la fois lā€™apparente ā€œELANā€ et lā€™absence de N400 (Steinhauer & Drury, 2012). La premieĢ€re eĢtude reĢeĢevalue lā€™approche de la ā€œsyntaxe en premierā€ en adoptant un paradigme expeĢriemental novateur en francĢ§ais qui introduit des erreurs de cateĢgorie syntaxique et les anomalies de seĢmantique lexicale. Ce dessin expeĢrimental eĢquilibreĢ controĢ‚le aĢ€ la fois le mot-cible (nom vs. verbe) et le contexte qui le preĢceĢ€de. Les reĢsultats reĢcolteĢs aupreĢ€s de locuteurs natifs du francĢ§ais queĢbeĢcois ont reĢveĢleĢ un complexe N400-P600 en reĢponse aĢ€ toutes les anomalies, en contradiction avec les preĢdictions du modeĢ€le de Friederici. Les effets additifs des manipulations syntaxique et seĢmantique sur la N400 suggeĢ€rent la deĢtection dā€™une incoheĢrence entre la racine du mot qui avait eĢteĢ preĢdite et la cible, dā€™une part, et lā€™activation lexico-seĢmantique, dā€™autre part. Les reĢponses individuelles se sont pas caracteĢriseĢes par une dominance vers la N400 ou la P600: au contraire, une onde biphasique est preĢsente chez la majoriteĢ des participants. Cette activation peut donc eĢ‚tre consideĢreĢe comme un index fiable des meĢcanismes qui sous-tendent le traitement des structures syntagmatiques. La deuxieĢ€me eĢtude se concentre sur les meĢ‚me processus chez les apprenants tardifs du francĢ§ais L2. Lā€™hypotheĢ€se de la convergence (Green, 2003 ; Steinhauer, 2014) preĢdit que les apprenants dā€™une L2, sā€™ils atteignent un niveau avanceĢ, mettent en place des processus de traitement en ligne similaires aux locuteurs natifs. Cependant, il est difficile de consideĢrer en meĢ‚me temps un grand nombre de facteurs qui se rapportent aĢ€ leurs compeĢtences linguistiques, aĢ€ lā€™exposition aĢ€ la L2 et aĢ€ lā€™aĢ‚ge dā€™acquisition. Cette eĢtude continue dā€™explorer les diffeĢrences inter-individuelles en modeĢlisant les donneĢes de potentiels-eĢvoqueĢs avec les ForeĢ‚ts aleĢatoires, qui ont reĢveĢleĢ que le pourcentage dā€™explosition au francĢ§ais ansi que le niveau de langue sont les preĢdicteurs les plus fiables pour expliquer les reĢponses eĢlectrophysiologiques des participants. Plus ceux-ci sont eĢleveĢs, plus lā€™amplitude des composantes N400 et P600 augmente, ce qui confirme en partie les preĢdictions faites par lā€™hypotheĢ€se de la convergence. 
En conclusion, le modeĢ€le de la ā€œsyntaxe en premierā€ nā€™est pas viable et doit eĢ‚tre remplaceĢ. Nous suggeĢrons un nouveau paradigme baseĢ sur une approche preĢdictive, ouĢ€ les informations seĢmantiques et syntaxiques sont activeĢes en paralleĢ€le dans un premier temps, puis inteĢgreĢes via un recrutement de meĢcanismes controĢ‚leĢs. Ces derniers sont modeĢreĢs par les capaciteĢs inter-individuelles refleĢteĢes par lā€™exposition et la performance.The present thesis uses event-related potentials (ERPs) to investigate neurocognitve mechanisms underlying sentence comprehension. In particular, these two experiments seek to clarify the interplay between syntactic and semantic processes in native speakers and second language learners. Friedericiā€™s (2002, 2011) ā€œsyntax-firstā€ model predicts that syntactic categories are analyzed at the earliest stages of speech perception reflected by the ELAN (Early left anterior negativity), reported for syntactic category violations. Further, syntactic category violations seem to prevent the appearance of N400s (linked to lexical-semantic processing), a phenomenon known as ā€œsemantic blockingā€ (Friederici et al., 1999). However, a review article by Steinhauer and Drury (2012) argued that most ELAN studies used flawed designs, where pre-target context differences may have caused ELAN-like artifacts as well as the absence of N400s. The first study reevaluates syntax-first approaches to sentence processing by implementing a novel paradigm in French that included correct sentences, pure syntactic category violations, lexical-semantic anomalies, and combined anomalies. This balanced design systematically controlled for target word (noun vs. verb) and the context immediately preceding it. Group results from native speakers of Quebec French revealed an N400-P600 complex in response to all anomalous conditions, providing strong evidence against the syntax-first and semantic blocking hypotheses. Additive effects of syntactic category and lexical-semantic anomalies on the N400 may reflect a mismatch detection between a predicted word-stem and the actual target, in parallel with lexical-semantic retrieval. An interactive rather than additive effect on the P600 reveals that the same neurocognitive resources are recruited for syntactic and semantic integration. Analyses of individual data showed that participants did not rely on one single cognitive mechanism reflected by either the N400 or the P600 effect but on both, suggesting that the biphasic N400-P600 ERP wave can indeed be considered to be an index of phrase-structure violation processing in most individuals. The second study investigates the underlying mechanisms of phrase-structure building in late second language learners of French. The convergence hypothesis (Green, 2003; Steinhauer, 2014) predicts that second language learners can achieve native-like online- processing with sufficient proficiency. However, considering together different factors that relate to proficiency, exposure, and age of acquisition has proven challenging. This study further explores individual data modeling using a Random Forests approach. It revealed that daily usage and proficiency are the most reliable predictors in explaining the ERP responses, with N400 and P600 effects getting larger as these variables increased, partly confirming and extending the convergence hypothesis. This thesis demonstrates that the ā€œsyntax-firstā€ model is not viable and should be replaced. 
A new account is suggested, based on predictive approaches, where semantic and syntactic information are first used in parallel to facilitate retrieval, and then controlled mechanisms are recruited to analyze sentences at the interface of syntax and semantics. Those mechanisms are mediated by inter-individual abilities reflected by language exposure and performance
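The Random Forests analysis mentioned above can be sketched as follows. This is a minimal, hypothetical illustration rather than the thesis's actual pipeline: the file name, column names (e.g. exposure_percent, proficiency_score) and effect-size measure are assumptions. It fits a forest to per-participant ERP effect sizes and ranks predictors by permutation importance, the kind of output that would single out exposure and proficiency as the most reliable predictors.

```python
# Minimal sketch: model per-participant ERP effect sizes with a Random Forest
# and rank predictors by permutation importance. All names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# One row per L2 participant: predictors plus an N400 effect size (e.g., mean
# amplitude difference between anomalous and correct conditions, in microvolts).
df = pd.read_csv("participants.csv")  # hypothetical file
predictors = ["exposure_percent", "proficiency_score", "age_of_acquisition", "years_of_use"]
X, y = df[predictors], df["n400_effect"]

forest = RandomForestRegressor(n_estimators=500, random_state=0)
forest.fit(X, y)

# Permutation importance estimates how much shuffling each predictor degrades the fit.
imp = permutation_importance(forest, X, y, n_repeats=100, random_state=0)
for name, mean_imp in sorted(zip(predictors, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {mean_imp:.3f}")
```

Predictors with the highest permutation importance are then interpreted as the most reliable correlates of the ERP response.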

    The priming of priming : Evidence that the N400 reflects context-dependent post-retrieval word integration in working memory

    Full text link
    Which cognitive processes are reflected by the N400 in ERPs is still controversial. Various recent articles (Lau et al., 2008; Brouwer et al., 2012) have revived the idea that only lexical pre-activation processes (such as automatic spreading activation, ASA) are strongly supported, while post-lexical integrative processes are not. Challenging this view, the present ERP study replicates a behavioral study by McKoon and Ratcliff (1995), who demonstrated that a prime-target pair such as finger − hand shows stronger priming when a majority of other pairs in the list share the analogous semantic relationship (here: part-whole), even at short stimulus onset asynchronies (250 ms). We created lists with four different types of semantic relationship (synonyms, part-whole, category-member, and opposites) and compared priming for pairs in a consistent list with those in an inconsistent list as well as unrelated items. Highly significant N400 reductions were found for both relatedness priming (unrelated vs. inconsistent) and relational priming (inconsistent vs. consistent). These data are taken as strong evidence that N400 priming effects are not exclusively carried by ASA-like mechanisms during lexical retrieval but also include post-lexical integration in working memory. We link the present findings to a neurocomputational model for relational reasoning (Knowlton et al., 2012) and to recent discussions of context-dependent conceptual activations (Yee and Thompson-Schill, 2016).
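For illustration, the two priming contrasts reported above (relatedness priming: unrelated vs. inconsistent; relational priming: inconsistent vs. consistent) amount to differences in mean N400-window amplitude between target conditions. The sketch below uses randomly generated, hypothetical single-channel epochs and an assumed 300-500 ms window; it shows only the form of the computation, not the study's actual data or analysis code.

```python
# Minimal sketch of the condition contrasts behind "relatedness" and "relational"
# priming: mean N400-window amplitude for three target conditions at one channel.
# All data and names here are hypothetical placeholders.
import numpy as np

sfreq = 250.0                              # sampling rate in Hz
times = np.arange(-0.2, 0.8, 1 / sfreq)    # epoch from -200 to 800 ms
n400_window = (times >= 0.3) & (times <= 0.5)

def mean_n400(epochs):
    """epochs: array of shape (n_trials, n_times), single channel, in microvolts."""
    return epochs[:, n400_window].mean()

# Hypothetical single-channel epochs per condition (e.g., a centro-parietal site).
unrelated    = np.random.randn(40, times.size) - 4.0   # largest (most negative) N400
inconsistent = np.random.randn(40, times.size) - 2.5
consistent   = np.random.randn(40, times.size) - 1.0   # most reduced N400

relatedness_priming = mean_n400(inconsistent) - mean_n400(unrelated)
relational_priming  = mean_n400(consistent) - mean_n400(inconsistent)
print(f"relatedness priming effect: {relatedness_priming:.2f} microvolts")
print(f"relational priming effect:  {relational_priming:.2f} microvolts")
```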

    Beacon v2 and Beacon networks: A "lingua franca" for federated data discovery in biomedical genomics, and beyond

    Full text link
    Beacon is a basic data discovery protocol issued by the Global Alliance for Genomics and Health (GA4GH). The main goal of version 1 of the Beacon protocol was to test the feasibility of broadly sharing human genomic data by providing simple "yes" or "no" responses to queries about the presence of a given variant in datasets hosted by Beacon providers. The popularity of this concept has fostered the design of version 2, which better serves real-world requirements and addresses the needs of clinical genomics research and healthcare, as assessed by several contributing projects and organizations. In particular, rare disease genetics and cancer research will benefit from new case-level and genomic variant-level requests, richer phenotype and clinical queries, and support for fuzzy searches. Beacon is designed as a "lingua franca" to bridge data collections hosted in software solutions with different and rich interfaces. Beacon version 2 works alongside popular standards like Phenopackets, OMOP, or FHIR, allowing implementing consortia to return matches in Beacon responses and provide a handover to their preferred data exchange format. The protocol is being explored by other research domains and is being tested in several international projects.
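To make the "yes"/"no" discovery model concrete, here is a minimal sketch of a Beacon-style variant query in Python. The base URL is a placeholder, and the exact endpoint path, parameter names, and response fields vary across implementations; they are shown here only to illustrate the general shape of a boolean Beacon query and should be checked against the GA4GH Beacon v2 specification.

```python
# Minimal sketch of a Beacon-style "is this variant present?" query.
# The URL, path, query parameters, and response fields are illustrative
# assumptions; consult the GA4GH Beacon v2 specification for the exact API.
import requests

BEACON_URL = "https://beacon.example.org/api"   # hypothetical Beacon endpoint

params = {
    "referenceName": "17",     # chromosome
    "start": 43092919,         # position (illustrative)
    "referenceBases": "G",
    "alternateBases": "A",
    "assemblyId": "GRCh38",
}

resp = requests.get(f"{BEACON_URL}/g_variants", params=params, timeout=30)
resp.raise_for_status()
body = resp.json()

# A boolean Beacon response reduces to "does any dataset contain this variant?"
exists = body.get("responseSummary", {}).get("exists", False)
print("variant found in at least one dataset" if exists else "variant not found")
```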

    New implementation of data standards for AI research in precision oncology. Experience from EuCanImage

    Get PDF
    An unprecedented amount of personal health data, with the potential to revolutionise precision medicine, is generated at healthcare institutions worldwide. The exploitation of such data using artificial intelligence relies on the ability to combine heterogeneous, multicentric, multimodal and multiparametric data, as well as thoughtful representation of knowledge and data availability. Despite these possibilities, significant methodological challenges and ethico-legal constraints still impede the real-world implementation of data models. EuCanImage is an international consortium that aims to develop AI algorithms for precision medicine in oncology and to enable secondary use of the data based on the necessary ethical approvals. The use of well-defined clinical data standards to allow interoperability was a central element of the initiative. The consortium focuses on three different cancer types and addresses seven unmet clinical needs. This article synthesises our experience and procedures for healthcare data interoperability and standardisation.
    Competing Interest Statement: The authors have declared no competing interest. Funding Statement: This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 952103. Data availability: This study describes a new process to harmonize and standardize clinical data; the data will be available upon request to the authors.

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Get PDF
    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework is based on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline that provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at proof-of-concept stages to facilitate future translation of medical AI towards clinical practice.
