28 research outputs found

    Detecting System Errors in Virtual Reality Using EEG Through Error-Related Potentials

    When persons interact with the environment and experience or witness an error (e.g. an unexpected event), a specific brain pattern, known as the error-related potential (ErrP), can be observed in the electroencephalographic (EEG) signals. Virtual Reality (VR) technology enables users to interact with computer-generated simulated environments and provides multi-modal sensory feedback. Using VR systems can, however, be error-prone. In this paper, we investigate the presence of ErrPs when Virtual Reality users face 3 types of visualization errors: tracking errors (Te) when manipulating virtual objects, feedback errors (Fe), and background anomalies (Be). We conducted an experiment in which 15 participants were exposed to the 3 types of errors while performing a center-out pick-and-place task in virtual reality. The results showed that tracking errors generate error-related potentials; the other types of errors did not generate such discernible patterns. In addition, we show that it is possible to detect the ErrPs generated by tracking losses in single trials, with an accuracy of 85%. This constitutes a first step towards the automatic detection of error-related potentials in VR applications, paving the way to the design of adaptive and self-corrective VR/AR applications that exploit information directly from the user's brain.
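
    The abstract does not detail the classification pipeline behind the 85% single-trial accuracy. As a rough illustration only, a common ErrP pipeline band-pass filters the epochs, downsamples them, and feeds the flattened channel-by-time features to a shrinkage LDA classifier. The sketch below runs on synthetic data; the sampling rate, epoch length, and classifier choice are assumptions, not the paper's reported settings.

        # Minimal single-trial ErrP detection sketch (hypothetical pipeline;
        # the paper does not publish its exact classifier or features).
        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        fs = 256                                       # sampling rate (Hz), assumed
        n_trials, n_channels, n_samples = 200, 32, fs  # 1 s epochs after event onset
        X = rng.standard_normal((n_trials, n_channels, n_samples))  # stand-in EEG
        y = rng.integers(0, 2, n_trials)               # 1 = tracking error, 0 = correct

        # ErrPs are slow fronto-central deflections: band-pass 1-10 Hz.
        b, a = butter(4, [1 / (fs / 2), 10 / (fs / 2)], btype="band")
        X = filtfilt(b, a, X, axis=-1)

        # Downsample to ~32 Hz and flatten channels x time into a feature vector.
        feats = X[:, :, ::8].reshape(n_trials, -1)

        # Shrinkage LDA is a common choice for high-dimensional ERP features.
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        print(cross_val_score(clf, feats, y, cv=5).mean())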

    Decoding auditory and tactile attention for use in an EEG-based brain-computer interface

    Brain-computer interface (BCI) systems offer a non-verbal and covert way for humans to interact with a machine. They are designed to interpret a user's brain state, which can be translated into action or used for other communication purposes. This study investigates the feasibility of developing a hands- and eyes-free BCI system based on auditory and tactile attention. Users were presented with multiple simultaneous streams of auditory or tactile stimuli, and were directed to detect a pattern in one particular stream. We applied a linear classifier to decode the stream-tracking attention from the EEG signal. The results showed that the proposed BCI system could capture attention from most study participants using multisensory inputs, and showed potential for transfer learning across multiple sessions.
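
    The decoder is described only as a linear classifier. One plausible way to turn per-stimulus classifier outputs into a stream-level decision, sketched here on synthetic data with illustrative names rather than the study's published pipeline, is to average the "attended" probability over each stream's stimuli and pick the highest-scoring stream.

        # Hypothetical stream-selection step on top of a linear classifier;
        # shapes and names are assumptions for illustration.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n_epochs, n_feats, n_streams = 300, 64, 3
        X = rng.standard_normal((n_epochs, n_feats))      # one feature vector per stimulus
        stream_id = rng.integers(0, n_streams, n_epochs)  # stream each stimulus belongs to
        attended = rng.integers(0, 2, n_epochs)           # 1 if stimulus was in attended stream

        clf = LogisticRegression(max_iter=1000).fit(X, attended)

        # At test time the attended stream is the one whose stimuli get the
        # highest average "attended" probability (in practice, fit on training
        # sessions and score held-out data).
        scores = clf.predict_proba(X)[:, 1]
        stream_scores = [scores[stream_id == s].mean() for s in range(n_streams)]
        print("decoded attended stream:", int(np.argmax(stream_scores)))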

    A user-centred approach to unlock the potential of non-invasive BCIs: an unprecedented international translational effort

    Non-invasive Mental Task-based Brain-Computer Interfaces (MT-BCIs) enable their users to interact with the environment through their brain activity alone (measured using electroencephalography, for example), by performing mental tasks such as mental calculation or motor imagery. Current developments in technology hint at a wide range of possible applications, in both the clinical and non-clinical domains. MT-BCIs can be used to control (neuro)prostheses or interact with video games, among many other applications. They can also be used to restore cognitive and motor abilities for stroke rehabilitation, or even to improve athletic performance.

    Nonetheless, the expected transfer of MT-BCIs from the lab to the marketplace will be greatly impeded if all resources are allocated to technological aspects alone. We cannot neglect the human end-user who sits at the centre of the loop. Indeed, self-regulating one's brain activity through mental tasks is an acquired skill that requires appropriate training. Yet several studies have shown that current training procedures do not enable MT-BCI users to reach adequate levels of performance. Therefore, one significant challenge for the community is that of improving end-user training.

    To do so, another fundamental challenge must be taken into account: we need to understand the processes that underlie MT-BCI performance and user learning. It is currently estimated that 10 to 30% of people cannot control an MT-BCI. These people are often referred to as "BCI inefficient". But the concept of "BCI inefficiency" is debated. Does it really exist? Or are low performances due to insufficient training, training procedures that are unsuited to these users, or BCI data processing that is not sensitive enough? The currently available literature does not allow for a definitive answer to these questions, as most published studies include a limited number of participants (i.e., 10 to 20) and/or training sessions (i.e., 1 or 2). We still have very little insight into what the MT-BCI learning curve looks like, and into which factors (both user-related and machine-related) influence it. Finding answers will require a large number of experiments, involving a large number of participants taking part in multiple training sessions. It is not feasible for one research lab or even a small consortium to undertake such experiments alone. Therefore, an unprecedented coordinated effort from the research community is necessary.

    We are convinced that combining forces will allow us to characterise MT-BCI user learning in detail, and thereby take a mandatory step toward transferring BCIs "out of the lab". This is why we gathered an international, interdisciplinary consortium of BCI researchers from more than 20 labs across Europe and Japan, including pioneers in the field. This collaboration will enable us to collect considerable amounts of data (at least 100 participants for 20 training sessions each) and establish a large open database. Based on this precious resource, we can then conduct sound analyses to answer the questions above. Using these data, our consortium can offer solutions for improving MT-BCI training procedures using innovative approaches (e.g., personalisation using intelligent tutoring systems) and technologies (e.g., virtual reality).

    The CHIST-ERA programme represents a unique opportunity to conduct this ambitious project, which will foster innovation in our field and strengthen our community.

    Impact of opioid-free analgesia on pain severity and patient satisfaction after discharge from surgery: multispecialty, prospective cohort study in 25 countries

    Background: Balancing opioid stewardship and the need for adequate analgesia following discharge after surgery is challenging. This study aimed to compare the outcomes for patients discharged with opioid versus opioid-free analgesia after common surgical procedures.

    Methods: This international, multicentre, prospective cohort study collected data from patients undergoing common acute and elective general surgical, urological, gynaecological, and orthopaedic procedures. The primary outcomes were patient-reported time in severe pain, measured on a numerical analogue scale from 0 to 100%, and patient-reported satisfaction with pain relief during the first week following discharge. Data were collected by in-hospital chart review and patient telephone interview 1 week after discharge.

    Results: The study recruited 4273 patients from 144 centres in 25 countries; 1311 patients (30.7%) were prescribed opioid analgesia at discharge. Patients reported being in severe pain for 10 (i.q.r. 1-30)% of the first week after discharge and rated satisfaction with analgesia as 90 (i.q.r. 80-100) of 100. After adjustment for confounders, opioid analgesia on discharge was independently associated with increased pain severity (risk ratio 1.52, 95% c.i. 1.31 to 1.76; P < 0.001) and re-presentation to healthcare providers owing to side-effects of medication (OR 2.38, 95% c.i. 1.36 to 4.17; P = 0.004), but not with satisfaction with analgesia (beta coefficient 0.92, 95% c.i. -1.52 to 3.36; P = 0.468), compared with opioid-free analgesia. Although opioid prescribing varied greatly between high-income and low- and middle-income countries, patient-reported outcomes did not.

    Conclusion: Opioid analgesia prescription on surgical discharge is associated with a higher risk of re-presentation owing to side-effects of medication and increased patient-reported pain, but not with changes in patient-reported satisfaction. Opioid-free discharge analgesia should be adopted routinely.
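
    For readers unfamiliar with how an adjusted risk ratio (as opposed to an odds ratio) is estimated for a binary outcome, a standard approach is modified Poisson regression with robust standard errors. The sketch below uses synthetic data and illustrative variable names; it is not the study's analysis code.

        # Illustrative adjusted risk-ratio estimation (modified Poisson
        # regression with robust SEs); synthetic data, not the study cohort.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 2000
        df = pd.DataFrame({
            "severe_pain": rng.integers(0, 2, n),   # binary outcome (illustrative)
            "opioid_rx": rng.integers(0, 2, n),     # exposure: opioid at discharge
            "age": rng.normal(50, 15, n),           # example confounders
            "major_surgery": rng.integers(0, 2, n),
        })

        model = smf.glm(
            "severe_pain ~ opioid_rx + age + major_surgery",
            data=df,
            family=sm.families.Poisson(),
        ).fit(cov_type="HC1")                        # robust (sandwich) SEs

        print(np.exp(model.params["opioid_rx"]))     # adjusted risk ratio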

    Laparoscopy in management of appendicitis in high-, middle-, and low-income countries: a multicenter, prospective, cohort study.

    BACKGROUND: Appendicitis is the most common abdominal surgical emergency worldwide. Differences between high- and low-income settings in the availability of laparoscopic appendectomy, alternative management choices, and outcomes are poorly described. The aim was to identify variation in surgical management and outcomes of appendicitis within low-, middle-, and high-Human Development Index (HDI) countries worldwide.

    METHODS: This is a multicenter, international prospective cohort study. Consecutive sampling of patients undergoing emergency appendectomy over 6 months was conducted. Follow-up lasted 30 days.

    RESULTS: 4546 patients from 52 countries underwent appendectomy (2499 high-, 1540 middle-, and 507 low-HDI groups). Surgical site infection (SSI) rates were higher in low-HDI (OR 2.57, 95% CI 1.33-4.99, p = 0.005) but not middle-HDI countries (OR 1.38, 95% CI 0.76-2.52, p = 0.291), compared with high-HDI countries after adjustment. A laparoscopic approach was common in high-HDI countries (1693/2499, 67.7%), but infrequent in low-HDI (41/507, 8.1%) and middle-HDI (132/1540, 8.6%) groups. After accounting for case-mix, laparoscopy was still associated with fewer overall complications (OR 0.55, 95% CI 0.42-0.71, p < 0.001) and SSIs (OR 0.22, 95% CI 0.14-0.33, p < 0.001). In propensity-score matched groups within low-/middle-HDI countries, laparoscopy was still associated with fewer overall complications (OR 0.23, 95% CI 0.11-0.44) and SSI (OR 0.21, 95% CI 0.09-0.45).

    CONCLUSION: A laparoscopic approach is associated with better outcomes, and availability appears to differ by country HDI. Despite the profound clinical, operational, and financial barriers to its widespread introduction, laparoscopy could significantly improve outcomes for patients in low-resource environments.

    TRIAL REGISTRATION: NCT02179112.
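
    The propensity-score matching referred to above can be illustrated with a minimal 1:1 nearest-neighbour match on a logistic-regression propensity score. The data and matching specification below are assumptions for illustration, not the study's protocol.

        # Sketch of 1:1 propensity-score matching (laparoscopic vs open)
        # on synthetic data; matching is with replacement for simplicity.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(3)
        n = 1000
        X = rng.standard_normal((n, 4))              # case-mix covariates
        treated = rng.integers(0, 2, n)              # 1 = laparoscopy

        # Step 1: propensity score = P(laparoscopy | covariates).
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

        # Step 2: match each laparoscopic case to the open case with the
        # nearest propensity score.
        t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
        nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
        _, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
        matched_controls = c_idx[match.ravel()]
        print(len(t_idx), "matched pairs")           # outcomes compared within pairs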

    Pooled analysis of WHO Surgical Safety Checklist use and mortality after emergency laparotomy

    Background The World Health Organization (WHO) Surgical Safety Checklist has fostered safe practice for 10 years, yet its place in emergency surgery has not been assessed on a global scale. The aim of this study was to evaluate reported checklist use in emergency settings and examine the relationship with perioperative mortality in patients who had emergency laparotomy.

    Methods In two multinational cohort studies, adults undergoing emergency laparotomy were compared with those having elective gastrointestinal surgery. Relationships between reported checklist use and mortality were determined using multivariable logistic regression and bootstrapped simulation.

    Results Of 12 296 patients included from 76 countries, 4843 underwent emergency laparotomy. After adjusting for patient and disease factors, checklist use before emergency laparotomy was more common in countries with a high Human Development Index (HDI) (2455 of 2741, 89.6 per cent) compared with that in countries with a middle (753 of 1242, 60.6 per cent; odds ratio (OR) 0.17, 95 per cent c.i. 0.14 to 0.21, P < 0.001) or low (363 of 860, 42.2 per cent; OR 0.08, 0.07 to 0.10, P < 0.001) HDI. Checklist use was less common in elective surgery than for emergency laparotomy in high-HDI countries (risk difference -9.4 (95 per cent c.i. -11.9 to -6.9) per cent; P < 0.001), but the relationship was reversed in low-HDI countries (+12.1 (+7.0 to +17.3) per cent; P < 0.001). In multivariable models, checklist use was associated with a lower 30-day perioperative mortality (OR 0.60, 0.50 to 0.73; P < 0.001). The greatest absolute benefit was seen for emergency surgery in low- and middle-HDI countries.

    Conclusion Checklist use in emergency laparotomy was associated with a significantly lower perioperative mortality rate. Checklist use in low-HDI countries was half that in high-HDI countries.
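
    As an illustration of the "multivariable logistic regression and bootstrapped simulation" approach named above, the sketch below computes an adjusted odds ratio for checklist use with a percentile bootstrap confidence interval. The data and variable names are invented for the example.

        # Adjusted OR with a percentile bootstrap CI; synthetic data only.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 3000
        df = pd.DataFrame({
            "death_30d": rng.integers(0, 2, n),   # outcome (illustrative)
            "checklist": rng.integers(0, 2, n),   # exposure: checklist used
            "asa_grade": rng.integers(1, 5, n),   # example adjustment factor
        })

        def adjusted_or(d):
            # Multivariable logistic regression; OR = exp(coefficient).
            fit = smf.logit("death_30d ~ checklist + asa_grade", data=d).fit(disp=0)
            return np.exp(fit.params["checklist"])

        # Resample patients with replacement to get a percentile CI.
        boots = [adjusted_or(df.sample(n, replace=True)) for _ in range(200)]
        print(adjusted_or(df), np.percentile(boots, [2.5, 97.5]))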

    The evolving SARS-CoV-2 epidemic in Africa: Insights from rapidly expanding genomic surveillance

    INTRODUCTION Investment in Africa over the past year with regard to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) sequencing has led to a massive increase in the number of sequences, which, to date, exceeds 100,000 sequences generated to track the pandemic on the continent. These sequences have profoundly affected how public health officials in Africa have navigated the COVID-19 pandemic.

    RATIONALE We demonstrate how the first 100,000 SARS-CoV-2 sequences from Africa have helped monitor the epidemic on the continent, how genomic surveillance expanded over the course of the pandemic, and how we adapted our sequencing methods to deal with an evolving virus. Finally, we also examine how viral lineages have spread across the continent in a phylogeographic framework to gain insights into the underlying temporal and spatial transmission dynamics for several variants of concern (VOCs).

    RESULTS Our results indicate that the number of countries in Africa that can sequence the virus within their own borders is growing and that this is coupled with a shorter turnaround time from the time of sampling to sequence submission. Ongoing evolution necessitated the continual updating of primer sets, and, as a result, eight primer sets were designed in tandem with viral evolution and used to ensure effective sequencing of the virus. The pandemic unfolded through multiple waves of infection that were each driven by distinct genetic lineages, with B.1-like ancestral strains associated with the first pandemic wave of infections in 2020. Successive waves on the continent were fueled by different VOCs, with Alpha and Beta cocirculating in distinct spatial patterns during the second wave and Delta and Omicron affecting the whole continent during the third and fourth waves, respectively. Phylogeographic reconstruction points toward distinct differences in viral importation and exportation patterns associated with the Alpha, Beta, Delta, and Omicron variants and subvariants, when considering both Africa versus the rest of the world and viral dissemination within the continent. Our epidemiological and phylogenetic inferences therefore underscore the heterogeneous nature of the pandemic on the continent and highlight key insights and challenges, for instance, recognizing the limitations of low testing proportions. We also highlight the early warning capacity that genomic surveillance in Africa has had for the rest of the world with the detection of new lineages and variants, the most recent being the characterization of various Omicron subvariants.

    CONCLUSION Sustained investment for diagnostics and genomic surveillance in Africa is needed as the virus continues to evolve. This is important not only to help combat SARS-CoV-2 on the continent but also because it can be used as a platform to help address the many emerging and reemerging infectious disease threats in Africa. In particular, capacity building for local sequencing within countries or within the continent should be prioritized because this is generally associated with shorter turnaround times, providing the most benefit to local public health authorities tasked with pandemic response and mitigation and allowing for the fastest reaction to localized outbreaks. These investments are crucial for pandemic preparedness and response and will serve the health of the continent well into the 21st century.

    Global variation in anastomosis and end colostomy formation following left-sided colorectal resection

    Background End colostomy rates following colorectal resection vary across institutions in high-income settings, being influenced by patient, disease, surgeon and system factors. This study aimed to assess global variation in end colostomy rates after left-sided colorectal resection.

    Methods This study comprised an analysis of GlobalSurg-1 and -2 international, prospective, observational cohort studies (2014, 2016), including consecutive adult patients undergoing elective or emergency left-sided colorectal resection within discrete 2-week windows. Countries were grouped into high-, middle- and low-income tertiles according to the United Nations Human Development Index (HDI). Factors associated with colostomy formation versus primary anastomosis were explored using a multilevel, multivariable logistic regression model.

    Results In total, 1635 patients from 242 hospitals in 57 countries undergoing left-sided colorectal resection were included: 113 (6·9 per cent) from low-HDI, 254 (15·5 per cent) from middle-HDI and 1268 (77·6 per cent) from high-HDI countries. There was a higher proportion of patients with perforated disease (57·5, 40·9 and 35·4 per cent; P < 0·001) and subsequent use of end colostomy (52·2, 24·8 and 18·9 per cent; P < 0·001) in low- compared with middle- and high-HDI settings. The association with colostomy use in low-HDI settings persisted (odds ratio (OR) 3·20, 95 per cent c.i. 1·35 to 7·57; P = 0·008) after risk adjustment for malignant disease (OR 2·34, 1·65 to 3·32; P < 0·001), emergency surgery (OR 4·08, 2·73 to 6·10; P < 0·001), time to operation at least 48 h (OR 1·99, 1·28 to 3·09; P = 0·002) and disease perforation (OR 4·00, 2·81 to 5·69; P < 0·001).

    Conclusion Global differences existed in the proportion of patients receiving end stomas after left-sided colorectal resection based on income, which went beyond case mix alone.

    Design and Study of Interactive Systems Based on Brain-Computer Interfaces and Augmented Reality

    Brain-Computer Interfaces (BCIs) enable interaction directly from brain activity. Augmented Reality (AR), on the other hand, enables the integration of virtual elements into the real world. In this thesis, we aimed to design interactive systems associating BCIs and AR, to offer new means of hands-free interaction with real and virtual elements. In the first part, we studied the possibility of extracting different BCI paradigms in AR. We showed that it is possible to use Steady-State Visual Evoked Potentials (SSVEP) in AR. We then studied the possibility of extracting Error-Related Potentials (ErrPs) in AR, showing that ErrPs were elicited when users faced types of errors that often occur in AR. In the second part, we deepened our research on the use of SSVEP for direct interaction in AR. We proposed HCCA, a new algorithm for self-paced detection of SSVEP responses. We then studied the design of AR interfaces for the development of intuitive and efficient interactive systems. Lastly, we illustrated our results through the development of a smart-home system combining SSVEP and AR, which integrates with a commercially available smart-home platform.
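
    HCCA itself is the thesis's contribution and is not specified in this abstract; the classic baseline it builds on is CCA against sine/cosine reference signals at each candidate flicker frequency, sketched below on a synthetic 12 Hz SSVEP. A self-paced variant would additionally threshold the winning correlation to decide whether the user is attending any target at all.

        # Standard CCA baseline for SSVEP frequency detection; HCCA is not
        # reproduced here -- this is the classic method it presumably extends.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        fs, dur = 256, 2.0                           # sampling rate, window (assumed)
        t = np.arange(int(fs * dur)) / fs
        freqs = [10.0, 12.0, 15.0]                   # candidate flicker frequencies

        def refs(f, n_harm=2):
            # Sine/cosine reference set for frequency f and its harmonics.
            return np.column_stack(
                [fn(2 * np.pi * h * f * t) for h in range(1, n_harm + 1)
                 for fn in (np.sin, np.cos)])

        # Synthetic 8-channel EEG with a 12 Hz SSVEP response plus noise.
        rng = np.random.default_rng(5)
        eeg = np.sin(2 * np.pi * 12 * t)[:, None] + 0.5 * rng.standard_normal((len(t), 8))

        corrs = []
        for f in freqs:
            cca = CCA(n_components=1).fit(eeg, refs(f))
            u, v = cca.transform(eeg, refs(f))
            corrs.append(np.corrcoef(u.ravel(), v.ravel())[0, 1])
        print("detected:", freqs[int(np.argmax(corrs))])  # should report 12.0 Hz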

    A pilot study for a more Immersive Virtual Reality Brain-Computer Interface

    We present a pilot study for a more Immersive Virtual Reality (IVR) Brain-Computer Interface (BCI). The originality of our approach lies in recording, thanks to physical VR trackers, the real movements made by users when they are asked to make feet movements, and in reproducing them precisely, through a virtual agent, when users are asked to imagine mentally reproducing the same movements. We show the technical feasibility of this approach and explain how BCIs based on motor imagery can benefit from these advances in order to better involve the user in the interaction loop with the computer system.