
    MEG Can Map Short and Long-Term Changes in Brain Activity following Deep Brain Stimulation for Chronic Pain

    Deep brain stimulation (DBS) has been shown to be clinically effective for some forms of treatment-resistant chronic pain, but its precise mechanisms of action are not well understood. Here, we present an analysis of magnetoencephalography (MEG) data from a patient with whole-body chronic pain, in order to investigate changes in neural activity induced by DBS for pain relief over both the short and long term. This patient is one of the few cases treated using DBS of the anterior cingulate cortex (ACC). We demonstrate that a novel method, null-beamforming, can be used to accurately localise brain activity despite the artefacts caused by the presence of DBS electrodes and stimulus pulses. The accuracy of our source localisation was verified by correlating the predicted DBS electrode positions with their actual positions. Using this beamforming method, we examined changes in whole-brain activity, comparing pain relief achieved with stimulation (DBS ON) against pain experienced with no stimulation (DBS OFF). We found significant changes in activity in pain-related regions including the pre-supplementary motor area, the brainstem (periaqueductal gray) and dissociable parts of the caudal and rostral ACC. In particular, when the patient reported experiencing pain, there was increased activity in different regions of the ACC compared to when he experienced pain relief. We were also able to demonstrate long-term functional brain changes as a result of continuous DBS over one year, leading to specific changes in activity in dissociable regions of the caudal and rostral ACC. These results broaden our understanding of the underlying mechanisms of DBS in the human brain.
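    The null-beamforming idea described above, keeping unit gain at a target source while forcing zero gain at known interference locations such as the DBS electrodes, can be sketched as a linearly constrained minimum-variance (LCMV) beamformer with extra null constraints. This is an illustrative reconstruction, not the authors' implementation; the covariance and lead-field vectors below are placeholders.

```python
import numpy as np

def null_beamformer_weights(C, l_target, l_nulls):
    """LCMV weights with unit gain at the target location and zero gain
    (nulls) at known interference locations (e.g. DBS electrodes).

    C: (n_ch, n_ch) sensor covariance matrix.
    l_target: (n_ch,) lead field of the target source.
    l_nulls: (n_ch, k) lead fields of the k interference sources.
    """
    L = np.column_stack([l_target, l_nulls])   # all constrained lead fields
    f = np.zeros(L.shape[1])
    f[0] = 1.0                                 # gain 1 at target, 0 at nulls
    Ci = np.linalg.pinv(C)
    # Minimum-variance solution satisfying L.T @ w = f
    return Ci @ L @ np.linalg.pinv(L.T @ Ci @ L) @ f

# Synthetic check: weights pass the target and suppress the nulls.
rng = np.random.default_rng(0)
C = np.eye(6) + 0.1 * np.ones((6, 6))          # placeholder covariance
l_target = rng.normal(size=6)                  # placeholder lead fields
l_nulls = rng.normal(size=(6, 2))
w = null_beamformer_weights(C, l_target, l_nulls)
```

    With full-rank constraints, `w @ l_target` is exactly 1 and `w @ l_nulls` is exactly 0, which is the sense in which the interference outside the brain is removed.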

    A Strategy for Building the WFO Plant List

    All scientists face the challenge of explaining what they do to a friend or relative. Fortunately, it is easy for us to explain our work: we are building a list of all known plants. Unfortunately, this elicits the awkward question: hasn't that been done already? Everyone knows that Linnaeus started the naming convention in the 18th century; surely we would have created a list of species in the intervening 270 years. Alas, there is no single, global species list. In 2022, when the team at the Royal Botanic Garden Edinburgh (RBGE) took on the coordination of the World Flora Online (WFO) Plant List, we considered what we could do differently to save our successors from this awkward dinner-party question.

    The WFO Plant List's primary purpose is as a structure for the WFO information portal. The portal contains a large amount of information; the list is a simple database of names and their taxonomic statuses. It currently contains 1.52 million names and 440,000 accepted taxa. Because the list has a global scope and includes all vascular plants and bryophytes, it has great potential to be of use outside the WFO portal. Functions might include: a common vocabulary for ecological monitoring networks; a drop-down list in a garden management system; a destination for taxonomic output beyond a monographic paper; or a bridge from historical, observational studies to contemporary molecular phylogenetic research. In short, the WFO Plant List can be a single, shared lookup table for plant taxa.

    There are four well-known elements of project management: resources, timescale, quality and scope. We have limited control over the first three. For resources, our institutes have committed a part of our salaried time to facilitate the project, but the vast majority of the work has to be done through collaboration with others. We can only inspire people to contribute, and this must be done through the principles of FAIR (Findable, Accessible, Interoperable and Reusable) data discussed below. There is no natural timescale for our work; we have therefore established a somewhat artificial drum beat of twice-yearly data releases, which enables us to prioritise smaller batches of work. In a list like this, quality is synonymous with accuracy and is non-negotiable: if there is an error in our list, it must be corrected. The only element we have full control over is scope. We can choose what is included and what is not, and we do this through the design of our data model. The simpler we can make the model, the more complete we can make the list and the easier it will be to improve quality.

    We only include names that appear effectively/validly published under the International Code of Nomenclature for Algae, Fungi and Plants (ICNAFP). This is an explicit set of rules we can use to enforce data integrity. Unlike the Catalogue of Life, the Global Biodiversity Information Facility (GBIF) or the Global Names Architecture (GNA), we do not have to model names governed by other nomenclatural codes and can focus our resources. From the start, we have separated nomenclature from taxonomy. This gives us a clear set of nomenclatural facts, supported by appropriate references, that will not change over time, alongside taxonomic opinion that is linked to relevant supporting literature. We support only a single consensus taxonomy, but by keeping snapshots of the taxonomy every six months we allow changes in the science to be tracked through time. The separation of nomenclature from taxonomy within our identifier schema allows third parties to maintain their own classifications whilst mapping to our classification through taxonomically neutral name identifiers.

    If we had been working a decade or more ago, we would have created tables for ancillary data such as literature, specimens and people. Today we can take advantage of the many data sources available via web links and store only data on nomenclatural acts and taxonomic placement. All other data are represented by a generic referencing mechanism: a reference consists of a URL (including digital object identifiers (DOIs) in URL form) and a citation string. This approach dramatically increases our ability to focus on taxonomic coverage and leaves specialist systems such as the International Plant Names Index (IPNI), the Biodiversity Heritage Library (BHL) and Wikidata to handle other classes of data.

    More important than the way we model the data is how it is produced and consumed by others. As a node in a graph of linked biodiversity information, our success is measured by the number of links we have to other nodes and people. The data are being produced and maintained by a growing community organised into Taxonomic Expert Networks (TENs); there are about 300 individual scientists in 44 approved TENs. These TENs can contribute to the live dataset via submission of bulk data or by using a dedicated editing platform called Rhakhis. Care is taken to give attribution for contributions at the finest level of granularity possible using Open Researcher and Contributor IDs (ORCIDs). We strive to make the data available in bulk and at the level of each name under FAIR principles. All data are released under a Creative Commons CC0 licence and made available through the WFO portal, a dedicated API, ChecklistBank and Zenodo on a six-monthly release cycle. The dataset has a citable DOI, as does each version. All names have a stable URI, and each version of each taxon has a stable URI. There is a name-to-ID matching service available through the API and as a web interface, and there are two R packages (WorldFlora and wfor) to facilitate analysis workflows.
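    The separation of immutable nomenclatural facts from revisable taxonomic opinion, with references reduced to URL-plus-citation pairs, can be sketched in a few lines. The field names and the example identifiers below are hypothetical illustrations, not the actual WFO schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class Name:
    """A nomenclatural fact: fixed once published, never edited."""
    wfo_id: str                              # stable, taxonomically neutral ID
    scientific_name: str
    references: Tuple[Tuple[str, str], ...] = ()   # (URL, citation string)

@dataclass
class Taxon:
    """A taxonomic opinion: may change between six-monthly releases."""
    accepted_name: Name
    parent_id: Optional[str] = None
    synonyms: List[Name] = field(default_factory=list)

# Hypothetical example: a name placed as accepted, with one synonym.
bellis = Name("wfo-0000000001", "Bellis perennis L.")
taxon = Taxon(accepted_name=bellis)
taxon.synonyms.append(Name("wfo-0000000002", "Bellis hybrida Ten."))
```

    Because `Name` is frozen while `Taxon` is mutable, third parties can hang their own classifications off the stable name identifiers while the consensus taxonomy evolves release by release.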

    Rhakhis: A workflow for managing the WFO taxonomic backbone

    In 2021, the World Flora Online (WFO) Council agreed that the team at the Royal Botanic Garden Edinburgh would take on the technical role of managing the WFO Taxonomic Backbone (WFO-TB). This presentation outlines the implementation of a system to manage the associated data and explores possible future developments.

    The WFO-TB is a global consensus checklist of plants, including bryophytes, pteridophytes, gymnosperms and angiosperms. The checklist data consist of two parts: facts concerning the nomenclatural acts that establish the names under the nomenclatural code, and consensus expert opinion on the placement of those names into a single, authoritative taxonomy. The WFO-TB is unique in that it is both global in scope and curated by a large team of experts from multiple institutions; there are currently around 280 specialist contributors organised into 37 Taxonomic Expert Networks (TENs).

    The primary function of the WFO-TB is to provide structure to the hundreds of thousands of descriptions, images and other pieces of Content that make up the WFO's main web portal, but periodic snapshots of the taxonomic data are also made available as public downloads and published as the WFO Plant List. The WFO Plant List is unique in that it remembers each published version of the WFO-TB and links between them, thus providing a stable, citable resource. In addition, data can be released in any format that researchers may find useful, including through ChecklistBank, and so be potentially incorporated into the Catalogue of Life and GBIF (Global Biodiversity Information Facility). All checklist data are released under a Creative Commons (CC) licence of CC0 so that others may freely build upon, enhance and reuse the works for any purpose without restriction under copyright or database law.

    Prior to 2022, the WFO-TB was managed as part of the main WFO data resources supporting the WFO Portal by the team at Missouri Botanical Garden. Moving responsibility for hosting the backbone to the team at Edinburgh added significant resources to the project as a whole, but created technical challenges. The first requirement of the new system (called Rhakhis, a Greek form of rachis, the 'backbone' of a leaf or inflorescence) was to demonstrate that it could feed data back to Missouri in a way that could be incorporated into the WFO infrastructure without disrupting the Content curation process. This was completed in early 2022, and by June the primary copy of the WFO-TB data had been entrusted to Rhakhis. The June 2022 version of the WFO Plant List and the WFO-TB was published using the previous system, switching to publication from Rhakhis in December 2022.

    The system architecture of Rhakhis is quite simple. A MySQL database holds all the data and is exposed via a GraphQL API to a web-based user interface written in JavaScript using React and Bootstrap. User authentication is handled via a link to ORCID (Open Researcher and Contributor ID). Authorisation for editors is delegated hierarchically down the taxonomic tree; this enables TENs and the TEN Manager to oversee and manage the live data directly and to delegate authority to colleagues. The TEN Manager has access to a plain HTML bulk loader interface that enables the ingestion of CSV files and files in DwC-A (Darwin Core Archive) format supplied by TENs, as well as updates from other data sources such as IPNI (International Plant Names Index) and WCVP (RBG Kew's World Checklist of Vascular Plants). Rhakhis is designed as a standalone data management tool for taxonomists involved in the project rather than a public website, and is hosted on a WFO server on the Google Cloud.

    The WFO Plant List runs on a separate system and is designed to be a performant, public-facing website. Periodic snapshots of the WFO-TB are imported into a Solr index that is then exposed via another GraphQL API as well as Semantic Web-compliant HTTPS URIs. A web-based user interface to the WFO Plant List is implemented as part of the Craft CMS (Content Management System) that also runs the About pages of the WFO Web Portal, but it would be possible to build other interfaces to this data. The periodic snapshots of the WFO-TB, which are published through the WFO Plant List, are archived in Zenodo and assigned a DOI (Digital Object Identifier), as well as contributed to the Catalogue of Life ChecklistBank. The archive formats currently supported are Darwin Core Archive and Catalogue of Life Data Package, but other formats will be considered in the future. We are interested in investigating the creation of data papers, e.g. through the Biodiversity Data Journal at Pensoft, to provide additional accreditation for contributors and exposure of their data.
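    Since the data are exposed via a GraphQL API, a client lookup might be shaped roughly as follows. The query and field names here are assumptions made for illustration, not the actual Rhakhis or WFO Plant List schema; only the general GraphQL request structure is real.

```python
import json

def name_query(wfo_id: str) -> str:
    """Build a GraphQL request body that looks up a name by its stable
    WFO identifier. Query and field names are hypothetical."""
    query = """
    query GetName($id: String!) {
      name(id: $id) {
        fullNameString
        currentPlacement { acceptedName { id } }
      }
    }
    """
    return json.dumps({"query": query, "variables": {"id": wfo_id}})

# The resulting JSON string would be POSTed to the API endpoint.
body = name_query("wfo-0000000001")
```

    Keeping query construction in one small function makes it easy to swap in the real schema once known.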

    Gondwana breakup and plate kinematics: Business as usual

    A tectonic model of the Weddell Sea is built by composing a simple circuit with optimized rotations describing the growth of the South Atlantic and SW Indian oceans. The model independently and accurately reproduces the consensus elements of the Weddell Sea's spreading record and continental margins, and offers solutions to remaining controversies there. At their present resolutions, plate kinematic data from the South Atlantic and SW Indian oceans and Weddell Sea rule against the proposed, but controversial, independent movements of small plates during Gondwana breakup that have been attributed to the presence or impact of a mantle plume. Hence, although supercontinent breakup here was accompanied by extraordinary excess volcanism, there is no indication from plate kinematics that the causes of that volcanism provided a unique driving mechanism for it. Citation: Eagles, G., and A. P. M. Vaughan (2009), Gondwana breakup and plate kinematics: Business as usual, Geophys. Res. Lett., 36, L10302, doi:10.1029/2009GL037552.
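    The plate-circuit approach the abstract describes, composing finite rotations for the South Atlantic and SW Indian oceans to obtain Weddell Sea motion, amounts to multiplying rotation matrices about Euler poles through a shared plate (Africa). A minimal sketch with placeholder poles, not the optimized rotations of the paper:

```python
import numpy as np

def euler_rotation(lat, lon, angle_deg):
    """Rotation matrix for a finite rotation about an Euler pole at
    (lat, lon) degrees, by angle_deg (right-handed), via Rodrigues' formula."""
    lat, lon, a = np.radians([lat, lon, angle_deg])
    n = np.array([np.cos(lat) * np.cos(lon),     # pole as a unit vector
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)])
    K = np.array([[0, -n[2], n[1]],              # cross-product matrix
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

# Circuit closure through Africa (pole values are placeholders):
R_sam_afr = euler_rotation(60.0, -35.0, 20.0)   # South America rel. Africa
R_afr_ant = euler_rotation(10.0, -45.0, 15.0)   # Africa rel. Antarctica
R_sam_ant = R_sam_afr @ R_afr_ant               # South America rel. Antarctica
```

    The composed matrix is itself a proper rotation, so it can be converted back to an equivalent Euler pole and angle for the composite motion.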

    Impact of emotion on consciousness: positive stimuli enhance conscious reportability

    Emotion and reward have been proposed to be closely linked to conscious experience, but empirical data are lacking. The anterior cingulate cortex (ACC) plays a central role in the hedonic dimension of conscious experience and is thus potentially a key region in interactions between emotion and consciousness. Here we tested the impact of emotion on conscious experience, and directly investigated the role of the ACC. We used a masked paradigm that measures conscious reportability in terms of subjective confidence and objective accuracy in identifying the briefly presented stimulus in a forced-choice test. By manipulating the emotional valence (positive, neutral, negative) and the presentation time (16 ms, 32 ms, 80 ms), we measured the impact of these variables on conscious and subliminal (i.e. below-threshold) processing. First, we tested normal participants using face and word stimuli. Results showed that participants were more confident and accurate when consciously seeing happy versus sad/neutral faces and words. When stimuli were presented subliminally, we found no effect of emotion. To investigate the neural basis of this impact of emotion, we recorded local field potentials (LFPs) directly in the ACC in a chronic pain patient. Behavioural findings were replicated: the patient was more confident and accurate when (consciously) seeing happy versus sad faces, while no effect was seen in subliminal trials. Mirroring the behavioural findings, we found significant differences in the LFPs after around 500 ms (lasting 30 ms) in conscious trials between happy and sad faces, while no effect was found in subliminal trials. We thus demonstrate a striking impact of emotion on consciousness.

    Implementing novel trial methods to evaluate surgery for essential tremor

    Introduction. Deep brain stimulation (DBS) can provide dramatic relief of essential tremor (ET); however, no Class I evidence exists. Materials and methods. Three analysis methods were used: I) traditional cohort analysis; II) N-of-1 single-patient randomised controlled trial; and III) signal-to-noise (S/N) analysis. Twenty DBS electrodes in ET patients were switched on and off for 3-min periods, with six pairs of on and off periods in each case and the pair order determined randomly. Tremor severity was quantified by a tremor evaluator, and the patient was blinded to stimulation. Patients also stated whether they perceived the stimulation to be on after each trial. Results. I) Mean end-of-trial tremor severity was 0.84 out of 10 On versus 6.62 Off (t = −13.218). More than 80% tremor reduction occurred in 99/114 'On' trials (87%) and 3/114 'Off' trials (3%). The S/N ratio for 80% improvement with DBS versus spontaneous improvement was 487,757-to-1. Conclusions. The DBS treatment effect on ET is too large for bias to be a plausible explanation. A formal N-of-1 trial design, and the S/N ratio method for presenting results, allows this to be demonstrated convincingly where conventional randomised controlled trials are not possible. Classification of evidence. This study is the first to provide Class I evidence for the efficacy of DBS for ET.
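    The S/N figure quoted above rests on how rarely a large improvement occurs without stimulation. A minimal sketch of the general idea (per-trial odds multiplied across independent on/off pairs); this illustrates the principle only, it is not the authors' exact calculation and does not reproduce their 487,757-to-1 figure.

```python
# Observed rates from the abstract: >80% tremor reduction in
# 99/114 'On' trials versus 3/114 'Off' trials.
p_on = 99 / 114          # probability of large improvement with DBS on
p_off = 3 / 114          # probability of large spontaneous improvement

# Likelihood ratio for observing a large improvement in a single trial:
per_trial_lr = p_on / p_off

# Six randomised on/off pairs per patient (from the trial design);
# independent trials multiply, so evidence grows geometrically.
n_pairs = 6
combined_lr = per_trial_lr ** n_pairs
```

    The point of the sketch is that even a modest per-trial ratio compounds rapidly across a short blinded N-of-1 sequence, which is why such designs can yield overwhelming evidence from a single patient.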

    Changes in activity in brain regions one week and one year after surgery.

    The table shows the mean amplitudes (and their variances) and source powers in different frequency ranges in the DBS ON and OFF conditions (A) one week after surgery and (B) one year after surgery.

    Application of the null-beamformer.

    The figure shows the estimated power of the sources in the mid-sagittal (top) and mid-coronal (bottom) views following the use of A) the conventional beamformer and B) the null-beamformer. The threshold value is 70% of the peak of the power spectrum. As can be seen, the null-beamformer has successfully removed the interference outside of the brain. Please note that, in order to best depict the sources of brain activity, the null location shown is approximate; its actual location is in other anatomical planes (not shown).

    Comparison of the accuracy of using null and conventional beamformers for the localization of known locations of DBS electrodes.

    A) We used the null beamformer to localize the DBS electrodes when the stimulator was ON at 130 Hz. The coronal view shows the lowest electrodes as localised on the patient's MRI (red markers) compared with the overlay of contours of the estimated power using the null-beamformer; two sagittal slices through the lower left electrode and a sagittal view of the lower right electrode are also shown. The fit is especially good on the left side. B) A similar localization using the conventional beamformer method shows a poorer fit; in particular, the method is unable to localize both electrodes.