
    Mental state estimation for brain-computer interfaces

    Mental state estimation is potentially useful for the development of asynchronous brain-computer interfaces. In this study, four mental states were identified and decoded from the electrocorticograms (ECoGs) of six epileptic patients engaged in a memory reach task. A novel signal analysis technique was applied to high-dimensional, statistically sparse ECoGs recorded by a large number of electrodes. The strength of the proposed technique lies in its ability to jointly extract the spatial and temporal patterns responsible for encoding mental state differences. As such, the technique offers a systematic way of analyzing the spatiotemporal aspects of brain information processing and may be applicable to a wide range of spatiotemporal neurophysiological signals.

    Recording advances for neural prosthetics

    An important challenge for neural prosthetics research is to record from populations of neurons over long periods of time, ideally for the lifetime of the patient. Two new advances toward this goal are described: the use of local field potentials (LFPs) and autonomously positioned recording electrodes. LFPs are the composite extracellular potential field from several hundreds of neurons around the electrode tip. LFP recordings can be maintained for longer periods of time than single-cell recordings. We find that similar information can be decoded from LFP and spike recordings, with better performance for state decodes with LFPs and, depending on the area, equivalent or slightly less than equivalent performance for signaling the direction of planned movements. Movable electrodes in microdrives can be adjusted in the tissue to optimize recordings, but their movements must be automated to be of practical benefit to patients. We have developed automation algorithms and a meso-scale autonomous electrode testbed, and demonstrated that this system can autonomously isolate and maintain the recorded signal quality of single cells in the cortex of awake, behaving monkeys. These two advances show promise for developing very long-term recording for neural prosthetic applications.

    Towards the development of data governance standards for using clinical free-text data in health research: a position paper

    Background: Free-text clinical data (such as outpatient letters or nursing notes) represent a vast, untapped source of rich information that, if more accessible for research, would clarify and supplement information coded in structured data fields. Data usually need to be de-identified or anonymised before they can be reused for research, but there is a lack of established guidelines to govern effective de-identification and use of free-text information while avoiding damage to data utility as a by-product. Objective: We set out to work towards data governance standards that integrate with existing frameworks for personal data use, to enable free-text data to be used safely for research for patient and public benefit. Methods: We outlined (UK) data protection legislation and regulations for context, and conducted a rapid literature review and UK-based case studies to explore data governance models used in working with free-text data. We also engaged with stakeholders, including text mining researchers and the general public, to explore perceived barriers and solutions in working with clinical free-text. Results: We propose a set of recommendations, including the need: for authoritative guidance on data governance for the reuse of free-text data; to ensure public transparency in data flows and uses; to treat de-identified free-text as potentially identifiable, with use limited to accredited data safe havens; and to commit to a culture of continuous improvement, so that the relationship between the efficacy of de-identification and re-identification risk is understood and can be communicated to all stakeholders. Conclusions: By drawing together the findings of a combination of activities, our study has added new knowledge towards the development of data governance standards for the reuse of clinical free-text data for secondary purposes. While this work accords with existing data governance frameworks, further work is needed, with commitment and investment, to take forward the recommendations we have proposed and to assure and expand the safe reuse of clinical free-text data for public benefit.

    Exploring the Consistency, Quality and Challenges in Manual and Automated Coding of Free-text Diagnoses from Hospital Outpatient Letters

    Coding of unstructured clinical free-text to produce interoperable structured data is essential to improve direct care, support clinical communication and enable clinical research. However, manual clinical coding is difficult and time consuming, which motivates the development and use of natural language processing for automated coding. This work evaluates the quality and consistency of both manual and automated clinical coding of diagnoses from hospital outpatient letters. Using 100 randomly selected letters, two human clinicians coded the diagnosis lists to SNOMED CT. Automated coding was also performed using IMO's Concept Tagger. A gold standard was constructed by a panel of clinicians from a subset of the annotated diagnoses. This was used to evaluate the quality and consistency of both manual and automated coding via (1) a distance-based metric, treating SNOMED CT as a graph, and (2) a qualitative metric agreed upon by the panel of clinicians. The correlation between the two metrics was also evaluated. Comparing human- and computer-generated codes to the gold standard, the results indicate that humans slightly outperformed automated coding, while both performed notably better when the free-text description contained only a single diagnosis. Automated coding was considered acceptable by the panel of clinicians in approximately 90% of cases.
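    The distance-based metric described above can be sketched roughly as follows: treat the ontology as a graph and score a predicted code by its shortest-path distance to the gold-standard code. This is a minimal illustration under stated assumptions; the toy concept hierarchy below is hypothetical and stands in for the real SNOMED CT graph.

```python
from collections import deque

# Hypothetical parent->child hierarchy standing in for SNOMED CT concepts.
EDGES = {
    "disorder": ["heart disease", "lung disease"],
    "heart disease": ["myocardial infarction", "heart failure"],
    "lung disease": ["asthma"],
}

def build_graph(edges):
    """Make the parent->child edge list undirected for path finding."""
    graph = {}
    for parent, children in edges.items():
        for child in children:
            graph.setdefault(parent, set()).add(child)
            graph.setdefault(child, set()).add(parent)
    return graph

def code_distance(graph, predicted, gold):
    """Shortest-path distance between two codes via BFS; 0 is an exact match."""
    if predicted == gold:
        return 0
    seen = {predicted}
    queue = deque([(predicted, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour == gold:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # the two codes are not connected in the hierarchy

graph = build_graph(EDGES)
# Sibling codes sharing a parent are two hops apart.
print(code_distance(graph, "myocardial infarction", "heart failure"))
```

    Such a metric rewards near-miss codes (e.g. a sibling or parent concept) over unrelated ones, which is why it can then be correlated against a clinician-judged qualitative metric.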

    ELIXIR-UK role in bioinformatics training at the national level and across ELIXIR

    ELIXIR-UK is the UK node of ELIXIR, the European infrastructure for life science data. Since its foundation in 2014, ELIXIR-UK has played a leading role in training both within the UK and in the ELIXIR Training Platform, which coordinates and delivers training across all ELIXIR members. ELIXIR-UK contributes to the Training Platform’s coordination and supports the development of training to address key skill gaps amongst UK scientists. As part of this work, it acts as a conduit for nationally important bioinformatics training resources, promoting their activities to the ELIXIR community. ELIXIR-UK also leads ELIXIR’s flagship training portal, TeSS, which collects information about a diverse range of training and makes it easily accessible to the community. In addition, ELIXIR-UK works with others to provide key digital skills training: it partners with the Software Sustainability Institute to provide Software Carpentry training to the ELIXIR community and to establish the Data Carpentry initiative, and takes a lead role amongst national stakeholders to deliver the StaTS project, a coordinated effort to drive engagement with training in statistics.

    A scalable machine-learning approach to recognize chemical names within large text databases

    MOTIVATION: The use or study of chemical compounds permeates almost every scientific field, and in each of them the amount of textual information is growing rapidly. There is a need to accurately identify chemical names within text for a number of informatics efforts, such as database curation, report summarization, tagging of named entities and keywords, and the development and curation of reference databases. RESULTS: A first-order Markov model (MM) was evaluated for its ability to distinguish chemical names from ordinary words, yielding ~93% recall in recognizing chemical terms and ~99% precision in rejecting non-chemical terms on smaller test sets. However, because the total number of false-positive events increases with the number of words analyzed, the scalability of name recognition was measured by processing 13.1 million MEDLINE records. The method yielded precision ranging from 54.7% to 100%, depending on the cutoff score used, averaging 82.7% over approximately 1.05 million putative chemical terms extracted. The extracted chemical terms were analyzed to estimate the number of spelling variants per term, which correlated with the total number of times the chemical name appeared in MEDLINE. This variability in term construction was found to affect both information retrieval and term mapping when using PubMed and Ovid.
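    The first-order Markov model approach can be sketched in miniature: estimate character-bigram transition probabilities separately from chemical names and from ordinary words, then score a candidate term by the log-likelihood ratio of the two models. The tiny training lists below are illustrative assumptions, not the corpora or the exact scoring used in the paper.

```python
import math
from collections import defaultdict

# Illustrative training sets; real systems would use large curated corpora.
CHEMICALS = ["methanol", "ethanol", "benzene", "toluene", "propanol"]
ORDINARY = ["house", "running", "letter", "between", "science"]

ALPHABET = set("abcdefghijklmnopqrstuvwxyz^$")  # ^ and $ mark word boundaries

def train(words, alpha=1.0):
    """Estimate P(next char | current char) with add-alpha smoothing."""
    counts = defaultdict(lambda: defaultdict(float))
    for word in words:
        padded = "^" + word + "$"
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    model = {}
    for a in ALPHABET:
        total = sum(counts[a].values()) + alpha * len(ALPHABET)
        model[a] = {b: (counts[a][b] + alpha) / total for b in ALPHABET}
    return model

def log_prob(model, word):
    """Log-probability of a word under a first-order character model."""
    padded = "^" + word.lower() + "$"
    return sum(math.log(model[a][b]) for a, b in zip(padded, padded[1:]))

chem_model, word_model = train(CHEMICALS), train(ORDINARY)

def chemical_score(word):
    """Log-likelihood ratio: positive scores favour the chemical model."""
    return log_prob(chem_model, word) - log_prob(word_model, word)

# An unseen chemical-like term should score as more chemical than a common word.
print(chemical_score("butanol") > chemical_score("window"))
```

    In practice a cutoff on this score trades recall against precision, which is the trade-off the abstract reports when scaling up to millions of MEDLINE records.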