A multi-view approach to cDNA micro-array analysis
Microarray has emerged as a powerful technology that enables biologists to study thousands of genes simultaneously and thereby gain a better understanding of gene interaction and regulation mechanisms. This paper is concerned with improving the processes involved in the analysis of microarray image data. The main focus is to clarify an image's feature space in an unsupervised manner. The Image Transformation Engine (ITE), combined with different filters, is investigated, and the proposed methods are applied to a set of real-world cDNA images. The MatCNN toolbox is used during the segmentation process. Quantitative comparisons between the filters show that the CLD filter is the best one to apply with the ITE.
This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the National Science Foundation of China under Innovative Grant 70621001, the Chinese Academy of Sciences under Innovative Group Overseas Partnership Grant, the BHP Billiton Cooperation of Australia Grant, the International Science and Technology Cooperation Project of China under Grant 2009DFA32050, and the Alexander von Humboldt Foundation of Germany
SigMate: a MATLAB-based automated tool for extracellular neuronal signal processing and analysis
Rapid advances in neuronal probe technology for multisite recording of brain activity have posed a significant challenge to neuroscientists for processing and analyzing the recorded signals. To infer meaningful conclusions quickly and accurately from large datasets, automated and sophisticated signal processing and analysis tools are required. This paper presents a novel MATLAB-based tool, "SigMate", incorporating standard methods to analyze spikes and EEG signals, and in-house solutions for local field potential (LFP) analysis. The modules available at present are: 1. In-house developed algorithms for data display (2D and 3D), file operations (file splitting, concatenation, and column rearranging), baseline correction, slow stimulus artifact removal, noise characterization and signal quality assessment, current source density (CSD) analysis, latency estimation from LFPs and CSDs, determination of cortical layer activation order using LFPs and CSDs, and single LFP clustering; 2. Existing modules for spike detection, sorting and spike train analysis, and EEG signal analysis. SigMate has the flexibility to analyze multichannel signals as well as signals from multiple recording sources. The in-house developed tools for LFP analysis have been extensively tested with signals recorded using standard extracellular recording electrodes, and planar and implantable multi-transistor array (MTA) based neural probes. SigMate will shortly be disseminated to the neuroscience community under the open-source GNU General Public License
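As an illustration of the kind of CSD analysis such a tool provides, the standard second-spatial-difference estimator can be sketched in a few lines (in Python rather than MATLAB; the conductivity and spacing values below are illustrative placeholders, not SigMate's actual parameters):

```python
def csd_second_difference(lfp, h=1e-4, sigma=0.3):
    """CSD estimate at interior sites via the second spatial difference.

    lfp   -- LFP values (volts) ordered by depth at equally spaced sites
    h     -- inter-site spacing in metres (illustrative value)
    sigma -- extracellular conductivity in S/m (illustrative value)
    """
    return [-sigma * (lfp[i - 1] - 2.0 * lfp[i] + lfp[i + 1]) / h ** 2
            for i in range(1, len(lfp) - 1)]

# A linear depth profile has zero curvature, so its CSD vanishes.
flat = csd_second_difference([0.0, 1.0, 2.0, 3.0])
```

Real CSD pipelines add spatial smoothing and boundary handling; this sketch shows only the core estimator.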
Improving data quality in neuronal population recordings
Understanding how the brain operates requires understanding how large sets of neurons function together. Modern recording technology makes it possible to simultaneously record the activity of hundreds of neurons, and technological developments will soon allow recording of thousands or tens of thousands. As with all experimental techniques, these methods are subject to confounds that complicate the interpretation of such recordings, and could lead to erroneous scientific conclusions. Here, we discuss methods for assessing and improving the quality of data from these techniques, and outline likely future directions in this field
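One elementary data-quality check of the kind discussed here is a per-channel RMS screen that flags dead or excessively noisy channels. The sketch below is a generic illustration with arbitrary thresholds, not a method taken from the article:

```python
def channel_quality(traces, low=1.0, high=50.0):
    """Flag channels whose RMS amplitude falls outside [low, high].

    traces -- dict mapping channel id -> list of samples (arbitrary units)
    Returns a dict: channel id -> "dead", "noisy", or "ok".
    """
    report = {}
    for ch, x in traces.items():
        rms = (sum(v * v for v in x) / len(x)) ** 0.5
        report[ch] = "dead" if rms < low else "noisy" if rms > high else "ok"
    return report
```

In practice such screens are combined with spectral checks and cross-channel correlation measures before channels are excluded.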
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these data and to make decisions about the proper
functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy; to stimulate further work in this
area, we conclude the paper by proposing possible new research directions
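As a minimal illustration of the kind of ML-based decision such surveys discuss, the sketch below uses a k-nearest-neighbour vote to classify whether a candidate lightpath is likely to meet quality-of-transmission requirements. The two features and the training points are invented purely for illustration:

```python
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    train -- list of (feature_vector, label) pairs
    """
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# Toy QoT data: (link length km, channel load) -> whether BER was acceptable.
train = [((100, 10), "ok"), ((120, 12), "ok"), ((150, 8), "ok"),
         ((900, 80), "fail"), ((950, 90), "fail"), ((880, 85), "fail")]
```

Production QoT estimators use many more features (launch power, modulation format, amplifier noise figures) and more capable models, but the workflow — learn from monitored lightpaths, predict for unestablished ones — is the same.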
Developing implant technologies and evaluating brain-machine interfaces using information theory
Brain-machine interfaces (BMIs) hold promise for restoring motor functions in severely paralyzed individuals. Invasive BMIs are capable of recording signals from individual neurons and typically provide the highest signal-to-noise ratio. Despite many efforts in the scientific community, BMI technology is still not reliable enough for widespread clinical application. The most prominent challenges include biocompatibility, stability, longevity, and lack of good models for informed signal processing and BMI comparison.
To address the problem of low signal quality of chronic probes, in the first part of the thesis one such design, the Neurotrophic Electrode, was modified by increasing its channel capacity to form a Neurotrophic Array (NA). Specifically, single wires were replaced with stereotrodes and the total number of recording wires was increased. This new array design was tested in a rhesus macaque performing a delayed saccade task. The NA recorded little single unit spiking activity, and its local field potentials (LFPs) correlated with presented visual stimuli and saccade locations better than did extracted spikes.
The second part of the thesis compares the NA to the Utah Array (UA), the only other micro-array approved for chronic implantation in a human brain. The UA recorded significantly more spiking units, which had larger amplitudes than NA spikes. This was likely due to differences in the array geometry and construction. LFPs on the NA electrodes were more correlated with each other than those on the UA. These correlations negatively impacted the NA's information capacity when considering more than one recording site.
The final part of this dissertation applies information theory to develop objective measures of BMI performance. Currently, decoder information transfer rate (ITR) is the most popular BMI information performance metric. However, it is limited by the selected decoding algorithm and does not represent the full task information embedded in the recorded neural signal. A review of existing methods to estimate ITR is presented, and these methods are interpreted within a BMI context. A novel Gaussian mixture Monte Carlo method is developed to produce good ITR estimates with a low number of trials and high number of dimensions, as is typical for BMI applications
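For context, the most widely used decoder-dependent baseline mentioned here, the Wolpaw ITR, can be computed directly from target count, accuracy, and selection rate. This is the standard formula, not the Gaussian mixture Monte Carlo estimator the dissertation develops:

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits per minute.

    Assumes equiprobable targets and errors spread uniformly over the
    remaining n_targets - 1 alternatives.
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    elif p <= 1.0 / n:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min
```

Its limitations — uniform-error and equiprobable-target assumptions, and dependence on the chosen decoder — are exactly what motivates the decoder-independent measures developed in the dissertation.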
High-dimensional cluster analysis with the masked EM algorithm
This is an Open Access article published under a Creative Commons Attribution 3.0 Unported (CC BY 3.0) license (https://creativecommons.org/licenses/by/3.0/). Cluster analysis faces two problems in high dimensions: the "curse of dimensionality," which can lead to overfitting and poor generalization, and the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of spike sorting for next-generation, high-channel-count neural probes. In this problem, only a small subset of features provides information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a "masked EM" algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data and to real-world high-channel-count spike sorting data.
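The masking idea can be caricatured as follows: features whose amplitude stays near the noise floor are treated as uninformative and replaced by a noise value, so they carry no cluster-specific weight. The actual algorithm uses a graded mask between two thresholds and virtual noise distributions; the binary version below is a deliberate simplification:

```python
def mask_features(x, noise_mean, threshold):
    """Binary caricature of the masked-EM feature mask.

    Features with amplitude at or below `threshold` get mask 0 and are
    replaced by the noise mean; features above it keep their value and
    get mask 1, so only they inform cluster assignment.
    """
    mask = [1.0 if abs(v) > threshold else 0.0 for v in x]
    virtual = [v if m else noise_mean for v, m in zip(x, mask)]
    return virtual, mask
```

Because each point's mask is typically sparse, EM updates can skip masked dimensions, which is where the large speedups in high dimensions come from.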
Real-time neural signal processing and low-power hardware co-design for wireless implantable brain machine interfaces
Intracortical Brain-Machine Interfaces (iBMIs) have advanced significantly over the past
two decades, demonstrating their utility in various aspects, including neuroprosthetic control
and communication. To increase the information transfer rate and improve the devices'
robustness and longevity, iBMI technology aims to increase channel counts to access more
neural data while reducing invasiveness through miniaturisation and avoiding percutaneous
connectors (wired implants). However, as the number of channels increases, the raw data
bandwidth required for wireless transmission also increases, becoming prohibitive and requiring
efficient on-implant processing to reduce the amount of data through data compression or
feature extraction.
The fundamental aim of this research is to develop methods for high-performance neural spike processing co-designed within low-power hardware that is scalable for real-time
wireless BMI applications. The specific original contributions include the following:
Firstly, a new method has been developed for hardware-efficient spike detection, which
achieves state-of-the-art spike detection performance and significantly reduces the hardware
complexity. Secondly, a novel thresholding mechanism for spike detection has been introduced. By incorporating firing rate information as a key determinant in establishing the spike
detection threshold, we have improved the adaptiveness of spike detection. This allows
spike detection to overcome the signal degradation that arises from scar tissue
growth around the recording site, thereby ensuring stable spike detection over the long term.
The long-term decoding performance, as a consequence, has also been improved notably.
Thirdly, the relationship between spike detection performance and neural decoding accuracy has been shown to be nonlinear, offering new opportunities for further reducing
transmission bandwidth by at least 30% with only minor decoding performance degradation.
In summary, this thesis presents a journey toward designing ultra-hardware-efficient spike
detection algorithms and applying them to reduce the data bandwidth and improve neural
decoding performance. The software-hardware co-design approach is essential for the next
generation of wireless brain-machine interfaces with increased channel counts and a highly
constrained hardware budget.
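For comparison with the adaptive thresholding contribution described above, a common non-adaptive baseline sets the detection threshold from a robust, median-based estimate of the noise standard deviation (the Quiroga-style rule). The sketch below shows that baseline, not the thesis's firing-rate-adaptive mechanism:

```python
import statistics

def detection_threshold(samples, k=4.0):
    """Non-adaptive baseline spike detection threshold.

    Estimates the noise standard deviation robustly as
    median(|x|) / 0.6745 (the median is insensitive to the rare,
    large-amplitude spike samples) and returns k times that estimate.
    """
    sigma_hat = statistics.median(abs(v) for v in samples) / 0.6745
    return k * sigma_hat
```

A fixed rule like this degrades as scar tissue attenuates the signal, which is precisely the failure mode the firing-rate-informed threshold in the thesis is designed to avoid.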
Model Based Automatic and Robust Spike Sorting for Large Volumes of Multi-channel Extracellular Data
Spike sorting is a critical step in single-unit analysis of neural activity recorded extracellularly and simultaneously with multi-channel electrodes. When dealing with recordings from very large numbers of neurons, existing methods, which are mostly semiautomatic in nature, become inadequate.
This dissertation aims at automating the spike sorting process. A high-performance, automatic, and computationally efficient spike detection and clustering system, namely the M-Sorter2, is presented. The M-Sorter2 employs the modified multiscale correlation of wavelet coefficients (MCWC) for neural spike detection. At the center of the proposed M-Sorter2 are two automatic spike clustering methods. They share a common hierarchical agglomerative modeling (HAM) model search procedure to strategically form a sequence of mixture models, and a new model selection criterion called difference of model evidence (DoME) to automatically determine the number of clusters. The two methods differ in how they infer model parameters: one uses robust variational Bayes (RVB) and the other uses robust Expectation-Maximization (REM) for Student's t-mixture modeling. The M-Sorter2 is thus a significantly improved approach to sorting as an automatic procedure.
M-Sorter2 was evaluated and benchmarked against popular algorithms using simulated, artificial, and real datasets with ground truth that are openly available to researchers. Simulated datasets with known statistical distributions were first used to illustrate how the clustering algorithms, namely REMHAM and RVBHAM, provide robust clustering results under commonly experienced performance-degrading conditions, such as random initialization of parameters, high dimensionality of data, low signal-to-noise ratio (SNR), ambiguous clusters, and asymmetry in cluster sizes. For the artificial dataset from single-channel recordings, the proposed sorter outperformed Wave_Clus, Plexon's Offline Sorter, and Klusta in most of the comparison cases. For the real dataset from multi-channel electrodes, tetrodes, and polytrodes, the proposed sorter outperformed all comparison algorithms in terms of false positive and false negative rates. The software package presented in this dissertation is available for open access. Doctoral Dissertation, Electrical Engineering.
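The DoME criterion is the dissertation's own contribution; for orientation, a familiar alternative for choosing the number of mixture components is the Bayesian information criterion, which trades fit quality against model complexity:

```python
import math

def bic(log_likelihood, n_params, n_points):
    """Bayesian information criterion; lower values indicate a better
    trade-off between data fit and the number of free parameters.
    """
    return n_params * math.log(n_points) - 2.0 * log_likelihood
```

In a cluster-count search, one fits mixtures with increasing numbers of components and keeps the model with the lowest BIC; criteria like DoME aim to make this choice more robust for spike data specifically.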
Neuropixels 2.0: A miniaturized high-density probe for stable, long-term brain recordings
Measuring the dynamics of neural processing across time scales requires following the spiking of thousands of individual neurons over milliseconds and months. To address this need, we introduce the Neuropixels 2.0 probe together with newly designed analysis algorithms. The probe has more than 5000 sites and is miniaturized to facilitate chronic implants in small mammals and recording during unrestrained behavior. High-quality recordings over long time scales were reliably obtained in mice and rats in six laboratories. Improved site density and arrangement combined with newly created data processing methods enable automatic post hoc correction for brain movements, allowing recording from the same neurons for more than 2 months. These probes and algorithms enable stable recordings from thousands of sites during free behavior, even in small animals such as mice
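The post hoc movement correction relies on registering activity patterns across time. A toy version of the underlying idea — finding the depth shift that best aligns two spike-count-by-depth histograms — can be sketched as follows (real pipelines estimate continuous, possibly non-rigid drift from much richer data):

```python
def estimate_drift(hist_a, hist_b, max_shift=5):
    """Estimate rigid probe drift (in depth bins) between two recordings
    by finding the shift of hist_b that maximizes overlap with hist_a.
    """
    def overlap(shift):
        if shift >= 0:
            return sum(a * b for a, b in zip(hist_a, hist_b[shift:]))
        return sum(a * b for a, b in zip(hist_a[-shift:], hist_b))
    return max(range(-max_shift, max_shift + 1), key=overlap)
```

Once the drift trace is known, spike waveforms can be reassigned to corrected depths, which is what allows the same units to be tracked across months.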