    Heuristic Spike Sorting Tuner (HSST), a framework to determine optimal parameter selection for a generic spike sorting algorithm

    Extracellular microelectrodes frequently record neural activity from more than one neuron in the vicinity of the electrode. The process of labeling each recorded spike waveform with the identity of its source neuron is called spike sorting and is often approached from an abstracted statistical perspective. However, these approaches do not account for neurophysiological realities and may ignore important features that could improve their accuracy. Further, standard algorithms typically require the selection of at least one free parameter, which can significantly affect the quality of the output. We describe a Heuristic Spike Sorting Tuner (HSST) that determines the optimal choice of free parameters for a given spike sorting algorithm based on the neurophysiological qualification of unit isolation and signal discrimination. A set of heuristic metrics is used to score the output of a spike sorting algorithm over a range of free parameters, yielding the parameter setting with the best sorting quality. We demonstrate that these metrics can be used to tune parameters in several spike sorting algorithms. HSST is robust to variations in signal-to-noise ratio and in the number and relative size of units per channel. Moreover, it is computationally efficient, operates unsupervised, and is parallelizable for batch processing.
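    The tuning loop described above — score a sorter's output over a grid of free-parameter values with heuristic quality metrics, then keep the best — can be sketched as follows. This is a minimal illustration, not the published HSST implementation: the metric `isolation_metric` and the grid-search helper `tune_sorter` are hypothetical names, and the metric (template peak over residual noise) merely stands in for HSST's actual unit-isolation and signal-discrimination heuristics.

```python
import numpy as np

def isolation_metric(waveforms, labels):
    """Hypothetical heuristic metric (stand-in for HSST's real scoring):
    peak template amplitude of each unit divided by its residual noise."""
    scores = []
    for unit in np.unique(labels):
        spikes = waveforms[labels == unit]
        template = spikes.mean(axis=0)
        residual = (spikes - template).std() + 1e-12
        scores.append(np.abs(template).max() / residual)
    return float(np.mean(scores))

def tune_sorter(sort_fn, waveforms, param_grid, metrics=(isolation_metric,)):
    """Run sort_fn over a grid of free-parameter values and return the
    value whose labeling maximizes the summed heuristic score."""
    best_param, best_score = None, -np.inf
    for param in param_grid:
        labels = sort_fn(waveforms, param)
        score = sum(m(waveforms, labels) for m in metrics)
        if score > best_score:
            best_param, best_score = param, score
    return best_param, best_score
```

    Because each parameter setting is scored independently, the loop parallelizes trivially — consistent with the abstract's note that the method is unsupervised and suited to batch processing.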

    Multi-Oriented Multi-Resolution Edge Detection

    In order to build an edge detector that provides information on the degree of importance of spatial features in the visual field, I applied the wavelet transform to two-dimensional signals and performed multi-resolution, multi-oriented edge detection. Wavelets are functions well localized in both the spatial and frequency domains, so the wavelet decomposition of a signal or an image yields outputs from which spatial features can still be extracted, not only frequency components. To detect edges, the wavelet I chose is the first derivative of a smoothing function. I decompose the image once per direction of detection; for the moment I work on the X-direction and the Y-direction only. Each step of the decomposition corresponds to a different scale. I use a discrete dyadic scale s = 2^j and a finite number of decomposed images. Instead of rescaling the filters at each step, I subsample the image by 2 (a gain in processing time). I then extract the extrema and track and link them from the coarsest scale to the finest. The result is a symbolic image in which edge pixels are not only localized but also labelled, according to the number of scales at which they appear and to the contrast range of the edge. Without any arbitrary threshold I can subsequently classify the edges according to their physical properties in the scene and their degree of importance. This process is intended to become part of more general perceptual learning procedures. The context assumes no, or as little as possible, a priori knowledge, and the ultimate goal is to integrate this detector into a feedback system dealing with color information, texture and smooth surface extraction. Decisions would then be taken at symbolic levels in order to produce new interpretations, or even new edge detection, on ambiguous areas of the visual field.
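    The pipeline above — smooth with a Gaussian, differentiate (equivalent to convolving with the derivative of the smoothing function), find gradient extrema, subsample by 2, and count at how many dyadic scales each edge pixel reappears — can be sketched for the X-direction only. This is an illustrative reconstruction under stated assumptions, not the author's code: the function names, the fixed kernel radius, and the 10%-of-maximum extremum threshold are all choices made here for brevity (the original explicitly avoids arbitrary thresholds via its tracking stage).

```python
import numpy as np

def gaussian_kernel(sigma=1.0, radius=3):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def smooth_rows(img, kernel):
    # Edge-padded row-wise convolution: X-direction smoothing.
    radius = len(kernel) // 2
    out = np.empty(img.shape, dtype=float)
    for i, row in enumerate(img):
        padded = np.pad(row, radius, mode='edge')
        out[i] = np.convolve(padded, kernel, mode='valid')
    return out

def dyadic_edge_counts(img, levels=3):
    """X-direction edges at dyadic scales s = 2^j; each pixel of the
    returned symbolic image counts the scales at which an edge appears."""
    counts = np.zeros(img.shape, dtype=int)
    current = img.astype(float)
    kernel = gaussian_kernel()
    for j in range(levels):
        # Gradient of the smoothed image = first derivative of the
        # smoothing function applied to the image.
        dx = np.gradient(smooth_rows(current, kernel), axis=1)
        mag = np.abs(dx)
        left, right = np.roll(mag, 1, axis=1), np.roll(mag, -1, axis=1)
        edges = (mag >= left) & (mag >= right) & (mag > 0.1 * mag.max())
        # Upsample this scale's edge map back to full resolution and
        # accumulate the appearance count.
        scale = 2 ** j
        up = np.kron(edges, np.ones((scale, scale), dtype=int))
        counts += up[:counts.shape[0], :counts.shape[1]]
        current = current[::2, ::2]  # subsample by 2 instead of rescaling the filter
    return counts
```

    A persistent physical edge accumulates a high count across scales, while noise-induced extrema appear at only one scale — which is what makes the subsequent symbolic classification possible.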

    Statistical shape analysis for bio-structures: local shape modelling, techniques and applications

    A Statistical Shape Model (SSM) is a statistical representation of a shape, obtained from data, used to study variation in shapes. Work on shape modelling is constrained by many unsolved problems; for instance, modelling local versus global variation remains difficult. SSMs have been successfully applied in medical image applications such as the analysis of brain anatomy. Since brain structure is complex and varies across subjects, methods that identify morphological variability can be useful for diagnosis and treatment. The main objective of this research is to develop a statistical shape model for analysing local variation in shapes. Within this context, the work addresses the question of which local elements need to be identified for effective shape analysis. The proposed method is based on a Point Distribution Model and combines several well-known techniques: fractal analysis; Markov Chain Monte Carlo methods; and the Curvature Scale Space representation for the problem of contour localisation. Similarly, Diffusion Maps are employed as a spectral shape clustering tool to identify sets of local partitions useful in the shape analysis. Additionally, a novel Hierarchical Shape Analysis method based on the Gaussian and Laplacian pyramids is described and used to compare against the featured Local Shape Model. Experimental results on a number of real contours, such as animal, leaf and brain white matter outlines, demonstrate the effectiveness of the proposed model and show that local shape models are efficient in modelling the statistical variation of shape in biological structures. In particular, the model provides an approach to the analysis of brain images and brain morphometrics. Likewise, it can be adapted to content-based image retrieval, where both global and local shape similarity need to be measured.
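    The Point Distribution Model underlying this work is, at its core, PCA over aligned landmark coordinates: the mean shape plus a few principal modes of variation. A minimal sketch of that core (assuming pre-aligned shapes flattened to vectors; the function names `build_pdm` and `synthesize` are illustrative, and none of the thesis's local-modelling machinery — fractal analysis, MCMC, Diffusion Maps — is shown):

```python
import numpy as np

def build_pdm(shapes):
    """Point Distribution Model sketch: PCA on pre-aligned landmark
    vectors of shape (n_shapes, 2 * n_points)."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]   # largest-variance modes first
    return mean, evals[order], evecs[:, order]

def synthesize(mean, evecs, b):
    """Generate a shape x = mean + P b from mode weights b, where P
    holds the leading eigenvectors (modes of variation)."""
    return mean + evecs[:, :len(b)] @ b
```

    Local shape models as studied in the thesis apply the same construction to partitions of the contour rather than to the whole outline, which is what lets local variation be modelled separately from global variation.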

    Eddy current automatic flaw detection system for heat exchanger tubes in steam generators

    In this dissertation we present an automatic flaw detection system for heat exchanger tubes in steam generators. The system combines two well-known techniques, wavelets and fuzzy logic, to automatically detect flaws in tubing data. The analysis of eddy current inspection data is a difficult task that requires intensive labor by experienced human analysts. To aid the analysts, an accurate and consistent automatic data analysis package was developed. The software comprises three parts: data preprocessing, wavelet analysis, and a fuzzy inference system. The preprocessing procedure establishes a common signal analysis standard across data sets and removes variations due to lift-off and other geometrical effects. The wavelet stage reduces noise and identifies possible flaw indications: owing to the multiresolution and unique time-frequency localization properties of the wavelet transform, flaw signals have specific characteristics in the wavelet domain, which we exploit to distinguish flaw indications from noise. To further evaluate the flaw candidates and reduce false calls, fuzzy logic is used to discriminate between true positives and false positives. A template matching technique and a fuzzy inference system were developed: the template matching step uses signals from artificial flaws as templates, matches them against possible flaw signals, and executes a normalized complex cross-correlation. This yields both phase and shape information, which are fed into the fuzzy inference system for final decision making. A rigorous test of the system using actual inspection data was undertaken; results from the tests indicate that the new techniques show a great deal of promise for automatic flaw detection. Investigating these techniques and integrating them into a system are the major contributions of this work.
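    The decision stage described above — normalized complex cross-correlation against an artificial-flaw template, then fuzzy rules on the resulting shape and phase information — can be sketched as follows. This is a hedged illustration, not the dissertation's system: the membership-function shapes, the 20–160° "flaw band", and the function names are assumptions made here; real eddy current phase bands depend on probe, frequency, and calibration.

```python
import numpy as np

def complex_corr(signal, template):
    """Normalized complex cross-correlation of two equal-length traces;
    returns |r| (shape similarity) and the phase angle of the match."""
    s = signal - signal.mean()
    t = template - template.mean()
    r = np.vdot(t, s) / (np.linalg.norm(t) * np.linalg.norm(s) + 1e-12)
    return abs(r), np.angle(r)

def trap(x, a, b, c, d):
    """Trapezoidal fuzzy membership function."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def flaw_confidence(shape_sim, phase_rad):
    """One illustrative rule: FLAW if shape similarity is HIGH and the
    phase lies in an assumed flaw band (20-160 degrees, hypothetical)."""
    high_shape = trap(shape_sim, 0.5, 0.8, 1.0, 1.1)
    flaw_phase = trap(np.degrees(phase_rad) % 360.0, 20, 60, 120, 160)
    return min(high_shape, flaw_phase)  # fuzzy AND via min
```

    Encoding the analyst's phase-and-shape reasoning as graded memberships rather than hard thresholds is what lets the system suppress false calls without discarding borderline flaw candidates outright.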