162 research outputs found

    Advanced spike sorting approaches in implantable VLSI wireless brain computer interfaces: a survey

    Brain Computer/Machine Interfaces (BCI/BMIs) have substantial potential for enhancing the lives of disabled individuals by restoring the function of missing body parts or allowing paralyzed individuals to regain speech and other motor capabilities. Because of the severe health hazards arising from the skull incisions required for wired BCI/BMIs, scientists are focusing on developing VLSI wireless BCI implants using biomaterials. However, significant challenges, such as power efficiency and implant size, persist in creating reliable and efficient wireless BCI implants. With advanced spike sorting techniques, VLSI wireless BCI implants can operate within these power and size constraints while maintaining neural spike classification accuracy. This study explores advanced spike sorting techniques to overcome these hurdles and enable VLSI wireless BCI/BMI implants to transmit data efficiently and achieve high accuracy. Comment: Submitted to 37th International Conference on VLSI Design 202
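
    On-implant spike sorting reduces what must be transmitted from raw broadband samples to short spike snippets or class labels. As a rough illustration of the detection front end such techniques build on (a generic sketch, not a method from the survey; the threshold multiple and window size are assumptions), the snippet below extracts threshold-crossing spike snippets from a filtered single-channel recording:

```python
# Minimal sketch: threshold-crossing spike detection, the kind of on-implant
# front end that lets a wireless BCI transmit short spike snippets instead of
# the raw broadband signal. Parameters here are illustrative assumptions.
import numpy as np

def detect_spikes(signal, fs, window_ms=2.0, k=4.5):
    """Return aligned spike snippets from a single-channel recording.

    signal : 1-D array of filtered extracellular samples
    fs     : sampling rate in Hz
    k      : threshold as a multiple of the robust noise estimate
    """
    # Robust noise estimate from the median absolute deviation.
    sigma = np.median(np.abs(signal)) / 0.6745
    threshold = k * sigma

    half = int(window_ms * 1e-3 * fs / 2)
    crossings = np.where(signal[1:] < -threshold)[0] + 1
    # Keep only the first sample of each crossing (simple refractory grouping).
    if crossings.size:
        crossings = crossings[np.insert(np.diff(crossings) > half, 0, True)]

    snippets = [signal[c - half:c + half]
                for c in crossings
                if c - half >= 0 and c + half <= signal.size]
    return np.array(snippets)

# Toy example: unit-variance noise plus three injected negative spikes.
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 30_000)
for t in (5_000, 12_000, 21_000):
    x[t:t + 10] -= 8.0
print(detect_spikes(x, fs=30_000).shape)  # expect (3, 60): 3 snippets of 60 samples
```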

    Mobile Robots

    The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. Designing the control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when sophisticated methods for brain signal processing are used. The generated electrophysiological signals can be used to command different devices, such as cars, wheelchairs, or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multi-robot agents, and communication and distance measurement between agents are presented. Training mobile robot operators is also a very difficult task because of several factors related to the execution of different tasks. The presented improvement relates to environment model generation based on autonomous mobile robot observations.

    INQUIRIES IN INTELLIGENT INFORMATION SYSTEMS: NEW TRAJECTORIES AND PARADIGMS

    Rapid digital transformation drives organizations to continually revitalize their business models so that they can excel in aggressive global competition. Intelligent Information Systems (IIS) have enabled organizations to achieve many strategic and market leverages. Despite the increasing intelligence competencies offered by IIS, they are still limited in many cognitive functions, and elevating those competencies would impact organizational strategic positions. With the advent of Deep Learning (DL), IoT, and Edge Computing, IIS have witnessed a leap in their intelligence competencies. DL has been applied to many business areas and industries, such as real estate and manufacturing. Moreover, despite the complexity of DL models, much research has been dedicated to applying DL on devices with limited computational resources, such as IoT devices; applying deep learning to IoT will turn everyday devices into intelligent interactive assistants. IIS suffer from many challenges that affect their service quality, process quality, and information quality. These challenges affect, in turn, user acceptance in terms of satisfaction, use, and trust. Moreover, Information Systems (IS) research has devoted very little attention to IIS development and the foreseeable contribution of new paradigms to addressing IIS challenges. Therefore, this research aims to investigate how the employment of new AI paradigms would enhance the overall quality, and consequently the user acceptance, of IIS. This research employs different AI paradigms to develop two different IIS. The first system uses deep learning, edge computing, and IoT to develop scene-aware ridesharing monitoring; it enhances the efficiency, privacy, and responsiveness of current ridesharing monitoring solutions. The second system aims to enhance the real estate search process by formulating the search problem as a multi-criteria decision. The system also allows users to filter properties based on their degree of damage, where a deep learning network locates damage in each real estate image. The system enhances real-estate website service quality by improving flexibility, relevancy, and efficiency. The research contributes to Information Systems research by developing two Design Science artifacts. Both artifacts add to the IS knowledge base by integrating different components, measurements, and techniques coherently and logically to effectively address important issues in IIS. The research also adds to the IS environment by addressing important business requirements that current methodologies and paradigms do not fulfill, and highlights that most IIS overlook important design guidelines due to the lack of relevant evaluation metrics for different business problems.
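
    The real estate search is formulated as a multi-criteria decision; one common way to realize such a formulation is a weighted score over normalised criteria. The sketch below is purely illustrative and not the thesis implementation: the criteria, weights, and the damage score assumed to come from the damage-detection network are all example assumptions.

```python
# Illustrative multi-criteria ranking of listings by a weighted, normalised score.
# Criterion names, weights, and the damage_score field are assumptions.
from dataclasses import dataclass

@dataclass
class Listing:
    price: float         # lower is better
    area_sqm: float      # higher is better
    damage_score: float  # 0 (intact) .. 1 (heavily damaged), lower is better

def rank(listings, weights=(0.4, 0.3, 0.3)):
    # Min-max normalise each criterion to [0, 1], flipping "lower is better" ones.
    def norm(values, lower_is_better):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        return [(hi - v) / span if lower_is_better else (v - lo) / span
                for v in values]

    prices = norm([l.price for l in listings], lower_is_better=True)
    areas = norm([l.area_sqm for l in listings], lower_is_better=False)
    damage = norm([l.damage_score for l in listings], lower_is_better=True)

    scores = [weights[0] * p + weights[1] * a + weights[2] * d
              for p, a, d in zip(prices, areas, damage)]
    return sorted(zip(scores, listings), key=lambda s: s[0], reverse=True)

# Toy usage: three listings ranked by the combined score.
for score, l in rank([Listing(250_000, 90, 0.1),
                      Listing(180_000, 70, 0.6),
                      Listing(210_000, 110, 0.3)]):
    print(f"{score:.2f}  {l}")
```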

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as a final publication of the COST Action IC1406 "High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)" project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predict and analyse natural and complex systems in science and engineering. When their level of abstraction rises to afford a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Tools for interfacing, extracting, and analyzing neural signals using wide-field fluorescence imaging and optogenetics in awake behaving mice

    Imaging of multiple cells has rapidly multiplied the rate of data acquisition as well as our knowledge of the complex dynamics within the mammalian brain. Data acquisition has been dramatically enhanced by highly affordable, sensitive image sensors that enable high-throughput detection of neural activity in intact animals. Genetically encoded calcium sensors deliver a substantial boost in signal strength and, in combination with equally critical advances in the size, speed, and sensitivity of the image sensors available in scientific cameras, enable high-throughput detection of neural activity in behaving animals using traditional wide-field fluorescence microscopy. However, the tremendous increase in data flow presents challenges for the processing, analysis, and storage of captured video, prompting a reexamination of the traditional routines used to process data in neuroscience and demanding improvements in both the hardware and software used for processing, analyzing, and storing captured video. This project demonstrates the ease with which a dependable and affordable wide-field fluorescence imaging system can be assembled and integrated with the behavior control and monitoring systems found in a typical neuroscience laboratory. An open-source MATLAB toolbox is employed to efficiently analyze and visualize large imaging data sets in a manner that is both interactive and fully automated. This software package provides a library of image pre-processing routines optimized for batch-processing of continuous functional fluorescence video, and additionally automates a fast unsupervised ROI detection and signal extraction routine. Further, an extension of this toolbox that uses GPU programming to process streaming video, enabling the identification, segmentation, and extraction of neural activity signals on-line, is described, in which specific algorithms improve signal specificity and image quality at the single-cell level in a behaving animal. This project describes the strategic ingredients for transforming a large bulk flow of raw continuous video into proportionally informative images and knowledge.
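
    The toolbox itself is a MATLAB package; purely as an illustration of the signal-extraction step it automates (ROI masks applied to a frame stack, followed by a ΔF/F computation), here is a minimal Python sketch. The array shapes, the baseline percentile, and the function names are assumptions, not the toolbox's API.

```python
# Illustrative per-ROI trace extraction and dF/F from a (frames, height, width)
# fluorescence stack. Names and the baseline choice are assumptions.
import numpy as np

def extract_traces(stack, roi_masks):
    """stack: (T, H, W) float array; roi_masks: list of (H, W) boolean arrays."""
    # Mean fluorescence over the pixels of each ROI, per frame -> (nROI, T).
    return np.stack([stack[:, m].mean(axis=1) for m in roi_masks])

def delta_f_over_f(traces, baseline_percentile=20):
    # Per-ROI baseline F0 taken as a low percentile of the trace (a common choice).
    f0 = np.percentile(traces, baseline_percentile, axis=1, keepdims=True)
    return (traces - f0) / f0

# Toy example: 100 frames, 64x64 pixels, two square ROIs, one transient.
rng = np.random.default_rng(1)
stack = rng.normal(100, 1, (100, 64, 64))
stack[40:45, 10:20, 10:20] += 30            # fluorescence transient in ROI 1
masks = [np.zeros((64, 64), bool), np.zeros((64, 64), bool)]
masks[0][10:20, 10:20] = True
masks[1][40:50, 40:50] = True
print(delta_f_over_f(extract_traces(stack, masks)).shape)  # expect (2, 100)
```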

    Deep learning for asteroid detection in large astronomical surveys : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Albany, New Zealand

    The MOA-II telescope has been operating at the Mt John Observatory since 2004 as part of a Japan/NZ collaboration looking for microlensing events. The telescope has a total field of view of 1.6 x 1.3 degrees and surveys the Galactic Bulge several times each night, which makes it particularly good for observing short-duration events. While it has been successful in discovering exoplanets, the full scientific potential of the data has not yet been realised. In particular, numerous known asteroids are hidden amongst the MOA data; these can be clearly seen upon visual inspection of selected images. There are also potentially many undiscovered asteroids captured by the telescope. As yet, no tool exists to effectively mine archival data from large astronomical surveys, such as MOA, for asteroids. The appeal of deep learning lies in its ability to learn useful representations from data without significant hand-engineering, making it an excellent tool for asteroid detection. Supervised learning, however, requires labelled datasets, which were also unavailable. The goal of this research is to develop datasets suitable for supervised learning and to apply several CNN-based techniques to identify asteroids in the MOA-II data. Asteroid tracklets can be clearly seen by combining all the observations on a given night, and these tracklets form the basis of the dataset. Known asteroids were identified within the composite images, forming the seed dataset for supervised learning. These images were used to train several CNNs to classify images as either containing asteroids or not. The top five networks were then configured as an ensemble that achieved a recall of 97.67%. Next, the YOLO object detector was trained to localise asteroid tracklets, achieving a mean average precision (mAP) of 90.97%. These trained networks will be applied to 16 years of MOA archival data to find both known and unknown asteroids that have been observed by the telescope over the years. The methodologies developed can also be used by other surveys for asteroid recovery and discovery.
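
    Combining the five best classifiers into one decision is a standard probability-averaging construction. The sketch below shows that step only, on made-up probability values; it is not the thesis code and assumes each network outputs a per-image probability that an asteroid tracklet is present.

```python
# Illustrative ensemble over per-model probabilities: average, then threshold.
# The probability values below are made up for the example.
import numpy as np

def ensemble_predict(model_probs, threshold=0.5):
    """model_probs: (n_models, n_images) array of P(asteroid) per model."""
    mean_prob = np.mean(model_probs, axis=0)
    return mean_prob >= threshold, mean_prob

# Toy example: five models scoring four composite images.
probs = np.array([[0.90, 0.20, 0.60, 0.10],
                  [0.80, 0.30, 0.70, 0.20],
                  [0.95, 0.10, 0.40, 0.05],
                  [0.85, 0.25, 0.55, 0.15],
                  [0.90, 0.20, 0.65, 0.10]])
labels, p = ensemble_predict(probs)
print(labels)  # [ True False  True False]
```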

    Pattern recognition of neural data: methods and algorithms for spike sorting and their optimal performance in prefrontal cortex recordings

    Doctoral Programme in Neuroscience. Pattern recognition of neuronal discharges is the electrophysiological basis of the functional characterization of brain processes, so the implementation of a spike sorting algorithm is an essential step for the analysis of neural codes and neural interactions in a network or brain circuit. Information extracted from neural action potentials can be used to characterize neural activity events and correlate them with behavioral and cognitive processes, including different types of associative learning tasks. In particular, feature extraction is a critical step in the spike sorting procedure, prior to the clustering step and subsequent to the spike detection-identification step. In the present doctoral thesis, an automatic and unsupervised computational algorithm, called the 'Unsupervised Automatic Algorithm', is proposed for the detection, identification, and classification of the neural action potentials distributed across electrophysiological recordings, and for clustering these potentials according to shape, phase, and distribution features extracted from the first-order derivative of the potentials under study. To this end, an efficient unsupervised clustering method was developed that integrates the K-means method with two clustering measures (validity and error indices) to assess, respectively, the cohesion-dispersion among neural spikes during classification and the misclassification of the clustering. In addition, this algorithm was implemented in a customized spike sorting software package called VISSOR (Viability of Integrated Spike Sorting of Real Recordings). Furthermore, a supervised method for grouping neural activity profiles was implemented to allow the recognition of specific patterns of neural discharges. The validity and effectiveness of these methods and algorithms were tested by classifying the action potentials detected in extracellular recordings of the rostro-medial prefrontal cortex of rabbits during classical eyelid conditioning. After comparing the spike-sorting methods/algorithms proposed in this work with other methods also based on feature extraction from the action potentials, it was observed that the proposed approach performed better during classification. That is, the methods/algorithms proposed here allowed obtaining: (1) the optimal number of clusters of neuronal spikes (according to the criterion of the maximum value of the cohesion-dispersion index) and (2) the optimal clustering of these spike events (according to the criterion of the minimum value of the error index). The analytical implication of these results is that feature extraction based on the shape, phase, and distribution features of the action potential, together with the application of an alternative method of unsupervised classification with validity and error indices, guarantees an efficient classification of neural events, especially those detected from extracellular or multi-unit recordings. Rabbits were conditioned with a delay paradigm consisting of a tone as the conditioned stimulus. The conditioned stimulus started 50, 250, 500, 1000, or 2000 ms before, and co-terminated with, an air puff directed at the cornea as the unconditioned stimulus.
The results obtained indicated that the firing rate of each recorded neuron presented a single peak of activity with a frequency dependent on the inter-stimulus interval (i.e., ≈ 12 Hz for 250 ms, ≈ 6 Hz for 500 ms, and ≈ 3 Hz for 1000 ms). Interestingly, the recorded neurons from the rostro-medial prefrontal cortex presented their dominant firing peaks at three precise times evenly distributed with respect to conditioned stimulus onset, also depending on the duration of the inter-stimulus interval (only for intervals of 250, 500, and 1000 ms). No significant neural responses were recorded at very short (50 ms) or long (2000 ms) conditioned stimulus-unconditioned stimulus intervals. Furthermore, eyelid movements were recorded with the magnetic search coil technique together with the electromyographic (EMG) activity of the orbicularis oculi muscle. Reflex and conditioned eyelid responses presented a dominant oscillatory frequency of ≈ 12 Hz. The experimental implication of these results is that the recorded neurons from the rostro-medial prefrontal cortex do not seem to encode the oscillatory properties characterizing conditioned eyelid responses in rabbits. As a general experimental conclusion, rostro-medial prefrontal cortex neurons are probably involved in the determination of CS-US intervals in an intermediate range (250-1000 ms). Universidad Pablo de Olavide. Departamento de Fisiología, Anatomía y Biología Celular. Postprint
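
The core of the unsupervised procedure described above (features from the first derivative of each waveform, K-means over several candidate cluster counts, and selection of the best partition by a cohesion-dispersion criterion) can be sketched as follows. This is an illustrative reconstruction, not the VISSOR implementation: the feature choices are simplified, and scikit-learn's silhouette score stands in for the thesis's own validity and error indices.

```python
# Illustrative spike clustering on first-derivative features with K-means,
# choosing the number of clusters by a cohesion-dispersion index (silhouette
# score used here as a stand-in for the thesis's validity/error indices).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def derivative_features(spikes):
    """spikes: (n_spikes, n_samples) aligned waveforms -> simple shape/phase features."""
    d = np.diff(spikes, axis=1)                       # first-order derivative
    return np.column_stack([spikes.min(axis=1),       # trough amplitude
                            d.max(axis=1),            # steepest rise
                            d.min(axis=1),            # steepest fall
                            np.argmin(spikes, axis=1)])  # trough position (phase)

def sort_spikes(spikes, k_range=range(2, 6)):
    feats = derivative_features(spikes)
    best = None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
        score = silhouette_score(feats, labels)
        if best is None or score > best[0]:
            best = (score, k, labels)
    return best  # (index value, chosen k, cluster labels)

# Toy example: two synthetic spike shapes with additive noise.
rng = np.random.default_rng(2)
t = np.arange(40)
unit_a = -np.exp(-((t - 15) ** 2) / 8.0)
unit_b = -0.5 * np.exp(-((t - 20) ** 2) / 20.0)
spikes = np.vstack([unit_a + rng.normal(0, 0.05, (50, 40)),
                    unit_b + rng.normal(0, 0.05, (50, 40))])
score, k, labels = sort_spikes(spikes)
print(k, round(score, 2))  # chosen cluster count and its index value
```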