170 research outputs found
Neural Decoding Leveraging Motor-Cortex Population Geometry
Intracortical brain-computer interfaces (BCIs) provide the means to do something extraordinary: restore movement to patients with paralysis or amputated limbs. Realizing this potential requires the development of decode algorithms capable of accurately translating measurements of neural activity, in real time, into appropriate time-varying commands for an external device (e.g. prosthetic limb).
This problem is fundamentally interdisciplinary, drawing on tools and insights from engineering, neuroscience, statistics, and computer science, among others. Decode algorithms that have been favored historically tend to be computationally efficient, but perform suboptimally, likely because their assumptions fail to fully and accurately capture the complexity in neural population responses. Recent work harnessing the power of contemporary machine learning methods has raised the performance bar, yet these methods can be computationally demanding and it is unclear what properties of neural and/or behavioral data they exploit. In this dissertation, we characterize properties of motor-cortex population geometry and let these properties dictate decoder design, resulting in methods that perform very well, yet retain the benefits of simpler methods.
We use this approach to develop a closed-loop navigation BCI and to design a highly accurate, general, and interpretable decoder. The properties described in this dissertation have implications for any BCI. By designing decoders to explicitly respect (and leverage) these properties, we can construct powerful yet practical BCIs that better meet the needs of patients.
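The decoding problem described in this abstract can be illustrated with a minimal sketch: a ridge-regression decoder mapping binned firing rates to 2-D velocity commands. This is a generic illustration, not the dissertation's method; the synthetic cosine-style tuning, the neuron count, and the regularization strength are all assumptions made here for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic session: T time bins, N neurons, 2-D hand velocity (illustrative only).
T, N = 2000, 40
velocity = rng.standard_normal((T, 2))           # ground-truth kinematics
tuning = rng.standard_normal((N, 2))             # each neuron's direction weights
baseline = rng.uniform(5, 15, size=N)            # baseline firing rates
rates = velocity @ tuning.T + baseline + 0.5 * rng.standard_normal((T, N))

# Ridge-regression decoder: velocity ~ rates @ W, intercept via column augmentation.
X = np.hstack([rates, np.ones((T, 1))])
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(N + 1), X.T @ velocity)

pred = X @ W
r2 = 1 - ((velocity - pred) ** 2).sum() / ((velocity - velocity.mean(0)) ** 2).sum()
print(f"decoding R^2 = {r2:.3f}")
```

Linear decoders like this one are the "computationally efficient but suboptimal" baseline the abstract contrasts against geometry-aware designs.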
Non-linear adaptive control inspired by neuromuscular systems
Current paradigms for neuromorphic computing focus on internal computing mechanisms, for instance using spiking-neuron models. In this study, we propose to exploit what is known about neuro-mechanical control, combining the mechanisms of neural ensembles and recruitment with second-order overdamped impulse responses corresponding to the mechanical twitches of muscle-fiber groups. Such systems may be used for controlling any analog process by realizing three aspects: timing, output-quantity representation, and wave-shape approximation. We present an electronics-based model implementing a single motor unit for twitch generation. Such units can be used to construct random ensembles, separately for an agonist and antagonist 'muscle'. Adaptivity is realized by assuming a multi-state memristive system for determining time constants in the circuit. Using SPICE-based simulations, several control tasks were implemented that involved timing, amplitude, and wave shape: the inverted pendulum task, the 'whack-a-mole' task, and a handwriting simulation. The proposed model can be used for both electric-to-electronic and electric-to-mechanical tasks. In particular, the ensemble-based approach and local adaptivity may be of use in future multi-fiber polymer or multi-actuator pneumatic artificial muscles, allowing for robust control under varying conditions and fatigue, as is the case in biological muscles.
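The twitch model mentioned above can be sketched numerically: the impulse response of an overdamped second-order system is a difference of two exponentials, and a 'muscle' is a random ensemble of such units. The time-constant ranges and the unit count below are assumptions for illustration, not values from the study.

```python
import numpy as np

def twitch(t, tau1, tau2, gain=1.0):
    """Impulse response of an overdamped second-order system (difference of
    exponentials), a common model of a muscle-fiber twitch; requires tau1 > tau2."""
    return gain / (tau1 - tau2) * (np.exp(-t / tau1) - np.exp(-t / tau2))

rng = np.random.default_rng(1)
t = np.linspace(0, 0.5, 500)  # seconds

# A random ensemble of motor units with heterogeneous time constants, loosely
# mirroring the agonist 'muscle' of the abstract (parameters are illustrative).
units = [twitch(t, rng.uniform(0.05, 0.12), rng.uniform(0.01, 0.04))
         for _ in range(20)]
ensemble = np.sum(units, axis=0)

print(f"peak ensemble force at t = {t[np.argmax(ensemble)]:.3f} s")
```

Summing many such twitches is how graded output amplitude and wave shape emerge from recruitment, the property the abstract proposes to exploit for control.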
Relationship between Anxiety and Freezing of Gait
Parkinson’s disease (PD) is the second most common neurodegenerative disease, and a large percentage of PD patients develop freezing of gait (FOG), leading to an overall reduced quality of life. The overarching aim of this thesis is to investigate the relationship between anxiety and freezing of gait, to extend current research on this topic, and to produce findings that could facilitate more adequate treatment of this symptom.
The first study validated the seated functional MRI-compatible version of the walking threat paradigm that was previously found to induce anxiety and FOG. This would enable future studies to examine the neural correlates behind anxiety-induced freezing of gait. The second study investigated the effect of anxiety on the utilisation of body-related visual feedback in the form of an avatar in the virtual environment to improve FOG. The third study investigated the effects of Levodopa on the fronto-striato-limbic circuitry in PD Freezers at rest in their ‘ON’ and ‘OFF’ dopaminergic state.
Findings suggest that the VR seated threat paradigm is an adequate behavioural surrogate for the VR walking threat paradigm, eliciting amounts of anxiety and freezing of gait comparable to the walking version. Anxiety was also found to interfere with the utilisation of sensory feedback to improve FOG, where in highly threatening situations Freezers lack the capacity to process visual feedback for gait. Finally, dopaminergic medication was found to partially modulate the frontoparietal-limbic-striatal circuitry in PD Freezers, where baseline anxiety levels influence the impact of Levodopa on frontoparietal (FPN)-limbic connectivity and FPN-putamen connectivity.
In conclusion, the current thesis suggests that anxiety contributes to freezing of gait, which may present a barrier to treatment and could be a key factor in the heterogeneity observed in responses to medication and sensory cueing.
WearPut: Designing Dexterous Wearable Input based on the Characteristics of Human Finger Motions
Department of Biomedical Engineering (Human Factors Engineering)
Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, the commercially successful smartwatches placed on the wrist drive market growth by sharing the role of smartphones and health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas in video games, education, simulation, and productivity tools. However, these powerful wearables have challenges in interaction with the inevitably limited space for input and output due to the specialized form factors for fitting the body parts. To complement the constrained interaction experience, many wearable devices still rely on other large form factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, the additional devices for interaction can constrain the viability of wearable devices in many usage scenarios by tethering users' hands to the physical devices. This thesis argues that developing novel human-computer interaction techniques for the specialized wearable form factors is vital for wearables to be reliable standalone products.
This thesis seeks to address the issue of constrained interaction experience with novel interaction techniques by exploring finger motions during input for the specialized form factors of wearable devices. Several characteristics of finger input motions are promising for increasing the expressiveness of input on the physically limited input space of wearable devices. First, input techniques with fingers are prevalent on many large form factor devices (e.g., touchscreen or physical keyboard) due to fast and accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., touchscreen or hand tracking system) to detect finger motions. This enables the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices can create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of fingers, with a distinctive appearance, high degrees of freedom, and high sensitivity of joint angle perception, has the potential to widen the range of input available with various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices.
This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, this thesis explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed the specific comfort ranges on variations in fingers, finger regions, and poses due to the unique input context that the touching hand approaches a small and fixed touchscreen with a limited range of angles. Then, finger region-aware systems that recognize the flat and side of the finger were constructed based on the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input. In the second scenario, this thesis revealed distinctive touch profiles of different fingers caused by the unique input context for the touchscreen of smartwatches. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification that enables increases in the expressiveness of touch input techniques. In addition, this thesis supports the general claim with a range of wearable scenarios by exploring the finger input motions in the air. In the third scenario, this thesis investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The results of the observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-fingerstroke relationship, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems with finger stroking. Lastly, this thesis examined the viable locations of in-air thumb touch input to the virtual targets above the palm. 
It was confirmed that fast and accurate sequential thumb touch can be achieved at a total of 8 key locations with the built-in hand tracking system in a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs.
This thesis argues that the objective and subjective results and novel interaction techniques in various wearable scenarios support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, this thesis concludes with thesis contributions, design considerations, and the scope of future research work, for future researchers and developers to implement robust finger-based interaction systems on various types of wearable devices.
Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology
The great behavioral heterogeneity observed between individuals with the same
psychiatric disorder and even within one individual over time complicates both
clinical practice and biomedical research. However, modern technologies are an
exciting opportunity to improve behavioral characterization. Existing
psychiatry methods that are qualitative or unscalable, such as patient surveys
or clinical interviews, can now be collected at a greater capacity and analyzed
to produce new quantitative measures. Furthermore, recent capabilities for
continuous collection of passive sensor streams, such as phone GPS or
smartwatch accelerometer data, open avenues of inquiry that were previously
entirely impractical. Their temporally dense nature enables a cohesive study
of real-time neural and behavioral signals.
To develop comprehensive neurobiological models of psychiatric disease, it
will be critical to first develop strong methods for behavioral quantification.
There is huge potential in what can theoretically be captured by current
technologies, but this in itself presents a large computational challenge --
one that will necessitate new data processing tools, new machine learning
techniques, and ultimately a shift in how interdisciplinary work is conducted.
In my thesis, I detail research projects that take different perspectives on
digital psychiatry, subsequently tying ideas together with a concluding
discussion on the future of the field. I also provide software infrastructure
where relevant, with extensive documentation.
Major contributions include scientific arguments and proof of concept results
for daily free-form audio journals as an underappreciated psychiatry research
datatype, as well as novel stability theorems and pilot empirical success for a
proposed multi-area recurrent neural network architecture.
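The passive-sensing idea above can be made concrete with a minimal behavioral-quantification sketch: deriving a daily mobility feature (distance travelled) from phone GPS fixes via the haversine formula. The function names and the sample coordinates are hypothetical illustrations, not tools from the thesis.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def daily_distance_km(fixes):
    """Total distance across an ordered list of (lat, lon) fixes for one day."""
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

# Hypothetical day of phone GPS fixes (home -> clinic -> home).
day = [(40.7128, -74.0060), (40.7306, -73.9866), (40.7128, -74.0060)]
print(f"{daily_distance_km(day):.2f} km travelled")
```

Simple scalar summaries like this, computed continuously and at scale, are the kind of quantitative behavioral measure the abstract argues digital psychiatry needs.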
A Survey of Using Machine Learning in IoT Security and the Challenges Faced by Researchers
The Internet of Things (IoT) has grown rapidly over the last 15 years, improving significantly and gaining a foothold in many fields. We are now surrounded by billions of IoT devices that integrate directly into our lives: some sit at the center of our homes, while others handle sensitive data in military, healthcare, and datacenter settings, among others. This popularity drives factories and companies to compete in producing and developing many types of these devices without regard for how secure they are. At the same time, the IoT presents an attractive, insecure environment for cyber theft. Machine Learning (ML) and Deep Learning (DL) have also gained importance over the last 15 years, achieving success in the network-security field as well. The IoT shares many security requirements with traditional networks, but its characteristics and environmental constraints impose additional limitations, such as low energy resources, limited computational capability, and small memory. These limitations inspire researchers to seek lightweight security methods that strike a balance between performance and security. This survey provides a comprehensive discussion of the use of machine learning and deep learning for IoT security within the last five years. It lists the challenges faced by each model and algorithm, presents current solutions along with future directions and suggestions, and focuses on research that takes the limitations of the IoT environment into consideration.
From Unimodal to Multimodal: Improving sEMG-Based Pattern Recognition via Deep Generative Models
Multimodal hand gesture recognition (HGR) systems can achieve higher
recognition accuracy than unimodal ones. However, acquiring multimodal gesture
data typically requires users to wear additional sensors, thereby increasing
hardware costs. This paper proposes a novel generative approach that improves
Surface Electromyography (sEMG)-based HGR accuracy via virtual Inertial
Measurement Unit (IMU) signals. Specifically, we first trained a deep
generative model, based on the intrinsic correlation between forearm sEMG and
forearm IMU signals, to generate virtual forearm IMU signals from input sEMG
signals. Subsequently, the sEMG signals and virtual IMU
signals were fed into a multimodal Convolutional Neural Network (CNN) model for
gesture recognition. To evaluate the performance of the proposed approach, we
conducted experiments on 6 databases, including 5 publicly available databases
and our collected database comprising 28 subjects performing 38 gestures,
containing both sEMG and IMU data. The results show that our proposed approach
outperforms the sEMG-based unimodal HGR method (with increases of
2.15%-13.10%). It demonstrates that incorporating virtual IMU signals,
generated by deep generative models, can significantly enhance the accuracy of
sEMG-based HGR. The proposed approach represents a successful attempt to
transition from unimodal HGR to multimodal HGR without additional sensor
hardware.
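The sEMG-to-virtual-IMU pipeline can be sketched with toy data. Here the paper's deep generative model is stood in for by a least-squares map (an assumption made purely to keep the sketch self-contained), and the multimodal CNN is represented only by the concatenated feature matrix it would consume; channel counts and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 8-channel sEMG windows and 6-channel IMU windows
# (accelerometer + gyroscope), correlated through a shared latent gesture code.
n, semg_ch, imu_ch = 500, 8, 6
latent = rng.standard_normal((n, 4))
semg = latent @ rng.standard_normal((4, semg_ch)) + 0.1 * rng.standard_normal((n, semg_ch))
imu = latent @ rng.standard_normal((4, imu_ch)) + 0.1 * rng.standard_normal((n, imu_ch))

# 'Generator' stand-in: a least-squares map from sEMG to IMU, playing the role
# of the paper's deep generative model (which this sketch does not reproduce).
G, *_ = np.linalg.lstsq(semg, imu, rcond=None)
virtual_imu = semg @ G

# Multimodal features: real sEMG concatenated with virtual IMU, ready for a
# downstream classifier in place of the paper's multimodal CNN.
features = np.hstack([semg, virtual_imu])
err = np.abs(virtual_imu - imu).mean()
print(features.shape, f"mean abs IMU reconstruction error = {err:.3f}")
```

The key design point the sketch illustrates is that the second modality is synthesized, not sensed, so the multimodal classifier gains information without extra hardware.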
Emergent Bio-Functional Similarities in a Cortical-Spike-Train-Decoding Spiking Neural Network Facilitate Predictions of Neural Computation
Despite their better bio-plausibility, goal-driven spiking neural networks
(SNNs) have not achieved applicable performance for classifying biological
spike trains, and have shown little bio-functional similarity to traditional
artificial neural networks. In this study, we propose the motorSRNN, a
recurrent SNN topologically inspired by the neural motor circuit of primates.
By employing the motorSRNN in decoding spike trains from the primary motor
cortex of monkeys, we achieved a good balance between classification accuracy
and energy consumption. The motorSRNN communicated with the input by capturing
and cultivating more cosine-tuning, an essential property of neurons in the
motor cortex, and maintained its stability during training. Such
training-induced cultivation and persistence of cosine tuning were also
observed in our monkeys. Moreover, the motorSRNN produced additional
bio-functional similarities at the single-neuron, population, and circuit
levels, demonstrating biological authenticity. Furthermore, ablation studies
on the motorSRNN suggest that long-term stable feedback synapses contribute to
the training-induced cultivation in the motor cortex. Beyond these novel
findings and predictions, we offer a new framework for building authentic
models of neural computation.
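Cosine tuning, the motor-cortex property the motorSRNN is reported to capture, has a standard form: firing rate ≈ b0 + m·cos(θ − θ_pref). A minimal sketch, with assumed tuning parameters rather than the study's data, shows how the preferred direction and modulation depth are recovered by linear regression on cos θ and sin θ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated cosine-tuned neuron: rate = b0 + m * cos(theta - theta_pref) + noise.
theta = rng.uniform(0, 2 * np.pi, 300)   # movement directions
b0, m, theta_pref = 20.0, 8.0, 1.2       # assumed tuning parameters
rates = b0 + m * np.cos(theta - theta_pref) + rng.standard_normal(300)

# Identity m*cos(theta - p) = (m cos p) cos(theta) + (m sin p) sin(theta)
# turns the fit into ordinary linear regression on [1, cos(theta), sin(theta)].
X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
beta, *_ = np.linalg.lstsq(X, rates, rcond=None)
theta_hat = np.arctan2(beta[2], beta[1])   # recovered preferred direction
depth_hat = np.hypot(beta[1], beta[2])     # recovered modulation depth

print(f"recovered preferred direction = {theta_hat:.2f} rad, depth = {depth_hat:.2f}")
```

Quantifying tuning this way is how "cultivation of cosine tuning" during training can be measured in both artificial units and recorded neurons.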
Linguistic Competence and New Empiricism in Philosophy and Science
The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic competence in this framework was regarded as being innate, rule-governed, domain-specific, and fundamentally different from performance, i.e., idiosyncrasies and factors governing linguistic behavior. I analyze state-of-the-art connectionist, deep learning models of natural language processing, most notably large language models, to see what they can tell us about linguistic competence. Deep learning is a statistical technique for the classification of patterns through which artificial intelligence researchers train artificial neural networks containing multiple layers that crunch a gargantuan amount of textual and/or visual data. I argue that these models suggest that linguistic competence should be construed as stochastic, pattern-based, and stemming from domain-general mechanisms. Moreover, I distinguish syntactic from semantic competence, and I show for each the ramifications of the endorsement of a connectionist research program as opposed to the traditional symbolic cognitive science and transformational-generative grammar. I provide a unifying front, consisting of usage-based theories, a construction grammar approach, and an embodied approach to cognition to show that the more multimodal and diverse models are in terms of architectural features and training data, the stronger the case is for the connectionist linguistic competence. I also propose to discard the competence vs. 
performance distinction as theoretically inferior, so that the novel and integrative account of linguistic competence originating in connectionism and empiricism, which I propose and defend in this dissertation, can be put forward in the scientific and philosophical literature.
2016 GREAT Day Program
SUNY Geneseo’s Tenth Annual GREAT Day.