
    Chemoreception and neuroplasticity in respiratory circuits

    The respiratory central pattern generator must respond to chemosensory cues to maintain oxygen (O2) and carbon dioxide (CO2) homeostasis in the blood and tissues. To do this, sensory cells located in the periphery and central nervous system monitor the arterial partial pressures of O2 and CO2 and initiate respiratory and autonomic reflex adjustments in conditions of hypoxia and hypercapnia. In conditions of chronic intermittent hypoxia (CIH), repeated peripheral chemoreceptor input mediated by the nucleus of the solitary tract (NTS) induces plastic changes in respiratory circuits that alter baseline respiratory and sympathetic motor outputs and result in chemoreflex sensitization, active expiration, and arterial hypertension. Herein, we explored the hypothesis that CIH-induced neuroplasticity primarily consists of increased excitability of pre-inspiratory/inspiratory neurons in the pre-Bötzinger complex. To evaluate this hypothesis and elucidate neural mechanisms for the emergence of active expiration and sympathetic overactivity in CIH-treated animals, we extended a previously developed computational model of the brainstem respiratory-sympathetic network to reproduce experimental data on peripheral and central chemoreflexes post-CIH. The model incorporated neuronal connections between the 2nd-order NTS neurons and peripheral chemoreceptor afferents, the respiratory pattern generator, and sympathetic neurons in the rostral ventrolateral medulla in order to capture key features of sympathetic and respiratory responses to peripheral chemoreflex stimulation. Our model identifies the potential neuronal groups recruited during peripheral chemoreflex stimulation that may be required for the development of inspiratory, expiratory, and sympathetic reflex responses. Moreover, our model predicts that pre-inspiratory neurons in the pre-Bötzinger complex experience plasticity of channel expression due to excessive excitation during peripheral chemoreflex activation. Simulations also show that, due to positive interactions between pre-inspiratory neurons in the pre-Bötzinger complex and expiratory neurons in the retrotrapezoid nucleus, increased excitability of the former may lead to the emergence of the active expiratory pattern at normal CO2 levels found after CIH exposure. We conclude that neuron-type-specific neuroplasticity in the pre-Bötzinger complex induced by repetitive episodes of peripheral chemoreceptor activation by hypoxia may contribute to the development of sympathetic overactivity and hypertension.
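    As a self-contained illustration of the mechanism discussed above, the sketch below simulates a single Butera-style persistent-sodium (INaP) pacemaker neuron of the kind commonly used to model pre-inspiratory pre-Bötzinger cells, and represents the proposed CIH-induced plasticity of channel expression simply as a hypothetical increase in the persistent-sodium conductance g_NaP. This is not the authors' full respiratory-sympathetic network model; the parameter values are nominal choices and the spike count is only a rough excitability index.

        # Minimal sketch: Butera-style INaP pacemaker neuron, forward Euler.
        # "CIH-like" below is an assumed increase in g_NaP, not a value from this work.
        import math

        def x_inf(V, theta, sigma):
            return 1.0 / (1.0 + math.exp((V - theta) / sigma))

        def simulate(g_NaP, t_stop=10000.0, dt=0.02):
            # Nominal parameters (pF, nS, mV, ms); E_L chosen in the rhythmically active range.
            C, g_Na, g_K, g_L = 21.0, 28.0, 11.2, 2.8
            E_Na, E_K, E_L = 50.0, -85.0, -60.0
            V, h, n = -60.0, 0.6, 0.1
            spikes, above = 0, False
            for _ in range(int(t_stop / dt)):
                m_inf = x_inf(V, -34.0, -5.0)    # fast Na activation (instantaneous)
                mp_inf = x_inf(V, -40.0, -6.0)   # persistent Na activation
                h_inf = x_inf(V, -48.0, 6.0)     # persistent Na inactivation (steady state)
                n_inf = x_inf(V, -29.0, -4.0)    # delayed-rectifier K activation (steady state)
                tau_h = 10000.0 / math.cosh((V + 48.0) / 12.0)
                tau_n = 10.0 / math.cosh((V + 29.0) / 8.0)
                I_NaP = g_NaP * mp_inf * h * (V - E_Na)
                I_Na = g_Na * m_inf**3 * (1.0 - n) * (V - E_Na)
                I_K = g_K * n**4 * (V - E_K)
                I_L = g_L * (V - E_L)
                V += dt * (-(I_NaP + I_Na + I_K + I_L)) / C
                h += dt * (h_inf - h) / tau_h
                n += dt * (n_inf - n) / tau_n
                if V > -20.0 and not above:
                    spikes += 1                  # count upward threshold crossings
                above = V > -20.0
            return spikes

        # Hypothetical comparison: nominal g_NaP vs. an assumed CIH-like increase.
        for label, g in [("control  (g_NaP = 2.8 nS)", 2.8), ("CIH-like (g_NaP = 4.0 nS)", 4.0)]:
            print(label, "->", simulate(g), "spikes in 10 s (rough excitability index)")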

    Towards a neural hierarchy of time scales for motor control

    Animals show remarkably rich motion skills which are still far from realizable with robots. Inspired by the neural circuits which generate rhythmic motion patterns in the spinal cord of all vertebrates, one main research direction points towards the use of central pattern generators in robots. One of the key advantages of this is that the dimensionality of the control problem is reduced. In this work we investigate this further by introducing a multi-timescale control hierarchy with a hierarchy of recurrent neural networks at its core. By means of robot experiments, we demonstrate that this hierarchy can embed any rhythmic motor signal by imitation learning. Furthermore, the proposed hierarchy allows the tracking of several high-level motion properties (e.g., amplitude and offset), which are usually observed at a slower rate than the generated motion. Although these experiments are preliminary, the results are promising and have the potential to open the door to rich motor skills and advanced control.
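    As a rough illustration of the multi-timescale idea, the sketch below (plain Python, not the authors' recurrent-network implementation) runs a fast rhythmic generator that produces the motor signal while a slow outer loop, updated at a much lower rate, tracks two high-level properties of the motion, amplitude and offset; all rates, targets, and update rules are illustrative assumptions.

        # Minimal two-timescale sketch: fast rhythm generation, slow property tracking.
        import math

        dt_fast = 0.01            # fast loop step (s): generates the rhythmic signal
        slow_every = 50           # slow loop runs once every 50 fast steps (lower rate)
        omega = 2.0 * math.pi     # 1 Hz rhythm
        amp, offset = 0.5, 0.0    # high-level properties adapted on the slow timescale
        amp_target, offset_target = 1.2, 0.3
        eta = 0.2                 # slow-loop tracking rate

        phase, signal = 0.0, []
        for k in range(2000):
            # Fast timescale: rhythmic pattern generation (the motor command).
            phase = (phase + omega * dt_fast) % (2.0 * math.pi)
            signal.append(offset + amp * math.sin(phase))
            # Slow timescale: track the high-level motion properties.
            if k % slow_every == 0:
                amp += eta * (amp_target - amp)
                offset += eta * (offset_target - offset)

        print("final amplitude %.3f, final offset %.3f" % (amp, offset))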

    Analytic and Learned Footstep Control for Robust Bipedal Walking

    Bipedal walking is a complex, balance-critical whole-body motion with inherently unstable, inverted-pendulum-like dynamics. Strong disturbances must be responded to quickly by altering the walking motion and placing the next step in the right place at the right time. Unfortunately, the high number of degrees of freedom of the humanoid body makes the fast computation of well-placed steps a particularly challenging task. Sensor noise, imprecise actuation, and latency in the sensorimotor feedback loop impose further challenges when controlling real hardware. This dissertation addresses these challenges and describes a method of generating a robust walking motion for bipedal robots. Fast modification of footstep placement and timing allows agile control of the walking velocity and the absorption of strong disturbances. In a divide-and-conquer manner, the concepts of motion and balance are solved separately from each other and consolidated in such a way that a low-dimensional balance controller controls the timing and the footstep locations of a high-dimensional motion generator. Central-pattern-generated oscillatory motion signals are used for the synthesis of an open-loop stable walk on flat ground, which lacks the ability to respond to disturbances due to the absence of feedback. The Central Pattern Generator exposes a low-dimensional parameter set to influence the timing and the landing coordinates of the swing foot. For balance control, a simple inverted-pendulum-based physical model is used to represent the principal dynamics of walking. The model is robust to disturbances in the sense that it returns to an ideal trajectory from a wide range of initial conditions by employing a combination of Zero Moment Point control, step timing, and foot placement strategies. The simulation of the model and its controller output are computed efficiently in closed form, supporting high-frequency balance control at the cost of an insignificant computational load. Additionally, the sagittal step size produced by the controller can be trained online during walking with a novel, gradient descent-based machine learning method. While the analytic controller forms the core of reliable walking, the trained sagittal step size complements the analytic controller in order to improve the overall walking performance. The balanced whole-body walking motion arises by using the footstep coordinates and the step timing predicted by the low-dimensional model as control input for the Central Pattern Generator. Real robot experiments are presented as evidence for disturbance-resistant, omnidirectional gait control, with arguably the strongest push-recovery capabilities to date.
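    The balance model referred to above builds on standard inverted-pendulum machinery. The sketch below shows the closed-form Linear Inverted Pendulum prediction and a capture-point-style foot placement rule as a minimal illustration of how footstep coordinates can be computed analytically; it is not the dissertation's full ZMP/step-timing/foot-placement controller, and the numerical values are assumptions.

        # Minimal sketch: closed-form Linear Inverted Pendulum Model (LIPM) prediction
        # and capture-point-style foot placement. Values below are illustrative.
        import math

        G = 9.81           # gravity (m/s^2)
        COM_HEIGHT = 0.9   # assumed constant centre-of-mass height (m)
        TC = math.sqrt(COM_HEIGHT / G)   # LIPM time constant

        def lipm_predict(x0, v0, t):
            """Closed-form CoM position/velocity relative to the support foot after time t."""
            c, s = math.cosh(t / TC), math.sinh(t / TC)
            return x0 * c + TC * v0 * s, (x0 / TC) * s + v0 * c

        def capture_point(x, v):
            """Point (same frame as x) where stepping brings the pendulum to rest."""
            return x + TC * v

        # Illustrative numbers: CoM 5 cm ahead of the support foot, pushed to 0.4 m/s.
        x_end, v_end = lipm_predict(0.05, 0.40, t=0.4)   # predicted state at step end
        print("CoM at step end: %.3f m, %.3f m/s" % (x_end, v_end))
        print("next footstep (rel. to current support foot): %.3f m" % capture_point(x_end, v_end))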

    Deep learning approach to Fourier ptychographic microscopy

    Convolutional neural networks (CNNs) have gained tremendous success in solving complex inverse problems. The aim of this work is to develop a novel CNN framework to reconstruct video sequences of dynamic live cells captured using a computational microscopy technique, Fourier ptychographic microscopy (FPM). The unique feature of FPM is its capability to reconstruct images with both a wide field-of-view (FOV) and high resolution, i.e. a large space-bandwidth product (SBP), by taking a series of low-resolution intensity images. For live cell imaging, a single FPM frame contains thousands of cell samples with different morphological features. Our idea is to fully exploit the statistical information provided by these large spatial ensembles so as to make predictions in a sequential measurement, without using any additional temporal dataset. Specifically, we show that it is possible to reconstruct high-SBP dynamic cell videos by a CNN trained only on the first FPM dataset captured at the beginning of a time-series experiment. Our CNN approach reconstructs a 12800×10800-pixel phase image in only ∼25 seconds, a 50× speedup compared to the model-based FPM algorithm. In addition, the CNN further reduces the required number of images in each time frame by ∼6×. Overall, this significantly improves the imaging throughput by reducing both the acquisition and computational times. The proposed CNN is based on the conditional generative adversarial network (cGAN) framework. We further propose a mixed loss function that combines the standard image-domain loss and a weighted Fourier-domain loss, which leads to improved reconstruction of the high-frequency information. Additionally, we also exploit transfer learning so that our pre-trained CNN can be further optimized to image other cell types. Our technique demonstrates a promising deep learning approach to continuously monitor large live-cell populations over an extended time and gather useful spatial and temporal information with sub-cellular resolution.
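    A minimal sketch of the kind of mixed loss described above, combining an image-domain term with a weighted Fourier-domain term, is given below in PyTorch; the choice of L1 norms and the weight value are assumptions rather than the paper's exact formulation.

        # Minimal sketch: mixed image-domain + weighted Fourier-domain loss.
        import torch
        import torch.nn.functional as F

        def mixed_loss(pred, target, fourier_weight=0.1):
            """pred, target: (N, 1, H, W) phase images; norms and weight are assumed."""
            # Standard image-domain loss.
            image_loss = F.l1_loss(pred, target)
            # Fourier-domain loss on the magnitude spectra, emphasizing the high
            # frequencies that a pure image-domain loss tends to blur.
            pred_f = torch.fft.fft2(pred)
            target_f = torch.fft.fft2(target)
            fourier_loss = F.l1_loss(torch.abs(pred_f), torch.abs(target_f))
            return image_loss + fourier_weight * fourier_loss

        # In a cGAN generator update, the total objective would typically combine an
        # adversarial term with lambda * mixed_loss(generated, ground_truth).
        pred = torch.rand(2, 1, 64, 64, requires_grad=True)
        target = torch.rand(2, 1, 64, 64)
        loss = mixed_loss(pred, target)
        loss.backward()
        print(float(loss))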

    Deep learning in computational microscopy

    We propose to use deep convolutional neural networks (DCNNs) to perform 2D and 3D computational imaging. Specifically, we investigate three different applications. We first try to solve the 3D inverse scattering problem based on learning from a large number of training target and speckle pairs. We also demonstrate a new DCNN architecture to perform Fourier ptychographic microscopy (FPM) reconstruction, which achieves high-resolution phase recovery with considerably less data than standard FPM. Finally, we employ DCNN models that can predict focused 2D fluorescent microscopic images from blurred images captured at overfocused or underfocused planes.
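    All three applications share an image-to-image regression setting; the sketch below shows a generic, minimal encoder-decoder CNN in PyTorch for such a task (e.g., predicting a focused image from a defocused one). The architecture, layer sizes, and training step are illustrative assumptions, not the DCNN designs used in this work.

        # Generic, minimal encoder-decoder CNN for image-to-image regression.
        import torch
        import torch.nn as nn

        class TinyEncoderDecoder(nn.Module):
            def __init__(self, channels=1):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        # One illustrative training step on random tensors standing in for
        # (defocused, focused) image pairs.
        model = TinyEncoderDecoder()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        blurred = torch.rand(4, 1, 128, 128)
        focused = torch.rand(4, 1, 128, 128)
        loss = nn.functional.mse_loss(model(blurred), focused)
        loss.backward()
        optimizer.step()
        print("training loss:", float(loss))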