
    Analysis of Cultured Neuronal Networks Using Intraburst Firing Characteristics

    It is an open question whether neuronal networks cultured on multielectrode arrays retain any capability to usefully process information (learning and memory). A necessary prerequisite for learning is that stimulation can induce lasting changes in the network. To observe these changes, one needs a method that describes the network in sufficient detail while remaining stable under normal circumstances. We analyzed the spontaneous bursting activity encountered in dissociated cultures of rat neocortical cells. Burst profiles (BPs) were constructed by estimating the instantaneous array-wide firing frequency. The shape of the BPs was found to be stable on a time scale of hours. Spatiotemporal detail is provided by analyzing the instantaneous firing frequency per electrode. The resulting phase profiles (PPs) were estimated by aligning BPs to their peak spiking rate over a period of 15 min. The PPs reveal a stable spatiotemporal pattern of activity during bursts over a period of several hours, making them useful for plasticity and learning studies. We also show that PPs can be used to estimate conditional firing probabilities. Doing so yields an approach in which network bursting behavior and functional connectivity can be studied.
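    The core of a burst profile as described above is an estimate of the instantaneous array-wide firing frequency. A minimal sketch of that step, assuming spike timestamps pooled over all electrodes, a 1 ms bin width, and a Gaussian smoothing kernel (all illustrative choices, not the paper's exact parameters):

```python
import numpy as np

def burst_profile(spike_times_ms, bin_ms=1.0, smooth_ms=5.0, duration_ms=200.0):
    """Estimate a burst profile (BP): the instantaneous array-wide
    firing frequency, obtained by binning pooled spike times and
    smoothing with a Gaussian kernel. Returned units are spikes/s."""
    edges = np.arange(0.0, duration_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    rate = counts / (bin_ms / 1000.0)            # spikes per second
    # Gaussian smoothing kernel, truncated at 3 sigma, unit area
    half = int(3 * smooth_ms / bin_ms)
    t = np.arange(-half, half + 1) * bin_ms
    kern = np.exp(-0.5 * (t / smooth_ms) ** 2)
    kern /= kern.sum()
    return np.convolve(rate, kern, mode="same")

# Synthetic burst: 500 spikes clustered around t = 100 ms
rng = np.random.default_rng(0)
spikes = rng.normal(loc=100.0, scale=10.0, size=500)
bp = burst_profile(spikes)
peak_ms = np.argmax(bp) * 1.0    # bin width is 1 ms
```

    Aligning per-electrode versions of this profile to `peak_ms` would give the phase profiles the abstract refers to.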

    "Body-In-The-Loop": Optimizing Device Parameters Using Measures of Instantaneous Energetic Cost

    This paper demonstrates methods for the online optimization of assistive robotic devices such as powered prostheses, orthoses and exoskeletons. Our algorithms estimate the value of a physiological objective in real-time (with a body “in-the-loop”) and use this information to identify optimal device parameters. To handle sensor data that are noisy and dynamically delayed, we rely on a combination of dynamic estimation and response surface identification. We evaluated three algorithms (Steady-State Cost Mapping, Instantaneous Cost Mapping, and Instantaneous Cost Gradient Search) with eight healthy human subjects. Steady-State Cost Mapping is an established technique that fits a cubic polynomial to averages of steady-state measures at different parameter settings. The optimal parameter value is determined from the polynomial fit. Using a continuous sweep over a range of parameters and taking into account measurement dynamics, Instantaneous Cost Mapping identifies a cubic polynomial more quickly. Instantaneous Cost Gradient Search uses a similar technique to iteratively approach the optimal parameter value using estimates of the local gradient. To evaluate these methods in a simple and repeatable way, we prescribed step frequency via a metronome and optimized this frequency to minimize metabolic energetic cost. This use of step frequency allows a comparison of our results to established techniques and enables others to replicate our methods. Our results show that all three methods achieve similar accuracy in estimating optimal step frequency. For all methods, the average error between the predicted minima and the subjects’ preferred step frequencies was less than 1% with a standard deviation between 4% and 5%. Using Instantaneous Cost Mapping, we were able to reduce subject walking time from over an hour to less than 10 minutes. While Instantaneous Cost Gradient Search is not much faster than Steady-State Cost Mapping for a single parameter, it extends favorably to multi-dimensional parameter spaces.
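    The gradient-search idea above can be illustrated in miniature. The sketch below is not the paper's estimator (which handles measurement dynamics and delay); it assumes a hypothetical noisy quadratic cost with its minimum at a step frequency of 1.8 Hz, estimates the local gradient by central finite differences of averaged noisy measurements, and descends it:

```python
import numpy as np

rng = np.random.default_rng(1)

def measured_cost(p):
    """Noisy stand-in for an instantaneous metabolic-cost measurement;
    the true minimum sits at p = 1.8 (hypothetical step frequency, Hz)."""
    return (p - 1.8) ** 2 + 0.02 * rng.normal()

def gradient_search(p0, step=0.3, probe=0.05, iters=200, averages=20):
    """Iteratively follow the locally estimated cost gradient downhill.
    The gradient is a central finite difference of averaged noisy
    measurements taken slightly above and below the current setting."""
    p = p0
    for _ in range(iters):
        lo = np.mean([measured_cost(p - probe) for _ in range(averages)])
        hi = np.mean([measured_cost(p + probe) for _ in range(averages)])
        grad = (hi - lo) / (2 * probe)
        p -= step * grad
    return p

p_opt = gradient_search(p0=1.2)
```

    The appeal noted in the abstract is that this loop generalizes directly to a parameter vector, whereas mapping a full response surface grows expensive with dimension.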

    Instantaneous spectrum estimation of event-based densities

    We present a method for obtaining a time-varying spectrum that is particularly suited when the data are in event-based form. This form arises in many areas of science and engineering, and especially in astronomy, where one has photon counting detectors. The method presented consists of three procedures: first, estimating the density using the kernel method; second, highpass filtering the manifestly positive density; finally, obtaining the time-frequency distribution with a modified Welch's method. For the sake of validation, event-based data are generated from a given distribution, and the proposed method is used to construct the time-frequency spectrum, which is compared to the original density. The results demonstrate the effectiveness of the method.
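    The pipeline above can be sketched end to end. The version below is a simplified stand-in, assuming illustrative parameters throughout: a Gaussian kernel density estimate of the event times, mean removal as a crude substitute for the highpass filter, and a plain (unmodified) Welch periodogram average; the synthetic events are a Poisson stream whose rate is modulated at 25 Hz:

```python
import numpy as np

def event_density(event_times, fs=1000.0, duration=2.0, bw=0.01):
    """Kernel density estimate of event arrival times on a uniform grid
    (Gaussian kernel, bandwidth `bw` seconds): a manifestly positive
    density sampled at rate `fs`."""
    t = np.arange(0, duration, 1.0 / fs)
    d = np.zeros_like(t)
    for ev in event_times:
        d += np.exp(-0.5 * ((t - ev) / bw) ** 2)
    return d / (bw * np.sqrt(2 * np.pi) * len(event_times))

def welch_psd(x, fs, nperseg=512):
    """Welch's method, numpy-only: average the periodograms of
    Hann-windowed, 50%-overlapping segments."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    segs = [x[i:i + nperseg] * win
            for i in range(0, len(x) - nperseg + 1, step)]
    pxx = np.mean([np.abs(np.fft.rfft(s)) ** 2 for s in segs], axis=0)
    f = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return f, pxx

# Synthetic events: thin a uniform candidate stream so the acceptance
# rate is sinusoidally modulated at 25 Hz.
rng = np.random.default_rng(2)
cand = rng.uniform(0, 2.0, size=20000)
keep = rng.uniform(size=cand.size) < 0.5 * (1 + np.sin(2 * np.pi * 25 * cand))
events = cand[keep]

dens = event_density(events)
dens -= dens.mean()                      # crude highpass: remove DC
f, pxx = welch_psd(dens, fs=1000.0)
peak_hz = f[np.argmax(pxx)]
```

    With a full modulation depth and ~10,000 events, the 25 Hz line dominates the spectrum, which mirrors the validation strategy described in the abstract.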

    Bayesian quantum frequency estimation in presence of collective dephasing

    We advocate a Bayesian approach to optimal quantum frequency estimation - an important issue for future quantum enhanced atomic clock operation. The approach provides a clear insight into the interplay between decoherence and the extent of the prior knowledge in determining the optimal interrogation times and optimal estimation strategies. We propose a general framework capable of describing local oscillator noise as well as additional collective atomic dephasing effects. For a Gaussian noise the average Bayesian cost can be expressed using the quantum Fisher information, and thus we establish a direct link between the two, often competing, approaches to quantum estimation theory.
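    A toy numerical version of Bayesian frequency estimation helps fix ideas. The sketch below is not the paper's framework; it assumes a hypothetical Ramsey-type measurement whose outcome probability is (1 + e^(-gamma*T) cos(omega*T))/2, with illustrative values for the unknown frequency omega, a dephasing rate gamma, and the interrogation time T, and it updates a Gaussian prior on a frequency grid:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Ramsey interrogation: outcome "+" occurs with probability
# (1 + exp(-gamma*T) * cos(omega*T)) / 2, where omega is the unknown
# frequency and gamma a collective-dephasing rate (illustrative values).
omega_true, gamma, T = 0.7, 0.1, 1.0

def simulate(n):
    p_plus = 0.5 * (1 + np.exp(-gamma * T) * np.cos(omega_true * T))
    return rng.uniform(size=n) < p_plus      # boolean "+" outcomes

# Grid-based Bayesian update with a Gaussian prior on omega; the grid
# is restricted to [0, pi/T] so that cos(omega*T) identifies omega.
grid = np.linspace(0.0, np.pi / T, 2000)
prior = np.exp(-0.5 * ((grid - 0.8) / 0.5) ** 2)
post = prior / prior.sum()
for plus in simulate(500):
    p = 0.5 * (1 + np.exp(-gamma * T) * np.cos(grid * T))
    post = post * (p if plus else 1 - p)     # Bayes rule, one outcome
    post /= post.sum()

omega_hat = np.sum(grid * post)              # posterior-mean estimate
```

    The dephasing factor e^(-gamma*T) flattens the likelihood at long T, which is the decoherence/prior-knowledge trade-off the abstract describes: the optimal T balances fringe sensitivity against contrast loss.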

    Iron Losses Prediction with PWM Supply Using Low and High Frequency Measurements: Analysis and Results Comparison

    In this paper, two different methods for iron loss prediction are analyzed. The first method is based on the classical separation of loss contributions (hysteresis, eddy-current, and excess losses). The model requires loss contribution separation using iron loss measurements with sinusoidal supply. In this paper, this method will be called the "low-frequency method." The second method, named the "high-frequency method," is based on the assumption that, under pulsewidth modulation supply, the higher-order flux density harmonics do not influence the magnetic work conditions; these conditions depend only on the amplitude of the fundamental harmonic of the flux density. In this paper, both proposed methodologies and the related measurements are described in detail, and the obtained results are compared with the experimental ones. The experimental results show that both methods yield excellent results. The high-frequency method is more accurate than the low-frequency one but requires a more complex test bench. Depending on the accuracy required by the user, the more convenient method can be chosen, with the guarantee that the estimation errors will be lower than 5%.
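    The loss-contribution separation behind the low-frequency method can be sketched numerically. The sketch below assumes a generic Bertotti-style three-term model, P(f, B) = k_h·f·B² + k_e·f²·B² + k_x·f^1.5·B^1.5 (not necessarily the exponents used in the paper), and recovers the hysteresis, eddy-current, and excess coefficients from sinusoidal-supply loss measurements by linear least squares:

```python
import numpy as np

def fit_loss_coefficients(f, B, P):
    """Least-squares separation of specific iron loss P (W/kg) measured
    at frequencies f (Hz) and peak flux densities B (T) into hysteresis,
    eddy-current, and excess terms of a Bertotti-style model."""
    A = np.column_stack([f * B**2, f**2 * B**2, f**1.5 * B**1.5])
    coeffs, *_ = np.linalg.lstsq(A, P, rcond=None)
    return coeffs        # (k_h, k_e, k_x)

# Synthetic "measurements" generated from known (illustrative) coefficients.
k_true = np.array([0.02, 1e-4, 5e-4])
f = np.array([50.0, 50.0, 100.0, 100.0, 200.0, 200.0, 400.0, 400.0])
B = np.array([1.0, 1.5, 1.0, 1.5, 1.0, 1.5, 1.0, 1.5])
A = np.column_stack([f * B**2, f**2 * B**2, f**1.5 * B**1.5])
P = A @ k_true
k_fit = fit_loss_coefficients(f, B, P)
```

    Once the coefficients are separated, the model extrapolates the individual contributions to the harmonic content of a PWM supply, which is where the two methods in the abstract diverge.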