27 research outputs found

    Fitts' Law for speed-accuracy trade-off is a diversity sweet spot in sensorimotor control

    Get PDF
    Human sensorimotor control exhibits remarkable speed and accuracy, as celebrated in Fitts' law for reaching. Much less studied is how this is possible despite being implemented by neurons and muscle components with severe speed-accuracy trade-offs (SATs). Here we develop a theory that connects the SATs at the system and hardware levels, and use it to explain Fitts' law for reaching and related laws. These results show that diversity between hardware components can be exploited to achieve both fast and accurate control performance using slow or inaccurate hardware. Such "diversity sweet spots" (DSSs) are ubiquitous in biology and technology, and explain why large heterogeneities exist in biological and technical components and how both engineers and natural selection routinely evolve fast and accurate systems from imperfect hardware.
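    The Fitts' law cited in this abstract predicts movement time from a logarithmic "index of difficulty". A minimal sketch of the standard form MT = a + b·log2(2D/W); the constants a and b here are illustrative placeholders, not values from the paper.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) under Fitts' law:
    MT = a + b * log2(2D / W), where the log term is the
    'index of difficulty' in bits. a and b are illustrative
    subject/device-specific constants, not fitted values."""
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# Doubling the distance (or halving the target width) adds one bit
# of difficulty, hence a constant increment b to the movement time.
mt_near = fitts_movement_time(distance=0.08, width=0.02)  # ID = 3 bits
mt_far = fitts_movement_time(distance=0.16, width=0.02)   # ID = 4 bits
```

    The linear dependence on the index of difficulty is the system-level SAT that the paper traces back to neural and muscular hardware SATs.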

    Connecting the Speed-Accuracy Trade-Offs in Sensorimotor Control and Neurophysiology Reveals Diversity Sweet Spots in Layered Control Architectures

    Get PDF
    Nervous systems sense, communicate, compute, and actuate movement using distributed components with trade-offs in speed, accuracy, sparsity, noise, and saturation. Nevertheless, the resulting control can achieve remarkably fast, accurate, and robust performance due to a highly effective layered control architecture. However, this architecture has received little attention in existing research, in part because of the lack of a theory that connects speed-accuracy trade-offs (SATs) in the components' neurophysiology with system-level sensorimotor control, and that characterizes the overall system performance when different layers (planning vs. reflex) work jointly. In this thesis, we present a theoretical framework that provides a synthetic perspective on both levels and layers. We then use this framework to clarify the properties of effective layered architectures and explain why there exists extreme diversity across layers (planning vs. reflex) and within levels (sensorimotor vs. neural/muscle hardware). The framework characterizes how the sensorimotor SATs are constrained by the component SATs of neurons communicating with spikes and their sensory and muscle endpoints, in both stochastic and deterministic models. The theoretical predictions are also verified using driving experiments. Our results lead to a novel concept, termed "diversity sweet spots" (DSSs): appropriate diversity in the properties of neurons and muscles across layers and within levels helps create systems that are both fast and accurate despite being built from components that are individually slow or inaccurate. At the component level, this concept explains why there are extreme heterogeneities in neural and muscle composition. At the system level, DSSs explain the benefits of layering, which allows extreme heterogeneities in speed and accuracy in different sensorimotor loops.
    Similar issues and properties also extend down to the cellular level in biology and outward to our most advanced network technologies, from the smart grid to the Internet of Things. We present our initial steps in expanding our framework to these areas and outline this wide-open area of research as a direction for future work.

    Experimental and educational platforms for studying architecture and tradeoffs in human sensorimotor control

    Get PDF
    This paper describes several surprisingly rich but simple demos and a new experimental platform for human sensorimotor control research and controls education. The platform safely simulates a canonical sensorimotor task, riding a mountain bike down a steep, twisting, bumpy trail, using a standard display and an inexpensive off-the-shelf gaming steering wheel with a force-feedback motor. We use the platform to verify our theory, presented in a companion paper. The theory describes how component hardware speed-accuracy trade-offs (SATs) in control loops impose corresponding SATs at the system level, and how effective architectures mitigate the deleterious impact of hardware SATs through layering and "diversity sweet spots" (DSSs). Specifically, we measure the impacts on system performance of delays, quantization, and uncertainties in sensorimotor control loops, both within the subject's nervous system and added externally via software in the platform. This provides a remarkably rich test of the theory, which is consistent with all preliminary data. Moreover, as the theory predicted, subjects effectively multiplex higher-layer planning/tracking of the trail using vision with lower-layer rejection of unseen bump disturbances using reflexes. In contrast, humans multitask badly on tasks that do not naturally distribute across layers (e.g., texting and driving). The platform is cheap to build and easy to program for both research and education purposes, yet verifies our theory, which is aimed at closing a crucial gap between neurophysiology and sensorimotor control. The platform can be downloaded at https://github.com/Doyle-Lab/WheelCon.
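    The kind of manipulation the platform performs, injecting delay and quantization into a control loop and measuring the degradation, can be sketched in a toy discrete-time tracking loop. This is an illustrative model only, not the WheelCon platform's dynamics or code; the plant, gain, and error metric are all assumptions made for the sketch.

```python
from collections import deque

def quantize(u, n_bits, u_max=1.0):
    """Quantize a control signal to 2**n_bits levels over [-u_max, u_max]."""
    step = 2 * u_max / (2 ** n_bits)
    return round(u / step) * step

def run_loop(delay_steps, n_bits, steps=200, gain=0.2):
    """Track a unit step reference with proportional feedback whose
    command is delayed by `delay_steps` samples and quantized to
    `n_bits` bits. Returns the mean absolute tracking error.
    Toy integrator plant, chosen purely for illustration."""
    x = 0.0
    pipe = deque([0.0] * delay_steps)  # transport delay on the command
    total_err = 0.0
    for _ in range(steps):
        err = 1.0 - x
        total_err += abs(err)
        pipe.append(quantize(gain * err, n_bits))
        x += pipe.popleft()  # simple integrator plant
    return total_err / steps
```

    In this sketch, increasing the delay or coarsening the quantization both raise the mean tracking error, mirroring the component-level SATs whose system-level impact the platform measures.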

    An integrative perspective to LQ and ℓ∞ control for delayed and quantized systems

    Get PDF
    Deterministic and stochastic approaches to handling uncertainty may incur very different complexities in computation time and memory usage, in addition to using different uncertainty models. For linear systems with delay and rate-constrained communication between the observer and the controller, previous work shows that a deterministic approach, ℓ∞ control, has low complexity but can only handle bounded disturbances. In this article, we take a stochastic approach and propose a linear-quadratic (LQ) controller that can handle arbitrarily large disturbances but has high complexity in time and space. The differences in robustness and complexity of the ℓ∞ and LQ controllers motivate the design of a hybrid controller that interpolates between the two: the ℓ∞ controller is applied when the disturbance is not too large (normal mode), and the LQ controller is resorted to otherwise (acute mode). We characterize the switching behavior between the normal and acute modes. Using our theoretical bounds, supplemented by numerical experiments, we show that the hybrid controller can achieve a sweet spot in the robustness-complexity trade-off, i.e., reject occasional large disturbances while operating with low complexity most of the time.
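    The normal/acute mode switch described in the abstract can be sketched as a simple threshold rule. The two controllers here are placeholder linear feedback laws passed in as callables, not the paper's ℓ∞ or LQ designs, and the scalar disturbance bound is an assumption of the sketch.

```python
def hybrid_step(state, disturbance_estimate, bound, linf_control, lq_control):
    """One step of the hybrid scheme: the low-complexity l-infinity
    controller handles disturbances within `bound` (normal mode);
    the LQ controller takes over otherwise (acute mode).
    Controllers are caller-supplied placeholders."""
    if abs(disturbance_estimate) <= bound:
        return "normal", linf_control(state)
    return "acute", lq_control(state)

# Toy usage with stand-in feedback gains.
mode, u = hybrid_step(state=1.0, disturbance_estimate=0.3, bound=0.5,
                      linf_control=lambda x: -0.8 * x,
                      lq_control=lambda x: -1.2 * x)
```

    The design intent, per the abstract, is that the cheap controller runs most of the time and the expensive one is invoked only for occasional large disturbances.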

    Towards a Theory of Control Architecture: A quantitative framework for layered multi-rate control

    Full text link
    This paper focuses on the need for a rigorous theory of layered control architectures (LCAs) for complex engineered and natural systems, such as power systems, communication networks, autonomous robotics, bacteria, and human sensorimotor control. All deliver extraordinary capabilities, but they lack a coherent theory of analysis and design, partly due to the diverse domains across which LCAs can be found. In contrast, there is a core universal set of control concepts and theory that applies very broadly and accommodates necessary domain-specific specializations. However, control methods are typically used only to design algorithms in components within a larger system designed by others, typically with minimal or no theory. This points towards a need for natural but large extensions of robust performance from control to the full decision and control stack. It is encouraging that the successes of extant architectures, from bacteria to the Internet, are due to strikingly universal mechanisms and design patterns. This is largely due to convergent evolution by natural selection and not intelligent design, particularly when compared with the sophisticated design of components. Our aim here is to describe the universals of architecture and sketch tentative paths towards a useful design theory.
    Comment: Submitted to IEEE Control Systems Magazine.

    Low-dimensional representations of neural time-series data with applications to peripheral nerve decoding

    Get PDF
    Bioelectronic medicines, implanted devices that influence physiological states by peripheral neuromodulation, have promise as a new way of treating diverse conditions from rheumatism to diabetes. We here explore ways of creating nerve-based feedback for the implanted systems to act in a dynamically adapting closed loop. In a first empirical component, we carried out decoding studies on in vivo recordings of cat and rat bladder afferents. In a low-resolution dataset, we selected informative frequency bands of the neural activity using information theory to then relate to bladder pressure. In a second, high-resolution dataset, we analysed the population code for bladder pressure, again using information theory, and proposed an informed decoding approach that promises enhanced robustness and automatic re-calibration by creating a low-dimensional population vector. Coming from a different direction of more general time-series analysis, we embedded a set of peripheral nerve recordings in a space of main firing characteristics by dimensionality reduction in a high-dimensional feature space, and automatically proposed single, efficiently implementable estimators for each identified characteristic. For bioelectronic medicines, this feature-based pre-processing method enables an online signal characterisation of low-resolution data where spike sorting is impossible but simple power measures discard informative structure. Analyses were based on surrogate data from a self-developed and flexibly adaptable computer model that we made publicly available. The wider utility of two feature-based analysis methods developed in this work was demonstrated on a variety of datasets from across science and industry. (1) Our feature-based generation of interpretable low-dimensional embeddings for unknown time-series datasets answers a need for simplifying and harvesting the growing body of sequential data that characterises modern science.
    (2) We propose an additional, supervised pipeline to tailor feature subsets to collections of classification problems. On a literature-standard library of time-series classification tasks, we distilled 22 generically useful estimators and made them easily accessible.
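    The feature-based embedding idea, mapping each time series to a small vector of cheap, interpretable estimators, can be sketched with a few stand-in features. These three features (mean, standard deviation, lag-1 autocorrelation) are illustrative choices for the sketch, not the thesis's distilled estimator set.

```python
import math

def series_features(x):
    """Three cheap, implementable estimators per series: mean,
    standard deviation, and lag-1 autocorrelation. Stand-ins for
    the richer feature set described in the thesis."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    if var == 0:
        ac1 = 0.0  # autocorrelation undefined for constant series
    else:
        ac1 = sum((x[i] - mean) * (x[i + 1] - mean)
                  for i in range(n - 1)) / (n * var)
    return [mean, std, ac1]

def embed(dataset):
    """Map each time series to its feature vector, giving a
    low-dimensional, interpretable embedding of the dataset."""
    return [series_features(x) for x in dataset]

# Two toy series: a constant one and an alternating one.
emb = embed([[1.0, 1.0, 1.0, 1.0], [1.0, -1.0, 1.0, -1.0]])
```

    Because each feature is a simple streaming statistic, this style of pre-processing remains feasible online on implanted hardware where spike sorting is not.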

    Life Sciences Program Tasks and Bibliography

    Get PDF
    This document includes information on all peer-reviewed projects funded by the Office of Life and Microgravity Sciences and Applications, Life Sciences Division, during fiscal year 1995. Additionally, this inaugural edition of the Task Book includes information for FY 1994 programs. This document will be published annually and made available to scientists in the space life sciences field both as a hard copy and as an interactive Internet web page.