Perceptuomotor bias in the imitation of steady-state vowels
Previous studies suggest that speakers are systematically inaccurate, or biased, when imitating self-produced vowels. The direction of these biases in formant space and their variation may offer clues about the organization of the vowel perceptual space. To examine these patterns, three male speakers were asked to imitate 45 self-produced vowels that were systematically distributed in F1/F2 space. All three speakers showed imitation bias, and the bias magnitudes were significantly larger than those predicted by a model of articulatory noise. Each speaker showed a different pattern of bias directions, but the pattern was unrelated to the locations of prototypical vowels produced by that speaker. However, there were substantial quantitative regularities: (1) the distributions of imitation variability and bias magnitudes were similar for all speakers, (2) the imitation variability was independent of the bias magnitudes, and (3) the imitation variability (a production measure) was commensurate with the formant discrimination limen (a perceptual measure). These results indicate that there is additive Gaussian noise in the imitation process that independently affects each formant and that there are speaker-dependent and potentially nonlinguistic biases in vowel perception and production.
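The proposed account, additive Gaussian noise on each formant plus a fixed per-target bias, can be sketched as a small simulation. All numbers here (the formant grid, `noise_sd`, and the per-target `bias`) are illustrative assumptions, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target vowels on a 5 x 9 grid in F1/F2 space (Hz),
# mirroring the 45 systematically distributed targets in the study
f1 = np.linspace(300, 800, 5)
f2 = np.linspace(900, 2300, 9)
targets = np.array([(a, b) for a in f1 for b in f2])  # shape (45, 2)

# Additive Gaussian noise affecting each formant independently,
# plus a fixed speaker-dependent bias per target (illustrative values)
noise_sd = 20.0                           # Hz, per formant, per trial
bias = rng.normal(0, 30, targets.shape)   # systematic offset per target

def imitate(targets, n_trials=50):
    """Simulate repeated imitations of each target vowel."""
    trials = targets + bias + rng.normal(0, noise_sd, (n_trials,) + targets.shape)
    mean_imit = trials.mean(axis=0)
    bias_mag = np.linalg.norm(mean_imit - targets, axis=1)  # systematic error
    variability = trials.std(axis=0).mean(axis=1)           # trial-to-trial spread
    return bias_mag, variability

bias_mag, variability = imitate(targets)
# Under this model, bias magnitude and trial-to-trial variability
# arise from separate terms, matching regularity (2) above
r = np.corrcoef(bias_mag, variability)[0, 1]
print(f"mean bias magnitude: {bias_mag.mean():.1f} Hz, correlation: {r:.2f}")
```

Because the bias term is fixed and the noise term is resampled per trial, the two measures are generated independently, which is the structure the study's regularities point to.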
Exploiting Large Neuroimaging Datasets to Create Connectome-Constrained Approaches for more Robust, Efficient, and Adaptable Artificial Intelligence
Despite the progress in deep learning networks, efficient learning at the
edge (enabling adaptable, low-complexity machine learning solutions) remains a
critical need for defense and commercial applications. We envision a pipeline
to utilize large neuroimaging datasets, including maps of the brain which
capture neuron and synapse connectivity, to improve machine learning
approaches. We have pursued different approaches within this pipeline
structure. First, as a demonstration of data-driven discovery, the team has
developed a technique for discovery of repeated subcircuits, or motifs. These
were incorporated into a neural architecture search approach to evolve network
architectures. Second, we have conducted analysis of the heading direction
circuit in the fruit fly, which performs fusion of visual and angular velocity
features, to explore augmenting existing computational models with new insight.
Our team discovered a novel pattern of connectivity, implemented a new model,
and demonstrated sensor fusion on a robotic platform. Third, the team analyzed
circuitry for memory formation in the fruit fly connectome, enabling the design
of a novel generative replay approach. Finally, the team has begun analysis of
connectivity in mammalian cortex to explore potential improvements to
transformer networks. These constraints increased network robustness on the
most challenging examples in the CIFAR-10-C computer vision robustness
benchmark task, while reducing learnable attention parameters by over an order
of magnitude. Taken together, these results demonstrate multiple potential
approaches to utilize insight from neural systems for developing robust and
efficient machine learning techniques.
Comment: 11 pages, 4 figures
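The first step, data-driven discovery of repeated subcircuits (motifs), can be illustrated with a minimal sketch: exhaustively counting 3-node connectivity patterns in a toy directed graph. The `graph` edges and the `triad_signature` canonicalization below are illustrative assumptions, not the team's actual method or connectome data:

```python
from itertools import combinations, permutations

# Toy directed "connectome": neuron -> set of postsynaptic neurons
# (invented edges containing two copies of a feed-forward triangle)
graph = {
    "a": {"b", "c"}, "b": {"c"}, "c": set(),
    "d": {"e", "f"}, "e": {"f"}, "f": set(),
}

def triad_signature(g, trio):
    """Canonical signature of the directed edges among three nodes,
    made invariant to node relabeling by maximizing over orderings."""
    best = None
    for p in permutations(trio):
        bits = tuple(int(p[j] in g[p[i]])
                     for i in range(3) for j in range(3) if i != j)
        if best is None or bits > best:
            best = bits
    return best

def count_motifs(g):
    """Count each 3-node connectivity pattern; patterns that repeat
    more often than chance would be candidate motifs."""
    counts = {}
    for trio in combinations(g, 3):
        sig = triad_signature(g, trio)
        if any(sig):  # ignore fully disconnected triads
            counts[sig] = counts.get(sig, 0) + 1
    return counts

counts = count_motifs(graph)
```

In this toy graph the feed-forward triangle (a→b, a→c, b→c) occurs twice, once in each node group, so its signature gets a count of 2; a real pipeline would compare such counts against randomized null graphs before calling a pattern a motif.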
A Domain-Agnostic Approach for Characterization of Lifelong Learning Systems
Despite the advancement of machine learning techniques in recent years,
state-of-the-art systems lack robustness to "real world" events, where the
input distributions and tasks encountered by the deployed systems will not be
limited to the original training context, and systems will instead need to
adapt to novel distributions and tasks while deployed. This critical gap may be
addressed through the development of "Lifelong Learning" systems that are
capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3)
Scalability. Unfortunately, efforts to improve these capabilities are typically
treated as distinct areas of research that are assessed independently, without
regard to the impact of each separate capability on other aspects of the
system. We instead propose a holistic approach, using a suite of metrics and an
evaluation framework to assess Lifelong Learning in a principled way that is
agnostic to specific domains or system techniques. Through five case studies,
we show that this suite of metrics can inform the development of varied and
complex Lifelong Learning systems. We highlight how the proposed suite of
metrics quantifies performance trade-offs present during Lifelong Learning
system development - both the widely discussed Stability-Plasticity dilemma and
the newly proposed relationship between Sample Efficient and Robust Learning.
Further, we make recommendations for the formulation and use of metrics to
guide the continuing development of Lifelong Learning systems and assess their
progress in the future.
Comment: To appear in Neural Networks
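To make the trade-off concrete, here is a minimal sketch of two widely used lifelong-learning metrics, backward and forward transfer, computed from a task-by-task accuracy matrix. The matrix `R` and the `baseline` accuracies are invented for illustration, and these two standard quantities stand in for (rather than reproduce) the paper's fuller metric suite:

```python
import numpy as np

# Accuracy matrix R[i, j]: accuracy on task j measured after training
# on task i, for a 3-task sequence (illustrative numbers)
R = np.array([
    [0.90, 0.20, 0.10],
    [0.80, 0.85, 0.30],
    [0.75, 0.80, 0.88],
])
baseline = np.array([0.15, 0.15, 0.15])  # accuracy before any training

n = R.shape[0]

# Backward transfer: how training on later tasks changed earlier-task
# performance. Negative values indicate forgetting -- the stability
# side of the Stability-Plasticity dilemma.
bwt = np.mean([R[n - 1, j] - R[j, j] for j in range(n - 1)])

# Forward transfer: benefit on a task before it is trained on,
# relative to the untrained baseline -- a sample-efficiency signal.
fwt = np.mean([R[j - 1, j] - baseline[j] for j in range(1, n)])

print(f"backward transfer: {bwt:+.3f}, forward transfer: {fwt:+.3f}")
```

With these illustrative numbers, backward transfer is -0.10 (some forgetting of earlier tasks) while forward transfer is +0.10 (earlier tasks helped later ones), showing how a metric suite can expose both sides of the trade-off at once.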
Network analysis of toxic chemicals and symptoms: Implications for designing first-responder systems