Neurorobotics: A Thriving Community and a Promising Pathway Toward Intelligent Cognitive Robots
Neurorobots are robots whose control systems are modeled after some aspect of the brain. Because the brain is so closely coupled to the body and situated in the environment, neurorobots can be a powerful tool for studying neural function in a holistic fashion. Neurorobotics may also be a means to develop autonomous systems with some level of biological intelligence. The present article provides my perspective on this field, points out some of its landmark events, and discusses its future potential.
A Developmental Neuro-Robotics Approach for Boosting the Recognition of Handwritten Digits
Developmental psychology and neuroimaging research have identified a close link between numbers and fingers, which can boost initial number knowledge in children. Recent evidence shows that simulating children's embodied strategies can also improve machine intelligence. This article explores the application of embodied strategies to convolutional neural network models in the context of developmental neurorobotics, where training information is likely to be acquired gradually during operation rather than being abundant and fully available, as in classical machine learning scenarios. The experimental analyses show that proprioceptive information from the robot's fingers can improve network accuracy in the recognition of handwritten Arabic digits when training examples and epochs are few. This result is consistent with brain imaging and longitudinal studies of young children. In conclusion, these findings support the relevance of embodiment in the training of artificial agents and show a possible way to humanize the learning process, in which the robotic body can express the internal processes of artificial intelligence, making them more understandable to humans.
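The article does not reproduce an implementation; purely as an illustrative sketch, the fusion idea it describes can be pictured as a convolutional network whose visual features are concatenated with a proprioceptive vector (e.g., simulated finger-joint angles) before classification. All names and dimensions below are assumptions for illustration, not the authors' model.

```python
# Minimal sketch (an assumption, not the authors' code): a CNN that fuses
# image features with a proprioceptive vector before digit classification.
import torch
import torch.nn as nn

class ProprioceptiveDigitNet(nn.Module):
    def __init__(self, proprio_dim=10, n_classes=10):
        super().__init__()
        # Visual pathway: two conv blocks over a 1x28x28 digit image.
        self.vision = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),  # -> 32 * 7 * 7 features
        )
        # Fusion head: image features concatenated with finger-joint angles.
        self.head = nn.Sequential(
            nn.Linear(32 * 7 * 7 + proprio_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, image, proprio):
        feats = self.vision(image)
        return self.head(torch.cat([feats, proprio], dim=1))

# Usage: a batch of 8 digit images plus 10 joint angles per sample.
model = ProprioceptiveDigitNet()
logits = model(torch.randn(8, 1, 28, 28), torch.randn(8, 10))
```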
Spiking Neurons Integrating Visual Stimuli Orientation and Direction Selectivity in a Robotic Context
Visual motion detection is essential for the survival of many species. The phenomenon involves several spatial properties that are not fully understood at the level of the neural circuit. This paper proposes a computational model of a visual motion detector that integrates direction- and orientation-selectivity features. A recent experiment in the Drosophila model highlights that stimulus orientation influences the neural response of direction-selective cells. However, this interaction and its significance at the behavioral level are currently unknown. Another objective of this article is therefore to study the effect of merging these two visual processes when contextualized in a neuro-robotic model and an operant conditioning procedure. In this work, the learning task was solved using an artificial spiking neural network acting as the brain controller for virtual and physical robots, which showed behavioral modulation arising from the integration of both visual processes.
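The abstract does not specify the network's equations; as a rough illustration only, the sketch below shows a leaky integrate-and-fire neuron summing currents from hypothetical direction-selective and orientation-selective inputs, the kind of integration the model describes. Parameters and signal shapes are assumptions.

```python
# Illustrative sketch (not the paper's model): a leaky integrate-and-fire
# neuron driven by summed direction- and orientation-selective currents.
import numpy as np

def lif_response(direction_current, orientation_current,
                 tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Integrate two input currents; return spike times (ms)."""
    v, spikes = 0.0, []
    for step, (i_dir, i_ori) in enumerate(zip(direction_current,
                                              orientation_current)):
        # Membrane potential decays with time constant tau while
        # integrating the combined input current.
        v += dt * (-v + i_dir + i_ori) / tau
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset            # reset after spiking
    return spikes

# A stimulus whose orientation component modulates the direction response:
# spikes occur only when the two currents are jointly strong enough.
t = np.arange(0, 200)
spikes = lif_response(0.8 * np.ones_like(t, dtype=float),
                      0.4 * np.sin(2 * np.pi * t / 50.0))
```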
Grounding semantic cognition using computational modelling and network analysis
The overarching objective of this thesis is to further the field of grounded semantics through a range of computational and empirical studies. Over the past thirty years there have been many algorithmic advances in the modelling of semantic cognition. A commonality across these cognitive models is a reliance on hand-engineered 'toy models'. Despite incorporating newer techniques (e.g., long short-term memory), the model inputs remain unchanged. We argue that the inputs to these traditional semantic models bear little resemblance to real human experiences. In this dissertation, we ground our neural network models by training them with real-world visual scenes using naturalistic photographs. Our approach is an alternative to both hand-coded features and embodied raw sensorimotor signals.
We conceptually replicate the mutually reinforcing nature of hybrid (feature-based and grounded) representations using silhouettes of concrete concepts as model inputs. We then gradually develop a novel grounded cognitive semantic representation, which we call scene2vec, starting with object co-occurrences and then adding emotions and language-based tags. Limitations of our scene-based representation are identified for more abstract concepts (e.g., freedom). We further present a large-scale human semantics study, which reveals that small-world semantic network topologies are context-dependent and that scenes are the most dominant cognitive dimension. This finding leads us to conclude that there is no meaning without context. Lastly, scene2vec exhibits promising human-like context-sensitive stereotypes (e.g., gender role bias), and we explore how such stereotypes can be reduced by targeted debiasing. In conclusion, this thesis provides support for a novel computational viewpoint on investigating meaning: scene-based grounded semantics. Future research scaling scene-based semantic models to human levels through virtual grounding has the potential to unearth new insights into the human mind and concurrently to advance artificial general intelligence by enabling robots, embodied or otherwise, to acquire and represent meaning directly from the environment.
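scene2vec itself is not reproduced here; purely as an assumed illustration, the sketch below builds the first ingredient the thesis describes, object co-occurrence vectors over labelled scenes, which later stages would extend with emotion and language-based tags. The function name and data format are hypothetical.

```python
# Illustrative sketch (an assumption, not the thesis code): concept vectors
# from object co-occurrence counts across labelled naturalistic scenes.
from collections import Counter, defaultdict

def cooccurrence_vectors(scenes):
    """scenes: list of sets of object labels present in one photograph.
    Returns {concept: Counter of co-occurring concepts}."""
    vectors = defaultdict(Counter)
    for objects in scenes:
        for obj in objects:
            # Each concept is represented by the objects it appears with.
            vectors[obj].update(objects - {obj})
    return vectors

scenes = [{"dog", "ball", "grass"}, {"dog", "leash", "grass"},
          {"ball", "child", "grass"}]
vecs = cooccurrence_vectors(scenes)
print(vecs["dog"])  # Counter({'grass': 2, 'ball': 1, 'leash': 1})
```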
Entangled predictive brain: emotion, prediction and embodied cognition
How does the living body impact, and perhaps even help constitute, the thinking, reasoning, feeling agent? This is the guiding question that the following work seeks to answer. The subtitle of this project is 'emotion, prediction and embodied cognition' for good reason: these are the three closely related themes that tie together the various chapters of the following thesis. The central claim is that a better understanding of the nature of emotion offers valuable insight for understanding the nature of the so-called 'predictive mind', including a powerful new way to think about the mind as embodied.
Recently a new perspective has arguably taken pole position in both philosophy of mind and the cognitive sciences when it comes to discussing the nature of mind. This framework takes the brain to be a probabilistic prediction engine. Such engines, so the framework proposes, are dedicated to the task of minimizing the disparity between how they expect the world to be and how the world actually is. Part of the power of the framework is the elegant suggestion that much of what we take to be central to human intelligence (perception, action, emotion, learning and language) can be understood within the framework of prediction and error reduction. In what follows I will refer to this general approach to understanding the mind and brain as 'predictive processing'.
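As a toy illustration only (the thesis is philosophical and offers no implementation), prediction-error minimization can be written as a simple update loop: the agent predicts an input, measures the error, and nudges its estimate to shrink future error. Every name and value here is a made-up example.

```python
# Toy sketch of prediction-error minimization (an illustration, not a
# model from the thesis): an estimate is nudged toward observations so
# the disparity between expected and actual input shrinks over time.
def minimize_prediction_error(observations, learning_rate=0.1):
    estimate = 0.0
    for obs in observations:
        error = obs - estimate             # prediction error
        estimate += learning_rate * error  # update to reduce future error
    return estimate

# The estimate converges toward the true signal (here, 5.0).
print(minimize_prediction_error([5.0] * 50))  # ~5.0
```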
While the predictive processing framework is in many ways revolutionary, there is a tendency for researchers interested in this topic to assume a very traditional 'neurocentric' stance concerning the mind. I argue that this neurocentric stance is entirely optional, and that a focus on emotional processing provides good reasons to think that the predictive mind is also a deeply embodied mind. The result is a way of understanding the predictive brain that allows the body and the surrounding environment to make a robust constitutive contribution to the predictive process. While it's true that predictive models can get us a long way in making sense of what drives the neural economy, I will argue that a complete picture of human intelligence requires us also to explore the many ways in which a predictive brain is embodied in a living body and embedded in the social-cultural world in which it was born and lives.
Internet and Biometric Web Based Business Management Decision Support
MICROBE
MOOC material prepared under IO1/A5 "Development of the MICROBE personalized MOOCs content and teaching materials".
Prepared by: A. Kaklauskas, A. Banaitis, I. Ubarte, Vilnius Gediminas Technical University, Lithuania.
Project No: 2020-1-LT01-KA203-07810