    A survey on modern trainable activation functions

    In the neural network literature, there is strong interest in identifying and defining activation functions that can improve neural network performance. In recent years there has been renewed interest in the scientific community in investigating activation functions that can be trained during the learning process, usually referred to as "trainable", "learnable" or "adaptable" activation functions. They appear to lead to better network performance. Diverse and heterogeneous models of trainable activation functions have been proposed in the literature. In this paper, we present a survey of these models. Starting from a discussion on the use of the term "activation function" in the literature, we propose a taxonomy of trainable activation functions, highlight common and distinctive properties of recent and past models, and discuss the main advantages and limitations of this type of approach. We show that many of the proposed approaches are equivalent to adding neuron layers which use fixed (non-trainable) activation functions and some simple local rule that constrains the corresponding weight layers. Comment: Published in "Neural Networks" journal (Elsevier).
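
    As a concrete illustration of the class of models the survey covers, the sketch below shows a PReLU-style trainable activation in which the negative-side slope is learned jointly with the network weights. This is a minimal sketch assuming PyTorch; the class name, the parameter name `alpha` and the initial value are illustrative, not taken from any specific surveyed model.

        import torch
        import torch.nn as nn

        class TrainableReLU(nn.Module):
            """PReLU-style activation: the negative-side slope is a
            learnable parameter, so the activation shape adapts during
            training like any other weight."""
            def __init__(self, init_slope: float = 0.25):
                super().__init__()
                # 'alpha' is trained together with the ordinary weights
                self.alpha = nn.Parameter(torch.tensor(init_slope))

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # max(0, x) + alpha * min(0, x)
                return torch.clamp(x, min=0) + self.alpha * torch.clamp(x, max=0)

        net = nn.Sequential(nn.Linear(10, 32), TrainableReLU(), nn.Linear(32, 1))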

    Sensorimotor coarticulation in the execution and recognition of intentional actions

    Humans excel at recognizing (or inferring) another's distal intentions, and recent experiments suggest that this may be possible using only subtle kinematic cues elicited during early phases of movement. Still, the cognitive and computational mechanisms underlying the recognition of intentional (sequential) actions are incompletely known, and it is unclear whether kinematic cues alone are sufficient for this task, or whether it instead requires additional mechanisms (e.g., prior information) that may be more difficult to fully characterize in empirical studies. Here we present a computationally guided analysis of the execution and recognition of intentional actions that is rooted in theories of motor control and the coarticulation of sequential actions. In our simulations, when a performer agent coarticulates two successive actions in an action sequence (e.g., "reach-to-grasp" a bottle and "grasp-to-pour"), it automatically produces kinematic cues that an observer agent can reliably use to recognize the performer's intention early on, during the execution of the first part of the sequence. This analysis lends computational-level support to the idea that kinematic cues may be sufficiently informative for early intention recognition. Furthermore, it suggests that the social benefits of coarticulation may be a byproduct of a fundamental imperative to optimize sequential actions. Finally, we discuss possible ways a performer agent may combine automatic (coarticulation) and strategic (signaling) means to facilitate, or hinder, an observer's action recognition processes.
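
    The observer side of this account can be read as Bayesian evidence accumulation over early kinematic cues. The sketch below is a toy illustration in Python/NumPy, not the paper's model: the two intentions, the Gaussian cue likelihoods and all numeric values are assumptions made for the example.

        import numpy as np

        intentions = ["grasp-to-pour", "grasp-to-drink"]
        prior = np.array([0.5, 0.5])

        # Assumed effect of coarticulation: the mean of an early
        # kinematic cue (e.g., wrist height in cm) differs by intention.
        cue_means = np.array([12.0, 8.0])
        cue_sd = 2.0

        def likelihood(x, mean, sd):
            return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

        posterior = prior.copy()
        for cue in [11.2, 12.5, 11.8]:  # cues observed during the first action
            posterior = posterior * likelihood(cue, cue_means, cue_sd)
            posterior /= posterior.sum()  # normalize after each observation

        print(dict(zip(intentions, posterior.round(3))))
        # The posterior favors "grasp-to-pour" before the grasp is completed.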

    Differential neural dynamics underlying pragmatic and semantic affordance processing in macaque ventral premotor cortex

    Premotor neurons play a fundamental role in transforming physical properties of observed objects, such as size and shape, into motor plans for grasping them, hence contributing to "pragmatic" affordance processing. Premotor neurons can also contribute to "semantic" affordance processing, as they can discharge differently even to pragmatically identical objects depending on their behavioural relevance for the observer (e.g., edible or inedible objects). Here, we compared the response of monkey ventral premotor area F5 neurons tested during pragmatic (PT) or semantic (ST) visuomotor tasks. Object presentation responses in ST showed shorter latency and lower object selectivity than in PT. Furthermore, we found a difference between a transient representation of semantic affordances and a sustained representation of pragmatic affordances at both the single-neuron and population level. Indeed, responses in ST returned to baseline within 0.5 s, whereas in PT they showed the typical sustained visual-to-motor activity during Go trials. In contrast, during No-go trials, the time course of pragmatic and semantic information processing was similar. These findings suggest that premotor cortex generates different dynamics depending on pragmatic and semantic information provided by the context in which the to-be-grasped object is presented.

    Interactive inference: a multi-agent model of cooperative joint actions

    We advance a novel computational model of multi-agent, cooperative joint actions that is grounded in the cognitive framework of active inference. The model assumes that to solve a joint task, such as pressing together a red or blue button, two (or more) agents engage in a process of interactive inference. Each agent maintains probabilistic beliefs about the goal of the joint task (e.g., should we press the red or blue button?) and updates them by observing the other agent's movements, while in turn selecting movements that make its own intentions legible and easy to infer by the other agent (i.e., sensorimotor communication). Over time, the interactive inference aligns both the beliefs and the behavioral strategies of the agents, hence ensuring the success of the joint action. We exemplify the functioning of the model in two simulations. The first simulation illustrates a "leaderless" joint action. It shows that when two agents lack a strong preference about their joint task goal, they jointly infer it by observing each other's movements. In turn, this helps the interactive alignment of their beliefs and behavioral strategies. The second simulation illustrates a "leader-follower" joint action. It shows that when one agent ("leader") knows the true joint goal, it uses sensorimotor communication to help the other agent ("follower") infer it, even if doing this requires selecting a more costly individual plan. These simulations illustrate that interactive inference supports successful multi-agent joint actions and reproduces key cognitive and behavioral dynamics of "leaderless" and "leader-follower" joint actions observed in human-human experiments. In sum, interactive inference provides a cognitively inspired, formal framework to realize cooperative joint actions and consensus in multi-agent systems. Comment: 32 pages, 16 figures.
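
    A toy rendering of the belief-alignment dynamic is sketched below in plain Python. The update rule and all numbers are a deliberate caricature of the paper's active inference model, chosen only to show how mutual observation drives convergence: each agent shifts its belief about the joint goal toward what the other's last movement implies, and the leader's strong prior pulls the follower along.

        # Belief that the joint goal is "press the red button" (vs. blue),
        # for a leader who knows the goal and an initially uncertain follower.
        # All values are illustrative assumptions, not the paper's model.
        leader, follower = 0.95, 0.50
        follower_rate, leader_rate = 0.4, 0.05  # the leader barely budges

        for step in range(6):
            # Each agent's movement reveals its belief (legibility),
            # and the partner updates toward what it observed.
            follower += follower_rate * (leader - follower)
            leader += leader_rate * (follower - leader)
            print(f"step {step}: leader={leader:.2f} follower={follower:.2f}")
        # Beliefs converge, so both agents select the same button.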

    Action perception as hypothesis testing

    We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions and their underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking of the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing.
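
    The "most informative place" can be formalized as the fixation with the greatest expected reduction in uncertainty over the action hypotheses. The sketch below is an illustrative expected-information-gain computation in Python/NumPy; the three hypotheses, the candidate fixation locations and all probabilities are assumptions for the example, not values from the study.

        import numpy as np

        def entropy(p):
            p = p[p > 0]
            return -(p * np.log(p)).sum()

        # Posterior over three action hypotheses (illustrative values).
        posterior = np.array([0.5, 0.3, 0.2])

        # p_obs[l, h]: probability that fixating location l yields a
        # diagnostic visual feature when hypothesis h is true (assumed).
        p_obs = np.array([[0.9, 0.2, 0.2],   # location 0: hand contact point
                          [0.3, 0.8, 0.3],   # location 1: object midpoint
                          [0.4, 0.4, 0.7]])  # location 2: arm trajectory

        def expected_entropy(l):
            # Average posterior entropy over feature present / absent.
            h = 0.0
            for outcome in (p_obs[l], 1 - p_obs[l]):
                marginal = (outcome * posterior).sum()
                h += marginal * entropy(outcome * posterior / marginal)
            return h

        gains = [entropy(posterior) - expected_entropy(l) for l in range(3)]
        best = int(np.argmax(gains))
        print(f"saccade to location {best}, expected gain {gains[best]:.3f} nats")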

    Interactional leader-follower sensorimotor communication strategies during repetitive joint actions

    Non-verbal communication is the basis of animal interactions. In dyadic leader-follower interactions, leaders master the ability to carve their motor behaviour in order to 'signal' their future actions and internal plans, while these signals influence the behaviour of follower partners, who automatically tend to imitate the leader even in complementary interactions. Despite their usefulness, signalling and imitation have a biomechanical cost, and it is unclear how this cost-benefit trade-off is managed during repetitive dyadic interactions that present learnable regularities. We studied signalling and imitation dynamics (indexed by movement kinematics) in pairs of leaders and followers during a repetitive, rule-based, joint action. Trial-by-trial Bayesian model comparison was used to evaluate the relation between signalling, imitation and pair performance. The different models incorporate different hypotheses concerning the factors (past interactions versus online movements) influencing the leader's signalling (or follower's imitation) kinematics. This approach showed that (i) the leader's signalling strategy improves future couple performance, (ii) leaders use the history of past interactions to shape their signalling, and (iii) followers' imitative behaviour is more strongly affected by the online movement of the leader. This study elucidates the ways in which online sensorimotor communication helps individuals align their task representations and ultimately improves joint action performance.
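
    To make the model-comparison step concrete, the sketch below contrasts two regression accounts of a leader's trial-by-trial signalling kinematics, one driven by interaction history and one by the partner's online movement, and scores them with BIC. The synthetic data and the BIC-based comparison are illustrative assumptions; the paper's actual models and fitting procedure may differ.

        import numpy as np

        def bic(residuals, n_params):
            """Bayesian Information Criterion under Gaussian noise."""
            n = len(residuals)
            sigma2 = np.mean(residuals ** 2)
            log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            return n_params * np.log(n) - 2 * log_lik

        rng = np.random.default_rng(0)
        trials = 100
        history = rng.normal(size=trials)  # summary of past interactions
        online = rng.normal(size=trials)   # partner's online movement
        # Synthetic leader kinematics, driven mostly by history (assumed).
        kinematics = 0.8 * history + 0.1 * online + rng.normal(0, 0.3, trials)

        # Model 1: kinematics ~ history; Model 2: kinematics ~ online movement
        for name, regressor in [("history", history), ("online", online)]:
            X = np.column_stack([np.ones(trials), regressor])
            beta, *_ = np.linalg.lstsq(X, kinematics, rcond=None)
            resid = kinematics - X @ beta
            print(f"{name:8s} BIC = {bic(resid, 2):.1f}")  # lower is better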

    Hysteresis Modeling in Iron-Dominated Magnets Based on a Multi-Layered NARX Neural Network Approach

    A full-fledged neural network modeling approach, based on a multi-layered Nonlinear AutoRegressive eXogenous (NARX) neural network architecture, is proposed for quasi-static and dynamic hysteresis loops, one of the most challenging topics in computational magnetism. This approach overcomes the drawbacks of classical and recent approaches for accelerator magnets, which combine hybridizations of standard hysteretic models and neural network architectures but struggle to attain better than percent-level accuracy. By means of an incremental procedure, different deep neural network architectures are selected, fine-tuned and tested in order to predict magnetic hysteresis in the context of electromagnets. Tests and results show that the proposed NARX architecture best fits the measured magnetic field behavior of a reference quadrupole at CERN. In particular, the proposed modeling framework leads to a percent error below 0.02% for the magnetic field prediction, thus outperforming state-of-the-art approaches and paving a very promising way for future real-time applications.
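
    The core idea of a NARX model is that the next output sample is predicted from lagged values of both the exogenous input (here, the excitation current) and the output itself (the magnetic field), which is what lets it capture the memory effect of hysteresis. Below is a minimal sketch assuming PyTorch; the lag depths and layer sizes are illustrative, not the architecture tuned in the paper.

        import torch
        import torch.nn as nn

        class NARX(nn.Module):
            """Predict the next field sample B(t) from lagged currents
            I(t-1..t-k) and lagged fields B(t-1..t-k)."""
            def __init__(self, input_lags=4, output_lags=4, hidden=32):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(input_lags + output_lags, hidden), nn.Tanh(),
                    nn.Linear(hidden, hidden), nn.Tanh(),
                    nn.Linear(hidden, 1),
                )

            def forward(self, i_lags, b_lags):
                # i_lags: (batch, input_lags), b_lags: (batch, output_lags)
                return self.net(torch.cat([i_lags, b_lags], dim=1))

        model = NARX()
        i_hist = torch.randn(8, 4)      # dummy lagged current window
        b_hist = torch.randn(8, 4)      # dummy lagged field window
        b_next = model(i_hist, b_hist)  # predicted next field sample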

    AGILE Observations of GW Events

    AGILE is a space mission of the Italian Space Agency dedicated to γ-ray astrophysics, launched in 2007. AGILE performed dedicated real-time searches for possible γ-ray counterparts of gravitational wave (GW) events detected by the LIGO-Virgo Scientific Collaboration (LVC) during the O2 observation run. We present a review of AGILE observations of GW events, starting with the first, GW150914, which was a test case for future searches. We focus here on the main characteristics of the observations of the most important GW events detected in 2017, i.e. GW170104 and GW170817. In particular, for the former event we published γ-ray upper limits (ULs) in the 50 MeV - 10 GeV energy band, together with a detailed analysis of a candidate precursor event in the Mini-Calorimeter data. As for GW170817, we published a set of constraining γ-ray ULs obtained for integrations preceding and following the event time. These results allow us to establish important constraints on the γ-ray emission from a possible magnetar-like remnant in the first ~1000 s following T0. AGILE is a major player in the search for electromagnetic counterparts of GW events, and its enhanced detection capabilities in the hard X-ray/MeV/GeV ranges will play a crucial role in the future O3 observing run.

    Multimodal Feedback in Assisting a Wearable Brain-Computer Interface Based on Motor Imagery

    Multimodal sensory feedback was exploited in the present study to improve the detection of neurological phenomena associated with motor imagery. To this aim, visual and haptic feedback were simultaneously delivered to the user of a brain-computer interface. The motor imagery-based brain-computer interface was built by using a wearable and portable electroencephalograph with only eight dry electrodes, a haptic suit, and a purposely implemented virtual reality application. Preliminary experiments were carried out with six subjects participating in five sessions on different days. The subjects were randomly divided into a "control group" and a "neurofeedback group". The former performed pure motor imagery without receiving any feedback, while the latter received multimodal feedback as a response to their imaginative act. Results of a cross-validation showed that at most 61% classification accuracy was achieved with pure motor imagery. By contrast, subjects of the "neurofeedback group" achieved up to 82% mean accuracy, with a peak of 91% in one of the sessions. However, performance in pure motor imagery did not improve across sessions, whether subjects had practiced with pure motor imagery or with feedback.
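
    A typical way to obtain such cross-validated accuracies is to extract band-power features from the EEG channels and score a linear classifier with k-fold cross-validation. The sketch below (Python with scikit-learn, random stand-in features) shows the evaluation scaffolding only; it is not the paper's processing pipeline or classifier.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_channels = 60, 8  # eight dry electrodes, as in the study

        # Stand-in for log band-power features (e.g., mu/beta bands)
        # computed per channel over the imagery window.
        X = rng.normal(size=(n_trials, n_channels))
        y = rng.integers(0, 2, size=n_trials)  # two imagery classes

        scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
        print(f"mean cross-validated accuracy: {scores.mean():.2f}")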