Exclusion rates from international large-scale assessments: an analysis of 20 years of IEA data
Cross-national comparisons of educational achievement rely on each participating country collecting nationally representative data. While obtaining high response rates is a key part of reaching this goal, other potentially important factors may also be at play. This paper focuses on one such issue, exclusion rates, which has received relatively little attention in the academic literature. Using 20 years of international large-scale assessment data, we find modest variation in exclusion rates across countries and a relatively small increase in some countries over time. We also show that exclusion rates tend to be higher in studies of primary students than in studies of secondary students. Finally, while there seems to be little relationship between exclusion rates and response rates, there is a weak negative association between the level of exclusions and test performance. We conclude by discussing how information about exclusions, and other similar issues, might be more clearly communicated to non-specialist audiences.
Generating pointing motions for a humanoid robot by combining motor primitives
The human motor system is robust, adaptive and very flexible. The underlying principles of human motion provide inspiration for robotics. Pointing at different targets is a common robotics task where insights from human motion can be applied. Traditionally in robotics, when a motion is generated it has to be validated to ensure that the robot configurations involved are appropriate. The human brain, in contrast, uses the motor cortex to generate new motions by reusing and combining existing knowledge before executing them. We propose a method to generate and control pointing motions for a robot using a biologically inspired architecture implemented with spiking neural networks. We outline a simplified model of the human motor cortex that generates motions using motor primitives. The network learns a base motor primitive for pointing at a target in the center, and four correction primitives to point at targets up, down, left and right of the base primitive, respectively. The primitives are combined to reach different targets. We evaluate the performance of the network with a humanoid robot pointing at different targets marked on a plane. The network was able to combine one, two or three motor primitives at a time to control the robot in real time to reach a specific target. We are working to extend this approach from pointing at a given target to grasping and tool-manipulation tasks, which has many applications in engineering and industry involving real robots.
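The combination of a base primitive with weighted correction primitives can be illustrated with a minimal sketch. The representation below (joint-angle trajectories as arrays, with hypothetical correction weights) is an assumption for illustration; the paper's spiking implementation learns and blends the primitives neurally.

```python
import numpy as np

def combine_primitives(base, corrections, weights):
    """Blend a base pointing trajectory with weighted correction
    primitives. Each primitive is a (timesteps x joints) array of
    joint-angle offsets (illustrative representation, not the SNN)."""
    traj = base.copy()
    for name, w in weights.items():
        traj += w * corrections[name]
    return traj

# Toy example: 3 timesteps, 2 joints
base = np.zeros((3, 2))
corrections = {
    "up":    np.array([[0.0, 0.1], [0.0, 0.2], [0.0, 0.3]]),
    "right": np.array([[0.1, 0.0], [0.2, 0.0], [0.3, 0.0]]),
}
# Point at a target above and slightly right of the center target
traj = combine_primitives(base, corrections, {"up": 1.0, "right": 0.5})
```

Activating one, two or three corrections with graded weights is what lets a small set of learned primitives cover many targets.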
A spiking network classifies human sEMG signals and triggers finger reflexes on a robotic hand
The interaction between robots and humans is of great relevance for the field of neurorobotics, as it can provide insights into how humans perform motor control and sensor processing, and into how these can be applied to robotics. We propose a spiking neural network (SNN) to trigger finger motion reflexes on a robotic hand based on human surface electromyography (sEMG) data. The first part of the network takes sEMG signals measuring muscle activity and classifies them to detect which finger is being flexed in the human hand. The second part triggers single-finger reflexes on the robot using the classification output. The finger reflexes are modeled with motion primitives activated by an oscillator and mapped to the robot kinematics. We evaluated the SNN by having users wear a non-invasive sEMG sensor, record a training dataset, and then flex different fingers, one at a time. The muscle activity was recorded using a Myo sensor with eight channels. The sEMG signals were successfully encoded into spikes as input for the SNN. The classifier could detect the active finger and trigger the motion generation of finger reflexes. The SNN was able to control a real Schunk SVH 5-finger robotic hand online. Being able to map myoelectric activity to motor-control functions for a task can provide an interesting interface for robotic applications and a platform for studying brain function. SNNs provide a challenging but interesting framework for interacting with human data. In future work the approach will be extended to simultaneously control a robot arm as well.
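Encoding an analog sEMG channel into spikes can be sketched with delta modulation, where a spike fires whenever the signal moves more than a threshold away from the last reference level. This is a common encoding scheme, shown here as an assumption; the paper's exact Myo preprocessing and encoder are not reproduced.

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Encode one sEMG channel into UP/DOWN spike trains.
    A spike is emitted at timestep t each time the signal crosses
    `threshold` relative to the running reference level."""
    up, down = [], []
    ref = signal[0]
    for t, x in enumerate(signal[1:], start=1):
        while x - ref > threshold:       # signal rose by a threshold step
            up.append(t)
            ref += threshold
        while ref - x > threshold:       # signal fell by a threshold step
            down.append(t)
            ref -= threshold
    return up, down

sig = np.array([0.0, 0.05, 0.3, 0.25, -0.1])
up, down = delta_encode(sig, threshold=0.1)
```

With eight such channels, the resulting spike trains form the input layer of the classifier network.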
Soft-Grasping With an Anthropomorphic Robotic Hand Using Spiking Neurons
Evolution gave humans advanced grasping capabilities combining an adaptive hand with efficient control. Grasping motions can quickly be adapted if the object moves or deforms. Soft grasping with an anthropomorphic hand is a valuable capability for robots interacting with objects shaped for humans. Nevertheless, most robotic applications use vacuum, 2-finger or custom-made grippers. We present a biologically inspired spiking neural network (SNN) for soft grasping to control a robotic hand. Two control loops are combined: one from motor primitives and one from a compliant controller activated by a reflex. The finger primitives represent synergies between joints, and the hand primitives represent different affordances. Contact is detected with a mechanism based on interneuron circuits in the spinal cord, which triggers reflexes. A Schunk SVH 5-finger hand was used to grasp objects of different shapes, stiffnesses and sizes. The SNN adapted the grasping motions without knowing the exact properties of the objects. The compliant controller with online learning proved to be sensitive, allowing even the grasping of balloons. In contrast to deep learning approaches, our SNN requires only one example of each grasping motion to train the primitives. Computation of the inverse kinematics or complex contact-point planning is not required. This approach simplifies the control and can be used on different robots, providing adaptive features similar to a human hand. A physical imitation of a biological system, implemented entirely with SNNs and a robotic hand, can provide new insights into grasping mechanisms.
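The interplay of the two control loops, a motion primitive that closes the finger and a reflex-activated compliant controller, can be sketched per joint. The thresholds and gains below are hypothetical, and the SNN reflex circuit itself is not modeled; this only illustrates the control switch on contact.

```python
def grasp_step(primitive_angle, measured_effort,
               contact_threshold=0.5, compliance=0.2):
    """One control step for a finger joint. Follow the motion
    primitive until the contact reflex fires, then back off the
    set-point in proportion to the measured effort so soft objects
    are not crushed (illustrative values, not the paper's SNN)."""
    if measured_effort > contact_threshold:
        # Reflex triggered: yield to the object instead of pushing
        return primitive_angle - compliance * measured_effort
    return primitive_angle

free_motion = grasp_step(1.0, measured_effort=0.1)   # no contact yet
in_contact = grasp_step(1.0, measured_effort=0.8)    # reflex fires
```

Because compliance only engages on contact, the same primitive can grasp both rigid blocks and balloons.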
Implementing ICT in classroom practice: what else matters besides the ICT infrastructure?
Background: The large-scale International Computer and Information Literacy Study (2018) reports an interesting finding concerning teachers in Luxembourg. Luxembourg has one of the highest reported levels of technology-related resources for teaching and learning, but a relatively low reported use of ICT in classroom practice. Methods: ICT innovation requires a high initial level of financial investment in technology, which Luxembourg has achieved since 2015. Once the necessary financial investment in ICT has been made, the key question is what else matters to increase the use of ICT in teaching. To identify the relevant factors, we used the “Four in Balance” model, aimed explicitly at monitoring the implementation of ICT in schools. Results: Using data on 420 teachers in Luxembourg, we find that within such a technology-driven approach to digitalization, teachers’ vision of ICT use in teaching, their level of expertise, and the use of digital learning materials in class are significant supporting factors. Leadership and collaboration, in the form of an explicit vision setting ICT as a priority for teaching in the school, also prove to be important. Conclusions: These findings show that the initial investment in school ICT infrastructure needs to be accompanied, during implementation, by attention to teachers’ ICT-related beliefs, attitudes, and expertise.
Embodied Neuromorphic Vision with Continuous Random Backpropagation
Spike-based communication between biological neurons is sparse and unreliable. This enables the brain to process visual information from the eyes efficiently. Taking inspiration from biology, artificial spiking neural networks coupled with silicon retinas attempt to model these computations. Recent findings in machine learning allowed the derivation of a family of powerful synaptic plasticity rules approximating backpropagation for spiking networks. Are these rules capable of processing real-world visual sensory data? In this paper, we evaluate the performance of Event-Driven Random Back-Propagation (eRBP) at learning representations from event streams provided by a Dynamic Vision Sensor (DVS). First, we show that eRBP matches state-of-the-art performance on the DvsGesture dataset with the addition of a simple covert attention mechanism. By remapping visual receptive fields relative to the center of motion, this attention mechanism provides translation invariance at low computational cost compared to convolutions. Second, we successfully integrate eRBP in a real robotic setup, in which a robotic arm grasps objects according to detected visual affordances. In this setup, visual information is actively sensed by a DVS mounted on a robotic head performing microsaccadic eye movements. We show that our method classifies affordances within 100 ms after microsaccade onset, which is comparable to human performance reported in behavioral studies. Our results suggest that advances in neuromorphic technology and plasticity rules enable the development of autonomous robots operating at high speed and low energy consumption.
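The covert attention mechanism can be sketched as recentering event coordinates on the centroid of recent activity, so the network always sees the moving stimulus in the same part of its input. The event representation below is an assumption for illustration; the eRBP learning pipeline itself is not reproduced.

```python
import numpy as np

def recenter_events(xy):
    """Shift DVS event coordinates so the centroid of recent events
    sits at the origin. A stimulus translated across the sensor then
    produces the same recentered input, giving translation invariance
    without convolutions (sketch of the covert-attention idea)."""
    center = xy.mean(axis=0)
    return xy - center

# Toy event cloud: (x, y) addresses of recent DVS events
events = np.array([[10.0, 12.0], [14.0, 12.0], [12.0, 16.0]])
centered = recenter_events(events)
```

Shifting the receptive-field origin is a single subtraction per event, which is why this attention step is cheap compared with convolutional feature maps.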