Plucked piezoelectric bimorphs for knee-joint energy harvesting: modelling and experimental validation
The modern drive towards mobility and wireless devices is motivating intensive
research in energy harvesting technologies. To reduce the battery burden on
people, we propose the adoption of a frequency up-conversion strategy for a new
piezoelectric wearable energy harvester. Frequency up-conversion increases
efficiency because the piezoelectric devices are permitted to vibrate at
resonance even if the input excitation occurs at a much lower frequency.
Mechanical plucking-based frequency up-conversion is obtained by deflecting the
piezoelectric bimorph via a plectrum, then rapidly releasing it so that it can
vibrate unhindered; during the following oscillatory cycles, part of the
mechanical energy is converted into electrical energy. In order to guide the
design of such a harvester, we have modelled with finite element methods the
response and power generation of a piezoelectric bimorph while it is plucked.
The model permits the analysis of the effects of the speed of deflection as well
as the prediction of the energy produced and its dependence on the electrical
load. An experimental rig has been set up to observe the response of the bimorph
in the harvester. A PZT-5H bimorph was used for the experiments. Measurements of
tip velocity, voltage output and energy dissipated across a resistor are
reported. The experimental results agree closely with the model predictions,
confirming the validity of the model.
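The pluck-and-release mechanism described in the abstract can be illustrated with a lumped-parameter sketch: a damped oscillator released from an initial tip deflection, with a piezoelectric coupling term feeding charge into a resistive load. This is not the paper's finite element model; the function name `simulate_pluck` and all parameter values (deflection, frequency, damping, coupling, capacitance, load) are illustrative assumptions.

```python
import math

def simulate_pluck(x0=1.0e-3, f_n=300.0, zeta=0.01, m=1.0e-3,
                   theta=1.0e-4, C_p=50e-9, R_load=100e3,
                   t_end=0.05, dt=1e-6):
    """Energy (J) dissipated in R_load after a single pluck.

    All parameter values are illustrative placeholders, not the paper's:
    x0     initial tip deflection (m), set by the plectrum
    f_n    beam resonant frequency (Hz)
    zeta   mechanical damping ratio
    m      effective tip mass (kg)
    theta  electromechanical coupling (N/V, equivalently C/m)
    C_p    piezo capacitance (F); R_load is the resistive load (ohm)
    """
    w_n = 2.0 * math.pi * f_n
    x, v, V = x0, 0.0, 0.0       # deflection, velocity, electrode voltage
    energy = 0.0
    for _ in range(int(t_end / dt)):
        # mechanical ring-down: m*x'' + c*x' + k*x + theta*V = 0
        a = -w_n**2 * x - 2.0 * zeta * w_n * v - (theta / m) * V
        # electrical side: C_p * V' = theta * x' - V / R_load
        V += ((theta * v - V / R_load) / C_p) * dt
        v += a * dt              # semi-implicit Euler step
        x += v * dt
        energy += (V * V / R_load) * dt
    return energy
```

Sweeping `R_load` in such a model reproduces the qualitative dependence of harvested energy on the electrical load that the paper studies with finite elements.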
Pizzicato excitation for wearable energy harvesters
A new technique based on the plucking of a flexible piezoelectric material can
be used to boost the energy harvested to power portable electronic devices.
A Biologically Plausible Learning Rule for Deep Learning in the Brain
Researchers have proposed that deep learning, which is providing important
progress in a wide range of high complexity tasks, might inspire new insights
into learning in the brain. However, the methods used for deep learning by
artificial neural networks are biologically unrealistic and would need to be
replaced by biologically realistic counterparts. Previous biologically
plausible reinforcement learning rules, like AGREL and AuGMEnT, showed
promising results but focused on shallow networks with three layers. Will these
learning rules also generalize to networks with more layers and can they handle
tasks of higher complexity? We demonstrate the learning scheme on classical and
hard image-classification benchmarks, namely MNIST, CIFAR10 and CIFAR100, cast
as direct reward tasks, for fully connected, convolutional and locally
connected architectures. We show that our learning rule - Q-AGREL - performs
comparably to supervised learning via error-backpropagation, with this type of
trial-and-error reinforcement learning requiring only 1.5-2.5 times more
epochs, even when classifying 100 different classes as in CIFAR100. Our results
provide new insights into how deep learning may be implemented in the brain.
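The flavour of an AGREL-style rule can be sketched in a few lines: the network selects an action from its output values, receives a scalar reward, and the reward-prediction error updates only the synapses gated by feedback from the chosen output unit. This is a toy illustration under stated assumptions, not the paper's Q-AGREL implementation; the class name `QAgrelNet`, the network sizes, and the learning rate are all hypothetical.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class QAgrelNet:
    """Toy reward-gated two-layer network (an AGREL-style sketch)."""

    def __init__(self, n_in, n_hid=6, n_out=2, lr=0.3):
        u = random.uniform
        self.W = [[u(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
        self.U = [[u(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_out)]
        self.lr = lr

    def forward(self, x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.W]
        q = [sum(u * hj for u, hj in zip(row, h)) for row in self.U]
        return h, q

    def trial(self, x, label, eps=0.1):
        h, q = self.forward(x)
        # epsilon-greedy action selection: trial and error, not a teacher
        if random.random() < eps:
            a = random.randrange(len(q))
        else:
            a = max(range(len(q)), key=q.__getitem__)
        reward = 1.0 if a == label else 0.0
        delta = reward - q[a]            # reward-prediction error
        # hidden updates are gated by the chosen unit's feedback weights,
        # computed before the output weights change
        grads = [delta * self.U[a][j] * hj * (1.0 - hj)
                 for j, hj in enumerate(h)]
        for j, hj in enumerate(h):       # only the chosen unit learns
            self.U[a][j] += self.lr * delta * hj
        for j, g in enumerate(grads):
            for i, xi in enumerate(x):
                self.W[j][i] += self.lr * g * xi
        return reward
```

Trained on a small direct-reward task (e.g. classifying 2-bit patterns with a bias input), the network improves purely from scalar rewards, mirroring in miniature how the abstract's rule approaches backpropagation performance through trial and error.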