Fewer, tastier tomatoes: Expanding the use of saltwater in agriculture
As the world faces mounting freshwater crises, researchers are turning to saline waters. Can the use of saltwater in agriculture be expanded? As FAO states in its 1992 book The use of saline waters for crop production, developing appropriate practices for saline irrigation requires an adequate understanding of how salts affect waters, soils, and plants. Plants can grow under saline irrigation if soil salinity is monitored and controlled so that the soil's salt content never rises past the level that damages the plants. Irrigating salt-tolerant crops with slightly saline water therefore offers a substantial opportunity to increase the world's yearly crop production.
Attention-free Spikformer: Mixing Spike Sequences with Simple Linear Transforms
By integrating the self-attention capability and the biological properties of
Spiking Neural Networks (SNNs), Spikformer applies the flourishing Transformer
architecture to SNNs design. It introduces a Spiking Self-Attention (SSA)
module to mix sparse visual features using spike-form Query, Key, and Value,
resulting in the State-Of-The-Art (SOTA) performance on numerous datasets
compared to previous SNN-like frameworks. In this paper, we demonstrate that
the Spikformer architecture can be accelerated by replacing the SSA with an
unparameterized Linear Transform (LT) such as Fourier and Wavelet transforms.
These transforms are utilized to mix spike sequences, reducing the quadratic
time complexity to log-linear time complexity. They alternate between the
frequency and time domains to extract sparse visual features, showcasing
powerful performance and efficiency. We conduct extensive experiments on image
classification using both neuromorphic and static datasets. The results
indicate that compared to the SOTA Spikformer with SSA, Spikformer with LT
achieves higher Top-1 accuracy on neuromorphic datasets (i.e., CIFAR10-DVS and
DVS128 Gesture) and comparable Top-1 accuracy on static datasets (i.e.,
CIFAR-10 and CIFAR-100). Furthermore, Spikformer with LT achieves approximately
29-51% improvement in training speed, 61-70% improvement in inference speed,
and reduces memory usage by 4-26%, as it requires no learnable parameters.
Comment: Under Review
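The core idea above, replacing parameterized self-attention with an unparameterized Fourier transform for token mixing, can be sketched in a few lines. This is a minimal FNet-style illustration (hypothetical shapes and names), not the authors' implementation:

```python
import numpy as np

def fourier_mix(x):
    """Mix a spike sequence with a 2-D FFT over the sequence and feature axes,
    keeping the real part. An unparameterized stand-in for self-attention:
    O(N log N) in sequence length instead of O(N^2), with no weights to learn."""
    return np.fft.fft2(x).real

# Toy spike sequence: 8 time steps x 4 features, binary spikes.
rng = np.random.default_rng(0)
spikes = rng.integers(0, 2, size=(8, 4)).astype(float)
mixed = fourier_mix(spikes)
print(mixed.shape)  # (8, 4)
```

Because the transform has no learnable parameters, the mixing step costs nothing to train, which is where the reported speed and memory gains come from.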
ODE-based Recurrent Model-free Reinforcement Learning for POMDPs
Neural ordinary differential equations (ODEs) are widely recognized as the
standard for modeling physical mechanisms, which help to perform approximate
inference in unknown physical or biological environments. In partially
observable (PO) environments, inferring unseen information from raw
observations is a central challenge for agents. By using a recurrent policy with a compact
context, context-based reinforcement learning provides a flexible way to
extract unobservable information from historical transitions. To help the agent
extract more dynamics-related information, we present a novel ODE-based
recurrent model combined with a model-free reinforcement learning (RL)
framework to solve partially observable Markov decision processes (POMDPs). We
experimentally demonstrate the efficacy of our methods across various PO
continuous control and meta-RL tasks. Furthermore, our experiments illustrate
that our method is robust against irregular observations, owing to the ability
of ODEs to model irregularly-sampled time series.
Comment: Accepted by NeurIPS 202
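The robustness to irregular observations comes from letting an ODE evolve the hidden state across arbitrary time gaps. A toy ODE-RNN step with Euler integration (hypothetical weights and dimensions; not the paper's architecture) illustrates the mechanism:

```python
import numpy as np

def ode_step(h, dt, W_ode):
    # One Euler step of the latent dynamics dh/dt = tanh(W_ode @ h) over a gap dt.
    return h + dt * np.tanh(W_ode @ h)

def ode_rnn(observations, times, W_ode, W_in):
    """Toy ODE-RNN encoder: evolve the hidden state continuously between
    irregularly sampled observations, then fold each observation in."""
    h = np.zeros(W_ode.shape[0])
    t_prev = times[0]
    for obs, t in zip(observations, times):
        h = ode_step(h, t - t_prev, W_ode)   # continuous evolution over the gap
        h = np.tanh(W_in @ obs + h)          # discrete update at the observation
        t_prev = t
    return h

rng = np.random.default_rng(0)
obs = rng.standard_normal((5, 3))
times = np.array([0.0, 0.3, 0.35, 1.2, 1.25])  # irregular sampling times
h_final = ode_rnn(obs, times, 0.1 * rng.standard_normal((6, 6)),
                  rng.standard_normal((6, 3)))
```

Because the gap `t - t_prev` enters the update explicitly, uneven sampling changes how far the latent state drifts rather than breaking the recurrence.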
Local Convolution Enhanced Global Fourier Neural Operator For Multiscale Dynamic Spaces Prediction
Neural operators extend the capabilities of traditional neural networks by
allowing them to handle mappings between function spaces for the purpose of
solving partial differential equations (PDEs). One of the most notable methods
is the Fourier Neural Operator (FNO), which is inspired by Green's function
method and approximates the operator kernel directly in the frequency domain. In
this work, we focus on predicting multiscale dynamic spaces, which is
equivalent to solving multiscale PDEs. Multiscale PDEs are characterized by
rapid coefficient changes and solution space oscillations, which are crucial
for modeling atmospheric convection and ocean circulation. To solve this
problem, models should have the ability to capture rapid changes and process
them at various scales. However, the FNO only approximates kernels in the
low-frequency domain, which is insufficient when solving multiscale PDEs. To
address this challenge, we propose a novel hierarchical neural operator that
integrates improved Fourier layers with attention mechanisms, aiming to capture
all details and handle them at various scales. These mechanisms complement each
other in the frequency domain and encourage the model to solve multiscale
problems. We perform experiments on dynamic spaces governed by forward and
reverse problems of multiscale elliptic equations, Navier-Stokes equations and
some other physical scenarios, and achieve superior performance on existing PDE
benchmarks, especially on equations characterized by rapid coefficient variations.
Comment: 10 pages, 4 figures
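The low-frequency restriction the abstract criticizes is visible in a one-dimensional FNO spectral layer: the layer transforms to frequency space, multiplies only the lowest modes by weights, and zeroes the rest. A sketch with hypothetical all-ones weights (not the proposed hierarchical operator):

```python
import numpy as np

def spectral_conv1d(u, weights, modes):
    """One FNO-style spectral layer (sketch): FFT the input, scale the lowest
    `modes` frequencies by complex weights, zero out all higher ones, inverse FFT."""
    n = u.shape[-1]
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = u_hat[:modes] * weights[:modes]
    return np.fft.irfft(out_hat, n=n)

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(8 * x)      # slow component + fast oscillation
w = np.ones(33, dtype=complex)           # hypothetical learned weights
filtered = spectral_conv1d(u, w, modes=4)
```

With `modes=4`, the `sin(8x)` oscillation is discarded entirely; only `sin(x)` survives. This is precisely why a plain FNO struggles with multiscale PDEs whose solutions oscillate rapidly, and what the added convolution and attention mechanisms are meant to recover.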
Tuning Synaptic Connections instead of Weights by Genetic Algorithm in Spiking Policy Network
Learning from interaction is the primary way biological agents come to know
the environment and themselves. Modern deep reinforcement learning (DRL)
explores a computational approach to learning from interaction and has
made significant progress in solving various tasks. However, DRL still falls
far short of biological agents in energy efficiency. Although the underlying
mechanisms are not fully understood, we believe that the integration of spiking
communication between neurons and biologically-plausible synaptic plasticity
plays a prominent role. Following this biological intuition, we optimize a
spiking policy network (SPN) by a genetic algorithm as an energy-efficient
alternative to DRL. Our SPN mimics the sensorimotor neuron pathway of insects
and communicates through event-based spikes. Inspired by biological findings
that the brain forms memories by growing new synaptic connections and rewiring
them in light of new experiences, we tune the synaptic connections
instead of weights in SPN to solve given tasks. Experimental results on several
robotic control tasks show that our method can achieve the performance level of
mainstream DRL methods while exhibiting significantly higher energy efficiency.
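Evolving which synapses exist, while leaving the weight values fixed, can be sketched with a toy genetic algorithm over binary connection masks. The fitness function, mutation rate, and network here are hypothetical stand-ins, not the paper's SPN:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, W_fixed, x, target):
    # Score a binary connectivity mask: weights stay fixed; only the
    # presence or absence of each synapse is evolved.
    y = np.tanh((W_fixed * mask) @ x)
    return -np.sum((y - target) ** 2)

def evolve(W_fixed, x, target, pop=20, gens=50, p_flip=0.05):
    """Toy GA: keep the fitter half of the population, mutate copies of it
    by flipping random connection bits, and repeat."""
    population = rng.integers(0, 2, size=(pop, *W_fixed.shape))
    for _ in range(gens):
        scores = np.array([fitness(m, W_fixed, x, target) for m in population])
        elite = population[np.argsort(scores)[-pop // 2:]]
        children = elite.copy()
        flips = rng.random(children.shape) < p_flip   # bit-flip mutation
        children = np.where(flips, 1 - children, children)
        population = np.concatenate([elite, children])
    scores = np.array([fitness(m, W_fixed, x, target) for m in population])
    return population[np.argmax(scores)]

W_fixed = rng.standard_normal((3, 4))
x_in = rng.standard_normal(4)
best_mask = evolve(W_fixed, x_in, target=np.zeros(3))
```

The search space is binary rather than continuous, which mirrors the paper's shift from tuning weights to tuning connections.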
Population-coding and Dynamic-neurons improved Spiking Actor Network for Reinforcement Learning
With Deep Neural Networks (DNNs) as powerful function approximators,
Deep Reinforcement Learning (DRL) has demonstrated excellent performance on
robotic control tasks. Compared to DNNs with vanilla artificial neurons, the
biologically plausible Spiking Neural Network (SNN) contains a diverse
population of spiking neurons, making it naturally well suited to representing
states with both spatial and temporal information. Based on a hybrid
learning framework, where a spike actor-network infers actions from states and
a deep critic network evaluates the actor, we propose a Population-coding and
Dynamic-neurons improved Spiking Actor Network (PDSAN) for efficient state
representation from two different scales: input coding and neuronal coding. For
input coding, we apply population coding with dynamic receptive fields to
directly encode each input state component. For neuronal coding, we propose
different types of dynamic-neurons (with 1st-order and 2nd-order internal
dynamics) to capture more complex neuronal behavior. Finally, the PDSAN
is trained in conjunction with deep critic networks using the Twin Delayed Deep
Deterministic policy gradient algorithm (TD3-PDSAN). Extensive experimental
results show that our TD3-PDSAN model achieves better performance than
state-of-the-art models on four OpenAI Gym benchmark tasks. It is an important
step toward improving RL with SNNs for effective, biologically plausible computation.
Comment: 27 pages, 11 figures, accepted by Journal of Neural Networks
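Population coding with Gaussian receptive fields, the input-coding half of the method, can be sketched as follows. The centers and width are hypothetical, and this is a generic population encoder rather than PDSAN's exact scheme:

```python
import numpy as np

def population_encode(value, n_neurons=10, lo=-1.0, hi=1.0, sigma=0.15):
    """Encode one scalar state component with a population of neurons whose
    Gaussian receptive fields tile [lo, hi]; each neuron's activation can then
    serve as its spike probability."""
    centers = np.linspace(lo, hi, n_neurons)
    return np.exp(-0.5 * ((value - centers) / sigma) ** 2)

acts = population_encode(0.0)
print(acts.round(2))
```

A single scalar becomes a vector of graded responses, so nearby input values produce overlapping but distinguishable activity patterns, which is what makes the representation richer than feeding the raw scalar in directly.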
Recent Advances and New Frontiers in Spiking Neural Networks
In recent years, spiking neural networks (SNNs) have received extensive
attention in brain-inspired intelligence due to their rich spatially-temporal
dynamics, various encoding methods, and event-driven characteristics that
naturally fit the neuromorphic hardware. With the development of SNNs,
brain-inspired intelligence, an emerging research field inspired by brain
science achievements and aiming at artificial general intelligence, is gaining
momentum. This paper reviews recent advances and discusses new frontiers in SNNs
from five major research topics, including essential elements (i.e., spiking
neuron models, encoding methods, and topology structures), neuromorphic
datasets, optimization algorithms, software, and hardware frameworks. We hope
our survey can help researchers understand SNNs better and inspire new works to
advance this field.
Comment: Accepted at IJCAI202
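Of the essential elements the survey lists, the spiking neuron model is the most basic. A leaky integrate-and-fire (LIF) neuron, the simplest common choice, can be simulated in a few lines (illustrative parameter values, not tied to any particular surveyed work):

```python
def lif_simulate(current, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron: the membrane potential
    leaks toward zero, integrates the input current, and emits a spike
    (then resets) whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for i in current:
        v += dt * (-v + i) / tau   # leaky integration
        if v >= v_th:
            spikes.append(1)       # spike and reset
            v = v_reset
        else:
            spikes.append(0)
    return spikes

spike_train = lif_simulate([1.5] * 50)  # constant supra-threshold input
```

The event-driven character the survey highlights is visible here: the neuron's output is a sparse binary train, and a sub-threshold input (steady-state potential below `v_th`) produces no spikes at all.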