Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks
Kugele A, Pfeil T, Pfeiffer M, Chicca E. Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks. Frontiers in Neuroscience. 2020;14:439.
Spiking neural networks (SNNs) are potentially highly efficient models for inference on fully parallel neuromorphic hardware, but existing training methods that convert conventional artificial neural networks (ANNs) into SNNs are unable to exploit these advantages. Although ANN-to-SNN conversion has achieved state-of-the-art accuracy for static image classification tasks, a subtle but important difference in the way SNNs and ANNs integrate information over time makes the direct application of conversion techniques to sequence processing tasks challenging. Whereas all connections in SNNs have a certain propagation delay larger than zero, ANNs assign different roles to feed-forward connections, which immediately update all neurons within the same time step, and recurrent connections, which have to be rolled out in time and are typically assigned a delay of one time step. Here, we present a novel method to obtain highly accurate SNNs for sequence processing by modifying the ANN training before conversion, such that delays induced by ANN rollouts match the propagation delays in the targeted SNN implementation. Our method builds on the recently introduced framework of streaming rollouts, which aims for fully parallel model execution of ANNs and inherently allows for temporal integration by merging paths of different delays between input and output of the network. The resulting networks achieve state-of-the-art accuracy for multiple event-based benchmark datasets, including N-MNIST, CIFAR10-DVS, N-CARS, and DvsGesture, and through the use of spatio-temporal shortcut connections yield low-latency approximate network responses that improve over time as more of the input sequence is processed. In addition, our converted SNNs are consistently more energy-efficient than their corresponding ANNs.
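The core idea of the abstract above can be illustrated with a toy rollout. The following is a minimal sketch, not the paper's code: every connection of a tiny two-layer ANN, feed-forward or otherwise, carries a delay of one time step, matching SNN propagation delays, and a shortcut path merges with the deeper path at the output. The layer sizes, weights, and shortcut are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): a streaming rollout of a
# tiny 2-layer ANN in which EVERY connection carries a delay of one time
# step, mirroring SNN propagation delays. All sizes/weights are toy choices.

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 8))      # input -> hidden
W2 = rng.normal(scale=0.5, size=(8, 3))      # hidden -> output
W_skip = rng.normal(scale=0.5, size=(4, 3))  # spatio-temporal shortcut

T = 6
x_seq = rng.normal(size=(T, 4))              # event-stream-like input

h = np.zeros(8)                              # hidden state from step t-1
outputs = []
for t in range(T):
    # With unit delays, step t's output depends only on step t-1's
    # activations, so every layer can update fully in parallel.
    h_new = np.tanh(x_seq[t] @ W1)
    # The shortcut path (total delay 1) and the deep path (total delay 2)
    # merge at the output: a low-latency approximate response that is
    # refined as more of the input sequence arrives.
    y = h @ W2 + x_seq[t] @ W_skip
    h = h_new
    outputs.append(y)

print(np.array(outputs).shape)  # (6, 3): one approximate output per step
```

The point of the sketch is the scheduling property: because no connection has delay zero, no layer has to wait for another within a time step, which is what makes fully parallel neuromorphic execution possible.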
Training Deep Surrogate Models with Large Scale Online Learning
The spatiotemporal resolution of Partial Differential Equations (PDEs) plays
an important role in the mathematical description of the world's physical
phenomena. In general, scientists and engineers solve PDEs numerically using
computationally demanding solvers. Recently, deep learning algorithms
have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on
disk, and read back for training. This paper argues that relying on a
traditional static dataset to train these models fails to exploit the full
benefit of the solver as a data generator. It proposes an open
source online training framework for deep surrogate models. The framework
implements several levels of parallelism focused on simultaneously generating
numerical simulations and training deep neural networks. This approach
removes the I/O and storage bottleneck associated with disk-loaded datasets,
and opens the way to training on significantly larger datasets. Experiments
compare the offline and online training of four surrogate models, including
state-of-the-art architectures. Results indicate that exposing deep surrogate
models to more dataset diversity, up to hundreds of GB, can increase model
generalization capabilities. The prediction accuracy of fully connected neural
networks, the Fourier Neural Operator (FNO), and the Message Passing PDE
Solver improves by 68%, 16%, and 7%, respectively.
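The central mechanism described above, generating training data and consuming it concurrently instead of round-tripping through disk, can be sketched with a producer-consumer loop. This is a hedged toy illustration, not the paper's framework: the "solver" and "train step" below are stand-ins, and the queue size and batch size are arbitrary assumptions.

```python
import queue
import threading

# Toy sketch of online training (not the paper's framework): one thread
# plays the numerical solver, streaming fresh samples into an in-memory
# queue, while the main thread consumes them directly -- no disk I/O.

def toy_solver(n_samples, out_queue):
    """Stand-in for a PDE solver producing (input, solution) pairs."""
    for i in range(n_samples):
        x = float(i)
        out_queue.put((x, 2.0 * x + 1.0))  # pretend simulation result
    out_queue.put(None)                    # sentinel: generation finished

def train_step(batch, state):
    """Stand-in for a gradient step: just counts consumed samples."""
    state["seen"] += len(batch)

buf = queue.Queue(maxsize=64)  # bounded queue applies back-pressure
producer = threading.Thread(target=toy_solver, args=(100, buf))
producer.start()

state = {"seen": 0}
batch = []
while True:
    item = buf.get()
    if item is None:
        break
    batch.append(item)
    if len(batch) == 10:
        train_step(batch, state)
        batch = []
if batch:
    train_step(batch, state)
producer.join()

print(state["seen"])  # 100: every generated sample reached the trainer
```

The bounded queue is the key design choice: it lets generation and training proceed simultaneously while preventing the faster side from running arbitrarily ahead, which is one way the "several levels of parallelism" in such a framework can be coordinated.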
Neural Packet Classification
Packet classification is a fundamental problem in computer networking. This
problem exposes a hard tradeoff between the computation and state complexity,
which makes it particularly challenging. To navigate this tradeoff, existing
solutions rely on complex hand-tuned heuristics, which are brittle and hard to
optimize. In this paper, we propose a deep reinforcement learning (RL) approach
to solve the packet classification problem. There are several characteristics
that make this problem a good fit for Deep RL. First, many of the existing
solutions iteratively build a decision tree by splitting nodes in the
tree. Second, the effects of these actions (e.g., splitting nodes) can only be
evaluated once we are done with building the tree. These two characteristics
are naturally captured by the ability of RL to take actions that have sparse
and delayed rewards. Third, it is computationally efficient to generate data
traces and evaluate decision trees, which alleviates the notoriously high
sample complexity of Deep RL algorithms. Our solution, NeuroCuts, uses
succinct representations to encode the state and action space, and efficiently
explores candidate decision trees to optimize for a global objective. It
produces compact decision trees optimized for a specific set of rules and a
given performance metric, such as classification time, memory footprint, or a
combination of the two. Evaluation on ClassBench shows that NeuroCuts
outperforms existing hand-crafted algorithms in classification time by 18% at
the median, and reduces both classification time and memory footprint by up to 3x.
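The episode structure described above, where actions split tree nodes and the reward is only available once the tree is finished, can be sketched as follows. This is an illustrative toy, not the NeuroCuts code: the rules, the random stand-in policy, the depth/size limits, and the reward weights are all assumptions.

```python
import random

# Toy sketch of the RL episode structure (not NeuroCuts itself): the
# agent's action is which dimension to cut a node on; the reward -- a
# weighted mix of tree depth and node count, standing in for classification
# time and memory footprint -- is only known once the whole tree is built.

random.seed(0)

# Each "rule" matches a range in 2 dimensions: ((lo0, hi0), (lo1, hi1)).
rules = [((random.random(), 1.0), (random.random(), 1.0)) for _ in range(32)]

stats = {"nodes": 0, "max_depth": 0}

def build(rules, lo, hi, depth, policy):
    """Recursively split nodes; rules overlapping the cut go to both sides."""
    stats["nodes"] += 1
    stats["max_depth"] = max(stats["max_depth"], depth)
    if len(rules) <= 4 or depth >= 8:      # leaf: small enough, or depth cap
        return
    dim = policy(rules)                    # the RL action: cut dimension
    mid = (lo[dim] + hi[dim]) / 2.0
    left = [r for r in rules if r[dim][0] < mid]
    right = [r for r in rules if r[dim][1] > mid]
    lo_l, hi_l = list(lo), list(hi); hi_l[dim] = mid
    lo_r, hi_r = list(lo), list(hi); lo_r[dim] = mid
    build(left, lo_l, hi_l, depth + 1, policy)
    build(right, lo_r, hi_r, depth + 1, policy)

policy = lambda rules: random.randrange(2)  # stand-in for the learned policy
build(rules, [0.0, 0.0], [1.0, 1.0], 0, policy)

# The delayed reward: only computable after the episode (tree) is complete.
reward = -(stats["max_depth"] + 0.1 * stats["nodes"])
print("nodes:", stats["nodes"], "max depth:", stats["max_depth"])
```

Note that the reward cannot be computed inside `build`: it depends on the finished tree's depth and size, which is exactly the sparse, delayed-reward structure the abstract argues makes the problem a natural fit for RL.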
On the Rollout of Network Slicing in Carrier Networks: A Technology Radar
Network slicing is a powerful paradigm for network operators to support use cases with
widely diverse requirements atop a common infrastructure. As 5G standards are completed, and
commercial solutions mature, operators need to start thinking about how to integrate network slicing
capabilities in their assets, so that customer-facing solutions can be made available in their portfolio.
This integration is, however, not an easy task, due to the heterogeneity of assets that typically exist
in carrier networks. In this regard, 5G commercial networks may consist of a number of domains,
each with a different technological pace, and built out of products from multiple vendors, including
legacy network devices and functions. These multi-technology, multi-vendor and brownfield features
constitute a challenge for the operator, which is required to deploy and operate slices across all these
domains in order to satisfy the end-to-end nature of the services hosted by these slices. In this context,
the only realistic option for operators is to introduce slicing capabilities progressively, following a
phased approach in their roll-out. The purpose of this paper is precisely to help
design this kind of plan by means of a technology radar. The radar identifies a set of solutions enabling network
slicing on the individual domains, and classifies these solutions into four rings, each corresponding
to a different timeline: (i) as-is ring, covering today’s slicing solutions; (ii) deploy ring, corresponding
to solutions available in the short term; (iii) test ring, considering medium-term solutions; and
(iv) explore ring, with solutions expected in the long run. This classification is done based on the
technical availability of the solutions, together with the foreseen market demands. The value of this
radar lies in its ability to provide a complete view of the slicing landscape with one single snapshot,
by linking solutions to information that operators may use for decision making in their individual
go-to-market strategies.
H2020 European Projects 5G-VINNI (grant agreement No. 815279) and 5G-CLARITY (grant agreement No. 871428); Spanish national project TRUE-5G (PID2019-108713RB-C53).
3G migration in Pakistan
The telecommunication industry in Pakistan has come a long way since the country's independence in 1947. The initial era could fairly be termed the PTCL (Pakistan Telecommunication Company Limited) monopoly, for it was the sole provider of all telecommunication services across the country. It was not until four decades later that the region embarked into the new world of wireless communication, ending the decades-old PTCL monopoly. By the end of the late 1990s, government support and international investment in the region opened new doors to innovation and better-quality, low-cost, healthy competition. Wireless licenses for the private sector in the telecommunication industry triggered a promising chain of events that resulted in a drastic change in the telecommunication infrastructure and service profile. The newly introduced wireless (GSM) technology received enormous support from all stakeholders (consumers, regulatory body, and market) and gave a vital boost to Pakistan's economy. Numerous tangential elements triggered this vital move in the history of telecommunications in Pakistan. Entrepreneurs intended to test the idea of global joint ventures in the East, and hence the idea of international business became a reality. The technology had proven to be a great success in the West, while Pakistan's telecom consumers had lived under the shadow of PTCL dominance for decades and needed more flexibility. At last the world was moving from wired to wireless! Analysts termed this move the beginning of a new era. The investors, telecommunication businesses, and the Pakistani treasury prospered. It was a win-win situation for all involved. The learning curve was steep for both operators and consumers but certainly improved over time. In essence, the principle of deploying the right technology in the right market at the right time led to this remarkable success.
The industry today stands at a similar crossroads as it transitions from second generation to something beyond. With the partial success of 3G in Europe and the USA, the government has announced the release of three 3G licenses by mid-2009. This decision is not yet fully supported by all, but it has still initiated parallel efforts by the operators and the vendors to integrate this next move into their existing infrastructure.
Analyzing Performance Effects of Neural Networks Applied to Lane Recognition under Various Environmental Driving Conditions
Acknowledgments: The authors would like to thank the Université du Québec à Trois-Rivières and the Institut de recherche sur l'hydrogène for their collaboration and assistance.
Lane detection is an essential module for the safe navigation of autonomous vehicles (AVs). Estimating the vehicle's position and trajectory on the road is critical; however, several environmental variables can affect this task. State-of-the-art lane detection methods utilize convolutional neural networks (CNNs) as feature extractors, obtaining relevant features through training with multiple kernel layers. This makes them vulnerable to any statistical change in the input data or noise affecting the spatial characteristics. In this paper, we compare six different CNN architectures to analyze the effect of various adverse conditions, including harsh weather, illumination variations, and shadows/occlusions, on lane detection. Among these adverse conditions, harsh weather in general, and snowy night conditions in particular, degrade performance by a large margin: the average detection accuracy of the networks decreased by 75.2%, and the root mean square error (RMSE) increased by 301.1%. Overall, the results show a noticeable drop in the networks' accuracy for all adverse conditions because the features' stochastic distributions change for each state.
Natural Sciences and Engineering Research Council of Canada; Canada Research Chair.
- …