Efficient Partitioning of Memory Systems and Its Importance for Memory Consolidation
Long-term memories are likely stored in the synaptic weights of neuronal networks in the brain. The storage capacity of such networks depends on the degree of plasticity of their synapses. Highly plastic synapses allow for strong memories, but these are quickly overwritten. On the other hand, less labile synapses result in long-lasting but weak memories. Here we show that the trade-off between memory strength and memory lifetime can be overcome by partitioning the memory system into multiple regions characterized by different levels of synaptic plasticity and transferring memory information from more plastic to less plastic regions. The improvement in memory lifetime is proportional to the number of memory regions, and the initial memory strength can be orders of magnitude larger than in a non-partitioned memory system. This model provides a fundamental computational reason for memory consolidation processes at the systems level.
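To make the mechanism concrete, here is a toy simulation of the partitioning idea in Python; the exponential-decay dynamics, the specific plasticity values, and the copy-style transfer rule are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def simulate(plasticities, transfer_interval, steps, new_memory_strength=1.0):
    """Track the strength of one memory trace in a chain of regions.

    Each region r decays its trace by a factor (1 - p_r) per step, where
    p_r is the region's plasticity (new memories overwrite plastic regions
    quickly).  Every `transfer_interval` steps the trace is copied from
    region r - 1 into the less plastic region r.
    """
    n_regions = len(plasticities)
    trace = np.zeros(n_regions)
    trace[0] = new_memory_strength * plasticities[0]  # plastic => strong encoding
    history = []
    for t in range(1, steps + 1):
        trace *= 1.0 - np.asarray(plasticities)       # overwriting by new memories
        if t % transfer_interval == 0:
            for r in range(n_regions - 1, 0, -1):     # consolidate outward
                trace[r] += plasticities[r] * trace[r - 1]
        history.append(trace.sum())                   # total retrievable strength
    return history

# One plastic plus one stable region versus a single plastic region:
partitioned = simulate(plasticities=[0.5, 0.01], transfer_interval=2, steps=500)
uniform     = simulate(plasticities=[0.5], transfer_interval=2, steps=500)
print(partitioned[-1], uniform[-1])   # the partitioned trace persists far longer
```

The toy numbers only illustrate the qualitative trade-off: the plastic region gives a strong initial trace, and the slow region preserves whatever is transferred into it.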
DeepPicar: A Low-cost Deep Neural Network-based Autonomous Car
We present DeepPicar, a low-cost deep neural network-based autonomous car platform. DeepPicar is a small-scale replication of a real self-driving car called DAVE-2 by NVIDIA. DAVE-2 uses a deep convolutional neural network (CNN), which takes images from a front-facing camera as input and produces car steering angles as output. DeepPicar uses the same network architecture---9 layers, 27 million connections, and 250K parameters---and can drive itself in real time using a web camera and a Raspberry Pi 3 quad-core platform. Using DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end deep learning-based real-time control of autonomous vehicles. We also systematically compare other contemporary embedded computing platforms using DeepPicar's CNN-based real-time control workload. We find that all tested platforms, including the Pi 3, are capable of supporting CNN-based real-time control at rates from 20 Hz up to 100 Hz, depending on the hardware platform. However, shared resource contention remains an important issue that must be considered when applying CNN models on shared-memory-based embedded computing platforms; we observe up to an 11.6X increase in the execution time of the CNN-based control loop due to shared resource contention. To protect the CNN workload, we also evaluate state-of-the-art cache partitioning and memory bandwidth throttling techniques on the Pi 3. We find that cache partitioning is ineffective, while memory bandwidth throttling is an effective solution.
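For a concrete picture of the workload being benchmarked, the sketch below outlines a generic capture-infer-actuate control loop of the kind described; the 20 Hz period, the `preprocess` details, and the `apply_steering` hook are illustrative assumptions rather than DeepPicar's actual code:

```python
import time
import cv2          # pip install opencv-python
import numpy as np

PERIOD = 1.0 / 20.0  # 20 Hz control rate; the paper reports rates up to 100 Hz

def preprocess(frame):
    # DAVE-2 takes 200x66 RGB frames; add a batch axis and normalize.
    frame = cv2.resize(frame, (200, 66))
    return frame.astype(np.float32)[None, ...] / 255.0

def apply_steering(angle):
    # Hypothetical actuator hook; on the real car this would drive the servo.
    print(f"steering angle: {float(angle):+.3f}")

def control_loop(predict, camera_index=0):
    """Capture -> CNN inference -> actuation, once per control period."""
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        start = time.monotonic()
        ok, frame = cap.read()
        if not ok:
            break
        apply_steering(predict(preprocess(frame)))
        # If inference overruns PERIOD (e.g. under memory contention), the
        # control deadline is missed; otherwise sleep off the slack.
        slack = PERIOD - (time.monotonic() - start)
        if slack > 0:
            time.sleep(slack)

# e.g. control_loop(predict=lambda x: 0.0)  # stub model for a dry run
```

The deadline check at the end of each iteration is where the paper's contention results bite: an 11.6X slowdown in inference turns a comfortable 50 ms budget into a missed deadline.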
An Algorithm for Network and Data-aware Placement of Multi-Tier Applications in Cloud Data Centers
Today's Cloud applications are dominated by composite applications comprising multiple computing and data components with strong communication correlations among them. Although Cloud providers are deploying a large number of computing and storage devices to address the ever-increasing demand for computing and storage resources, network resources are emerging as a key performance bottleneck. This paper addresses network-aware placement of the virtual components (computing and data) of multi-tier applications in data centers and formally defines the placement as an optimization problem. The simultaneous placement of Virtual Machines and data blocks aims to reduce the network overhead of the data center network infrastructure. A greedy heuristic is proposed for the on-demand placement of application components that localizes network traffic in the data center interconnect. Such optimization helps reduce communication overhead in upper-layer network switches, which eventually reduces the overall traffic volume across the data center. This, in turn, helps reduce packet transmission delay, increase network performance, and minimize the energy consumption of network components. Experimental results demonstrate the performance superiority of the proposed algorithm: it outperforms the state-of-the-art network-aware application placement algorithm across all performance metrics, reducing the average network cost by up to 67% and network usage at core switches by up to 84%, while increasing the average number of application deployments by up to 18%.
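A minimal sketch of a greedy, traffic-localizing placement in this spirit is shown below; the input format, the per-host `capacity` stand-in, and the tie-breaking rules are illustrative assumptions, not the paper's exact heuristic or cost model:

```python
def greedy_place(components, demands, hosts, capacity):
    """Greedily co-locate heavily communicating components.

    components: iterable of component ids (VMs and data blocks)
    demands:    dict {(a, b): traffic volume} for communicating pairs
    hosts:      list of host ids
    capacity:   max components per host (a stand-in for CPU/storage limits)
    """
    placement, load = {}, {h: 0 for h in hosts}
    # Heaviest-talking pairs first: keeping them on one host (or under one
    # edge switch) keeps their traffic out of the upper-layer core switches.
    for (a, b), _ in sorted(demands.items(), key=lambda kv: -kv[1]):
        for comp, partner in ((a, b), (b, a)):
            if comp in placement:
                continue
            target = placement.get(partner)
            if target is not None and load[target] < capacity:
                placement[comp] = target        # co-locate with its partner
            else:
                placement[comp] = min(hosts, key=load.get)  # least-loaded fallback
            load[placement[comp]] += 1
    for comp in components:                     # components with no demand
        if comp not in placement:
            placement[comp] = min(hosts, key=load.get)
            load[placement[comp]] += 1
    return placement

print(greedy_place(
    components=["vm1", "vm2", "db1"],
    demands={("vm1", "db1"): 100, ("vm1", "vm2"): 10},
    hosts=["h1", "h2"], capacity=2))
# -> vm1 and db1 share h1; vm2 falls back to h2
```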
Connected component identification and cluster update on GPU
Cluster identification tasks occur in a multitude of contexts in physics and
engineering such as, for instance, cluster algorithms for simulating spin
models, percolation simulations, segmentation problems in image processing, or
network analysis. While it has been shown that graphics processing units (GPUs)
can result in speedups of two to three orders of magnitude as compared to
serial codes on CPUs for the case of local and thus naturally parallelized
problems such as single-spin flip update simulations of spin models, the
situation is considerably more complicated for the non-local problem of cluster
or connected component identification. I discuss the suitability of different approaches to parallelizing cluster labeling and cluster update algorithms for calculations on GPUs and compare them to the performance of serial implementations.
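For illustration, the following NumPy sketch shows the label-propagation scheme commonly used for connected component identification on GPUs, where each sweep corresponds to one parallel pass over all edges; this is a generic example, not the article's implementation:

```python
import numpy as np

def label_components(n_nodes, edges):
    """Iterative label propagation: every node starts with its own label,
    and each sweep gives both endpoints of an edge the smaller of their
    two labels.  On a GPU each sweep is one parallel pass over all edges;
    labels spread one hop per sweep, which is exactly why this non-local
    problem is harder to parallelize than local single-spin updates.
    """
    labels = np.arange(n_nodes)
    edges = np.asarray(edges)
    while True:
        m = np.minimum(labels[edges[:, 0]], labels[edges[:, 1]])
        new = labels.copy()
        np.minimum.at(new, edges[:, 0], m)   # scatter-min to both endpoints
        np.minimum.at(new, edges[:, 1], m)
        if np.array_equal(new, labels):      # fixpoint: no label changed
            return labels
        labels = new

# Two components: {0, 1, 2} and {3, 4}
print(label_components(5, [(0, 1), (1, 2), (3, 4)]))  # -> [0 0 0 3 3]
```

The number of sweeps scales with the diameter of the largest cluster, which is the basic obstacle the article's comparison of parallelization strategies addresses.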
Analyze Large Multidimensional Datasets Using Algebraic Topology
This paper presents an efficient algorithm to extract knowledge from high-dimensionality, high-complexity datasets using algebraic topology, namely simplicial complexes. Based on the concept of isomorphism of relations, our method turns a relational table into a geometric object (a simplicial complex is a polyhedron), so association rule searching conceptually becomes a geometric traversal problem. By leveraging the core concepts behind simplicial complexes, we use a technique that is new to computer science, improves performance over existing methods, and uses far less memory. It was designed and developed with a strong emphasis on scalability, reliability, and extensibility. This paper also investigates the possibility of Hadoop integration and the challenges that come with the framework.
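One common reading of this construction, sketched below, treats each row's set of (attribute, value) pairs as a maximal simplex, so frequent itemsets appear as faces shared by many simplices; the data layout and helper names here are illustrative assumptions, not the paper's algorithm:

```python
from collections import Counter
from itertools import combinations

def table_to_complex(rows):
    """Turn each relational tuple into a maximal simplex whose vertices
    are (attribute, value) pairs; the table becomes a simplicial complex."""
    return [frozenset(row.items()) for row in rows]

def frequent_faces(simplices, size, min_support):
    """Association-rule search as geometric traversal: a face (sub-simplex)
    shared by at least `min_support` maximal simplices is a frequent itemset."""
    counts = Counter()
    for s in simplices:
        for face in combinations(sorted(s), size):
            counts[face] += 1
    return {face: c for face, c in counts.items() if c >= min_support}

rows = [
    {"bread": 1, "milk": 1, "eggs": 1},
    {"bread": 1, "milk": 1},
    {"milk": 1, "eggs": 1},
]
print(frequent_faces(table_to_complex(rows), size=2, min_support=2))
# {(('bread', 1), ('milk', 1)): 2, (('eggs', 1), ('milk', 1)): 2}
```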