A parallel algorithm to calculate the costrank of a network
We developed analogous parallel algorithms to implement CostRank for distributed-memory parallel computers using multiple processors. Our intent is to compute CostRank for the growing number of hosts in a fast and scalable way, and likewise to secure large-scale networks that require fast and reliable computing to rank enormous graphs with thousands of vertices (states) and millions of arcs (links). Our proposed approach focuses on a parallel CostRank computational architecture on a cluster of PCs networked via a Gigabit Ethernet LAN, on which we evaluate the performance and scalability of our implementation. In particular, partitioning the input data, graph files, and ranking vectors with a load-balancing technique can improve the runtime and scalability of large-scale parallel computations. An application case study of the analogous CostRank computation is presented. Applying parallel environment models for one-dimensional sparse matrix partitioning on a modified PageRank results in a significant reduction in communication overhead and in per-iteration runtime. We provide an analytical discussion of the performance of the analogous algorithms in terms of I/O and synchronization cost, as well as memory usage.
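As a concrete illustration of the one-dimensional sparse matrix partitioning with load balancing mentioned in this abstract, the following sketch (hypothetical code, not the authors' implementation) splits the rows of a sparse adjacency matrix into contiguous blocks with roughly equal nonzero counts, so that each processor receives comparable work per iteration:

```python
from typing import List, Tuple

def partition_rows(row_nnz: List[int], num_procs: int) -> List[Tuple[int, int]]:
    """Greedy 1-D row partitioning: split rows into contiguous blocks so
    that each processor receives roughly the same number of nonzeros.
    row_nnz[i] is the nonzero count of row i; returns (start, end) pairs."""
    total = sum(row_nnz)
    target = total / num_procs
    parts: List[Tuple[int, int]] = []
    start, acc = 0, 0
    for i, nnz in enumerate(row_nnz):
        acc += nnz
        # Close the current block once it reaches the per-processor target,
        # keeping enough rows in reserve for the remaining processors.
        rows_left = len(row_nnz) - i - 1
        procs_left = num_procs - len(parts) - 1
        if acc >= target and len(parts) < num_procs - 1 and rows_left >= procs_left:
            parts.append((start, i + 1))
            start, acc = i + 1, 0
    parts.append((start, len(row_nnz)))
    return parts
```

Balancing on nonzeros rather than row counts matters for power-law graphs, where a few high-degree vertices would otherwise dominate one processor's per-iteration runtime.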
The "MIND" Scalable PIM Architecture
MIND (Memory, Intelligence, and Network Device) is an advanced parallel computer architecture for high-performance computing and scalable embedded processing. It is a Processor-in-Memory (PIM) architecture integrating both DRAM bit cells and CMOS logic devices on the same silicon die. MIND is multicore, with multiple memory/processor nodes on each chip, and supports global shared memory across systems of MIND components. MIND is distinguished from other PIM architectures in that it incorporates mechanisms for efficient support of a global parallel execution model based on the semantics of message-driven multithreaded split-transaction processing. MIND is designed to operate either in conjunction with other conventional microprocessors or in standalone arrays of like devices. It also incorporates mechanisms for fault tolerance, real-time execution, and active power management. This paper describes the major elements and operational methods of the MIND architecture.
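The message-driven multithreaded execution model described above can be sketched as a toy (names and structure are illustrative assumptions, not part of the MIND design): each message arriving at a memory/processor node spawns a short-lived handler thread that operates on that node's local memory, so computation moves to the data rather than the reverse:

```python
import queue
import threading

class MemoryNode:
    """Toy memory/processor node: messages name an action plus operands,
    and each message is handled by its own lightweight thread."""

    def __init__(self):
        self.memory = {}                 # local DRAM contents (toy model)
        self.inbox = queue.Queue()       # incoming messages
        self.lock = threading.Lock()     # serialize access to local memory

    def send(self, action, *args):
        self.inbox.put((action, args))

    def _handle(self, action, args):
        if action == "store":
            addr, value = args
            with self.lock:
                self.memory[addr] = value
        elif action == "add":            # read-modify-write near the data
            addr, delta = args
            with self.lock:
                self.memory[addr] = self.memory.get(addr, 0) + delta

    def run(self):
        # Drain the inbox, spawning one handler thread per message,
        # then wait for all handlers to complete.
        workers = []
        while not self.inbox.empty():
            action, args = self.inbox.get()
            t = threading.Thread(target=self._handle, args=(action, args))
            t.start()
            workers.append(t)
        for t in workers:
            t.join()
```

For example, `node.send("store", 0, 7)` followed by `node.send("add", 1, 5)` and `node.run()` leaves `node.memory == {0: 7, 1: 5}`; real PIM hardware would dispatch such handlers in silicon next to the DRAM arrays rather than via OS threads.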
A Novel Network Coded Parallel Transmission Framework for High-Speed Ethernet
Parallel transmission, as defined in high-speed Ethernet standards, enables the use of less expensive optoelectronics and offers backward compatibility with legacy Optical Transport Network (OTN) infrastructure. However, optimal parallel transmission does not scale to large networks, as it requires computationally expensive multipath routing algorithms to minimize the differential delay (and thus the required buffer size), optimize the traffic-splitting ratio, and ensure frame synchronization. In this paper, we propose a novel framework for high-speed Ethernet, which we refer to as network coded parallel transmission, capable of effective buffer management and frame synchronization without the need for complex multipath algorithms in the OTN layer. We show that network coding can reduce the delay caused by packet reordering at the receiver, thus requiring a smaller overall buffer size while improving network throughput. We design the framework in full compliance with the high-speed Ethernet standards specified in IEEE 802.3ba and present solutions for network encoding, the data structure of coded parallel transmission, buffer management, and decoding at the receiver side. The proposed network coded parallel transmission framework is simple to implement and represents a potential major breakthrough in the system design of future high-speed Ethernet.

Comment: 6 pages, 8 figures, Submitted to Globecom201