Electromagnetic Coupling through Arbitrary Apertures in Parallel Conducting Planes
We propose a numerical method to solve the problem of coupling through finite, but otherwise arbitrary, apertures in perfectly conducting and vanishingly thin parallel planes. The problem is given a generic formulation using the Method of Moments, and the Green's function in the region between the two planes is evaluated using Ewald's method. Numerical applications using Glisson's basis functions are demonstrated and compared with previously published results and with the output of FDTD software.
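For orientation, here is a minimal sketch of the kind of splitting Ewald's method performs on the image series for a parallel-plate Green's function; the notation and the splitting parameter E are our own and are not taken from the paper:

    G(\mathbf{r},\mathbf{r}') = G_{\mathrm{spectral}} + G_{\mathrm{spatial}}, \qquad
    G_{\mathrm{spatial}} = \frac{1}{8\pi}\sum_{n}\frac{1}{R_n}
      \left[ e^{-jkR_n}\,\operatorname{erfc}\!\left(R_n E - \frac{jk}{2E}\right)
           + e^{+jkR_n}\,\operatorname{erfc}\!\left(R_n E + \frac{jk}{2E}\right) \right]

where R_n denotes the distance from the observation point to the n-th image of the source in the two plates. The complementary spectral series is damped by similar Gaussian-type factors, so both sums converge rapidly, which is the practical appeal of the method.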
GraphX: Unifying Data-Parallel and Graph-Parallel Analytics
From social networks to language modeling, the growing scale and importance
of graph data have driven the development of numerous new graph-parallel systems
(e.g., Pregel, GraphLab). By restricting the computation that can be expressed
and introducing new techniques to partition and distribute the graph, these
systems can efficiently execute iterative graph algorithms orders of magnitude
faster than more general data-parallel systems. However, the same restrictions
that enable the performance gains also make it difficult to express many of the
important stages in a typical graph-analytics pipeline: constructing the graph,
modifying its structure, or expressing computation that spans multiple graphs.
As a consequence, existing graph analytics pipelines compose graph-parallel and
data-parallel systems using external storage systems, leading to extensive data
movement and a complicated programming model.
To address these challenges we introduce GraphX, a distributed graph
computation framework that unifies graph-parallel and data-parallel
computation. GraphX provides a small, core set of graph-parallel operators
expressive enough to implement the Pregel and PowerGraph abstractions, yet
simple enough to be cast in relational algebra. GraphX uses a collection of
query optimization techniques such as automatic join rewrites to efficiently
implement these graph-parallel operators. We evaluate GraphX on real-world
graphs and workloads and demonstrate that GraphX achieves comparable
performance to specialized graph computation systems, while outperforming them
in end-to-end graph pipelines. Moreover, GraphX achieves a balance between
expressiveness, performance, and ease of use.
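As a rough illustration of the idea that graph-parallel operators can be cast in relational algebra, the following toy Python sketch (our own, not the GraphX API) expresses one Pregel-style superstep as a join between an edge table and a vertex table followed by a group-by aggregation:

    # Toy "graph as tables" sketch (illustrative only; not the GraphX API).
    from collections import defaultdict

    # vertex table: id -> attribute (here: a PageRank-like score)
    vertices = {1: 1.0, 2: 1.0, 3: 1.0}
    # edge table: (src, dst) rows
    edges = [(1, 2), (1, 3), (2, 3), (3, 1)]

    def aggregate_messages(vertices, edges, send, reduce_fn):
        """Join each edge with its source vertex attribute, emit a message to the
        destination, then reduce all messages per destination (a group-by)."""
        out_degree = defaultdict(int)
        for src, _dst in edges:
            out_degree[src] += 1
        inbox = defaultdict(list)
        for src, dst in edges:                  # the "join" with the vertex table
            inbox[dst].append(send(vertices[src], out_degree[src]))
        return {v: reduce_fn(msgs) for v, msgs in inbox.items()}  # the aggregation

    # One PageRank-style superstep built from the operator above.
    damping = 0.85
    sums = aggregate_messages(vertices, edges,
                              send=lambda rank, deg: rank / deg,
                              reduce_fn=sum)
    vertices = {v: (1 - damping) + damping * sums.get(v, 0.0) for v in vertices}
    print(vertices)

The point of the sketch is only that the message-passing step reduces to standard relational operators (join, project, group-by), which is what lets a relational optimizer apply techniques such as join rewrites to it.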
Parallel Mapper
The construction of Mapper has emerged in the last decade as a powerful and
effective topological data analysis tool that approximates and generalizes
other topological summaries, such as the Reeb graph, the contour tree, and the
split and join trees. In this paper, we study the parallel analysis of the
construction of Mapper. We give a provably correct parallel algorithm to
execute Mapper on multiple processors and discuss the performance results that
compare our approach to a reference sequential Mapper implementation. We report
performance experiments that demonstrate the efficiency of our method.
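For readers new to the construction, here is a bare-bones sequential Mapper sketch in Python (our own simplification, not the paper's parallel algorithm); the per-interval clustering steps are independent of one another, which is the structure a parallel implementation can exploit:

    # Bare-bones Mapper sketch over 1-D points (our simplification).
    import itertools

    def single_linkage(points, idx, eps):
        """Naive single-linkage clustering at scale eps (1-D points for simplicity;
        a stand-in for any clustering routine)."""
        remaining, out = set(idx), []
        while remaining:
            seed = remaining.pop()
            comp, frontier = {seed}, [seed]
            while frontier:
                u = frontier.pop()
                near = [v for v in remaining if abs(points[u] - points[v]) <= eps]
                for v in near:
                    remaining.discard(v)
                    comp.add(v)
                    frontier.append(v)
            out.append(comp)
        return out

    def mapper(points, filter_fn, n_intervals=4, overlap=0.25, eps=0.5):
        values = [filter_fn(p) for p in points]
        lo, hi = min(values), max(values)
        length = (hi - lo) / n_intervals
        clusters = []                       # each cluster is a set of point indices
        for i in range(n_intervals):        # these iterations are embarrassingly parallel
            a = lo + i * length - overlap * length
            b = lo + (i + 1) * length + overlap * length
            idx = [j for j, v in enumerate(values) if a <= v <= b]
            clusters.extend(single_linkage(points, idx, eps))
        # nerve of the cover: connect clusters that share at least one point
        links = [(i, j) for i, j in itertools.combinations(range(len(clusters)), 2)
                 if clusters[i] & clusters[j]]
        return clusters, links

    pts = [0.0, 0.1, 0.2, 1.0, 1.1, 2.0, 2.1, 2.2]
    nodes, links = mapper(pts, filter_fn=lambda x: x)
    print(len(nodes), "clusters,", links)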
Recommended from our members
Crosslinking in parallel
A crosslink is a double link established between the two entries of an edge in an adjacency list representation of a graph. Crosslinks play important roles in several parallel algorithms, as they provide constant-time access between the two entries of an edge; their existence is usually assumed. We consider the problem of establishing crosslinks in a crosslink-less adjacency list for graphs that belong to a class of graphs called the linearly contractible graphs, and show that crosslinks can be established optimally in O(log n log* n) time on a CREW PRAM and optimally in O(log n) time on a CRCW PRAM for such graphs.
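To make the data structure concrete, here is a small sequential Python sketch (ours; the paper's contribution is establishing these links on a PRAM) of an adjacency list whose edge entries carry crosslinks, i.e. direct references to their twin entries in the other endpoint's list:

    # Illustration of crosslinks in an adjacency list (sequential helper code only).
    class Entry:
        __slots__ = ("vertex", "twin")
        def __init__(self, vertex):
            self.vertex = vertex   # the other endpoint of this edge
            self.twin = None       # crosslink: the matching entry in vertex's list

    def build_adjacency_with_crosslinks(n, edge_list):
        adj = [[] for _ in range(n)]
        for u, v in edge_list:
            eu, ev = Entry(v), Entry(u)
            eu.twin, ev.twin = ev, eu      # establish the crosslink
            adj[u].append(eu)
            adj[v].append(ev)
        return adj

    adj = build_adjacency_with_crosslinks(3, [(0, 1), (1, 2), (2, 0)])
    e = adj[0][0]                   # entry for edge (0, 1) in vertex 0's list
    print(e.vertex, e.twin.vertex)  # constant-time hop to the twin entry: "1 0"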
Recommended from our members
Parallel convolutional coder
A parallel convolutional coder (104) comprising: a plurality of serial convolutional coders (108), each having a register
with a plurality of memory cells and a plurality of serial coder outputs; input means (120) from which data can be transferred
in parallel into the registers; and a parallel coder output (124) comprising a plurality of output memory cells, each of which is
connected to one of the serial coder outputs so that data can be transferred in parallel from all of the serial coders to the
parallel coder output.
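A rough software analogue of the claimed structure may help fix ideas (our own sketch; the generator taps are arbitrary choices, and the reference numerals above refer to the patent's figures, not to this code): several serial convolutional coders whose registers are loaded in parallel and whose outputs are gathered into a single parallel output word.

    # Illustrative sketch only: a bank of simple serial convolutional coders with
    # parallel register loading and a parallel output word.
    class SerialConvCoder:
        def __init__(self, taps=((1, 1, 1), (1, 0, 1))):   # rate-1/2, constraint length 3 (arbitrary)
            self.taps = taps
            self.reg = [0] * len(taps[0])                    # shift-register memory cells

        def load(self, bits):
            """Parallel load of the register contents (the role of the input means)."""
            self.reg = list(bits)

        def output(self):
            """Current serial coder outputs: one parity bit per generator polynomial."""
            return tuple(sum(t * r for t, r in zip(g, self.reg)) % 2 for g in self.taps)

    def parallel_encode(words):
        """Load one register's worth of data into each serial coder, then gather
        all serial coder outputs into the parallel coder output."""
        coders = [SerialConvCoder() for _ in words]
        for coder, word in zip(coders, words):
            coder.load(word)
        out = []
        for coder in coders:
            out.extend(coder.output())
        return out

    print(parallel_encode([(1, 0, 1), (0, 1, 1)]))   # -> [0, 0, 0, 1]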
- …
