Resolving sets for Johnson and Kneser graphs
A set of vertices $S$ in a graph $G$ is a {\em resolving set} for $G$ if, for any two vertices $u,v$, there exists $x \in S$ such that the distances $d(u,x) \neq d(v,x)$. In this paper, we consider the Johnson graphs $J(n,k)$ and Kneser graphs $K(n,k)$, and obtain various constructions of resolving sets for these graphs. As well as general constructions, we show that various interesting combinatorial objects can be used to obtain resolving sets in these graphs, including (for Johnson graphs) projective planes and symmetric designs, as well as (for Kneser graphs) partial geometries, Hadamard matrices, Steiner systems and toroidal grids.
Comment: 23 pages, 2 figures, 1 table
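As a concrete illustration of the definition (our own example, not from the paper), the Kneser graph $K(5,2)$ is the Petersen graph, so one can check a candidate set of landmarks directly; the particular landmarks below are an arbitrary illustrative choice:

```python
from itertools import combinations
from collections import deque

# The Kneser graph K(5,2) is the Petersen graph: vertices are the 2-subsets
# of {0,...,4}, adjacent exactly when the subsets are disjoint.
verts = [frozenset(c) for c in combinations(range(5), 2)]
adj = {v: [w for w in verts if not v & w] for v in verts}

def distances(src):
    """BFS distances from src to every vertex of the graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def resolves(landmarks):
    """landmarks form a resolving set iff the distance vectors are distinct."""
    dist = {l: distances(l) for l in landmarks}
    vecs = {tuple(dist[l][v] for l in landmarks) for v in verts}
    return len(vecs) == len(verts)

print(resolves(verts[:2]))   # two landmarks collide on some pair: False
print(resolves(verts[:3]))   # {0,1}, {0,2}, {0,3} resolve the graph: True
```

This matches the known fact that the Petersen graph has metric dimension 3.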
Survey of partitioning techniques in silicon compilation
In the silicon compilation design process, partitioning is usually the first problem to be investigated, because partitioning algorithms form the backbone of many algorithms, including system synthesis, processor synthesis, floorplanning, and placement. In this survey, several partitioning techniques are examined. In addition, this paper reviews the partitioning algorithms used by synthesis systems at different design levels.
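To make the underlying problem concrete, here is a minimal, hypothetical sketch (not taken from the survey) of one greedy improvement pass for two-way partitioning of a netlist graph; unlike real algorithms such as Kernighan-Lin or Fiduccia-Mattheyses, it ignores balance constraints:

```python
# Partitioning aims to split a circuit graph into parts with few crossing
# edges. This toy pass tries every single-node move and keeps the best one.
def cut_size(edges, part):
    """Number of edges whose endpoints lie in different partitions."""
    return sum(1 for u, v in edges if part[u] != part[v])

def improve_once(edges, part):
    """One greedy pass: apply the single best cut-reducing node move."""
    best = cut_size(edges, part)
    best_node = None
    for n in part:
        part[n] ^= 1                 # tentatively move node n
        c = cut_size(edges, part)
        if c < best:
            best, best_node = c, n
        part[n] ^= 1                 # undo the tentative move
    if best_node is not None:
        part[best_node] ^= 1         # commit the best move found
    return cut_size(edges, part)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
part = {0: 0, 1: 1, 2: 0, 3: 1}      # initial bipartition, cut size 4
print(improve_once(edges, part))     # one pass lowers the cut to 2
```

Production partitioners add balance constraints, gain buckets, and multi-level clustering on top of this basic move-and-evaluate loop.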
A new partitioning approach for layout synthesis from register-transfer netlists
Most ICs today are described and documented using hierarchical netlists. In addition to gates, latches, and flip-flops, these netlists include sliceable register-transfer components such as registers, counters, adders, ALUs, shifters, register files, and multiplexers. Usually, these components are decomposed into basic gates, latches, and flip-flops, and are laid out using standard cells. The standard cell architecture requires excessive routing area and does not exploit the bit-sliced nature of register-transfer components. In this paper, we present a new sliced-layout architecture to alleviate the preceding problems. We also describe partitioning algorithms that are used to generate the floorplan for this layout architecture. The partitioning algorithms not only select the best-suited layout style for each component, but also consider critical paths, I/O pin locations, and connections between blocks. This approach improves the overall area utilization and minimizes the total wire length.
Leveraging Coding Techniques for Speeding up Distributed Computing
Large scale clusters leveraging distributed computing frameworks such as MapReduce routinely process data on the order of petabytes or more. The sheer size of the data precludes processing on a single computer. The philosophy in these frameworks is to partition the overall job into smaller tasks that are executed on different servers; this is called the map phase. This is followed by a data shuffling phase where appropriate data are exchanged between the servers. The final, so-called reduce phase completes the computation.
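The three phases can be sketched with a toy word count; the function names below are illustrative, not from any particular framework:

```python
from collections import defaultdict

# Map on partitioned input, shuffle by key, then reduce, as described above.
def map_phase(chunk):
    """Map task: emit (key, value) pairs from one input chunk."""
    return [(word, 1) for word in chunk.split()]

def shuffle_phase(mapped_outputs):
    """Shuffle: exchange intermediate pairs so each key's values meet."""
    groups = defaultdict(list)
    for output in mapped_outputs:
        for key, value in output:
            groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce task: aggregate the values gathered for each key."""
    return {key: sum(values) for key, values in groups.items()}

chunks = ["a b a", "b b c"]               # the job, split into map tasks
mapped = [map_phase(c) for c in chunks]   # map phase (one task per server)
print(reduce_phase(shuffle_phase(mapped)))  # {'a': 2, 'b': 3, 'c': 1}
```

In a real cluster the shuffle is network traffic between servers, which is exactly the cost the coded schemes below target.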
One potential approach, explored in prior work for reducing the overall execution time, is to exploit a natural tradeoff between computation and communication. Specifically, the idea is to run redundant copies of map tasks that are placed on judiciously chosen servers. The shuffle phase exploits the location of the nodes and utilizes coded transmission. The main drawback of this approach is that it requires the original job to be split into a number of map tasks that grows exponentially in the system parameters. This is problematic, as we demonstrate that splitting jobs too finely can in fact adversely affect the overall execution time.
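A rough numerical sketch of this tradeoff, using the shuffle-load and job-splitting expressions commonly attributed to the prior coded-computing analysis being discussed (treat both formulas as assumptions for illustration, not as this paper's results):

```python
from math import comb

# With K servers and each map task replicated on r of them, coded shuffling
# reduces the normalized communication load roughly by a factor of r, but
# the scheme needs the job split into a multiple of C(K, r) map tasks.
def comm_load(K, r):
    """Normalized shuffle traffic with replication factor r (assumed model)."""
    return (1 - r / K) / r

def min_map_tasks(K, r):
    """Smallest admissible number of map tasks for the coded scheme."""
    return comb(K, r)

K = 20
for r in (1, 2, 4):
    print(r, round(comm_load(K, r), 3), min_map_tasks(K, r))
```

Even at K = 20, raising r from 1 to 4 cuts the load from 0.95 to 0.20 but inflates the required number of map tasks from 20 to 4845, illustrating why splitting levels matter.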
In this work we show that one can simultaneously obtain low communication
loads while ensuring that jobs do not need to be split too finely. Our approach
uncovers a deep relationship between this problem and a class of combinatorial
structures called resolvable designs. Appropriate interpretation of resolvable
designs can allow for the development of coded distributed computing schemes
where the splitting levels are exponentially lower than prior work. We present
experimental results obtained on Amazon EC2 clusters for a widely known
distributed algorithm, namely TeraSort. We obtain over a 4.69× improvement in speedup over the baseline approach and more than 2.6× over the current state of the art.
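A resolvable design, informally, is a block design whose blocks split into parallel classes, each of which partitions the point set. A small self-contained check of this property (our own illustration, not the paper's construction), using the affine plane AG(2,3):

```python
from itertools import product

# AG(2,3) has 9 points and 12 lines; the lines fall into 4 parallel
# classes (one per slope, plus the vertical lines), and each class
# partitions the 9 points -- the defining property of resolvability.
points = set(product(range(3), repeat=2))

def line(slope, intercept):
    if slope == 'inf':                      # vertical lines x = intercept
        return frozenset((intercept, y) for y in range(3))
    return frozenset((x, (slope * x + intercept) % 3) for x in range(3))

classes = {s: [line(s, b) for b in range(3)] for s in [0, 1, 2, 'inf']}

for cls in classes.values():
    covered = set().union(*cls)
    # Each class covers every point exactly once.
    assert covered == points and sum(len(l) for l in cls) == 9

print(len(classes), "parallel classes, each partitioning", len(points), "points")
```

The scheme in the abstract exploits this partition structure; the code above only demonstrates what "resolvable" means on the smallest non-trivial example.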
Behavioral synthesis from VHDL using structured modeling
This dissertation describes work in behavioral synthesis involving the development of a VHDL Synthesis System (VSS), which accepts a VHDL behavioral input specification and performs technology-independent synthesis to generate a circuit netlist of generic components. The VHDL language is used for input and output descriptions. An intermediate representation which incorporates signal typing and component attributes simplifies compilation and facilitates design optimization. A Structured Modeling methodology has been developed to suggest standard VHDL modeling practices for synthesis. Structured Modeling provides recommendations for the use of available VHDL description styles so that optimal designs will be synthesized. A design composed of generic components is synthesized from the input description through a process of Graph Compilation, Graph Criticism, and Design Compilation. Experiments were performed to demonstrate the effects of different modeling styles on the quality of the design produced by VSS. Several alternative VHDL models were examined for each benchmark, illustrating the improvements in design quality achieved when Structured Modeling guidelines were followed.
Low Power Processor Architectures and Contemporary Techniques for Power Optimization – A Review
Technological evolution has significantly increased the number of transistors for a given die area and raised switching speeds from a few MHz to the GHz range. This simultaneous decline in size and boost in performance demands shrinking supply voltages and effective management of power dissipation in chips with millions of transistors. This has triggered a substantial amount of research on power reduction techniques in almost every aspect of the chip, and particularly in the processor cores it contains. This paper presents an overview of techniques for achieving power efficiency, mainly at the processor core level, but also visits related domains such as buses and memories. There are various processor parameters and features, such as supply voltage, clock frequency, caches and pipelining, which can be optimized to reduce the power consumption of the processor. This paper discusses various ways in which these parameters can be optimized. Also, emerging power-efficient processor architectures are overviewed and research activities are discussed, which should help the reader identify how these factors in a processor contribute to power consumption. Some of these concepts have already been established, whereas others are still active research areas. © 2009 ACADEMY PUBLISHER
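For instance, the roles of supply voltage and clock frequency can be seen from the standard dynamic-power estimate P = a·C·V²·f; the operating points below are made-up illustrative values, not measurements from the survey:

```python
# Dynamic power of CMOS logic: activity factor a, switched capacitance C,
# supply voltage V (quadratic effect), and clock frequency f.
def dynamic_power(activity, cap_farads, vdd, freq_hz):
    return activity * cap_farads * vdd ** 2 * freq_hz

base = dynamic_power(0.2, 1e-9, 1.2, 2.0e9)      # nominal operating point
scaled = dynamic_power(0.2, 1e-9, 0.9, 1.5e9)    # voltage/frequency scaling
print(round(scaled / base, 3))                   # fraction of nominal power
```

Dropping V from 1.2 V to 0.9 V and f from 2 GHz to 1.5 GHz leaves about 42% of the nominal dynamic power, showing why voltage scaling dominates: its contribution is quadratic while frequency's is linear.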
Hadamard partitioned difference families and their descendants
If $D$ is a $(4u^2,\,2u^2-u,\,u^2-u)$ Hadamard difference set (HDS) in a group $G$, then $\{D,\,G \setminus D\}$ is clearly a $(4u^2,\,[2u^2-u,\,2u^2+u],\,2u^2)$ partitioned difference family (PDF). Any PDF with these parameters will be said to be of Hadamard-type if it arises as the one above. We present a doubling construction which, starting from any such PDF, leads to an infinite class of PDFs. As a special consequence, we get a PDF in a group of larger order with three block-sizes, whenever we have a $(4u^2,\,2u^2-u,\,u^2-u)$-HDS and the maximal prime power divisors of the order of the new group are all sufficiently large.
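The opening claim can be checked numerically for $u=2$ with the classical $(16,6,2)$ difference set (the "axis cross") in $\mathbb{Z}_4 \times \mathbb{Z}_4$; this verification sketch is ours, not the paper's:

```python
from itertools import product
from collections import Counter

# D is a (16,6,2) Hadamard difference set in Z4 x Z4; together with its
# complement it forms a partitioned difference family in which every
# nonzero group element occurs 2u^2 = 8 times as a difference.
G = list(product(range(4), repeat=2))
D = [(0, 1), (0, 2), (0, 3), (1, 0), (2, 0), (3, 0)]
comp = [g for g in G if g not in D]

def diffs(block):
    """All ordered differences a - b (mod 4 componentwise), a != b."""
    return [((a[0] - b[0]) % 4, (a[1] - b[1]) % 4)
            for a in block for b in block if a != b]

count = Counter(diffs(D) + diffs(comp))
assert all(count[g] == 8 for g in G if g != (0, 0))
print(sorted(set(count.values())))  # every nonzero element appears 8 times
```

The differences within $D$ give each nonzero element twice (the HDS property) and those within the complement, a $(16,10,6)$ difference set, give each six more times, for a uniform total of eight.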