How dense can one pack spheres of arbitrary size distribution?
We present the first systematic algorithm to estimate the maximum packing
density of spheres when the grain sizes are drawn from an arbitrary size
distribution. With an Apollonian filling rule, we implement our technique for
disks in 2d and spheres in 3d. As expected, the densest packing is achieved
with power-law size distributions. We also test the method on homogeneous and
on empirical real distributions, and we propose a scheme to obtain
experimentally accessible distributions of grain sizes with low porosity. Our
method should be helpful in the development of ultra-strong ceramics and high-performance concrete. Comment: 5 pages, 5 figures
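The abstract's Apollonian rule places, in each pore, the largest grain that fits; as a loose, hypothetical stand-in (not the authors' algorithm), the sketch below greedily inserts disks with power-law-distributed radii into a unit square and reports the packed area fraction. All function names and parameter values are illustrative.

```python
import math
import random

def sample_power_law(r_min, r_max, alpha, rng):
    # Inverse-CDF sampling of a radius with density P(r) ~ r^(-alpha)
    # on the interval [r_min, r_max].
    u = rng.random()
    if alpha == 1:
        return r_min * (r_max / r_min) ** u
    a = 1 - alpha
    return (r_min ** a + u * (r_max ** a - r_min ** a)) ** (1 / a)

def pack_disks(n_trials=20000, r_min=0.005, r_max=0.05, alpha=3.0, seed=0):
    """Greedy random insertion of power-law-sized disks into the unit
    square: each candidate disk is kept only if it overlaps no accepted
    disk.  Returns (disks, packed_area_fraction)."""
    rng = random.Random(seed)
    disks = []  # accepted disks as (x, y, r)
    for _ in range(n_trials):
        r = sample_power_law(r_min, r_max, alpha, rng)
        x = rng.uniform(r, 1 - r)
        y = rng.uniform(r, 1 - r)
        if all((x - x2) ** 2 + (y - y2) ** 2 >= (r + r2) ** 2
               for x2, y2, r2 in disks):
            disks.append((x, y, r))
    density = sum(math.pi * r * r for _, _, r in disks)
    return disks, density
```

Broad distributions (small alpha cut-offs spanning decades) let small disks fill the pores between large ones, which is the qualitative effect the paper quantifies.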
Random Sequential Addition of Hard Spheres in High Euclidean Dimensions
Employing numerical and theoretical methods, we investigate the structural
characteristics of random sequential addition (RSA) of congruent spheres in d-dimensional Euclidean space in the infinite-time or saturation limit for the first six space dimensions (d = 1 through 6).
Specifically, we determine the saturation density, pair correlation function,
cumulative coordination number, and the structure factor in each of these dimensions. We find that for 2 <= d <= 6, the saturation density scales with dimension as c1*2^(-d) + c2*d*2^(-d), where c1 and c2 are fitted constants. We also show analytically that the same density scaling
persists in the high-dimensional limit, albeit with different coefficients. A
byproduct of this high-dimensional analysis is a relatively sharp lower bound on the saturation density for any d, expressed in terms of S(0), the structure factor at k = 0 (i.e., the infinite-wavelength number variance), in the high-dimensional limit.
Consistent with the recent "decorrelation principle," we find that pair
correlations markedly diminish as the space dimension increases up to six. Our
work has implications for the possible existence of disordered classical ground states for some continuous potentials in sufficiently high dimensions. Comment: 38 pages, 9 figures, 4 tables
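A minimal RSA sketch in d dimensions, assuming a periodic unit torus and a fixed attempt budget rather than the true saturation limit (all names and parameters are illustrative, not the paper's setup):

```python
import math
import random

def rsa_packing_fraction(d, radius=0.1, attempts=30000, seed=1):
    """Random sequential addition of congruent spheres of the given radius
    into the d-dimensional unit torus (periodic boundaries).  Returns the
    packing fraction reached after the given number of insertion attempts;
    the saturation density is the limit of many attempts."""
    rng = random.Random(seed)
    centers = []
    min_sq = (2 * radius) ** 2          # squared contact distance
    for _ in range(attempts):
        p = [rng.random() for _ in range(d)]
        ok = True
        for q in centers:
            s = 0.0
            for a, b in zip(p, q):
                dx = abs(a - b)
                dx = min(dx, 1 - dx)    # nearest periodic image
                s += dx * dx
            if s < min_sq:
                ok = False
                break
        if ok:
            centers.append(p)
    # Volume of a d-ball: pi^(d/2) * r^d / Gamma(d/2 + 1).
    vball = math.pi ** (d / 2) * radius ** d / math.gamma(d / 2 + 1)
    return len(centers) * vball
```

Because late insertions almost always collide, the approach to saturation is slow; production studies use cell lists and far larger attempt budgets than this sketch.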
Bin Packing and Related Problems: General Arc-flow Formulation with Graph Compression
We present an exact method, based on an arc-flow formulation with side
constraints, for solving bin packing and cutting stock problems --- including
multi-constraint variants --- by simply representing all the patterns in a very
compact graph. Our method includes a graph compression algorithm that usually
reduces the size of the underlying graph substantially without weakening the
model. As opposed to our method, which provides strong models, conventional
models are usually highly symmetric and provide very weak lower bounds.
Our formulation is equivalent to Gilmore and Gomory's, thus providing a very
strong linear relaxation. However, instead of using column-generation in an
iterative process, the method constructs a graph, where paths from the source
to the target node represent every valid packing pattern.
The same method, without any problem-specific parameterization, was used to
solve a large variety of instances from several different cutting and packing
problems. In this paper, we deal with vector packing, graph coloring, bin
packing, cutting stock, cardinality constrained bin packing, cutting stock with
cutting knife limitation, cutting stock with binary patterns, bin packing with
conflicts, and cutting stock with binary patterns and forbidden pairs. We
report computational results obtained with many benchmark test data sets, all of them showing a large advantage of this formulation over the traditional ones.
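The path-equals-pattern idea can be sketched for plain one-dimensional bin packing; this toy version omits the paper's side constraints, loss arcs, and graph compression, and forces non-decreasing item sizes along a path so each pattern appears exactly once (all names are illustrative):

```python
def arc_flow_graph(capacity, sizes):
    """Arcs of a simplified arc-flow graph for bin packing: node u is the
    capacity already used, and each item size w contributes an arc
    (u, u + w) wherever it still fits."""
    sizes = sorted(set(sizes))
    arcs = []
    reachable = {0}
    for u in range(capacity + 1):
        if u not in reachable:
            continue
        for w in sizes:
            if u + w <= capacity:
                arcs.append((u, u + w, w))
                reachable.add(u + w)
    return arcs

def patterns(capacity, sizes):
    """Enumerate every maximal packing pattern as a path from node 0;
    sizes are forced to be non-decreasing along a path so that each
    multiset of items corresponds to exactly one path."""
    arcs = arc_flow_graph(capacity, sizes)
    out = {}
    for u, v, w in arcs:
        out.setdefault(u, []).append((v, w))
    smallest = min(sizes)
    result = []

    def walk(u, picked, last):
        for v, w in sorted(out.get(u, [])):
            if w >= last:
                walk(v, picked + [w], w)
        if capacity - u < smallest:      # no further item fits: maximal
            result.append(tuple(picked))

    walk(0, [], 0)
    return result
```

In the exact method, one does not enumerate paths: a min-flow over this graph with demand constraints per item size selects how many bins use each pattern, which is what makes the formulation equivalent to Gilmore and Gomory's.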
Packing Sporadic Real-Time Tasks on Identical Multiprocessor Systems
In real-time systems, in addition to functional correctness, recurrent tasks must fulfill timing constraints to ensure the correct behavior of the system. Partitioned scheduling is widely used in real-time systems, i.e., the
tasks are statically assigned onto processors while ensuring that all timing
constraints are met. The decision version of the problem, which is to check whether the deadline constraints of tasks can be satisfied on a given number of identical processors, has been known to be NP-complete in the strong sense.
Several studies on this problem are based on approximations involving resource
augmentation, i.e., speeding up individual processors. This paper studies
another type of resource augmentation by allocating additional processors, a
topic that has not been explored until recently. We provide polynomial-time
algorithms and analysis, in which the approximation factors are dependent upon
the input instances. Specifically, the factors are related to the maximum ratio
of the period to the relative deadline of a task in the given task set. We also
show that these algorithms unfortunately cannot achieve a constant
approximation factor for general cases. Furthermore, we prove that the problem
does not admit any asymptotic polynomial-time approximation scheme (APTAS)
unless P = NP when the task set has constrained deadlines, i.e., the relative deadline of a task is no more than the period of the task. Comment: Accepted and to appear in ISAAC 2018, Yi-Lan, Taiwan
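As a hedged illustration of partitioned scheduling (not the paper's algorithms), the sketch below assigns sporadic tasks (C, D, T) to processors first-fit by density C / min(D, T), a standard sufficient schedulability test for EDF; names and the test itself are assumptions for this sketch:

```python
def first_fit_partition(tasks, m):
    """First-fit partitioning of sporadic tasks (C, D, T) onto m identical
    processors, using the sufficient EDF density condition: the sum of
    C / min(D, T) on each processor must not exceed 1.  Returns a list of
    per-processor task lists, or None if some task cannot be placed."""
    def density(task):
        c, d, t = task
        return c / min(d, t)

    procs = [[] for _ in range(m)]
    load = [0.0] * m
    for task in sorted(tasks, key=density, reverse=True):  # heaviest first
        for i in range(m):
            if load[i] + density(task) <= 1.0:
                procs[i].append(task)
                load[i] += density(task)
                break
        else:
            return None          # no processor can take this task
    return procs
```

The period-to-deadline ratio mentioned in the abstract matters precisely because the density test is pessimistic when deadlines are much shorter than periods.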
Approximating Smallest Containers for Packing Three-dimensional Convex Objects
We investigate the problem of computing a minimal-volume container for the
non-overlapping packing of a given set of three-dimensional convex objects.
Even the simplest versions of the problem are NP-hard, so we cannot expect to find exact polynomial-time algorithms. We give constant-ratio
approximation algorithms for packing axis-parallel (rectangular) cuboids under
translation into an axis-parallel (rectangular) cuboid as container, for
cuboids under rigid motions into an axis-parallel cuboid or into an arbitrary
convex container, and for packing convex polyhedra under rigid motions into an
axis-parallel cuboid or arbitrary convex container. This work gives the first
approximability results for the computation of minimal-volume containers for the objects described.
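For intuition only: any stacking of the boxes gives a feasible container, and the total item volume gives a lower bound; approximation algorithms such as those above narrow the gap between the two to a constant factor. A trivial baseline, with hypothetical names:

```python
def naive_container(boxes):
    """Stack axis-parallel boxes (w, d, h) on top of one another under
    translation only.  Returns (container_dims, container_volume,
    volume_lower_bound); the minimal container's volume lies between the
    last two values."""
    W = max(w for w, d, h in boxes)          # widest box fixes the width
    D = max(d for w, d, h in boxes)          # deepest box fixes the depth
    H = sum(h for w, d, h in boxes)          # one box per layer
    lower = sum(w * d * h for w, d, h in boxes)  # items cannot overlap
    return (W, D, H), W * D * H, lower
```

The stacking ratio can be arbitrarily bad (e.g., many flat boxes of very different footprints), which is why nontrivial shelf- and layer-based constructions are needed for a constant guarantee.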
Online Circle and Sphere Packing
In this paper we consider the Online Bin Packing Problem in three variants:
Circles in Squares, Circles in Isosceles Right Triangles, and Spheres in Cubes.
The first two receive an online sequence of circles (items) of different radii, while the third receives an online sequence of spheres (items) of different radii; the goal is to pack the items into the minimum number of unit squares, isosceles right triangles of leg length one, and unit cubes, respectively. For Online Circle Packing in Squares, we improve the previous
best-known competitive ratio for the bounded space version, when at most a
constant number of bins can be open at any given time, from 2.439 to 2.3536.
For Online Circle Packing in Isosceles Right Triangles and Online Sphere
Packing in Cubes we show bounded space algorithms of asymptotic competitive
ratios 2.5490 and 3.5316, respectively, as well as lower bounds of 2.1193 and
2.7707 on the competitive ratio of any online bounded space algorithm for these
two problems. We also consider the online unbounded-space variant of these three problems, which admits a small reorganization of the items inside a bin after they are packed, and we present algorithms of competitive ratios 2.3105,
2.5094, and 3.5146 for Circles in Squares, Circles in Isosceles Right Triangles, and Spheres in Cubes, respectively.
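A toy bounded-space scheme, far cruder than the algorithms above, illustrates why only a constant number of bins need stay open: circles are grouped into size classes, and each class keeps one open unit-square bin subdivided into a grid. The class rule and all names here are assumptions for the sketch:

```python
class BoundedSpaceCirclePacker:
    """Toy bounded-space online packing of circles into unit squares: a
    circle of radius r gets class k = floor(1 / (2 * r)), so it fits in a
    grid cell of side 1/k; one open bin per class holds up to k * k
    circles.  Competitive algorithms use much finer size classes."""

    def __init__(self, max_class=8):
        self.max_class = max_class   # caps the number of open bins
        self.cells_used = {}         # class k -> cells used in its open bin
        self.bins = 0                # total bins ever opened

    def insert(self, r):
        if not 0 < r <= 0.5:
            raise ValueError("circle must fit in a unit square")
        k = min(self.max_class, int(1 / (2 * r)))
        used = self.cells_used.get(k)
        if used is None or used >= k * k:   # open a fresh bin for class k
            self.bins += 1
            used = 0
        self.cells_used[k] = used + 1
        return k
```

Grid cells waste the area outside each inscribed circle, which is exactly the slack the refined algorithms above shrink to reach ratios like 2.3536.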
Parallel Implementation of Lossy Data Compression for Temporal Data Sets
Many scientific data sets contain temporal dimensions: they store information at the same spatial locations across different time stamps. Some of the largest temporal datasets are produced by parallel computing applications such as simulations of climate change and fluid dynamics. Temporal datasets can be very large and take a long time to transfer among storage locations. With data compression, files can be transferred faster and use less storage space. NUMARCK is a lossy data compression algorithm
for temporal data sets that learns the emerging distributions of element-wise change ratios along the temporal dimension and encodes them concisely into an index table. This paper presents a parallel implementation of
NUMARCK. Evaluated with six data sets obtained from climate and astrophysics
simulations, parallel NUMARCK achieved scalable speedups of up to 8788 when
running 12800 MPI processes on a parallel computer. We also compare the
compression ratios against two lossy data compression algorithms, ISABELA and
ZFP. The results show that NUMARCK achieves a higher compression ratio than ISABELA and ZFP. Comment: 10 pages, HiPC 201
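The index-table idea can be sketched with equal-width binning of change ratios; NUMARCK itself learns the bins (e.g., by clustering), and this sketch additionally assumes nonzero values in the previous timestep:

```python
def compress_step(prev, curr, n_bins=16):
    """NUMARCK-style lossy step: quantize the element-wise change ratios
    (curr - prev) / prev into n_bins equal-width bins, storing only one
    small bin index per element plus the table of bin centers."""
    ratios = [(c - p) / p for p, c in zip(prev, curr)]
    lo, hi = min(ratios), max(ratios)
    width = (hi - lo) / n_bins or 1.0          # guard: all ratios equal
    index = [min(n_bins - 1, int((x - lo) / width)) for x in ratios]
    centers = [lo + (b + 0.5) * width for b in range(n_bins)]
    return index, centers

def decompress_step(prev, index, centers):
    """Reconstruct the next timestep (lossily) from the index table."""
    return [p * (1 + centers[b]) for p, b in zip(prev, index)]
```

Each element costs log2(n_bins) bits instead of a full float, which is where the compression comes from; the parallel implementation in the paper distributes the binning and encoding across MPI processes.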