Reconstructing Polyatomic Structures from Discrete X-Rays: NP-Completeness Proof for Three Atoms
We address a discrete tomography problem that arises in the study of the
atomic structure of crystal lattices. A polyatomic structure T can be defined
as an integer lattice in dimension D>=2, whose points may be occupied by
distinct types of atoms. To ``analyze'' T, we conduct ell measurements that we
call _discrete X-rays_. A discrete X-ray in direction xi determines the number
of atoms of each type on each line parallel to xi. Given ell such non-parallel
X-rays, we wish to reconstruct T.
The complexity of the problem for c=1 (one atom type) has been completely
determined by Gardner, Gritzmann and Prangenberg, who proved that the problem
is NP-complete for any dimension D>=2 and ell>=3 non-parallel X-rays, and that
it can be solved in polynomial time otherwise.
The NP-completeness result above clearly extends to any c>=2, and therefore
when studying the polyatomic case we can assume that ell=2. As shown in another
article by the same authors, this problem is also NP-complete for c>=6 atoms,
even for dimension D=2 and axis-parallel X-rays. They conjecture that the
problem remains NP-complete for c=3,4,5, although, as they point out, the proof
idea does not seem to extend to c<=5.
We resolve the conjecture by proving that the problem is indeed NP-complete
for c>=3 in 2D, even for axis-parallel X-rays. Our construction relies heavily
on structural results for the realizations of 0-1 matrices with given row
and column sums.
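The two-X-ray building block behind these structural results is the classical problem of realizing a 0-1 matrix with prescribed row and column sums, which is solvable in polynomial time. A minimal sketch of the standard Gale-Ryser-style greedy construction (an illustration of the general technique, not the paper's specific construction):

```python
def realize(row_sums, col_sums):
    """Greedy (Gale-Ryser style) construction of a 0-1 matrix with the
    given row and column sums; returns None if no realization exists."""
    if sum(row_sums) != sum(col_sums):
        return None
    m, n = len(row_sums), len(col_sums)
    matrix = [[0] * n for _ in range(m)]
    remaining = list(col_sums)
    # Fill rows in decreasing order of row sum, always placing 1s in the
    # columns with the largest remaining demand.
    for i in sorted(range(m), key=lambda i: -row_sums[i]):
        if row_sums[i] > n:
            return None
        cols = sorted(range(n), key=lambda j: -remaining[j])[:row_sums[i]]
        if any(remaining[j] == 0 for j in cols):
            return None
        for j in cols:
            matrix[i][j] = 1
            remaining[j] -= 1
    return matrix if all(r == 0 for r in remaining) else None
```

For instance, realize([2, 1], [2, 1]) returns a valid realization, while incompatible demands such as realize([2, 2], [3, 1]) yield None.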
The K-Server Dual and Loose Competitiveness for Paging
This paper has two results. The first is based on the surprising observation
that the well-known ``least-recently-used'' paging algorithm and the
``balance'' algorithm for weighted caching are linear-programming primal-dual
algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that
generalizes them both and has an optimal performance guarantee for weighted
caching.
For the second result, the paper presents empirical studies of paging
algorithms, documenting that in practice, on ``typical'' cache sizes and
sequences, the performance of paging strategies is much better than their
worst-case analyses in the standard model suggest. The paper then presents
theoretical results that support and explain this. For example: on any input
sequence, with almost all cache sizes, either the performance guarantee of
least-recently-used is O(log k) or the fault rate (in an absolute sense) is
insignificant.
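The Greedy-Dual strategy itself is not reproduced here, but the least-recently-used baseline it generalizes is easy to state. A minimal sketch, assuming the standard page-fault model, for counting LRU faults at a given cache size k:

```python
from collections import OrderedDict

def lru_faults(requests, k):
    """Count page faults incurred by LRU on a request sequence
    with a cache holding at most k pages."""
    cache = OrderedDict()  # keys ordered from least to most recently used
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                    # miss: page must be fetched
            if len(cache) >= k:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = True
    return faults
```

Running the same trace at a range of cache sizes k yields the fault-rate-versus-cache-size behavior that the empirical studies examine.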
Both of these results are strengthened and generalized in ``On-line File
Caching'' (1998).
Comment: conference version: ``On-Line Caching as Cache Size Varies'', SODA
(1991).
Application of Local Information Entropy in Cluster Monte Carlo Algorithms
The chapter describes a modification of the so-called adding probability used in cluster Monte Carlo algorithms. The modification is based on the fact that in real systems, different properties can influence a system's clusterization. Accordingly, an additional factor related to property disorder was introduced into the adding probability, which leads to more effective free energy minimization during MC iterations. As a measure of the disorder, we proposed using the local information entropy. The proposed approach was tested and compared with the classical methods, showing its high efficiency in simulations of multiphase magnetic systems, where magnetic anisotropy was used as the property influencing the system's clusterization.
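The chapter's exact form of the modified adding probability is not given above; as an illustration only (function name and interface are hypothetical), the local information entropy of a site can be computed as the Shannon entropy of the property values observed in its neighborhood:

```python
from collections import Counter
from math import log2

def local_entropy(neighbor_states):
    """Shannon information entropy (in bits) of the distribution of
    property values in a site's local neighborhood; 0 = fully ordered."""
    counts = Counter(neighbor_states)
    total = len(neighbor_states)
    return -sum((c / total) * log2(c / total) for c in counts.values())
```

A perfectly ordered neighborhood gives entropy 0, while a maximally mixed one gives log2 of the number of distinct property values, so the entropy is a natural disorder factor to feed into the adding probability.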
A phi-Competitive Algorithm for Scheduling Packets with Deadlines
In the online packet scheduling problem with deadlines (PacketScheduling, for
short), the goal is to schedule transmissions of packets that arrive over time
in a network switch and need to be sent across a link. Each packet has a
deadline, representing its urgency, and a non-negative weight representing
its priority. Only one packet can be transmitted in any time slot, so, if the
system is overloaded, some packets will inevitably miss their deadlines and be
dropped. In this scenario, the natural objective is to compute a transmission
schedule that maximizes the total weight of packets which are successfully
transmitted. The problem is inherently online, with the scheduling decisions
made without the knowledge of future packet arrivals. The central problem
concerning PacketScheduling, which has been a subject of intensive study since
2001, is to determine the optimal competitive ratio of online algorithms,
namely the worst-case ratio between the optimum total weight of a schedule
(computed by an offline algorithm) and the weight of a schedule computed by a
(deterministic) online algorithm.
We solve this open problem by presenting a phi-competitive online
algorithm for PacketScheduling (where phi is the golden ratio),
matching the previously established lower bound.
Comment: Major revision of the analysis and some other parts of the paper.
Another revision will follow.
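The phi-competitive algorithm itself is intricate; for contrast, the simple greedy baseline (transmit the heaviest pending, non-expired packet in each slot), long known to be 2-competitive, can be sketched as follows (the interface is hypothetical, not from the paper):

```python
import heapq

def greedy_schedule(arrivals, horizon):
    """Greedy baseline for PacketScheduling: in each time slot, transmit
    the pending packet of maximum weight, discarding expired packets.
    arrivals: dict mapping slot t -> list of (weight, deadline) packets.
    Returns the total weight transmitted over slots 0..horizon-1."""
    pending = []  # max-heap by weight: entries are (-weight, deadline)
    total = 0
    for t in range(horizon):
        for w, d in arrivals.get(t, []):
            heapq.heappush(pending, (-w, d))
        while pending:
            neg_w, d = heapq.heappop(pending)
            if d >= t:         # packet still alive: transmit it
                total += -neg_w
                break          # one transmission per slot
            # otherwise the deadline has passed and the packet is dropped
    return total
```

On arrivals {0: [(1, 0), (2, 1)]} the greedy earns 2 while the optimum is 3 (send the urgent light packet first), illustrating why beating ratio 2 requires more careful tie-breaking.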
A Note on Tiling under Tomographic Constraints
Given a tiling of a 2D grid with several types of tiles, we can count for
every row and column how many tiles of each type it intersects. These numbers
are called the _projections_. We are interested in the problem of reconstructing
a tiling which has given projections. Some simple variants of this problem,
involving tiles that are 1x1 or 1x2 rectangles, have been studied in the past,
and were proved to be either solvable in polynomial time or NP-complete. In
this note we make progress toward a comprehensive classification of various
tiling reconstruction problems, by proving NP-completeness results for several
sets of tiles.
Comment: added one author and a few theorems.
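The forward direction, computing the projections of a given tiling, is straightforward and fixes the notation. A minimal sketch, with tiles represented as (type, cells) pairs (a representation assumed for illustration, not taken from the note):

```python
from collections import defaultdict

def projections(tiles, rows, cols):
    """Row and column projections of a tiling: for each line, count how
    many tiles of each type intersect it. tiles is a list of (type, cells)
    pairs, where cells is an iterable of (r, c) grid positions."""
    row_proj = [defaultdict(int) for _ in range(rows)]
    col_proj = [defaultdict(int) for _ in range(cols)]
    for kind, cells in tiles:
        for r in {r for r, _ in cells}:  # each tile counted once per row...
            row_proj[r][kind] += 1
        for c in {c for _, c in cells}:  # ...and once per column
            col_proj[c][kind] += 1
    return row_proj, col_proj
```

The reconstruction problem studied in the note is the inverse: given such row and column counts, decide whether any tiling realizes them.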
Preemptive Multi-Machine Scheduling of Equal-Length Jobs to Minimize the Average Flow Time
We study the problem of preemptive scheduling of n equal-length jobs with
given release times on m identical parallel machines. The objective is to
minimize the average flow time. Recently, Brucker and Kravchenko proved that
the optimal schedule can be computed in polynomial time by solving a linear
program with O(n^3) variables and constraints, followed by some substantial
post-processing (where n is the number of jobs). In this note we describe a
simple linear program with only O(mn) variables and constraints. Our linear
program produces the optimal schedule directly and requires no
post-processing.
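The linear program itself is not reproduced above. As a point of reference for the objective, on a single machine (m = 1) preemptive SRPT already minimizes total, and hence average, flow time; a sketch for equal-length jobs (interface hypothetical, not the note's LP):

```python
def srpt_flow_times(release_times, p):
    """Preemptive SRPT on a single machine for jobs of equal length p.
    Optimal for average flow time when m = 1; the note's LP handles
    general m. Returns the flow time C_j - r_j of each job j."""
    n = len(release_times)
    order = sorted(range(n), key=lambda j: release_times[j])
    remaining = {}           # job -> remaining processing time
    completion = [0.0] * n
    t = 0.0
    i = 0                    # index of next release in sorted order
    while i < n or remaining:
        if not remaining:    # machine idle: jump to the next release
            t = max(t, release_times[order[i]])
        while i < n and release_times[order[i]] <= t:
            remaining[order[i]] = p
            i += 1
        j = min(remaining, key=remaining.get)      # shortest remaining time
        next_release = release_times[order[i]] if i < n else float("inf")
        run = min(remaining[j], next_release - t)  # run until done or arrival
        t += run
        remaining[j] -= run
        if remaining[j] == 0:
            completion[j] = t
            del remaining[j]
    return [completion[j] - release_times[j] for j in range(n)]
```

For example, srpt_flow_times([0, 1], 2) yields flow times [2.0, 3.0], i.e. an average flow time of 2.5.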
An instructional model for the teaching of physics, based on a meaningful learning theory and class experiences
Practically all research concerning the teaching of physics points out that conventional instructional models fail to achieve their objectives. Many attempts have been made to change this situation, frequently with disappointing results. This work, which is the experimental stage of a broader research project, represents an effort to move to a model based on a cognitive learning theory, known as the Ausubel-Novak-Gowin theory, making use of the metacognitive tools that emerge from this theory. The results of this work indicate that the students respond positively to the goals of meaningful learning, showing substantial understanding of Newtonian mechanics. A significant reduction in the study time required to pass the course has also been reported.