Redividing the Cake
A heterogeneous resource, such as a land-estate, is already divided among
several agents in an unfair way. It should be re-divided among the agents in a
way that balances fairness with ownership rights. We present re-division
protocols that attain various trade-off points between fairness and ownership
rights, in various settings differing in the geometric constraints on the
allotments: (a) no geometric constraints; (b) connectivity --- the cake is a
one-dimensional interval and each piece must be a contiguous interval; (c)
rectangularity --- the cake is a two-dimensional rectangle or rectilinear
polygon and the pieces should be rectangles; (d) convexity --- the cake is a
two-dimensional convex polygon and the pieces should be convex.
Our re-division protocols have implications for another problem: the
price-of-fairness --- the loss of social welfare caused by fairness
requirements. Each protocol implies an upper bound on the price-of-fairness
with the respective geometric constraints.
Comment: Extended IJCAI 2018 version. Previous name: "How to Re-Divide a Cake Fairly".
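As a concrete illustration of the connected setting (b) above, the classical Dubins-Spanier moving-knife procedure (discretized here; not one of the paper's re-division protocols, which additionally account for ownership rights) gives each agent a contiguous interval worth at least 1/n of the whole cake to that agent. All names below are illustrative.

```python
# Discretized Dubins-Spanier moving knife: a classical proportional
# division of the interval [0, 1] into contiguous pieces. Each agent's
# valuation is given as a density function sampled on a uniform grid.
# This is a minimal sketch, NOT the paper's re-division protocols.

def proportional_division(densities, grid=10_000):
    """densities: list of callables on [0, 1]; returns one (left, right)
    interval per agent, each worth >= 1/n of the cake to its owner."""
    n = len(densities)
    dx = 1.0 / grid
    # Tabulate each agent's value per grid cell and total cake value.
    cells = [[d((i + 0.5) * dx) * dx for i in range(grid)] for d in densities]
    totals = [sum(c) for c in cells]
    remaining = list(range(n))
    pieces = [None] * n
    left = 0  # left boundary (in cells) of the unallocated cake
    while len(remaining) > 1:
        # Each remaining agent marks the leftmost x where [left, x]
        # is worth 1/n of their total value; the leftmost mark wins.
        marks = {}
        for a in remaining:
            acc, j = 0.0, left
            while j < grid and acc < totals[a] / n:
                acc += cells[a][j]
                j += 1
            marks[a] = j
        winner = min(remaining, key=lambda a: marks[a])
        pieces[winner] = (left * dx, marks[winner] * dx)
        left = marks[winner]
        remaining.remove(winner)
    pieces[remaining[0]] = (left * dx, 1.0)  # last agent takes the rest
    return pieces
```

With two agents holding uniform valuations, the procedure splits the cake near the midpoint, and each piece is a contiguous interval as required by the connectivity constraint.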
Query processing of spatial objects: Complexity versus Redundancy
The management of complex spatial objects in applications, such as geography and cartography,
imposes stringent new requirements on spatial database systems, in particular on efficient
query processing. As shown before, the performance of spatial query processing can be improved
by decomposing complex spatial objects into simple components. Up to now, only decomposition
techniques generating a linear number of very simple components, e.g. triangles or trapezoids, have
been considered. In this paper, we will investigate the natural trade-off between the complexity of
the components and the redundancy, i.e. the number of components, with respect to its effect on
efficient query processing. In particular, we present two new decomposition methods generating
a better balance between the complexity and the number of components than previously known
techniques. We compare these new decomposition methods to the traditional undecomposed representation
as well as to the well-known decomposition into convex polygons with respect to their
performance in spatial query processing. This comparison points out that for a wide range of query
selectivity the new decomposition techniques clearly outperform both the undecomposed representation
and the convex decomposition method. More important than the absolute performance
gain of up to an order of magnitude is the robust performance of our new
decomposition techniques over the whole range of query selectivity.
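To make the complexity-versus-redundancy trade-off concrete, here is a minimal sketch (not one of the paper's decomposition methods): a convex polygon is decomposed into the maximal number of simplest components, triangles, via a fan from one vertex, and a point query is then answered against the simple components. All function names are illustrative.

```python
# Fan triangulation of a convex polygon into simple components, and a
# point-in-object query evaluated per component. More components mean
# more redundancy but cheaper per-component tests -- the trade-off
# discussed in the abstract, shown at its "simplest components" extreme.

def fan_triangulate(poly):
    """poly: list of (x, y) vertices of a convex polygon in CCW order."""
    return [(poly[0], poly[i], poly[i + 1]) for i in range(1, len(poly) - 1)]

def point_in_triangle(p, tri):
    # Inside iff p is on the same side of all three directed edges.
    (ax, ay), (bx, by), (cx, cy) = tri
    px, py = p
    d1 = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
    d2 = (cx - bx) * (py - by) - (cy - by) * (px - bx)
    d3 = (ax - cx) * (py - cy) - (ay - cy) * (px - cx)
    neg = d1 < 0 or d2 < 0 or d3 < 0
    pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (neg and pos)

def point_in_decomposed(p, components):
    """Query against the decomposed representation: test each simple
    component until one contains the point."""
    return any(point_in_triangle(p, t) for t in components)
```

A unit square decomposes into two triangles; a query then runs at most two constant-cost triangle tests instead of a general polygon test, at the price of storing two components per object.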
On the gradient of the Green tensor in two-dimensional elastodynamic problems, and related integrals: Distributional approach and regularization, with application to nonuniformly moving sources
The two-dimensional elastodynamic Green tensor is the primary building block
of solutions of linear elasticity problems dealing with nonuniformly moving
rectilinear line sources, such as dislocations. Elastodynamic solutions for
these problems involve derivatives of this Green tensor, which stand as
hypersingular kernels. These objects, well defined as distributions, prove
cumbersome to handle in practice. This paper, restricted to isotropic media,
examines some of their representations in the framework of distribution theory.
A particularly convenient regularization of the Green tensor is introduced,
which amounts to considering line sources of finite width. Technically, it is
implemented by an analytic continuation of the Green tensor to complex times.
It is applied to the computation of regularized forms of certain integrals of
tensor character that involve the gradient of the Green tensor. These integrals
are fundamental to the computation of the elastodynamic fields in the problem
of nonuniformly moving dislocations. The resulting expressions cover subsonic,
transonic, and supersonic motion alike. We observe that for
faster-than-wave motion, one of the two branches of the Mach cone(s) displayed
by the Cartesian components of these tensor integrals is extinguished for some
particular orientations of the source velocity vector.
Comment: 25 pages, 6 figures
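The regularization strategy the abstract describes, analytic continuation to complex times, can be illustrated on a model hypersingular kernel. Shifting t to t - i*eps turns the distribution 1/t into a smooth finite-width function whose integrals converge to principal values as eps shrinks; the sketch below uses this toy kernel, not the elastodynamic Green tensor itself, and all names are illustrative.

```python
# Toy regularization by complex-time shift: Re[1/(t - i*eps)] equals
# t / (t^2 + eps^2), a smooth, finite-width replacement for the
# hypersingular kernel 1/t. Its action on test functions tends to the
# principal value as eps -> 0 (Sokhotski-Plemelj behaviour).

def regularized_inverse(t, eps):
    """Real part of 1/(t - i*eps); finite everywhere and odd in t."""
    return t / (t * t + eps * eps)

def pv_integral(f, eps, a=-1.0, b=1.0, n=20_001):
    """Trapezoidal quadrature of f(t) * Re[1/(t - i*eps)] over [a, b]."""
    h = (b - a) / (n - 1)
    total = 0.0
    for k in range(n):
        t = a + k * h
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * f(t) * regularized_inverse(t, eps) * h
    return total
```

For f(t) = t, the exact principal value of the integral of f(t)/t over [-1, 1] is 2, and the regularized quadrature approaches it for small eps.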
Tyre induced vibrations of the car-trailer system
The lateral stability of the car-trailer combination is analysed by means of a single track model. The equations
of motion are derived rigorously by means of the Appell-Gibbs equations for constant longitudinal velocity of the vehicle. The tyres are described with the help of the so-called delayed tyre model, which is based on a brush model with pure rolling contact. The lateral forces and aligning torques of the tyre/road interaction are calculated via the exact instantaneous lateral deformations in the contact patches. The linear stability analysis of the rectilinear motion is performed via the analytically determined characteristic function of the system. Stability charts are constructed with respect to the vehicle longitudinal velocity and the payload position on the trailer. Self-excited lateral vibrations are detected with different vibration modes at low and at high longitudinal speeds of the vehicle. The effects of the tyre parameters are also investigated.
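The generic mechanics behind such a stability chart can be sketched in a few lines: sample the longitudinal speed, form the state matrix of the linearized model at that speed, and mark the speeds where some eigenvalue acquires a positive real part. The 2x2 toy model below (damping that decays with speed) and all parameter names are purely illustrative, not the single-track car-trailer model of the paper.

```python
# Minimal linear stability scan over a speed parameter. Stability of the
# linearized system x' = A x requires every eigenvalue of A to have a
# negative real part; the first speed where that fails is the critical
# speed marked on a stability chart.

def eig2(a, b, c, d):
    """Real parts of the eigenvalues of [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        r = disc ** 0.5
        return [(tr + r) / 2.0, (tr - r) / 2.0]  # real eigenvalue pair
    return [tr / 2.0, tr / 2.0]                  # complex pair: real parts

def is_stable(v, k=4.0, c0=2.0, alpha=0.05):
    """Toy oscillator x'' + (c0 - alpha*v) x' + k x = 0 in state form."""
    damping = c0 - alpha * v  # illustrative: damping decays with speed
    return max(eig2(0.0, 1.0, -k, -damping)) < 0.0

def critical_speed(v_max=100.0, dv=0.1):
    """Scan speeds upward; return the first speed that is unstable."""
    v = 0.0
    while v <= v_max and is_stable(v):
        v += dv
    return v
```

With these toy parameters the damping vanishes at v = c0/alpha = 40, so the scan locates the loss of stability there; in the paper the same scan runs over the analytically determined characteristic function of the full car-trailer model.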
Facility Layout of Irregular-shaped Departments using a Nested Approach
The facility layout problem is a very difficult and widely studied optimization problem. As a result, many facility layout models and techniques have been developed. However, the literature does not fully consider or control irregular-shaped departments. In this paper, the nested facility layout problem is defined whereby irregular-shaped departments (i.e. L-shaped, O-shaped, or U-shaped) can be generated and controlled. This is a unique problem that can be used to efficiently lay out workstations, storage areas and other departments within departments, while arranging the departments with respect to an objective. The objective considered here is to minimize material handling cost. We present a formulation and solution technique for the nested facility layout problem. The formulation consists of a modification of Montreuil’s mixed-integer problem (MIP) to consider nesting departments. Finally, for illustrative purposes, several example problems are solved using the solution technique presented. The nested facility layout model can be used to either produce a more realistic and detailed block layout, or to group departments together (or nest departments within departments), thus enabling larger facility layout problems to be solved.
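The material-handling objective at the heart of such formulations can be sketched compactly: the cost of a layout is the sum, over department pairs, of material flow times rectilinear distance. The brute-force assignment below is a toy stand-in for the Montreuil-style MIP (which additionally models department areas, shapes, and nesting); all names and data are illustrative.

```python
# Material-handling cost of a layout and a brute-force search over
# assignments of departments to candidate slot centroids. A toy model:
# real nested-layout formulations solve a mixed-integer program instead.
from itertools import permutations

def handling_cost(assign, flow, slots):
    """Sum of flow[i][j] * rectilinear distance between the slots
    assigned to departments i and j."""
    cost = 0.0
    n = len(assign)
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = slots[assign[i]], slots[assign[j]]
            cost += flow[i][j] * (abs(xi - xj) + abs(yi - yj))
    return cost

def best_layout(flow, slots):
    """Exhaustively try every assignment of departments to slots."""
    n = len(flow)
    return min(permutations(range(len(slots)), n),
               key=lambda a: handling_cost(a, flow, slots))
```

For three departments on three collinear slots, a heavy flow between departments 0 and 1 forces them into adjacent slots, exactly the behaviour the minimized objective is meant to produce.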
Design Rules in VLSI Routing
One of the last major steps in the design of highly integrated circuits (VLSI design) is routing. The task of routing is to compute disjoint sets of wires connecting different parts of a chip in order to realize the desired electrical connectivity. Design rules define restrictions on the minimum distance and geometry of metal shapes. The intent of most design rules is to forbid patterns that cannot be manufactured well in the lithographic production process. This process has become extremely difficult with the current small feature sizes of 32 nm and below, which are still being manufactured using 193 nm wavelength technology. Because of this, the design rules of modern technologies have become very complex, and computing a routing with a sufficiently low number of design rule violations is a difficult task for automated routing tools. In this thesis we present in detail how design rules can be handled efficiently. We develop an appropriate design rule model which considerably reduces complexity while not being too restrictive. This involves mapping complex polygon-based rules to simpler rectangle-based rules and building equivalence classes of shapes with respect to their minimum distance requirements. Our model enables efficient checking of minimum distance rules, which has to be done dozens of times in each routing run. We also discuss efficient data structures that are necessary to achieve this. We implemented our design rule model within BonnRoute, the routing tool of the BonnTools, a software package for VLSI physical design developed at the Research Institute for Discrete Mathematics at the University of Bonn in cooperation with IBM. The result is a new module of BonnRoute, called BonnRouteRules, which computes this design rule model and embeds BonnRoute in the complex routing environment of current technologies. The BonnRouteRules module was a key part in enabling BonnRoute to route current 32 nm and 22 nm chips.
We describe the combined routing flow used by IBM in practice, in which BonnRoute solves the main routing task and an industrial standard router is used for postprocessing. We present detailed experimental results of this flow on real-world designs. The results show that this combined flow produces routings with almost no remaining design rule violations, which proves that our design rule model works well in practice. Furthermore, compared to the industrial standard router alone, the combination with BonnRoute provides several significant benefits: It has 24% less runtime, 5% less wiring length, and over 90% fewer detours, which shows that with this flow we have an excellent routing tool in practice.
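A rectangle-based minimum-distance check with spacing equivalence classes, of the kind the abstract says complex polygon-based rules are mapped onto, can be sketched as follows. The class names, spacing values, and gap metric are illustrative, not taken from any real rule deck or from BonnRouteRules.

```python
# Sketch of a rectangle-based minimum-distance rule: each shape carries a
# spacing class, and the required gap between two rectangles is looked up
# from the pair of classes. Values and the L-infinity gap metric are
# illustrative; production rule decks are far more involved.

SPACING = {  # required minimum gap in nm, keyed by sorted class pair
    ("narrow", "narrow"): 32,
    ("narrow", "wide"): 48,
    ("wide", "wide"): 64,
}

def required_spacing(cls_a, cls_b):
    """Look up the minimum gap (nm) for a pair of spacing classes."""
    return SPACING[tuple(sorted((cls_a, cls_b)))]

def gap(r1, r2):
    """Gap between axis-aligned rectangles (x1, y1, x2, y2); 0 on overlap."""
    dx = max(r1[0] - r2[2], r2[0] - r1[2], 0)
    dy = max(r1[1] - r2[3], r2[1] - r1[3], 0)
    return max(dx, dy)

def violates(r1, c1, r2, c2):
    """True if the two shapes sit closer than their class pair allows."""
    return gap(r1, r2) < required_spacing(c1, c2)
```

Grouping shapes into a handful of spacing classes means the checker compares class pairs instead of re-deriving a distance requirement per polygon pair, which is what makes checks cheap enough to run dozens of times per routing run.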