
    An Implicitization Challenge for Binary Factor Analysis

    We use tropical geometry to compute the multidegree and Newton polytope of the hypersurface of a statistical model with two hidden and four observed binary random variables, solving an open question stated by Drton, Sturmfels and Sullivant in "Lectures on Algebraic Statistics" (Problem 7.7). The model is obtained from the undirected graphical model of the complete bipartite graph $K_{2,4}$ by marginalizing two of the six binary random variables. We present algorithms for computing the Newton polytope of its defining equation by parallel walks along the polytope and its normal fan; in this way we compute the vertices of the polytope. Finally, we also compute and certify its facets by studying tangent cones of the polytope at vertices representing each symmetry class. The Newton polytope has 17,214,912 vertices in 44,938 symmetry classes and 70,646 facets in 246 symmetry classes.
    Comment: 25 pages, 5 figures, presented at MEGA 09 (Barcelona, Spain).
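    As a point of reference for what is being computed, here is a minimal sketch of a Newton polytope: the convex hull of a polynomial's exponent vectors. It uses a generic convex-hull routine on a made-up toy polynomial, not the tropical parallel-walk algorithm of the paper, and the library choices (sympy, scipy) are illustrative assumptions.

```python
# Minimal sketch: the Newton polytope of a polynomial is the convex hull of its
# exponent vectors. This uses a generic convex-hull routine (scipy), NOT the
# tropical-geometry walk algorithm of the paper; the toy polynomial is made up.
import numpy as np
from scipy.spatial import ConvexHull
import sympy as sp

x, y, z = sp.symbols("x y z")
f = x**3*y + x*y**2*z + y*z**3 + x + 1            # toy polynomial, not the model's equation

poly = sp.Poly(f, x, y, z)
exponents = np.array(poly.monoms(), dtype=float)  # exponent vector of each monomial

hull = ConvexHull(exponents, qhull_options="QJ")  # joggle to cope with degenerate inputs
print("monomials:", len(exponents))
print("Newton polytope vertex indices:", sorted(set(hull.vertices)))
```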

    On k-Convex Polygons

    We introduce a notion of $k$-convexity and explore polygons in the plane that have this property. Polygons which are $k$-convex can be triangulated with fast yet simple algorithms. However, recognizing them in general is a 3SUM-hard problem. We give a characterization of 2-convex polygons, a particularly interesting class, and show how to recognize them in $O(n \log n)$ time. A description of their shape is given as well, which leads to Erdős-Szekeres type results regarding subconfigurations of their vertex sets. Finally, we introduce the concept of generalized geometric permutations, and show that their number can be exponential in the number of 2-convex objects considered.
    Comment: 23 pages, 19 figures.
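    For orientation, a polygon is $k$-convex (roughly) when no straight line intersects it in more than $k$ connected pieces, so 1-convexity coincides with ordinary convexity. The sketch below tests only that trivial base case; it is not the paper's $O(n \log n)$ recognition algorithm for 2-convex polygons, and the example polygons are made up.

```python
# Minimal sketch: testing ordinary convexity (the k = 1 case of k-convexity).
# This is only a point of reference; it is NOT the paper's O(n log n)
# recognition algorithm for 2-convex polygons.
def is_convex(polygon):
    """polygon: list of (x, y) vertices in order (CW or CCW), no repeated last point."""
    n = len(polygon)
    if n < 4:
        return True  # triangles (and degenerate inputs) are always convex
    sign = 0
    for i in range(n):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % n]
        cx, cy = polygon[(i + 2) % n]
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # turn direction flips: a reflex vertex exists
    return True

print(is_convex([(0, 0), (4, 0), (4, 3), (0, 3)]))   # True: a rectangle
print(is_convex([(0, 0), (4, 0), (1, 1), (0, 3)]))   # False: reflex vertex at (1, 1)
```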

    Trajectory Range Visibility

    We study the problem of Trajectory Range Visibility: determining the sub-trajectories on which two moving entities become mutually visible. Specifically, we consider two moving entities with not necessarily equal velocities, each moving on a given piece-wise linear trajectory inside a simple polygon. Deciding whether the entities can see one another for given constant velocities, assuming the trajectories are single line segments, was solved by P. Eades et al. in 2020. We obtain stronger results and support queries on constant velocities for trajectories of non-constant complexity. Namely, given a constant query velocity for one moving entity, we specify all visible parts of the other entity's trajectory and all constant velocities at which the other entity becomes visible. For line-segment trajectories, we obtain $\mathcal{O}(n \log n)$ time to specify all pairs of mutually visible sub-trajectories, where $n$ is the number of vertices of the polygon. Moreover, our results for a restricted case of non-constant complexity trajectories yield $\mathcal{O}(n + m(\log m + \log n))$ time, in which $m$ is the overall number of vertices of both trajectories. For the unrestricted case, we provide $\mathcal{O}(n \log n + m(\log m + \log n))$ running time. We offer $\mathcal{O}(\log n)$ query time for line-segment trajectories and $\mathcal{O}(\log m + k)$ for non-constant complexity ones, where $k$ is the number of velocity ranges reported in the answer. Interestingly, our results require only $\mathcal{O}(n + m)$ space for non-constant complexity trajectories.
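    The basic primitive behind such visibility queries is deciding whether two points inside the polygon see each other. A naive $\mathcal{O}(n)$ version of that primitive is sketched below for context; the L-shaped polygon and probe points are made up, and the paper's data structures answer the actual trajectory queries far more efficiently.

```python
# Minimal sketch: the basic primitive behind trajectory visibility queries --
# a naive O(n) test of whether two points inside a simple polygon see each other.
# Degenerate cases (segment grazing a vertex or edge) are deliberately ignored;
# the paper's data structures answer such queries far faster than this loop.
def ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p, q, r, s):
    """True if open segments pq and rs properly cross."""
    d1, d2 = ccw(p, q, r), ccw(p, q, s)
    d3, d4 = ccw(r, s, p), ccw(r, s, q)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def mutually_visible(a, b, polygon):
    """a, b: points assumed to lie inside the simple polygon (list of vertices)."""
    n = len(polygon)
    for i in range(n):
        e1, e2 = polygon[i], polygon[(i + 1) % n]
        if segments_cross(a, b, e1, e2):
            return False
    return True

# L-shaped polygon: the second pair of probe points cannot see around the corner.
L_shape = [(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)]
print(mutually_visible((0.5, 0.5), (3.5, 0.5), L_shape))   # True
print(mutually_visible((0.5, 3.5), (3.5, 0.5), L_shape))   # False
```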

    Orthogonal Range Reporting and Rectangle Stabbing for Fat Rectangles

    In this paper we study two geometric data structure problems in the special case when the input objects or queries are fat rectangles. We show that in this case a significant improvement compared to the general case can be achieved. We describe data structures that answer two- and three-dimensional orthogonal range reporting queries when the query range is a \emph{fat} rectangle. Our two-dimensional data structure uses $O(n)$ words and supports queries in $O(\log\log U + k)$ time, where $n$ is the number of points in the data structure, $U$ is the size of the universe, and $k$ is the number of points in the query range. Our three-dimensional data structure needs $O(n\log^{\varepsilon} U)$ words of space and answers queries in $O(\log\log U + k)$ time. We also consider the rectangle stabbing problem on a set of three-dimensional fat rectangles. Our data structure uses $O(n)$ space and answers stabbing queries in $O(\log U \log\log U + k)$ time.
    Comment: extended version of a WADS'19 paper.
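    To fix the problem statement, here is a deliberately naive baseline for two-dimensional orthogonal range reporting (a sorted array plus binary search on $x$). It is nowhere near the $O(\log\log U + k)$ word-RAM structures described above, and the sample points are placeholders.

```python
# Minimal sketch: a naive baseline for 2-D orthogonal range reporting, just to fix
# the problem statement. It answers a query by binary search on x followed by a
# filter on y -- nowhere near the paper's O(log log U + k) word-RAM structure
# for fat query rectangles.
from bisect import bisect_left, bisect_right

class RangeReporter2D:
    def __init__(self, points):
        self.points = sorted(points)              # sort once by (x, y)
        self.xs = [p[0] for p in self.points]

    def query(self, x1, x2, y1, y2):
        """Report all stored points inside the axis-aligned rectangle [x1,x2] x [y1,y2]."""
        lo = bisect_left(self.xs, x1)
        hi = bisect_right(self.xs, x2)
        return [(x, y) for x, y in self.points[lo:hi] if y1 <= y <= y2]

rr = RangeReporter2D([(1, 5), (2, 2), (3, 8), (6, 1), (7, 7)])
print(rr.query(2, 6, 0, 6))    # [(2, 2), (6, 1)]
```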

    On the Number of Pseudo-Triangulations of Certain Point Sets

    We pose a monotonicity conjecture on the number of pseudo-triangulations of any planar point set, and check it on two prominent families of point sets, namely the so-called double circle and double chain. The latter has asymptotically $12^n n^{\Theta(1)}$ pointed pseudo-triangulations, which lies significantly above the maximum number of triangulations known so far for any planar point set.
    Comment: 31 pages, 11 figures, 4 tables. Few technical changes with respect to v1, except that some proofs and statements are slightly more precise and some expositions clearer. This version has been accepted in J. Combin. Th. A. The increase in the number of pages from v1 is mostly due to formatting the paper with "elsart.cls" for Elsevier.
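    For readers unfamiliar with the double chain, the sketch below generates a double-chain-like configuration: two nearly flat convex chains bending away from each other, chosen so that every point of one chain lies strictly above (or below) each line spanned by two points of the other chain. The specific coordinates and flattening factor are my own illustrative choice; published constructions differ in the details.

```python
# Minimal sketch: a double-chain-like point configuration, the family whose pointed
# pseudo-triangulations are counted above. Two nearly flat parabolic chains bend away
# from each other; with the flattening factor eps = 1/k^2, every point of one chain
# lies strictly above (resp. below) each line spanned by two points of the other chain.
def double_chain(k):
    """Return 2k points: k on an upper convex chain and k on a lower convex chain."""
    eps = 1.0 / (k * k)
    mid = (k - 1) / 2.0
    upper = [(float(i),  1.0 + eps * (i - mid) ** 2) for i in range(k)]
    lower = [(float(i), -1.0 - eps * (i - mid) ** 2) for i in range(k)]
    return upper + lower

for p in double_chain(5):
    print(p)
```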

    Using 3D Voronoi grids in radiative transfer simulations

    Probing the structure of complex astrophysical objects requires effective three-dimensional (3D) numerical simulation of the relevant radiative transfer (RT) processes. As with any numerical simulation code, the choice of an appropriate discretization is crucial. Adaptive grids with cuboidal cells such as octrees have proven very popular; however, several recently introduced hydrodynamical and RT codes are based on a Voronoi tessellation of the spatial domain. Such an unstructured grid poses new challenges in laying down the rays (straight paths) needed in RT codes. We show that it is straightforward to implement accurate and efficient RT on 3D Voronoi grids. We present a method for computing straight paths between two arbitrary points through a 3D Voronoi grid in the context of an RT code. We implement such a grid in our RT code SKIRT, using the open source library Voro++ to obtain the relevant properties of the Voronoi grid cells based solely on the generating points. We compare the results obtained through the Voronoi grid with those generated by an octree grid for two synthetic models, and we perform the well-known Pascucci RT benchmark using the Voronoi grid. The presented algorithm produces correct results for our test models. Shooting photon packages through the geometrically much more complex 3D Voronoi grid is only about three times slower than the equivalent process in an octree grid with the same number of cells, while in fact the total number of Voronoi grid cells may be lower for an equally good representation of the density field. We conclude that the benefits of using a Voronoi grid in RT simulation codes will often outweigh the somewhat slower performance.
    Comment: 9 pages, 7 figures, accepted by A&A.
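    The property that makes ray traversal on a Voronoi grid possible is that every point of space belongs to the cell of its nearest generating site. The sketch below locates the cells along a straight path by nearest-site lookup with a k-d tree; it merely samples the path, rather than walking cell-to-cell through shared faces as a production RT code (such as SKIRT with Voro++) would, and the random sites are placeholders.

```python
# Minimal sketch: the property that ray traversal through a Voronoi grid relies on --
# a point belongs to the cell of its nearest generating site. This locates cells
# with a k-d tree (scipy); it is an illustration, not the SKIRT/Voro++ implementation,
# and the sites below are random placeholders.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
sites = rng.random((1000, 3))            # generating points of the Voronoi tessellation
tree = cKDTree(sites)

def cells_along_ray(origin, direction, n_samples=200, length=1.0):
    """Sample a straight path and report the sequence of Voronoi cells it visits.
    (A production RT code walks cell-to-cell through shared faces instead of sampling.)"""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    ts = np.linspace(0.0, length, n_samples)
    pts = np.asarray(origin) + ts[:, None] * direction
    _, cell_ids = tree.query(pts)
    # collapse consecutive duplicates to get the visited-cell sequence
    return [cell_ids[0]] + [c for prev, c in zip(cell_ids, cell_ids[1:]) if c != prev]

print(cells_along_ray((0.1, 0.1, 0.1), (1.0, 0.8, 0.3))[:10])
```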

    The Importance of Binary Gravitational Microlensing Events Through High-Magnification Channel

    We estimate the detection efficiency of binary gravitational lensing events through the channel of high-magnification events. From this estimation, we find that binaries in the separation ranges of 0.1 < s < 10, 0.2 < s < 5, and 0.3 < s < 3 can be detected with ~100% efficiency for events with magnifications higher than A = 100, 50, and 10, respectively, where s represents the projected separation between the lens components normalized by the Einstein radius. We also find that the range of high efficiency covers nearly the whole mass-ratio range of stellar companions. Due to the high efficiency over wide ranges of parameter space, we point out that the majority of binary-lens events will be detected through the high-magnification channel in lensing surveys that focus on high-magnification events for efficient detections of microlensing planets. In addition to the high efficiency, the simplicity of the efficiency estimation makes the sample of these binaries useful in statistical studies of the distributions of binary companions as functions of mass ratio and separation. We also discuss other important aspects of these events.
    Comment: 5 pages, 1 figure, 1 table.
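    For context on the quoted magnification thresholds, the sketch below evaluates the standard single point-lens magnification $A(u) = (u^2+2)/(u\sqrt{u^2+4})$ and inverts it to find the impact parameter corresponding to $A = 10$, 50, and 100. The binary-lens magnification actually used in the paper requires solving the binary lens equation and is not reproduced here.

```python
# Minimal sketch: the standard point-lens magnification A(u), included only to give a
# feel for the A = 10, 50, 100 thresholds quoted above; u is the lens-source separation
# in Einstein radii. The binary-lens magnification requires solving the binary lens
# equation and is not reproduced here.
import math

def point_lens_magnification(u):
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def u_for_magnification(A, lo=1e-6, hi=10.0, iters=200):
    """Invert A(u) by bisection: the impact parameter giving peak magnification A."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if point_lens_magnification(mid) > A:
            lo = mid            # magnification too high: move farther from the lens
        else:
            hi = mid
    return 0.5 * (lo + hi)

for A in (10, 50, 100):
    print(f"A = {A:3d}  ->  u0 ~ {u_for_magnification(A):.4f}")
# A = 100 corresponds to u0 ~ 0.01, i.e. the source passes very close to the lens.
```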

    Algorithms for fat objects: decompositions and applications

    Computational geometry is the branch of theoretical computer science that deals with algorithms and data structures for geometric objects. The most basic geometric objects include points, lines, polygons, and polyhedra. Computational geometry has applications in many areas of computer science, including computer graphics, robotics, and geographic information systems. In many computational-geometry problems, the theoretical worst case is achieved by input that is in some way "unrealistic". This causes situations where the theoretical running time is not a good predictor of the running time in practice. In addition, algorithms must also be designed with the worst-case examples in mind, which causes them to be needlessly complicated. In recent years, realistic input models have been proposed in an attempt to deal with this problem. The usual form such solutions take is to limit some geometric property of the input to a constant. We examine a specific realistic input model in this thesis: the model where objects are restricted to be fat. Intuitively, objects that are more like a ball are more fat, and objects that are more like a long pole are less fat. We look at fat objects in the context of five different problems: two related to decompositions of input objects and three problems suggested by computer graphics.
    Decompositions of geometric objects are important because they are often used as a preliminary step in other algorithms, since many algorithms can only handle geometric objects that are convex and preferably of low complexity. The two main issues in developing decomposition algorithms are to keep the number of pieces produced by the decomposition small and to compute the decomposition quickly. The main question we address is the following: is it possible to obtain better decompositions for fat objects than for general objects, and/or is it possible to obtain decompositions quickly? These questions are also interesting because most research into fat objects has concerned objects that are convex.
    We begin by triangulating fat polygons. The problem of triangulating polygons, that is, partitioning them into triangles without adding any vertices, has been solved already, but the only linear-time algorithm is so complicated that it has never been implemented. We propose two algorithms for triangulating fat polygons in linear time that are much simpler. They make use of the observation that a small set of guards placed at points inside a (certain type of) fat polygon is sufficient to see the boundary of such a polygon. We then look at decompositions of fat polyhedra in three dimensions. We show that polyhedra can be decomposed into a linear number of convex pieces if certain fatness restrictions are met. We also show that if these restrictions are not met, a quadratic number of pieces may be needed. We also show that if we wish the output to be fat and convex, the restrictions must be much tighter.
    We then study three computational-geometry problems inspired by computer graphics. First, we study ray shooting amidst fat objects from two perspectives. This is the problem of preprocessing data into a data structure that can answer which object is first hit by a query ray in a given direction from a given point. We present a new data structure for answering vertical ray-shooting queries, that is, queries where the ray's direction is fixed, as well as a data structure for answering ray-shooting queries for rays with arbitrary direction. Both structures improve the best known results for these problems.
    Another problem that is studied in the field of computer graphics is the depth-order problem; we study it in the context of computational geometry. This is the problem of finding an ordering of the objects in the scene from "top" to "bottom", where one object is above the other if they share a point in the projection to the xy-plane and the first object has a higher z-value at that point. We give an algorithm for finding the depth order of a group of fat objects and an algorithm for verifying whether a depth order of a group of fat objects is correct. The latter algorithm is useful because the former can return an incorrect order if the objects do not have a depth order (this can happen if the above/below relationship has a cycle in it). The first algorithm improves on the results previously known for fat objects; the second is the first algorithm for verifying depth orders of fat objects.
    The final problem that we study is the hidden-surface removal problem. In this problem, we wish to find and report the visible portions of a scene from a given viewpoint; this is called the visibility map. The main difficulty in this problem is to find an algorithm whose running time depends in part on the complexity of the output. For example, if all but one of the objects in the input scene are hidden behind one large object, then our algorithm should have a faster running time than if all of the objects are visible and have borders that overlap. We give such an algorithm that improves on the running time of previous algorithms for fat objects. Furthermore, our algorithm is able to handle curved objects and situations where the objects do not have a depth order, two features missing from most other algorithms that perform hidden-surface removal.
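    The depth-order notion described above is easy to make concrete for axis-aligned boxes. The sketch below builds the above/below relation on boxes with disjoint interiors and topologically sorts it, returning no order when there is a cycle; it is a quadratic illustration of the concept, not the thesis's algorithm for fat objects, and the example boxes are made up.

```python
# Minimal sketch of the depth-order concept, for axis-aligned boxes only:
# box A is "above" box B if their xy-projections overlap and A has the higher z there.
# A depth order is then a topological order of this relation; a cycle means no depth
# order exists. This is an O(n^2) illustration, not the thesis's algorithm for fat objects.
from graphlib import TopologicalSorter, CycleError  # Python 3.9+

def xy_overlap(a, b):
    return (a["x"][0] < b["x"][1] and b["x"][0] < a["x"][1]
            and a["y"][0] < b["y"][1] and b["y"][0] < a["y"][1])

def depth_order(boxes):
    """boxes: dict name -> {'x': (lo,hi), 'y': (lo,hi), 'z': (lo,hi)} with disjoint interiors."""
    below = {name: set() for name in boxes}          # predecessors: boxes that must come first
    names = list(boxes)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if xy_overlap(boxes[a], boxes[b]):
                # the box whose z-interval is lower must be listed first
                lower, upper = (a, b) if boxes[a]["z"][1] <= boxes[b]["z"][0] else (b, a)
                below[upper].add(lower)
    try:
        return list(TopologicalSorter(below).static_order())   # bottom-to-top order
    except CycleError:
        return None                                            # no valid depth order exists

boxes = {
    "floor": {"x": (0, 10), "y": (0, 10), "z": (0, 1)},
    "table": {"x": (2, 6),  "y": (2, 6),  "z": (1, 3)},
    "lamp":  {"x": (3, 4),  "y": (3, 4),  "z": (3, 5)},
}
print(depth_order(boxes))   # ['floor', 'table', 'lamp']
```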

    Vertical ray shooting and computing depth orders of fat objects

    We present new results for three problems dealing with a set $\mathcal{P}$ of $n$ convex constant-complexity fat polyhedra in 3-space. (i) We describe a data structure for vertical ray shooting in $\mathcal{P}$ that has $O(\log^2 n)$ query time and uses $O(n \log^2 n)$ storage. (ii) We give an algorithm to compute in $O(n \log^3 n)$ time a depth order on $\mathcal{P}$ if it exists. (iii) We give an algorithm to verify in $O(n \log^3 n)$ time whether a given order on $\mathcal{P}$ is a valid depth order. All three results improve on previous results.
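    To make result (i) concrete, the sketch below answers a vertical ray-shooting query by brute force over axis-aligned boxes standing in for the convex fat polyhedra; the boxes are made up, and the loop is $O(n)$ per query rather than the $O(\log^2 n)$ query time of the data structure above.

```python
# Minimal sketch: what a vertical ray-shooting query asks, answered by brute force over
# axis-aligned boxes (stand-ins for the convex fat polyhedra of the paper). The paper's
# structure answers this in O(log^2 n) time with O(n log^2 n) storage; this loop is O(n).
def first_hit_from_above(x, y, boxes):
    """Shoot a ray straight down from z = +infinity at (x, y); return the first box hit."""
    best_name, best_top = None, float("-inf")
    for name, b in boxes.items():
        if b["x"][0] <= x <= b["x"][1] and b["y"][0] <= y <= b["y"][1]:
            if b["z"][1] > best_top:
                best_name, best_top = name, b["z"][1]
    return best_name

boxes = {
    "slab":  {"x": (0, 10), "y": (0, 10), "z": (0, 1)},
    "tower": {"x": (4, 6),  "y": (4, 6),  "z": (0, 8)},
}
print(first_hit_from_above(5, 5, boxes))   # 'tower'
print(first_hit_from_above(1, 1, boxes))   # 'slab'
```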