
    Minimizing waste in the 2-dimensional cutting stock problem

    The 2-dimensional cutting stock problem is an important problem in the garment manufacturing industry. The problem is to arrange a given set of 2-dimensional patterns onto a rectangular bolt of cloth such that the efficiency is maximised. This arrangement is called a marker. Efficiency is measured as pattern area divided by marker area. Efficiency varies depending on the shape and number of patterns being cut, but an improvement in efficiency can result in significant savings. Markers are usually created by humans with the aid of CAD software. Many researchers have attempted to create automatic marker-making software but have failed to produce marker efficiencies as high as human-generated ones. This thesis presents a mathematical model which optimally solves the 2-dimensional cutting stock problem. However, the model can only be solved in a practical amount of time for small markers. Subsequently, two compaction algorithms based on mathematical modelling have been developed to improve the efficiency of human-generated markers. The models developed in this thesis make use of a geometrical calculation known as the no-fit polygon. The no-fit polygon is a tool for determining whether polygons A and B overlap. It also gives all feasible positions for polygon B with respect to polygon A, such that the two polygons do not overlap. For the case when both polygons A and B are non-convex, current calculation methods are either time consuming or unreliable. This thesis presents a method which is both computationally efficient and robust for calculating the no-fit polygon when polygons A and B are non-convex. When tested on a set of industrial markers, the compaction algorithms improved the marker efficiencies by over 1.5% on average.
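
    A minimal sketch of the no-fit polygon idea, restricted to the simplest case: both polygons convex and given as counter-clockwise (x, y) vertex lists. Under that assumption the NFP is the Minkowski sum of polygon A and the point reflection of polygon B, obtained by merging the two polygons' edge vectors in angular order; the thesis's contribution concerns the much harder non-convex case, and the function name below is illustrative.

```python
import math

def convex_nfp(A, B):
    """No-fit polygon of two CONVEX polygons (counter-clockwise vertex lists).

    Equals the Minkowski sum of A and the point reflection of B: merge the
    edge vectors of both polygons by angle and walk them from the sum of the
    two lowest (then leftmost) vertices.
    """
    def edge_vectors(poly):
        n = len(poly)
        return [(poly[(i + 1) % n][0] - poly[i][0],
                 poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]

    neg_B = [(-x, -y) for (x, y) in B]              # reflect B through the origin
    edges = edge_vectors(A) + edge_vectors(neg_B)
    edges.sort(key=lambda e: math.atan2(e[1], e[0]) % (2 * math.pi))

    lowest = lambda poly: min(poly, key=lambda p: (p[1], p[0]))
    x, y = lowest(A)[0] + lowest(neg_B)[0], lowest(A)[1] + lowest(neg_B)[1]
    nfp = [(x, y)]
    for dx, dy in edges[:-1]:                       # the last edge closes the loop
        x, y = x + dx, y + dy
        nfp.append((x, y))
    return nfp
```

    Placing the reference point of B anywhere outside this polygon gives a placement that does not overlap A, which is the feasibility property the compaction models above rely on.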

    Visualizing Set Relations and Cardinalities Using Venn and Euler Diagrams

    In medicine, genetics, criminology and various other areas, Venn and Euler diagrams are used to visualize data set relations and their cardinalities. The data sets are represented by closed curves and the data set relationships are depicted by the overlaps between these curves. Both the sets and their intersections are easily visible as the closed curves are preattentively processed and form common regions that have a strong perceptual grouping effect. Besides set relations such as intersection, containment and disjointness, the cardinality of the sets and their intersections can also be depicted in the same diagram (referred to as area-proportional) through the size of the curves and their overlaps. Size is a preattentive feature and so similarities, differences and trends are easily identified. Thus, such diagrams facilitate data analysis and reasoning about the sets. However, drawing these diagrams manually is difficult, often impossible, and current automatic drawing methods do not always produce appropriate diagrams. This dissertation presents novel automatic drawing methods for different types of Euler diagrams and a user study of how such diagrams can help probabilistic judgement. The main drawing algorithms are: eulerForce, which uses a force-directed approach to lay out Euler diagrams; eulerAPE, which draws area-proportional Venn diagrams with ellipses. The user study evaluated the effectiveness of area-proportional Euler diagrams, glyph representations, Euler diagrams with glyphs and text+visualization formats for Bayesian reasoning, and a method eulerGlyphs was devised to automatically and accurately draw the assessed visualizations for any Bayesian problem. Additionally, analytic algorithms that instantaneously compute the overlapping areas of three general intersecting ellipses are provided, together with an evaluation of the effectiveness of ellipses in drawing accurate area-proportional Venn diagrams for 3-set data and the characteristics of the data that can be depicted accurately with ellipses.
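
    As a simple illustration of the area-proportional idea, the sketch below lays out a two-set diagram with circles: each circle's area matches its set cardinality, and the centre distance is found by bisection so that the lens-shaped overlap matches the intersection cardinality. This is only the elementary two-circle case, not the dissertation's ellipse-based eulerAPE method for three sets, and the function names are illustrative.

```python
import math

def lens_area(r1, r2, d):
    """Area of intersection of two circles with radii r1, r2 and centre distance d."""
    if d >= r1 + r2:
        return 0.0                                  # disjoint circles
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2           # one circle inside the other
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def two_set_layout(card_a, card_b, card_ab):
    """Radii and centre distance for an area-proportional 2-set diagram.

    Requires 0 <= card_ab <= min(card_a, card_b). Circle areas equal |A| and
    |B|; the distance is bisected until the overlap equals |A intersect B|.
    """
    r1, r2 = math.sqrt(card_a / math.pi), math.sqrt(card_b / math.pi)
    lo, hi = abs(r1 - r2), r1 + r2
    for _ in range(60):
        mid = (lo + hi) / 2
        if lens_area(r1, r2, mid) > card_ab:
            lo = mid                                # overlap too large: move apart
        else:
            hi = mid
    return r1, r2, (lo + hi) / 2
```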

    The automatic definition and generation of axial lines and axial maps


    Heuristics for Multidimensional Packing Problems


    Point seeking: a family of dynamic path finding algorithms

    In the field of Artificial Intelligence, calculating the best route from one point to another, known as “path finding,” has become a common problem. If an agent cannot effectively navigate through an environment – be it real or virtual – it will often not be able to perform even the most routine tasks. For example, a Martian rover can't collect samples if it can't get to them; meanwhile, a computer game is not much of a challenge if your opponents can't find their way around. The problem of path finding has three basic aspects: map representation, path generation, and locomotion. First, the environment must be interpreted into a form which can be processed algorithmically. Afterward, a path through this environment is planned out. A list of movement instructions or locations to travel to is then produced in order to guide the agent. During both the planning and movement of the agent, an algorithm may consider the agent's limitations with regards to changes in velocity and orientation. Together, these steps serve to move an agent from its initial position to the desired location.
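
    For the "path generation" step on a grid map representation, the classic baseline planner is A*. The sketch below is a minimal A* on a 4-connected grid; it is the standard algorithm the field builds on rather than the thesis's point-seeking family, and the names are illustrative.

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid: grid[r][c] == 0 is free, 1 is blocked.

    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), start)]                            # (f-score, cell)
    came_from = {start: None}
    g_best = {start: 0}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:                                      # walk parents back
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_best[node] + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None
```

    The returned list of cells is exactly the kind of list of locations to travel to mentioned above; the locomotion step would then consume it as a sequence of waypoints.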

    An Evolutionary Algorithm for solving the Two-Dimensional Irregular Shape Packing Problem combined with the Knapsack Problem

    This work presents an evolutionary algorithm to solve a joint problem combining the Packing Problem and the Knapsack Problem, where the objective is to place items (with shape, value and weight) in a container (defined by its shape and capacity), maximizing the container's value without intersections.
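
    A minimal sketch of how one candidate solution might be scored in such an evolutionary algorithm, purely for illustration: item shapes are approximated by axis-aligned rectangles to keep the example short (the paper handles irregular shapes), and solutions that overflow the container, exceed the weight capacity or contain intersections are rejected. All names are assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Item:
    w: float        # width of the rectangle standing in for the item's shape
    h: float        # height
    value: float
    weight: float

def fitness(placements, container_w, container_h, capacity):
    """Score a candidate: placements is a list of (item, (x, y)) pairs for the
    items this individual chooses to pack; unplaced items are simply omitted."""
    if sum(item.weight for item, _ in placements) > capacity:
        return float("-inf")                        # over the weight capacity
    boxes = []
    for item, (x, y) in placements:
        if x < 0 or y < 0 or x + item.w > container_w or y + item.h > container_h:
            return float("-inf")                    # sticks out of the container
        for bx, by, bw, bh in boxes:
            if x < bx + bw and bx < x + item.w and y < by + bh and by < y + item.h:
                return float("-inf")                # overlaps an earlier item
        boxes.append((x, y, item.w, item.h))
    return sum(item.value for item, _ in placements)   # value to be maximized
```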

    Reformulating Space Syntax: The Automatic Definition and Generation of Axial Lines and Axial Maps

    Space syntax is a technique for measuring the relative accessibility of different locations in a spatial system which has been loosely partitioned into convex spaces. These spaces are approximated by straight lines, called axial lines, and the topological graph associated with their intersection is used to generate indices of distance, called integration, which are then used as proxies for accessibility. The most controversial problem in applying the technique involves the definition of these lines. There is no unique method for their generation, hence different users generate different sets of lines for the same application. In this paper, we explore this problem, arguing that to make progress, there need to be unambiguous, agreed procedures for generating such maps. The methods we suggest for generating such lines depend on defining viewsheds, called isovists, which can be approximated by their maximum diameters, these lengths being used to form axial maps similar to those used in space syntax. We propose a generic algorithm for sorting isovists according to various measures, approximating them by their diameters and using the axial map as a summary of the extent to which isovists overlap (intersect) and are accessible to one another. We examine the fields created by these viewsheds and the statistical properties of the maps created. We demonstrate our techniques for the small French town of Gassin used originally by Hillier and Hanson (1984) to illustrate the theory, exploring different criteria for sorting isovists, and different axial maps generated by changing the scale of resolution. This paper throws up as many problems as it solves but we believe it points the way to firmer foundations for space syntax.
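
    A rough, heavily simplified sketch of the procedure described above: each isovist (given as a list of boundary vertices) is approximated by its diameter, the resulting lines are ranked by length as one possible sorting measure, and an axial-map adjacency is built by linking lines that intersect. The names are illustrative, and the sketch ignores the other ranking measures and the scale-of-resolution issues discussed in the paper.

```python
from itertools import combinations
import math

def diameter(isovist):
    """Longest chord of an isovist, approximated by its farthest vertex pair;
    this line segment stands in for the axial line of the viewshed."""
    return max(combinations(isovist, 2), key=lambda pq: math.dist(pq[0], pq[1]))

def segments_intersect(p1, p2, q1, q2):
    """True if segments p1p2 and q1q2 properly cross (orientation test;
    collinear overlaps are ignored for brevity)."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
    d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def axial_map(isovists):
    """Rank isovist diameters by length and link the ones that intersect."""
    lines = sorted((diameter(iv) for iv in isovists),
                   key=lambda pq: -math.dist(pq[0], pq[1]))
    graph = {i: set() for i in range(len(lines))}
    for i, j in combinations(range(len(lines)), 2):
        if segments_intersect(*lines[i], *lines[j]):
            graph[i].add(j)
            graph[j].add(i)
    return lines, graph
```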

    Voxel-Based Solution Approaches to the Three-Dimensional Irregular Packing Problem

    Research on the three-dimensional (3D) packing problem has largely focused on packing boxes for the transportation of goods. As a result, there has been little focus on packing irregular shapes in the operational research literature. New technologies have raised the practical importance of 3D irregular packing problems and the need for efficient solutions. In this work, we address the variant of the problem where the aim is to place a set of 3D irregular items in a container, while minimizing the container height, analogous to the strip packing problem. In order to solve this problem, we need to address two critical components: efficient computation of the geometry and finding high-quality solutions. In this work, we explore the potential of voxels, the 3D equivalent of pixels, as the geometric representation of the irregular items. In this discretised space, we develop a geometric tool that extends the concept of the no-fit polygon to the 3D case. This enables us to provide an integer linear programming formulation for this problem that can solve some small instances. For practical size problems, we design metaheuristic optimisation approaches. Because the literature is limited, we introduce new benchmark instances. Some are randomly generated and some represent realistic models from the additive manufacturing area. Our results on the literature benchmark data and on our new instances show that our metaheuristic techniques achieve the best known solutions for a wide variety of problems in practical computation times.
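
    The voxel representation makes collision tests very direct. The sketch below, with illustrative names, shows a brute-force 3D analogue of the no-fit idea in voxel space: items are sets of occupied integer voxels, and the set of relative offsets at which two items would collide can be precomputed so that later feasibility checks become constant-time set lookups. The thesis's geometric tool and formulations are considerably more refined than this.

```python
def no_fit_voxels(item_a, item_b):
    """All integer translations of item_b that make it overlap item_a.

    Both items are sets of occupied (x, y, z) voxels with their reference
    points at the origin; an offset t causes a collision exactly when
    t = a - b for some voxel a of item_a and voxel b of item_b.
    """
    return {(ax - bx, ay - by, az - bz)
            for (ax, ay, az) in item_a
            for (bx, by, bz) in item_b}

def overlaps(item_a, item_b, offset, nfv=None):
    """Collision test for item_b translated by offset, optionally using a
    precomputed no-fit voxel set for an O(1) lookup."""
    if nfv is not None:
        return offset in nfv
    ox, oy, oz = offset
    return any((x + ox, y + oy, z + oz) in item_a for (x, y, z) in item_b)
```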

    Algorithms for fat objects : decompositions and applications

    Computational geometry is the branch of theoretical computer science that deals with algorithms and data structures for geometric objects. The most basic geometric objects include points, lines, polygons, and polyhedra. Computational geometry has applications in many areas of computer science, including computer graphics, robotics, and geographic information systems. In many computational-geometry problems, the theoretical worst case is achieved by input that is in some way "unrealistic". This causes situations where the theoretical running time is not a good predictor of the running time in practice. In addition, algorithms must also be designed with the worst-case examples in mind, which causes them to be needlessly complicated. In recent years, realistic input models have been proposed in an attempt to deal with this problem. The usual form such solutions take is to limit some geometric property of the input to a constant. We examine a specific realistic input model in this thesis: the model where objects are restricted to be fat. Intuitively, objects that are more like a ball are more fat, and objects that are more like a long pole are less fat. We look at fat objects in the context of five different problems—two related to decompositions of input objects and three problems suggested by computer graphics. Decompositions of geometric objects are important because they are often used as a preliminary step in other algorithms, since many algorithms can only handle geometric objects that are convex and preferably of low complexity. The two main issues in developing decomposition algorithms are to keep the number of pieces produced by the decomposition small and to compute the decomposition quickly. The main question we address is the following: is it possible to obtain better decompositions for fat objects than for general objects, and/or is it possible to obtain decompositions quickly? These questions are also interesting because most research into fat objects has concerned objects that are convex. We begin by triangulating fat polygons. The problem of triangulating polygons—that is, partitioning them into triangles without adding any vertices—has been solved already, but the only linear-time algorithm is so complicated that it has never been implemented. We propose two algorithms for triangulating fat polygons in linear time that are much simpler. They make use of the observation that a small set of guards placed at points inside a (certain type of) fat polygon is sufficient to see the boundary of such a polygon. We then look at decompositions of fat polyhedra in three dimensions. We show that polyhedra can be decomposed into a linear number of convex pieces if certain fatness restrictions are met. We also show that if these restrictions are not met, a quadratic number of pieces may be needed. We also show that if we wish the output to be fat and convex, the restrictions must be much tighter. We then study three computational-geometry problems inspired by computer graphics. First, we study ray-shooting amidst fat objects from two perspectives. This is the problem of preprocessing data into a data structure that can answer which object is first hit by a query ray in a given direction from a given point. We present a new data structure for answering vertical ray-shooting queries—that is, queries where the ray’s direction is fixed—as well as a data structure for answering ray-shooting queries for rays with arbitrary direction. Both structures improve the best known results on these problems. Another problem that is studied in the field of computer graphics is the depth-order problem. We study it in the context of computational geometry. This is the problem of finding an ordering of the objects in the scene from "top" to "bottom", where one object is above the other if they share a point in the projection to the xy-plane and the first object has a higher z-value at that point. We give an algorithm for finding the depth order of a group of fat objects and an algorithm for verifying if a depth order of a group of fat objects is correct. The latter algorithm is useful because the former can return an incorrect order if the objects do not have a depth order (this can happen if the above/below relationship has a cycle in it). The first algorithm improves on the results previously known for fat objects; the second is the first algorithm for verifying depth orders of fat objects. The final problem that we study is the hidden-surface removal problem. In this problem, we wish to find and report the visible portions of a scene from a given viewpoint—this is called the visibility map. The main difficulty in this problem is to find an algorithm whose running time depends in part on the complexity of the output. For example, if all but one of the objects in the input scene are hidden behind one large object, then our algorithm should have a faster running time than if all of the objects are visible and have borders that overlap. We give such an algorithm that improves on the running time of previous algorithms for fat objects. Furthermore, our algorithm is able to handle curved objects and situations where the objects do not have a depth order—two features missing from most other algorithms that perform hidden surface removal.
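
    The intuition that ball-like objects are fat and pole-like objects are not can be given a quick numeric proxy. The sketch below scores a 2D polygon by its area relative to a disc whose diameter equals the polygon's diameter (its farthest vertex pair): a disc scores roughly 1, a long sliver close to 0. This is only an intuitive stand-in, not the formal fatness definition analysed in the thesis, and the names are illustrative.

```python
from itertools import combinations
import math

def polygon_area(poly):
    """Shoelace area of a simple polygon given as a list of (x, y) vertices."""
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])))

def fatness_proxy(poly):
    """Area of the polygon divided by the area of a disc whose diameter is the
    polygon's diameter; values near 1 are 'fat', values near 0 are 'thin'."""
    diam = max(math.dist(p, q) for p, q in combinations(poly, 2))
    return polygon_area(poly) / (math.pi * (diam / 2) ** 2)
```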
