9 research outputs found

    Compact Floor-Planning via Orderly Spanning Trees

    Floor-planning is a fundamental step in VLSI chip design. Based upon the concept of orderly spanning trees, we present a simple O(n)-time algorithm to construct a floor-plan for any n-node plane triangulation. In comparison with previous floor-planning algorithms in the literature, our solution is not only simpler as an algorithm but also produces floor-plans that require fewer module types. An equally important aspect of the new algorithm is its ability to fit the floor-plan into a rectangle of size (n-1) × (2n+1)/3. Lower bounds on the worst-case area for floor-planning any plane triangulation are also provided in the paper.
    Comment: 13 pages, 5 figures. An early version of this work was presented at the 9th International Symposium on Graph Drawing (GD 2001), Vienna, Austria, September 2001. Accepted to Journal of Algorithms, 200
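
    As an illustrative aside (not part of the paper), the sketch below simply evaluates the quoted area bound for a few values of n; the exact rounding of the (2n+1)/3 factor is not stated in the abstract, so it is kept as an exact fraction.

        # Minimal sketch, assuming only the area bound quoted above: a floor-plan
        # for an n-node plane triangulation fits in a rectangle of size
        # (n-1) x (2n+1)/3.
        from fractions import Fraction

        def area_bound(n: int) -> tuple[int, Fraction]:
            """Return (height, width) of the bounding rectangle for n nodes."""
            height = n - 1
            width = Fraction(2 * n + 1, 3)   # rounding direction not stated above
            return height, width

        for n in (10, 100, 1000):
            h, w = area_bound(n)
            print(f"n={n}: {h} x {w} (area {h * w})")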

    A Machine Learning Approach to Artificial Floorplan Generation

    The process of designing a floorplan is highly iterative and requires extensive human labor. Currently, there are a number of computer programs that aid humans in floorplan design. These programs, however, are limited in that they cannot fully automate the creative process. Such automation would allow a professional to quickly generate many possible floorplan solutions, greatly expediting the process. However, automating this creative process is very difficult because of the many implicit and explicit rules a model must learn in order to create viable floorplans. In this paper, we propose a method of floorplan generation using two machine learning models: a sequential model that generates rooms within the floorplan, and a graph-based model that finds adjacencies between generated rooms. Each of these models can be altered so that it is capable of producing a floorplan independently; however, we find that the combination of these models outperforms each of its pieces, as well as a statistics-based approach.
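
    A schematic sketch of the two-stage pipeline described above, with hypothetical interfaces (RoomSequenceModel and AdjacencyGraphModel are illustrative stand-ins, not names from the paper):

        # Sketch only: a sequential model emits rooms; a graph-based model
        # predicts adjacencies between the generated rooms.
        from dataclasses import dataclass, field

        @dataclass
        class Room:
            label: str      # e.g. "kitchen"
            width: float
            height: float

        @dataclass
        class Floorplan:
            rooms: list[Room] = field(default_factory=list)
            adjacencies: set[tuple[int, int]] = field(default_factory=set)  # index pairs into rooms

        class RoomSequenceModel:
            """Stand-in for the sequential room generator."""
            def generate(self, n_rooms: int) -> list[Room]:
                return [Room(f"room_{i}", 3.0, 3.0) for i in range(n_rooms)]

        class AdjacencyGraphModel:
            """Stand-in for the graph-based adjacency predictor."""
            def predict(self, rooms: list[Room]) -> set[tuple[int, int]]:
                # Placeholder rule: chain consecutive rooms.
                return {(i, i + 1) for i in range(len(rooms) - 1)}

        def generate_floorplan(n_rooms: int) -> Floorplan:
            rooms = RoomSequenceModel().generate(n_rooms)
            return Floorplan(rooms, AdjacencyGraphModel().predict(rooms))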

    Improved Cardinality Bounds for Rectangle Packing Representations

    Axis-aligned rectangle packings can be characterized by the set of spatial relations that hold for pairs of rectangles (west, south, east, north). A representation of a packing consists of one satisfied spatial relation for each pair. We call a set of representations complete for n ∈ ℕ if it contains a representation of every packing of any n rectangles. Both in theory and in practice, the fastest known algorithms for a large class of rectangle packing problems enumerate a complete set R of representations. The running time of these algorithms is dominated by the (exponential) size of R. In this thesis, we improve the best known lower and upper bounds on the minimum cardinality of complete sets of representations. The new upper bound implies theoretically faster algorithms for many rectangle packing problems, for example in chip design, while the new lower bound imposes a limit on the running time that can be achieved by any algorithm following this approach. The proofs of both results are based on pattern-avoiding permutations. Finally, we empirically compute the minimum cardinality of complete sets of representations for small n. Our computations directly suggest two conjectures, connecting the well-known Baxter permutations with the set of permutations avoiding an apparently new pattern, which in turn seem to generate complete sets of representations of minimum cardinality.
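
    An illustrative sketch (not from the thesis) of the four pairwise spatial relations named above, for axis-aligned rectangles given by a lower-left corner and a size; a representation of a packing picks one satisfied relation per pair:

        # Sketch only: which of the relations west/south/east/north hold for a
        # relative to b in a given placement. Non-overlapping rectangles can
        # satisfy more than one relation, which is why complete sets of
        # representations can be far smaller than all possible assignments.
        from dataclasses import dataclass

        @dataclass
        class Rect:
            x: float  # lower-left corner
            y: float
            w: float
            h: float

        def satisfied_relations(a: Rect, b: Rect) -> set[str]:
            rels = set()
            if a.x + a.w <= b.x:
                rels.add("west")   # a lies entirely to the left of b
            if b.x + b.w <= a.x:
                rels.add("east")
            if a.y + a.h <= b.y:
                rels.add("south")  # a lies entirely below b
            if b.y + b.h <= a.y:
                rels.add("north")
            return rels

        print(satisfied_relations(Rect(0, 0, 1, 1), Rect(2, 2, 1, 1)))  # the set {'west', 'south'}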

    3D scene and object parsing from a single image

    The term 3D parsing refers to the process of segmenting and labeling 3D space into expressive categories of voxels, point clouds, or surfaces. Humans can effortlessly perceive a 3D scene, and the unseen part of an object, from a single image with a limited field of view. In the same sense, a robot designed to execute human-like actions should be able to infer the 3D visual world from a single snapshot of a 2D sensor such as a camera, or a 2.5D sensor such as a Kinect depth camera. In this thesis, we focus on 3D scene and object parsing from a single image, aiming to produce a 3D parse that can support applications like robotics and navigation. Our goal is to produce an expressive 3D parse: e.g., what is it, where is it, and how can humans move around and interact with it. Inferring such a 3D parse from a single image is not trivial. The main challenges are: the unknown separation of layout surfaces and objects; the high degree of occlusion and the diverse classes of objects in a cluttered scene; and how to represent 3D object geometry in a way that can be predicted from noisy or partial observations and can assist reasoning about contact, support, and extent. In this thesis, we put forward the hypothesis, and verify in experiments, that a data-driven approach can directly produce a complete 3D recovery from 2D partial observations. Moreover, we show that by imposing constraints of 3D patterns and priors on the learned model (e.g., layout surfaces are flat and orthogonal to adjacent surfaces, support height can reveal the full extent of an occluded object, complete 2D silhouettes can guide reconstructions beyond partial foreground occlusions, and a shape can be decomposed into a set of simple parts), we are able to obtain a more accurate reconstruction of the scene and a structural representation of the object. We present our approaches at different levels of detail, from a rough layout level to a more complex scene level and finally to the most detailed object level. We start by estimating the 3D room layout from a single RGB image, proposing an approach that generalizes across panoramas and perspective images, cuboid layouts, and more general layouts (e.g., “L”-shaped rooms). We then make use of an additional depth image and work at the scene level to recover the complete 3D scene, with layouts and all objects, jointly. At the object level, we propose to recover each 3D object with robustness to possible partial foreground occlusions. Finally, we represent each 3D object as a composite of sets of primitives, recurrently parsing each shape into primitives given a single depth view. We demonstrate the efficacy of each proposed approach with extensive experiments, both quantitative and qualitative, on public datasets.
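
    A minimal sketch (illustrative only; the types and fields are assumptions, not the thesis's representation) of describing a 3D object as a composite of simple primitives, here axis-aligned cuboid parts:

        # Sketch only: a shape decomposed into a set of simple parts.
        from dataclasses import dataclass

        @dataclass
        class Cuboid:
            center: tuple[float, float, float]
            size: tuple[float, float, float]   # extent along x, y, z

            def volume(self) -> float:
                sx, sy, sz = self.size
                return sx * sy * sz

        @dataclass
        class PrimitiveObject:
            label: str
            parts: list[Cuboid]

            def approximate_volume(self) -> float:
                # Ignores overlap between parts; a coarse structural summary only.
                return sum(p.volume() for p in self.parts)

        chair = PrimitiveObject("chair", [
            Cuboid(center=(0.0, 0.0, 0.45), size=(0.4, 0.4, 0.05)),   # seat
            Cuboid(center=(0.0, -0.2, 0.75), size=(0.4, 0.05, 0.6)),  # back
        ])
        print(chair.approximate_volume())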

    Stevensen 3rd East, L.C. v. Russell K. Watts : Brief of Appellant

    Addendum to Brief of Appellant

    Spatiotemporal occupancy in building settings

    This thesis presents an investigation of methods to capture and analyze high-resolution spatiotemporal occupancy patterns, demonstrating their value by measuring behavioral outcomes over time. Obtaining fine-grain occupancy patterns is particularly useful because it gives researchers the ability to study such patterns not just with respect to the geometry of the space in which they occur, but also to study how they change dynamically in time, in response to the behavior itself. This research has three parts. The first is a review of the traditional methods of behavioral mapping used in architecture research, as well as of existing indoor positioning systems, offering an assessment of their comparative potential and a selection for the current scenario. The second is an implementation of scene analysis using computer vision to capture occupancy patterns in one week of surveillance video over twelve corridors in a hospital in Chile. The data outcome is occupancy in a set of hospital corridors at a resolution of one square foot per second. Because of practical detection errors, a two-part statistical model was developed to estimate recognition accuracy and location precision under given scenario conditions. These error-rate models can then be used to estimate occupancy patterns in an actual scenario. The third is a proof-of-concept study of the usefulness of a new spatiotemporal metric called the Isovist-minute, which describes the actual occupancy of an isovist over a specified period of time. Occupancy data obtained using scene analysis, adjusted with the error-rate models of the previous study, are used to compute Isovist-minute values per square foot. The Isovist-minute is shown to capture significant differences in the patient surveillance outcome in the same spatial layout but under a different organizational schedule and program.
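
    A minimal sketch, assuming one plausible reading of the Isovist-minute (person-minutes of occupancy accumulated within an isovist, i.e. the set of grid cells visible from a vantage point, over a time window); the exact definition used in the thesis may differ:

        # Sketch only: occupancy frames give per-second person counts for
        # one-square-foot cells; the Isovist-minute sums them over the cells
        # of an isovist and converts seconds to minutes.
        from typing import Iterable

        Cell = tuple[int, int]  # (row, col) of a one-square-foot grid cell

        def isovist_minutes(frames: Iterable[dict[Cell, int]], isovist: set[Cell]) -> float:
            person_seconds = 0
            for frame in frames:          # one frame per second
                person_seconds += sum(n for cell, n in frame.items() if cell in isovist)
            return person_seconds / 60.0

        # Two people in view for 90 seconds -> 3.0 Isovist-minutes.
        frames = [{(0, 0): 1, (0, 1): 1} for _ in range(90)]
        print(isovist_minutes(frames, isovist={(0, 0), (0, 1), (0, 2)}))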

    Neutrinos

    229 pages. The Proceedings of the 2011 workshop on Fundamental Physics at the Intensity Frontier. Science opportunities at the intensity frontier are identified and described in the areas of heavy quarks, charged leptons, neutrinos, proton decay, new light weakly-coupled particles, and nucleons, nuclei, and atoms.