    Automated sequence and motion planning for robotic spatial extrusion of 3D trusses

    While robotic spatial extrusion has demonstrated a new and efficient means to fabricate 3D truss structures at architectural scale, a major challenge remains in automatically planning the extrusion sequence and robotic motion for trusses with unconstrained topologies. This paper presents the first attempt in the field to rigorously formulate the extrusion sequence and motion planning (SAMP) problem using a constraint satisfaction problem (CSP) encoding. Furthermore, this research proposes a new hierarchical planning framework to solve extrusion SAMP problems, which usually have a long planning horizon and 3D configuration complexity. By decoupling sequence and motion planning, the framework efficiently solves for the extrusion sequence, end-effector poses, joint configurations, and transition trajectories for spatial trusses with nonstandard topologies. This paper also presents the first detailed computational data revealing the runtime bottleneck in solving SAMP problems, which provides insight and a comparison baseline for future algorithmic development. Together with the algorithmic results, this paper presents an open-source, modularized software implementation called Choreo that is machine-agnostic. To demonstrate the power of this algorithmic framework, three case studies, including real fabrication and simulation results, are presented. Comment: 24 pages, 16 figures.
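
    The decoupled sequence layer can be pictured as a constraint-driven search over element orderings. The following minimal Python sketch illustrates that idea under a single simplified connectivity constraint (each bar must touch the build platform or the already-built structure). It is a hypothetical illustration, not Choreo's implementation; the paper's full CSP formulation also covers stiffness, collision and robot reachability, none of which are reproduced here.

```python
# Minimal sketch: extrusion sequence planning as backtracking search under a
# connectivity constraint. Hypothetical simplification of the SAMP sequence
# layer; stiffness, collision and reachability constraints are omitted.

def plan_sequence(elements, grounded_nodes):
    """Order truss elements so each one touches the already-built structure.

    elements       -- list of (node_a, node_b) tuples describing truss bars
    grounded_nodes -- set of node ids resting on the build platform
    Returns a feasible extrusion order, or None if none exists.
    """
    built_nodes = set(grounded_nodes)
    order = []
    remaining = list(elements)

    def backtrack():
        if not remaining:
            return True
        # Try every element whose placement keeps the structure connected.
        for elem in list(remaining):
            a, b = elem
            if a in built_nodes or b in built_nodes:
                remaining.remove(elem)
                order.append(elem)
                added = {n for n in (a, b) if n not in built_nodes}
                built_nodes.update(added)
                if backtrack():
                    return True
                # Undo this choice and try the next candidate.
                built_nodes.difference_update(added)
                order.pop()
                remaining.append(elem)
        return False

    return order if backtrack() else None


if __name__ == "__main__":
    # A tiny two-bay truss: nodes 0 and 1 rest on the ground.
    bars = [(0, 2), (1, 2), (0, 1), (2, 3), (1, 3)]
    print(plan_sequence(bars, grounded_nodes={0, 1}))
```

    Replacing the greedy candidate loop with a full CSP solver, or adding stiffness and collision checks at each step, would move this sketch closer to the formulation described above.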

    Upright posture and the meaning of meronymy: A synthesis of metaphoric and analytic accounts

    Cross-linguistic strategies for mapping lexical and spatial relations from body partonym systems to external object meronymies (as in English ‘table leg’, ‘mountain face’) have attracted substantial research and debate over the past three decades. Due to the systematic mappings, lexical productivity and geometric complexities of body-based meronymies found in many Mesoamerican languages, the region has become a focal point for these discussions, prominently including contrastive accounts of the phenomenon in Zapotec and Tzeltal, leading researchers to question whether such systems should be explained as global metaphorical mappings from bodily source to target holonym or as vector mappings of shape and axis generated “algorithmically”. I propose a synthesis of these accounts in this paper by drawing on the species-specific cognitive affordances of human upright posture grounded in the reorganization of the anatomical planes, with a special emphasis on antisymmetrical relations that emerge between arm-leg and face-groin antinomies cross-culturally. Whereas Levinson argues that the internal geometry of objects “stripped of their bodily associations” (1994: 821) is sufficient to account for Tzeltal meronymy, making metaphorical explanations entirely unnecessary, I propose a more powerful, elegant explanation of Tzeltal meronymic mapping that affirms both the geometric-analytic and the global-metaphorical nature of Tzeltal meaning construal. I do this by demonstrating that the “algorithm” in question arises from the phenomenology of movement and correlative body memories, an experiential ground which generates a culturally selected pair of inverse contrastive paradigm sets with marked and unmarked membership emerging antithetically relative to the transverse anatomical plane. These relations are then selected diagrammatically for the classification of object orientations according to systematic geometric iconicities. The results not only serve to clarify the case in question but also point to the relatively untapped potential that upright posture holds for theorizing the emergence of human cognition, highlighting in the process the nature, origins and theoretical validity of markedness and double-scope conceptual integration.

    Sketching space

    In this paper, we present a sketch modelling system which we call Stilton. The program resembles a desktop VRML browser, allowing a user to navigate a three-dimensional model in a perspective projection, or panoramic photographs, which the program maps onto the scene as a `floor' and `walls'. We place an imaginary two-dimensional drawing plane in front of the user, and any geometric information that the user sketches onto this plane may be reconstructed to form solid objects through an optimization process. We show how the system can be used to reconstruct geometry from panoramic images, or to add new objects to an existing model. While panoramic images can greatly assist with some aspects of site familiarization and qualitative assessment of a site, without the addition of some foreground geometry they offer only limited utility in a design context. Therefore, we suggest that the system may be of use in `just-in-time' CAD recovery of complex environments, such as shop floors or construction sites, by recovering objects through sketched overlays where other methods, such as automatic line retrieval, may be impossible. The result of using the system in this manner is the `sketching of space', sketching out a volume around the user, and once the geometry has been recovered, the designer is free to quickly sketch design ideas into the newly constructed context, or to analyze the space around them. Although end-user trials have not, as yet, been undertaken, we believe that this implementation may afford a user interface that is both accessible and robust, and that the rapid growth of pen-computing devices will further stimulate activity in this area.
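
    As a rough illustration of how a two-dimensional stroke on the imaginary drawing plane can yield 3D geometry, the following hypothetical Python sketch intersects view rays through the sketch points with a plane placed in front of the viewer. It assumes a simple pinhole camera looking down the negative z-axis and stands in for only the lifting step; the optimization that turns such lifted points into solid objects in Stilton is not reproduced here.

```python
# Minimal sketch of the "drawing plane" idea: lift 2D sketch points into 3D by
# intersecting view rays with a plane placed in front of the viewer. This is a
# hypothetical simplification, not Stilton's actual reconstruction pipeline.
import numpy as np

def lift_sketch_points(points_2d, eye, plane_point, plane_normal, focal=1.0):
    """Project 2D sketch points (normalized image coordinates) onto a 3D plane.

    points_2d    -- (N, 2) array of sketch coordinates on the image plane
    eye          -- (3,) viewer position
    plane_point  -- (3,) any point on the drawing plane
    plane_normal -- (3,) unit normal of the drawing plane
    focal        -- focal length of the assumed pinhole camera
    """
    points_2d = np.asarray(points_2d, dtype=float)
    lifted = []
    for u, v in points_2d:
        # Ray through the sketch point; camera assumed to look down -z.
        direction = np.array([u, v, -focal])
        direction /= np.linalg.norm(direction)
        denom = direction @ plane_normal
        if abs(denom) < 1e-9:
            continue  # ray parallel to the drawing plane; skip this sample
        t = ((plane_point - eye) @ plane_normal) / denom
        lifted.append(eye + t * direction)
    return np.array(lifted)


if __name__ == "__main__":
    eye = np.array([0.0, 0.0, 0.0])
    plane_point = np.array([0.0, 0.0, -5.0])   # plane 5 units in front of viewer
    plane_normal = np.array([0.0, 0.0, 1.0])
    stroke = [(-0.2, 0.1), (0.0, 0.15), (0.2, 0.1)]
    print(lift_sketch_points(stroke, eye, plane_point, plane_normal))
```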

    Freeform User Interfaces for Graphical Computing

    Report number: Kō 15222; Date of degree conferral: 2000-03-29; Degree category: Doctorate by coursework; Degree type: Doctor of Engineering; Degree registration number: Doctor of Engineering No. 4717; Graduate school and department: Graduate School of Engineering, Department of Information Engineering

    Towards general spatial intelligence

    The goal of General Spatial Intelligence is to present a unified theory that supports the various aspects of spatial experience, whether physical or cognitive. We acknowledge that GIScience has to assume a particular worldview, resulting from specific positions regarding metaphysics, ontology, epistemology, mind, language, cognition and representation. Implicit positions regarding these domains may allow solutions to isolated problems but often hamper a more encompassing approach. We argue that explicitly defining a worldview allows the grounding and derivation of multi-modal models, the establishment of precise problems, and falsifiability. We present an example of such a theory founded on process metaphysics, in which the ontological elements are called differences. We show that a worldview has implications for the nature of space and, in the case of the chosen metaphysical layer, favours a model of space as true spacetime, i.e. four-dimensional. Finally, we illustrate the approach using a scenario from psychology and AI-based planning.
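
    To make the "space as true spacetime" claim concrete, the following small Python sketch treats entities as four-dimensional extents and tests whether two of them share a spacetime region. The names and the axis-aligned representation are hypothetical illustrations, not the paper's formalism.

```python
# Minimal sketch of a four-dimensional view of space: entities are modelled as
# 4D extents (spatial bounds plus a temporal interval) rather than 3D objects
# persisting through time. Hypothetical illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Extent4D:
    """Axis-aligned 4D region: spatial bounds plus a temporal interval."""
    x: tuple  # (min, max)
    y: tuple
    z: tuple
    t: tuple

    def overlaps(self, other: "Extent4D") -> bool:
        """Two extents share a spacetime region only if they overlap in all four axes."""
        def _ov(a, b):
            return a[0] <= b[1] and b[0] <= a[1]
        return all(_ov(a, b) for a, b in (
            (self.x, other.x), (self.y, other.y),
            (self.z, other.z), (self.t, other.t)))


if __name__ == "__main__":
    walker = Extent4D(x=(0, 2), y=(0, 1), z=(0, 2), t=(0, 10))
    doorway = Extent4D(x=(1, 3), y=(0, 1), z=(0, 2), t=(4, 6))
    print(walker.overlaps(doorway))  # True: the two extents intersect in spacetime
```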