    MeshPipe: a Python-based tool for easy automation and demonstration of geometry processing pipelines

    The popularization of inexpensive 3D scanning, 3D printing, 3D publishing and AR/VR display technologies has renewed interest in open-source tools providing the geometry processing algorithms required to clean, repair, enrich, optimize and modify point-based and polygon-based models. Nowadays there is a large variety of such open-source tools, whose user community includes not only 3D experts but also 3D enthusiasts and professionals from other disciplines. In this paper we present a Python-based tool that addresses two major shortcomings of current solutions: the lack of easy-to-use methods for creating custom geometry processing pipelines (automation), and the lack of a suitable visual interface for quickly testing, comparing and sharing different pipelines, supporting rapid iterations and providing dynamic feedback to the user (demonstration). From the user's point of view, the tool is a 3D viewer with an integrated Python console from which internal or external Python code can be executed. We provide an easy-to-use but powerful API for element selection and geometry processing. Key algorithms are provided by a high-level C library exposed to the viewer via Python-C bindings. Unlike competing open-source alternatives, our tool has a minimal learning curve, and typical pipelines can be written in a few lines of Python code.
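
    The abstract does not spell out MeshPipe's own API, so the following is a minimal sketch of the kind of few-line cleanup pipeline it describes, written against the open-source trimesh package rather than MeshPipe itself; the file names are placeholders.

    # Illustrative pipeline using trimesh (not MeshPipe's API): load a scan,
    # repair it, smooth it, and export the result.
    import trimesh

    # Load a scanned model; force='mesh' flattens a possible Scene into one mesh.
    mesh = trimesh.load("scan.ply", force="mesh")

    # Drop degenerate faces, fill small holes, and fix face winding/normals.
    mesh.update_faces(mesh.nondegenerate_faces())
    mesh.fill_holes()
    trimesh.repair.fix_normals(mesh)

    # Light Laplacian smoothing, then export the cleaned model.
    trimesh.smoothing.filter_laplacian(mesh, iterations=5)
    mesh.export("scan_clean.obj")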

    Reflectance Transformation Imaging (RTI) System for Ancient Documentary Artefacts

    This tutorial summarises our uses of reflectance transformation imaging in archaeological contexts. It introduces the UK AHRC-funded project Reflectance Transformation Imaging for Ancient Documentary Artefacts and demonstrates imaging methodologies.

    Geoinformation, Geotechnology, and Geoplanning in the 1990s

    Over the last decade, there have been some significant changes in the geographic information available to support those involved in spatial planning and policy-making in different contexts. Moreover, developments have occurred apace in the technology with which to handle geoinformation. This paper provides an overview of trends during the 1990s in data provision, in the technology required to manipulate and analyse spatial information, and in the domain of planning, where applications of computer technology in the processing of geodata are prominent. It draws largely on experience in western Europe, and in the UK and the Netherlands in particular, and suggests that there are a number of pressures for a strengthened role for geotechnology in geoplanning in the years ahead.

    CREATe 2012-2016: Impact on society, industry and policy through research excellence and knowledge exchange

    On the eve of the CREATe Festival May 2016, the Centre published this legacy report (edited by Kerry Patterson & Sukhpreet Singh, with contributions from consortium researchers).

    Digitally interpreting traditional folk crafts

    Cultural heritage preservation requires that objects persist through time so that they can continue to communicate their intended meaning. The need for computer-based preservation and interpretation of traditional folk crafts is underlined by the decreasing number of masters, fading technologies, and crafts losing economic ground. We present a long-term applied research project on the development of a mathematical basis, software tools, and technology for applying desktop or personal fabrication, using compact, cheap, and environmentally friendly fabrication devices such as 3D printers, to traditional crafts. We illustrate the properties of this new modeling and fabrication system through several case studies involving the digital capture of traditional objects and craft patterns, which we also reuse in modern designs. The test application areas are traditional crafts from different cultural backgrounds, namely Japanese lacquer ware and Norwegian carvings. The project includes modeling existing artifacts, Web presentation of the models, automation of their fabrication, and the experimental manufacturing of new designs and forms.

    The XII century towers, a benchmark of the Rome countryside almost cancelled. The safeguard plan by low cost uav and terrestrial DSM photogrammetry surveying and 3D Web GIS applications

    “Giving a bird's-eye look at the Rome countryside throughout the central Middle Ages, it would appear as if the city's many towers had been spread widely across the territory” within a radial range of at most thirty kilometres from the Capitoline Hill (Carocci and Vendittelli, 2004). This is the consequence of the phenomenon known by the neologism “Incasalamento”, described in depth in this paper and understood as the general expansion of urban society's interests beyond the city limits, which began in the mid-XII century, developed throughout the XIII century, and slowed and ended in the years that followed. From the XIX century until today, the architectural remains of this phenomenon have attracted the interest of many national and international scholars, who have aimed to study and catalogue them all in order to create a complete framework; because of its extent, however, this framework has not yet allowed a detailed element-by-element analysis. Our plan of intervention starts from this situation: we will apply integrated survey methods and technologies of terrestrial and UAV near stereo-photogrammetry, using low-cost drones as well as action cameras and reflex cameras on extensible rods, integrated and referenced with GPS and topographic surveys. In the final project we intend to produce scaled and textured 3D surface models of each artifact (almost two hundred were initially observed still standing), in order to study their dimensions and structure individually, to analyse the building materials and details, and to formulate hypotheses about their function, based also on their position within the territory. These models, subsequently georeferenced, will be imported into a 2D and 3D WebGIS and organized in layers that can be displayed on reference basemaps as well as on historical maps.

    The Enigma of Digitized Property: A Tribute to John Perry Barlow

    Compressive sensing has attracted a lot of attention over the last decade within applied mathematics, computer science and electrical engineering because it suggests that we can sample a signal below the limit that traditional sampling theory imposes. By then using different recovery algorithms we are able, in theory, to recover the complete original signal even though very few samples were taken to begin with. It has been proven that these recovery algorithms work best on signals that are highly compressible, meaning that the signals have a sparse representation in which the majority of the signal elements are close to zero. In this thesis we implement some of these recovery algorithms and investigate how they perform in practice on a real video signal consisting of 300 sequential image frames. The video signal is undersampled using compressive sensing and then recovered using two types of strategies: one where no time correlation between successive frames is assumed, using the classical greedy algorithm Orthogonal Matching Pursuit (OMP) and a more robust, modified OMP called Predictive Orthogonal Matching Pursuit (PrOMP); and one using a newly developed algorithm, Dynamic Iterative Pursuit (DIP), which assumes and exploits time correlation between successive frames. We then evaluate and compare the performance of these two strategies using the Peak Signal to Noise Ratio (PSNR) as a metric, and also provide visual results. Based on investigation of the data in the video signal, using a simple model for the time correlation and the transition probabilities between different signal coefficients over time, the DIP algorithm showed good recovery performance. The main results showed that DIP performed better and better over time and outperformed PrOMP by up to 6 dB at half of the original sampling rate, but performed slightly below PrOMP in a smaller part of the video sequence where the correlation in time between successive frames in the original video suddenly became weaker.
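
    As a concrete illustration of the recovery step, here is a minimal sketch of plain OMP on a synthetic sparse signal with a random Gaussian sensing matrix, followed by a PSNR check; the dimensions and sparsity are assumptions chosen for the example, and it does not implement the thesis's PrOMP or DIP variants, which add prediction and time-correlation priors on top of this greedy loop.

    # Compressive sampling plus Orthogonal Matching Pursuit (OMP) recovery
    # on a synthetic k-sparse signal, evaluated with PSNR.
    import numpy as np

    def omp(A, y, k):
        """Recover a k-sparse x from y = A @ x by greedy atom selection."""
        residual = y.copy()
        support = []
        x_hat = np.zeros(A.shape[1])
        for _ in range(k):
            # Pick the column most correlated with the current residual.
            idx = int(np.argmax(np.abs(A.T @ residual)))
            support.append(idx)
            # Least-squares fit on the chosen support, then update the residual.
            coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coeffs
        x_hat[support] = coeffs
        return x_hat

    rng = np.random.default_rng(0)
    n, m, k = 256, 128, 10                         # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = A @ x                                      # compressive measurements

    x_rec = omp(A, y, k)
    mse = np.mean((x - x_rec) ** 2)
    psnr = 10 * np.log10(np.max(np.abs(x)) ** 2 / mse) if mse > 0 else np.inf
    print(f"PSNR of recovery: {psnr:.1f} dB")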