13 research outputs found

    3D modelling by low-cost range camera: software evaluation and comparison

    The aim of this work is to present a comparison among three software applications currently available for the Occipital Structure Sensor™; all these applications were developed to collect 3D models of objects easily and in real time with this structured-light range camera. The SKANECT, itSeez3D and Scanner applications were thus tested: a DUPLO™ bricks construction was scanned with the three applications and the obtained models were compared with the model virtually generated with standard CAD software, which served as reference. The results demonstrate that all the applications are generally characterized by the same level of geometric accuracy, which amounts to very few millimetres. However, the itSeez3D software, which requires a payment of $7 to export each model, is surely the best solution, both in terms of geometric accuracy and, above all, of color restitution. On the other hand, Scanner, which is free software, presents an accuracy comparable to that of itSeez3D; at the same time, though, the colors are often smoothed and not perfectly overlaid on the corresponding parts of the model. Lastly, SKANECT is the software that generates the highest number of points, but it also has some issues with the rendering of the colors.
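
    As a side note for readers wishing to repeat this kind of geometric check: once a scanned model has been registered to the CAD reference, the accuracy figures quoted above reduce to statistics of point-to-reference distances. The sketch below is only an illustration of that idea, assuming Open3D, hypothetical file names, and models already aligned and expressed in metres; it is not the authors' own processing pipeline.

        import numpy as np
        import open3d as o3d

        # Hypothetical file names: one scan exported from a scanning app and the
        # CAD-generated reference model. Both are assumed to be already registered
        # in the same reference frame and expressed in metres.
        scan = o3d.io.read_triangle_mesh("scan_itseez3d.ply")
        reference = o3d.io.read_triangle_mesh("duplo_reference_cad.ply")

        # Sample dense point clouds from both meshes so that the comparison does
        # not depend on the triangulation density of either model.
        scan_pts = scan.sample_points_uniformly(number_of_points=200_000)
        ref_pts = reference.sample_points_uniformly(number_of_points=200_000)

        # Distance from each scanned point to its closest reference point.
        d = np.asarray(scan_pts.compute_point_cloud_distance(ref_pts))

        print(f"mean error : {d.mean() * 1000:.2f} mm")
        print(f"RMS error  : {np.sqrt((d ** 2).mean()) * 1000:.2f} mm")
        print(f"95th perc. : {np.percentile(d, 95) * 1000:.2f} mm")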

    DSM Generation from Single and Cross-Sensor Multi-View Satellite Images Using the New Agisoft Metashape: The Case Studies of Trento and Matera (Italy)

    DSM generation from satellite imagery is a long-standing issue and it has been addressed in several ways over the years; however, experts and users are continuously searching for simpler but accurate and reliable software solutions. One of the latest is provided by the commercial software Agisoft Metashape (since version 1.6), previously known as Photoscan, which joins other already available open-source and commercial software tools. The present work aims to quantify the potential of the new Agisoft Metashape satellite processing module, considering that, to the best knowledge of the authors, only two papers have been published on it, and none considering cross-sensor imagery. Here we investigated two different case studies to evaluate the accuracy of the generated DSMs. The first dataset consists of a triplet of Pléiades images acquired over the area of Trento and the Adige valley (Northern Italy), which is characterized by a great variety in terms of geomorphology, land use and land cover. The second consists of a triplet composed of a WorldView-3 stereo pair and a GeoEye-1 image, acquired over the city of Matera (Southern Italy), one of the oldest settlements in the world, with the world-famous Sassi area and a very rugged morphology in the surroundings. First, we carried out the accuracy assessment using the RPCs supplied by the satellite companies as part of the image metadata. Then, we refined the RPCs with an original independent terrain technique able to supply a new set of RPCs, using a set of GCPs adequately distributed across the regions of interest. The DSMs were generated both in stereo and multi-view (triplet) configurations. We assessed the accuracy and completeness of these DSMs through a comparison with proper references, i.e., DSMs obtained through LiDAR technology. The impact of the RPC refinement on the DSM accuracy is high, ranging from 20 to 40% in terms of LE90. After the RPC refinement, we achieved an average overall LE90 <5.0 m (Trento) and <4.0 m (Matera) for the stereo configuration, and <5.5 m (Trento) and <4.5 m (Matera) for the multi-view (triplet) configuration, with an increase of completeness in the range of 5–15% with respect to stereo pairs. Finally, we analyzed the impact of land cover on the accuracy of the generated DSMs; results for three classes (urban, agricultural, forest and semi-natural areas) are also supplied.
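
    Since LE90 is the figure of merit quoted throughout, it may help to make its computation explicit: it is the vertical error not exceeded by 90% of the valid cells when the generated DSM is differenced against the LiDAR reference. The sketch below is only a minimal illustration, assuming rasterio, NumPy, hypothetical file names, and two co-registered rasters on the same grid; it is not the evaluation code used in the paper.

        import numpy as np
        import rasterio

        # Hypothetical file names; both rasters are assumed to be co-registered,
        # resampled to the same grid, and expressed in metres.
        with rasterio.open("dsm_metashape_trento.tif") as gen, \
             rasterio.open("dsm_lidar_reference_trento.tif") as ref:
            dsm = gen.read(1, masked=True)
            lidar = ref.read(1, masked=True)

        # Absolute elevation differences over the valid (non-nodata) cells only.
        dz = np.abs(dsm - lidar).compressed()

        # LE90: the linear error not exceeded by 90% of the elevation differences.
        le90 = np.percentile(dz, 90)
        print(f"LE90 = {le90:.2f} m over {dz.size:,} valid cells")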

    The Rongorongo tablet C. New technologies and conventional approaches to an undeciphered text

    The Rongorongo script of Rapa Nui (Easter Island) remains undeciphered and its status as language notation is not proven. Only very recently has a full corpus of Rongorongo, with carefully edited texts, appeared, while a consensual inventory of signs remains a desideratum. We report the 3D modelling of Rongorongo Tablet C, which provides new detail on certain portions of its text, and a new drawing and transcription, complete with paleographic commentary. We also revisit the structure and possible contents of its text, a crucial step towards decipherment. In addition to a previously identified calendar (a list of the nights of the moon), Tablet C may include words related to agriculture, as well as other lexical lists, perhaps copied for the purpose of learning. We thus combine new technologies and more conventional approaches to offer new insight into an undeciphered inscription.

    3D high-quality modeling of small and complex archaeological inscribed objects: Relevant issues and proposed methodology

    3D modelling of inscribed archaeological finds (such as tablets or small objects) has to consider issues related to the correct acquisition and reading of ancient inscriptions, whose size and degree of conservation may vary greatly, in order to guarantee the requirements needed for visual inspection and analysis of the signs. In this work, photogrammetry and laser scanning were tested in order to find the optimal sensors and settings for the complete 3D reconstruction of such inscribed archaeological finds, paying specific attention to the final geometric accuracy and to the operational feasibility in terms of required sensors and necessary time. Several 3D modelling tests were thus carried out on four replicas of inscribed objects, characterized by different sizes, materials and epigraphic peculiarities. Specifically, for photogrammetry, different cameras and lenses were used, and a robust acquisition setup, able to guarantee a correct and automatic alignment of images during the photogrammetric process, was identified. The focus stacking technique was also investigated. The Canon EOS 1200D camera equipped with prime lenses and the iPad camera showed, respectively, the best and the worst accuracy. From an overall geometric point of view, the 50 mm and 100 mm lenses achieved very similar results, but the reconstruction of the smallest details with the 50 mm lens was not adequate; on the other hand, the acquisition time for the 50 mm lens was considerably lower than for the 100 mm one. For laser scanning, the ScanRider 1.2 model was used. The 3D models it produced (in less time than photogrammetry) clearly highlight how this scanner is able to reconstruct even the high frequencies with high resolution; however, these models are not provided with texture. For these reasons, a robust procedure for integrating the texture of the photogrammetric models with the mesh of the laser scanning models was also developed.
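
    The texture-integration step mentioned at the end essentially maps colour information from the photogrammetric model onto the geometrically superior laser-scan mesh. As a rough illustration only, the sketch below uses a nearest-neighbour vertex-colour transfer with Open3D, assuming hypothetical file names, per-vertex colours on the photogrammetric mesh, and both meshes already registered in the same frame; the actual procedure developed by the authors may differ.

        import numpy as np
        import open3d as o3d

        # Hypothetical file names. The laser-scan mesh carries the accurate geometry
        # but no texture; the photogrammetric mesh is assumed to store per-vertex
        # colours. Both meshes are assumed to be registered in the same frame.
        laser = o3d.io.read_triangle_mesh("scanrider_mesh.ply")
        photo = o3d.io.read_triangle_mesh("photogrammetry_mesh.ply")

        # Build a KD-tree on the photogrammetric vertices for nearest-neighbour lookup.
        photo_pcd = o3d.geometry.PointCloud()
        photo_pcd.points = photo.vertices
        tree = o3d.geometry.KDTreeFlann(photo_pcd)

        photo_colors = np.asarray(photo.vertex_colors)
        laser_vertices = np.asarray(laser.vertices)
        colors = np.zeros_like(laser_vertices)

        # Copy the colour of the nearest photogrammetric vertex onto each laser vertex.
        for i, v in enumerate(laser_vertices):
            _, idx, _ = tree.search_knn_vector_3d(v, 1)
            colors[i] = photo_colors[idx[0]]

        laser.vertex_colors = o3d.utility.Vector3dVector(colors)
        o3d.io.write_triangle_mesh("scanrider_mesh_colored.ply", laser)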

    Modelling the Rongorongo tablets: A new transcription of the Échancrée tablet and the foundation for decipherment attempts

    Rongorongo is a system of writing, still undeciphered, from Easter Island in the Pacific. It consists of a corpus of twenty-six inscriptions, scattered around the world. This article presents the state of the art in the study of one of these inscriptions, Text D or the ‘Échancrée’ tablet, housed in a museum in Rome, Italy. Through an integrated methodology based on photogrammetry and high-precision structured light scanning, a 3D model of the inscriptions is made available through a public 3D viewer for the first time. The technique made use of the benefits of both methods of image acquisition: a very accurate, precise, high-resolution and metric reconstruction of the tablet geometry gained through the scanning process, and a high-quality texture achieved through photogrammetry. In addition, we present a new analysis of the text, through a close palaeographic examination of its signs, and corrections of previous hand drawings and transcriptions. The ultimate aim is to reach unbiased ‘readings’ of the signs through an integrated synergy of traditional palaeographic analysis and an advanced 3D model. These, applied to all the inscriptions, constitute the necessary stepping stones for any decipherment attempt.

    3D Modelling of the Mamari Tablet from the Rongorongo Corpus: Acquisition, Processing Issues, and Outcomes

    Rongorongo is an undeciphered script inscribed on wooden objects from Easter Island (Rapa Nui) in the Pacific Ocean. The existing editions of the inscriptions, and their widespread locations in museums and archives all over the world, today constitute a serious obstacle to any objective paleographical assessment. Thus, with a view to a potential decipherment, creating 3D models of the available corpus is of crucial importance, and is one of the objectives of the ERC INSCRIBE project, based at the University of Bologna with Professor S. Ferrara as Principal Investigator. In this preliminary work, we present the results of the 3D digitization of the Mamari tablet, one of the longest inscriptions in Rongorongo, housed in the Museum Archives of the Congregazione dei Sacri Cuori di Gesù e Maria in Rome. The tablet is made of wood, with a shiny reflective surface characterized by a mainly dark texture. The 3D modelling was carried out with the ScanRider 1.2 laser scanner manufactured by VGER, based on structured light technology, taking care to ensure the legibility of each sign while preserving the overall shape of the object as precisely as possible. To overcome the difficulties inherent in the object’s complex fabric, the Mamari tablet was acquired in separate sections (joined together during processing through specific markers), thus making it possible to optimize the optical parameters of the laser scanner, such as the exposure of the camera and the depth of field of the projector. Furthermore, an evaluation of the 3D reconstruction precision was also carried out, highlighting a precision of a few hundredths of a millimetre, in agreement with the claimed nominal standard deviation. In addition to the 3D model produced, one of the main results of this endeavor was the definition of a successful method for scanning such complex objects, which will be replicated to finalize the complete 3D modelling of the whole Rongorongo corpus of inscriptions.
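
    The precision evaluation mentioned above is essentially a repeatability check: the same surface is acquired more than once and the spread of the discrepancies between the repeated scans is set against the scanner's nominal standard deviation. The short sketch below only illustrates that idea, assuming SciPy, hypothetical file names, and two scans of the same section already registered and expressed in millimetres; it is not the authors' actual evaluation code.

        import numpy as np
        from scipy.spatial import cKDTree

        # Hypothetical inputs: two point clouds of the same tablet section, already
        # registered in a common reference frame and expressed in millimetres.
        scan_a = np.loadtxt("mamari_section1_scan_a.xyz")   # shape (N, 3)
        scan_b = np.loadtxt("mamari_section1_scan_b.xyz")   # shape (M, 3)

        # For each point of scan A, distance to its nearest neighbour in scan B.
        tree = cKDTree(scan_b)
        d, _ = tree.query(scan_a, k=1)

        # The spread of these discrepancies is a simple repeatability (precision)
        # figure to compare with the scanner's nominal standard deviation.
        print(f"mean |d| = {d.mean():.4f} mm, std = {d.std():.4f} mm")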

    3D modelling of archaeological small finds by the structure sensor range camera: comparison of different scanning applications

    Today, range cameras represent a cheap, intuitive and effective technology for collecting the 3D geometry of objects and environments automatically and practically in real time. These features can make such sensors a valuable tool for documenting archaeological small finds, especially when non-expert users are involved. Therefore, in this work, Scanner and itSeez3D, two of the most promising scanning applications currently available for the Structure Sensor, a range camera specifically designed for mobile devices, were tested in order to evaluate their accuracy in modelling the 3D geometry of two archaeological artefacts characterized by different shapes and dimensions. The 3D models obtained through the two scanning applications were then compared with reference models generated with the more accurate photogrammetric technique. The results demonstrate that, from an overall point of view, both applications show the same level of geometric accuracy, which generally amounts to very few millimetres, and at the same time they substantially confirm the good quality of the Structure Sensor 3D reconstruction technology. In particular, the itSeez3D application is surely the best solution for color restitution, even though it requires a payment of $7 to export, and thus effectively use, each generated model. On the other hand, Scanner is a free application and its geometric accuracy is comparable to that of itSeez3D; however, the colours are frequently smoothed and sometimes not fully rendered.

    CycleDRUMS: automatic drum arrangement for bass lines using CycleGAN

    The two main research threads in computer-based music generation are the construction of autonomous music-making systems and the design of computer-based environments to assist musicians. In the symbolic domain, the key problem of automatically arranging a piece of music has been extensively studied, while relatively few systems have tackled this challenge in the audio domain. In this contribution, we propose CycleDRUMS, a novel method for generating drums given a bass line. After converting the waveform of the bass into a mel-spectrogram, we can automatically generate original drums that follow the beat, sound credible, and can be directly mixed with the input bass. We formulated this task as an unpaired image-to-image translation problem, and we addressed it with CycleGAN, a well-established unsupervised style transfer framework originally designed for images. The choice to work with raw audio and mel-spectrograms enabled us to better represent how humans perceive music and to draw sounds for new arrangements from the vast collection of music recordings accumulated over the last century. In the absence of an objective way of evaluating the output of both generative adversarial networks and generative music systems, we further defined a possible metric for the proposed task, partially based on human (and expert) judgment. Finally, as a comparison, we replicated our results with Pix2Pix, a paired image-to-image translation network, and we showed that our approach outperforms it.
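
    The key preprocessing step, turning the bass waveform into a mel-spectrogram "image" that an image-to-image network such as CycleGAN can consume, can be sketched in a few lines. The example below assumes librosa and a hypothetical input file, and its parameters (sample rate, FFT size, number of mel bands) are illustrative rather than those used by CycleDRUMS.

        import numpy as np
        import librosa

        # Hypothetical input file; parameters are illustrative, not the paper's.
        y, sr = librosa.load("bass_line.wav", sr=22050, mono=True)

        # Mel-spectrogram of the bass line, converted to a log-amplitude (dB) image.
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                             hop_length=512, n_mels=128)
        mel_db = librosa.power_to_db(mel, ref=np.max)

        # Normalise to [0, 1] so the spectrogram can be treated as a single-channel
        # image; an inverse mapping is needed to turn generated images back into audio.
        img = (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min())
        print(img.shape)  # (n_mels, n_frames)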