
    Range images

    This article gives an overview of range-imaging techniques with the aim of helping the reader understand how difficult issues, such as the registration of overlapping range images, can be approached and solved. It first introduces the characteristics of range images and highlights examples of 3D image visualizations, associated technical issues, applications, and the differences between range imaging and traditional digital broadband imaging. One of the most popular feature extraction and matching methods, the signature of histograms of orientations (SHOT) method, is then outlined. However, the "matched" points generated by SHOT usually include a high proportion of false positives due to factors such as imaging noise, lack of features, and cluttered backgrounds. The article therefore discusses image-matching issues in more depth, emphasizing how the widely employed range image alignment technique, the random sample consensus (RANSAC) method, compares with a simple yet effective technique based on normalized error penalization (NEP). The NEP method penalizes point matches whose errors lie far from those of the majority. The capability of the method for evaluating point matches between overlapping range images is illustrated by experiments on real range image data sets; interestingly, these range images appear to be easier to register than expected. Finally, conclusions are drawn and further readings on other fundamental techniques and concepts are suggested.
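    The abstract does not spell out the NEP formulation, so the following is only a rough sketch of the penalization idea it describes: matches whose alignment errors lie far from the bulk of the errors are down-weighted. The median/MAD normalization, the cutoff k, and the function names are assumptions introduced for illustration.

```python
import numpy as np

def nep_weights(src, dst, R, t, k=2.5):
    """Illustrative sketch of normalized error penalization (NEP):
    penalize putative point matches whose alignment errors lie far
    from the bulk of the errors. `src` and `dst` are (N, 3) arrays of
    matched points, (R, t) is a candidate rigid transform. The exact
    formulation in the article may differ; this is an assumption."""
    residuals = np.linalg.norm((src @ R.T + t) - dst, axis=1)
    med = np.median(residuals)
    mad = np.median(np.abs(residuals - med)) + 1e-12   # robust spread of the errors
    z = np.abs(residuals - med) / mad                  # normalized error per match
    weights = np.where(z <= k, 1.0, k / z)             # penalize matches far from the majority
    return weights, residuals
```

    In a full alignment loop such weights would typically feed a weighted re-estimation of (R, t) until the inlier set stabilizes, which is where a comparison against RANSAC becomes meaningful.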

    A Study into Autonomous Scanning for 3D Model Construction

    3D scanning and printing have the potential to revolutionise the world, offering a bridge between the virtual environment and the tangible world. The use of 3D scanners to capture and recreate objects is known as 3D virtualisation: a real-life scene is captured using laser technology and its geometry is represented using 3D modelling software or 3D printers. Despite being a relatively young technology, 3D printing has become accessible and part of modern industry. Printing a 3D generated model can change the way in which an individual understands a concept or environment, or communicates an idea, with benefits for education, skills development, training and the construction industry. However, using this technology relies on the operator having the skills and training required to generate accurate 3D models and to account for errors in the mesh after scanning. This paper therefore details the development of an automated 3D scanning system and a cloud-based printing platform in which models are intelligently printed by multiple devices. The system makes 3D printing capabilities available to unskilled users who have no education or training in 3D model construction: objects can be instantly manipulated and transferred into free-to-use open-source graphic software. Detailed 3D model construction has never been more accessible to the untrained.
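    The abstract does not describe how the cloud platform distributes models across devices, so the sketch below merely illustrates one plausible scheme, a greedy least-loaded assignment; the printer names, job list and scheduling rule are all hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Printer:
    pending_hours: float               # total estimated print time already queued
    name: str = field(compare=False)

def assign_jobs(printers, jobs):
    """Minimal sketch of a cloud printing queue in which models are
    distributed across multiple devices. The greedy least-loaded rule
    below is an assumption for illustration; a real scheduler may also
    consider material, build volume, priority, and so on."""
    heap = list(printers)
    heapq.heapify(heap)
    plan = []
    for model_name, est_hours in jobs:
        printer = heapq.heappop(heap)          # pick the least-loaded device
        plan.append((model_name, printer.name))
        printer.pending_hours += est_hours
        heapq.heappush(heap, printer)
    return plan

# Example with three hypothetical printers and four queued models.
printers = [Printer(0.0, "printer-a"), Printer(0.0, "printer-b"), Printer(0.0, "printer-c")]
jobs = [("bracket", 2.5), ("gear", 1.0), ("housing", 4.0), ("clip", 0.5)]
print(assign_jobs(printers, jobs))
```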

    A Study of 3D Body Shapes and Robot Control Algorithms for a Virtual Fitting Room

    The electronic version of this dissertation does not include the publications. Virtual fitting constitutes a fundamental element of the developments expected to raise the commercial prosperity of online garment retailers to a new level, as it is expected to reduce the manual labour and physical effort required during the fitting phase and to make trying on clothes more convenient for the user. Nevertheless, most previously proposed computer vision and graphics methods have failed to model the human body accurately and realistically, especially when it comes to 3D modelling of the whole body, which requires large amounts of data and computation; the failure is caused mainly by the inability to properly account for simultaneous variations of the body surface. In addition, most of the foregoing techniques cannot render realistic movement in real time. This project intends to overcome these shortcomings so as to satisfy the requirements of a virtual fitting room. The proposed methodology consists of scanning and analysing both the user's body and the prospective garment; modelling them, extracting measurements and assigning reference points; segmenting the 3D visual data imported from the mannequins; and finally superimposing, adapting and depicting the resulting garment model on the user's body. Visual data were gathered using a 3D laser scanner and the Kinect optical camera and organised into a usable database, which was used to experimentally test the algorithms devised. These algorithms mainly address the realistic visual representation of the garment on the human body and the enhancement of the size-advisor system in the context of the virtual fitting room under study.
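    As one concrete illustration of the measurement-extraction step feeding a size-advisor system, the sketch below estimates a single body circumference from a scanned point cloud by slicing it at a given height; the procedure, parameter values and function names are assumptions, not the method used in the thesis.

```python
import numpy as np
from scipy.spatial import ConvexHull

def circumference_at_height(points, height, band=0.01):
    """Hypothetical illustration of one size-advisor measurement:
    estimate a body circumference from a scanned point cloud (N, 3),
    in metres, by taking a thin horizontal band around `height` and
    measuring the perimeter of the 2D convex hull of that slice."""
    band_pts = points[np.abs(points[:, 2] - height) < band][:, :2]
    if len(band_pts) < 3:
        raise ValueError("not enough points in the slice")
    hull = ConvexHull(band_pts)
    ring = band_pts[hull.vertices]
    # perimeter of the closed polygon formed by the hull vertices
    return np.linalg.norm(np.diff(np.vstack([ring, ring[:1]]), axis=0), axis=1).sum()
```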

    Robot Operating System (ROS): The Complete Reference (Volume 4)

    This is the fourth volume of the successful series Robot Operating System (ROS): The Complete Reference, providing a comprehensive overview of the Robot Operating System (ROS), currently the main development framework for robotics applications, as well as the latest trends and contributed systems. The book is divided into four parts. Part 1 features two papers on navigation, discussing SLAM and path planning. Part 2 focuses on the integration of ROS into quadcopters and their control. Part 3 discusses two emerging applications for robotics: cloud robotics and video stabilization. Part 4 presents tools developed for ROS; the first is a practical alternative to the roslaunch system, and the second is related to penetration testing. This book is a valuable resource for ROS users and for those wanting to learn more about ROS capabilities and features.

    Proceedings of the 1st Annual SMACC Research Seminar 2016

    The Annual SMACC Research Seminar is a new forum for researchers from VTT Technical Research Centre of Finland Ltd, Tampere University of Technology (TUT) and industry to present their research in the area of smart machines and manufacturing. The 1st seminar was held on 10 October 2016 in Tampere, Finland. The objective of the seminar is to publish research results to wider audiences, to offer researchers a new forum for discussing the methods, outcomes and research challenges of current projects on SMACC themes, and to help identify common research interests and new research ideas. The Smart Machines and Manufacturing Competence Centre (SMACC) is a joint strategic alliance of VTT Ltd and TUT in the area of intelligent machines and manufacturing. SMACC offers unique services for SMEs in the field of machinery and manufacturing; its key features are rapid solutions, cutting-edge research expertise and extensive partnership networks. SMACC promotes digitalization in mechanical engineering and conducts scientific research with domestic and international partners on several different topics (www.smacc.fi).

    Toward lifelong visual localization and mapping

    Thesis (Ph.D.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 171-181). Mobile robotic systems operating over long durations require algorithms that are robust and scale efficiently over time as sensor information is continually collected. For mobile robots, one of the fundamental problems is navigation, which requires the robot to have a map of its environment so that it can plan and execute its path. Having the robot use its perception sensors to perform simultaneous localization and mapping (SLAM) is beneficial for a fully autonomous system. Extending the time horizon of operations poses problems for current SLAM algorithms, both in terms of robustness and temporal scalability. To address this problem we propose a reduced pose graph model that significantly reduces the complexity of the full pose graph model. Additionally, we develop SLAM systems using two different sensor modalities: imaging sonars for underwater navigation and vision-based SLAM for terrestrial applications. Underwater navigation is one application domain that benefits from SLAM, since access to a global positioning system (GPS) is not possible. In this thesis we present SLAM systems for two underwater applications. First, we describe our implementation of real-time imaging-sonar-aided navigation applied to in-situ autonomous ship hull inspection using the hovering autonomous underwater vehicle (HAUV), together with an architecture that enables the fusion of information from both a sonar and a camera system. The system is evaluated using data collected during experiments on SS Curtiss and USCGC Seneca. Second, we develop a feature-based navigation system supporting multi-session mapping, provide an algorithm for re-localizing the vehicle between missions, and present a method for managing the complexity of the estimation problem as new information is received. The system is demonstrated using data collected with a REMUS vehicle equipped with a BlueView forward-looking sonar. The model we use for mapping builds on the pose graph representation, which has been shown to be an efficient and accurate approach to SLAM. One problem with the pose graph formulation is that the state space grows continuously as more information is acquired. To address this we propose the reduced pose graph (RPG) model, which partitions the space to be mapped and uses the partitions to reduce the number of poses used for estimation. To evaluate our approach, we present results using an online binocular and RGB-Depth visual SLAM system that uses place recognition both for robustness and for multi-session operation. Additionally, to enable large-scale indoor mapping, our system automatically detects elevator rides based on accelerometer data. We demonstrate long-term mapping using approximately nine hours of data collected in the MIT Stata Center over the course of six months. Ground truth, derived by aligning laser scans to existing floor plans, is used to evaluate the global accuracy of the system. Our results illustrate the capability of our visual SLAM system to map a large-scale environment over an extended period of time. by Hordur Johannsson. Ph.D.
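    The reduced pose graph partitions the space to be mapped and reuses existing poses within a partition instead of adding new ones, so the graph grows with the size of the environment rather than with the duration of operation. The sketch below illustrates only that partitioning strategy; the class name, the grid-cell partition and the simplified constraint handling are assumptions for illustration, not the thesis implementation.

```python
import math

class ReducedPoseGraph:
    """Minimal sketch of the reduced pose graph idea: partition the mapped
    area into cells and reuse an existing pose in a cell instead of adding
    a new node. The actual algorithm in the thesis is considerably more
    involved; this only illustrates the partitioning strategy."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.anchor_for_cell = {}     # cell index -> retained pose node id
        self.nodes = []               # retained pose nodes (x, y, theta)
        self.edges = []               # (from_id, to_id, relative measurement)

    def _cell(self, x, y):
        return (math.floor(x / self.cell_size), math.floor(y / self.cell_size))

    def add_pose(self, prev_id, x, y, theta, rel_measurement):
        cell = self._cell(x, y)
        if cell in self.anchor_for_cell:
            # Reuse the cell's existing node: attach the new constraint to it
            # instead of adding a node. (In the full method the measurement
            # would be re-expressed relative to the anchor pose.)
            anchor = self.anchor_for_cell[cell]
            self.edges.append((prev_id, anchor, rel_measurement))
            return anchor
        node_id = len(self.nodes)
        self.nodes.append((x, y, theta))
        self.anchor_for_cell[cell] = node_id
        if prev_id is not None:
            self.edges.append((prev_id, node_id, rel_measurement))
        return node_id
```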

    User-oriented markerless augmented reality framework based on 3D reconstruction and loop closure detection

    An augmented reality (AR) system needs to track the user's view in order to perform accurate augmentation registration. The present research proposes a conceptual markerless, natural-feature-based AR framework whose process is divided into two stages: an offline database training session for the application developers, and an online AR tracking and display session for the end users. In the offline session, two types of 3D reconstruction, RGBD-SLAM and SfM, are integrated into the development framework for building the reference template of a target environment. The performance and applicable conditions of these two methods are presented, so that application developers can choose whichever method suits their development needs. A general development user interface is provided for interaction, including a simple GUI tool for augmentation configuration. The proposal also applies a Bag of Words strategy to enable rapid loop-closure detection in the online session, efficiently querying the user's view against the trained database to locate the user's pose. The rendering and display of the augmentation is currently implemented within an OpenGL window, an aspect of the work that merits further detailed investigation and development.
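    A minimal sketch of a Bag of Words query of the kind used for loop-closure-style retrieval in the online session is given below, assuming local descriptors have already been extracted for each image; the vocabulary size, the clustering choice (MiniBatchKMeans) and the cosine-similarity ranking are illustrative assumptions rather than the framework's actual configuration.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_vocabulary(all_descriptors, n_words=500, seed=0):
    """Cluster stacked local descriptors (one (n_i, d) array per image)
    into a visual vocabulary of n_words words."""
    stacked = np.vstack(all_descriptors).astype(np.float32)
    return MiniBatchKMeans(n_clusters=n_words, random_state=seed).fit(stacked)

def bow_histogram(descriptors, vocab):
    """L2-normalized Bag of Words histogram for one image."""
    words = vocab.predict(descriptors.astype(np.float32))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-12)

def query(user_view_desc, database_hists, vocab):
    """Rank database images by cosine similarity to the user's view.
    `database_hists` is an (M, n_words) array of histograms built offline."""
    q = bow_histogram(user_view_desc, vocab)
    scores = database_hists @ q            # dot product of normalized histograms
    return np.argsort(scores)[::-1]
```

    The top-ranked database image would then seed pose estimation against the reference template built offline.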

    Fusion of computed point clouds and integral-imaging concepts for full-parallax 3D display

    During the last century, various technologies for 3D image capture and visualization have come into the spotlight, owing both to their pioneering nature and to the aspiration to extend the applications of conventional 2D imaging technology to 3D scenes. Moreover, thanks to advances in opto-electronic imaging technologies, the possibilities of capturing and transmitting 2D images in real time have progressed significantly and have boosted the growth of 3D image capture, processing, transmission and display techniques. Among the latter, integral-imaging technology is considered one of the most promising, as it restores real 3D scenes through a multi-view visualization system that provides observers with a sense of immersive depth. Many research groups and companies have studied this technique with different approaches and complementary extensions. In this work we follow this trend, but with our own strategies and algorithms, so our approach is innovative compared with conventional proposals. The main objective of our research is to develop techniques for recording and simulating a natural 3D scene using several cameras of different types and characteristics. We then compose a dense 3D scene from the computed 3D data using various methods and techniques, and finally provide a volumetric scene, restored with great similarity to the original shape, through a comprehensive 3D monitor and/or display system. Our proposed integral-imaging monitor offers an immersive experience to multiple observers. In this thesis we address the challenges of integral-image production techniques based on computed 3D information, focusing in particular on the implementation of a full-parallax 3D display system. We have also made progress in overcoming the limitations of the conventional integral-imaging technique, developed different refinement methodologies and restoration strategies for the composed depth information, and applied a solution that significantly reduces the computation time associated with the repetitive calculation phase in the generation of an integral image. All these results are presented through the corresponding images and proposed display experiments.
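    As a rough illustration of turning a computed point cloud into an integral image, the sketch below projects each 3D point through a pinhole model of every microlens onto that lens's elemental image; the lens pitch, gap, resolution and the absence of depth ordering are simplifying assumptions, not the pipeline developed in the thesis.

```python
import numpy as np

def elemental_images(points, colors, n_lens=(10, 10), pitch=1.0, gap=3.0, res=64):
    """Minimal sketch of computing an integral image from a point cloud:
    each microlens is modelled as a pinhole at its centre, and every 3D
    point is projected through it onto that lens's elemental image.
    `points` is (N, 3) with z > 0 in front of the lens array (same units
    as pitch and gap), `colors` is (N, 3). All parameters are hypothetical."""
    H = np.zeros((n_lens[1] * res, n_lens[0] * res, 3), dtype=np.float32)
    for iy in range(n_lens[1]):
        for ix in range(n_lens[0]):
            # lens centre on the array plane (z = 0)
            cx = (ix - (n_lens[0] - 1) / 2.0) * pitch
            cy = (iy - (n_lens[1] - 1) / 2.0) * pitch
            # pinhole projection onto the sensor plane at z = -gap
            u = (points[:, 0] - cx) * (-gap) / points[:, 2] + cx
            v = (points[:, 1] - cy) * (-gap) / points[:, 2] + cy
            # map sensor coordinates to pixel indices inside this elemental image
            px = ((u - cx) / pitch + 0.5) * res
            py = ((v - cy) / pitch + 0.5) * res
            ok = (px >= 0) & (px < res) & (py >= 0) & (py < res)
            # no depth ordering: if several points land on one pixel, the last wins
            H[iy * res + py[ok].astype(int), ix * res + px[ok].astype(int)] = colors[ok]
    return H
```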