7 research outputs found

    3D Reconstruction for Academic Virtual Face Models Using the Kinect 2 Sensor

    Get PDF
    Massive multiplayer online games (MMOGs) such as World of Warcraft, Aion, and Second Life have attracted extraordinary attention in the development of virtual games. One advantage of MMOGs in virtual games is that every player can communicate directly, represented by a three-dimensional visual character. MMOGs also support in-game computer graphics, so that games built around three-dimensional visual characters look realistic. This research uses the Kinect 2 sensor and Microsoft Kinect tools, which help record a personalizable three-dimensional avatar. The Kinect 2 sensor simplifies 3D reconstruction of the face model for a virtual-game avatar; the sensor output is processed with 3D modeling techniques until the visual result resembles the real player. This research produces a 3D reconstruction of the face model for an academic virtual-game avatar using the Kinect 2 sensor. SUS testing of the 3D avatar modeling and visuals yielded an average score of 41.6 on a 5-point scale, which the authors place in the acceptable category, meaning the application can be accepted.
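    The abstract cites a System Usability Scale (SUS) result without detailing the scoring, so the sketch below shows the standard SUS computation in Python for reference; the example ratings are invented and do not reflect the authors' questionnaire data.

```python
def sus_score(responses):
    """Compute a standard System Usability Scale (SUS) score (0-100).

    `responses` is a list of ten Likert ratings (1-5) in questionnaire
    order. Odd-numbered items are positively worded, even-numbered items
    negatively worded, per the standard SUS scheme.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten ratings in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items contribute (rating - 1), even items (5 - rating).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# Example with one respondent's (invented) ratings.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```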

    3D GEOSPATIAL INDOOR NAVIGATION FOR DISASTER RISK REDUCTION AND RESPONSE IN URBAN ENVIRONMENT

    Get PDF
    Disaster management for urban environments with complex structures requires 3D extensions of indoor applications to support better risk reduction and response strategies. The paper highlights the need for assessment and explores the role of 3D geospatial information and modeling of indoor structure and navigational routes that can be utilized in disaster risk reduction and response strategies. The reviewed models and methods are analysed against test parameters in the context of indoor risk and disaster management: level of detail, connection to the outdoors, spatial model and network, and constraint handling. 3D reconstruction of indoor spaces requires structural data to be collected feasibly and in sufficient detail. Defining the indoor space, along with its obstacles, is important for navigation. Readily available technologies embedded in smartphones allow the development of mobile applications for data collection, visualization, and navigation, enabling low-cost access by the masses. The paper concludes with recommendations for 3D modeling, navigation, and visualization of data using readily available smartphone technologies, drones, and advanced robotics for disaster management.
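    As an illustration of the navigational-route idea, the sketch below models an indoor space as a weighted graph and handles a blockage constraint by removing edges before routing. The room names, layout, and weights are invented, and the paper prescribes no particular library; networkx is used here purely for brevity.

```python
import networkx as nx

# Rooms and corridor junctions as nodes, traversable connections as
# weighted edges (edge weight = walking distance in meters, assumed).
G = nx.Graph()
G.add_weighted_edges_from([
    ("room_101", "corridor_A", 3.0),
    ("room_102", "corridor_A", 2.5),
    ("corridor_A", "stairwell_1", 10.0),
    ("corridor_A", "corridor_B", 8.0),
    ("corridor_B", "stairwell_1", 4.0),
    ("stairwell_1", "exit", 6.0),
])

# Constraint handling: drop connections reported as blocked by debris.
blocked = [("corridor_A", "stairwell_1")]
G.remove_edges_from(blocked)

# Shortest evacuation route from a room to the exit, respecting weights.
route = nx.shortest_path(G, "room_101", "exit", weight="weight")
print(route)  # ['room_101', 'corridor_A', 'corridor_B', 'stairwell_1', 'exit']
```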

    A Post-Rectification Approach of Depth Images of Kinect v2 for 3D Reconstruction of Indoor Scenes

    No full text
    3D reconstruction of indoor scenes is a hot research topic in computer vision. Fast, low-cost, and accurate dense 3D maps of indoor scenes have applications in indoor robot positioning, navigation, and semantic mapping. Other studies use the Microsoft Kinect for Windows v2 (Kinect v2) for this task; however, the accuracy and precision of the depth information, and the accuracy of the correspondence between the RGB and depth (RGB-D) images, still leave room for improvement. In this paper, we propose a post-rectification approach for the depth images that improves the accuracy and precision of the depth information. Firstly, we calibrate the Kinect v2 with a planar checkerboard pattern. Secondly, we post-rectify the depth images according to the reflectivity-related depth error. Finally, we conduct tests to evaluate this post-rectification approach in terms of accuracy and precision. To validate its effect, we apply it to RGB-D simultaneous localization and mapping (SLAM) in an indoor environment. Experimental results show that with our post-rectification approach, the RGB-D SLAM system produces more accurate 3D reconstructions of indoor scenes, with better visual quality, than other state-of-the-art methods.
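    The abstract does not give the exact form of the reflectivity-related error model, so the sketch below only illustrates the general post-rectification idea: subtract a per-pixel depth error predicted from the IR amplitude, with coefficients fitted offline during checkerboard calibration. The polynomial error model and the coefficient values are assumptions, not the paper's.

```python
import numpy as np

def rectify_depth(depth, ir_amplitude, coeffs):
    """Apply a per-pixel depth correction driven by IR reflectivity.

    `depth` and `ir_amplitude` are HxW arrays from the sensor; `coeffs`
    are polynomial coefficients of an error model fitted offline against
    checkerboard calibration data (a hypothetical stand-in for the
    paper's reflectivity-related error term).
    """
    error = np.polyval(coeffs, ir_amplitude)  # predicted depth error (mm)
    return depth - error

# Toy usage with synthetic data and assumed coefficients.
depth = np.full((424, 512), 1500.0)           # Kinect v2 depth resolution
ir = np.random.uniform(50, 1500, depth.shape) # synthetic IR amplitudes
corrected = rectify_depth(depth, ir, coeffs=[1e-6, -2e-3, 5.0])
```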

    Detection of Human Actions from Depth Information Using Convolutional Neural Networks

    Get PDF
    The main objective of this work is the implementation of a human action detection system for security and video-surveillance applications, based on the depth information provided by RGB-D sensors. The system uses 3D convolutional neural networks (3D-CNN), which automatically extract features and classify actions from the spatial and temporal information of depth sequences. The proposal has been exhaustively evaluated, yielding an experimental accuracy of 94% in action detection. If you have problems, suggestions or comments on the document, please forward them to Sergio de López Diz. Grado en Ingeniería Electrónica de Comunicaciones.
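    To make the 3D-CNN idea concrete, here is a minimal PyTorch sketch of a network that convolves jointly over space and time in a clip of depth frames. The layer sizes, clip shape, and number of action classes are illustrative assumptions, not the architecture evaluated in the thesis.

```python
import torch
import torch.nn as nn

class Depth3DCNN(nn.Module):
    """Minimal 3D-CNN sketch for clips of depth frames.

    Input shape: (batch, 1, frames, height, width).
    """
    def __init__(self, num_actions=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # spatio-temporal filters
            nn.ReLU(),
            nn.MaxPool3d(2),                             # downsample time and space
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global pooling to (32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, num_actions)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)  # per-action logits

# A clip of 16 depth frames at 64x64 resolution.
clip = torch.randn(1, 1, 16, 64, 64)
logits = Depth3DCNN()(clip)
print(logits.shape)  # torch.Size([1, 10])
```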

    Modeling and Compensating of Noise in Time-of-Flight Sensors

    Get PDF
    Three-dimensional (3D) sensors provide the ability to perform contactless measurements of objects and distances that are within their field of view. Unlike traditional two-dimensional (2D) cameras, which only provide RGB data about objects within a scene, 3D sensors directly provide depth information for objects within a scene. Among these 3D sensing technologies, Time-of-Flight (ToF) sensors are becoming more compact, which allows them to be more easily integrated with other devices and to find use in more applications. ToF sensors also provide several benefits over other 3D sensing technologies that broaden the types of applications where they can be used. For example, over the last decade, ToF sensors have become more widely used in applications such as 3D scanning, drone positioning, robotics, logistics, structural health monitoring, and road surveillance. To further extend the applications where ToF sensors can be employed, this work focuses on improving the performance of ToF sensors by suppressing and mitigating the noise artifacts associated with them. These issues include multipath interference, motion blur, and multicamera interference in 3D depth maps and point clouds.
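    As a generic illustration of suppressing random noise in ToF depth maps (not the thesis's specific compensation method, and not addressing multipath, motion blur, or multicamera interference), the sketch below fuses a stack of frames from a static camera with a per-pixel temporal median, ignoring dropout pixels.

```python
import numpy as np

def temporal_median_depth(frames, invalid=0):
    """Suppress per-pixel depth noise with a temporal median.

    `frames` is a (T, H, W) stack of depth maps from a static ToF camera;
    pixels equal to `invalid` (dropouts) are ignored in the median. This
    is a generic denoising baseline under the stated assumptions.
    """
    stack = np.where(frames == invalid, np.nan, frames.astype(float))
    fused = np.nanmedian(stack, axis=0)       # per-pixel median over time
    return np.nan_to_num(fused, nan=invalid)  # restore the dropout marker

# Toy usage: ten noisy frames of a flat wall at 2000 mm.
frames = 2000 + np.random.normal(0, 15, (10, 424, 512))
frames[0, 100, 100] = 0                       # simulated dropout pixel
denoised = temporal_median_depth(frames)
```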