    A Sketch-based Rapid Modeling Method for Crime Scene Presentation

    The reconstruction of crime scenes plays an important role in digital forensic applications. This article integrates computer graphics, sketch-based retrieval and virtual reality (VR) techniques to develop a low-cost and rapid 3D crime scene presentation approach, which investigators can use to analyze and simulate the criminal process. First, we constructed a collection of 3D models for indoor crime scenes using various popular techniques, including laser scanning, image-based modeling and geometric modeling. Second, to quickly obtain an object of interest from the 3D model database, a sketch-based retrieval method was proposed. Finally, a rapid modeling system that integrates our database and retrieval algorithm was developed to quickly build a digital crime scene. For practical use, an interactive real-time virtual roaming application was developed in Unity 3D with a low-cost VR head-mounted display (HMD). Practical cases have been implemented to demonstrate the feasibility and practicality of our method.
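    The abstract does not detail the retrieval algorithm, so the following is only a minimal illustrative baseline, not the paper's method: rank database models by a shape-descriptor distance between the query sketch and a pre-rendered silhouette image of each model. The file paths and Canny thresholds are assumptions.

```python
# Illustrative sketch-based retrieval baseline (NOT the paper's algorithm):
# compare Hu-moment shape descriptors of the query sketch and of
# pre-rendered silhouettes of each 3D model, then rank by distance.
import cv2
import numpy as np

def shape_descriptor(image_path):
    """Edge map -> log-scaled Hu moments (rotation/scale invariant)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)            # assumed thresholds
    hu = cv2.HuMoments(cv2.moments(edges)).flatten()
    # Log transform tames the large dynamic range of Hu moments.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def retrieve(sketch_path, silhouette_paths, top_k=5):
    """Return the top_k model silhouettes closest to the query sketch."""
    query = shape_descriptor(sketch_path)
    scored = [(np.linalg.norm(query - shape_descriptor(p)), p)
              for p in silhouette_paths]
    return sorted(scored)[:top_k]
```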

    A simulation environment for drone cinematography

    In this paper, we present a workflow for the simulation of drone operations exploiting realistic background environments constructed within Unreal Engine 4 (UE4). Methods for environmental image capture, 3D reconstruction (photogrammetry) and the creation of foreground assets are presented along with a flexible and user-friendly simulation interface. Given the geographical location of the selected area and the camera parameters employed, the scanning strategy and its associated flight parameters are first determined for image capture. Source imagery can be extracted from virtual globe software or obtained through aerial photography of the scene (e.g. using drones). The latter case is clearly more time consuming but can provide enhanced detail, particularly where coverage of virtual globe software is limited. The captured images are then used to generate 3D background environment models employing photogrammetry software. The reconstructed 3D models are then imported into the simulation interface as background environment assets, together with appropriate foreground object models, as a basis for shot planning and rehearsal. The tool supports both free-flight and parameterisable standard shot types along with programmable scenarios associated with foreground assets and event dynamics. It also supports the exporting of flight plans. Camera shots can also be designed to provide suitable coverage of any landmarks which need to appear in-shot. This simulation tool will contribute to enhanced productivity, improved safety (awareness of and mitigations for crowds and buildings), improved confidence of operators and directors, and ultimately enhanced quality of viewer experience.
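    The paper's exact scanning strategy is not given here; as a rough illustration of the flight-parameter step it describes, a standard photogrammetry calculation derives flight altitude and lawnmower grid spacing from the camera parameters, the desired ground sampling distance (GSD), and the image overlaps. All numeric values in the example are assumptions.

```python
# Hedged sketch of standard photogrammetry flight-parameter planning:
# compute flight altitude and a lawnmower grid spacing from camera
# parameters and the desired ground sampling distance (GSD).
def flight_parameters(gsd_m,            # desired GSD [m/pixel]
                      focal_mm,         # lens focal length [mm]
                      sensor_w_mm,      # sensor width [mm]
                      img_w_px, img_h_px,
                      side_overlap=0.7, forward_overlap=0.8):
    # Altitude that yields the requested GSD for this camera
    # (mm and pixel units cancel, leaving metres).
    altitude_m = gsd_m * focal_mm * img_w_px / sensor_w_mm
    # Ground footprint of a single image at that altitude.
    footprint_w = gsd_m * img_w_px
    footprint_h = gsd_m * img_h_px
    # Lawnmower pattern: spacing between flight lines and photo triggers.
    line_spacing_m = footprint_w * (1.0 - side_overlap)
    trigger_spacing_m = footprint_h * (1.0 - forward_overlap)
    return altitude_m, line_spacing_m, trigger_spacing_m

# Example (assumed camera): 2 cm/px GSD, 24 mm lens,
# 13.2 mm-wide sensor, 5472 x 3648 px -> fly at ~199 m.
print(flight_parameters(0.02, 24, 13.2, 5472, 3648))
```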

    Development of Multi-Robotic Arm System for Sorting System Using Computer Vision

    This paper develops a multi-robotic arm system and a stereo vision system to sort objects into the correct positions according to their size and shape attributes. The robotic arm system consists of one master and three slave robots associated with three conveyor belts. Each robotic arm is controlled by a robot controller based on a microcontroller. A master controller is used for the vision system and for communicating with the slave robotic arms using the Modbus RTU protocol over an RS485 serial interface. The stereo vision system is built to determine the 3D coordinates of each object. Instead of rebuilding the entire disparity map, which is computationally expensive, the centroids of the object in the two images are calculated to determine the depth value. We can then calculate the 3D coordinates of the object using the pinhole camera model. Objects are picked up and placed on a conveyor branch according to their shape. The conveyor transports the object to the location of the slave robot. Based on the size attribute that the slave robot receives from the master, the object is picked and placed in the right position. Experimental results reveal the effectiveness of the system. The system can be used in industrial processes to reduce the required time and improve the performance of the production line.
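    A minimal sketch of the depth-from-centroids computation described above, assuming rectified cameras and known intrinsics; all parameter values in the example are illustrative, not the paper's calibration.

```python
# Depth from object centroids instead of a full disparity map: the
# horizontal offset between the object's centroids in the rectified
# left/right images gives the disparity, and the pinhole model
# back-projects the left-image centroid to 3D.
def centroid_to_3d(centroid_left, centroid_right,
                   fx, fy, cx, cy, baseline_m):
    u_l, v_l = centroid_left          # pixel coords in the left image
    u_r, _ = centroid_right           # same object in the right image
    disparity = u_l - u_r             # rectified pair: horizontal shift only
    if disparity <= 0:
        raise ValueError("non-positive disparity; check rectification")
    z = fx * baseline_m / disparity   # depth along the optical axis [m]
    x = (u_l - cx) * z / fx           # pinhole back-projection
    y = (v_l - cy) * z / fy
    return x, y, z

# Example with assumed intrinsics: fx = fy = 700 px, principal point
# (640, 360), 6 cm baseline -> object at roughly 1.05 m depth.
print(centroid_to_3d((700, 400), (660, 400), 700, 700, 640, 360, 0.06))
```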

    Scalable Real-Time Rendering for Extremely Complex 3D Environments Using Multiple GPUs

    In 3D visualization, real-time rendering of high-quality meshes in complex 3D environments remains one of the major challenges in computer graphics. New data acquisition techniques like 3D modeling and scanning have drastically increased the demand for more complex models and for higher display resolutions in recent years. Most existing acceleration techniques that use a single GPU for rendering suffer from a limited GPU memory budget, time-consuming sequential execution, and finite display resolution. Recently, people have started building commodity workstations with multiple GPUs and multiple displays. As a result, more GPU memory is available across a distributed cluster of GPUs, more computational power is provided through the combination of multiple GPUs, and a higher display resolution can be achieved by connecting each GPU to a display monitor (resulting in a tiled large-display configuration). However, using a multi-GPU workstation may not always give the desired rendering performance due to imbalanced rendering workloads among GPUs and overheads caused by inter-GPU communication. In this dissertation, I contribute a multi-GPU, multi-display parallel rendering approach for complex 3D environments. The approach supports high-performance, high-quality rendering of static and dynamic 3D environments. A novel parallel load balancing algorithm is developed based on a screen partitioning strategy to dynamically balance the number of vertices and triangles rendered by each GPU. The overhead of inter-GPU communication is minimized by transferring only a small number of image pixels rather than chunks of 3D primitives, using a novel frame exchanging algorithm. State-of-the-art parallel mesh simplification and GPU out-of-core techniques are integrated into the multi-GPU, multi-display system to accelerate the rendering process.
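    The dissertation's load balancing algorithm is not reproduced here; the sketch below shows one common form of the screen-partitioning idea it builds on: split the screen into one vertical strip per GPU and move the strip boundaries each frame so every strip carries roughly equal primitive load, estimated from the previous frame. The column-load histogram and all values are assumptions.

```python
# Common screen-partitioning load balancer sketch: choose strip
# boundaries so each GPU's strip covers ~1/N of the total triangle
# load measured in the previous frame.
import numpy as np

def rebalance_strips(column_load, num_gpus):
    """column_load[i] = triangles rendered in screen column i last frame.
    Returns strip boundaries [0, b1, ..., width] with ~equal load each."""
    cumulative = np.cumsum(column_load)
    total = cumulative[-1]
    boundaries = [0]
    for g in range(1, num_gpus):
        # First column where cumulative load reaches g/num_gpus of total.
        target = total * g / num_gpus
        boundaries.append(int(np.searchsorted(cumulative, target)))
    boundaries.append(len(column_load))
    return boundaries

# Example: 1920-column screen with load concentrated in the middle;
# the middle strips come out narrower than the outer ones.
load = np.exp(-((np.arange(1920) - 960) / 300.0) ** 2)
print(rebalance_strips(load, 4))
```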

    3D SURVEY POINT CLOUD DATA AS DIRECT RENDERING ASSETS FOR VISUALISING COMPLEX HERITAGE IN VIRTUAL APPLICATIONS

    Digital technology provides methods to record and preserve cultural heritage, support conservation and restoration efforts, and share our collective past with a worldwide audience. Between 2011 and 2017, the 3D Survey Group from Politecnico di Milano ran an annual workshop in the medieval village of Ghesc in which photogrammetry and laser-scanner surveys were carried out. The point cloud data acquired in these activities has become “time slices” documenting different stages of the preservation interventions in Ghesc and the evolution of advanced survey techniques. The main objective of this research is to streamline the workflow of delivering immersive and interactive experiences for complex heritage by directly utilising the 3D survey point cloud data, whether derived from a photogrammetric survey, static laser scanner, or mobile mapping. A point cloud-based multiplatform application is designed and delivered with versatile functions. It runs on PC and VR devices to provide virtual access to the village and narrate its revitalisation story. Additionally, it operates on mobile devices with an AR feature that brings vibrancy to the on-site experience. This application integrates high-fidelity point cloud models, detailed information on vernacular architecture in the Ossola Valley, and information on the preservation project with gamified learning experiences. The unconventional approach of using points as rendering primitives in virtual applications offers a practical solution for visualising complex heritage, enabling an efficient transition from the data collection stage to the data sharing stage without the need for 3D reconstruction and intricate BIM modelling.

    A Reinforcement Learning Approach to Reforestation through Robots

    Carbon mitigation is nowadays a pressing global challenge. While some advances have been made in mitigation techniques, the fact that deforestation hinders the Earth's capacity to absorb the carbon dioxide generated by human activity needs to be addressed. This work documents a software simulation of a desert environment created with the Unity game engine. In this simulation, a grid of photovoltaic panels powers a device that produces water through dehumidification of the air. The water is stored in a reservoir and transferred to a wheeled robotic vehicle tasked with moving around and watering trees planted within a given area. By means of Reinforcement Learning techniques, the robot is trained to water the trees in the area, ensuring their daily growth while minimizing water waste and taking local weather conditions into account when making decisions. This study does not concern the training itself or its results; nevertheless, some indications on how to perform the training are provided.
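    The thesis does not publish its reward function; the sketch below is one plausible reward shaping consistent with the stated objectives (reward growth, penalize wasted water, account for forecast rain). All weights and argument names are illustrative assumptions.

```python
# Hypothetical per-step reward for the watering robot (illustrative
# shaping, not the thesis's actual reward): reward tree growth,
# penalize water spilled on bare ground, and discourage watering
# when rain is likely anyway.
def watering_reward(growth_delta,        # total tree growth since last step
                    water_on_trees,      # litres delivered to trees
                    water_spilled,       # litres wasted on bare ground
                    rain_forecast_prob): # forecast probability of rain
    growth_term = 10.0 * growth_delta
    waste_term = -1.0 * water_spilled
    # Watering ahead of likely rain is wasteful even if well aimed.
    redundancy_term = -0.5 * water_on_trees * rain_forecast_prob
    return growth_term + waste_term + redundancy_term
```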

    The Simulation System for Propagation of Fire and Smoke

    This work presents a solution for a real-time fire suppression control system. It also serves as a support tool that allows the creation of virtual ship models and their testing against a range of representative fire scenarios. Model testing includes generating predictions faster than real time using the simulation network model developed by Hughes Associates, Inc., visualizing them, and interactively modifying the model settings through the user interface. In the example, the ship geometry represents the ex-USS Shadwell, test area 688, imitating a submarine. Applying the designed visualization techniques to the example model revealed the ability of the system to process, store and render data much faster than real time (on average, 40 times faster).

    MIMo: A Multi-Modal Infant Model for Studying Cognitive Development

    Human intelligence and human consciousness emerge gradually during the process of cognitive development. Understanding this development is an essential aspect of understanding the human mind and may facilitate the construction of artificial minds with similar properties. Importantly, human cognitive development relies on embodied interactions with the physical and social environment, which is perceived via complementary sensory modalities. These interactions allow the developing mind to probe the causal structure of the world. This is in stark contrast to common machine learning approaches, e.g., for large language models, which merely passively "digest" large amounts of training data but are not in control of their sensory inputs. However, computational modeling of the kind of self-determined embodied interactions that lead to human intelligence and consciousness is a formidable challenge. Here we present MIMo, an open-source multi-modal infant model for studying early cognitive development through computer simulations. MIMo's body is modeled after an 18-month-old child with detailed five-fingered hands. MIMo perceives its surroundings via binocular vision, a vestibular system, proprioception, and touch perception through a full-body virtual skin, while two different actuation models allow control of its body. We describe the design and interfaces of MIMo and provide examples illustrating its use. All code is available at https://github.com/trieschlab/MIMo .
    Comment: 11 pages, 8 figures. Submitted to IEEE Transactions on Cognitive and Developmental Systems (TCDS).
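    The abstract mentions usage examples; a typical interaction, assuming a Gym-style interface, would follow the standard reset/step loop. The package import `mimoEnv`, the environment id `MIMoReach-v0`, and the Gymnasium API below are assumptions for illustration; consult the repository above for the actual registered environments.

```python
# Hedged usage sketch of a MIMo environment via a Gym-style loop.
# Environment id and package name are ASSUMED for illustration.
import gymnasium as gym
import mimoEnv  # assumed package; registers the MIMo environments

env = gym.make("MIMoReach-v0")  # hypothetical task id
obs, info = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # random motor commands
    obs, reward, terminated, truncated, info = env.step(action)
    # obs is expected to bundle the multi-modal senses described above
    # (vision, vestibular, proprioception, full-body touch).
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```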

    Performance Factors in Neurosurgical Simulation and Augmented Reality Image Guidance

    Virtual reality surgical simulators have seen widespread adoption in an effort to provide safe, cost-effective and realistic practice of surgical skills. However, the majority of these simulators focus on training low-level technical skills, providing only prototypical surgical cases. For many complex procedures, this approach is deficient in representing anatomical variations that present clinically, failing to challenge users’ higher-level cognitive skills important for navigation and targeting. Surgical simulators offer the means not only to simulate any case conceivable, but to test novel approaches and examine factors that influence performance. Unfortunately, there is a void in the literature surrounding these questions. This thesis was motivated by the need to expand the role of surgical simulators to provide users with clinically relevant scenarios and to evaluate human performance in relation to image guidance technologies, patient-specific anatomy, and cognitive abilities. To this end, various tools and methodologies were developed to examine cognitive abilities and knowledge, simulate procedures, and guide complex interventions, all within a neurosurgical context. The first chapter provides an introduction to the material. The second chapter describes the development and evaluation of a virtual anatomical training and examination tool. The results suggest that learning occurs and that spatial reasoning ability is an important performance predictor, but subordinate to anatomical knowledge. The third chapter outlines development of automation tools to enable efficient simulation studies and data management. In the fourth chapter, subjects perform abstract targeting tasks on ellipsoid targets with and without augmented reality guidance. While the guidance tool improved accuracy, performance with the tool was strongly tied to target depth estimation – an important consideration for implementation and training with similar guidance tools. In the fifth chapter, neurosurgically experienced subjects were recruited to perform simulated ventriculostomies. Results showed that anatomical variations influence performance and could impact outcome. Augmented reality guidance showed no marked improvement in performance, but exhibited a mild learning curve, indicating that additional training may be warranted. The final chapter summarizes the work presented. Our results and novel evaluative methodologies lay the groundwork for further investigation into simulators as versatile research tools to explore performance factors in simulated surgical procedures.