97 research outputs found

    Mental Vision: a computer graphics platform for virtual reality, science and education

    Despite the wide range of computer graphics frameworks and solutions available for virtual reality, it is still difficult to find one that fits the many constraints of research and educational contexts at the same time. Advanced functionality and user-friendliness, rendering speed and portability, or scalability and image quality are opposing characteristics rarely found in a single approach. Furthermore, access to virtual-reality-specific devices such as CAVEs or wearable systems is limited by their cost and availability, with most of these innovations reserved for institutions and specialists able to afford and manage them through a strong background in programming. Finally, computer graphics and virtual reality are complex and difficult subjects to learn, owing to the heterogeneity of notions a developer needs to master before attempting to implement a full virtual environment. In this thesis we describe our contributions to these topics, assembled in what we call the Mental Vision platform. Mental Vision is a framework composed of three main entities. First, a teaching- and research-oriented graphics engine, simplifying access to 2D/3D real-time rendering on mobile devices, personal computers and CAVE systems. Second, a series of pedagogical modules for introducing and practicing computer graphics and virtual reality techniques. Third, two advanced VR systems: a wearable, lightweight and hands-free mixed reality setup, and a four-sided CAVE built from off-the-shelf hardware. In this dissertation we explain our conceptual, architectural and technical approach, pointing out how we created a robust and coherent solution that reduces the complexity of cross-platform and multi-device 3D rendering while simultaneously answering the often contradictory needs of computer graphics and virtual reality researchers and students. A series of case studies evaluates how Mental Vision concretely satisfies these needs and achieves its goals in in vitro benchmarks and in vivo scientific and educational projects.

    Study of Augmented Reality based manufacturing for further integration of quality control 4.0: a systematic literature review

    Augmented Reality (AR) has gradually become a mainstream technology enabling Industry 4.0, and its maturity has grown over time. AR has been applied to support different processes on the shop floor, such as assembly and maintenance. As various manufacturing processes require high quality and near-zero error rates to meet the demands and safety of end-users, AR can also equip operators with immersive interfaces that enhance productivity, accuracy and autonomy in the quality sector. However, there is currently no systematic review of AR technology in support of the quality sector. The purpose of this paper is to conduct a systematic literature review (SLR) on the emerging interest in using AR as an assisting technology for the quality sector in an Industry 4.0 context. Five research questions (RQs), with a set of selection criteria, are predefined to support the objectives of this SLR. In addition, several research databases are searched in the paper-identification phase, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, to answer the predefined RQs. It is found that, despite lagging behind the assembly and maintenance sectors in terms of AR-based solutions, there is growing interest in developing and implementing AR-assisted quality applications. Current AR-based solutions for the quality sector fall into three main categories: AR-based applications as a virtual Lean tool, AR-assisted metrology, and AR-based solutions for in-line quality control. In this SLR, an AR architecture layer framework has been refined to classify articles into different layers, which are finally integrated into a systematic design and development methodology for building long-term AR-based solutions for the quality sector in the future.

    Performance Factors in Neurosurgical Simulation and Augmented Reality Image Guidance

    Virtual reality surgical simulators have seen widespread adoption in an effort to provide safe, cost-effective and realistic practice of surgical skills. However, the majority of these simulators focus on training low-level technical skills, providing only prototypical surgical cases. For many complex procedures, this approach fails to represent the anatomical variations that present clinically and does not challenge users’ higher-level cognitive skills important for navigation and targeting. Surgical simulators offer the means not only to simulate any case conceivable, but to test novel approaches and examine factors that influence performance. Unfortunately, there is a void in the literature surrounding these questions. This thesis was motivated by the need to expand the role of surgical simulators to provide users with clinically relevant scenarios and to evaluate human performance in relation to image guidance technologies, patient-specific anatomy, and cognitive abilities. To this end, various tools and methodologies were developed to examine cognitive abilities and knowledge, simulate procedures, and guide complex interventions, all within a neurosurgical context. The first chapter provides an introduction to the material. The second chapter describes the development and evaluation of a virtual anatomical training and examination tool. The results suggest that learning occurs and that spatial reasoning ability is an important performance predictor, but subordinate to anatomical knowledge. The third chapter outlines the development of automation tools to enable efficient simulation studies and data management. In the fourth chapter, subjects perform abstract targeting tasks on ellipsoid targets with and without augmented reality guidance. While the guidance tool improved accuracy, performance with the tool was strongly tied to target depth estimation, an important consideration for implementation and training with similar guidance tools. In the fifth chapter, neurosurgically experienced subjects were recruited to perform simulated ventriculostomies. Results showed that anatomical variations influence performance and could impact outcomes. Augmented reality guidance showed no marked improvement in performance, but exhibited a mild learning curve, indicating that additional training may be warranted. The final chapter summarizes the work presented. Our results and novel evaluative methodologies lay the groundwork for further investigation into simulators as versatile research tools for exploring performance factors in simulated surgical procedures.

    Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space

    The major component of the research described in this thesis is 3D computer graphics, specifically realistic physics-based softbody simulation and haptic responsive environments. Minor components include advanced human-computer interaction environments, non-linear documentary storytelling, and theatre performance. The journey of this research has been unusual because it requires a researcher with solid knowledge and background in multiple disciplines, who also has to be creative and sensitive in order to combine these areas into a new research direction. [...] It focuses on advanced computer graphics and emerges from experimental cinematic works and theatrical artistic practices. Several pieces of development content and installations were completed to demonstrate and evaluate the described concepts. [...] To summarize, the resulting work involves not only artistic creativity, but also overcoming technological hurdles in motion tracking, pattern recognition, force feedback control, etc., combined with the available documentary footage on film, video, or images, and text, via a variety of devices [....], as well as programming and installing all the needed interfaces so that everything works in real time. Thus, the contribution to the advancement of knowledge lies in solving these interfacing problems and the real-time aspects of the interaction, which have uses in the film industry, fashion industry, new-age interactive theatre, computer games, and web-based technologies and services for entertainment and education. It also includes building on this experience to integrate Kinect- and haptic-based interaction, artistic scenery rendering, and other forms of control. This research connects seemingly disjoint fields, such as computer graphics, documentary film, interactive media, and theatre performance. Comment: PhD thesis copy; 272 pages, 83 figures, 6 algorithms.

    Rapid Prototyping for Virtual Environments

    Development of Virtual Environment (VE) applications is challenging: application developers are required to have expertise in the target VE technologies along with problem-domain expertise, and new VE technologies impose a significant learning curve on even the most experienced VE developer. The proposed solution relies on synthesis to automate the migration of a VE application to a new, unfamiliar VE platform or technology. To solve the problem, the Common Scene Definition Framework (CSDF) is developed, which serves as a superset model representation of the target virtual world. Input modules are developed to populate the framework with the capabilities of the virtual world imported from the VRML 2.0 and X3D formats. A synthesis capability is built into the framework to synthesize the virtual world into a subset of the VRML 2.0, VRML 1.0, X3D, Java3D, JavaFX, JavaME, and OpenGL technologies, which may reside on different platforms. Interfaces are designed to keep the framework extensible to different and new VE formats and technologies. The framework demonstrated the ability to quickly synthesize a working prototype of the input virtual environment in different VE formats.
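    The extensible importer/synthesizer design described in this abstract can be pictured with a short sketch. The following Python snippet is illustrative only; the class and method names are hypothetical and do not reflect the actual CSDF API, which targets VRML/X3D and Java-family technologies.

    from abc import ABC, abstractmethod

    class SceneModel:
        """Minimal stand-in for a common, technology-neutral scene representation."""
        def __init__(self):
            self.nodes = []  # flat list of scene nodes, kept simple for illustration

    class SceneImporter(ABC):
        @abstractmethod
        def load(self, path: str) -> SceneModel:
            """Populate the common model from a source file (e.g. VRML 2.0 or X3D)."""

    class SceneSynthesizer(ABC):
        @abstractmethod
        def emit(self, model: SceneModel) -> str:
            """Synthesize the common model into source text for a target technology."""

    def migrate(importer: SceneImporter, synthesizer: SceneSynthesizer, path: str) -> str:
        """Import a virtual world once, then re-synthesize it for a new platform."""
        return synthesizer.emit(importer.load(path))

    The point of the pattern is that each new format or platform only requires a new importer or synthesizer implementation, while the common model and migration logic stay unchanged.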

    Visualizing three-dimensional graph drawings

    The GLuskap system for interactive three-dimensional graph drawing applies techniques from scientific visualization and interactive systems to the construction, display, and analysis of graph drawings. Important features of the system include support for large-screen stereographic 3D display with immersive head-tracking and motion-tracked interactive 3D wand control. A distributed rendering architecture contributes to the portability of the system, with user control performed on a laptop computer without specialized graphics hardware. An interface for implementing graph drawing layout and analysis algorithms in the Python programming language is also provided. This thesis comprehensively describes the author's work on the system, which includes the design and implementation of the major features described above. Further directions for continued development and research in cognitive tools for graph drawing research are also suggested.
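    As an illustration of what such a Python scripting interface enables, the sketch below shows a simple layout routine of the kind a user might implement; the function name and graph representation are hypothetical assumptions, not GLuskap's actual API.

    import math

    def circular_layout_3d(nodes, radius=10.0, z=0.0):
        """Place node identifiers evenly on a circle of the given radius in the
        plane at height z, returning a mapping from node id to (x, y, z)."""
        positions = {}
        count = max(len(nodes), 1)
        for i, node in enumerate(nodes):
            angle = 2.0 * math.pi * i / count
            positions[node] = (radius * math.cos(angle), radius * math.sin(angle), z)
        return positions

    # Example: five vertices placed on a circle of radius 10 in the z = 0 plane.
    print(circular_layout_3d(list(range(5))))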

    Development of augmented reality technologies in simulator-based medical education

    In this thesis we present what is, to the best of our knowledge, the first framework for training and assessment of fundamental psychomotor and procedural laparoscopic skills in an interactive Augmented Reality (AR) environment. The proposed system is a fully featured laparoscopic training platform, allowing surgeons to practice by manipulating real instruments while interacting with virtual objects within a real environment. It consists of a standard laparoscopic box-trainer, real instruments, a camera, and a set of sensory devices for real-time tracking of the surgeon's actions. The proposed framework has been used to implement AR-based training scenarios similar to the drills of the FLS® program, focusing on fundamental laparoscopic skills such as depth perception, hand-eye coordination and bimanual operation. Moreover, the framework allowed the implementation of a proof-of-concept procedural-skills training scenario, which involved clipping and cutting a virtual artery within an AR environment. Comparison studies between expert and novice surgeons, conducted to evaluate the presented framework, indicated high content and face validity. In addition, significant conclusions were drawn regarding the potential of introducing AR into laparoscopic simulation training and assessment. This technology provides an advanced sense of visual realism combined with great flexibility in training-task prototyping, with minimal hardware requirements compared to commercially available platforms. It can therefore be safely stated that AR is a promising technology that can indeed provide a valuable alternative to the training modalities currently used in MIS.

    Application-driven visual computing towards Industry 4.0 (2018)

    This thesis gathers contributions in three fields. 1. Interactive Virtual Agents (IVAs): autonomous, modular, scalable, ubiquitous and engaging for the user; these IVAs can interact with users in a natural way. 2. Immersive VR/AR environments: VR in production planning, product design, process simulation, testing and verification; the Virtual Operator shows how VR and co-bots can work together in a safe environment, while in the Augmented Operator, AR presents relevant information to the worker in a non-intrusive way. 3. Interactive management of 3D models: online management and visualization of multimedia CAD models through automatic conversion of CAD models to the Web; Web3D technology enables the visualization of, and interaction with, these models on low-power mobile devices. In addition, these contributions have made it possible to analyse the challenges posed by Industry 4.0, and the thesis has contributed a proof of concept for some of those challenges in human factors, simulation, visualization and model integration.
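    To picture the third contribution (automatic conversion of CAD models for Web3D viewing), the following minimal Python sketch wraps a triangle mesh into X3D markup suitable for a Web3D viewer; the function and data layout are assumptions made for illustration, not the thesis' actual conversion pipeline.

    import xml.etree.ElementTree as ET

    def mesh_to_x3d(vertices, faces):
        """Wrap a triangle mesh (a list of (x, y, z) tuples and a list of index
        triples) into a minimal X3D scene that a Web3D viewer can display."""
        x3d = ET.Element("X3D", profile="Interchange", version="3.3")
        scene = ET.SubElement(x3d, "Scene")
        shape = ET.SubElement(scene, "Shape")
        faceset = ET.SubElement(
            shape, "IndexedFaceSet",
            coordIndex=" ".join(" ".join(map(str, f)) + " -1" for f in faces))
        ET.SubElement(
            faceset, "Coordinate",
            point=" ".join(f"{x} {y} {z}" for x, y, z in vertices))
        return ET.tostring(x3d, encoding="unicode")

    # Example: a single triangle exported to X3D markup.
    print(mesh_to_x3d([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)]))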