4,847 research outputs found

    Wireless Software Synchronization of Multiple Distributed Cameras

    We present a method for precisely time-synchronizing the capture of image sequences from a collection of smartphone cameras connected over WiFi. Our method is entirely software-based, has only modest hardware requirements, and achieves an accuracy of less than 250 microseconds on unmodified commodity hardware. It does not use image content and synchronizes cameras prior to capture. The algorithm operates in two stages. In the first stage, we designate one device as the leader and synchronize each client device's clock to it by estimating network delay. Once clocks are synchronized, the second stage initiates continuous image streaming, estimates the relative phase of image timestamps between each client and the leader, and shifts the streams into alignment. We quantitatively validate our results on a multi-camera rig imaging a high-precision LED array and qualitatively demonstrate significant improvements to multi-view stereo depth estimation and stitching of dynamic scenes. We release as open source 'libsoftwaresync', an Android implementation of our system, to inspire new types of collective capture applications. Comment: Main: 9 pages, 10 figures. Supplemental: 3 pages, 5 figures.
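    The two-stage algorithm lends itself to a compact illustration. Below is a minimal sketch of the idea in Python, not the 'libsoftwaresync' implementation: an NTP-style exchange estimates each client's clock offset to the leader (keeping the lowest-delay sample to suppress WiFi jitter), and a phase computation yields the shift that aligns a client's frame timestamps with the leader's. The `ping_leader` callable and the fixed frame period are assumptions made for the example.

```python
import time

def estimate_offset(ping_leader):
    """One NTP-style exchange; returns (offset, round_trip_delay) in seconds.

    ping_leader() is a hypothetical RPC that returns the leader's receive and
    reply timestamps (t1, t2) for a request sent now.
    """
    t0 = time.monotonic()                 # client send time
    t1, t2 = ping_leader()                # leader receive / reply times
    t3 = time.monotonic()                 # client receive time
    delay = (t3 - t0) - (t2 - t1)         # round trip minus leader processing
    offset = ((t1 - t0) + (t2 - t3)) / 2  # client clock minus leader clock
    return offset, delay

def best_offset(ping_leader, n=50):
    """Keep the sample with the smallest round-trip delay to suppress jitter."""
    return min((estimate_offset(ping_leader) for _ in range(n)),
               key=lambda s: s[1])[0]

def phase_shift(client_frame_ts, leader_frame_ts, frame_period):
    """Shift (seconds) that brings the client stream into phase with the leader's."""
    return (leader_frame_ts - client_frame_ts) % frame_period
```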

    Creation of a Virtual Atlas of Neuroanatomy and Neurosurgical Techniques Using 3D Scanning Techniques

    Neuroanatomy is one of the most challenging and fascinating topics within human anatomy, due to the complexity and interconnection of the entire nervous system. The gold standard for learning neurosurgical anatomy is cadaveric dissection. Nevertheless, it has a high cost (need for a laboratory, acquisition of cadavers, and fixation), is time-consuming, and is limited by sociocultural restrictions. Due to these disadvantages, other tools have been investigated to improve neuroanatomy learning. Three-dimensional modalities have gradually begun to supplement traditional 2-dimensional representations of dissections and illustrations. Volumetric models (VMs) are the new frontier for neurosurgical education and training. Different workflows have been described to create these VMs: photogrammetry (PGM) and structured light scanning (SLS). In this study, we aimed to describe and use the currently available 3D scanning techniques to create a virtual atlas of neurosurgical anatomy. Dissections of post-mortem human heads and brains were performed at the skull base laboratories of Stanford University (NeuroTraIn Center) and the University of California, San Francisco (SBCVL, skull base and cerebrovascular laboratory). VMs were then created following either the SLS or the PGM workflow. Fiber tract reconstructions were also generated from DICOM data using DSI-Studio and incorporated into the VMs from dissections. Moreover, Creative Commons-licensed models were used to simplify the understanding of specific anatomical regions. Both methods yielded VMs with suitable clarity and structural integrity for anatomical education, surgical illustration, and procedural simulation. We described the roadmap of SLS and PGM for creating volumetric models, including the required equipment and software, and provided step-by-step procedures on how users can post-process and refine these models according to their specifications. The VMs generated were used in several publications, to describe specific neurosurgical approaches step by step and to enhance the understanding of anatomical regions and their functions. These models were also used in neuroanatomical education and research (workshops and publications). VMs offer a new, immersive, and innovative way to accurately visualize neuroanatomy. Given the straightforward workflow, the techniques described here may serve as a reference point for an entirely new way of capturing and depicting neuroanatomy and offer new opportunities for the application of VMs in education, simulation, and surgical planning. The virtual atlas, divided into specific areas concerning different neurosurgical approaches (such as skull base, cortex and fiber tracts, and spine operative anatomy), will increase the viewer's understanding of neurosurgical anatomy. The described atlas is the first surgical collection of VMs from cadaveric dissections available in the medical field and could be used as a reference for the future creation of analogous collections in different medical subspecialties.
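    As an aside for readers new to the photogrammetry (PGM) side of such workflows, the geometric core of image-based reconstruction can be sketched with standard OpenCV calls. The toy two-view example below is only a schematic of that core, not the atlas pipeline; the file names and camera intrinsics are placeholders, and production tools add many views, dense matching, and meshing.

```python
import cv2
import numpy as np

# Two overlapping photographs of the specimen (placeholder file names).
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and match local features.
sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

pts1 = np.float32([k1[m.queryIdx].pt for m in good])
pts2 = np.float32([k2[m.trainIdx].pt for m in good])

# 2. Recover the relative camera pose (assumed intrinsics K).
K = np.array([[3000.0, 0.0, 2000.0], [0.0, 3000.0, 1500.0], [0.0, 0.0, 1.0]])
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

# 3. Triangulate a sparse, up-to-scale point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T
```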

    D3MOBILE METROLOGY WORLD LEAGUE: TRAINING SECONDARY STUDENTS ON SMARTPHONE-BASED PHOTOGRAMMETRY

    The advent of smartphones brought with it higher processing capabilities and improved camera specifications, which boosted the applications of mobile-based imagery in a range of domains. One of them is the 3-D reconstruction of objects by means of photogrammetry, which now enjoys great popularity. This fact brings potential opportunities to develop educational procedures in high schools using smartphone-based 3-D scanning techniques. On this basis, we designed a Project Based e-Learning (PBeL) initiative to introduce secondary students to the discipline of photogrammetry through the use of their mobile phones, in a way that is attractive and challenging for them. The paper describes the motivation behind the project "D3MOBILE Metrology World League", supported by ISPRS as part of the "Educational and Capacity Building Initiative 2020" programme. With this Science, Technology, Engineering and Mathematics (STEM) initiative, we implement a methodology in the format of an international competition that can be adapted to daily classwork at the high school level anywhere in the world. The championship is therefore essentially structured around a collection of well-thought-out e-learning materials (text guidelines, video tutorials, proposed exercises, etc.), providing more flexible access to content and instruction at any time and from any place. The methodology allows students to gain spatial skills, practice other transversal abilities, learn the basics of photogrammetric techniques and workflows, gain experience in the 3-D modelling of simple objects, and practice a range of techniques related to the science of measurement.

    A Critical Comparison of 3D Digitization Techniques for Heritage Objects

    Techniques for the three-dimensional digitization of tangible heritage are continuously updated, as regards active and passive sensors, data acquisition approaches, implemented algorithms, and employed computational systems. These developments enable higher automation and processing velocities, and increased accuracy and precision, for digitizing heritage assets. For large-scale applications, such as investigations of ancient remains, heritage objects, or architectural details, scanning and image-based modeling approaches have prevailed, due to reduced costs and processing durations, fast acquisition, and the reproducibility of workflows. This paper presents an updated metric comparison of common heritage digitization approaches, providing a thorough examination of the sensors, capturing workflows, and processing parameters involved, and of the metric and radiometric results produced. A variety of photogrammetric software packages were evaluated (both commercial and open source), as well as photo-capturing equipment of various characteristics and prices, and scanners employing different technologies. The experiments were performed on case studies of different geometrical and surface characteristics to thoroughly assess the implemented three-dimensional modeling pipelines.
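    In practice, the metric part of such a comparison often reduces to distances between each digitized model and a reference model. A minimal cloud-to-cloud evaluation might look like the sketch below (an illustration, not the paper's protocol), assuming the clouds are already co-registered and in the same units.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(test_pts, ref_pts):
    """Nearest-neighbour distances from a digitized cloud to a reference cloud.

    test_pts, ref_pts: (N, 3) and (M, 3) arrays in a common, co-registered frame.
    """
    d, _ = cKDTree(ref_pts).query(test_pts)
    return {
        "mean": d.mean(),                  # average surface deviation
        "rmse": np.sqrt((d ** 2).mean()),  # penalizes outliers more strongly
        "p95": np.percentile(d, 95),       # robust worst-case indicator
    }
```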

    Personalized medicine in surgical treatment combining tracking systems, augmented reality and 3D printing

    International Mention in the doctoral degree.
    In the last twenty years, thanks to significant advances in healthcare technology, a new way of practicing medicine has focused on the problems and needs of each patient as an individual: so-called personalized medicine. In surgical treatments, personalization has been possible thanks to key technologies adapted to the specific anatomy of each patient and the needs of the physicians. Tracking systems, augmented reality (AR), three-dimensional (3D) printing, and artificial intelligence (AI) have previously supported this individualized medicine in many ways. However, their independent contributions show several limitations in terms of patient-to-image registration, lack of flexibility to adapt to the requirements of each case, long preoperative planning times, and navigation complexity. The main objective of this thesis is to increase patient personalization in surgical treatments by combining these technologies to bring surgical navigation to new complex cases: developing new patient registration methods, designing patient-specific tools, facilitating access to augmented reality for the medical community, and automating surgical workflows. In the first part of this dissertation, we present a novel framework for acral tumor resection combining intraoperative open-source navigation software, based on an optical tracking system, with desktop 3D printing. We used additive manufacturing to create a patient-specific mold that maintained the distal extremity in the same position during image-guided surgery as in the preoperative images. The feasibility of the proposed workflow was evaluated in two clinical cases (soft-tissue sarcomas in the hand and foot). We achieved an overall system accuracy of 1.88 mm, evaluated on the patient-specific 3D printed phantoms. Surgical navigation was feasible during both surgeries, allowing surgeons to verify the tumor resection margin. Then, we propose an augmented reality navigation system that uses 3D printed surgical guides with a tracking pattern, enabling automatic patient-to-image registration in orthopedic oncology. This specific tool fits on the patient only in a pre-designed location, in this case bone tissue. The solution was developed as a software application running on Microsoft HoloLens. The workflow was validated on a 3D printed phantom replicating the anatomy of a patient presenting an extraosseous Ewing's sarcoma, and then tested during the actual surgical intervention. The results showed that the surgical guide with the reference marker can be placed precisely, with an accuracy of 2 mm and a visualization error lower than 3 mm. The application allowed physicians to visualize the skin, bone, tumor, and medical images overlaid on the phantom and the patient. To enable the use of AR and 3D printing by inexperienced users without broad technical knowledge, we designed a step-by-step methodology. The proposed protocol describes how to develop an AR smartphone application that superimposes any patient-based 3D model onto a real-world environment using a 3D printed marker tracked by the smartphone camera. Our solution brings AR closer to the final clinical user, combining free and open-source software with an open-access protocol. The proposed guide is already helping to accelerate the adoption of these technologies by medical professionals and researchers.
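    The patient-to-image registration used throughout such workflows is commonly computed as a least-squares rigid fit between paired fiducials, together with the fiducial registration error (FRE) quoted in accuracy studies. A minimal sketch of that standard computation (Kabsch/SVD, not the thesis code):

```python
import numpy as np

def register_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error after applying (R, t)."""
    return np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
```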
    In the next section of the thesis, we show the benefits of combining these technologies during different stages of the surgical workflow in orthopedic oncology. We designed a novel AR-based smartphone application that can display the patient's anatomy and the tumor's location. A 3D printed reference marker, designed to fit in a unique position on the affected bone tissue, enables automatic registration. The system was evaluated in terms of visualization accuracy and usability during the whole surgical workflow on six realistic phantoms, achieving a visualization error below 3 mm. The AR system was tested in two clinical cases during surgical planning, patient communication, and surgical intervention. These results, and the positive feedback obtained from surgeons and patients, suggest that the combination of AR and 3D printing can improve efficacy, accuracy, and patients' experience. In the final section, two surgical navigation systems, based on optical tracking and augmented reality, were developed and evaluated to guide electrode placement in sacral neurostimulation (SNS) procedures. Our results show that both systems could minimize patient discomfort and improve surgical outcomes by reducing needle insertion time and the number of punctures. Additionally, we propose a feasible clinical workflow for guiding SNS interventions with both navigation methodologies, including the automatic creation of sacral virtual 3D models for trajectory definition using artificial intelligence, and intraoperative patient-to-image registration. To conclude, in this thesis we have demonstrated that combining technologies such as tracking systems, augmented reality, 3D printing, and artificial intelligence overcomes many current limitations in surgical treatments. Our results encourage the medical community to combine these technologies to improve surgical workflows and outcomes in more clinical scenarios.
    Doctoral Programme in Biomedical Science and Technology, Universidad Carlos III de Madrid. President: MarĂ­a JesĂșs Ledesma Carbayo. Secretary: MarĂ­a Arrate Muñoz Barrutia. Member: Csaba Pinte
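    The automatic registration from a 3D printed marker tracked by a smartphone camera can be illustrated with OpenCV's ArUco module (OpenCV >= 4.7 API). The intrinsics, marker size, and input frame below are placeholder assumptions, not values from the thesis.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)        # assume an undistorted, calibrated camera
MARKER_MM = 40.0          # printed marker side length (assumption)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("frame.png")  # placeholder camera frame
corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    # Marker corners in the marker's own frame (OpenCV corner order).
    h = MARKER_MM / 2.0
    obj = np.array([[-h, h, 0], [h, h, 0], [h, -h, 0], [-h, -h, 0]],
                   dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, corners[0].reshape(4, 2), K, dist)
    # rvec/tvec: pose of the marker (and the anatomy it is fixed to) in
    # camera coordinates, which is what the virtual overlay needs.
```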

    Advanced Calibration of Automotive Augmented Reality Head-Up Displays

    This work presents advanced calibration methods for augmented reality head-up displays (AR-HUDs) in motor vehicles, based on parametric perspective projections and nonparametric distortion models. AR-HUD calibration is important for placing virtual objects correctly in relevant applications such as navigation systems or parking maneuvers. Although the state of the art offers some useful approaches to this problem, this dissertation aims to develop approaches that are more advanced and yet less complicated. As a prerequisite for calibration, we define several relevant coordinate systems, including the three-dimensional (3D) world, the viewpoint space, the HUD field-of-view (HUD-FOV) space, and the two-dimensional (2D) virtual image space. We describe the projection of images from an AR-HUD projector toward the driver's eyes as a view-dependent pinhole camera model consisting of intrinsic and extrinsic matrices. Under this assumption, we first estimate the intrinsic matrix using the bounds of the HUD viewing area. Next, we calibrate the extrinsic matrices at different viewpoints within a selected "eyebox", accounting for the changing eye positions of the driver. The 3D positions of these viewpoints are tracked by a driver-facing camera. For each individual viewpoint, we obtain a set of 2D-3D correspondences between a set of points in the virtual image space and their matching control points in front of the windshield. Once these correspondences are available, we compute the extrinsic matrix at the corresponding viewpoint. By comparing the reprojected and real pixel positions of these virtual points, we obtain a 2D distribution of bias vectors, from which we reconstruct warping maps that contain the information about the image distortion. For completeness, we repeat the above extrinsic calibration procedure at all selected viewpoints. With the calibrated extrinsic parameters, we recover the viewpoints in the world coordinate system. Since we simultaneously track these points in the driver camera space, we further calibrate the transformation from the driver camera to the world space using these 3D-3D correspondences. To handle non-participating viewpoints within the eyebox, we obtain their extrinsic parameters and warping maps by nonparametric interpolation. Our combination of parametric and nonparametric models outperforms the state of the art in terms of target complexity and time efficiency while maintaining comparable calibration accuracy. In all our calibration schemes, the projection errors in the evaluation phase at a distance of 7.5 meters remain within a few millimeters, corresponding to an angular accuracy of about 2 arcminutes, which is close to the resolving power of the human eye.
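    The view-dependent pinhole model and the bias vectors that feed the warping maps admit a compact formulation; the following is a schematic sketch of those two definitions, not the dissertation's implementation.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of (N, 3) world points X to (N, 2) pixel coordinates."""
    Xc = X @ R.T + t            # world -> viewpoint space (extrinsics)
    x = Xc @ K.T                # viewpoint space -> image plane (intrinsics)
    return x[:, :2] / x[:, 2:]  # perspective divide

def bias_vectors(K, R, t, control_pts_3d, observed_px):
    """Residuals between observed and reprojected pixels at one viewpoint.

    The 2D field of these vectors is what the warping map interpolates.
    """
    return observed_px - project(K, R, t, control_pts_3d)
```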

    Optimization of computer-assisted intraoperative guidance for complex oncological procedures

    International Mention in the doctoral degree.
    The role of technology inside the operating room is constantly increasing, allowing surgical procedures previously considered impossible or too risky due to their complexity or limited access. These reliable tools have improved surgical efficiency and safety. Cancer treatment is one of the surgical specialties that has benefited most from these techniques, due to its high incidence and the accuracy required for tumor resections with conservative approaches and clear margins. However, in many cases, introducing these technologies into surgical scenarios is expensive and entails complex setups that are obtrusive, invasive, and increase the operative time. In this thesis, we proposed convenient, accessible, reliable, and non-invasive solutions for two highly complex regions in tumor resection surgery: the pelvis, and the head and neck. We explored how introducing 3D printing, surgical navigation, and augmented reality into these scenarios provided high intraoperative precision. First, we presented a less invasive setup for osteotomy guidance in pelvic tumor resections based on small patient-specific instruments (PSIs) fabricated with a desktop 3D printer at low cost. We evaluated their accuracy in a cadaveric study, following a realistic workflow, and obtained results similar to those of previous studies with more invasive setups. We also identified the ilium as the region most prone to errors. Then, we proposed surgical navigation using these small PSIs for image-to-patient registration. Artificial landmarks included in the PSIs substitute for the anatomical landmarks and the bone surface commonly used for this step, which require additional bone exposure and are therefore more invasive. We also presented an alternative and more convenient installation of the dynamic reference frame used to track patient movements in surgical navigation. The reference frame is inserted into a socket included in the PSIs and can be attached and detached without losing precision, simplifying the installation. We validated the setup in a cadaveric study, evaluating the accuracy and finding the optimal PSI configuration in the three most common scenarios for pelvic tumor resection. The results demonstrated high accuracy, where the main source of error was again incorrect placement of PSIs in regular, homogeneous regions such as the ilium. The main limitation of PSIs is the guidance error resulting from incorrect placement. To overcome this issue, we proposed augmented reality as a tool to guide PSI installation on the patient's bone. We developed an application for smartphones and HoloLens 2 that displays the correct position intraoperatively. We measured the placement errors on a conventional phantom and on a realistic phantom including a silicone layer to simulate tissue. The results demonstrated a significant reduction of errors with augmented reality compared to freehand placement, ensuring installation of the PSI close to the target area. Finally, we proposed three setups for surgical navigation in palate tumor resections, using optical trackers and augmented reality. The tracking tools for the patient and surgical instruments were fabricated with low-cost desktop 3D printers and designed to provide less invasive setups compared with previous solutions. All setups presented similar results with high accuracy when tested on a 3D-printed patient-specific phantom.
    They were then validated in a real surgical case, and one of the solutions was applied for intraoperative guidance. Postoperative results demonstrated high navigation accuracy, obtaining optimal surgical outcomes. The proposed solution enabled a conservative surgical approach with a less invasive navigation setup. To conclude, in this thesis we have proposed new setups for intraoperative navigation in two complex surgical scenarios for tumor resection. We analyzed their navigation precision, defining the optimal configurations to ensure accuracy. With this, we have demonstrated that computer-assisted surgery techniques can be integrated into the surgical workflow with accessible and non-invasive setups. These results are a step further towards optimizing procedures and continuing to improve surgical outcomes in complex surgical scenarios.
    Doctoral Programme in Biomedical Science and Technology, Universidad Carlos III de Madrid. President: RaĂșl San JosĂ© EstĂ©par. Secretary: Alba GonzĂĄlez Álvarez. Member: Simon Droui
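    Navigation accuracy in such studies is typically reported as the translational and angular deviation between planned and achieved poses. A minimal sketch of those two metrics, assuming rotation matrices and positions already expressed in a common reference frame:

```python
import numpy as np

def pose_errors(R_plan, t_plan, R_real, t_real):
    """Translational (same units as t) and angular (degrees) pose deviation."""
    dt = np.linalg.norm(t_real - t_plan)             # tip/position error
    dR = R_plan.T @ R_real                           # residual rotation
    cos_a = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    return dt, np.degrees(np.arccos(cos_a))
```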
    • 

    corecore