Interactive volume visualization in a virtual environment.
by Yu-Hang Siu. Thesis (M.Phil.), Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 74-80). Abstract also in Chinese.
Contents:
Chapter 1: Introduction (1.1 Volume Visualization; 1.2 Virtual Environment; 1.3 Approach; 1.4 Thesis Overview)
Chapter 2: Contour Extraction (2.1 Concept of Intelligent Scissors; 2.2 Dijkstra's Algorithm; 2.3 Cost Function; 2.4 Summary)
Chapter 3: Volume Cutting (3.1 Basic Idea of the Algorithm; 3.2 Intelligent Scissors on Surface Mesh; 3.3 Internal Cutting Surface; 3.4 Summary)
Chapter 4: Three-dimensional Intelligent Scissors (4.1 3D Graph Construction; 4.2 Cost Function; 4.3 Applications: 4.3.1 Surface Extraction, 4.3.2 Vessel Tracking; 4.4 Summary)
Chapter 5: Implementations in a Virtual Environment (5.1 Volume Cutting; 5.2 Surface Extraction; 5.3 Vessel Tracking; 5.4 Summary)
Chapter 6: Conclusions (6.1 Summary of Results; 6.2 Future Directions)
Appendix A: Performance of Dijkstra's Shortest Path Algorithm
Appendix B: IsoRegion Construction
Medical Robotics
The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery, and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will without doubt expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery. In MIS, the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery times. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results, and practical experiences in this expanding area. Many chapters in the book concern advanced research in this growing field. The book provides critical analysis of clinical trials and an assessment of the benefits and risks of applying these technologies. This book is certainly a small sample of the research activity on medical robotics going on around the globe as you read it, but it covers a good deal of what has been done in the field recently, and as such it serves as a valuable source for researchers interested in the subjects involved, whether or not they are currently "medical roboticists".
Deep Reinforcement Learning in Surgical Robotics: Enhancing the Automation Level
Surgical robotics is a rapidly evolving field that is transforming the
landscape of surgeries. Surgical robots have been shown to enhance precision,
minimize invasiveness, and alleviate surgeon fatigue. One promising area of
research in surgical robotics is the use of reinforcement learning to enhance
the automation level. Reinforcement learning is a type of machine learning that
involves training an agent to make decisions based on rewards and punishments.
This literature review aims to comprehensively analyze existing research on
reinforcement learning in surgical robotics. The review identified various
applications of reinforcement learning in surgical robotics, including
pre-operative, intra-body, and percutaneous procedures, listed the typical
studies, and compared their methodologies and results. The findings show that
reinforcement learning has great potential to improve the autonomy of surgical
robots. Reinforcement learning can teach robots to perform complex surgical
tasks, such as suturing and tissue manipulation. It can also improve the
accuracy and precision of surgical robots, making them more effective at
performing surgeries.
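To make the reward-and-punishment training loop described above concrete, the following is a minimal tabular Q-learning sketch on a toy one-dimensional task; it is a generic illustration of reinforcement learning, not code from any of the surveyed surgical systems, and all names are illustrative.

```python
import random

def q_learning(n_states=6, goal=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a 1-D corridor: the agent moves left or right
    and receives a reward only when it reaches the goal state."""
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else -0.01  # small step penalty shapes shorter paths
            # Temporal-difference update toward the reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# The learned policy should prefer "right" (action 1) in every non-goal state.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(5)]
```

The same reward-driven update underlies the far larger policies used in surgical task automation, where states and actions are continuous and the tables are replaced by neural networks.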
Simulation Method for the Physical Deformation of a Three-Dimensional Soft Body in Augmented Reality-Based External Ventricular Drainage
Objectives Intraoperative navigation reduces the risk of major complications and increases the likelihood of optimal surgical outcomes. This paper presents an augmented reality (AR)-based simulation technique for ventriculostomy that visualizes brain deformations caused by the movements of a surgical instrument in a three-dimensional brain model. This is achieved by applying a position-based dynamics (PBD) physical deformation method to a preoperative brain image. Methods An infrared camera-based AR surgical environment aligns the real-world space with a virtual space and tracks the surgical instruments. For a realistic representation and reduced simulation computation load, a hybrid geometric model is employed, which combines a high-resolution mesh model and a multiresolution tetrahedron model. Collision handling is executed when a collision between the brain and surgical instrument is detected. Constraints are used to preserve the properties of the soft body and ensure stable deformation. Results The experiment was conducted once in a phantom environment and once in an actual surgical environment. The tasks of inserting the surgical instrument into the ventricle using only the navigation information presented through the smart glasses and verifying the drainage of cerebrospinal fluid were evaluated. These tasks were successfully completed, as indicated by the drainage, and the deformation simulation speed averaged 18.78 fps. Conclusions This experiment confirmed that the AR-based method for external ventricular drain surgery was beneficial to clinicians.
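The core operation in position-based dynamics is projecting particle positions so they satisfy constraints such as a rest distance. The following is a minimal sketch of a PBD distance-constraint projection for two particles; it illustrates the general technique only and is not the paper's implementation.

```python
def project_distance(p1, p2, rest, w1=1.0, w2=1.0):
    """Move two particles along their connecting axis until their separation
    equals the rest length; w1/w2 are inverse masses weighting the correction."""
    d = [b - a for a, b in zip(p1, p2)]
    length = sum(c * c for c in d) ** 0.5
    if length == 0.0:
        return p1, p2  # coincident particles: no defined correction direction
    # Scaled constraint violation, split between the particles by inverse mass.
    corr = (length - rest) / (length * (w1 + w2))
    p1 = [a + w1 * corr * c for a, c in zip(p1, d)]
    p2 = [b - w2 * corr * c for b, c in zip(p2, d)]
    return p1, p2

p1, p2 = project_distance([0.0, 0.0, 0.0], [2.0, 0.0, 0.0], rest=1.0)
# With equal weights, both particles move symmetrically to the rest length.
```

In a full soft-body solver, many such constraints (distance, volume, shape) are iterated each frame over the tetrahedral model, which is what keeps the brain deformation stable under instrument contact.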
Navigation system based in motion tracking sensor for percutaneous renal access
Doctoral Thesis in Biomedical Engineering. Minimally invasive kidney interventions are performed daily to diagnose and treat several renal
diseases. Percutaneous renal access (PRA) is an essential but challenging stage of most of these
procedures, since its outcome is directly linked to the physician's ability to precisely visualize and
reach the anatomical target.
Nowadays, PRA is always guided with medical imaging assistance, most frequently X-ray-based
imaging (e.g. fluoroscopy). Radiation in the operating theater therefore represents a major risk to
the medical team, and excluding it from PRA would directly reduce the dose exposure
of both patients and physicians.
To address these problems, this thesis aims to develop a new hardware/software framework
to intuitively and safely guide the surgeon during PRA planning and puncturing.
For surgical planning, a set of methodologies was developed to increase the certainty of
reaching a specific target inside the kidney. The abdominal structures most relevant for PRA were
automatically clustered into different 3D volumes. To this end, primitive volumes were merged by solving a local
optimization problem based on the minimum description length principle and image statistical
properties. A multi-volume ray casting method was then used to highlight each segmented volume.
Results show that it is possible to detect all abdominal structures surrounding the kidney and
to correctly estimate a virtual trajectory.
For the percutaneous puncturing stage, electromagnetic and optical tracking solutions
were developed and tested in multiple in vitro, in vivo, and ex vivo trials. The optical tracking solution
aids in establishing the desired puncture site and choosing the best virtual puncture trajectory.
However, this system requires a line of sight to optical markers placed at the needle base,
limiting the accuracy when tracking inside the human body. Results show that the needle tip can
deflect from its initial straight-line trajectory with an error higher than 3 mm. Moreover, a complex
registration procedure and initial setup are needed.
On the other hand, a real-time electromagnetic tracking solution was developed. Here, a catheter
was inserted trans-urethrally towards the renal target. This catheter has a position and orientation
electromagnetic sensor at its tip that functions as a real-time target locator. A needle integrating a similar sensor is then used. From the data provided by both sensors, one computes a virtual puncture
trajectory, which is displayed in 3D visualization software. In vivo tests showed median renal and
ureteral puncture times of 19 and 51 seconds, respectively (ranges 14 to 45 and 45 to 67 seconds).
These results represent a puncture-time improvement of between 75% and 85% compared with
state-of-the-art methods.
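The virtual puncture trajectory described above is, at its core, computed from the two sensor positions. The following is a minimal sketch of that geometric step; function and variable names are illustrative, not taken from the thesis software.

```python
import math

def puncture_trajectory(needle_tip, target):
    """Unit direction and remaining distance from the needle-tip sensor to the
    catheter-tip (target) sensor, both given as 3-D positions in millimetres."""
    d = [t - n for n, t in zip(needle_tip, target)]
    dist = math.sqrt(sum(c * c for c in d))
    # Normalize to a unit direction; degenerate case when the tips coincide.
    direction = [c / dist for c in d] if dist else [0.0, 0.0, 0.0]
    return direction, dist

direction, dist = puncture_trajectory([0.0, 0.0, 0.0], [30.0, 0.0, 40.0])
# dist == 50.0; direction is approximately [0.6, 0.0, 0.8]
```

In the actual system this line segment is recomputed continuously from both electromagnetic sensors and rendered in the 3D visualization, so the surgeon steers the needle to keep it aligned with the moving target.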
3D sound and vibrotactile feedback were also developed to provide additional information about
the needle orientation. With this kind of feedback, the surgeon tends to
follow the virtual puncture trajectory with fewer deviations from the ideal trajectory
and is able to anticipate movements even without looking at a monitor. Best results show that 3D
sound sources were correctly identified 79.2 ± 8.1% of the time, with an average angulation error of
10.4°. Vibration sources were accurately identified 91.1 ± 3.6% of the time, with an average
angulation error of 8.0°.
In addition to the EMT framework, three circular ultrasound transducers with a needle
working channel were built. Different fabrication setups were explored in terms of piezoelectric
materials, transducer construction, single- vs. multi-array configurations, and backing and matching
material design. The A-scan signals retrieved from each transducer were filtered and processed to
automatically detect reflected echoes and to alert the surgeon when undesirable anatomical
structures lie along the puncture path. The transducers were mapped in a water tank and
tested in a study involving 45 phantoms. Results showed that the beam cross-sectional area
oscillates around the ceramic's radius, and that it was possible to automatically detect echo signals in
phantoms longer than 80 mm.
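The echo-detection step on the filtered A-scan can be illustrated with a simple amplitude-threshold pass over the signal. This is a hedged sketch of the general idea only; the thesis's actual processing chain and parameter names are not specified here.

```python
def detect_echoes(ascan, threshold, min_gap=5):
    """Return sample indices where the rectified A-scan amplitude crosses the
    threshold, merging crossings closer than min_gap samples into one echo."""
    echoes, last = [], -min_gap
    for i, v in enumerate(ascan):
        if abs(v) >= threshold and i - last >= min_gap:
            echoes.append(i)  # new echo front detected
            last = i
        elif abs(v) >= threshold:
            last = i  # still inside the same echo burst; extend it
    return echoes

# Synthetic A-scan: a strong echo around sample 20 and a weaker one at 52.
signal = [0.0] * 20 + [0.9, 0.7] + [0.0] * 30 + [0.5] + [0.0] * 10
echoes = detect_echoes(signal, threshold=0.4)
# echoes == [20, 52]
```

Each detected echo index maps to a depth via the speed of sound, which is how an unexpected structure between the skin and the kidney can trigger an alert before the needle reaches it.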
Hereupon, it is expected that introducing the proposed system into the PRA procedure
will guide the surgeon along the optimal path towards the precise kidney target, increasing
the surgeon's confidence and reducing complications (e.g. organ perforation) during PRA. Moreover, the
developed framework has the potential to make PRA radiation-free for both patient and surgeon
and to broaden the use of PRA to less specialized surgeons.
The present work was only possible thanks to the support of the Portuguese Science and
Technology Foundation through the PhD grant with reference SFRH/BD/74276/2010, funded by
FCT/MEC (PIDDAC) and by Fundo Europeu de Desenvolvimento Regional (FEDER), Programa
COMPETE - Programa Operacional Factores de Competitividade (POFC) do QREN.
Liver Segmentation and its Application to Hepatic Interventions
The thesis addresses the development of an intuitive and accurate liver segmentation approach, its integration into software prototypes for the planning of liver interventions, and research on liver regeneration. The developed liver segmentation approach is based on a combination of the live wire paradigm and shape-based interpolation. Extended with two correction modes and integrated into a user-friendly workflow, the method has been applied to more than 5000 data sets. Combining the liver segmentation with image analysis of hepatic vessels and tumors allows for the computation of anatomical and functional remnant liver volumes. In several projects with clinical partners worldwide, the benefit of computer-assisted planning was demonstrated. New insights into postoperative liver function and regeneration were gained, and recent investigations into the analysis of MRI data offer the option of further improving hepatic intervention planning.
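The live wire paradigm mentioned above tracks object boundaries as shortest paths on a pixel cost graph, typically via Dijkstra's algorithm. The following is a minimal, generic sketch of that core step on a 2-D cost grid; it illustrates the technique only and is not the thesis's implementation.

```python
import heapq

def live_wire_path(cost, start, goal):
    """Dijkstra shortest path on a 4-connected 2-D cost grid: the path prefers
    low-cost pixels, which in live wire correspond to strong edges."""
    rows, cols = len(cost), len(cost[0])
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk the predecessor map back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]  # high-cost column in the middle, cheap ring around it
path = live_wire_path(grid, (0, 0), (0, 2))
```

In an interactive tool, the user drops seed points on the contour and the shortest path between consecutive seeds snaps to the boundary, which is what makes the segmentation both fast and precise.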
The assessment of visual behaviour and depth perception in surgery
FACING EXPERIENCE: A PAINTER'S CANVAS IN VIRTUAL REALITY
Full version unavailable due to 3rd party copyright restrictions. This research investigates how shifts in perception might be brought about through the development of visual imagery created by the use of virtual environment technology.
Through a discussion of historical uses of immersion in art, this thesis will explore how immersion functions and why immersion has been a goal for artists throughout history. It begins with a discussion of ancient cave drawings and the relevance of Plato's Allegory of the Cave. Next it examines the biological origins of "making special." The research will discuss how this concept, combined with the ideas of "action" and "reaction," has reinforced the view that art is fundamentally experiential rather than static. The research emphasizes how present-day virtual environment art, in providing a space that engages visitors in computer graphics, expands on previous immersive artistic practices.
The thesis examines the technical context in which the research occurs by briefly describing the use of computer science technologies, the fundamentals of visual arts practices, and the importance of aesthetics in new media and provides a description of my artistic practice. The aim is to investigate how combining these approaches can enhance virtual environments as artworks. The computer science of virtual environments includes both hardware and software programming. The resultant virtual environment experiences are technologically dependent on the types of visual displays being used, including screens and monitors, and their subsequent viewing affordances. Virtual environments fill the field of view and can be experienced with a head mounted display (HMD) or a large screen display. The sense of immersion gained through the experience depends on how tracking devices and related peripheral devices are used to facilitate interaction.
The thesis discusses visual arts practices with a focus on how illusions shift our cognition and perception in the visual modalities. This discussion includes how perceptual thinking is the foundation of art experiences, how analogies are the foundation of cognitive experiences and how the two intertwine in art experiences for virtual environments. An examination of the aesthetic strategies used by artists and new media critics is presented to discuss new media art. This thesis investigates the visual elements used in virtual environments and prescribes strategies for creating art for virtual environments. Methods constituting a unique virtual environment practice that focuses on visual analogies are discussed. The artistic practice that is discussed as the basis for this research also concentrates on experiential moments and shifts in perception and cognition and references Douglas Hofstadter, Rudolf Arnheim and John Dewey.
Virtual environments provide for experiences in which the imagery generated updates in real time. Following an analysis of existing artwork and critical writing relative to the field, the process of inquiry has required the creation of artworks that involve tracking systems, projection displays, sound work, and an understanding of the importance of the visitor. In practice, the research has shown that the visitor should be seen as an interlocutor, interacting from a first-person perspective with virtual environment events, where avatars or other instrumental intermediaries, such as guns, vehicles, or menu systems, do not occlude the view. The aesthetic outcomes of this research are the result of combining visual analogies, real time interactive animation, and operatic performance in immersive space.
The environments designed in this research were informed initially by paintings created with imagery generated in a hypnopompic state, during the moments of transitioning from sleeping to waking. The drawings often emphasize emotional moments as caricatures and/or elements of the face as seen from a number of perspectives simultaneously, in the way of some cartoons, primitive artwork or Cubist imagery. In the imagery, the faces indicate situations, emotions and confrontations which can offer moments of humour and reflective exploration. At times, the faces usurp the space and stand in representation as both face and figure. The power of the placement of the caricatures in the paintings becomes apparent as the imagery stages the expressive moment. The placement of faces sets the scene, establishes relationships and promotes the honesty and emotions that develop over time as the paintings are scrutinized.
The development process of creating virtual environment imagery starts with hand-drawn sketches of characters, develops further as paintings on a "digital canvas", is built into animated, three-dimensional models and is finally incorporated into a virtual environment. The imagery is generated while drawing, typically with paper and pencil, in a stream of consciousness during the hypnopompic state. This method became an aesthetic strategy for producing a snappy, straightforward sketch. The sketches are explored further as they are worked up as paintings. During the painting process, the figures become fleshed out, and their placement on the page in essence brings them to life. These characters inhabit a world that I explore even further by building them into three-dimensional models and placing them in computer-generated virtual environments. The methodology of developing and placing the faces/figures became an operational strategy for building virtual environments. In order to open up the range of art virtual environments and develop operational strategies for visitors' experience, the characters and their facial features are used as navigational strategies, signposts and methods of wayfinding in order to sustain a stream-of-consciousness type of navigation.
Faces and characters were designed to represent those intimate moments of self-reflection and confrontation that occur daily within ourselves and with others. They sought to reflect moments of wonderment, hurt, curiosity and humour that could subsequently be relinquished for more practical or purposeful endeavours. They were intended to create conditions in which visitors might reflect upon their emotional state,
enabling their understanding and trust of their personal space, in which decisions are made and the nature of the world is determined.
In order to extend the split-second, frozen moment of recognition that a painting affords, the caricatures and their scenes are given new dimensions as they become characters in a performative virtual reality. Emotables, distinct from avatars, are characters confronting visitors in the virtual environment to engage them in an interactive, stream of consciousness, non-linear dialogue.
Visitors are also situated with a role in a virtual world, where they are required to adapt to the language of the environment in order to progress through the dynamics of a drama. The research showed that imagery created in a context of whimsy and fantasy could bring ontological meaning and aesthetic experience into the interactive environment, such that emotables, or facially expressive computer graphic characters, could be seen as another brushstroke in painting a world of virtual reality.
Segmentation of Medical Image Data and Image-Guided Intraoperative Navigation
The development of algorithms for the automatic or semi-automatic processing of medical image data has gained more and more importance in recent years. This is due, on the one hand, to ever-improving medical imaging modalities, which map the human body virtually in ever finer detail, and, on the other hand, to improved computer hardware, which allows algorithmic processing of data volumes, sometimes in the gigabyte range, within a reasonable time. The goal of this habilitation thesis is the development and evaluation of algorithms for medical image processing. The thesis consists of a series of publications grouped into three overarching topic areas:
-Segmentation of medical image data using template-based algorithms
-Experimental evaluation of open-source segmentation methods under medical conditions
-Navigation to support intraoperative therapies
In the area of segmentation of medical image data using template-based algorithms, various graph-based algorithms in 2D and 3D were developed that construct a directed graph from a template. This includes an algorithm for segmenting vertebrae in 2D and 3D: in 2D a rectangular and in 3D a cube-shaped template is used to build the graph and compute the segmentation result. In addition, a graph-based segmentation of the prostate using a spherical template is presented for automatically determining the boundaries between the prostate and surrounding organs. Building on the template-based algorithms, an interactive segmentation algorithm was designed and implemented that shows the user the segmentation result in real time. The algorithm uses the various templates for segmentation but requires only a single seed point from the user. In a further approach, the user can interactively refine the segmentation with additional seed points, making it possible to drive a semi-automatic segmentation to a satisfactory result even in difficult cases.
In the area of evaluation of open-source segmentation methods under medical conditions, various freely available segmentation algorithms were tested on patient data from clinical routine. This included the evaluation of the semi-automatic segmentation of brain tumors, for example pituitary adenomas and glioblastomas, with the freely available open-source platform 3D Slicer. This demonstrated how a purely manual slice-by-slice measurement of the tumor volume can be supported and accelerated in practice. Furthermore, the segmentation of language tracts in medical images of brain tumor patients was evaluated on different platforms.
In the area of navigation to support intraoperative therapies, software modules were developed to accompany intraoperative interventions in the different phases of a treatment (therapy planning, execution, control). This includes the first integration of the OpenIGTLink network protocol into the medical prototyping platform MeVisLab, which was evaluated with an NDI navigation system. Moreover, the design and implementation of a medical software prototype to support intraoperative gynecological brachytherapy was presented here for the first time. The software prototype also contained a module for advanced visualization in MR-guided interstitial gynecological brachytherapy, which, among other things, enabled the registration of a gynecological brachytherapy instrument into an intraoperative data set of a patient. The individual modules led to the presentation of a comprehensive image-guided system for gynecological brachytherapy in a multimodal operating room. This system covers the pre-, intra-, and postoperative treatment phases of interstitial gynecological brachytherapy.