16 research outputs found

    Scaling Up Medical Visualization: Multi-Modal, Multi-Patient, and Multi-Audience Approaches for Medical Data Exploration, Analysis and Communication

    Get PDF
    Medical visualization is one of the most application-oriented areas of visualization research. Close collaboration with medical experts is essential for interpreting medical imaging data and creating meaningful visualization techniques and applications. Cancer is one of the most common causes of death, and as the average age in developed countries rises, so does the number of gynecological cancer diagnoses. Modern imaging techniques are an essential tool for assessing tumors, and they produce an increasing amount of imaging data that radiologists must interpret. Besides the number of imaging modalities, the number of patients is also rising, so visualization solutions must be scaled up to address the growing complexity of multi-modal and multi-patient data. Furthermore, medical visualization is not only targeted toward medical professionals; it also aims to inform patients, relatives, and the public about the risks of certain diseases and potential treatments. We therefore identify the need to scale medical visualization solutions to cope with multi-audience data. This thesis addresses the scaling of these dimensions through several contributions. First, we present our techniques for scaling medical visualizations to multiple modalities. We introduce a visualization technique that uses small multiples to display the data of multiple modalities within one imaging slice, allowing radiologists to explore the data efficiently without several juxtaposed windows. Next, we developed an analysis platform that applies radiomic tumor profiling to multiple imaging modalities to analyze cohort data and find new imaging biomarkers. Imaging biomarkers are indicators, based on imaging data, that predict variables related to clinical outcomes. Radiomic tumor profiling is a technique that generates potential imaging biomarkers from first- and second-order statistical measurements.

    The application allows medical experts to analyze multi-parametric imaging data to find potential correlations between clinical parameters and the radiomic tumor profiling data. This approach scales in two dimensions: multi-modal and multi-patient. In a later version, we added features to scale the multi-audience dimension by making our application applicable to cervical and prostate cancer data in addition to the endometrial cancer data it was designed for. In a subsequent contribution, we focus on tumor data at another scale and enable the analysis of tumor sub-parts by using multi-modal imaging data in a hierarchical clustering approach. Our application finds potentially interesting regions that could inform future treatment decisions. In another contribution, the digital probing interaction, we focus on multi-patient data: the imaging data of multiple patients can be compared to find interesting tumor patterns potentially linked to the tumors' aggressiveness. Lastly, we scale the multi-audience dimension with our similarity visualization, which is applicable to endometrial cancer research, neurological cancer imaging research, and machine learning research on automatic tumor segmentation. In contrast to the previously highlighted contributions, our last contribution, ScrollyVis, focuses primarily on multi-audience communication. We enable the creation of dynamic scientific scrollytelling experiences for specific or general audiences. Such stories can be used in specific use cases such as patient-doctor communication, or to communicate scientific results to the general public in a digital museum exhibition. Our proposed applications and interaction techniques have been demonstrated in application use cases and evaluated with domain experts and focus groups. As a result, some of our contributions are already in use at other research institutes. In future work, we want to evaluate their impact on other scientific fields and the general public. Doctoral thesis.
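    The first-order statistical measurements behind radiomic tumor profiling can be illustrated with a short sketch. This is not the thesis's actual pipeline; the particular feature set and histogram binning below are illustrative assumptions:

```python
import numpy as np

def first_order_features(volume, mask, bins=32):
    """First-order radiomic features computed from the voxel
    intensities inside a tumor segmentation mask."""
    x = volume[np.asarray(mask, bool)].astype(float)
    # Histogram-based probabilities for the intensity entropy.
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(x.mean()),
        "std": float(x.std()),
        "skewness": float(((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```

Second-order features would additionally describe texture, e.g. via gray-level co-occurrence statistics, which are omitted here for brevity.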

    MuSIC: Multi-Sequential Interactive Co-Registration for Cancer Imaging Data based on Segmentation Masks

    Get PDF
    In gynecologic cancer imaging, multiple magnetic resonance imaging (MRI) sequences are acquired per patient to reveal different tissue characteristics. However, after image acquisition, the anatomical structures can be misaligned across the sequences due to changing patient location in the scanner and organ movements. The co-registration process aims to align the sequences to allow for multi-sequential tumor imaging analysis. However, automatic co-registration often leads to unsatisfactory results. To address this problem, we propose the web-based application MuSIC (Multi-Sequential Interactive Co-registration). The approach allows medical experts to co-register multiple sequences simultaneously based on a pre-defined segmentation mask generated for one of the sequences. Our contributions lie in our proposed workflow. First, a shape-matching algorithm based on dual annealing searches for the tumor position in each sequence. The user can then interactively adapt the proposed segmentation positions if needed. During this procedure, we include a multi-modal magic lens visualization for visual quality assessment. Then, we register the volumes based on the segmentation mask positions, allowing for both rigid and deformable registration. Finally, we conducted a usability analysis with seven medical and machine learning experts to verify the utility of our approach. Our participants highly appreciated the multi-sequential setup and see themselves using MuSIC in the future. Best Paper Honorable Mention at VCBM 2022.
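    The dual-annealing shape-matching step can be sketched with SciPy's optimizer. This is a toy 2D reconstruction under stated assumptions, not the MuSIC implementation: the score function (mean image intensity under the shifted mask) and the search bounds are invented for illustration:

```python
import numpy as np
from scipy.optimize import dual_annealing

def mask_score(volume, mask, offset):
    """Mean image intensity under the mask shifted by an integer offset."""
    di, dj = int(round(offset[0])), int(round(offset[1]))
    shifted = np.roll(np.roll(mask, di, axis=0), dj, axis=1)
    return float(volume[shifted].mean())

def find_tumor_position(volume, mask, max_shift=8):
    """Search for the mask offset that best covers the bright tumor region."""
    bounds = [(-max_shift, max_shift)] * 2
    res = dual_annealing(lambda o: -mask_score(volume, mask, o),
                         bounds, seed=0, maxiter=200)
    return tuple(int(round(v)) for v in res.x)
```

In the real application the search would run per MRI sequence, proposing a segmentation position that the user can then refine interactively.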

    The timing of pregnancies after bariatric surgery has no impact on children’s health—a nationwide population-based registry analysis

    Get PDF
    Purpose: Bariatric surgery has a favorable effect on fertility in women. However, due to a lack of data regarding children's outcomes, the ideal time for conception following bariatric surgery is unknown. Current guidelines advise avoiding pregnancy during the initial weight-loss phase (12–24 months after surgery) as there may be potential risks to offspring. Thus, we aimed to analyze health outcomes in children born to mothers who had undergone bariatric surgery, with a focus on the surgery-to-delivery interval. Materials and Methods: A nationwide registry belonging to the Austrian health insurance funds and containing health-related claims data was searched. Data for all women who had bariatric surgery in Austria between 01/2010 and 12/2018 were analyzed. A total of 1057 women gave birth to 1369 children. The offspring's data were analyzed for medical health claims based on International Classification of Diseases (ICD) codes and number of days hospitalized. Three different surgery-to-delivery intervals were assessed: 12, 18, and 24 months. Results: Overall, 421 deliveries (31%) were observed in the first 2 years after surgery. Of these, 70 births (5%) occurred within 12 months after surgery. The median time from surgery to delivery was 34 months. Overall, there were no differences in the frequency of hospitalization or in the diagnoses leading to hospitalization in the first year of life, regardless of the surgery-to-delivery interval. Conclusion: Pregnancies in the first 24 months after bariatric surgery were common. Importantly, the surgery-to-delivery interval had no significant impact on the health outcomes of the children.

    Sex-Specific Differences in Mortality of Patients with a History of Bariatric Surgery: a Nation-Wide Population-Based Study

    Get PDF
    Purpose: Bariatric surgery reduces mortality in patients with severe obesity and is predominantly performed in women. Therefore, an analysis of sex-specific differences after bariatric surgery was performed on a population-based dataset from Austria, focusing on patients who died after bariatric surgery. Materials and Methods: The Austrian health insurance funds cover about 98% of the Austrian population. Medical health claims data of all Austrians who underwent bariatric surgery from 01/2010 to 12/2018 were analyzed. In total, 19,901 patients with 107,806 observed postoperative years were eligible for this analysis. In deceased patients, comorbidities based on International Classification of Diseases (ICD) codes and drug intake documented by Anatomical Therapeutic Chemical (ATC) codes were analyzed and grouped according to clinically relevant obesity-associated comorbidities: diabetes mellitus (DM), cardiovascular disease (CV), psychiatric disorder (PSY), and malignancy (M). Results: In total, 367 deaths were observed (1.8%) within the observation period from 01/2010 to 04/2020. The overall mortality rate was 0.34% per year of observation and significantly higher in men than in women (0.64 vs. 0.24%; p < 0.001, chi-squared). Moreover, the 30-day mortality was 0.19% and sixfold higher in men than in women (0.48 vs. 0.08%; p < 0.001). CV (82%) and PSY (55%) were the most common comorbidities in deceased patients, with no sex-specific differences. Diabetes (38%) was more common in men (43 vs. 33%; p = 0.034), whereas malignant diseases (36%) were more frequent in women (30 vs. 41%; p = 0.025). Conclusion: After bariatric surgery, both short-term and long-term mortality were higher in men than in women. In deceased patients, diabetes was more common in men, whereas malignant diseases were more common in women.
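    The sex comparison above rests on a chi-squared test of deceased vs. surviving patients by sex. The sketch below shows the test mechanics only; the 2x2 counts are hypothetical, since the paper does not report the exact per-sex split used here:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: rows = men/women, columns = deceased/alive.
# These counts are invented for illustration and are NOT the study's data.
table = [[160, 5014],    # men: deceased, alive
         [207, 14520]]   # women: deceased, alive

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2e}, dof={dof}")
```

With cohort-sized counts and a roughly two- to threefold difference in death rates, the test yields p well below 0.001, matching the order of significance reported in the abstract.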

    Para além do pensamento abissal: das linhas globais a uma ecologia de saberes (Beyond abyssal thinking: from global lines to an ecology of knowledges)

    Full text link

    Interactive reformation of fetal ultrasound data to a T-position

    No full text
    Three-dimensional (3D) ultrasound imaging is commonly used in prenatal screening. The acquisition delivers detailed information about the skin as well as the inner organs of the fetus. Prenatal screenings in terms of growth analysis are very important to support a healthy development of the fetus. The analysis of this data involves viewing two-dimensional (2D) slices in order to take measurements or calculate the volume and weight of the fetus. These steps involve manual investigation and depend on the skills of the person who performs them, yet the resulting measurements and calculations are very important for analyzing the development of the fetus and for birth preparation. Ultrasound imaging is affected by artifacts like speckles, noise, and structures obstructing the regions of interest; these artifacts occur because the imaging technique uses sound waves and their echo to create images. Using 2D slices as the basis for measuring the fetus is therefore not ideal: a measurement depends on the chosen projection and is only correct if exactly the right plane is selected. Analyzing the data in 3D would give the viewer a better overview and make it easier to distinguish between artifacts and the real data of the fetus. The growth of a fetus can be analyzed by comparing standardized measurements, like the crown-foot length, the femur length, the derived head circumference, and the abdominal circumference, against standardized tables. Standardization is well known in many fields of medicine and is used to enable comparability between investigations of the same patient or between patients. Therefore, we introduce a standardized way of analyzing 3D ultrasound images of fetuses. Bringing the fetus into a standardized pose would enable automated measurements and would also allow new measurements, such as the volume of specific body parts.

    A standardized pose would also make it possible to compare the measurements of different fetuses, or to follow one fetus over the course of the pregnancy. The novel method consists of six steps: loading the data, preprocessing, rigging the model, weighting the data, the actual transformation (called the "Vitruvian Baby"), and finally the analysis of the result. We automated the workflow as far as possible, leaving some manual tasks. The loading step works with standard medical image formats, and the preprocessing involves some interaction to remove ultrasound-induced artifacts. The rigging step, in which an abstracted skeleton is placed in the data, is performed manually; this skeleton describes the fetus and serves as the basis for the transformation. The weighting and the transformation are performed completely automatically, resulting in a T-pose representation of the data. We analyzed the performance of our novel approach in several ways, first using a phantom model of a man that is already given in a T-pose and therefore serves as the ground truth for our method's result. After using seven different fetus poses of the phantom as input, the method's output overlapped with the goal T-pose by 79.02% of voxels on average. For the head-to-toe measurement and the finger-to-finger span, we observed average similarities of 94.05% and 91.08%, respectively. The most complex manual task, the rigging, took about seven minutes on average. We also tested the method with a computer model of a fetus and a phantom dataset of a 3D ultrasound investigation, and the results are very promising.
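    A voxel-overlap figure like the reported 79.02% can be computed as follows. This sketch uses one plausible definition (intersection over union of foreground voxels); the thesis may define overlap differently:

```python
import numpy as np

def voxel_overlap_percent(result, target):
    """Percentage of overlapping foreground voxels between two binary
    volumes, as intersection over union. 100% means a perfect match."""
    a = np.asarray(result, bool)
    b = np.asarray(target, bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 100.0  # both volumes empty: trivially identical
    return 100.0 * np.logical_and(a, b).sum() / union
```

Comparing the reformatted fetus volume against the known T-pose phantom with such a metric gives a single number that can be averaged over the seven test poses.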

    MyoBeatz: Myoelectric prosthesis control training using an Android-based mobile rehabilitation game

    No full text
    Myoelectric prostheses for upper-limb amputees have been on the market for a long time, but they can normally only be used after extensive signal-training sessions under the observation of physiotherapists [1]. Prostheses are controlled by precise and proportional activation of the forearm muscles in the residual limb [2]. The various functions of the prosthesis are mapped to specific contractions, which have to be trained until the user can perform them correctly in every situation in which they would like to control their prosthesis. The rejection rate of myoelectric prostheses is very high, because too little training leads to strongly reduced functionality [1]. Mobile visualization devices for myoelectric signals, like the MyoBoy by Otto Bock [3], have already been developed, and virtual rehabilitation training has also been researched scientifically [4]. This master's thesis describes a solution that combines virtual rehabilitation and gaming in a mobile application. The system is intended to provide a more motivating training experience and consists of a myoelectric sensor armband [5] combined with a custom Android rehabilitation game. The game is played with the same signals used for prosthesis control, transferred to the Android application via Bluetooth. The game itself is implemented in Unity and integrated into the application; the aim is to catch game elements in the rhythm of a music song. The user's performance is measured, and the game responds with visual feedback. Each song can be played at three levels of difficulty, which determine how precisely and quickly the player has to act. Players are informed of their score after each game and can review the high score they have reached. The prototype of the game was tested by able-bodied persons and by patients, the target group of the app. The feedback given by the patients was generally positive, and they would use the app regularly to train the myoelectric control of their prostheses.

    The final version of the app includes seven music songs and provides information about the user's performance for each song. The delivered version also includes an electromyography (EMG) test, which enables the physiotherapist to track the patient's performance. The game is connected to the internet and synchronizes usage statistics, high scores, and EMG test results with a Firebase realtime database. The game was played by two forearm amputees for four weeks of signal training. One patient achieved a clear improvement and managed to improve the precision of muscle control at all three contraction levels for flexor and extensor. The other patient had already performed very well before the rehabilitation program but was still able to improve in the range of middle and high muscle contraction.

    [1] E. Biddiss and T. Chau, "Upper limb prosthesis use and abandonment: A survey of the last 25 years," Tech. Rep. 3, 2007.
    [2] A. D. Roche, H. Rehbaum, D. Farina, and O. C. Aszmann, "Prosthetic Myoelectric Control Strategies: A Clinical Perspective," Current Surgery Reports, vol. 2, no. 3, p. 44, 2014. [Online]. Available: http://link.springer.com/10.1007/s40137-013-0044-8
    [3] Ottobock.at, "Otto Bock MyoBoy." [Online]. Available: https://professionals.ottobockus.com/Prosthetics/Upper-Limb-Prosthetics/Myo-Hands-and-Components/Myo-Software/MyoBoy/p/757M11
    [4] [Online]. Available: https://games.jmir.org/2017/1/e3/
    [5] Thalmic, "Myo armband." [Online]. Available: https://www.myo.com/

    Submitted by Eric Mörth. Medizinische Universität Wien, master's thesis, 2018.
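    The mapping from raw EMG samples to the three contraction levels mentioned above can be sketched as follows. The RMS windowing and the 10/40/70% thresholds are illustrative assumptions, not the values used in MyoBeatz:

```python
import math

def emg_rms(window):
    """Root-mean-square amplitude of one window of EMG samples."""
    return math.sqrt(sum(s * s for s in window) / len(window))

def contraction_level(window, rest_rms, mvc_rms):
    """Classify one EMG window into 0=rest, 1=low, 2=middle, 3=high
    contraction, using RMS normalized between the resting level and
    the maximum voluntary contraction (MVC) from calibration."""
    r = (emg_rms(window) - rest_rms) / (mvc_rms - rest_rms)
    if r < 0.10:
        return 0
    if r < 0.40:
        return 1
    if r < 0.70:
        return 2
    return 3
```

Calibrating `rest_rms` and `mvc_rms` per user is what makes the same game playable across patients with very different residual-limb signal strengths.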

    The Vitruvian Baby: Interactive Reformation of Fetal Ultrasound Data to a T-Position

    Get PDF
    Three-dimensional (3D) ultrasound imaging and visualization is often used in medical diagnostics, especially in prenatal screening. Screening the development of the organs of the fetus as well as the overall growth is important to assess possible complications early on. State-of-the-art approaches involve taking standardized measurements to compare them with standardized tables. The measurements are taken in a 2D slice view, where the fetal pose may complicate taking precise measurements. Performing the analysis in a 3D view would enable the viewer to better discriminate between artefacts and representative information. Making data comparable between different investigations and patients is a goal in medical imaging techniques and is often achieved by standardization, as is done in magnetic resonance imaging (MRI). With this paper, we introduce a novel approach to provide a standardization method for 3D ultrasound fetal screenings. Our approach is called "The Vitruvian Baby" and incorporates a complete pipeline for standardized measuring in fetal 3D ultrasound. The input of the method is a 3D ultrasound screening of a fetus, and the output is the fetus in a standardized T-pose. In this pose, taking measurements is easier and comparison of different fetuses is possible. In addition to the transformation of the 3D ultrasound data, we create an abstract representation of the fetus based on accurate measurements. We demonstrate the accuracy of our approach on simulated data where the ground truth is known.
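The core of such a reformation is rotating body parts about their joints until each limb points in a canonical direction. A minimal 2D stand-in for that idea (the paper operates on 3D volume data; the function name and interface here are purely illustrative) could look like this:

```python
import math

def align_segment(points, joint, tip, target_angle=0.0):
    """Rotate a point set about points[joint] so that the vector from
    joint to tip points at target_angle (radians), e.g. 0.0 for a
    horizontal arm in a T-pose. points is a list of (x, y) tuples."""
    jx, jy = points[joint]
    tx, ty = points[tip]
    current = math.atan2(ty - jy, tx - jx)
    d = target_angle - current
    cos_d, sin_d = math.cos(d), math.sin(d)
    out = []
    for x, y in points:
        # Rigid rotation about the joint: translate, rotate, translate back.
        rx, ry = x - jx, y - jy
        out.append((jx + rx * cos_d - ry * sin_d,
                    jy + rx * sin_d + ry * cos_d))
    return out
```

Because every transform is rigid, distances between points on the same segment are preserved, which is what makes standardized measurements on the reformed pose meaningful.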

    ScrollyVis: Interactive visual authoring of guided dynamic narratives for scientific scrollytelling

    Full text link
    Visual stories are an effective and powerful tool to convey specific information to a diverse public. Scrollytelling is a recent visual storytelling technique extensively used on the web, where content appears or changes as users scroll up or down a page. By employing the familiar gesture of scrolling as its primary interaction mechanism, it provides users with a sense of control, exploration, and discoverability while still offering a simple and intuitive interface. In this paper, we present a novel approach for authoring, editing, and presenting data-driven scientific narratives using scrollytelling. Our method flexibly integrates common sources such as images, text, and video, but also supports more specialized visualization techniques such as interactive maps as well as scalar field and mesh data visualizations. We show that scrolling navigation can be used to traverse dynamic narratives and demonstrate how it can be combined with interactive parameter exploration. The resulting system consists of an extensible web-based authoring tool capable of exporting stand-alone stories that can be hosted on any web server. We demonstrate the power and utility of our approach with case studies from several diverse scientific fields and with a user study including 12 participants of diverse professional backgrounds. Furthermore, an expert in creating interactive articles assessed the usefulness of our approach and the quality of the created stories.
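At its simplest, scroll-driven navigation reduces to mapping the current scroll offset to the story step that should be active. The following sketch shows that mapping in isolation (the function and parameter names are hypothetical, not the ScrollyVis API):

```python
def active_step(scroll_y, step_offsets):
    """Return the index of the story step active at a given vertical
    scroll position. step_offsets holds the page offsets (in pixels)
    at which each step begins, in ascending order."""
    active = 0
    for i, offset in enumerate(step_offsets):
        if scroll_y >= offset:
            active = i
    return active
```

In a real web implementation this lookup would typically be event-driven rather than polled, but the invariant is the same: the last step whose offset the reader has scrolled past is the one shown.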