9 research outputs found

    How different augmented reality visualizations for drilling affect trajectory deviation, visual attention, and user experience

    Full text link
PURPOSE Previous work has demonstrated the high accuracy of augmented reality (AR) head-mounted displays for pedicle screw placement in spinal fusion surgery. An important question that remains unanswered is how pedicle screw trajectories should be visualized in AR to best assist the surgeon. METHODOLOGY We compared five AR visualizations displaying the drill trajectory via Microsoft HoloLens 2 with different configurations of abstraction level (abstract or anatomical), position (overlay or small offset), and dimensionality (2D or 3D) against standard navigation on an external screen. We tested these visualizations in a study with 4 expert surgeons and 10 novices (residents in orthopedic surgery) on lumbar spine models covered with Plasticine. We assessed trajectory deviations (°) from the preoperative plan, dwell times (%) on areas of interest, and the user experience. RESULTS Two AR visualizations resulted in significantly lower trajectory deviations (mixed-effects ANOVA, p<0.0001 and p<0.05) compared to standard navigation, whereas no significant differences were found between participant groups. The best ratings for ease of use and cognitive load were obtained with an abstract visualization displayed peripherally around the entry point and with a 3D anatomical visualization displayed with some offset. For visualizations displayed with some offset, participants spent on average only 20% of their time examining the entry point area. CONCLUSION Our results show that real-time feedback provided by navigation can level task performance between experts and novices, and that the design of a visualization has a significant impact on task performance, visual attention, and user experience. Both abstract and anatomical visualizations can be suitable for navigation when not directly occluding the execution area. Our results shed light on how AR visualizations guide visual attention and on the benefits of anchoring information in the peripheral field around the entry point.
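
    The trajectory deviation above is the angle in degrees between the planned and the executed drill axis. A minimal sketch of how such a deviation could be computed, assuming both trajectories are available as 3D direction vectors (a representation assumed here for illustration, not taken from the paper):

```python
import numpy as np

def trajectory_deviation_deg(planned, executed):
    """Angle in degrees between the planned and the executed drill axis."""
    p = planned / np.linalg.norm(planned)
    e = executed / np.linalg.norm(executed)
    # Clip guards against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(np.dot(p, e), -1.0, 1.0))))

# Example: an axis tilted by 2 degrees relative to a vertical planned axis.
planned = np.array([0.0, 0.0, 1.0])
executed = np.array([0.0, np.sin(np.radians(2.0)), np.cos(np.radians(2.0))])
print(trajectory_deviation_deg(planned, executed))  # ~2.0
```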

    SAMHD1 is a biomarker for cytarabine response and a therapeutic target in acute myeloid leukemia.

    Get PDF
The nucleoside analog cytarabine (Ara-C) is an essential component of primary and salvage chemotherapy regimens for acute myeloid leukemia (AML). After cellular uptake, Ara-C is converted into its therapeutically active triphosphate metabolite, Ara-CTP, which exerts antileukemic effects primarily by inhibiting DNA synthesis in proliferating cells. Currently, a substantial fraction of patients with AML fail to respond effectively to Ara-C therapy, and reliable biomarkers for predicting the therapeutic response to Ara-C are lacking. SAMHD1 is a deoxynucleoside triphosphate (dNTP) triphosphohydrolase that cleaves physiological dNTPs into deoxyribonucleosides and inorganic triphosphate. Although it has been postulated that SAMHD1 sensitizes cancer cells to nucleoside-analog derivatives through the depletion of competing dNTPs, we show here that SAMHD1 reduces Ara-C cytotoxicity in AML cells. Mechanistically, dGTP-activated SAMHD1 hydrolyzes Ara-CTP, which results in a drastic reduction of Ara-CTP in leukemic cells. Loss of SAMHD1 activity, whether through genetic depletion, mutational inactivation of its triphosphohydrolase activity, or proteasomal degradation using specialized virus-like particles, potentiates the cytotoxicity of Ara-C in AML cells. In mouse models of retroviral AML transplantation, as well as in retrospective analyses of adult patients with AML, the response to Ara-C-containing therapy was inversely correlated with SAMHD1 expression. These results identify SAMHD1 as a potential biomarker for the stratification of patients with AML who might best respond to Ara-C-based therapy and as a target for treating Ara-C-refractory AML.

    Sorafenib paradoxically activates NK cell effector function in polyclonal expanded NK cells in a time- and dose dependent manner

    No full text
Paradoxically, sorafenib activates the MAPK/ERK signal transduction pathway in human polyclonally expanded NK cells and enhances their effector functions in a time- and dose-dependent manner. Polyclonally expanded NK cells from healthy human blood donors were treated with sorafenib or the specific RAF inhibitor ZM336372 at varying concentrations and exposure durations, and the effects on NK cell effector functions and signal transduction were examined. Paradoxically, within a certain concentration range, treatment with sorafenib as well as with ZM336372 led to a time-dependent enhancement of effector functions. These effects were associated with increased phosphorylation levels of ERK1/2 and CRAF, whereas the phosphorylation level of AKT remained unaltered.

    Object-gaze distance: Quantifying near-peripheral gaze behavior in real-world applications

    No full text
Eye tracking (ET) has been shown to reveal the wearer’s cognitive processes through measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take into account the wearer’s use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area of interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. Based on an evaluation of two AOIs in a real surgical procedure, the results show a considerable increase in interpretable fixation data, from 23.8% to 78.3% for the AOI “screw” and from 4.5% to 67.2% for the AOI “screwdriver”, when the near-peripheral field of vision is incorporated. Additionally, the evaluation of a multi-OGD time series representation has shown the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.
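
    A minimal sketch of the OGD computation described above, assuming the AOI detector yields a boolean segmentation mask per video frame; the function and variable names are illustrative assumptions, not the authors’ implementation:

```python
import numpy as np

def object_gaze_distance(mask, gaze_xy):
    """Minimal 2D Euclidean pixel distance from the gaze point to an AOI mask.

    mask: boolean array of shape (H, W), True where the detected object lies.
    gaze_xy: gaze point in pixel coordinates (x, y).
    Returns 0.0 if the gaze point falls on the object (a classic fixation hit).
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return float("inf")  # object not detected in this frame
    gx, gy = gaze_xy
    return float(np.sqrt((xs - gx) ** 2 + (ys - gy) ** 2).min())

# One OGD value per frame and AOI yields the continuous gaze-based time series.
mask = np.zeros((720, 1280), dtype=bool)
mask[300:400, 600:700] = True                      # hypothetical detected AOI
print(object_gaze_distance(mask, (650.0, 350.0)))  # 0.0 -> gaze on the object
print(object_gaze_distance(mask, (100.0, 350.0)))  # 500.0 -> near-peripheral
```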

    Automating areas of interest analysis in mobile eye tracking experiments based on machine learning

    Get PDF
For an in-depth, AOI-based analysis of mobile eye tracking data, a preceding gaze assignment step is indispensable. Current solutions such as manual gaze mapping or marker-based approaches are tedious and not suitable for applications involving the manipulation of tangible objects. This makes mobile eye tracking studies with several hours of recording difficult to analyse quantitatively. We introduce a new machine learning-based algorithm, computational Gaze-Object Mapping (cGOM), that automatically maps gaze data onto the respective AOIs. cGOM extends state-of-the-art object detection and segmentation by Mask R-CNN with a gaze mapping feature. The new algorithm’s performance is validated against a manual fixation-by-fixation mapping, which is taken as ground truth, in terms of true positive rate (TPR), true negative rate (TNR) and efficiency. Using only 72 training images with 264 labelled object representations, cGOM reaches a TPR of approx. 80% and a TNR of 85% compared to the manual mapping. The break-even point is reached at 2 hours of eye tracking recording for the total procedure, or at 1 hour when considering human working time only. Together with the real-time capability of the mapping process after completed training, even hours of eye tracking recordings can be evaluated efficiently.
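
    A hedged sketch of the two steps the abstract describes: assigning fixations to detected AOI masks, and validating the automatic mapping against a manual ground truth via TPR/TNR. The data layout (a dict of boolean masks, per-fixation label lists) is an assumption for illustration, not the cGOM API:

```python
def map_fixation(masks, gaze_xy):
    """Assign a fixation to the first AOI whose segmentation mask contains it."""
    x, y = gaze_xy
    for label, mask in masks.items():
        if mask[int(y), int(x)]:
            return label
    return None  # fixation on the background, no AOI hit

def tpr_tnr(auto, manual, aoi):
    """TPR and TNR of the automatic mapping against the manual ground truth."""
    tp = sum(a == aoi and m == aoi for a, m in zip(auto, manual))
    tn = sum(a != aoi and m != aoi for a, m in zip(auto, manual))
    pos = sum(m == aoi for m in manual)
    neg = len(manual) - pos
    return tp / pos if pos else 0.0, tn / neg if neg else 0.0

# Hypothetical per-fixation labels: automatic mapping vs. manual ground truth.
auto = ["screw", None, "screwdriver", "screw", None]
manual = ["screw", None, "screw", "screw", "screwdriver"]
print(tpr_tnr(auto, manual, "screw"))  # (0.666..., 1.0) for this toy example
```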

    How different augmented reality visualizations for drilling affect trajectory deviation, visual attention, and user experience

    No full text
Purpose Previous work has demonstrated the high accuracy of augmented reality (AR) head-mounted displays for pedicle screw placement in spinal fusion surgery. An important question that remains unanswered is how pedicle screw trajectories should be visualized in AR to best assist the surgeon. Methodology We compared five AR visualizations displaying the drill trajectory via Microsoft HoloLens 2 with different configurations of abstraction level (abstract or anatomical), position (overlay or small offset), and dimensionality (2D or 3D) against standard navigation on an external screen. We tested these visualizations in a study with 4 expert surgeons and 10 novices (residents in orthopedic surgery) on lumbar spine models covered with Plasticine. We assessed trajectory deviations (°) from the preoperative plan, dwell times (%) on areas of interest, and the user experience. Results Two AR visualizations resulted in significantly lower trajectory deviations (mixed-effects ANOVA, p<0.0001 and p<0.05) compared to standard navigation, whereas no significant differences were found between participant groups. The best ratings for ease of use and cognitive load were obtained with an abstract visualization displayed peripherally around the entry point and with a 3D anatomical visualization displayed with some offset. For visualizations displayed with some offset, participants spent on average only 20% of their time examining the entry point area. Conclusion Our results show that real-time feedback provided by navigation can level task performance between experts and novices, and that the design of a visualization has a significant impact on task performance, visual attention, and user experience. Both abstract and anatomical visualizations can be suitable for navigation when not directly occluding the execution area. Our results shed light on how AR visualizations guide visual attention and on the benefits of anchoring information in the peripheral field around the entry point.

    Comparing the effectiveness of augmented reality-based and conventional instructions during single ECMO cannulation training

    No full text
Purpose Effective training of extracorporeal membrane oxygenation (ECMO) cannulation is key to fighting the persistently high mortality rate of ECMO interventions. Though augmented reality (AR) is a promising technology for improving information display, only a small percentage of AR projects have addressed training procedures. The present study investigates the potential benefits of AR-based, contextual instructions for ECMO cannulation training as compared to instructions used during conventional training at a university hospital. Methodology An AR step-by-step guide was developed for the Microsoft HoloLens 2 that combines text, images, and videos from the conventional training program with simple 3D models. A study was conducted with 21 medical students performing two surgical procedures on a simulator. Participants were divided into two groups: one group used the conventional instructions for the first procedure and the AR instructions for the second, while the other group used the instructions in reverse order. Training times, a detailed error protocol, and a standardized user experience questionnaire (UEQ) were evaluated. Results AR-based execution was associated with slightly higher training times and with significantly fewer errors for the more complex second procedure (p<0.05, Mann–Whitney U). These differences were most pronounced for knowledge-related errors, where the number of errors was reduced by 66%. AR instructions also led to significantly better ratings on 5 out of the 6 scales of the UEQ, pointing to higher perceived clarity of information, faster information acquisition, and greater stimulation. Conclusion The results extend previous research on AR instructions to ECMO cannulation training, indicating its high potential to improve training outcomes as a result of better information acquisition by participants during task execution. Future work should investigate how better performance in a single training session relates to better performance in the long run.
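
    The reported group comparison could be reproduced on one’s own error protocol roughly as follows; the error counts below are invented placeholders, not study data:

```python
from scipy.stats import mannwhitneyu

# Invented placeholder error counts for the second, more complex procedure
# (10 vs. 11 participants, mirroring the two-group split of 21 students).
errors_conventional = [7, 9, 6, 8, 10, 7, 9, 8, 6, 9]
errors_ar = [3, 2, 4, 3, 2, 5, 3, 2, 4, 3, 2]

u, p = mannwhitneyu(errors_conventional, errors_ar, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")  # p < 0.05 would mirror the reported effect
```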