32 research outputs found

    Human-Centered Modeling of Product Development Systems with Particular Consideration of the Synthesis and Analysis of Dynamic Systems of Objectives

    Get PDF
    This thesis develops the extended ZHO model, a human-centered explanatory model that places the developer, as a thinking and acting human being, at the center of an uncertainty-laden, iteratively progressing product development process.

    POV-Surgery: A Dataset for Egocentric Hand and Tool Pose Estimation During Surgical Activities

    Full text link
    The surgical usage of Mixed Reality (MR) has received growing attention in areas such as surgical navigation systems, skill assessment, and robot-assisted surgeries. For such applications, pose estimation for hands and surgical instruments from an egocentric perspective is a fundamental task and has been studied extensively in the computer vision field in recent years. However, the development of this field has been impeded by a lack of datasets, especially in the surgical field, where bloody gloves and reflective metallic tools make it hard to obtain 3D pose annotations for hands and objects using conventional methods. To address this issue, we propose POV-Surgery, a large-scale, synthetic, egocentric dataset focusing on pose estimation for hands with different surgical gloves and three orthopedic surgical instruments, namely scalpel, friem, and diskplacer. Our dataset consists of 53 sequences and 88,329 frames, featuring high-resolution RGB-D video streams with activity annotations, accurate 3D and 2D annotations for hand-object pose, and 2D hand-object segmentation masks. We fine-tune the current SOTA methods on POV-Surgery and further demonstrate their generalizability to real-life cases with surgical gloves and tools through extensive evaluations. The code and the dataset are publicly available at batfacewayne.github.io/POV_Surgery_io/.
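    At the core of such egocentric pose annotations is projecting 3D hand and tool joints into the 2D image plane with the camera intrinsics. The sketch below illustrates this with a standard pinhole model; the intrinsic values and joint coordinates are hypothetical placeholders, not taken from POV-Surgery.

    ```python
    import numpy as np

    def project_points(points_3d, K):
        """Project Nx3 camera-space points to Nx2 pixel coordinates
        with a pinhole model (no lens distortion)."""
        p = points_3d @ K.T          # apply intrinsic matrix
        return p[:, :2] / p[:, 2:3]  # perspective divide by depth

    # Hypothetical intrinsics for a 1920x1080 egocentric camera.
    K = np.array([[1000.0,    0.0, 960.0],
                  [   0.0, 1000.0, 540.0],
                  [   0.0,    0.0,   1.0]])

    # One hand joint 0.5 m in front of the camera, 0.1 m to the right.
    joints = np.array([[0.1, 0.0, 0.5]])
    print(project_points(joints, K))  # -> [[1160.  540.]]
    ```

    The same projection, applied per frame to every annotated joint, yields the 2D keypoints that accompany the 3D ground truth in datasets of this kind.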

    MOBILE EYE TRACKING IN ENGINEERING DESIGN EDUCATION

    Get PDF
    A central part of basic engineering design education aims to impart profound knowledge of how machine elements are designed and to build an understanding of how they work in detail within a technical product. In this context, a basic challenge lies in teaching students to analyse complex systems that are usually characterized by a high number of interacting parts and interfering movements. In our basic engineering design education at (blended for review), we recognized that students who perform well in functional analysis are able to gain more insights from analysing machine systems than low-performing students. Indeed, high performers not only use previous knowledge effectively, they are also more successful in identifying relevant parts. This observation raises two questions. (Q1) Which previous knowledge is required to single-handedly be able to fully understand how a specific system works? and (Q2) How can we support students in drawing special attention to the relevant parts and the areas revealing their role within the system? In order to answer these questions, we conduct a mobile eye tracking study, including concurrent reporting. Students are asked to analyse a small but difficult-to-understand machine system and to explain how it works. This paper highlights the differences between successful and non-successful functional analysis and discusses them in the context of the two questions presented above. The two main results of this paper are that successful students had a wider knowledge base of mechanical systems and that analysis strategies like “following the flow line of force” serve as a guide rail. Both helped them to identify single subfunctions and to evaluate their importance.

    How different augmented reality visualizations for drilling affect trajectory deviation, visual attention, and user experience

    Full text link
    PURPOSE Previous work has demonstrated the high accuracy of augmented reality (AR) head-mounted displays for pedicle screw placement in spinal fusion surgery. An important question that remains unanswered is how pedicle screw trajectories should be visualized in AR to best assist the surgeon. METHODOLOGY We compared five AR visualizations displaying the drill trajectory via Microsoft HoloLens 2 with different configurations of abstraction level (abstract or anatomical), position (overlay or small offset), and dimensionality (2D or 3D) against standard navigation on an external screen. We tested these visualizations in a study with 4 expert surgeons and 10 novices (residents in orthopedic surgery) on lumbar spine models covered by Plasticine. We assessed trajectory deviations ([Formula: see text]) from the preoperative plan, dwell times (%) on areas of interest, and the user experience. RESULTS Two AR visualizations resulted in significantly lower trajectory deviations (mixed-effects ANOVA, p<0.0001 and p<0.05) compared to standard navigation, whereas no significant differences were found between participant groups. The best ratings for ease of use and cognitive load were obtained with an abstract visualization displayed peripherally around the entry point and with a 3D anatomical visualization displayed with some offset. For visualizations displayed with some offset, participants spent on average only 20% of their time examining the entry point area. CONCLUSION Our results show that real-time feedback provided by navigation can level task performance between experts and novices, and that the design of a visualization has a significant impact on task performance, visual attention, and user experience. Both abstract and anatomical visualizations can be suitable for navigation when not directly occluding the execution area. Our results shed light on how AR visualizations guide visual attention and the benefits of anchoring information in the peripheral field around the entry point.
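    The trajectory-deviation outcome (the formula is elided in this abstract) is commonly computed as the angle between the planned and the executed drill axis. A minimal sketch of that computation, assuming both axes are given as 3D direction vectors:

    ```python
    import numpy as np

    def trajectory_deviation_deg(planned, executed):
        """Angle in degrees between the planned and executed drill axes,
        given as 3D direction vectors (not necessarily unit length)."""
        a = planned / np.linalg.norm(planned)
        b = executed / np.linalg.norm(executed)
        # Clip guards against floating-point values just outside [-1, 1].
        return float(np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0))))

    # An executed axis tilted by 0.1 in y relative to a straight-down plan.
    print(trajectory_deviation_deg(np.array([0.0, 0.0, 1.0]),
                                   np.array([0.0, 0.1, 1.0])))
    ```

    Note that this captures only angular deviation; studies of this kind often also report the translational offset of the entry point, which would be measured separately.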

    Automatized Self-Supervised Learning for Skin Lesion Screening

    Full text link
    The incidence rates of melanoma, the deadliest form of skin cancer, have been increasing steadily worldwide, presenting a significant challenge to dermatologists. Early detection of melanoma is crucial for improving patient survival rates, but identifying suspicious lesions through ugly duckling (UD) screening, the current method used for skin cancer screening, can be challenging and often requires expertise in pigmented lesions. To address these challenges and improve patient outcomes, an artificial intelligence (AI) decision support tool was developed to assist dermatologists in identifying UDs from wide-field patient images. The tool uses a state-of-the-art object detection algorithm to identify and extract all skin lesions from patient images, which are then sorted by suspiciousness using a self-supervised AI algorithm. A clinical validation study was conducted to evaluate the tool's performance, which demonstrated an average sensitivity of 93% for the top-10 AI-identified UDs on skin lesions selected by the majority of experts in pigmented skin lesions. The study also found that dermatologists' confidence increased, and the average majority agreement with the top-10 AI-identified UDs improved to 100% when assisted by AI. The development of this AI decision support tool aims to address the shortage of specialists, enable at-risk patients to receive faster consultations, and further the understanding of AI-assisted screening. The tool's automation can assist dermatologists in identifying suspicious lesions and provide a more objective assessment, reducing subjectivity in the screening process. The future steps for this project include expanding the dataset to include histologically confirmed melanoma cases and increasing the number of participants for clinical validation to strengthen the tool's reliability and adapt it for real-world consultation.
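    The reported top-10 sensitivity can be read as: of the lesions experts flagged as ugly ducklings, what fraction appears among the 10 lesions the model ranks as most suspicious. A minimal sketch of that metric, with an illustrative toy example (the scores and labels are invented, not study data):

    ```python
    def top_k_sensitivity(scores, is_ud, k=10):
        """Fraction of expert-labelled ugly-duckling lesions that appear
        among the k lesions the model scores as most suspicious."""
        ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        top_k = set(ranked[:k])
        ud_indices = [i for i, flag in enumerate(is_ud) if flag]
        hits = sum(1 for i in ud_indices if i in top_k)
        return hits / len(ud_indices)

    # Toy example: four lesions, two expert-flagged UDs, top-2 considered.
    scores = [0.9, 0.1, 0.8, 0.2]               # model suspiciousness
    is_ud  = [True, False, False, True]          # expert UD labels
    print(top_k_sensitivity(scores, is_ud, k=2))  # -> 0.5
    ```

    Averaging this per-patient value across the validation cohort yields the kind of average sensitivity figure quoted in the abstract.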

    Where do ICU trainees really look? An eye-tracking analysis of gaze patterns during central venous catheter insertion

    Get PDF
    Background: There is limited knowledge about the gaze patterns of intensive care unit (ICU) trainee doctors during the insertion of a central venous catheter (CVC). The primary objective of this study was to examine visual patterns exhibited by ICU trainee doctors during CVC insertion. Additionally, the study investigated whether differences in gaze patterns could be identified between more and less experienced trainee doctors. Methods: In a real-life, prospective observational study conducted at the interdisciplinary ICU at the University Hospital Zurich, Switzerland, ICU trainee doctors underwent eye-tracking during CVC insertion in a real ICU patient. Using mixed-effects model analyses, the primary outcomes were dwell time, first fixation duration, revisits, fixation count, and average fixation time on different areas of interest (AOI). Secondary outcomes were the above eye-tracking outcome measures stratified according to the experience level of participants. Results: Eighteen participants were included, of whom 10 were inexperienced and eight more experienced. Dwell time was highest for the CVC preparation table (p = 0.02), the jugular vein on the ultrasound image (p < 0.001), and the cervical puncture location (p < 0.001). Concerning experience, dwell time and revisits on the jugular vein on the ultrasound image (p = 0.02 and p = 0.04, respectively) and the cervical puncture location (p = 0.004 and p = 0.01, respectively) were decreased in more experienced ICU trainees. Conclusions: Various AOIs have distinct significance for ICU trainee doctors during CVC insertion. Experienced participants exhibited different gaze behavior, requiring less attention for preparation and handling tasks, emphasizing the importance of hand-eye coordination.
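    Two of the outcomes above, dwell time and revisits, can be derived from a stream of gaze samples labelled with the AOI they hit. The sketch below shows one plausible way to compute them; the sample rate, AOI names, and the convention that a revisit is any re-entry after the first visit are assumptions, not details from the paper.

    ```python
    def aoi_metrics(gaze_aois, dt_ms=4):
        """Per-AOI dwell time (% of recording) and revisit count from a
        sequence of per-sample AOI labels (None = gaze outside all AOIs).
        A revisit is any re-entry into an AOI after the first visit."""
        dwell, visits, prev = {}, {}, None
        for aoi in gaze_aois:
            if aoi is not None:
                dwell[aoi] = dwell.get(aoi, 0) + dt_ms
                if aoi != prev:               # entry into this AOI
                    visits[aoi] = visits.get(aoi, 0) + 1
            prev = aoi
        total_ms = len(gaze_aois) * dt_ms
        dwell_pct = {a: 100.0 * t / total_ms for a, t in dwell.items()}
        revisits = {a: v - 1 for a, v in visits.items()}
        return dwell_pct, revisits

    # Toy stream at 250 Hz: vein, vein, elsewhere, vein again, table.
    dwell_pct, revisits = aoi_metrics(["vein", "vein", None, "vein", "table"])
    print(dwell_pct, revisits)  # -> {'vein': 60.0, 'table': 20.0} {'vein': 1, 'table': 0}
    ```

    Real pipelines would typically run a fixation filter before AOI mapping rather than labelling raw samples, but the aggregation step is the same.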

    Increasing the Power Density of E-Motors by Innovative Winding Design

    Get PDF
    In the future, the sustainable use of renewable energy is becoming more important. About 80% of the total world energy demand is currently derived from fossil fuels. Mobility currently accounts for over 50% of total global energy use. Electric and hybrid vehicles are one approach to efficient and sustainable energy use. An important component powering these cars is the electric motor. The economic and efficient design of an electric motor requires knowledge of the exact thermal conductivities of all components. A parameter that is difficult to determine and to improve is the thermal conductivity of the winding. This paper presents measurement results of an innovative motor winding with an improved thermal conductivity.

    Genau hinschauen (Looking Closely)

    Full text link
    Drug names that look or sound alike can lead to mix-ups when preparing and administering medications. The risk of medication errors is particularly high in intensive care. Syringe labels marked with capital letters (Tall Man letters) help to prevent such errors, as shown by a study in an intensive care unit of the University Hospital Zurich.
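    Tall Man lettering capitalizes the distinguishing part of look-alike drug names. A minimal sketch of a label-formatting lookup; the mapping table here is a small illustrative excerpt in the style of the ISMP list, not the set of names used in the study.

    ```python
    # Illustrative look-alike pairs and their tall-man forms (excerpt in
    # the style of the ISMP list of confused drug names).
    TALL_MAN = {
        "dopamine":    "DOPamine",
        "dobutamine":  "DOBUTamine",
        "ephedrine":   "ePHEDrine",
        "epinephrine": "EPINEPHrine",
    }

    def label_text(drug_name):
        """Return the syringe-label spelling for a drug, falling back to
        the plain name when no tall-man form is defined."""
        return TALL_MAN.get(drug_name.lower(), drug_name)

    print(label_text("dopamine"))  # -> DOPamine
    print(label_text("fentanyl"))  # -> fentanyl
    ```

    Keeping the mapping in one table mirrors how label-printing software can apply the convention consistently across a unit.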

    Measuring teamwork for training in healthcare using eye tracking and pose estimation

    No full text
    Teamwork is critical for safe patient care. Healthcare teams typically train teamwork in simulated clinical situations, which require the ability to measure teamwork via behavior observation. However, the required observations are prone to human biases and impose significant cognitive load even on trained instructors. In this observational study we explored how eye tracking and pose estimation, as two minimally invasive video-based technologies, may measure teamwork during simulation-based teamwork training in healthcare. Mobile eye tracking, measuring where participants look, and multi-person pose estimation, measuring 3D human body and joint positions, were used to record 64 third-year medical students who completed a simulated handover case in teams of four. On one hand, we processed the recorded data into the eye contact metric, based on eye tracking and relevant for situational awareness and communication patterns. On the other hand, the distance to patient metric was processed, based on multi-person pose estimation and relevant for team positioning and coordination. After successful data recording, we successfully processed the raw videos into specific teamwork metrics. The average eye contact time was 6.46 s [min 0 s – max 28.01 s], while the average distance to the patient was 1.01 m [min 0.32 m – max 1.6 m]. Both metrics varied significantly between teams and simulated roles of participants (p < 0.001). With these objective, continuous, and reliable metrics we created visualizations illustrating the teams’ interactions. Future research is necessary to generalize our findings and to examine how they may complement existing methods, support instructors, and contribute to the quality of teamwork training in healthcare.