Kontextsensitivität für den Operationssaal der Zukunft
The operating room of the future is a topic of high interest. This thesis, among the first in the recently defined field of Surgical Data Science, examines three major topics for automated context awareness in the OR of the future: improved surgical workflow analysis, the newly developed event impact factors, and, as an application combining these and other concepts, the unified surgical display.
TeCNO: Surgical Phase Recognition with Multi-Stage Temporal Convolutional Networks
Automatic surgical phase recognition is a challenging and crucial task with
the potential to improve patient safety and become an integral part of
intra-operative decision-support systems. In this paper, we propose, for the
first time in workflow analysis, a Multi-Stage Temporal Convolutional Network
(MS-TCN) that performs hierarchical prediction refinement for surgical phase
recognition. Causal, dilated convolutions allow for a large receptive field and
online inference with smooth predictions even during ambiguous transitions. Our
method is thoroughly evaluated on two datasets of laparoscopic cholecystectomy
videos with and without the use of additional surgical tool information.
Outperforming various state-of-the-art LSTM approaches, we verify the
suitability of the proposed causal MS-TCN for surgical phase recognition.
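The key property of the causal, dilated convolutions described above is that the output at time t depends only on frames at times ≤ t, which is what enables online inference during a live procedure. A minimal sketch of a single causal dilated 1-D convolution (plain Python, not the paper's actual TeCNO implementation; the function name and list-based signature are illustrative):

```python
def causal_dilated_conv1d(x, kernel, dilation):
    """Causal dilated 1-D convolution: the output at time t uses only
    inputs at times <= t, so it supports online (streaming) inference.
    The effective receptive field grows with the dilation factor."""
    k = len(kernel)
    pad = (k - 1) * dilation  # left-pad only, so no future frames leak in
    padded = [0.0] * pad + list(x)
    out = []
    for t in range(len(x)):
        # taps look back at t, t - dilation, t - 2*dilation, ...
        out.append(sum(kernel[j] * padded[pad + t - j * dilation]
                       for j in range(k)))
    return out
```

Stacking such layers with exponentially increasing dilations is how multi-stage TCNs obtain a large receptive field over long surgical videos without recurrent connections.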
Robust Surgical Tools Detection in Endoscopic Videos with Noisy Data
Over the past few years, surgical data science has attracted substantial
interest from the machine learning (ML) community. Various studies have
demonstrated the efficacy of emerging ML techniques in analysing surgical data,
particularly recordings of procedures, for digitizing clinical and non-clinical
functions like preoperative planning, context-aware decision-making, and
operating skill assessment. However, this field is still in its infancy and
lacks representative, well-annotated datasets for training robust models in
intermediate ML tasks. Also, existing datasets suffer from inaccurate labels,
hindering the development of reliable models. In this paper, we propose a
systematic methodology for developing robust models for surgical tool detection
using noisy data. Our methodology introduces two key innovations: (1) an
intelligent active learning strategy for minimal dataset identification and
label correction by human experts; and (2) an assembling strategy for a
student-teacher model-based self-training framework to achieve the robust
classification of 14 surgical tools in a semi-supervised fashion. Furthermore,
we employ weighted data loaders to handle difficult class labels and address
class imbalance issues. The proposed methodology achieves an average F1-score
of 85.88% for the ensemble model-based self-training with class weights, and
80.88% without class weights for noisy labels. Our proposed method also
significantly outperforms existing approaches, demonstrating its
effectiveness.
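The class weights mentioned above can be derived in several ways; a common choice, shown here as an illustrative sketch (not the paper's exact scheme), is inverse-frequency weighting, where each of the 14 tool classes gets a weight proportional to the reciprocal of its label count:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class weights proportional to 1/frequency, normalized so that
    a perfectly balanced dataset yields a weight of 1.0 for every class.
    Rare classes (e.g. seldom-used tools) receive weights above 1.0."""
    counts = Counter(labels)
    n, n_classes = len(labels), len(counts)
    return {c: n / (n_classes * counts[c]) for c in counts}
```

Such weights can be passed to a weighted sampler or a weighted loss so that rare tool classes contribute proportionally more to training, which is the usual remedy for the class imbalance the abstract describes.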