A Method for Forecasting the Commercial Air Traffic Schedule in the Future
This report presents an integrated set of models that forecast air carriers' future operations when delays due to limited terminal-area capacity are considered. The industry is modeled as a whole, avoiding unnecessary details of competition among the carriers. To develop the schedule outputs, we first present a model that forecasts unconstrained future flight schedules, based on the assumption of rational carrier behavior. We then develop a method to modify the unconstrained schedules, accounting for congestion effects due to limited NAS capacities. Our underlying assumption is that carriers will modify their operations to keep mean delays within certain limits; we estimate values for those limits from changes in planned block times reflected in the OAG. Our method for modifying schedules takes many means of reducing delays into consideration, albeit some of them indirectly. The direct actions include depeaking, operating in off-hours, and reducing operations at hub airports. Indirect actions include using secondary airports, using larger aircraft, and selecting new hub airports, which, we assume, have already been modeled in the FAA's TAF. Users of our suite of models can substitute an alternative forecast for the TAF.
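The core idea of the schedule-modification step (keep mean delay within a tolerated limit by depeaking) can be sketched as follows. This is a minimal illustration only, assuming a simple deterministic hourly queue; the hourly demand profile, capacity, and delay limit are invented for the example, not values from the report.

```python
# Sketch: shift departures out of peak hours until the mean queueing
# delay stays within a carrier-tolerated limit (illustrative values only).

def mean_queue_delay(demand, capacity):
    """Mean delay (hours) under a deterministic queue: flights that
    exceed hourly capacity roll over and wait for the next hour."""
    queue, total_wait = 0, 0.0
    for d in demand:
        queue = max(0, queue + d - capacity)
        total_wait += queue          # each queued flight waits ~1 more hour
    return total_wait / max(1, sum(demand))

def depeak(demand, capacity, delay_limit):
    """Move one flight at a time from the busiest hour to the slackest
    hour until the mean delay falls within the limit."""
    demand = list(demand)
    while mean_queue_delay(demand, capacity) > delay_limit:
        peak = max(range(len(demand)), key=lambda h: demand[h])
        off = min(range(len(demand)), key=lambda h: demand[h])
        if demand[peak] <= capacity:   # nothing left to shift usefully
            break
        demand[peak] -= 1
        demand[off] += 1
    return demand

hourly_demand = [4, 6, 12, 14, 9, 3, 2, 5]   # flights per hour (illustrative)
adjusted = depeak(hourly_demand, capacity=8, delay_limit=0.05)
```

Note that depeaking preserves the total number of operations; it only redistributes them, which matches the report's framing of direct carrier actions.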
Visual-Kinematics Graph Learning for Procedure-agnostic Instrument Tip Segmentation in Robotic Surgeries
Accurate segmentation of the surgical instrument tip is an important task for
enabling downstream applications in robotic surgery, such as surgical skill
assessment, tool-tissue interaction and deformation modeling, and surgical
autonomy. However, this task is very challenging due to the small size of
surgical instrument tips and the significant variance of surgical scenes
across different procedures. Although much effort has been devoted to
vision-based methods, existing segmentation models still suffer from low
robustness and are thus not usable in practice. Fortunately, kinematics data
from the robotic system can provide a reliable prior for instrument location,
which is consistent across surgery types. To make use of such multi-modal
information, we propose a novel visual-kinematics graph learning framework to
accurately segment the instrument tip in various surgical procedures.
Specifically, a graph learning framework is proposed to encode relational
features of instrument parts from both images and kinematics. Next, a
cross-modal contrastive loss is designed to incorporate the robust geometric
prior from kinematics into the image domain for tip segmentation. We have
conducted experiments on a private paired visual-kinematics dataset covering
multiple procedures, i.e., prostatectomy, total mesorectal excision,
fundoplication, and distal gastrectomy on cadaver, as well as distal
gastrectomy on porcine. Leave-one-procedure-out cross-validation demonstrated
that our proposed multi-modal segmentation method significantly outperformed
current image-based state-of-the-art approaches, exceeding them by an average
of 11.2% in Dice score.

Comment: Accepted to IROS 202
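A cross-modal contrastive loss of the kind the abstract describes can be sketched as an InfoNCE-style objective over paired image and kinematics embeddings. This is a generic illustration of the idea, not the paper's actual loss or architecture; the batch size, embedding dimension, and temperature below are assumptions.

```python
import numpy as np

# Sketch: contrastive loss pulling each image embedding toward its paired
# kinematics embedding, pushing it away from all other pairs in the batch.

def info_nce(img_emb, kin_emb, temperature=0.1):
    """Paired rows of img_emb and kin_emb are positives; all other
    cross-modal pairs in the batch serve as negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    kin = kin_emb / np.linalg.norm(kin_emb, axis=1, keepdims=True)
    logits = img @ kin.T / temperature           # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # matched pairs on the diagonal

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
# Well-aligned modalities should yield a lower loss than unrelated ones.
aligned_loss = info_nce(base, base + 0.01 * rng.normal(size=(8, 16)))
random_loss = info_nce(base, rng.normal(size=(8, 16)))
```

The design intuition matches the abstract: because the kinematics prior is consistent across procedures, aligning image features to it transfers that robustness to the visual branch.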
AutoLaparo: A New Dataset of Integrated Multi-tasks for Image-guided Surgical Automation in Laparoscopic Hysterectomy
Computer-assisted minimally invasive surgery has great potential to benefit
modern operating theatres. The video data streamed from the endoscope
provides rich information to support context-awareness for next-generation
intelligent surgical systems. To achieve accurate perception and automatic
manipulation during the procedure, learning-based techniques are a promising
way forward, having enabled advanced image analysis and scene understanding
in recent years. However, learning such models relies heavily on
large-scale, high-quality, multi-task labelled data. This is currently a
bottleneck for the topic, as publicly available datasets are still extremely
limited in the field of CAI. In this paper, we present and release the first
integrated dataset (named AutoLaparo) with multiple image-based perception
tasks to facilitate learning-based automation in hysterectomy surgery. Our
AutoLaparo dataset is developed from full-length videos of entire
hysterectomy procedures. Specifically, three different yet highly correlated
tasks are formulated in the dataset: surgical workflow recognition,
laparoscope motion prediction, and instrument and key anatomy segmentation.
In addition, we provide experimental results with state-of-the-art models as
reference benchmarks for further model development and evaluation on this
dataset. The dataset is available at https://autolaparo.github.io.

Comment: Accepted at MICCAI 202
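The three correlated tasks could be carried together on each sample roughly as follows. This is a hypothetical sketch: the field names, phase labels, and motion labels are invented for illustration, and the actual data format is the one published at https://autolaparo.github.io.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record grouping the three AutoLaparo-style task annotations
# for one video clip. Field names and label values are illustrative only.

@dataclass
class LaparoClip:
    video_id: str
    frame_paths: List[str]                 # ordered frames of the clip
    workflow_phase: Optional[str] = None   # task 1: workflow recognition
    next_motion: Optional[str] = None      # task 2: laparoscope motion prediction
    mask_paths: List[str] = field(default_factory=list)  # task 3: segmentation

    def tasks_annotated(self) -> List[str]:
        """Which of the three tasks this clip carries labels for."""
        tasks = []
        if self.workflow_phase is not None:
            tasks.append("workflow")
        if self.next_motion is not None:
            tasks.append("motion")
        if self.mask_paths:
            tasks.append("segmentation")
        return tasks

clip = LaparoClip(
    video_id="video_01",
    frame_paths=["video_01/000001.jpg", "video_01/000002.jpg"],
    workflow_phase="dissection",   # hypothetical phase label
    next_motion="zoom_in",         # hypothetical motion label
)
```

Keeping all three annotation types on one record reflects the abstract's point that the tasks are highly correlated and meant to be studied jointly.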