Method for robotic motion compensation during PET imaging of mobile subjects
Studies of the human brain during natural activities, such as locomotion,
would benefit from the ability to image deep brain structures during these
activities. While Positron Emission Tomography (PET) can image these
structures, the bulk and weight of current scanners are incompatible with a
wearable device. This has motivated the design of a robotic system
to support a PET imaging system around the subject's head and to move the
system to accommodate natural motion. We report here the design and
experimental evaluation of a prototype robotic system that senses motion of a
subject's head, using parallel string encoders connected between the
robot-supported imaging ring and a helmet worn by the subject. This measurement
is used to robotically move the imaging ring (coarse motion correction) and to
compensate for residual motion during image reconstruction (fine motion
correction). Minimizing latency and measurement error are the key design
goals for coarse and fine motion correction, respectively. The system is
evaluated using recorded human head motions during locomotion, with a mock
imaging system consisting of lasers and cameras, and is shown to provide an
overall system latency of about 80 ms, which is sufficient for coarse motion
correction and collision avoidance, as well as a measurement accuracy of about
0.5 mm for fine motion correction.
Comment: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
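The coarse/fine split described in this abstract can be sketched as a simple proportional tracking step: the robot removes most of the measured head displacement, and whatever remains is logged as a residual for motion-corrected reconstruction. All names, the gain, and the control period below are hypothetical illustrations, not the authors' implementation (the 80 ms period only echoes the reported system latency):

```python
import numpy as np

def coarse_fine_split(head_pos, ring_pos, gain=2.0, dt=0.08):
    """Split measured head motion into a coarse robot command and a
    fine residual for image reconstruction (illustrative sketch).

    head_pos, ring_pos: 3-vectors in a common frame (m).
    gain: proportional tracking gain (1/s), hypothetical value.
    dt:   control period (s); ~80 ms matches the reported latency.
    """
    error = head_pos - ring_pos              # displacement the robot should remove
    ring_cmd = ring_pos + gain * dt * error  # coarse: move ring toward the head
    residual = head_pos - ring_cmd           # fine: left to reconstruction
    return ring_cmd, residual

# A 10 cm head displacement along x: most is commanded away, the rest
# is the residual handed to fine motion correction.
cmd, res = coarse_fine_split(np.array([0.10, 0.0, 0.0]),
                             np.array([0.0, 0.0, 0.0]))
```

A real controller would act on full 6-DOF pose and enforce velocity and collision limits; this only illustrates how one measurement feeds both correction paths.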
Calibration and evaluation of a motion measurement system for PET imaging studies
Positron Emission Tomography (PET) enables functional imaging of deep brain
structures, but the bulk and weight of current systems preclude their use
during many natural human activities, such as locomotion. The proposed
long-term solution is to construct a robotic system that can support an imaging
system surrounding the subject's head, and then move the system to accommodate
natural motion. This requires a system to measure the motion of the head with
respect to the imaging ring, for use by both the robotic system and the image
reconstruction software. We report here the design, calibration, and
experimental evaluation of a parallel string encoder mechanism for sensing this
motion. Our results indicate that with kinematic calibration, the measurement
system can achieve accuracy within 0.5 mm, especially for small motions.
Comment: arXiv admin note: text overlap with arXiv:2311.1786
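The core computation behind a string-encoder measurement system of this kind is recovering position from measured string lengths to known anchor points, i.e. trilateration solved by nonlinear least squares. The sketch below is a minimal single-point Gauss-Newton version under assumed geometry; the actual mechanism in the paper tracks a full 6-DOF helmet pose with several strings and a kinematic calibration:

```python
import numpy as np

def trilaterate(anchors, lengths, x0=None, iters=20):
    """Estimate a point position from string lengths to known anchors
    by Gauss-Newton least squares (illustrative sketch).

    anchors: (n, 3) anchor positions on the imaging ring.
    lengths: (n,) measured string lengths to the tracked point.
    """
    x = np.zeros(3) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)    # predicted lengths
        r = d - lengths                            # length residuals
        J = (x - anchors) / d[:, None]             # Jacobian d(d_i)/dx
        x -= np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
    return x

# Synthetic check: lengths generated from a known point are inverted
# back to that point by the solver.
anchors = np.array([[0.3, 0.0, 0.0], [0.0, 0.3, 0.0],
                    [0.0, 0.0, 0.3], [0.2, 0.2, 0.2]])
truth = np.array([0.05, 0.04, 0.06])
lengths = np.linalg.norm(anchors - truth, axis=1)
est = trilaterate(anchors, lengths)
```

With four or more well-spread anchors the problem is overdetermined, which is what makes sub-millimeter accuracy plausible despite encoder noise.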
GRACE: Online Gesture Recognition for Autonomous Camera-Motion Enhancement in Robot-Assisted Surgery
Camera navigation in minimally invasive surgery has changed significantly since the introduction of robotic assistance. Robotic surgeons are subjected to an increased cognitive workload due to the asynchronous control of tools and camera, which also leads to interruptions in the workflow. Camera motion automation has been proposed as a possible solution, but it still lacks situation awareness. We propose an online surgical Gesture Recognition for Autonomous Camera-motion Enhancement (GRACE) system to introduce situation awareness into autonomous camera navigation. A recurrent neural network is used in combination with a tool tracking system to offer gesture-specific camera motion during a robot-assisted suturing task. GRACE was integrated with a research version of the da Vinci surgical system, and a user study (involving 10 participants) was performed to evaluate the benefits introduced by situation awareness in camera motion, with respect to both a state-of-the-art autonomous system (S) and the current clinical approach (P). Results show that GRACE improves completion time by a median reduction of 18.9 s (8.1%) with respect to S and 65.1 s (21.1%) with respect to P. Workload reduction was also confirmed by a statistically significant difference in the NASA Task Load Index with respect to S (p < 0.05). Reduction of motion sickness, a common issue related to the continuous camera motion of autonomous systems, was assessed by a post-experiment survey (p < 0.01).
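The recognition component the abstract mentions, a recurrent network fed with tool-tracking features, can be sketched as a minimal unrolled RNN that maps a trajectory window to per-gesture scores. Everything here (shapes, weights, the plain tanh cell) is a hypothetical illustration; GRACE's actual architecture and feature set are not specified in the abstract:

```python
import numpy as np

def rnn_gesture_logits(seq, Wx, Wh, Wo):
    """Minimal recurrent classifier over a tool-trajectory sequence
    (illustrative sketch, not GRACE's actual network).

    seq: (T, d) tool-tracking features; returns one logit per gesture.
    """
    h = np.zeros(Wh.shape[0])
    for x in seq:                     # unroll over time steps
        h = np.tanh(Wx @ x + Wh @ h)  # recurrent state update
    return Wo @ h                     # scores for each surgical gesture

# Random weights and inputs, just to exercise the shapes.
rng = np.random.default_rng(0)
T, d, hdim, k = 8, 6, 16, 4           # steps, features, hidden, gestures
Wx = rng.normal(size=(hdim, d)) * 0.1
Wh = rng.normal(size=(hdim, hdim)) * 0.1
Wo = rng.normal(size=(k, hdim)) * 0.1
logits = rnn_gesture_logits(rng.normal(size=(T, d)), Wx, Wh, Wo)
```

In an online setting the argmax over such logits would select the gesture class whose camera-motion behavior the autonomous system should execute next.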
Learning Deep Nets for Gravitational Dynamics with Unknown Disturbance through Physical Knowledge Distillation: Initial Feasibility Study
Learning high-performance deep neural networks for dynamic modeling of high
Degree-Of-Freedom (DOF) robots remains challenging due to sample
complexity. Unknown system disturbances caused by unmodeled dynamics
(such as internal compliance and cables) further exacerbate the problem. In this
paper, a novel framework characterized by both high data efficiency and
disturbance-adapting capability is proposed to address the problem of modeling
gravitational dynamics using deep nets in feedforward gravity compensation
control for high-DOF master manipulators with unknown disturbance. In
particular, Feedforward Deep Neural Networks (FDNNs) are learned from both
prior knowledge of an existing analytical model and observation of the robot
system by Knowledge Distillation (KD). Through extensive experiments in
high-DOF master manipulators with significant disturbance, we show that our
method surpasses a standard Learning-from-Scratch (LfS) approach in terms of
data efficiency and disturbance adaptation. Our initial feasibility study has
demonstrated the potential of outperforming the analytical teacher model as the
training data increases.
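The distillation idea in this abstract, supervising a student network with both an analytical teacher model and measured robot data, can be sketched as a weighted two-term regression loss. The form and the weighting `alpha` below are assumptions for illustration; the paper's exact objective may differ:

```python
import numpy as np

def kd_loss(tau_student, tau_teacher, tau_observed, alpha=0.5):
    """Distillation objective for gravity-compensation torques
    (illustrative sketch, not the paper's exact loss).

    tau_student:  torques predicted by the student deep net.
    tau_teacher:  torques from the analytical teacher model.
    tau_observed: torques measured on the real robot.
    alpha:        hypothetical teacher-vs-data weighting in [0, 1].
    """
    teacher_term = np.mean((tau_student - tau_teacher) ** 2)   # prior knowledge
    data_term = np.mean((tau_student - tau_observed) ** 2)     # disturbance adaptation
    return alpha * teacher_term + (1.0 - alpha) * data_term

# Student halfway between teacher and measurement: both terms contribute.
loss = kd_loss(np.array([1.0, 1.0]), np.zeros(2), np.full(2, 2.0))
```

The teacher term gives the data efficiency (the analytical model can be queried cheaply anywhere in joint space), while the data term lets the student absorb the unmodeled disturbance the teacher cannot represent.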
Fully Immersive Virtual Reality for Skull-base Surgery: Surgical Training and Beyond
Purpose: A virtual reality (VR) system, where surgeons can practice
procedures on virtual anatomies, is a scalable and cost-effective alternative
to cadaveric training. The fully digitized virtual surgeries can also be used
to assess the surgeon's skills using measurements that are otherwise hard to
collect in reality. Thus, we present the Fully Immersive Virtual Reality System
(FIVRS) for skull-base surgery, which combines surgical simulation software
with a high-fidelity hardware setup.
Methods: FIVRS allows surgeons to follow normal clinical workflows inside the
VR environment. FIVRS uses advanced rendering designs and drilling algorithms
for realistic bone ablation. A head-mounted display with ergonomics similar to
that of surgical microscopes is used to improve immersiveness. Extensive
multi-modal data is recorded for post-analysis, including eye gaze, motion,
force, and video of the surgery. A user-friendly interface is also designed to
ease the learning curve of using FIVRS.
Results: We present results from a user study involving surgeons with various
levels of expertise. The preliminary data recorded by FIVRS differentiates
between participants with different levels of expertise, promising future
research on automatic skill assessment. Furthermore, informal feedback from the
study participants about the system's intuitiveness and immersiveness was
positive.
Conclusion: We present FIVRS, a fully immersive VR system for skull-base
surgery. FIVRS features a realistic software simulation coupled with modern
hardware for improved realism. The system is completely open-source and
provides feature-rich data in an industry-standard format.
Comment: IPCAI/IJCARS 202
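The "drilling algorithms for realistic bone ablation" mentioned in the Methods can be illustrated, in heavily simplified form, as removing voxels of a bone volume that fall inside a spherical drill tip. The function, shapes, and the removed-voxel count used as a stand-in for force feedback are all hypothetical; FIVRS's actual simulation is far more sophisticated:

```python
import numpy as np

def drill_voxels(volume, tip, radius, spacing=1.0):
    """Ablate bone voxels inside a spherical drill tip
    (illustrative sketch of one volumetric drilling step).

    volume: 3-D occupancy array modified in place (nonzero = bone).
    tip:    (x, y, z) drill-tip center in voxel coordinates.
    radius: tip radius in world units; spacing: voxel size.
    """
    zz, yy, xx = np.indices(volume.shape)
    dist2 = ((xx - tip[0]) ** 2 + (yy - tip[1]) ** 2
             + (zz - tip[2]) ** 2) * spacing ** 2
    removed = (volume > 0) & (dist2 <= radius ** 2)
    volume[removed] = 0            # ablate voxels inside the tip
    return int(removed.sum())      # e.g. as input to a force/haptics model

# A unit-radius tip centered in a small bone block removes the center
# voxel and its six face neighbors.
vol = np.ones((5, 5, 5))
n_removed = drill_voxels(vol, (2, 2, 2), 1.0)
```

Per-step removal counts like this are one simple way a simulator can derive resistance forces and the multi-modal force recordings the system logs.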