Optical Non-Line-of-Sight Physics-based 3D Human Pose Estimation
We describe a method for 3D human pose estimation from transient images
(i.e., a 3D spatio-temporal histogram of photons) acquired by an optical
non-line-of-sight (NLOS) imaging system. Our method can perceive 3D human pose
the environment by 'looking around corners', using light indirectly reflected by
the environment. We bring together a diverse set of technologies from NLOS
imaging, human pose estimation and deep reinforcement learning to construct an
end-to-end data processing pipeline that converts a raw stream of photon
measurements into a full 3D human pose sequence estimate. Our contributions are
the design of a data representation process that includes (1) a learnable
inverse point spread function (PSF) to convert raw transient images into a deep
feature vector; (2) a neural humanoid control policy conditioned on the
transient image feature and learned from interactions with a physics simulator;
and (3) a data synthesis and augmentation strategy based on depth data that can
be transferred to a real-world NLOS imaging system. Our preliminary experiments
suggest that our method is able to generalize to real-world NLOS measurements to
estimate physically valid 3D human poses.
Comment: CVPR 2020. Video: https://youtu.be/4HFulrdmLE8. Project page:
https://marikoisogawa.github.io/project/nlos_pos
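As a rough illustration of the first contribution only (not the authors' architecture), a learnable inverse PSF can be thought of as a filter applied to the transient volume, whose coefficients would be optimized end-to-end; the kernel, shapes, and pooling below are illustrative assumptions:

```python
import numpy as np

def inverse_psf_features(transient, kernel):
    """Apply an inverse-PSF-style filter to a transient volume
    (an x, y, time-bin histogram of photon counts) and pool it
    into a flat feature vector.

    `kernel` stands in for the learned inverse PSF; here it is just
    a fixed array of the same shape (an illustrative assumption).
    """
    # Convolve with the inverse-PSF kernel via the frequency domain.
    feat = np.real(np.fft.ifftn(np.fft.fftn(transient) * np.fft.fftn(kernel)))
    # Collapse the time axis (max over bins) and flatten to a vector.
    return feat.max(axis=-1).ravel()

# Toy usage: a 16x16 spatial grid with 32 time bins of photon counts.
rng = np.random.default_rng(0)
transient = rng.poisson(2.0, size=(16, 16, 32)).astype(float)
kernel = rng.normal(size=(16, 16, 32))
vec = inverse_psf_features(transient, kernel)
print(vec.shape)  # (256,)
```

In the paper this feature vector then conditions the humanoid control policy; the max-over-time pooling above is one simple choice, not the published one.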
Quantum-inspired computational imaging
Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable and robust data processing, has induced an increase of activity with notable results in the domain of low-light flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and data analysis tools.

Y.A. acknowledges support from the UK Royal Academy of Engineering under the Research Fellowship Scheme (RF201617/16/31). S.McL. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grant EP/J015180/1). V.G. acknowledges support from the U.S. Defense Advanced Research Projects Agency (DARPA) InPho program through U.S. Army Research Office award W911NF-10-1-0404, the U.S. DARPA REVEAL program through contract HR0011-16-C-0030, and the U.S. National Science Foundation through grants 1161413 and 1422034. A.H. acknowledges support from U.S. Army Research Office award W911NF-15-1-0479, U.S. Department of the Air Force grant FA8650-15-D-1845, and U.S. Department of Energy National Nuclear Security Administration grant DE-NA0002534. D.F. acknowledges financial support from the UK Engineering and Physical Sciences Research Council (grants EP/M006514/1 and EP/M01326X/1).
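A concrete flavor of the low-light-flux problem the survey discusses: with only a handful of detected photons per pixel, depth is typically recovered from a sparse time-of-flight histogram. A minimal sketch (matched filtering with the pulse shape; the histogram, pulse, and bin width below are illustrative, not any specific sensor's pipeline):

```python
import numpy as np

def tof_depth_estimate(hist, pulse, bin_ps):
    """Estimate distance from a sparse single-photon time-of-flight
    histogram by matched filtering with the laser pulse shape."""
    hist = np.asarray(hist, dtype=float)
    # Cross-correlate the photon histogram with the pulse template.
    score = np.correlate(hist, pulse, mode="same")
    t_bin = int(np.argmax(score))          # most likely arrival bin
    c = 299_792_458.0                      # speed of light, m/s
    tof_s = t_bin * bin_ps * 1e-12         # bin index -> seconds
    return c * tof_s / 2.0                 # round trip -> distance, metres

# A few signal photons around bin 100, 16 ps bins.
hist = np.zeros(256)
hist[99:102] += [2.0, 6.0, 2.0]
pulse = np.array([1.0, 2.0, 1.0])
print(round(tof_depth_estimate(hist, pulse, bin_ps=16.0), 4))  # 0.2398
```

Real pipelines add background-noise rejection and spatial regularization across pixels; the matched filter is only the classical starting point.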
Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
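Among the optical techniques such a review covers, stereo triangulation is the canonical one: depth follows from disparity between the two laparoscope cameras. A minimal sketch of that relation (the focal length and baseline values are illustrative, not from any specific laparoscope):

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_mm):
    """Classic rectified-stereo relation Z = f * B / d, mapping per-pixel
    disparity (pixels) to depth (mm) for a calibrated stereo pair."""
    d = np.asarray(disparity, dtype=float)
    z = np.full_like(d, np.inf)       # zero disparity -> point at infinity
    valid = d > 0
    z[valid] = focal_px * baseline_mm / d[valid]
    return z

# 20 px disparity, 1000 px focal length, 4 mm baseline -> 200 mm depth.
print(depth_from_disparity([20.0], focal_px=1000.0, baseline_mm=4.0))  # [200.]
```

The difficulty in laparoscopy is not this formula but everything around it: specular, texture-poor tissue makes the disparity estimate itself unreliable, which is why the review's structured-light and hybrid methods exist.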
Neurosurgical Ultrasound Pose Estimation Using Image-Based Registration and Sensor Fusion - A Feasibility Study
Modern neurosurgical procedures often rely on computer-assisted real-time guidance using multiple medical imaging modalities. State-of-the-art commercial products enable the fusion of pre-operative with intra-operative images (e.g., magnetic resonance [MR] with ultrasound [US] images), as well as the on-screen visualization of procedures in progress. In so doing, US images can be employed as a template to which pre-operative images can be registered, to correct for anatomical changes, to provide live-image feedback, and consequently to improve confidence when making resection margin decisions near eloquent regions during tumour surgery.
In spite of the potential for tracked ultrasound to improve many neurosurgical procedures, it is not widely used. State-of-the-art systems are handicapped by optical tracking's need for a consistent line of sight, for tracked rigid bodies that are kept clean and rigidly fixed, and for a calibration workflow. The goal of this work is to improve the value offered by co-registered ultrasound images without the workflow drawbacks of conventional systems. The novel work in this thesis includes: the exploration and development of a GPU-enabled 2D-3D multi-modal registration algorithm based on the existing LC2 metric; and the use of this registration algorithm in the context of a sensor- and image-fusion algorithm.
The work presented here is a motivating step in a vision towards a heterogeneous tracking framework for image-guided interventions, where the knowledge from intra-operative imaging, pre-operative imaging, and (potentially disjoint) wireless sensors in the surgical field is seamlessly integrated for the benefit of the surgeon. The technology described in this thesis, inspired by advances in robot localization, demonstrates how inaccurate pose data from disjoint sources can produce a localization system greater than the sum of its parts.
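The LC2 (Linear Correlation of Linear Combination) metric named above scores how well an ultrasound patch can be explained as a linear combination of the MR intensities and the MR gradient magnitude. A simplified single-patch sketch of that idea (not the thesis's GPU implementation, which evaluates it patch-wise over the whole volume):

```python
import numpy as np

def lc2(us_patch, mr_patch):
    """Single-patch LC2 similarity: fit the ultrasound intensities as
    a*MR + b*|grad MR| + c and return the fraction of ultrasound
    variance explained by the fit (1.0 = perfect)."""
    u = us_patch.ravel().astype(float)
    m = mr_patch.astype(float)
    gy, gx = np.gradient(m)
    g = np.hypot(gx, gy).ravel()                    # MR gradient magnitude
    A = np.column_stack([m.ravel(), g, np.ones_like(u)])
    coef, *_ = np.linalg.lstsq(A, u, rcond=None)    # least-squares fit
    resid = u - A @ coef
    var_u = u.var()
    return 1.0 - resid.var() / var_u if var_u > 0 else 0.0

# A patch that really is a linear combination of the MR patch scores ~1.
mr = np.arange(64, dtype=float).reshape(8, 8)
us = 2.0 * mr + 5.0
print(round(lc2(us, mr), 3))  # 1.0
```

Registration then searches over rigid (or affine) transforms of the ultrasound pose to maximize the patch-averaged LC2 score.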
i3PosNet: Instrument Pose Estimation from X-Ray in temporal bone surgery
Purpose: Accurate estimation of the position and orientation (pose) of
surgical instruments is crucial for delicate minimally invasive temporal bone
surgery. Current techniques either lack accuracy, suffer from line-of-sight
constraints (conventional tracking systems), or expose the patient to
prohibitive ionizing radiation (intra-operative CT). A possible solution is to capture the
instrument with a c-arm at irregular intervals and recover the pose from the
image.
Methods: i3PosNet infers the position and orientation of instruments from
images using a pose estimation network. Said framework considers localized
patches and outputs pseudo-landmarks. The pose is reconstructed from
pseudo-landmarks by geometric considerations.
Results: We show that i3PosNet reaches errors of less than 0.05 mm. It
outperforms conventional image-registration-based approaches, reducing average
and maximum errors by at least two thirds. i3PosNet trained on synthetic images
generalizes to real X-rays without any further adaptation.
Conclusion: The translation of Deep Learning based methods to surgical
applications is difficult, because large representative datasets for training
and testing are not available. This work empirically shows sub-millimeter pose
estimation trained solely on synthetic training data.
Comment: Accepted at the International Journal of Computer Assisted Radiology
and Surgery, pending publication.
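The "geometric considerations" step in the Methods paragraph can be illustrated with a toy reconstruction: given predicted 2D pseudo-landmarks sampled along the instrument axis, the position is the tip point and the in-plane orientation is the dominant direction of the point set. The landmark layout (first point at the tip) is an illustrative convention, not the exact i3PosNet scheme:

```python
import numpy as np

def pose_from_pseudo_landmarks(landmarks):
    """Recover an in-plane pose (tip position, axis angle in degrees)
    from 2D pseudo-landmarks assumed to lie along the instrument axis,
    first landmark at the tip. Orientation comes from a PCA line fit."""
    pts = np.asarray(landmarks, dtype=float)
    tip = pts[0]
    centered = pts - pts.mean(axis=0)
    # Principal axis of the landmark cloud = instrument direction.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    angle = np.degrees(np.arctan2(direction[1], direction[0])) % 180.0
    return tip, angle

# Landmarks along a 30-degree line starting at (10, 20).
t = np.linspace(0.0, 5.0, 4)
pts = np.stack([10 + t * np.cos(np.radians(30)),
                20 + t * np.sin(np.radians(30))], axis=1)
tip, angle = pose_from_pseudo_landmarks(pts)
print(tip, round(angle, 1))  # [10. 20.] 30.0
```

Using several redundant landmarks rather than a single point-plus-angle output makes the geometric fit robust to per-landmark prediction noise, which is one motivation for pseudo-landmark parameterizations in general.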