Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery
One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
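As background (not from the review itself): one family of optical techniques such reviews cover is passive stereo, sketched below with OpenCV. The image paths and the reprojection matrix Q are placeholders; a real laparoscopic pipeline would supply rectified frames and calibration data.

```python
import cv2
import numpy as np

# Minimal passive-stereo surface reconstruction sketch. Paths and the
# reprojection matrix Q are placeholders, not real calibration data.
left = cv2.imread("laparoscope_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("laparoscope_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching on the rectified image pair.
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,  # must be divisible by 16
    blockSize=5,
)
# compute() returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Reproject disparities to a 3D point cloud of the tissue surface.
Q = np.eye(4, dtype=np.float32)  # placeholder rectification matrix
points_3d = cv2.reprojectImageTo3D(disparity, Q)
```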
Active Vision for Scene Understanding
Visual perception is one of the most important sources of information for both humans and robots. A particular challenge is the acquisition and interpretation of complex unstructured scenes. This work contributes to active vision for humanoid robots. A semantic model of the scene is created, which is extended by successively changing the robot's view in order to explore interaction possibilities of the scene.
Autonomous 3D object modeling by a humanoid using an optimization-driven Next-Best-View formulation
An original method to build a visual model for unknown objects by a humanoid robot is proposed. The algorithm ensures successful autonomous realization of this goal by addressing the problem as an active coupling between computer vision and whole-body posture generation. The visual model is built through the repeated execution of two processes. The first considers the current knowledge about the visual aspects and the shape of the object to deduce a preferred viewpoint, with the aim of reducing the uncertainty of the object's shape and appearance. This is done while considering the constraints related to the embodiment of the vision sensors in the humanoid head. The second process generates a whole-robot posture using the desired head pose while solving additional constraints such as collision avoidance and joint limitations. The main contribution of our approach lies in the use of different optimization algorithms to find an optimal viewpoint by including the humanoid's specificities in terms of constraints, an embedded vision sensor, and redundant motion capabilities. This approach differs significantly from traditional works addressing the problem of autonomously building an object model.
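The abstract does not give the formulation; as a rough illustration of the next-best-view idea only, the toy sketch below scores candidate viewpoints by the total remaining uncertainty of the voxels they see. All names and the scoring model are invented, and the whole-body posture constraints the paper emphasizes are omitted.

```python
import numpy as np

# Toy next-best-view selection: pick the viewpoint whose visible
# voxels carry the most remaining shape uncertainty (entropy).
rng = np.random.default_rng(0)
voxel_entropy = rng.random(100)          # per-voxel occupancy entropy
candidate_views = [rng.choice(100, size=30, replace=False)
                   for _ in range(8)]    # voxel indices seen per view

def information_gain(visible):
    """Stand-in objective: total entropy of the voxels a view observes."""
    return voxel_entropy[visible].sum()

best = max(range(len(candidate_views)),
           key=lambda i: information_gain(candidate_views[i]))
```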
Reconstruction of Patient-Specific Bone Models from X-Ray Radiography
The availability of a patient-specific bone model has become an increasingly invaluable addition to orthopedic case evaluation and planning [1]. Utilized within a wide range of specialized visualization and analysis tools, such models provide an unprecedented wealth of bone shape information previously unattainable with traditional radiographic imaging [2]. In this work, a novel bone reconstruction method from two or more X-ray images is described. This method is superior to previous attempts in terms of accuracy and repeatability. The new technique accurately models the radiological scene in a way that eliminates the need for expensive multi-planar radiographic imaging systems. It is also flexible enough to allow for both short- and long-film imaging using standard radiological protocols, which makes the technology easily utilized in standard clinical setups.
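The abstract gives no reconstruction math; purely as generic background (this is not necessarily the paper's method), recovering a 3D landmark from two calibrated radiographs can be posed as linear (DLT) triangulation, sketched below with placeholder projection matrices.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark seen in two calibrated
    X-ray views. P1, P2 are 3x4 projection matrices; x1, x2 are (u, v)
    pixel coordinates in each radiograph."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A is the solution
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Hypothetical calibrated views (placeholders for a real setup):
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate(P1, P2, np.array([0.1, 0.2]), np.array([-0.1, 0.2]))
```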
Simultaneous Tactile Exploration and Grasp Refinement for Unknown Objects
This paper addresses the problem of simultaneously exploring an unknown object to model its shape, using tactile sensors on robotic fingers, while also improving finger placement to optimise grasp stability. In many situations, a robot will have only a partial camera view of the near side of an observed object, for which the far side remains occluded. We show how an initial grasp attempt, based on an initial guess of the overall object shape, yields tactile glances of the far side of the object which enable the shape estimate, and consequently the successive grasps, to be improved. We propose a grasp exploration approach using a probabilistic representation of shape, based on Gaussian Process Implicit Surfaces. This representation enables initial partial vision data to be augmented with additional data from successive tactile glances. This is combined with a probabilistic estimate of grasp quality to refine grasp configurations. When choosing the next set of finger placements, a bi-objective optimisation method is used to mutually maximise grasp quality and improve shape representation during successive grasp attempts. Experimental results show that the proposed approach yields stable grasp configurations more efficiently than a baseline method, while also yielding an improved shape estimate of the grasped object.
Comment: IEEE Robotics and Automation Letters. Preprint Version. Accepted February, 202
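The abstract names Gaussian Process Implicit Surfaces; the minimal 2D sketch below shows the idea with scikit-learn. The data and kernel are illustrative only: surface points are labeled 0, off-surface points ±1, and the posterior variance flags the unobserved far side where the next tactile glance would be most informative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# 2D toy GPIS: targets are 0 on the surface, +1 outside, -1 inside;
# the zero level set of the posterior mean is the shape estimate.
seen = np.array([[np.cos(t), np.sin(t)]
                 for t in np.linspace(0.0, np.pi, 8)])  # near-side arc
outside = 1.5 * seen                                    # off-surface, +1
inside = np.zeros((1, 2))                               # interior, -1
X = np.vstack([seen, outside, inside])
y = np.concatenate([np.zeros(len(seen)), np.ones(len(outside)), [-1.0]])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)

# Posterior std is larger on the unobserved far side: a natural target
# for the next tactile glance in the exploration loop.
query = np.array([[0.0, -1.0], [0.0, 1.0]])  # far side vs. seen side
mean, std = gp.predict(query, return_std=True)
```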