498 research outputs found
A comparative study of breast surface reconstruction for aesthetic outcome assessment
Breast cancer is the most prevalent cancer type in women, and while its
survival rate is generally high, the aesthetic outcome is an increasingly
important factor when evaluating different treatment alternatives. 3D scanning
and reconstruction techniques offer a flexible tool for building detailed and
accurate 3D breast models that can be used both pre-operatively for surgical
planning and post-operatively for aesthetic evaluation. This paper aims at
comparing the accuracy of low-cost 3D scanning technologies with the
significantly more expensive state-of-the-art 3D commercial scanners in the
context of 3D breast reconstruction. We present results from 28 synthetic and
clinical RGBD sequences, including 12 unique patients and an anthropomorphic
phantom, demonstrating the applicability of low-cost RGBD sensors to real
clinical cases. Body deformation and homogeneous skin texture pose challenges
to the studied reconstruction systems. Although these should be addressed
appropriately if higher model quality is warranted, we observe that low-cost
sensors are able to obtain valuable reconstructions comparable to the
state-of-the-art within an error margin of 3 mm.
Comment: This paper has been accepted to MICCAI201
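An accuracy comparison of the kind reported above reduces, at its simplest, to measuring how far a reconstructed surface lies from a reference scan. The following is a minimal sketch of such a nearest-neighbour surface-error metric (not the paper's actual evaluation pipeline; the function name and the synthetic 3 mm test data are made up for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_error(reconstructed, reference):
    """Mean distance from each reconstructed point to its nearest
    reference point -- a common proxy for reconstruction accuracy."""
    tree = cKDTree(reference)
    distances, _ = tree.query(reconstructed)
    return distances.mean()

# Synthetic check: a dense reference plane (units: mm) versus a copy
# shifted 3 mm along z should yield an error of roughly 3 mm.
rng = np.random.default_rng(0)
reference = np.column_stack([rng.uniform(0, 100, 5000),
                             rng.uniform(0, 100, 5000),
                             np.zeros(5000)])
reconstructed = reference + np.array([0.0, 0.0, 3.0])
err = mean_surface_error(reconstructed, reference)
```

In practice, evaluation protocols often symmetrise this measure (averaging errors in both directions) so that missing surface regions are also penalised.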
Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation
Accounting for 26% of all new cancer cases worldwide, breast cancer remains
the most common form of cancer in women. Although early breast cancer has a
favourable long-term prognosis, roughly a third of patients suffer from a
suboptimal aesthetic outcome despite breast-conserving cancer treatment.
Clinical-quality 3D modelling of the breast surface therefore assumes an
increasingly important role in advancing treatment planning, prediction and
evaluation of breast cosmesis. Yet, existing 3D torso scanners are expensive
and either infrastructure-heavy or subject to motion artefacts. In this paper
we employ a single consumer-grade RGBD camera with an ICP-based registration
approach to jointly align all points from a sequence of depth images
non-rigidly. Subtle body deformation due to postural sway and respiration is
successfully mitigated leading to a higher geometric accuracy through
regularised locally affine transformations. We present results from 6 clinical
cases where our method compares well with the gold standard and outperforms a
previous approach. We show that our method produces better reconstructions
qualitatively by visual assessment and quantitatively by consistently obtaining
lower landmark error scores and yielding more accurate breast volume estimates.
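The registration approach described above is non-rigid, jointly aligning all depth frames with regularised locally affine transformations; reproducing that is beyond an abstract, but its core building block, ICP-style alignment, can be sketched. Below is a minimal rigid ICP (nearest-neighbour matching alternated with a closed-form Kabsch/SVD step) as an illustration only, with made-up synthetic test data:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, iterations=30):
    """Minimal rigid ICP: alternate nearest-neighbour correspondence
    search with a closed-form (Kabsch/SVD) rigid alignment step."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)           # match each point to target
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        # Guard against reflections in the recovered rotation.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check: recover a small known rotation and translation.
rng = np.random.default_rng(1)
cloud = rng.uniform(-1, 1, (500, 3))
angle = 0.05
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
t_true = np.array([0.02, -0.01, 0.01])
moved = cloud @ Rz.T + t_true
R_est, t_est = icp_rigid(cloud, moved)
```

The non-rigid method in the paper replaces the single global rigid transform with many local affine transforms tied together by a regularisation term, which is what lets it absorb postural sway and respiration.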
Evaluation of Using Semi-Autonomy Features in Mobile Robotic Telepresence Systems
Mobile robotic telepresence systems used for social interaction scenarios require that users steer robots in a remote environment. As a consequence, a heavy workload can be placed on users who are unfamiliar with operating robotic telepresence units. One way to lessen this workload is to automate certain operations performed during a telepresence session in order to assist remote drivers in navigating the robot in new environments. Such operations include autonomous robot localization, navigation to certain points in the home, and automatic docking of the robot to the charging station. In this paper we describe the implementation of these autonomous features along with a user evaluation study. The evaluation scenario focuses on novice users' first experience with the system. Importantly, the scenario taken in this study assumed that participants had as little prior information about the system as possible. Four different use cases were identified from the user behaviour analysis.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. Plan Nacional de Investigación, proyecto DPI2011-25483
IR Shape From Shading Enhanced RGBD for 3D Scanning
RGBD cameras such as the Microsoft Kinect that can quickly provide usable depth maps have become very affordable, and thus very popular and abundant in recent years. Beyond gaming, RGBD cameras can have numerous applications, including their use in affordable 3D scanners. These cameras, however, are limited in their ability to capture finer details. We explore the use of additional 3D reconstruction algorithms to enhance the depth maps produced from RGBD cameras, allowing them to capture more detail.
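One building block of shape-from-shading-based depth enhancement is relating a depth map to the shading it would produce: estimate per-pixel surface normals from depth, then predict intensity under a reflectance model, so that discrepancies against the observed IR image can drive refinement. The sketch below shows just that forward step under simplifying assumptions not taken from the paper (orthographic projection, a Lambertian model with a single distant light):

```python
import numpy as np

def normals_from_depth(depth):
    """Per-pixel surface normals from an orthographic depth map via
    finite differences -- a building block of shape-from-shading."""
    dz_dy, dz_dx = np.gradient(depth)
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

def lambertian_shading(normals, light=np.array([0.0, 0.0, 1.0])):
    """Predicted intensity under a Lambertian model: I = max(0, n . l)."""
    return np.clip(normals @ light, 0.0, None)

# A plane tilted along x: constant normals, hence constant shading.
xs = np.arange(32, dtype=float)
depth = np.tile(0.5 * xs, (32, 1))   # dz/dx = 0.5 everywhere
n = normals_from_depth(depth)
shade = lambertian_shading(n)
```

A refinement scheme would then adjust the depth map so that the predicted shading matches the captured IR image, recovering fine detail that the raw depth sensor misses.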
Robot Mapping and Navigation in Real-World Environments
Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to the completion of their mission. Essential capabilities required for the operation of such robots are mapping, localization and navigation. Solving all of these tasks robustly presents a substantial difficulty, as these components are usually interconnected, i.e., a robot that starts without any knowledge about the environment must simultaneously build a map, localize itself in it, analyze the surroundings and plan a path to efficiently explore an unknown environment. In addition to the interconnections between these tasks, they depend highly on the sensors used by the robot and on the type of environment in which the robot operates. For example, an RGB camera can be used in an outdoor scene for computing visual odometry or detecting dynamic objects, but becomes less useful in an environment that does not have enough light for cameras to operate. The software that controls the behavior of the robot must seamlessly process all the data coming from different sensors. This often leads to systems that are tailored to a particular robot and a particular set of sensors. In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that work seamlessly with different types of sensors, both in indoor and outdoor environments. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that operates robustly in both indoor and outdoor environments and thus extends the application areas of mobile robots.
The techniques presented in this thesis aim to be usable with both RGBD and LiDAR sensors, without adaptations for individual sensor models, by using a range image representation, and aim to provide methods for navigation and scene interpretation in both static and dynamic environments. For a static world, we present a number of approaches that address the core components of a typical robot navigation pipeline. At the core of building a consistent map of the environment with a mobile robot lies point cloud matching. To this end, we present a method for photometric point cloud matching that treats RGBD and LiDAR sensors in a uniform fashion and is able to accurately register point clouds at the frame rate of the sensor. This method serves as a building block for the further mapping pipeline. In addition to the matching algorithm, we present a method for traversability analysis of the currently observed terrain in order to guide an autonomous robot to the safe parts of the surrounding environment. A source of danger when navigating difficult-to-access sites is the fact that the robot may fail to build a correct map of the environment. This dramatically impacts the ability of an autonomous robot to navigate towards its goal in a robust way; it is therefore important for the robot to be able to detect these situations and to find its way home without relying on any kind of map. To address this challenge, we present a method for analyzing the quality of the map that the robot has built to date, and for safely returning the robot to the starting point in case the map is found to be inconsistent. Scenes in dynamic environments are vastly different from those experienced in static ones. In a dynamic setting, objects can be moving, rendering static traversability estimates insufficient. With the approaches developed in this thesis, we aim at identifying distinct objects and tracking them to aid navigation and scene understanding.
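The range image representation mentioned above is what lets RGBD and LiDAR data share one processing pipeline: every point cloud is projected into a 2D grid of ranges. A minimal spherical-projection sketch follows; the vertical field-of-view values and image dimensions are invented for the example and are not taken from the thesis:

```python
import numpy as np

def to_range_image(points, height=64, width=1024,
                   fov_up=np.radians(15.0), fov_down=np.radians(-15.0)):
    """Project a 3D point cloud into a spherical range image, the shared
    representation that lets RGBD and LiDAR data be processed uniformly."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                   # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                 # elevation angle
    u = ((0.5 * (1.0 - yaw / np.pi)) * width).astype(int) % width
    v_frac = (fov_up - pitch) / (fov_up - fov_down)
    v = np.clip((v_frac * height).astype(int), 0, height - 1)
    image = np.full((height, width), np.inf)
    np.minimum.at(image, (v, u), r)          # keep closest return per pixel
    return image

# A single point 10 m straight ahead lands in the centre of the image.
img = to_range_image(np.array([[10.0, 0.0, 0.0]]))
```

Once in this form, neighbourhood queries become constant-time pixel lookups, which is what makes frame-rate registration and segmentation feasible.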
We target these challenges by providing a method for clustering a scene captured with a LiDAR scanner, together with a measure that determines whether two clustered objects are similar, which aids tracking performance. All methods presented in this thesis are capable of supporting real-time robot operation, rely on RGBD or LiDAR sensors, and have been tested on real robots in real-world environments and on real-world datasets. All approaches have been published in peer-reviewed conference papers and journal articles. In addition, most of the presented contributions have been released publicly as open-source software.
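Clustering a LiDAR scene, in its simplest form, means grouping returns that are connected by chains of nearby points. The thesis operates on range images for efficiency; the sketch below instead shows the equivalent Euclidean clustering directly on points with a KD-tree, as an illustration of the idea rather than the thesis's algorithm (the radius and test blobs are made up):

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def euclidean_cluster(points, radius=0.5):
    """Label points so that two points share a label iff they are
    connected by a chain of neighbours closer than `radius` (BFS)."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue                          # already assigned
        queue = deque([seed])
        labels[seed] = current
        while queue:
            i = queue.popleft()
            for j in tree.query_ball_point(points[i], radius):
                if labels[j] == -1:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels

# Two well-separated blobs should come out as exactly two clusters.
rng = np.random.default_rng(2)
blob_a = rng.normal([0.0, 0.0, 0.0], 0.1, (50, 3))
blob_b = rng.normal([5.0, 0.0, 0.0], 0.1, (50, 3))
labels = euclidean_cluster(np.vstack([blob_a, blob_b]))
```

A similarity measure over such clusters (e.g. comparing their shape or appearance between scans) is then what allows individual objects to be re-identified and tracked over time.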
A Model-Driven Engineering Approach for ROS using Ontological Semantics
This paper presents a novel ontology-driven software engineering approach for
the development of industrial robotics control software. It introduces the
ReApp architecture that synthesizes model-driven engineering with semantic
technologies to facilitate the development and reuse of ROS-based components
and applications. In ReApp, we show how different ontological classification
systems for hardware, software, and capabilities help developers in discovering
suitable software components for their tasks and in applying them correctly.
The proposed model-driven tooling enables developers to work at higher
abstraction levels and fosters automatic code generation. It is underpinned by
ontologies to minimize discontinuities in the development workflow, with an
integrated development environment presenting a seamless interface to the user.
First results show the viability and synergy of the selected approach when
searching for or developing software with reuse in mind.
Comment: Presented at DSLRob 2015 (arXiv:1601.00877). Stefan Zander, Georg Heppner, Georg Neugschwandtner, Ramez Awad, Marc Essinger and Nadia Ahmed: A Model-Driven Engineering Approach for ROS using Ontological Semantics