From 3D Models to 3D Prints: an Overview of the Processing Pipeline
Due to the wide diffusion of 3D printing technologies, geometric algorithms
for Additive Manufacturing are being invented at an impressive speed. Each
single step, in particular along the Process Planning pipeline, can now count
on dozens of methods that prepare the 3D model for fabrication, while analysing
and optimizing geometry and machine instructions for various objectives. This
report provides a classification of this large state of the art and makes
explicit the relation between each algorithm and a list of desirable
objectives during Process Planning. The objectives themselves are listed and discussed,
along with possible needs for tradeoffs. Additive Manufacturing technologies
are broadly categorized to explicitly relate classes of devices and supported
features. Finally, this report offers an analysis of the state of the art while
discussing open and challenging problems from both an academic and an
industrial perspective.
Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044
Design, analysis and kinematic control of highly redundant serial robotic arms
The use of robotic manipulators in industry has grown in recent decades to improve and speed up industrial processes. Industrial manipulators have been investigated for machining tasks because they can cover larger workspaces, increasing the range of achievable operations and improving flexibility. The company Nimbl'Bot developed a new mechanism, or module, for building stiffer flexible serial modular robots for machining applications. This manipulator is a kinematically redundant robot with 21 degrees of freedom. This thesis thoroughly analyses the features of the Nimbl'Bot robot and is divided into three main topics. The first topic concerns a task-priority kinematic redundancy resolution algorithm that lets the Nimbl'Bot robot track a trajectory while optimizing its kinetostatic performance. The second topic is the design optimization of the kinematically redundant robot with respect to a desired application and its kinetostatic performance. For the third topic, a new workspace determination algorithm is proposed for kinematically redundant manipulators. Several simulation tests are proposed and run on Nimbl'Bot robot designs for each topic.
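The task-priority scheme mentioned above can be sketched at the velocity level: the primary task velocity is tracked through a (damped) pseudoinverse of the task Jacobian, and a secondary objective is pursued in the primary task's null space. The following numpy sketch is illustrative only, not the thesis's implementation; the damping value and the gradient-ascent secondary term are assumptions:

```python
import numpy as np

def damped_pinv(J, damping=1e-4):
    """Damped least-squares pseudoinverse, robust near singularities."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = s / (s**2 + damping**2)
    return Vt.T @ np.diag(s_inv) @ U.T

def task_priority_step(J1, dx1, grad_h):
    """One velocity-level step: track the primary task velocity dx1
    (exactly, when feasible) and use the remaining redundancy to ascend
    a secondary objective h, e.g. a kinetostatic performance index."""
    n = J1.shape[1]
    J1_pinv = damped_pinv(J1)
    N1 = np.eye(n) - J1_pinv @ J1      # (approximate) null-space projector
    dq = J1_pinv @ dx1 + N1 @ grad_h   # secondary motion cannot disturb task 1
    return dq
```

With a 21-degree-of-freedom arm, the null space of a 6-dimensional task is large, which is what makes the secondary kinetostatic optimization effective.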
Kinematics and Robot Design II (KaRD2019) and III (KaRD2020)
This volume collects papers published in two Special Issues, "Kinematics and Robot Design II, KaRD2019" (https://www.mdpi.com/journal/robotics/special_issues/KRD2019) and "Kinematics and Robot Design III, KaRD2020" (https://www.mdpi.com/journal/robotics/special_issues/KaRD2020), the second and third issues of the KaRD Special Issue series hosted by the open-access journal Robotics. The KaRD series is an open environment where researchers present their work and discuss all topics involving kinematics in the design of robotic/automatic systems. It aims to become an established reference for researchers in the field, as other serial international conferences/publications are. Even though the KaRD series publishes one Special Issue per year, all received papers are peer-reviewed as soon as they are submitted and, if accepted, are immediately published in MDPI Robotics. Kinematics is so intimately related to the design of robotic/automatic systems that the admitted topics of the KaRD series practically cover all the subjects normally present in well-established international conferences on "mechanisms and robotics". KaRD2019 and KaRD2020 together received 22 papers and, after peer review, accepted 17. The accepted papers cover problems related to theoretical/computational kinematics, biomedical engineering, and other design/applicative aspects.
Towards automated visual flexible endoscope navigation
Background:
The design of flexible endoscopes has not changed significantly in the past 50 years. A trend is observed towards wider application of flexible endoscopes, with an increasing role in complex intraluminal therapeutic procedures. The non-intuitive, non-ergonomic steering mechanism now forms a barrier to extending flexible endoscope applications. Automating the navigation of endoscopes could be a solution to this problem. This paper summarizes the current state of the art in image-based navigation algorithms. The objectives are to find the most promising navigation system(s) to date and to indicate fields for further research.
Methods:
A systematic literature search was performed using three general search terms in two medical-technological literature databases. Papers were included according to the inclusion criteria. A total of 135 papers were analyzed; ultimately, 26 were included.
Results:
Navigation is often based on visual information, meaning the endoscope is steered using the images it produces. Two main techniques are described: lumen centralization and visual odometry. Although the research results are promising, no successful, commercially available automated flexible endoscopy system exists to date.
Conclusions:
Automated systems that employ conventional flexible endoscopes show the most promising prospects in terms of cost and applicability. To produce such a system, the research focus should lie on finding low-cost mechatronics and technologically robust steering algorithms. Additional functionality and increased efficiency can be obtained through software development. The first priority is to find real-time, robust steering algorithms. These algorithms need to handle bubbles, motion blur, and other image artifacts without disrupting the steering process.
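Lumen centralization, the first of the two techniques above, steers the endoscope toward the lumen, which typically appears as the darkest, least illuminated region of the image. A minimal sketch of that idea, assuming a grayscale image and a simple dark-pixel centroid; real systems must additionally handle the bubbles, motion blur, and other artifacts noted above:

```python
import numpy as np

def lumen_offset(image):
    """Estimate the lumen centre as the centroid of the darkest pixels
    and return the steering offset from the image centre in pixels
    (+x = steer right, +y = steer down)."""
    thresh = np.percentile(image, 5)       # darkest ~5% of pixels
    ys, xs = np.nonzero(image <= thresh)
    cy, cx = ys.mean(), xs.mean()
    h, w = image.shape
    return cx - (w - 1) / 2.0, cy - (h - 1) / 2.0
```

The returned offset would drive the bending section's actuators so that the lumen centroid moves toward the image centre.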
Development of a Robotic Positioning and Tracking System for a Research Laboratory
Measurement of residual stress using neutron or synchrotron diffraction relies on the accurate alignment of the sample in relation to the gauge volume of the instrument. Automatic sample alignment can be achieved using kinematic models of the positioning system provided the relevant kinematic parameters are known, or can be determined, to a suitable accuracy.
The main problem addressed in this thesis is improving the repeatability and accuracy of sample positioning for strain scanning through the use of techniques from robotic calibration theory to generate kinematic models of both off-the-shelf and custom-built positioning systems. The approach is illustrated using a positioning system in use on the ENGIN-X instrument at the UK's ISIS pulsed neutron source, comprising a traditional XYZΩ table augmented with a triple-axis manipulator. Accuracies better than 100 microns were achieved for this compound system. Although discussed here in terms of sample positioning systems, these methods are entirely applicable to other moving instrument components such as beam-shaping jaws and detectors.
Several factors can lead to inaccurate positioning on a neutron or synchrotron diffractometer. It is therefore essential to validate the accuracy of positioning, especially during experiments that require a high level of accuracy. In this thesis, a stereo camera system is developed to monitor the sample and other moving parts of the diffractometer. The camera metrology system is designed to measure the positions of retroreflective markers attached to any object being monitored. A fully automated camera calibration procedure is developed with an emphasis on accuracy. The potential accuracy of this system is demonstrated, and problems that limit accuracy are discussed. It is anticipated that the camera system will be used to correct the positioning system when the error is small, or to notify the user when it is significant.
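Once the stereo pair is calibrated, measuring a marker's 3D position reduces to triangulation from two views. A generic linear (DLT) triangulation sketch, not the thesis's specific method; the projection matrices and pixel coordinates are assumed to come from the calibration and marker-detection steps:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one retroreflective marker seen by
    two calibrated cameras. P1, P2 are 3x4 projection matrices; uv1, uv2
    are the marker's pixel coordinates in each image."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution of A X = 0: right singular vector of the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Tracking several such markers over time gives the pose of the sample stage, which can then be compared against the kinematic model's prediction.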
Autonomous Robotic Screening of Tubular Structures based only on Real-Time Ultrasound Imaging Feedback
Ultrasound (US) imaging is widely employed for diagnosis and staging of
peripheral vascular diseases (PVD), mainly due to its high availability and the
fact it does not emit radiation. However, high inter-operator variability and a
lack of repeatability of US image acquisition hinder the implementation of
extensive screening programs. To address this challenge, we propose an
end-to-end workflow for automatic robotic US screening of tubular structures
using only the real-time US imaging feedback. We first train a U-Net for
real-time segmentation of the vascular structure from cross-sectional US
images. Then, we represent the detected vascular structure as a 3D point cloud
and use it to estimate the longitudinal axis of the target tubular structure
and its mean radius by solving a constrained non-linear optimization problem.
Iterating the previous processes, the US probe is automatically aligned to the
orientation normal to the target tubular tissue and adjusted online to center
the tracked tissue based on the spatial calibration. The real-time segmentation
result is evaluated both on a phantom and in-vivo on brachial arteries of
volunteers. In addition, the whole process is validated both in simulation and
physical phantoms. The mean absolute radius error and orientation error (±SD) in
the simulation are and , respectively. On a gel phantom, these errors are and
. This shows that the method is able to automatically screen tubular tissues
with an optimal probe orientation (i.e. normal to the vessel) while at the same
time accurately estimating the mean radius, both in real time.
Comment: Accepted for publication in IEEE Transactions on Industrial Electronics. Video: https://www.youtube.com/watch?v=VAaNZL0I5i
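The axis-and-radius step can be approximated, for illustration, without the paper's constrained non-linear solver: the dominant PCA direction of the segmented point cloud serves as the longitudinal axis, and the mean perpendicular distance to that axis as the mean radius. A simplified numpy sketch, assuming a roughly straight, well-sampled tube:

```python
import numpy as np

def fit_tube_axis_radius(points):
    """Simplified stand-in for the paper's constrained optimisation:
    take the direction of largest spread (first principal component) as
    the tube's longitudinal axis, and the mean perpendicular distance
    of the points to that axis as the mean radius."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    axis = Vt[0]                            # dominant direction
    along = centered @ axis                 # coordinate along the axis
    perp = centered - np.outer(along, axis) # perpendicular component
    radius = np.linalg.norm(perp, axis=1).mean()
    return centroid, axis, radius
```

The estimated axis is what the robot would align the probe normal to, while the radius estimate supports the screening measurement itself.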
Vision technology/algorithms for space robotics applications
Automation and robotics have been proposed for space applications to increase productivity, improve reliability, increase flexibility, and raise safety, as well as to automate time-consuming tasks, increase the productivity and performance of crew-accomplished tasks, and perform tasks beyond the capability of the crew. This paper reviews efforts currently in progress in the area of robotic vision. Both systems and algorithms are discussed. Future vision/sensing is projected to evolve toward the fusion of multiple sensors, ranging from microwave to optical, with multimode capability covering position, attitude, recognition, and motion parameters. The key features of the overall system design will be small size and weight, fast signal processing, robust algorithms, and accurate parameter determination. These aspects of vision/sensing are also discussed.