79,969 research outputs found
On Advanced Mobility Concepts for Intelligent Planetary Surface Exploration
Surface exploration by wheeled rovers on Earth's Moon (the two Lunokhods) and Mars (NASA's Sojourner and the two MERs) has been conducted very successfully for many years, specifically with regard to long-duration operations. However, despite this success, the explored surface area has been very small: the total driving distance was about 8 km (Spirit) and 21 km (Opportunity) over 6 years of operation. Moreover, ESA will send its ExoMars rover to Mars in 2018, and NASA its MSL rover probably this year. However, all these rovers lack sufficient on-board intelligence to cover longer distances, drive much faster, and decide autonomously on path planning for the best trajectory to follow. To increase the scientific output of a rover mission, it seems very necessary to explore much larger surface areas reliably in much less time. This is the main driver for a robotics institute to combine mechatronics functionalities to develop an intelligent mobile wheeled rover with four or six wheels, with specific kinematics and locomotion suspension depending on the terrain in which the rover is to operate. DLR's Robotics and Mechatronics Center has a long tradition of developing advanced components in the fields of light-weight motion actuation, intelligent and soft manipulation, skilled hands and tools, and perception and cognition, and of increasing the autonomy of all kinds of mechatronic systems. The whole design is supported by, and based upon, detailed modeling, optimization, and simulation tasks. We have developed efficient software tools to simulate rover driveability performance on terrains with various characteristics, such as soft sandy and hard rocky terrain as well as inclined planes, where wheel and grouser geometry plays a dominant role.
Moreover, rover optimization is performed to support the best engineering intuitions: it optimizes structural and geometric parameters, compares various kinematic suspension concepts, and makes use of realistic cost functions such as mass and consumed-energy minimization, static stability, and more. For self-localization and safe navigation through unknown terrain we make use of fast 3D stereo algorithms that have been used successfully, e.g., in unmanned aerial vehicle applications and on terrestrial mobile systems. The advanced rover design approach is applicable to lunar as well as Martian surface exploration. A first mobility concept for a lunar vehicle will be presented.
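The design optimization described in this abstract can be illustrated with a toy sketch. Everything below is invented for illustration and is not the authors' actual model: the structural-mass model, the rolling-energy model, the assumed 0.4 m center-of-gravity height, the 35° tip-over requirement, and all weights are hypothetical, chosen only to show how a weighted cost over geometric parameters (here wheel radius and wheelbase) might be searched.

```python
import math

def rover_cost(wheel_radius_m, wheelbase_m,
               w_mass=1.0, w_energy=1.0, w_stability=5.0):
    """Toy weighted cost: larger wheels add structural mass, smaller wheels
    cost more drive energy, and a short wheelbase is penalized for poor
    static stability. All sub-models here are made up for illustration."""
    mass = 20.0 + 80.0 * wheel_radius_m ** 2           # kg, invented structural model
    energy = 150.0 / wheel_radius_m                    # Wh/km, invented rolling model
    tipover_margin = math.atan(wheelbase_m / 2 / 0.4)  # rad, 0.4 m assumed CoG height
    stability_penalty = max(0.0, math.radians(35) - tipover_margin)
    return w_mass * mass + w_energy * energy + w_stability * 100 * stability_penalty

# Coarse grid search over the two geometric parameters.
candidates = [(r / 100, b / 10)
              for r in range(10, 41, 5)   # wheel radius 0.10 .. 0.40 m
              for b in range(8, 17, 2)]   # wheelbase   0.8  .. 1.6  m
best = min(candidates, key=lambda p: rover_cost(*p))
print(best)  # geometry with the lowest toy cost
```

In a realistic setting the scalar weights would be replaced by mission-driven trade-offs (or a Pareto front), and the sub-models by terramechanics simulation such as the driveability tools the abstract describes.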
Optical coherence tomography-based consensus definition for lamellar macular hole.
Background: A consensus on an optical coherence tomography definition of lamellar macular hole (LMH) and similar conditions is needed.
Methods: The panel reviewed relevant peer-reviewed literature to reach an accord on the LMH definition and to differentiate LMH from other similar conditions.
Results: The panel reached a consensus on the definition of three clinical entities: LMH, epiretinal membrane (ERM) foveoschisis and macular pseudohole (MPH). The LMH definition is based on three mandatory criteria and three optional anatomical features. The three mandatory criteria are the presence of an irregular foveal contour, the presence of a foveal cavity with undermined edges and the apparent loss of foveal tissue. Optional anatomical features include the presence of epiretinal proliferation, the presence of a central foveal bump and the disruption of the ellipsoid zone. The ERM foveoschisis definition is based on two mandatory criteria: the presence of ERM and the presence of schisis at the level of Henle's fibre layer. Three optional anatomical features can also be present: the presence of microcystoid spaces in the inner nuclear layer (INL), an increase in retinal thickness and the presence of retinal wrinkling. The MPH definition is based on three mandatory criteria and two optional anatomical features. Mandatory criteria include the presence of a foveal-sparing ERM, the presence of a steepened foveal profile and an increased central retinal thickness. Optional anatomical features are the presence of microcystoid spaces in the INL and a normal retinal thickness.
Conclusions: The use of the proposed definitions may provide a uniform language for clinicians and future research.
Learning while Competing -- 3D Modeling & Design
The e-Yantra project at IIT Bombay conducts an online competition, e-Yantra
Robotics Competition (eYRC) which uses a Project Based Learning (PBL)
methodology to train students to implement a robotics project in a step-by-step
manner over a five-month period. Participation is absolutely free. The
competition provides all resources - robot, accessories, and a problem
statement - to a participating team. If selected for the finals, e-Yantra pays
for them to come to the finals at IIT Bombay. This makes the competition
accessible to resource-poor student teams. In this paper, we describe the
methodology used in the 6th edition of eYRC, eYRC-2017, where we experimented
with a Theme (projects abstracted into rulebooks) involving an advanced topic -
3D Designing and interfacing with sensors and actuators. We demonstrate that
the learning outcomes are consistent with our previous studies [1]. We infer
that even 3D designing to create a working model can be effectively learned in
a competition mode through PBL.
Bio-inspired Tensegrity Soft Modular Robots
In this paper, we introduce a design principle to develop novel soft modular
robots based on tensegrity structures and inspired by the cytoskeleton of
living cells. We describe a novel strategy to realize tensegrity structures
using planar manufacturing techniques, such as 3D printing. We use this
strategy to develop icosahedron tensegrity structures with programmable
variable stiffness that can deform in a three-dimensional space. We also
describe a tendon-driven contraction mechanism to actively control the
deformation of the tensegrity modules. Finally, we validate the approach in a modular locomotory worm as a proof of concept.
Comment: 12 pages, 7 figures, submitted to Living Machine conference 201
Modeling the power consumption of a Wifibot and studying the role of communication cost in operation time
Mobile robots are becoming part of our every day living at home, work or
entertainment. Due to their limited power capabilities, the development of new
energy consumption models can lead to energy conservation and energy efficient
designs. In this paper, we carry out a number of experiments and we focus on
the motors power consumption of a specific robot called Wifibot. Based on the
experimentation results, we build models for different speed and acceleration
levels. We compare the motors power consumption to other robot running modes.
We also create a simple robot network scenario and investigate whether
forwarding data through a closer node could lead to longer operation times. We
assess the effect of energy capacity, traveling distance, and data rate on the
operation time.
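The kind of speed-dependent power model this abstract builds can be sketched very simply. The sketch below is not the paper's actual model or data: it fits a hypothetical affine law P(v) = p_idle + k·v to invented (speed, power) samples by ordinary least squares, then derives operation time from an assumed battery capacity.

```python
# Invented (speed m/s, motor power W) samples, stand-ins for real measurements.
samples = [(0.0, 4.0), (0.2, 6.1), (0.4, 8.0), (0.6, 9.9), (0.8, 12.1)]

# Ordinary least squares for P(v) = p_idle + k * v.
n = len(samples)
sx = sum(v for v, _ in samples)
sy = sum(p for _, p in samples)
sxx = sum(v * v for v, _ in samples)
sxy = sum(v * p for v, p in samples)
k = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope, W per (m/s)
p_idle = (sy - k * sx) / n                     # intercept, W at standstill

def power(v):
    """Predicted motor power draw (W) at speed v (m/s)."""
    return p_idle + k * v

def operation_time_h(capacity_wh, v):
    """Hours of operation from a battery of capacity_wh Wh at constant speed v,
    counting only the modeled motor draw (no electronics or radio)."""
    return capacity_wh / power(v)
```

A model in this spirit, extended with terms for other running modes and for communication cost, is what lets one compare driving farther against relaying data through a closer node.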
Robot Autonomy for Surgery
Autonomous surgery involves having surgical tasks performed by a robot
operating under its own will, with partial or no human involvement. There are
several important advantages of automation in surgery, which include increasing
precision of care due to sub-millimeter robot control, real-time utilization of
biosignals for interventional care, improvements to surgical efficiency and
execution, and computer-aided guidance under various medical imaging and
sensing modalities. While these methods may displace some tasks of surgical
teams and individual surgeons, they also present new capabilities in
interventions that are too difficult or go beyond the skills of a human. In
this chapter, we provide an overview of robot autonomy in commercial use and in
research, and present some of the challenges faced in developing autonomous
surgical robots.
AltURI: a thin middleware for simulated robot vision applications
Fast software performance is often the focus when developing real-time vision-based control applications for robot simulators. In this paper we have developed a thin, high-performance middleware for USARSim and other simulators designed for real-time vision-based control applications. It includes a fast image server providing images in OpenCV, Matlab or web formats, and a simple command/sensor processor. The interface has been tested in USARSim with an Unmanned Aerial Vehicle using two control applications: landing using a reinforcement learning algorithm, and altitude control using elementary motion detection. The middleware has been found to be fast enough to control the flying robot, as well as very easy to set up and use.