
    Learning while Competing -- 3D Modeling & Design

    The e-Yantra project at IIT Bombay conducts an online competition, the e-Yantra Robotics Competition (eYRC), which uses a Project Based Learning (PBL) methodology to train students to implement a robotics project step by step over a five-month period. Participation is free. The competition provides all resources - robot, accessories, and a problem statement - to each participating team. If a team is selected for the finals, e-Yantra pays for it to travel to the finals at IIT Bombay, making the competition accessible to resource-poor student teams. In this paper, we describe the methodology used in the 6th edition of eYRC, eYRC-2017, where we experimented with a Theme (a project abstracted into a rulebook) involving an advanced topic: 3D design and interfacing with sensors and actuators. We demonstrate that the learning outcomes are consistent with our previous studies [1]. We infer that even 3D design to create a working model can be effectively learned in a competition mode through PBL.

    Modeling the power consumption of a Wifibot and studying the role of communication cost in operation time

    Mobile robots are becoming part of our everyday lives at home, at work, and in entertainment. Due to their limited power capabilities, the development of new energy consumption models can lead to energy conservation and energy-efficient designs. In this paper, we carry out a number of experiments focusing on the motor power consumption of a specific robot, the Wifibot. Based on the experimental results, we build models for different speed and acceleration levels. We compare the motor power consumption to that of other robot running modes. We also create a simple robot network scenario and investigate whether forwarding data through a closer node could lead to longer operation times. We assess the effect of energy capacity, traveling distance, and data rate on the operation time.
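    The abstract above describes fitting power models at different speed levels and estimating operation time from battery capacity. A minimal sketch of that idea is given below; the measurements, the quadratic model form, and the idle-draw parameter are all assumptions for illustration, not the paper's actual data or model.

    ```python
    import numpy as np

    # Hypothetical (speed m/s, motor power W) measurements; illustrative only.
    speeds = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
    powers = np.array([3.1, 4.0, 5.4, 7.3, 9.8])

    # Fit an assumed quadratic power model P(v) = a*v^2 + b*v + c.
    a, b, c = np.polyfit(speeds, powers, 2)

    def motor_power(v):
        """Predicted motor power draw (W) at constant speed v (m/s)."""
        return a * v**2 + b * v + c

    def operation_time_s(battery_wh, v, idle_w=2.0):
        """Estimated run time (s) at constant speed v, adding a fixed
        electronics/idle draw (idle_w is an assumed parameter)."""
        total_w = motor_power(v) + idle_w
        return battery_wh * 3600.0 / total_w

    print(operation_time_s(battery_wh=20.0, v=0.5))
    ```

    With a model like this, one can compare the energy cost of driving farther to reach a closer forwarding node against the communication energy saved, which is the trade-off the paper studies.
    
    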

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own control, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care due to sub-millimeter robot control, real-time use of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also enable interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    AltURI: a thin middleware for simulated robot vision applications

    Fast software performance is often the focus when developing real-time vision-based control applications for robot simulators. In this paper we present a thin, high-performance middleware for USARSim and other simulators designed for real-time vision-based control applications. It includes a fast image server providing images in OpenCV, Matlab, or web formats, and a simple command/sensor processor. The interface has been tested in USARSim with an Unmanned Aerial Vehicle using two control applications: landing using a reinforcement learning algorithm, and altitude control using elementary motion detection. The middleware has proven fast enough to control the flying robot, as well as very easy to set up and use.
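    The altitude-control application mentioned above relies on elementary motion detection. A minimal sketch of one such technique, frame differencing, is shown below; it does not reproduce the AltURI or USARSim interfaces, and the threshold and image representation (flat grayscale pixel lists) are assumptions for illustration.

    ```python
    def motion_fraction(prev_frame, curr_frame, threshold=20):
        """Elementary motion detection by frame differencing: return the
        fraction of pixels whose grayscale intensity changed by more than
        `threshold` between two same-sized frames (flat lists of 0-255 values)."""
        if len(prev_frame) != len(curr_frame) or not prev_frame:
            raise ValueError("frames must be non-empty and the same size")
        changed = sum(1 for p, c in zip(prev_frame, curr_frame)
                      if abs(p - c) > threshold)
        return changed / len(prev_frame)

    # Example: a static scene vs. a frame where half the pixels changed.
    static = motion_fraction([10] * 100, [10] * 100)
    moving = motion_fraction([10] * 100, [10] * 50 + [200] * 50)
    print(static, moving)
    ```

    In an altitude-control loop, a rising motion fraction as the ground approaches could feed a simple controller; the real application's control law is not described in the abstract and is not sketched here.
    
    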

    Perspective distortion modeling for image measurements

    A perspective distortion model for monocular views, based on the fundamentals of perspective projection, is presented in this work. Perspective projection is considered the most realistic model of image formation in monocular vision. Many approaches attempt to model and estimate perspective effects in images. Some try to learn the distortion parameters from a set of training data, but these work only for a predefined structure, and none of the existing methods provides a deep understanding of the nature of perspective problems. Perspective distortions can, in fact, be described by three distinct perspective effects: pose, distance, and foreshortening. These effects cause the aberrant appearance of object shapes in images. Understanding these phenomena has long been an interesting topic for artists, designers, and scientists. In many cases, this problem must be taken into consideration when dealing with image diagnostics, highly accurate image measurement, and accurate pose estimation from images. In this work, a distortion model for each effect is developed while elaborating the nature of perspective effects. A distortion factor for each effect is derived, followed by proposed methods that allow extracting the true target pose and distance and correcting image measurements.
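    Two of the three effects named above, distance and foreshortening, fall directly out of the pinhole projection equation x' = f*X/Z. The sketch below illustrates them; the focal length, geometry, and function names are assumptions for illustration, not the paper's actual distortion factors.

    ```python
    import math

    F = 1.0  # assumed focal length (arbitrary units)

    def project(x, z):
        """Pinhole projection of a point at lateral offset x and depth z."""
        return F * x / z

    def projected_length(length, depth, tilt_rad):
        """Image-plane length of a segment of physical `length` centered at
        `depth`, tilted by tilt_rad out of the fronto-parallel plane.
        tilt = 0 gives the pure distance effect; tilt > 0 adds foreshortening."""
        half = length / 2.0
        dx = half * math.cos(tilt_rad)  # lateral extent shrinks with tilt
        dz = half * math.sin(tilt_rad)  # endpoints move to different depths
        x1 = project(-dx, depth - dz)
        x2 = project(dx, depth + dz)
        return abs(x2 - x1)

    # Distance effect: the same segment appears smaller when farther away.
    print(projected_length(1.0, 1.0, 0.0), projected_length(1.0, 2.0, 0.0))
    # Foreshortening: tilting the segment shrinks its projection further.
    print(projected_length(1.0, 2.0, 0.5))
    ```

    Correcting a measurement then amounts to inverting these relations once the pose and distance of the target are estimated, which is the direction the proposed methods take.
    
    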