166 research outputs found

    Multi-robot Tethering Using Camera

    An autonomous multi-robot or swarm-robot system is able to perform various cooperative missions such as search and rescue, exploration of unknown or partially known areas, transportation, surveillance, defence, and firefighting. However, multi-robot applications often require a synchronised robotic configuration, a reliable communication system, and various sensors installed on each robot. This approach results in high system complexity and a very high cost of development.

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. Among the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting both geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data allows one to consider either methods exploiting the totality of the data (dense approaches) or a reduced set obtained from feature-extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems.

    First, a safe navigation scheme for mobile robots moving in unknown environments populated by obstacles is presented. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. Sparse visual data, on the other hand, are extracted in terms of geometric primitives in order to implement a visual servoing control scheme satisfying proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to re-arrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered.

    Vision-based estimation methods are relevant in other contexts as well. In the field of surgical robotics, having reliable data about unmeasurable quantities is of great importance and critical at the same time. This manuscript presents a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robotic platform to extract relevant geometrical information and obtain projected measurements of the tool pose. The method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and employment under ideal conditions for testing and validation.

    The Kalman-based observers mentioned above are classical passive estimators, whose inputs are theoretically arbitrary; they provide no means to actively adapt input trajectories in order to optimize specific requirements on estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimate. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
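
    As a rough illustration of the kind of vision-based observer described above, the sketch below runs an extended Kalman filter that estimates a 3D point from noisy pinhole projections taken by a camera translating with known motion. It is a minimal stand-in, not the thesis's method: the intrinsics, noise levels, and camera trajectory are invented, and the full 6-DoF needle-pose observer is reduced here to a single-point structure-from-motion estimate.

```python
import numpy as np

# Hypothetical pinhole intrinsics (illustrative values, not from the thesis).
FX, FY, CX, CY = 800.0, 800.0, 320.0, 240.0

def project(p):
    """Pinhole projection of a 3D point p = (x, y, z) in the camera frame."""
    x, y, z = p
    return np.array([FX * x / z + CX, FY * y / z + CY])

def projection_jacobian(p):
    """Jacobian of project() with respect to the 3D point."""
    x, y, z = p
    return np.array([[FX / z, 0.0, -FX * x / z**2],
                     [0.0, FY / z, -FY * y / z**2]])

def ekf_update(x_est, P, z_meas, R, cam_t):
    """One EKF measurement update; the point is static, the camera sits at cam_t."""
    p_cam = x_est - cam_t                 # point expressed in the camera frame
    H = projection_jacobian(p_cam)        # d(measurement)/d(world point)
    innov = z_meas - project(p_cam)       # measurement innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    return x_est + K @ innov, (np.eye(3) - K @ H) @ P

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p_true = np.array([0.05, -0.02, 0.40])   # metres, in the world frame
    x_est = np.array([0.0, 0.0, 0.60])       # rough initial guess
    P = np.eye(3) * 0.01
    R = np.eye(2) * 2.0                      # pixel-noise covariance
    for k in range(100):
        cam_t = np.array([0.002 * k, 0.0, 0.0])  # known lateral camera motion
        z = project(p_true - cam_t) + rng.normal(0.0, 1.4, size=2)
        x_est, P = ekf_update(x_est, P, z, R, cam_t)
    print("estimate:", x_est, "error [m]:", np.linalg.norm(x_est - p_true))
```

    The lateral camera motion is what makes depth observable here (parallax), which is also why active strategies that choose the camera trajectory can improve the estimate.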

    Mobile robot visual navigation based on fuzzy logic and optical flow approaches

    This paper presents the design of a mobile robot visual navigation system for indoor environments based on fuzzy logic controllers (FLC) and an optical flow (OF) approach. The proposed control system contains two Takagi–Sugeno fuzzy logic controllers, for obstacle avoidance and goal seeking, based on video acquisition and an image processing algorithm. The first steering controller uses OF values calculated by the Horn–Schunck algorithm to detect and estimate the positions of obstacles; to extract information about the environment, the image is divided into two parts. The second FLC is used to guide the robot toward the final destination. The efficiency of the proposed approach is verified in simulation using the Virtual Reality Toolbox. Simulation results demonstrate that the vision-based control system allows autonomous navigation without any collision with obstacles.
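
    For context, the Horn–Schunck scheme the paper relies on is compact enough to sketch. The snippet below is a minimal numpy/scipy implementation of the classic iteration, plus the paper's left/right image split reduced to a crude per-half flow magnitude; the derivative kernels, smoothness weight alpha, and iteration count are conventional choices, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import convolve

# Neighbourhood-averaging kernel from the original Horn-Schunck formulation.
AVG = np.array([[1/12, 1/6, 1/12],
                [1/6,  0.0, 1/6],
                [1/12, 1/6, 1/12]])

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Dense optical flow (u, v) between two grayscale images."""
    im1, im2 = im1.astype(float), im2.astype(float)
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])   # d/dx
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])   # d/dy
    kt = 0.25 * np.ones((2, 2))                        # d/dt
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_avg, v_avg = convolve(u, AVG), convolve(v, AVG)
        shared = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * shared
        v = v_avg - Iy * shared
    return u, v

def left_right_flow(u, v):
    """Mean flow magnitude in the two image halves (a crude obstacle cue)."""
    mag = np.hypot(u, v)
    half = mag.shape[1] // 2
    return mag[:, :half].mean(), mag[:, half:].mean()
```

    A steering FLC along the lines of the paper would then take the two half-image magnitudes as inputs: larger flow in the left half suggests a nearer obstacle on the left, so the rule base steers right, and vice versa.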

    A motion rule for human-friendly robots based on electrodermal activity investigations and its application to a mobile robot

    This paper investigates impressions of robot motion through electrodermal activity (EDA) experiments, deduces a motion rule for human-friendly robots from those investigations, and applies it to a mobile robot experimental apparatus. In our previous work, it was suggested that actuation noise coming from the robots tended to raise the sympathetic nervous system (SNS) response of the heart rate variability. In another experiment it was observed that blocking out either the sound or the sight of the robot motion attenuated the EDA, which reflects the SNS. In the present work, unlike the previous experiments, the experiment was designed so as to avoid the influence of habituation, which had been a significant factor in reducing the EDA responses. Statistical analysis concluded that the present work supported the result of the previous work. Based on these investigations, we deduced a motion rule for human-friendly robots: robots must reduce their motion speed in the immediate vicinity of humans. We constructed an experimental setup in which a mobile robot approached a human with its speed decreased in conformity with the rule. To estimate the distance from the human, skin-color detection and depth-from-focus techniques were applied to a monocular color video camera system with pan/tilt/zoom operation. The experimental result showed that a proper choice of commands could make the robot reduce its speed in the immediate vicinity of the human.
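
    The deduced rule itself reduces to a simple mapping from estimated human distance to commanded speed. The sketch below is one plausible reading of it; the thresholds and maximum speed are illustrative assumptions, not values reported in the paper.

```python
def command_speed(distance_m, v_max=0.5, d_stop=0.5, d_free=2.0):
    """Scale the robot's speed with the estimated distance to the human.

    v_max, d_stop, d_free are illustrative values (not from the paper):
    stop inside d_stop, drive at v_max beyond d_free, ramp linearly between.
    """
    if distance_m <= d_stop:
        return 0.0
    if distance_m >= d_free:
        return v_max
    return v_max * (distance_m - d_stop) / (d_free - d_stop)
```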

    Radial Outflow in Teleoperation: A Possible Solution for Improving Depth Perception

    Practical experience has shown that operators of remote robotic systems have difficulty perceiving aspects of remotely operated robots and their environments (e.g., Casper & Murphy, 2003). Operators often find it difficult, for example, to accurately perceive the distances and sizes of remote objects. Past research has demonstrated that a movable camera providing the operator with optical motion allows distance perception in the absence of other depth information (Dash, 2004). In this experiment a camera was constrained to move only forward and backward, thus adding monocular radial outflow to the video stream. The ability of remote operators to perceive the sizes of remote objects and to position a mobile robot at specific distances relative to an object was tested. Two conditions were investigated: in one, a dynamic camera provided radial outflow by moving forward and backward while atop a mobile robot; in the other, the camera remained stationary atop the robot. Results indicated no differences between the camera conditions, but superior performance for distance perception was observed when compared to previous research (Dash, 2004). This thesis provides evidence that teleoperators of a terrestrial robot are able to determine egocentric depth in a remote environment when sufficient movement of the robot is involved.
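
    The geometry behind radial outflow is worth making explicit: under pure forward translation at a known speed v, a feature at image distance r from the focus of expansion expands at dr/dt = r·v/Z, so its depth is Z = r·v/(dr/dt). The sketch below applies that relation to two tracked measurements; it is a geometric illustration under those stated assumptions, not the thesis's experimental procedure.

```python
def depth_from_radial_outflow(r1, r2, dt, v):
    """Egocentric depth from monocular radial outflow.

    r1, r2: a feature's image distance from the focus of expansion in two
            consecutive frames (pixels); dt: frame interval (s);
    v: known forward camera speed (m/s).
    Under pure forward translation, dr/dt = r * v / Z, so Z = r * v / (dr/dt).
    """
    dr_dt = (r2 - r1) / dt
    if dr_dt <= 0.0:
        raise ValueError("feature must expand under forward camera motion")
    return r2 * v / dr_dt   # depth at the second frame, in metres
```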