
    Autonomous Movement Control of Coaxial Mobile Robot based on Aspect Ratio of Human Face for Public Relation Activity Using Stereo Thermal Camera

    In recent years, robots that recognize people around them and provide guidance, information, and monitoring have been attracting attention. Conventional human recognition technology mainly relies on a camera or a laser range finder. However, recognition with a camera is difficult under fluctuating lighting [1], and a laser range finder is often affected by the recognition environment, for example misrecognizing a chair's leg as a person's leg [2]. Therefore, we propose a human recognition method using a thermal camera, which can visualize human body heat. This study aims to realize human-following autonomous movement based on human recognition. In addition, the distance from the robot to the person is measured with a stereo thermal camera composed of two thermal cameras. A coaxial two-wheeled robot, which is compact and capable of turning in place (pivot turning), is used as the mobile platform. Finally, we combine these elements in an autonomous movement experiment with the coaxial mobile robot based on human recognition. We performed human-following experiments on the coaxial two-wheeled robot using the stereo thermal camera and confirmed that it moves appropriately to the location of the recognized person in multiple use cases (scenarios). However, the accuracy of distance measurement by stereo vision is inferior to that of laser measurement, and it must be improved for movements that require higher accuracy.
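
    As a rough illustration of the distance measurement involved, the sketch below applies the standard rectified-stereo relation Z = f*B/d to a face centroid detected in both thermal images, together with a simple aspect-ratio test on the detected blob. The focal length, baseline, threshold, and function names are assumed, illustrative values, not the paper's calibration.

```python
# Minimal sketch: depth from disparity for a rectified stereo thermal pair,
# plus an aspect-ratio test on the detected face bounding box.
# FOCAL_PX, BASELINE_M, and the threshold are assumed, illustrative values.

FOCAL_PX = 320.0    # focal length in pixels (assumed calibration)
BASELINE_M = 0.12   # spacing between the two thermal cameras (assumed)

def distance_from_disparity(x_left: float, x_right: float) -> float:
    """Z = f * B / d for the same warm-blob centroid seen in both images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("target must lie in front of the cameras")
    return FOCAL_PX * BASELINE_M / disparity

def faces_camera(width_px: float, height_px: float, threshold: float = 0.75) -> bool:
    """Aspect-ratio cue: a frontal face blob is nearly as wide as it is
    tall, while a profile or turned head appears narrower."""
    return width_px / height_px >= threshold

print(distance_from_disparity(180.0, 156.0))  # -> 1.6 (metres)
print(faces_camera(48, 60))                   # -> True (0.8 >= 0.75)
```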

    Path Tracking by a Mobile Robot Equipped with Only a Downward Facing Camera

    This paper presents a practical path-tracking method for a mobile robot equipped with only a downward camera facing the passage plane. A unique algorithm for tracking and searching ground images with natural texture is used to localize the robot without the feature-point extraction scheme commonly used in other visual odometry methods. In our tracking algorithm, groups of reference pixels are used to detect the relative translation and rotation between frames. Furthermore, a reference pixel group of another shape is registered both to record a path and to correct errors accumulated during localization. All image processing and robot control operations are carried out with low memory consumption for image registration and fast search times on a laptop PC. We also describe experimental results in which a vehicle using the proposed method repeatedly performed precise path tracking in indoor and outdoor environments.
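
    The following sketch illustrates the general idea of matching a group of reference pixels between frames by exhaustive search; it recovers translation only and is our own simplified stand-in, not the authors' tracking and searching algorithm (which also estimates rotation).

```python
# Illustrative sketch: frame-to-frame translation from a sparse group of
# reference pixels, found by exhaustive sum-of-squared-differences search.
import numpy as np

def track_translation(prev, curr, ref_pts, search=5):
    """Return the (dy, dx) shift that best matches the reference pixels
    sampled from `prev` against the current frame `curr`."""
    ref_vals = prev[ref_pts[:, 0], ref_pts[:, 1]].astype(np.float32)
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ys, xs = ref_pts[:, 0] + dy, ref_pts[:, 1] + dx
            if ys.min() < 0 or xs.min() < 0 or \
               ys.max() >= curr.shape[0] or xs.max() >= curr.shape[1]:
                continue
            err = np.sum((curr[ys, xs].astype(np.float32) - ref_vals) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(0)
prev = rng.integers(0, 255, (120, 160), dtype=np.uint8)   # textured ground
curr = np.roll(prev, (2, -3), axis=(0, 1))                # simulated motion
pts = rng.integers(20, 100, (64, 2))                      # reference pixels
print(track_translation(prev, curr, pts))                 # -> (2, -3)
```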

    A two-directional 1-gram visual motion sensor inspired by the fly's eye

    Optic flow based autopilots for Micro-Aerial Vehicles (MAVs) need lightweight, low-power sensors to be able to fly safely through unknown environments. The new tiny 6-pixel visual motion sensor presented here meets these demanding requirements in terms of its mass, size, and power consumption. This 1-gram, low-power, fly-inspired sensor accurately gauges visual motion using only its 6-pixel array, as tested with two different panoramas and illuminance conditions. The new visual motion sensor's output results from a smart combination of the information collected by several 2-pixel Local Motion Sensors (LMSs), based on the "time of travel" scheme originally inspired by the common housefly's Elementary Motion Detector (EMD) neurons. The proposed sensory fusion method enables the new visual sensor to measure the visual angular speed and determine the main direction of the visual motion without any prior knowledge. By computing the median value of the outputs from several LMSs, we also obtained a more robust, more accurate, and more frequently refreshed measurement of the 1-D angular speed.
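
    A minimal sketch of the "time of travel" principle: each 2-pixel LMS converts the delay between two adjacent photoreceptor responses into an angular speed, and a median over several LMSs gives the fused output. The inter-receptor angle and timings below are illustrative assumptions, not the sensor's actual parameters.

```python
# Hedged sketch of the "time of travel" scheme: angular speed equals the
# inter-receptor angle divided by the delay between the two photoreceptor
# responses; several LMS outputs are fused with a median for robustness.
from statistics import median

DELTA_PHI_DEG = 4.0   # assumed inter-receptor angle between two pixels

def lms_angular_speed(t_first: float, t_second: float) -> float:
    """1-D angular speed (deg/s) from one 2-pixel LMS; the sign of the
    delay encodes the direction of motion."""
    dt = t_second - t_first
    return DELTA_PHI_DEG / dt

# One outlier (0.080 s) barely affects the median-fused estimate.
readings = [lms_angular_speed(0.0, t) for t in (0.051, 0.049, 0.050, 0.080)]
fused = median(readings)
direction = "rightward" if fused > 0 else "leftward"
print(f"{fused:.1f} deg/s, {direction}")   # -> 79.2 deg/s, rightward
```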

    Positioning device for outdoor mobile robots using optical sensors and lasers

    We propose a novel method for positioning a mobile robot in an outdoor environment using lasers and optical sensors. Position estimation via a noncontact optical method is useful because the information from the wheel odometer and the global positioning system in a mobile robot is unreliable in some situations. Contact optical sensors, such as those in a computer mouse, are designed to be in contact with a surface and do not function well under strong ambient light. To mitigate the challenges of an outdoor environment, we developed an optical device with a bandpass filter and a pipe to restrict solar light and to detect translation. The use of two devices enables sensing of the mobile robot's position, including its posture. Furthermore, employing a collimated laser beam makes measurements against a surface invariant to the distance to that surface. In this paper, we describe motion estimation, device configurations, and several tests for performance evaluation. We also present experimental positioning results from a vehicle equipped with our optical device on an outdoor path. Finally, we discuss an improvement in postural accuracy obtained by combining an optical device with precise gyroscopes.
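
    As an illustration of how two translation-only sensors can yield position and posture, the sketch below recovers a planar pose increment (dx, dy, dtheta) from two displacement readings under a small-angle approximation. The mounting geometry and baseline are assumed for the example, not the device's actual layout.

```python
# Sketch: planar motion (dx, dy, dtheta) from two mouse-like optical
# sensors mounted a known distance apart on the robot body.
import numpy as np

BASELINE = 0.30  # metres between the two sensors along the robot y-axis (assumed)

def pose_increment(d1, d2):
    """d1, d2: (dx, dy) displacements from sensors 1 and 2, which sit at
    (0, -BASELINE/2) and (0, +BASELINE/2) in the robot frame."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    dtheta = (d1[0] - d2[0]) / BASELINE   # differential x-flow gives rotation
    dxy = (d1 + d2) / 2.0                 # mean flow gives body translation
    return dxy[0], dxy[1], dtheta

# Pure rotation: the sensors report equal and opposite x-displacements.
print(pose_increment((0.003, 0.0), (-0.003, 0.0)))  # -> (0.0, 0.0, 0.02) rad
```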

    Insect inspired visual motion sensing and flying robots

    Flying insects are masters of visual motion sensing. They use dedicated motion processing circuits at low energy and computational cost. Drawing on observations of insect visual guidance, we developed visual motion sensors and bio-inspired autopilots dedicated to flying robots. Optic flow-based visuomotor control systems have been implemented on an increasingly large number of sighted autonomous robots. In this chapter, we present how we designed and constructed local motion sensors and how we implemented bio-inspired visual guidance schemes on board several micro-aerial vehicles. A hyperacute sensor, in which retinal micro-scanning movements are performed by a small piezo-bender actuator, was mounted onto a miniature aerial robot. The OSCAR II robot is able to track a moving target accurately by exploiting the micro-scanning movement imposed on its eye's retina. We also present two interdependent control schemes, one driving the eye's angular position in the robot frame and the other driving the robot's body angular position with respect to a visual target, without any knowledge of the robot's orientation in the global frame. This "steering-by-gazing" control strategy, implemented on this lightweight (100 g) miniature sighted aerial robot, demonstrates the effectiveness of this biomimetic visual/inertial heading control strategy.
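
    The toy loop below sketches the "steering-by-gazing" idea under stated assumptions: a fast inner loop drives the eye to cancel the retinal position error, and the body heading is slaved to the eye-in-robot angle. The gains and first-order dynamics are illustrative, not OSCAR II's actual controller.

```python
# Toy "steering-by-gazing" loop: the eye fixates the target, the body
# follows the eye; gains and integration step are illustrative assumptions.
EYE_GAIN, BODY_GAIN, DT = 6.0, 2.0, 0.01

def step(target_bearing, eye_angle, body_heading):
    """One control tick; angles in radians, bearings in the world frame."""
    retinal_error = target_bearing - (body_heading + eye_angle)
    eye_angle += EYE_GAIN * retinal_error * DT    # fast gaze loop
    body_heading += BODY_GAIN * eye_angle * DT    # slow heading loop
    return eye_angle, body_heading

eye, body = 0.0, 0.0
for _ in range(1000):                 # 10 s of simulated time
    eye, body = step(0.3, eye, body)  # target fixed at 0.3 rad bearing
print(round(body, 3), round(eye, 4))  # body converges toward 0.3, eye toward 0
```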

    Controlling docking, altitude and speed in a circular high-roofed tunnel thanks to the optic flow

    The new robot we have developed, called BeeRotor, is a tandem rotorcraft that mimics optic flow-based behaviors previously observed in flies and bees. This tethered miniature robot (80 g), which is autonomous in terms of its computational power requirements, is equipped with a 13.5-g quasi-panoramic visual system consisting of four individual visual motion sensors responding to the optic flow generated by photographs of natural scenes, thanks to the bio-inspired "time of travel" scheme. Based on recent findings on insects' sensing abilities and control strategies, the BeeRotor robot was designed to use optic flow to perform complex tasks such as ground and ceiling following, while also automatically adjusting its forward speed on the basis of the ventral or dorsal optic flow. In addition, the BeeRotor robot can perform tricky manoeuvres such as automatic ceiling docking simply by regulating its dorsal or ventral optic flow in a high-roofed tunnel lined with natural scenes. Although it was built as a proof of concept, the BeeRotor robot is one step further towards a fully autonomous micro-helicopter capable of navigating mainly on the basis of the optic flow.
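
    A minimal sketch of the optic-flow regulation principle underlying such behaviors: the ventral OF equals the ratio of ground speed to height (omega = v/h), so a feedback loop that holds omega at a setpoint by adjusting altitude settles at a height proportional to the speed. The setpoint and gain below are assumed values, not BeeRotor's parameters.

```python
# Minimal optic-flow regulator: hold ventral OF = v / h at a setpoint by
# adjusting altitude; setpoint and gain are illustrative assumptions.
OF_SETPOINT = 2.0     # rad/s (assumed)
K_ALT, DT = 0.4, 0.02

def ventral_of(speed: float, height: float) -> float:
    return speed / height

h, v = 3.0, 2.0       # initial height (m) and constant forward speed (m/s)
for _ in range(2000):
    error = OF_SETPOINT - ventral_of(v, h)
    h -= K_ALT * error * DT   # OF too low -> descend; OF too high -> climb
print(round(h, 2))            # settles near v / OF_SETPOINT = 1.0 m
```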

    Bio-inspired Landing Approaches and Their Potential Use On Extraterrestrial Bodies

    Automatic landing on extraterrestrial bodies is still a challenging and hazardous task. Here we propose a new type of autopilot designed to solve landing problems, based on neurophysiological, behavioral, and biorobotic findings on flying insects. Flying insects excel at optic flow sensing and cope with highly parallel data at low energy and computational cost using lightweight dedicated motion processing circuits. In the first part of this paper, we present our biomimetic approach in the context of a lunar landing scenario, assuming a 2-degree-of-freedom spacecraft approaching the moon, simulated with the PANGU software. The proposed autopilot relies only on optic flow (OF) and inertial measurements, and aims at regulating the OF generated during the landing approach by means of a feedback control system whose sensor is an OF sensor. We put forward an estimation method based on a two-sensor setup to accurately estimate the orientation of the lander's velocity vector, which is mandatory for controlling the lander's pitch in a near-optimal way with respect to fuel consumption. In the second part, we present a lightweight Visual Motion Sensor (VMS) that draws on the results of neurophysiological studies of the insect visual system. The VMS was able to perform local 1-D angular speed measurements in the range of 1.5°/s to 25°/s. The sensor was mounted on an 80-kg unmanned helicopter and test-flown outdoors over various fields. The OF measured onboard was shown to match the ground-truth optic flow despite the strong disturbances and vibrations experienced by the sensor.
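
    To make the OF-regulation idea concrete, the sketch below holds the ventral OF omega = v/h near a setpoint during a constant-sink-rate descent, so the forward speed falls roughly in proportion to height. All numbers are illustrative and unrelated to the PANGU simulations.

```python
# Sketch of OF regulation during descent: braking whenever v / h exceeds
# the setpoint makes speed decay in proportion to height (soft approach).
OMEGA_REF = 0.05      # rad/s setpoint (assumed)
K, DT = 50.0, 0.1     # illustrative gain and time step

v, h, sink = 60.0, 1200.0, 3.0   # forward speed (m/s), height (m), sink (m/s)
for _ in range(3000):            # 300 s of simulated descent
    h -= sink * DT
    v -= K * (v / h - OMEGA_REF) * DT   # brake when OF exceeds the setpoint
print(round(h, 1), round(v, 1), round(v / h, 3))
# speed has fallen roughly in proportion to height; v/h stays close to 0.05
```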

    Artificial Intelligence in Foreign Object Classification in Fenceless Robotic Work Cells Using 2-D Safety Cameras

    Production systems using robotic manipulators have become common in the last few decades, and the trend is towards fenceless cells, which save space. The safety and flexibility of these systems have therefore become more critical. Safety systems are based on either sensor data or camera images. Although camera-based systems are more flexible, conventional image processing methods are sensitive to the working environment. Artificial intelligence may be a powerful tool for such systems to adapt quickly to changing requirements and to improve accuracy and stability. In this study, a low-cost 2-D camera-based safety system was designed and installed in an experimental fenceless robotic work cell. The system controller was coupled with three alternative deep learning modules (ResNet-152, AlexNet, SqueezeNet) and three machine learning modules (support vector machine, random forest, and decision tree). These modules were trained using photographic images of ten distinct foreign objects penetrating the alarm zone. To capture the ever-changing conditions of an industrial environment, disruptive effects including camera vibrations, shadows, reflections, and illuminance variations were incorporated by using up to 550 images per class. With the limited data used for training and testing the six systems, the SqueezeNet deep learning model gave the best accuracy, 95%, without over-fitting. However, the machine learning-based models were found to predict about 100 times faster than the deep learning-based ones. The safety system can thus be adapted quickly to possible changes, noise arising from working conditions can be suppressed, and time losses in industrial production can be avoided.
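
    The snippet below sketches the kind of train-and-time comparison reported for the classical modules, using scikit-learn classifiers on synthetic stand-in feature vectors; it is not the authors' pipeline, and the data, feature dimensions, and hyperparameters are placeholders.

```python
# Illustrative comparison of the three classical classifiers named in the
# abstract, with synthetic data standing in for the cell's camera images.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(550, 256)).astype(np.float32)  # stand-in feature vectors
y = rng.integers(0, 10, size=550)                   # ten foreign-object classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("RandomForest", RandomForestClassifier()),
                  ("DecisionTree", DecisionTreeClassifier())]:
    clf.fit(X_tr, y_tr)
    t0 = time.perf_counter()
    acc = clf.score(X_te, y_te)                     # accuracy on held-out data
    ms = (time.perf_counter() - t0) * 1000
    print(f"{name}: accuracy={acc:.2f}, prediction time={ms:.1f} ms")
```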