13 research outputs found

    Image-Based Flexible Endoscope Steering

    Get PDF
    Manually steering the tip of a flexible endoscope to navigate through an endoluminal path relies on the physician’s dexterity and experience. In this paper we present the realization of a robotic flexible endoscope steering system that uses the endoscopic images to control the tip orientation towards the direction of the lumen. Two image-based control algorithms are investigated: one based on optical flow and the other based on image intensity. Both are evaluated using simulations in which the endoscope was steered through the lumen. The RMS distance to the lumen center was less than 25% of the lumen width. An experimental setup was built using a standard flexible endoscope, and the image-based control algorithms were used to actuate the wheels of the endoscope for tip steering. Experiments were conducted in an anatomical model to simulate gastroscopy. The image intensity-based algorithm was capable of steering the endoscope tip through an endoluminal path from the mouth to the duodenum accurately. Compared to manual control, the robotically steered endoscope performed 68% better in terms of keeping the lumen centered in the image.
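
    The intensity-based idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the lumen appears as the darkest region of the image (being farthest from the light source), and the function name, threshold, and normalization are ours.

```python
import numpy as np

def lumen_steering_error(image, dark_fraction=0.1):
    """Estimate a 2-D tip-steering error from a grayscale endoscopic frame.

    Assumption: the lumen is the darkest image region, so the centroid
    of the darkest pixels approximates the lumen center. The error is
    that centroid's offset from the image center, normalized to [-1, 1].
    """
    h, w = image.shape
    # Intensity threshold selecting roughly the darkest `dark_fraction` of pixels.
    thresh = np.percentile(image, dark_fraction * 100)
    ys, xs = np.nonzero(image <= thresh)
    # Centroid of the dark region.
    cx, cy = xs.mean(), ys.mean()
    # Horizontal and vertical steering errors, normalized by half-size.
    return (cx - w / 2) / (w / 2), (cy - h / 2) / (h / 2)
```

    A controller would then actuate the steering wheels proportionally to this error to keep the lumen centered.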

    Optical flow or image subtraction in human detection from infrared camera on mobile robot

    Get PDF
    Perceiving the environment is crucial in any application related to mobile robotics research. In this paper, a new approach to real-time human detection through processing video captured by a thermal infrared camera mounted on the autonomous mobile platform mSecurit™ is introduced. The approach starts with a phase of static analysis for the detection of human candidates through some classical image processing techniques such as image normalization and thresholding. Then, the proposal starts a dynamic image analysis phase based on optical flow or image difference. Optical flow is used when the robot is moving, whilst image difference is the preferred method when the mobile platform is still. The results of both phases are compared to enhance the human segmentation by infrared camera. Indeed, optical flow or image difference will emphasize the foreground hot spot areas obtained at the initial human-candidate detection.
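
    The two-phase scheme above can be sketched for the stationary-platform case. This is an illustrative outline under our own assumptions, not the paper's code: the thresholds are invented, and the moving-platform branch (dense optical flow) is left as a stub since the abstract only names the technique.

```python
import numpy as np

def dynamic_mask(prev_frame, frame, robot_moving, motion_thresh=15):
    """Dynamic-analysis phase: image difference when the platform is still.

    When the robot is moving, the paper uses optical flow instead; that
    branch is deliberately not implemented in this sketch.
    """
    if robot_moving:
        raise NotImplementedError("dense optical flow is used while moving")
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return diff > motion_thresh

def detect_humans(prev_frame, frame, hot_thresh=128):
    """Combine the static and dynamic phases (illustrative thresholds).

    Static phase: threshold the thermal frame for hot-spot candidates.
    Dynamic phase: the motion mask emphasizes foreground hot areas.
    """
    hot = frame >= hot_thresh  # static hot-spot candidates
    motion = dynamic_mask(prev_frame, frame, robot_moving=False)
    return hot & motion        # keep only hot areas that also moved
```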

    End-to-End Depth from Motion with Stabilized Monocular Videos

    Get PDF

    Aerial Locomotion in Cluttered Environments

    Get PDF
    Many environments where robots are expected to operate are cluttered with objects, walls, debris, and different horizontal and vertical structures. In this chapter, we present four design features that allow small robots to rapidly and safely move in three dimensions through cluttered environments: a perceptual system capable of detecting obstacles in the robot’s surroundings, including the ground, with minimal computation, mass, and energy requirements; a flexible and protective framework capable of withstanding collisions and even using collisions to learn about the properties of the surroundings when light is not available; a mechanism for temporarily perching on vertical structures in order to monitor the environment or communicate with other robots before taking off again; and a self-deployment mechanism for getting into the air and performing repetitive jumps or gliding flight. We conclude the chapter by suggesting future avenues for integration of multiple features within the same robotic platform.

    A Robust Docking Strategy for a Mobile Robot Using Flow Field Divergence

    Full text link

    Designing Automated Guided Vehicle Using Image Sensor

    Get PDF
    Automated guided vehicles (AGVs) are one of the great achievements in the field of mobile robotics. Without continuous guidance from a human, they navigate a desired path to complete various tasks, e.g. fork-lifting objects, towing, and product transportation inside a manufacturing firm. Their development could revolutionize the world in the sense of foolproof navigation and accurate maneuvering. Although most present AGVs operate in a retrofitted workspace, since they require some form of identification to trace their guide path, work is ongoing to develop AGVs that navigate dynamically and whose locomotion is not limited to a retrofitted workspace. The aim of this work was to develop such a natural-feature AGV, which takes visual input in the form of images and gains detailed object, obstacle, and landmark identification to decide its guide path. The AGV setup used a commercial electric car, the ‘Reva i’, as a chassis, fitted with a camera to capture real-time input and resolve it using segmentation and image-processing techniques to reach a driving-control decision. These controls were communicated to the vehicle through the parallel port of a computer to servo motors, which in turn controlled the motion of the vehicle. The work focused on dynamically controlling the vehicle by refining the driving mechanism (hardware); it could be assisted by better segmentation and obstacle-detection algorithms. All the retrofitting and code were developed so that they could be improved at any stage. The results could be enhanced with a better stereoscopic camera and a dedicated CPU with better graphics capability. Such a vision-based AGV could transform the mobile robotics world, including systems where a human driver is currently required to take decisions on the basis of visualized conditions.
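
    The segmentation-to-driving-control step described above can be sketched in simplified form. This is our own illustration, not the system's code: it assumes the drivable region is the bright area in the lower half of a grayscale frame, and the threshold, deadband, and command names are invented for the example.

```python
import numpy as np

def steering_command(image, road_thresh=100, deadband=0.1):
    """Map a segmented grayscale frame to a coarse driving decision.

    Assumption: pixels with intensity >= road_thresh in the lower half
    of the frame belong to the drivable region. The vehicle steers
    toward that region's horizontal centroid.
    """
    h, w = image.shape
    lower = image[h // 2:, :]                  # road appears near the bottom
    ys, xs = np.nonzero(lower >= road_thresh)  # drivable-region pixels
    if xs.size == 0:
        return "stop"                          # no drivable region found
    # Normalized horizontal offset of the region centroid, in [-1, 1].
    offset = (xs.mean() - w / 2) / (w / 2)
    if offset < -deadband:
        return "left"
    if offset > deadband:
        return "right"
    return "straight"
```

    In the described setup, each command would be translated into servo-motor signals sent over the computer's parallel port.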

    A Software Tool for UAV Navigation Based on OpenCV and Tensorflow

    Get PDF
    This work is published in accordance with the rector’s order of 29.12.2020 No. 580/од “On placing higher-education qualification works in the NAU repository”. Project supervisor: Oleksii Mykhailovych Hlazok. Over the past decade, the range of applications of unmanned aerial vehicles (UAVs) has grown rapidly. They are used in a variety of settings, such as reconnaissance, filming, rescue, and mapping. UAVs are agile in the air, can be operated with a remote control, and can reach great altitudes and distances. Many UAVs are equipped with an attached camera, such as an action camera, which lets the drone capture photos and video from a variety of angles. However, there are drawbacks: piloting a drone can be quite difficult. Even with the latest advances in control software, the pilot must be very careful, since losing control of the drone can mean losing the UAV itself.

    Real-time obstacle avoidance using central flow divergence and peripheral flow

    No full text