9 research outputs found

    Positioning device for outdoor mobile robots using optical sensors and lasers

    We propose a novel method for positioning a mobile robot in an outdoor environment using lasers and optical sensors. Position estimation via a noncontact optical method is useful because the information from the wheel odometer and the global positioning system of a mobile robot is unreliable in some situations. Contact-type optical sensors, such as those in a computer mouse, are designed to operate in contact with a surface and do not function well in strong ambient light. To mitigate the challenges of an outdoor environment, we developed an optical device with a bandpass filter and a pipe that restricts solar light and detects translation. Using two such devices enables sensing of the mobile robot's position, including its posture. Furthermore, employing a collimated laser beam makes the measurements invariant to the distance between the device and the surface. In this paper, we describe the motion estimation, the device configurations, and several performance-evaluation tests. We also present experimental positioning results from a vehicle equipped with our optical device on an outdoor path. Finally, we discuss improving postural accuracy by combining the optical device with precise gyroscopes.
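The two-device arrangement recovers planar pose because each sensor's measured displacement mixes the robot's translation with its rotation in a position-dependent way. A minimal least-squares sketch of that geometry (my own illustration under a small-angle assumption, not the authors' algorithm):

```python
import numpy as np

def pose_increment(d1, d2, r1, r2):
    """Estimate a planar pose increment (tx, ty, dtheta) from two
    optical-sensor displacement readings d1, d2 (body frame) taken at
    known mounting positions r1, r2 (small-angle approximation).

    Rigid-body motion gives d_i ~= t + dtheta * perp(r_i),
    where perp((x, y)) = (-y, x); two sensors yield four scalar
    equations for three unknowns, solved in least squares."""
    def perp(r):
        return np.array([-r[1], r[0]])
    A = np.zeros((4, 3))
    b = np.concatenate([d1, d2])
    for i, r in enumerate((r1, r2)):
        A[2*i:2*i+2, 0:2] = np.eye(2)   # translation appears directly
        A[2*i:2*i+2, 2] = perp(r)       # rotation scaled by lever arm
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x  # (tx, ty, dtheta) in sensor units / radians
```

With more than two sensors the same stacked system simply gains rows, which is one way redundancy improves the estimate.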

    Optical-flow-based mobile robot odometry supported by multiple sensors and sensor fusion

    This paper introduces an optical-flow-based odometry solution for indoor mobile robots. Indoor localization of mobile robots is an important issue given the growing mobile robot market and the needs of the industrial, service, and consumer electronics sectors. Robot odometry calculated from the robot kinematics accumulates position error caused by wheel slip, whereas an optical-flow-based measurement is independent of wheel slip; the two methods therefore have different credibility, which was taken into account during the sensor fusion and the development. The focus of the research was to design an embedded system with high accuracy at the lowest possible price, to serve the needs of the consumer electronics sector without expensive cameras or high-level robot localization solutions based on real-time embedded computers. The paper presents the theoretical background, the implementation, and the experimental results. The universal optical flow module can be fitted to any kind of indoor mobile robot to measure the position and orientation of the robot during motion, even in the case of a 3-DoF holonomic drive such as a kiwi drive. The application of omnidirectional wheels in mobile robotics requires highly accurate position and orientation feedback, in contrast to differential drives.
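The differing credibility of the two odometry sources can be captured with a very simple fusion rule; the threshold and weights below are invented for illustration, since the abstract does not state the actual fusion law:

```python
import numpy as np

def fuse_displacement(d_wheel, d_flow, slip_threshold=0.05, w_flow=0.9):
    """Fuse wheel-odometry and optical-flow displacement estimates
    (hypothetical weighting scheme, not the paper's actual rule).

    When the two readings disagree beyond slip_threshold, wheel slip
    is assumed and the contact-free optical-flow reading dominates;
    otherwise the two are averaged."""
    d_wheel = np.asarray(d_wheel, float)
    d_flow = np.asarray(d_flow, float)
    if np.linalg.norm(d_wheel - d_flow) > slip_threshold:
        w = w_flow   # slip detected: trust the optical flow sensor
    else:
        w = 0.5      # agreement: simple average
    return w * d_flow + (1.0 - w) * d_wheel
```

A production system would weight by measured noise covariances rather than a fixed threshold, but the asymmetry (optical flow trusted under slip) is the point of the sketch.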

    Optical Speed Measurement and Applications


    Technological Recycling in the Service of Science

    This paper reviews the scientific developments achieved by recycling computer parts, specifically CD/DVD drives and mice. These components, which are often discarded and cause serious environmental pollution problems, are in themselves sophisticated, high-precision devices that, being mass-produced, can be acquired at very low cost. These characteristics make them attractive for scientific research seeking simple, low-cost alternatives and the miniaturization of equipment (lab on a chip). Microscopes, interferometers, scanners, odometers, manometers, pendulums, and encoders, among others, are applications that have been achieved by reusing these components. We present the working principle of these computer parts, the applications they have had since their invention, and, as the conclusion of this review, other prospective fields of application.

    Localization in Low Luminance, Slippery Indoor Environment Using Afocal Optical Flow Sensor and Image Processing

    Doctoral dissertation, Department of Electrical and Information Engineering, College of Engineering, Seoul National University, August 2017. Advisor: Dong-il Cho. Localization of an indoor service robot is a prerequisite for autonomous navigation. Localization accuracy degrades in particular when slippage occurs in low-luminance indoor environments where camera-based localization is difficult. Slippage mainly occurs when traversing carpets or thresholds, and wheel-encoder-based odometry is limited in measuring the traveled distance accurately. This dissertation proposes a robust localization method for low-luminance, slippery environments where camera-based simultaneous localization and mapping (SLAM) struggles, by fusing a low-cost motion sensor, an afocal optical flow sensor (AOFS), and a VGA-class forward-facing monocular camera. The robot's position is computed by accumulating incremental displacement and heading estimates: displacement is estimated by fusing wheel-encoder and AOFS measurements for robustness to slip, while heading is estimated using a gyroscope together with indoor spatial information extracted from the forward camera. An optical flow sensor estimates displacement robustly against wheel slip, but when mounted on a robot traversing uneven surfaces such as carpet, the varying height between the sensor and the floor becomes the dominant source of displacement-estimation error. This work mitigates that error source by applying the afocal-system principle to the optical flow sensor. In experiments on a robotic gantry system moving 80 cm over carpet and three other floor materials, repeated ten times while the sensor height varied from 30 mm to 50 mm, the proposed AOFS module exhibited a systematic error of 0.1% per 1 mm of height change, versus 14.7% for a conventional fixed-focus optical flow sensor. Mounted on an indoor service robot driving 1 m on carpet, the AOFS gave a mean distance-estimation error of 0.02% with a variance of 17.6%, compared with a 4.09% mean error and 25.7% variance for the fixed-focus sensor. For scenes too dark for images to be used directly for position correction, i.e., where brightening the low-luminance image still does not yield robust feature points or lines for SLAM, a method is proposed to exploit the low-luminance images for heading correction nonetheless. Applying histogram equalization brightens the image but also amplifies noise, so a rolling guidance filter (RGF) is applied to suppress noise while sharpening edges; orthogonal line segments of the indoor structure are then extracted from the improved image, a vanishing point (VP) is estimated, and the robot's heading relative to the vanishing point is used for heading correction.
With the proposed method, a robot navigating a carpeted 77 m² indoor space at 0.06 to 0.21 lx reduced its return-position error from 401 cm to 21 cm.
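The heading-correction step (gyro integration corrected by a vanishing-point observation) can be sketched as a simple complementary update; the blending gain here is my own illustrative choice, not a value from the dissertation:

```python
import math

def correct_heading(theta_gyro, theta_vp, gain=0.2):
    """Blend the gyro-integrated heading with an absolute heading
    derived from a vanishing point. `gain` is a hypothetical
    complementary-filter weight on the vanishing-point observation."""
    # Wrap the innovation to [-pi, pi] so corrections take the short way
    err = math.atan2(math.sin(theta_vp - theta_gyro),
                     math.cos(theta_vp - theta_gyro))
    return theta_gyro + gain * err
```

The wrap matters: near the +/- pi boundary a naive difference would command a near-full-turn correction instead of a small one.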

    Development of a Position Estimation System for a Hull-Cleaning Robot Using Optical Displacement Sensors

    Industrial robots have been applied to hull cleaning to enhance the operational efficiency of the entire cleaning process. In particular, an autonomous robotic system is necessary for more efficient hull cleaning, and a position estimation system is an indispensable part of such a system. This paper therefore studies a position estimation system for a hull-cleaning robot performing an autonomous cleaning process. The conventional position estimation method using rotary encoders is unsuitable for the hull-cleaning robot because of slippage between the robot wheels and the hull surface. A novel position estimation system using optical displacement sensors is proposed to solve this problem. The operating environment and drive characteristics of the hull-cleaning robot were analyzed to design the position estimation system effectively. Reflecting the results of this analysis, a position estimation algorithm based on dead reckoning and instantaneous-center-of-rotation theory was developed. A performance test of the optical displacement sensor, which measures relative displacement with a contact-free optical sensor, was carried out to determine its output characteristics under various operating conditions, including direction, speed, acceleration, height, and surface type. The position estimation system uses two optical displacement sensors to reduce the measurement error, with a data-selection algorithm that chooses the more sensitive of the two measurements as an additional error-reduction method. Furthermore, a monitoring PC runs a graphical position estimation program containing the position estimation algorithm, so the estimation results can be displayed on the user interface in real time and simultaneously saved to a database.
The developed position estimation system was mounted on a scale-model mobile robot with the same drive method as the hull-cleaning robot, because operating the real hull-cleaning robot requires large support units, high operating cost, high electric power, and a wide test area. Experimental results demonstrate that the proposed position estimation system with optical displacement sensors is more accurate than the conventional system using rotary encoders.
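The dead-reckoning part of the algorithm can be illustrated with a generic planar update (my sketch of the general idea, not the paper's exact instantaneous-center-of-rotation formulation): each sampling period, the body-frame displacement measured by the optical sensors is rotated into the world frame and accumulated.

```python
import math

def dead_reckon(pose, dx_body, dy_body, dtheta):
    """One dead-reckoning step. `pose` is (x, y, theta) in the world
    frame; (dx_body, dy_body) is the measured body-frame displacement
    and dtheta the heading change over the sample period."""
    x, y, th = pose
    mid = th + 0.5 * dtheta  # midpoint heading reduces arc error
    x += dx_body * math.cos(mid) - dy_body * math.sin(mid)
    y += dx_body * math.sin(mid) + dy_body * math.cos(mid)
    return (x, y, th + dtheta)
```

Because each step compounds on the last, small per-step errors accumulate, which is exactly why the paper pairs dead reckoning with slip-robust optical sensing.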

    A Kinematic-independent Dead-reckoning Sensor for Indoor Mobile Robotics


    Visual-Inertial Sensor Fusion Models and Algorithms for Context-Aware Indoor Navigation

    Positioning in navigation systems is predominantly performed by Global Navigation Satellite Systems (GNSSs). However, while GNSS-enabled devices have become commonplace for outdoor navigation, their use for indoor navigation is hindered by GNSS signal degradation or blockage. Accordingly, the development of alternative positioning approaches and techniques for navigation systems is an ongoing research topic. In this dissertation, I present a new approach and address three major navigational problems: indoor positioning, obstacle detection, and keyframe detection. The proposed approach utilizes the inertial and visual sensors available on smartphones and focuses on developing: a framework for monocular visual-inertial odometry (VIO) that positions a human or object using sensor fusion and deep learning in tandem; an unsupervised algorithm that detects obstacles from a sequence of visual data; and a supervised, context-aware keyframe detector. The underlying technique for monocular VIO is a recurrent convolutional neural network that computes six-degree-of-freedom (6DoF) poses in an end-to-end fashion, together with an extended Kalman filter module that fine-tunes the scale parameter based on inertial observations and manages errors. I compare the results of my featureless technique with those of conventional feature-based VIO techniques and with manually scaled results. The comparison shows that while the framework improves accuracy over other featureless methods, feature-based methods still outperform the proposed approach. The obstacle detection approach processes two consecutive images to detect obstacles. Experiments comparing my approach with two other widely used algorithms show that my algorithm performs better: 82% precision compared with 69%.
To determine a suitable frame-extraction rate from the video stream, I analyzed the camera's movement patterns and inferred the user's context to build a model associating movement anomalies with an appropriate frame-extraction rate. The output of this model determines the rate of keyframe extraction in visual odometry (VO). I defined and computed the effective frames for VO and used this approach for context-aware keyframe detection. The results show that using inertial data to infer suitable frames decreases the number of frames required.
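The scale fine-tuning role of the EKF module can be illustrated with a one-dimensional Kalman update (my own toy version; the dissertation's filter state and models are richer). The visual displacement, scaled by the unknown factor s, should match the inertially derived displacement, so the measurement Jacobian is simply the visual displacement itself:

```python
def update_scale(s, P, z_inertial, z_visual, R=0.04, Q=1e-4):
    """One scalar Kalman step for the monocular scale factor s.
    Measurement model: z_inertial = s * z_visual + noise, so H = z_visual.
    R (measurement noise) and Q (process noise) are illustrative values."""
    P = P + Q                          # predict: scale drifts slowly
    H = z_visual
    K = P * H / (H * P * H + R)        # Kalman gain
    s = s + K * (z_inertial - H * s)   # correct s toward the ratio
    P = (1.0 - K * H) * P              # shrink uncertainty
    return s, P
```

Run repeatedly, the estimate converges to the ratio of inertial to visual displacement while P tracks the remaining uncertainty.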

    Development and Testing of Hardware Simulator for Satellite Proximity Maneuvers and Formation Flying

    Satellite Formation Flying (SFF) and proximity operations are applications that have gained increasing interest over the years. These applications foresee the substitution of a single spacecraft with a system of multiple satellites that perform coordinated position and attitude control maneuvers, which in turn yields higher payload measurement accuracy, higher flexibility, robustness to failure, and reduced development costs. These systems are, however, more difficult to design, since they have not only absolute but also relative state requirements, which also makes them liable to higher control-action expense with respect to (wrt) single-satellite systems. Moreover, applications such as Automated Rendez-Vous and Docking (RVD), and close-proximity maneuvers in general, present a high risk of impact between the satellites, which must be addressed with an appropriate design of the on-board Guidance, Navigation and Control (GNC) system. These aspects justify the development and use of a ground hardware simulator representative of two or more satellites performing coordinated maneuvers, allowing these problems to be investigated with an easily accessible system. My Ph.D. activities consisted of the development and testing of the cooperating SPAcecRaft Testbed for Autonomous proximity operatioNs experimentS (SPARTANS) hardware simulator, under development since 2010 at the Center of Studies and Activities for Space (CISAS) of the University of Padova. This ground simulator uses robotic units that reproduce the relative position and attitude motions of satellites in proximity or in formation, and can therefore be employed for the extensive study of control algorithms and strategies for these types of applications, allowing dedicated hardware in the loop to be tested in a controlled environment.
At the beginning of my Ph.D., the testbed consisted of the first prototype of the Attitude Module (AM), a platform with three rotational Degrees of Freedom (DOF) in Yaw, Pitch, and Roll, controllable through a GNC system based on incremental encoders and air thrusters. A small contribution was initially given in support of a series of 3-DOF attitude control maneuver tests with the AM. The first main activity then consisted of the design and development of the air suspension system that enables low-friction translational motion of a whole Unit of the testbed over the test table, together with the characterization of the air skids available in the laboratory. The subsequent activity consisted of the design and development of the Translation Module (TM), the lower section of the whole Unit, a modular structure supporting the air suspension system, the AM, and the on-board localization system. After this, the on-board localization system for position and azimuth estimation, based on Optical Flow Sensors (OFS), was developed and tested. The system was installed on a TM base prototype and was calibrated and tested by imposing known motions through rotational and translational motorized stages used in conjunction, showing maximum deviations at the level of 0.1° over a total rotational range of 40°, and maximum deviations of 1 mm over a total translational range of 100 mm. Combined maneuvers, i.e., translational and rotational motions imposed in sequence, were subsequently performed, showing a drift trend of up to approximately 1 cm for a 90° rotation. The OFS system was then assembled in the TM and integrated with an external vision system, under parallel development in the context of the SPARTANS project. Results showed good general concordance between the two systems, but combined maneuvers with an extended rotational range again showed a drift trend in the OFS solution, not only in position but also in azimuth.
A parallel activity consisted of the design and development of the levellable, modular test table for the Units. Another activity consisted of the development of a Matlab software simulator for planning Unit tests, with which a series of preliminary standard and optimal control maneuvers was planned. The last activity of my Ph.D. consisted of the analysis of an inspection scenario for satellite removal purposes, with the goal of reproducing the relative dynamics in scale with the SPARTANS simulator. The chosen scenario foresaw the inspection, through a vision system on board an inspector satellite, of the currently freely tumbling Envisat spacecraft. The analysis, performed with a Matlab software simulator, focused on the acquisition and maintenance of a circular relative orbit at close range, starting from a fly-around orbit, through the employment of Model Predictive Control (MPC) and Linear Quadratic Regulator (LQR) optimal controllers. Simulation results showed a lower position tracking error with the MPC controller wrt the LQR controller, but with a higher control-action expense: for a 6-hour inspection on a 41 m radius circular relative orbit, the maximum total delta-v component was 3.3 m/s for MPC, versus 0.7 m/s for LQR. In its present configuration, the SPARTANS testbed has a first complete Unit and a test table, to be assembled in the immediate future for the execution of the first position and attitude control maneuvers. The final configuration of the testbed will include a minimum of two Units, allowing coordinated control maneuvers for the investigation and study of problems and strategies related to SFF, Automated Rendez-Vous and Docking, and proximity maneuvers in general.
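The MPC vs LQR comparison above trades tracking error against control-action expense. As a toy illustration of the LQR side only (a single-axis double integrator standing in for the full relative orbital dynamics; all matrices and weights here are invented for the sketch), iterating the discrete Riccati equation yields a stabilizing state-feedback gain:

```python
import numpy as np

# Discrete double integrator: one axis of relative translational motion,
# a deliberate simplification of the thesis's relative orbital dynamics.
dt = 1.0
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])
Q = np.diag([10.0, 1.0])   # weight position error more than velocity
R = np.array([[1.0]])      # weight on control (delta-v) expense

# Value-iterate the discrete algebraic Riccati equation to convergence
P = Q.copy()
for _ in range(500):
    K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The closed loop x_{k+1} = (A - B K) x_k is asymptotically stable
eigs = np.linalg.eigvals(A - B @ K)
```

Raising R shifts the trade-off toward the LQR-like behavior reported in the abstract: lower delta-v expense at the cost of larger tracking error.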