2,519 research outputs found

    Tracking planes with Time of Flight cameras and J-linkage

    Correction of Errors in Time of Flight Cameras

    This thesis addresses the correction of errors in Time of Flight (ToF) depth cameras. Among the most recent technologies, Continuous Wave Modulation (CWM) ToF cameras are a promising alternative for building compact and fast sensors. However, a wide variety of errors significantly affect the depth measurements and compromise potential applications, and correcting them is a demanding challenge. Two main sources of error are currently considered: i) systematic and ii) non-systematic. While the former admits calibration, the latter depends on the geometry and relative motion of the scene. This thesis proposes methods that address i) the systematic depth distortion and two of the most relevant non-systematic error sources: ii.a) Multipath Interference (MpI) and ii.b) motion artifacts.

The systematic depth distortion in ToF cameras arises mainly from the use of imperfect sinusoidal modulation signals. As a result, depth measurements are distorted, which can be reduced with a calibration stage. This thesis proposes a calibration method based on showing the camera a plane in different positions and orientations. The method requires no calibration patterns and can therefore use the planes that naturally appear in the scene. It finds a function that yields the depth correction for each pixel, improving on existing methods in accuracy, efficiency, and suitability.

Multipath interference arises from the superposition of the signal reflected along different paths with the direct reflection, producing distortions that are most noticeable on convex surfaces. MpI causes significant depth-estimation errors in CWM ToF cameras. This thesis proposes a method that removes MpI from a single depth map. The approach requires no information about the scene beyond the ToF measurements; it relies on a radiometric model of the measurements, which is used to estimate the undistorted depth map with high accuracy.

One of the leading ToF depth-imaging technologies is based on the Photonic Mixer Device (PMD), which obtains depth by sequentially sampling the correlation between the modulation signal and the signal returning from the scene at different phase shifts. Under motion, PMD pixels capture different depths at each sampling stage, producing motion artifacts. The correction method proposed in this thesis stands out for its speed and simplicity and can easily be included in the camera hardware. The depth of each pixel is recovered from the consistency between the correlation samples at the PMD pixel and those in its local neighborhood. The method yields accurate corrections, greatly reducing motion artifacts, and as a by-product the optical flow at moving contours can be obtained from a single capture.

Despite being a very promising alternative for depth acquisition, ToF cameras still face challenging problems in the correction of systematic and non-systematic errors. This thesis proposes effective methods to address them.
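
As background to the abstract above, the sketch below shows the standard four-phase depth computation that CWM ToF cameras (including PMD sensors) rely on; the per-pixel systematic correction that the thesis learns from planes is only stubbed here as an assumed polynomial in the measured depth, and all function names and numbers are illustrative rather than taken from the thesis.

```python
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light [m/s]

def cwm_tof_depth(c0, c1, c2, c3, f_mod):
    """Standard four-phase depth estimate for a CWM ToF pixel.

    c0..c3 are correlation samples taken at 0, 90, 180 and 270 degree
    phase offsets; f_mod is the modulation frequency in Hz.
    """
    phase = np.arctan2(c3 - c1, c0 - c2)   # wrapped phase in (-pi, pi]
    phase = np.mod(phase, 2.0 * np.pi)     # map to [0, 2*pi)
    return C_LIGHT * phase / (4.0 * np.pi * f_mod)

def correct_systematic_distortion(depth, coeffs):
    """Hypothetical correction stage (assumed form, not the thesis method).

    The thesis learns the per-pixel correction from planes shown to the
    camera; here it is only stubbed as a polynomial in the measured depth.
    """
    return depth - np.polyval(coeffs, depth)

# Example: one pixel at 20 MHz modulation (illustrative numbers only).
d_raw = cwm_tof_depth(1.10, 0.95, 0.90, 1.05, f_mod=20e6)
d_corr = correct_systematic_distortion(d_raw, coeffs=[0.0, 0.0, 0.0])
```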

    Attitude Regulation of a Tailless Flapping-Wing Micro Air Vehicle

    Doctoral dissertation, Department of Mechanical and Aerospace Engineering, College of Engineering, Seoul National University, August 2020 (advisor: 김현진). With the growing interest in biomimetics, there are increasing attempts to analyze the structure, appearance, motion, and behavior of living creatures and to apply their advantages to robots, so that tasks that conventional robots cannot handle, or special missions, can be carried out more effectively and efficiently. These attempts extend to the development of unmanned aerial vehicles, and flapping-wing vehicles are one example. A flapping-wing vehicle flies on the forces generated by the repetitive motion of its wings and is commonly classified, by the presence or absence of a tail, into bird-inspired (tailed) and insect-inspired (tailless) vehicles. Tailless vehicles can hover, are small and light enough to reduce air resistance, and are capable of agile flight, but they lack sufficient control surfaces for passive stability and rely on a complex mechanism that must generate thrust and three-axis control moments simultaneously. Building on the author's earlier work on tailed flapping-wing vehicles, this thesis targets the component technologies and an initial platform for a tailless vehicle capable of autonomous flight.

To this end, a tailless flapping-wing vehicle weighing under 30 grams and fitting within 30 cm^3 was developed from a commercially available RC toy. Its actuators are DC motors and servo motors: the DC motors drive the flapping gearbox, generating the thrust that supports the vehicle's weight and contributing to roll moments, while the servo motors tilt the direction of the left and right thrust produced by the flapping to generate pitch and yaw moments. An Arduino-based microprocessor on board generates the control signals, and a Bluetooth module provides communication with external devices.

Controlling the vehicle's attitude requires knowing the forces produced by the interacting actuators, so experiments were performed to measure the forces generated by the flapping mechanism. These measurements characterized the relationships between DC-motor input and thrust and between servo command and moment, and confirmed that the mechanism generates enough thrust to lift the vehicle and sufficient moments for attitude control. Attitude control also requires the equations of motion about the three axes; the equations governing the forces and rotational motion about the roll, pitch, and yaw axes were derived, and a PID-type controller was designed to stabilize the vehicle's attitude. In addition, for trajectory tracking, an outer controller computed from the vehicle's position was added around the inner attitude controller in a dual-loop structure, and simulations confirmed both attitude regulation and trajectory tracking.

To verify that the developed vehicle and the designed controller perform as intended, a gyroscope test rig allowing rotation about the roll, pitch, and yaw axes was built from MDF to keep its weight low, and attitude-control experiments were carried out. Two cases were considered, controlling each of the three axes independently and controlling all three simultaneously, and in both the controller showed the intended performance. For trajectory tracking, two flight setups were used: in the first, a string connects the ceiling to the top of the vehicle and tracking of a given trajectory in a 2D plane is checked; in the second, a helium-filled balloon is attached to the top of the vehicle and tracking of a given trajectory in 3D space is checked. In both setups the vehicle tracked trajectories of various shapes well. Finally, with the external devices (string, balloon) removed, hovering experiments confirmed that the vehicle hovers within a 1 m^3 volume for about 15 seconds.

Flapping wing micro air vehicles (FWMAVs), which generate thrust and lift by flapping their wings, are regarded as promising flight vehicles because of their appearance and maneuverability, similar to those of natural creatures. By reducing weight and air resistance, insect-inspired tailless FWMAVs are a more attractive aerial vehicle than bird-inspired FWMAVs. However, they are challenging platforms for autonomous flight because they have insufficient control surfaces to secure passive stability and a complicated wing mechanism for generating three-axis control moments simultaneously. In this thesis, as preliminary autonomous flight research, I present a study of attitude regulation and trajectory tracking control of the developed tailless FWMAV. For these tasks, I develop my own platform, which includes two DC motors that generate thrust to support its weight and servo motors that generate three-axis control moments to regulate its flight attitude. First, I conduct force and moment measurement experiments to confirm the magnitude and direction of the lift and moments generated by the wing mechanism. The measurements confirm that the wing mechanism generates enough thrust to float the vehicle and control moments for attitude regulation.

From the vehicle's dynamic equations about the three axes, a controller for maintaining a stable attitude can be designed. To this end, the dynamic equations for rotational motion about the roll, pitch, and yaw axes are derived. Based on them, we design a proportional-integral-derivative (PID) controller to regulate the attitude of the vehicle. Besides, we use a multi-loop control structure (inner loop: attitude control, outer loop: position control) to track various trajectories. Simulation results show that the designed controller is effective in regulating the platform's attitude and tracking a trajectory. To check whether the developed vehicle and the designed controller regulate its attitude effectively, I design a lightweight gyroscope apparatus from medium-density fiberboard (MDF). The rig can rotate freely about the roll, pitch, and yaw axes. I consider two situations, in which each axis is controlled independently and all axes are controlled simultaneously; in both cases, attitude regulation is performed properly. Two flight situations are considered for the trajectory tracking experiment: in the first, a string connects the ceiling and the top of the platform; in the second, a helium-filled balloon is connected to the top of the vehicle. In both cases, the platform tracks various types of trajectories with errors of less than 10 cm. Finally, an experiment is conducted to check whether the tailless FWMAV can fly in place without the external devices (string, balloon), and it flies within a 1 m^3 space for about 15 seconds.
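
The dual-loop structure described above (inner attitude loop, outer position loop) can be summarized by the minimal sketch below; the gains, axis mappings, and command scaling are placeholders for illustration and are not the values or structure used in the thesis.

```python
from dataclasses import dataclass

@dataclass
class PID:
    kp: float
    ki: float
    kd: float
    integral: float = 0.0
    prev_err: float = 0.0

    def step(self, err: float, dt: float) -> float:
        # Discrete PID with a simple backward-difference derivative.
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Gains below are placeholders, not values from the thesis.
outer_x = PID(0.8, 0.0, 0.3)     # x position error -> desired pitch angle
outer_y = PID(0.8, 0.0, 0.3)     # y position error -> desired roll angle
inner = {ax: PID(2.0, 0.1, 0.4) for ax in ("roll", "pitch", "yaw")}

def control_step(pos_err, att, att_ref_yaw, dt):
    """One dual-loop update: the outer position loop feeds the inner attitude loop."""
    att_ref = {
        "pitch": outer_x.step(pos_err[0], dt),
        "roll": -outer_y.step(pos_err[1], dt),
        "yaw":   att_ref_yaw,
    }
    # Inner loop outputs normalized actuator commands (servo deflections, motor trim).
    return {ax: inner[ax].step(att_ref[ax] - att[ax], dt) for ax in ("roll", "pitch", "yaw")}

cmd = control_step(pos_err=(0.10, -0.05),
                   att={"roll": 0.0, "pitch": 0.02, "yaw": 0.0},
                   att_ref_yaw=0.0, dt=0.01)
```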

    Combining Occupancy Grids with a Polygonal Obstacle World Model for Autonomous Flights

    This chapter presents a mapping process that can be applied to autonomous systems for obstacle avoidance and trajectory planning. It improves on commonly applied obstacle mapping techniques such as occupancy grids: problems encountered in large outdoor scenarios are tackled, and a compressed map that can be sent over low-bandwidth networks is produced. The approach is real-time capable and works in full 3-D environments. Its efficiency is demonstrated under real operational conditions on an unmanned aerial vehicle using stereo vision for distance measurement.
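
To illustrate the general idea of compressing an occupancy grid into a small set of obstacle primitives, the simplified 2D sketch below replaces each connected occupied region with its bounding box; the chapter's actual method builds polygonal obstacles in full 3-D and is not reproduced here.

```python
import numpy as np
from scipy import ndimage  # connected-component labelling

def grid_to_boxes(occupancy: np.ndarray, threshold: float = 0.5):
    """Compress an occupancy grid into a list of axis-aligned boxes.

    Simplified 2D illustration of the grid-to-polygon idea: each connected
    occupied region becomes one bounding rectangle, which is far cheaper to
    transmit than the raw cells. This is not the chapter's 3-D polygon code.
    """
    occupied = occupancy >= threshold
    labels, _ = ndimage.label(occupied)
    boxes = []
    for region in ndimage.find_objects(labels):
        if region is None:
            continue
        rows, cols = region
        boxes.append(((rows.start, cols.start), (rows.stop - 1, cols.stop - 1)))
    return boxes

grid = np.zeros((100, 100))
grid[10:20, 30:40] = 1.0   # a synthetic block obstacle
grid[60:62, 5:50] = 1.0    # a wall-like obstacle
print(grid_to_boxes(grid))  # two boxes instead of ~190 occupied cells
```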

    Closed-Loop Control of Constrained Flapping Wing Micro Air Vehicles

    Micro air vehicles are vehicles with a maximum dimension of 15 cm or less, so they are ideal in confined spaces such as indoors, urban canyons, and caves. Considerable research has been invested in the areas of unsteady and low Reynolds number aerodynamics, as well as techniques to fabricate small-scale prototypes. Control of these vehicles has been studied less, and most proposed control techniques have only been implemented in simulations, without concern for power requirements, sensors and observers, or actual hardware demonstrations. In this work, the power requirements of a piezo-driven, resonant flapping wing control scheme, Bi-harmonic Amplitude and Bias Modulation, were studied. In addition, power efficiency versus flapping frequency was studied and shown to be maximized when flapping at the piezo-driven system's resonance. Prototype hardware of varying designs was then used to capture the impact of a specific component of the flapping wing micro air vehicle, the passive rotation joint. Finally, closed-loop control of different constrained configurations was demonstrated using the resonant flapping Bi-harmonic Amplitude and Bias Modulation scheme with the optimized hardware. This work is important for the development and understanding of eventual free-flight-capable flapping wing micro air vehicles.
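
For illustration only, the sketch below generates an assumed bi-harmonic wing drive signal in which amplitude and bias act as the per-wing control inputs; the exact waveform, harmonic ratio, and drive frequency used with the piezo hardware in this work may differ, so this is a generic sketch of the idea rather than the paper's scheme.

```python
import numpy as np

def biharmonic_drive(t, amp, bias, f_flap, second_harmonic_ratio=0.5):
    """Assumed form of a bi-harmonic wing drive signal.

    The fundamental at the flapping frequency is mixed with its second
    harmonic; 'amp' and 'bias' are treated as the per-wing control inputs,
    which is the general idea behind amplitude-and-bias modulation. The
    harmonic ratio and waveform of the cited work are not reproduced here.
    """
    w = 2.0 * np.pi * f_flap
    return bias + amp * (np.cos(w * t) + second_harmonic_ratio * np.cos(2.0 * w * t))

t = np.linspace(0.0, 0.1, 1000)
left = biharmonic_drive(t, amp=1.0, bias=0.1, f_flap=25.0)    # illustrative 25 Hz
right = biharmonic_drive(t, amp=0.9, bias=-0.1, f_flap=25.0)  # asymmetry -> control moment
```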

    SwarMAV: A Swarm of Miniature Aerial Vehicles

    As the MAV (Micro or Miniature Aerial Vehicle) field matures, we expect the platform's degree of autonomy, its information exchange, and its coordination with other manned and unmanned actors to become at least as crucial as its aerodynamic design. The project described in this paper explores some aspects of a particularly exciting avenue of development: an autonomous swarm of MAVs that exploits its inherent reliability (through redundancy) and its ability to exchange information among its members in order to cope with a dynamically changing environment and achieve its mission. We describe the successful realization of a prototype experimental platform weighing only 75 g and outline a strategy for the automatic design of a suitable controller.

    Exploring Motion Signatures for Vision-Based Tracking, Recognition and Navigation

    As cameras become more and more popular in intelligent systems, algorithms and systems for understanding video data become more and more important. There is a broad range of applications, including object detection, tracking, scene understanding, and robot navigation. Beyond static appearance, video data contains rich motion information about the environment. Biological visual systems, like human and animal eyes, are very sensitive to motion information, which has inspired active research on vision-based motion analysis in recent years. The main focus of motion analysis has been on low-level motion representations of pixels and image regions; however, motion signatures can benefit a broader range of applications if further in-depth analysis techniques are developed. In this dissertation, we discuss how to exploit motion signatures to solve problems in two applications: object recognition and robot navigation.

First, we use bird species recognition as the application to explore motion signatures for object recognition. We begin with a study of the periodic wingbeat motion of flying birds. To analyze the wing motion of a flying bird, we establish kinematic models for bird wings and obtain the wingbeat periodicity in image frames after perspective projection. Time series of salient extremities on bird images are extracted, and the wingbeat frequency is acquired for species classification. Physical experiments show that the frequency-based recognition method is robust to segmentation errors and measurement loss of up to 30%. In addition to the wing motion, the body motion of the bird is analyzed to extract the flying velocity in 3D space. An interacting multi-model approach is then designed to capture the combined object motion patterns and different environment conditions. The proposed systems and algorithms are tested in physical experiments, and the results show a false positive rate of around 20% with a false negative rate close to zero.

Second, we explore motion signatures for vision-based vehicle navigation. We find that motion vectors (MVs) encoded in Moving Picture Experts Group (MPEG) videos provide rich information about motion in the environment, which can be used to reconstruct the vehicle ego-motion and the structure of the scene. However, MVs suffer from a high noise level. To handle this challenge, an error propagation model for MVs is first proposed. Several steps, including MV merging, plane-at-infinity elimination, and planar region extraction, are designed to further reduce noise. The extracted planes are used as landmarks in an extended Kalman filter (EKF) for simultaneous localization and mapping. Results show that the algorithm performs localization and plane mapping with a relative trajectory error below 5.1%. Exploiting the fact that MVs encode both environment information and moving obstacles, we further propose to track moving objects at the same time as localization and mapping. This enables two critical navigation functionalities, localization and obstacle avoidance, to be performed in a single framework. MVs are labeled as stationary or moving according to their consistency with geometric constraints, so the extracted planes are separated into moving objects and the stationary scene. Multiple EKFs are used to track the static scene and the moving objects simultaneously. In physical experiments, we show a detection rate of moving objects of 96.6% and a mean absolute localization error below 3.5 meters.
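
As a toy illustration of the frequency-based recognition idea, the sketch below extracts a dominant wingbeat frequency from an extremity time series with an FFT and assigns the nearest reference species; the kinematic modelling, perspective handling, and interacting multi-model machinery of the dissertation are not reproduced, and the reference frequencies and species names are placeholders.

```python
import numpy as np

def dominant_frequency(signal, fps):
    """Dominant wingbeat frequency (Hz) of a detrended extremity time series."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

def classify_by_frequency(freq_hz, reference):
    """Nearest-reference classification; the reference table is illustrative only."""
    return min(reference, key=lambda species: abs(reference[species] - freq_hz))

fps = 60.0
t = np.arange(0, 2.0, 1.0 / fps)
trace = np.sin(2 * np.pi * 8.0 * t) + 0.2 * np.random.randn(t.size)  # synthetic 8 Hz wingbeat
f = dominant_frequency(trace, fps)
print(classify_by_frequency(f, {"species_a": 4.0, "species_b": 8.5, "species_c": 14.0}))
```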