5 research outputs found

    A Mobile Robot Localization using External Surveillance Cameras at Indoor

    Get PDF
    Abstract: Localization is a technique that service robots need in order to drive indoors, and it has been studied in various ways. Most localization techniques have the robot measure environmental information to obtain its own location, but this requires costly onboard equipment and complicates robot development. If an external device could instead compute the robot's location and transmit it to the robot, the extra cost of onboard localization equipment would be avoided and robot development would be simplified. This study therefore proposes an effective way to control a robot using its location in a map built from visual information captured by surveillance cameras installed indoors. The size of an object in a single image is difficult to determine because of shadow components and occlusion. We therefore propose combining a shadow-removal technique based on the HSV color space with indoor images from different perspectives, related by homography, to create a two-dimensional map with accurate object information. In the experiment, the effectiveness of the proposed method is demonstrated by analyzing the motion of a robot that uses the location information from the two-dimensional map built from the multiple cameras, whose accuracy was measured in advance.
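    The core geometric step described above can be sketched as a planar homography that maps a pixel detected in the surveillance image onto 2-D floor coordinates. The matrix below is a hypothetical example; in practice it would be estimated from at least four point correspondences during camera calibration.

```python
import numpy as np

# Hypothetical homography from image pixels to floor-plane coordinates
# (in metres); the real matrix comes from camera calibration.
H = np.array([[0.01, 0.0,  -1.0],
              [0.0,  0.012, -2.0],
              [0.0,  0.0,    1.0]])

def image_to_floor(u, v, H):
    """Map an image pixel (u, v) to 2-D floor coordinates via H."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w  # perspective divide

robot_px = (320, 240)  # detected robot centroid in the camera image
print(image_to_floor(*robot_px, H))
```

    Combining several such mappings, one per camera, is what lets the system fuse views from different perspectives into a single two-dimensional map.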

    Camera geometry determination based on circular's shape for peg-in-hole task

    Get PDF
    A system that is simple, inexpensive, and effective at the required task is the most preferable in industry. The peg-in-hole task is widely used in manufacturing processes with vision systems and sensors; however, it requires complex algorithms and a high degree-of-freedom (DOF) mechanism with fine movement, which increases cost. Currently, a forklift-like robot controlled by an operator with a wired controller picks up copper-wire spools, arranged side by side on a shelf, one by one and carries them to the inspection area. A holder and puller attached to the robot grasp each spool. Because of the robot's structure, it is difficult for the operator to ensure the stem is properly inserted into the hole (the peg-in-hole problem). Moreover, the holder design is not universal and is not applicable to other companies: the spool can only be grasped and pulled out from the front side and cannot be grasped with a robot arm and gripper. In this study, a vision system is developed that solves the peg-in-hole problem by enabling the robot to perform the insertion and pick up the spool autonomously, without any sensors except a low-cost camera that captures images of the copper-wire spool in real-time video. Inspired by how humans perceive an object's orientation from its shape, the system determines the camera orientation from the condition of the spool image and the yaw angle from the center of the camera's field of view (CFOV) to CHS. The performance of the proposed system is analyzed through a detection-rate analysis. The project is developed in MATLAB. The analysis is carried out in a controlled environment with a camera-to-spool distance of 50-110 cm, and the camera orientation is analyzed over a yaw-angle range of -20° to 20°. To ensure the puller does not scratch the spool, a mathematical equation is derived to calculate the puller tolerance.
    Using this, the system estimates the spool position from the camera orientation and the distance calculation. The system is simple and cost-effective to apply. A Modified Circular Hough Transform (MCHT) method is proposed and tested against the existing Circular Hough Transform (CHT) method to eliminate false circles and outliers. The analysis shows a detection success rate of 96%, higher than that of the CHT method, indicating that the MCHT method outperforms the CHT method. The proposed system calculates the distance and camera orientation from the spool image condition with a low error rate, and thus solves the peg-in-hole problem without a force/torque sensor. In conclusion, a total of seven analyses, covering image pre-processing, image segmentation, object classification, the comparison between CHT and MCHT, illumination measurement, distance calculation, and yaw angle, were experimentally tested, including the comparison with the existing method. The proposed system achieved all of its objectives.
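    The false-circle elimination step can be illustrated with a minimal sketch: given candidate circles (x, y, r) as a Hough transform stage would return, reject radius outliers relative to the median radius. The rejection criterion here is an illustrative assumption, not the thesis's actual MCHT criteria.

```python
import numpy as np

def reject_false_circles(circles, rel_tol=0.2):
    """Keep candidate circles whose radius lies near the median radius.

    `circles` is an (N, 3) array of (x, y, r) candidates from a circular
    Hough transform stage. Radius-based outlier rejection is one simple
    way to discard false circles; the real MCHT may use other cues.
    """
    circles = np.asarray(circles, dtype=float)
    r_med = np.median(circles[:, 2])
    keep = np.abs(circles[:, 2] - r_med) <= rel_tol * r_med
    return circles[keep]

# Three plausible spool rims plus one spurious large circle.
cands = [(100, 100, 30), (210, 102, 31), (320, 98, 29), (50, 400, 90)]
print(reject_false_circles(cands))
```

    Because all spools on the shelf share the same rim radius, agreement with the median radius is a cheap and robust filter before the geometry calculations.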

    Vision-based SLAM for the aerial robot ErleCopter

    Get PDF
    The main objective of this thesis is the implementation of different monocular-vision SLAM (Simultaneous Localization and Mapping) algorithms on the aerial robot ErleCopter, using the software platform ROS (Robot Operating System). To that end, a set of three algorithms widely used in the field of artificial vision has been chosen: PTAM, ORB-SLAM, and LSD-SLAM, and a study of their performance on the ErleCopter is carried out. In addition, by fusing the information extracted by these algorithms with that of the other sensors on the robotic platform, an EKF (Extended Kalman Filter) is implemented, so that the robot's location can be predicted more accurately in indoor environments, where GPS is unavailable. The robotic simulation platform Gazebo is used to verify the operation of the system.
    Finally, tests are performed with the real robot, in order to observe and draw conclusions about the performance of these algorithms on the ErleCopter itself.
    Máster Universitario en Ingeniería Industrial (M141
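    The fusion idea behind the EKF can be sketched in one dimension: predict the pose with an odometry increment, then correct it with a visual-SLAM measurement. The noise values and measurements below are illustrative assumptions; the ErleCopter filter is of course multi-dimensional with real sensor models.

```python
# Minimal 1-D Kalman filter fusing odometry with a SLAM pose estimate.
def kf_step(x, P, u, z, Q=0.1, R=0.5):
    # Predict: apply the odometry increment u, inflate uncertainty by Q.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend in the SLAM measurement z with gain K.
    K = P_pred / (P_pred + R)      # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                     # initial pose and variance
for u, z in [(0.5, 0.6), (0.5, 1.1), (0.5, 1.55)]:
    x, P = kf_step(x, P, u, z)
print(x, P)
```

    Each update shrinks the variance P, which is why the fused estimate in indoor flight is more reliable than either odometry or vision alone.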

    Privacy-preserving human mobility and activity modelling

    Get PDF
    The exponential proliferation of digital trends and the worldwide response to the COVID-19 pandemic have thrust the world into digitalization and interconnectedness, pushing ever more new technologies, devices, and applications into the market. More and more intimate user data are collected for beneficial analysis, such as improving well-being, but are shared with or without the user's consent, underscoring the importance of making human mobility and activity models inclusive, private, and fair. In this thesis, I develop and implement advanced methods and algorithms to model human mobility and activity in terms of temporal-context dynamics, multi-occupancy impacts, privacy protection, and fair analysis. The following research questions are thoroughly investigated: i) whether temporal information integrated into deep learning networks can improve accuracy in predicting both the next activity and its timing; ii) what the trade-off is between cost and performance when optimizing the sensor network for multi-occupancy smart homes; iii) whether malicious purposes such as user re-identification in human mobility modelling can be mitigated by adversarial learning; iv) what the fairness implications of mobility models are, and whether privacy-preserving techniques perform equally for different groups of users. To answer these research questions, I develop different architectures to model human activity and mobility. I first clarify the temporal-context dynamics in human activity modelling and achieve better prediction accuracy by using temporal information appropriately. I then design a framework, MoSen, to simulate the interaction dynamics among residents and intelligent environments and to generate an effective sensor-network strategy. To address users' privacy concerns, I design Mo-PAE and show that the privacy of mobility traces attains decent protection at a marginal utility cost.
    Last but not least, I investigate the relationship between fairness and privacy and conclude that while the privacy-aware model guarantees group fairness, it violates the individual fairness criteria.
    Open Access
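    The group-fairness notion referred to above can be made concrete with a toy demographic-parity check on model predictions. The metric choice and the data below are illustrative assumptions, not the thesis's actual evaluation.

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Absolute gap in positive-prediction rates between two groups.

    A gap of 0 means the model flags both groups at the same rate
    (group fairness); individual fairness would additionally require
    similar individuals to receive similar predictions.
    """
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    return abs(y_pred[groups == 0].mean() - y_pred[groups == 1].mean())

y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # toy binary predictions
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # toy group labels
print(demographic_parity_gap(y_pred, groups))  # equal rates -> 0.0
```

    A model can drive this group-level gap to zero while still treating similar individuals inconsistently, which is exactly the group-versus-individual tension the thesis concludes with.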

    Development and evaluation of a communication-supported localization

    Get PDF
    With the goal of letting a track-guided automated guided vehicle navigate freely and with high local flexibility, a novel communication-supported localization system based on distance measurements was developed, researched, practically implemented, and evaluated in this dissertation. A depth camera mounted on the ceiling of a hall captures the area below it, and a processing unit analyzes the images. The sensor system is able to detect objects within its digital field of view, examine their kinematic and geometric properties, and publish the collected information over a communication interface. Since identifying the individual vehicles with the camera system alone is difficult, a new approach was investigated: based on a probabilistic approach, a method was developed and implemented that allows the individual vehicles to access the communicated properties and identify themselves. One advantage of the new approach is its independence from the shape, color, and size of the vehicles. Moreover, each vehicle receives not only the information about itself but also about all objects within the camera's field of view.
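    The self-identification idea can be sketched as follows: each vehicle compares its own odometry velocity with the velocities the ceiling camera publishes for all tracked objects and claims the best match. The Gaussian likelihood and the numbers below are illustrative assumptions, not the dissertation's exact model.

```python
import numpy as np

def self_identify(own_vel, published_vels, sigma=0.2):
    """Return the index of the published track most likely to be 'me'.

    Compares the vehicle's own (vx, vy) odometry velocity with each
    velocity the camera system publishes, scoring matches with a
    Gaussian likelihood over the squared velocity difference.
    """
    own = np.asarray(own_vel, dtype=float)
    vels = np.asarray(published_vels, dtype=float)
    d2 = np.sum((vels - own) ** 2, axis=1)
    likelihood = np.exp(-d2 / (2 * sigma ** 2))
    return int(np.argmax(likelihood))

published = [(0.0, 0.0), (0.45, 0.1), (-0.3, 0.2)]  # camera-published tracks
print(self_identify((0.5, 0.12), published))         # matches track 1
```

    Because the match is made on motion rather than appearance, this scheme is independent of vehicle shape, color, and size, which is the advantage the abstract highlights.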