
    OpenCV WebCam Applications in an Arduino-based Rover

    In this work we design and implement Arduino-based rovers that are re-programmable, modular in the type and number of their components, and capable of communication and motion, exploiting information both from their surroundings and from other wireless devices. The latter can be homogeneous devices (i.e. other similar rovers) or heterogeneous devices (e.g. laptops, smartphones). We propose a behavioral algorithm implemented on our devices as a proof of concept of the effectiveness of a detection task. Specifically, we implement "Object Detection" and "Face Recognition" techniques based on OpenCV and detail the modifications necessary to make them work on distributed devices. We show the effectiveness of the controlled-mobility concept for accomplishing a task, both in a centralized way (i.e. driven by a central computer that assigns the task) and in a fully distributed fashion, in cooperation with other rovers. We also highlight the limitations of such devices for specific tasks, as well as their potential.
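The centralized versus distributed task assignment described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: rover names, coordinates, and the distance-based bidding rule are our assumptions.

```python
import math

# Hypothetical sketch: each rover "bids" on a detection task with its distance
# to the target. A central computer can pick the winner, or each rover can
# compare the broadcast bids and self-elect; with identical information the
# two schemes agree.

def bid(rover_pos, target_pos):
    """A rover's bid: Euclidean distance to the target (lower is better)."""
    return math.dist(rover_pos, target_pos)

def centralized_assign(rovers, target):
    """Central computer picks the rover with the lowest bid."""
    return min(rovers, key=lambda name: bid(rovers[name], target))

def distributed_assign(rovers, target):
    """Each rover broadcasts its bid and self-elects if it has the minimum."""
    bids = {name: bid(pos, target) for name, pos in rovers.items()}
    return min(bids, key=bids.get)

rovers = {"rover_a": (0.0, 0.0), "rover_b": (4.0, 3.0), "rover_c": (1.0, 1.0)}
target = (1.5, 1.0)
assert centralized_assign(rovers, target) == distributed_assign(rovers, target) == "rover_c"
```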

    Intelligent Vision-Driven Robot For Sample Detection And Return

    This project explores various vision methodologies to locate and return a user-specified object. The project involves building an automatic robotic unit with an all-terrain chassis vehicle and an integrated camera. The high-level vision control system uses serial communication to direct the low-level mechanical parts. The chosen approach for vision analysis is comparison of color thresholds. This solution provides generally accurate detection even in an environment that is noisy but has good color contrast.
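The color-threshold comparison described above can be sketched without any particular vision library. The threshold values and the synthetic image below are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

# Minimal sketch of color-threshold detection: keep pixels whose HSV-like
# channels fall inside a range, then take the centroid of the resulting mask
# as the detected object location. Threshold values are illustrative.

def detect_by_color(hsv_img, lower, upper):
    """Return the (row, col) centroid of in-range pixels, or None."""
    mask = np.all((hsv_img >= lower) & (hsv_img <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))

# Synthetic 6x6 "image": a red-ish patch (low hue) on a blue-ish background.
img = np.full((6, 6, 3), (120, 200, 200), dtype=np.int32)   # background
img[2:4, 3:5] = (5, 220, 220)                               # target patch
print(detect_by_color(img, lower=(0, 180, 180), upper=(10, 255, 255)))
# -> (2.5, 3.5)
```

Good color contrast matters precisely because the method degrades when background pixels drift into the target's threshold range.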

    Study, design, simulation and implementation of terrain detection algorithm for the navigation task of the European Rover Challenge

    This project develops an algorithm for detecting the terrain and estimating the position of the robot, to be used in the Navigation task of the European Rover Challenge with the GRover robot from the UPC Space Program association. Localization is performed by detecting AR tags placed at reference points with known coordinates, using a custom marker dictionary in OpenCV, with ROS and Gazebo providing a simulation environment for the robot model. The marker detection is tested with the cameras, the movement of the robot on the real terrain is simulated, and in the simulation of the Navigation task the calculated positions show a deviation of 1.34 m when the position is computed at all angles in steps of π/4 radians. It can be concluded that the marker-based positioning algorithm is suitable, but other sensors are needed to reduce the deviation; the possibility of changing the arrangement of the cameras on the robot is also presented.
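The core geometry of marker-based localization can be sketched briefly. The marker coordinates and measurements below are invented for illustration (the actual detection would use OpenCV's ArUco module): each detected marker with a known world position, together with the measured range and bearing from the robot, yields one position estimate, and averaging several estimates reduces noise.

```python
import math

# Hypothetical sketch of marker-based localization. Each observation is
# (marker_world_xy, range_m, bearing_rad), with the bearing measured in the
# world frame from the robot toward the marker.

def estimate_from_marker(marker_xy, rng, bearing):
    """Robot position implied by a single marker observation."""
    mx, my = marker_xy
    return (mx - rng * math.cos(bearing), my - rng * math.sin(bearing))

def fuse(estimates):
    """Average the per-marker estimates to damp measurement noise."""
    xs = [e[0] for e in estimates]
    ys = [e[1] for e in estimates]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

obs = [((10.0, 0.0), 10.0, 0.0),           # marker ahead on the x axis
       ((0.0, 5.0), 5.0, math.pi / 2)]     # marker ahead on the y axis
est = fuse([estimate_from_marker(m, r, b) for m, r, b in obs])
# Both observations place the robot at (approximately) the origin.
```

With noisy range or bearing measurements the fused estimate drifts, which is consistent with the reported need for additional sensors to reduce the deviation.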

    ORYX 2.0: A Planetary Exploration Mobility Platform

    This project involved the design, manufacturing, integration, and testing of ORYX 2.0, a modular mobility platform. ORYX 2.0 is a rover designed for operation on rough terrain to facilitate space-related technology research and Earth exploration missions. Currently there are no low-cost rovers available to academia or industry, making it difficult to conduct research related to surface exploration. ORYX 2.0 fills this gap by serving as a ruggedized, highly mobile research platform with many features aimed at simplifying payload integration. Multiple teleoperated field-testing trials on a variety of terrains validated the rover's ruggedness and ability to operate soundly. Lastly, a deployable pan-tilt camera was designed, built, and tested as an example payload.


    A vision based multirotor aircraft for use in the security industry

    This research consisted of developing a vision-based multirotor aircraft that could be used in the security industry. A second-hand aircraft was purchased and modified. The aircraft made use of a Pixhawk flight controller and an Odroid XU4 companion computer, with the companion computer injecting commands into the flight controller. Robot Operating System (ROS) was installed on the companion computer to integrate the vision system and the aircraft. The vision system was designed to support a landing system in which the aircraft lands on an ArUco marker; it also allowed the aircraft to detect and follow humans. A Software-in-the-Loop (SITL) simulation was run alongside Gazebo, allowing the developed landing system and the human-detection system to be simulated and tested. The developed landing system was then implemented on the aircraft, where it was tested and compared to the aircraft's existing GPS-based landing system. The developed landing system achieved better overall accuracy, although it took longer to land the aircraft than the GPS-based system. Numerous manual and autonomous test flights were also carried out on the aircraft.
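A marker-relative landing correction of the kind described can be sketched as a simple proportional controller. The gains, the centring tolerance, and the frame conventions below are our assumptions for illustration, not the thesis's actual control law:

```python
# Hedged sketch of a marker-guided landing correction: given the ArUco
# marker's offset from the image centre, command lateral velocities
# proportional to the offset, and descend only once the marker is nearly
# centred. Gains and tolerance are illustrative assumptions.

def landing_command(marker_px, image_size, gain=0.005, tol_px=20, descend_rate=0.3):
    """Return (vx, vy, vz) body velocities from the marker's pixel position."""
    cx, cy = image_size[0] / 2, image_size[1] / 2
    ex, ey = marker_px[0] - cx, marker_px[1] - cy   # pixel error from centre
    vx, vy = -gain * ex, -gain * ey                 # translate to cancel error
    vz = -descend_rate if abs(ex) < tol_px and abs(ey) < tol_px else 0.0
    return (vx, vy, vz)

cmd = landing_command((420, 240), (640, 480))   # marker right of centre:
                                                # translate, no descent yet
cmd2 = landing_command((325, 245), (640, 480))  # nearly centred: descend
```

A vision-based controller of this shape explains the accuracy/time trade-off reported: it converges precisely onto the marker, but the centring phase adds time compared to a direct GPS descent.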

    Smile Detector Based on the Motion of Face Reference Points

    Human-computer interaction is without doubt a very important part of modern society. To improve it further, it is possible to develop computer systems that react to the gestures or facial expressions of their user. Smiling is probably the expression that gives the most information about a person. In this thesis we describe an algorithm that detects when a person is smiling. To achieve that, we first detect the face using the Viola-Jones algorithm. Several facial reference points are then located and tracked across consecutive frames using optical flow. The motion of these points is analyzed and the face is classified as smiling or not smiling.
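The final classification step can be sketched independently of the detection and tracking stages (which the thesis performs with Viola-Jones and optical flow). The thresholds below are invented for illustration:

```python
# Sketch of the motion-based decision step: given tracked mouth-corner
# positions before and after, classify the face as smiling when the corners
# move apart horizontally and upward, as happens in a smile. Thresholds are
# illustrative assumptions, not the thesis's tuned values.

def is_smiling(left_before, right_before, left_after, right_after,
               widen_thresh=4.0, lift_thresh=2.0):
    """Image coordinates: x grows rightward, y grows downward."""
    widen = (right_after[0] - left_after[0]) - (right_before[0] - left_before[0])
    lift = ((left_before[1] + right_before[1])
            - (left_after[1] + right_after[1])) / 2
    return widen > widen_thresh and lift > lift_thresh

# Corners spread by 8 px and rise by 3 px -> classified as a smile.
print(is_smiling((100, 200), (140, 200), (96, 197), (144, 197)))  # True
```

In practice the decision would be accumulated over several frames to reject jitter in the tracked points.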

    Development of a large-baseline stereo vision system for the identification of pedestrians and other targets on the road

    Nowadays, autonomous vehicles are an increasing trend, with the major players of the automotive sector, and others, focused on developing autonomous cars. The two main advantages of autonomous cars are greater convenience for the passengers and more safety, both for the passengers and for the people around them, which is what this thesis focuses on. Sometimes, due to distraction or other reasons, a driver does not see an object on the road and crashes, or does not see a pedestrian in the crosswalk and the person is run over. This is one of the problems that an ADAS or an autonomous car tries to solve, and because of its relevance more and more research is being done in this area. Digital cameras are among the most widely applied sensors for this kind of application, providing rich information about the surrounding environment, alongside LIDAR sensors and others. Following this trend, the use of stereo vision systems, i.e. systems with two cameras, is increasing, and in this context a question comes up, which this thesis tries to answer: "What is the ideal distance between the cameras in a stereo system for object and pedestrian detection?" This thesis presents the full development of a stereo vision system: from the software needed to calculate the distance of pedestrians and objects from the setup using two cameras, to the design of a fixing system for the cameras that allows the quality of pedestrian detection to be studied for various baselines. To study the influence of the baseline and of the focal length of the lens, experiments were carried out recording a pedestrian walking through predefined positions marked on the ground, as well as a fixed object, all in an outdoor scenario. The results were analyzed by comparing the distance calculated automatically by the application with the measured value. It was concluded that with this system and this application pedestrians can be detected with reasonable accuracy. The best results were achieved for the 0.3 m baseline and the 8 mm lens.
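The baseline question above rests on the standard rectified-stereo depth relation Z = f·B/d. A short sketch shows why a wider baseline helps at long range; the focal length and depth values are illustrative, not the thesis's calibration:

```python
# Standard rectified-stereo depth relation: Z = f * B / d, where f is the
# focal length in pixels, B the baseline in metres, and d the disparity in
# pixels. Numbers below are illustrative assumptions.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def disparity_from_depth(f_px, baseline_m, depth_m):
    return f_px * baseline_m / depth_m

f_px = 1500.0  # assumed focal length in pixels
for b in (0.1, 0.3):
    d = disparity_from_depth(f_px, b, depth_m=30.0)
    print(f"baseline {b} m -> disparity {d:.1f} px at 30 m")
# A wider baseline yields a larger disparity at the same depth, so a
# one-pixel matching error maps to a smaller depth error.
```

The trade-off is that a very wide baseline shrinks the overlap between the two views and makes stereo matching harder, which is why an intermediate value such as the reported 0.3 m can come out best.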

    ENG4100 - USQ Project - Jason Pont

    The aim of this project was to investigate navigation methods for supporting autonomous operations on Mars. To date there have been several robotic rover missions to Mars for the purpose of scientific exploration. These missions have relied heavily on human input for navigation, due to limited confidence in computer decision-making and the difficulty of localising in an unknown environment with limited supporting infrastructure, such as satellite navigation. By increasing confidence in the performance of an autonomous rover on Mars, this project contributes to increasing the efficiency of future missions by reducing or removing humans from the control loop. Due to the signal propagation delay between Earth and Mars, a certain level of autonomy is required to ensure a rover can continue operating while awaiting instructions from a human on Earth. However, due to the level of risk in relying solely on automation, there is still considerable human intervention. This can result in significant downtime while awaiting a decision by a human operator on Earth. While acceptable for scientific missions, greater autonomy will be required for routine Mars operations. The project reviewed systems and sensors that have been used on previous robotic missions to Mars and in other experiments on Earth. The most appropriate systems were assembled into a simulated test environment consisting of a small rover, an overhead camera of the kind that might be carried by a drone or balloon, and wireless communications between the systems. A machine vision algorithm was developed to test the concept of an overhead camera mounted on a drone or balloon, while evaluating different path-planning algorithms for speed in navigating a previously unknown environment. An experimental system was built consisting of a rover, a fixed overhead camera, and communications between them. The machine vision algorithm was used to send instructions to the rover, which could then follow a path through a test environment with different obstacle densities. Two different path-finding algorithms were tested with the system. The key outcomes of the project were the construction and testing of the system. The rover could navigate, rotate towards, and travel to a target location after receiving instructions via serial radio communications. The rover could also detect obstacles using an ultrasonic sensor and send this information back to the machine vision algorithm. The algorithm would then update the path with the new information received on obstacle locations, and the rover would follow the new path to the target location. By successfully testing the concept, the project showed that this system could be used to support future scientific missions, resource gathering, and preparation for human exploration of Mars.
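The replanning loop described above, where the rover reports an obstacle and receives an updated path, can be sketched with a generic grid path-finder. Breadth-first search is shown here for brevity; the project's specific path-finding algorithms are not reproduced:

```python
from collections import deque

# Minimal sketch of grid path-finding for an overhead-camera planner.
# 0 = free cell, 1 = obstacle (e.g. reported by the rover's ultrasonic
# sensor). BFS returns a shortest 4-connected path.

def bfs_path(grid, start, goal):
    """Shortest 4-connected path from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:           # walk predecessors back to start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],     # wall reported by the rover forces a detour
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))
# -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Marking a newly reported obstacle cell as 1 and re-running the search reproduces the update-and-follow cycle described in the abstract.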