66,483 research outputs found

    Automatic control of the camera's optical axis based on a machine vision system using color-based object identification

    Get PDF
    The work is devoted to the development of a machine vision system: a laboratory bench for implementing the system and the overall software architecture were developed, and the object of automation and a typical machine vision task are described. Machine vision systems are used for processing and recognizing images obtained from a camera. A vision system is a means of tracking, monitoring and automatic decision-making in certain situations. This work is dedicated to the development of a new automatic control system for a pilotless vehicle using a machine vision system, namely its autonomous landing on a special sign/symbol placed on the ground. The practical application of such systems: the pilotless vehicle reaches the final destination at the specified coordinates, uses the camera (machine vision system) to find the corresponding symbol at the final position, recognizes it, and proceeds to land; everything runs autonomously. Object detection and segmentation is one of the most important and challenging fundamental tasks of computer (machine) vision. It is a critical part of many applications such as image search and scene understanding. However, it is still an open problem due to the variety and complexity of object classes and backgrounds. A practical way to detect and segment an object in an image is the color-based method: the object and the background should have a significant color difference in order to segment objects successfully this way. This work performs object detection based on color using Python 3, OpenCV 3, a web camera, and a Raspberry Pi 2 microcontroller. On the Raspberry Pi 2 we run a web server written in Python that provides a web interface and listens for commands over the WebSockets protocol; when it receives commands, it forwards them to the Mini Driver via serial USB. A custom Raspberry Pi camera program we have written streams JPEG images over the network; it can also stream reduced-size images (160×120) for computer vision. The efficiency of the system is confirmed by tests on the laboratory bench. Image processing algorithms are presented, the overall software architecture for implementing the system is developed, and the object of automation and a typical machine vision task are described.
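    The abstract's color-based segmentation runs on OpenCV, whose details are not given here; as a minimal numpy-only sketch of the underlying idea (threshold the frame against a color range, then take the centroid of the matching pixels to steer the camera axis; the function names and RGB bounds are illustrative, not the paper's):

    ```python
    import numpy as np

    def color_mask(img, lower, upper):
        """Boolean mask of pixels whose RGB values fall inside [lower, upper]."""
        lower, upper = np.asarray(lower), np.asarray(upper)
        return np.all((img >= lower) & (img <= upper), axis=-1)

    def centroid(mask):
        """Centroid (row, col) of the True pixels, or None if the mask is empty."""
        ys, xs = np.nonzero(mask)
        if len(ys) == 0:
            return None
        return ys.mean(), xs.mean()

    # Synthetic 100x100 RGB frame: dark background plus a red 20x20 square.
    frame = np.zeros((100, 100, 3), dtype=np.uint8)
    frame[40:60, 70:90] = (200, 30, 30)

    mask = color_mask(frame, lower=(150, 0, 0), upper=(255, 80, 80))
    cy, cx = centroid(mask)   # lands at the square's center, (49.5, 79.5)
    ```

    In a real pipeline the offset between (cy, cx) and the image center would drive the camera's pan/tilt; OpenCV's `cv2.inRange` plays the role of `color_mask` here, typically in HSV space for robustness to lighting.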

    Linux user interface using camera

    Get PDF
    The goal of this project was to create a fully functional program coded in C++, capable of real-time object detection and mouse-cursor positioning in the Linux operating system. Object detection is based on recognizing a desired color and shape from webcam input; in this case it was a red circle. The main part of the source code was generated via the Harpia application, which was created specifically for object tracking, edge detection and image processing. Most of the functions used belong to the OpenCV library, which, like Harpia, was created for computer vision and so offers many functions suited to the purposes of this program. This document covers edge detection, color filtering, image filtering and noise-reduction (smoothing) filters. The program fulfills its assignment: based on the detected object's position in the image, it controls the movement of the mouse cursor.

    Vision-based Real-Time Aerial Object Localization and Tracking for UAV Sensing System

    Get PDF
    The paper focuses on the problem of vision-based obstacle detection and tracking for unmanned aerial vehicle navigation. A real-time object localization and tracking strategy for monocular image sequences is developed by effectively integrating object detection and tracking into a dynamic Kalman model. At the detection stage, the object of interest is automatically detected and localized from a saliency map computed via the image background connectivity cue at each frame; at the tracking stage, a Kalman filter provides a coarse prediction of the object state, which is further refined via a local detector incorporating the saliency map and the temporal information between two consecutive frames. Compared to existing methods, the proposed approach requires no manual initialization for tracking, runs much faster than state-of-the-art trackers of its kind, and achieves competitive tracking performance on a large number of image sequences. Extensive experiments demonstrate the effectiveness and superior performance of the proposed approach. Comment: 8 pages, 7 figures
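    The predict/refine loop the abstract describes rests on a standard Kalman filter; a minimal numpy sketch with a constant-velocity model for a 2-D object center (the matrices below are the textbook defaults, not the paper's actual parameters):

    ```python
    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0],      # state transition: x += vx*dt, y += vy*dt
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],       # only position is observed, not velocity
                  [0, 1, 0, 0]], dtype=float)
    Q = np.eye(4) * 1e-2              # process noise covariance
    R = np.eye(2) * 1e-1              # measurement noise covariance

    def predict(x, P):
        """Coarse prediction of the object state from the motion model."""
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z):
        """Refine the prediction with a measurement z (e.g. from a detector)."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P

    # Track an object moving +1 px/frame along both axes.
    x, P = np.zeros(4), np.eye(4)
    for t in range(1, 20):
        x, P = predict(x, P)
        x, P = update(x, P, np.array([t, t], dtype=float))
    # After a few frames the velocity estimate approaches (1, 1).
    ```

    In the paper's setting, `z` would come from the saliency-based local detector at each frame, so the tracker never needs manual initialization.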

    Online Feature Selection for Visual Tracking

    Get PDF
    Object tracking is one of the most important tasks in many applications of computer vision. Many tracking methods use a fixed set of features, ignoring that the appearance of a target object may change drastically due to intrinsic and extrinsic factors. The ability to dynamically identify discriminative features would help handle this appearance variability and improve tracking performance. The contribution of this work is threefold. Firstly, this paper presents a collection of several modern feature selection approaches chosen among filter, embedded, and wrapper methods. Secondly, we provide extensive tests on the classification task, intended to explore the strengths and weaknesses of the proposed methods and to identify the right candidates for online tracking. Finally, we show how feature selection mechanisms can be successfully employed to rank the features used by a tracking system while maintaining high frame rates; in particular, feature selection mounted on the Adaptive Color Tracking (ACT) system operates at over 110 FPS. This work demonstrates the importance of feature selection in online and real-time applications: our solutions improve the baseline ACT by 3% up to 7% while providing superior results compared to 29 state-of-the-art tracking methods.
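    The paper's filter-method family ranks features by a per-feature score without training a model; a minimal numpy sketch using a Fisher-style ratio between target and background samples (the score and data are illustrative, not the paper's actual selectors):

    ```python
    import numpy as np

    def fisher_scores(X_pos, X_neg):
        """Filter-style ranking score: between-class separation over
        within-class spread, computed independently per feature."""
        mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
        var_p, var_n = X_pos.var(axis=0), X_neg.var(axis=0)
        return (mu_p - mu_n) ** 2 / (var_p + var_n + 1e-12)

    rng = np.random.default_rng(0)
    # Three features: only feature 0 separates target from background.
    X_pos = rng.normal(loc=[5.0, 0.0, 0.0], scale=1.0, size=(200, 3))  # target
    X_neg = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(200, 3))  # background

    scores = fisher_scores(X_pos, X_neg)
    ranking = np.argsort(scores)[::-1]    # most discriminative feature first
    # ranking[0] == 0: the separating feature is ranked on top.
    ```

    Because the score is a closed-form per-feature statistic, it can be recomputed every frame from target/background patches, which is what makes filter methods compatible with the 100+ FPS budget the abstract reports.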

    Online Visual Robot Tracking and Identification using Deep LSTM Networks

    Full text link
    Collaborative robots working on a common task are necessary for many applications. One of the challenges for achieving collaboration in a team of robots is mutual tracking and identification. We present a novel pipeline for online vision-based detection, tracking and identification of robots with a known and identical appearance. Our method runs in real time on the limited hardware of the observer robot. Unlike previous works addressing robot tracking and identification, we use a data-driven approach based on recurrent neural networks to learn relations between sequential inputs and outputs. We formulate the data association problem as multiple classification problems. A deep LSTM network was trained on a simulated dataset and fine-tuned on a small set of real data. Experiments on two challenging datasets, one synthetic and one real, which include long-term occlusions, show promising results. Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, Canada, 2017. IROS RoboCup Best Paper Award
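    The paper's network details are not given here; as a minimal numpy sketch of the LSTM recurrence such a pipeline builds on, consuming a sequence of per-frame detection vectors and producing a hidden state that a per-robot classifier head could read (all shapes and initializations are illustrative):

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h, c, W, b):
        """One LSTM step. Gates are computed jointly from [x, h]:
        W has shape (4*hidden, input+hidden), b has shape (4*hidden,)."""
        n = h.shape[0]
        z = W @ np.concatenate([x, h]) + b
        i = sigmoid(z[0 * n:1 * n])       # input gate
        f = sigmoid(z[1 * n:2 * n])       # forget gate
        o = sigmoid(z[2 * n:3 * n])       # output gate
        g = np.tanh(z[3 * n:4 * n])       # candidate cell state
        c = f * c + i * g                 # new cell state
        h = o * np.tanh(c)                # new hidden state, |h| < 1
        return h, c

    rng = np.random.default_rng(0)
    n_in, n_hid = 4, 8                    # e.g. one (x, y, w, h) box per frame
    W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
    b = np.zeros(4 * n_hid)

    h, c = np.zeros(n_hid), np.zeros(n_hid)
    for x in rng.normal(size=(10, n_in)):     # a 10-frame detection sequence
        h, c = lstm_step(x, h, c, W, b)
    # h now summarizes the sequence for a downstream identity classifier.
    ```

    Casting data association as classification, as the abstract describes, would mean attaching a softmax head per tracked robot on top of such a recurrent state; in practice a framework LSTM (PyTorch/TensorFlow) replaces this hand-rolled cell.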