11 research outputs found

    Do-It-Yourself Single Camera 3D Pointer Input Device

    Full text link
    We present a new algorithm for single-camera 3D reconstruction, or 3D input for human-computer interfaces, based on precise tracking of an elongated object, such as a pen, that has a pattern of colored bands. To configure the system, the user provides no more than one labelled image of a handmade pointer, measurements of its colored bands, and the camera's pinhole projection matrix. Other systems are of much higher cost and complexity, requiring combinations of multiple cameras, stereo cameras, and pointers with sensors and lights. Instead of relying on information from multiple devices, we examine our single view more closely, integrating geometric and appearance constraints to robustly track the pointer in the presence of occlusion and distractor objects. By probing objects of known geometry with the pointer, we demonstrate acceptable accuracy of 3D localization. (Comment: 8 pages, 6 figures; 2018 15th Conference on Computer and Robot Vision)
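The pinhole projection matrix mentioned above maps 3D points to image pixels. As a minimal sketch of that mapping (the matrix values and the `project` helper are illustrative, not the paper's calibration):

```python
# Sketch of pinhole projection: a 3x4 matrix P maps a homogeneous 3D
# point X to homogeneous image coordinates, x ~ P X.

def project(P, X):
    """Project homogeneous 3D point X = (x, y, z, 1) to pixel (u, v)."""
    row = lambda r: sum(P[r][i] * X[i] for i in range(4))
    x, y, w = row(0), row(1), row(2)
    return (x / w, y / w)   # divide by the homogeneous coordinate

# Toy camera: focal length 500 px, principal point (320, 240),
# identity rotation, zero translation (illustrative values).
P = [[500,   0, 320, 0],
     [  0, 500, 240, 0],
     [  0,   0,   1, 0]]

u, v = project(P, (0.2, -0.1, 2.0, 1.0))
```

A single view only constrains a point to the ray through the camera center and the pixel; the colored-band measurements are what let the paper recover depth along that ray.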

    Discriminating Color Faces For Recognition

    Get PDF

    Coloring local feature extraction

    Get PDF
    Although color is commonly experienced as an indispensable quality in describing the world around us, state-of-the-art local feature-based representations are mostly based on shape description and ignore color information. The description of color is hampered by the large number of variations, which causes the measured color values to vary significantly. In this paper we aim to extend the description of local features with color information. To achieve wide applicability, the color descriptor should be robust to: 1. photometric changes commonly encountered in the real world; 2. varying image quality, from high-quality images to snapshot-quality and compressed internet images. Based on these requirements we derive a set of color descriptors. The proposed descriptors are compared by extensive testing on multiple application areas, namely matching, retrieval, and classification, and on a wide variety of image qualities. The results show that color descriptors remain reliable under photometric and geometrical changes and with decreasing image quality. For all experiments, a combination of color and shape outperforms a pure shape-based approach.
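To illustrate why color can complement shape description, the sketch below computes a coarse hue histogram, one simple member of the family of photometrically robust color descriptors (the `hue_histogram` helper is a hypothetical illustration, not one of the paper's descriptors):

```python
import colorsys

def hue_histogram(pixels, bins=8):
    """Coarse hue histogram: a simple color descriptor that is invariant
    to uniform intensity scaling, since hue is unchanged when R, G, B
    are all scaled by the same factor."""
    hist = [0] * bins
    for r, g, b in pixels:                        # r, g, b in [0, 1]
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        hist[min(int(h * bins), bins - 1)] += 1
    total = sum(hist) or 1
    return [c / total for c in hist]              # normalize to sum to 1

# A bright red, a dark red, and a green pixel: the two reds land in the
# same hue bin despite their intensity difference.
patch = [(0.9, 0.1, 0.1), (0.45, 0.05, 0.05), (0.1, 0.9, 0.1)]
desc = hue_histogram(patch)
```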

    Estudi de la transformació de l'espai de color RGB a l'espai de color HSV (Study of the transformation from the RGB color space to the HSV color space)

    Get PDF
    Classical error-propagation techniques are applied to the transformation from the RGB color space to the HSV color space on a set of 1098 test images. The test set consists of 183 color palettes at six different illumination levels. The results presented indicate how the mean and the variance vary under the transformation. (Preprint)
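The error propagation studied here can also be approximated numerically: perturb an RGB value with sensor-like noise, push the samples through the RGB-to-HSV transformation, and measure the spread. This Monte-Carlo sketch (the function and noise level are illustrative assumptions, not the study's analytical method) reproduces the well-known instability of hue near gray:

```python
import colorsys
import random
import statistics

def propagate_rgb_noise(rgb, sigma=0.01, n=2000, seed=0):
    """Monte-Carlo stand-in for classical error propagation: perturb an
    RGB value with Gaussian noise, convert each sample to HSV, and return
    the standard deviation of each HSV channel."""
    rng = random.Random(seed)
    clamp = lambda x: min(1.0, max(0.0, x))
    samples = []
    for _ in range(n):
        noisy = [clamp(c + rng.gauss(0.0, sigma)) for c in rgb]
        samples.append(colorsys.rgb_to_hsv(*noisy))
    return [statistics.stdev(ch) for ch in zip(*samples)]

# Hue is nearly undefined for gray, so a tiny RGB perturbation produces a
# large hue spread; for a saturated green the hue stays stable.
sd_gray  = propagate_rgb_noise((0.50, 0.50, 0.50))
sd_green = propagate_rgb_noise((0.10, 0.90, 0.10))
```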

    Robust Histogram Construction from Color Invariants For Object Recognition

    Get PDF
    An effective object recognition scheme is to represent and match images on the basis of histograms derived from photometric color invariants. A drawback, however, is that certain color invariant values become very unstable in the presence of sensor noise. To suppress the effect of noise on unstable color invariant values, in this paper histograms are computed by variable kernel density estimators. To apply variable kernel density estimation in a principled way, models are proposed for the propagation of sensor noise through the color invariant variables. As a result, the associated uncertainty is obtained for each color invariant value. This uncertainty is used to derive the parameterization of the variable kernel for the purpose of robust histogram construction. It is empirically verified that the proposed density estimator compares favorably to traditional histogram schemes for the purpose of object recognition.
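The core idea, replacing fixed histogram bins with kernels whose width follows each observation's own uncertainty, can be sketched as follows (the `variable_kde` helper and its parameters are an illustrative toy, not the paper's estimator):

```python
import math

def variable_kde(values, sigmas, grid):
    """Histogram-like density in which each observation contributes a
    Gaussian kernel whose width is that observation's own uncertainty,
    so unstable values are smeared out instead of corrupting one bin."""
    density = []
    for x in grid:
        d = sum(
            math.exp(-0.5 * ((x - v) / s) ** 2) / (s * math.sqrt(2 * math.pi))
            for v, s in zip(values, sigmas)
        )
        density.append(d / len(values))
    return density

# A stable invariant value (small sigma) keeps a sharp peak; an unstable
# one (large sigma) is spread over a wide neighborhood.
values = [0.2, 0.8]
sigmas = [0.02, 0.3]
grid = [i / 100 for i in range(101)]
dens = variable_kde(values, sigmas, grid)
```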

    Analysis of the Performance of HOG and CNNs for Detecting Construction Equipment and Personal Protective Equipment

    Get PDF
    The construction industry remains one of the most dangerous working environments in terms of fatalities and accidents. High numbers of accidents and lost-time injuries lead to a decrease in productivity in this industry. Therefore, new technologies are being developed to improve the safety of construction sites. Object detection on construction sites has a huge impact on the construction industry. Many researchers have studied productivity, safety, and project progress. However, few efforts have been made to improve the robustness of the related datasets for detection purposes. Meanwhile, the lack of a custom dataset leads to low accuracy and also increases the cost and time of training-dataset preparation. In this research, we first investigated the generation of synthetic images from 3D models of construction equipment, namely excavators, loaders, and trucks, to use as training datasets, and then applied a sensitivity analysis. We compared the performance of CNNs and other conventional methods for classifying construction equipment. In the second part, the detection of personal protective equipment for construction workers was studied. For this purpose, several object detection architectures from the TensorFlow object detection model zoo were evaluated to find the best and most robust detection model. The dataset used in this study contains real images from construction sites. The performance of the trained object detectors is measured in terms of mean average precision. The test results from this study showed that (1) synthetic images have a significant effect on the final detection results, and (2) among the object detection architectures compared, Faster_rcnn_resnet101 was the most suitable model in terms of detection accuracy.
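Mean average precision, the metric used above, averages per-class average precision (AP) values. A common non-interpolated form of AP for a single class can be sketched as follows (the function below is an illustrative toy, not the study's evaluation code):

```python
def average_precision(detections, num_gt):
    """Non-interpolated AP for one class: sort detections by descending
    confidence, accumulate precision at each true positive, and divide by
    the number of ground-truth objects. `detections` is a list of
    (score, is_true_positive) pairs."""
    detections = sorted(detections, key=lambda d: -d[0])
    tp = fp = 0
    ap = 0.0
    for score, is_tp in detections:
        if is_tp:
            tp += 1
            ap += tp / (tp + fp)   # precision at this recall step
        else:
            fp += 1
    return ap / num_gt

# Toy run: three ground-truth objects, one false positive ranked second.
dets = [(0.9, True), (0.8, False), (0.7, True), (0.5, True)]
ap = average_precision(dets, num_gt=3)
```

Mean average precision is then the mean of this quantity over all object classes (e.g. excavator, loader, truck).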

    Robust histogram construction from color invariants for object recognition

    No full text

    Sistema de visão para aterragem automática de UAV (Vision system for automatic UAV landing)

    Get PDF
    Dissertation for the degree of Master in Electrical Engineering, Automation and Industrial Electronics branch. Abstract: In this study a vision system for autonomous landing of an existing commercial unmanned aerial vehicle (UAV) named AR4 aboard a ship, based on a single standard RGB digital camera, is proposed. The envisaged application is ground-based automatic landing, where the vision system is located on the ship's deck and is used to estimate the UAV pose (3D position and orientation) during the landing process. Using a vision system located on the ship makes it possible to use a UAV with less processing power, decreasing its size and weight. The proposed method uses a 3D-model-based pose estimation approach that requires the 3D CAD model of the UAV. Pose is estimated using a particle filtering framework. The implemented particle filter is inspired by the evolution strategies present in genetic algorithms, avoiding sample impoverishment. Temporal filtering between frames, an unscented Kalman filter, is also implemented in order to obtain a better pose estimate. Results show that position and angular errors are compatible with automatic landing system requirements. The algorithm is suitable for real-time implementation on standard workstations with graphics processing units. The UAV will operate from the Portuguese Navy fast patrol boats (FPB), which implies the capability of landing on vessels 27 m in length and 5.9 m in breadth, with a small and irregular 5x6 m landing zone located at the boat's stern. The implementation of a completely autonomous system is very important in real scenarios, since these ships have only a small crew and UAV pilots are not usually available. Moreover, a vision-based system is more robust in environments where GPS jamming can occur.
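The resampling idea behind the particle filter, drawing particles in proportion to their weights and then perturbing the copies to preserve diversity, can be sketched in one dimension (the helper, jitter level, and toy weights below are illustrative assumptions, not the thesis implementation):

```python
import random

def resample_with_jitter(particles, weights, jitter=0.05, seed=0):
    """One resampling step of a toy particle filter: draw particles in
    proportion to their weights, then mutate each copy with small
    Gaussian noise, a crude stand-in for the genetic-algorithm-style
    diversity preservation described in the abstract."""
    rng = random.Random(seed)
    total = sum(weights)
    probs = [w / total for w in weights]
    chosen = rng.choices(particles, weights=probs, k=len(particles))
    return [tuple(x + rng.gauss(0.0, jitter) for x in p) for p in chosen]

# 1D pose toy: particles near the (unknown) true value 2.0 carry higher
# weight, so the resampled cloud concentrates around them.
parts = [(0.0,), (1.0,), (2.0,), (3.0,)]
wts = [0.05, 0.15, 0.7, 0.1]
new = resample_with_jitter(parts, wts)
```

Without the jitter, repeated resampling collapses the cloud onto a few copies of the same particle, which is the sample impoverishment the thesis's genetic-algorithm-inspired scheme is designed to avoid.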