
    Quadcopter drone formation control via onboard visual perception

    Quadcopter drone formation control is an important capability for fields such as area surveillance, search and rescue, agriculture, and reconnaissance. Of particular interest is formation control in environments where radio communications and/or GPS are denied or not sufficiently accurate for the desired application. To address this, we focus on vision as the sensing modality. We train an Hourglass Convolutional Neural Network (CNN) to discriminate between quadcopter and non-quadcopter pixels in a live video feed and use it to guide a formation of quadcopters. The CNN outputs "heatmaps": pixel-by-pixel likelihood estimates of the presence of a quadcopter. These heatmaps suffer from short-lived false detections. To mitigate them, we apply a version of the Siamese network technique to consecutive frames for clutter mitigation and to promote temporal smoothness in the heatmaps. The heatmaps yield an estimate of the range and bearing to the other quadcopter(s), which we use to calculate flight control commands and maintain the desired formation. We implement the algorithm on a single-board computer (ODROID XU4) with a standard webcam mounted on a quadcopter drone. Flight tests in a motion capture volume demonstrate successful formation control with two quadcopters in a leader-follower setup.
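
    The abstract omits implementation details, but its last step (turning a heatmap into range, bearing, and flight commands) can be sketched. The Python below is illustrative only: the detection threshold, field of view, reference area, and controller gains (fov_deg, ref_area_px, k_yaw, k_fwd) are assumptions, not values from the paper.

        import numpy as np

        def heatmap_to_range_bearing(heatmap, fov_deg=60.0, ref_area_px=900.0):
            # Threshold the pixel-wise likelihoods into a detection mask.
            mask = heatmap > 0.5
            if not mask.any():
                return None  # no quadcopter detected in this frame
            ys, xs = np.nonzero(mask)
            h, w = heatmap.shape
            # Bearing: horizontal offset of the detection centroid from the
            # image center, scaled by half the horizontal field of view.
            bearing_deg = (xs.mean() - w / 2) / (w / 2) * (fov_deg / 2)
            # Range proxy: apparent size shrinks with distance, so compare
            # the detected area against a reference area at unit range.
            range_est = np.sqrt(ref_area_px / mask.sum())
            return bearing_deg, range_est

        def follower_command(bearing_deg, range_est, desired_range=1.0,
                             k_yaw=0.02, k_fwd=0.5):
            # Proportional control: turn toward the leader and close
            # (or open) the gap to the desired standoff range.
            return k_yaw * bearing_deg, k_fwd * (range_est - desired_range)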

    Fully-Automated Packaging Structure Recognition of Standardized Logistics Assets on Images

    Within a logistics supply chain, a wide variety of transport goods must be handled, re-identified, and checked at numerous nodes. Considerable manual effort is often required to recognize or verify package identity or packaging structure. Such steps are necessary, for example, to check a delivery for completeness. We investigate the design and implementation of a method for fully automating the recognition of the packaging structure of logistics shipments. The goal of this method is, based on a single color image, to accurately localize one or more transport units and to recognize relevant characteristics such as the total number and the arrangement of the contained packages. We present an image-processing pipeline composed of several components that is designed to solve this packaging structure recognition task. Our first implementation of the method uses several deep learning models, specifically convolutional neural networks for instance segmentation, as well as image-processing methods and heuristic components. We use our own dataset of real images from a logistics environment for training and evaluating our method. We show that our solution recognizes the correct packaging structure in about 85% of the test cases in our dataset, and that even higher accuracy is achieved when only the most common package types are considered. For one selected image-recognition component of our algorithm, we compare the potential of less computationally intensive, custom-designed image-processing methods with the previously implemented deep learning methods. From this investigation we conclude that the learning-based methods are better suited, which we attribute to their very good generalization ability. Furthermore, we formulate the problem of object localization in images in terms of self-chosen keypoints, such as the corner points of logistics transport units. The goal is to localize objects more precisely than is possible with conventional bounding rectangles, while at the same time enforcing the object shape through known prior knowledge of the object geometry. We present a specific deep learning model that solves the described task for objects that can be described by four corner points. The resulting model, named TetraPackNet, is evaluated using both general and application-specific metrics. We demonstrate the applicability of the solution in our image-recognition pipeline and argue its relevance for other use cases, such as license plate recognition.
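
    TetraPackNet itself is not reproduced here, but the four-corner output representation it builds on can be illustrated. A minimal sketch, assuming a common corner-ordering convention; the sample coordinates are made up and serve only to show why a quadrilateral localizes more tightly than its axis-aligned bounding rectangle.

        import numpy as np

        def order_corners(pts):
            # Order four points as [top-left, top-right, bottom-right,
            # bottom-left] using the classic sum/difference heuristic.
            pts = np.asarray(pts, dtype=float)
            s = pts.sum(axis=1)        # x + y: smallest at top-left
            d = pts[:, 1] - pts[:, 0]  # y - x: smallest at top-right
            return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                             pts[np.argmax(s)], pts[np.argmax(d)]])

        def quad_area(quad):
            # Shoelace formula for the area of a simple polygon.
            x, y = quad[:, 0], quad[:, 1]
            return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

        corners = [(300, 70), (100, 190), (120, 40), (280, 220)]  # any order
        quad = order_corners(corners)
        bbox_area = np.ptp(quad[:, 0]) * np.ptp(quad[:, 1])
        print(quad_area(quad) / bbox_area)  # the quad covers ~77% of its box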

    Extracting structured information from 2D images

    Convolutional neural networks can handle an impressive array of supervised learning tasks while relying on a single backbone architecture, suggesting that one solution fits all vision problems. But for many tasks, we can directly exploit the problem structure within neural networks to deliver more accurate predictions. In this thesis, we propose novel deep learning components that exploit the structured output space of an increasingly complex set of problems. We start from Optical Character Recognition (OCR) in natural scenes and leverage the constraints imposed by the spatial outline of letters and the requirements of language. Conventional OCR systems do not work well in natural scenes due to distortions, blur, and letter variability. We introduce a new attention-based model, equipped with extra information about neuron positions, that guides its focus across characters sequentially; it beats the previous state of the art by a significant margin. We then turn to dense labeling tasks employing encoder-decoder architectures. We start with an experimental study that documents the drastic impact decoder design can have on task performance. Rather than optimizing one decoder per task separately, we propose new robust layers for upsampling high-dimensional encodings and show that they better suit the structured per-pixel output across all tasks. Finally, we turn to the problem of urban scene understanding. There is elaborate structure in both the input space (multi-view recordings, aerial and street-view scenes) and the output space (multiple fine-grained attributes for holistic building understanding). We design new models that benefit from the relatively simple, cuboid-like geometry of buildings to create a single unified representation from multiple views. To benchmark our model, we build a new large-scale multi-view dataset of building images and fine-grained attributes and show systematic improvements over a broad range of strong CNN-based baselines.
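
    The thesis's exact decoder layers are not described in this abstract; as a sketch of the general idea (upsampling a high-dimensional encoding back to per-pixel predictions), the following PyTorch block uses one common, robust design. All channel counts, stage counts, and the class count are illustrative assumptions.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class UpsampleBlock(nn.Module):
            # One decoder stage: 2x bilinear upsampling followed by a
            # convolution to refine the upsampled encoding. Bilinear
            # upsampling avoids the checkerboard artifacts that transposed
            # convolutions can leave in per-pixel predictions.
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, 3, padding=1),
                    nn.BatchNorm2d(out_ch),
                    nn.ReLU(inplace=True),
                )

            def forward(self, x):
                x = F.interpolate(x, scale_factor=2, mode="bilinear",
                                  align_corners=False)
                return self.conv(x)

        # A 512-channel encoding at 1/16 resolution -> per-pixel scores.
        decoder = nn.Sequential(
            UpsampleBlock(512, 256), UpsampleBlock(256, 128),
            UpsampleBlock(128, 64), UpsampleBlock(64, 32),
            nn.Conv2d(32, 21, 1),  # e.g. 21 semantic classes
        )
        logits = decoder(torch.randn(1, 512, 14, 14))  # (1, 21, 224, 224)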

    Fast facial landmark detection and applications: a survey of the literature

    Dense facial landmark detection is one of the key elements of the face processing pipeline. It is used in virtual face reenactment, emotion recognition, driver status tracking, and similar applications. Early approaches were suitable for facial landmark detection only in controlled environments, which is clearly insufficient. Neural networks have shown an astonishing qualitative improvement on the in-the-wild facial landmark detection problem and are now being studied by many researchers in the field. Numerous bright ideas have been proposed, often complementary to each other. However, exploring the whole volume of novel approaches is quite challenging. We therefore present this survey, in which we summarize state-of-the-art algorithms into categories and compare recently introduced in-the-wild datasets (e.g., 300W, AFLW, COFW, WFLW) that contain images with large poses and face occlusion, taken in unconstrained conditions. In addition to quality, applications require fast inference, preferably on mobile devices. Hence, we include information about algorithm inference speed on both desktop and mobile hardware, which is rarely studied. Importantly, we highlight the problems of these algorithms, their applications and vulnerabilities, and briefly touch on established methods. We hope that the reader will find many novel ideas, and will see how the algorithms are used in applications, enabling further research.
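
    The survey's measurement protocol is not given here; as a generic illustration of how per-frame inference speed is typically measured, the following Python timing harness averages forward passes after a warmup. The input shape, warmup count, and iteration count are arbitrary assumptions.

        import time
        import torch

        @torch.no_grad()
        def latency_ms(model, input_shape=(1, 3, 256, 256),
                       warmup=10, iters=100):
            # Average single-image CPU latency in milliseconds.
            model.eval()
            x = torch.randn(*input_shape)
            for _ in range(warmup):   # warm up lazy initialization/caches
                model(x)
            start = time.perf_counter()
            for _ in range(iters):
                model(x)
            return (time.perf_counter() - start) / iters * 1e3

        # Example: latency_ms(torchvision.models.mobilenet_v2())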

    Learning Equivariant Representations

    State-of-the-art deep learning systems often require large amounts of data and computation. For this reason, leveraging known or unknown structure in the data is paramount. Convolutional neural networks (CNNs) are a successful example of this principle, their defining characteristic being shift-equivariance: because a filter slides over the input, when the input shifts, the response shifts by the same amount. This exploits the structure of natural images, where semantic content is independent of absolute pixel position, and is essential to the success of CNNs in audio, image, and video recognition tasks. In this thesis, we extend equivariance to other kinds of transformations, such as rotation and scaling. We propose equivariant models for different transformations defined by groups of symmetries. The main contributions are (i) polar transformer networks, achieving equivariance to the group of similarities on the plane; (ii) equivariant multi-view networks, achieving equivariance to the group of symmetries of the icosahedron; (iii) spherical CNNs, achieving equivariance to the continuous 3D rotation group; (iv) cross-domain image embeddings, achieving equivariance to 3D rotations for 2D inputs; and (v) spin-weighted spherical CNNs, generalizing the spherical CNNs and achieving equivariance to 3D rotations for spherical vector fields. Applications include image classification, 3D shape classification and retrieval, panoramic image classification and segmentation, shape alignment, and pose estimation. What these models have in common is that they leverage symmetries in the data to reduce sample and model complexity and improve generalization. The advantages are most significant on (but not limited to) challenging tasks where data is limited or input perturbations such as arbitrary rotations are present.
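
    The defining property can be checked numerically. A small sketch: with wrap-around boundaries (so that circular shifts commute exactly with filtering), filtering a shifted image equals shifting the filtered image. All values are synthetic.

        import numpy as np
        from scipy.signal import correlate2d

        rng = np.random.default_rng(0)
        image = rng.random((16, 16))
        kernel = rng.random((3, 3))

        def filt(x):
            # Cross-correlation (what CNN "convolution" layers compute)
            # with wrap-around boundaries to avoid edge effects.
            return correlate2d(x, kernel, mode="same", boundary="wrap")

        dy, dx = 3, 5
        shift_then_filter = filt(np.roll(image, (dy, dx), axis=(0, 1)))
        filter_then_shift = np.roll(filt(image), (dy, dx), axis=(0, 1))
        assert np.allclose(shift_then_filter, filter_then_shift)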

    Note Taking in the Digital Age – Towards a Ubiquitous Pen Interface

    The cultural technique of writing has helped humans express, communicate, think, and memorize throughout history. With the advent of human-computer interfaces, pens became popular as command input for digital systems. While current applications allow carrying out complex tasks with digital pens, they lack the ubiquity and directness of pen and paper. This dissertation models the note-taking process in the context of scholarly work, motivated by an understanding of note taking that surpasses the mere storage of knowledge. The results, together with qualitative empirical findings about contemporary scholarly workflows that alternate between the analog and the digital world, inspire a novel pen interface concept. This concept proposes using an ordinary pen and unmodified writing surfaces to interact with digital systems. A technological investigation into how a camera-based system can connect physical ink strokes with digital handwriting processing delivers artificial neural network-based building blocks toward that goal. Using these components, the technological feasibility of in-air pen gestures for command input is explored. A proof-of-concept prototype system reaches real-time performance and demonstrates distributed computing strategies for realizing the interface concept in an end-user setting.
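
    The dissertation's neural-network-based gesture recognizer is not detailed in this abstract. As a stand-in, a classic template-matching scheme for stroke gestures (resample the pen-tip trajectory, normalize it, pick the nearest template) sketches the kind of trajectory processing involved; all names and parameters below are illustrative, not the dissertation's method.

        import numpy as np

        def resample(points, n=32):
            # Resample a pen-tip trajectory to n equally spaced points,
            # then normalize translation and scale.
            pts = np.asarray(points, dtype=float)
            seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
            cum = np.concatenate([[0.0], np.cumsum(seg)])
            t = np.linspace(0.0, cum[-1], n)
            out = np.stack([np.interp(t, cum, pts[:, 0]),
                            np.interp(t, cum, pts[:, 1])], axis=1)
            out -= out.mean(axis=0)
            return out / (np.abs(out).max() + 1e-9)

        def classify(trajectory, templates):
            # Nearest-template matching over normalized trajectories;
            # templates maps gesture names to pre-resampled paths.
            q = resample(trajectory)
            return min(templates, key=lambda k: np.linalg.norm(q - templates[k]))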

    Artificial Intelligence in the Creative Industries: A Review

    This paper reviews the current state of the art in Artificial Intelligence (AI) technologies and applications in the context of the creative industries. A brief background of AI, and specifically Machine Learning (ML) algorithms, is provided, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), Recurrent Neural Networks (RNNs), and Deep Reinforcement Learning (DRL). We categorise creative applications into five groups related to how AI technologies are used: i) content creation, ii) information analysis, iii) content enhancement and post-production workflows, iv) information extraction and enhancement, and v) data compression. We critically examine the successes and limitations of this rapidly advancing technology in each of these areas. We further differentiate between the use of AI as a creative tool and its potential as a creator in its own right. We foresee that, in the near future, machine learning-based AI will be adopted widely as a tool or collaborative assistant for creativity. In contrast, we observe that the successes of machine learning in domains with fewer constraints, where AI is the 'creator', remain modest. The potential of AI (or its developers) to win awards for its original creations in competition with human creatives is also limited, based on contemporary technologies. We therefore conclude that, in the context of the creative industries, maximum benefit from AI will be derived where its focus is human-centric: where it is designed to augment, rather than replace, human creativity.