
    On realistic target coverage by autonomous drones

Low-cost mini-drones with advanced sensing and maneuverability enable a new class of intelligent sensing systems. To achieve the full potential of such drones, it is necessary to develop new enhanced formulations of both common and emerging sensing scenarios. Namely, several fundamental challenges in visual sensing are yet to be solved, including (1) fitting sizable targets in camera frames; (2) positioning cameras at effective viewpoints matching target poses; and (3) accounting for occlusion by elements in the environment, including other targets. In this article, we introduce Argus, an autonomous system that utilizes drones to collect target information incrementally through a two-tier architecture. To tackle the stated challenges, Argus employs a novel geometric model that captures both target shapes and coverage constraints. Recognizing drones as the scarcest resource, Argus aims to minimize the number of drones required to cover a set of targets. We prove this problem is NP-hard, and even hard to approximate, before deriving a best-possible approximation algorithm along with a competitive sampling heuristic that runs up to 100× faster according to large-scale simulations. To test Argus in action, we demonstrate and analyze its performance on a prototype implementation. Finally, we present a number of extensions to accommodate more application requirements and highlight some open problems.
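The abstract does not spell out the approximation algorithm itself; as a point of reference, a generic greedy set-cover style heuristic for the minimum-drone coverage problem, assuming a precomputed visibility relation (the names `candidate_viewpoints` and `covers` are illustrative, not Argus's API), might look like:

```python
def greedy_drone_placement(targets, candidate_viewpoints, covers):
    """Greedy set-cover heuristic: repeatedly pick the viewpoint that covers
    the most still-uncovered targets, until every target is covered.

    targets: collection of target ids
    candidate_viewpoints: list of candidate drone positions/poses
    covers: function (viewpoint) -> set of target ids it covers
    """
    uncovered = set(targets)
    chosen = []
    while uncovered:
        best = max(candidate_viewpoints, key=lambda v: len(covers(v) & uncovered))
        gain = covers(best) & uncovered
        if not gain:  # some targets cannot be covered by any candidate viewpoint
            raise ValueError("uncoverable targets remain")
        chosen.append(best)
        uncovered -= gain
    return chosen
```

Greedy selection gives the classical logarithmic guarantee for set cover; whether Argus's best-possible approximation takes this form is not stated in the abstract.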

    The floodlight problem

Given three angles summing to 2π, n points in the plane, and a tripartition k1 + k2 + k3 = n, we can tripartition the plane into three wedges of the given angles so that the i-th wedge contains ki of the points. This new result on dissecting point sets is used to prove that lights of specified angles, each not exceeding π, can be placed at n fixed points in the plane to illuminate the entire plane if and only if the angles sum to at least 2π. We give O(n log n) algorithms for both of these problems.
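Stated with the missing symbols made explicit (a LaTeX restatement reconstructed from the abstract, in which the π characters appear to have been lost during extraction):

```latex
% Wedge tripartition of a point set
Given angles $\alpha_1, \alpha_2, \alpha_3 > 0$ with $\alpha_1 + \alpha_2 + \alpha_3 = 2\pi$,
$n$ points in the plane, and integers $k_1 + k_2 + k_3 = n$, the plane can be partitioned into
three wedges with apex angles $\alpha_1, \alpha_2, \alpha_3$ such that the $i$-th wedge
contains exactly $k_i$ of the points.

% Floodlight illumination
Lights of angles $\theta_1, \dots, \theta_n \le \pi$ placed at $n$ fixed points in the plane
can illuminate the entire plane if and only if $\sum_{i=1}^{n} \theta_i \ge 2\pi$.
```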

    Why Don't You Clean Your Glasses? Perception Attacks with Dynamic Optical Perturbations

Camera-based autonomous systems that emulate human perception are increasingly being integrated into safety-critical platforms. Consequently, an established body of literature has emerged that explores adversarial attacks targeting the underlying machine learning models. Adapting adversarial attacks to the physical world is desirable for the attacker, as this removes the need to compromise digital systems. However, the real world poses challenges related to the "survivability" of adversarial manipulations given environmental noise in perception pipelines and the dynamicity of autonomous systems. In this paper, we take a sensor-first approach. We present EvilEye, a man-in-the-middle perception attack that leverages transparent displays to generate dynamic physical adversarial examples. EvilEye exploits the camera's optics to induce misclassifications under a variety of illumination conditions. To generate dynamic perturbations, we formalize the projection of a digital attack into the physical domain by modeling the transformation function of the captured image through the optical pipeline. Our extensive experiments show that EvilEye's generated adversarial perturbations are much more robust across varying environmental light conditions relative to existing physical perturbation frameworks, achieving a high attack success rate (ASR) while bypassing state-of-the-art physical adversarial detection frameworks. We demonstrate that the dynamic nature of EvilEye enables attackers to adapt adversarial examples across a variety of objects with a significantly higher ASR compared to state-of-the-art physical world attack frameworks. Finally, we discuss mitigation strategies against the EvilEye attack.
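The abstract formalizes how a digital perturbation is pushed through the camera's optical pipeline. A minimal sketch of such a transformation model, assuming a simple composition of display gamma, alpha blending with the scene, camera gain, and sensor noise (all constants and names below are illustrative assumptions, not EvilEye's calibrated pipeline):

```python
import numpy as np

def captured_image(scene, perturbation, alpha=0.35, display_gamma=2.2,
                   camera_gain=1.0, noise_std=0.01):
    """Toy model of a scene viewed through a transparent display carrying an
    adversarial pattern, as seen by a camera.

    scene, perturbation: float arrays in [0, 1] with the same shape.
    alpha, display_gamma, camera_gain, noise_std: assumed constants, not
    measured optical parameters.
    """
    emitted = perturbation ** display_gamma           # display electro-optical transfer
    blended = (1 - alpha) * scene + alpha * emitted   # transparent-display blending
    sensed = camera_gain * blended                    # camera exposure/gain
    sensed = sensed + np.random.normal(0.0, noise_std, sensed.shape)  # sensor noise
    return np.clip(sensed, 0.0, 1.0)
```

An attacker would then optimize the perturbation through such a (differentiable, noise-averaged) model so that the captured image is misclassified; the paper's contribution lies in calibrating the transform to real optics and illumination.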

    The design and development of woven textile solar panels

Over the past few years, alternative power supplies to either supplement or replace batteries for electronic textile and wearable applications have been sought, with the development of wearable solar energy harvesting systems gaining significant interest. In a previous publication the authors reported a novel concept to craft a yarn capable of harvesting solar energy by embedding miniature solar cells within the fibers of a yarn (solar electronic yarns). The aim of this publication is to report the development of a large-area textile solar panel. This study first characterized the solar electronic yarns, and then analyzed the solar electronic yarns once woven into double cloth woven textiles; as part of this study, the effect of different numbers of covering warp yarns on the performance of the embedded solar cells was explored. Finally, a larger woven textile solar panel (510 mm × 270 mm) was constructed and tested under different light intensities. It was observed that a maximum power of PMAX = 335.3 ± 22.4 mW could be harvested on a sunny day (under 99,000 lux lighting conditions).
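The reported PMAX is the maximum power point of the panel's current–voltage sweep. A minimal sketch of extracting it from measured data (variable names and sweep values are illustrative, not the authors' instrumentation):

```python
def max_power_point(voltages_v, currents_ma):
    """Return (P_max in mW, V at P_max, I at P_max) from an I-V sweep.
    Voltages in volts, currents in milliamps, so V * mA gives mW."""
    powers_mw = [v * i for v, i in zip(voltages_v, currents_ma)]
    idx = max(range(len(powers_mw)), key=powers_mw.__getitem__)
    return powers_mw[idx], voltages_v[idx], currents_ma[idx]

# illustrative use with made-up sweep values
p_max, v_mp, i_mp = max_power_point([0.0, 1.5, 3.0, 4.2, 5.0],
                                    [95.0, 92.0, 85.0, 60.0, 5.0])
```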

    Visual Tracking and Motion Estimation for an On-orbit Servicing of a Satellite

This thesis addresses visual tracking of a non-cooperative as well as a partially cooperative satellite, to enable close-range rendezvous between a servicer and a target satellite. Visual tracking and estimation of relative motion between a servicer and a target satellite are critical abilities for rendezvous and proximity operations such as repair and deorbiting. For this purpose, Lidar has been widely employed in cooperative rendezvous and docking missions. Despite its robustness to harsh space illumination, Lidar has high weight and rotating parts and consumes more power, thus undermining the stringent requirements of satellite design. On the other hand, inexpensive on-board cameras can provide an effective solution, working at a wide range of distances. However, space lighting conditions are particularly challenging for image-based tracking algorithms because of direct sunlight exposure and the glossy surface of the satellite, which creates strong reflections and image saturation and leads to difficulties in tracking. In order to address these difficulties, the relevant literature is examined in the fields of computer vision, and satellite rendezvous and docking. Two classes of problems are identified, and relevant solutions, implemented on a standard computer, are provided. Firstly, in the absence of a geometric model of the satellite, the thesis presents a robust feature-based method with prediction capability in case of insufficient features, relying on a point-wise motion model. Secondly, we employ a robust model-based hierarchical position localization method to handle the change of image features over a range of distances, and to localize an attitude-controlled (partially cooperative) satellite. Moreover, the thesis presents a pose tracking method addressing ambiguities in edge-matching, and a pose detection algorithm based on appearance model learning. For the validation of the methods, real camera images and ground truth data, generated with a laboratory test bed emulating space conditions, are used. The experimental results indicate that camera-based methods provide robust and accurate tracking for the approach of malfunctioning satellites in spite of the difficulties associated with specularities and direct sunlight. Exceptional lighting conditions associated with the sun angle are also discussed, with the aim of achieving a fully reliable localization system for a given mission.
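As an illustration of the feature-based relative-motion idea (not the thesis's specific prediction model or model-based tracker), a minimal two-frame pose-estimation sketch using ORB features and the essential matrix in OpenCV might look like:

```python
import cv2
import numpy as np

def relative_pose(prev_img, curr_img, K):
    """Estimate the relative rotation R and translation direction t between two
    grayscale frames using ORB feature matching and the essential matrix.
    K: 3x3 camera intrinsic matrix."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(curr_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # t is a unit-norm direction; scale is unobservable from two views
```

Such a pipeline is vulnerable to exactly the failure modes the thesis targets (specular highlights, saturation, feature dropout), which is why the prediction and model-based stages are introduced.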

    Powder Bed Surface Quality and Particle Size Distribution for Metal Additive Manufacturing and Comparison with Discrete Element Model

Metal additive manufacturing (AM) can produce complex parts that were once considered impossible or too costly to fabricate using conventional machining techniques, making AM machines an exceptional tool for rapid prototyping, one-off parts, and labor-intensive geometries. Due to the growing popularity of this technology, especially in the defense and medical industries, more researchers are looking into the physics and mechanics behind the AM process. Many factors and parameters contribute to the overall quality of a part, one of them being the powder bed itself. So far, little investigation has been dedicated to the behavior of the powder in the powder bed during the lasering process. A powder spreading machine that simulates the powder bed fusion process without the laser was designed by Lawrence Livermore National Laboratory and was built as a platform to observe powder characteristics. The focus of this project was surface roughness and particle size distribution (PSD), and how dose rate and coating speed affect them. Images of the 316L stainless steel powder on the spreading device at multiple layers were taken, then processed and analyzed in MATLAB to assess the surface quality of each region. Powder from nine regions of the build plate was also sampled and counted to determine the regional particle size distribution. As a comparison, a discrete element simulation was developed to mimic the adhesive behavior of the powder and to observe how the powder distributes when spread.
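A compact sketch of the two analysis steps described above, written in Python rather than the authors' MATLAB pipeline (function names and the simple roughness definition are illustrative assumptions):

```python
import numpy as np

def mean_roughness_sa(height_map):
    """Arithmetic mean surface roughness Sa: mean absolute deviation of the
    surface heights from their mean level over the imaged region."""
    h = np.asarray(height_map, dtype=float)
    return np.mean(np.abs(h - h.mean()))

def particle_size_distribution(diameters_um, bins=20):
    """Histogram of sampled particle diameters (in micrometres), returned as
    number fractions per bin together with the bin edges."""
    counts, edges = np.histogram(diameters_um, bins=bins)
    return counts / counts.sum(), edges
```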