18 research outputs found

    Learning Privacy Preserving Encodings through Adversarial Training

    We present a framework to learn privacy-preserving encodings of images that inhibit inference of chosen private attributes, while allowing recovery of other desirable information. Rather than simply inhibiting a given fixed pre-trained estimator, our goal is that an estimator be unable to learn to accurately predict the private attributes even with knowledge of the encoding function. We use a natural adversarial optimization-based formulation for this: training the encoding function against a classifier for the private attribute, with both modeled as deep neural networks. The key contribution of our work is a stable and convergent optimization approach that succeeds at learning an encoder with our desired properties: it maintains utility while inhibiting inference of private attributes, not just within the adversarial optimization, but also by classifiers that are trained after the encoder is fixed. We adopt a rigorous experimental protocol for verification, wherein classifiers are trained exhaustively until saturation on the fixed encoders. We evaluate our approach on tasks of real-world complexity, learning high-dimensional encodings that inhibit detection of different scene categories, and find that it yields encoders that are resilient at maintaining privacy. (Comment: To appear in WACV 2019.)
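    The adversarial formulation described above can be sketched as an alternating optimization: an attribute classifier is repeatedly trained to predict the private attribute from the current encodings, while the encoder is updated to preserve a utility task and to maximize that classifier's loss. Below is a minimal illustrative sketch assuming PyTorch; the architectures, loss weights, and update schedule are assumptions for exposition, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

# Encoder, private-attribute adversary, and utility-task head (illustrative sizes).
enc  = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
adv  = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
util = nn.Sequential(nn.Linear(64, 10))

opt_enc = torch.optim.Adam(list(enc.parameters()) + list(util.parameters()), lr=1e-4)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-4)
xent = nn.CrossEntropyLoss()

def train_step(x, y_private, y_util, k_adv=5, lam=1.0):
    # 1) Train the adversary against the *frozen* encoding for several steps,
    #    so it stays a strong predictor of the private attribute.
    for _ in range(k_adv):
        opt_adv.zero_grad()
        xent(adv(enc(x).detach()), y_private).backward()
        opt_adv.step()
    # 2) Update the encoder (and utility head) to keep the utility task
    #    accurate while maximizing the adversary's loss on the private attribute.
    opt_enc.zero_grad()
    z = enc(x)
    loss = xent(util(z), y_util) - lam * xent(adv(z), y_private)
    loss.backward()
    opt_enc.step()
    return loss.item()
```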

    Design of an Adaptive Lightweight LiDAR to Decouple Robot-Camera Geometry

    A fundamental challenge in robot perception is the coupling of the sensor pose and robot pose. This has led to research in active vision, where the robot pose is changed to reorient the sensor to areas of interest for perception. Further, egomotion effects such as jitter, and external effects such as wind, degrade perception, requiring additional software effort such as image stabilization. This effect is particularly pronounced in micro-air vehicles and micro-robots, which are typically lighter and subject to larger jitter but lack the computational capability to perform stabilization in real time. We present a novel microelectromechanical (MEMS) mirror LiDAR system that changes the field of view of the LiDAR independently of the robot's motion. Our design has the potential for use on small, low-power systems, where the expensive components of the LiDAR can be placed external to the small robot. We show the utility of our approach in simulation and on prototype hardware mounted on a UAV. We believe that this LiDAR and its compact movable scanning design provide mechanisms to decouple robot and sensor geometry, allowing us to simplify robot perception. We also demonstrate examples of motion compensation using IMU and external odometry feedback in hardware. (Comment: This paper is published in IEEE Transactions on Robotics.)
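    The motion-compensation idea lends itself to a simple control sketch: read the platform's orientation from the IMU and counter-steer the MEMS mirror so the LiDAR's viewing direction stays fixed in the world frame. The mirror and IMU interfaces, gains, and limits below are hypothetical placeholders, not the paper's actual driver API.

```python
import time

MIRROR_LIMIT_DEG = 5.0  # assumed mechanical deflection limit of the mirror

def clamp(v, lim):
    return max(-lim, min(lim, v))

def hold_view(imu, mirror, target_pan=0.0, target_tilt=0.0, hz=500.0):
    """Counter-steer the mirror so the beam direction stays fixed in the
    world frame despite platform rotation (small-angle approximation)."""
    while True:
        _roll, pitch, yaw = imu.orientation_deg()   # hypothetical IMU read
        # A mirror deflection of theta steers the optical beam by 2*theta,
        # hence the factor of 0.5 on the body rotation needed to cancel it.
        pan  = clamp(target_pan  - 0.5 * yaw,   MIRROR_LIMIT_DEG)
        tilt = clamp(target_tilt - 0.5 * pitch, MIRROR_LIMIT_DEG)
        mirror.set_angles_deg(pan, tilt)            # hypothetical driver call
        time.sleep(1.0 / hz)
```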

    Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking

    The University of Florida is taking a multidisciplinary approach to fusing data from 3D vision sensors and radiological sensors, in hopes of creating a system capable of not only detecting the presence of a radiological threat but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate's inverse dependence on the square of the source-detector distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to each detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
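    The inverse-square relationship the calibration exploits can be written as R ≈ A / d² + b, where R is the count rate, d the source-detector distance, A a source-strength constant, and b a background term (the background term and all data below are illustrative assumptions, not the paper's calibration). The short sketch fits this model from vision-measured distances and then inverts it to estimate distance from a measured rate.

```python
import numpy as np

def fit_inverse_square(distances_m, count_rates_cps):
    """Least-squares fit of the model R = A / d^2 + b."""
    X = np.column_stack([1.0 / np.asarray(distances_m) ** 2,
                         np.ones(len(distances_m))])
    (A, b), *_ = np.linalg.lstsq(X, np.asarray(count_rates_cps), rcond=None)
    return A, b

def distance_from_rate(rate_cps, A, b):
    """Invert the fitted model to estimate source-detector distance (m)."""
    return np.sqrt(A / max(rate_cps - b, 1e-9))

# Example calibration points: (distance in meters, rate in counts/s).
A, b = fit_inverse_square([1.0, 2.0, 4.0], [410.0, 110.0, 35.0])
print(distance_from_rate(200.0, A, b))  # ~1.45 m for this synthetic data
```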

    Pre-Capture Privacy for Small Vision Sensors

    No full text available.