20 research outputs found

    Estimating Residential Solar Potential Using Aerial Data

    Project Sunroof estimates the solar potential of residential buildings using high-quality aerial data. That is, it estimates the potential solar energy (and associated financial savings) that could be captured by buildings if solar panels were installed on their roofs. Unfortunately, its coverage is limited by the lack of high-resolution digital surface map (DSM) data. We present a deep learning approach that bridges this gap by enhancing widely available low-resolution data, thereby dramatically increasing the coverage of Sunroof. We also present some ongoing efforts to potentially improve accuracy even further by replacing certain algorithmic components of the Sunroof processing pipeline with deep learning.

    Lower bounds for the complexity of the Hausdorff distance

    We describe new lower bounds for the complexity of the directed Hausdorff distance under translation and rigid motion. We exhibit lower bound constructions of Ω(n³) for point sets under translation, for the L₁, L₂ and L∞ norms; Ω(n⁴) for line segments under translation, for any Lₚ norm; Ω(n⁵) for point sets under rigid motion; and Ω(n⁶) for line segments under rigid motion, both for the L₂ norm. The results for point sets can also be extended to the undirected Hausdorff distance.

    A Multi-Resolution Technique for Comparing Images Using the Hausdorff Distance

    The Hausdorff distance measures the extent to which each point of a "model" set lies near some point of an "image" set and vice versa. In this paper we describe an efficient method of computing this distance, based on a multi-resolution tessellation of the space of possible transformations of the model set. We focus on the case in which the model is allowed to translate and scale with respect to the image. This four-dimensional transformation space (two translation and two scale dimensions) is searched rapidly, while guaranteeing that no match will be missed. We present some examples of identifying an object in a cluttered scene, including cases where the object is partially hidden from view.

    Comparing Images Using the Hausdorff Distance

    The Hausdorff distance measures the extent to which each point of a "model" set lies near some point of an "image" set and vice versa. Thus this distance can be used to determine the degree of resemblance between two objects that are superimposed on one another. In this paper we provide efficient algorithms for computing the Hausdorff distance between all possible relative positions of a binary image and a model. We focus primarily on the case in which the model is only allowed to translate with respect to the image. Then we consider how to extend the techniques to rigid motion (translation and rotation). The Hausdorff distance computation differs from many other shape comparison methods in that no correspondence between the model and the image is derived. The method is quite tolerant of small position errors such as those that occur with edge detectors and other feature extraction methods. Moreover, we show how the method extends naturally to the problem of comparing a portion of a model against an image.
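    The max–min structure described above can be sketched directly. The following is a minimal brute-force illustration of the directed and undirected Hausdorff distance between finite point sets; it is not the paper's efficient algorithm (which exploits the translation-space structure), just the definition made concrete. Function names and the sample point sets are our own.

    ```python
    import math

    def directed_hausdorff(model, image):
        """Directed distance h(A, B): for each model point, find the
        distance to its nearest image point; return the largest such
        nearest-neighbour distance. Brute force, O(|A|*|B|)."""
        return max(min(math.dist(a, b) for b in image) for a in model)

    def hausdorff(a_pts, b_pts):
        """Undirected Hausdorff distance: the larger of the two
        directed distances ("... and vice versa")."""
        return max(directed_hausdorff(a_pts, b_pts),
                   directed_hausdorff(b_pts, a_pts))

    model = [(0, 0), (1, 0)]
    image = [(0, 0), (3, 0)]
    print(directed_hausdorff(model, image))  # 1.0: (1,0) is 1 away from (0,0)
    print(hausdorff(model, image))           # 2.0: (3,0) is 2 away from (1,0)
    ```

    Note that no point correspondence is computed anywhere: each side simply asks how far its worst-matched point is from the other set, which is what makes the measure tolerant of small positional errors.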

    Tracking Non-Rigid Objects in Complex Scenes

    We consider the problem of tracking non-rigid objects moving in a complex scene. We describe a model-based tracking method, in which two-dimensional geometric models are used to localize an object in each frame of an image sequence. The basic idea is to decompose the image of a solid object moving in space into two components: a two-dimensional motion and a two-dimensional shape change. The motion component is factored out, and the shape change is represented by explicitly storing a sequence of two-dimensional models, one corresponding to each image frame. The major assumption underlying the method is that the two-dimensional shape of an object will change slowly from one frame to the next. There is no assumption, however, that the two-dimensional image motion in successive frames will be small. Thus, the method can track objects that move arbitrarily far in the image from one frame to the next.

    Visually-Guided Navigation by Comparing Two-Dimensional Edge Images

    We present a method for navigating a robot from an initial position to a specified landmark in its visual field, using a sequence of monocular images. The location of the landmark with respect to the robot is determined using the change in size and location of the landmark in the image, as a function of the motion of the robot. The landmark location is estimated after the first three images are taken, and this estimate is refined as the robot moves. The method can correct for errors in the robot motion, as well as navigate around obstacles. The obstacle avoidance is done using bump sensors, sonar and dead reckoning, rather than visual servoing. The method does not require prior calibration of the camera. We show some examples of the operation of the system.