3 research outputs found

    An improved photographic method to estimate the shading effect of obstructions

    A new photographic method is presented to evaluate the shading effects of obstructions on surfaces exposed to the sun. The method avoids the difficulty, inherent in the usual tools based on the spatial reconstruction of obstructions or on cylindrical or polar sun charts, of having to describe the surrounding objects accurately. Instead, photographs of the surroundings are used as a background on which the solar disc is drawn at the various hours of the day, so it is immediately apparent whether the sun is visible from the point where the photographs were taken or is obscured by the surrounding obstructions. Despite the method's complex mathematical background, its practical application is very simple and requires only the measurement of three angles for each photograph. The procedure makes it possible to verify the suitability of a generic site for solar exploitation; its main benefits are ease of use and the transparency of the results. The method is particularly useful for evaluating the technical feasibility of small solar systems installed on buildings in densely urbanised cities. Its accuracy was tested experimentally in the field: the sun was photographed at different hours of the day, and the photographed solar discs were compared with the calculated sun positions. The differences between the photographed and calculated positions corresponded to small time lags that do not exceed a few minutes in the worst case. To further investigate the reliability of the proposed method, the impact of image distortion, which affects every method that extracts information from camera images, was also examined.
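    The abstract does not reproduce the paper's equations, but any such visibility check rests on the sun's position (elevation and azimuth) at each hour. Below is a minimal Python sketch of that underlying solar geometry, assuming Cooper's approximation for the declination and a simple horizon-profile visibility test; the function names (solar_position, sun_visible), the example latitude, and the 20-degree obstruction are illustrative assumptions, and the sketch does not implement the paper's photographic overlay, which additionally uses the three angles measured for each photograph.

```python
import math

def solar_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation and azimuth in degrees.

    Azimuth is measured clockwise from north. Uses Cooper's formula for the
    declination and the hour angle measured from solar noon.
    """
    lat = math.radians(lat_deg)
    decl = math.radians(23.45) * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))  # 0 at solar noon

    # Elevation above the horizon
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    elev = math.asin(sin_elev)

    # Sun direction expressed by its local east and north components
    east = -math.cos(decl) * math.sin(hour_angle)
    north = math.cos(lat) * math.sin(decl) - math.sin(lat) * math.cos(decl) * math.cos(hour_angle)
    azim = math.degrees(math.atan2(east, north)) % 360.0

    return math.degrees(elev), azim

def sun_visible(sun_elev_deg, sun_azim_deg, horizon_profile):
    """Sunchart-style test: the sun is visible if it stands higher than the
    obstruction elevation recorded for its azimuth.

    horizon_profile maps integer azimuth bins (degrees) to obstruction
    elevation angles (degrees) seen from the installation point.
    """
    key = int(round(sun_azim_deg)) % 360
    return sun_elev_deg > horizon_profile.get(key, 0.0)

if __name__ == "__main__":
    # Hypothetical 20-degree obstruction spanning the southern sky
    horizon = {a: 20.0 for a in range(120, 240)}
    # Check visibility at 9:00, 12:00 and 15:00 solar time on day 172
    for hour in (9, 12, 15):
        elev, azim = solar_position(41.9, 172, hour)
        print(hour, round(elev, 1), round(azim, 1), sun_visible(elev, azim, horizon))
```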

    The Extraction and Use of Image Planes for Three-dimensional Metric Reconstruction

    The three-dimensional (3D) metric reconstruction of a scene from two-dimensional images is a fundamental problem in Computer Vision. The major bottleneck in recovering such structure lies in estimating the camera parameters. These parameters can be calculated either through a pattern-based calibration procedure, which requires accurate knowledge of the scene, or through a more flexible approach, known as camera autocalibration, which exploits point correspondences across images. While pattern-based calibration requires the presence of a calibration object, autocalibration constraints are typically cast as nonlinear optimization problems that are sensitive to both image noise and initialization; in addition, autocalibration fails for certain camera motions. To overcome these problems, we propose to combine scene and autocalibration constraints and address in this thesis (a) the problem of extracting geometric information about the scene from uncalibrated images, (b) the problem of obtaining a robust estimate of the affine calibration of the camera, and (c) the problem of upgrading and refining the affine calibration into a metric one. In particular, we propose a method for identifying the major planar structures in a scene from images, and another method for recognizing parallel pairs of planes whenever they are available. The identified parallel planes are then used to obtain a robust estimate of both the affine and metric 3D structure of the scene without resorting to the traditional, error-prone calculation of vanishing points. We also propose a refinement method which, unlike existing ones, can simultaneously incorporate plane-parallelism and perpendicularity constraints in the autocalibration process. Our experiments demonstrate that the proposed methods are robust to image noise and provide satisfactory results.
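    As a rough illustration of the kind of planar-structure extraction mentioned in the abstract (and not the thesis's own algorithm), the sketch below fits a homography to point correspondences between two uncalibrated views with RANSAC, using OpenCV: points lying on a common scene plane are related by a homography, so the largest consistent inlier set marks the dominant plane. The image paths, the ratio-test threshold, and the RANSAC tolerance are assumptions for the example.

```python
import cv2
import numpy as np

def dominant_plane_inliers(img1_path, img2_path, ransac_thresh=3.0):
    """Return the homography of the dominant scene plane between two views
    and the matched points consistent with it."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features in both views
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep distinctive matches (Lowe's ratio test)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 4:
        raise ValueError("not enough matches to fit a homography")

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC keeps the largest subset consistent with a single homography,
    # i.e. correspondences that plausibly lie on one scene plane.
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, ransac_thresh)
    inliers = mask.ravel() == 1
    return H, pts1[inliers], pts2[inliers]

if __name__ == "__main__":
    # Hypothetical image pair of the same scene from two viewpoints
    H, p1, p2 = dominant_plane_inliers("view1.jpg", "view2.jpg")
    print("inlier correspondences on the dominant plane:", len(p1))
```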