
    Building an environment model using depth information

    Modeling the environment is one of the most crucial issues for the development and research of autonomous robots and tele-perception. Though the physical robot operates (navigates and performs various tasks) in the real world, any type of reasoning, such as situation assessment, planning, or reasoning about action, is performed on information in its internal world. Hence, the robot's intentional actions are inherently constrained by the models it has. These models may serve as interfaces between sensing modules and reasoning modules, or, in the case of telerobots, as an interface between the human operator and the distant robot. A robot operating in a known, restricted environment may have a priori knowledge of its entire possible work domain, which will be assimilated into its World Model. As the information in the World Model is relatively fixed, an Environment Model must be introduced to cope with changes in the environment and to allow exploration of entirely new domains. Introduced here is an algorithm that uses dense range data collected at various positions in the environment to generate, refine, and update a 3-D volumetric model of an environment. The model, which is intended for autonomous robot navigation and tele-perception, consists of cubic voxels with the possible attributes Void, Full, and Unknown. Experimental results from simulations of range data in synthetic environments are given. The quality of the results shows great promise for dealing with noisy input data. Performance measures for the algorithm are defined, and quantitative results for noisy data and positional uncertainty are presented.
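
    A minimal sketch of the voxel-update idea described in this abstract: keep a 3-D grid of cubic voxels initialized to Unknown, mark the voxels traversed by each range measurement as Void, and mark the voxel at the measured hit point as Full. The class and method names, the fixed grid size, and the sampling scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

UNKNOWN, VOID, FULL = 0, 1, 2

class VoxelGrid:
    """Cubic-voxel occupancy model with Void / Full / Unknown attributes (sketch)."""

    def __init__(self, size=(100, 100, 100), resolution=0.1):
        self.grid = np.full(size, UNKNOWN, dtype=np.uint8)
        self.resolution = resolution  # edge length of one voxel in meters

    def _to_index(self, point):
        return tuple(np.floor(np.asarray(point) / self.resolution).astype(int))

    def _in_bounds(self, idx):
        return all(0 <= i < s for i, s in zip(idx, self.grid.shape))

    def integrate_ray(self, sensor_pos, hit_point, n_samples=200):
        """Mark voxels between sensor and hit point as Void, the hit voxel as Full."""
        sensor_pos = np.asarray(sensor_pos, dtype=float)
        hit_point = np.asarray(hit_point, dtype=float)
        # Sample along the ray; endpoint excluded so the hit voxel keeps Full below.
        for t in np.linspace(0.0, 1.0, n_samples, endpoint=False):
            idx = self._to_index(sensor_pos + t * (hit_point - sensor_pos))
            if self._in_bounds(idx):
                self.grid[idx] = VOID
        hit_idx = self._to_index(hit_point)
        if self._in_bounds(hit_idx):
            self.grid[hit_idx] = FULL
```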

    Residential building damage from hurricane storm surge: proposed methodologies to describe, assess and model building damage

    Although hydrodynamic models are used extensively to quantify the physical hazard of hurricane storm surge, the connection between the physical hazard and its effects on the built environment has not been well addressed. The focus of this dissertation research is improving our understanding of the interaction of hurricane storm surge with the built environment. This is accomplished through proposed methodologies to describe, assess, and model residential building damage from hurricane storm surge. Current methods to describe damage from hurricane events rely on the initiating mechanism. To describe hurricane damage to residential buildings, a combined wind and flood damage scale is developed that categorizes hurricane damage on a loss-consistent basis, regardless of the primary damage mechanism. The proposed Wind and Flood (WF) Damage Scale incorporates existing damage and loss assessment methodologies for wind and flood events and describes damage using a seven-category discrete scale. Assessment of hurricane damage has traditionally been conducted through field reconnaissance deployments where damage information is captured and cataloged. The increasing availability of high-resolution satellite and aerial imagery in recent years has led to damage assessments that rely on remotely sensed information. Existing remote sensing damage assessment methodologies are reviewed for high-velocity flood events at the regional, neighborhood, and per-building levels. The suitability of using remote sensing to assess residential building damage from hurricane storm surge at the neighborhood and per-building levels is investigated using visual analysis of damage indicators. Existing models for flood damage in the United States generally quantify the economic loss that results from flooding as a function of depth, rather than assessing a level of physical damage. To serve as a first work in this area, a framework for the development of an analytical damage model for residential structures is presented. Input conditions are provided by existing hydrodynamic storm surge models, and building performance is determined through a comparison of physical hazard and building resistance parameters in a geospatial computational environment. The proposed damage model consists of a two-tier framework, in which overall structural response and the performance of specific components are evaluated.
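
    A small illustrative sketch of the hazard-versus-resistance comparison described above: compare one surge hazard parameter (inundation depth) against one per-building resistance parameter (first-floor elevation) and map the margin onto a discrete damage category. The field names, 0.5 m bands, and category boundaries are hypothetical placeholders, not the dissertation's WF Damage Scale definitions.

```python
from dataclasses import dataclass

@dataclass
class Building:
    first_floor_elevation_m: float  # resistance parameter (assumed for illustration)

def damage_category(surge_depth_m: float, building: Building) -> int:
    """Return a damage category 0-6 from the margin between hazard and resistance."""
    margin = surge_depth_m - building.first_floor_elevation_m
    if margin <= 0.0:
        return 0  # surge does not reach the first floor
    # Illustrative 0.5 m bands, capped at the top of a seven-category scale.
    return min(6, 1 + int(margin / 0.5))

# Example: 2.3 m of surge against a 1.0 m first-floor elevation -> category 3.
print(damage_category(2.3, Building(first_floor_elevation_m=1.0)))
```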

    Modeling environment using multi-view stereo

    In this work, we study the potential of a two-camera system in building an understanding of the environment. We investigate whether a stereo camera as the sole sensor can be trusted for real-time environment analysis and modeling to enable movement and interaction in a general setting. We propose a complete pipeline from the sensor setup to the final environment model, evaluate currently available algorithms for each step, and make our own implementation of the pipeline. To assess real-world performance, we record our own stereo dataset in a laboratory environment in good lighting conditions. The dataset contains stereo recordings using different camera angles relative to the movement, and ground truth for the environment model and the camera trajectory recorded with external sensors. The steps of our proposed pipeline are as follows. 1) We calibrate two cameras using the de facto standard method to form the stereo camera system. 2) We calculate depth from the stereo images by finding dense correspondences using semi-global block matching and compare the results to a recent data-driven convolutional neural network algorithm. 3) We estimate the camera trajectory using temporal feature tracking. 4) We form a global point cloud from the depth maps and the camera poses and analyze drivability in indoor and outdoor environments by fitting a plane or a spline model, respectively, to the global cloud. 5) We segment objects based on connectivity in the drivability model and mesh rough object models on top of the segmented clouds. 6) We refine the object models by picking keyframes containing the object, re-estimating camera poses using structure from motion, and building an accurate dense cloud using multi-view stereo. We use a patch-based algorithm that optimizes the photo-consistency of the patches in the visible cameras. We conclude that with current state-of-the-art algorithms, a stereo camera system is capable of reliably estimating drivability in real time and can be used as the sole sensor to enable autonomous movement. Building accurate object models for interaction purposes is more challenging and requires substantial view coverage and computation with current multi-view algorithms. Our pipeline has limitations in long-term modeling: drift accumulates, which can be dealt with by implementing loop closure and using external information such as GPS. In terms of data, we inefficiently retain complete information, whereas storing compressed representations such as octrees, or only the built model, could be considered. Finally, environments with insufficient texture and lighting are problematic for camera-based systems and require complementary solutions.
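
    A hedged sketch of steps 2 and 4 of the pipeline described above, using OpenCV's semi-global block matching for dense correspondences and a reprojection of the disparity map to 3-D points. The rectified image pair and the reprojection matrix Q are assumed to come from the calibration step; the numeric matcher settings are illustrative, not the thesis's tuned parameters.

```python
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, Q):
    """Compute an SGBM disparity map and reproject valid pixels to 3-D points."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # search range; must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,         # smoothness penalties, scaled with block size
        P2=32 * 5 * 5,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # OpenCV returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel (X, Y, Z)
    valid = disparity > 0
    return disparity, points[valid]
```

    The per-frame point clouds returned here would then be transformed by the estimated camera poses and accumulated into the global cloud used for the drivability analysis.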

    Development and characterization of the OSIRIS USASK Observatory

    The OSIRIS instrument on board the Odin satellite uses limb-viewing techniques to measure scattered sunlight and so determine the vertically resolved concentrations of atmospheric constituents, including ozone. Initially, a proof-of-concept instrument was built and tested. This instrument, the Developmental Model, is now housed in the third-floor clean room of the Physics Building on the University of Saskatchewan campus. The Developmental Model was incorporated into a system designed to monitor scattered sunlight above Saskatoon. The system was set up to transmit skylight to the Developmental Model using a fiber optic cable and to perform all measurements automatically and with minimal user interaction. The system was calibrated to determine the pixel-to-wavelength response. Characterizations of the point spread function and relative intensity response of the detector were also made. A shutter system was designed and constructed to measure the detector dark current. An enclosure was built on top of the Physics Building to provide a weatherproof environment and so allow data collection throughout the year. Zenith sky measurements were taken during twilight hours to provide information on the depth of absorption in the Chappuis band, an indicator of the total ozone column. The absorption depth was converted to a Dobson Unit measurement of the ozone column. Analysis of the collected data supports two conclusions. The first is that a measurement set taken in the presence of clouds shows different signatures than a clear measurement set. The second is the detection of a diurnal trend in the total ozone column, with greater amounts measured in the morning. The OSIRIS USASK Observatory is now operational and collecting data for future analysis of scattered sunlight measurements above Saskatoon.
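
    A minimal sketch of the spectrum pre-processing steps mentioned in this abstract: subtract a shutter-closed dark-current frame, apply a pixel-to-wavelength calibration, and estimate a relative absorption depth inside the Chappuis band. The polynomial coefficients, band limits, and continuum estimate are placeholders, not the Developmental Model's actual calibration or retrieval.

```python
import numpy as np

def calibrate_spectrum(raw_counts, dark_counts, wavelength_coeffs):
    """Return (wavelengths_nm, dark-corrected counts) for one exposure."""
    corrected = raw_counts.astype(float) - dark_counts.astype(float)
    pixels = np.arange(raw_counts.size)
    wavelengths = np.polyval(wavelength_coeffs, pixels)  # pixel -> nm mapping
    return wavelengths, corrected

def chappuis_absorption_depth(wavelengths, counts, band=(550.0, 610.0)):
    """Rough relative absorption depth inside the Chappuis band (illustrative)."""
    in_band = (wavelengths >= band[0]) & (wavelengths <= band[1])
    band_counts = counts[in_band]
    # Linear continuum drawn between the first and last in-band samples.
    continuum = np.interp(wavelengths[in_band],
                          [wavelengths[in_band][0], wavelengths[in_band][-1]],
                          [band_counts[0], band_counts[-1]])
    return float(np.mean(1.0 - band_counts / continuum))
```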

    Calculating Staircase Slope from a Single Image

    Realistic modeling of a 3-D environment has grown in popularity due to the increasing range of practical applications. Whether for navigation purposes, entertainment value, or architectural standardization, the ability to determine the dimensions of a room is becoming more and more important. One of the trickier, but critical, features within any multistory environment is the staircase. Staircases are difficult to model because of their uneven surfaces and varying depths. A variety of approaches exist to meet this need; unfortunately, many of them rely upon specialized sensory equipment, multiple calibrated cameras, or other impractical setups. Here, we propose a simpler approach. This paper outlines a method for extracting the slope of a staircase using a single monocular image. By relying on only a single image, we eliminate the need for extraneous accessories and glean as much information as possible from common pictures. We do not hope to achieve the high level of accuracy seen from laser scanning methods, but seek to produce a viable result that can both be helpful for current applications and serve as a building block for later development. When constructing our pipeline, we take into account several options. Each step can be achieved with different techniques, which we evaluate and compare on either a qualitative or quantitative level. This leads to our final result, which can determine the slope of a staircase with an error rate of 31.1%. With a small amount of prior knowledge or preprocessing, this drops to an average of 18.7%. Overall, we deem this an acceptable result given the limited information and processing resources the program was allowed to utilize.
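
    A hedged sketch of one plausible single-image approach in the spirit of this abstract: detect edges, extract the near-horizontal stair-nosing line segments with a Hough transform, and fit a line through their midpoints as a proxy for the staircase incline in image coordinates. The thresholds and the overall recipe are illustrative assumptions, not the paper's exact pipeline; recovering the metric slope would additionally require camera geometry.

```python
import cv2
import numpy as np

def staircase_slope_estimate(image_path):
    """Estimate an image-space rise-over-run for a staircase from one photo."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return None
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                            minLineLength=gray.shape[1] // 4, maxLineGap=10)
    if lines is None:
        return None
    # Keep near-horizontal segments (candidate stair nosings); a staircase
    # appears as a stack of such lines climbing across the image.
    midpoints = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if angle < 15 or angle > 165:
            midpoints.append(((x1 + x2) / 2.0, (y1 + y2) / 2.0))
    if len(midpoints) < 2:
        return None
    xs, ys = zip(*midpoints)
    # Slope of the best-fit line through the nosing midpoints (pixels per pixel).
    return float(np.polyfit(xs, ys, 1)[0])
```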

    A review of daylighting design and implementation in buildings
