
    Methods for Detecting Floodwater on Roadways from Ground Level Images

    Recent research and statistics show that the frequency of flooding worldwide has been increasing, impacting flood-prone communities severely. This natural disaster causes significant damage to human life and property, inundates roads, overwhelms drainage systems, and disrupts essential services and economic activities. The focus of this dissertation is to use machine learning methods to automatically detect floodwater in ground-level images in support of the frequently impacted communities. Ground-level images can be retrieved from multiple sources, including photos taken with mobile phone cameras as communities record the state of their flooded streets. The model developed in this research processes these images at multiple levels. The first detection model investigates the presence of flood in images by developing and comparing image classifiers with various feature extractors. Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG), and pretrained convolutional neural networks are used as feature extractors; decision trees, logistic regression, and K-Nearest Neighbors (K-NN) models are then trained and tested to predict the presence of floodwater in an image. Once the model detects flood in an image, it moves to the second layer, which detects floodwater at the pixel level. This pixel-level identification is achieved by semantic segmentation, using both a super-pixel-based prediction method and Fully Convolutional Neural Networks (FCNs). First, the SLIC method is used to create super-pixels, and the same types of classifiers as in the initial classification step are trained to predict the class of each super-pixel; the FCN is then trained end-to-end without any additional classifiers. Once these processes are complete, images are segmented into regions of floodwater at the pixel level. In both the classification and semantic segmentation tasks, deep learning-based methods showed the best results. Once the model receives confirmation of flood detection at the image and pixel layers, it moves to the final task of estimating floodwater depth. This third and final layer of the model is critical, as it can help officials deduce the severity of the flood in a given area. To estimate water depth and flood severity, the model processes the cars on flooded streets and calculates the fraction of their tires that is under water. This calculation is achieved with a mixture of deep learning and classical computer vision techniques. There are four main processes in this task: (i) semantic segmentation of the image into pixels belonging to background, floodwater, and vehicle wheels, performed by multiple FCN models trained with various base models; (ii) object detection of tires with a You Only Look Once (YOLO) detector; (iii) improvement of the initial segmentation results by a proposed U-Net-like semantic segmentation network that takes the tire patches from the object detector together with the corresponding initial segmentation results and learns to fix their errors; and (iv) calculation of water depth as the ratio of the tire wheel under the water.
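    A minimal sketch of the first detection layer described above (LBP texture features feeding a logistic regression classifier) is given below; the function names and parameter values are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression

def lbp_histogram(gray_image, points=8, radius=1.0):
    """Uniform LBP histogram as a fixed-length texture descriptor."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2  # uniform LBP yields values 0 .. points + 1
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Hypothetical usage: train_images is a list of grayscale arrays and
# labels marks flood (1) vs. no-flood (0) scenes.
# X = np.array([lbp_histogram(img) for img in train_images])
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
# flood_probability = clf.predict_proba(X_new)[:, 1]
```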
    The depth-estimation task uses the improved segmentation results to identify the ellipses that correspond to the wheels of vehicles, and it combines two approaches in a hybrid method: (i) using the improved segmentation results directly, since they return the pixels belonging to the wheels, from which the wheel boundaries are derived; and (ii) finding arcs that belong to elliptical objects by applying a series of image processing methods, then connecting the detected arcs into larger structures such as two-piece (half), three-piece, or four-piece (full) ellipses. Once the ellipse boundary has been computed with both methods, the ratio of the ellipse under floodwater can be calculated. This novel multi-model system allows potential prediction errors to be attributed to the different parts of the model, such as the semantic segmentation of the image or the calculation of the elliptical boundary. To verify the applicability of the proposed methods and to train the models, extensive hand-labeled datasets were created as part of this dissertation. The initial images were collected from the web; the datasets were then enriched with images created from virtual environments, simulations of neighborhoods under flood, built with the Unity software. In conclusion, the proposed methods in this dissertation, as validated on the labeled datasets, can successfully classify images as flood scenes, semantically segment the regions of flood, and predict the depth of water to indicate severity.
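    A minimal sketch of the final ratio computation, assuming an OpenCV-style fitted wheel ellipse and a horizontal waterline at a known image row; the helper name and the horizontal-waterline simplification are our assumptions, not the dissertation's method.

```python
import numpy as np
import cv2

def submerged_wheel_ratio(wheel_ellipse, waterline_row, image_shape):
    """Fraction of a wheel ellipse lying below a horizontal waterline.

    wheel_ellipse: ((cx, cy), (width, height), angle), as returned by
    cv2.fitEllipse; waterline_row: topmost image row labeled as water.
    """
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    cv2.ellipse(mask, wheel_ellipse, 1, -1)  # rasterize the filled wheel disc
    total = int(mask.sum())
    if total == 0:
        return 0.0
    below = int(mask[int(waterline_row):, :].sum())  # pixels under water
    return below / total
```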

    Deep CCD Surface Photometry of Galaxy Clusters I: Methods and Initial Studies of Intracluster Starlight

    We report the initial results of a deep imaging survey of galaxy clusters. The primary goals of this survey are to quantify the amount of intracluster light as a function of cluster properties, and to quantify the frequency of tidal debris. We outline the techniques needed to perform such a survey, and we report findings for the first two galaxy clusters in the survey: Abell 1413 and MKW 7. These clusters vary greatly in richness and structure. We show that our surface photometry reliably reaches a surface brightness of μ_V = 26.5 mag per square arcsecond. We find that both clusters show clear excesses over a best-fitting r^{1/4} profile; this was expected for Abell 1413, but not for MKW 7. Both clusters also show evidence of tidal debris in the form of plumes and arc-like structures, but no long tidal arcs were detected. We also find that the central cD galaxy in Abell 1413 is flattened at large radii, with an ellipticity of ≈ 0.8, the largest measured ellipticity of any cD galaxy to date. Comment: 58 pages, 24 figures, accepted for publication in the Astrophysical Journal. This version has extremely low-resolution figures to comply with the 650k limit; the high-resolution version available at http://burro.astr.cwru.edu/johnf/icl1.ps.gz is strongly recommended.
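    For reference, the r^{1/4} profile against which the excess light is measured is the standard de Vaucouleurs law; in its usual surface-brightness form (a textbook formula, not one quoted from this paper):

```latex
% de Vaucouleurs r^{1/4} law: \mu_e is the surface brightness at the
% effective radius r_e (the radius enclosing half the total light).
\mu(r) = \mu_e + 8.327\left[\left(\frac{r}{r_e}\right)^{1/4} - 1\right]
```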

    Soccer line mark segmentation and classification with stochastic watershed transform

    Augmented reality applications are beginning to change the way sports are broadcast, providing richer experiences and valuable insights to fans. The first step of augmented reality systems is camera calibration, possibly based on detecting the line markings of the playing field. Most existing proposals for line detection rely on edge detection and the Hough transform, but radial distortion and extraneous edges cause inaccurate or spurious detections of line markings. We propose a novel strategy to automatically and accurately segment and classify line markings. First, line points are segmented by a stochastic watershed transform that is robust to radial distortion, since it makes no assumptions about line straightness, and is unaffected by the presence of players or the ball. The line points are then linked to primitive structures (straight lines and ellipses) by a very efficient procedure that makes no assumptions about the number of primitives in each image. The strategy has been tested on a new public database composed of 60 annotated images from matches in five stadiums. The results show that the proposed strategy is more robust and accurate than existing approaches, achieving successful line mark detection even in challenging conditions. Comment: 18 pages, 11 figures
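    The core stochastic watershed idea, averaging watershed boundaries over many random marker sets to obtain a boundary-probability map, can be sketched as follows; this is a generic illustration using scikit-image, not the authors' implementation, and every parameter value is an assumption.

```python
import numpy as np
from skimage import filters, segmentation

def stochastic_watershed(gray_image, n_realizations=50, n_markers=100, seed=0):
    """Boundary-probability map from repeated random-marker watersheds."""
    rng = np.random.default_rng(seed)
    gradient = filters.sobel(gray_image)  # relief on which watersheds flood
    boundary_prob = np.zeros(gray_image.shape, dtype=float)
    for _ in range(n_realizations):
        markers = np.zeros(gray_image.shape, dtype=int)
        rows = rng.integers(0, gray_image.shape[0], n_markers)
        cols = rng.integers(0, gray_image.shape[1], n_markers)
        markers[rows, cols] = np.arange(1, n_markers + 1)  # random seeds
        labels = segmentation.watershed(gradient, markers)
        boundary_prob += segmentation.find_boundaries(labels, mode="thick")
    return boundary_prob / n_realizations  # high values = stable boundaries
```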

    Research on a modified RANSAC and its applications to ellipse detection from a static image and motion detection from active stereo video sequences

    Degree system: new; report number: Kō 3091; degree type: Doctor of Philosophy in Global Information and Telecommunication Studies; date conferred: 2010/2/24; Waseda University diploma number: Shin 535

    Robust Detection of Non-overlapping Ellipses from Points with Applications to Circular Target Extraction in Images and Cylinder Detection in Point Clouds

    This manuscript provides a collection of new methods for the automated detection of non-overlapping ellipses from edge points. The methods introduce new developments in: (i) robust Monte Carlo-based ellipse fitting to 2-dimensional (2D) points in the presence of outliers; (ii) detection of non-overlapping ellipses from 2D edge points; and (iii) extraction of cylinders from 3D point clouds. The proposed methods were thoroughly compared with established state-of-the-art methods, using simulated and real-world datasets, through the design of four sets of original experiments. The proposed robust ellipse detection was found to be superior to four reliable robust methods, including the popular least median of squares, on both simulated and real-world datasets. The proposed process for detecting non-overlapping ellipses achieved an F-measure of 99.3% on real images, compared to F-measures of 42.4%, 65.6%, and 59.2% obtained with the methods of Fornaciari, Patraucean, and Panagiotakis, respectively. The proposed cylinder extraction method identified all detectable mechanical pipes in two real-world point clouds, obtained under laboratory and industrial construction site conditions. The results of this investigation show promise for the application of the proposed methods to the automatic extraction of circular targets from images and of pipes from point clouds.
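    As a rough illustration of sampling-based robust ellipse fitting, here is a generic RANSAC-style sketch under our own assumptions; it is not the paper's Monte Carlo method, and its algebraic-distance inlier test is a deliberate simplification.

```python
import numpy as np

def fit_conic(points):
    """Conic ax^2 + bxy + cy^2 + dx + ey + f = 0 through a 5-point sample,
    with unit-norm coefficients taken from the SVD null space."""
    x, y = points[:, 0], points[:, 1]
    design = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(design)
    return vt[-1]  # right singular vector of the smallest singular value

def ransac_ellipse(points, n_iter=500, tol=1e-3, seed=0):
    """Sample 5 points per iteration; keep the ellipse with most inliers."""
    rng = np.random.default_rng(seed)
    best_conic, best_count = None, 0
    x, y = points[:, 0], points[:, 1]
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 5, replace=False)]
        a, b, c, d, e, f = fit_conic(sample)
        if b * b - 4 * a * c >= 0:  # discriminant test: not an ellipse
            continue
        residual = np.abs(a * x * x + b * x * y + c * y * y
                          + d * x + e * y + f)  # crude algebraic distance
        count = int(np.sum(residual < tol))
        if count > best_count:
            best_conic, best_count = (a, b, c, d, e, f), count
    return best_conic, best_count
```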

    A practical multirobot localization system

    We present fast and precise vision-based software intended for multiple-robot localization. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which allows the expected localization precision, area of coverage, and processing speed to be estimated from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make its source code public at http://purl.org/robotics/whycon so that it can be used as an enabling technology for various mobile robotic problems.
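    A minimal sketch of the circularity test at the heart of such black-and-white pattern detectors, written with generic OpenCV calls under our own assumptions; unlike the described method, this version scans the whole frame, so its cost does grow with image size.

```python
import cv2
import numpy as np

def detect_circular_blobs(gray_frame, min_area=100, roundness_tol=0.15):
    """Find dark blobs whose area matches that of their fitted ellipse."""
    _, binary = cv2.threshold(gray_frame, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    detections = []
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < min_area or len(contour) < 5:  # fitEllipse needs 5 points
            continue
        (cx, cy), (w, h), angle = cv2.fitEllipse(contour)
        ellipse_area = np.pi * w * h / 4.0  # w, h are full axis lengths
        if ellipse_area == 0:
            continue
        # Accept blobs whose filled area is close to the fitted ellipse's
        if abs(area / ellipse_area - 1.0) < roundness_tol:
            detections.append((cx, cy, w, h, angle))
    return detections
```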