9 research outputs found

    Segmentation of Structured Objects in Image 1

    Abstract Detection of foreground structured objects in images is an essential task in many image-processing applications. This paper presents a region merging and region growing approach for automatic detection of foreground objects in an image. The proposed approach identifies objects in the given image based on general properties of objects, without depending on prior knowledge about specific objects. Region contrast information is used to separate the regions of structured objects from background regions. Perceptual organization laws are used in the region merging process to group the various regions, i.e., the parts of an object. The system is adaptive to the image content. Experimental results show that the proposed scheme can efficiently extract object boundaries from the background.
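The abstract gives no implementation details, so as a rough illustration only, the region-growing step it mentions can be sketched as a flood-fill that absorbs neighbouring pixels whose intensity is close to the seed's. The threshold, 4-connectivity, and toy image below are all assumptions, not the paper's method:

```python
from collections import deque

def region_grow(image, seed, threshold=10):
    """Grow a region from `seed`, absorbing 4-connected neighbours
    whose intensity is within `threshold` of the seed pixel."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= threshold):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# A bright 2x2 "object" on a dark background (hypothetical data).
img = [[0, 0, 0, 0],
       [0, 200, 210, 0],
       [0, 205, 198, 0],
       [0, 0, 0, 0]]
print(sorted(region_grow(img, (1, 1))))  # the four bright pixels
```

A full system in the paper's spirit would grow many such regions and then merge them using contrast and perceptual-organization cues.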

    Combining crowdsourcing and Google Street View to identify street-level accessibility problems

    ABSTRACT Poorly maintained sidewalks, missing curb ramps, and other obstacles pose considerable accessibility challenges; however, there are currently few, if any, mechanisms to determine accessible areas of a city a priori. In this paper, we investigate the feasibility of using untrained crowd workers from Amazon Mechanical Turk (turkers) to find, label, and assess sidewalk accessibility problems in Google Street View imagery. We report on two studies: Study 1 examines the feasibility of this labeling task with six dedicated labelers including three wheelchair users; Study 2 investigates the comparative performance of turkers. In all, we collected 13,379 labels and 19,189 verification labels from a total of 402 turkers. We show that turkers are capable of determining the presence of an accessibility problem with 81% accuracy. With simple quality control methods, this number increases to 93%. Our work demonstrates a promising new, highly scalable method for acquiring knowledge about sidewalk accessibility.
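The abstract describes its quality control only as "simple quality control methods"; one such method, majority voting over redundant labels from several workers, can be sketched as below. The label names and vote counts are invented for illustration:

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label among redundant crowd labels.

    Ties are broken by first occurrence, per Counter.most_common's
    insertion-order guarantee.
    """
    return Counter(labels).most_common(1)[0][0]

# Three turkers label the same street-view location (hypothetical data).
votes = {"loc_1": ["curb_ramp_missing", "curb_ramp_missing", "no_problem"],
         "loc_2": ["obstacle", "obstacle", "obstacle"]}
consensus = {loc: majority_vote(v) for loc, v in votes.items()}
print(consensus)
```

Aggregating redundant labels this way is the standard mechanism by which per-worker accuracy (81% here) rises to a higher consensus accuracy (93%).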

    Developing a Machine Learning Algorithm for Outdoor Scene Image Segmentation

    Image segmentation is one of the major problems in image processing, computer vision, and machine learning. A central aim of image segmentation is to reduce the gap between computer vision and human vision by training computers on diverse data. Outdoor image segmentation and classification have become very important in computer vision, with applications in woodland surveillance, defence, and security. The task of assigning an input image to one class from a fixed set of categories is a major problem in image segmentation. The main question addressed in this research is how outdoor image classification algorithms can be improved using a Region-based Convolutional Neural Network (R-CNN) architecture. No single segmentation method works best on every problem; to determine the best method for a given dataset, various tests must be run to find the best performance. Deep learning models, however, have achieved increasing success thanks to the availability of massive datasets and expanding model depth and parameterisation. In this research, a convolutional neural network architecture is used to improve outdoor scene image segmentation; an empirical research method was used to answer questions about existing segmentation algorithms and the techniques used to achieve the best performance. Outdoor scene images were trained on a pre-trained region-based convolutional neural network with the Visual Geometry Group-16 (VGG-16) architecture. The pre-trained R-CNN model was retrained on five samples of increasing size; to enlarge the last two samples, the data was duplicated. Twenty-one test images were used to evaluate all the models. Research has shown that deep learning methods perform better in image segmentation because of the increased availability of datasets. Duplicating images did not yield the best results; however, the model performed well on the first three samples.
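The abstract does not name the metric used on the 21 test images. A common choice for comparing segmentation models is per-class intersection-over-union (IoU), sketched here on a toy flattened label map; the class names and values are assumptions, not the thesis's data:

```python
def iou(pred, truth, cls):
    """Intersection-over-union for one class between two
    equal-length flattened label maps."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 0.0

# Toy 3x3 outdoor scene, flattened row-major: 0 = sky, 1 = tree.
truth = [0, 0, 0, 1, 1, 1, 1, 1, 1]
pred  = [0, 0, 1, 1, 1, 1, 1, 1, 0]
print(round(iou(pred, truth, 1), 3))  # 5 overlapping / 7 in union
```

Averaging this score over all classes and all test images (mean IoU) gives a single number by which the five retrained models could be ranked.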

    Fish4Knowledge: Collecting and Analyzing Massive Coral Reef Fish Video Data

    This book gives a start-to-finish overview of the whole Fish4Knowledge project, in 18 short chapters, each describing one aspect of the project. The Fish4Knowledge project explored the possibilities of big video data, in this case from undersea video. Recording and analyzing 90 thousand hours of video from ten camera locations, the project gives a 3 year view of fish abundance in several tropical coral reefs off the coast of Taiwan. The research system built a remote recording network, over 100 Tb of storage, supercomputer processing, video target detection and

    Scalable Methods to Collect and Visualize Sidewalk Accessibility Data for People with Mobility Impairments

    Poorly maintained sidewalks pose considerable accessibility challenges for people with mobility impairments. Despite comprehensive civil rights legislation such as the Americans with Disabilities Act, many city streets and sidewalks in the U.S. remain inaccessible. The problem is not just that sidewalk accessibility fundamentally affects where and how people travel in cities, but also that there are few, if any, mechanisms to determine accessible areas of a city a priori. To address this problem, my Ph.D. dissertation introduces and evaluates new scalable methods for collecting data about street-level accessibility using a combination of crowdsourcing, automated methods, and Google Street View (GSV). My dissertation has four research threads. First, we conduct a formative interview study to establish a better understanding of how people with mobility impairments currently assess accessibility in the built environment and the role of emerging location-based technologies therein. The study uncovers existing methods for assessing the accessibility of the physical environment and identifies useful features for future assistive technologies. Second, we develop and evaluate scalable crowdsourced accessibility data collection methods. We show that paid crowd workers recruited from an online labor marketplace can find and label accessibility attributes in GSV with 81% accuracy. This accuracy improves to 93% with quality control mechanisms such as majority vote. Third, we design a system that combines crowdsourcing and automated methods to increase data collection efficiency. Our work shows that by combining the two, we can increase data collection efficiency by 13% without sacrificing accuracy. Fourth, we develop and deploy a web tool that lets volunteers help us collect street-level accessibility data in Washington, D.C. As of this writing, we have collected accessibility data for 20% of the streets in D.C.
    We conduct a preliminary evaluation of how this web tool is used. Finally, we implement proof-of-concept accessibility-aware applications using the accessibility data collected with the help of volunteers. My dissertation contributes to the accessibility, computer science, and HCI communities by: (i) extending knowledge of how people with mobility impairments use technology to navigate cities; (ii) introducing the first work demonstrating that GSV is a viable source for learning about the accessibility of the physical world; (iii) introducing the first method that combines crowdsourcing and automated methods to remotely collect accessibility information; (iv) deploying interactive web tools that allow volunteers to help populate the world's largest dataset on street-level accessibility; and (v) demonstrating accessibility-aware applications that empower people with mobility impairments.
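The third thread combines automated detection with crowd verification, but the abstract does not describe the routing logic. One plausible policy, entirely an assumption here, is to accept high-confidence automated labels directly and send only uncertain detections to crowd workers, saving crowd effort without sacrificing accuracy:

```python
def route(detections, confidence_threshold=0.9):
    """Split automated detections: confident ones are accepted
    directly; the rest are queued for crowd verification."""
    auto, crowd = [], []
    for det in detections:
        (auto if det["confidence"] >= confidence_threshold else crowd).append(det)
    return auto, crowd

# Hypothetical detector output on three GSV panoramas.
detections = [
    {"id": "gsv_001", "label": "curb_ramp_missing", "confidence": 0.97},
    {"id": "gsv_002", "label": "surface_problem",   "confidence": 0.55},
    {"id": "gsv_003", "label": "obstacle",          "confidence": 0.92},
]
auto, crowd = route(detections)
print(len(auto), len(crowd))  # 2 accepted automatically, 1 sent to the crowd
```

The threshold controls the efficiency/accuracy trade-off: raising it sends more work to the crowd, lowering it trusts the detector more.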