
    Image-Based Collection and Measurements for Construction Pay Items

    Prior to each payment to contractors and suppliers, measurements are made to document the actual quantity of pay items placed at the site. This manual process poses substantial risk to personnel and could be made more efficient and less prone to human error. In this project, a customized software tool package has been developed to address these concerns. The Pay Item Measurement (PIM) tool package comprises two complementary tools, the Orthophoto Generation Tool and the Graphical Measurement Tool. PIM has been developed in close cooperation with the advisory committee and field engineers from INDOT, and is specifically designed to incorporate the typical actions that INDOT personnel follow. PIM is intended to generate orthophotos for measurements on a planar surface. User guidelines explain how to collect suitable high-quality images, a critical step for successful orthophoto construction. INDOT will also use PIM to identify features, make annotations, and readily compute distances, perimeters, and areas for documentation and recording. This customized tool package will be most useful, and most accurate, when the user guidelines are followed. Several examples demonstrate the characteristics of high-quality image sets that will succeed, along with examples of sets that would fail. Step-by-step instructions demonstrate the correct use of the tool. An instructional video and sample digital image sets complement this report and tool package.
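
    The geometric core of such measurements is simple once an orthophoto with a known ground resolution exists. Below is a minimal sketch, assuming user-annotated pixel coordinates and a meters-per-pixel scale; the function names and the 5 mm/pixel resolution are illustrative assumptions, not PIM's actual interface.

    ```python
    import math

    def pixels_to_meters(points_px, meters_per_pixel):
        """Scale annotated pixel coordinates to ground units."""
        return [(x * meters_per_pixel, y * meters_per_pixel) for x, y in points_px]

    def perimeter(points):
        """Sum of edge lengths around a closed polygon."""
        n = len(points)
        return sum(math.dist(points[i], points[(i + 1) % n]) for i in range(n))

    def area(points):
        """Shoelace formula; valid for simple (non-self-intersecting) polygons."""
        n = len(points)
        s = sum(points[i][0] * points[(i + 1) % n][1]
                - points[(i + 1) % n][0] * points[i][1] for i in range(n))
        return abs(s) / 2.0

    # Example: a rectangular patch annotated on an orthophoto with an
    # assumed resolution of 5 mm/pixel.
    patch_px = [(100, 100), (500, 100), (500, 300), (100, 300)]
    patch_m = pixels_to_meters(patch_px, 0.005)
    print(perimeter(patch_m), area(patch_m))  # 6.0 m, 2.0 m^2
    ```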

    Using approximate models in robot learning

    Advisors: Brianno Coller. Committee members: Behrooz Fallahi; Ji-Chul Ryu. Includes bibliographical references and illustrations.
    Trajectory following is a complicated control problem: its dynamics are nonlinear and stochastic and involve a large number of parameters. The problem presents major difficulties, including the large number of trials required for data collection and the huge volume of computation required to find a closed-loop controller in high-dimensional, stochastic domains. For problems of this type, given an appropriate reward function and a dynamics model, an optimal control policy can be found using model-based reinforcement learning and optimal control algorithms. Because defining an accurate dynamics model is not possible for complicated problems, Pieter Abbeel and Andrew Ng recently presented an algorithm that requires only an approximate model and only a small number of real-life trials. This algorithm has wide applicability; however, there are some problems regarding its convergence. In this research, modifications are presented that provide stronger assurance of convergence to an optimal control policy. The updated algorithm is also implemented to evaluate its efficiency by comparing the results obtained with human expert performance. DDP (Differential Dynamic Programming) is used as the local trajectory optimizer, and a 2D dynamics and kinematics simulator is used to evaluate the accuracy of the presented algorithm. M.S. (Master of Science)
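
    A minimal sketch of the iteration described above, in the spirit of Abbeel and Ng's approximate-model approach: a small number of real trials estimate time-indexed corrections to the approximate model, and DDP then re-optimizes the policy on the corrected model. The names approx_model, real_rollout, and ddp_optimize are hypothetical stand-ins, not the thesis implementation.

    ```python
    import numpy as np

    def improve_policy(approx_model, real_rollout, ddp_optimize, policy,
                       horizon, iters=10):
        """Iteratively correct an approximate dynamics model with real trials."""
        bias = np.zeros((horizon, approx_model.state_dim))
        for _ in range(iters):
            # One real-life trial under the current policy (the expensive step).
            states, actions = real_rollout(policy, horizon)
            # Time-indexed correction: make the model reproduce the observed
            # real trajectory along the visited state-action pairs.
            for t in range(horizon - 1):
                bias[t] = states[t + 1] - approx_model.step(states[t], actions[t])
            # Re-optimize the policy with DDP on the bias-corrected model.
            corrected = lambda s, a, t: approx_model.step(s, a) + bias[t]
            policy = ddp_optimize(corrected, policy, horizon)
        return policy
    ```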

    Developing Artificial Intelligence-Based Decision Support for Resilient Socio-Technical Systems

    During 2017 and 2018, two of the costliest years on record for natural disasters, the U.S. experienced 30 events with total losses of $400 billion. These exorbitant costs arise primarily from the lack of adequate planning, spanning the breadth from pre-event preparedness to post-event response. It is imperative to start thinking about ways to make our built environment more resilient. However, empirically calibrated and structure-specific vulnerability models, a critical input required to formulate decision-making problems, are not currently available. Here, the research objective is to improve the resilience of the built environment through an automated vision-based system that generates actionable information in the form of probabilistic pre-event prediction and post-event assessment of damage. The central hypothesis is that pre-event images, e.g., street-view images, along with a post-event image database, contain sufficient information to construct pre-event probabilistic vulnerability models for assets in the built environment. The rationale for this research stems from the fact that probabilistic damage prediction is the most critical input for formulating decision-making problems under uncertainty targeting mitigation, preparedness, response, and recovery efforts. The following tasks are completed towards this goal. First, planning for one of the bottleneck processes of post-event recovery is formulated as a decision-making problem considering the consequences imposed on the community (module 1). Second, a technique is developed to automate the process of extracting multiple street-view images of a given built asset, thereby creating a dataset that illustrates its pre-event state (module 2). Third, a system is developed that automatically characterizes the pre-event state of the built asset and quantifies the probability that it is damaged by fusing information from deep neural network (DNN) classifiers acting on pre-event and post-event images (module 3). To complete the work, a methodology is developed to associate each asset of the built environment with a structural probabilistic vulnerability model by correlating the pre-event structure characterization with the post-event damage state (module 4). The method is demonstrated and validated using field data collected from recent hurricanes within the US. The vision of this research is to enable the automatic extraction of information about exposure and risk, enabling smarter and more resilient communities around the world.
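
    A hedged sketch of the information-fusion idea in module 3: combine a pre-event classifier's estimate of the structural class with a class-conditional vulnerability model and a post-event classifier's damage evidence into a single damage probability. The class count, priors, and likelihood values below are illustrative placeholders, not values from the study.

    ```python
    import numpy as np

    def fuse_damage_probability(p_class_pre, p_damage_given_class, likelihood_ratio):
        """
        p_class_pre:          P(structural class c | pre-event image), shape (C,)
        p_damage_given_class: P(damage | class c) vulnerability model, shape (C,)
        likelihood_ratio:     P(post image | damage) / P(post image | no damage),
                              derived from the post-event classifier
        """
        prior = float(np.dot(p_class_pre, p_damage_given_class))  # marginal P(damage)
        # Bayes update of the pre-event prior with the post-event evidence.
        return prior * likelihood_ratio / (prior * likelihood_ratio + (1 - prior))

    # Example with 3 hypothetical structural classes from the pre-event
    # street-view classifier, fused with post-event evidence.
    print(fuse_damage_probability(
        np.array([0.7, 0.2, 0.1]),   # P(class | pre-event image)
        np.array([0.1, 0.4, 0.8]),   # P(damage | class)
        4.0))                        # post-event likelihood ratio -> ~0.54
    ```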

    Automated Indoor Image Localization to Support a Post-Event Building Assessment

    Image data remains an important tool for post-event building assessment and documentation. After each natural hazard event, teams of engineers make significant efforts to visit the affected regions and collect useful image data. In general, a global positioning system (GPS) can provide useful spatial information for localizing image data. However, it is challenging to collect such information when images are captured in places where GPS signals are weak or interrupted, such as the indoor spaces of buildings. The inability to document the images' locations hinders the analysis, organization, and documentation of these images, as they lack sufficient spatial context. In this work, we develop a methodology to localize images and link them to locations on a structural drawing. A stream of images can readily be gathered along the path taken through a building using a compact camera. These images may be used to compute a relative location for each image in a 3D point cloud model, which is reconstructed using a visual odometry algorithm. The images may also be used to create local 3D textured models of building components of interest using a structure-from-motion algorithm. A parallel set of images collected for building assessment is linked to the image stream using time information. By projecting the point cloud model onto the structural drawing, the images can be overlaid onto the drawing, providing the context information necessary to make use of those images. Additionally, components or damage of interest captured in these images can be reconstructed in 3D, enabling detailed assessments with sufficient geospatial context. The technique is demonstrated by emulating post-event building assessment and data collection in a real building.
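
    The time-based linking step lends itself to a short illustration. A minimal sketch, assuming the stream images have already been localized by visual odometry and that both cameras share a synchronized clock; the data layout and the 2-second gap tolerance are assumptions, not the paper's implementation.

    ```python
    from bisect import bisect_left

    def link_by_time(stream, assessment_times, max_gap_s=2.0):
        """
        stream: list of (timestamp, (x, y)) for stream images already
                localized in the drawing coordinate frame, sorted by timestamp.
        Returns an estimated (x, y) per assessment photo, or None when no
        stream image falls within max_gap_s of the photo's timestamp.
        """
        times = [t for t, _ in stream]
        located = []
        for t in assessment_times:
            i = bisect_left(times, t)
            # Candidates: the stream frames just before and just after t.
            best = min((c for c in (i - 1, i) if 0 <= c < len(stream)),
                       key=lambda c: abs(times[c] - t))
            located.append(stream[best][1] if abs(times[best] - t) <= max_gap_s
                           else None)
        return located
    ```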

    Prevalence of Short Peer Reviews in 3 Leading General Medical Journals

    Importance: High-quality peer reviews are often thought to be essential to ensuring the integrity of the scientific publication process, but measuring peer review quality is challenging. Although imperfect, review word count could potentially serve as a simple, objective metric of review quality. Objective: To determine the prevalence of very short peer reviews and how often they inform editorial decisions on research articles in 3 leading general medical journals. Design, Setting, and Participants: This cross-sectional study compiled a dataset of peer reviews of published, full-length original research articles from 3 general medical journals (The BMJ, PLOS Medicine, and BMC Medicine) between 2003 and 2022. Eligible articles were those with peer review data; all peer reviews used to make the first editorial decision (ie, accept vs revise and resubmit) were included. Main Outcomes and Measures: The primary outcome was the prevalence of very short reviews, defined as reviews of fewer than 200 words. In secondary analyses, thresholds of fewer than 100 words and fewer than 300 words were used. Results were disaggregated by journal and year. The proportion of articles for which the first editorial decision was made based on a set of peer reviews in which very short reviews constituted 100%, 50% or more, 33% or more, and 20% or more of the reviews was calculated. Results: In this sample of 11,466 reviews (6,086 in BMC Medicine, 3,816 in The BMJ, and 1,564 in PLOS Medicine) corresponding to 4,038 published articles, the median (IQR) word count per review was 425 (253-575) words, and the mean (SD) word count was 520.0 (401.0) words. The overall prevalence of very short (<200 words) peer reviews was 1,958 of 11,466 reviews (17.1%). Across the 3 journals, 843 of 4,038 initial editorial decisions (20.9%) were based on review sets containing 50% or more very short reviews. The prevalence of very short reviews and the share of editorial decisions based on review sets containing 50% or more very short reviews were highest for BMC Medicine (693 of 2,585 editorial decisions [26.8%]) and lowest for The BMJ (76 of 1,040 editorial decisions [7.3%]). Conclusions and Relevance: In this study of 3 leading general medical journals, one fifth of initial editorial decisions for published articles were likely based at least partially on reviews of such short length that they were unlikely to be of high quality. Future research could determine whether monitoring peer review length improves the quality of peer reviews and which interventions, such as incentives and norm-based interventions, may elicit more detailed reviews.
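
    The primary and key secondary metrics reduce to straightforward counting over review word counts. A small sketch, assuming each article's first-decision review set is available as a list of review texts; the data layout is illustrative, not the study's pipeline.

    ```python
    THRESHOLD = 200  # words; the secondary analyses used 100 and 300

    def word_count(review_text):
        return len(review_text.split())

    def summarize(decisions):
        """decisions: list of review-text lists, one list per article's
        first editorial decision."""
        all_reviews = [r for reviews in decisions for r in reviews]
        short = [r for r in all_reviews if word_count(r) < THRESHOLD]
        # Prevalence of very short reviews across all reviews.
        prevalence = len(short) / len(all_reviews)
        # Share of first decisions whose review set is >= 50% very short.
        half_short = sum(
            1 for reviews in decisions
            if sum(word_count(r) < THRESHOLD for r in reviews) >= len(reviews) / 2
        )
        return prevalence, half_short / len(decisions)
    ```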