
    Predicting deterioration rate of culvert structures utilizing a Markov model

    A culvert is typically a hydraulic passage, normally placed perpendicular to the road alignment, which connects the upstream and downstream sections underneath an embankment while also providing structural support for earth and traffic loads. The structural condition of culverts continues to deteriorate due to aging, limited maintenance budgets, and increased traffic loads. Maintaining the performance of culverts at acceptable levels is a priority for U.S. Departments of Transportation (DOTs), and the effectiveness of culvert maintenance can be greatly improved by introducing asset management practices. A priority list generated by traditional condition assessment might not provide optimum solutions, and the benefits of culvert asset management practices can be maximized by incorporating predictions of deterioration trends. This dissertation includes the development of a decision-making chart for culvert inspection, the development of a culvert rating methodology using the Analytic Hierarchy Process (AHP) based on an expert opinion survey, and the development of a Markovian model to predict the deterioration rate of culvert structures at the network level. The literature review is presented in three parts: culvert asset management systems in the U.S.; non-destructive testing (NDT) technologies for culvert inspection (concrete, metal, and thermoplastic culvert structures); and statistical approaches for estimating the deterioration rate of infrastructure. A review of available NDT methods was performed to identify methods applicable to culvert inspection. To identify practices currently used for culvert asset management, culvert inventory data requests were sent to 34 DOTs. The responses revealed that a relatively small number of DOTs manage their culvert assets using formal asset management systems and that, while a number of DOTs have inventory databases, many do not have a methodology in place to convert them into priority lists. In addition, when making decisions, DOTs do not incorporate future deterioration rate information into the decision-making process. The objective of this work was to narrow the gap between research and application. The culvert inventory database provides basic information support for culvert asset management. Preliminary data analysis of datasets provided by selected DOTs was performed to demonstrate the differences among them. An expert opinion survey using AHP was performed to confirm the weights of 23 factors believed to contribute to the hydraulic and structural performance of culvert structures, so as to establish the culvert rating methodology. A homogeneous Markov model, calibrated using the Metropolis-Hastings algorithm, was used to compute the deterioration rate of culverts at the network level. A real-world case study consisting of datasets for three highways inspected regularly by the Oregon DOT is also presented. The performance of the model was validated using Pearson's chi-square test.
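
    To make the calibration step concrete, below is a minimal Python sketch of a homogeneous Markov deterioration model whose per-state drop probabilities are estimated with a Metropolis-Hastings sampler. The four-state condition scale and the inspection counts are assumptions for illustration; this is not the dissertation's implementation.

```python
import numpy as np

# Hypothetical setup: 4 condition states, each non-terminal state i with an
# unknown probability p[i] of dropping to the next-worse state per inspection
# cycle (a common simplification of homogeneous Markov deterioration).
rng = np.random.default_rng(0)
N_STATES = 4
# Assumed inspection data: counts[i, j] = culverts observed moving from
# state i to state j between consecutive inspections (made-up numbers).
counts = np.array([[80, 20,  0,  0],
                   [ 0, 60, 25,  0],
                   [ 0,  0, 40, 15],
                   [ 0,  0,  0, 30]])

def log_likelihood(p):
    """Log-likelihood of the observed stay/drop transitions."""
    if np.any(p <= 0) or np.any(p >= 1):
        return -np.inf
    ll = 0.0
    for i in range(N_STATES - 1):
        ll += counts[i, i] * np.log(1 - p[i]) + counts[i, i + 1] * np.log(p[i])
    return ll

# Metropolis-Hastings with a symmetric Gaussian random-walk proposal.
p = np.full(N_STATES - 1, 0.5)            # initial guess
samples = []
for step in range(20000):
    proposal = p + rng.normal(0, 0.02, size=p.shape)
    if np.log(rng.uniform()) < log_likelihood(proposal) - log_likelihood(p):
        p = proposal                      # accept
    samples.append(p.copy())
posterior = np.array(samples[5000:])      # discard burn-in
print("posterior mean deterioration rates:", posterior.mean(axis=0))
```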

    A Review of Harmful Algal Bloom Prediction Models for Lakes and Reservoirs

    Anthropogenic activity has led to eutrophication in water bodies across the world. This eutrophication promotes algal blooms, with cyanobacteria among the most notorious bloom organisms. Cyanobacterial blooms (commonly referred to as harmful algal blooms, or HABs) can devastate an ecosystem. Cyanobacteria are resilient microorganisms that have adapted to survive under a variety of conditions, often outcompeting other phytoplankton. Some species of cyanobacteria produce toxins that ward off predators. These toxins can negatively affect the health of aquatic life and can also impact animals and humans that drink or come into contact with these noxious waters. Although the effects of cyanotoxins on humans are not as well researched as the growth, behavior, and ecological niche of cyanobacteria, their health impacts are of great concern. It is important that research to understand and mitigate cyanobacterial blooms and cyanotoxin production continues. This project supports continued research by presenting an approach to collecting and summarizing published articles that focus on techniques and models for predicting cyanobacterial blooms, with the goal of understanding what research has been done in order to promote future work. The following report summarizes 34 articles from 2003 to 2020, each describing a mechanistic or data-driven model developed to predict the occurrence of cyanobacterial blooms or the presence of cyanotoxins in lakes or reservoirs with climates similar to Utah's. These articles show a shift over time from more mechanistic approaches toward more data-driven approaches. This has resulted in a more individualistic approach to modeling, meaning that models are often produced for a single lake or reservoir and are not easily comparable to models for other systems.
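
    As an illustration of the data-driven end of the modeling spectrum discussed above, the sketch below fits a logistic-regression bloom classifier to synthetic monitoring data. The covariates (water temperature, total phosphorus, chlorophyll-a) and their coefficients are assumptions for the example, not values drawn from any of the reviewed articles.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for routinely monitored lake covariates.
rng = np.random.default_rng(1)
n = 500
temp = rng.normal(22, 4, n)          # water temperature, deg C (assumed)
tp = rng.lognormal(-3, 0.5, n)       # total phosphorus, mg/L (assumed)
chla = rng.lognormal(1.5, 0.7, n)    # chlorophyll-a, ug/L (assumed)

# Assumed relationship generating bloom occurrence for the demo.
logit = 0.3 * (temp - 22) + 8 * tp + 0.05 * chla - 1.0
bloom = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([temp, tp, chla])
X_tr, X_te, y_tr, y_te = train_test_split(X, bloom, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```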

    Data mining in soft computing framework: a survey

    The present article provides a survey of the available literature on data mining using soft computing. A categorization is provided based on the different soft computing tools and their hybridizations used, the data mining function implemented, and the preference criterion selected by the model. The utility of the different soft computing methodologies is highlighted. Generally, fuzzy sets are suitable for handling issues related to the understandability of patterns, incomplete/noisy data, mixed-media information, and human interaction, and can provide approximate solutions faster. Neural networks are nonparametric and robust, and exhibit good learning and generalization capabilities in data-rich environments. Genetic algorithms provide efficient search algorithms to select a model, from mixed-media data, based on some preference criterion or objective function. Rough sets are suitable for handling different types of uncertainty in data. Some challenges to data mining and to the application of soft computing methodologies are indicated. An extensive bibliography is also included.
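
    As a concrete illustration of the survey's point that genetic algorithms search over models against a preference criterion, here is a minimal GA sketch for feature-subset selection. The fitness function and the "useful" feature indices are stand-ins for the example, not taken from the surveyed literature.

```python
import numpy as np

rng = np.random.default_rng(2)
N_FEATURES, POP, GENS = 10, 30, 40
true_mask = np.zeros(N_FEATURES, bool)
true_mask[[1, 4, 7]] = True              # assumed "useful" features

def fitness(mask):
    # Preference criterion: reward useful features, penalize subset size.
    return 2 * np.sum(mask & true_mask) - 0.5 * mask.sum()

pop = rng.uniform(size=(POP, N_FEATURES)) < 0.5   # random initial subsets
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]     # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_FEATURES)
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        child ^= rng.uniform(size=N_FEATURES) < 0.05  # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])
best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best))
```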

    Are Online Consumer Reviews Credible? A Predictive Model based on Deep Learning

    As the importance of online consumer reviews has grown, so have concerns about their credibility being damaged by the presence of fake reviews. Extant literature reveals the importance of online reviews for consumers, yet there is a lack of research that considers consumer perception while developing a predictive model for the credibility of online reviews. This research aims to fill this gap by combining two different streams in the literature, namely human-driven and data-driven approaches. To do so, we use two datasets with different labelling approaches to develop a predictive model: the first is labelled based on the Yelp filtering algorithm, and the second is labelled based on the crowd's perception of credibility. Results from our predictive model reveal that it can predict credibility with a performance of 82% AUC, using review attributes, namely length, subjectivity, readability, extremity, and external and internal consistency.
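
    A minimal sketch of this feature-based setup is shown below. The six attributes are the ones named in the abstract, but the data, labels, and classifier choice are assumptions, since the Yelp-filter and crowd-labelled datasets are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# Columns: length, subjectivity, readability, extremity,
# external consistency, internal consistency (synthetic stand-ins).
X = rng.normal(size=(n, 6))
# Assumed "credible" label generated for the demo only.
y = (X @ np.array([0.8, -0.4, 0.5, -0.9, 0.6, 0.7])
     + rng.normal(0, 1.5, n)) > 0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```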

    Action-oriented Scene Understanding

    In order to allow robots to act autonomously, it is crucial that they not only describe their environment accurately but also identify how to interact with their surroundings. While descriptive computer vision has seen tremendous progress, approaches that explicitly target action are scarcer. This cumulative dissertation approaches the goal of interpreting visual scenes “in the wild” with respect to the actions implied by the scene. We call this approach action-oriented scene understanding. It involves identifying and judging opportunities for interaction with constituents of the scene (e.g. objects and their parts) as well as understanding object functions and how interactions will impact the future. All of these aspects are addressed on three levels of abstraction: elements, perception, and reasoning. On the elementary level, we investigate semantic and functional grouping of objects by analyzing annotated natural image scenes. We compare object label-based and visual context definitions with respect to their suitability for generating meaningful object class representations. Our findings suggest that representations generated from visual context are on par in terms of semantic quality with those generated from large quantities of text. The perceptive level concerns action identification. We propose a system to identify possible interactions for robots and humans with the environment (affordances) on a pixel level using state-of-the-art machine learning methods. Pixel-wise part annotations of images are transformed into 12 affordance maps. Using these maps, a convolutional neural network is trained to densely predict affordance maps from unknown RGB images. In contrast to previous work, this approach operates exclusively on RGB images during both training and testing, and yet achieves state-of-the-art performance. At the reasoning level, we extend the question from asking what actions are possible to what actions are plausible. For this, we gathered a dataset of household images associated with human ratings of the likelihoods of eight different actions. Based on the judgements provided by the human raters, we train convolutional neural networks to generate plausibility scores from unseen images. Furthermore, having previously considered only static scenes, we propose a system that takes video input and predicts plausible future actions. Since this requires careful identification of relevant features in the video sequence, we analyze this particular aspect in detail using a synthetic dataset for several state-of-the-art video models. We identify feature learning as a major obstacle for anticipation in natural video data. The presented projects analyze the role of action in scene understanding from various angles and in multiple settings while highlighting the advantages of assuming an action-oriented perspective. We conclude that action-oriented scene understanding can augment classic computer vision in many real-life applications, in particular robotics.
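
    To illustrate the dense-prediction setup at the perceptive level, here is a minimal fully convolutional sketch in PyTorch that maps an RGB image to 12 per-pixel affordance maps. The architecture is illustrative only and is not the network used in the dissertation.

```python
import torch
import torch.nn as nn

class AffordanceNet(nn.Module):
    """Toy encoder-decoder: RGB in, 12 dense affordance maps out."""

    def __init__(self, n_affordances=12):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, n_affordances, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # Sigmoid, not softmax: several affordances can hold at one pixel.
        return torch.sigmoid(self.decoder(self.encoder(x)))

net = AffordanceNet()
rgb = torch.randn(1, 3, 128, 128)   # dummy RGB batch
maps = net(rgb)
print(maps.shape)                    # torch.Size([1, 12, 128, 128])
```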

    The practice of technological deception in videoconferencing systems for distance learning and ways to counter it

    This article raises the problem of high-tech deception using videoconferencing tools during distance learning, a problem of increasing relevance given the digitalization of the educational process and the growing digital literacy of young people. The article presents several methods of fraud, including a relatively new and very popular technology: deepfakes. It details two popular face-replacement tools, with step-by-step instructions for creating, configuring, and applying each in practice. The organizational methods presented will help teachers detect even the most carefully disguised forgery, including future voice spoofing, and stop any attempt at deception. At the end of the article, the technologies used to combat deepfakes are described, and an assessment of the danger posed by existing tools is given. The objectives of this article are to raise public interest in the problem of face substitution and high-tech deception in general, to create grounds for discussing the need to switch to a remote format for events, and to broaden readers' horizons and provide them with an area for further work and research.

    Reducing non-technical losses in electricity distribution networks: leveraging explainable AI and Three Lines of Defence Model to manage operational staff-related factors

    This study presents a multidisciplinary approach involving Explainable Artificial Intelligence (ExAI) and operational risk management to reduce Non-Technical Losses (NTL) in electricity distribution. It empirically explores how the activities of utility company employees contribute to NTL, a phenomenon often overlooked in existing empirical research. An ensemble classification algorithm is used to analyse utility operations data, and the SHAP explainability technique is used to establish the predictive significance of staff activities for NTL. Subsequently, these staff activities are mapped into risk cells using the Basel II and III operational risk definitions, and the Three Lines of Defence (3LoD) model is developed for optimizing electricity distribution. The paper makes three original contributions to the literature: first, it empirically links staff operations to NTL; second, it maps NTL causes to Basel II/III operational risk categories; and finally, to the best of the authors' knowledge, it is the first study to use the 3LoD model for electricity distribution optimization.
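
    A minimal sketch of the ExAI step follows: an ensemble classifier fitted to stand-in operations data, with SHAP used to rank staff-activity features by predictive significance for NTL. The feature names and data are assumptions for illustration, not the study's dataset.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
# Hypothetical staff-activity features (names are assumptions).
features = ["meter_reading_overrides", "billing_adjustments",
            "disconnection_delays", "inspection_frequency"]
X = rng.normal(size=(1000, len(features)))
# Assumed NTL label generated for the demo only.
y = (X @ np.array([1.2, 0.8, 0.5, -0.9]) + rng.normal(0, 1, 1000)) > 0

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
# TreeExplainer returns per-sample, per-feature SHAP values for the
# model's raw output; mean |SHAP| gives a global importance ranking.
shap_values = shap.TreeExplainer(clf).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {imp:.3f}")
```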

    Aerospace Medicine and Biology. A continuing bibliography with indexes

    This bibliography lists 244 reports, articles, and other documents introduced into the NASA scientific and technical information system in February 1981. Aerospace medicine and aerobiology topics are included. Listings cover physiological factors, astronaut performance, control theory, artificial intelligence, and cybernetics.