    A Quantitative Evaluation of the Nighttime Visual Sign Inspection Method

    A research project to determine an appropriate sign inspection and replacement procedure was conducted at North Carolina State University and sponsored by the North Carolina DOT. The purpose was to determine the optimum strategy for sign inspection and replacement under different conditions in response to the pending retroreflectivity requirements. This paper reports on a spreadsheet tool developed to quantitatively evaluate the effectiveness of different sign inspection and replacement scenarios. The spreadsheet was designed for yellow and red engineer-grade sign sheetings and accounts for sign vandalism and knock-downs as well as normal sign aging. It estimates the number of in-place signs that would not meet the minimum retroreflectivity standard and the cost of the inspection and replacement program. Results from a number of trials show that agencies that generally conform to the spreadsheet's key assumptions should consider replacing all signs every seven years, as that ensures that no aged signs remain in place at a relatively low cost. If total replacement is not possible, an inspection program using retroreflectometers every three years is competitive in effectiveness with a program using typical visual inspection rates each year: the retroreflectometer program leaves fewer deficient signs in place, while the typical visual inspection program costs less for a given vandalism rate. More conservative visual sign replacement rates do not appear to offer distinct advantages, because typical replacement rates with visual inspections every two or three years allow relatively high numbers of deficient signs to remain on the roads.
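    To make the trade-offs concrete, here is a minimal Python sketch of the kind of model the spreadsheet embodies. Every number in it (service life, vandalism rate, costs, detection rates) is an illustrative assumption, not a figure from the paper:

    import random

    SERVICE_LIFE = 10      # years until sheeting falls below minimum (assumed)
    VANDALISM_RATE = 0.03  # annual fraction vandalized or knocked down (assumed)
    SIGN_COST = 60.0       # installed cost per replacement sign, $ (assumed)
    INSPECT_COST = 2.0     # cost to inspect one sign, $ (assumed)
    DETECTION = {"visual": 0.7, "retroreflectometer": 0.98}  # assumed hit rates

    def simulate(policy, interval, years=21, n_signs=1000, seed=1):
        """Return (avg deficient signs per year, total cost) for one policy.
        'blanket' replaces every sign each `interval` years; the other two
        policies inspect every `interval` years and replace flagged signs."""
        rng = random.Random(seed)
        ages = [rng.randrange(SERVICE_LIFE) for _ in range(n_signs)]
        cost, deficient = 0.0, 0
        for year in range(1, years + 1):
            ages = [a + 1 for a in ages]
            for i in range(n_signs):          # knock-downs force replacement
                if rng.random() < VANDALISM_RATE:
                    ages[i], cost = 0, cost + SIGN_COST
            if year % interval == 0:
                if policy == "blanket":
                    cost += SIGN_COST * n_signs
                    ages = [0] * n_signs
                else:
                    cost += INSPECT_COST * n_signs
                    for i in range(n_signs):
                        if ages[i] >= SERVICE_LIFE and rng.random() < DETECTION[policy]:
                            ages[i], cost = 0, cost + SIGN_COST
            deficient += sum(a >= SERVICE_LIFE for a in ages)
        return deficient / years, cost

    for policy, interval in [("blanket", 7), ("retroreflectometer", 3), ("visual", 1)]:
        bad, cost = simulate(policy, interval)
        print(f"{policy:>19} every {interval} yr: ~{bad:5.1f} deficient signs/yr, ${cost:,.0f}")

    After its initial transient, blanket replacement every seven years keeps every sign younger than its assumed service life, mirroring the qualitative finding above; the two inspection policies instead trade residual deficient signs against inspection and replacement cost.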

    Determining disaster data management needs in a multi-disaster context

    In the last four decades, the economic loss from natural hazard disasters has increased ten-fold. The increasing human and economic impacts of disasters have intensified efforts at the global, national, state, and local levels to find ways to reduce those impacts. Improved collection and management of disaster data can help support planning and decision-making by first responders and emergency managers during all phases of the disaster cycle. The goal of this report is to establish what disaster-related data are needed in the planning, response, and recovery for multiple types of disasters, with a focus on the data needs of the state of North Carolina. A vast amount of information is available from all phases of a disaster. Unfortunately, without proper collection, documentation, and storage, that information is either completely lost or never transformed into functional data. Often, data that are critical for developing better mitigation efforts are not collected, because much of the information is short-lived and disappears before it can be captured. Increased use of instrumentation, such as water level gauges and data collection and analysis software, can aid in collecting and disseminating real-time critical disaster data. The deployment of rapid-response data collection teams immediately after a disaster event can also improve the quantity and quality of the data obtained. Disaster management systems help first responders and emergency managers formulate and communicate their decisions before, during, and after a disaster, and can therefore serve as a way to organize, analyze, and disseminate critical disaster data. Groups of researchers and emergency management professionals in NC are trying to improve the collection and dissemination of disaster data in order to improve disaster preparation and response. Researchers at North Carolina State University (NCSU) have examined all phases of data collection in a multi-disaster context. Another group, the North Carolina Institute of Disaster Studies, hosted two previous workshops to better coordinate collaboration between emergency responders and academics throughout the state. These efforts, together with the disaster data collection research of the North Carolina Emergency Management Division, pointed to a need to gather the academic and emergency management communities to obtain a more accurate picture of multi-disaster data collection and use, and to develop the foundation for a consensus on the areas of disaster data management that need improvement. A Disaster Data Workshop, held at NCSU on November 4-5, 2004, was chosen as one way to address these data collection and dissemination issues with broad, statewide participation. The workshop planning committee determined that the roughly 30-40 participants would discuss four disasters in depth: hurricane and tornado wind, flood, ice storm, and intentional explosion. The first three are the most frequent natural disasters in NC, while intentional explosion was included so that an intentional man-made disaster would be represented. The five objectives of the NCSU Disaster Data Workshop on “Determining Disaster Data Needs in a Multi-Disaster Context” were as follows:
    • Evaluate the applicability of a general multi-disaster model,
    • Understand local data needs and opportunities,
    • Establish clear models of organizational participation in collection and use,
    • Define a common data set for multiple disasters, and
    • Lay the groundwork for the establishment of data collection teams.
    The workshop’s structure was designed to meet these five objectives within the available time, through five sessions: data needs, data resources, data dissemination, common data set, and data collection teams. Common themes emerged from the participants’ discussions of disaster data across the sessions. The themes on data needs, resources, and dissemination were used by the workshop planning committee to create and implement a multi-disaster data model. The discussions on data needs and resources also identified the data items that participants in each of the four disaster groups considered necessary for their assigned disaster. These needed items form a common data set for the four disasters investigated by the workshop, and possibly for other disasters as well. The workshop discussions also generated a set of disaster data collection and management priorities for NC. From this research study, the workshop results, and previous workshop and disaster management system efforts, several conclusions can be drawn about disaster data and its management. Existing data collection and management efforts focus primarily on inventory data, since that information is available regardless of whether a disaster occurs. The development of data collection teams and a data repository in NC is needed and would contribute to both disaster research and emergency management. The four-area model developed from the workshop was able to accommodate every data item the participants identified. The common data set model developed from the workshop is biased toward North Carolina’s data needs and may need to be modified for application in other regions. A disaster data collection and management cycle was also developed from the workshop discussions; this cycle can serve as an agenda for the development and operation of both disaster data collection teams and a common disaster data repository. Recommendations from this study for NC include more research on ice storms, an additional workshop on the further development of data collection teams and coordinated data management in the state, and the development of a common disaster data repository. Broader recommendations for disaster data management include prioritizing data set development according to how critical each data set is to a region’s disaster preparedness and response, ensuring that disaster data collection teams are self-reliant, investigating more disaster types to better understand their data needs and resources, and improving data collection through increased use of instrumentation and through cooperation between emergency management organizations and the managers of infrastructure systems such as transportation and utilities.
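    As one illustration of what the workshop’s “common data set” could look like in code, here is a minimal Python sketch. The report does not prescribe a schema; the item names, phases, and sources below are hypothetical:

    from dataclasses import dataclass

    DISASTERS = {"wind", "flood", "ice storm", "intentional explosion"}

    @dataclass(frozen=True)
    class DataItem:
        name: str
        phases: frozenset     # disaster-cycle phases where the item is needed
        needed_by: frozenset  # disaster groups that identified the item as needed
        source: str           # e.g. inventory, instrumentation, field team

    def common_data_set(items):
        # Items every disaster group identified as needed form the candidate
        # common data set across all four workshop disasters.
        return [it for it in items if it.needed_by >= DISASTERS]

    items = [
        DataItem("road closures", frozenset({"response"}),
                 frozenset(DISASTERS), "DOT feeds"),
        DataItem("stream gauge levels", frozenset({"planning", "response"}),
                 frozenset({"flood"}), "instrumentation"),
    ]
    print([it.name for it in common_data_set(items)])  # -> ['road closures']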

    Road safety evaluation through automatic extraction of road horizontal alignments from Mobile LiDAR System and inductive reasoning based on a decision tree

    Safe roads are a necessity for any society because of the high social costs of traffic accidents. This challenge is addressed by a novel methodology that evaluates road safety from Mobile LiDAR System data, focusing on horizontal alignment because of its influence on the accident rate. Automation is obtained through an inductive reasoning process based on a decision tree that provides a potential risk assessment. To achieve this, a 3D point cloud is classified by an iterative and incremental algorithm based on 2.5D and 3D Delaunay triangulations, applying a sequence of algorithms. Next, an automatic extraction process recovers the road's horizontal alignment parameters and derives geometric consistency indexes based on a joint triple stability criterion. The work thus provides a powerful and effective preventive and/or predictive tool for road safety inspections. The proposed methodology was implemented on three stretches of Spanish roads with different traffic conditions representing the most common road types, and was successfully validated against as-built road projects, which were treated as “ground truth.”
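    The paper's joint triple stability criterion and induced decision tree are not reproduced in this abstract, so the sketch below substitutes the classic Lamm-style operating-speed consistency thresholds (differences of 10 and 20 km/h between successive alignment elements) as a stand-in rule tree; the V85 values are hypothetical:

    def consistency_class(dv):
        """Lamm-style consistency class for the absolute operating-speed
        difference (km/h) between two successive alignment elements."""
        if dv <= 10:
            return "good"
        if dv <= 20:
            return "fair"
        return "poor"

    def stretch_risk(v85_by_element):
        """Walk successive elements of the extracted horizontal alignment
        and return the worst consistency class as the stretch's potential
        risk -- a hand-written rule tree, not the authors' induced one."""
        order = ["good", "fair", "poor"]
        worst = "good"
        for a, b in zip(v85_by_element, v85_by_element[1:]):
            c = consistency_class(abs(a - b))
            if order.index(c) > order.index(worst):
                worst = c
        return worst

    # tangent-curve-tangent V85 profile (km/h) from an extracted alignment
    print(stretch_risk([95, 88, 70, 92]))  # -> 'poor' (70 -> 92 exceeds 20 km/h)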

    Causes of delay and cost overrun in Malaysian construction industry

    The construction industry in Malaysia drives the country's economic growth and development. However, the industry is plagued by delays and cost overruns, which turn what should be successful projects into projects incurring additional costs, disagreements, litigation and, in some cases, abandonment. This research studied the causes of delays and cost overruns in the industry and ranked them according to their perceived importance to contractors, with a view to establishing which causes contractors should address. Online questionnaires were used for data collection, and a total of 69 responses were analysed using principal component analysis (PCA, applied as a factor analysis) to identify the main causes. The analysis showed that delay in preparation of design documents, poor scheduling and control of time, delayed delivery of materials to site, lack of knowledge about the different defined execution methods, shortages of labour and materials in the market, and changes in the scope of work were the main causes of delay and cost overrun. If properly addressed, the identified causes would reduce the rate of delays and cost overruns in construction projects, thus enhancing the economic growth and development of the country.
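    The ranking step can be sketched in a few lines of Python. The response matrix below is synthetic (random 1-5 Likert scores), so its components carry no meaning; with the real 69 responses, the loadings would concentrate on the dominant causes listed above:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    causes = ["design document delay", "poor schedule control",
              "late material delivery", "unfamiliar execution methods",
              "labour/material shortage", "scope changes"]
    rng = np.random.default_rng(0)
    X = rng.integers(1, 6, size=(69, len(causes))).astype(float)  # synthetic Likert data

    pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
    print("variance explained:", pca.explained_variance_ratio_.round(2))
    for k, comp in enumerate(pca.components_, start=1):
        top = sorted(zip(np.abs(comp), causes), reverse=True)[:3]
        print(f"component {k} loads most on:", [c for _, c in top])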

    Extending Database Management Systems for Engineering Applications

    During the design of a manufactured component, large amounts of information pertaining to all aspects of the design must be stored, accessed, and operated upon. A database management system (DBMS), composed of a central repository of data and the associated software for controlling access to it and operations on it, provides one way to uniformly store, manage, and use this information. This paper presents a framework for extending relational database management systems to combine a set of engineering constraints with a database of engineering data items. The representation requires a database that can store all of the data normally associated with engineering design as well as the constraints imposed on the engineering design process. A powerful and flexible constraint processing system is needed to ensure that engineering data conform to the limitations imposed by the design process. Such a system must allow constraints to be invoked at a variety of times and must offer the user numerous options when violations are detected. This paper introduces a concept called structured constraints that integrates state-of-the-art advances in DBMSs with current research in engineering constraint processing to further enhance CAD system capabilities. It discusses the extensions to relational database theory needed to achieve such a constraint-handling capability for mechanical engineering applications. The goal is a managed repository of data that supports interfaces to a wide variety of application programs and processing capabilities for maintaining data integrity by incorporating engineering constraints. The Structured Constraint model is a general method for classifying semantic integrity constraints. It is based on the structure of the relational model and is therefore independent of any particular query language. In addition, it is a formalism with the conceptual clarity and generality needed to represent and communicate arbitrary constraints. The key contribution of this formalism is that it provides the basis for a completely definable implementation of an engineering integrity system.
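    The structured-constraint formalism itself is not spelled out in this abstract, but the core idea (engineering constraints stored with the data, and user-selectable behaviour when a violation is detected) can be sketched with Python's built-in sqlite3. The shaft table, its columns, and the slenderness limit are hypothetical examples, not the paper's schema:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE shaft (
        part_id  TEXT PRIMARY KEY,
        diameter REAL NOT NULL,   -- mm
        length   REAL NOT NULL,   -- mm
        -- an engineering constraint checked by the DBMS on every write:
        CHECK (diameter > 0 AND length / diameter <= 20.0)
    );
    """)

    def insert_shaft(row, on_violation="reject"):
        """Insert a part; on a constraint violation either reject the write
        or flag it for review -- a crude version of offering the user
        options when violations are detected."""
        try:
            with con:
                con.execute("INSERT INTO shaft VALUES (?, ?, ?)", row)
        except sqlite3.IntegrityError as err:
            if on_violation == "reject":
                raise
            print(f"violation for {row[0]}: {err} (flagged, not stored)")

    insert_shaft(("S-1", 20.0, 300.0))                       # passes the check
    insert_shaft(("S-2", 10.0, 400.0), on_violation="flag")  # 400/10 = 40 > 20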

    Computing in Civil Engineering Curriculum: Needs and Issues
