
    Master of Science

    For more than twenty years, the introduction of reliability-based analysis into roadway geometric design has been investigated. This type of probabilistic geometric design analysis is well suited to explicitly addressing the level of variability and randomness associated with design inputs, compared to a more deterministic design approach. In this study, reliability analysis was used to estimate the probability distribution of operational performance that might result from basic number-of-lanes decisions made to achieve a design level of service on a freeway. The concept is demonstrated using data from Interstate 15 and Interstate 80 in Utah. The basic traffic count data used for analysis were obtained from the Utah Department of Transportation (UDOT). To account for the uncertainty in the design inputs, statistical distributions were developed and reliability analysis was carried out using Monte Carlo simulation. The statistical software Minitab was used to develop statistical distributions of the design inputs exhibiting variability in the traffic count data, and to run the Monte Carlo simulation by generating random samples of the design inputs. The outcome of this probabilistic analysis is a distribution of vehicle density for a given number of lanes during the design hour. The main benefit of reliability analysis is that it enables designers to explicitly consider uncertainties in their decision-making and to identify specific values of the distributions that correspond to their target level of service (e.g., the 65th through 85th percentile density corresponds to the design level of service). The results demonstrate how uncertainty in estimates of K (i.e., the percent of daily traffic in the design hour), directional distribution, percent heavy vehicles, and free-flow speed significantly contributes to the variation in vehicle density on a freeway.
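    The Monte Carlo approach described above can be sketched as follows. This is a minimal illustration only: the input distributions (normal, with made-up means and spreads), the passenger-car-equivalent value, and the peak-hour factor are all assumptions for demonstration, not the distributions fitted from the UDOT count data.

    ```python
    import random

    random.seed(42)

    # Assumed illustrative inputs (not the study's fitted values).
    AADT = 120_000   # two-way annual average daily traffic (veh/day)
    N_LANES = 4      # directional lanes under evaluation
    E_T = 2.0        # passenger-car equivalent per heavy vehicle (assumed)
    PHF = 0.95       # peak-hour factor (assumed)

    densities = []
    for _ in range(10_000):
        K = random.gauss(0.09, 0.01)      # percent of daily traffic in design hour
        D = random.gauss(0.55, 0.03)      # directional distribution
        p_hv = random.gauss(0.08, 0.02)   # proportion of heavy vehicles
        ffs = random.gauss(70.0, 3.0)     # free-flow speed (mi/h)

        f_hv = 1.0 / (1.0 + p_hv * (E_T - 1.0))  # heavy-vehicle adjustment
        ddhv = AADT * K * D                       # directional design-hour volume
        flow = ddhv / (N_LANES * PHF * f_hv)      # per-lane flow (pc/h/ln)
        densities.append(flow / ffs)              # density (pc/mi/ln)

    densities.sort()
    p85 = densities[int(0.85 * len(densities))]
    print(f"85th percentile density: {p85:.1f} pc/mi/ln")
    ```

    Reading a percentile (e.g., the 85th) off the sorted sample is what lets a designer check whether a candidate number of lanes meets the target level of service at a chosen confidence level.
    
    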

    Comparative Durability Analysis of CFRP Strengthened RC Highway Bridges

    The paper presents a parametric analysis of durability factors of RC highway bridges strengthened with CFRP laminates during their service life. The durability factors considered are concrete cover and CFRP laminate thickness. Three deterioration factors were considered: first, growth of live load with time; second, resistance reduction due to chloride-attack corrosion, which causes a reduction in steel properties; and third, deterioration due to aging of the CFRP. Corrosion losses are evaluated through a time- and temperature-dependent corrosion current, and two types of corrosion are considered: uniform and pitting corrosion. The reliability analysis is governed by three failure modes: concrete crushing, CFRP mid-span debonding, and CFRP rupture. Monte Carlo simulation is used to develop time-dependent statistical models for rebar steel area and the extreme live-load effect. Reliability is estimated in terms of the reliability index using the FORM algorithm. For illustrative purposes, an RC bridge is assumed as an example, and the reliability of an interior beam of the bridge is evaluated under various traffic volumes and different corrosion environments. The bridge design options follow the AASHTO-LRFD specifications. The present work also calibrates the CFRP resistance safety factor corresponding to three target reliability levels, β = 3.5, 3.85, and 4.2. The results of the analysis show that corrosion has the most significant effect on bridge lifetime, followed by live-load growth, and that pitting corrosion is more hazardous than uniform corrosion. Also, the initial safety index is shown to be traffic dependent. The AASHTO design equation (which corresponds to βtarget = 3.5) appears to be overestimated for strengthening purposes. Strengthening with βtarget = 4.2 provides better reliability than the βtarget proposed by the AASHTO provision, with no significant difference in the amount of CFRP required.
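    The relationship between a failure probability and the reliability index β used above can be illustrated with a crude Monte Carlo sketch. The limit state g = R − S, the lognormal resistance, the normal load effect, and all numeric values below are assumptions for demonstration, not the paper's calibrated bridge models (which use FORM rather than direct sampling).

    ```python
    import math
    import random
    from statistics import NormalDist

    random.seed(1)

    # Assumed illustrative limit state g = R - S: lognormal resistance R,
    # normal load effect S (made-up means and coefficients of variation).
    mu_R, cov_R = 1500.0, 0.10   # resistance mean and CoV
    mu_S, cov_S = 900.0, 0.20    # load-effect mean and CoV

    # Lognormal parameters from mean and CoV.
    sigma_ln = math.sqrt(math.log(1.0 + cov_R**2))
    mu_ln = math.log(mu_R) - 0.5 * sigma_ln**2

    n, failures = 200_000, 0
    for _ in range(n):
        R = random.lognormvariate(mu_ln, sigma_ln)
        S = random.gauss(mu_S, mu_S * cov_S)
        if R <= S:               # limit state violated
            failures += 1

    pf = failures / n
    beta = -NormalDist().inv_cdf(pf)  # reliability index from failure probability
    print(f"Pf = {pf:.2e}, beta = {beta:.2f}")
    ```

    The mapping β = −Φ⁻¹(Pf) is what makes target levels like β = 3.5 comparable across different limit states.
    
    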

    Reliability improvement of electronic circuits based on physical failure mechanisms in components

    Traditionally, the position of reliability analysis in the design and production process of electronic circuits is one of reliability verification: a completed design is checked on reliability aspects and either rejected or accepted for production. This paper describes a method to model physical failure mechanisms within components in such a way that they can be used for reliability optimization, not after, but during the early phase of the design process. Furthermore, a prototype of a CAD software tool is described, which can highlight components likely to fail and automatically adjust circuit parameters to improve product reliability.
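    Many physical failure mechanisms in components are thermally activated, and one standard ingredient of such models is an Arrhenius acceleration factor relating failure rates at two temperatures. The sketch below is a generic illustration, not the paper's method; the activation energy and temperatures are example values.

    ```python
    import math

    K_B = 8.617e-5  # Boltzmann constant (eV/K)

    def arrhenius_af(t_use_c: float, t_stress_c: float, ea_ev: float) -> float:
        """Acceleration factor between a stress temperature and a use
        temperature for a thermally activated failure mechanism
        (Arrhenius model). Temperatures in degrees Celsius."""
        t_use = t_use_c + 273.15
        t_stress = t_stress_c + 273.15
        return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

    # Example values: Ea = 0.7 eV (a common ballpark for silicon
    # mechanisms), 55 degC field use versus 125 degC accelerated stress.
    af = arrhenius_af(55.0, 125.0, 0.7)
    print(f"Acceleration factor: {af:.1f}")
    ```

    A component's predicted field failure rate then scales down from its stress-test failure rate by this factor, which is the kind of per-mechanism model a design-phase tool can evaluate while circuit parameters (and hence component temperatures) are still adjustable.
    
    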

    Formal Verification of Probabilistic SystemC Models with Statistical Model Checking

    Transaction-level modeling with SystemC has been very successful in describing the behavior of embedded systems through high-level executable models, many of which have inherently probabilistic behaviors, e.g., random data and unreliable components. It is thus crucial to have both quantitative and qualitative analysis of the probabilities of system properties. Such analysis can be conducted by constructing a formal model of the system under verification and using Probabilistic Model Checking (PMC). However, this method is infeasible for large systems due to the state-space explosion. In this article, we demonstrate the successful use of Statistical Model Checking (SMC) to carry out such analysis directly from large SystemC models, allowing designers to express a wide range of useful properties. The first contribution of this work is a framework to verify properties expressed in Bounded Linear Temporal Logic (BLTL) for SystemC models with both timed and probabilistic characteristics. Second, the framework allows users to expose a rich set of user-code primitives as atomic propositions in BLTL. Moreover, users can define their own fine-grained time resolution rather than the boundary of clock cycles in the SystemC simulation. The third contribution is an implementation of a statistical model checker. It contains an automatic monitor generator for producing execution traces of the model under verification (MUV), a mechanism for automatically instrumenting the MUV, and the interaction with statistical model checking algorithms. (Published in Journal of Software: Evolution and Process, Wiley, 2017.)
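    The core idea that lets SMC sidestep state-space explosion is that the probability of a property is estimated from simulation traces, with a sample size chosen up front from the desired precision and confidence (here via the Chernoff–Hoeffding bound, one common choice). The toy "model" below stands in for a probabilistic SystemC simulation and is entirely an assumption for illustration.

    ```python
    import math
    import random

    def smc_sample_size(epsilon: float, delta: float) -> int:
        """Number of i.i.d. simulation traces needed so the estimated
        probability lies within +/-epsilon of the true value with
        confidence 1 - delta (Chernoff-Hoeffding bound)."""
        return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon**2))

    def estimate_probability(run_trace, check_property, epsilon=0.01, delta=0.05):
        """Monte Carlo estimate of P(property holds) over random executions."""
        n = smc_sample_size(epsilon, delta)
        hits = sum(check_property(run_trace()) for _ in range(n))
        return hits / n

    # Toy stand-in for a probabilistic SystemC model: a channel dropping
    # each packet independently with probability 0.1; the "property" is
    # that a 3-packet burst arrives intact (true probability 0.9**3).
    random.seed(0)
    trace = lambda: [random.random() >= 0.1 for _ in range(3)]
    prob = estimate_probability(trace, all)
    print(f"samples: {smc_sample_size(0.01, 0.05)}, estimate: {prob:.3f}")
    ```

    Note that the required sample size depends only on epsilon and delta, never on the size of the model's state space, which is why this scales to large SystemC designs.
    
    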

    Report : review of the literature : maintenance and rehabilitation costs for roads (Risk-based Analysis)

    Realistic estimates of short- and long-term (strategic) budgets for maintenance and rehabilitation in road asset management should consider the stochastic characteristics of asset conditions across the road network, so that the overall variability of road asset condition data is taken into account. Probability theory has been used for assessing life-cycle costs for bridge infrastructure by Kong and Frangopol (2003), Zayed et al. (2002), Liu and Frangopol (2004), Noortwijk and Frangopol (2004), and Novick (1993). Salem et al. (2003) cited the importance of collecting and analysing existing data on total costs for all life-cycle phases of existing infrastructure, including bridges and roads, and of using realistic methods for calculating the probable useful life of these infrastructures. Zayed et al. (2002) reported conflicting results in life-cycle cost analysis using deterministic and stochastic methods. Frangopol et al. (2001) suggested that additional research was required to develop better life-cycle models and tools to quantify the risks and benefits associated with infrastructure. It is evident from the review of the literature that there is very limited information on methodology that uses the stochastic characteristics of asset condition data for assessing budgets/costs for road maintenance and rehabilitation (Abaza 2002; Salem et al. 2003; Zhao et al. 2004). Given this gap in the research literature, this report describes and summarises the methodologies presented by each publication and also suggests a methodology for the current research project funded under the Cooperative Research Centre for Construction Innovation (CRC CI), project no. 2003-029-C.

    Cross-layer system reliability assessment framework for hardware faults

    System reliability estimation during early design phases facilitates informed decisions about integrating effective protection mechanisms against different classes of hardware faults. When not all system abstraction layers (technology, circuit, microarchitecture, software) are factored into such an estimation model, the delivered reliability reports can be excessively pessimistic and thus lead to unacceptably expensive, over-designed systems. We propose a scalable, cross-layer methodology and a supporting suite of tools for accurate but fast estimation of computing system reliability. The backbone of the methodology is a component-based Bayesian model, which effectively calculates system reliability based on the masking probabilities of individual hardware and software components, considering their complex interactions. Our detailed experimental evaluation for different technologies, microarchitectures, and benchmarks demonstrates that the proposed model delivers very accurate reliability estimations (FIT rates) compared to statistically significant but slow fault-injection campaigns at the microarchitecture level.
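    The intuition behind cross-layer masking can be shown with a deliberately simplified derating calculation. The masking probabilities below are made-up example numbers, and independence across layers is assumed for brevity; the component-based Bayesian model described above exists precisely because real layer interactions are not independent.

    ```python
    # Illustrative cross-layer derating of a raw fault rate. All numbers
    # are assumed examples; independence across layers is a simplifying
    # assumption, not the paper's Bayesian model.

    raw_fit = 500.0  # raw technology-level fault rate (failures per 1e9 hours)

    masking = {
        "circuit": 0.60,            # e.g. electrical / latching-window masking
        "microarchitecture": 0.85,  # e.g. fault hits an idle or dead structure
        "software": 0.40,           # e.g. corrupted value is never consumed
    }

    effective_fit = raw_fit
    for layer, p_mask in masking.items():
        effective_fit *= (1.0 - p_mask)  # only unmasked faults propagate upward

    print(f"Effective system FIT: {effective_fit:.2f}")
    ```

    Even with these toy numbers the effective FIT rate is more than an order of magnitude below the raw rate, which illustrates why ignoring upper-layer masking yields the pessimistic, over-designed outcomes the abstract warns about.
    
    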