    Towards a Smart World: Hazard Levels for Monitoring of Autonomous Vehicles’ Swarms

    This work explores the creation of quantifiable indices to monitor the safe operation and movement of families of autonomous vehicles (AVs) in restricted, highway-like environments. Specifically, it explores the creation of ad-hoc rules for monitoring the lateral and longitudinal movement of multiple AVs based on behavior that mimics swarm and flock movement (or particle swarm motion). This exploratory work is sponsored by the Emerging Leader Seed grant program of the Mineta Transportation Institute and investigates the feasibility of adapting particle swarm motion to control families of autonomous vehicles. In particular, it explores how particle swarm approaches can be augmented with safety thresholds and fail-safe mechanisms to avoid collisions in off-nominal situations. The concept integrates the notion of hazard and danger levels (i.e., measures of the “closeness” to a given accident scenario, typically used in robotics) with the concepts of safety distance and separation/collision avoidance for ground vehicles. A draft implementation of four hazard level functions indicates that safety thresholds can be set up to autonomously trigger lateral and longitudinal motion control through three main rules, based respectively on speed, heading, and braking distance, to steer the vehicle and maintain separation/avoid collisions within families of autonomous vehicles. The concepts presented here can be used to set up a high-level framework for developing artificial intelligence algorithms that serve as a back-up to standard machine learning approaches for the control and steering of autonomous vehicles. Although there are no constraints on the concept’s implementation, this work is expected to be most relevant for highly automated Level 4 and Level 5 vehicles that are capable of communicating with each other and operate in the presence of a monitoring ground control center responsible for the swarm’s operations.
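
    The abstract does not give the hazard level functions themselves, so the following is a minimal sketch of the threshold idea only: three illustrative hazard levels (speed, heading, and braking distance), each normalized to [0, 1] and compared against assumed thresholds to trigger longitudinal or lateral control. All function names, parameter values, and thresholds below are assumptions for illustration, not the paper's formulations.

```python
# Minimal sketch of threshold-based hazard monitoring for a pair of vehicles.
# The hazard functions, thresholds, and vehicle parameters are illustrative
# assumptions; the paper's actual formulations are not given in the abstract.
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed: float      # m/s
    heading: float    # rad
    gap: float        # longitudinal distance to the vehicle ahead, m

def braking_distance(speed, decel=6.0, reaction_time=0.5):
    """Stopping distance under an assumed deceleration and reaction time."""
    return speed * reaction_time + speed**2 / (2.0 * decel)

def hazard_levels(follower: VehicleState, leader: VehicleState):
    """Return hazard levels in [0, 1] for speed, heading, and braking distance."""
    h_speed = min(1.0, max(0.0, (follower.speed - leader.speed) / max(leader.speed, 1e-6)))
    h_heading = min(1.0, abs(follower.heading - leader.heading) / 0.35)  # ~20 deg scale
    h_brake = min(1.0, braking_distance(follower.speed) / max(follower.gap, 1e-6))
    return {"speed": h_speed, "heading": h_heading, "braking": h_brake}

THRESHOLDS = {"speed": 0.6, "heading": 0.5, "braking": 0.8}  # assumed values

def control_actions(levels):
    """Map exceeded thresholds to longitudinal/lateral control requests."""
    actions = []
    if levels["braking"] > THRESHOLDS["braking"] or levels["speed"] > THRESHOLDS["speed"]:
        actions.append("decelerate")          # longitudinal control
    if levels["heading"] > THRESHOLDS["heading"]:
        actions.append("correct_heading")     # lateral control
    return actions or ["maintain"]

if __name__ == "__main__":
    follower = VehicleState(speed=30.0, heading=0.05, gap=40.0)
    leader = VehicleState(speed=25.0, heading=0.0, gap=float("inf"))
    levels = hazard_levels(follower, leader)
    print(levels, control_actions(levels))
```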

    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods, so analysts and designers turn to simulation to evaluate them. However, accurate estimation of the dependability measures of these models requires that the simulation frequently observe system failures, which are rare events in highly dependable systems; this renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and they are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
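
    The specific failure-biasing and forcing schemes the paper surveys are not reproduced here; the sketch below only illustrates the core importance-sampling idea on a deliberately simple rare failure event whose exact probability is known, so the estimator can be checked. The rates, mission time, and sample size are assumed values.

```python
# Minimal sketch of importance sampling for a rare failure event (not the
# specific failure-biasing schemes the paper surveys). We estimate
# p = P(T_fail < t_mission) for an exponential lifetime with rate LAM,
# which is rare when LAM * t_mission is small. Sampling instead from a
# "failure-biased" rate LAM_IS and reweighting by the likelihood ratio
# gives a low-variance estimator; the exact value is available for checking.
import math
import random

LAM = 1e-5        # true failure rate (per hour), assumed for illustration
T_MISSION = 10.0  # mission time (hours)
LAM_IS = 0.2      # biased failure rate used for sampling (assumed)
N = 100_000

random.seed(0)

def likelihood_ratio(x):
    """Ratio of the true Exp(LAM) density to the sampling Exp(LAM_IS) density at x."""
    return (LAM * math.exp(-LAM * x)) / (LAM_IS * math.exp(-LAM_IS * x))

acc = 0.0
acc2 = 0.0
for _ in range(N):
    x = random.expovariate(LAM_IS)           # sample under the biased measure
    w = likelihood_ratio(x) if x < T_MISSION else 0.0
    acc += w
    acc2 += w * w

estimate = acc / N
std_err = math.sqrt(max(acc2 / N - estimate**2, 0.0) / N)
exact = 1.0 - math.exp(-LAM * T_MISSION)
print(f"IS estimate: {estimate:.3e} +/- {std_err:.1e}, exact: {exact:.3e}")
```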

    General Semiparametric Shared Frailty Model Estimation and Simulation with frailtySurv

    The R package frailtySurv for simulating and fitting semiparametric shared frailty models is introduced. frailtySurv implements consistent semiparametric estimators for a variety of frailty distributions, including gamma, log-normal, inverse Gaussian, and power variance function, and provides consistent estimators of the standard errors of the parameter estimates. The parameter estimators are asymptotically normally distributed, so statistical inference based on the package's output, such as hypothesis testing and confidence intervals, can be performed using the normal distribution. Extensive simulations demonstrate the flexibility and correct implementation of the estimator. Two case studies performed with publicly available datasets demonstrate the applicability of the package: in the Diabetic Retinopathy Study, the onset of blindness is clustered by patient, and in a large hard-drive failure dataset, failure times are thought to be clustered by hard-drive manufacturer and model.
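
    For readers unfamiliar with the model class, the generic semiparametric shared frailty formulation that such estimators target can be written as follows (a standard textbook form, not taken from the paper):

```latex
% Generic semiparametric shared frailty model (standard form, not copied from
% the paper): subject j in cluster i has a conditional hazard that scales an
% unspecified baseline hazard by a cluster-level frailty omega_i, whose
% distribution may be gamma, log-normal, inverse Gaussian, or power variance
% function, typically normalized to have mean 1.
\[
  \lambda_{ij}(t \mid \omega_i) \;=\; \omega_i \,\lambda_0(t)\,
  \exp\!\bigl(\beta^{\top} Z_{ij}\bigr),
  \qquad \omega_i \sim f_{\theta}, \quad \mathbb{E}[\omega_i] = 1 .
\]
```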

    A Platform for Proactive, Risk-Based Slope Asset Management, Phase II

    INE/AUTC 15.0

    Assessment of optimal design methods of viscous dampers

    Viscous dampers are often used for the seismic protection and performance enhancement of building frames. The optimal design of such devices requires the modelling and propagation of the uncertainties related to the earthquake hazard, and different approaches are available for characterising the seismic input and evaluating the probabilistic response. This work analyses the effect of different characterisations of the seismic input, and of different response-evaluation approaches, on the design of dampers for building frames. The seismic input is represented as a stochastic process, and the optimal damper properties are found via a reliability-based design procedure that aims at controlling the frame performance while limiting the damper cost. Two simplified approaches are used to design the viscous damper of a multi-storey steel frame, and the results are compared with those obtained from a rigorous design approach that relies on advanced simulations for the response assessment. The first methodology evaluates the response through a prefixed probabilistic demand model, while the second considers only the average response at a given hazard level. The comparison makes it possible to evaluate and quantify the effect of the treatment of seismic input uncertainty on the system and damper performances.
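
    The abstract does not state the design formulation explicitly; a generic reliability-based statement of the problem, with an assumed (possibly nonlinear) viscous damper force law, would look roughly as follows:

```latex
% Generic reliability-based damper design statement (illustrative; the paper's
% exact formulation is not given in the abstract). A viscous damper with
% coefficient c_d and velocity exponent alpha exerts the force F_d below, and
% the design limits the damper cost C(c_d) while constraining the probability
% that the frame demand D exceeds a limit d_lim under the stochastic input.
\[
  F_d(t) \;=\; c_d \,\bigl|\dot{u}(t)\bigr|^{\alpha}\,
  \operatorname{sgn}\bigl(\dot{u}(t)\bigr),
  \qquad
  \min_{c_d}\; C(c_d)
  \ \ \text{s.t.}\ \
  P\bigl[\,D(c_d) > d_{\lim}\,\bigr] \le p_{\mathrm{target}} .
\]
```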

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics and of representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
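
    As a concrete illustration of the simulated fault injection approach discussed above (not of any specific tool such as FOCUS or DEPEND), the sketch below injects single bit flips into the data of a toy workload and classifies each outcome as detected, silent, or benign; the workload, fault model, and deliberately partial checksum detector are all assumptions for illustration.

```python
# Minimal sketch of software-implemented fault injection: flip a single bit in
# the input data of a toy computation and classify the outcome as detected,
# silent data corruption, or benign. The workload, the single-bit-flip fault
# model, and the (deliberately partial) checksum detector are illustrative
# assumptions, not the tools or fault models the survey describes.
import random

def workload(data):
    """Toy computation; only the first 48 words influence the result."""
    return sum(data[:48])

def checksum(data):
    """Detector that only covers the first 32 words, so some faults escape it."""
    return sum(data[:32]) % 65521

def inject_bit_flip(data, rng):
    """Return a copy of data with one random bit flipped in one random word."""
    faulty = list(data)
    idx = rng.randrange(len(faulty))
    faulty[idx] ^= 1 << rng.randrange(16)    # assume 16-bit data words
    return faulty

def run_campaign(n_injections=10_000, seed=1):
    rng = random.Random(seed)
    data = [rng.randrange(2**16) for _ in range(64)]
    golden_result, golden_check = workload(data), checksum(data)
    detected = silent = benign = 0
    for _ in range(n_injections):
        faulty = inject_bit_flip(data, rng)
        if checksum(faulty) != golden_check:
            detected += 1                    # detector flagged the corruption
        elif workload(faulty) != golden_result:
            silent += 1                      # undetected wrong result
        else:
            benign += 1                      # fault had no observable effect
    print(f"detected={detected} silent={silent} benign={benign} "
          f"coverage={detected / n_injections:.3f}")

if __name__ == "__main__":
    run_campaign()
```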