
    A traffic classification method using machine learning algorithm

    Applying concepts of attack investigation from the IT industry, this work designs a traffic classification method that combines data mining techniques with a machine learning algorithm to separate normal from malicious traffic. This classification helps to learn about the unknown attacks faced by the IT industry. Traffic classification is not a new concept; plenty of work has been done to classify network traffic for today's heterogeneous applications. Existing techniques (payload-based, port-based, and statistical) have their own pros and cons, which are discussed later in this literature, but classification using machine learning techniques remains an open field to explore and has so far provided very promising results.
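    As an illustration of the statistical (flow-feature) approach that the abstract contrasts with payload- and port-based inspection, here is a minimal, hedged sketch: a random-forest classifier trained on synthetic flow statistics. The feature names and distributions are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: statistical flow features -> binary normal/malicious label.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic flow statistics: [mean packet size, flow duration, packets/s, bytes/s]
n = 1000
normal = rng.normal(loc=[500, 2.0, 50, 25000], scale=[100, 0.5, 10, 5000], size=(n, 4))
malicious = rng.normal(loc=[120, 0.3, 400, 48000], scale=[40, 0.1, 80, 9000], size=(n, 4))

X = np.vstack([normal, malicious])
y = np.array([0] * n + [1] * n)  # 0 = normal, 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```

    In practice the features would be extracted from captured flows (e.g. NetFlow records) rather than sampled from Gaussians; the point of the sketch is only the pipeline shape: flow statistics in, normal/malicious label out.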

    On Statistical Methods for Safety Validation of Automated Vehicles

    Automated vehicles (AVs) are expected to bring safer and more convenient transport in the future. Consequently, before introducing AVs at scale to the general public, the required levels of safety should be demonstrated with evidence. However, statistical evidence generated by brute-force testing using safety drivers in real traffic does not scale well. Therefore, more efficient methods are needed to evaluate whether an AV exhibits acceptable levels of risk.

    This thesis studies two methods to evaluate the AV's safety performance efficiently. Both are based on assessing near-collisions using threat metrics to estimate the frequency of actual collisions. The first method, called subset simulation, is here used to search the scenario parameter space in a simulation environment to estimate the probability of collision for an AV under development. More specifically, this thesis explores how the choice of threat metric used to guide the search affects the precision of the failure-rate estimate. The results show significant differences between the metrics and that some provide precise and accurate estimates.

    The second method is based on Extreme Value Theory (EVT), which models the behavior of rare events. In this thesis, near-collision scenarios are identified using threat metrics and then extrapolated to estimate the frequency of actual collisions. The collision-frequency estimates from different types of threat metrics are assessed when used with EVT for AV safety validation. Results show that a metric relating to the point where a collision is unavoidable works best and provides credible estimates. In addition, this thesis proposes how EVT and threat metrics can be used as a proactive safety monitor for AVs deployed in real traffic. The concept is evaluated in a fictive development case and compared to a reactive approach of counting the actual events. It is found that the risk exposure of releasing a non-safe function can be significantly reduced by applying the proposed EVT monitor.
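    The EVT-based extrapolation described in the abstract can be sketched with a peaks-over-threshold model: exceedances of a threat metric over a threshold are fit with a Generalized Pareto Distribution, and the fitted tail is extrapolated to the level corresponding to an actual collision. This is a generic illustration with synthetic data, not the thesis's implementation; the choice of metric (inverse time-to-collision), threshold quantile, and collision level are all assumptions.

```python
# Hedged sketch of peaks-over-threshold EVT for near-collision extrapolation.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
metric = rng.exponential(scale=0.1, size=50_000)  # synthetic 1/TTC samples [1/s]

u = np.quantile(metric, 0.99)        # threshold defining "near-collision"
exceedances = metric[metric > u] - u
rate_u = (metric > u).mean()         # empirical probability of exceeding u

# Fit a Generalized Pareto Distribution to the exceedances (location fixed at 0).
xi, _, sigma = genpareto.fit(exceedances, floc=0)

collision_level = 2.0                # hypothetical metric value at an actual collision
p_collision = rate_u * genpareto.sf(collision_level - u, xi, loc=0, scale=sigma)
print(f"estimated P(metric > {collision_level}) per sample: {p_collision:.2e}")
```

    The appeal of the method is that `p_collision` is estimated from near-collisions alone, without ever observing a collision; its credibility hinges on the threat metric behaving smoothly up to the collision level, which is what the thesis evaluates across metrics.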

    Stochastic Motion Planning as Gaussian Variational Inference: Theory and Algorithms

    We consider the motion planning problem under uncertainty and address it using probabilistic inference. A collision-free motion plan with linear stochastic dynamics is modeled by a posterior distribution. Gaussian variational inference optimizes over path distributions to infer this posterior within the scope of Gaussian distributions. We propose the Gaussian Variational Inference Motion Planner (GVI-MP) algorithm to solve this Gaussian inference, where a natural-gradient paradigm is used to iteratively update the Gaussian distribution and the factorized structure of the joint distribution is leveraged. We show that the direct optimization over the state distributions in GVI-MP is equivalent to solving a stochastic control problem that has a closed-form solution. Starting from this observation, we propose our second algorithm, the Proximal Gradient Covariance Steering Motion Planner (PGCS-MP), to solve the same inference problem in its stochastic control form with terminal constraints. We use a proximal gradient paradigm to solve the linear stochastic control with nonlinear collision cost, where the nonlinear cost is iteratively approximated by quadratic functions and a closed-form solution is obtained by solving a linear covariance steering problem at each iteration. We evaluate the effectiveness and performance of the proposed approaches through extensive experiments on various robot models. The code for this paper can be found at https://github.com/hzyu17/VIMP.
    Comment: 19 pages
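    The natural-gradient update at the core of Gaussian variational inference can be illustrated in one dimension. The sketch below is a generic illustration, not the paper's GVI-MP algorithm: it fits a Gaussian q = N(mu, 1/prec) to a target p(x) ∝ exp(-ψ(x)) with a quartic potential ψ(x) = (x - 2)^4 / 4, stepping the mean along the natural gradient and relaxing the precision toward the expected Hessian of ψ.

```python
# Minimal 1-D sketch of natural-gradient Gaussian variational inference.
mu, prec = 0.0, 1.0          # variational mean and precision of q = N(mu, 1/prec)
step = 0.1

for _ in range(500):
    var = 1.0 / prec
    # Closed-form Gaussian expectations for this quartic potential:
    #   E_q[psi'(x)]  = (mu-2)^3 + 3 (mu-2) var
    #   E_q[psi''(x)] = 3 ((mu-2)^2 + var)
    g = (mu - 2) ** 3 + 3 * (mu - 2) * var
    h = 3 * ((mu - 2) ** 2 + var)
    mu -= step * var * g                  # natural-gradient step on the mean
    prec = (1 - step) * prec + step * h   # fixed-point relaxation of the precision

print(f"mu = {mu:.3f}, variance = {1 / prec:.3f}")
```

    At the fixed point the mean sits at the minimizer of ψ and the precision equals the expected Hessian, the same structure the paper exploits; in GVI-MP the updates run over a full trajectory distribution with a factorized joint, and the expectations are estimated rather than closed-form.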

    Some Historical Aspects of Error Calculus by Dirichlet Forms

    We discuss the main stages in the development of error calculus since the beginning of the 19th century, insisting on what prefigures the use of Dirichlet forms and emphasizing the mathematical properties that make Dirichlet forms especially relevant and efficient. The purpose of the paper is mainly to clarify the concepts. We also indicate some possible future research.
    Comment: 18 pages

    Statistical unfolding of elementary particle spectra: Empirical Bayes estimation and bias-corrected uncertainty quantification

    We consider the high energy physics unfolding problem, where the goal is to estimate the spectrum of elementary particles given observations distorted by the limited resolution of a particle detector. This important statistical inverse problem, arising in data analysis at the Large Hadron Collider at CERN, consists in estimating the intensity function of an indirectly observed Poisson point process. Unfolding typically proceeds in two steps: one first produces a regularized point estimate of the unknown intensity and then uses the variability of this estimator to form frequentist confidence intervals that quantify the uncertainty of the solution. In this paper, we propose forming the point estimate using empirical Bayes estimation, which enables a data-driven choice of the regularization strength through marginal maximum likelihood estimation. Observing that neither Bayesian credible intervals nor standard bootstrap confidence intervals achieve good frequentist coverage in this problem, due to the inherent bias of the regularized point estimate, we introduce an iteratively bias-corrected bootstrap technique for constructing improved confidence intervals. We show using simulations that this enables nearly nominal frequentist coverage with only a modest increase in interval length. The proposed methodology is applied to unfolding the Z boson invariant mass spectrum as measured in the CMS experiment at the Large Hadron Collider.
    Comment: Published at http://dx.doi.org/10.1214/15-AOAS857 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: substantial text overlap with arXiv:1401.827
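    The bootstrap bias-correction idea underlying the paper's intervals can be shown on a deliberately simple toy problem (far simpler than the iterative scheme used for unfolding): correcting the downward bias of the plug-in variance estimator. All data below are synthetic and the example is only an illustration of the one-step bias estimate, not the paper's method.

```python
# Toy bootstrap bias correction for a deliberately biased estimator.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(loc=0.0, scale=2.0, size=200)  # true variance = 4

def plugin_var(sample):
    # Divides by n instead of n - 1, so E[plugin_var] = (n-1)/n * sigma^2.
    return np.mean((sample - sample.mean()) ** 2)

theta_hat = plugin_var(x)

# Bootstrap estimate of the bias: E*[theta(x*)] - theta_hat.
B = 2000
boot = np.array([plugin_var(rng.choice(x, size=x.size, replace=True))
                 for _ in range(B)])
bias_hat = boot.mean() - theta_hat

theta_bc = theta_hat - bias_hat  # bias-corrected estimate
print(f"plug-in: {theta_hat:.3f}, bias-corrected: {theta_bc:.3f}")
```

    The unfolding setting is harder because the bias of the regularized intensity estimate depends on the unknown truth itself, which is why the paper iterates this correction before forming bootstrap confidence intervals.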