Opportunistic Sensing in Train Safety Systems
Train safety systems are complex and expensive, and changing them requires huge investments; changes are therefore evolutionary and small. Current developments, like faster (high-speed) trains and a higher train density on the railway network, have initiated research on safety systems that can cope with the new requirements. This paper presents a novel approach for a safety subsystem that checks the composition of a train, based on opportunistic sensing with a wireless sensor network. Opportunistic sensing systems consist of changing constellations of sensors that, for a limited amount of time, work together to achieve a common goal. Such constellations are self-organizing and come into being spontaneously. The proposed opportunistic sensing system selects a subset of sensor nodes from a larger set based on a common context. We show that it is possible to use a wireless sensor network to distinguish between carriages from different trains. The common context is acceleration, which is used to select the subset of carriages that belong to the same train out of all the carriages from several trains in close proximity. Simulations based on a realistic set of sensor data show that the method is valid, but that the algorithm is too complex for implementation on simple wireless sensor nodes. Downscaling the algorithm reduces the number of processor execution cycles as well as memory usage, and makes it suitable for implementation on a wireless sensor node with acceptable loss of precision. Actual implementation on wireless sensor nodes confirms the results obtained with the simulations.
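The acceleration-based selection described above can be illustrated with a small sketch: carriages whose acceleration traces correlate strongly are assumed to ride in the same train. The greedy grouping, the `group_carriages` name, and the 0.9 threshold are our own illustration, not the paper's downscaled algorithm.

```python
import numpy as np

def group_carriages(signals, threshold=0.9):
    """Group carriages whose acceleration traces are strongly correlated.

    signals: dict mapping carriage id -> 1-D array of acceleration samples
    recorded over the same time window.  Returns a list of sets, each set
    holding carriages assumed to belong to the same train.
    """
    groups = []
    for cid in signals:
        for group in groups:
            ref = next(iter(group))  # compare against one member of the group
            if np.corrcoef(signals[cid], signals[ref])[0, 1] >= threshold:
                group.add(cid)
                break
        else:
            groups.append({cid})  # no correlated group found: start a new train
    return groups
```

With synthetic traces, two carriages sharing the same underlying motion (plus sensor noise) end up in one group, while a carriage from another train forms its own.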
An investigation into the feasibility of constructing a mathematical model of ship safety : a thesis in partial fulfilment for the degree of Master of Science in Mathematics at Massey University
This thesis investigates the feasibility of developing a mathematical model to provide quantitative measures of total ship safety. Safety is an intuitive concept and is a subset of economic utility. There is economic pressure to transport goods at minimum cost and, without regulation, the frequency of shipping casualties could be unacceptably high. Mathematical methods associated with elements that influence ship safety are reviewed. Techniques for analysing ships' structures, stability, motions and engineering reliability are well established, but those for assessing the effect of human involvement, and of operational and organisational influences, on safety are less developed. Data are available for winds, waves, currents and tidal movements, and their variability suggests that probabilistic models are appropriate. Given the complexity of the international shipping industry, a simple computer model is developed in which 50 ships serve four ports. This allows safety to be assessed when input variables are adjusted. Obstacles to developing a mathematical model of ship safety are identified, and it is concluded that the feasibility of such a model depends on its required inclusiveness and utility.
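As a toy illustration of the probabilistic modelling the thesis argues for, the sketch below estimates casualty frequency by Monte Carlo. All parameters (storm probability, risk multiplier, baseline risk) are invented for illustration and do not come from the thesis.

```python
import random

def simulate_casualties(n_voyages=1000, base_risk=0.001,
                        storm_prob=0.1, storm_multiplier=10.0, seed=0):
    """Toy Monte Carlo of shipping casualties.

    Each voyage carries a baseline casualty probability that is
    multiplied by storm_multiplier when a storm (drawn with probability
    storm_prob) is encountered.  Returns the estimated casualty
    frequency per voyage.
    """
    rng = random.Random(seed)
    casualties = 0
    for _ in range(n_voyages):
        risk = base_risk * (storm_multiplier if rng.random() < storm_prob else 1.0)
        if rng.random() < risk:
            casualties += 1
    return casualties / n_voyages
```

Adjusting the weather inputs and re-running mimics the thesis's idea of assessing safety as input variables change.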
Do we need entire training data for adversarial training?
Deep Neural Networks (DNNs) are being used to solve a wide range of problems
in many domains including safety-critical domains like self-driving cars and
medical imagery. However, DNNs are vulnerable to adversarial attacks. In
the past few years, numerous approaches have been proposed to tackle this
problem by training networks using adversarial training. Almost all the
approaches generate adversarial examples for the entire training dataset, thus
increasing the training time drastically. We show that we can decrease the
training time for any adversarial training algorithm by using only a subset of
training data for adversarial training. To select the subset, we filter the
adversarially-prone samples from the training data. We perform a simple
adversarial attack on all training examples to filter this subset. In this
attack, we add a small perturbation to each pixel and a few grid lines to the
input image.
We perform adversarial training on the adversarially-prone subset and mix it
with vanilla training performed on the entire dataset. Our results show that
when our method-agnostic approach is plugged into FGSM, we achieve a speedup of
3.52x on MNIST and 1.98x on the CIFAR-10 dataset with comparable robust
accuracy. We also test our approach on state-of-the-art Free adversarial
training and achieve a speedup of 1.2x in training time with a marginal drop in
robust accuracy on the ImageNet dataset.
Comment: 6 pages, 4 figures
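The filtering step above can be sketched as follows. The `model.predict` interface, the perturbation size, and the grid spacing are our assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def adversarially_prone_subset(model, images, labels, eps=0.05, grid_step=8):
    """Select training samples that flip under a cheap perturbation.

    images: array of shape (N, H, W) with pixel values in [0, 1].
    The attack adds a small signed perturbation to every pixel plus
    bright grid lines every `grid_step` pixels, then keeps the samples
    the model now misclassifies.
    """
    perturbed = images + eps * np.sign(np.random.uniform(-1, 1, images.shape))
    perturbed[:, ::grid_step, :] = 1.0   # horizontal grid lines
    perturbed[:, :, ::grid_step] = 1.0   # vertical grid lines
    perturbed = np.clip(perturbed, 0.0, 1.0)
    mask = model.predict(perturbed) != labels  # samples that flipped
    return images[mask], labels[mask]
```

Adversarial training is then run only on the returned subset, mixed with vanilla training on the full dataset.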
Reliability-based design optimization using kriging surrogates and subset simulation
The aim of the present paper is to develop a strategy for solving
reliability-based design optimization (RBDO) problems that remains applicable
when the performance models are expensive to evaluate. Starting with the
premise that simulation-based approaches are not affordable for such problems,
and that most-probable-failure-point-based approaches do not allow one to
quantify the error in the estimate of the failure probability, an approach
based on both metamodels and advanced simulation techniques is explored. The
kriging metamodeling technique is chosen in order to surrogate the performance
functions because it allows one to genuinely quantify the surrogate error. The
surrogate error on the limit-state surfaces is propagated to the failure
probability estimates in order to provide an empirical error measure. This
error is then sequentially reduced by means of a population-based adaptive
refinement technique until the kriging surrogates are accurate enough for
reliability analysis. This original refinement strategy makes it possible to
add several observations in the design of experiments at the same time.
Reliability and reliability sensitivity analyses are performed by means of the
subset simulation technique for the sake of numerical efficiency. The adaptive
surrogate-based strategy for reliability estimation is finally embedded in a
classical gradient-based optimization algorithm in order to solve the RBDO
problem. The kriging surrogates are built in a so-called augmented reliability
space, thus making them reusable from one nested RBDO iteration to the next.
The strategy is compared to other approaches available in the literature on
three academic examples in the field of structural mechanics.
Comment: 20 pages, 6 figures, 5 tables. Preprint submitted to Springer-Verlag
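Subset simulation itself (without the kriging surrogate or the adaptive refinement) can be sketched as below, following the standard Au-Beck formulation: rare failure probabilities are written as a product of larger conditional probabilities over nested intermediate thresholds. Sample sizes, the level probability, and the proposal step are illustrative choices, not the paper's settings.

```python
import numpy as np

def subset_simulation(g, dim, n=1000, p0=0.1, seed=0, max_levels=10):
    """Estimate P[g(X) <= 0] for X ~ standard normal via subset simulation.

    Each level sets the next threshold at the p0-quantile of g, keeps the
    samples below it as seeds, and regrows the population with a
    preconditioned-Crank-Nicolson-style Metropolis move.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, dim))
    gx = np.array([g(xi) for xi in x])
    prob = 1.0
    for _ in range(max_levels):
        thresh = np.quantile(gx, p0)
        if thresh <= 0:                      # failure region reached
            return prob * np.mean(gx <= 0)
        prob *= p0
        idx = gx <= thresh                   # seeds conditioned on the level
        seeds_x, seeds_g = x[idx], gx[idx]
        reps = int(np.ceil(n / len(seeds_x)))
        x = np.repeat(seeds_x, reps, axis=0)[:n].copy()
        gx = np.repeat(seeds_g, reps)[:n].copy()
        for i in range(n):
            # 0.8^2 + 0.6^2 = 1, so the proposal preserves N(0, I);
            # accept only moves that stay inside the current level
            cand = 0.8 * x[i] + 0.6 * rng.standard_normal(dim)
            gc = g(cand)
            if gc <= thresh:
                x[i], gx[i] = cand, gc
    return prob * np.mean(gx <= 0)
```

For a common event the routine degenerates to plain Monte Carlo; for a rare one (e.g. a linear limit state at three standard deviations, true probability about 1.35e-3) it reaches the right order of magnitude with only a few thousand evaluations.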
Computing Weakest Strategies for Safety Games of Imperfect Information
CEDAR (Counter Example Driven Antichain Refinement) is a new symbolic algorithm for computing weakest strategies for safety games of imperfect information. The algorithm computes a fixed point over the lattice of contravariant antichains. Here, contravariant antichains are antichains over pairs consisting of an information set and an allow set representing the associated move. We demonstrate how the richer structure of contravariant antichains for representing antitone functions, as opposed to standard antichains for representing downward-closed sets, allows CEDAR to apply a significantly less complex controllable-predecessor step than previous algorithms.
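For background, a standard antichain represents a downward-closed family of sets by its maximal elements, and insertion keeps only maximal sets. This minimal sketch shows that base data structure, not CEDAR's contravariant variant over (information set, allow set) pairs.

```python
def antichain_insert(antichain, new):
    """Insert `new` (a frozenset) into an antichain of maximal sets.

    The antichain canonically represents the downward-closed family of
    all subsets of its members.  `new` is dropped if some member already
    contains it; otherwise members contained in `new` are removed.
    """
    if any(new <= s for s in antichain):
        return antichain                      # dominated: nothing to do
    return [s for s in antichain if not s <= new] + [new]
```

Dominated sets never survive, so the list stays an antichain after every insertion.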
An Empirical Analysis of Vulnerabilities in Python Packages for Web Applications
This paper examines software vulnerabilities in common Python packages used
particularly for web development. The empirical dataset is based on the PyPI
package repository and the so-called Safety DB used to track vulnerabilities in
selected packages within the repository. The methodological approach builds on
a release-based time series analysis of the conditional probabilities that
releases of the packages are vulnerable. According to the results, many of
the Python vulnerabilities observed seem to be only modestly severe; input
validation and cross-site scripting have been the most typical vulnerabilities.
In terms of the time series analysis based on the release histories, only the
recent past is observed to be relevant for statistical predictions; the
classical Markov property holds.
Comment: Forthcoming in: Proceedings of the 9th International Workshop on Empirical Software Engineering in Practice (IWESEP 2018), Nara, IEEE
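The release-based conditional probabilities can be estimated as first-order transition frequencies over a binary vulnerable/not-vulnerable release history; under the Markov property noted above, these frequencies are all a statistical prediction needs. The helper below is our illustration of that analysis, not the paper's code.

```python
from collections import Counter

def transition_probs(series):
    """Estimate first-order transition probabilities for a binary
    vulnerable/not-vulnerable release history.

    series: sequence of 0/1 flags ordered by release.  Returns a dict
    mapping (prev, curr) -> estimated P(curr | prev).
    """
    pair_counts = Counter(zip(series, series[1:]))   # consecutive releases
    prev_counts = Counter(series[:-1])               # conditioning states
    return {(p, c): pair_counts[(p, c)] / prev_counts[p]
            for (p, c) in pair_counts}
```

For a toy history the conditional probabilities out of each state sum to one, as expected of a Markov chain.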
Towards Assume-Guarantee Profiles for Autonomous Vehicles
Rules or specifications for autonomous vehicles are currently formulated on a case-by-case basis and put together in a rather ad hoc fashion. As a step towards eliminating this practice, we propose a systematic procedure for generating a set of supervisory specifications for self-driving cars that are 1) associated with a distributed assume-guarantee structure and 2) characterizable by the notions of consistency and completeness. Besides helping autonomous vehicles make better decisions on the road, the assume-guarantee contract structure also helps address the notion of blame when undesirable events occur. We give several game-theoretic examples to demonstrate the applicability of our framework.
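A toy rendering of the distributed assume-guarantee idea: each contract pairs an assumption with a guarantee, and two contracts are mutually consistent over a set of example states when each agent's guarantee discharges the other's assumption. All names, predicates, and the finite-state check are our illustration, not the paper's formalism.

```python
def mutually_consistent(contract_a, contract_b, states):
    """Check, over a finite set of example states, that each agent's
    guarantee implies the other agent's assumption.

    A contract is a dict with callable "assume" and "guarantee" fields
    mapping a state to a bool.
    """
    return all(
        (not contract_a["guarantee"](s) or contract_b["assume"](s)) and
        (not contract_b["guarantee"](s) or contract_a["assume"](s))
        for s in states
    )
```

Two symmetric speed-limit contracts compose cleanly, while tightening one agent's assumption beyond what the other guarantees breaks consistency.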