Who watches the watchers: Validating the ProB Validation Tool
Over the years, ProB has moved from a tool that complemented proving to a development environment that is now sometimes used instead of proving for applications such as exhaustive model checking or data validation. This has led to much more stringent requirements on the integrity of ProB. In this paper we present a summary of our validation efforts for ProB, in particular within the context of the norm EN 50128 and safety-critical applications in the railway domain. (Comment: In Proceedings F-IDE 2014, arXiv:1404.578)
Symbolic Reachability Analysis of B through ProB and LTSmin
We present a symbolic reachability analysis approach for B that can provide a
significant speedup over traditional explicit state model checking. The
symbolic analysis is implemented by linking ProB to LTSmin, a high-performance
language-independent model checker. The link is achieved via LTSmin's PINS
interface, allowing ProB to benefit from LTSmin's analysis algorithms while
only writing a few hundred lines of glue code, along with a bridge between ProB
and C using ZeroMQ. ProB supports model checking of several formal
specification languages such as B, Event-B, Z and TLA. Our experiments are
based on a wide variety of B-Method and Event-B models to demonstrate the
efficiency of the new link. Among the tested categories are state space
generation and deadlock detection; but action detection and invariant checking
are also feasible in principle. In many cases we observe speedups of several
orders of magnitude. We also compare the results with other approaches for
improving model checking, such as partial order reduction or symmetry
reduction. We thus provide a new scalable, symbolic analysis algorithm for the
B-Method and Event-B, along with a platform to integrate other model checking
improvements via LTSmin in the future.
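To make the baseline concrete, the sketch below shows what explicit-state reachability analysis computes for a toy transition system: a breadth-first exploration of the state graph, also flagging deadlocks (states with no enabled event). The machine, its events and all names are invented for illustration; this is not ProB or LTSmin code, and the symbolic approach of the paper replaces exactly this state-by-state enumeration.

```python
from collections import deque

def successors(state):
    # Toy B-like machine (assumed example): a counter 0..3 with two events,
    #   inc:   x < 3  -> x + 1
    #   reset: x == 3 -> 0
    x = state
    succs = []
    if x < 3:
        succs.append(x + 1)
    if x == 3:
        succs.append(0)
    return succs

def explore(initial):
    """Explicit-state reachability: BFS over the state graph,
    collecting reachable states and deadlocked states."""
    seen = {initial}
    deadlocks = set()
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        succs = successors(s)
        if not succs:
            deadlocks.add(s)
        for t in succs:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen, deadlocks
```

For this machine, `explore(0)` visits all four counter values and finds no deadlock, since every state has an enabled event.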
Stochastic RUL calculation enhanced with TDNN-based IGBT failure modeling
Power electronics are widely used in the transport and energy sectors. Hence, the reliability of these power electronic components is critical to reducing the maintenance cost of these assets. It is vital that the health of these components is monitored to increase the safety and availability of a system. The aim of this paper is to develop a prognostic technique for estimating the remaining useful life (RUL) of power electronic components. There is a need for an efficient prognostic algorithm that is embeddable and able to support on-board real-time decision-making. A time delay neural network (TDNN) is used to model the failure modes of an insulated gate bipolar transistor (IGBT). Initially, the TDNN is trained on IGBT ageing samples. A stochastic process is then applied to the estimation results to compute the probability of the health state during the degradation process. Fusing the TDNN with this statistical approach yields a probability distribution function that improves the accuracy of the TDNN's RUL predictions. The RUL (i.e., mean and confidence bounds) is then calculated from the simulation of the estimated degradation states. The prognostic results are evaluated using the root mean square error (RMSE) and relative accuracy (RA) prognostic evaluation metrics.
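Two ingredients of this pipeline can be illustrated in a few lines of Python: the "time delay" input window that defines a TDNN (each training sample is the last few sensor readings), and the RMSE and relative-accuracy metrics mentioned for evaluation. The window construction and the RA formula below follow common textbook conventions, not necessarily the paper's exact definitions.

```python
import math

def time_delay_windows(series, delay):
    """Build TDNN-style inputs: each sample pairs the last `delay`
    readings with the next value to predict."""
    return [(series[i - delay:i], series[i]) for i in range(delay, len(series))]

def rmse(actual, predicted):
    """Root mean square error over paired sequences."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def relative_accuracy(true_rul, predicted_rul):
    """RA = 1 - |true - predicted| / true (a common prognostics convention)."""
    return 1.0 - abs(true_rul - predicted_rul) / true_rul
```

For example, a degradation series `[1, 2, 3, 4]` with a delay of 2 yields the windows `([1, 2], 3)` and `([2, 3], 4)`, and a predicted RUL of 90 cycles against a true RUL of 100 gives an RA of 0.9.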
Developing a distributed electronic health-record store for India
The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.
Gradients in urban material composition: A new concept to map cities with spaceborne imaging spectroscopy data
To understand processes in urban environments, such as urban energy fluxes or surface temperature patterns, it is important to map urban surface materials. Airborne imaging spectroscopy data have been successfully used to identify urban surface materials mainly based on unmixing algorithms. Upcoming spaceborne Imaging Spectrometers (IS), such as the Environmental Mapping and Analysis Program (EnMAP), will reduce the time and cost-critical limitations of airborne systems for Earth Observation (EO). However, the spatial resolution of all operated and planned IS in space will not be higher than 20 to 30 m and, thus, the detection of pure Endmember (EM) candidates in urban areas, a requirement for spectral unmixing, is very limited. Gradient analysis could be an alternative method for retrieving urban surface material compositions in pixels from spaceborne IS. The gradient concept is well known in ecology to identify plant species assemblages formed by similar environmental conditions but has never been tested for urban materials. However, urban areas also contain neighbourhoods with similar physical, compositional and structural characteristics. Based on this assumption, this study investigated (1) whether cover fractions of surface materials change gradually in urban areas and (2) whether these gradients can be adequately mapped and interpreted using imaging spectroscopy data (e.g. EnMAP) with 30 m spatial resolution.
Similarities of material compositions were analysed on the basis of 153 systematically distributed samples from a detailed surface material map using Detrended Correspondence Analysis (DCA). The gradient scores determined for the first two gradients were regressed against the corresponding mean reflectance of simulated EnMAP spectra using Partial Least Squares regression models. Results show strong correlations, with R2 = 0.85 and R2 = 0.71 and an RMSE of 0.24 and 0.21 for the first and second axis, respectively. The subsequent mapping of the first gradient reveals patterns that correspond to the transition from predominantly vegetation classes to the dominance of artificial materials. Patterns resulting from the second gradient are associated with surface material compositions related to finer structural differences in urban structures. The composite gradient map shows patterns of common surface material compositions that can be related to urban land use classes such as Urban Structure Types (UST). By linking the knowledge of typical material compositions with urban structures, gradient analysis appears to be a powerful tool for mapping characteristic material compositions in 30 m imaging spectroscopy data of urban areas.
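The regression step can be mimicked on synthetic data: "gradient scores" are regressed against multi-band "reflectance" samples and an R2 is computed. In this sketch ordinary least squares stands in for the paper's Partial Least Squares regression, and the sample count, band count and coefficients are invented; because the synthetic scores are exactly linear in the bands, the fit recovers them almost perfectly.

```python
import numpy as np

# Hypothetical data: 153 samples x 5 spectral bands of "reflectance",
# and synthetic DCA-like gradient scores that are linear in the bands.
rng = np.random.default_rng(0)
X = rng.random((153, 5))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 1.5])
y = X @ true_w

# Fit: least-squares coefficients (plus intercept) mapping
# reflectance -> gradient score, as a stand-in for PLSR.
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)
```

A real application would replace `X` and `y` with per-sample mean EnMAP reflectance and DCA axis scores, and validate on held-out samples rather than reporting an in-sample R2.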
Complex railway systems: capacity and utilisation of interconnected networks
Introduction: Worldwide, the transport sector faces several issues related to rising traffic demand, such as congestion, energy consumption, noise, pollution and safety. To stem the problem, the European Commission is encouraging a modal shift towards railways, considered one of the key factors for the development of a more sustainable European transport system. The desired increase in the railway share of transport demand over the coming decades, and the attempt to open up the rail market (for freight, international and, recently, also local services), strengthen the attention paid to capacity usage of the system. This contribution proposes a synthetic methodology for the capacity and utilisation analysis of complex interconnected rail networks; the procedure has a dual scope, since it allows both a theoretically robust examination of suburban rail systems and a solid approach that can be applied, with few additional and consistent assumptions, to feasibility or strategic analyses of wide networks (by efficiently exploiting Big Data and/or available Open Databases).
Method: In particular, the approach proposes a schematisation of the typical elements of a rail network (stations and line segments) to be applied when more detailed data are lacking; in the authors' opinion, the strengths of the presented procedure stem from the flexibility of the applied synthetic methods and from the joint analysis of nodes and lines. After building a quasi-automatic model to carry out several analyses by changing the boundary conditions or assumptions, the article also presents some general abacuses showing the variability of the capacity/utilisation of the network's elements as a function of basic parameters.
Results: This has helped in both of the presented case studies: one focuses on a detailed analysis of the Naples suburban node, while the other broadens the horizon by examining the whole European rail network, with a more specific zoom on the Belgium area.
The first application shows how the procedure can be applied when fine-grained data are available and for metropolitan/regional analysis, allowing precise detection of possible bottlenecks in the system and the identification of possible interventions to relieve the high usage rate of these elements. The second application represents an ongoing attempt to provide a broad analysis of capacity and related parameters for the entire European railway system. It explores the potential of the approach and the possible exploitation of different 'Open and Big Data' sources, but the outcomes underline the necessity of relying on proper and adequate information; the accuracy of the results depends significantly on the design and precision of the input database.
Conclusion: The proposed methodology aims to evaluate capacity and utilisation rates of rail systems at different geographical scales and according to data availability; the outcomes might provide valuable information to allow efficient exploitation and deployment of railway infrastructure, better supporting policy (e.g. investment prioritisation, rail infrastructure access charges) and helping to minimise costs for users. The presented case studies show that the method allows indicative evaluations of the use of the system and comparative analyses between different elementary components, providing a first identification of 'weak' links or nodes for which specific and detailed analyses should then be carried out, taking into account in more depth their actual configuration, technical characteristics and the real composition of the traffic (i.e. other elements influencing rail capacity, such as the adopted operating systems, the station traffic/route control and safety system, the elastic release of routes, the overlap of block sections, etc.).
Evolving macro-actions for planning
Domain re-engineering through macro-actions (i.e. macros) provides one potential avenue for research into learning for planning. However, most existing work learns macros that are reusable plan fragments, and thus observable from planner behaviours online or from plan characteristics offline. There are also learning methods that derive macros from domain analysis. Nevertheless, most of these methods explore restricted macro spaces and exploit specific features of planners or domains. The learning examples, especially those used to acquire previous experience, might not cover many aspects of the system, or might not always reflect that better choices were made during the search. Moreover, any such specific properties are unlikely to be common to many planners or domains. This paper presents an offline evolutionary method that learns macros for arbitrary planners and domains. Our method explores a wider macro space and learns macros that are not directly observable from the examples. Our method also constitutes a generalised macro-learning framework, as it does not discover or exploit any specific structural properties of planners or domains.
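The evolutionary idea can be sketched as a tiny genetic algorithm: a population of candidate macros (fixed-length action sequences) is repeatedly scored, truncated to the fittest half, and refilled with mutants. Everything here is invented for illustration: the four-action "domain", and a toy fitness function standing in for the real evaluation, which would run each macro-augmented domain through a planner on training problems.

```python
import random

ACTIONS = ["pickup", "move", "drop", "stack"]  # toy planning domain (assumed)

def fitness(macro):
    """Toy fitness: count alternations between move and non-move actions.
    A stand-in for scoring macros by planner performance on benchmarks."""
    return sum(1 for a, b in zip(macro, macro[1:]) if (a == "move") != (b == "move"))

def mutate(macro, rng):
    """Replace one randomly chosen action in the macro."""
    m = list(macro)
    m[rng.randrange(len(m))] = rng.choice(ACTIONS)
    return tuple(m)

def evolve(generations=200, size=20, length=4, seed=1):
    """Truncation-selection GA: keep the fittest half, refill with mutants."""
    rng = random.Random(seed)
    pop = [tuple(rng.choice(ACTIONS) for _ in range(length)) for _ in range(size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: size // 2]
        pop = survivors + [mutate(rng.choice(survivors), rng) for _ in survivors]
    return max(pop, key=fitness)
```

Because survivors are carried over unchanged, the best fitness never decreases across generations; a real offline learner would spend almost all of its time in `fitness`, invoking the target planner.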