
    Change Impact Analysis for SysML Requirements Models based on Semantics of Trace Relations

    Change impact analysis is one of the applications of requirements traceability in the software engineering community. In this paper, we focus on requirements and requirements relations from a traceability perspective. We provide formal definitions of the requirements relations in SysML for change impact analysis. Our approach aims at keeping the model synchronized with what stakeholders want to be modeled, and possibly implemented as well, which we call the domain. The differences between the domain and the model are defined as external inconsistencies. These inconsistencies are propagated through the whole model by using the formalization of the relations, and are mapped to proposed model changes. We provide tool support as a plug-in for the commercial visual software modeler BluePrint.
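
    The abstract describes propagating external inconsistencies across typed requirement relations. The sketch below is only an illustration of that general idea, not the paper's formal semantics: the relation triples, the propagation rules, and the requirement names are all hypothetical.

```python
# Illustrative sketch (not the paper's formalization): propagating a proposed
# change through typed requirement relations to estimate its impact set.
from collections import deque

# Hypothetical requirements model: (source, relation, target) triples.
RELATIONS = [
    ("R1", "refines", "R2"),
    ("R2", "requires", "R3"),
    ("R4", "contains", "R2"),
]

# Assumed propagation rules: which relation types carry impact, and in which direction.
PROPAGATES = {
    "refines": "both",      # a change to either end may affect the other
    "requires": "forward",  # a change to the target may invalidate the source
    "contains": "both",
}

def impact_set(changed, relations=RELATIONS, rules=PROPAGATES):
    """Breadth-first propagation of an external inconsistency from `changed`."""
    impacted, queue = {changed}, deque([changed])
    while queue:
        current = queue.popleft()
        for src, rel, dst in relations:
            mode = rules.get(rel, "none")
            neighbours = []
            if mode in ("forward", "both") and dst == current:
                neighbours.append(src)
            if mode == "both" and src == current:
                neighbours.append(dst)
            for n in neighbours:
                if n not in impacted:
                    impacted.add(n)
                    queue.append(n)
    return impacted

print(impact_set("R3"))  # e.g. {'R3', 'R2', 'R1', 'R4'}
```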

    Reliability of Mobile Agents for Reliable Service Discovery Protocol in MANET

    Recently, mobile agents have been used to discover services in mobile ad-hoc networks (MANETs), where agents travel through the network, collecting and sometimes spreading the dynamically changing service information. It is important to investigate how reliable the agents are for this application, as the dependability issues (reliability and availability) of a MANET are strongly affected by its dynamic nature. The complexity of the underlying MANET makes it hard to obtain the route reliability of the mobile agent system (MAS) analytically; instead, we estimate it using Monte Carlo simulation. We therefore propose an algorithm for estimating the task route reliability of a MAS deployed for discovering services that takes into account the effect of node mobility in the MANET. We also show, by considering different mobility models, that the mobility pattern of the nodes affects MAS performance. The multipath propagation effect of the radio signal is considered to decide link existence, and transient link errors are also taken into account. Finally, we propose a metric to calculate the reliability of the service discovery protocol and examine how MAS performance affects protocol reliability. The experimental results show the robustness of the proposed algorithm, and the optimum network bandwidth needed to support the agents is calculated for our application. However, the reliability of the MAS remains highly dependent on the link failure probability.
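
    As a rough illustration of Monte Carlo reliability estimation in this setting (a deliberately simplified model, not the paper's algorithm, which also accounts for node mobility and multipath effects), the sketch below estimates the probability that an agent completes a route when each traversed link fails independently with an assumed probability.

```python
# Minimal Monte Carlo sketch under an assumed independent-link-failure model.
import random

def route_success(route_len, p_link):
    """Simulate one agent traversal over `route_len` links."""
    return all(random.random() > p_link for _ in range(route_len))

def estimate_route_reliability(route_len, p_link, trials=100_000):
    successes = sum(route_success(route_len, p_link) for _ in range(trials))
    return successes / trials

# Example: a 5-hop route with a 3% link failure probability per hop.
print(estimate_route_reliability(route_len=5, p_link=0.03))  # close to 0.97**5 ~ 0.86
```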

    Squeeziness: An information theoretic measure for avoiding fault masking

    Fault masking can reduce the effectiveness of a test suite. We propose an information theoretic measure, Squeeziness, as the theoretical basis for avoiding fault masking. We begin by explaining fault masking and the relationship between collisions and fault masking. We then define Squeeziness and demonstrate by experiment that there is a strong correlation between Squeeziness and the likelihood of collisions. We conclude with comments on how Squeeziness could be the foundation for generating test suites that minimise the likelihood of fault masking.
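
    One plausible reading of the abstract, used only for illustration here, is that Squeeziness measures the information a function loses over a finite input domain: the entropy of uniformly distributed inputs minus the entropy of the induced output distribution. The sketch below follows that reading; it is not a verified transcription of the paper's exact definition.

```python
# Hedged sketch: Squeeziness as information lost by a function over a finite domain.
import math
from collections import Counter

def entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def squeeziness(func, domain):
    n = len(domain)
    input_entropy = entropy([1 / n] * n)                 # uniform inputs
    output_counts = Counter(func(x) for x in domain)
    output_entropy = entropy([c / n for c in output_counts.values()])
    return input_entropy - output_entropy                # bits of information lost

# A function with many collisions "squeezes" more, so it can mask more faults.
print(squeeziness(lambda x: x % 4, range(256)))   # heavy collisions: 8 - 2 = 6 bits lost
print(squeeziness(lambda x: x + 1, range(256)))   # injective: 0 bits lost
```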

    High resolution fire hazard index based on satellite images

    In December 2015, after three years of activity, the FP7 project PREFER (Space-based Information Support for Prevention and REcovery of Forest Fires Emergency in the MediteRranean Area) came to an end. The project was designed to respond to the need to improve the use of satellite images in applications related to emergency services, in particular to forest fires. It aimed at developing, validating and demonstrating information products based on optical and SAR (Synthetic Aperture Radar) imagery for supporting the prevention of forest fires and the recovery/damage assessment of burnt areas. The present paper presents an improved version of one of the products developed under the PREFER project, the Daily Fire Hazard Index (DFHI).

    Statistical framework for estimating GNSS bias

    We present a statistical framework for estimating global navigation satellite system (GNSS) non-ionospheric differential time delay bias. The biases are estimated by examining differences of measured line-integrated electron densities (TEC) that are scaled to equivalent vertical integrated densities. The spatio-temporal variability, instrumentation-dependent errors, and errors due to inaccurate ionospheric altitude profile assumptions are modeled as structure functions. These structure functions determine how the TEC differences are weighted in the linear least-squares minimization procedure that is used to produce the bias estimates. A method for automatic detection and removal of outlier measurements that do not fit a model of receiver bias is also described. The same statistical framework can be used for a single receiver station, but it also scales to a large global network of receivers. In addition to the Global Positioning System (GPS), the method is applicable to other dual-frequency GNSS systems, such as GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema). The use of the framework is demonstrated in practice through several examples. A specific implementation of the methods presented here is used to compute GPS receiver biases for measurements in the MIT Haystack Madrigal distributed database system. Results of the new algorithm are compared with the current MIT Haystack Observatory MAPGPS bias determination algorithm. The new method is found to produce estimates of receiver bias that have reduced day-to-day variability and more consistent coincident vertical TEC values. (Comment: 18 pages, 5 figures, submitted to AM)
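
    To make the weighted least-squares idea concrete, the sketch below estimates per-receiver biases from differences of coincident vertical TEC values, with structure-function-derived weights replaced by made-up numbers. The receiver names, data, weights, and zero-sum constraint are all illustrative assumptions; this is not the paper's or MAPGPS's implementation.

```python
# Simplified sketch of weighted least-squares receiver bias estimation.
import numpy as np

# Hypothetical data: receiver pairs that observed nearly the same ionospheric point,
# the difference of their vertical-equivalent TEC, and a weight standing in for the
# structure-function model of that difference's expected error.
receivers = ["rx_a", "rx_b", "rx_c"]
pairs = [("rx_a", "rx_b"), ("rx_b", "rx_c"), ("rx_a", "rx_c")]
tec_diffs = np.array([2.1, -1.0, 1.2])     # TECu: bias difference plus noise
weights = np.array([1.0, 0.5, 0.8])        # larger weight = more trusted difference

index = {r: i for i, r in enumerate(receivers)}
A = np.zeros((len(pairs) + 1, len(receivers)))
for row, (r1, r2) in enumerate(pairs):
    A[row, index[r1]] = 1.0                # each difference ~ bias(r1) - bias(r2)
    A[row, index[r2]] = -1.0
A[-1, :] = 1.0                             # only differences are observable, so
b = np.append(tec_diffs, 0.0)              # constrain the biases to sum to zero
w = np.append(weights, 1.0)

# Weighted least squares via row scaling.
sqrt_w = np.sqrt(w)
biases, *_ = np.linalg.lstsq(A * sqrt_w[:, None], b * sqrt_w, rcond=None)
print(dict(zip(receivers, biases.round(3))))
```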

    Short interval control for the cost estimate baseline of novel high value manufacturing products – a complexity based approach

    Novel high value manufacturing products by default lack the minimum a priori data needed for forecasting cost variance over time using regression-based techniques. Forecasts which attempt to achieve this therefore suffer from significant variance, which in turn places significant strain on budgetary assumptions and financial planning. The authors argue that for novel high value manufacturing products, short interval control through continuous revision is necessary until the context of the baseline estimate stabilises sufficiently for the revision intervals to be extended. Case study data from the United States Department of Defense Scheduled Annual Summary Reports (1986-2013) is used to exemplify the approach. In this respect it must be remembered that the context of a baseline cost estimate is subject to a large number of assumptions regarding future plausible scenarios, the probability of such scenarios, and various related requirements. These assumptions change over time, and the degree of their change is indicated by the extent to which cost variance follows a forecast propagation curve that has been defined in advance. The presented approach determines the stability of this context by calculating the effort required to identify a propagation pattern for cost variance, using the principles of Kolmogorov complexity. Only when that effort remains stable over a sufficient period of time can the revision periods for the cost estimate baseline be changed from continuous to discrete time intervals. The practical implication of the presented approach for novel high value manufacturing products is that attention is shifted from the bottom-up or parametric estimation activity to the continuous management of the context for the cost estimate itself. This in turn enables a faster and more sustainable stabilisation of the estimating context, which then creates the conditions for reducing cost estimate uncertainty in an actionable and timely manner.
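
    Kolmogorov complexity itself is uncomputable, so any practical use of the idea needs a proxy. The sketch below uses compressed length as such a proxy to track how much "effort" is needed to describe the cost-variance series to date, relaxing revision intervals once that effort stabilises. The discretisation, window, and threshold are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: a compression-based proxy for the Kolmogorov-complexity argument.
import zlib

def complexity_proxy(series, precision=1):
    """Compressed length of a coarsely discretised cost-variance series."""
    text = ",".join(f"{x:.{precision}f}" for x in series)
    return len(zlib.compress(text.encode()))

def is_context_stable(series, window=5, tolerance=2):
    """Stable if adding recent observations barely changes the description length."""
    if len(series) <= window:
        return False
    recent = [complexity_proxy(series[:k])
              for k in range(len(series) - window, len(series) + 1)]
    growth = [b - a for a, b in zip(recent, recent[1:])]
    return max(growth) - min(growth) <= tolerance

cost_variance = [0.0, 1.2, 2.5, 2.6, 2.6, 2.7, 2.7, 2.7, 2.8, 2.8, 2.8, 2.8]
print(is_context_stable(cost_variance))
```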

    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and they are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
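
    The sketch below shows the basic mechanism the survey builds on, in the simplest possible setting: estimating a small failure probability for an exponential failure time by sampling from a failure-biased distribution and reweighting with the likelihood ratio. The rates and mission time are made-up numbers, and this toy example stands in for, rather than reproduces, the techniques reviewed in the paper.

```python
# Illustrative importance-sampling sketch for a rare failure event P(T < t).
import math
import random

def crude_mc(rate, t, trials):
    hits = sum(random.expovariate(rate) < t for _ in range(trials))
    return hits / trials

def importance_sampling(rate, t, trials, biased_rate):
    total = 0.0
    for _ in range(trials):
        x = random.expovariate(biased_rate)          # sample from the biased density
        if x < t:
            # likelihood ratio of the true density to the biased density at x
            lr = (rate * math.exp(-rate * x)) / (biased_rate * math.exp(-biased_rate * x))
            total += lr
    return total / trials

rate, t = 1e-5, 1.0                                  # true P(T < t) is about 1e-5
print("crude MC:", crude_mc(rate, t, 10_000))        # usually 0.0: failures too rare
print("importance sampling:", importance_sampling(rate, t, 10_000, biased_rate=1.0))
```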

    Detecting Floating-Point Errors via Atomic Conditions

    This paper tackles the important, difficult problem of detecting program inputs that trigger large floating-point errors in numerical code. It introduces a novel, principled dynamic analysis that leverages the mathematically rigorously analyzed condition numbers of atomic numerical operations, which we call atomic conditions, to effectively guide the search for large floating-point errors. Compared with existing approaches, our work based on atomic conditions has several distinctive benefits: (1) it does not rely on high-precision implementations to act as approximate oracles, which are difficult to obtain in general and computationally costly; and (2) atomic conditions provide accurate, modular search guidance. These benefits in combination lead to a highly effective approach that detects more significant errors in real-world code (e.g., widely-used numerical library functions) and achieves several orders of magnitude speedup over the state of the art, thus making error analysis significantly more practical. We expect the methodology and principles behind our approach to benefit other floating-point program analysis tasks such as debugging, repair and synthesis. To facilitate the reproduction of our work, we have made our implementation, evaluation data and results publicly available on GitHub at https://github.com/FP-Analysis/atomic-condition.
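
    The sketch below illustrates the underlying notion only: the standard condition number |x·f'(x)/f(x)| of a few atomic operations, evaluated at concrete inputs to show where a tiny relative input perturbation gets hugely amplified. The tool's actual analysis and search strategy are more involved; the operation set and sample inputs here are assumptions for illustration.

```python
# Hedged sketch: condition numbers of atomic operations as error-amplification hints.
import math

# Standard condition numbers |x * f'(x) / f(x)| for a few atomic operations.
ATOMIC_CONDITIONS = {
    "sin":  lambda x: abs(x * math.cos(x) / math.sin(x)) if math.sin(x) != 0 else math.inf,
    "log":  lambda x: abs(1.0 / math.log(x)) if x > 0 and math.log(x) != 0 else math.inf,
    "exp":  lambda x: abs(x),
    "sqrt": lambda x: 0.5,
}

def atomic_condition(op, x):
    return ATOMIC_CONDITIONS[op](x)

# Inputs near a multiple of pi make sin ill-conditioned: a tiny relative perturbation
# of x produces a hugely amplified relative error in sin(x).
for x in (0.5, 3.14159, 3.141592653589):
    print(f"cond(sin, {x:.12g}) = {atomic_condition('sin', x):.3e}")
```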

    Expert System-Based Exploratory Approach to Cost Modeling of Reinforced Concrete Office Building

    Expert systems are an established method in cost modeling, given their advantages over traditional regression methods. On this basis, this study aimed at deploying a neural network for cost modeling of reinforced concrete office buildings. One hundred (100) samples were selected at random and divided into two parts; one part was used to develop the network algorithm, while the second was used for model validation. A neural network was used to generate the model algorithm; the model is divided into three modules: a data optimization module, and criteria selection with initializing and terminating modules. Regression analysis was carried out and the model was validated with the jackknife re-sampling technique. The collinearity analysis indicates a high level of tolerance, with variation prediction quotients ranging from a lowest of -0.07403 to a highest of 0.66639. The regression coefficient (R-squared) value for determining model fitness is 0.034, with a standard error of 0.048; this attests to the fitness of the generated model. The model is flexible in accommodating new data and variables, and thus allows for regular updating.
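
    For readers unfamiliar with jackknife re-sampling, the sketch below shows leave-one-out re-fitting of a simple linear cost model to gauge the stability of its coefficient estimate. The predictor, the synthetic data, and the linear model are assumptions for illustration only and do not reproduce the study's neural network.

```python
# Illustrative jackknife validation sketch on a hypothetical cost data set.
import numpy as np

rng = np.random.default_rng(0)
floor_area = rng.uniform(500, 5000, size=100)            # hypothetical predictor
cost = 120.0 * floor_area + rng.normal(0, 20_000, 100)   # hypothetical building cost

def fit_slope(x, y):
    """Least-squares slope of cost against floor area (with intercept)."""
    A = np.column_stack([x, np.ones_like(x)])
    slope, _ = np.linalg.lstsq(A, y, rcond=None)[0]
    return slope

full_slope = fit_slope(floor_area, cost)
jackknife_slopes = np.array([
    fit_slope(np.delete(floor_area, i), np.delete(cost, i))
    for i in range(len(cost))
])
# Jackknife estimate of the standard error of the slope.
n = len(cost)
se = np.sqrt((n - 1) / n * np.sum((jackknife_slopes - jackknife_slopes.mean()) ** 2))
print(f"slope = {full_slope:.2f}, jackknife SE = {se:.2f}")
```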