
    Quality assessment technique for ubiquitous software and middleware

    The new paradigm of computing or information systems is ubiquitous computing. The technology-oriented issues of ubiquitous computing systems have led researchers to focus on feasibility studies of the technologies rather than on building quality assurance indices or guidelines. In this context, measuring quality is the key to developing high-quality ubiquitous computing products. For this reason, various quality models have been defined, adopted and enhanced over the years; for example, the recognised standard quality model ISO/IEC 9126 is the result of a consensus on a software quality model with three levels: characteristics, sub-characteristics, and metrics. However, it is unlikely that this scheme will be directly applicable to ubiquitous computing environments, which differ considerably from conventional software, and so considerable attention is being given to reformulating existing methods and, especially, to elaborating new assessment techniques for ubiquitous computing environments. This paper selects appropriate quality characteristics for the ubiquitous computing environment, which can be used as the quality target for both ubiquitous computing product evaluation processes and development processes. Further, each of the quality characteristics has been expanded with evaluation questions and metrics, in some cases with measures. In addition, this quality model has been applied to an industrial ubiquitous computing setting. These applications have revealed that while the approach is sound, some parts remain to be developed further in future work.
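The three-level scheme the abstract attributes to ISO/IEC 9126 (characteristics, sub-characteristics, metrics) can be pictured as a small nested mapping. This is a hedged illustration only: the characteristic and sub-characteristic names below follow the published ISO/IEC 9126 vocabulary, but the metric strings are placeholders, not the paper's metrics.

```python
# Hedged sketch of an ISO/IEC 9126-style three-level quality model:
# characteristic -> sub-characteristics -> metrics.
# Metric names are illustrative placeholders, not taken from the paper.
quality_model = {
    "reliability": {
        "maturity": ["failure density against test cases"],
        "fault tolerance": ["failure avoidance ratio"],
        "recoverability": ["mean time to restore"],
    },
    "usability": {
        "understandability": ["completeness of description"],
        "operability": ["error correction ratio"],
    },
}

def metrics_for(model, characteristic):
    """Flatten all metrics defined under one quality characteristic."""
    subs = model.get(characteristic, {})
    return [metric for metrics in subs.values() for metric in metrics]

print(metrics_for(quality_model, "reliability"))
```

Evaluation questions, as described in the abstract, would attach at the sub-characteristic level, with measures refining individual metrics.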

    Self-Healing Protocols for Connectivity Maintenance in Unstructured Overlays

    In this paper, we discuss the use of self-organizing protocols to improve the reliability of dynamic Peer-to-Peer (P2P) overlay networks. Two similar approaches are studied, both based on local knowledge of each node's 2nd neighborhood. The first scheme is a simple protocol requiring interactions among nodes and their direct neighbors. The second scheme adds a check on the Edge Clustering Coefficient (ECC), a local measure that allows determining edges connecting different clusters in the network. The performed simulation assessment evaluates these protocols over uniform networks, clustered networks and scale-free networks. Different failure modes are considered. Results demonstrate the effectiveness of the proposal.
    Comment: The paper has been accepted to the journal Peer-to-Peer Networking and Applications. The final publication is available at Springer via http://dx.doi.org/10.1007/s12083-015-0384-
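The ECC idea can be sketched concretely. One common formulation of the edge clustering coefficient (due to Radicchi et al.) counts the triangles an edge participates in, normalised by the maximum possible given the endpoint degrees; edges bridging clusters close few triangles and score low. This is a hedged sketch of that standard formulation, not necessarily the exact definition used in the paper.

```python
# Hedged sketch: one common edge clustering coefficient,
# ECC(u, v) = (triangles through (u, v) + 1) / min(deg(u) - 1, deg(v) - 1).
# Bridge edges between clusters close few triangles, so they score low.

def edge_clustering_coefficient(adj, u, v):
    """ECC of edge (u, v) over an adjacency mapping node -> set of neighbors."""
    triangles = len(adj[u] & adj[v])                 # common neighbours
    possible = min(len(adj[u]) - 1, len(adj[v]) - 1)
    if possible == 0:
        return float("inf")  # edge cannot close any triangle
    return (triangles + 1) / possible

# Toy graph: two triangles joined by a single bridge edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

print(edge_clustering_coefficient(adj, 0, 1))  # intra-cluster edge
print(edge_clustering_coefficient(adj, 2, 3))  # bridge edge scores lower
```

A self-healing protocol of the kind described could use such a score to decide which lost edges are worth repairing first, since low-ECC edges are the ones holding clusters together.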

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
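Of the statistical techniques the review introduces, importance sampling is the one that most benefits from a worked example: rare failure events are almost never hit by plain Monte Carlo, so sampling is biased toward the rare region and corrected with a likelihood ratio. The sketch below is a generic textbook instance (a Gaussian tail probability), not a dependability model from the paper; the shift amount and distributions are assumptions chosen for illustration.

```python
# Hedged sketch of importance sampling, the Monte Carlo acceleration
# technique the review introduces: estimate p = P(X > 4) for X ~ N(0, 1),
# an event plain sampling at this sample size almost never observes.
import math
import random

random.seed(0)
N = 100_000

# Naive Monte Carlo: with p ~ 3e-5, 1e5 samples yield only a handful of hits.
naive = sum(random.gauss(0, 1) > 4 for _ in range(N)) / N

# Importance sampling: draw from the shifted proposal N(4, 1), so roughly
# half the samples land in the rare region, and reweight each hit by the
# likelihood ratio phi(x) / phi(x - 4) = exp(8 - 4x).
total = 0.0
for _ in range(N):
    x = random.gauss(4, 1)
    if x > 4:
        total += math.exp(8 - 4 * x)
is_est = total / N

exact = 0.5 * math.erfc(4 / math.sqrt(2))  # true tail probability
print(naive, is_est, exact)
```

The same mechanism underlies fast simulation of highly dependable systems: failures are made artificially frequent in the simulator, and each observed failure path is reweighted by its true likelihood.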

    Structural Vulnerability Analysis of Electric Power Distribution Grids

    Power grid outages cause huge economic and societal costs. Disruptions in the power distribution grid are responsible for a significant fraction of electric power unavailability to customers. The impact of extreme weather conditions, continuously increasing demand, and the over-ageing of assets in the grid threatens to deteriorate the safety of electric power delivery in the near future. It is this dependence on electric power that necessitates further research into power distribution grid security assessment. Thus, measures to analyze the robustness characteristics of the grid and to identify its vulnerabilities are of utmost importance. This research investigates exactly those concepts: the vulnerability and robustness of power distribution grids from a topological point of view, and proposes a metric to quantify them with respect to assets in a distribution grid. Real-world data is used to demonstrate the applicability of the proposed metric as a tool to assess the criticality of assets in a distribution grid.
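One simple way to make "topological criticality of an asset" concrete is to remove each asset from the grid graph and measure how many load points lose their path to the substation. This is a hedged illustration of that general idea on a toy radial feeder; it is not the metric proposed in the paper, whose definition is not given in the abstract.

```python
# Hedged sketch (not the paper's metric): rank assets in a toy radial
# distribution feeder by the fraction of load nodes that lose their
# path to the substation (node 0) when that asset fails.
from collections import deque

def reachable(adj, source, removed):
    """BFS from source, skipping the failed node `removed`."""
    seen, queue = {source}, deque([source])
    while queue:
        n = queue.popleft()
        for m in adj.get(n, ()):
            if m not in seen and m != removed:
                seen.add(m)
                queue.append(m)
    return seen

# Toy feeder: substation 0 feeds two laterals.
edges = [(0, 1), (1, 2), (2, 3), (1, 4), (4, 5)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

loads = {2, 3, 5}
for asset in [1, 2, 4]:
    lost = [l for l in loads if l not in reachable(adj, 0, asset)]
    print(asset, len(lost) / len(loads))  # node 1 is the most critical asset
```

In a real distribution grid such a ranking would be computed over the actual network model, and a proposed metric would typically also weight the disconnected loads, e.g. by served demand.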

    Smart Grid Security: Threats, Challenges, and Solutions

    The cyber-physical nature of the smart grid has rendered it vulnerable to a multitude of attacks that can occur at its communication, networking, and physical entry points. Such cyber-physical attacks can have detrimental effects on the operation of the grid, as exemplified by the recent attack which caused a blackout of the Ukrainian power grid. Thus, to properly secure the smart grid, it is of utmost importance to: a) understand its underlying vulnerabilities and associated threats, b) quantify their effects, and c) devise appropriate security solutions. In this paper, the key threats targeting the smart grid are first exposed while assessing their effects on the operation and stability of the grid. Then, the challenges involved in understanding these attacks and devising defense strategies against them are identified. Potential solution approaches that can help mitigate these threats are then discussed. Last, a number of mathematical tools that can help in analyzing and implementing security solutions are introduced. As such, this paper provides the first comprehensive overview of smart grid security.