
    Trusted Service Composition for Distributed Real-Time and Embedded Systems

    Distributed real-time and embedded (DRE) software systems are expected to provide high quality-of-service (QoS) attributes, e.g., scalability, reliability, and security, in conjunction with correct functionality built atop infrastructure with limited capabilities. Given the many complex and conflicting QoS and functional attributes of DRE systems, a major challenge in developing such software systems is guaranteeing their trustworthiness, i.e., the degree of confidence that the system adheres to its specification. Current state-of-the-art methods use service orientation to compose systems from reusable and trusted services, and validate the trustworthiness of the end system using runtime evidence. The major shortcoming of this approach is that trust is considered an afterthought (i.e., not an integral part of the software development lifecycle). The trustworthiness of a system should be evaluated based on the trustworthiness of its different properties, including its functionality and QoS attributes. Our research extends current state-of-the-art methods for developing trusted DRE systems by considering development-time factors of the composition (e.g., properties of individual services, interaction patterns, and compatibility with other services). A major research challenge is evaluating the composition of trustworthiness for different system properties under different composition patterns. Our current and future research work to address this challenge includes identifying trust composition operators for different types of compositions, deriving a formal model of trust composition, and validating our approach with a case study using a distributed tracking system.
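    The abstract names trust composition operators without defining them. As a hedged illustration only (the operators below are common assumptions in the trust-composition literature, not the authors' model), sequentially composed services can be taken to multiply their trust values, while a redundant composition fails only if all replicas fail:

        # Illustrative sketch, not the abstract's actual operators. Trust
        # values are assumed to be probabilities in [0, 1] that a service
        # adheres to its specification, and services are assumed independent.

        def trust_sequence(trusts):
            """Sequential composition: every service must behave
            correctly, so trust values multiply."""
            result = 1.0
            for t in trusts:
                result *= t
            return result

        def trust_parallel(trusts):
            """Redundant composition: the composite fails only if all
            replicas fail."""
            failure = 1.0
            for t in trusts:
                failure *= (1.0 - t)
            return 1.0 - failure

        # Hypothetical distributed tracking pipeline: two services in
        # sequence (0.99, 0.95) consuming a sensor service replicated
        # three ways at 0.90 each.
        pipeline = trust_sequence([0.99, 0.95])       # ~0.9405
        sensors = trust_parallel([0.90, 0.90, 0.90])  # 0.999
        print(trust_sequence([pipeline, sensors]))    # ~0.9396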

    New Fuzzy Performance Indices for Reliability Analysis of Water Supply Systems

    Large and complex engineering systems are subject to a wide range of possible future loads and conditions. The uncertainty associated with quantifying these potential conditions poses a great challenge to systems' design, planning, and management. Satisfactory and reliable system performance therefore cannot be simply assured. Water supply systems, as a typical example of such engineering systems, include collections of different types of facilities, connected in complicated networks that extend over and serve broad geographical regions. As a result, water supply systems are at risk of temporary disruption in service due to natural hazards or anthropogenic causes, whether unintentional (operational errors and mistakes) or intentional (terrorist acts). Quantification of risk is a pivotal step in engineering risk and reliability analysis. In this analysis, uncertainty is measured using different system performance indices and figures of merit to evaluate its consequences for the safety of engineering systems. Probabilistic reliability analysis has been used extensively to deal with the problem of uncertainty in many engineering systems. However, its application is invariably affected by the well-known engineering problem of data insufficiency. The Bayesian approach and subjective probability estimation are used to evaluate, express, and communicate uncertainty that stems from lack of information or unavailability of data. They introduce a formal procedure for incorporating subjective belief and engineering understanding together with the available data. Fuzzy set theory, on the other hand, was developed to capture people's judgmental beliefs, or, as mentioned before, the uncertainty caused by lack of knowledge. Fuzzy set theory and fuzzy logic have contributed successfully to technological development in many kinds of real-world applications (Zimmermann, 1996). This study explores the utility of fuzzy set theory in the field of engineering system reliability analysis. Three new fuzzy reliability measures are suggested: (i) a reliability index, (ii) a robustness index, and (iii) a resiliency index. These measures are evaluated, together with the fuzzy reliability measure developed by Shrestha and Duckstein (1998), using two simple hypothetical cases. The newly suggested indices are shown to handle different fuzzy representations. In addition, these reliability measures comply with the conceptual approach of fuzzy sets.
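    The abstract does not give the definitions of the three indices. As an assumed illustration of how a fuzzy reliability index can work, the sketch below represents the system's margin of safety (capacity minus load) as a triangular fuzzy number and takes the index to be the fraction of its membership area lying in the acceptable region (margin >= 0); the functions and numbers are hypothetical, not the paper's:

        # Illustrative sketch only: the paper's actual index definitions
        # are not given in the abstract.

        def triangular(a, b, c):
            """Membership function of a triangular fuzzy number (a, b, c)."""
            def mu(x):
                if a < x <= b:
                    return (x - a) / (b - a)
                if b < x < c:
                    return (c - x) / (c - b)
                return 1.0 if x == b else 0.0
            return mu

        def fuzzy_reliability_index(mu, lo, hi, n=10_000):
            """Membership area over margin >= 0 divided by total area,
            computed by midpoint numerical integration."""
            dx = (hi - lo) / n
            xs = [lo + (i + 0.5) * dx for i in range(n)]
            total = sum(mu(x) for x in xs) * dx
            safe = sum(mu(x) for x in xs if x >= 0.0) * dx
            return safe / total if total > 0 else 0.0

        # Example: margin of safety believed to lie in [-2, 8],
        # most plausibly around 3.
        mu_margin = triangular(-2.0, 3.0, 8.0)
        print(fuzzy_reliability_index(mu_margin, -2.0, 8.0))  # ~0.92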

    SDL - The IoT Language

    Interconnected smart devices constitute a large and rapidly growing element of the contemporary Internet. A smart thing can be as simple as a web-enabled device that collects and transmits sensor data to a repository for analysis, or as complex as a web-enabled system to monitor and manage a smart home. Smart things present marvellous opportunities, but when they participate in complex systems, they challenge our ability to manage risk and ensure reliability. SDL, the ITU Standard Specification and Description Language, provides many advantages for modelling and simulating communicating agents – such as smart things – before they are deployed. The potential for SDL to enhance reliability and safety is explored with respect to existing smart things. But SDL must advance if it is to become the language of choice for developing the next generation of smart things. In particular, it must target emerging IoT platforms, it must support simulation of interactions between pre-existing smart things and new smart things, and it must facilitate deployment of large numbers of similar things. Moreover, awareness of the potential benefits of SDL must be raised if those benefits are to be realized in the current and future Internet of Things.
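    SDL models are written in SDL itself (graphically or in its textual SDL-PR form), which is not reproduced here; as a rough Python stand-in, the sketch below shows the kind of communicating, state-based agent the abstract has in mind, with explicit states and signal-driven transitions that can be simulated before deployment:

        # Illustrative Python stand-in for an SDL-style agent; the class,
        # states, and signal names are hypothetical, not from the paper.

        class SensorThing:
            """A minimal smart thing: reads a sensor, reports to a
            repository, and can be enabled/disabled by control signals."""

            def __init__(self):
                self.state = "IDLE"

            def on_signal(self, signal, payload=None):
                # Transitions are explicit, so the model can be simulated
                # and checked before the device is deployed.
                if self.state == "IDLE" and signal == "ENABLE":
                    self.state = "MONITORING"
                elif self.state == "MONITORING" and signal == "READING":
                    return ("REPORT", payload)  # forward to repository
                elif signal == "DISABLE":
                    self.state = "IDLE"
                return None

        thing = SensorThing()
        thing.on_signal("ENABLE")
        print(thing.on_signal("READING", 21.5))  # ('REPORT', 21.5)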

    Cross layer reliability estimation for digital systems

    Forthcoming manufacturing technologies hold the promise of increasing the performance and functionality of multifunctional computing systems thanks to a remarkable growth in device integration density. Despite the benefits introduced by these technology improvements, reliability is becoming a key challenge for the semiconductor industry. With transistor sizes reaching atomic dimensions, vulnerability to unavoidable fluctuations in the manufacturing process and to environmental stress rises dramatically. Failing to meet a reliability requirement may add excessive re-design cost and may have severe consequences for the success of a product. Worst-case design with large margins to guarantee reliable operation has been employed for a long time, but it is reaching a limit that makes it economically unsustainable due to its performance, area, and power cost. One of the open challenges for future technologies is building "dependable" systems on top of unreliable components, which will degrade and even fail during the normal lifetime of the chip. Conventional design techniques are highly inefficient: they expend a significant amount of energy to tolerate device unpredictability by adding safety margins to a circuit's operating voltage, clock frequency, or charge stored per bit. Unfortunately, the additional costs introduced to compensate for unreliability are rapidly becoming unacceptable in today's environment, where power consumption is often the limiting factor for integrated circuit performance and energy efficiency is a top concern. Attention should be paid to tailoring techniques that improve the reliability of a system on the basis of its requirements, ending up with cost-effective solutions that favor the success of the product on the market. Cross-layer reliability is one of the most promising approaches to achieve this goal. Cross-layer reliability techniques take into account the interactions between the layers composing a complex system (i.e., the technology, hardware, and software layers) to implement efficient cross-layer fault mitigation mechanisms. Fault tolerance mechanisms are implemented at different layers, from the technology layer up to the software layer, to optimize the system by exploiting the inherent capability of each layer to mask lower-level faults. For this purpose, cross-layer reliability design techniques need to be complemented with cross-layer reliability evaluation tools able to precisely assess the reliability level of a selected design early in the design cycle. Accurate and early reliability estimates would enable exploration of the system design space and the optimization of multiple constraints such as performance, power consumption, cost, and reliability. This Ph.D. thesis is devoted to the development of new methodologies and tools to evaluate and optimize the reliability of complex digital systems during the early design stages. More specifically, techniques addressing hardware accelerators (i.e., FPGAs and GPUs), microprocessors, and full systems are discussed. All developed methodologies are presented in conjunction with their application to real-world use cases belonging to different computational domains.
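    The thesis's estimation methodologies are not detailed in the abstract. As a hedged sketch of the underlying cross-layer idea, a raw technology-layer fault rate can be attenuated by each layer's masking probability before it surfaces as a system failure; all numbers below are invented for illustration:

        # Illustrative sketch only: masking factors and rates are invented
        # numbers, not results from the thesis. The idea is that each layer
        # masks a fraction of the faults escaping the layer below it.

        def system_failure_rate(raw_fault_rate, masking_per_layer):
            """Propagate a raw technology-layer fault rate up the stack,
            attenuating it by each layer's masking probability."""
            rate = raw_fault_rate
            for _layer, masking in masking_per_layer:
                rate *= (1.0 - masking)  # only unmasked faults propagate
            return rate

        layers = [
            ("circuit",      0.60),  # e.g. latching-window masking
            ("architecture", 0.50),  # e.g. dead instructions, ECC
            ("software",     0.40),  # e.g. benign data corruptions
        ]
        print(system_failure_rate(1e-4, layers))  # 1.2e-05 failures/hour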

    Framework for a spatial Decision Support Tool for policy and decision making

    The main challenge in developing a spatial DST (Decision Support Tool) to support decision making on future livestock production will not be a technical one, but rather a challenge of meeting the context requirements of the tool, such as the characteristics of the country-specific spatial planning and decision-making process, the wishes of the potential users of the tool and its output, and the country-specific policies and regulations. The spatial DST proposed in this report therefore does not include complex, state-of-the-art GIS techniques, but instead aims to be as clear and simple as possible, in order to give the potential users a full understanding of the analysis process and of how to use the tool's output. A spatial DST can easily become a 'black box' if the users do not fully understand the limitations of the tool and its output. Although output maps of GIS systems may look very detailed and suggest a high degree of accuracy, they often are not accurate; this depends entirely on the availability of reliable and detailed input data. Most likely, many of the produced output maps should be used in an indicative way only. Therefore, the output of the spatial DST needs to be accompanied by supporting information on the reliability of the output and on the shortcomings due to unreliable or missing input data, as well as the consequences for use of the output. A comprehensive meta-data assessment system is therefore proposed as an integrated part of the spatial DST. Distribution of the output will also require tools to produce more sketch-like presentations, e.g. using fuzzy borders and aggregated maps, which are another important feature of the spatial DST.
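    The report does not fix a metadata schema; purely as an assumed illustration, each output map of the DST could carry a reliability record of the kind the meta-data assessment system would manage:

        # Illustrative sketch only: the field names are assumptions, not
        # the report's actual metadata schema. Each output map carries a
        # record describing how trustworthy it is and how to use it.

        from dataclasses import dataclass, field

        @dataclass
        class LayerMetadata:
            source: str             # where the input data came from
            input_reliability: str  # e.g. "high", "indicative only"
            known_gaps: list = field(default_factory=list)
            recommended_use: str = "indicative"

        @dataclass
        class OutputLayer:
            name: str
            metadata: LayerMetadata

        layer = OutputLayer(
            name="livestock_density_2030",
            metadata=LayerMetadata(
                source="national agricultural census (hypothetical)",
                input_reliability="indicative only",
                known_gaps=["no data for one region (hypothetical)"],
            ),
        )
        print(layer.metadata.recommended_use)  # indicative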

    Comparison of Groundwater Rights in the United States: Lessons for Texas

    The history of water rights in the United States is rich with conflicts over critical water sources. Surface water and groundwater rights are seen as primarily State responsibilities, except for federal lands and selected interstate and international waters. Establishing a single set of legal rules and regulations for groundwater is a great challenge because of site-specific conditions. Different rules and regulations based on different doctrines may be adopted by any State. The degree of attention given to a particular groundwater resource depends upon its availability and value in various economic applications. In Texas, management of groundwater resources is a complex challenge. Today, groundwater reliability is facing serious limitations in many areas because of excessive pumping and water quality issues. Critics have long argued over the rule of capture. Some wish to maintain the rule of capture to protect individual groundwater ownership, and to give groundwater districts greater power and funding to manage pumping. Others prefer state ownership and control of groundwater similar to surface water, but any removal of individual ownership rights would likely lead to long judicial challenges. As the Texas legislature and various water-related agencies consider possible updates to the State's approach to groundwater rights, it is worthwhile to consider the varied approaches taken in other states. This presentation provides a summary of the different groundwater rights systems in the fifty states, with special attention to lessons learned in complex situations. Recommendations for alternative future steps in Texas are discussed.

    First-principles design of next-generation nuclear fuels

    The behavior of nuclear fuel in a reactor is a complex phenomenon that is influenced by a large number of materials properties, which include thermomechanical strength, chemical stability, microstructure, and defects. As a consequence, a comprehensive understanding of fuel material behavior presents a significant modeling challenge, which must be mastered to improve the efficiency and reliability of current nuclear reactors. It is also essential to the development of advanced fuel materials for next-generation reactors. Over the last two decades, the use of density functional theory (DFT) has greatly contributed to our understanding by providing profound information on nuclear fuel materials, ranging from fundamental properties of f-electron systems to thermomechanical materials properties. This article briefly summarizes the main achievements of this first-principles computational methodology as it applies to nuclear fuel materials. The current status of first-principles modeling is also discussed, considering existing limitations and drawbacks such as size limitation and the added complexity associated with high-temperature analysis. Finally, the future role of DFT modeling in the nuclear fuels industry is put into perspective.

    Software reliability and dependability: a roadmap

    The roadmap's key directions include: shifting the focus from software reliability to user-centred measures of dependability in complete software-based systems; influencing design practice to facilitate dependability assessment; propagating awareness of dependability issues and the use of existing, useful methods; injecting some rigour into the use of process-related evidence for dependability assessment; and better understanding issues of diversity and variation as drivers of dependability. Bev Littlewood is founder-Director of the Centre for Software Reliability, and Professor of Software Engineering at City University, London. Prof Littlewood has worked for many years on problems associated with the modelling and evaluation of the dependability of software-based systems; he has published many papers in international journals and conference proceedings and has edited several books. Much of this work has been carried out in collaborative projects, including the successful EC-funded projects SHIP, PDCS, PDCS2, and DeVa. He has been employed as a consultant.

    A framework for effective management of condition based maintenance programs in the context of industrial development of E-Maintenance strategies

    CBM (Condition Based Maintenance) solutions are increasingly present in industrial systems due to two main circumstances: an unprecedented, rapid evolution in the capture and analysis of data, and a significant cost reduction in supporting technologies. CBM programs in industrial systems can become extremely complex, especially when considering the effective introduction of the new capabilities provided by the PHM (Prognostics and Health Management) and E-maintenance disciplines. In this scenario, any CBM solution involves the management of numerous technical aspects that the maintenance manager needs to understand in order to implement the solution properly and effectively, according to the company's strategy. This paper provides a comprehensive representation of the key components of a generic CBM solution, presented as a framework, or supporting structure, for effective management of CBM programs. The concept of a “symptom of failure”, its corresponding analysis techniques (introduced by ISO 13379-1 and linked with RCM/FMEA analysis), and other international standards for CBM open-software application development (for instance, ISO 13374 and OSA-CBM) are used in the paper to develop the framework. An original template has been developed, adopting the formal structure of RCM analysis templates, to integrate the information from the PHM techniques used to capture failure mode behaviour and to manage maintenance. Finally, a case study illustrates the framework using the proposed template.
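    The paper's template itself is not reproduced in the abstract. As an assumed illustration of the idea, an RCM-style row can be extended with fields tying each failure mode to its symptom (per ISO 13379-1) and to the PHM technique that monitors it; the field names and values below are hypothetical:

        # Illustrative sketch only: field names and values are assumptions,
        # not the paper's actual template. Each row links an RCM/FMEA
        # failure mode to an observable symptom and a monitoring technique.

        from dataclasses import dataclass

        @dataclass
        class CBMTemplateRow:
            asset: str
            failure_mode: str          # from the RCM/FMEA analysis
            symptom: str               # observable precursor of failure
            monitoring_technique: str  # PHM technique for the symptom
            alarm_threshold: str       # condition triggering maintenance

        row = CBMTemplateRow(
            asset="main pump bearing",
            failure_mode="bearing wear",
            symptom="rising vibration at characteristic frequency",
            monitoring_technique="vibration spectrum analysis",
            alarm_threshold="RMS velocity > 4.5 mm/s (hypothetical)",
        )
        print(row.symptom)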