
    MultiPARTES: Multicore Virtualization for Mixed-Criticality Systems

    Modern embedded applications typically integrate a multitude of functionalities with potentially different criticality levels into a single system. Without appropriate preconditions, the integration of mixed-criticality subsystems can lead to a significant and potentially unacceptable increase in engineering and certification costs. A promising solution is to incorporate mechanisms that establish multiple partitions with strict temporal and spatial separation between the individual partitions. In this approach, subsystems with different levels of criticality can be placed in different partitions and can be verified and validated in isolation. The MultiPARTES FP7 project aims at supporting mixed-criticality integration for embedded systems based on virtualization techniques for heterogeneous multicore processors. A major outcome of the project is the MultiPARTES XtratuM, an open-source hypervisor designed as a generic virtualization layer for heterogeneous multicore processors. MultiPARTES evaluates the developed technology through selected use cases from the offshore wind power, space, visual surveillance, and automotive domains. The impact of MultiPARTES on the targeted domains is also discussed. In a number of ongoing research initiatives (e.g., RECOMP, ARAMIS, MultiPARTES, CERTAINTY), mixed-criticality integration is considered on multicore processors. Key challenges are the combination of software virtualization and hardware segregation, and the extension of partitioning mechanisms to jointly address significant non-functional requirements (e.g., time, energy and power budgets, adaptivity, reliability, safety, security, volume, weight) along with development and certification methodology.
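
    The strict temporal separation described above is typically enforced with a statically configured cyclic schedule, in which the hypervisor grants each partition a fixed, pre-allocated time slot. The Python sketch below is a minimal illustration of that idea; the partition names and slot durations are invented, and this is not the actual XtratuM configuration format.

    # Sketch of hypervisor-style temporal partitioning via a static cyclic
    # schedule. Partition names and slot lengths are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Slot:
        partition: str    # partition that owns the CPU during this slot
        duration_ms: int  # fixed budget; no other partition may run here

    # The major frame repeats indefinitely, so each partition's share of
    # CPU time is fixed a priori and independent of the others' behavior.
    MAJOR_FRAME = [
        Slot("control_HI", 40),   # safety-critical partition
        Slot("logging_LO", 10),   # non-critical partition
        Slot("control_HI", 40),
        Slot("comms_LO", 10),
    ]

    def dispatch(frames: int) -> None:
        """Simulate the dispatcher: a misbehaving LO partition can never
        steal cycles from a HI partition."""
        t = 0
        for _ in range(frames):
            for slot in MAJOR_FRAME:
                print(f"t={t:4d} ms -> run {slot.partition} for {slot.duration_ms} ms")
                t += slot.duration_ms

    dispatch(frames=2)

    Because the schedule is fixed offline, each partition can be verified and validated against its own time window in isolation, which is the cost argument the abstract makes.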

    Scheduling policies and system software architectures for mixed-criticality computing

    The mixed-criticality model of computation is being increasingly adopted in timing-sensitive systems. The model not only ensures that the most critical tasks in a system never fail, but also aims for better resource utilization under normal conditions. In this report, we describe the widely used mixed-criticality task model and fixed-priority scheduling algorithms for the model on uniprocessors. Because the mixed-criticality task model and its scheduling policies demand it, isolation among tasks, both temporal and spatial, is one of the main requirements from the system design point of view. Different virtualization techniques have been used to design system software architectures with the goal of isolation. We discuss a few such system software architectures that are being used, or can be used, for the mixed-criticality model of computation.
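
    In the standard formulation of this task model (usually attributed to Vestal), each task carries a criticality level and one WCET estimate per level; when a job overruns its low-criticality budget, the system switches mode and guarantees only the high-criticality tasks. A minimal Python sketch under that assumption follows; the task parameters are invented, and the utilization check is a deliberately crude stand-in for a real fixed-priority schedulability analysis.

    # Sketch of the two-level (LO/HI) mixed-criticality task model.
    # Task parameters are invented; the utilization bound is a crude
    # stand-in for a proper fixed-priority schedulability test.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        criticality: str   # "LO" or "HI"
        wcet_ms: dict      # one WCET estimate per criticality level
        period_ms: int

    TASKS = [
        Task("airbag",    "HI", {"LO": 2, "HI": 5}, 20),  # pessimistic HI bound
        Task("telemetry", "LO", {"LO": 3},          50),  # LO estimate only
    ]

    def utilization(tasks, mode):
        """LO mode: every task runs, charged its optimistic LO budget.
        HI mode: only HI tasks survive, charged their pessimistic budget."""
        return sum(t.wcet_ms[mode] / t.period_ms
                   for t in tasks
                   if mode == "LO" or t.criticality == "HI")

    print(f"{utilization(TASKS, 'LO'):.2f}")  # 0.16: all tasks fit
    print(f"{utilization(TASKS, 'HI'):.2f}")  # 0.25: HI tasks still safe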

    Assessing the critical material constraints on low carbon infrastructure transitions

    We present an assessment method to analyze whether a disruption in the supply of a group of materials endangers the transition to low-carbon infrastructure. We define criticality as the combination of the potential for supply disruption and the exposure of the system of interest to that disruption. Low-carbon energy depends on multiple technologies, each composed of a multitude of materials of varying criticality. Our methodology allows us to assess the simultaneous potential for supply disruption of a range of materials. Generating a specific target level of low-carbon energy implies a dynamic roll-out of technology at a specific scale. Our approach is correspondingly dynamic, and monitors the change in criticality during the transition towards a low-carbon energy goal. It is thus not limited to quantifying the criticality of a particular material at a particular point in time. We apply our method to the proposed UK energy transition as a demonstration, with a focus on neodymium use in electric vehicles. Although we anticipate that the supply disruption potential of neodymium will decrease, our results show that the criticality of low-carbon energy generation increases, as a result of increasing exposure to neodymium-reliant technologies. We present a number of potential responses that reduce criticality, either by lowering the supply disruption potential or by lowering the UK's exposure to that disruption.
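
    Since criticality here is the combination of supply disruption potential and exposure, the dynamic in the abstract can be illustrated with a toy calculation. In the Python sketch below, the multiplicative combination and every number are illustrative assumptions, not the authors' calibration; only the qualitative shape (falling disruption, rising exposure) follows the abstract.

    # Toy dynamic criticality: combination of supply disruption potential
    # and exposure. The product form and all numbers are assumptions.
    def criticality(disruption: float, exposure: float) -> float:
        """Both components normalized to [0, 1]."""
        return disruption * exposure

    # Hypothetical neodymium trajectory during an EV roll-out.
    years      = [2020, 2030, 2040, 2050]
    disruption = [0.80, 0.70, 0.60, 0.50]  # supply risk falling
    exposure   = [0.10, 0.30, 0.60, 0.90]  # reliance on Nd magnets rising

    for y, d, e in zip(years, disruption, exposure):
        print(f"{y}: criticality = {criticality(d, e):.2f}")
    # Output rises from 0.08 to 0.45: growing exposure outweighs the
    # falling disruption potential, as the abstract reports for the UK.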

    Criticality Analysis for Maintenance Purposes: A Study for Complex In‐service Engineering Assets

    The purpose of this paper is to establish a basis for a criticality analysis, considered here as a prerequisite, a first required step in reviewing the current maintenance programs of complex in-service engineering assets. Review is understood as a reality check: a test of whether the current maintenance activities are well aligned with actual business objectives and needs. This paper describes an efficient and rational working process and a model that produces a hierarchy of assets, based on risk analysis and cost–benefit principles, ranked according to their importance for the business to meet specific goals. Starting from a multicriteria analysis, the proposed model converts the relevant criteria impacting equipment criticality into a single score representing the criticality level. Although detailed implementation of techniques like Root Cause Failure Analysis and Reliability Centered Maintenance is recommended for further optimization of the maintenance activities, the reasons why criticality analysis deserves the attention of engineers and of maintenance and reliability managers are precisely explained here. A case study is presented to help the reader understand the process and to operationalize the model.
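
    The core of such a multicriteria model is the collapse of several weighted criteria into one ranking score. The Python sketch below is a generic weighted-sum illustration; the criteria, weights, and ratings are invented, and the paper's actual model rests on risk and cost–benefit principles rather than these particular numbers.

    # Generic weighted-sum criticality score for ranking assets.
    # Criteria, weights, and ratings are illustrative assumptions.
    WEIGHTS = {                    # relative importance, summing to 1.0
        "safety_impact":     0.40,
        "production_loss":   0.30,
        "repair_cost":       0.20,
        "failure_frequency": 0.10,
    }

    def criticality_score(ratings: dict) -> float:
        """Collapse per-criterion ratings (1 = negligible .. 10 = severe)
        into the single score used to rank the asset."""
        return sum(w * ratings[c] for c, w in WEIGHTS.items())

    assets = {
        "gas_compressor": {"safety_impact": 9, "production_loss": 8,
                           "repair_cost": 7, "failure_frequency": 4},
        "backup_pump":    {"safety_impact": 3, "production_loss": 2,
                           "repair_cost": 4, "failure_frequency": 6},
    }

    # Rank from most to least critical to prioritize maintenance review.
    for name in sorted(assets, key=lambda n: -criticality_score(assets[n])):
        print(f"{name}: {criticality_score(assets[name]):.1f}")
    # gas_compressor: 7.8, backup_pump: 3.2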

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the still unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.

    Quality of medicines commonly used in the treatment of soil transmitted helminths and Giardia in Ethiopia: a nationwide survey

    Background: The presence of poor-quality medicines on the market is a global threat to public health, especially in developing countries. Therefore, we assessed the quality of two commonly used anthelminthic drugs [mebendazole (MEB) and albendazole (ALB)] and one antiprotozoal drug [tinidazole (TNZ)] in Ethiopia. Methods/Principal Findings: A multilevel stratified random sampling design, with the different levels of the Ethiopian supply chain system, geographic areas, and government/privately owned medicine outlets as strata, was used to collect the drug samples via mystery shoppers. The three drugs (106 samples) were collected from 38 drug outlets (government/privately owned) in 7 major cities in Ethiopia between January and March 2012. All samples underwent visual and physical inspection of labeling and packaging before physico-chemical quality testing, and were evaluated against the individual pharmacopoeial monographs for identification, assay/content, dosage uniformity, dissolution, disintegration, and friability. In addition, quality risk was analyzed using failure mode effect analysis (FMEA), and a risk priority number (RPN) was assigned to each quality attribute. A clinically rationalized desirability function was applied to quantify the overall quality of each medicine. Overall, 45.3% (48/106) of the tested samples were substandard, i.e. not meeting the pharmacopoeial quality specifications claimed by their manufacturers. Assay was the quality attribute most often out of specification, with 29.2% (31/106) of the total samples failing. The highest failure rate was observed for MEB (19/42, 45.2%), followed by TNZ (10/39, 25.6%) and ALB (2/25, 8.0%). The risk analysis showed that assay (RPN = 512) is the most critical quality attribute, followed by dissolution (RPN = 336). Based on Derringer's desirability function, samples were classified into excellent (14/106, 13%), good (24/106, 23%), acceptable (38/106, 36%), low (29/106, 27%), and bad (1/106, 1%) quality. Conclusions/Significance: This study showed a relatively high prevalence of poor-quality MEB, ALB, and TNZ in Ethiopia: up to 45% if pharmacopoeial acceptance criteria are used in the traditional, dichotomous approach, and 28% if the new risk-based desirability approach is applied. The study identified assay as the most critical quality attribute. The country of origin was the most significant factor determining the poor-quality status of the investigated medicines in Ethiopia.
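
    In standard FMEA, each quality attribute (failure mode) is scored for severity, occurrence, and detectability, and the three scores are multiplied into the risk priority number. The Python sketch below uses that standard formula; the individual factor scores are invented (one hypothetical combination per attribute), and only the resulting RPNs of 512 and 336 come from the study.

    # Standard FMEA: RPN = severity x occurrence x detectability,
    # each typically scored 1-10. Factor scores below are hypothetical;
    # only the resulting RPNs (512, 336) are reported in the study.
    def rpn(severity: int, occurrence: int, detectability: int) -> int:
        return severity * occurrence * detectability

    quality_attributes = {
        "assay":       (8, 8, 8),  # -> 512, most critical attribute
        "dissolution": (8, 7, 6),  # -> 336, second most critical
    }

    for attr, scores in quality_attributes.items():
        print(f"{attr}: RPN = {rpn(*scores)}")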

    Gateway Modeling and Simulation Plan

    This plan institutes direction across the Gateway Program and the Element Projects to ensure that Cross Program M&S are produced in a manner that (1) generates the artifacts required for NASA-STD-7009 compliance, (2) ensures interoperability of M&S exchanged and integrated across the program, and (3) drives integrated development efforts to provide cross-domain integrated simulation of the Gateway elements, space environment, and operational scenarios. This direction is flowed down via contractual enforcement to prime contractors and includes both the GMS requirements specified in this plan and the NASA-STD-7009 derived requirements necessary for compliance. Grounding principles for the management of Gateway Models and Simulations (M&S) are derived from the Columbia Accident Investigation Board (CAIB) report and the Diaz team report, A Renewed Commitment to Excellence. As an outcome of these reports, and in response to Action 4 of the Diaz team report, the NASA Standard for Models and Simulations, NASA-STD-7009, was developed. The standard establishes M&S requirements for development and use activities to ensure proper capture and communication of M&S pedigree and credibility information to Gateway program decision makers. Through the course of the Gateway program life cycle, M&S will be heavily relied upon to conduct analysis, test products, support operations activities, enable informed decision making, and ultimately to certify the Gateway with an acceptable level of risk to crew and mission. To reduce the risk associated with M&S-influenced decisions, this plan applies the NASA-STD-7009 requirements to produce the artifacts that support credibility assessments and to ensure the information is communicated to program management.
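
    The pedigree and credibility information the plan requires can be pictured as structured metadata attached to every model. The Python sketch below is a generic illustration of such a record; the field names and the model itself are invented for this example and are not the factor set defined by NASA-STD-7009.

    # Generic sketch of a pedigree/credibility record attached to a model.
    # Field names are illustrative, not the NASA-STD-7009 factor set.
    from dataclasses import dataclass

    @dataclass
    class ModelPedigree:
        model_name: str
        version: str
        verified: bool           # code verification evidence exists
        validated: bool          # compared against test/flight data
        input_sources: list      # provenance of input data
        known_limitations: list  # domains where results lack credibility

    thermal = ModelPedigree(
        model_name="gateway_thermal_sim",   # hypothetical model name
        version="1.4.2",
        verified=True,
        validated=False,         # flags reduced credibility to reviewers
        input_sources=["element vendor CAD", "orbit environment spec"],
        known_limitations=["no plume heating", "coarse radiator mesh"],
    )

    # A decision maker can check the record before trusting a result.
    if not thermal.validated:
        print(f"{thermal.model_name}: use with caution, not yet validated")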