    A set of metrics for characterizing simulink model comprehension

    Simulink is a powerful tool for embedded systems, playing a key role in dynamic systems modeling. However, far too little attention has been paid to the quality of Simulink models, and no research has been found linking model complexity to the comprehension quality of Simulink models. The aim of this paper is to define a set of metrics to support the characterization of Simulink models and to investigate their relationship with model comprehension. For this study, we performed a controlled experiment using two versions of a robotic Simulink model: one constructed through an ad hoc development approach and the other through a re-engineered development approach. The results of the experiment show that the re-engineered model is more comprehensible than the ad hoc model. In summary, the set of metrics collected from each version of the Simulink model suggests an inverse relationship with model comprehension, i.e., the lower the metrics, the greater the model comprehension.
    Facultad de Informática
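    As a rough illustration of what such size/complexity metrics could look like, the sketch below counts blocks, connections, and subsystem nesting depth over a toy nested-dict representation of a model; both the representation and the metric definitions are assumptions for illustration, not the paper's actual metric set.

    ```python
    # Illustrative sketch: simple size/complexity metrics over a toy
    # nested-dict model representation (not the paper's actual metrics).

    def model_metrics(model, depth=1):
        """Return (block_count, connection_count, max_nesting_depth)."""
        blocks = len(model.get("blocks", []))
        connections = len(model.get("connections", []))
        max_depth = depth
        for subsystem in model.get("subsystems", []):
            b, c, d = model_metrics(subsystem, depth + 1)
            blocks += b
            connections += c
            max_depth = max(max_depth, d)
        return blocks, connections, max_depth

    toy_model = {
        "blocks": ["Gain", "Sum", "Integrator"],
        "connections": [("Gain", "Sum"), ("Sum", "Integrator")],
        "subsystems": [
            {"blocks": ["Saturation"], "connections": [], "subsystems": []},
        ],
    }
    print(model_metrics(toy_model))  # (4, 2, 2)
    ```

    Under the paper's reported inverse relationship, lower values of such counts would correspond to greater model comprehension.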

    Hybrid dynamic energy and thermal management in heterogeneous embedded multiprocessor SoCs


    Dependable Computing on Inexact Hardware through Anomaly Detection.

    Reliability of transistors is on the decline as transistors continue to shrink in size, and aggressive voltage scaling is making the problem even worse. Scaled-down transistors are more susceptible to transient faults as well as permanent in-field hardware failures. In order to continue to reap the benefits of technology scaling, it has become imperative to tackle the challenges arising from the decreasing reliability of devices for the mainstream commodity market. Along with the worsening reliability, achieving energy efficiency and performance improvement through scaling is providing diminishing marginal returns. More than at any other time in history, the semiconductor industry faces the crossroads of unreliability and the need to improve energy efficiency. These challenges of technology scaling can be tackled by categorizing the target applications into two classes: traditional applications, which have relatively strict correctness requirements on outputs, and an emerging class of soft applications, from domains such as multimedia, machine learning, and computer vision, that are inherently tolerant of some inaccuracy. Traditional applications can be protected against hardware failures by low-cost detection and protection methods, while soft applications can trade off output quality to achieve better performance or energy efficiency. For traditional applications, I propose an efficient, software-only application analysis and transformation solution to detect data-flow and control-flow transient faults. The intelligence of the data-flow solution lies in its use of dynamic application information such as control flow, memory, and value profiling. The control-flow protection technique achieves its efficiency by simplifying signature calculations in each basic block and by performing checking at a coarse-grain level. For soft applications, I develop a quality-control technique that employs continuous, lightweight checkers to ensure that the approximation is controlled and the application output is acceptable. Overall, I show that the use of low-cost checkers to produce dependable results on commodity systems constructed from inexact hardware components is efficient and practical.
    PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113341/1/dskhudia_1.pd
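    The signature-based control-flow checking mentioned above can be sketched as follows. The block signatures, the XOR update rule, and the placement of the check are illustrative assumptions, not the thesis's actual scheme; real implementations embed the signature updates in the compiled program itself.

    ```python
    # Minimal sketch of signature-based control-flow checking (illustrative;
    # the signature function and check placement are assumptions, not the
    # thesis's actual scheme).

    # Static signatures assigned to each basic block at compile time.
    BLOCK_SIG = {"entry": 0b0001, "loop": 0b0010, "body": 0b0100, "exit": 0b1000}

    class SignatureChecker:
        def __init__(self, start_block):
            self.sig = BLOCK_SIG[start_block]  # runtime signature register
            self.block = start_block

        def transition(self, dest):
            # Runtime update: XOR with the precomputed source/destination
            # difference, so a legal transfer lands on dest's signature.
            self.sig ^= BLOCK_SIG[self.block] ^ BLOCK_SIG[dest]
            self.block = dest

        def check(self):
            # Coarse-grain check (e.g., at function exit): the runtime
            # signature must match the current block's static signature.
            return self.sig == BLOCK_SIG[self.block]

    checker = SignatureChecker("entry")
    for dest in ["loop", "body", "loop", "exit"]:  # a legal execution path
        checker.transition(dest)
    print(checker.check())  # True; a fault corrupting self.sig yields False
    ```

    Checking only at coarse-grain points, rather than in every basic block, is what keeps the runtime overhead of such schemes low.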

    The 1990 progress report and future plans

    This document describes the progress and plans of the Artificial Intelligence Research Branch (RIA) at ARC in 1990. Activities span a range from basic scientific research to engineering development and fielded NASA applications, particularly those applications enabled by basic research carried out at RIA. Work is conducted in-house and through collaborative partners in academia and industry. Our major focus is on a limited number of research themes with a dual commitment to technical excellence and proven applicability to NASA's short-, medium-, and long-term problems. RIA acts as the Agency's lead organization for research aspects of artificial intelligence, working closely with a second research laboratory at JPL and AI applications groups at all NASA centers.

    Small business innovation research. Abstracts of 1988 phase 1 awards

    Non-proprietary proposal abstracts of Phase 1 Small Business Innovation Research (SBIR) projects supported by NASA are presented. Projects in the fields of aeronautical propulsion, aerodynamics, acoustics, aircraft systems, materials and structures, teleoperators and robots, computer sciences, information systems, data processing, spacecraft propulsion, bioastronautics, satellite communication, and space processing are covered.

    A Survey of Fault-Tolerance Techniques for Embedded Systems from the Perspective of Power, Energy, and Thermal Issues

    Relentless technology scaling has provided a significant increase in processor performance, but it has also had adverse impacts on system reliability. In particular, technology scaling increases the processor's susceptibility to radiation-induced transient faults. Moreover, with the discontinuation of Dennard scaling, technology scaling increases power densities, and thereby temperatures, on the chip. High temperature, in turn, accelerates transistor aging mechanisms, which may ultimately lead to permanent faults on the chip. To assure reliable system operation despite these potential reliability concerns, fault-tolerance techniques have emerged. Specifically, fault-tolerance techniques employ some kind of redundancy to satisfy specific reliability requirements. However, integrating fault-tolerance techniques into real-time embedded systems complicates preserving timing constraints. As a remedy, many task mapping/scheduling policies have been proposed that consider the integration of fault-tolerance techniques and enforce both timing and reliability guarantees for real-time embedded systems. More advanced techniques additionally aim at minimizing power and energy while satisfying timing and reliability constraints. Recently, some scheduling techniques have started to tackle a new challenge: the temperature increase induced by employing fault-tolerance techniques. These emerging techniques aim at satisfying temperature constraints besides timing and reliability constraints. This paper provides an in-depth survey of the emerging research efforts that exploit fault-tolerance techniques while considering timing, power/energy, and temperature from the real-time embedded systems' design perspective. In particular, the task mapping/scheduling policies for fault-tolerant real-time embedded systems are reviewed and classified according to their considered goals and constraints. Moreover, the employed fault-tolerance techniques, application models, and hardware models are considered as additional dimensions of the presented classification. Lastly, this survey gives deep insights into the main achievements and shortcomings of the existing approaches and highlights the most promising ones.
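    As a toy illustration of the kind of analysis such scheduling policies build on, the sketch below applies the classic utilization-based EDF schedulability test to a periodic task set whose jobs are re-executed k times as a fault-tolerance redundancy. The task set and the test are illustrative assumptions, not taken from any specific surveyed technique.

    ```python
    # Toy illustration: utilization-based EDF schedulability test for
    # periodic tasks hardened by k-fold re-execution (assumed example,
    # not a technique from the survey itself).

    def edf_schedulable(tasks, reexecutions=1):
        """tasks: list of (wcet, period) pairs; each job runs `reexecutions`
        times. Under preemptive EDF with implicit deadlines, the task set
        is schedulable iff total utilization does not exceed 1."""
        utilization = sum(reexecutions * wcet / period for wcet, period in tasks)
        return utilization <= 1.0

    tasks = [(1.0, 10.0), (2.0, 20.0), (3.0, 30.0)]  # (WCET, period) pairs
    print(edf_schedulable(tasks, reexecutions=1))  # True  (U = 0.30)
    print(edf_schedulable(tasks, reexecutions=3))  # True  (U = 0.90)
    print(edf_schedulable(tasks, reexecutions=4))  # False (U = 1.20)
    ```

    The example makes the survey's central tension concrete: each added re-execution multiplies utilization (and energy and heat), so redundancy that improves reliability can break timing, power, or temperature constraints.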

    NASA SBIR abstracts of 1991 phase 1 projects

    The objectives of 301 projects placed under contract by the Small Business Innovation Research (SBIR) program of the National Aeronautics and Space Administration (NASA) are described. These projects were selected competitively from among proposals submitted to NASA in response to the 1991 SBIR Program Solicitation. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 301, in order of its appearance in the body of the report. Appendixes provide additional information about the SBIR program and permit cross-referencing of the 1991 Phase 1 projects by company name, location by state, principal investigator, NASA Field Center responsible for management of each project, and NASA contract number.
