
    IEEE Standard 1500 Compliance Verification for Embedded Cores

    Core-based design and reuse are the two key elements of efficient system-on-chip (SoC) development. Unfortunately, they also introduce new challenges in SoC testing, such as core test reuse and the need for a common test infrastructure that works with cores originating from different vendors. The IEEE 1500 Standard for Embedded Core Testing addresses these issues by proposing a flexible hardware test wrapper architecture for embedded cores, together with a core test language (CTL) used to describe the implemented wrapper functionalities. Several intellectual property providers have already announced IEEE Standard 1500 compliance for both existing and future design blocks. In this paper, we address the problem of guaranteeing the compliance of a wrapper architecture and its CTL description with the IEEE 1500 Standard. This step is mandatory in order to fully trust the wrapper functionalities when applying test sequences to the core. We present a systematic methodology for building a verification framework for IEEE Standard 1500 compliant cores, allowing core providers and/or integrators to verify the compliance of their products (sold or purchased) with the standard.
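
    The standard's mandatory bypass behaviour gives a flavour of what such a compliance check must establish. Below is a minimal behavioural sketch (Python, not from the paper): a toy wrapper serial port model and a check that, after loading WS_BYPASS into the wrapper instruction register, data crosses from WSI to WSO through exactly one flip-flop. The opcode value and register widths are illustrative assumptions.

```python
# Minimal behavioural sketch of an IEEE 1500-style wrapper serial port.
# The 3-bit opcode for WS_BYPASS and the register widths are illustrative
# assumptions, not values mandated by the standard.

class WrapperModel:
    WS_BYPASS = "111"  # assumed opcode for the mandatory bypass instruction

    def __init__(self, boundary_bits=8):
        self.wir = "000"                # wrapper instruction register (WIR)
        self.wby = "0"                  # 1-bit wrapper bypass register (WBY)
        self.wbr = "0" * boundary_bits  # wrapper boundary register (WBR)

    def shift_instruction(self, opcode):
        """Shift an opcode into the WIR, one bit per WRCK cycle."""
        for bit in opcode:
            self.wir = self.wir[1:] + bit

    def shift_data(self, wsi_bits):
        """Shift bits through the register the current WIR selects;
        return the stream observed on WSO."""
        reg = "wby" if self.wir == self.WS_BYPASS else "wbr"
        wso = []
        for bit in wsi_bits:
            chain = getattr(self, reg)
            wso.append(chain[-1])             # WSO is driven by the last flop
            setattr(self, reg, bit + chain[:-1])
        return "".join(wso)


def check_bypass_latency(wrapper):
    """Compliance-style check: with WS_BYPASS loaded, WSI data must reach
    WSO after exactly one cycle, i.e. through a single flip-flop."""
    wrapper.shift_instruction(WrapperModel.WS_BYPASS)
    pattern = "10110"
    wso = wrapper.shift_data(pattern + "0")  # pattern, then one flush bit
    assert wso[1:] == pattern, "bypass path is not a single flip-flop"


check_bypass_latency(WrapperModel())
print("WS_BYPASS latency check passed")
```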

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.

    High-level verification flow for a high-level synthesis-based digital logic design

    Abstract. High-level synthesis (HLS) is a method for generating a register-transfer level (RTL) hardware description of a digital logic design from high-level languages, such as C/C++/SystemC or MATLAB. The performance and productivity benefits of HLS stem from the untimed, high-abstraction-level input languages. Another advantage is that design and verification can focus on the features and high-level architecture instead of the low-level implementation details. The goal of this thesis was to define and implement a high-level verification (HLV) flow for an HLS design written in C++. The HLV flow takes advantage of the performance and productivity of C++ as opposed to hardware description languages (HDLs) and minimises the required RTL verification work. The HLV flow was implemented in the case study of the thesis. The HLS design was verified in a C++ verification environment, and Catapult Coverage was used for pre-HLS coverage closure. Post-HLS verification and coverage closure were done in a Universal Verification Methodology (UVM) environment. The C++ tests used in pre-HLS coverage closure, which reached 90.60% C++ code coverage, were reimplemented in UVM to obtain a high initial RTL coverage without manual RTL code analysis. The pre-HLS C++ design was implemented as a predictor in the UVM testbench to verify the equivalence of the C++ and RTL implementations and to speed up post-HLS coverage closure. The results of the case study show that the HLV flow is feasible to implement in practice. The flow yields significant performance and productivity gains for verification in the C++ domain when compared to UVM. The UVM reimplementation of this somewhat incomplete set of pre-HLS tests, together with formal exclusions, resulted in an initial post-HLS coverage of 96.90%. The C++ predictor implementation was a valuable tool in post-HLS coverage closure. A total of four weeks of coverage work in the pre- and post-HLS phases was required to reach 99% RTL coverage; this does not include the time required to build the C++ and UVM verification environments.
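
    The core of the flow described above is reusing the pre-HLS C++ model as a predictor inside the UVM testbench. A minimal sketch of that scoreboard-style equivalence check follows (Python for brevity; the reference model, the RTL stand-in, and the transaction shape are hypothetical, not from the thesis):

```python
# Scoreboard-style equivalence check in the spirit of the flow above:
# a reference model predicts the expected output for every stimulus, and
# each response from the (simulated) RTL is compared against it.
# `golden_model` and `rtl_sim` are hypothetical stand-ins, not thesis code.

import random

def golden_model(sample: int) -> int:
    """Pre-HLS reference behaviour (placeholder arithmetic)."""
    return (sample * 3 + 1) & 0xFF

def rtl_sim(sample: int) -> int:
    """Stand-in for simulating the HLS-generated RTL."""
    return (sample * 3 + 1) & 0xFF

def scoreboard(n_transactions: int = 1000) -> None:
    """Drive random stimuli and flag any RTL/reference mismatch."""
    for _ in range(n_transactions):
        stimulus = random.randrange(256)
        expected = golden_model(stimulus)  # predictor output
        actual = rtl_sim(stimulus)         # monitored DUT output
        assert actual == expected, (
            f"mismatch for stimulus {stimulus}: rtl={actual}, ref={expected}")
    print(f"{n_transactions} transactions matched the predictor")

scoreboard()
```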

    Search based software engineering: Trends, techniques and applications

    In the past five years there has been a dramatic increase in work on Search-Based Software Engineering (SBSE), an approach to Software Engineering (SE) in which Search-Based Optimization (SBO) algorithms are used to address problems in SE. SBSE has been applied to problems throughout the SE lifecycle, from requirements and project planning to maintenance and reengineering. The approach is attractive because it offers a suite of adaptive, automated, and semi-automated solutions in situations typified by large, complex problem spaces with multiple competing and conflicting objectives. This article provides a review and classification of the literature on SBSE. The work identifies research trends and relationships between the techniques applied and the applications to which they have been applied, and highlights gaps in the literature and avenues for further research.
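
    As a concrete flavour of SBSE, the sketch below applies a simple search-based optimizer (greedy hill climbing) to a toy software-engineering problem: selecting a small test subset that maximizes requirement coverage. The coverage data and fitness weighting are invented for illustration, not drawn from the surveyed literature:

```python
# A toy illustration of the SBSE idea: a search-based optimizer applied to
# a software-engineering problem. Greedy hill climbing selects a small test
# subset that maximizes covered requirements; the data is invented.

import random

# test name -> set of requirements it covers (illustrative data)
COVERAGE = {
    "t1": {1, 2, 3}, "t2": {3, 4}, "t3": {5},
    "t4": {1, 5, 6}, "t5": {2, 6}, "t6": {4, 6},
}

def fitness(selection):
    """Reward covered requirements, lightly penalize suite size."""
    covered = set().union(*(COVERAGE[t] for t in selection)) if selection else set()
    return len(covered) - 0.1 * len(selection)

def hill_climb(steps=200):
    """Repeatedly toggle one test in/out, keeping non-worsening moves."""
    current = set(random.sample(sorted(COVERAGE), 2))
    for _ in range(steps):
        t = random.choice(sorted(COVERAGE))
        neighbour = current ^ {t}  # flip one test in or out
        if neighbour and fitness(neighbour) >= fitness(current):
            current = neighbour
    return current

best = hill_climb()
print("selected tests:", sorted(best), "fitness:", fitness(best))
```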

    Autonomics: In Search of a Foundation for Next Generation Autonomous Systems

    The potential benefits of autonomous systems have been driving intensive development of such systems, and of supporting tools and methodologies. However, there are still major issues to be dealt with before such development becomes commonplace engineering practice, with accepted and trustworthy deliverables. We argue that a solid, evolving, publicly available, community-controlled foundation for developing next generation autonomous systems is a must. We discuss what is needed for such a foundation, identify a central aspect thereof, namely decision-making, and focus on three main challenges: (i) how to specify autonomous system behavior and the associated decisions in the face of unpredictable future events and conditions and the inadequacy of current languages for describing these; (ii) how to carry out faithful simulation and analysis of system behavior with respect to rich environments that include humans, physical artifacts, and other systems; and (iii) how to engineer systems that combine executable model-driven techniques with data-driven machine learning techniques. We argue that autonomics, i.e., the study of the unique challenges presented by next generation autonomous systems and research towards resolving them, can introduce substantial contributions and innovations in systems engineering and computer science.

    ADGS-2100 Adaptive Display and Guidance System Window Manager Analysis

    Recent advances in modeling languages have made it feasible to formally specify and analyze the behavior of large system components. Synchronous data flow languages, such as Lustre, SCR, and RSML-e, are particularly well suited to this task, and commercial versions of these tools, such as SCADE and Simulink, are growing in popularity among designers of safety-critical systems, largely due to their ability to automatically generate code from the models. At the same time, advances in formal analysis tools have made it practical to formally verify important properties of these models to ensure that design defects are identified and corrected early in the lifecycle. This report describes how these tools have been applied to the ADGS-2100 Adaptive Display and Guidance Window Manager being developed by Rockwell Collins Inc. This work demonstrates how formal methods can be easily and cost-efficiently used to remove defects early in the design cycle.
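
    The abstract does not reproduce the actual ADGS-2100 properties, so the sketch below (Python, all assumptions) only illustrates the style of analysis: a synchronous window-manager arbiter modelled as a step function and an exhaustive, bounded check of a safety property over every input trace, in the spirit of model checking synchronous data flow models. The priority scheme, trace depth, and reset state are invented:

```python
# Illustrative only: a synchronous window-manager arbiter checked
# exhaustively over all bounded request traces. Priority order, depth,
# and reset state are assumptions, not ADGS-2100 specifics.

from itertools import product

APPS = ("primary_flight", "nav", "maintenance")  # ordered by priority

def step(requests, prev_owner):
    """One synchronous cycle: the highest-priority requester wins the
    display; with no requests, the previous owner keeps it."""
    for app in APPS:
        if requests[app]:
            return app
    return prev_owner

def check_property(depth=6):
    """Enumerate every request trace up to `depth` cycles and verify the
    invariant: a primary-flight request is never denied."""
    cycles = list(product([False, True], repeat=len(APPS)))
    for trace in product(cycles, repeat=depth):
        owner = APPS[0]  # assumed reset state
        for reqs in trace:
            req_map = dict(zip(APPS, reqs))
            owner = step(req_map, owner)
            assert not req_map["primary_flight"] or owner == "primary_flight"
    print(f"property holds on all {len(cycles) ** depth} traces of depth {depth}")

check_property()
```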

    Exploring the adaptive capacity of emergency management using agent based modelling

    This project aimed to explore the suitability of Agent Based Modelling and Simulation (ABMS) technology for assisting planners and policy makers to better understand complex situations with multiple interacting aspects. The technology supports exploration of the impact of different factors on the potential outcomes of a scenario, thus building understanding to inform decision making. To concretise this exploration, a specific simulation tool was developed to explore response capacity around flash flooding in an inner Melbourne suburb, with a focus on sandbag depots as an option to be considered. The three types of activities delivered by this project to achieve its objectives were the development of an agent-based simulation, data collection to inform the development of the simulation, and communication and engagement activities to progress the work. Climate change is an area full of uncertainties, yet sectors such as Emergency Management need to develop plans and policy responses for adaptation to these uncertain futures. Agent Based Modelling and Simulation supports modelling of a complex situation from the bottom up, by modelling the behaviours of individual agents (often representing humans) in various scenarios. By running simulations with different configurations it is possible to explore and analyse a very broad range of potential options, providing a detailed understanding of the potential risks and outcomes of particular alternatives. This project explored the suitability of this technology for use in assessing and developing the capacity of the emergency response sector as it adapts to climate change. A simulation system was developed to explore a particular issue regarding protection of property in a suburb prone to flash flooding; in particular, the option of providing sandbag depots was explored. Simulations indicated that sandbag depots provided by CoPP or VicSES were not, at this time, a viable option. The simulation tool was deemed very useful for demonstrating this to community members as well as to decision makers. An interactive game was also developed to assist in raising community members' awareness of how to sandbag their property using on-site sandbags. The technology was deemed to be of great potential benefit to the sector, and areas for further work to realise this benefit were identified. In addition to developing awareness of useful technology, this project also demonstrated the critical importance of interdisciplinary team work, and of close engagement with stakeholders and end users, if valuable technology uptake is to be realised.
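
    The bottom-up style of ABMS described above can be illustrated with a minimal sketch: household agents race the flood's arrival to collect sandbags from a shared depot, and repeated runs estimate the fraction of properties protected. All numbers (agent count, depot stock, timings) are invented for illustration and are not values from the project:

```python
# A minimal bottom-up simulation in the ABMS style described above.
# Household agents race flood arrival to collect sandbags from a shared
# depot; every numeric value here is an illustrative assumption.

import random

def simulate(households=200, depot_stock=150, flood_at=120):
    """Each household departs at a random minute before the flood and
    draws from the depot if it arrives in time; returns the fraction
    of properties protected."""
    depart = [random.randrange(flood_at) for _ in range(households)]
    travel = [random.randrange(10, 40) for _ in range(households)]  # minutes
    protected = 0
    stock = depot_stock
    for h in range(households):
        arrive = depart[h] + travel[h]
        if arrive < flood_at and stock > 0:
            stock -= 1          # one sandbag allocation per household
            protected += 1
    return protected / households

# explore one configuration by averaging over repeated stochastic runs
runs = [simulate() for _ in range(100)]
print(f"mean protected fraction: {sum(runs) / len(runs):.2f}")
```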

    A Comparative Study on the Definition of GLUE Likelihood Functions for Hydrologic Models

    Thesis (Master's)--Seoul National University Graduate School: College of Engineering, Department of Civil and Environmental Engineering, 2019. 8. Van Thinh Nguyen. A comparative study of the formal and informal likelihood definitions in the GLUE methodology was performed. In the informal likelihood case, although the uncertainty interval was small, at high significance levels it could not cover extreme values, resulting in low statistical reliability. In contrast, the uncertainty interval from the formal likelihood covered the extreme values well, resulting in better statistical reliability. The reason is that the formal likelihood case used the stable distribution, which can accommodate heavy-tailed behaviour, as the descriptor of the model error term.
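
    A compact sketch of the comparison described above, under stated assumptions: a one-parameter toy model stands in for the watershed model, Nash-Sutcliffe efficiency serves as the informal likelihood, and a symmetric stable error model (via scipy.stats.levy_stable) serves as the formal likelihood. All numeric values are illustrative:

```python
# GLUE-style comparison sketch: Monte Carlo parameter sampling, an informal
# likelihood (Nash-Sutcliffe efficiency) versus a formal likelihood built on
# a heavy-tailed stable error model. The one-parameter "model" and every
# numeric value are illustrative assumptions, not from the thesis.

import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
t = np.arange(50)

def model(k):
    """Toy recession hydrograph with decay parameter k."""
    return 10.0 * np.exp(-k * t)

# synthetic observations with heavy-tailed noise
observed = model(0.05) + rng.standard_t(df=2, size=t.size)

def informal_likelihood(sim):
    """Nash-Sutcliffe efficiency, a common informal GLUE measure."""
    return 1.0 - np.sum((observed - sim) ** 2) / np.sum((observed - observed.mean()) ** 2)

def formal_likelihood(sim, alpha=1.5, scale=1.0):
    """Log-likelihood of residuals under a symmetric stable distribution."""
    return levy_stable.logpdf(observed - sim, alpha, 0.0, scale=scale).sum()

# GLUE-style Monte Carlo sampling of the parameter space
ks = rng.uniform(0.01, 0.10, size=80)
sims = [model(k) for k in ks]
L_inf = np.array([informal_likelihood(s) for s in sims])
L_for = np.array([formal_likelihood(s) for s in sims])

# compare the parameter spread retained by each likelihood's top decile
top_inf = ks[L_inf >= np.quantile(L_inf, 0.9)]
top_for = ks[L_for >= np.quantile(L_for, 0.9)]
print("informal top-decile k range:", top_inf.min(), top_inf.max())
print("formal top-decile k range:  ", top_for.min(), top_for.max())
```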