
    Innovation dynamics and the role of infrastructure

    This report shows how the role of the infrastructure (standards, measurement, accreditation, design and intellectual property) can be integrated into a quantitative model of the innovation system and used to help explain levels of and changes in labour productivity and growth in turnover and employment. The summary focuses on the new results from the project, set out in more detail in Sections 5 and 6. The first two sections of the report provide contextual material on the UK innovation system, the nature and content of infrastructure knowledge and the institutions that provide it. Mixed modes of innovation, the typology of innovation practices developed and applied here, comprises six mixed modes derived from many variables taken from the UK Innovation Survey: investing in intangibles; technology with IP innovating; using codified knowledge; wider (managerial) innovating; market-led innovating; and external process modernising. The composition of the innovation modes, and the approach used to compute them, is set out in more detail in Section 4. Modes can be thought of as the underlying process of innovation: a bundle of activities undertaken jointly by firms, whose working out generates well-known indicators such as new product innovations, R&D spending and accessing external information, which are the partial indicators gathered from the innovation survey itself.
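    The abstract defers the mode-construction step to Section 4, so the following is only a hypothetical sketch of one common way such typologies are built: factor-analysing binary survey indicators and assigning each firm to its dominant latent mode. The toy data, variable count and six-factor choice merely mirror the abstract; the report's actual procedure may differ.

```python
# Hypothetical sketch: deriving latent innovation "modes" from binary
# survey variables via factor analysis, then assigning each firm to its
# dominant mode. Toy data; not the report's actual Section 4 procedure.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Stand-in for the UK Innovation Survey: 500 firms x 20 yes/no indicators
# (e.g. R&D spending, new product introduced, external information sourcing).
X = rng.integers(0, 2, size=(500, 20)).astype(float)

# Extract six latent factors, one per hypothesised mixed mode.
fa = FactorAnalysis(n_components=6, random_state=0)
scores = fa.fit_transform(X)          # firm-level scores on each mode

mode_names = [
    "Investing in intangibles", "Technology with IP innovating",
    "Using codified knowledge", "Wider (managerial) innovating",
    "Market-led innovating", "External process modernising",
]
dominant = scores.argmax(axis=1)      # each firm's strongest mode
for k, name in enumerate(mode_names):
    print(f"{name}: {np.sum(dominant == k)} firms")
```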

    Systems Engineering Leading Indicators Guide, Version 2.0

    The Systems Engineering Leading Indicators Guide editorial team is pleased to announce the release of Version 2.0. Version 2.0 supersedes Version 1.0, which was released in June 2007 and was the result of a project initiated by the Lean Advancement Initiative (LAI) at MIT in cooperation with the International Council on Systems Engineering (INCOSE), Practical Software and Systems Measurement (PSM), and the Systems Engineering Advancement Research Initiative (SEAri) at MIT. A leading indicator is a measure for evaluating how effectively a specific project activity is likely to affect system performance objectives. A leading indicator may be an individual measure, or a collection of measures and associated analysis, that is predictive of future systems engineering performance; systems engineering performance itself could be an indicator of future project execution and system performance. Leading indicators aid leadership in delivering value to customers and end users and help identify interventions and actions to avoid rework and wasted effort. Conventional measures provide status and historical information; leading indicators instead draw on trend information to allow for predictive analysis. By analyzing trends, the outcomes of certain activities can be forecast, and trends yield insight into both the entity being measured and potential impacts on other entities. This provides leaders with the data they need to make informed decisions and, where necessary, take preventative or corrective action during the program in a proactive manner. The Version 2.0 guide adds five new leading indicators to the previous 13, for a new total of 18. The guide addresses feedback from users of the previous version, as well as lessons learned from implementation and industry workshops. The document format has been improved for usability, and several new appendices provide application information and techniques for determining correlations between indicators. Tailoring of the guide for effective use is encouraged. Additional collaborating organizations involved in Version 2.0 include the Naval Air Systems Command (NAVAIR), the US Department of Defense Systems Engineering Research Center (SERC), and the National Defense Industrial Association (NDIA) Systems Engineering Division (SED). Many leading measurement and systems engineering experts from government, industry, and academia volunteered their time to work on this initiative.
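    As a hedged illustration of the trend-based, predictive use of indicators described above (not an excerpt from the guide), the sketch below fits a linear trend to invented monthly observations of one leading indicator, requirements volatility, and projects it forward against an assumed management threshold:

```python
# Illustrative sketch: fit a trend to a leading indicator and project it
# forward to flag the need for corrective action. Data and threshold are
# invented for the example, not taken from the guide.
import numpy as np

months = np.array([1, 2, 3, 4, 5, 6], dtype=float)
volatility = np.array([4.0, 5.5, 5.0, 7.0, 8.5, 9.0])  # % requirements changed

slope, intercept = np.polyfit(months, volatility, deg=1)  # linear trend
forecast_month = 9.0
projected = slope * forecast_month + intercept

THRESHOLD = 10.0  # hypothetical management trigger, in percent
print(f"Trend: {slope:+.2f} %/month; projected at month 9: {projected:.1f}%")
if projected > THRESHOLD:
    print("Projected to exceed threshold -- intervene now, before rework.")
```

    The same base measure used only as a status report would show where the program has been; the forward projection is what makes it a leading indicator.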

    Performance-Based Financing: Report on Feasibility and Implementation Options Final September 2007

    This study examines the feasibility of introducing a performance-related bonus scheme in the health sector. After describing the Tanzanian health context, we define "Performance-Based Financing", examine its rationale and review the evidence on its effectiveness. The following sections systematically assess the potential for applying the scheme in Tanzania. On the basis of the risks and concerns identified, detailed design options and recommendations are set out. The report concludes with a preliminary indication of the costs of such a scheme and recommends a way forward for implementation.

    We prefer the name "Payment for Performance", or "P4P", because what is envisaged is a bonus payment earned by meeting performance targets. The dominant financing for health care delivery would remain grant-based, as at present. There is a strong case for introducing P4P. Its main purpose will be to motivate front-line health workers to improve service delivery performance. In recent years, funding for council health services has increased dramatically without a commensurate increase in health service output. The need to tighten the focus on results is widely acknowledged, as is the need to hold health providers more accountable for performance at all levels, from the local to the national. P4P is expected to encourage CHMTs and health facilities to "manage by results": to identify and address local constraints, and to find innovative ways to raise productivity and reach under-served groups. As well as leveraging more effective use of all resources, P4P will provide a powerful incentive at all levels to make sure that HMIS information is complete, accurate and timely. It is expected to enhance accountability between health facilities and their managers/governing committees, as well as between the Council Health Department and the Local Government Authority. Better performance monitoring will enable the national level to track aggregate progress against goals and will assist in identifying under-performers requiring remedial action.

    We recommend a P4P scheme that provides a monetary team bonus, dependent on a whole facility reaching facility-specific service delivery targets. The bonus would be paid quarterly and shared equally among health staff. It should target all government health facilities at the council level, and should also reward the CHMT for "whole council" performance. All participating facilities and councils are therefore rewarded for improvement rather than for absolute levels of performance. Performance indicators should number no more than 10, should represent a "balanced scorecard" of basic health service delivery, should present no risk of "perverse incentives" and should be readily measurable. The same set of indicators should be used by all. CHMTs would assist facilities in setting targets and monitoring performance; RHMTs would play a similar role with respect to CHMTs. The Council Health Administration would provide a "check and balance" to avoid target manipulation and to verify the bonus payments due.

    The major constraint on feasibility is the poor state of health information. Our study confirmed the findings of previous ones, observing substantial omission and error in reports from facilities to CHMTs. We endorse the conclusion of previous reviewers that the main problem lies not with HMIS design but with its functioning. We advocate a particular focus on empowering and enabling the use of information for management by facilities and CHMTs.

    We anticipate that P4P, combined with a major effort in HMIS capacity building at the facility and council level, will deliver dramatic improvements in data quality and completeness. We recommend that the first wave of participating councils be selected on the basis that they can first demonstrate robust and accurate data. We anticipate that P4P for facilities will not deliver the desired benefits unless facilities have a greater degree of control to solve their own problems. We therefore propose, as a prior and essential condition, the introduction of petty cash imprests for all health facilities. We believe that such a measure would bring major benefits even to facilities that have not yet started P4P. It should also empower Health Facility Committees to play a more meaningful role in health service governance at the local level.

    We recommend to Government that P4P bonuses, as described here, be implemented across Mainland Tanzania on a phased basis. The main constraint on the pace of roll-out is the time required to bring information systems up to standard. Councils that are not yet ready to institute P4P should receive an equivalent amount of money, to be used as general revenue to finance their comprehensive council health plans. We also recommend that up-to-date reporting on performance against service delivery indicators be made a mandatory requirement for all councils, and be agreed as a standard requirement for the Joint Annual Health Sector Review.

    P4P can also be applied on the "demand side", for example to encourage women to present in case of obstetric emergencies. There is a strong empirical evidence base from other countries demonstrating that such incentives can work. We recommend a separate policy decision on whether or not to introduce demand-side incentives; in our view, they are sufficiently promising to be tried out on an experimental basis.

    When taken to national scale (all councils, excepting higher-level hospitals), the scheme would require annual budgetary provision of about 6 billion shillings for bonus payments. This is equivalent to 1% of the national health budget, or about 3% of budgetary resources for health at the council level. We anticipate that design and implementation costs would amount to about 5 billion shillings over 5 years, the majority of this being devoted to HMIS strengthening at the facility level across the whole country.
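    To make the proposed bonus rule concrete, here is a minimal sketch assuming the design summarised above: a quarterly pool released only when a facility meets every one of its (at most 10) facility-specific targets, then split equally among staff. Indicator names, target values and the pool size are illustrative, not figures from the report.

```python
# Hedged sketch of the proposed P4P bonus rule as described in the abstract.
# All names and numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    target: float    # facility-specific improvement target
    achieved: float  # value verified by the Council Health Administration

def quarterly_bonus_share(indicators, bonus_pool_tsh, n_staff):
    """Per-staff bonus: the full pool split equally if every facility
    target is met this quarter, otherwise nothing."""
    if len(indicators) > 10:
        raise ValueError("The proposal caps the scheme at 10 indicators.")
    all_met = all(i.achieved >= i.target for i in indicators)
    return bonus_pool_tsh / n_staff if all_met else 0.0

facility = [
    Indicator("Facility deliveries", target=0.55, achieved=0.61),
    Indicator("Children fully immunised", target=0.80, achieved=0.83),
    Indicator("Outpatient visits per capita", target=0.90, achieved=0.95),
]
print(quarterly_bonus_share(facility, bonus_pool_tsh=600_000, n_staff=12))
```

    Because targets are facility-specific, the all-or-nothing rule rewards improvement against a facility's own baseline rather than absolute performance, consistent with the recommendation above.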

    Systems Engineering Leading Indicators Guide, Version 1.0

    The indicator set in the Systems Engineering Leading Indicators guide reflects the initial subset of possible indicators considered to be the highest priority for evaluating effectiveness before the fact. A leading indicator is a measure for evaluating the effectiveness of how a specific activity is applied on a program, in a manner that provides information about impacts likely to affect the system performance objectives. A leading indicator may be an individual measure, or a collection of measures, predictive of future system performance before the performance is realized. Leading indicators aid leadership in delivering value to customers and end users, while assisting in taking interventions and actions to avoid rework and wasted effort. The guide was initiated as a result of the June 2004 Air Force/LAI Workshop on Systems Engineering for Robustness and supports systems engineering revitalization. Over several years, a group of industry, government, and academic stakeholders worked to define and validate a set of thirteen indicators for evaluating the effectiveness of systems engineering on a program. Released as Version 1.0 in June 2007, the leading indicators provide predictive information for making informed decisions and, where necessary, taking preventative or corrective action during the program in a proactive manner. While the leading indicators appear similar to existing measures and often use the same base information, the difference lies in how the information is gathered, evaluated, interpreted and used to provide a forward-looking perspective.

    Automated unique input output sequence generation for conformance testing of FSMs

    This paper describes a method for automatically generating unique input output (UIO) sequences for FSM conformance testing. UIOs are used in conformance testing to verify the end state of a transition sequence. UIO sequence generation is represented as a search problem, and genetic algorithms are used to search this space. Empirical evidence indicates that the proposed method yields considerably better results (up to 62% better) than random UIO sequence generation.
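    A minimal sketch of the idea as described in the abstract: candidate input sequences are evolved with a genetic algorithm, and a sequence is a UIO for a state when its input/output behaviour separates that state from every other state. The toy Mealy machine, fitness function and GA settings below are illustrative assumptions, not the paper's exact formulation.

```python
# GA search for a UIO sequence on a toy Mealy machine (illustrative only).
import random

# (state, input) -> (next_state, output)
TRANS = {
    ("s0", "a"): ("s1", "0"), ("s0", "b"): ("s0", "1"),
    ("s1", "a"): ("s2", "1"), ("s1", "b"): ("s0", "0"),
    ("s2", "a"): ("s0", "0"), ("s2", "b"): ("s1", "1"),
}
STATES, INPUTS = ["s0", "s1", "s2"], ["a", "b"]

def response(state, seq):
    """Output string produced by running the input sequence from a state."""
    out = []
    for x in seq:
        state, o = TRANS[(state, x)]
        out.append(o)
    return "".join(out)

def fitness(seq, target="s0"):
    # How many other states does this sequence separate from the target?
    want = response(target, seq)
    return sum(response(s, seq) != want for s in STATES if s != target)

def ga(length=4, pop=30, gens=50, target="s0"):
    popn = [[random.choice(INPUTS) for _ in range(length)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda s: fitness(s, target), reverse=True)
        if fitness(popn[0], target) == len(STATES) - 1:
            return "".join(popn[0])             # found a UIO for the target
        parents = popn[: pop // 2]              # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # point mutation
                child[random.randrange(length)] = random.choice(INPUTS)
            children.append(child)
        popn = parents + children
    return None

print(ga())  # e.g. "aa..": s0 answers "01.." while s1, s2 answer differently
```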

    Understanding requirements engineering process: a challenge for practice and education

    Reviews of the state of professional practice in Requirements Engineering (RE) stress that the RE process is both complex and hard to describe, and suggest there is a significant difference between competent and "approved" practice. "Approved" practice is reflected by (and in all likelihood has its genesis in) RE education, so that the knowledge and skills taught to students do not match the knowledge and skills required and applied by competent practitioners. A new understanding of the RE process has emerged from our recent study: RE is revealed as inherently creative, involving cycles of building and major reconstruction of the models developed, significantly different from the systematic and smoothly incremental process generally described in the literature. The process is better characterised as highly creative, opportunistic and insight-driven. This mismatch between approved and actual practice poses a challenge to RE education: RE requires insight and creativity as well as technical knowledge. Traditional learning models applied to RE focus, however, on notation and prescribed processes acquired through repetition. We argue that traditional learning models fail to support the learning required for RE, and propose both a new model based on cognitive flexibility and a framework for RE education to support this model.

    The importance of understanding computer analyses in civil engineering

    Sophisticated computer modelling systems are widely used in civil engineering analysis. This paper takes examples from structural engineering, environmental engineering, flood management and geotechnical engineering to illustrate the need for civil engineers to be competent in the use of computer tools. An understanding of a model's scientific basis, appropriateness, numerical limitations, validation, verification and propagation of uncertainty is required before applying its results. A review of education and training is also suggested to ensure engineers are competent at using computer modelling systems, particularly in the context of risk management.
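    As a small illustration of one of these competences, propagation of uncertainty, the sketch below pushes Monte Carlo samples of uncertain inputs through a textbook beam-deflection formula. The formula and the parameter spreads are standard assumptions, not examples taken from the paper.

```python
# Monte Carlo uncertainty propagation through a simple structural model:
# midspan deflection of a simply supported beam under a central point load,
# delta = P * L**3 / (48 * E * I). All spreads are illustrative.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

P = rng.normal(10e3, 0.5e3, N)     # load [N], ~5% uncertain
L = rng.normal(6.0, 0.01, N)       # span [m], well controlled
E = rng.normal(200e9, 10e9, N)     # steel modulus [Pa], ~5% uncertain
I = rng.normal(8.0e-5, 4.0e-6, N)  # second moment of area [m^4]

delta = P * L**3 / (48 * E * I)
mean, std = delta.mean(), delta.std()
print(f"deflection: {mean*1e3:.2f} mm +/- {std*1e3:.2f} mm "
      f"({100*std/mean:.1f}% spread from ~5% input uncertainties)")
```

    The point of such an exercise is that a single deterministic output hides the spread: an engineer should know how input uncertainty propagates before acting on the model's result.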

    Measuring Software Process: A Systematic Mapping Study

    Context: Measurement is essential to reach predictable performance and high-capability processes. It provides support for better understanding, evaluation, management, and control of the development process and project, as well as the resulting product. It also enables organizations to improve and predict their process performance, which places them in a better position to make appropriate decisions. Objective: This study aims to understand the measurement of the software development process: to identify studies, create a classification scheme based on the identified studies, and then map the studies into the scheme to answer the research questions. Method: Systematic mapping is the selected research methodology for this study. Results: A total of 462 studies are included and classified into four topics with respect to their focus, and into three groups based on publishing date. Five abstractions and 64 attributes were identified; 25 methods/models and 17 contexts were distinguished. Conclusion: Capability and performance were the most measured process attributes, while effort and performance were the most measured project attributes. Goal Question Metric and Capability Maturity Model Integration were the main methods and models used in the studies, whereas agile/lean development and small/medium-sized enterprises were the most frequently identified research contexts. Funding: Ministerio de Economía y Competitividad TIN2013-46928-C3-3-R, TIN2016-76956-C3-2-R, TIN2015-71938-RED.