266,927 research outputs found

    Identifying and Consolidating Knowledge Engineering Requirements

    Full text link
    Knowledge engineering is the process of creating and maintaining knowledge-producing systems. Throughout the history of computer science and AI, knowledge engineering workflows have been widely used because high-quality knowledge is assumed to be crucial for reliable intelligent agents. However, the landscape of knowledge engineering has changed, presenting four challenges: unaddressed stakeholder requirements, mismatched technologies, adoption barriers for new organizations, and misalignment with software engineering practices. In this paper, we propose to address these challenges by developing a reference architecture using a mainstream software methodology. By studying the requirements of different stakeholders and eras, we identify 23 essential quality attributes for evaluating reference architectures. We assess three candidate architectures from recent literature against these attributes. Finally, we discuss the next steps towards a comprehensive reference architecture, including prioritizing quality attributes, integrating components with complementary strengths, and supporting missing socio-technical requirements. As this endeavor requires a collaborative effort, we invite all knowledge engineering researchers and practitioners to join us.

    Characterizing a Detection Strategy for Transient Faults in HPC

    Get PDF
    Handling faults is a growing concern in HPC; greater variety, higher error rates, longer detection intervals and silent faults are expected in the future. It is projected that, in exascale systems, errors will occur several times a day and will propagate, generating failures that range from process crashes to corrupted results, including undetected errors in applications that keep running. In this article, we analyze a methodology for transient fault detection (called SMCV) for MPI applications. The methodology is based on software replication, and it assumes that data corruption becomes apparent by producing different messages between replicas. SMCV makes it possible to obtain reliable executions with correct results or, at least, to lead the system to a safe stop. This work presents a complete characterization, formally defining the behavior in the presence of faults and experimentally validating it in order to show its efficacy and viability for detecting transient faults in HPC systems. Red de Universidades con Carreras en Informática (RedUNCI).
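The core idea behind replication-based detection can be sketched in a few lines. This is not the SMCV implementation (which duplicates MPI processes and compares the messages they send); it is a minimal, assumed illustration of the same principle: run the computation on two replicas, compare the results before using them, and stop safely on a mismatch.

```python
def detect_transient_fault(compute, data):
    """Run `compute` on two replicas and compare their results.

    Returns the result if the replicas agree; raises RuntimeError
    (the "safe stop") if they diverge, signaling a possible
    transient fault that corrupted one replica's data.
    """
    result_a = compute(data)        # replica A
    result_b = compute(list(data))  # replica B, on its own copy of the input
    if result_a != result_b:
        raise RuntimeError("replica mismatch: possible transient fault, stopping safely")
    return result_a

# Usage: a fault-free computation passes the comparison.
checksum = detect_transient_fault(sum, [1, 2, 3, 4])
print(checksum)  # 10
```

The cost of this scheme is the duplicated execution; its benefit is that silent data corruption, which would otherwise propagate into corrupted results, is surfaced at the comparison point.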

    Efficient ICT for efficient smart grids

    Get PDF
    In this extended abstract, the need for efficient and reliable ICT is discussed. Efficiency of ICT concerns not only energy-efficient ICT hardware, but also efficient algorithms, efficient design methods, efficient networking infrastructures, etc. Efficient and reliable ICT is a prerequisite for efficient Smart Grids. Unfortunately, efficiency and reliability have not always received the proper attention in the ICT domain in the past.

    A document-like software visualization method for effective cognition of c-based software systems

    Get PDF
    It is clear that maintenance is a crucial and very costly process in the software life cycle. Nowadays, many software systems, particularly legacy systems, are maintained repeatedly as new requirements arise. One important way to understand a software system before it is maintained is through its documentation, particularly the system documentation. Unfortunately, not all software systems are accompanied by reliable, up-to-date documents; in such cases, the source code is the only reliable source for programmers. A number of studies have been carried out to assist cognition based on source code. One approach is tool automation via reverse engineering, in which source code is parsed and the extracted information is visualized using certain visualization methods. Most software visualization methods use a graph as the main element to represent extracted software artifacts. Nevertheless, current methods tend to produce complicated graphs and do not grant an explicit, document-like re-documentation environment. Hence, this thesis proposes a document-like software visualization method called DocLike Modularized Graph (DMG). The method is realized in a prototype tool named DocLike Viewer that targets C-based software systems. The main contribution of the DMG method is to provide an explicit structural re-documentation mechanism in the software visualization tool. In addition, the DMG method provides more levels of information abstraction via a less complex graph that includes inter-module dependencies, inter-program dependencies, procedural abstraction and parameter passing. The DMG method was empirically evaluated based on the Goal/Question/Metric (GQM) paradigm, and the findings show that the method can improve productivity and quality in the aspect of cognition or program comprehension.
    A usability study was also conducted, and DocLike Viewer received the most positive responses from the software practitioners.
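The reverse-engineering step that such tools build on can be illustrated with a toy extractor. This is an assumed sketch, not the DocLike Viewer parser (which also recovers procedural abstraction and parameter passing): it scans C sources for local `#include` directives and builds a simple inter-module dependency map of the kind a visualization would render.

```python
import re

# Matches local includes such as:  #include "util.h"
INCLUDE_RE = re.compile(r'#include\s+"([\w./]+)"')

def module_dependencies(sources):
    """Map each module name to the sorted list of local headers it includes.

    `sources` maps file names to their source text.
    """
    graph = {}
    for name, text in sources.items():
        graph[name] = sorted(set(INCLUDE_RE.findall(text)))
    return graph

# Usage with two hypothetical modules:
sources = {
    "main.c": '#include "util.h"\n#include "io.h"\nint main(void) { return 0; }',
    "util.c": '#include "util.h"\nint helper(void) { return 1; }',
}
print(module_dependencies(sources))
# {'main.c': ['io.h', 'util.h'], 'util.c': ['util.h']}
```

A real extractor would use a proper C parser rather than regular expressions, but the output structure, a dependency graph keyed by module, is the artifact that graph-based visualization methods display.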

    Production of Reliable Flight Crucial Software: Validation Methods Research for Fault Tolerant Avionics and Control Systems Sub-Working Group Meeting

    Get PDF
    The state of the art in the production of crucial software for flight control applications was addressed. The association between reliability metrics and software is considered. Thirteen software development projects are discussed. A short-term need for research in the areas of tool development and software fault tolerance was indicated. For the long term, research in formal verification or proof methods was recommended. Formal specification and software reliability modeling were recommended as topics for both short- and long-term research.

    A methodology for producing reliable software, volume 1

    Get PDF
    An investigation into the areas that affect the production of reliable software, including automated verification tools, software modeling, testing techniques, structured programming, and management techniques, is presented. This final report contains the results of the investigation, an analysis of each technique, and the definition of a methodology for producing reliable software.

    Organic agriculture and climate change mitigation - A report of the Round Table on Organic Agriculture and Climate Change

    Get PDF
    Summary and next steps: Participants of the workshop drew on their discussions and on the input of guest speakers to synthesize a set of conclusions that can guide future activities concerning LCAs and other efforts to identify and quantify the potential contributions of organic agriculture to climate change mitigation.
    - LCA is the best tool for measuring GHG emissions related to agricultural products.
    - There is a risk of oversimplification when focusing on climate change as a single environmental impact category.
    - Farm production and transport (at least for plant products) are important hotspots for agricultural products.
    - Studies have shown no remarkable difference in GHG emissions between organic and conventional production, but soil carbon changes have traditionally not been included, and these can have a major impact, especially for plant products.
    - The challenges of LCA of organic products need to be addressed: accounting for carbon sequestration and for interactions in farming systems, including the environmental costs of manure.
    - Attempts should be made to secure a consistent LCA methodology for agricultural products, including organic products.

    How to Model Condensate Banking in a Simulation Model to Get Reliable Forecasts? Case Story of Elgin/Franklin

    Get PDF
    Imperial Users only

    Software Measurement Activities in Small and Medium Enterprises: an Empirical Assessment

    Get PDF
    An empirical study evaluating the implementation of measurement/metric programs in software companies in one region of Turkey is presented. The research questions were discussed and validated with the help of senior software managers (more than 15 years' experience) and then used to interview a variety of small and medium-sized software companies in Ankara. Observations show a common reluctance and lack of interest in utilizing measurements/metrics, despite the fact that they are well known in the industry. A side product of this research is that internationally recognized standards such as ISO and CMMI are pursued only if they are part of project or job requirements; without such requirements, introducing those standards remains a long-term target for increasing quality.