27 research outputs found

    Developing a distributed electronic health-record store for India

    The DIGHT project is addressing the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the more than one billion citizens of India.

    Using embedded hardware monitor cores in critical computer systems

    The integration of FPGA devices into many different architectures and services makes monitoring and real-time detection of errors an important concern in FPGA system design. A monitor is a tool, or a set of tools, that facilitates analytic measurement of a given system. The goal of these observations is usually performance analysis and optimisation, or surveillance of the system. However, System-on-Chip (SoC) based designs leave few points at which to attach external tools such as logic analysers. Thus, an embedded error detection core that allows observation of critical system nodes (such as processor cores and buses) can reinforce the operation of the FPGA-based system and prevent system failures. The core should not interfere with system performance and must ensure timely detection of errors. This thesis investigates how a robust hardware-monitoring module can be efficiently integrated into a target PCI board (with FPGA-based application processing features) that is part of a critical computing system. [Continues.]
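    The abstract describes an embedded core that observes critical nodes (processor cores, buses) and must detect errors within tight time bounds. Purely as an illustration of that monitoring idea, and not of the thesis's actual FPGA design, the following Python sketch models a monitor that polls hypothetical health-check predicates and flags faulty nodes; every node name, check, and timing value here is invented.

```python
# Illustrative software model of an embedded hardware monitor core.
# A real implementation would be an FPGA core observing bus and processor
# signals directly; this sketch only mirrors the polling/detection logic.
import time
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Observation:
    node: str
    ok: bool
    timestamp: float


class MonitorCore:
    """Polls critical nodes and reports errors within a detection deadline."""

    def __init__(self, checks: Dict[str, Callable[[], bool]], deadline_s: float):
        self.checks = checks          # node name -> health-check predicate
        self.deadline_s = deadline_s  # target maximum detection latency
        self.log: List[Observation] = []

    def poll_once(self) -> List[str]:
        """Sample every observed node once; return the nodes found faulty."""
        faulty = []
        start = time.monotonic()
        for node, check in self.checks.items():
            ok = check()
            self.log.append(Observation(node, ok, time.monotonic()))
            if not ok:
                faulty.append(node)
        if time.monotonic() - start > self.deadline_s:
            print("warning: detection deadline exceeded")
        return faulty


# Hypothetical usage: monitor a processor core and a PCI bus status flag.
if __name__ == "__main__":
    status = {"cpu0": True, "pci_bus": True}
    monitor = MonitorCore(
        checks={
            "cpu0": lambda: status["cpu0"],
            "pci_bus": lambda: status["pci_bus"],
        },
        deadline_s=0.001,  # 1 ms detection target
    )
    status["pci_bus"] = False       # inject a fault
    print(monitor.poll_once())      # -> ['pci_bus']
```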

    Service-Based Automation of Software Construction Activities

    The reuse of software units, such as classes, components and services, requires professional knowledge. Today, a multiplicity of different software unit technologies, supporting tools, and related activities are used in reuse processes. Each of these reuse elements may include a high number of variations and may differ in the level and quality of the reuse knowledge it demands. In such an environment of increasing variation and, therefore, an increasing need for knowledge, software engineers must obtain that knowledge to be able to perform software unit reuse activities. Many different reuse activities exist for a software unit; typical knowledge-intensive activities are transformation, integration, and deployment. In addition to the amount of knowledge required for such activities, other difficulties exist. The global industrial environment makes it challenging to identify sources of, and access to, knowledge. Typically, such sources (e.g., repositories) are built to search and retrieve information about software units, not the reuse activity knowledge required for a specific unit. Additionally, the knowledge has to be learned and interpreted by inexperienced software engineers; this interpretation may lead to variations in the reuse result, which can differ from the result intended by the knowledge creator. This makes it difficult to exchange knowledge between software engineers or global teams. Furthermore, the results of reuse activities have to be repeatable and sustainable. In such a scenario, knowledge about software reuse activities has to be exchangeable, and usable by inexperienced software engineers, without the problems mentioned above. The literature shows a lack of techniques to store and subsequently distribute relevant reuse activity knowledge among software engineers. The central aim of this thesis is to enable inexperienced software engineers to use the knowledge required to perform reuse activities without experiencing the aforementioned problems. The reuse activities transformation, integration, and deployment have been selected as the foundation for the research. Based on the construction level of handling a software unit, these activities are called Software Construction Activities (SCAcs) throughout the research. To achieve the aim, specialised software construction activity models have been created and combined with an abstract software unit model. As a result, SCAc knowledge is described and combined with the software unit artefacts needed by the SCAcs. Additionally, management (e.g., the execution of an SCAc) is provided in a service-oriented environment. Because of its focus on reuse activities, an approach that avoids changing the knowledge level of software engineers, and its abstract view of software units and activities, the investigation differs from other approaches that aim to solve the problem of insufficient reuse activity knowledge. The research devised novel abstraction models to describe SCAcs as knowledge models related to the relevant information of software units. The models and the targeted environment were created using standard technologies and, as a result, were easily realised in a real-world environment. Software engineers were able to perform single SCAcs without having previously acquired the necessary knowledge, and the risk of failed reuse decreases because single activities can be performed.
    The analysis of the research results is based on a case study: an example reuse environment was created and tested to prove the operational capability of the approach. The main result of the research is a proven concept enabling inexperienced software engineers to reuse software units by reusing SCAcs. The research shows that the reduction in reuse time and learning effort is significant.
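    The abstract pairs activity knowledge models (SCAcs) with an abstract software unit model so that recorded activity knowledge can be executed on a unit's artefacts. As a rough sketch of that pairing only, and not the thesis's actual metamodel, the Python below defines hypothetical classes for a software unit, a construction step, and a construction activity that replays its steps; all names and fields are illustrative assumptions.

```python
# Minimal sketch: an abstract software unit model combined with a
# Software Construction Activity (SCAc) knowledge model. Class and field
# names are hypothetical illustrations, not the thesis's metamodel.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class SoftwareUnit:
    """Abstract description of a reusable unit (class, component, service)."""
    name: str
    artefacts: Dict[str, str]  # e.g. {"binary": "billing.war"}


@dataclass
class ConstructionStep:
    description: str
    action: Callable[[SoftwareUnit], None]


@dataclass
class ConstructionActivity:
    """Knowledge model for one SCAc: transformation, integration, or deployment."""
    kind: str
    steps: List[ConstructionStep] = field(default_factory=list)

    def execute(self, unit: SoftwareUnit) -> None:
        # Replaying the recorded steps lets an inexperienced engineer perform
        # the activity without first acquiring the underlying knowledge.
        for step in self.steps:
            step.action(unit)


# Hypothetical usage: a one-step deployment activity.
unit = SoftwareUnit("billing-service", {"binary": "billing.war"})
deploy = ConstructionActivity(
    kind="deployment",
    steps=[ConstructionStep("copy artefact to server",
                            lambda u: print(f"deploying {u.artefacts['binary']}"))],
)
deploy.execute(unit)
```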

    Certifications of Critical Systems – The CECRIS Experience

    In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. systems whose malfunctions or failures reach a critical level both in terms of risk to human life and economic impact. Certifications of Critical Systems – The CECRIS Experience documents the main insights on cost-effective verification and validation processes that were gained during work in the European research project CECRIS (Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on the aspects that are most difficult and most important for the current and future critical systems industry: the effective use of methodologies, processes and tools. The CECRIS project took a step forward in the growing field of development, verification, validation and certification of critical systems, focusing on the most demanding aspects of that process. Starting from both the scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, and at setting guidelines to support engineers during the planning of the verification and validation phases.

    Service Quality Assessment for Cloud-based Distributed Data Services

    The issue of less-than-100% reliability and trustworthiness of third-party controlled cloud components (e.g., IaaS and SaaS components from different vendors) may lead to laxity in the QoS guarantees offered by a service-support system S to various applications. An example of S is a replicated data service that handles customer queries with fault-tolerance and performance goals. QoS laxity (i.e., SLA violations) may be inadvertent: say, due to the inability of system designers to model the impact of sub-system behaviors on the deliverable QoS. Sometimes, QoS laxity may even be intentional: say, to reap revenue-oriented benefits by cheating on resource allocations and/or excessive statistical sharing of system resources (e.g., VM cycles, number of servers). Our goal is to assess how well the internal mechanisms of S are geared to offer a required level of service to the applications. We use computational models of S to determine the optimal feasible resource schedules and verify how close the actual system behavior is to a model-computed 'gold standard'. Our QoS assessment methods allow comparing different service vendors (possibly with different business policies) in terms of canonical properties such as elasticity, linearity, isolation, and fairness (analogous to a comparative rating of restaurants). Case studies of cloud-based distributed applications are described to illustrate our QoS assessment methods. Specific systems studied in the thesis are: i) replicated data services where the servers may be hosted on multiple data centers for fault-tolerance and performance reasons; and ii) content delivery networks serving geographically distributed clients, where the content data caches may reside on different data centers. The methods studied in the thesis are useful in various contexts of QoS management and self-configuration in large-scale cloud-based distributed systems that are inherently complex due to size, diversity, and environment dynamicity.
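    The core idea in the abstract is to compare observed system behavior against a model-computed gold standard. As a toy illustration of that comparison only (the thesis's actual models and metrics are not reproduced here), the sketch below scores two hypothetical vendors by their mean throughput shortfall relative to a model-computed optimum; all numbers and the specific metric are invented for the example.

```python
# Toy gold-standard comparison: how much QoS laxity does a vendor's observed
# behavior show relative to a model-computed optimal schedule?
from statistics import mean


def qos_laxity(model_optimal: list[float], observed: list[float]) -> float:
    """Mean relative shortfall of observed throughput vs. the model optimum.

    0.0 means the system matches the gold standard; larger values mean more
    laxity (e.g., under-allocated resources or excessive sharing).
    """
    shortfalls = [
        max(0.0, (opt - obs) / opt)
        for opt, obs in zip(model_optimal, observed)
    ]
    return mean(shortfalls)


# Hypothetical throughput (requests/s) at increasing load levels.
gold_standard = [100.0, 200.0, 400.0, 800.0]   # model-computed feasible optimum
vendor_a      = [ 98.0, 195.0, 360.0, 640.0]   # observed behavior of vendor A
vendor_b      = [100.0, 198.0, 396.0, 792.0]   # observed behavior of vendor B

print(f"vendor A laxity: {qos_laxity(gold_standard, vendor_a):.3f}")
print(f"vendor B laxity: {qos_laxity(gold_standard, vendor_b):.3f}")
```

    The same style of model-vs-observation comparison could be repeated per property (elasticity, linearity, isolation, fairness) to produce the kind of vendor rating the abstract describes.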

    Abstract Model Specification Using the Mobius Modeling Tool

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. Defense Advanced Research Projects Agency, Information Technology Office (DARPA) / DABT63-96-C-0069. National Science Foundation / EIA 99-75019. U of I Only. Restricted to UIUC community.

    A Method to Reduce the Cost of Resilience Benchmarking of Self-Adaptive Systems

    Ensuring the resilience of self-adaptive systems used in critical infrastructure is a concern, as their failure has severe societal and financial consequences. The current growth in the scale and complexity of society's workload demands, and of the systems built to cope with those demands, increases the anxiety surrounding service disruptions. Self-adaptive mechanisms instill dynamic behavior in systems in an effort to improve their resilience to runtime changes that would otherwise result in service disruption or failure, such as faults, errors, and attacks. Thus, the evaluation of a self-adaptive system's resilience is critical to ensure expected operational qualities and elicit trust in its services. However, resilience benchmarking is often overlooked or avoided due to the high cost of evaluating the runtime behavior of large and complex self-adaptive systems against an almost infinite number of possible runtime changes. Researchers have focused on techniques to reduce the overall cost of benchmarking while ensuring the comprehensiveness of the evaluation, as testing costs have been found to account for 50 to 80% of total system costs. These test suite minimization techniques include the removal of irrelevant, redundant, and repetitive test cases to ensure that only relevant tests that adequately elicit the expected system responses are enumerated. However, these approaches require that an exhaustive test suite be defined first and the irrelevant tests then be filtered out, potentially negating any cost savings. This dissertation provides a new approach to defining a resilience changeload for self-adaptive systems by incorporating goal-oriented requirements engineering techniques to extract system information and guide the identification of relevant runtime changes. The approach constructs a goal refinement graph consisting of the system's refined goals, runtime actions, self-adaptive agents, and underlying runtime assumptions, which is used to identify obstructing conditions to runtime goal attainment. Graph theory is then used to gauge the impact of obstacles on runtime goal attainment, and obstacles that exceed the relevance requirement are included in the resilience changeload for enumeration. The use of system knowledge to guide the changeload definition process increased the relevance of the resilience changeload while minimizing the test suite, resulting in a reduction of overall benchmarking costs. Analysis of case study results confirmed that the new approach was more cost-effective on the same subject system than previous work. The new approach was shown to reduce overall costs by 79.65%, increase the relevance of the defined test suite, reduce the amount of wasted effort, and provide a return on investment twice that of previous work.
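    The abstract outlines a pipeline: build a goal refinement graph, attach obstructing conditions, score their impact with graph theory, and keep only obstacles above a relevance requirement. The sketch below illustrates that selection idea under invented assumptions: a tiny goal graph, a naive "number of transitively threatened goals" impact score, and an arbitrary threshold; none of these stand in for the dissertation's actual graphs, metrics, or threshold.

```python
# Sketch of changeload selection over a goal refinement graph.
# Goals, obstacles, scoring rule, and threshold are hypothetical.
from collections import deque

# goal -> parent goals it refines (edges point from subgoal up to parent)
refines = {
    "LogRequests": ["ServeQueries"],
    "ServeQueries": ["MaintainService"],
    "ReplicateData": ["MaintainService"],
    "MaintainService": [],
}

# obstacle -> goals it directly obstructs
obstructs = {
    "DiskFull": ["LogRequests"],
    "ReplicaCrash": ["ReplicateData"],
    "ClockSkew": [],            # obstructs nothing modeled here
}


def affected_goals(obstacle: str) -> set[str]:
    """All goals whose attainment the obstacle can transitively threaten."""
    seen, queue = set(), deque(obstructs[obstacle])
    while queue:
        goal = queue.popleft()
        if goal not in seen:
            seen.add(goal)
            queue.extend(refines[goal])
    return seen


RELEVANCE_THRESHOLD = 2  # minimum number of threatened goals to be included

changeload = [
    obstacle for obstacle in obstructs
    if len(affected_goals(obstacle)) >= RELEVANCE_THRESHOLD
]
print(changeload)  # -> ['DiskFull', 'ReplicaCrash']
```

    Only the obstacles retained in the changeload would then be enumerated as test cases, which is how pruning by relevance keeps the benchmarking suite small.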

    The Essence of Software Engineering

    Software Engineering; Software Development; Software Processes; Software Architectures; Software Management