    Grand Challenges of Traceability: The Next Ten Years

    In 2007, the software and systems traceability community met at the first Natural Bridge symposium on the Grand Challenges of Traceability to establish and address research goals for achieving effective, trustworthy, and ubiquitous traceability. Ten years later, in 2017, the community came together to evaluate a decade of progress towards achieving these goals. These proceedings document some of that progress. They include a series of short position papers representing current work in the community, organized across four process axes of traceability practice. The sessions covered topics including Trace Strategizing, Trace Link Creation and Evolution, Trace Link Usage, real-world applications of traceability, and traceability datasets and benchmarks. Two breakout groups focused on the importance of creating and sharing traceability datasets within the research community and discussed challenges related to the adoption of tracing techniques in industrial practice. Members of the research community are engaged in many active, ongoing, and impactful research projects. Our hope is that ten years from now we will be able to look back at a productive decade of research and claim that we have achieved the overarching Grand Challenge of Traceability, which seeks for traceability to be always present, built into the engineering process, and to have "effectively disappeared without a trace". We hope that others will see the potential that traceability has for empowering software and systems engineers to develop higher-quality products at increasing levels of complexity and scale, and that they will join the active community of software and systems traceability researchers as we move forward into the next decade of research.

    SAT based Enforcement of Domotic Effects in Smart Environments

    The emergence of economically viable and efficient sensor technology provided impetus to the development of smart devices (or appliances). Modern smart environments are equipped with a multitude of smart devices and sensors aimed at delivering intelligent services to their users. The presence of these diverse smart devices has raised the major problem of managing such environments. An emerging solution to this problem is to model user goals and intentions and then to interact with the environment through user-defined goals. 'Domotic Effects' is a user goal modeling framework which provides Ambient Intelligence (AmI) designers and integrators with an abstraction layer that enables generic goals in a smart environment to be defined declaratively and used to design and develop intelligent applications. The high-level nature of domotic effects also allows residents to program their personal space as they see fit: they can define different achievement criteria for a particular generic goal, e.g., by specifying a combination of devices having particular states, using domain-specific custom operators. This paper describes an approach for the automatic enforcement of domotic effects in the case of the Boolean application domain, suitable for intelligent monitoring and control in domotic environments. Effect enforcement is the ability to determine device configurations that achieve a set of generic goals (domotic effects). The paper also presents an architecture that implements the enforcement of Boolean domotic effects; the results of the experiments carried out demonstrate the feasibility of the proposed approach and highlight the responsiveness of the implemented effect enforcement architecture.
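
    The paper itself is not reproduced here, but the core idea of Boolean effect enforcement, finding a device configuration that satisfies a goal formula, can be illustrated with an off-the-shelf SAT solver. The sketch below is only an illustration under our own assumptions: the device names, the example goal, and the choice of the PySAT library are ours, not the authors'.

        # Minimal sketch of Boolean effect enforcement with a SAT solver.
        # Device names, the example goal, and the choice of PySAT are
        # illustrative assumptions; the paper's own architecture is not shown.
        from pysat.solvers import Glucose3

        # Propositional variables for device states (PySAT uses positive integers).
        devices = {"ceiling_light": 1, "blinds_open": 2, "reading_lamp": 3}

        # Example domotic effect ("reading scene"), already in CNF:
        #   ceiling_light AND (NOT blinds_open OR reading_lamp)
        goal_cnf = [
            [devices["ceiling_light"]],
            [-devices["blinds_open"], devices["reading_lamp"]],
        ]

        with Glucose3(bootstrap_with=goal_cnf) as solver:
            if solver.solve():
                model = solver.get_model()
                config = {name: (var in model) for name, var in devices.items()}
                print("Enforcing configuration:", config)
            else:
                print("The requested effect cannot be achieved.")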

    A knowledge based reengineering approach via ontology and description logic.

    Traditional software reengineering often involves a great deal of manual effort by software maintainers. This is time consuming and error prone. Due to the knowledge-intensive nature of software reengineering, a knowledge-based solution is proposed in this thesis to semi-automate some of this manual effort. The thesis explores the principal research question: “How can software systems be described by knowledge representation techniques in order to semi-automate the manual effort in software reengineering?” The underlying research procedure of this thesis is the scientific method, which consists of observation, proposition, test and conclusion. Ontology and description logic are employed to model and represent the knowledge in different software systems, integrated with domain knowledge. Model transformation is used to support ontology development. Description logic is used to implement ontology mapping algorithms, in which the problem of detecting semantic relationships is converted into the problem of deducing the satisfiability of logical formulae. An operating system ontology was built with a top-down approach and deployed to support platform-specific software migration [132] and portable software development [18]. A data-dominant software ontology was built via a bottom-up approach and deployed to support program comprehension [131] and modularisation [130]. This thesis suggests that software systems can be represented by ontology and description logic, and that doing so helps to semi-automate some of the manual tasks in software reengineering. There are, however, limitations: bottom-up ontology development may sacrifice some of a system's complexity, while top-down ontology development may become time consuming and complicated. In terms of future work, a greater number of diverse software system categories could be involved and different kinds of software system knowledge could be explored.
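
    As a concrete illustration of reducing relationship detection to satisfiability, the classic description-logic argument is that a concept C is subsumed by D exactly when C AND NOT D is unsatisfiable under the ontology's axioms. The sketch below shows this in a propositional approximation using SymPy; the concept names and axioms are invented for the example and are not taken from the thesis.

        # Hedged sketch: answering a subsumption question by a satisfiability check.
        from sympy import symbols
        from sympy.logic.boolalg import And, Not, Implies
        from sympy.logic.inference import satisfiable

        HasScheduler, OperatingSystem, Software = symbols(
            "HasScheduler OperatingSystem Software")

        # Background axioms in propositional form (TBox-style).
        axioms = And(Implies(OperatingSystem, Software),
                     Implies(OperatingSystem, HasScheduler))

        # "OperatingSystem is subsumed by Software" holds iff
        # axioms AND OperatingSystem AND NOT Software is unsatisfiable.
        query = And(axioms, OperatingSystem, Not(Software))
        print("Subsumption holds:", satisfiable(query) is False)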

    ICT INDUSTRY INTEGRATED CURRICULA: TOWARDS AN ONTOLOGY BASED COMPETENCY MODEL

    As technology advances rapidly, industry needs for skills and competencies keep changing in the effort to seize the nearest competitive advantage. This places a great burden on higher education institutions to accurately supply what the industry currently demands. Understanding and analyzing the gap between the supplied and demanded competencies has always been a topic of debate and research in both domains of knowledge. In this thesis, we propose developing an ontology that helps identify the gap between employee and occupation competencies. The objective is to generate the gap analysis using the ontology and provide users with information that helps them gain more knowledge about the domain and make informed decisions based on facts. Two separate ontologies, representing classes and object properties of the Education and Industry domains, were successfully modeled. The validation shows that the ontology correctly classifies employees as Fit or Un-fit for the set of occupations they applied for, according to the competency gap analysis. Future work will involve experts validating the results of the ontology from the domain-knowledge point of view. QNRF project ProSkima NPRP 7-1883-5-28.
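
    The Fit/Un-fit classification described above amounts to a set-coverage check over competencies. The following sketch shows that check in plain Python; the competency names are invented for illustration, and the real framework performs this reasoning over the two ontologies rather than over Python sets.

        # Illustrative competency-gap check: an employee is "Fit" for an occupation
        # when their competencies cover every required one. Names are invented.
        def classify(employee_competencies, occupation_requirements):
            missing = set(occupation_requirements) - set(employee_competencies)
            return ("Fit" if not missing else "Un-fit"), missing

        employee = {"python", "sql", "requirements_analysis"}
        occupation = {"python", "sql", "cloud_deployment"}
        label, gap = classify(employee, occupation)
        print(label, "- missing:", gap)   # Un-fit - missing: {'cloud_deployment'}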

    A SPEM Ontology for Software Processes Reusing

    Reusing the best practices and know-how captured from existing software process models is a promising way to model high-quality software processes. This paper presents a part of AoSP (Architecture oriented Software Process), an approach to software process reuse based on software architectures; the solution was developed after a study of existing work on software process reuse. The AoSP approach addresses engineering both "for" and "by" reuse of software processes. It exploits progress in two research fields that promote reuse, domain ontologies and software architectures, in order to improve software process reuse. AoSP exploits a domain ontology to reuse software process know-how, allowing software process architectures to be retrieved, described and deployed. This article details the engineering-"for"-reuse step of AoSP: it explains how software process architectures are described, and discusses the conceptualization of the software process ontology and software process knowledge acquisition.
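
    To make the retrieval step more tangible, the sketch below shows a toy lookup of stored process architectures by the practices they capture. It is only an illustration under our own assumptions (an in-memory dictionary standing in for the domain ontology, invented process names); the paper's ontology-based retrieval is considerably richer.

        # Toy stand-in for ontology-based retrieval of software process architectures.
        # Entry names, fields and practices are invented for the example.
        process_ontology = {
            "ScrumForEmbedded": {"domain": "embedded",
                                 "practices": {"sprint", "hardware_in_loop_test"}},
            "RUPForBanking":    {"domain": "finance",
                                 "practices": {"iteration", "formal_review"}},
        }

        def retrieve(required_practices):
            """Return the stored architectures whose know-how covers the request."""
            return [name for name, proc in process_ontology.items()
                    if required_practices <= proc["practices"]]

        print(retrieve({"sprint"}))   # ['ScrumForEmbedded']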

    A Health eLearning Ontology and Procedural Reasoning Approach for Developing Personalized Courses to Teach Patients about Their Medical Condition and Treatment

    We propose a methodological framework to support the development of personalized courses that improve patients’ understanding of their condition and prescribed treatment. Inspired by Intelligent Tutoring Systems (ITSs), the framework uses an eLearning ontology to express domain and learner models and to create a course. We combine the ontology with a procedural reasoning approach and precompiled plans to operationalize a design across disease conditions. The resulting courses generated by the framework are personalized across four patient axes: condition and treatment, comprehension level, learning style based on the VARK (Visual, Aural, Read/write, Kinesthetic) presentation model, and the level of understanding of specific course content according to Bloom’s taxonomy. Customizing educational materials along these learning axes stimulates and sustains patients’ attention when learning about their conditions or treatment options. Our proposed framework creates a personalized course that prepares patients for their meetings with specialists and educates them about their prescribed treatment. We posit that the improvement in patients’ understanding of prescribed care will result in better outcomes, and we validate that the constructs of our framework are appropriate for representing content and deriving personalized courses for two use cases: anticoagulation treatment of an atrial fibrillation patient and lower back pain management to treat a lumbar degenerative disc condition. We conduct a mostly qualitative study, supported by a quantitative questionnaire, to investigate the acceptability of the framework among the target patient population and medical practitioners.
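
    The four personalization axes named in the abstract can be pictured as a small learner-model record that the course generator consults. The sketch below is our own illustration of such a record; field names and example values are assumptions, and the actual framework expresses this model in its eLearning ontology rather than in Python.

        # Illustrative learner model covering the four personalization axes.
        from dataclasses import dataclass, field

        @dataclass
        class LearnerModel:
            condition: str              # medical condition
            treatment: str              # prescribed treatment
            comprehension_level: str    # e.g. "basic", "intermediate"
            vark_style: str             # "Visual", "Aural", "Read/write", "Kinesthetic"
            bloom_levels: dict = field(default_factory=dict)  # topic -> Bloom level reached

        patient = LearnerModel(
            condition="atrial fibrillation",
            treatment="anticoagulation",
            comprehension_level="basic",
            vark_style="Visual",
            bloom_levels={"dosage schedule": "Remember"},
        )
        print(patient)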

    Security-Driven Software Evolution Using A Model Driven Approach

    A high security level must be guaranteed in applications in order to mitigate risks during the deployment of information systems in open network environments. However, a significant number of legacy systems remain in use, which poses security risks to enterprise assets because of the outdated technologies used and the lack of security considerations when they were designed. Software reengineering is a way to improve their security levels in a systematic way. Model-driven engineering is an approach in which models, as defined by their types, direct the execution of the process. The aim of this research is to explore how a model-driven approach can facilitate software reengineering driven by security demands. The research in this thesis involves the following three phases. Firstly, legacy system understanding is performed using reverse engineering techniques. The task of this phase is to reverse engineer the legacy system into UML models, partition the legacy system into subsystems with the help of a model slicing technique, and detect existing security mechanisms to determine whether or not the security provided in the legacy system satisfies the user’s security objectives. Secondly, security requirements are elicited using a risk analysis method, the process of analysing key aspects of the legacy system in terms of security. A new risk assessment method, taking into consideration assets, threats and vulnerabilities, is proposed and used to elicit the security requirements; it generates detailed security requirements in a specific format to direct the subsequent security enhancement. Finally, security enhancement of the system is performed using the proposed ontology-based security pattern approach. In this stage, security patterns derived from security expertise and fulfilling the elicited security requirements are selected and integrated into the legacy system models with the help of the proposed security ontology. The proposed approach is evaluated on the selected case study. Based on the analysis, conclusions are drawn and future research is discussed at the end of this thesis. The results show that this thesis contributes an effective, reusable and suitable evolution approach for software security.
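
    The risk analysis phase combines assets, threats and vulnerabilities into a priority for security enhancement. The thesis defines its own assessment method, which is not reproduced here; the sketch below only illustrates the general asset/threat/vulnerability style of scoring, with a multiplicative form and example values that are our assumptions.

        # Hedged sketch of an asset/threat/vulnerability risk score used to rank
        # legacy components for security enhancement. The multiplicative form and
        # the example values are assumptions, not the thesis's method.
        def risk_score(asset_value, threat_likelihood, vulnerability_severity):
            """All inputs on a 1-5 scale; a higher score means higher priority."""
            return asset_value * threat_likelihood * vulnerability_severity

        legacy_components = {
            "login_module":  (5, 4, 4),   # (asset, threat, vulnerability)
            "report_export": (3, 2, 3),
        }
        for name, (a, t, v) in sorted(legacy_components.items(),
                                      key=lambda kv: -risk_score(*kv[1])):
            print(f"{name}: risk {risk_score(a, t, v)}")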