
    Working Notes from the 1992 AAAI Workshop on Automating Software Design. Theme: Domain Specific Software Design

    The goal of this workshop is to identify different architectural approaches to building domain-specific software design systems and to explore issues unique to domain-specific (as opposed to general-purpose) software design. Some general issues that cut across particular software design domains include: (1) knowledge representation, acquisition, and maintenance; (2) specialized software design techniques; and (3) user interaction and user interfaces.

    CREWS-L'Ecritoire Analysis for the Implementation of a Medical Image Database for Mammography

    In this paper, we present our approach to implementing a Medical Image Database (MIDB) for archiving mammograms and their related information in the Department of Radiology of the Necker Hospital (Paris). The aim of such a database is to support breast cancer screening in clinical practice, research, and education. As the implementation of such a MIDB requires an understanding of users' needs, we analyzed requirements using the CREWS-L'Ecritoire (Cooperative Requirements Engineering With Scenarios) approach developed in our laboratory. This approach is grounded in requirements engineering: it helps in understanding users' needs through a semi-automatic analysis of textual scenarios, i.e. scenarios written in natural language, and it combines the concepts of goals and scenarios into the notion of a "Requirement Chunk". Scenario authoring and goal discovery are guided by rules, which lead to a structured network of scenarios. Our analysis resulted in 58 Requirement Chunks gathering 72 authored scenarios and 300 goals, which represent the MIDB services requested by radiologists in the course of their daily practice.
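
    To make the "Requirement Chunk" notion above concrete, here is a minimal sketch of one way it could be modeled in code. The class names and fields are assumptions drawn from this abstract, not the actual CREWS-L'Ecritoire tooling.

        # Illustrative sketch only: class names and fields are assumptions
        # based on the abstract, not the CREWS-L'Ecritoire implementation.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Goal:
            """A goal discovered during requirements analysis."""
            description: str

        @dataclass
        class Scenario:
            """An authored scenario: a narrative written in natural language."""
            text: str

        @dataclass
        class RequirementChunk:
            """Pairs a goal with the scenarios that show how it is achieved."""
            goal: Goal
            scenarios: List[Scenario] = field(default_factory=list)
            # Refinement links form the structured network of scenarios.
            refined_by: List["RequirementChunk"] = field(default_factory=list)

        # Hypothetical example of an MIDB service a radiologist might request.
        chunk = RequirementChunk(
            goal=Goal("Retrieve a patient's prior mammograms for comparison"),
            scenarios=[Scenario("The radiologist enters the patient ID; "
                                "the system lists archived mammograms by date.")],
        )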

    PRISE: An Integrated Platform for Research and Teaching of Critical Embedded Systems

    In this paper, we present PRISE, an integrated workbench for research and teaching on critical embedded systems at ISAE, the French Institute for Space and Aeronautics Engineering. PRISE is built around state-of-the-art technologies for the engineering of systems in the space and avionics domains. It aims both at demonstrating key aspects of the critical, real-time, embedded systems used in the transport industry and at validating new scientific contributions to the engineering of software functions. PRISE combines embedded platforms, simulation platforms, and modeling tools, and is available for both research and teaching. Being built around widely used commercial and open-source software, PRISE aims at being a reference platform for our teaching and research activities at ISAE.

    OpenUP/MDRE: A Model-Driven Requirements Engineering Approach for Health-Care Systems

    The domains and problems for which it is desirable to introduce information systems are currently very complex, and the software development process is of the same complexity. One of these domains is health-care. Model-Driven Development (MDD) and Service-Oriented Architecture (SOA) are software development approaches that arose to deal with this complexity, to reduce the time and cost of development, and to increase flexibility and interoperability. However, many techniques and approaches that have been introduced are of little use when not provided under a formalized and well-documented methodological umbrella. A methodology gives the process a well-defined structure that supports fast and efficient analysis and design and trouble-free implementation, and ultimately results in improved software product quality. While MDD and SOA are gaining momentum toward adoption in the software industry, there is one critical issue yet to be addressed before their power is fully realized. It is beyond dispute that requirements engineering (RE) has become a critical task within the software development process. Errors made during this process may have negative effects on subsequent development steps and on the quality of the resulting software. For this reason, the MDD and SOA development approaches should not only be taken into consideration during design and implementation, as usually occurs, but also during the RE process.

    The contribution of this dissertation aims at improving the development process of health-care applications by proposing the OpenUP/MDRE methodology. The main goal of this methodology is to enrich the development process of SOA-based health-care systems by focusing on the requirements engineering process in the model-driven context. I believe that the integration of these two highly important areas of software engineering, gathered in one consistent process, will provide practitioners with many benefits. It is noteworthy that the approach presented here was designed for SOA-based health-care applications; however, it also provides means to adapt it to other architectural paradigms or domains. The OpenUP/MDRE approach is an extension of the lightweight OpenUP methodology for iterative, architecture-oriented and model-driven software development. The motivation for this research comes from the experience I gained as a computer science professional working on health-care systems. This thesis also presents a comprehensive study of: i) the requirements engineering methods and techniques that are being used in the context of model-driven development; ii) known generic but flexible and extensible methodologies, as well as approaches for service-oriented systems development; and iii) requirements engineering techniques used in the health-care industry. Finally, OpenUP/MDRE was applied to a concrete industrial health-care project in order to show the feasibility and accuracy of this methodological approach.

    Loniewski, G. (2010). OpenUP/MDRE: A Model-Driven Requirements Engineering Approach for Health-Care Systems. http://hdl.handle.net/10251/11652
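
    As a small illustration of the model-driven idea behind OpenUP/MDRE, in which development artifacts are derived from requirements-level models, the sketch below shows a toy model-to-text transformation from a requirements element to a SOA service skeleton. The names and the transformation are invented for this sketch and are not part of the methodology itself.

        # Hypothetical model-to-text transformation in the spirit of MDD
        # applied to RE; not an artifact of OpenUP/MDRE itself.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class ServiceRequirement:
            """A requirements-model element describing a needed service."""
            name: str
            operations: List[str]

        def emit_service_stub(req: ServiceRequirement) -> str:
            """Generate a toy service-interface skeleton from the model element."""
            lines = [f"class {req.name}Service:"]
            for op in req.operations:
                lines.append(f"    def {op}(self, request):")
                lines.append("        raise NotImplementedError  # completed at design time")
            return "\n".join(lines)

        print(emit_service_stub(ServiceRequirement(
            name="PatientRecordLookup",
            operations=["find_by_id", "find_by_name"],
        )))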

    Advanced Knowledge Technologies at the Midterm: Tools and Methods for the Semantic Web

    In a celebrated essay on the new electronic media, Marshall McLuhan wrote in 1962:

    "Our private senses are not closed systems but are endlessly translated into each other in that experience which we call consciousness. Our extended senses, tools, technologies, through the ages, have been closed systems incapable of interplay or collective awareness. Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history. Our extended faculties and senses now constitute a single field of experience which demands that they become collectively conscious. Our technologies, like our private senses, now demand an interplay and ratio that makes rational co-existence possible. As long as our technologies were as slow as the wheel or the alphabet or money, the fact that they were separate, closed systems was socially and psychically supportable. This is not true now when sight and sound and movement are simultaneous and global in extent." (McLuhan 1962, p. 5, emphasis in original)

    Over forty years later, the seamless interplay that McLuhan demanded between our technologies is still barely visible. McLuhan's predictions of the spread, and increased importance, of electronic media have of course been borne out, and the worlds of business, science and knowledge storage and transfer have been revolutionised. Yet the integration of electronic systems as open systems remains in its infancy. Advanced Knowledge Technologies (AKT) aims to address this problem: to create a view of knowledge and its management across its lifecycle, and to research and create the services and technologies that such unification will require. Half way through its six-year span, the results are beginning to come through, and this paper explores some of the services, technologies and methodologies that have been developed. We hope to give a sense of the potential for the next three years, to discuss the insights and lessons learnt in the first phase of the project, and to articulate the challenges and issues that remain.

    The WWW provided the original context that made the AKT approach to knowledge management (KM) possible. When AKT was initially proposed in 1999, it brought together an interdisciplinary consortium with the technological breadth and complementarity to create the conditions for a unified approach to knowledge across its lifecycle. The combination of this expertise, and the time and space afforded the consortium by the IRC structure, suggested the opportunity for a concerted effort to develop an approach to advanced knowledge technologies, based on the WWW as a basic infrastructure. The technological context of AKT altered for the better in the short period between the development of the proposal and the beginning of the project itself, with the development of the semantic web (SW), which foresaw much more intelligent manipulation and querying of knowledge. The opportunities that the SW provided for, e.g., more intelligent retrieval put AKT at the centre of information technology innovation and knowledge management services; the AKT skill set would clearly be central for the exploitation of those opportunities.

    The SW, as an extension of the WWW, provides an interesting set of constraints on the knowledge management services AKT tries to provide. As a medium for the semantically-informed coordination of information, it has suggested a number of ways in which the objectives of AKT can be achieved, most obviously through the provision of knowledge management services delivered over the web, as opposed to the creation and provision of technologies to manage knowledge. AKT is working on the assumption that many web services will be developed and provided for users. The KM problem in the near future will be one of deciding which services are needed and of coordinating them. Many of these services will be largely or entirely legacies of the WWW, and so their capabilities will vary. As well as providing useful KM services in their own right, AKT will be aiming to exploit this opportunity by reasoning over services, brokering between them, and providing essential meta-services for SW knowledge service management.

    Ontologies will be a crucial tool for the SW. The AKT consortium brings together a great deal of expertise on ontologies, and ontologies were always going to be a key part of the strategy. All kinds of knowledge sharing and transfer activities will be mediated by ontologies, and ontology management will be an important enabling task. Different applications will need to cope with inconsistent ontologies, or with the problems that will follow the automatic creation of ontologies (e.g. the merging of pre-existing ontologies to create a third). Ontology mapping, and the elimination of conflicts of reference, will be important tasks. All of these issues are discussed along with our proposed technologies. Similarly, specifications of tasks will be used for the deployment of knowledge services over the SW, but in general it cannot be expected that in the medium term there will be standards for task (or service) specifications. The brokering meta-services that are envisaged will have to deal with this heterogeneity.

    The emerging picture of the SW is one of great opportunity, but it will not be a well-ordered, certain or consistent environment. It will comprise many repositories of legacy data, outdated and inconsistent stores, and requirements for common understandings across divergent formalisms. There is clearly a role for standards to play in bringing much of this context together, and AKT is playing a significant role in these efforts. But standards take time to emerge, they take political power to enforce, and they have been known to stifle innovation (in the short term). AKT is keen to understand the balance between principled inference and statistical processing of web content. Logical inference on the Web is tough: complex queries using traditional AI inference methods bring most distributed computer systems to their knees. Do we set up semantically well-behaved areas of the Web? Is any part of the Web in which semantic hygiene prevails interesting enough to reason in? These and many other questions need to be addressed if we are to provide effective knowledge technologies for our content on the web.
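
    One concern named above, merging pre-existing ontologies to create a third and then querying the result, can be shown concretely. The sketch below uses the Python rdflib library with an invented example vocabulary; it illustrates the general operation, not AKT's own tooling.

        # Merge two small RDF graphs and query the union with SPARQL.
        # The ex: vocabulary is invented for this illustration.
        from rdflib import Graph, Literal, Namespace, RDF, RDFS

        EX = Namespace("http://example.org/akt/")

        g1 = Graph()  # one pre-existing ontology: a class and an instance
        g1.add((EX.Researcher, RDF.type, RDFS.Class))
        g1.add((EX.alice, RDF.type, EX.Researcher))

        g2 = Graph()  # another source contributes a label for the instance
        g2.add((EX.alice, RDFS.label, Literal("Alice")))

        merged = g1 + g2  # set-union merge of the two triple stores

        query = """
            PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
            PREFIX ex:   <http://example.org/akt/>
            SELECT ?name WHERE { ?p a ex:Researcher ; rdfs:label ?name }
        """
        for row in merged.query(query):
            print(row.name)  # prints: Alice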

    Managing contextual information in semantically-driven temporal information systems

    Context-aware (CA) systems have demonstrated a robust solution for personalized information delivery in the content-rich and dynamic information age we live in. They allow software agents to interact autonomously with users by modeling the user's environment (e.g. profile, location, relevant public information, etc.) as dynamically-evolving and interoperable contexts. There is a flurry of research activity across a wide spectrum of context-aware research areas, such as managing the user's profile, context acquisition from external environments, context storage, context representation and interpretation, context service delivery, and matching of context attributes to users' queries. We propose SDCAS, a Semantic-Driven Context-Aware System that facilitates the recommendation of public services to users at their temporal location. This paper focuses on information management and service recommendation using semantic technologies, taking into account the challenges of relationship complexity in temporal and contextual information.
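
    A toy sketch of the kind of context matching such a system performs is given below. The data model and the matching rule are assumptions made for illustration, not the paper's actual semantic matching algorithm.

        # Recommend public services matching the user's location and time.
        # All names and the filtering rule are illustrative assumptions.
        from dataclasses import dataclass
        from datetime import time
        from typing import List

        @dataclass
        class Context:
            """A user's dynamically-evolving context (here: place and time)."""
            location: str
            current_time: time

        @dataclass
        class PublicService:
            name: str
            location: str
            opens: time
            closes: time

        def recommend(ctx: Context,
                      services: List[PublicService]) -> List[PublicService]:
            """Return services co-located with the user and currently open."""
            return [s for s in services
                    if s.location == ctx.location
                    and s.opens <= ctx.current_time <= s.closes]

        services = [PublicService("City Library", "Downtown", time(9), time(18)),
                    PublicService("Night Pharmacy", "Downtown", time(20), time(23))]
        hits = recommend(Context("Downtown", time(10, 30)), services)
        print([s.name for s in hits])  # prints: ['City Library']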

    Supporting the Discovery, Reuse, and Validation of Cybersecurity Requirements at the Early Stages of the Software Development Lifecycle

    The focus of this research is to develop an approach that enhances the elicitation and specification of reusable cybersecurity requirements. Cybersecurity has become a global concern, as cyber-attacks are projected to cost damages totaling more than $10.5 trillion by 2025. Cybersecurity requirements are more challenging to elicit than other requirements because they are nonfunctional requirements that require cybersecurity expertise as well as knowledge of the proposed system. The goal of this research is to generate cybersecurity requirements based on knowledge acquired from requirements elicitation and analysis activities, to provide cybersecurity specifications without requiring the specialized knowledge of a cybersecurity expert, and to generate reusable cybersecurity requirements. The proposed approach can be an effective way to implement cybersecurity requirements at the earliest stages of the system development life cycle because it facilitates the identification of cybersecurity requirements throughout the requirements-gathering stage. This is accomplished through the development of the Secure Development Ontology, which maps cybersecurity features to functional feature descriptions in order to train a machine-learning classification model that returns suggested security requirements. The SD-SRE requirements engineering portal was created to support the application of this research by providing a platform for submitting use case scenarios and requirements and suggesting security requirements for the given system. The efficacy of this approach was tested with students in a graduate requirements engineering course. The students were presented with a system description and tasked with creating use case scenarios using the SD-SRE portal. The entered models were automatically analyzed by the SD-SRE system to suggest security requirements. The results showed that the approach can effectively assist in the identification of security requirements.
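
    The classification step described above can be sketched as a generic text classifier. The tiny training set and the labels below are invented for illustration; the dissertation's actual ontology-derived training data and model are not reproduced here.

        # Map functional feature descriptions to suggested security-requirement
        # categories. Training data and labels are invented for illustration.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        descriptions = [
            "user logs in with a password",
            "system stores patient records in a database",
            "administrator resets a user's credentials",
            "records are transmitted to a remote server",
        ]
        labels = ["authentication", "data-at-rest",
                  "authentication", "data-in-transit"]

        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(descriptions, labels)

        # Suggest a security-requirement category for a new use-case scenario.
        print(model.predict(["the clinician uploads an image to the archive"])[0])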