4,613 research outputs found

    Software Evolution for Industrial Automation Systems. Literature Overview


    Collected software engineering papers, volume 9

This document is a collection of selected technical papers produced by participants in the Software Engineering Laboratory (SEL) from November 1990 through October 1991. The purpose of the document is to make available, in one reference, some results of SEL research that originally appeared in a number of different forums. This is the ninth such volume of technical papers produced by the SEL. Although these papers cover several topics related to software engineering, they do not encompass the entire scope of SEL activities and interests. For the convenience of this presentation, the eight papers contained here are grouped into three major categories: (1) software models studies; (2) software measurement studies; and (3) Ada technology studies. The first category presents studies on reuse models, including a software reuse model applied to maintenance and a model for an organization to support software reuse. The second category includes experimental research methods and software measurement techniques. The third category presents object-oriented approaches using Ada and object-oriented features proposed for Ada. The SEL is actively working to understand and improve the software development process at GSFC.

    A Comprehensive Classification of Business Activities in the Market of Intellectual Property Rights-related Services

Technology and intellectual property markets have witnessed great developments in the last few decades. As intellectual property rights have gained importance and technology companies have opened up their innovation processes, a wide range of intellectual property rights-related services has emerged over the last two decades. The goal of this research is to develop a comprehensive classification system of intellectual property rights-related services (IPSC). The classification is created by applying an ontology engineering process. The IPSC consists of 72 distinct IPR services divided into six main categories (100 Legal Service; 200 IP Consulting; 300 Matchmaking and Trading; 400 IP Portfolio Processing; 500 IPR-related Financial Service; 600 IPR-related Communication Service). The implications of the thesis are directed at policy makers, technology transfer managers, C-level executives, and innovation researchers. The IPSC enables practitioners and researchers to organize industry data that can thereafter be analyzed for better strategy and policy making. In addition, it contributes towards a more transparent and unified intellectual property market.
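The six top-level categories and their numeric codes suggest a simple hierarchical coding scheme. As a rough illustration (not part of the thesis), the top level could be represented as a code-to-label mapping; only the six codes and labels below come from the abstract, and the lookup helper is a hypothetical addition.

```python
# Illustrative sketch of the IPSC top level as a code -> label mapping.
# The six codes and labels are from the abstract; the lookup helper is
# a hypothetical addition.
IPSC_TOP_LEVEL = {
    100: "Legal Service",
    200: "IP Consulting",
    300: "Matchmaking and Trading",
    400: "IP Portfolio Processing",
    500: "IPR-related Financial Service",
    600: "IPR-related Communication Service",
}

def top_level_category(service_code: int) -> str:
    """Map an IPSC service code to its top-level category by
    truncating to the leading hundred (e.g., 320 -> 300)."""
    return IPSC_TOP_LEVEL[(service_code // 100) * 100]

print(top_level_category(320))  # -> Matchmaking and Trading
```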

    Creation and extension of ontologies for describing communications in the context of organizations

Thesis submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science. The use of ontologies is nowadays a sufficiently mature and solid field of work to be considered an efficient alternative for knowledge representation. With the continuing growth of the Semantic Web, this alternative can be expected to become even more prominent in the near future. In the context of a collaboration established between FCT-UNL and the R&D department of a national software company, a new solution entitled ECC – Enterprise Communications Center was developed. This application manages the communications that enter, leave, or are made within an organization, and includes intelligent classification of communications and conceptual search techniques over a communications repository. As specificity may be the key to obtaining acceptable results with these processes, the use of ontologies becomes crucial for representing the existing knowledge about the specific domain of an organization. This work produced a core set of ontologies capable of expressing the general context of the communications made in an organization, together with a methodology based on a series of concrete steps that provides an effective capability of extending the ontologies to any business domain. Applying these steps minimizes the conceptualization and setup effort in new organizations and business domains. The adequacy of the chosen core set of ontologies and of the specified methodology is demonstrated in this thesis by its application to a real case study, which allowed us to work with the different types of sources considered in the methodology and the activities that support its construction and evolution.
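To make the idea of a core ontology extended step by step to a new business domain concrete, here is a minimal sketch using the rdflib Python library; the namespace and class names are hypothetical examples, not taken from the thesis's actual ontologies.

```python
# Hedged sketch: a core "Communication" class extended with a
# domain-specific subclass, using rdflib. The namespace and class
# names are hypothetical, not the thesis's ontology.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

ECC = Namespace("http://example.org/ecc#")

g = Graph()
g.bind("ecc", ECC)

# Core ontology: a generic class covering communications that enter,
# leave, or are made within an organization.
g.add((ECC.Communication, RDF.type, RDFS.Class))

# Extension step: subclass the core concept for a specific business
# domain, as the methodology's steps would prescribe.
g.add((ECC.SupportTicket, RDF.type, RDFS.Class))
g.add((ECC.SupportTicket, RDFS.subClassOf, ECC.Communication))
g.add((ECC.SupportTicket, RDFS.label, Literal("Customer support ticket")))

print(g.serialize(format="turtle"))
```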

    Business Model Enriched With User Experience, as a Systemic Tool in Service Design

Service design and business model design are treated in the literature as separate approaches to value creation for the customer. User experience, as a concept that represents a holistic emotional and meaningful result of interaction with information technologies, is nowadays an important ingredient of customer value. This paper aims to set the theoretical ground for using the business model concept as a systemic tool in service design that supports designing for user experience. Against this background, we ask: can the business model concept successfully represent the system that is required for value proposition-based service exchange? We investigate this question based on service-dominant logic and the accompanying service science, and semantically compare elements of the service system, the service ecosystem, and the ten basic concepts of service science. The analysis shows that the business model canvas, the chosen model for business model representation, satisfies the systemic perspective and can serve as a system platform for integration with service design.

    A Life Cycle Approach to the Development and Validation of an Ontology of the U.S. Common Rule (45 C.F.R. § 46)

Requirements for the protection of human research subjects stem directly from federal regulation by the Department of Health and Human Services in Title 45 of the Code of Federal Regulations (C.F.R.), part 46. Fifteen other federal agencies include subpart A of part 46 verbatim in their own bodies of regulation; hence 45 C.F.R. part 46 subpart A has colloquially come to be called the 'Common Rule.' The overall motivation for this study began as a desire to facilitate the ethical sharing of samples from large biospecimen collections by using ontologies. Previous work demonstrated that, in general, the informed consent process and subsequent decision making about data and specimen release still rely heavily on paper-based informed consent forms and processes. Consequently, well-validated computable models are needed to provide an enhanced foundation for data sharing. This dissertation describes the development and validation of a Common Rule Ontology (CRO), expressed in the OWL-2 Web Ontology Language, intended to provide a computable semantic knowledge model for assessing and representing components of the information artifacts required as part of regulated research under 45 C.F.R. § 46. I examine whether the alignment of this ontology with the Basic Formal Ontology and other ontologies from the Open Biomedical Ontology (OBO) Foundry provides a good fit for the regulatory aspects of the Common Rule. The dissertation also examines and proposes a new method for the ongoing evaluation of ontologies such as the CRO across the ontology development lifecycle, and suggests methods to achieve high-quality, validated ontologies. While the CRO is not in itself intended to be a complete solution to the data- and specimen-sharing problems outlined above, it is intended to provide a well-validated, computationally grounded framework upon which others can build. This model can be used in future work to build decision support systems that assist Institutional Review Boards (IRBs), regulatory personnel, honest brokers, tissue bank managers, and other individuals in decision making involving biorepository specimen and data sharing.
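Since the abstract states that the CRO is expressed in OWL-2, a minimal sketch of how such a class hierarchy can be declared programmatically may help; it uses the owlready2 Python library, and the class names are illustrative placeholders rather than the CRO's actual terms.

```python
# Hedged sketch: a tiny OWL-2 class hierarchy in the style of a
# regulatory ontology, using owlready2. Class names are placeholders,
# not actual CRO terms.
from owlready2 import Thing, get_ontology

onto = get_ontology("http://example.org/cro-sketch.owl")

with onto:
    class InformationArtifact(Thing):
        """Placeholder parent class standing in for a BFO/OBO-aligned
        information artifact."""

    class InformedConsentForm(InformationArtifact):
        """An information artifact required under 45 C.F.R. part 46."""

print(list(onto.classes()))
onto.save(file="cro_sketch.owl", format="rdfxml")
```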

    Design Ltd.: Renovated Myths for the Development of Socially Embedded Technologies

This paper argues that traditional and mainstream mythologies, which have been continually told within the Information Technology domain among designers and advocates of conceptual modelling since the 1960s in different fields of computing sciences, could now be renovated or replaced in the mould of more recent discourses about performativity, complexity, and end-user creativity that have meanwhile been constructed across different fields. The paper submits that these discourses could motivate IT professionals to undertake alternative approaches toward the co-construction of socio-technical systems, i.e., social settings where humans cooperate to reach common goals by means of mediating computational tools. The authors advocate further discussion about, and consolidation of, several concepts in design research, design practice, and Information Technology (IT) development more generally, such as: task-artifact entanglement, universatility (sic) of End-User Development (EUD) environments, the bricolant/bricoleur end-user, the logic of bricolage, maieuta-designers (sic), and the laissez-faire method of socio-technical construction. Points backing these and similar concepts are made to promote further discussion on the need to rethink the main assumptions underlying IT design and development some fifty years after the coming of age of software and modern IT in the organizational domain.

Comment: This is the peer-unreviewed version of a manuscript that is to appear in D. Randall, K. Schmidt, & V. Wulf (Eds.), Designing Socially Embedded Technologies: A European Challenge (2013, forthcoming) with the title "Building Socially Embedded Technologies: Implications on Design" within an EUSSET editorial initiative (www.eusset.eu/).

    Conceptual modeling in the era of Big Data and Artificial Intelligence: Research topics and introduction to the special issue

Since the first version of the Entity–Relationship (ER) model proposed by Peter Chen over forty years ago, both the ER model and conceptual modeling activities have been key success factors for modeling computer-based systems. During the last decade, conceptual modeling has been recognized as an important research topic in academia, as well as a necessity for practitioners. However, many research challenges remain for conceptual modeling in contemporary applications such as Big Data, data-intensive applications, decision support systems, e-health applications, and ontologies. In addition, there remain challenges related to the traditional efforts associated with methodologies, tools, and theory development. Recently, novel research has been uniting contributions from the conceptual modeling area and the Artificial Intelligence discipline in two directions. The first concerns how conceptual modeling can aid the design of Artificial Intelligence (AI) and Machine Learning (ML) algorithms. The second concerns how AI and ML can be applied in model-based solutions, such as model-based engineering, to infer and improve the generated models. For the first time in the history of the Conceptual Modeling (ER) conferences, we encouraged the submission of papers based on AI and ML solutions in an attempt to highlight research from both communities. In this paper, we present some of the important topics in current conceptual modeling research. We introduce the selected best papers from the 37th International Conference on Conceptual Modeling (ER'18), held in Xi'an, China, and summarize some of the valuable contributions made based on the discussions of these papers. We conclude with suggestions for continued research.

The research reported in this paper was partially funded by the ECLIPSE-UA (RTI2018-094283-B-C32) and AETHER-UA (PID2020-112540RB-C43) projects of the Spanish Ministry of Science and Innovation.
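For readers less familiar with Chen's notation, an ER model is essentially entity types with attributes joined by named relationships. A toy rendering in Python dataclasses follows; the example domain (Author writes Paper) is invented purely for illustration and is not from the article.

```python
# Toy sketch of ER-model building blocks as Python dataclasses.
# The Author/Paper example domain is illustrative only.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: list[str] = field(default_factory=list)

@dataclass
class Relationship:
    name: str
    source: Entity
    target: Entity
    cardinality: str = "1:N"

author = Entity("Author", ["author_id", "name"])
paper = Entity("Paper", ["paper_id", "title"])
writes = Relationship("writes", author, paper, cardinality="M:N")

print(f"{writes.source.name} --{writes.name}/{writes.cardinality}--> "
      f"{writes.target.name}")
```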

Implementation of EUnetHTA Core Model® in Lombardia: The VTS Framework

Objectives: This study describes the health technology assessment (HTA) framework introduced by Regione Lombardia to regulate the introduction of new technologies, outlining the process and dimensions adopted to prioritize, assess, and appraise requests for new technologies. Methods: The HTA framework incorporates and adapts elements from the EUnetHTA Core Model and the EVIDEM framework. It uses the dimensions, topics, and issues provided by the EUnetHTA Core Model to collect data and conduct the assessment, while decision making is supported by the criteria and Multi-Criteria Decision Analysis technique from the EVIDEM consortium. Results: The HTA framework moves through three process stages: (i) prioritization of requests, (ii) assessment of the prioritized technology, and (iii) appraisal of the technology in support of decision making. Requests received by Regione Lombardia are first prioritized according to their relevance along eight dimensions (e.g., costs, efficiency and efficacy, organizational impact, safety). Evidence about the impacts of the prioritized technologies is then collected following the issues and topics provided by the EUnetHTA Core Model. Finally, the Multi-Criteria Decision Analysis technique is used to appraise the new technology and support Regione Lombardia's decision making. Conclusions: The VTS (Valutazione delle Tecnologie Sanitarie) framework was successfully implemented at the end of 2011; since its inception, twenty-six technologies have been processed.
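In its simplest form, the Multi-Criteria Decision Analysis used at the appraisal stage reduces to a weighted sum of per-dimension scores. The sketch below assumes that basic weighted-sum form; the dimension names follow the abstract's examples, while the weights and scores are invented for illustration.

```python
# Hedged sketch: basic weighted-sum MCDA appraisal. Dimension names
# follow the abstract's examples; weights and scores are invented.
def mcda_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of criterion scores on a 0-1 scale, normalized by
    the total weight."""
    total_weight = sum(weights.values())
    return sum(weights[d] * scores[d] for d in scores) / total_weight

weights = {"costs": 0.2, "efficacy": 0.4,
           "organizational impact": 0.2, "safety": 0.2}
technology_scores = {"costs": 0.6, "efficacy": 0.8,
                     "organizational impact": 0.5, "safety": 0.9}

print(f"Appraisal score: {mcda_score(technology_scores, weights):.2f}")  # 0.72
```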