
    Enabling system artefact exchange and selection through a linked data layer

    The use of different techniques and tools to cover all stages of the systems development lifecycle is common practice, and it generates a large number of system artefacts. Moreover, these artefacts are commonly encoded in different formats and, in most cases, can only be accessed through proprietary, non-standard protocols. This scenario seriously hinders software and systems reuse. Possible solutions involve creating a truly collaborative development environment in which tools can exchange and share data, information and knowledge. In this context, the OSLC (Open Services for Lifecycle Collaboration) initiative pursues the creation of public specifications (data shapes) for exchanging any artefact generated during the development lifecycle, applying the principles of the Linked Data initiative. In this paper, the authors present a solution that enables multi-format system artefact reuse by means of an OSLC-based specification for sharing and exchanging any artefact under Linked Data principles. Finally, two experiments demonstrate the advantages of enabling an input/output interface based on an OSLC implementation on top of an existing commercial tool (the Knowledge Manager). In this way, the representation and retrieval capabilities for system artefacts can be enhanced by considering the whole underlying knowledge graph generated by the different system artefacts and their relationships. After performing 45 different queries over logical and physical models stored in Papyrus, IBM Rhapsody and Simulink, precision and recall results are promising, with average values between 70% and 80%. The research leading to these results has received funding from the AMASS project (H2020-ECSEL grant agreement no 692474; Spain's MINECO ref. PCIN-2015-262) and the CRYSTAL project (ARTEMIS FP7 CRitical sYSTem engineering AcceLeration, project no 332830-CRYSTAL, and the Spanish Ministry of Industry).
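    As a minimal illustration of the kind of tool-agnostic retrieval the abstract describes, the sketch below builds a tiny artefact knowledge graph and queries it with SPARQL using Python's rdflib. The namespace, artefact names and properties are illustrative assumptions, not the actual OSLC shapes or the Knowledge Manager vocabulary.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical vocabulary; a real OSLC shape defines its own terms.
EX = Namespace("http://example.org/artefacts#")

g = Graph()
g.bind("ex", EX)

# Two artefacts exported from different tools, linked in one graph.
block = EX["SpeedController"]
req = EX["REQ-042"]
g.add((block, RDF.type, EX.LogicalBlock))
g.add((block, EX.sourceTool, Literal("Simulink")))
g.add((req, RDF.type, EX.Requirement))
g.add((req, EX.sourceTool, Literal("IBM Rhapsody")))
g.add((block, EX.satisfies, req))

# Retrieve every artefact satisfying a requirement, regardless of tool.
rows = g.query("""
    PREFIX ex: <http://example.org/artefacts#>
    SELECT ?artefact ?tool WHERE {
        ?artefact ex:satisfies ?req ;
                  ex:sourceTool ?tool .
    }""")
for artefact, tool in rows:
    print(artefact, tool)
```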

    ServeNet: A Deep Neural Network for Web Services Classification

    Automated service classification plays a crucial role in service discovery, selection, and composition. Machine learning has been widely used for service classification in recent years. However, the performance of conventional machine learning methods depends heavily on the quality of manual feature engineering. In this paper, we present a novel deep neural network that automatically abstracts low-level representations of both the service name and the service description into high-level merged features, without feature engineering or length limitations, and then predicts the service classification over 50 service categories. To demonstrate the effectiveness of our approach, we conduct a comprehensive experimental study comparing 10 machine learning methods on 10,000 real-world web services. The results show that the proposed deep neural network achieves higher classification accuracy and is more robust than the other machine learning methods. Comment: Accepted by ICWS'2
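    The core idea, two encoders whose outputs are merged before a 50-way classifier, can be sketched as follows in PyTorch. The layer types and sizes here are illustrative assumptions; ServeNet's published architecture differs in its details.

```python
import torch
import torch.nn as nn

class ServiceClassifier(nn.Module):
    """Simplified two-branch network: separate encoders for service name
    and service description, merged before a 50-way classifier.
    Layer choices are illustrative, not ServeNet's published configuration."""

    def __init__(self, vocab_size=10000, embed_dim=128, hidden=256, n_classes=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Name branch: short token sequences.
        self.name_rnn = nn.LSTM(embed_dim, hidden, batch_first=True)
        # Description branch: longer text, encoded the same way here.
        self.desc_rnn = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, name_ids, desc_ids):
        _, (name_h, _) = self.name_rnn(self.embed(name_ids))
        _, (desc_h, _) = self.desc_rnn(self.embed(desc_ids))
        merged = torch.cat([name_h[-1], desc_h[-1]], dim=1)
        return self.classifier(merged)  # logits over 50 categories

model = ServiceClassifier()
logits = model(torch.randint(0, 10000, (4, 8)),    # batch of 4 names
               torch.randint(0, 10000, (4, 120)))  # and 4 descriptions
print(logits.shape)  # torch.Size([4, 50])
```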

    Ontology of core data mining entities

    In this article, we present OntoDM-core, an ontology of core data mining entities. OntoDM-core defines the most essential data mining entities in a three-layered ontological structure comprising a specification, an implementation and an application layer. It provides a representational framework for describing the mining of structured data and, in addition, provides taxonomies of datasets, data mining tasks, generalizations, data mining algorithms and constraints, based on the type of data. OntoDM-core is designed to support a wide range of applications/use cases, such as semantic annotation of data mining algorithms, datasets and results; annotation of QSAR studies in the context of drug discovery investigations; and disambiguation of terms in text mining. The ontology has been thoroughly assessed following established practices in ontology engineering, is fully interoperable with many domain resources and is easy to extend.
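    A small sketch of the semantic-annotation use case the abstract mentions, expressed as RDF triples with Python's rdflib. The IRIs and property names are placeholders spanning the three layers described above, not the published OntoDM-core identifiers.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Placeholder namespace; the published OntoDM-core IRIs are not reproduced here.
ONTODM = Namespace("http://example.org/OntoDM-core#")

g = Graph()
g.bind("ontodm", ONTODM)

# Annotate one algorithm across the three ontological layers:
# specification (the abstract algorithm), implementation, and application.
g.add((ONTODM.C45_spec, RDF.type, ONTODM.DataMiningAlgorithm))
g.add((ONTODM.C45_spec, ONTODM.addressesTask, ONTODM.PredictiveModelingTask))
g.add((ONTODM.weka_J48, RDF.type, ONTODM.AlgorithmImplementation))
g.add((ONTODM.weka_J48, ONTODM.implements, ONTODM.C45_spec))
g.add((ONTODM.run_17, RDF.type, ONTODM.AlgorithmApplication))
g.add((ONTODM.run_17, ONTODM.executes, ONTODM.weka_J48))
g.add((ONTODM.run_17, RDFS.comment, Literal("J48 applied to a QSAR dataset")))

print(g.serialize(format="turtle"))
```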

    Application of machine learning techniques to the flexible assessment and improvement of requirements quality

    It is already common to compute quantitative metrics of requirements to assess their quality. However, the risk is building assessment methods and tools that are both arbitrary and rigid in how metrics are parameterized and combined. Specifically, we show that a linear combination of metrics is insufficient to adequately compute a global measure of quality. In this work, we propose a flexible method to assess and improve the quality of requirements that can be adapted to different contexts, projects, organizations, and quality standards, with a high degree of automation. Domain experts contribute an initial set of requirements that they have classified according to their quality, and we extract quality metrics from those requirements. We then use machine learning techniques to emulate the experts' implicit quality function. We also provide a procedure to suggest improvements to bad requirements. We compare the obtained rule-based classifiers with different machine learning algorithms, obtaining effectiveness measurements of around 85%. We also show what the generated rules look like and how to interpret them. The method can be tailored to different contexts, different styles of writing requirements, and different quality demands. The whole process of inferring and applying the quality rules adapted to each organization is highly automated. This research has received funding from the CRYSTAL project (Critical System Engineering Acceleration, European Union's Seventh Framework Program FP7/2007-2013, ARTEMIS Joint Undertaking grant agreement no 332830) and from the AMASS project (Architecture-driven, Multi-concern and Seamless Assurance and Certification of Cyber-Physical Systems, H2020-ECSEL grant agreement no 692474; Spain's MINECO ref. PCIN-2015-262).
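    A compact sketch of the general workflow, using scikit-learn: compute simple metrics per requirement, fit an interpretable tree on expert labels, and read back the learned rules. The metrics, toy data and labels are invented for illustration; the paper's metric set and learning setup are considerably richer.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative quality metrics per requirement; the real method extracts
# many more (readability, ambiguity markers, domain-term usage, ...).
def metrics(req: str):
    words = req.split()
    vague = sum(w.strip(".,").lower() in {"appropriate", "adequate", "fast"}
                for w in words)
    return [len(words), vague, req.count(",")]

requirements = [
    "The system shall respond within 2 seconds.",
    "The system shall be appropriately fast.",
    "The pump shall stop when pressure exceeds 5 bar.",
    "Performance should be adequate, more or less, in most cases.",
]
labels = [1, 0, 1, 0]  # expert judgement: 1 = good, 0 = bad

X = [metrics(r) for r in requirements]
clf = DecisionTreeClassifier(max_depth=2).fit(X, labels)

# The tree doubles as an interpretable rule set, echoing the paper's
# rule-based classifiers; rules can then point at what to improve.
print(export_text(clf, feature_names=["n_words", "n_vague", "n_commas"]))
print(clf.predict([metrics("The valve shall close within 100 ms.")]))
```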

    A reference architecture for web-based ontology engineering environments

    Ontology authoring, maintenance and use are never easy tasks, mostly due to the complexity of real domains, the way they change dynamically, and the differing backgrounds that modellers have in methodologies and formal languages. However, although the need for ontologies is well understood, it is no less important to provide editing tools to manipulate and understand them. In this context, this work proposes and documents a reference architecture for such tools running in web environments. Moreover, it provides the rationale for boosting the collaborative development of a novel tool based on this architecture, named crowd. Previous surveys reveal that few web-based ontology engineering environments have been developed and that almost all of them are mere visualisers, with limited graphical features and no integrated inference services.
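    One way to read the client-server split such an architecture implies: graphical editing stays in the browser, while reasoning sits behind a web service, which is exactly the inference capability most web visualisers lack. The sketch below is a hypothetical minimal endpoint in Flask; the route, payload format and satisfiability check are invented for illustration and are not crowd's actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def check_satisfiability(model: dict) -> dict:
    """Stand-in for the inference back end; a real deployment would
    translate the diagram into a DL encoding and call an OWL reasoner."""
    bad = [c["name"] for c in model.get("classes", [])
           # toy rule: a class subsumed by something it is disjoint with
           if "disjoint_with" in c and c["disjoint_with"] == c.get("subclass_of")]
    return {"unsatisfiable": bad}

# The browser client posts the edited model; reasoning runs server-side.
@app.route("/reason", methods=["POST"])
def reason():
    return jsonify(check_satisfiability(request.get_json()))

if __name__ == "__main__":
    app.run(port=5000)
```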

    Survey of Spectrum Sharing for Inter-Technology Coexistence

    Increasing capacity demands in emerging wireless technologies are expected to be met by network densification and by spectrum bands open to multiple technologies. These will, in turn, increase the level of interference and result in more complex inter-technology interactions, which will need to be managed through spectrum sharing mechanisms. Consequently, novel spectrum sharing mechanisms should be designed to allow spectrum access for multiple technologies while efficiently utilizing the spectrum resources overall. Importantly, designing such efficient mechanisms is not trivial, due not only to technical aspects but also to regulatory and business model constraints. In this survey, we address spectrum sharing mechanisms for wireless inter-technology coexistence by means of a technology circle that incorporates the technical and non-technical aspects in a unified, system-level view. We thus systematically explore the spectrum sharing design space, which consists of parameters at different layers. Using this framework, we present a literature review on inter-technology coexistence with a focus on wireless technologies with equal spectrum access rights, i.e., (i) primary/primary, (ii) secondary/secondary, and (iii) technologies operating in a spectrum commons. Moreover, we reflect on our literature review to identify possible spectrum sharing design solutions and performance evaluation approaches useful for future coexistence cases. Finally, we discuss spectrum sharing design challenges and suggest future research directions.