
    Mass Customization of Cloud Services - Engineering, Negotiation and Optimization

    Several challenges hinder the entry of mass-customization principles into Cloud computing: first, service engineering on the provider side needs to be automated; second, a suitable negotiation mechanism is needed to help provider and consumer find an agreement on Quality-of-Service and price; third, finding the optimal configuration requires adequate and efficient optimization techniques. The work at hand addresses these challenges through technical and economic contributions.
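
    As a toy illustration of the third challenge (and not the paper's actual method), the following sketch exhaustively scans a tiny, hypothetical QoS configuration space and picks the offer that maximizes a made-up additive utility within the consumer's budget; all names, weights and numbers are invented.

    // Hypothetical sketch, not the paper's method: exhaustively scan a tiny
    // QoS configuration space and pick the configuration that maximizes a
    // made-up additive utility within the consumer's budget.
    import java.util.List;

    public class ConfigOptimizer {
        record Config(String name, double availability, double latencyMs, double price) {}

        // Toy utility: rewards availability, penalizes latency (invented weights).
        static double utility(Config c) {
            return 10.0 * c.availability() - 0.01 * c.latencyMs();
        }

        public static void main(String[] args) {
            List<Config> offers = List.of(
                new Config("bronze", 0.95,  200, 10.0),
                new Config("silver", 0.99,   80, 25.0),
                new Config("gold",   0.999,  20, 60.0));

            double budget = 30.0;
            Config best = null;
            for (Config c : offers) {
                if (c.price() <= budget && (best == null || utility(c) > utility(best))) {
                    best = c;
                }
            }
            System.out.println(best == null ? "no agreement" : "agree on " + best.name());
        }
    }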

    Coordination of Dynamic Software Components with JavaBIP

    JavaBIP allows the coordination of software components by clearly separating the functional and coordination aspects of system behavior. JavaBIP implements the principles of the BIP component framework, which is rooted in rigorous operational semantics. Recent work on both BIP and JavaBIP allows the coordination of static components defined prior to system deployment, i.e., the architecture of the coordinated system is fixed in terms of its component instances. However, modern systems often make use of components that can register and deregister dynamically during system execution. In this paper, we present an extension of JavaBIP that can handle this type of dynamicity. We use first-order interaction logic to define synchronization constraints based on component types. Additionally, we use directed graphs with edge coloring to model dependencies among components that determine the validity of an online system. We present the software architecture of our implementation, and we provide and discuss performance evaluation results.

    Comment: Technical report that accompanies the paper accepted at the 14th International Conference on Formal Aspects of Component Software
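
    To make the type-based idea concrete, here is a minimal sketch (deliberately not the real JavaBIP API): components register and deregister at runtime, and a dependency between component types, in the spirit of the paper's first-order constraints over types and its dependency graphs, decides whether the online system remains valid. The type names and the dependency relation are invented.

    // Hypothetical sketch, not the JavaBIP API: validity of the online system
    // is checked against a dependency between component *types*.
    import java.util.*;

    public class DynamicCoordinator {
        // type name -> ids of currently registered instances
        private final Map<String, Set<String>> live = new HashMap<>();
        // instances of the key type require at least one instance of the value type
        private final Map<String, String> requires = Map.of("Worker", "Monitor");

        void register(String type, String id) {
            live.computeIfAbsent(type, t -> new HashSet<>()).add(id);
        }

        void deregister(String type, String id) {
            Set<String> ids = live.get(type);
            if (ids != null) ids.remove(id);
        }

        // Valid iff, for every dependency, either no instance of the dependent
        // type is live or some instance of the required type is live.
        boolean valid() {
            return requires.entrySet().stream().allMatch(dep ->
                live.getOrDefault(dep.getKey(), Set.of()).isEmpty()
                    || !live.getOrDefault(dep.getValue(), Set.of()).isEmpty());
        }

        public static void main(String[] args) {
            DynamicCoordinator c = new DynamicCoordinator();
            c.register("Monitor", "m1");
            c.register("Worker", "w1");
            System.out.println(c.valid());   // true: the live Worker has a Monitor
            c.deregister("Monitor", "m1");
            System.out.println(c.valid());   // false: a Worker is live without any Monitor
        }
    }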

    A Semantic Framework for Declarative and Procedural Knowledge

    In any scientific domain, the full set of data and programs has reached an "-ome" status, i.e., it has grown massively. The original article on the Semantic Web describes the evolution of a Web of actionable information, i.e., information derived from data through a semantic theory for interpreting the symbols. In a Semantic Web, methodologies are studied for describing, managing and analyzing both resources (domain knowledge) and applications (operational knowledge) - without any restriction on what and where they are respectively suitable and available in the Web - as well as for realizing automatic and semantic-driven workflows of Web applications elaborating Web resources. This thesis attempts to provide a synthesis among Semantic Web technologies, Ontology Research, and Knowledge and Workflow Management. Such a synthesis is represented by Resourceome, a Web-based framework consisting of two components that strictly interact with each other: an ontology-based and domain-independent knowledge management system (Resourceome KMS) - relying on a knowledge model where resource and operational knowledge are contextualized in any domain - and a semantic-driven workflow editor, manager and agent-based execution system (Resourceome WMS). The Resourceome KMS and the Resourceome WMS are exploited in order to realize semantic-driven formulations of workflows, where activities are semantically linked to the involved resources. On the whole, by combining the use of domain ontologies and workflow techniques, Resourceome provides a flexible organization of domain and operational knowledge, a powerful engine for semantic-driven workflow composition, and a distributed, automatic and transparent environment for workflow execution.
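
    A minimal sketch of the semantic-driven composition idea, assuming a toy ontology: each activity declares the concept it consumes and the concept it produces, and a composer chains activities whose concepts line up. The activity and concept names are invented and do not come from Resourceome.

    // Toy sketch of semantic-driven workflow composition: activities are
    // chained when the concept one produces matches what the next consumes.
    import java.util.*;

    public class SemanticWorkflow {
        record Activity(String name, String consumes, String produces) {}

        public static void main(String[] args) {
            List<Activity> catalog = List.of(
                new Activity("fetchSequences", "onto:Query",     "onto:Sequence"),
                new Activity("alignSequences", "onto:Sequence",  "onto:Alignment"),
                new Activity("buildTree",      "onto:Alignment", "onto:Phylogeny"));

            String current = "onto:Query";          // concept held so far
            List<String> plan = new ArrayList<>();
            boolean progressed = true;
            while (progressed && plan.size() < catalog.size()) {
                progressed = false;
                for (Activity a : catalog) {
                    if (a.consumes().equals(current)) {
                        plan.add(a.name());
                        current = a.produces();
                        progressed = true;
                        break;
                    }
                }
            }
            System.out.println("plan: " + plan + " -> " + current);
        }
    }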

    Observational models of requirements evolution

    Requirements evolution is one of the main issues that affect development activities as well as system features (e.g., system dependability). Although researchers and practitioners recognise the importance of requirements evolution, research results and experience are still patchy, which points to a lack of methodologies that address requirements evolution. This thesis investigates the current understanding of requirements evolution and explores new directions in requirements evolution research. The empirical analysis of industrial case studies highlights software requirements evolution as an important issue. Unfortunately, traditional requirements engineering methodologies provide limited support for capturing requirements evolution. Heterogeneous engineering provides a comprehensive account of system requirements and stresses a holistic viewpoint that allows us to understand the underlying mechanisms of evolution of socio-technical systems. Requirements, as mappings between socio-technical solutions and problems, represent an account of the history of socio-technical issues arising and being solved within industrial settings. The formal extension of a heterogeneous account of requirements provides a framework to model and capture requirements evolution. The application of the proposed framework provides further evidence that it is possible to capture and model evolutionary information about requirements. The discussion of scenarios of use stresses practical necessities for methodologies addressing requirements evolution. Finally, the identification of a broad spectrum of evolutions in socio-technical systems points out strong contingencies between system evolution and dependability. This thesis argues that the better our understanding of socio-technical systems…
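
    As a small illustration of the "requirements as mappings" view (invented data, not the thesis's formal framework), the sketch below records which problem a requirement maps to at each release and counts remappings as observable evolution events.

    // Toy sketch: a requirement's history is a sequence of problem mappings,
    // and each remapping between consecutive releases counts as an evolution event.
    import java.util.*;

    public class RequirementsEvolution {
        record Observation(String release, String requirement, String mappedProblem) {}

        public static void main(String[] args) {
            List<Observation> history = List.of(
                new Observation("r1", "REQ-1", "operator-safety"),
                new Observation("r2", "REQ-1", "operator-safety"),
                new Observation("r3", "REQ-1", "system-dependability"));  // remapped

            int evolutionEvents = 0;
            for (int i = 1; i < history.size(); i++) {
                if (!history.get(i).mappedProblem().equals(history.get(i - 1).mappedProblem())) {
                    evolutionEvents++;
                }
            }
            System.out.println("REQ-1 evolution events: " + evolutionEvents);  // prints 1
        }
    }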

    Automated Knowledge Base Quality Assessment and Validation based on Evolution Analysis

    In recent years, numerous efforts have been put towards sharing Knowledge Bases (KBs) in the Linked Open Data (LOD) cloud. These KBs are used for various tasks, including performing data analytics or building question answering systems. Such KBs evolve continuously: their data (instances) and schemas can be updated, extended, revised and refactored. However, unlike in more controlled types of knowledge bases, the evolution of KBs exposed in the LOD cloud is usually unrestrained, which may cause data to suffer from a variety of quality issues, both at a semantic level and at a pragmatic level. This situation negatively affects data stakeholders (consumers, curators, etc.). Data quality is commonly related to the perception of fitness for use, for a certain application or use case. Therefore, ensuring the quality of the data of a knowledge base that evolves is vital. Since the data is derived from autonomous, evolving, and increasingly large data providers, manual data curation is impractical, and at the same time, continuous automatic assessment of data quality is very challenging. Ensuring the quality of a KB is a non-trivial task, since KBs are based on a combination of structured information supported by models, ontologies, and vocabularies, as well as queryable endpoints, links, and mappings. Thus, in this thesis, we explored two main areas in assessing KB quality: (i) quality assessment using KB evolution analysis, and (ii) validation using machine learning models.

    The evolution of a KB can be analyzed using fine-grained “change” detection at a low level or using the “dynamics” of a dataset at a high level. In this thesis, we present a novel knowledge base quality assessment approach using evolution analysis. The proposed approach uses data profiling on consecutive knowledge base releases to compute quality measures that allow detecting quality issues. However, the first step in building the quality assessment approach was to identify the quality characteristics. Using high-level change detection as measurement functions, in this thesis we present four quality characteristics: Persistency, Historical Persistency, Consistency and Completeness. The Persistency and Historical Persistency measures concern the degree of change and the lifespan of any entity type. The Consistency and Completeness measures identify properties with incomplete information and contradictory facts. The approach has been assessed both quantitatively and qualitatively on a series of releases from two knowledge bases: eleven releases of DBpedia and eight releases of 3cixty Nice.

    However, high-level changes, being coarse-grained, cannot capture all possible quality issues. In this context, we present a validation strategy whose rationale is twofold: first, use manual validation from qualitative analysis to identify causes of quality issues; then, use RDF data profiling information to generate integrity constraints. The validation approach relies on the idea of inducing RDF shapes by exploiting SHACL constraint components. In particular, this approach learns which integrity constraints can be applied to a large KB by instructing a process of statistical analysis followed by a learning model. We illustrate the performance of our validation approach by using five learning models over three sub-tasks, namely minimum cardinality, maximum cardinality, and range constraint. The techniques of quality assessment and validation developed during this work are automatic and can be applied to different knowledge bases independently of the domain. Furthermore, the measures are based on simple statistical operations that make the solution both flexible and scalable.
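
    The following sketch illustrates an evolution-based measure under simplified assumptions (it is not the thesis's exact definition): given profiled instance counts of entity types in two consecutive releases, Persistency is 1 for a type whose count did not drop and 0 otherwise. The type names and counts are invented profiling output.

    // Simplified sketch of a Persistency check over two consecutive KB releases:
    // a type whose profiled instance count drops between releases is flagged.
    import java.util.*;

    public class KbQuality {
        public static void main(String[] args) {
            Map<String, Integer> typesV1 = Map.of("dbo:Place", 830_000, "dbo:Person", 1_650_000);
            Map<String, Integer> typesV2 = Map.of("dbo:Place", 845_000, "dbo:Person", 1_580_000);

            for (String type : typesV1.keySet()) {
                int before = typesV1.get(type);
                int after = typesV2.getOrDefault(type, 0);
                // Persistency = 1 when no instances were lost between releases, else 0.
                int persistency = after >= before ? 1 : 0;
                System.out.printf("%s persistency=%d (%d -> %d)%n", type, persistency, before, after);
            }
        }
    }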
